Windows 8 Release Preview now available, Adobe Flash included, finished version expected August 2012

Microsoft has made the Release Preview of Windows 8 available to download. So what’s new?


The press release:

  • Confirms that a “touch-friendly and power-optimized Adobe Flash Player” is integrated into Internet Explorer 10
  • Announces new Family Safety features
  • States that IE10 is “Do not track” by default
  • Announces new apps and improvements to existing ones

All of which will come as a disappointment to those hoping for any sort of change of direction following a mixed-to-negative reception for the Consumer Preview.

Windows chief Steven Sinofsky says:

If the feedback and telemetry on Windows 8 and Windows RT match our expectations, then we will enter the final phases of the RTM process in about 2 months.

My guess is that feedback along the lines of “Bring back the Start menu” will not count as an obstacle.

Developers like coding in the dark

Many developers prefer to code against dark backgrounds, according to this post by Monty Hammontree, Director of User Experience in Microsoft’s developer tools division.

Many of you have expressed a preference for coding within a dark editor. For example, dark editor themes dominate the list of all-time favorites at web sites that serve as a repository for different Visual Studio styles.

Chief among the reasons many of you have expressed for preferring dark backgrounds is the reduced strain placed on the eyes when staring at the screen for many hours. Many developers state that light text on a dark background is easier to read over longer periods of time than dark text on a light background.


Personally I am not in this group. A white-ish background works well for me, and if it is too bright, simply reducing the monitor brightness is an effective fix.

Interesting post though, if only for the snippets of information about the new Visual Studio. Apparently it has around 6000 icons used in 28,000 locations. Another little fact:

Visual Studio’s UI is a mix of WPF, Windows Forms, Win32, HTML, and other UI technologies which made scrollbar theming a challenging project.

If you will be using Visual Studio 2012, are you on the dark side?

The application that would not uninstall

I install a ton of pre-release and test software so it is not surprising that I sometimes run into Windows Installer issues. Here is an entertaining error though. It is unlikely, I guess, that you will hit this problem; but I present it as an illustration of what can go wrong, as we move into the era of locked-down operating systems and easy app installs. Though even these are not perfect. Notice how the operating system fights me all the way.

Years ago I installed Microsoft’s Office Labs Ribbon Hero, a tutorial add-on for Office. At the time I was running Windows Vista. Since then I have done an in-place upgrade to Windows 7. I tried to remove it today through Control Panel and got this message:


After presenting this information, setup closed and the application was not uninstalled.

So … the application does not support Windows 7 and therefore you cannot remove it. Clever, and I found this a tricky problem to get around.

I took a look at the Windows installer files which you can find in %SYSTEMROOT%\Installer. All the msi files have random names. However, you can right-click the column heading area and choose More, then check Subject in the list. Click OK, and now the application to which each msi relates appears.


Now you can click the column heading to sort by subject and find the problem msi.


I copied the msi to my desktop.

For the next step you need the Orca tool from the Windows Installer SDK. If Orca is installed, you can right-click the MSI and choose Edit with Orca.


I then selected LaunchCondition and deleted the launch condition that required Windows XP.
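The exact entry varies from package to package, but in Orca’s LaunchCondition table an XP-only check typically looks something like the following; the condition and message shown here are illustrative, not Ribbon Hero’s actual row:

```
Condition                         Description
--------------------------------  -------------------------------------
Installed OR VersionNT=501        This application requires Windows XP.
```

Windows Installer sets the VersionNT property to 501 on Windows XP and 601 on Windows 7, so on the upgraded machine the version check fails and setup aborts. Deleting the row removes the check entirely, which is what allows the uninstall to proceed.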


Hmm, something odd here as it should pass INSTALLED? Still, save, right-click the msi, choose Uninstall. You still hit the error. Why? Somehow, Windows works out that you are uninstalling a product for which an msi exists in the official location and uses that one instead. You have to copy your modified msi to the correct location. Open an administrator command prompt:


Now right-click the msi and choose Uninstall.

It worked. Phew.

Review: Cygnett Bluetooth Keyboard for iPad, Windows

In the iPad era there is increasing demand for wireless keyboards that will transform your tablet into a productive writing machine. I have tried a number of such gadgets recently, including a bargain-price iPad keyboard case and an expensive Samsung keyboard to go with the Slate I have been using for Windows 8 Consumer Preview.

Both keyboards work, but with so many annoyances that I rarely use them. The keyboard case works well enough, if you can cope with squishy keys and a tiny power switch, but adds so much weight and bulk to the iPad that it becomes like a laptop, and in doing so loses much of its appeal. The Samsung keyboard on the other hand has a quality feel but lacks a proper power switch, and I found the only way to prevent it powering up when in your bag is to remove the batteries, which is a nuisance. Further, there is some kind of design fault with the keys which can get stuck down; they pop back easily enough, but after a few times something snaps and I now have a key that slopes slightly.

Enter the Cygnett Bluetooth Keyboard, primarily designed for the iPad but which works fine with the Slate and no doubt numerous other devices, and which is priced competitively considering it has hard keys and is rechargeable.


I found several things to like.

First, it has a real on/off switch on the back, something I value having experienced problems with Samsung’s soft power key.

Second, it is small, and will fit in the top inside pocket of a man’s jacket, or tuck into a flap in almost any bag or case. The longest side of the keypad is around 1.5cm less than the length of the iPad itself.

Third, it seems robust and the keys are pleasantly responsive.


Getting started was simple enough. Charge it using the supplied USB connector, and pair with the iPad or other device by depressing the recessed pairing key, scanning for new devices, and typing the code given.

I find I can get a good speed on this device, though it is a little cramped, especially if you do true touch typing using all your fingers. Still, this is mainly a matter of practice and it is a big step up, for me, from the soft keyboard on an iPad or tablet. Another reason to prefer a physical keyboard is that you get twice as much screen space to view your document.

The keypad also works fine with my Windows 8 Slate, though it has Mac-style keys so no Windows key. Of course you can use Ctrl-Esc for this. There is a Print Screen key though, so from my point of view all the important keys are covered. There is no right Shift key.

One small disappointment: although it has a mini USB socket for charging, this keypad is wireless only. It will not work as a USB keyboard even if you use a full USB cable, rather than the charge-only cable supplied. A shame, because there are circumstances when a USB keyboard is useful, such as for changing BIOS settings on a Windows tablet.

The keypad also works with some Android devices. However I was unable to pair it with an HTC Desire smartphone, and I have seen reports of similar issues with other Android mobile devices. If the device prompts for a number to type on the keyboard, you are in business. If it suggests typing a generic code such as 0000 on the device, it does not work, though there may be a workaround of which I am not aware.

Another limitation: you can only pair the keypad with one device at a time.

Nevertheless, I like this keypad better than the Samsung keyboard which cost much more. Recommended.


Adobe Flash in Windows 8 Metro, but not technically a plug-in

Today’s Windows 8 rumour is that Adobe Flash will be baked into Internet Explorer 10 in Windows 8, not only in the desktop edition but also in Metro.

Until this is confirmed by Microsoft, it is only a rumour. However, it seems likely to me. The way this rumour mill works is:

  • Some journalists and book authors working closely with Microsoft already have information on Windows 8 that is under non-disclosure.
  • Some enthusiast sites obtain leaked builds of Windows 8 and poke around in them. Unlike new Mac OS X releases, Windows builds are near-impossible to keep secure because Microsoft needs to share them with hardware partners, and mysteriously copies turn up on the Internet.
  • When an interesting fact is leaked, this allows those journalists and book authors who already have the information to write about it, since most non-disclosure agreements allow reporting on what is already known from other sources.

That is my understanding, anyway. So when you first read that Flash is in IE10 you may be sceptical; but when Paul Thurrott and Rafael Rivera report the same story in more detail, you can probably believe it.

Back to the main story: presuming this is accurate, Microsoft has received Flash source code from Adobe and integrated it into IE10, in a similar manner to what Google has done with Flash in Chrome. This means that Flash in IE10 is not quite a plug-in. However, on the Metro side the inclusion of Flash is apparently a compatibility feature:

So, Microsoft has extended the Internet Explorer Compatibility View list to include rules for popular Flash-based web sites that are known to meet certain criteria. That is, Flash is supported for only those popular but legacy web sites that need it. This feature is not broadly available for all sites.

say Thurrott and Rivera, though I presume this only applies to the Metro IE10 rather than the desktop version.

Does this make sense? Not altogether. Oddly, while I have heard plenty of criticism of Windows 8 Consumer Preview, I have not heard many objections to the lack of Flash in Metro IE. Since Apple does not support Flash on iOS, many sites already provide Flash-free content for tablet users. Further, on the x86 version of Windows 8 there is an easy route to Flash compatibility: just open the site in the desktop browser.

That said, there is still plenty of Flash content out there and being able to view it in Windows 8 is welcome, especially if you can make your own edits to the compatibility list to get Flash content on less well-known sites. My guess is that Microsoft wants to support Flash for the same reason Android devices embraced it: a tick-box feature versus Apple iOS.

One further thought: this is a sad moment for Silverlight, if Microsoft is supporting Flash but not Silverlight on the Metro side of Windows 8.

Making sense of Microsoft’s Windows 8 strategy

Here are two things we learn from Jensen Harris’s post of 18 May.


First, Microsoft cares more about WinRT and Metro, the new tablet-oriented user interface in Windows 8, than about the desktop. In the section entitled Goals of the Windows 8 user experience, Harris refers almost exclusively to WinRT apps. Further, he asks the question: what is the role of desktop in Windows 8?

It is pretty straightforward. The desktop is there to run the millions of existing, powerful, familiar Windows programs that are designed for mouse and keyboard. Office. Visual Studio. Adobe Photoshop. AutoCAD. Lightroom. This software is widely-used, feature-rich, and powers the bulk of the work people do on the PC today.

Does that mean the desktop is for legacy, like XP Mode in Windows 7? Harris denies it:

We do not view the desktop as a mode, legacy or otherwise—it is simply a paradigm for working that suits some people and specific apps.

He adds though that “We think in a short time everyone will mix and match” desktop and Metro apps – though he does not call them Metro apps, he calls them “new Windows 8 apps.”

Second, Microsoft considers that the poor reaction to the Consumer Preview can be fixed by tweaking the detail rather than by changing the substance of how Windows 8 is designed.

But fundamentally, we believe in people and their ability to adapt and move forward. Throughout the history of computing, people have again and again adapted to new paradigms and interaction methods—even just when switching between different websites and apps and phones. We will help people get off on the right foot, and we have confidence that people will quickly find the new paradigms to be second-nature.

In fact, this post is peppered with references to negative reactions to previous versions of Windows. Microsoft is presuming that this is normal and that history will repeat:

Although some people had critical reactions and demanded changes to the user interface, Windows 7 quickly became the most-used OS in the world.

This is revisionist, as I am sure Harris and his team are aware. The reaction to Windows 7 was mainly positive, from the earliest preview on. It was better than Windows Vista; it was better than Windows XP.

Windows Vista on the other hand had a troubled launch and was widely disliked. User Account Control and its constant approval prompts were part of the problem, but more serious was that OEMs released Vista machines with underpowered hardware, further slowed down by foistware, and in many cases Vista worked badly out of the box. You could get Vista working nicely with sufficient effort, but many just stayed with Windows XP.

The failure of Vista was damaging to Microsoft, but mitigated in that most users simply skipped a version and waited for Windows 7. The situation now is more serious, because of both the continuing popularity of the Mac and the rise of tablets, especially Apple’s iPad.

It is precisely because of that threat that Microsoft is making such a big bet on Metro and WinRT. The reasoning is that while shipping a build of Windows that improves on 7 would please the Microsoft platform community, it would be ineffective in countering the iPad. It would also fail to address problems inherent in Windows: lack of isolation between applications, and between applications and the operating system; the complexity of application installs and the difficulty of troubleshooting them when they go wrong; and the unsuitability of Windows for touch control.

There is also a hint in this most recent post that classic Windows uses too much power:

Once we understood how important great battery life was, certain aspects of the new experience became clear. For instance, it became obvious early on in the planning process that to truly reimagine the Windows experience we would need to reimagine apps as well. Thus, WinRT and a new kind of app were born.

Another key point: Microsoft’s partnership with hardware manufacturers has become a problem, since they damage the user experience with trialware and low quality utilities. The Metro-style side of Windows 8 fixes that by offering a locked-down environment. This will be most fully realised in Windows RT, Windows on ARM, which only allows WinRT apps to be installed.

Microsoft decided that only a new generation of Windows, a “reimagining”, would be able to compete in the era of BYOD (Bring Your Own Device).

One thing is for sure: the Windows team under Steven Sinofsky does not lack courage. They have form too. Many of the key players worked on the Office 2007 Ribbon UI, which was also controversial at the time, since it removed the familiar drop-down menus that had been in every previous version of Office. They stuck by their decision, and refused to add an option to restore the menus, thereby forcing users to use the ribbon even if they disliked it. That strategy was mostly successful. Users got used to the ribbon, and there was no mass refusal to upgrade from Office 2003, nor a substantial migration to OpenOffice which still has drop-down menus.

I have an open mind about Windows 8. I see the reasoning behind it, and agree that it works better on a real tablet than on a traditional PC or laptop, or worst of all, a virtual machine. Harris says:

The full picture of the Windows 8 experience will only emerge when new hardware from our partners becomes available, and when the Store opens up for all developers to start submitting their new apps.

Agreed; but it also seems that Windows 8 will ship with a number of annoyances which at the moment Microsoft looks unlikely to fix. These are mainly in the integration, or lack of it, between the Metro-style UI and the desktop. I can live without the Start menu, but will miss the taskbar with its guide to running applications and its preview thumbnails; these remain in the desktop but do not include Metro apps. Having only full-screen apps can be an irritation, and I wonder if the commitment to the single-app “immersive UI” has been taken too far. When working in Windows 8 I miss the little clock that sits in the notification area; you have to swipe to see the equivalent, and the fast and fluid UI is making me work harder than before.

I believe Microsoft will listen to complaints like these, but probably not until Windows 9. I also believe that by the time Windows 9 comes around the computing landscape will look very different; and the reception won by Windows 8 will be a significant factor in how it is shaped.

Apple’s space-age Campus 2 plans revealed: complete with amphitheatre

I am just back from San Jose; and on the flight back happened to be seated next to a Cupertino resident who had just received a brochure from Apple entitled Apple Campus 2, along with a letter from Apple CFO Peter Oppenheimer beginning “Dear Neighbor”, describing the plans and requesting support.


I found the document fascinating for several reasons. First, this will be a remarkable building: a four-storey circle that looks like an elegant flying saucer come to land. It will include “one of the largest corporate campus solar installations in the world” and will be 100% powered by renewable energy. It will also have 300 electric vehicle charging stations. The “High performance smart building” will use, according to the document, 30% less energy than a typical office building.


The majority of the parking will be underground and Apple will create a landscape that is 80% green space, 120 acres of it, creating “a peaceful environment for our employees”. Currently there are 4,273 trees on the site; this will increase to 6,000 trees.

The landscape design of meadows and woodland will create an ecologically rich oak savanna and forest reminiscent of the early Santa Clara Valley. Extensive landscaping including apricot, apple, plum and cherry trees will recall Cupertino’s agricultural past.

The site also includes a “world-class auditorium to host product launches and our corporate events”. For unstated reasons there is also an amphitheatre in the enclosed garden.

It sounds delightful; but Apple does note that “As with the current site, Apple Campus 2 will not be open to the public.”

Another key point: “The campus will be clean, with no manufacturing or heavy industrial activity onsite”. The reason of course is that Apple has exported such activity to China, far out of sight of its genteel Cupertino neighbours.

Since the site, delightful though it may be, will be closed to the public, Apple’s appeal for the support of local residents is based on other things: improvements the company plans for surrounding roads, and the fact that Apple is the largest taxpayer in Cupertino. The new campus will “allow Apple to remain in Cupertino,” the brochure says, with the veiled threat of departure should the plans not be granted.

Finally, I was intrigued by Apple’s solicitation of support. Here are the options on the reply-paid card:


There is no option to object to the plans; but there is space for written comments.

Apple says that the plans will be considered by the City of Cupertino “later this year”, that it will break ground immediately approval is granted, and expects to occupy the campus in 2015.

Microsoft appeals to Windows 8 Metro developers not to stray from the official API

Microsoft’s John Hazen has posted on the official Building Windows 8 blog about the security and reliability principles in the Metro platform in Windows 8. Hazen explains how apps are installed from the Windows store, use contracts to interact with the operating system, and have to ask user consent for access to device capabilities such as the webcam or GPS, or to access user data such as documents and music.

The most intriguing part of the document comes when Hazen appeals to developers to stick to the API that is referenced in the official Windows 8 Metro SDK:

Resist the temptation to find ways to invoke APIs that are not included in the SDK. This ultimately undermines the expectations that customers have for your app. APIs that are outside the SDK are not guaranteed to work with Metro style apps either in this release or in future releases, so you may find that your app doesn’t function properly for all customers. These APIs may also not function properly in the async environment that is foundational to Metro style app design. Finally these APIs may undermine customer confidence by accessing resources or data that Metro style apps would not normally interact with. For all these reasons, we have provided checks in the Windows App Certification Kit to help you catch places where you might have inadvertently called interfaces not exposed by the SDK.

While it is possible to hide or obfuscate calls to APIs that are not included in the SDK, this is still a violation of customer expectations and Store policy. In the end, we have created this platform to help developers like you to build amazing apps that work well with the system and with other apps and devices to delight customers. Working with the Metro style SDK is fundamental to your realizing that goal.

The worrying aspect of this appeal to developers to play nice is Hazen’s admission that crafty developers may find ways to escape the Metro sandbox, undermining both the security and the privacy protection built into Metro. The main protection against this is that such an app should be blocked from the Windows Store, but can Microsoft check with 100% confidence that no hidden or obfuscated API calls exist? How effective is the Metro sandbox?

My guess is that the danger will be greater on the x86 version of Windows 8 than in Windows RT, which is locked down to prevent any third-party desktop applications from being installed. Nevertheless, a large part of the non-Metro Windows API must exist in Windows RT, to support the desktop, Explorer and Microsoft Office.

Programming NVIDIA GPUs and Intel MIC with directives: OpenACC vs OpenMP

Last month I was at Intel’s software conference learning about Many Integrated Core (MIC), the company’s forthcoming accelerator card for HPC (High Performance Computing). This month I am in San Jose for NVIDIA’s GPU Technology Conference learning about the latest developments in NVIDIA’s platform for accelerated massively parallel computing using GPU cards and the CUDA architecture. The approaches taken by NVIDIA and Intel have much in common – focus on power efficiency, many cores, accelerator boards with independent memory space controlled by the CPU – but also major differences. Intel’s boards have familiar x86 processors, whereas NVIDIA’s have GPUs which require developers to learn CUDA C or an equivalent such as OpenCL.

In order to simplify this, NVIDIA and partners Cray, CAPS and PGI announced OpenACC last year, a set of directives which, when added to C/C++ code, instruct the compiler to run code parallelised on the GPU, or potentially on other accelerators such as Intel MIC. The OpenACC folk have stated from the outset their hope and intention that OpenACC will converge with OpenMP, an existing standard for directives enabling shared memory parallelisation. OpenMP as it stands is not suitable for accelerators, since these have their own memory space.

One thing that puzzled me though: Intel clearly stated at last month’s event that it would support OpenMP (not OpenACC) on MIC, due to go into production at the end of this year or early next. How can this be?

I took the opportunity here at NVIDIA’s conference to ask Duncan Poole, who is NVIDIA’s Senior Manager for High Performance Computing and also the President of OpenACC, about what is happening with these two standards. How can Intel implement OpenMP on MIC, if it is not suitable for accelerators?

“I think OpenMP in the form that’s being discussed inside of the sub-committee is suitable. There’s some debate about some of the specific features that continues. Also, in the OpenMP committee they’re trying to address the concerns of TI and IBM so it’s a broader discussion than just the Intel architecture. So OpenMP will be useful on this class of processor. What we needed to do is not wait for it. That standard, if we’re lucky it will be draft at the end of this year, and maybe a year later will be ratified. We want to unify this developer base now,” Poole told me.

How similar will this adapted OpenMP be to what OpenACC is now?

“It’s got the potential to be quite close. The guy that drafted OpenACC is the head of that sub-committee. There’ll probably be changes in keywords, but there’s also some things being proposed now that were not conceived of. So there’s good debate going on, and I expect that we’ll benefit from it.

“Some of the features for example that are shared by Kepler and MIC with respect to nested parallelism are very useful. Nested parallelism did not exist at the time that we started this work. So there’ll be an evolution that will happen and probably a logical convergence over time.”

If OpenMP is not set to support accelerators until two years hence, what can Intel be doing with it?

“It will be a vendor implementation of a pre-release standard. Something like that,” said Poole, emphasising that he cannot speak for Intel. “To be complimentary to Intel, they have some good ideas and it’s a good debate right now.”

Incidentally, I also asked Intel about OpenACC last month, and was told that the company has no plans to implement it on its compilers. OpenMP is the standard it supports.

The topic is significant, in that if a standard set of directives is supported across both Intel and NVIDIA’s HPC platforms, developers can easily port code from one to the other. You can do this today with OpenCL, but converting an application to use OpenCL to enhance performance is a great deal more effort than adding directives.

The pros and cons of NVIDIA’s cloud GPU

Yesterday NVIDIA announced the Geforce GRID, a cloud GPU service, here at the GPU Technology Conference in San Jose.

The Geforce GRID is server-side software that takes advantage of new features in the “Kepler” wave of NVIDIA GPUs, such as GPU virtualisation, which enables the GPU to support multiple sessions, and an on-board encoder that lets the GPU render to an H.264 stream rather than to a display.

The result is a system that lets you play games on any device that supports H.264 video, provided you can also run a lightweight client to handle gaming input. Since the rendering is done on the server, you can play hardware-accelerated PC games on ARM tablets such as the Apple iPad or Samsung Galaxy Tab, or on a TV with a set-top box such as Apple TV, Google TV, or with a built-in client.

It is an impressive system, but what are the limitations, and how does it compare to the existing OnLive system which has been doing something similar for a few years? I attended a briefing with NVIDIA’s Phil Eisler, General Manager for Cloud Gaming & 3D Vision, and got a chance to put some questions.

The key problem is latency. Games become unplayable if there is too much lag between when you perform an action and when it registers on the screen. Here is NVIDIA’s slide:


This looks good: just 120-150ms latency. But note that cloud in the middle: 30ms is realistic if the servers are close by, but what if they are not? The demo here at GTC in yesterday’s keynote was done using servers that are around 10 miles away, but there will not be a GeForce GRID server within 10 miles of every user.

According to Eisler, the key thing is not so much the distance, as the number of hops the IP traffic passes through. The absolute distance is less important than being close to an Internet backbone.

The problem is real though, and existing cloud gaming providers like OnLive and Gaikai install servers close to major conurbations in order to address this. In other words, it pays to have many small GPU clouds dotted around, rather than a few large installations.

The implication is that hosting cloud gaming is expensive to set up if you want to reach a large number of users, and that high quality coverage will always be limited, with city dwellers favoured over rural communities, for example. The actual breadth of coverage will depend on the hoster’s infrastructure, the user’s broadband provider, and so on.

It would make sense for broadband operators to partner with cloud gaming providers, or to become cloud gaming providers, since they are in the best position to optimise performance.

Another question: how much work is involved in porting a game to run on Geforce GRID? Not much, Eisler said; it is mainly a matter of tweaking the game’s control panel options for display and adapting the input to suit the service. He suggested 2-3 days to adapt a PC game.

What about the comparison with OnLive? Eisler let slip that OnLive does in fact use NVIDIA GPUs but would not be pressed further; NVIDIA has agreed not to make direct comparisons.

When might Geforce GRID come to Europe? Later this year or early next year, said Eisler.

Eisler was also asked about whether Geforce GRID will cannibalise sales of GPUs to gamers. He noted that while Geforce GRID latency now compares favourably with that of a games console, this is in part because the current consoles are now a relatively old generation, and a modern PC delivers around half the latency of a console. Nevertheless it could have an impact.

One of the benefits of the Geforce GRID is that you will, in a sense, get an upgraded GPU every time your provider upgrades its GPUs, at no direct cost to you.

I guess the real question is how the advent of cloud GPU gaming, if it takes off, will impact the gaming market as a whole. Casual gaming on iPhones, iPads and other smartphones has already eaten into sales of standalone games. Now you can play hardcore games on those same devices. If the centre of gaming gravity shifts further to the cloud, there is less incentive for gamers to invest in powerful GPUs on their own PCs.

Finally, note that the latency issues, while still important, matter less for the non-gaming cloud GPU applications, such as those targeted by NVIDIA VGX. Put another way, a virtual desktop accelerated by VGX could give acceptable performance over connections that are not good enough for Geforce GRID.