Multicore processor wars: NVIDIA squares up to Intel

I first became aware of NVIDIA’s propaganda war against Intel at the 2012 GPU Technology Conference in Beijing. CEO Jen-Hsun Huang stated that CPUs are remarkably inefficient for multicore processing:

The CPU is fast and is terrific at single-threaded performance, but because so much of the electronics inside the CPU is dedicated to out-of-order execution, branch prediction, speculative execution, all of the technology that has gone into sustaining instruction throughput and making the CPU faster at single-threaded applications, the electronics necessary to enable it to do that has grown tremendously. With four cores, in order to execute an operation, a floating point add or a floating point multiply, 50 times more energy is dedicated to the scheduling of that operation than the operation itself. If you look at the silicon of a CPU, the floating point unit is only a few percent of the overall die, and it is consistent with the usage of the energy to sequence, to schedule the instructions running complicated programs.

That figure of 50 times surprised me, and I asked Intel’s James Reinders for a comment. He was quick to respond, noting that:

50X is ridiculous if it encourages you to believe that there is an alternative which is 50X better.  The argument he makes, for a power-efficient approach for parallel processing, is worth about 2X (give or take a little). The best example of this, it turns out, is the Intel MIC [Many Integrated Core] architecture.

Reinders went on to say:

Knights Corner is superior to any GPGPU type solution for two reasons: (1) we don’t have the extra power-sucking silicon wasted on graphics functionality when all we want to do is compute in a power efficient manner, and (2) we can dedicate our design to being highly programmable because we aren’t a GPU (we’re an x86 core – a Pentium-like core for “in order” power efficiency). These two turn out to be substantial advantages that the Intel MIC architecture has over GPGPU solutions that will allow it to have the power efficiency we all want for highly parallel workloads, but able to run an enormous volume of code that will never run on GPGPUs (and every algorithm that can run on GPGPUs will certainly be able to run on a MIC co-processor).

So Intel is evangelising its MIC against GPGPU solutions such as NVIDIA’s Tesla line. Yesterday NVIDIA’s Steve Scott spoke up to put the other case. If Intel’s argument is that a Tesla is really a GPU pressed into service for general computing, then Scott’s riposte is that the cores in MIC are really CPUs, albeit of an older, simpler design:

They don’t really have the equivalent of a throughput-optimized GPU core, but were able to go back to a 15+ year-old Pentium design to get a simpler processor core, and then marry it with a wide vector unit to get higher flops per watt than can be achieved by Xeon processors.

Scott then takes on Intel’s most compelling claim: compatibility with existing x86 code. That compatibility does not matter much, says Scott, since you will have to change your code anyway:

The reality is that there is no such thing as a “magic” compiler that will automatically parallelize your code. No future processor or system (from Intel, NVIDIA, or anyone else) is going to relieve today’s programmers from the hard work of preparing their applications for the future.
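Scott’s point is easy to illustrate. Whether a loop can be spread across cores depends on data dependences that only the programmer understands, which is why x86 compatibility alone does not deliver parallel performance. Here is a minimal sketch in TypeScript; the function names and numbers are my own, purely illustrative:

```typescript
// Independent iterations: each element is computed in isolation,
// so this loop is trivially parallel and a compiler could split it.
function scaleAll(xs: number[], k: number): number[] {
  return xs.map((x) => x * k);
}

// Loop-carried dependence: iteration i needs the result of iteration i-1.
// No compiler can blindly parallelize this; the programmer must recast
// the algorithm (for example as a parallel prefix scan) before any
// multicore chip or accelerator can help.
function compound(deposits: number[], rate: number): number[] {
  const out: number[] = [];
  let running = 0;
  for (const d of deposits) {
    running = running * (1 + rate) + d; // depends on the previous iteration
    out.push(running);
  }
  return out;
}
```

The first function ports to any parallel architecture with little effort; the second requires rethinking the algorithm itself, and no processor vendor can do that for you.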

What is the real story here? It would, of course, be most interesting to compare the performance of MIC against Tesla, or against the next generation of NVIDIA GPGPUs based on Kepler; and may the fastest and most power-efficient win. That will have to wait though. In the meantime we can see that Intel does not enjoy watching the world’s supercomputers install NVIDIA GPGPUs, with the Oak Ridge National Laboratory Jaguar/Titan (the most powerful supercomputer in the USA) a high-profile example:

In addition, 960 of Jaguar’s 18,688 compute nodes now contain an NVIDIA graphical processing unit (GPU). The GPUs were added to the system in anticipation of a much larger GPU installation later in the year.

Equally, NVIDIA may be rattled by the prospect of Intel offering strong competition for Tesla, which has not had a lot of competition in this space.

There is an ARM factor here too. When I spoke to Scott in Beijing, he hinted that NVIDIA would one day produce GPGPUs with ARM chips embedded for CPU duties, perhaps sharing the same memory.

Run Metro apps in a window on Windows 8

I have been drilling into Visual Studio 11 beta recently. This includes a simulator for debugging Windows 8 Metro-style apps, and I was surprised by the way it works. Unlike the Windows Phone emulators, which are isolated environments for testing apps, the simulator is actually a window into your own machine.


You can do some strange stuff. For example, you can not only debug your app in the simulator, but also run up Visual Studio 11 on the desktop within the simulator and edit the app there as well. It will not let you run the simulator within the simulator though; I tried!

It occurred to me that the Metro simulator accomplishes one of the things some users of the Consumer Preview have asked for. It lets you run Metro apps in a window, so that you can resize them, minimize them, and avoid the jarring context switch between full-screen Metro and the normal desktop with the taskbar.


What is the simulator? It is actually a remote desktop session into your own machine. Normally you cannot do this, as client versions of Windows only allow one session at a time and you already have one running, but Microsoft has given itself special permission.

Running Metro apps in a window is not its intended purpose, but it is interesting to try, as it shows how this might have worked if Microsoft had taken a more desktop-centric approach to the dual personality in Windows 8.

A further thought is to consider why the Visual Studio team decided to do things this way. Presumably Microsoft’s developers saw the necessity of working in the Visual Studio IDE while also exercising the Metro-style app.

Well, what if you are not a developer, but you still want to have Excel open while you check out, for example, the Bing Finance app? It is not only developers who may have good reasons to have a desktop app and a Metro app running side by side.

Dual monitors accomplish this of course, and to some extent so does the “Snap” split view if you have the right screen resolution, but running Metro in its own window is a rather convenient solution.

Apple breaks web storage in iOS 5.1, does not care about web apps?

Many iOS apps that rely on web storage APIs for persistent data have been broken by the recent upgrade to iOS 5.1. The issue affects apps built with PhoneGap, or others that use WebKit APIs to store data. The effect for users is that they lose all their data after the upgrade. For example, it sounds like the issue has hit this app:

[Screenshot: an app affected by the iOS 5.1 storage change]

Another developer says:

My statistics show users abandoning ship as their settings are wiped over and over, after each app restart.
This is a critical error that must be patched as soon as possible. Remember there’s also a delay from Apple’s app approval process to consider.

Put more precisely, WebKit used to store its local databases in Library/WebKit, a location that the OS regards as persistent and which is backed up to iCloud. In iOS 5.1 this data is stored in Library/Caches, which means it is regarded as temporary and liable to be deleted. The W3C Candidate Recommendation says of localStorage:

User agents should expire data from the local storage areas only for security reasons or when requested to do so by the user.

An embedded browser is not quite the same as a web browser though, and if you are using SQLite in WebKit then that falls outside the W3C HTML5 API, since Web SQL is no longer included in the specification.
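To make the distinction concrete, here is a minimal sketch of the two WebKit storage APIs at issue, written in TypeScript; the data shapes and key names are my own invention. In an embedded UIWebView on iOS 5.1, the files behind both calls end up under Library/Caches, so the OS may purge them:

```typescript
// 1. localStorage: the standard W3C API the quotation above covers.
function saveSettings(settings: { theme: string; userName: string }): void {
  // A synchronous key/value store. On a desktop browser this survives
  // restarts; on iOS 5.1 the backing file sits in a purgeable cache folder.
  window.localStorage.setItem("settings", JSON.stringify(settings));
}

function loadSettings(): { theme: string; userName: string } | null {
  const raw = window.localStorage.getItem("settings");
  return raw ? JSON.parse(raw) : null; // null once the OS wipes the cache
}

// 2. Web SQL: a WebKit extension backed by SQLite, outside the HTML5 spec,
// so the W3C wording quoted above does not formally apply to it.
// openDatabase is absent from the standard DOM typings, hence the cast.
const db = (window as any).openDatabase("appdata", "1.0", "App data", 5 * 1024 * 1024);
db.transaction((tx: any) => {
  tx.executeSql("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)");
  tx.executeSql("INSERT INTO notes (body) VALUES (?)", ["draft saved locally"]);
});
```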

The issue is complicated in that there also seems to be a bug, described here, which causes data to be lost after upgrading an app to a newer version; and there are problems with actual web apps as well as with apps that use an embedded UIWebView.

PhoneGap is fixable, in that it can call native APIs, and there is work going on to implement this. The danger is that more platform-specific code undermines the cross-platform benefits.
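One hypothetical shape for such a fix is to route persistence through a small native plugin that writes to the app’s Documents folder, which iOS treats as persistent and backs up. The plugin name and action below are invented for illustration, not a real shipped plugin, though cordova.exec is the genuine PhoneGap/Cordova bridge call:

```typescript
// Minimal TypeScript declaration of the PhoneGap/Cordova native bridge.
declare const cordova: {
  exec(
    success: (result: string) => void,
    fail: (error: string) => void,
    service: string, // native plugin class to target
    action: string,  // method to invoke on that plugin
    args: unknown[]
  ): void;
};

// Hypothetical wrapper: persist a value via native code instead of
// WebKit storage, so it lands in Documents rather than Library/Caches.
function saveDurably(key: string, value: string): void {
  cordova.exec(
    () => console.log("saved " + key),
    (err) => console.error("native save failed: " + err),
    "FileStore", // illustrative plugin name, not a real one
    "set",       // illustrative action name
    [key, value]
  );
}
```

The trade-off is exactly the one noted above: every such wrapper is iOS-specific code inside a nominally cross-platform app.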

Discussions on the Apple developer forums during the beta period for iOS 5.1 show that Apple was aware of the issue and that it is by design. The impression given is that Apple was annoyed by the number of apps using web storage as a cache to speed things up (whether web or native), rather than just storing user-created content, and felt this imposed too much burden on the constrained storage space of an iOS device.

It does not help that there is no way to increase the storage in an iPad or iPhone, other than by replacing it with a newer model that has more capacity.

The problem is a real one, but you cannot escape the impression that Apple regards solutions like PhoneGap, or even web apps that behave like local apps, as a kind of workaround or hack to be discouraged in favour of apps written entirely with the iOS SDK.

Apple benefits from true native apps as they are more likely to be exclusive to its platform, and must be sold through the App Store with a fee to Apple.

The official Data Storage Guidelines for iOS are here.

Developers dislike monochrome Visual Studio 11 beta

Microsoft is having trouble convincing developers that its new Metro-influenced Visual Studio user interface, in the forthcoming version now in beta, is a good idea.

To be more precise, it is not so much Metro as the way Microsoft has chosen to use it, with toolbox icons now black and white. The change also affects pop-ups such as the IntelliSense list in the code editor. Here is the new design:

[Screenshot: the new monochrome design in Visual Studio 11]

or you can choose a “Dark” colour scheme:

[Screenshot: the Dark colour scheme]

and the old 2010 design for comparison:

[Screenshot: the Visual Studio 2010 design]

Developers voting over at UserVoice, the official feedback site, have made this the single biggest issue, with 4707 votes.


They do not much like the all-caps toolbox names either.

Microsoft has marked this as “Under review”, so maybe there could yet be a more colourful future for Visual Studio 11.