All posts by onlyconnect

The pros and cons of NVIDIA’s cloud GPU

Yesterday NVIDIA announced the GeForce GRID, a cloud GPU service, here at the GPU Technology Conference in San Jose.

The GeForce GRID is server-side software that takes advantage of new features in the “Kepler” wave of NVIDIA GPUs, such as GPU virtualisation, which enables a single GPU to support multiple sessions, and an on-board encoder that lets the GPU render to an H.264 stream rather than to a display.

The result is a system that lets you play games on any device that supports H.264 video, provided you can also run a lightweight client to handle gaming input. Since the rendering is done on the server, you can play hardware-accelerated PC games on ARM tablets such as the Apple iPad or Samsung Galaxy Tab, or on a TV with a set-top box such as Apple TV, Google TV, or with a built-in client.
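Conceptually the client does very little: it receives the H.264 stream, decodes and displays it, and sends controller or touch input back to the server, which does all the rendering. Here is a rough sketch of that loop; the decoder and connection types are hypothetical stand-ins, not NVIDIA’s actual client API:

```csharp
// Illustrative sketch only: these interfaces are hypothetical stand-ins for
// whatever H.264 decoder and network transport a real GRID client would use.
using System;

public struct GameInput { public float StickX, StickY; public bool Fire; }
public struct DecodedFrame { public int Width, Height; /* pixel data elided */ }

public interface IStreamConnection
{
    byte[] ReceiveEncodedFrame();     // next H.264 frame from the server
    void SendInput(GameInput input);  // forward local input upstream
}

public interface IH264Decoder
{
    DecodedFrame Decode(byte[] encodedFrame);
}

public static class ThinClient
{
    // The server runs the game and encodes the output; the client only
    // decodes, displays and reports input, which is why an ARM tablet or
    // a TV set-top box has enough horsepower to "play" a PC game.
    public static void RunFrameLoop(IStreamConnection conn, IH264Decoder decoder,
                                    Func<GameInput> readLocalInput,
                                    Action<DecodedFrame> present)
    {
        while (true)
        {
            DecodedFrame frame = decoder.Decode(conn.ReceiveEncodedFrame());
            present(frame);                    // show the frame
            conn.SendInput(readLocalInput());  // input applies to subsequent frames
        }
    }
}
```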

It is an impressive system, but what are the limitations, and how does it compare to the existing OnLive system which has been doing something similar for a few years? I attended a briefing with NVIDIA’s Phil Eisler, General Manager for Cloud Gaming & 3D Vision, and got a chance to put some questions.

The key problem is latency. Games become unplayable if there is too much lag between when you perform an action and when it registers on the screen. Here is NVIDIA’s slide:

image

This looks good: just 120-150ms latency. But note that cloud in the middle: 30ms is realistic if the servers are close by, but what if they are not? The demo here at GTC in yesterday’s keynote was done using servers that are around 10 miles away, but there will not be a GeForce GRID server within 10 miles of every user.

According to Eisler, the key thing is not so much the distance, as the number of hops the IP traffic passes through. The absolute distance is less important than being close to an Internet backbone.
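To see why this matters, it helps to put rough numbers on the pipeline. The ~30ms network figure and the 120-150ms total come from NVIDIA’s slide; how the remainder splits between rendering, encoding and decoding below is my own guess, included purely to illustrate the arithmetic:

```csharp
using System;

class LatencyBudget
{
    static void Main()
    {
        // From NVIDIA's slide: ~30ms for the network leg, 120-150ms end to end.
        // The other component figures are illustrative guesses, not NVIDIA's.
        int gameAndRenderMs = 50;  // game logic plus GPU rendering on the server (guess)
        int captureEncodeMs = 10;  // Kepler's on-board H.264 encoder (guess)
        int networkMs       = 30;  // "realistic if the servers are close by"
        int decodeDisplayMs = 40;  // client-side decode and display (guess)

        int bestCase = gameAndRenderMs + captureEncodeMs + networkMs + decodeDisplayMs;
        Console.WriteLine("Best case: {0}ms", bestCase); // 130ms, inside NVIDIA's range

        // Only the network leg grows with distance and hop count, but it is
        // the one component the game provider controls least.
        foreach (int extraMs in new[] { 30, 60, 90 })
        {
            Console.WriteLine("Network {0}ms -> total {1}ms",
                              networkMs + extraMs, bestCase + extraMs);
        }
    }
}
```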

The problem is real though, and existing cloud gaming providers like OnLive and Gaikai install servers close to major conurbations in order to address it. In other words, it pays to have many small GPU clouds dotted around rather than a few large installations.

The implication is that cloud gaming is expensive to set up if you want to reach a large number of users, and that high quality coverage will always be limited, with city dwellers favoured over rural communities, for example. The actual breadth of coverage will depend on the hoster’s infrastructure, the user’s broadband provider, and so on.

It would make sense for broadband operators to partner with cloud gaming providers, or to become cloud gaming providers, since they are in the best position to optimise performance.

Another question: how much work is involved in porting a game to run on GeForce GRID? Not much, Eisler said; it is mainly a matter of tweaking the game’s control panel options for display and adapting the input to suit the service. He suggested 2-3 days to adapt a PC game.

What about the comparison with OnLive? Eisler let slip that OnLive does in fact use NVIDIA GPUs but would not be pressed further; NVIDIA has agreed not to make direct comparisons.

When might GeForce GRID come to Europe? Later this year or early next year, said Eisler.

Eisler was also asked whether GeForce GRID will cannibalise sales of GPUs to gamers. He noted that while GeForce GRID latency now compares favourably with that of a games console, this is in part because the current consoles are now a relatively old generation, and a modern PC delivers around half the latency of a console. Nevertheless it could have an impact.

One of the benefits of the GeForce GRID is that you will, in a sense, get an upgraded GPU every time your provider upgrades its GPUs, at no direct cost to you.

I guess the real question is how the advent of cloud GPU gaming, if it takes off, will impact the gaming market as a whole. Casual gaming on iPhones, iPads and other smartphones has already eaten into sales of standalone games. Now you can play hardcore games on those same devices. If the centre of gaming gravity shifts further to the cloud, there is less incentive for gamers to invest in powerful GPUs on their own PCs.

Finally, note that the latency issues, while still important, matter less for non-gaming cloud GPU applications, such as those targeted by NVIDIA VGX. Put another way, a virtual desktop accelerated by VGX could give acceptable performance over connections that are not good enough for GeForce GRID.

NVIDIA’s GPU in the cloud: will you still want an Xbox or PlayStation?

NVIDIA’s GPU Technology Conference is an unusual event, in part a get-together for academic researchers using HPC, in part a marketing pitch for the company. The focus of the event is on GPU computing, in other words using the GPU for purposes other than driving a display, such as processing simulations to model climate change or fluid dynamics, or processing huge amounts of data in order to calculate where best to drill for oil. However NVIDIA also uses the event to announce its latest GPU innovations, and CEO Jen-Hsun Huang used this morning’s keynote to introduce the company’s GPU in the cloud initiative.

This takes two forms, though both are based on a feature of the new “Kepler” wave of NVIDIA GPUs which allows them to render graphics to a stream rather than to a display. It is the world’s first virtualized GPU, he claimed.

image

The first target is enterprise VDI (Virtual Desktop Infrastructure). The idea is that in the era of BYOD (Bring Your Own Device) there is high demand for the ability to run Windows applications on devices of every kind, perhaps especially Apple iPads. This works fine via virtualisation for everyday applications, but what about GPU-intensive applications such as Autocad or Adobe Photoshop? Using a Kepler GPU you can run up to 100 virtual desktop instances with GPU acceleration. NVIDIA calls this the VGX Platform.

image

What actually gets sent to the client is mostly H.264 video, which most current devices can decode efficiently, though of course you still need a remote desktop client.

The second target is game streaming. The key problem here – provided you have enough bandwidth – is minimising the lag between when a player moves or clicks Fire, and when the video responds. NVIDIA has developed software called the GeForce GRID which it will supply along with specially adapted Kepler GPUs to cloud companies such as Gaikai. Using the GeForce GRID, lag is reduced, according to NVIDIA, to something close to what you would get from a game console.

image

We saw a demo of a new Mech shooter game in which one player is using an Asus Transformer Prime, an Android tablet, and the other an LG television which has a streaming client built in. The game is rendered in the cloud but streamed to the clients with low latency.

image

“This is your game console,” said NVIDIA CEO Jen-Hsun Huang, holding the Ethernet cable that connected the TV to the internet.

image

The concept is attractive for all sorts of reasons. Users can play games without having to download and install, or connect instantly to a game being played by a friend. Game companies are protected from piracy, because the game code runs in the cloud, not on the device.

NVIDIA does not plan to run its own cloud services, but is working with partners, as the following slide illustrates. On the VDI side, Citrix, Microsoft, VMware and Xen were mentioned as partners.

image

If cloud GPU systems take off, will they cannibalise the market for powerful GPUs in client devices, whether PCs, game consoles or tablets? I put this to Huang in the press Q&A after the keynote, and he denied it, saying that people like designers hate to share their PCs. It was an odd and unsatisfactory answer. After all, if Huang is saying that your games console is now an Ethernet cable, he is also saying that there is no longer any need for game consoles which contain powerful NVIDIA GPUs. The same might apply to professional workstations, with the logic that cloud computing always presents: that shared resources have better utilisation and therefore lower cost.

NVIDIA Nsight comes to Eclipse for Mac, Linux GPU programming

NVIDIA has ported its Nsight development tools, previously a plug-in for Visual Studio, to run within the open source Eclipse IDE for use on Mac and Linux.

image

The Nsight tools include profiling, refactoring, syntax highlighting and auto-completion, as well as a bunch of code samples.

The Windows version for Visual Studio has also been updated, and now supports local GPU debugging as well as new support for DirectX frame debugging and analysis.

Although Eclipse of course runs on Windows, Windows developers should continue to use the Visual Studio version; NVIDIA is not supporting the Eclipse edition of Nsight on Windows.

The tools are in preview and you can sign up to try them here.

Another significant development is the availability of the CUDA LLVM Compiler. NVIDIA has contributed CUDA compiler code to the open source LLVM project. This means that other languages which compile to LLVM intermediate assembly language can be adapted to support parallel processing on NVIDIA GPUs. The CUDA Compiler SDK will be made available this week at the NVIDIA GPU Technology Conference in San Jose.

Review: Digital Wars by Charles Arthur

Subtitled Apple, Google, Microsoft and the battle for the internet, this is an account by the Guardian’s Technology Editor of the progress of three tech titans between 1998 and the present day. In 1998, Google was just getting started, Apple was at the beginning of its recovery under the returning CEO Steve Jobs, and Microsoft dominated PCs and was busy crushing Netscape.

Here is how the market capitalization of the three changed between 1998 and 2011:

            End 1998          Mid 2011
Apple       $5.4 billion      $346.7 billion
Google      $10 million       $185.1 billion
Microsoft   $344.6 billion    $214.3 billion

This book tells the story behind that dramatic change in fortunes. It is a great read, written in a concise, clear and engaging style, and informed by the author’s close observation of the technology industry over that period.

That said, it is Apple that gets the best coverage here, not only because it is the biggest winner, but also because it is the company for which Arthur feels most affinity. When it comes to Microsoft, the book focuses mainly on the company’s big failures in search, digital music and smartphones. These failures are well described, but the question of why Microsoft has performed so badly is never fully answered, beyond references to the impact of antitrust action and an unflattering portrayal of CEO Steve Ballmer. The inner workings of Google are even less visible, and if your main interest is the ascent of Google you should look elsewhere.

Leaving aside Google then, describing the success of Apple alongside Microsoft’s colossal blunders makes compelling reading. Arthur is perhaps a little unfair to Microsoft, because he skips over some of the company’s better moments, such as the success of Windows 7 and Windows Server, or even the Xbox 360, though he would argue, I think, that those successes are peripheral to his theme, which is the internet and mobile.

The heart of the book is in chapters four, on digital music, and five, on smartphones. The iPod, after all, was the forerunner of the Apple iPhone, and the iPhone was the forerunner of the iPad. Microsoft’s famous ecosystem of third-party hardware partners failed to compete with the iPod, and by the time the company got it mostly right by abandoning its partners and creating the Zune, it was too late.

The smartphone story played out even worse for Microsoft, given that this was a market where it already had a significant presence with Windows Mobile. Arthur describes the launch of the iPhone, and then recounts how Microsoft acquired a great mobile phone team with a company called Danger, and proceeded to destroy it. The Danger/Pink episode shows more than any other how broken Microsoft’s management and mobile strategy had become. Danger was acquired in February 2008. There was then, Arthur describes, an internal battle between the Windows Mobile team and the Danger team, won by the Windows Mobile team under Andy Lees, and resulting in an 18-month delay while the Danger operating system was rewritten to use Windows CE. By the time the first “Project Pink” phone was delivered it was short on features and no longer wanted by Verizon, the partner operator. The “Kin” phone was on the market for only 48 days.

The Kin story was dysfunctional Microsoft at its worst, a huge waste of money and effort, and could have broken a smaller company. Microsoft shrugged it off, showing that its Windows and Office cash cows continue to insulate it against incompetence, probably too much for its own long-term health.

Finally, the book leaves the reader wondering how the story continues. Arthur gets the significance of the iPad in business:

Cook would reel off statistics about the number of Fortune 500 companies ‘testing or deploying’ iPads, of banks and brokers that were trying it, and of serious apps being written for it. Apple was going, ever so quietly, after the business computing market – the one that had belonged for years to Microsoft.

Since he wrote those words that trend has increased, forming a large part of what is called Bring Your Own Device or The Consumerization of IT. Microsoft does have what it hopes is an answer, which is Windows 8, under a team led by the same Steven Sinofsky who made a success of Windows 7. The task is more challenging this time round though: Windows 7 was an improved version of Windows Vista, whereas Windows 8 is a radical new departure, at least in respect of its Metro user interface, which is aimed at the tablet market. If Windows 8 fares as badly against the iPad as PlaysForSure fared against the iPod, then expect further decline in Microsoft’s market value.

 

System Center 2012, Windows 8 and the BYOD revolution

Yesterday I attended a UK Microsoft MMS catch-up session in Manchester, aimed at those who could not make it to Las Vegas last month. The subject was the new System Center 2012, and how it fits with Microsoft’s concept of the private cloud, and its strategy for supporting Bring Your Own Device (BYOD), the proliferation of mobile devices on which users now expect to be able to receive work email and do other work.

The session, I have to say, was on the dry side; but taken on its own terms System Center 2012 looks good. I was particularly interested in how Microsoft defines “private cloud” versus just a bunch of virtual machines (JBVM?). Attendees were told that a private cloud has four characteristics:

  • Pooled resources: an enterprise cloud, not dedicated servers for each department.
  • Self service: users (who might also be admins) can get new server resources on demand.
  • Elasticity: apps that scale on demand.
  • Usage based: could be charge-back, but more often show-back, the ability to report on what resources each user is consuming.

Microsoft’s virtualization platform is based on Hyper-V, which we were assured now represents 28% of new server virtual machines, but System Center has some support for VMware and Citrix Xen as well.

System Center now consists of eight major components:

  • Virtual Machine Manager: manage your private cloud
  • Configuration Manager (SCCM): deploy client applications, manage your mobile devices
  • Operations Manager: monitor network and application health
  • Data Protection Manager: backup, not much mentioned
  • Service Manager: Help desk and change management, not much mentioned
  • Orchestrator: a newish product acquired from Opalis in 2009, automates tasks and is critical for self-service
  • App Controller: manage applications on your cloud
  • Endpoint protection: anti-malware, praised occasionally but not really presented yesterday

I will not bore you by going through this blow by blow, but I do have some observations.

First, in a Microsoft-platform world System Center makes a lot of sense for large organisations who do not want public cloud and who want to move to the next stage in managing their servers and clients without radically changing their approach.

Following on from that, System Center meets some of the requirements Microsoft laid out at the start of the session, but not all. In particular, it is weak on elasticity. Microsoft needs something like Amazon’s Elastic Beanstalk, which lets you deploy an application, set a minimum and maximum instance count, and have the platform handle the mechanics of load balancing and scaling up and down on demand. You can do it on System Center, we were told, if you write a bunch of scripts to make it work. At some future point Orchestrator will get auto scale-out functionality.
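For what it is worth, the logic those scripts (or a future Orchestrator runbook) would need to implement is not conceptually complicated. A minimal sketch, where IPrivateCloud is a hypothetical wrapper around whatever Virtual Machine Manager cmdlets or APIs you would actually call:

```csharp
using System;
using System.Threading;

// Hypothetical abstraction over the real Virtual Machine Manager calls.
public interface IPrivateCloud
{
    int RunningInstances(string service);
    double AverageCpuLoad(string service);   // 0.0 - 1.0 across the service
    void AddInstance(string service);
    void RemoveInstance(string service);
}

public class AutoScaler
{
    // Elastic Beanstalk-style floor and ceiling for the instance count.
    const int MinInstances = 2;
    const int MaxInstances = 10;
    const double ScaleOutAbove = 0.75;  // add capacity above 75% average CPU
    const double ScaleInBelow  = 0.25;  // remove capacity below 25%

    public static void Run(IPrivateCloud cloud, string service, CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            int count = cloud.RunningInstances(service);
            double load = cloud.AverageCpuLoad(service);

            if (load > ScaleOutAbove && count < MaxInstances)
                cloud.AddInstance(service);
            else if (load < ScaleInBelow && count > MinInstances)
                cloud.RemoveInstance(service);

            Thread.Sleep(TimeSpan.FromMinutes(5)); // re-evaluate periodically
        }
    }
}
```

The hard part, of course, is everything the interface hides: provisioning, load balancing and health checks, which is exactly the plumbing Elastic Beanstalk takes care of for you.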

Second, it seems to me unfortunate that Microsoft has two approaches to cloud management, one in System Center for private cloud, and one in Azure for public cloud. You would expect some differences, of course; but looking at the deployment process for applications on System Center App Controller it seems to be a different model from what you use for Azure.

Third, System Center 2012 has features to support BYOD and enterprise app stores, and my guess is that this is the way forward. Mobile device management in Configuration Manager uses a Configuration Manager client installed on the device or, where that is not possible, exploits the support for Exchange ActiveSync policies found in many current smartphones, including policies such as Approved Application List, Require Device Encryption, and remote wipe after a specified number of incorrect password attempts.

The Software Center in Configuration Manager lets users request and install applications using a variety of different mechanisms under the covers, from Windows Installer to scripts and virtualised applications.

Where this gets even more interesting is in the next version of InTune, the cloud-based PC and device management tool. We saw a demonstration of a custom iOS app installed via self-service from InTune onto an iPhone. I presume this feature will also come to Software Center in SCCM, though it is not there yet as far as I am aware.

You can also see this demonstrated in the second MMS keynote here – it is the last demo in the Day 2 keynote.

image

InTune differs from System Center in that it is not based on Windows domains, though you can apply a limited set of policies. In some respects it is similar to the new self-service portal which Microsoft is bringing out for deploying Metro apps to Windows RT (Windows on ARM) devices, as described here.

This set me thinking. Which machines will be easier to manage in the enterprise, the Windows boxes with their group policy and patch management and complex application installs? Or the BYOD-style devices, including Windows RT, with their secure operating systems, isolated applications, and easy self-service app install and removal?

The latter seems the better approach. Of course most corporate apps do not work that way yet, though app virtualisation and desktop virtualisation help; still, it seems to me that this is the right direction for corporate IT.

The implication is two-fold. One is that basing your client device strategy around iPads makes considerable sense. This, I imagine, is what Microsoft fears.

The other implication is that Windows RT (which includes Office) plus Metro apps is close to the perfect corporate client. Microsoft VP Steven Sinofsky no doubt gets this, which is why he is driving Metro in Windows 8 despite the fact that the Windows community largely wants an improved Windows 7, not the hybrid Metro and desktop OS that we have in Windows 8.

Windows 8 on x86 will be less suitable, because it perpetuates the security issues in Windows 7, and because users will tend to spend their time in familiar Windows desktop applications which lack the security and isolation benefits of Metro apps, and which will be hard to use on a tablet without keyboard and mouse.

A little colour returns to Visual Studio 11 – but not much

Microsoft has responded to user feedback by re-introducing colour into the Visual Studio 11 IDE. The top request in the official feedback forum was for more colour in the toolbars and icons.

image

Now Microsoft’s Monty Hammontree, who is Director of User Experience, Microsoft Developer Tools Division – it is interesting that such a post exists – has blogged about the company’s response:

We’ve taken this feedback and based on what we heard have made a number of changes planned for Visual Studio 11 RC.

That said, developers expecting a return to the relatively colourful icons in Visual Studio 2010 will be disappointed. Hammontree posted the following side by side image:

image

This shows Visual Studio 2010 first, then the beta, and then the forthcoming release candidate. Squint carefully and you can see a few new splashes of colour.

image

You can also see that the word Toolbox is no longer all upper case, another source of complaint.

Hammontree explains that colour has been added to selected icons in order to help distinguish between common actions, differentiate icons within the Solution Explorer, and to reintroduce IntelliSense cues.

Did Microsoft do enough? Some users have welcomed the changes:

You have to appreciate a company that listens to there [sic] users and actually makes changes based off feedback. You guys rock!

while others are doubtful:

with respect, I fear that the changes are token ones and that whoever’s big idea this monochromatic look is, is stubbornly refusing to let go of it in spite of the users overwhelming rejection of it.

or the wittier:

I’m glad you noticed all the feedback about the Beta, when people were upset that you chose the wrong shade of gray.

While the changes are indeed subtle, they are undoubtedly an improvement for those hankering for more colour.

Another issue is that by the time a product hits beta in the Microsoft product cycle, it is in most cases too late to make really major changes. The contentious Metro UI in Windows 8 will be another interesting example.

That said, there are more important things in Visual Studio 11 than the colour scheme, despite the attention the issue has attracted.

Great sounding recordings

There was a discussion on a music forum about which are the best-sounding recordings out there.

I am always amused by these discussions because I see stuff picked that is great music (at least to those who pick it) but cannot honestly be described as great-sounding in a technical sense.

Of course the two are hard to separate; and maybe there are albums that sound deliberately “bad” as part of the artistic statement.

Conversely, if the music does not interest you, it is hard to appreciate the sonics.

Here were my picks though: six albums that I know will always sound good.

Kind of Blue by Miles Davis – great presence and realism, interesting bass lines to follow.

 
Carpenters by The Carpenters. Probably influenced by the great voice, but I find this a really natural-sounding recording. You can get the CD for pennies at any supermarket here in the UK.

New Blood by Peter Gabriel. Modern recording, just very nicely done. Probably helped by natural acoustic sound of orchestra etc.

The Freewheelin’ Bob Dylan. I like this for its simplicity and realism. If you want a recording where you can close your eyes and imagine a man there singing, this is excellent.

Electric Cafe by Kraftwerk. Great sounding electronica.

Stravinsky: Le Sacre du printemps/L’Oiseau de feu; Detroit Symphony Orchestra, Antal Dorati (Decca). No idea how this ranks in a list of fine-sounding classical recordings, but I like it; it beautifully conveys the drama of the music.

Always interested in hearing about other people’s favourites, from a sonic point of view.

What next for the Nook as Microsoft invests in Barnes & Noble’s digital business?

Today Microsoft and Barnes & Noble announced a partnership to sell eBooks, based on the existing Barnes & Noble digital bookstore and its eBook reader, the Nook.

The new subsidiary, referred to in this release as Newco, will bring together the digital and College businesses of Barnes & Noble. Microsoft will make a $300 million investment in Newco at a post-money valuation of $1.7 billion in exchange for an approximately 17.6% equity stake. Barnes & Noble will own approximately 82.4% of the new subsidiary, which will have an ongoing relationship with the company’s retail stores. Barnes & Noble has not yet decided on the name of Newco.

In addition, Barnes & Noble, which was in litigation with Microsoft over the Redmond company’s claim to royalties on Android, has agreed to a “royalty-bearing license” for the Nook eReader and tablets. Both the Nook Color and the Nook Tablet are based on Android.

image

Another detail is that there will be a Nook application for Windows 8:

One of the first benefits for customers will be a NOOK application for Windows 8

though the release does not state whether or not this will be a Metro app. I would guess that it is, since otherwise it would not work on Windows RT (the ARM version of Windows), but nothing can be taken for granted.

Note that Barnes & Noble already has Nook apps for iPad, iPhone, Android, Windows and Mac, but not for Windows Phone.

It is an intriguing deal. Has Microsoft just taken a 17.6% stake in an Android company, or is there some plan in the works to base a future Nook on Windows?

As an attendee at developer conferences, I regularly see the Nook developer evangelists, most recently at last year’s Adobe Max. Barnes & Noble claims that Nook apps sell relatively well, compared to apps on the official Google Play market, because Nook customers expect to pay for their content. The Nook is not an officially Google-blessed Android device, so has no access to the Play market.

If a future Nook is Windows-based, Barnes & Noble will have a tricky time explaining to developers why they will have to port their apps.

Overall this is a hard deal to interpret. Barnes & Noble was a thorn in Microsoft’s side with its resistance to Android royalties, a thorn which has now been removed, but what else does it signify? You would have thought there would be a Nook app for Windows 8 anyway, unless Windows 8 turns out to be a complete flop.

Convert .NET Intermediate Language to JavaScript

Whoever called JavaScript the assembly language of the web was a true prophet.

Compiling .NET code to JavaScript is not new. I have heard that Microsoft’s Office Web Apps, which provide browser-hosted editing of Office documents, are built with Script#.

The difference with JSIL is that it compiles .NET Intermediate Language (IL), and therefore works with any .NET language – though note that:

JSIL is still in development. You will hit bugs
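To make the point concrete: because JSIL works on compiled IL rather than on source, perfectly ordinary C# like the following should, in principle, be translatable without any web-specific changes. (This is my own minimal example, not one of the JSIL samples.)

```csharp
using System;

public static class Program
{
    // Nothing here knows about the browser; JSIL translates the IL that the
    // C# compiler produces for this into JavaScript.
    public static int Fibonacci(int n)
    {
        int a = 0, b = 1;
        for (int i = 0; i < n; i++)
        {
            int next = a + b;
            a = b;
            b = next;
        }
        return a;
    }

    public static void Main()
    {
        Console.WriteLine(Fibonacci(10)); // 55
    }
}
```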

The screenshot says it all:

image

Microsoft’s Visual Studio LightSwitch: does it have a future?

A recent and thorough piece on Visual Studio LightSwitch prompted a Twitter discussion on what kind of future the product has. Background:

  • LightSwitch is an application generator which builds data-driven applications.
  • A LightSwitch application uses ASP.NET on the server and Silverlight on the client.
  • LightSwitch applications can be deployed to Windows Azure.
  • LightSwitch apps can either be browser-hosted or use Silverlight out of browser for the desktop.
  • LightSwitch is model-driven so in principle it could generate other kinds of client, such as HTML5 or Windows 8 Metro.
  • LightSwitch first appeared last year, and has been updated for Visual Studio 11, now in beta.

I have looked at LightSwitch in some detail, including a hands-on where I built an application. I have mixed feelings about the product. It was wrongly marketed, as the kind of thing a non-professional could easily pick up to generate an application for their business. In my opinion it is too complex for most such people. The real market is professional developers looking for greater productivity. As a way of building a multi-tier application which does its best to enforce good design principles, LightSwitch is truly impressive; though I also found annoyances like skimpy documentation, and that some things which should have been easy turned out to be difficult. The visual database designer is excellent.

The question now: what kind of future does LightSwitch have? Conceptually, it is a great product and could evolve into something useful, but I question whether Microsoft will stick with it long enough. Here is what counts against it:

  • The decision to generate Silverlight applications now looks wrong. Microsoft is not going to do much more with Silverlight, and is more focused on HTML5 and JavaScript, or Windows Runtime for Metro-style apps in Windows 8 and some future Windows Phone. There is some family resemblance between Windows Runtime and Silverlight, but not necessarily enough to make porting easy.
  • There is no mobile support, not even for Windows Phone 7 which runs Silverlight.
  • I imagine sales have been dismal. The launch product was badly marketed and perplexing to many.

What about the case in favour? Silverlight enthusiast Michael Washington observes that the new Visual Studio 11 version of LightSwitch generates OData feeds on the server, rather than WCF RIA Services. OData is a REST-based protocol, and its feeds are suitable for consumption by many different kinds of client. To prove his point, Washington has created demo mobile apps using HTML5 and jQuery – no Silverlight in sight.

image

Pic from here.
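The significance of OData here is that a feed is just HTTP returning Atom or JSON, so any client stack can consume it, Silverlight or not. A rough C# sketch to illustrate (the service URL and entity set name are invented for the example):

```csharp
using System;
using System.Net;

class ODataClientSketch
{
    static void Main()
    {
        // Hypothetical LightSwitch-style OData endpoint and entity set.
        // $format=json requests JSON rather than Atom; $top=10 limits the result.
        string url = "http://example.com/MyApp/ApplicationData.svc/Customers" +
                     "?$format=json&$top=10";

        using (var client = new WebClient())
        {
            client.Headers[HttpRequestHeader.Accept] = "application/json";
            string json = client.DownloadString(url);
            Console.WriteLine(json); // raw JSON; jQuery on a phone can do the same
        }
    }
}
```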

Washington also managed to extract this comment from Microsoft’s Steve Hoag on the future of LightSwitch, in an MSDN forum discussion:

LightSwitch is far from dead. Without revealing anything specific I can confirm that the following statements are true:

– There is a commitment for a long term life of this product, with other versions planned

– There is a commitment to explore creation of apps other than Silverlight, although nothing will be announced at this time

Hoag is the documentation lead for LightSwitch.

That said, Microsoft has been known to make such commitments before but later abandon them. Microsoft told me it was committed to cross-platform Silverlight, for example. And it was, for a bit, at least on Windows and Mac; but it is not now. Microsoft was committed to IronRuby and IronPython, once.

For those with even longer memories, I recall a discussion on CompuServe about Visual Basic for DOS. This was the last version of Microsoft Basic for DOS, a fine language in its way, and with a rather good character-based interface builder. Unfortunately it was buggy, and users were desperate for a bug-fix release. Into this discussion appeared a guy from Microsoft, who announced that he was responsible for the forthcoming update to Visual Basic for DOS and asked for the top requests.

Good news – except that there never was an update.

The truth is that with LightSwitch still in beta for Visual Studio 11, it is unlikely that any decision has been made about its future. My guess, and it is only that, is that the Visual Studio 11 version will be little used and that there will be no major update. If I am wrong and it is a big hit, then there will be an update. If I am right about its lack of uptake, but its backing within Microsoft is strong enough, then maybe in Visual Studio 12 or even sooner we will get a version that does it right, with output options for cross-platform HTML5 clients and for Windows Phone and Windows Metro. But do not hold your breath.