
Adobe’s plenoptic lens enables refocus magic

The most eye-opening demonstration at the NVIDIA GPU Technology Conference last week was from Adobe’s David Salesin (Senior Principal Scientist) and Todor Georgiev (Senior Research Scientist), who showed their plenoptic lens along with software for processing the resulting images.


There was a gasp of amazement from the audience when it saw what the process is capable of: an image refocused after it had been taken.


For anyone who has ever taken an out-of-focus picture – which I guess is everyone – the immediate reaction is to want one NOW. Another appealing idea is to take an image that has several items of interest at different depths, and shift the focus from one to another.

So how does it work? It starts with the plenoptic lens, which lets you “capture multiple views of the scene from slightly different viewpoints,” said Salesin:

If you have a high resolution sensor then each one of those images can be fairly high resolution. The neat thing is that with software, with computation, you can put this together into one large high-resolution image.

In a sense you are capturing a whole 4D lightfield. You’ve got two dimensions of the spatial position of the light ray, and also two dimensions of the orientation of the light ray.

With that 4D image, you can then after the fact use computation to take the place of optics. With computation you have a lot more flexibility. You can change the vantage point, the viewpoint a little bit, and you can also change the focus.

To resolve that, to take these individual little pieces of an image and put them together into one large image from any arbitrary view with any arbitrary focus, it turns out that texture mapping hardware is exactly what you need to do that. Using GPU chips we’ve been able to get speedups over the CPU of about 500 times.

Note that the image ends up being constructed in software. It is not just a matter of overlaying the small images in a certain way.
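For the mathematically inclined, the lightfield literature (notably Ren Ng’s work) expresses refocusing as an integral over the lens aperture. What follows is a sketch of the general technique, not necessarily Adobe’s exact formulation. If $L_F(x, y, u, v)$ is the radiance along the ray from point $(u, v)$ on the lens to point $(x, y)$ on a sensor at depth $F$, the image refocused on a virtual sensor at depth $\alpha F$ is:

$$E_{\alpha F}(x, y) = \frac{1}{\alpha^2 F^2} \iint L_F\!\left(u + \frac{x - u}{\alpha},\; v + \frac{y - v}{\alpha},\; u,\; v\right) du\, dv$$

Evaluating that integral for every output pixel is a mass of independent, regular sampling work – exactly the sort of thing texture mapping hardware does, which fits the 500 times speedup Salesin quoted.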

There is a good reason NVIDIA showed this at its conference. Suddenly we all want little cameras with GPUs powerful enough to do this on the fly.

I guess this demo is likely to show up again at the Adobe MAX conference next month.

There’s another report on this with diagrams here.

GPU programming for .NET

Exhibiting here at the NVIDIA GPU Technology Conference is a Cambridge-based company called tidepowerd, whose product GPU.NET brings GPU programming to .NET developers. The product includes both a compiler and a runtime engine, and one of the advantages of this hybrid approach is that your code will run anywhere: it will use either NVIDIA or AMD GPUs, provided they support GPU programming, or fall back to the CPU if no suitable GPU is available. From the samples I saw, it also looks easier than coding CUDA C or OpenCL; just add an attribute to have a function run on the GPU rather than the CPU. Of course underlying issues remain – the GPU is still isolated and cannot access global variables available to the rest of your application – but the Visual Studio tools try to warn of any such issues at compile time.

GPU.NET is in development and will go into public beta “by October 31st 2010” – head to the web site for more details.

I am not sure what sort of performance or other compromises GPU.NET introduces, but it strikes me that this kind of tool is needed to make GPU programming accessible to a wider range of developers.

NVIDIA CEO on the spot: explains Fermi delays, CUDA vs OpenCL, rise of the tablet

NVIDIA CEO Jen-Hsun Huang spoke to the press at the GPU Technology Conference and I took the opportunity to ask some questions.


I asked for his views on the cloud as a supercomputer and whether that would impact the need for local supercomputers of the kind GPU computing enables.

Although we expect more and more to happen in the cloud, in the meantime we’re going to keep buying devices with more and more solid state memory. The way to think about it is, storage is simply a surrogate for bandwidth. If we had infinite bandwidth none of us would need storage. As bandwidth improves the requirement for storage should reduce. But there’s another trend which is that the amount of data we collect is growing incredibly fast … It’s going to be quite a long time before our need for storage will reduce.

But what about local computing power, Gigaflops as opposed to storage?

Wherever there is storage, there’s GigaFlops. Local storage, local computing.

Next, I brought up a subject which has been puzzling me here at GTC. You can do GPU programming with NVIDIA’s CUDA C, which only works on NVIDIA GPUs, or with OpenCL, which works with other vendors’ GPUs as well. Why is there more focus here on CUDA, when on the face of it developers would be better off with the cross-GPU approach? (Of course I know part of the answer: NVIDIA does not mind locking developers to its own products.)

The reason we focus all our evangelism and energy on CUDA is because CUDA requires us to, OpenCL does not. OpenCL has the benefit of IBM, AMD, Intel, and ourselves. Now CUDA is a little different in that its programming approach is different. Instead of an API it’s a language extension. You program in C, it’s a different model.

The reason why CUDA is more adopted than OpenCL is because it is simply more advanced. We’ve invested in CUDA much longer. The quality of the compiler is much better. The robustness of the programming environment is better. The tools around it are better, and there are more people programming it. The ecosystem is richer.

People ask me how do we feel about the fact that it is proprietary. There’s two ways to think about it. There’s CUDA and there’s Tesla. Tesla’s not proprietary at all, Tesla supports OpenCL and CUDA. If you bought a server with Tesla in it, you’re not getting anything less, you’re getting CUDA more. That’s the reason Tesla has been adopted by all the OEMs. If you want a GPU cluster, would you want one that only does OpenCL? Or does OpenCL and CUDA? 80% of GPU computing today is CUDA, 20% is OpenCL. If you want to reach 100% of it, you’re better off using Tesla. Over time, if more people use OpenCL that’s fine with us. The most important thing is GPU computing, the next most important thing to us is NVIDIA’s GPUs, and the next is CUDA. It’s way down the list.

Next, a hot topic. Jen-Hsun Huang explained why he announced a roadmap for future graphics chip architectures – Kepler in 2011, Maxwell in 2013 – so that software developers engaged in GPU programming can plan their projects. I asked him why Fermi, the current chip architecture, had been so delayed, and whether there was good reason to have confidence in the newly announced dates.

He answered by explaining the Fermi delay in both technical and management terms.

The technical answer is that there’s a piece of functionality that is between the shared symmetric multiprocessors (SMs), 236 processors, that need to communicate with each other, and with memory of all different types. So there’s SMs up here, and underneath the memories. In between there is a very complicated inter-connecting system that is very fast. It’s nearly all wires, dense metal with very little logic … we call that the fabric.

When you have wires that are next to each other that closely they couple, they interfere … it’s a solid mesh of metal. We found a major breakdown between the models, the tools, and reality. We got the first Fermi back. That piece of fabric – imagine we are all processors. All of us seem to be working. But we can’t talk to each other. We found out it’s because the connection between us is completely broken. We re-engineered the whole thing and made it work.

Your question was deeper than that. Your question wasn’t just what broke with Fermi – it was the fabric – but the question is how would you not let it happen again? It won’t be fabric next time, it will be something else.

The reason why the fabric failed isn’t because it was hard, but because it sat between the responsibility of two groups. The fabric is complicated because there’s an architectural component, a logic design component, and there’s a physics component. My engineers who know physics and my engineers who know architecture are in two different organisations. We let it sit right in the middle. So the management lesson learned – there should always be a pilot in charge.

Huang spent some time discussing changes in the industry. He identifies mobile computing “superphones” and tablets as the focus of a major shift happening now. Someone asked “What does that mean for your Geforce business?”

I don’t think like that. The way I think is, “what is my personal computer business”. The personal computer business is Geforce plus Tegra. If you start a business, don’t think about the product you make. Think about the customer you’re making it for. I want to give them the best possible personal computing experience.

Tegra is NVIDIA’s complete system on a chip, including ARM processor and of course NVIDIA graphics, aimed at mobile devices. NVIDIA’s challenge is that its success with Geforce does not guarantee success with Tegra, for which it is early days.

The further implication is that the immediate future may not be easy, as traditional PC and laptop sales decline.

The mainstream business for the personal computer industry will be rocky for some time. The reason is not because of the economy but because of mobile computing. The PC … will be under disruption from tablets. The difference between a tablet and a PC is going to become very small. Over the next few years we’re going to see that more and more people use their mobile device as their primary computer.

[Holds up Blackberry] There’s no question right now that this is my primary computer.

The rise of mobile devices is a topic Huang has returned to on several occasions here. “ARM is the most important CPU architecture, instruction set architecture, of the future” he told the keynote audience.

Clearly NVIDIA’s business plans are not without risk; but you cannot fault Huang for enthusiasm or awareness of coming changes. It is clear to me that NVIDIA has the attention of the scientific and academic community for GPU computing, and workstation OEMs are scrambling to build Tesla GPU computing cards into their systems, but transitions in the market for its mass-market graphics cards will be tricky for the company.

Update: Huang’s comments about the reasons for Fermi’s delay raised considerable interest, as apparently he had not spoken about this on record before. Journalist Nico Ernst captured the moment on video.

Is the triumph of the GPU the failure of the CPU?

I’m at NVIDIA’s GPU tech conference in San Jose. The central theme of the conference is that the capabilities of modern GPUs enable substantial performance gains for general computing, not just for graphics, though most of the examples we have seen involve some element of graphical processing. The reason you should care about this is that the gains are huge.

Take Matlab for example, a popular language and IDE for algorithm development, data analysis and mathematical computation. We were told in the keynote here yesterday that Matlab is offering a parallel computing toolkit based on NVIDIA’s CUDA, with speed-ups from 10 to 40 times. Dramatic performance improvements open up new possibilities in computing.

Why has GPU performance advanced so rapidly, whereas CPU performance has levelled off? The reason is that they use different computing models. CPUs are general-purpose: the focus is on fast serial computation, executing a single thread as rapidly as possible. Since many applications are largely single-threaded, this is what we need, but there are technical barriers to increasing clock speed. Of course multi-core and multi-processor systems are now standard, so we have dual-core or quad-core machines, with big performance gains for multi-threaded applications.

By contrast, GPUs are designed to be massively parallel. A Tesla C1060 has not 2 or 4 or 8 cores, but 240; the C2050 has 448. These are not the same as CPU cores, but they do nevertheless execute in parallel. The clock speed is only 1.3GHz, whereas an Intel Core i7 Extreme runs at 3.3GHz, but the Intel CPU has a mere 6 cores; an Intel Xeon 7560 runs at 2.266GHz and has 8 cores. The lower clock speed is one reason the GPU is more power-efficient.
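A rough back-of-the-envelope comparison makes the point. This is my own arithmetic, not a figure from the conference, and it assumes one fused multiply-add (2 flops) per GPU core per cycle and 8 single-precision flops per CPU core per cycle via 4-wide SSE:

$$\text{Tesla C1060: } 240 \times 1.3\,\text{GHz} \times 2 \approx 624\ \text{GFLOPS}$$

$$\text{Core i7: } 6 \times 3.3\,\text{GHz} \times 8 \approx 158\ \text{GFLOPS}$$

On those assumptions the GPU offers roughly four times the peak single-precision throughput despite the much lower clock speed – provided, of course, the workload is parallel enough to keep 240 cores busy.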

NVIDIA’s CUDA initiative is about making this capability available to any application. NVIDIA made changes to its hardware to make it more amenable to standard C code, and delivered CUDA C with extensions to support it. In essence it is pretty simple. The extensions let you specify functions to execute on the GPU, allocate memory for pointers on the GPU, and copy memory between the GPU (called the device) and the main memory on the PC (called the host). You can also synchronize threads and use shared memory between threads.
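To make that concrete, here is a minimal sketch of the pattern just described – my own illustrative example rather than NVIDIA sample code, with error handling omitted. A function marked __global__ runs on the device, one thread per array element, and cudaMemcpy shifts the data each way:

#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// Kernel: executes on the GPU (the device), one thread per element.
__global__ void add(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // unique global thread index
    if (i < n) c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Buffers in main memory on the PC (the host)
    float *a = (float *)malloc(bytes);
    float *b = (float *)malloc(bytes);
    float *c = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    // Buffers in GPU memory, allocated via the CUDA runtime
    float *da, *db, *dc;
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dc, bytes);

    // Copy inputs host -> device: the overhead discussed below
    cudaMemcpy(da, a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, b, bytes, cudaMemcpyHostToDevice);

    // Launch n threads in blocks of 256; <<< >>> is the CUDA C extension
    add<<<(n + 255) / 256, 256>>>(da, db, dc, n);

    // Copy the result device -> host (this call also synchronizes)
    cudaMemcpy(c, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", c[0]);  // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(a); free(b); free(c);
    return 0;
}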

The reward is great performance, but there are several disadvantages. One is the challenge of concurrent programming and the subtle bugs it can introduce.

Another is the hassle of copying memory between host and device. The device is in effect a computer within a computer, and shifting data between the two is relatively slow.

A third is that CUDA is proprietary to NVIDIA. If you want your code to work with ATI’s equivalent, called Stream, then you should use the OpenCL library, though I’ve noticed that most people here seem to use CUDA; I presume they are able to specify the hardware and would rather avoid the compromises of a cross-GPU library. In the worst case, if you need to support both CUDA and non-CUDA systems, you might need different code paths depending on what is detected at runtime.
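A minimal sketch of that runtime detection, using the standard cudaGetDeviceCount call – illustrative only, since the real dispatch logic would live inside your application:

#include <stdio.h>
#include <cuda_runtime.h>

// Returns 1 if at least one CUDA-capable GPU is present, else 0.
static int cuda_available(void)
{
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    return (err == cudaSuccess && count > 0);
}

int main(void)
{
    if (cuda_available())
        printf("CUDA device found: taking the GPU code path\n");
    else
        printf("No CUDA device: falling back to the CPU code path\n");
    return 0;
}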

It is all a bit messy, though there are tools and libraries to simplify the task. For example, this morning we heard about GMAC, which makes host and device appear to use a single address space, though I imagine there are performance implications.

NVIDIA says it is democratizing supercomputing, bringing high performance computing within reach for almost anyone. There is something in that; but at the same time as a developer I would rather not think about whether my code will execute on the CPU or the GPU. Viewed at the highest level, I find it disappointing that to get great performance I need to bolster the capabilities of the CPU with a specialist add-on. The triumph of the GPU is in a sense the failure of the CPU. Convergence in some form or other strikes me as inevitable.

NVIDIA talks up GPU computing, presents roadmap

At the NVIDIA GPU Technology Conference in San Jose CEO Jen-Hsun Huang talked up the company’s progress in GPU computing, showed some example applications, and announced a high-level roadmap for future graphics chip architectures. NVIDIA has three areas of focus, he said: the Quadro line for visualisation, Tesla for parallel computing, and GeForce/Tegra for personal computing. Tegra is a system on a chip aimed at mobile devices. Mobile, says Huang, is “a completely disruptive force to all of computing.”

NVIDIA’s current chip architecture is called Fermi. The company is settling on a two-year product cycle and will deliver Kepler in 2011 with 3 to 4 times the performance (expressed as Gigaflops per watt) of Fermi. Maxwell in 2013 will have around 12 times the performance of Fermi. In between these architecture changes, NVIDIA will do “kicker” updates to refresh its products, with one for Fermi due soon.

The focus of the conference though is not on super-fast graphics cards in themselves, but rather on using the GPU for general purpose computing. GPUs are very, very good at doing mathematics fast and in parallel. If you have an application that does intensive calculations, then executing that part of the code on the GPU can offer impressive performance increases. NVIDIA’s CUDA library for C lets you do exactly that. Another option is OpenCL, a standard that works across GPUs from multiple vendors.

Adobe uses CUDA for the Mercury Playback engine in Creative Suite 5, greatly improving performance in After Effects, Premiere Pro and Photoshop, but with the annoyance that you have to use a compatible NVIDIA graphics card.

The performance gain from GPU programming is so great that it is unavoidable for applications in relevant areas, such as simulation or statistical analysis. Huang gave a compelling example during the keynote, bringing heart surgeon Dr Michael Black on stage to talk about his work. Operating on a beating heart is difficult because it presents a moving target. By combining robotic surgery with software that is able to predict the heart’s movement through simulation, he is researching how to operate on a heart almost as if it were stopped and with just a small incision.

Programming the GPU is compelling, but difficult. NVIDIA is keen to see it become part of mainstream programming, for obvious reasons, and there are new libraries and tools which help with this, like Parallel Nsight for Visual Studio 2010. Another interesting development, announced today, is CUDA for x86, being developed by PGI, which will let your CUDA code run even when an NVIDIA GPU is not present. Even if the performance gains are limited, it will mean developers who need to support diverse systems can run the same code, rather than having a different code path when no CUDA GPU is detected.

That said, GPU programming still has all the challenges of concurrent development, prone to race conditions and synchronization problems.

Stuffing a server full of GPUs is a cost-effective route to super-computing. I took a brief look at the exhibition, which includes a Colfax CXT8000 with 8 Tesla GPUs; it also has three 1200W power supplies. It may cost $25,000, but if you look at the performance you are getting for the price, machines like this are great value.


Crisis for ASP.Net – how serious is the Padding Oracle attack?

Security vulnerabilities are reported constantly, but some have more impact than others. The one that came into prominence last weekend (though it had actually been revealed several months ago) strikes me as potentially high impact. Colourfully named the Padding Oracle attack, it was explained and demonstrated at the ekoparty security conference. In particular, the researchers showed how it can be used to compromise ASP.NET applications:

The most significant new discovery is a universal Padding Oracle affecting every ASP.NET web application. In short, you can decrypt cookies, view states, form authentication tickets, membership password, user data, and anything else encrypted using the framework’s API! … The impact of the attack depends on the applications installed on the server, from information disclosure to total system compromise.

This is alarming simply because of the huge number of ASP.NET applications out there. It is not only a popular framework for custom applications, but is also used by Microsoft for its own applications. If you have a SharePoint site, for example, or use Outlook Web Access, then you are running an ASP.NET application.

Microsoft took the report seriously; it kept VP Scott Guthrie and his team up all night, and they eventually came up with a security advisory and a workaround posted to his blog. It does not make comfortable reading, confirming that pretty much every ASP.NET installation is vulnerable. A further post confirms that SharePoint sites are affected.

It does not help that the precise way the attack works is hard to understand. It is a cryptographic attack that lets the attacker decrypt data encrypted by the server. In essence, the attacker submits many subtly corrupted ciphertexts and learns, from how the server responds to each one, whether the padding decrypted correctly; that single bit of information, repeated enough times, is sufficient to recover the plaintext byte by byte. One of the consequences, thanks to what looks like another weakness in ASP.NET, is that the attacker can then download any file on the web server, including web.config, a file which may contain security-critical data such as database connection strings with passwords, or even the credentials of a user in Active Directory. The researchers demonstrate in a YouTube video how to crack a site running the DotNetNuke content management application, gaining full administrative rights to the application and eventually a login to the server itself.

Guthrie acknowledges that the problem can only be fixed by patching ASP.NET itself. Microsoft is working on this; in the meantime his suggested workaround is to configure ASP.NET to return the same error page regardless of what the underlying error really is. The reason for this is that the vulnerability involves inspecting the error returned by ASP.NET when you submit a corrupt cookie or viewstate data.
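For reference, the workaround centres on the customErrors section in web.config. Here is a minimal sketch along those lines – see Guthrie’s post for the authoritative version, which also adds safeguards within the error page itself:

<configuration>
  <system.web>
    <!-- Send one identical response for every error, so an attacker
         cannot tell padding failures apart from any other failure -->
    <customErrors mode="On" redirectMode="ResponseRewrite"
                  defaultRedirect="~/Error.aspx" />
  </system.web>
</configuration>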

The most conscientious ASP.NET administrators will have followed Guthrie’s recommendations, and will be hoping that they are sufficient; it is not completely clear to me whether they are. One of the things that makes me think “hmmm” is that a more sophisticated workaround, involving random time delays before an error is returned, is proposed for later versions of ASP.NET that support it. What does that suggest about the efficacy of the simpler workaround, which is a static error page?

The speed with which the ASP.NET team came up with the workaround is impressive; but it is a workaround and not a fix. It leaves me wondering what proportion of ASP.NET sites exposed to the public internet will have implemented the workaround, or will do so before attacks become widespread.

A characteristic of the attack is that the web server receives thousands of requests which trigger cryptographic errors. Rather than attempting to fix up ASP.NET and every instance of web.config on a server, a more robust approach might be to monitor the requests and block IP addresses that trigger repeated errors of this kind.

More generally, what should you do if you run a security-critical web application and a flaw of this magnitude is reported? Applying recommended workarounds is one possibility, but frankly I wonder whether such applications should simply be taken offline until more is known about how to protect them.

One thing about which I have no idea is the extent to which hackers are already trying this attack against likely targets such as ecommerce and banking sites. Of course in principle virtually any site is an attractive target, because of the value of compromised web servers for serving spam and malware.

If you run Windows servers and have not yet investigated, I recommend that you follow the links, read the discussions on Scott Guthrie’s blog, and at least implement the suggested actions.

Salesforce.com is the wrong kind of cloud says Oracle’s Larry Ellison

Oracle CEO Larry Ellison took multiple jabs at Salesforce.com in the welcome keynote at OpenWorld yesterday.

He said it was old, not fault tolerant, not elastic, and built on a bad security model since all customers share the same application. “Elastic” in this context means able to scale on demand.

Ellison was introducing Oracle’s new cloud-in-a-box, the Exalogic Elastic Cloud. This features 30 servers and 360 cores packaged in a single cabinet. It is both a hardware and a software product, using InfiniBand networking internally for fast communication and Oracle VM for hosting virtual machines running either Oracle Linux or Solaris. Oracle is positioning Exalogic as the ideal machine for Java applications, especially if they use the Oracle WebLogic application server, and as a natural partner for the Exadata Database Machine.

Perhaps the most interesting aspect of Exalogic is that it uses the Amazon EC2 (Elastic Compute Cloud) API. This is also used by Eucalyptus, the open source cloud infrastructure adopted by Canonical for its Ubuntu Enterprise Cloud. With these major players adopting the Amazon API, you could almost call it a standard.

Ellison’s Exalogic cloud is a private cloud, of course, and although he described it as low maintenance it is nevertheless the customer’s responsibility to provide the site, the physical security and to take responsibility for keeping it up and running. Its elasticity is also open to question. It is elastic from the perspective of an application running on the system, presuming that there is spare capacity to run up some more VMs as needed. It is not elastic if you think of it as a single powerful server that will be eye-wateringly expensive; you pay for all of it even though you might not need all of it, and if your needs grow to exceed its capacity you have to buy another one – though Ellison claimed you could run the entire Facebook web layer on just a couple of Exalogics.

In terms of elasticity, there is actually an advantage in the Salesforce.com approach. If you share a single multi-tenanted application with others, then elasticity is measured by the ability of that application to scale on demand. Behind the scenes, new servers or virtual servers may come into play, but that is not something that need concern you. The Amazon approach is more hands-on, in that you have to work out how to spin up (or down) VMs as needed. In addition, running separate application instances for each customer means a larger burden of maintenance falling on the customer – which with a private cloud might mean an internal customer – rather than on the cloud provider.

In the end it is not a matter of right and wrong, more that the question of what is the best kind of cloud is multi-faceted. Do not believe all that you hear, whether the speaker is Oracle’s Ellison or Marc Benioff from Salesforce.com.

Incidentally, Salesforce.com runs on Oracle and Benioff is a former Oracle VP.

Postscript: as Dennis Howlett observes, the high capacity of Exalogic is actually a problem – he estimates that only 5% at most of Oracle’s customers could make use of such an expensive box. Oracle will address this by offering public cloud services, presumably sharing some of the same technology.

Oracle: a good home for MySQL?

I’m not able to attend the whole of Oracle OpenWorld / JavaOne, but I have sneaked in to MySQL Sunday, which is a half-day pre-conference event. One of the questions that interests me: is MySQL in safe hands at Oracle, or will it be allowed to wither in order to safeguard Oracle’s closed-source database business?

It is an obvious question, but not necessarily a sensible one. There is some evidence for a change in direction. Prior to the takeover, the MySQL team was working on a database engine called Falcon, intended to lift the product into the realm of enterprise database management. Oracle put Falcon on the shelf; Oracle veteran Edward Screven (who also gave the keynote here) said that the real rationale for Falcon was the fear that Oracle would somehow jigger InnoDB, and that now that both MySQL and InnoDB are at Oracle, it makes no sense.

Context: InnoDB is the grown-up database engine for MySQL, with support for transactions, and already belonged to Oracle from an earlier acquisition.

There may be something in it; but it is also true that Oracle has fine-tuned the positioning of MySQL. Screven today emphasised that MySQL is Oracle’s small and nimble database manager; it is “quite performant and quite functional”, he said; the word “quite” betraying a measure of corporate internal conflict. Screven described how Oracle has improved the performance of MySQL on Windows and is cheerful about the possibility of it taking share from Microsoft’s SQL Server.

It is important to look at the actions as well as the words. Today Oracle announced the release candidate of MySQL 5.5, which uses InnoDB by default, and has performance and scalability improvements that are dramatic in certain scenarios, as well as new and improved tools. InnoDB is forging ahead, with the team working especially on taking better advantage of multi-core systems; we also heard about full text search coming to the engine.

The scalability of MySQL is demonstrated by some of its best-known deployments, including Facebook and Wikipedia. Facebook’s Mark Callaghan spoke today about making MySQL work well, and gave some statistics concerning peak usage: 450 million rows read per second, 3.5 million rows changed per second, query response time 4ms.

If pressed, Screven cites complexity and reliability with critical data, rather than lack of scalability, as the factors that point to an Oracle rather than a MySQL solution.

In practice it matters little. No enterprise currently using an Oracle database is going to move to MySQL; aside from doubts over its capability, it is far too difficult and risky to switch your database manager to an alternative, since each one has its own language and its own optimisations. Further, Oracle’s application platform is built on its own database and that will not change. Customers are thoroughly locked in.

What this means is that Oracle can afford to support MySQL’s continuing development without risk of cannibalising its own business. In fact, MySQL presents an opportunity to get into new markets. Oracle is not the ideal steward for this important open source project, but it is working out OK so far.

Delphi XE includes licenses for older versions back to Delphi 7

I’ve just picked up that Delphi XE, the latest RAD Windows development suite from Embarcadero, includes licenses for older versions going back to Delphi 7.

There’s an explanation and list of what’s on offer here. Delphi 7 was the last version to use the old fully native code IDE and is delightfully fast and lightweight by today’s standards. Delphi 2007 was the last version before big Unicode changes in Delphi 2009, which often broke code, so could be useful for older projects.

The FAQ includes a few points of interest. Embarcadero is dismissive of the old Delphi for .NET (before Prism) and will not supply it:

That is an old technology that was replaced by Delphi Prism and we don’t want to encourage use of that old product.

If you have purchased XE and want to take advantage of the offer, you must do so within 180 days.

Microsoft’s Scott Guthrie: We have 200+ engineers working on Silverlight and WPF

Microsoft is countering rumours that WPF (Windows Presentation Foundation) or Silverlight, a cross-platform browser plug-in based on the same XAML markup language and .NET programming combination as WPF, is under any sort of threat from HTML 5.

We have 200+ engineers right now working on upcoming releases of SL and WPF – which is a heck of a lot.

says Corporate VP .NET Developer Platform Scott Guthrie in a Twitter post. Other comments include this one:

We are investing heavily in Silverlight and WPF

and this one:

We just shipped Silverlight for Windows Phone 7 last week, and WPF Ribbon about 30 days ago: http://bit.ly/aB6e6X

In addition, Microsoft has been showing off IIS Media Services 4.0 at the International Broadcasting Conference, which uses Silverlight as the multimedia client:

Key new features include sub-two-second low-latency streaming, transmuxing between H.264 file formats and integrated transcoding through Microsoft Expression Encoder 4. Microsoft will also show technology demonstrations of Silverlight Enhanced Movies, surround sound in Silverlight and live 3-D 1080p Internet broadcasting using IIS Smooth Streaming and Silverlight technologies.

No problem then? Well, Silverlight is great work from Microsoft, powerful, flexible, and surprisingly small and lightweight for what it can do. Combined with ASP.NET or Windows Azure it forms part of an excellent cloud-to-client .NET platform. Rumours of internal wrangling aside, the biggest issue is that Microsoft seems reluctant to grasp its cross-platform potential, leaving it as a Windows and desktop Mac solution just at the time when iPhone, iPad and Android devices are exploding in popularity. 

I will be interested to see if Microsoft announces Silverlight for Android this autumn, and if it does, how long it will take to deliver. The company could also give more visibility to its work on Silverlight for Symbian – maybe this will come more into the spotlight following the appointment of Stephen Elop, formerly of Microsoft, as Nokia CEO.

Apple is another matter. A neat solution I’ve seen proposed a few times is to create a Silverlight-to-JavaScript compiler along the lines of GWT (Google Web Toolkit), which converts Java to JavaScript. Of course it would also need to convert XAML layout to SVG. Incidentally, this could also be an interesting option for Adobe Flash applications.

As for WPF, I would be surprised if Microsoft is giving it anything like the attention being devoted to Silverlight, unless the Windows team has decided to embrace it within the OS itself. That said, WPF is already a mature framework; it will not go away, but I can readily believe that its future progress will be slow.