WS-I closes its doors – the end of WS-* web services?

The Web Services Interoperability Organization has announced [pdf] the “completion” of its work:

After nearly a decade of work and industry cooperation, the Web Services Interoperability Organization (WS-I; http://www.ws-i.org) has successfully concluded its charter to document best practices for Web services interoperability across multiple platforms, operating systems and programming languages.

In the whacky world of software though, completion is not a good thing when it means, as it seems to here, an end to active development. The WS-I is closing its doors and handing maintenance of the WS interoperability profiles to OASIS:

Stewardship over WS-I’s assets, operations and mission will transition to OASIS (Organization for the Advancement of Structured Information Standards), a group of technology vendors and customers that drive development and adoption of open standards.

Simon Phipps blogs about the passing of WS-I and concludes:

Fine work, and many lessons learned, but sadly irrelevant to most of us. Goodbye, WS-I. I know and respect many of your participants, but I won’t mourn your passing.

Phipps worked for Sun when the WS-* activity was at its height and WS-I was set up, and describes its formation thus:

Formed in the name of "preventing lock-in" mainly as a competitive action by IBM and Microsoft in the midst of unseemly political knife-play with Sun, they went on to create massively complex layered specifications for conducting transactions across the Internet. Sadly, that was the last thing the Internet really needed.

However, Phipps links to this post by Mike Champion at Microsoft which represents a more nuanced view:

It might be tempting to believe that the lessons of the WS-I experience apply only to the Web Services standards stack, and not the REST and Cloud technologies that have gained so much mindshare in the last few years. Please think again: First, the WS-* standards have not in any sense gone away, they’ve been built deep into the infrastructure of many enterprise middleware products from both commercial vendors and open source projects. Likewise, the challenges of WS-I had much more to do with the intrinsic complexity of the problems it addressed than with the WS-* technologies that addressed them. William Vambenepe made this point succinctly in his blog recently.

It is also important to distinguish between the work of the WS-I, which was about creating profiles and testing tools for web service standards, and the work of other groups such as the W3C and OASIS which specify the standards themselves. While work on the WS-* specifications seems much reduced, there is still work going on. See for example the W3C’s Web Services Resource Access Working Group.

I partly disagree with Phipps about the work of the WS-I being “sadly irrelevant to most of us”. It depends who he means by “most of us”. Granted, all this stuff is meaningless to the world at large; but there are a significant number of developers who use SOAP and WS-* at least to some extent, and interoperability is key to the usefulness of those standards.

The Salesforce.com API is mainly SOAP based, for example, and although there is a REST API in preview it is not yet supported for production use. I have been told that a large proportion of the transactions on Salesforce.com are made programmatically through the API, so here is one place at least where SOAP is heavily used.

WS-* web services are also built into Microsoft’s Visual Studio and .NET Framework, and are widely used in my experience. Visual Studio does a good job of wrapping them so that developers do not have to edit WSDL or SOAP requests and responses by hand. I’d also suggest that web services in .NET are more robust than DCOM (Distributed COM) ever was, and work successfully over the internet as well as on a local network, so the technology is not a failure.
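To show what those toolkits hide, here is a minimal hand-rolled SOAP call. It uses Java's built-in SAAJ API rather than .NET, and the endpoint, namespace and operation names are placeholders rather than any real service (certainly not the Salesforce API); but building and posting the envelope by hand is exactly the chore that generated proxies in Visual Studio or JAX-WS spare you.

```java
import javax.xml.namespace.QName;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPBody;
import javax.xml.soap.SOAPConnection;
import javax.xml.soap.SOAPConnectionFactory;
import javax.xml.soap.SOAPElement;
import javax.xml.soap.SOAPMessage;

public class SoapCallSketch {
    public static void main(String[] args) throws Exception {
        // Build a SOAP envelope by hand, just to show what the toolkits hide
        MessageFactory factory = MessageFactory.newInstance();
        SOAPMessage request = factory.createMessage();
        SOAPBody body = request.getSOAPBody();

        // Namespace, operation and parameter are placeholders, not a real service
        SOAPElement operation = body.addBodyElement(
                new QName("urn:example:stock", "GetQuote", "m"));
        operation.addChildElement("symbol").addTextNode("ACME");
        request.saveChanges();

        // Post the envelope to a (hypothetical) endpoint and print the reply
        SOAPConnection connection = SOAPConnectionFactory.newInstance().createConnection();
        SOAPMessage response = connection.call(request, "https://example.com/soap/endpoint");
        response.writeTo(System.out);
        connection.close();
    }
}
```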

That said, I am sure it is true that only a small subset of the WS-* specifications are widely used, which implies a large amount of wasted effort.

Are SOAP and WS-* dying, and is REST the future? The evidence points that way to me, but I would be interested in other opinions.

The Beatles come to Apple iTunes

Apple made an extraordinary fuss about the arrival of Beatles music on its iTunes download store – even allowing the news to take over its home page for a day or two.


Why? I can think of a few reasons. Because Steve Jobs was born in 1955 and this is the music of his teen years. Because it is the finale in a long battle between Apple Computer and Apple Corps Ltd. And because the Beatles are arguably the pinnacle of popular music, regularly topping lists like the Rolling Stone 500 Greatest Albums of All Time. In fact, Beatles albums occupy four of the top ten slots.


It follows that the Beatles coming to iTunes is a landmark moment for Apple (computer) and shows the extent to which it now dominates music delivery.

That said, some observers were bewildered. Beatles fans already have the music, and have ripped their CDs to music servers and iPods, so iTunes availability will make no difference to them; and people born a decade or two later than Steve Jobs mostly do not revere the band in the same way.

Speaking personally, those four albums are not in my top ten all-time favourites, good though they are, and I am more likely to put on Lennon’s cathartic Plastic Ono Band album than Sergeant Pepper.

I also wonder how long iTunes can sustain its position. To my mind, the streaming model of Spotify, where you pay a subscription and can listen to anything you want, makes more sense than the download model of iTunes.

But you want to own the music? Well, you cannot; even a CD or LP only sells you a licence. An iTunes purchase is more ephemeral than a CD, because it is a personal licence with no resale value, and comes with no physical container that you can put on the shelf. It is also, in the case of the Beatles albums and many others, more expensive to buy the iTunes download than the CD, so you are paying a premium for the convenience of near-instant digital delivery.

It follows that iTunes offers rather poor value in an absolute sense. It is best to think of it as a service; and Apple does a nice job of making music easy to find and enjoy.

Final note: even if you have no interest in buying, it is worth running up iTunes and playing the hitherto unavailable video Live at the Washington Coliseum, 1964, which you can stream for free for an introductory period.

Now you can rent GPU computing from Amazon

I wrote back in September about why programming the GPU is going mainstream. That’s even more the case today, with Amazon’s announcement of a Cluster GPU instance for the Elastic Compute Cloud. It is also a vote of confidence for NVIDIA’s CUDA architecture. Each Cluster GPU instance has two NVIDIA Tesla M2050 GPUs installed and costs $2.10 per hour. If one GPU instance is not enough, you can use up to 8 by default, with more available on request.
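For the curious, requesting one of these instances programmatically looks much like launching any other EC2 instance. Here is a rough sketch using the AWS SDK for Java; the access keys and AMI id are placeholders to replace with your own details, and the Cluster GPU instance type is cg1.4xlarge.

```java
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.ec2.AmazonEC2Client;
import com.amazonaws.services.ec2.model.RunInstancesRequest;
import com.amazonaws.services.ec2.model.RunInstancesResult;

public class GpuInstanceSketch {
    public static void main(String[] args) {
        // Credentials are placeholders; in real code load them securely
        AmazonEC2Client ec2 = new AmazonEC2Client(
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));

        // cg1.4xlarge is the Cluster GPU type: two Tesla M2050 cards per instance.
        // The AMI id is a placeholder for a cluster-capable machine image.
        RunInstancesRequest request = new RunInstancesRequest()
                .withImageId("ami-00000000")
                .withInstanceType("cg1.4xlarge")
                .withMinCount(1)
                .withMaxCount(1);

        RunInstancesResult result = ec2.runInstances(request);
        String instanceId = result.getReservation().getInstances().get(0).getInstanceId();
        System.out.println("Launched GPU instance: " + instanceId);
    }
}
```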

GPU programming in the cloud makes sense in cases where you need the performance of a supercomputer, but only occasionally. It could also enable some powerful mobile applications, maybe in financial analysis or image manipulation, where you use a mobile device to input data and view the results, but cloud processing to do the heavy lifting.

One of the ideas I discussed with someone from Adobe at the NVIDIA GPU conference was to integrate a cloud processing service with Photoshop, so you could send an image to the cloud, have some transformative magic done, and receive the processed image back.

The snag with this approach is that in many cases you have to shift a lot of data back and forth, which means you need a lot of bandwidth available before it makes sense. Still, Amazon has now provided the infrastructure to make processing as a service easy to offer. It is now over to the rest of us to find interesting ways to use it.

The Java crisis and what it means for developers

What is happening with the Java language and runtime? Since Java passed into the hands of Oracle, following its acquisition of Sun, there has been a succession of bad news. To recap:

  • The JavaOne conference in September 2010 was held in the shadow of Oracle OpenWorld, making it a less significant event than in previous years.
  • Oracle is suing Google, claiming that Java as used in the Android SDK breaches its copyright.
  • IBM has abandoned the Apache open source Harmony project and is committing to the Oracle-supported OpenJDK. Although IBM’s Bob Sutor claims that this move will “help unify open source Java efforts”, it seems to have been done without consultation with Apache and is as divisive as it is unifying.
  • Apple is deprecating Java and ceasing to develop a Mac-specific JVM. This should be seen in context. Apple is averse to runtimes of any kind – note its war against Adobe Flash – and seems to look forward to a day when all or most applications delivered to Apple devices come via the Apple-curated and taxed app store. In mitigation, Apple is cooperating with the OpenJDK project, and OpenJDK for Mac OS X has been announced.
  • Apache has written a strongly-worded blog post claiming that Oracle is “violating their contractual obligation as set forth under the rules of the JCP”, where JCP is the Java Community Process, a multi-vendor group responsible for the Java specification but in which Oracle/Sun has special powers of veto. Apache’s complaint is that Oracle stymies the progress of Harmony by refusing to supply the Java test kit (the TCK, or Technology Compatibility Kit) under a free software license. Without the test kit, Harmony’s Java conformance cannot be officially verified.
  • The JCP has been unhappy with Oracle’s handling of Java for some time. Many members disagree with the Google litigation and feel that Oracle has not communicated well with the JCP. JCP member Doug Lea stood down, claiming that “the JCP is no longer a credible specification and standards body”. Another member, Stephen Colebourne, has a series of blog posts in which he discusses the great war of Java and what he calls the “unravelling of the JCP”, and recently expressed his view that Oracle was trying to manipulate the latest JCP elections.

To set this bad news in context, Java was not really in a good way even before the acquisition. While Sun was more friendly towards open source and collaboration, the JCP has long been perceived as too slow to evolve Java, and unrepresentative of the wider Java community. Further, Java’s pre-eminence as a pervasive cross-platform runtime has been reduced. As a browser plug-in it has fallen behind Adobe Flash, the JavaFX initiative failed to win wide developer support, and on mobile it has also lost ground. Java’s advance as a language has been too slow to keep up with Microsoft’s C#.

There are a couple of ways to look at this.

One is to argue that bad news followed by more bad news means Java will become a kind of COBOL, widely used forever but not at the cutting edge of anything.

The other is to argue that since Java was already falling behind, radical change to the way it is managed may actually improve matters.

Mike Milinkovich at the Eclipse Foundation takes a pragmatic view in a recent post. He concedes that Oracle has no idea how to communicate with the Java community, and that the JCP is not vendor-neutral, but says that Java can nevertheless flourish:

I believe that many people are confusing the JCP’s vendor neutrality with its effectiveness as a specifications organization. The JCP has never and will never be a vendor-neutral organization (a la Apache and Eclipse), and anyone who thought it so was fooling themselves. But it has been effective, and I believe that it will be effective again.

It seems to me Java will be managed differently after it emerges from its crisis, and that on the scale between “open” and “proprietary” it will have moved towards proprietary but not in a way that destroys the basic Java proposition of a free development kit and runtime. It is also possible, even likely, that Java language and technology will advance more rapidly than before.

For developers wondering what will happen to Java at a technical level, the best guide currently is still the JDK Roadmap, published in September. Some of its key points:

  • The open source OpenJDK is the basis for the Oracle JDK.
  • The Oracle JDK and Java Runtime Environment (JRE) will continue to be available as free downloads, with no changes to the existing licensing models.
  • New features proposed for JDK 7 include better support for dynamic languages and concurrent programming. JDK 8 will get lambda expressions (a brief sketch of the idea follows below).
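The final lambda syntax was still under discussion at the time of the roadmap, so treat the following as illustrative only: it shows roughly the shape the proposal points towards, passing behaviour as a short expression where today you would write an anonymous inner class, with a plain ExecutorService standing in for the concurrency side.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LambdaSketch {
    public static void main(String[] args) throws Exception {
        List<String> names = Arrays.asList("grace", "ada", "alan");

        // A lambda expression implements a single-method interface (here Comparator)
        // with a short expression instead of an anonymous inner class.
        Collections.sort(names, (a, b) -> a.compareTo(b));

        // The same shorthand works wherever a Runnable or Callable is expected,
        // which is where the lambda and concurrency work meet.
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Callable<String> task = () -> "sorted: " + names;
        System.out.println(pool.submit(task).get());
        pool.shutdown();
    }
}
```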

While I cannot predict the outcome of Oracle vs Google or even Apache vs Oracle, my guess is that there will be a settlement and that Android’s momentum will not be disrupted.

That said, there is little evidence that Oracle has the vision that Sun once had, to make Java truly pervasive and a defence against lock-in to proprietary operating systems. Microsoft seems to have lost that vision for .NET and Silverlight as well – though the Mono folk have it. Adobe still has it for Flash, though like Oracle it seems if anything to be retreating from open source.

There is therefore some sense in which the problems facing Java (and Silverlight) are good for .NET, for Mono and for Adobe. Nevertheless, 2010 has been a bad year for “write once, run anywhere”.

Update: Oracle has posted a statement saying:

The recently released statement by the ASF Board with regard to their participation in the JCP calling for EC members to vote against SE7 is a call for continued delay and stagnation of the past several years. We would encourage Apache to reconsider their position and work together with Oracle and the community at large to collectively move Java forward. Oracle provides TCK licenses under fair, reasonable, and non-discriminatory terms consistent with its obligations under the JSPA. Oracle believes that with EC approval to initiate the SE7 and SE8 JSRs, the Java community can get on with the important work of driving forward Java SE and other standards in open, transparent, consensus-driven expert groups. This is the priority. Now is the time for positive action. Now is the time to move Java forward.

to which Apache replies succinctly:

The ball is in your court. Honor the agreement.

First impressions of Microsoft Kinect – great hardware waiting for great software

The moment of magic comes when someone walks through the gaming area and the Xbox flashes up a message that they have signed in. No button is pressed; this is face recognition working in the background during gameplay.

So Kinect is amazing. And it is: controller-less video gaming that works well enough to be a lot of fun. That said, I also have reservations about the device, though these are first impressions only, and feel it is let down in a big way by the games currently available.

My device arrived on the UK launch day, November 10th. It is a relatively compact affair, around 28 cm wide on a stubby stand. The first task is positioning it, which can be a challenge. You are meant to place it above or below your TV screen, at a height of between 0.6m and 1.8m. I was lucky, in that our TV is on a stand that has space for it; the height is fractionally below 0.6m but it seems to be happy. Alternatively, you can purchase a free-standing support or a bracket that clips to the top of a TV. I imagine there are some frustrated first-day purchasers who received a device but cannot satisfactorily position it.

You also need free space in front of the set. Our coffee table got moved when the Nintendo Wii arrived, so the 6ft required for one-player play is not a problem. Two-player is more difficult; we can do it but it means moving furniture, which is a nuisance. Overall it is more intrusive than the Wii, but less than Rock Band or Guitar Hero with the drum kit, so not a deal-breaker.

Microsoft takes full advantage of over-the-wire updates with Kinect. Once it was connected, the Xbox, the device firmware, and the bundled Kinect Adventures game all received patches, but the procedure went smoothly.

Kinect is a sophisticated device, a lot more than just a camera. There are three major subsystems in Kinect: optical, audio and motor.

  • Motor is the simplest – the stubby stand also contains a motor assembly that swivels the device up and down, enabling it to adjust for different positions and to find the optimal angle for players of different heights.
  • The optical subsystem includes two cameras and an infra-red projector. The projector overlays a pattern on the field of view. This allows the first camera, a depth sensor, to map the position of the players in three dimensions. This lets the system detect hand movements, for example, which are usually closer to the camera than the rest of the body. The second camera is a colour device more like the one in your webcam, and enables Kinect to take pictures of your gaming antics which you can share with the world if you feel so inclined, as well as presumably feeding into the positioning system.
  • The audio subsystem includes no fewer than four microphones. The reason is that Kinect does voice recognition at a distance, so it needs to compensate for both the sound of the video game and other background noise. Using multiple microphones enables the audio processor to calculate the position of sounds, since each microphone receives a sound at a fractionally different time (a rough sketch of the arithmetic follows this list).
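To illustrate the principle, here is the textbook two-microphone version of that calculation: a sound from one side reaches the nearer microphone slightly earlier, and the arrival-time difference, the microphone spacing and the speed of sound together give a bearing. This is only a sketch of the idea with made-up numbers, not Kinect’s actual four-microphone algorithm.

```java
public class SoundDirectionSketch {
    // Approximate speed of sound in air at room temperature, metres per second
    static final double SPEED_OF_SOUND = 343.0;

    // Estimate the bearing of a sound source from the difference in arrival time
    // at two microphones a known distance apart. Zero means straight ahead.
    // Textbook two-microphone case only, not Kinect's real four-microphone array.
    static double bearingRadians(double arrivalDeltaSeconds, double micSpacingMetres) {
        double pathDifference = SPEED_OF_SOUND * arrivalDeltaSeconds;
        // Clamp to the physically possible range before taking the arcsine
        double ratio = Math.max(-1.0, Math.min(1.0, pathDifference / micSpacingMetres));
        return Math.asin(ratio);
    }

    public static void main(String[] args) {
        // A 0.1 millisecond arrival difference across microphones 20 cm apart
        double bearing = bearingRadians(0.0001, 0.2);
        System.out.printf("Estimated bearing: %.1f degrees%n", Math.toDegrees(bearing));
    }
}
```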

These sensor systems are backed by considerable processing power – necessary because the Xbox itself devotes most of its processing to the game being played. The trade-off in systems like this is that more processing means more accurate interpretation of voice and gestures, but taking too much time introduces lag. As I saw at the NVIDIA GPU conference in September – see here and here for posts – very rapid processing enables magic like robotic pinhole surgery on a beating heart – and like Kinect, that magic is based on real-time interpretation of physical movement. Kinect is not at that level, but has audio and image processor chips and 512MB RAM, along with other components including, for some reason, an accelerometer, mounted on three circuit boards squashed into the slim plastic container. See for yourself in the ifixit teardown.

But how is it in practice? It certainly works, and we had a good and energetic time playing Kinect Adventures and a little bit of Joy Ride. Playing without a controller is a liberating experience. That said, there were some annoyances:

  • Kinect play is more vulnerable to interference than controller gaming. If someone walks across the play area, for example, it will interfere.
  • In the Kinect system, there is no such thing as a click. To activate an option you have to hover over it for a short period while a progress circle fills; when the circle is full, the system decides that you have “clicked”. It is slower and less reliable than clicking a button (a rough sketch of this dwell mechanic follows the list below).
  • The audio system enables voice control which seems to work well when available, but most of the time it seems not to be available. Considering the amount of hardware dedicated to this, it seems rather a waste; but presumably more is to come. Controlling Sky player by voice, for example, would be great; no more hunting for the remote.
  • The Kinect seems to work best when you are standing. For something like a driving game, that is not what you want. Apparently seated gameplay is supported, but does not work properly with the launch games; so watch this space.
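As promised in the second point above, here is a rough sketch of the hover-to-click (dwell) mechanic. It is purely illustrative and has nothing to do with any actual Kinect or Xbox SDK; it just shows a progress value filling while a tracked hand stays over a control, with the “click” firing once the dwell time elapses.

```java
public class DwellClickSketch {
    // How long the hand must hover over a control before it counts as a "click"
    static final long DWELL_MILLIS = 1500;

    private long hoverStart = -1;   // -1 means the hand is not over the control

    // Call once per tracking frame. Returns the fill fraction for the progress
    // circle (0.0 to 1.0); reaching 1.0 means the "click" should fire.
    double update(boolean handOverControl, long nowMillis) {
        if (!handOverControl) {
            hoverStart = -1;        // moving away resets the progress circle
            return 0.0;
        }
        if (hoverStart < 0) {
            hoverStart = nowMillis; // the hover has just started
        }
        return Math.min((nowMillis - hoverStart) / (double) DWELL_MILLIS, 1.0);
    }

    public static void main(String[] args) {
        DwellClickSketch sketch = new DwellClickSketch();
        // Simulate a hand held over a button for two seconds of frames
        for (long t = 0; t <= 2000; t += 500) {
            System.out.printf("t=%dms progress=%.2f%n", t, sketch.update(true, t));
        }
    }
}
```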

Launching stuff before it is really ready seems to be ingrained in Microsoft’s culture. Is Kinect another example? To some extent I suspect it is. I recall the early days with the Nintendo Wii as exciting moments of discovery: the system worked well from the get-go, and the bundled Wii Sports game is a masterpiece. The Kinect games so far are less impressive.

In fact, my overwhelming impression so far is that this is great hardware waiting for software to show what it can do. The 20,000 Leaks mini-game in Adventures is not very good – you are in a glass cage underwater and have to cover leaks to stem them – but it is interesting because you have to use head, hands and feet to play it. It could not be duplicated with a conventional controller, because a conventional controller does not allow you to move one thing this way, and another thing that way, at the same time.

It follows that Kinect should enable some brilliant new gaming concepts. I’d love to see a stealth adventure done for Kinect, for example; there are new possibilities for realism and excitement.

As it is, the Kinect launch games show little imagination and seem to be heavily Wii-influenced – and if you compare Kinect with Wii on that basis, you might well conclude that the Wii is better in some ways, worse in others, but cheaper and with better games, and without the friction of Kinect’s somewhat fussy requirements.

Such a comparison is not fair to Kinect, which in concept and hardware is a generation ahead of Wii or PlayStation Move. It now awaits software to take advantage.

UK business applications stagger towards the cloud

I spent today evaluating several competing vertical applications for a small business working in a particular niche – I am not going to identify it or the vendors involved. The market is made up of a number of companies which have been serving it for some years, and which have Windows applications born in the desktop era that are still being maintained and enhanced, plus some newer entrants with web-based solutions.

Several things interested me. The desktop applications seemed to suffer from all the bad habits of application development before design for usability became fashionable, and I saw forms with a myriad of fields and controls, each one no doubt satisfying a feature request, but forming a confusing and ugly user interface when put together. The web applications were not great, but seemed more usable, because a web UI encourages a simpler page-based approach.

Next, I noticed that the companies providing desktop applications talking to on-premise servers had found a significant number of their customers asking for a web-hosted option, but were having difficulty fulfilling the request. Typically they adopted a remote application approach using something like Citrix XenApp, so that they could continue to use their desktop software. In this type of solution, a desktop application runs on a remote machine but its user interface is displayed on the user’s desktop. It is a clever solution, but it is really a desktop/web hybrid and tends to be less convenient than a true web application. I felt that they needed to discard their desktop legacy and start again, but of course that is easier said than done when you have an existing application widely deployed, and limited development resources.

Even so, my instinct is to be wary of vendors who call desktop applications served by XenApp or the like cloud computing.

Finally, there was friction around integrating with Outlook and Exchange. Most users have Microsoft Office and use Outlook and Exchange for email, calendar and tasks. The vendors with web applications found their users demanding integration, but it is not easy to do this seamlessly, and we saw a number of imperfect attempts at synchronisation. The vendors with desktop applications had an easier task, except when these were repurposed as remote applications on a hosted service. In that scenario the vendors insisted that customers also use their hosted Exchange, so they could make it work. In other words, customers have to build almost their entire IT infrastructure around the requirements of this single application.

It was all rather unsatisfactory. The move towards the cloud is real, but in this particular small industry sector it seems slow and painful.

The cloud permeates Microsoft’s business more than we may realise

I’m in the habit of summarising Microsoft’s financial results in a simple table. Here is how it looks for the recently announced figures.

Quarter ending September 30 2010 vs quarter ending September 30 2009, $millions

Segment | Revenue | Change | Profit | Change
Client (Windows + Live) | 4785 | 1905 | 3323 | 1840
Server and Tools | 3959 | 409 | 1630 | 393
Online | 527 | 40 | -560 | -83
Business (Office) | 5126 | 612 | 3388 | 561
Entertainment and devices | 1795 | 383 | 382 | 122

The Windows figures are excellent, mostly reflecting Microsoft’s success in delivering a successor to Windows XP that is good enough to drive upgrades.

I’m more impressed, though, with the Server and Tools performance – which I assume is mostly server products – while noting that it now includes Windows Azure. Microsoft does not break out the Azure figures, but said that it grew 40% over the previous quarter; not especially impressive given that Azure has not been out long and will have grown from a small base.

The Office figures, also good, include SharePoint, Exchange and BPOS (Business Productivity Online Suite), which is to become Office 365. Microsoft reported a “tripled number of business customers using cloud services”.

Online, essentially the search and advertising business, is as poor as ever, though Microsoft says Bing gained market share in the USA. Entertainment and devices grew despite poor sales for Windows Mobile, caught between the decline of the old mobile OS and the launch of Windows Phone 7.

What can we conclude about the health of the company? The simple fact is that despite Apple, Google, and mis-steps in Windows, Mobile, and online, Microsoft is still a powerful money-making machine and performing well in many parts of its business. The company actually does a poor job of communicating its achievements, in my experience – witness the rather dull keynote at TechEd Berlin yesterday.

Of course Microsoft’s business is still largely dependent on an on-premise software model that many of us feel will inevitably decline. Still, my other reflection on these figures is that the cloud permeates Microsoft’s business more than a casual glance reveals.

The “Online” business is mainly Bing and advertising as far as I can tell; and despite CTO Ray Ozzie telling us back in 2005 of the importance of services financed by advertising, that business revolution has not come to pass as he imagined. I assume that Windows Live is no more successful than Online.

What is more important is that we are seeing Server and tools growing Azure and cloud-hosted virtualisation business, and Office growing hosted Exchange and SharePoint business. I’d expect both businesses to continue to grow, as Microsoft finally starts helping both itself and its customers with cloud migration.

That said, since the hosted business is not separated from the on-premise business, and since some is in the hands of partners, it is hard to judge its real significance.

Apple gives up on Xserve dedicated server hardware – looking towards the cloud?

Apple is scrapping its Xserve products, according to the latest information on its web site:

Xserve will no longer be available after January 31, but we’ll continue to fully support it. To learn more, view the PDF.

If you do indeed view the PDF, it confirms that:

Apple will not be developing a future version of Xserve

However, Snow Leopard Server, a version of OS X tuned for server use, remains; and Apple suggests that you install it either on a Mac Pro or on a Mac Mini.


That’s all very well; but while a Mini might well make sense for a very small business, larger organisations will not be impressed by the lack of features like dual redundant power supplies, lights out management, and rack mounting, which the Xserve provides.

There are a couple of ways to look at this. One is that Apple is giving up on the server market. Largely true, I think; but my guess is that Apple realises that this type of on-premise server is under threat from the cloud. I do not see this as Apple giving up on corporate computing; that would be unexpected considering the gains it is making with Mac, iPhone and iPad. I do see this as a move towards a client and cloud, or device and cloud, strategy. In that context it is not so surprising.

That said, I imagine there are a few businesses out there focused on supplying Xserve-based systems who will be disappointed by the news. I’ve not used one myself; but from what I’ve heard it is rather good.

Understanding the Silverlight controversy

There has been much discussion of the future of Microsoft’s Silverlight plugin since Server and Tools President Bob Muglia said in a PDC interview that “our strategy with Silverlight has shifted”, and spoke of HTML as the “only true cross-platform solution”.

The debate was even reported on the BBC’s web site under the headline Coders decry Silverlight change.

It is unfortunate that headlines tend to think in binary: alive or dead. In other words, if Microsoft is repositioning Silverlight then it must be killing it.

That is not the case. Muglia did not say that Silverlight has no future, nor that it was unimportant. He affirmed that there will be another version of Silverlight for Windows and Mac, as well as highlighting that it is the development platform for Windows Phone.

Speaking personally for a moment, I have reviewed Silverlight favourably in the past and still regard it as a great achievement by Microsoft: the power of the .NET runtime, the elegance of C#, the flexible layout capabilities of XAML, integrated with a capable multimedia player, and wrapped in a lightweight package that in my experience installs quickly and easily.

Silverlight forms an excellent client for cloud services such as those delivered by the Azure platform which we heard about at PDC.

Perhaps it is the case that IE9 maestro Dean Hachamovitch tended towards the gleeful as he demonstrated features in HTML and JavaScript that previously would have required Silverlight or Flash. At the same time, IE9 is not yet released, and even when it is, will not match the capabilities or the tooling and libraries available for Silverlight.

The Silverlight press generated by PDC must have been disappointing and frustrating for Microsoft’s Silverlight team. I am reading reports of Developer VP Scott Guthrie’s remarks at the DevConnections conference this week.

The reports of my death are greatly exaggerated … I have more people working on Silverlight now than any time in Silverlight history … don’t believe everything you read on the internet.

I have great respect for Guthrie; you need only see the speed and manner with which he reacted to the recent ASP.NET security scare – not trying to diminish its importance, delivering practical advice, answering comments, and working with his team to come up with workarounds and a proper solution as quickly as possible – to appreciate his commitment and that he understands the needs of developers.

So were posts like my own Silverlight dream is over unfair and inaccurate? Well, there is always a risk of being misunderstood; but the problem, as I perceive it, is not primarily about Silverlight’s progress on Windows and Mac. The problem is that those two desktop platforms no longer have sufficient reach; or rather, even if they have sufficient reach today, they will not tomorrow. We have the rise of iOS and Android; an explosion of non-Windows tablets in the wings; we have a man like James Gardner, CTO at the UK’s Department for Work and Pensions, writing of Windows 7 that:

Personally, I think it likely this is the last version of Windows anyone ever widely deploys

See also Cliff Saran’s comments at Computer Weekly.

In other words, Guthrie’s team can do a cracking job with Silverlight 5 for Windows and Mac – it could even merge Silverlight with WPF and make it the primary application platform for Windows – but that would still not address the concerns raised by what happened at PDC. If Silverlight remains imprisoned in Windows and Mac, it cannot deliver on its original promise.

What could Microsoft do to restore confidence in Silverlight? Something along these lines would make me change my mind:

  1. Announce Silverlight for Android.
  2. Nurture Silverlight for Symbian.
  3. Follow through on commitments for Silverlight on Moblin/MeeGo.
  4. Either implement Silverlight for Linux, or enter a deeper partnership with Novell’s Mono so that Microsoft-certified Silverlight runtimes appear on Linux in a timely manner alongside Microsoft’s releases.
  5. Come up with a solution for Silverlight on iOS. One idea is to follow Adobe with a native code compiler from Silverlight to iOS. Another would be a way of compiling XAML and C# to SVG and JavaScript. Neither would be perfect; but as it is, every company that starts deploying iPads or their successors is a customer that cannot use Silverlight.

Do I think Microsoft will implement the above? I doubt it. My interpretation of Muglia’s remarks is that Microsoft has decided not to go down that path, but to reserve Silverlight for Windows, Mac, and Windows Phone, and to invest in HTML for broad-reach applications.

That may well be the right decision; it is one that makes sense, though Microsoft was perhaps unwise to highlight it before IE9 is released. Further, cross-platform is not in Microsoft’s blood, and the path that Silverlight has taken is in line with what you would expect from a company built on Windows.

Silverlight is not dead, and for developers targeting Windows, Mac and Windows Phone it is as good as ever, and no doubt will be even better in its next version. But failing another change of heart, it will never now be WPF Everywhere; and PDC 2010 was when that truth sank home.

Update: this is pretty much what Guthrie says in his latest post:

Where our strategy has shifted since we first started working on Silverlight is that the number of Internet connected devices out there in the world has increased significantly in the last 2 years (not just with phones, but also with embedded devices like TVs), and trying to get a single implementation of a runtime across all of them is no longer really practical (many of the devices are closed platforms that do not allow extensibility).  This is true for any single runtime implementation – whether it is Silverlight, Flash, Java, Cocoa, a specific HTML5 implementation, or something else.  If people want to have maximum reach across *all* devices then HTML will provide the broadest reach (this is true with HTML4 today – and will eventually be true with HTML5 in the future).  One of the things we as a company are working hard on is making sure we have the best browser and HTML5 implementation on Windows devices through the great work we are doing with IE9.

Adobe MAX 2010 – it’s all about the partners

Last week was all conferences – Adobe MAX 2010 followed by Microsoft PDC – which left me with plenty of input but too little time to write it up. It is not too late though; and one advantage of attending these two events back-to-back was to highlight the tale of two runtimes, Adobe Flash and Microsoft Silverlight. MAX was a good event for Flash, and PDC a bad one for Silverlight, though the tale has a long way yet to run.

The key difference at this point is not technical, but all about partners. At MAX we saw how the Flash runtime is integral to the BlackBerry PlayBook, with RIM founder Mike Lazaridis coming on stage to tell us so. Flash is also built into Google TV, and Andres Ferrate and Daniels Lee from Google Developer Relations presented a session on creating web apps for the platform – worth watching as it brings out the difference between developing for a TV “lean back” environment and traditional mouse or touch user interfaces – and we also heard from Samsung about its Flash-enabled TVs coming in 2011. In each case, it is not just Flash but AIR, for applications that run outside the browser, which is supported. Google TV runs Android; and AIR for Android in general drew attention at MAX, encouraged by free Motorola Droid 2 smartphones handed out to attendees.

If the task was to convince Flash developers – and those on the fence – that the platform has a future, MAX delivered in spades; and Adobe can only benefit from the uncertainty surrounding the most obvious runtime rivals to Flash, Java and Silverlight.

But what about that other platform, HTML? Well, Adobe made a bit of noise about projects like EDGE, which exports animations and transitions to SVG and JavaScript using an extended jQuery library, as well as showing a “sneak peek” of a tool to export a Flash animation (but not application) to HTML. Outside the Adobe fan club there is still considerable aversion to Flash, stoked by Apple; in one of the sessions at MAX we were told that Steve Jobs’ open memo Thoughts on Flash has done real damage.

My impression though is that Adobe still has a Flash-first philosophy. The Solution Accelerators announced for LiveCycle 2.5, for example, all seem to be based on Flash clients, which could prove difficult if Apple’s iPad continues to take off in the enterprise. Adobe could do more to provide JavaScript libraries for LiveCycle clients, and tools for creating HTML applications. If you came to MAX looking for evidence that Adobe is moving towards web-standard HTML clients, you would have been largely disappointed; though seeing jQuery guy John Resig in the day two keynote would give you some comfort.

Some other MAX highlights:

  • Round-tripping between Catalyst and Flash Builder at last. This makes Catalyst more useful, though I still find myself thinking that the Catalyst features could be rolled into one of the other products, either as a designer personality for Flash Builder, or maybe in Flash Professional. The former would be easier as both Catalyst and Flash Builder are built on Eclipse.
  • Enhancements in the Flash Player – I am writing a separate piece on this, but it is great to see the 3D extensions codenamed Molehill, which together with game controller support lay the foundations for Flash games that compete more closely with console games.
  • Analytics – Adobe’s acquisition of Omniture a year ago was a far-sighted move, and the company talked about analytics in the context of applications as well as web sites. Despite unsettling privacy implications, the ability for developers to drill down into exactly how an application is used, and which parts are hardly used, has great potential for improving usability.
  • Digital publishing – it was fascinating to hear from publisher Condé Nast about its plans for digital publishing, using Adobe’s Digital Publishing Suite to create files targeting Adobe’s content viewer on iOS and eventually AIR. As a web enthusiast I have mixed feelings, and there was some foot-shuffling when I asked about SEO (Search Engine Optimisation); but as someone with a professional interest in a flourishing media industry I also hope this becomes a solid and profitable platform.

Disappointments? I was sorry to hear that Adobe is closing down contributions and reducing transparency in the open source Flex SDK, though it is said to be temporary. It also seems that plans to enhance ActionScript are not well advanced; Silverlight remains well ahead in this respect with its C# and .NET support.

What about Adobe’s enterprise ambitions? Klint Finley’s post on the Adobe Stack and what it means for Enterprise Development is a good read. The pieces are almost in place, but the focus on document processing at the back end, and Flash and Acrobat on the front end, makes this a specialist rather than a generic application platform.

Overall though it was a strong MAX. I appreciate Adobe for not being Google or Apple or Microsoft or IBM, and hope that takeover rumours remain as rumours.

See also my earlier post Adobe aims to fill mobile vacuum with AIR.