Category Archives: internet

Single sign-on from Active Directory to Windows Azure: big feature, still challenging

Microsoft has posted a white paper setting out what you need to do in order to have users who are signed on to a local Windows domain seamlessly use an Azure-hosted application, without having to sign in again.

I think this is a huge feature. Maintaining a single user directory is more secure and more robust than efforts to synchronise a local directory with a cloud-hosted directory, and this is a point of friction when it comes to adopting services such as Google Apps or Salesforce.com. Single sign-on with federated directory services takes that away. As an application developer, you can write code that looks the same as it would for a locally deployed application, but host it on Azure.

There is also a usability issue. Users hate having to sign in multiple times, and hate it even more if they have to maintain separate username/password combinations for different applications (though we all do).

The white paper explains how to use Active Directory Federation Services (ADFS) and Windows Identity Foundation (WIF, part of the .NET Framework) to achieve both single sign-on and access to user data across local network and cloud.
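For context, the browser flow that ADFS and WIF implement is WS-Federation passive sign-in: an unauthenticated request to the application (the relying party) is redirected to the ADFS sign-in page with a few query string parameters, and ADFS posts back a signed token once the user has authenticated against the local directory. Here is a minimal sketch of that redirect step in TypeScript, for illustration only – WIF’s federation module builds this request for you in practice, and the ADFS and application URLs below are hypothetical:

```typescript
// Sketch of the WS-Federation passive sign-in redirect (illustration only;
// WIF normally constructs this, and these URLs are hypothetical placeholders).
const adfsSignInUrl = "https://adfs.example.com/adfs/ls/";   // ADFS sign-in endpoint
const applicationRealm = "https://myapp.cloudapp.net/";      // identifies the relying party

function buildSignInRedirect(returnPath: string): string {
  const params = new URLSearchParams({
    wa: "wsignin1.0",          // WS-Federation action: sign in
    wtrealm: applicationRealm, // which application is asking for a token
    wctx: returnPath,          // opaque context echoed back with the token
  });
  return `${adfsSignInUrl}?${params.toString()}`;
}

// An unauthenticated request for /reports would bounce to something like:
console.log(buildSignInRedirect("/reports"));
```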


The snag? It is a complex process. The white paper has a walk-through, though to complete it you also need this guide on setting up ADFS and WIF. There are numerous steps, some of which are not obvious. Did you know that “.NET 4.0 has new behavior that, by default, will cause an error condition on a page request that contains a WS-Federation authentication token”?

Of course dealing with complexity is part of the job of a developer or system administrator. Then again, complexity also means more to remember and more to troubleshoot, and less incentive to try it out.

One of the reasons I am enthusiastic about Windows Small Business Server Essentials (codename Aurora) is that it promises to do single sign-on to the cloud in a truly user-friendly manner. According to a briefing I had from SBS technical product manager Michael Leworthy, cloud application vendors will supply “cloud integration modules,” connectors that you install into your SBS to get instant single sign-on integration.

SBS Essentials does run ADFS under the covers, but you will not need a 35-page guide to get it working, or so we are promised. I admit, I have not been able to test this feature yet, and aside from Microsoft’s BPOS/Office 365 I do not know how many online applications will support it.

Still, this is the kind of thing that will get single sign-on with Active Directory widely adopted.

Consider Facebook Connect. Register your app with Facebook; write a few lines of JavaScript and PHP; and you can achieve the same results: single sign-on and access to user account information. Facebook knows that to get wide adoption for its identity platform it has to be easy to implement.
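For comparison, here is roughly what the browser side of Facebook Connect looks like – a sketch in TypeScript against Facebook’s JavaScript SDK, with a placeholder app ID; the exact response shape has varied between SDK versions:

```typescript
// Sketch of browser-side Facebook Connect sign-in (placeholder app ID).
declare const FB: any; // provided by the Facebook JavaScript SDK script tag

FB.init({ appId: "YOUR_APP_ID", cookie: true, xfbml: true });

// Ask the user to sign in with their Facebook identity.
FB.login((response: any) => {
  // A session/authResponse object is present when sign-in succeeded;
  // the exact property name has varied across SDK versions.
  if (response && (response.session || response.authResponse)) {
    // Once signed in, basic account information is available to the app.
    FB.api("/me", (user: any) => {
      console.log(`Signed in as ${user.name}`);
    });
  }
});
```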

On Microsoft’s platform, another option is to join your Azure instance to the local domain. This is a feature of Azure Connect, currently in beta.

Are you using ADFS, with Azure or another platform? I would be interested to hear how it is going.

Adobe declares glittering results as CEO says Apple’s Flash ban has no impact on its revenue

Adobe has proudly declared its first billion-dollar quarter: revenue of $1,008m in the quarter ending December 3 2010, versus $757.3m in the same quarter of 2009.

I am not a financial analyst, but a few things leap out from the figures. One is that Omniture, the analytics company Adobe acquired at the end of 2009, is doing well and contributing significantly to Adobe’s revenue – $98.4m in Q4 2010. The billion-dollar quarter would not have happened without it. Second, Creative Suite 5 is selling well, better than Creative Suite 4.

Creative Suite 4 was released in October 2008, and Creative Suite 5 in April 2010. It is not a perfect comparison, but the following table compares the Creative Solutions segment (mainly Creative Suite) for the two products quarter by quarter from their respective release dates (revenue in $m):

Quarters after release | 1st   | 2nd   | 3rd   | 4th   | 5th   | 6th
Creative Suite 4       | 508.7 | 460.7 | 411.7 | 400.4 | 429.3 | 432.0
Creative Suite 5       | 532.7 | 549.7 | 542.1 | –     | –     | –

CS4 drops off noticeably following an initial surge, whereas CS5 has kept on selling. It is a good product and a de-facto industry standard, but not every user is persuaded to upgrade every time a new release appears. My guess is that things like better 64-bit support – which makes a huge difference in the production tools – and new tricks in Photoshop have been successful in driving upgrades to CS5. Further, the explosion of premium mobile devices led by Apple’s iPhone and iPad has not been bad for Adobe, despite Apple CEO Steve Jobs doing his best to put down Flash. Publishers creating media for the iPad, for example, will most likely use Adobe’s tools to do so. CEO Shantanu Narayen said in the earnings call, “We have not seen any impact on our revenue from Apple’s choice [to not support Flash]”, though I am sure he would make a big deal of it if Apple were to change its mind.

Before getting too carried away, though, I note that Creative Suite 3, released in March 2007, did just as well as CS5. Here are the figures (revenue in $m):

Quarters after release | 1st   | 2nd   | 3rd   | 4th   | 5th   | 6th
Creative Suite 3       | 436.6 | 545.5 | 570.5 | 543.5 | 527.2 | 493.6

In fact, Q4 2007 at $570.5m is still a record for Adobe’s Creative Solutions segment. So maybe CS4 was an unfortunate blip. Then again, not quite all the revenue in Creative Solutions is the suite; it also includes Flash Platform services such as media streaming. Further, the economy looked rosier in 2007.

Here is the quarter vs quarter comparison across the whole company (figures in $m):

Segment              | Q4 2009 | Q4 2010
Creative Solutions   | 429.3   | 542.1
Digital Enterprise   | 211.8   | 274.1
Omniture             | 26.3    | 98.4
Platform             | 47.0    | 46.1
Print and Publishing | 42.9    | 47.3

In this table, Creative Solutions has already been mentioned. Digital Enterprise, formerly called Business Productivity, includes Acrobat, LiveCycle and Connect web conferencing. Platform is confusing: according to the Q4 09 datasheet it includes the developer tools, Flash Platform Services and ColdFusion. However, the Q4 10 datasheet omits any list of products for Platform, though it includes them for the other segments, and lists ColdFusion under Print and Publishing along with Director, Contribute, PostScript, eLearning Suite and some other older products. According to this document [pdf], InDesign, which is huge in print publishing, is not included in Print and Publishing, so I guess it is in Creative Solutions.

In the earnings call, Adobe’s Mark Garrett did mention Platform, and attributed its growth (compared to Q3 2010) to “higher toolbar distribution revenue driven primarily by the release of the new Adobe Reader version 10 in the quarter.” This refers to the vile practice of foisting a third-party toolbar (unless they opt out) on people forced to download Adobe Reader because they have been sent a PDF. Perhaps in the light of these good results Adobe could be persuaded to stop doing so?

I am not sure how much this breakdown can be trusted as it makes little sense to me. Do not take the segment names too seriously then; but they are all we have when it comes to trying to compare like with like.

Still, clearly Adobe is doing well and has successfully steered around some nasty rocks that Apple threw in its way. I imagine that Microsoft’s decision to retreat from its efforts to establish Silverlight as a cross-platform rival to Flash has also helped build confidence in Adobe’s platform. The company’s point of vulnerability is its dependence on shrink-wrap software for the majority of its revenue. Projects like the abandoned Project ROME show that Adobe knows how to move towards cloud-deployed, subscription-based software; but with business booming under its current model, and little sign of success for cloud projects like Acrobat.com, you can understand why the company is in no hurry to change.

First impressions of Google TV – get an Apple iPad instead?

I received a Google TV as an attendee at the Adobe MAX conference earlier this year; to be exact, a Logitech Revue. It is not yet available or customised for the UK, but with its universal power supply and standard HDMI connections it works OK, with some caveats.

The main snag with my evaluation is that I use a TV with built-in Freeview (over-the-air digital TV) and do not use a set-top box. This is bad for Google TV, since it wants to sit between your set-top box and your TV, with an HDMI in for the set-top box and an HDMI out to your screen. Features like picture-in-picture, TV search, and the ability to choose a TV channel from within Google TV depend on this. Without a set-top box you can only use Google TV for the web and apps.


I found myself comparing Google TV to Windows Media Center, which I have used extensively both directly attached to a TV, and over the network via Xbox 360. Windows Media Center gets round the set top box problem by having its own TV card. I actually like Windows Media Center a lot, though we had occasional glitches. If you have a PC connected directly, of course this also gives you the web on your TV. Sony’s PlayStation 3 also has a web browser with Adobe Flash support, as does Nintendo Wii though it is more basic.


What you get with Google TV is a small set-top box – which in my case slipped unobtrusively onto a shelf below the TV – plus a wireless keyboard, an HDMI connector, and an IR blaster. Installation is straightforward and the box recognised my TV to the extent that it can turn it on and off via the keyboard. The IR blaster lets you position an infra-red transmitter optimally for any IR devices you want to control from Google TV – typically your set-top box.

I connected to the network through wi-fi initially, but for some reason this was glitchy and would lose the connection for no apparent reason. I plugged in an ethernet cable and all was well. This problem may be unique to my set-up, or something that gets a firmware fix, so no big deal.

There is a usability issue with the keyboard. This has a trackpad which operates a mouse pointer, under which are cursor keys and an OK button. You would think that the OK button represents a mouse click, but it does not. The mouse click button is at top left on the keyboard. Once I discovered this, the web browser (Chrome, of course) worked better. You do need the OK button for navigating the Google TV menus.

I also dislike having a keyboard floating around in the living room, though it can be useful especially for things like Gmail, Twitter or web forums on your TV. Another option is to control it from a mobile app on an Android smartphone.

The good news is that Google TV is excellent for playing web video on your TV. YouTube has a special “leanback” mode, optimised for viewing from a distance, which works reasonably well, though amateur videos that look tolerable in a small frame in a web browser look terrible played full-screen in the living room. BBC iPlayer works well in on-demand mode; the download player would not install. Overall it was a bit better than the PS3, which is also pretty good for web video, but probably not by enough to justify the cost if you already have a PS3.

The bad news is that the rest of the Web on Google TV is disappointing. Fonts are blurry, and the resolution necessary to make a web page viewable from 12 feet back is often annoying. Flash works well, but Java seems to be absent.

Google also needs to put more thought into personalisation. The box encouraged me to set up a Google account, which will be necessary to purchase apps, giving me access to Gmail and so on; and I also set up the Twitter app. But typically the living room is a shared space: do you want, for example, a babysitter to have access to your Gmail and Twitter accounts? It needs some sort of profile management and log-in.

In general, the web experience you get by bringing your own laptop, netbook or iPad into the room is better than Google TV in most ways apart from web video. An iPad is similar in size to the Google TV keyboard.

Media on Google TV has potential, but is currently limited by the apps on offer. Logitech Media Player is supplied and is a DLNA client, so if you are lucky you will be able to play audio and video from something like a NAS (network attached storage) drive on your network. Codec support is limited.

In a sane, standardised world you would be able to stream music from Apple iTunes or a Squeezebox server to Google TV but we are not there yet.

One key feature of Google TV is purchasing streamed videos from Netflix, Amazon VOD (Video on Demand) or Dish Network. I did not try this; these services do not yet work in the UK. Reports are reasonably positive; but I do not think this is a big selling point, since similar services are available by many other routes.

Google TV is not in itself a DVR (Digital Video Recorder) but can control one.

All about the apps

Not too good so far then; but at some point you will be able to purchase apps from the Android marketplace – which is why attendees at the Adobe conference were given boxes. Nobody really knows what sort of impact apps for TV could have, and it seems to me that as a means of running apps – especially games – on a TV this unobtrusive device is promising.

Note that some TVs will come with Google TV built-in, solving the set top box issue, and if Google can make this a popular option it would have significant impact.

It is too early then to write it off; but it is a shame that Google has not learned the lesson of Apple, which is not to release a product until it is really ready.

Update: for the user’s perspective there is a mammoth thread on avsforum; I liked this post.

Salesforce.com acquires Heroku, wants your Enterprise apps

The big news today is that Salesforce.com has agreed to acquire Heroku, a company which hosts Ruby applications using an architecture that enables seamless scalability. Heroku apps run on “dynos”, each of which is a single process running Ruby code on the Heroku “grid” – an abstraction which runs on instances of Amazon EC2 virtual machines. To scale your app, you simply add more dynos.


Why is Salesforce.com acquiring Heroku? Well, for some years an interesting question about Salesforce.com has been how it can escape its cloud CRM niche. The obvious approach is to add further applications, which it has done to some extent with FinancialForce, but it seems the strategy now is to become a platform for custom business applications. We already knew about VMForce, a partnership with VMWare currently in beta that lets you host Java applications that are integrated with Force.com, but it is with the announcements here at Dreamforce that the pieces are falling into place. Database.com for data access and storage; now Heroku for Ruby applications.

These services join several others which Salesforce.com is branding as Force.com 2:

  • Appforce – in effect the old Force.com: build departmental apps with visual tools and declarative code.
  • Siteforce – again an existing capability: build web sites on Force.com.
  • ISVForce – build your own multi-tenant application and sign up customers.

Salesforce.com is thoroughly corporate in its approach and its obvious competition is not so much Google AppEngine or Amazon EC2, but Microsoft Azure: too expensive for casual developers, but with strong Enterprise features.

Identity management is key to this battle. Microsoft’s identity system is Active Directory, with federation between local and cloud directories enabling single sign-on. Salesforce.com has its own user directory and developing on its platform will push you towards using it.

Today’s announcement makes sense of something that puzzled me: why we got a session on node.js at Monday’s Cloudstock event. It was a great session and I wrote it up here. Heroku has been experimenting with node.js support, with considerable success, and says it will introduce a new version next year.
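To give a flavour of the model, a web process on Heroku – whether Ruby or node.js – is simply a process that binds to the port the platform assigns and handles requests; scaling out means running more copies of it as dynos. This is my own minimal sketch in TypeScript for node.js, not Heroku sample code:

```typescript
// Minimal sketch of a node.js web process of the kind Heroku runs (my illustration).
// Heroku hands each dyno its port via the PORT environment variable.
import * as http from "http";

const port = Number(process.env.PORT) || 3000;

http
  .createServer((req, res) => {
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end("Hello from a dyno\n");
  })
  .listen(port, () => {
    console.log(`Web process listening on port ${port}`);
  });
```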

Finally, the Heroku acquisition is great news for Enterprise use of Ruby. Today many potential new developers will be looking at it with interest.

Silverlight 5 unveiled: more power, more Windows

Microsoft has announced details of Silverlight 5, a major new release of its browser plug-in and desktop runtime for Windows and Mac. Silverlight is also the primary application runtime for Windows Phone 7, though this update does not apply to the phone yet. Silverlight 5 will go into beta in the first half of 2011, and release is planned for the second half of 2011 – no more than a year or so away.

So what’s in Silverlight 5?

On the media side, there is hardware decoding of H.264 video (an overdue feature) plus enhancements including TrickPlay which enables fast-forward and rewind. There is also remote control support of some kind. According to VP Scott Guthrie, you will be able to stream HD video to a netbook.

The bigger area of change is in Silverlight as an application runtime. Here are the highlights:

  • Text rendering is much improved, with multi-column layout, OpenType support, and control of tracking and leading.
  • PostScript vector printing greatly improves printing support, and you can now create a dedicated print view different from what is on screen.
  • A new hardware-accelerated 3D graphics API, as well as immediate mode graphics which lets you render directly to the GPU.
  • There is a 64-bit version of Silverlight 5.
  • WS-Trust support for secure messaging in tandem with Windows Communication Foundation.
  • Databinding enhancements, and support for debugging a binding by setting a breakpoint on it.

Alongside these, trusted Silverlight applications have new capabilities. But what is a trusted application? Previously, a Silverlight application became trusted if it ran out of the browser and the user gave permission via a dialog. In Silverlight 5 this changes: a Silverlight application can be trusted within the browser as well, though Microsoft says this only works “when enabled via a group policy registry key and an application certificate”. This implies that the feature is aimed at corporate environments rather than for applets with broad reach.

Once trusted, an in-browser Silverlight applet has the following additional features:

  • A new web browser control lets you host HTML content within a Silverlight application.
  • Read and write access to My Documents
  • Ability to launch Microsoft Office applications – examples include creating an email message or opening a report in Word
  • Access to COM components – Microsoft gives the example of accessing a USB security key or a bar-code scanner
  • Ability to call native code with PInvoke (Platform Invoke)

In addition, out of browser applications support multiple windows including child windows, so they can be made to behave even more like normal desktop Windows applications.

You can see the theme here: making trusted Silverlight applications more powerful so that a larger proportion of custom business applications can be implemented in the browser or as Silverlight out-of-browser applications, rather than as traditional Windows applications that require desktop deployment. Put this together with Office 365 and Windows Azure, and you can see how well Silverlight works as a component in Microsoft’s cloud stack – provided users do not have anything inconvenient like an Apple iPad.

But what about the Mac? All these “trusted” features appear to be Windows-only. I asked about Mac support and was told:

We’re evaluating mechanisms for enabling similar trusted applications on the mac.

Fair enough; but the way this is put suggests that, having already retreated from ambitions for broad device reach in statements at the recent PDC conference, Microsoft is now retreating further from Mac and Windows parity and moving Silverlight towards being an application runtime for Windows – though note that there will still be a Silverlight 5 for the Mac, which will have the features that do not require COM or PInvoke.

It is disappointing that there is still no built-in local database support, though there are third-party offerings.

There are a couple of ways to look at Silverlight. Microsoft’s lack of commitment to cross-platform parity and its unwillingness to address broad device support mean it does not look good as a broad-reach browser plugin, despite its great features on systems that do support it.

On the other hand, as an alternative to desktop Windows applications Silverlight looks increasingly attractive as its capabilities increase.

More information on the new features here – though note it neglects to mention what will and will not work on a Mac.

Adobe abandons Project ROME, focuses on apps rather than cloud

Adobe is ceasing investment in Project ROME, a labs project which provides a rich design and desktop publishing application implemented as an Adobe AIR application, running either in the browser or on the desktop using the Flash player as a runtime.


According to the announcement:

Project ROME by Adobe was intended to explore the opportunity and usability of creative tools as software-as-a-service in the education market and beyond. We have received valuable input from the community after a public preview of the software. Following serious evaluation and consideration of customer input and in weighing this product initiative against other projects currently in development, we have made the difficult decision to stop development on Project ROME. Given our priorities, we’re focusing resources on delivering tablet applications, which we believe will have significant impact on creative workflows.

There must be some broken hearts at Adobe, because ROME is a beautiful and capable application that serves, if nothing else, as a demonstration of how capable a Rich Internet Application can be. In fact, I have used it for that purpose: when asked whether a web application could ever deliver a user interface that comes close to the best desktop applications, I showed Project ROME to great effect.

I first saw Project ROME as a “sneak peek” at the Adobe MAX conference in 2009. It had made it past that initial prototype and was being worked up as a full release, with a free version for education and a commercial version for the rest of us. Curiously, Adobe says the commercial version will remain available as an unsupported freebie, but the educational offering is being pulled: “we do not want to see pre-release software used in the classroom”.

Why abandon it now? I think we have Apple’s Steve Jobs to thank. AIR applications do not run on the iPad; and when Adobe says it is focusing instead on tablet applications, the iPad will figure largely in those plans. Still, there are a few other factors:

  • One thing that was not convincing in the briefing I received about Project ROME was the business model. It was going to be subscription-based, but how many in this non-professional target market would subscribe to online desktop publishing, when there are well-established alternatives like Microsoft Publisher?
  • Adobe makes most of its money from selling desktop software, in the Creative Suite package. ROME was always going to be a toy relative to the desktop offerings.
  • The output from ROME is primarily PDF. If ROME had been able to build web pages rather than PDF documents, perhaps that would have made better sense for a cloud application.
  • Adobe did not market the pre-release effectively. I do not recall hearing about it at MAX in October, which surprised me – it may have been mentioned somewhere, but it was not covered in the keynotes despite being a great example of a RIA.
  • The ROME forum shows only modest activity, suggesting that Project ROME had failed to attract the attention Adobe may have hoped for.

It is still worth taking a look at Project ROME; and I guess that some of the ideas may resurface in apps for iPad, Android and other tablets. It will be interesting to see to what extent Adobe itself uses Flash and AIR for the commercial design apps it delivers.

Final reflection: this decision is a tangible example of the ascendancy of mobile apps versus web applications – though note that Adobe still has a bunch of web applications at Acrobat.com, including the online word processor once called Buzzword and a spreadsheet application called Tables.

HTML 5 Canvas: the only plugin you need?

The answer is no, of course. And Canvas is not a plugin. That said, here is an interesting proof of concept blog and video from Alexander Larsson: a GTK3 application running in Firefox without any plugin.


GTK is an open source cross-platform GUI framework written in C but with bindings to other languages including Python and C#.

So how does C native code run in the browser without a plugin? The answer is that the HTML 5 Canvas element, already widely implemented and coming to Internet Explorer in version 9, has a rich drawing API that goes right down to pixel manipulation if you need it. In Larsson’s example, the native code is actually running on a remote server. His code receives the latest image of the application from the server and transmits mouse and keyboard operations back, creating the illusion that the application is running in the browser. The client only needs to know what has changed in the image, so although sending screen images sounds heavyweight, it is amenable to optimisation and compression.
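To make the mechanism concrete, here is a rough TypeScript sketch of what the browser side of such a scheme might look like – my own illustration of the general approach, not Larsson’s code, and the WebSocket message format is invented for the example:

```typescript
// Client-side sketch of remote GUI rendering onto an HTML5 canvas
// (illustrative only; the message format is invented, not Larsson's protocol).
const canvas = document.getElementById("screen") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;
const socket = new WebSocket("wss://example.com/remote-gui"); // hypothetical server

// The server sends updated regions of the application window as images,
// along with the position at which to paint them.
socket.onmessage = (event: MessageEvent) => {
  const update = JSON.parse(event.data); // { x, y, png } where png is base64 image data
  const img = new Image();
  img.onload = () => ctx.drawImage(img, update.x, update.y);
  img.src = "data:image/png;base64," + update.png;
};

// Mouse and keyboard input goes back to the server, which feeds it to the
// real application and replies with fresh image updates.
canvas.addEventListener("mousedown", (e: MouseEvent) => {
  socket.send(JSON.stringify({ type: "mousedown", x: e.offsetX, y: e.offsetY, button: e.button }));
});
document.addEventListener("keydown", (e: KeyboardEvent) => {
  socket.send(JSON.stringify({ type: "keydown", key: e.key }));
});
```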

It is the same concept as Windows remote desktop and terminal services, or remote access using VNC, but translated to a browser application that requires no additional client or setup.

There are downsides to this approach. First, it puts a heavy burden on the server, which is executing the application code as well as supplying the images, especially when there are many simultaneous users. Second, there are tricky issues when the user expects the application to interact with the local machine, such as playing sounds, copying to the clipboard or printing – everything is an image rather than real text, for example. Third, it is not well suited to graphics that change rapidly, as in a game with fast-paced action.

On the other hand, it solves an immense problem: getting your application running on platforms which do not support the runtime you are using – native code, Flash and Silverlight on Apple’s iPad and iPhone, for example. I recall seeing a proof of concept for Flash at an Adobe MAX conference (not the most recent one) as part of the company’s research on how to break into Apple’s walled garden.

It is not as good as a true local application in most cases, but it is better than nothing.

Now, if Microsoft were to do something like this for Silverlight, enabling users to run Silverlight apps on their Apple and Linux devices, I suspect attitudes to the viability of Silverlight in the browser would change considerably.

WS-I closes its doors–the end of WS-* web services?

The Web Services Interoperability Organization has announced [pdf] the “completion” of its work:

After nearly a decade of work and industry cooperation, the Web Services Interoperability Organization (WS-I; http://www.ws-i.org) has successfully concluded its charter to document best practices for Web services interoperability across multiple platforms, operating systems and programming languages.

In the whacky world of software though, completion is not a good thing when it means, as it seems to here, an end to active development. The WS-I is closing its doors and handing maintenance of the WS interoperability profiles to OASIS:

Stewardship over WS-I’s assets, operations and mission will transition to OASIS (Organization for the Advancement of Structured Information Standards), a group of technology vendors and customers that drive development and adoption of open standards.

Simon Phipps blogs about the passing of WS-I and concludes:

Fine work, and many lessons learned, but sadly irrelevant to most of us. Goodbye, WS-I. I know and respect many of your participants, but I won’t mourn your passing.

Phipps worked for Sun when the WS-* activity was at its height and WS-I was set up, and describes its formation thus:

Formed in the name of "preventing lock-in" mainly as a competitive action by IBM and Microsoft in the midst of unseemly political knife-play with Sun, they went on to create massively complex layered specifications for conducting transactions across the Internet. Sadly, that was the last thing the Internet really needed.

However, Phipps links to this post by Mike Champion at Microsoft which represents a more nuanced view:

It might be tempting to believe that the lessons of the WS-I experience apply only to the Web Services standards stack, and not the REST and Cloud technologies that have gained so much mindshare in the last few years. Please think again: First, the WS-* standards have not in any sense gone away, they’ve been built deep into the infrastructure of many enterprise middleware products from both commercial vendors and open source projects. Likewise, the challenges of WS-I had much more to do with the intrinsic complexity of the problems it addressed than with the WS-* technologies that addressed them. William Vambenepe made this point succinctly in his blog recently.

It is also important to distinguish between the work of the WS-I, which was about creating profiles and testing tools for web service standards, and the work of other groups such as the W3C and OASIS which specify the standards themselves. While work on the WS-* specifications seems much reduced, there is still work going on. See for example the W3C’s Web Services Resource Access Working Group.

I partly disagree with Phipps about the work of the WS-I being “sadly irrelevant to most of us”. It depends who he means by “most of us”. Granted, all this stuff is meaningless to the world at large; but there are a significant number of developers who use SOAP and WS-* at least to some extent, and interoperability is key to the usefulness of those standards.

The Salesforce.com API is mainly SOAP based, for example, and although there is a REST API in preview it is not yet supported for production use. I have been told that a large proportion of the transactions on Salesforce.com are made programmatically through the API, so here is one place at least where SOAP is heavily used.

WS-* web services are also built into Microsoft’s Visual Studio and .NET Framework, and are widely used in my experience. Visual Studio does a good job of wrapping them so that developers do not have to edit WSDL or SOAP requests and responses by hand. I’d also suggest that web services in .NET are more robust than DCOM (Distributed COM) ever was, and work successfully over the internet as well as on a local network, so the technology is not a failure.

That said, I am sure it is true that only a small subset of the WS-* specifications are widely used, which implies a large amount of wasted effort.

Are SOAP and WS-* dying, and is REST the future? The evidence points that way to me, but I would be interested in other opinions.

Which mobile platforms will fail?

Gartner’s Nick Jones addressed this question in a blog post yesterday. He refers to the “rule of three”, which conjectures that no more than three large vendors can succeed in a mature market. If this applies in mobile, then we will see no more than three survivors, after failures and consolidation, from the following group, plus any I’ve missed. Where platforms share an owner and one is already slated to replace the other, the superseded platform is noted alongside its replacement.

  • Apple iOS
  • Google Android
  • Samsung Bada
  • MeeGo, replacing the superseded Maemo
  • RIM BlackBerry Tablet OS (QNX), replacing BlackBerry OS
  • HP/Palm WebOS
  • Symbian
  • Windows Phone 7 and successors, replacing Windows Mobile

Jones says that success requires differentiation, critical mass, and a large handset manufacturer. I am not sure that the last two are really distinct. It is easy to fall into the tautology trap: to be successful a platform needs to be successful. Quite so; but what we are after is the magic ingredient(s) that make it so.

Drawing up a list like this is hard, since some operating systems are more distinct than others. Android, Bada, MeeGo and WebOS are all Linux-based; iOS is also a Unix-like OS. Windows Mobile and Windows Phone 7 are both based on Windows CE.

While it seems obvious that not all the above will prosper, I am not sure that the rule of three applies. I agree that it is unlikely that mobile app vendors will want to support and build 8 or more versions of each app in order to cover the whole market; but this problem does not apply to web apps, and cross-platform frameworks and runtimes can solve the problem to some extent – things like Adobe AIR for mobile, PhoneGap and Appcelerator. Further, there will probably always be mobile devices on which few if any apps are installed, where the user will not care about the OS or application store.

Still, pick your winners. Gartner is betting on iOS and Android, predicting decline for RIM and Symbian, and projecting a small 3.9% share for Microsoft by 2014.

I am sure there will be surprises. The question of mobile OS market share should not be seen in isolation, but as part of a bigger picture in which cloud+device dominates computing. Microsoft has an opportunity here, because in theory it can offer smooth migration to existing Microsoft-platform businesses, taking advantage of their investment in – or lock-in to – Active Directory, Exchange, Office and .NET. In the cloud that makes Microsoft BPOS and Azure attractive, while a mobile device with great support for Exchange and SharePoint, for example, is attractive to businesses that already use these platforms.

The cloud will be a big influence at the consumer end too. There is talk of a Facebook phone which could disrupt the market; but I wonder if we will see the existing Facebook and Microsoft partnership strengthen once people realise that Windows Phone 7 has, from what I have seen, the best Facebook integration out there.

So there are two reasons why Gartner may have under-rated Microsoft’s prospects. Equally, you can argue that Microsoft is too late into this market, with Android perfectly positioned to play against Apple the role that worked so well for Microsoft on the desktop.

It is all too early to call. The best advice is to build in the cloud and plan for change when it comes to devices.

Data analysis hot at Future of Web Applications Day One

I’ve been attending the Future of Web Applications conference in London. I spoke to several attendees in the evening and the general perception was that the event had been weaker than usual so far. Complaints concerned uninspiring sessions, lack of deep technical content, and information on HTML 5 that was really nothing new.

That said, several said how much they enjoyed a session from Hilary Mason at bit.ly on data analysis. Bit.ly does URL shortening, with 70% or so of its traffic coming from Twitter clients, and Mason is a statistical expert who has worked on analysing and visualising the resulting data. She told us, for example, that news links are more popular than sports links, and sports links more popular than food links. She was also able to discover the best time to post a link for any particular Twitter account, if you want maximum clicks. There is no quick way to work this out otherwise, so this type of analysis is valuable for companies using Twitter as a PR tool. Another snippet of information was the half-life of a typical bit.ly link – in other words, the time interval by which it has recorded 50% of its likely total clicks – which in the example she showed us was between 20 and 25 minutes.
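To make the half-life idea concrete, here is a small TypeScript sketch of the calculation with made-up numbers – my own illustration, not bit.ly’s code: given the elapsed time of every click on a link, find the point at which half the total clicks had arrived.

```typescript
// Illustrative click half-life calculation (made-up data, not bit.ly's code).
// clickTimes: seconds after the link was posted at which each click occurred.
function clickHalfLife(clickTimes: number[]): number {
  const sorted = [...clickTimes].sort((a, b) => a - b);
  const halfwayIndex = Math.ceil(sorted.length / 2) - 1; // the click that reaches 50% of the total
  return sorted[halfwayIndex];
}

// Made-up example: most clicks arrive soon after posting, then tail off.
const times = [30, 90, 150, 300, 600, 900, 1200, 1500, 2400, 5400];
console.log(`Half-life: ${clickHalfLife(times) / 60} minutes`); // -> 10 minutes
```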

The consequence was that I went into the next session, on social gaming, with data analysis on my mind. The session was presented by Kristian Segerstrale of Playfish, the part of Electronic Arts focused on casual games for Facebook and the like. Gaming, by the way, is a huge part of Facebook, accounting for 30% to 40% of overall engagement, according to Segerstrale. As an insight into the future of gaming it was a good session, but perhaps it did not connect well with typical FOWA attendees.

Nevertheless, Segerstrale made a compelling point about how his company’s games evolve, which is also applicable to other kinds of web applications. He said that there is intense analysis of what works and what does not work, based on the flow of data that is available with web applications. You can see who is playing, when they are playing, which features are used, and get a level of insight into the strengths and weaknesses of your application which is typically unavailable for desktop applications. I imagine this works particularly well within Facebook, because of the rich user profile information there. If you take advantage of that data, you can get a lead over the competition; if you fail to make use of it, you will likely fall behind. There is now a data analytics skills gap, Segerstrale told us.
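As an entirely hypothetical illustration of the kind of instrumentation Segerstrale was describing, a web game can report each significant player action to an analytics service as it happens; the endpoint and event fields below are invented for the example:

```typescript
// Hypothetical sketch of in-game event instrumentation (endpoint and fields invented).
interface GameEvent {
  player: string;    // anonymised player id
  feature: string;   // which part of the game was used
  timestamp: number; // when it happened, in ms since the epoch
}

function trackEvent(feature: string, player: string): void {
  const event: GameEvent = { player, feature, timestamp: Date.now() };
  // Fire-and-forget post to the analytics service; failures are ignored
  // so that analytics can never break gameplay.
  fetch("https://analytics.example.com/events", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(event),
  }).catch(() => { /* ignore */ });
}

// For example, called when a player opens the in-game shop:
trackEvent("shop-opened", "player-1234");
```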

It was thought-provoking to see how data analytics was a common thread between such different sessions.