Microsoft maybe gets the cloud – maybe too late

Microsoft CEO Steve Ballmer gave a talk on the company’s cloud strategy at the University of Washington yesterday. Although it was a small event, the webcast was widely publicised, and it coincides with a leaked internal memo on “how cloud computing will change the way people and businesses use technology”, a new Cloud website, and a Cloud Computing press portal, so it is fair to assume that this represents a significant strategy shift.

According to Ballmer:

about 70 percent of our folks are doing things that are entirely cloud-based, or cloud inspired. And by a year from now that will be 90 percent

I watched the webcast, and it struck me as significant that Ballmer kicked off with a vox pop video in which various passers-by were asked what they thought about cloud computing. Naturally they had no idea, the implication being, I suppose, that the cloud is some new thing that most people are not yet aware of. Ballmer did not spell out why Microsoft made the video, but I suspect he was trying to reassure himself and others that his company is not too late.

I thought the vox pop was misconceived. Cloud computing is a technical concept. What if you did a vox pop on the graphical user interface, or concurrency, or Unix, or SQL? You would get equally baffled responses.

It was an interesting contrast with Google’s Eric Schmidt, whose talk at last month’s Mobile World Congress was also a big strategy statement; I posted about it here. Schmidt takes the cloud for granted. He does not treat it as the next big thing, but as something that is already here. His talk was both inspiring and chilling. It was inspiring in the sense of what is now possible – for example, that you can go into a restaurant, point your mobile at a foreign-language menu, and get back an instant translation, thanks to Google’s ability to mine its database of human activity. It was chilling in its implications for privacy, and in Schmidt’s seeming disregard for them.

Ballmer on the other hand is focused on how to transition a company whose business is primarily desktop operating systems and software to one that can prosper in the cloud era:

If you think about where we grew up, other than Windows, we grew up with this product called Microsoft Office. And it’s all about expressing yourself. It’s e-mail, it’s Word, it’s PowerPoint. It’s expression, and interaction, and collaboration. And so really taking Microsoft Office to the cloud, letting it run in the cloud, letting it run from the cloud, helping it let people connect and communicate, and express themselves. That’s one of the core kind of technical ambitions behind the next release of our Office product, which you’ll see coming to market this June.

Really? That’s not my impression of Office 2010. It’s the same old desktop suite, with a dollop of new features and a heavily cut-down online version called Office Web Apps. The problem is not only that Office Web Apps is designed to keep you dependent on offline Office. The problem is that the whole model is wrong. The business model is still based on the three-year upgrade cycle. The real transition comes when the Web Apps are the main version, to which we subscribe, which get constant incremental updates and have an API that lets them participate in mash-ups across the internet.

That said, there are parallels between Ballmer’s talk and Schmidt’s. Ballmer spoke of five dimensions:

  • The cloud creates opportunities and responsibilities
  • The cloud learns and helps you learn, decide and take action
  • The cloud enhances your social and professional interactions
  • The cloud wants smarter devices
  • The cloud drives server advances

In the most general sense, those are similar themes. I can even believe that Ballmer, and by implication Microsoft, now realises the necessity of a deep transition, not just the addition of a few features to Office and Windows. I am not sure, though, that it is possible for Microsoft as we know it, a company built on Windows, Office and partners.

Someone asks if Microsoft is just reacting to others. Ballmer says:

You know, if I take a look and say, hey, look, where am I proud of where we are relative to other guys, I’d point to Azure. I think Azure is very different than anything else on the market. I don’t think anybody else is trying to redefine the programming model. I think Amazon has done a nice job of helping you take the server-based programming model, the programming model of yesterday that is not scale agnostic, and then bringing it into the cloud. They’ve done a great job; I give them credit for that. On the other hand, what we’re trying to do with Azure is let you write a different kind of application, and I think we’re more forward-looking in our design point than on a lot of things that we’re doing, and at least right now I don’t see the other guy out there who’s doing the equivalent.

Sorry, I don’t buy this either. Azure does have distinct advantages, mainly to do with porting your existing ASP.NET application and integrating with existing Windows infrastructure. I don’t believe it is “scale agnostic”; something like Google App Engine is better in that respect. With Azure you have to think about how many virtual machines you want to purchase. Nor do I think Azure lets you write “a different kind of application”: there is too little multi-tenancy, and too much of the old Windows server model remains in Azure.
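To make that last point concrete, here is a minimal sketch of the Azure worker role pattern, based on the RoleEntryPoint class in the Azure SDK (my own illustration, not official sample code). It is recognisably a traditional long-running Windows process, with the number of instances fixed in the service configuration rather than managed transparently by the platform:

```csharp
// Sketch of an Azure worker role (assumes the Windows Azure SDK).
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Initialisation. How many instances run this code is set in the
        // service configuration; the platform does not scale it for you.
        return base.OnStart();
    }

    public override void Run()
    {
        // The work loop of a traditional Windows service, now in the cloud.
        while (true)
        {
            // ... do work ...
            Thread.Sleep(10000);
        }
    }
}
```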

Finally, I am surprised how poor Microsoft has become at articulating its message. Azure was badly presented at last year’s PDC, which Ballmer did not attend. It is not an attractive platform for small-scale developers, which makes it hard for them to get started.

Windows Phone 7 incompatibility may drive developers elsewhere

Microsoft’s Charlie Kindel has blogged about the Windows Phone 7 development platform.

As widely leaked, the new mobile device supports Silverlight and XNA; Kindel also mentions .NET, but since both Silverlight and XNA are .NET platforms, that might not mean anything additional.

The big story is about compatibility:

To deliver what developers expect in the developer platform we’ve had to change how phone apps were written. One result of this is previous Windows mobile applications will not run on Windows Phone 7 Series.

This puts Microsoft in an awkward position. Support for custom business apps has been one of the better aspects of Windows Mobile. What Microsoft should do is provide some way of continuing to run those old apps on the new devices. Instead, Kindel adds:

To be clear, we will continue to work with our partners to deliver new devices based on Windows Mobile 6.5 and will support those products for many years to come, so it’s not as though one line ends as soon as the other begins.

I would not take much account of this. No doubt there will be some devices, but demand for Windows Mobile will dive through the floor (if it has not already) once Phone 7 is available, making it an unattractive proposition for hardware partners.

The danger for Microsoft is that after this let-down, those with existing Windows Mobile apps that are now forced to choose a new development platform might choose one from a competitor.

The mitigation is that apps which use the Compact Framework will likely be easier to port to Windows Phone 7, because the language is the same. Native code apps are a different matter. Of course it will be technically possible to write native code apps for Windows Phone 7, but this will probably be locked down and restricted to special cases, such as perhaps the Adobe Flash runtime (I am speculating here).
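To make the porting point concrete, here is a hedged sketch (my own illustration) of the kind of Compact Framework code that is common in Windows Mobile business apps. The C# logic would carry over to Windows Phone 7’s Silverlight or XNA model; the P/Invoke call into coredll.dll, the Windows CE system library, would not, since native interop is off the table:

```csharp
// Typical Windows Mobile / Compact Framework native interop.
using System.Runtime.InteropServices;

class DeviceHelper
{
    // SystemIdleTimerReset is a real Windows CE API; calls like this
    // have no equivalent on Windows Phone 7, which does not allow P/Invoke.
    [DllImport("coredll.dll")]
    static extern void SystemIdleTimerReset();

    public static void KeepDeviceAwake()
    {
        SystemIdleTimerReset();
    }
}
```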

PS – I see that developer Thomas Amberg has articulated exactly these concerns in a comment to Kindel’s post:

Platform continuity was the single most important feature of Windows Mobile. Being able to run code from 2003 on a current phone is more important to our customers than a fancy UI (which Microsoft seems not able to get right anyway). Further, the ability to access hardware specific APIs through P/Invoke has been vital in many of our projects (e.g. to use Bluetooth in the early days). Those advantages have now gone. You just rendered useless years of development work and many thousands of lines of code.

"we will continue to work with our partners to deliver new devices based on Windows Mobile 6.5 and will support those products for many years to come"

You will, I bet. But which device manufacturer will produce such "dead-end" devices?

Time to switch to another mobile OS.

Microsoft’s super-exciting Sky TV on Xbox with social interaction

I’m watching Microsoft’s Steve Ballmer present a session on cloud computing. It’s been underwhelming so far, but I was interested to see how Sky TV will look on Xbox 360 (though I’d readily swap it for BBC iPlayer, which Microsoft seems to be obstructing). The key point: you can watch with your Xbox Live friends and interact during the broadcast.

The broadcast was coming all the way from the UK to west coast USA, which was apparently why the avatars spent some time watching a buffering thermometer. Still, it worked eventually.


More on Ballmer’s cloud perspective later.

Peter Gabriel’s high-res music bargain with Scratch My Back

Peter Gabriel’s Scratch My Back is an intriguing release – an album of cover versions of pop and rock songs, but with an orchestral backing. It actually works, once you set your expectations accordingly.

The thing I want to draw attention to, though, is a remarkable offer that comes with the deluxe version of the CD (worth it anyway for Waterloo Sunset, otherwise unavailable). It includes a code that buys a three-month trial membership of the Bowers & Wilkins Society of Sound site. The details are here:

The stunning super-high quality version of Peter Gabriel’s new album ‘Scratch My Back’ is available now from Society of Sound as a 24-bit FLAC download.

If you have bought an enhanced CD you will have a voucher code entitling you to download the album from us as well as giving you three months full membership. If you don’t own the album you can subscribe for six or twelve months to access it.

This means you get not only the high-res version of Scratch My Back (without Waterloo Sunset, unfortunately), but also “any past albums of the month” on Society of Sound, many of which are also in 24-bit FLAC. I counted 19 albums in all, with artists including David Rhodes, Ennio Morricone, Speed Caravan, Brett Anderson, Charlie Winston, Gwyneth Herbert, Tom Kerstens, Skip McDonald, and the Portico Quartet.

I’ve been working through them and enjoying what I hear.

This still leaves open the question, of course, of whether hi-res is audibly any different from standard CD quality. If this is a question that interests you, as it does me, then you get plenty of material to experiment with. In addition, the overall standard of recording quality here seems excellent.

What next for the BBC and its world-beating website?

The UK’s public broadcaster, the BBC, is in the spotlight, thanks to a new strategy review and the ensuing discussion. I have only just read the review, because of other work, but I think it is significant. The BBC’s Director-General Mark Thompson says:

Clearly the BBC needs the space to evolve as audiences and technologies develop, but it must be far more explicit than it has been in the past about what it will not do. Its commercial activity should help fund and actively support the BBC’s public mission, and never distort or supplant that mission.
Where actual or potential market impact outweighs public value, the BBC should leave space clear for others. The BBC should not attempt to do everything. It must listen to legitimate concerns from commercial media players more carefully than it has in the past and act sooner to meet them. It needs the confidence and clarity to stop as well as to start doing things.

Why such negativity? The essence of the problem is that the BBC has been too successful for some. Commercial broadcasters and web sites have to compete with an organisation that is publicly funded, and they complain that the competition is unfair. The BBC demonstrates the effectiveness of the subscription model, especially when that subscription is all but compulsory: in the UK, you have to pay the licence fee if you have equipment capable of receiving its TV broadcasts.

My main interest is in the BBC website. It is one I use constantly, and I do not think there is anything like it in the world. It offers comprehensive news, features and comment, on a site that is fast and resilient, and without the irritation of advertising. For example, if I want to know the latest state of play in financial markets, I head straight to the BBC’s Market Data page.

The absence of advertising has several benefits. First, it increases confidence in the neutrality of the site. Second, it improves performance – I’m aware that my own blog is slowed down by ad scripts, for example, and I’m not happy about it; but I’m also trying to make business sense out of running the site. Third, it improves usability in other ways, with less distraction and increased space for content. Note though that the BBC site does carry advertising when viewed from non-UK locations.

The BBC web site is an enormous success, the 44th most visited in the world according to Alexa, and the top news site (cnn.com is next at 61) unless you count Yahoo, which is something different to my mind.

So what do you do with a world leader? Cut it, apparently. The report talks about “focusing” the BBC web site by:

  • Halving the number of sections on the site and improving its quality by closing lower-performing sites and consolidating the rest
  • Spending 25% less on the site per year by 2013
  • Turning the site into a window on the web by providing at least one external link on every page and doubling monthly ‘click-throughs’ to external sites

This is made more explicit later in the report:

  • To help ensure that this refocusing takes place, the BBC will spend 25% less on BBC Online by 2013, with a corresponding reduction in staffing levels
  • The number of sections on the site (its ‘top-level directories’, in the form bbc.co.uk/sitename) will be halved by 2012, with many sites closed and others consolidated
  • New investment will be in pursuit of the five content priorities only, and there will be far fewer bespoke programme websites
  • BBC Online will be transformed into a window on the web with, by 2012, an external link on every page and at least double the current rate of ‘click-throughs’ to external sites.

There is an even more explicit section on BBC Online further down (pages 36-37) – the report seems to say the same thing several times with more detail on each iteration – but I won’t quote it all here. I will note that the sections identified for removal are not ones that matter to me, with the possible exception of local news:

Restricting local sites in England to news, sport, weather, travel and local knowledge (where ‘local knowledge’ means supporting BBC initiatives such as Coast and A History of the World in 100 Objects where there is local relevance, but not general feature content)

I do understand the problem here. Consider, for example, UK newspaper sites like the excellent guardian.co.uk – disclaimer – there are a few of my own contributions there. Such sites do not really make money, because they depend on synergy with print media that is in decline, not least because of advertisers turning to the web. There is a big debate in the media industry about whether to charge subscriptions for sites like these, as the New York Times has done, and will do again. However, the existence and quality of the BBC’s free site significantly impairs the prospects for subscriptions to UK newspaper sites.

This, I presume, is why the BBC intends to increase the number of external links; a small compensation for its unfair advantage.

Nevertheless, I think the BBC is mad to consider reducing its online investment. It is against the trend: the web is rising in importance, while traditional broadcasting is in decline. It is bad for the UK, for which the BBC is excellent PR and a genuine service to the world. It is bad for subscribers such as myself, enforced or not, who want the online service to get better, not worse.

Rather than cutting back on the BBC’s most strategic services, I’d favour looking again at the way the BBC is funded and what happens to the licence fee, which is an anomaly. I don’t see any reason in principle why it should not be shared with other organisations that are serving the public interest in news and media.

AVI ADM 9.1 loudspeaker review – should we all go active?

I have reviewed the AVI ADM 9.1 active speaker system. It is distinctive, in that it builds amplification – both pre- and power amps – and a DAC into the loudspeaker cabinets. There is a remote to control volume and to select between two digital inputs and an analogue input.

Why distinctive? Well, most consumer hi-fi is based on passive speakers with an external amplifier. There are lots of active monitors on the market aimed at the professional audio engineer, but most of these lack the pre-amp, DAC and remote.

What is an active speaker? Read the review; but in a nutshell it is one where each speaker driver has a dedicated amplifier, so that the crossover, which divides the audio signal into frequency bands suitable for each driver, works on a low-level signal rather than one that is already amplified. This is well known to reduce distortion. It also means that the amplifiers can be tailored exactly to the characteristics of the speaker drivers, since those are the only drivers they ever have to drive.
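For the curious, here is a toy sketch (my own illustration, nothing like production crossover design) of what a crossover does, expressed as code: split the signal into a low band for the woofer and a high band for the tweeter. In an active speaker this split happens at line level, before amplification:

```csharp
// First-order digital crossover sketch: one-pole low-pass; the high band is the residue.
class SimpleCrossover
{
    readonly double alpha; // smoothing factor derived from the crossover frequency
    double lowState;

    public SimpleCrossover(double crossoverHz, double sampleRateHz)
    {
        double rc = 1.0 / (2.0 * System.Math.PI * crossoverHz);
        double dt = 1.0 / sampleRateHz;
        alpha = dt / (rc + dt);
    }

    public void Process(double sample, out double low, out double high)
    {
        lowState += alpha * (sample - lowState); // one-pole low-pass
        low = lowState;           // would feed the woofer's amplifier
        high = sample - lowState; // would feed the tweeter's amplifier
    }
}
```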

The ADM 9.1 is expensive, but less so than the very high-end active systems on the market from the likes of Naim and Bang & Olufsen.

It raises the question: why are there not more active systems in consumer hi-fi? The short answer is that they do not sell that well, because they are inherently more expensive – you need at least double the number of amplifiers, presuming a two-way loudspeaker.

The long answer, according to AVI, is that the hi-fi industry is wedded to the idea of an upgrade cycle that keeps customers buying more. Passive systems, with several separate boxes, are more amenable to that process.

Microsoft .NET gotchas revealed by Visual Studio team

The Visual Studio Blog makes great reading for .NET developers, and not only because of the product it describes. Visual Studio 2010 is one of the few Microsoft products that has made a transition from native C++ code to .NET managed code – the transition is partial, in that parts of Visual Studio remain native code, but the shell and the editor, two of the core components, are now managed. Visual Studio is also a complex application, and one that is extensible by third parties. Overall, the development effort has put the .NET platform under stress, which is good for the rest of us, because the developers are in a strong position both to understand problems and to get them fixed, even if that means changes to the .NET Framework.

Two recent posts interested me. One is Marshal.ReleaseComObject Considered Dangerous. I have some familiarity with this obscure-sounding topic, thanks to work on embedding Internet Explorer components. It relates to a fundamental feature of .NET: the ability to interact with the older COM component model, which is still widely used. In fact, Microsoft still uses COM for new Windows 7 APIs; but I digress. A strong feature of .NET from its first release is that it can easily consume COM objects, and also expose .NET objects to COM.

The .NET platform manages memory using garbage collection, where the runtime detects objects that are no longer referenced by active code and deletes them. COM on the other hand uses reference counting, maintaining a count of the number of references to an object and deleting the object when it reaches zero.

Visual Studio 2008 and earlier have lots of COM APIs. Some of these were called from .NET code which, for the sake of efficiency, called the method mentioned above, Marshal.ReleaseComObject, to reduce the reference count immediately so that the COM object would be deleted.

Now here comes Visual Studio 2010, and some of those COM APIs are re-implemented as .NET code. For compatibility with existing code, the new .NET code is also exposed as a COM API. Some of that existing code is actually .NET code which wraps the COM API as .NET code. Yes, we have .NET to COM to .NET, a double wrapper. Everything still works though, until you call Marshal.ReleaseComObject on the doubly-wrapped object. At this point the .NET runtime throws up its hands and says it cannot decrement the reference count, because it isn’t really a COM object. Oops.

The post goes on to observe that Marshal.ReleaseComObject is dangerous in any case, because it leaves you with an invalid .NET wrapper. This means you should only call it when the .NET instance is definitely not going to be used again. Obvious, really.
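Here is a hedged sketch (my own, not from the post) of the pattern that advice implies: only release when you are sure the object really is a COM wrapper and nothing will touch it afterwards. The Marshal.IsComObject guard is what saves you in the double-wrapper case described above:

```csharp
using System.Runtime.InteropServices;

static class ComCleanup
{
    // Call only when 'candidate' is definitely finished with.
    public static void ReleaseIfComObject(object candidate)
    {
        // A managed object exposed to COM (the .NET-to-COM-to-.NET double
        // wrapper) is not a runtime-callable wrapper; ReleaseComObject
        // would throw ArgumentException on it.
        if (candidate != null && Marshal.IsComObject(candidate))
        {
            Marshal.ReleaseComObject(candidate); // decrements the COM ref count
        }
        // After this, the .NET wrapper must not be used again.
    }
}
```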

Once you’ve digested that, try this illuminating post on WPF in Visual Studio 2010 – Part 2: Performance tuning. WPF, or Windows Presentation Foundation, is the .NET API for rich graphical user interfaces in desktop Windows applications. Here is an example of why you should read the post, if you work with WPF. Many of us frequently use Remote Desktop to run applications on remote PCs, or on PCs that do not have a screen and keyboard attached. This is a really bad scenario for WPF, which is designed to take advantage of local accelerated graphics. Here’s the key statement:

Over a remote desktop connection, all WPF content is rendered as a bitmap. This is in contrast to GDI rendering, where primitives such as rectangles and text are sent over the wire for reconstruction on the client.

It’s a bad scenario, but mitigated if you use graphics that are amenable to compression, like solid colours. There are also some tweaks introduced in WPF 4.0, like the ability to scroll an area on the remote client, which saves having to re-send the entire bitmap if it has moved.
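One practical mitigation, sketched below on my own initiative (it is not from the post): WPF exposes SystemParameters.IsRemoteSession, so an application can detect a Remote Desktop session and fall back to flat, compressible visuals:

```csharp
using System.Windows;
using System.Windows.Media;

public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();

        if (SystemParameters.IsRemoteSession)
        {
            // Solid colours compress far better over the wire than
            // gradients; decorative animations can be skipped too.
            Background = new SolidColorBrush(Colors.White);
        }
    }
}
```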

Flash 10.1 mobile roadmap confusion, Windows phone support far off

When is the right moment to buy a mobile phone? Usually the answer is not quite yet; and that seems to be the case if you want to be sure of support for Flash Player 10.1, the first full version of the runtime to run on mobile devices. Adobe recently dropped support for Windows Mobile entirely. Adobe’s Antonio Flores said on the company’s forums:

As for WinMo, we have made the tough decision to defer support for that platform until WinMo7.  This is due to the fact that WinMo6.5 does not support some of the critical APIs that we need.

“Defer support” is not straight talking. Windows Phone 7 is by all accounts very different from Windows Mobile, and application compatibility is in question. In addition, the indications so far are that Windows Phone 7 primarily targets consumers in its first release, suggesting that Windows Mobile devices may continue in parallel for a while, to support business applications built for the platform. It is disappointing that Adobe has abandoned its previously announced support; and the story about critical APIs looks suspect, bearing in mind that demos of Flash 10.1 on Windows Mobile have already been shown.

As for Flash on Windows Phone 7, that too looks some way off. Microsoft says it is not opposed to Flash, but that it will not feature in the first release.

There may also be politics here. Microsoft Silverlight competes with Flash, and it looks as if Silverlight is to some extent the development platform for Windows Phone 7. While Flash on Windows Phone 7 would be a selling point for the device, I doubt Microsoft likes the idea of developers choosing Adobe’s platform instead of Silverlight. Equally, I doubt it would break Adobe’s heart if Windows Phone 7 wasn’t much of a success, and if lack of Flash puts off customers, that cannot be helped.

In other words, both companies may want to make haste slowly when it comes to Flash on Windows Phone 7.

When the subject is Apple’s devices, Adobe casts itself as the even-handed runtime vendor doing everything it can to make its platform ubiquitous. However, the more it succeeds in that aim, the more power it has over less favoured platforms. This is a problem inherent to a platform whose implementations all come from a single vendor.

Google Chrome usage growing fast; Apple ahead on mobile web

Looking at my browser stats for February one thing stands out: Google Chrome. The top five browsers are these:

  1. Internet Explorer 40.5%
  2. Firefox 34.1%
  3. Chrome 10.5%
  4. Safari 4.3%
  5. Opera 2.9%

Chrome usage has more than doubled in six months, on this site.

I don’t pretend this is representative of the web as a whole, though I suspect it is a good leading indicator because of the relatively technical readership. Note that although I post a lot about Microsoft, IE usage here is below that on the web as a whole. Here are the figures from NetMarketShare for February:

  1. Internet Explorer 61.58%
  2. Firefox 24.23%
  3. Chrome 5.61%
  4. Safari 4.45%
  5. Opera 2.35%

and from StatCounter:

  1. Internet Explorer 54.81%
  2. Firefox 31.29%
  3. Chrome 6.88%
  4. Safari 4.16%
  5. Opera 1.94%

There are sizeable variations (so distrust both), but similar trends: gradual decline for IE, slight growth for Firefox, dramatic growth for Chrome. Safari, I suspect, tracks Mac usage closely, running a little below it because some Mac users use Firefox. Mobile is interesting too; here is StatCounter:

  1. Opera 24.26%
  2. iPhone 22.5%
  3. Nokia 16.8%
  4. BlackBerry 11.29%
  5. iTouch 10.87%
  6. Android 6.27%
Note that iPhone/iTouch would be top if combined. Note also the complete absence of IE: either Windows Mobile users don’t browse the web, or they use Opera to do so.

I’m most interested in how Chrome usage is gathering pace. There are implications for web applications, since Chrome has an exceptionally fast JavaScript engine. Firefox is fast too, but on my latest quick SunSpider test, Firefox 3.6 scored 998.2ms against Chrome 4.0’s 588.4ms (lower is better). IE 8.0 is miserably slow on this, of course; just for the record, 5075.2ms.

Why are people switching to Chrome? I’d suggest the following. First, it is quick and easy to install, and on Windows it installs into the user’s home directory, so it does not require local administrative rights. Second, it starts in a blink, contributing to a positive impression. Third, Google is now promoting it vigorously – I frequently see it advertised. Finally, users just like it; it works as advertised, and generally does so quickly.