PDC day one: Windows in the cloud

Today was cloud day at PDC. Microsoft announced that Windows Azure will become a production platform on January 1st, with billing starting from February 1st. It also announced the beta of the Windows Server AppFabric role, for on-premise apps that can either stay on-premise or be deployed to Azure later; and some new developments like the Windows Server Virtual Machine role on Azure, a pre-configured Windows Server VM into which you will be able to deploy an application.

Azure was first announced at the 2008 PDC, and had a stuttering start, with a CTP (Community Technology Preview) that was difficult to use, major changes to SQL Server Data Services – a simplified cloud database that was scrapped and replaced with full SQL Server – and generally poor marketing from Microsoft. I was not sure whether the company was serious about Azure, or merely trying to tick the cloud box.

I do now think it is serious, and delivering some interesting technology for easily scalable cloud-hosted applications. Microsoft does not see its cloud services as replacing your in-house servers (no surprise there), but more as a way of deploying certain kinds of web applications. A great feature is that, thanks to Active Directory Federation Services in combination with the new .NET library called Windows Identity Foundation, you can relatively easily have your Azure applications authenticate against your internal Active Directory.

The surprise of the day was when Matt Mullenweg of WordPress fame turned up to demo WordPress running on Azure, which now supports PHP and MySQL as well as Java applications. Another unexpected guest was Loic Le Meur of Seesmic, who introduced Seesmic for Windows and also talked about a coming Silverlight version.

That said, the keynote did not exactly crackle with excitement. Microsoft seemed almost to downplay what is now possible with Azure, perhaps sensing that it could be disruptive to its own business model. A telling moment came during a press briefing when Doug Hauger, Azure General Manager, denied that Windows or Office were in any sort of decline. Despite his position he seems to be under the illusion that we will happily continue with our fragile on-premise, single platform, micro-managed IT systems.

I enjoyed the day though. The beauty of PDC is that Microsoft rolls out its best speakers; it was great to hear Mark Russinovich explain the kernel changes in Windows 7 and Server 2008 R2 – same kernel of course – and I will be writing more about the session shortly.

I’m expecting more focus on Office, Silverlight and Visual Studio tomorrow, when Steven Sinofsky, Scott Guthrie and Kurt DelBene will be giving the keynote, and hoping for some compelling announcements.

A critical PDC for Microsoft

I’m in Los Angeles for Microsoft’s Professional Developers Conference – one that has had a strangely subdued build-up. I have been to many PDCs but this one is different. One thing I’ve noticed is that a combination of the difficult economy and a rumoured shortfall in attendance has resulted in some obvious slimming-down: a flimsy bag, no breakfast for attendees, no free shuttle to the airport on the last day, no big party at LA Universal Studios.

There’s always a case for less extravagance at conferences; but it conveys a subtle PR message that isn’t a good one for Microsoft.

What matters though is the content. Clearly there are two strands to this. One is the regular turning of the Microsoft wheel – Windows 7 development, new Office, new SharePoint, maybe a new IE and a new Silverlight.

Microsoft has to do this; but there is no escaping the fact that the world is changing, and that bloated desktop apps and complex in-house servers and server applications are not the wave of the future.

I am still mulling over something said to me recently by an IT admin in education, when I was researching the progress there of Google Apps and Microsoft Live@Edu. He had overseen a migration of student email to Google Apps, over 20,000 accounts, and I asked him what problems he had encountered. I’ve been in IT for years, he told me, and there are always unexpected issues; but this time there really were none.

So the other theme at PDC is whether Microsoft’s cloud efforts can get off the ground and compete in this new world. The interesting thought is this: even if Windows Azure is a wild success, and if the Live properties start to perform, what chance does Microsoft have of even sustaining its current level of revenue and profit?

In practice, the company will always be under irresistible pressure to use any cloud success to promote the products from which it makes its money: Windows and Office. And that in turn will undermine its cloud efforts, as users realise they are not getting the liberation from hefty client-side dependencies which is inherent to a true cloud story.

Just to remind you: check out the “online” section of recent Microsoft financial reports.

That said, there is an unexpected twist in the run-up to PDC, which is the gathering Google backlash. The must-read is Tim O’Reilly’s War for the Web:

We’re heading into a war for control of the web. And in the end, it’s more than that, it’s a war against the web as an interoperable platform. Instead, we’re facing the prospect of Facebook as the platform, Apple as the platform, Google as the platform, Amazon as the platform, where big companies slug it out until one is king of the hill.

And it’s time for developers to take a stand. If you don’t want a repeat of the PC era, place your bets now on open systems. Don’t wait till it’s too late.

O’Reilly closes his piece with a thought-provoking comment:

P.S. One prediction: Microsoft will emerge as a champion of the open web platform, supporting interoperable web services from many independent players, much as IBM emerged as the leading enterprise backer of Linux.

It sounds unlikely; but where do you go if your mood is “anything but Google”? We could see some surprising new alliances; though I honestly do not see the Windows-Office empire within Microsoft accepting that kind of role under the current leadership.

The PDC is generally where Microsoft sets out its strategy for the coming year or more. It had better be good.

Xobni makes its point on the streets of LA

I’m in Los Angeles just before Microsoft’s Professional Developers Conference, where one of the themes will be Office 2010 and its new features. Yesterday, though, the streets near the conference centre were full of “ambassadors” for Xobni, handing out T-shirts to attendees going to pre-conference events and promising a chance of cash prizes to anyone seen wearing one at the show.

Xobni is an Outlook add-in (read the name backwards) that pulls out contact details in a side panel as you read your emails, complete with previous activities and social connections from Facebook, Twitter, Salesforce.com and elsewhere.

According to the ambassador I spoke to, there is a similar feature in Outlook 2010; and the company is hoping that the hall will be filled with Xobni T-shirts when this is announced.

You have to give the company credit for its initiative.

Every add-on vendor has this problem – what to do if your feature ends up baked into the main product.

Picture to follow.


Have Windows OEM vendors learnt anything from Apple?

I’ve just set up a new consumer Windows 7 PC – HP’s Compaq Presario CQ5231UK, not bad value at £399 (VAT included) with Core 2 Duo E7500 (2.93GHz), 3GB RAM, Windows 7 Home Premium 64-bit – yes, 64-bit Windows really is mainstream now – 500GB hard drive and NVIDIA G210 graphics.

For comparison, the cheapest current Apple Mac is the Mini at £499 – it’s not directly comparable since its neat compact size is worth a premium, but it is slightly less well specified with slower processor, 2GB RAM and 160GB drive. As for an iMac, this comes with a screen but costs more than twice as much as the HP Compaq.

A good deal then; but have Microsoft’s efforts to make Windows 7 “quieter” and less intrusive been wrecked by OEM vendors who cannot resist bundling deals with third parties, otherwise known as crapware?

I draw your attention to my interview with Microsoft’s Bill Buxton last year, when I raised this point. He said:

Everybody in that food chain gets it now. Everybody’s motivated to fix it. Thinking about the holistic experience is much easier now than it was two years ago.

I was interested therefore to see what sort of experience HP delivers with one of its new home PCs. Unfortunately I forgot to keep a full list, but I removed a number of add-ons that the user agreed were unwanted.

I also removed a diagnostics tool called PC-Doctor and an HP utility that stuck itself prominently on the desktop, HP Advisor Dock. It is possible that these tools might in some circumstances be useful, though I’m wary. I have no idea why HP has decided to supply its own Dock accessory after Microsoft’s efforts with the Windows 7 Taskbar.

We left in place an application called HP Games which is a branded version of WildTangent ORB and includes some free games.

The short answer is that the Windows ecosystem has not changed. The deal is that your cheap PC is subsidised by the trialware that comes with it. Another issue is OEM utilities – like HP’s Advisor Dock – which jar with the careful design Microsoft put into Windows 7 and offer overlapping functionality with what is built in.

In mitigation, Windows 7 runs so well on current hardware that even this budget PC offers snappy performance. I also had no difficulty removing the unwanted add-ons. The speed of setup, and the number of restarts required, were much better than I recall from the last Toshiba laptop I set up.

Nevertheless, on the basis of this example there is still work to do if the experience of starting with a Windows PC is to come close to that offered by the Mac. Further, bundling anti-malware software that requires a subscription is actually a security risk, since a proportion of users will not renew and therefore end up without updates. I would be interested in other reports.


Google’s new language: Go

Google has a new language. The language is called Go, though issue 9 on the bug tracker is from the inventor of another language called Go and asks for a name change. Co-inventor Rob Pike says [PDF] that Google’s Go is a response to the problem of long build times and uncontrolled dependencies; fast compilation is an important feature. It is a garbage-collected language with C-like syntax – echoes of Java and C# there – and has strong support for concurrency and communication. Pike’s examples in the paper referenced above do show a simple and effective approach to communication, with communication channel objects, and to concurrency, with Goroutines.
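
As a flavour of that approach, here is a minimal sketch of my own – not one of Pike’s examples, and the worker function and its squaring logic are purely illustrative – showing a goroutine feeding results back through a typed channel:

    package main

    import "fmt"

    // worker squares each value received on in and sends the result on out,
    // closing out once in is drained.
    func worker(in <-chan int, out chan<- int) {
        for v := range in {
            out <- v * v
        }
        close(out)
    }

    func main() {
        in := make(chan int)
        out := make(chan int)

        go worker(in, out) // "go" starts the function as a concurrent goroutine

        go func() {
            for i := 1; i <= 5; i++ {
                in <- i
            }
            close(in)
        }()

        for result := range out { // range reads until the channel is closed
            fmt.Println(result)
        }
    }

Channels are first-class, typed values, and on an unbuffered channel a send or receive blocks until the other side is ready, which is what makes this style of communication work without explicit locks.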

Go runs only on Linux or Mac OS X. I installed it on Ubuntu and successfully compiled and ran a one-line application. I used the 32-bit version, though apparently the 64-bit implementation is the most advanced.
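
For what it’s worth, my test program amounted to little more than this – and treat the build commands in the comment as an approximation, since they are roughly what the 32-bit 8g/8l toolchain expects:

    // hello.go – built with something like: 8g hello.go && 8l hello.8
    // which yields an 8.out executable to run
    package main

    import "fmt"

    func main() {
        fmt.Println("Hello from Go") // the one line that does the work
    }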

Pike claims that performance is “typically within 10%-20% of C”. No debugger yet, but in preparation. No generics yet, but planned long-term. Pointers, but no pointer arithmetic.

Go does not support type inheritance, but “Rather than requiring the programmer to declare ahead of time that two types are related, in Go a type automatically satisfies any interface that specifies a subset of its methods.”
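
To see what that means in practice, here is a short sketch – the Stringer interface and Point type are my own illustrative names – in which Point never declares that it implements anything, yet can be passed wherever a Stringer is wanted:

    package main

    import "fmt"

    // Stringer is satisfied by any type that has a String() string method;
    // there is no "implements" declaration anywhere.
    type Stringer interface {
        String() string
    }

    type Point struct {
        X, Y int
    }

    // Having this method is all it takes for Point to satisfy Stringer.
    func (p Point) String() string {
        return fmt.Sprintf("(%d, %d)", p.X, p.Y)
    }

    func describe(s Stringer) {
        fmt.Println(s.String())
    }

    func main() {
        describe(Point{2, 3}) // prints (2, 3)
    }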

Google has many projects, and while Go looks significant, it is dangerous to make assumptions about its future importance.

I don’t think Google is doing this just to prove that it can; I think it is trying to solve some problems and doing so in an interesting way.


Surveys are useless

I’m at Microsoft Tech-Ed in Berlin where 7000-odd IT admins and developers (though more admins) are looking at Microsoft technology.

I was browsing round the stands in the Technical Learning Centre here when I came to one where the technical documentation team at Microsoft was handing out a survey. Fill in the survey, get a plastic rocket. I looked through the survey, in which you had to rate innumerable aspects of the documentation on Microsoft’s technical resource sites (MSDN, TechNet etc).

I refused to complete it, on the grounds that it would not yield anything of value. I can put numbers in boxes as well as anyone else, but they tend to be arbitrary, and all too often the real answers cannot be easily condensed into a 1 to 5 rating. I said that the way to find out what people thought of the documentation was to ask them, not to get them putting numbers on a form.

Inevitably, the guys asked me that question, and we had a discussion of the issues I’ve found with the sites, including:

  • Broken links. I don’t think Microsoft should ever delete a knowledgebase entry. Mark them obsolete or even wrong, but don’t remove them.
  • Too many locations with overlapping content – MSDN, Technet, specialist sites, team blogs etc.
  • Documentation that states the obvious – eg how to enable or disable a feature – but neglects to mention the interesting stuff like why you would want to enable or disable it and what the implications are.
  • Documentation that is excessively verbose or makes you drill down into link after link before finding the real content.
  • Documentation that is not clearly dated, so that you might be reading obsolete advice.

Anyway, I felt I had a worthwhile discussion and was listened to; whereas completing the survey would not have brought out these points effectively.

Love and hate for Microsoft Small Business Server

I’ve just completed a migration from Small Business Server 2003 to 2008. I’ve worked on and off with SBS since version 4.0, and have mixed feelings about the product. It has always been great value, but massive complexity lurks not far beneath its simple wizards.

The difficulty of migration is probably its worst feature: the server chugs along for a few years, gradually outgrowing its hardware, and then when the time comes for a new server customers are faced with either starting from scratch with a clean install – setting up new accounts, importing mailboxes, removing every client machine from the old domain and joining it to a new one – or else a painful migration.

I took the latter route, and also decided to go virtual on Hyper-V Server 2008 R2. In most important respects it went smoothly: Active Directory behaved itself, and the Exchange mailboxes all came over cleanly.

Still, several things struck me during the migration. Microsoft has a handy 79-page step-by-step document, but anyone who thinks that carefully following the steps will guarantee success will be disappointed. There are always surprises; the document does not properly cover DHCP, for example. The migration is surprisingly messy in places. The new SBS has different sets of permissions from the old one, and after the upgrade you have to somehow merge the two. The process is not fully automated, and there is plenty of manual editing of various settings.

Even migrating SBS 2008 to SBS 2008, for a new server, has brought forth a 58-page document from Microsoft.

Then there are the errors to deal with. There are always errors. You have to figure out which ones are significant and how to fix them. I would like to meet a Windows admin who could look me in the eye and say they have no errors in their event log.

Things got bad when I applied the updates needed to bring the server up to date. At one point SharePoint broke completely and could not contact its configuration database. There’s also the mystery of security update KB967723, which Windows Update installed, insisting that it was “important”, and which then generated the following logged message 79 times in the space of a few seconds:

Windows Servicing identified that package KB967723(Security Update) is not applicable for this system

Nevertheless, a little tender care and attention got the system into reasonable shape. It is even smart enough to change Outlook settings to the new server automatically. A great feature of the migration is that email flow is never interrupted.

One problem: although running SBS virtual is a supported configuration, the built-in backup system doesn’t handle it well, because it assumes use of external USB drives which Hyper-V guests cannot access directly. There are many solutions, none perfect, and it appears that Microsoft did not think this one through.

That said, the virtual solution has some inherent advantages for backup and restore, the main one being that you can guarantee identical hardware for disaster recovery. If you shut the guests down and back up the host, or export the VM, you have a reliable system backup. You can also back up a running guest from the host, though in my experience this is more fragile.

Migrating an SBS system is actually harder than working with grown-up Windows systems on separate servers (or virtual servers) because it all has to be done together. I reckon Microsoft could do a better job with the tools; but it is a complex process with multiple potential points of failure.

The experience overall does nothing to shake my view that cloud-based services are the future. I would like to see SBS become a kind of smart cache for cloud storage and services, rather than being a local all-or-nothing box that can absorb large amounts of troubleshooting time. Microsoft is going to lose a lot of this SME business, because it has ploughed on with more of the same rather than helping its existing SBS customers to move on.

Nevertheless, if you have made the decision to run your own email and collaboration services, rather than being at the mercy of a hosted service, SBS 2008 does it all.

Migrating to Hyper-V 2008 R2

I have a test setup in my office which runs mostly on Hyper-V. It is a kind of home-brew small business server, with Exchange, ISA and SharePoint all running on separate VMs. I’ve followed Microsoft’s advice and kept Active Directory on a separate physical server. Until today, Hyper-V itself was running on Server 2008.

I’m reviewing Hyper-V Server 2008 R2, so I figured it would be interesting to migrate the VMs. I attached an external USB drive, shut down the VMs and exported them. Next, I verified that there was nothing else I needed to preserve on that machine, and set about installing Hyper-V Server 2008 R2 from scratch.

Aside: when I first set this up I broke the rules by having Active Directory on the Hyper-V host. That worked well enough in my small setup; but I realised that you lose some of the benefit of virtualisation if you have anything of value on the host, so I moved Active Directory to a separate box.

I wish I could tell you that the migration went smoothly. Actually, from the Hyper-V perspective it did go smoothly. However, I had an ordeal with my server, a cheapie HP ML110 G5. The driver for the embedded Adaptec SATA RAID did not work with Hyper-V Server 2008 R2, and I couldn’t find an update, so I disabled the RAID. The driver for my second network card also didn’t work, and I had to replace the card. Finally, my efforts at updating the BIOS had landed me with a known problem on this server: the fans staying at maximum speed and deafening volume. Fortunately I found this thread, which gives a fix: installing upgraded firmware for HP’s Lights-Out Remote Management as well. Blissful (near) silence.

Once I’d got the operating system installed successfully, bringing the VMs back on line was a snap. I used the console menu to join the machine to the domain, set up remote management, and configure the network cards. Next, I copied the exported VMs to the new server, imported them using Hyper-V manager running on Windows 7, and shortly afterwards everything was up and running again. I did get a warning logged about the integration services being out-of-date, but this was easy to upgrade. I’m hoping to see some performance benefit, since my .vhd virtual drives are dynamic, and these are meant to be much faster in the R2 update.

Although I’m impressed with Hyper-V itself, some aspects of Hyper-V Server 2008 R2 are lacking. Mostly this is to do with Server Core. Shipping a cut-down Server OS without a GUI is a great idea in itself, but Microsoft either needs to make it easy to manage from the command line, or easy to hook up to remote tools. Neither is the case. If you want to manage Hyper-V from the command line you need this semi-official management library, which seems to be the personal project of technical evangelist James O’Neill. Great work, but you would have thought it would be built into the product.

As for remote tools, the tools themselves exist, but getting the permissions right is such an arcane process that another dedicated Microsoft individual, program manager John Howard, wrote a script to make it possible for humans. It is not so bad with domain-joined hosts like mine, but even then I’ve had strange errors. I haven’t managed to get Device Manager working remotely yet – “Access denied” – and sometimes I get a Kerberos error, “network path not found”.

Fortunately there’s only occasional need to access the host once it is up and running; it seems very stable and I doubt it will require much attention.

Sophos Windows 7 anti-virus test tells us nothing we don’t already know

Sophos is getting good publicity for its latest sales pitch: a virus test of Windows 7. This tells us:

We grabbed the next 10 unique samples that arrived in the SophosLabs feed to see how well the newer, more secure version of Windows and UAC held up. Unfortunately, despite Microsoft’s claims, Windows 7 disappointed just like earlier versions of Windows. The good news is that, of the freshest 10 samples that arrived, 2 would not operate correctly under Windows 7.

Unfortunately Chester Wisniewski from Sophos is vague about his methodology, though he does say that Windows 7 was set up in its default state and without anti-virus installed. The UAC setting was on its new default, which is less secure (and intrusive) than the default in Windows Vista.

My presumption is that he copied each virus to the machine and executed it – and was apparently disappointed (or more likely elated) to discover that 8 out of 10 examples infected the machine.

It might be more accurate to say that he infected the machine, when he copied the virus to it and executed it.

I am not sure what operating system would pass this test. What about a script, for example, that deleted all a user’s documents? UAC would not attempt to prevent that; users have the right to delete their own documents if they wish. Would that count as a failure?

Now, it may be that Wisniewski means that these executables successfully escalated their permissions. This means, for example, that they might have written to system locations which are meant to be protected unless the user passes the UAC prompt. That would count as some sort of failure – although Microsoft has never claimed that UAC will prevent it, particularly if the user is logged on with administrative rights.

If this were a serious study, we would be told what the results were if the user is logged on with standard user rights (Microsoft’s long-term goal), and what the results were if UAC is wound up to its highest level (which I recommend).

Even in that case, it would not surprise me if some of the malware succeeded in escalating its permissions and infecting system areas, though it would make a more interesting study. The better way to protect your machine is not to execute the malware in the first place. Unfortunately, social engineering means that even skilled users make mistakes; or sometimes a bug in the web browser enables a malicious web site to install malware (that would also be a more interesting study). Sometimes a user will even agree to elevate the malware’s rights – UAC cannot prevent that.

My point: the malware problem is too important to trivialise with this sort of headline-grabbing, meaningless test.

Nor do I believe the implicit message in Wisniewski’s post, that buying and installing Sophos will make a machine secure. Anti-virus software has by and large failed to protect us, though undoubtedly it will prevent some infections.

See also this earlier post about UAC and Windows security, which has links to some Microsoft statements about it.


The cloud in education: Google Apps vs Live@Edu

I’ve been researching the use of cloud apps in education for a talk I am giving next week. I’m normally more business-focused, and it’s been interesting to uncover another area where Microsoft and Google are in hot competition. Both companies are happy to give educational institutions free cloud email and collaboration services; and the offer is being snapped up by colleges and universities hard-pressed for money and tired of fighting spam-clogged inboxes. 

Microsoft has first mover advantage here: Live@Edu has been around since March 2005 as a service based on Hotmail, though its evolution into a fuller collaboration system is more recent, whereas Google Apps for Education did not appear until October 2006. They are both generous schemes – of course the providers want to get students hooked on their stuff – and as far as I can tell both are well liked.

What is interesting is to look at the points of differentiation, which show the contrasting approaches of these two companies. Microsoft is pursuing its “software plus services” strategy, which means desktop applications still play an important role. The email is Exchange-based, so you can use other email clients, but only Outlook on Windows will deliver full features. Document collaboration is based primarily on cloud storage rather than editing, though when Office Web Apps appear next year users will have some lightweight editing tools.

Google on the other hand is primarily web based, with desktop support as an add-on. Google has the lead when it comes to online document editing, since it has had Google Docs for some time, whereas Office Web Apps are still in beta. Google has no bias towards Windows and Office. With Google, a document’s primary existence is in the cloud, although you can export and import with possible loss of data or formatting.

Something else I noticed is that Google has big plans for integration with mobile devices, whereas Microsoft seems mainly concerned with Exchange synchronisation.

Microsoft’s pitch is that if you live in Windows anyway, with Exchange and SharePoint on the server, and Windows and Office on the client, then its cloud service integrates nicely. Google on the other hand is more revolutionary, not caring about what you run as long as you can connect to its services.

Although the software plus services idea has attractions, it sounds more like a transitional strategy than one for the long term. Over time, as the web platform gets more powerful, and as rich internet applications take over from pure desktop applications, the services part will grow absolutely dominant.

Google is a cooler brand than Microsoft, which helps its case when students are asked which platform they prefer.

Has anyone tried both platforms? Or even just one of them? I’d be interested in hearing your comments.