Category Archives: virtualization

Notes from the field: virtualising an existing Windows server using UEFI and Secure Boot

Over the weekend I had the task of converting an existing Windows server running on HP RAID to a virtual machine on Hyper-V. This is a very small network with only one server, so nice and simple. I used the Sysinternals tool Disk2vhd, which converts the drives on an existing server to a single VHD or VHDX. It’s a nice tool that uses shadow copy to make a consistent snapshot.

The idea is that you then take your VHDX and make it the drive for a new VM on the target host, in my case running Server 2019. Unfortunately my new VM would not boot. Generally there are three things that can happen in these cases. One, the VM boots fine. Two, it tries to boot but comes up with a STOP error. Three, it just sits there with a flashing cursor and nothing happens.

At this point I should say that Microsoft does not really support this type of migration. It is considered something that might or might not work, at the user’s risk. However I have had success with it in the past, and when it works it does save a lot of time, especially in small setups like this, because the new VM is a clone of the old server with all the shared folders, printer drivers, applications, databases and other configuration ready to go.

Disclaimer: please consider this procedure unsupported and if you follow any tips here do not blame me if it does not work! Normally the approach is to take the existing server off the network, do the P2V (Physical to Virtual), run up the new VM and check its health. If it cannot be made to work, scrap the idea, fire up the old server again, and do a migration to a new VM using other techniques, re-install applications and so on.

In my case I got a flashing cursor. What this means, I discovered after some research, is that there is no boot device. If you get a STOP error instead, you have a boot device but there is some other problem, usually with accessing the storage (see notes below about disabling RAID). At this point you will need an ISO of Windows Server xxxx (matching the OS you are troubleshooting) so you can run the troubleshooting tools. I downloaded Hyper-V Server 2016, which is nice and small and has the tools.

Note that if the source server uses UEFI boot you must create a generation 2 Hyper-V VM. Well, either that or go down the rabbit hole of converting the GPT partitions to MBR without wiping the data so you can use generation 1.
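If you are creating the VM from PowerShell on the host, here is a minimal sketch; the VM name, memory size and VHD path are illustrative assumptions:

New-VM -Name "P2V-Server" -Generation 2 -MemoryStartupBytes 4GB -VHDPath "D:\VHDs\server.vhdx"
Start-VM -Name "P2V-Server"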

For troubleshooting, the basic technique is to boot into the Windows recovery tools and then the command prompt.

I am not sure if this is necessary, but the first thing I did was to run regedit, load the system hive using the Load Hive option, and set the Intel RAID controller entries to zero. What this does is tell Windows not to look for an Intel RAID for its storage. Essentially, within the loaded hive go to ControlSetXXX\Services (usually XXX is 001 but it might not be) and find the entries, if they exist, for:

  • iaStor
  • iaStorAVC
  • iaStorAV
  • iaStorV
  • storAHCI

and set the Start or StartOverride parameters to 0. This even works for storAHCI, since a value of 0 means load at boot and 3 means load on demand.
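The same edit can be done without regedit, from the recovery command prompt. A minimal sketch, in which the hive name OFFLINE and the C: drive letter are my own assumptions:

reg load HKLM\OFFLINE C:\Windows\System32\config\SYSTEM
reg add HKLM\OFFLINE\ControlSet001\Services\iaStorV /v Start /t REG_DWORD /d 0 /f
rem (repeat the reg add for each of the services listed above that is present)
reg add HKLM\OFFLINE\ControlSet001\Services\storahci\StartOverride /v 0 /t REG_DWORD /d 0 /f
reg unload HKLM\OFFLINE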

The VM still would not boot. Flashing cursor. I am grateful for this thread on EightForums which explains how to fix EFI boot. My problem, I discovered via the diskpart utility, was that my EFI boot partition, which should show as a small, hidden, FAT32 partition, was instead showing as RAW, meaning no filesystem.

The solution, which I am copying here in case the link fails in future, was to run the following within the recovery command prompt for the failing VM. The bracketed comments are notes, not to be typed.

diskpart
list disk
select disk # (# = disk number for the disk with the EFI partition)
list partition (note the size of the old or presumed EFI partition, which will be small and hidden)
select partition # (# = EFI partition)
delete partition override (clears the RAW partition so it can be recreated; override is required for system partitions)
create partition efi size=# (size of old partition, mine was 99)
format quick fs=fat32 label="SYSTEM"
assign letter="S"
exit

Then, assuming C: is still the drive letter assigned to your Windows partition, type:

C:\Windows\System32\bcdboot C:\Windows
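If bcdboot does not find the new EFI partition on its own, you can point it there explicitly; a variant using the S: letter assigned above, where /s selects the target system partition and /f the firmware type:

C:\Windows\System32\bcdboot C:\Windows /s S: /f UEFI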

This worked perfectly for me. The VM booted, spent a while detecting devices, following which everything was straightforward.

Final comment: although it is unsupported, the Windows engineers have done an amazing job enabling Windows to boot on new hardware with relatively little fuss in most cases. You will of course end up with lots of hidden missing devices in Device Manager; these can be cleaned up with care, though I don’t think they do much harm.

Hyper-V compatible Android emulator now available

An annoying issue for Android developers on Windows is that the official Android emulator uses Intel’s HAXM hypervisor platform, which is incompatible with Microsoft’s Hyper-V.

The pain of dual-booting just to run the Android emulator is coming to an end. Google has announced that the latest release of the Android Emulator will support Hyper-V on both AMD and Intel PCs. This is a relief to Docker users, for example, since Docker now uses Hyper-V by default.

Google Product Manager Jamal Eason has made a rather confusing post, positioning the new feature as mainly for the benefit of developers with AMD processors. Intel HAXM does not work with AMD processors. “Thanks to on-going development by Intel, the fastest emulator performance on Windows is still with Intel HAXM,” says Eason, stating that HAXM remains the default on Intel PCs and is recommended.

However the new Hyper-V support works fine on Intel as well as AMD PCs. The official docs say:

Though we recommend using HAXM on Windows, it is possible to use Windows Hypervisor Platform (WHPX) with the emulator. Situations in which you should use WHPX with the emulator are the following:

  • You need to use Hyper-V at the same time.
  • You are using an AMD CPU.

The new feature is “thanks to a new Microsoft Windows Hypervisor Platform (WHPX) API and recent open-source contributions from Microsoft,” says Eason.
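To try it you need the Windows Hypervisor Platform feature enabled. A minimal sketch from an elevated PowerShell prompt, followed by a reboot; HypervisorPlatform is the feature name current Windows 10 builds use, but treat it as an assumption for your build:

Enable-WindowsOptionalFeature -Online -FeatureName HypervisorPlatform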

It is another case of Microsoft doing the hard work to make Windows a better platform for developers, even when they are targeting non-Windows platforms (as is increasingly the case).

No more infrastructure roles for Windows Nano Server, and why I still like Server Core

Microsoft’s General Manager for Windows Server Erin Chapple posted last week about Nano Server (under a meaningless PR-speak headline) to explain that Nano Server, the most stripped-down edition of Windows Server, is being repositioned. When it was introduced, it was presented not only as a lightweight operating system for running within containers, but also for infrastructure roles such as hosting Hyper-V virtual machines, hosting containers, file server, web server and DNS Server (but without AD integration).

In future, Nano Server will be solely for the container role, enabling it to shrink in size (for the base image) by over 50%, according to Chapple. It will no longer be possible to install Nano Server as a standalone operating system on a server or VM. 

This change prompted Microsoft MVP and Hyper-V enthusiast Aidan Finn to declare Nano Server all but dead (which I suppose it is from a Hyper-V perspective) and to repeat his belief that GUI installs of Windows Server are best, even on a server used only for Hyper-V hosting.

Prepare for a return to an old message from Microsoft, “We recommend Server Core for physical infrastructure roles.” See my counter to Nano Server. PowerShell gurus will repeat their cry that the GUI prevents scripting. Would you like some baloney for your sandwich? I will continue to recommend a full GUI installation. Hopefully, the efforts by Microsoft to diminish the full installation will end with this rollback on Nano Server.

Finn’s main argument is that the full GUI makes troubleshooting easier. Server Core also introduces a certain amount of friction, as most documentation relating to Windows Server (especially from third parties) presumes you have a GUI, so you have to do some work to figure out how to do the same thing on Core.

Nevertheless I like Server Core and use it where possible. The performance overhead of the GUI is small, but running Core does significantly reduce the number of security patches and therefore required reboots. Note that you can run GUI applications on Server Core, if they are written to a subset of the Windows API, so vendors that have taken the trouble to fix their GUI setup applications can support it nicely.
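Most day-to-day administration happens remotely anyway. A minimal sketch using PowerShell remoting from a domain-joined admin machine, where CORE01 is a hypothetical server name:

Enter-PSSession -ComputerName CORE01
Get-WindowsFeature | Where-Object Installed   # list the roles and features actually installed on the Core box
Exit-PSSession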

Another advantage of Server Core, in the SMB world where IT policies can be harder to enforce, is that users are not tempted to install other stuff on their Server Core Domain Controllers or Hyper-V hosts. I guess this is also an advantage of VMWare. Users log in once, see the command-line UI, and do not try installing file shares, print managers, accounting software, web browsers (I often see Google Chrome on servers because users cannot cope with IE Enhanced Security Configuration), remote access software and so on.

Only developers now need to pay attention to Nano Server, but that is no reason to give up on Server Core.

Microsoft Hyper-V vs VMWare: is System Center the weak point?

The Register reports that Google now runs all its cloud apps in Docker-like containers; this is in line with what I heard at the QCon developer event earlier this year, where Docker was the hot topic. What caught my eye though was Trevor Pott’s comment comparing, not Hyper-V to VMWare, but System Center Virtual Machine Manager to VMWare’s management tools:

With VMware, I can go from "nothing at all" to "fully managed cluster with everything needed for a five nines private cloud setup" in well under an hour. With SCVMM it will take me over a week to get all the bugs knocked out, because even after you get the basics set up, there are an infinite number of stupid little nerd knobs and settings that need to be twiddled to make the goddamned thing actually usable.

VMWare guy struggling to learn a different way of doing things? There might be a little of that; but Pott makes a fair point (in another comment) about the difficulty, with Hyper-V, of isolating the hypervisor platform from the virtual machines it is hosting. For example, if your Hyper-V hosts are domain-joined, and your Active Directory (AD) servers are virtualised, and something goes wrong with AD, then you could have difficulty logging in to fix it. Pott is talking about a 15,000 node datacenter, but I have dealt with this problem at a micro level; setting up Windows to manage a non-domain joined host from a domain-joined client is challenging, even with the help of the scripts written by an enterprising Program Manager at Microsoft. Of course your enterprise AD setup should be so resilient that this cannot happen, but it is an awkward dependency.
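For reference, the WS-Management part of what those scripts automate boils down to trusting the host and caching credentials on the client; a sketch run from an elevated prompt, with HV01 as a hypothetical host name:

Set-Item WSMan:\localhost\Client\TrustedHosts -Value "HV01" -Concatenate -Force
cmdkey /add:HV01 /user:HV01\Administrator /pass

The Hyper-V Manager connection additionally needs DCOM and WMI permissions set up on the host, which is the fiddly part the scripts handle.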

Writing about enterprise computing is a challenge for journalists because of the difficulty of getting hands-on experience or objective insight from practitioners; vendors of course are only too willing to show off their stuff but inevitably they paint with a broad brush and with obvious self-interest. Much of IT is about the nitty-gritty. I do a little work with small businesses partly to get some kind of real-world perspective. Even the little I do is educational.

For example, recently I renewed the certificate used by a Microsoft Dynamics CRM installation. Renewing and installing the certificate was easy; but I neglected to set permissions on the private key so that the CRM service could access it, and as a result it did not work. There was a similar step needed on the ADFS server (because this is an internet-facing deployment). It is not an intuitive process, because the errors which surface in the event viewer often do not pinpoint the actual problem, but rather are a symptom of it. It does not help that the CRM Email Router, when things go wrong, logs an identical error event every few seconds, drowning out any other events.
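The fix, for the record, is to grant the service account read access to the private key file. A PowerShell sketch, in which the thumbprint and account name are illustrative assumptions, and which assumes an older-style CSP key:

$cert = Get-Item "Cert:\LocalMachine\My\<thumbprint>"   # hypothetical thumbprint
$keyName = $cert.PrivateKey.CspKeyContainerInfo.UniqueKeyContainerName
icacls "$env:ProgramData\Microsoft\Crypto\RSA\MachineKeys\$keyName" /grant "CONTOSO\svc-crm:R"   # hypothetical service account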

In other words, I have shared some of the pain of sysadmins and know what Pott means by “stupid little nerd knobs”.

Getting back to the point, I have actually installed System Center including Virtual Machine Manager in my own lab, and it was challenging. System Center is actually a suite of products developed at different times and sometimes originating from different companies (Orchestrator, for example), and this shows in a lack of consistency in the user interface and in occasional confusing overlaps in functionality.

I have a high regard for Hyper-V itself, having found it a solid and fast performer in my own use and an enormous advance over working with physical servers. The free management tool that you can install on Windows 7 or 8 is also rather good. The free Hyper-V server you can download from Microsoft is one of the best bargains in IT. Feature-wise, Hyper-V has improved rapidly with each new release and it seems to me a strong offering.

We have also seen from Microsoft’s own Azure cloud platform, which uses Hyper-V for virtualisation, that it is possible to automate provisioning and running Hyper-V at huge scale, controlled by easy-to-use management tools, either browser-based or using PowerShell scripts.

Talk private cloud though, and you are back with System Center with all its challenges and complexity.

Well, now you have the option of Azure Pack, which brings some of Azure’s technology (including its user-friendly portal) to enterprise or hosting provider datacenters. Microsoft needed to harmonise System Center with Azure; the fact that it is replacing parts of System Center with technology developed for Azure suggests a recognition that the Azure approach is better, though no doubt installing and configuring Azure Pack has challenges of its own.

My last reflection on the above is that ease of use matters in enterprise IT just as it does in the consumer world. Yes, the users are specialists and willing to accept a certain amount of complexity; but if you have reliable tools with clearly documented steps that help you do things right, then there are fewer errors and greater productivity.

Windows XP Mode hassles for Windows 8 upgraders

One of the reasons for the success of Windows 7 was the provision Microsoft made for customers stuck with applications that only run on Windows XP. Windows XP Mode is a free add-on for Windows 7 Professional that runs Windows XP. It can also run individual XP applications in their own window on the Windows 7 desktop, though this is cosmetic: the XP desktop is merely hidden. Windows XP Mode uses Virtual PC as its virtualisation platform.

What would you expect to happen if you upgraded Windows 7 with XP Mode to Windows 8? Without having researched it, my expectation was that Windows XP Mode would migrate smoothly to Hyper-V in Windows 8.

Not so. Here is the official word:

With the end of extended support for Windows XP in April 2014, Microsoft has decided not to develop Windows XP Mode for Windows 8.  If you’re a Windows 7 customer who uses Windows XP Mode and are planning a move to Windows 8, this article may be helpful to you.  
When you upgrade from Windows 7 to Windows 8, Windows XP Mode is installed on your machine, however Windows Virtual PC is not present anymore. This issue occurs because Windows Virtual PC is not supported on Windows 8. To retrieve data from the Windows XP Mode virtual machine, perform the steps listed in the More Information section.

If you were relying on XP Mode to run some old but essential application, this is definitely worth knowing. Microsoft’s guidance on retrieving the data is unlikely to be much use, since the reason you use XP Mode is to run applications rather than to store data. Some users are not impressed:

This is SHOCKING.  I was using Win 7 Pro and had a fully configured (hours of work) XP Virtual Machine with my complete web development environment in it.  It didn’t even occur to me that it wouldn’t work on Windows 8.  I’ve only just discovered now when I tried to access it to do some updates!

I MUST recover this virtual PC.

Why did the Upgrade Advisor not mention this!?!?  I carefully resolved all the issues highlighted there before moving on.

Of course it is desirable to move off Windows XP completely, even in XP Mode, but the rationale is that it is better to be on a recent and supported version of Windows and run XP in a virtual environment than to run Windows XP itself.

Another oddity is that you can run Windows XP on Hyper-V in Windows 8. However you cannot get XP Mode to work unless you perform a repair install that changes the way it is licensed. Yes, it is licensing rather than technical reasons that blocks the XP Mode upgrade:

Note: The Windows XP Mode virtual hard disk will not work on Windows 8 as Windows 8 does not provide the Windows XP Mode license. The Windows XP Mode license is a benefit provided on Windows 7 only.

Users have discovered workarounds. Aside from the repair install mentioned above, you can use Oracle VirtualBox and trick XP Mode into thinking that it is running on Windows 7 with Virtual PC. You can also run a virtual instance of Windows 7 and run XP Mode within that.

Microsoft takes aim at VMware, talks cloud and mobile device management at MMS 2013

I am attending the Microsoft Management Summit in Las Vegas (between 5,000 and 6,000 attendees, I was told), where Brad Anderson, corporate vice president of Windows Server & System Center, gave the opening keynote this morning.


There was not a lot of news as such, but a few things struck me as notable.

Virtualisation rival VMware was never mentioned by name, but was frequently referenced by Anderson as “the other guys”. Several case studies from companies that had switched from “the other guys” were mentioned, with improved density and lower costs claimed, as you would expect. The most colourful story concerned Dominos (pizza delivery), which apparently manages 15,000 servers across 5,000 stores using System Center and has switched to Hyper-V in 750 of them. The results:

  • 28% faster hard drive writes
  • 36% faster memory speeds
  • 99% reduction in virtualisation helpdesk calls

That last figure is astonishing but needs more context before you can take it seriously. Nevertheless, there is momentum behind Hyper-V. Microsoft says it is now optimising products like Exchange and SQL Server specifically for running on virtual machines (that is, Hyper-V) and it now looks like a safe choice, as well as being conveniently built into Windows Server 2012.

I also noticed how Microsoft is now letting drop some statistics about use of its cloud offerings, Azure and Office 365. The first few years of Azure were notable in that the company never talked about the numbers, which is reason to suppose that they were poor. Today we were told that Azure storage is doubling in capacity every six to nine months, that 420,000 domains are now managed in Azure Active Directory (also used by Office 365), and that Office 365 is now used in some measure by over 20% of enterprises worldwide. Nothing dramatic, but this is evidence of growth.

Back in October 2012 Microsoft acquired a company called StorSimple which specialises in integrating cloud and on-premise storage. There are backup and archiving services as you would expect, but the most innovative piece is called Cloud Integrated Storage (CiS), which lets you access storage that is partly on-premise and partly in the cloud via the standard iSCSI protocol. There was a short StorSimple demo this morning which showed how you could use CiS for a standard Windows disk volume. Despite the inherent latency of cloud storage, performance can be good thanks to data tiering, which puts the most active data on the fastest storage and the least active data in the cloud. From the white paper (find it here):

CiS systems use three different types of storage: performance-oriented flash SSDs, capacity-oriented SAS disk drives and cloud storage. Data is moved from one type of storage to another according to its relative activity level and customer-chosen policies. Data that becomes more active is moved to a faster type of storage and data that becomes less active is moved to a higher capacity type of storage.

CiS also uses compression and de-duplication for maximum efficiency.

This is a powerful concept and could be just the thing for admins coping with increased demands for storage. I can also foresee this technology becoming part of Windows Server, integrated into Storage Spaces for example.

A third topic in the keynote was mobile device management. When Microsoft released Service Pack 1 of Configuration Manager (part of System Center), it added the ability to integrate with InTune for cloud management of mobile devices, provided the devices run iOS, Android, Windows RT, or Windows Phone 8. A later conversation with product manager Andrew Conway confirmed that InTune, rather than EAS (Exchange ActiveSync) policies, is Microsoft’s strategic direction for mobile device management, though EAS is still used for Android. “Modern devices should be managed from the cloud” was the line from the keynote. InTune includes policy management as well as a company portal where users can install corporate apps.

What if you have a BlackBerry 10 device? Back to EAS. A Windows Mobile 6.x device? System Center Configuration Manager can manage those. There is still some inconsistency then, but with iOS and Android covered, InTune supports a large part of what is needed.

Microsoft’s Hyper-V Server 2012: too painful to use?

A user over on the TechNet forums says that the free standalone Hyper-V Server is too painful to use:

I was excited about the free stand-alone version and decided to try it out.  I downloaded the Hyper-V 2012 RC standalone version and installed it.  This thing is a trainwreck!  There is not a chance in hell that anyone will ever use this thing in scenarios like mine.  It obviously intended to be used by IT Geniuses in a domain only.  I would really like a version that I can up and running in less than half an hour like esxi.  How the heck is anyone going to evaluate it this in a reasonable manner? 

To be clear, this is about the free Hyper-V Server, which is essentially Server Core with only the Hyper-V role available. It is not about Hyper-V in general as a feature of Windows Server and Windows 8.

Personally I think the standalone Hyper-V Server is a fantastic offering; but at the same time I see this user’s point. If you join the Hyper-V server to a Windows domain and use the administration tools in Windows 8 everything is fine; but if you are, say, a Mac user and download Hyper-V Server to have a look, it is not obvious what to do next. As it turns out you can get started just by typing powershell at a command prompt and then New-VM, but how would you know that? Further, if Hyper-V is not joined to a domain you will have permission issues trying to manage it remotely.
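For what it is worth, here is a minimal sketch of that route, typed at the Hyper-V Server console; the VM name, memory and VHD path are illustrative assumptions:

powershell
New-VM -Name "Test01" -MemoryStartupBytes 1GB -NewVHDPath "C:\VMs\Test01.vhdx" -NewVHDSizeBytes 40GB
Start-VM -Name "Test01"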

Install Hyper-V Server, and the screen you see after logging on does not even mention virtualization.


By contrast, VMWare’s free ESXi has a web UI that works from any machine on the network and lets you get started creating and managing VMs. It is less capable than Hyper-V Server; but for getting up and running quickly in a non-domain environment it wins easily.

I have been working with Hyper-V Server 2012 myself recently, upgrading two servers on my own network which run a bunch of virtual servers for development and test. From my perspective the free Hyper-V Server is a great offer from Microsoft, though I am still scratching my head over how to interpret the information (or lack of it) on the new product page, which refers to the download as a trial. I am pretty sure it is still offered on similar terms to those outlined for Hyper-V Server 2008 R2 by Program Manager Jeff Woolsey, who is clear that it is a free offering:

  • Up to 8 processors
  • Up to 64 logical processors
  • Up to 1TB RAM
  • Up to 64GB RAM per VM

These specifications may have been improved for Hyper-V Server 2012; or perhaps reduced; or perhaps Microsoft really is making it a trial. It is all rather unclear, though I would guess we will get more details soon.

It is worth noting that if you do have a Windows domain and a Windows 8 client, Hyper-V Server is delightfully easy to use, especially with the newly released Remote Server Administration Tools that now work fine with Windows 8 RTM, even though at the time of writing the download page still says Release Preview. You can use Server Manager as well as Hyper-V Manager, giving immediate access to events, services and performance data, plus a bunch of useful features on a right-click menu:


In addition, File and Storage services are installed by default, which I presume means you can use Storage Spaces with Hyper-V Server, which could be handy for hosting VMs with dynamically expanding virtual hard drives. Technically you could also use it as a file server, but I presume that would breach the license.
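If that presumption is right, a pool could be created entirely from PowerShell. A sketch with illustrative names, which I have not tested on Hyper-V Server itself:

$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "VMPool" -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "VMPool" -FriendlyName "VMDisk" -ResiliencySettingName Mirror -UseMaximumSize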

For working with VMs themselves of course you have the Hyper-V Manager which is a great tool and not difficult to use.


The question then: with all the work that has gone into these nice GUI tools, why does Microsoft throw out Hyper-V Server with so little help that a potential customer calls it “too painful to use”?

Normally the idea of free editions is to entice customers into upgrading to a paid-for version. That is certainly VMWare’s strategy, but Hyper-V seems to be different. It is actually good enough on its own that for many users it will be a long time before there is any need to upgrade. Microsoft’s hope, presumably, is that you will run Windows Server instances in those Hyper-V VMs, and these of course do need licenses. If you buy Windows 8 to run the GUI tools, that is another sale for Microsoft. In fact, paid-for Windows Server 2012 can easily work out cheaper than the free edition if you need a lot of server licenses, since it comes with an allowance of licenses for virtual instances of Windows Server. Hyper-V Server is only really free if you run free software, such as Linux, in the VMs.

Personally I like Hyper-V Server for another reason. Its restricted features mean that there is no temptation to run other stuff on the host, and that in itself is an advantage.

Upgrading to Hyper-V Server 2012

After discovering that in-place upgrade of Windows Hyper-V Server 2008 R2 to the 2012 version is not possible, I set about the tedious task of exporting all the VMs from a Hyper-V Server box, installing Hyper-V Server 2012, and re-importing.

There are many reasons to upgrade, not least the irritation of being unable to manage the VMs from Windows 8. Hyper-V Manager in Windows 8 only works with Windows 8/Server 2012 hosts. It does seem to work the other way round: Hyper-V Manager in Windows 7 connects to a Server 2012 host successfully, though of course new features are not exposed.

The export and import went smoothly. A few observations:

1. Before exporting, it pays to set the MAC address of virtual network cards to static.


The advantage is that the operating system will recognise it as the same NIC after the import.

2. Remove any snapshots before the export. In one case I had a machine with a snapshot and the import required me to delete the saved state.

3. After installing Hyper-V 2012, don’t forget to check the date, time and time zone and adjust if necessary. You can do this from the sconfig menu.

4. The import dialog has a new option, called Restore.


What is the difference between Register and Restore? Do not bother pressing F1; it will not tell you. Instead, check Ben Armstrong’s post here. If you choose Register, the VM is registered where it is; not what you want if you mistakenly ran Import against a VM exported to a portable drive, for example. Restore, on the other hand, presents options in a further step for you to move the files to another location.

5. For some reason I got a “remote procedure call failed” message in Hyper-V Manager after importing a Linux VM, but when I refreshed the console I found that the import had succeeded.

6. Don’t forget to upgrade the integration services. Connect to the server using the Hyper-V Manager, then choose Insert Integration Services Setup Disk from the Action menu.

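Incidentally, on the 2012 side the import can also be scripted with the built-in Hyper-V module. A sketch, in which the path points to whatever configuration XML the export produced (the GUID placeholder is illustrative):

Import-VM -Path "D:\Exports\TestVM\Virtual Machines\<GUID>.xml" -Copy -VhdDestinationPath "C:\VMs"
Get-VM   # confirm the imported VM is listed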

Cosmetically the new Hyper-V Server looks almost identical to the old: you log in and see two command prompts, one empty and one running the SConfig administration menu.

Check the Hyper-V settings though and you see all the new settings, such as Enable Replication, Virtual SAN Manager, single-root IO virtualization (SR-IOV), extension support in a virtual switch, Live Migrations and Storage Migrations, and more.

Farewell to Microsoft Small Business Server

Microsoft has announced pricing and licensing for Windows Server 2012. A dry topic, perhaps; but one which confirms the end of a product with which I am all too familiar: Small Business Server. It is spelt out in the FAQ:

Q33. Will there be a next version of Windows Small Business Server 2011 Standard?

No. Windows Small Business Server 2011 Standard, which includes Exchange Server and Windows server component products, will be the final such Windows Server offering. This change is in response to small business market trends and behavior. The small business computing trends are moving in the direction of cloud computing for applications and services such as email, online back-up and line-of-business tools.

The next question confirms that there will not be a new edition of Small Business Server 2011 Premium either. The official replacement is Windows Server 2012 Essentials, which is in effect the next version of Small Business Server Essentials. This handles local Active Directory, file sharing, local applications, and a connector to Office 365. However there is a 25 user account limit, whereas SBS Standard supported up to 75 users, so some businesses will now be forced to choose between moving to Windows Server Standard or ditching the local server completely (which is often impractical).


Microsoft is pinning the reason on cloud computing, which makes some sense. Now and again I am asked by small businesses what sort of technology they should adopt; and my answer in general is to point them at either Microsoft Office 365 or Google Apps.

It is not quite clear-cut. A Small Business Server can theoretically work out cheaper, if you presume that it will not require any external maintenance. That is rarely the case though, and for most people the cloud-hosted option will be both cheaper and less troublesome.

What if you do need on-premise Active Directory, Exchange and SharePoint, which are the core components of SBS? Technically, there are in my opinion better ways to do this than with SBS. While SBS has always been excellent value for money, it is over-complex, because it crams onto one box applications which are designed to run on separate boxes. It does work, but if anything goes wrong it is actually harder to troubleshoot than when you have separate servers. I prefer to see one Hyper-V box with separate virtual machines (VMs) for each major function, rather than SBS running on bare metal. VMs are also more flexible, and easier to restore if the hardware breaks.

Farewell then to SBS. I will remember it with some affection though. Think back to the nineties, when most email was POP3, and most internet was dial-up. People had problems like losing emails, because they had been downloaded to a desktop PC and they were out and about with a laptop. Moving to Microsoft Exchange, for which Outlook is the client, was bliss by comparison. Email synchronised itself to all your PCs, you could work offline, and Outlook for all its faults became a one-stop application for calendar, contacts and messages.

The beauty of SBS was that you could get Exchange along with the benefits of a Windows domain – one central directory of users and the ability to assign permissions to file shares – at a price that was more than reasonable.

I also think of SBS as a reliable product, when correctly installed. When it does go wrong it is often due to users trying to do stuff that does not quite work, or other applications which get installed on the same box, or hardware faults which users have attempted to fix by messing around with Windows, or anti-virus software misbehaving (Sophos! Confess!).

Microsoft is doing the right thing though. The SBS bundle makes little sense today, and if you do still need it, you can stick with the 2011 edition for a few years yet.

aQuantive may be Microsoft’s biggest acquisition failure. Have there been good ones? A look back.

Today’s news that Microsoft is writing off $6.2 billion from the useless acquisition of aQuantive in August 2007 gives me pause for thought.

How bad is this company at acquisitions? Particularly those under CEO Steve Ballmer’s watch. He became CEO in January 2000.


Microsoft acquired Danger in February 2008 for $500M. Small relative to the aQuantive acquisition, but how much more money did the company burn transforming Danger from an excellent cloud and mobile company into the group that came up with Kin, the phone withdrawn from the market after just two months on sale? Not to mention the downtime and threatened loss of data suffered by Danger’s online service under Microsoft’s stewardship.

Microsoft attempted to buy Yahoo for $44.6bn in 2008. Yahoo’s executives declined, a move that was (very) bad for Yahoo shareholders but quite possibly right in a business sense; it would not have been a good fit.

Microsoft acquired Groove Networks complete with Notes inventor Ray Ozzie in March 2005. I put this in the disaster category. Groove went nowhere at Microsoft. Ozzie became Chief Software Architect and talked of internet vision but did not deliver. The wretched SharePoint Workspace is apparently based on Groove.

What about the good ones? My view is that Microsoft paid too much for Skype at $8.5 billion but at least it acquired a large number of users and has some chance of enhancing its mobile offerings with Skype integration.

Microsoft acquired Bungie in 2000 and, given the success of Halo (without which, maybe, the whole Xbox project would have faltered), we have to count that a success, even though Bungie was spun off back to independence in 2007.

Other notables include Great Plains in December 2000 (now morphed into Dynamics ERP); Connectix in February 2003 which got Microsoft started in virtualization; and Opalis in December 2009 whose software now plays a key role in Microsoft’s System Center 2012 private cloud software.

Winternals in July 2006 was a great acquisition. Microsoft acquired some indispensable Windows troubleshooting tools, and also Mark Russinovich and Bryce Cogswell, able people who I suspect contributed to the transformation of Windows Vista into Windows 7 and, in the case of Russinovich, to the technology in Windows Azure, which now seems reborn as an excellent cloud platform.

You can see all Microsoft’s completed acquisitions here.

(If the company would like to acquire itwriting.com for a few billion I am willing to talk.)