Category Archives: virtualization


Bare-metal recovery of a Hyper-V virtual machine

Over the weekend I ran some test restores of Microsoft Hyper-V virtual machines. You can restore a Hyper-V host, complete with its VMs, using the same technique as with any Windows server; but my main focus was on a different scenario. Let’s say you have a Server 2008 VM that has been backed up from the guest using Windows Server Backup. In my case, the backup had been made to a VHD mounted for that purpose. Now the server has been stolen and all you have is your backup. How do you restore the VM?

In principle you can do a bare-metal restore in the same way as with a physical machine. Configure the VM as closely as possible to how it was before, attach the backup, boot the VM from the Server 2008 install media, and perform a system recovery.

Unfortunately this doesn’t work if your VM uses VHDs attached to the virtual SCSI controller. The reason is that the recovery console cannot see the SCSI-attached drives. This is possibly related to the Hyper-V limitation that you cannot boot from a virtual SCSI drive.

The workaround I found was first to attach the backup VHD to the virtual IDE controller (not SCSI), so that the recovery console can see it, and then to do a system recovery of the IDE drives, which will include the C drive. Next, shut down the VM (before the restart), mount both the backup VHD and the SCSI-attached VHDs on the host using diskpart, and use wbadmin to restore each individual volume. Finally, detach the VHDs and restart the VM.
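For reference, the host-side step looks roughly like this. Treat it as a sketch only: the paths, drive letters and backup version below are placeholders for whatever your own setup uses, and it assumes a Server 2008 R2 host, where diskpart can attach VHDs natively.

rem On the Hyper-V host, with the VM shut down, attach the VHDs (run diskpart, then enter these commands)
diskpart
select vdisk file="D:\Backups\backup.vhd"
attach vdisk
select vdisk file="D:\VMs\data-disk.vhd"
attach vdisk
exit

rem List the backup versions on the mounted backup volume (assumed here to come up as E:)
wbadmin get versions -backupTarget:E:

rem Restore one volume (D: as recorded in the backup) onto the mounted data VHD (assumed here to be F:)
wbadmin start recovery -version:01/01/2010-21:00 -itemType:Volume -items:D: -backupTarget:E: -recoveryTarget:F:

rem Repeat for each volume, then detach both VHDs in diskpart (detach vdisk) and restart the VM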

It worked. One issue I noticed though is that the network adapter in the restored VM was considered different to the one in the original VM, even though I applied the same MAC address. Not a great inconvenience, but it meant fixing networking as the old settings were attached to the NIC that was now missing.

I’ve appended the details to my post on How to backup Small Business Server 2008 on Hyper-V.

USB devices and Hyper-V – remote client yes, host no

At TechEd in New Orleans, Microsoft has announced that the version of Hyper-V in Windows Server 2008 R2 Service Pack 1 – a typical Microsoft mouthful – will include support for generic USB devices. That is, you can remote into a Hyper-V VM, plug in your USB camera, scanner or bar-code reader, and it will be re-directed to the remote desktop.

It’s a welcome feature, and removes one of the annoyances of working on a remote desktop. However, there is another scenario that Microsoft has not addressed, which is support for USB devices on the Hyper-V host. For example, USB drives are often used for backup, but if you plug a USB drive into a Hyper-V host, it is not easy to use it for backup from within a Hyper-V guest. Well, there are ways, but you are not going to like any of them – mount the drive in the host, mark it as offline, attach it to the guest using pass-through, and so on.
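For anyone who has not done it, the pass-through route goes something like this; again a sketch only, where disk 2 stands in for whatever number the USB drive is given on your host.

rem On the Hyper-V host, take the USB disk offline so it can be passed through (run diskpart, then enter these commands)
diskpart
list disk
select disk 2
offline disk
exit

rem Then, in Hyper-V Manager, edit the guest's settings and attach the offline disk to a controller as a "Physical hard disk"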

So will Hyper-V ever support USB devices in the host as well as on remote clients? I asked about this, and was told that it is not a priority, because although the topic comes up regularly, it is “not in the top ten feature requests”.

That’s a shame. Even if Microsoft supported only USB storage devices, it would help significantly with tasks like backing up Small Business Server when run on a virtual machine.

VMforce: Salesforce partners VMware to run Java in the cloud

Salesforce and VMware have announced VMforce, a new cloud platform for enterprise applications. You will be able to deploy Java applications to VMforce, where they will run on a virtual platform provided by VMware. There will be no direct JDBC database access on the platform itself, but it will support the Java Persistence API, with objects stored on Force.com. Applications will have full access to the Salesforce CRM platform, including new collaboration features such as Chatter, as well as standard Java Enterprise Edition features provided by Tomcat and the Spring framework. SpringSource is a division of VMware.

A developer preview will be available in the second half of 2010; no date is yet announced for the final release.

There are a couple of different ways to look at this announcement. From the perspective of a Force.com developer, it means that full Java is now available alongside the existing Apex language. That will make it easier to port code and use existing skills. From the perspective of a Java developer looking for a hosted deployment platform, it means another strong contender alongside others such as Amazon’s Elastic Compute Cloud (EC2).

The trade-off is that with Amazon EC2 you have pretty much full control over what you deploy on Amazon’s servers. VMforce is a more restricted platform; you will not be able to install what you like, but have to run on what is provided. The advantage is that more of the management burden is lifted; VMforce will even handle backup.

I could not get any information about pricing, or even about how usage of the new platform will be charged. I suspect it will compete more on quality than on price. However, I was told that smooth scalability is a key goal.

More information here.

VMware: the cloud is private

I attended this morning’s VMware roundtable, debating the rather silly proposition that IT should be removed from the boardroom agenda. To be fair, even VMware does not really believe this, but is arguing that its virtualisation technology makes IT service provision so trouble-free that the board can focus on IT as a way to advance the business, rather than just keeping the show on the road. I don’t believe that either, though no doubt it can help. It was nevertheless interesting to hear Jim Fennell, Information Systems Manager for the Lagan Group, explain how his virtual infrastructure allowed him to run up servers or applications such as SharePoint on demand, with internal charges based on usage.

The very definition of a private cloud, in fact; and this chimed nicely with some other research I’ve been doing on cloud security. Current cloud computing models are flawed, for the following reasons among others.

So-called private clouds do not relieve organisations of the IT burden, though they may simplify it, and do not fully yield the benefits of multi-tenancy, elasticity and economies of scale except perhaps in the case of the largest enterprises, or governments.

On the other hand, public clouds are also flawed, because the customer retains legal responsibility for their data but loses operational responsibility. That split surfaces in debates about SLAs, legal liability and consequential loss, compliance with regulations concerning data location and segregation, and conflicts over whether customers should have the right to audit their cloud provider’s technology and security practices. The public cloud is not yet mature; it lacks the standards and regulatory frameworks that it needs, though work is being done.

VMware may not mind about this, because it has positioned itself as the first choice for technology to drive private clouds. I talked to Chief Operating Officer Tod Nielsen (formerly of Microsoft) after the event, and he told me that the majority of enquiries from potential customers relate to setting up private cloud infrastructures.

Another big growth area is desktop virtualisation, where customers with thousands of aging PCs running Windows XP want their next desktop upgrade to be their last, and see virtual desktops as a route to that goal.

I am intrigued by the desktop issue, since desktop PCs remain a significant maintenance challenge. The rise of non-PC devices is also relevant here. Isn’t the future more in pure web applications – perhaps enhanced with RIA technologies like Flash and Silverlight – than in virtual desktops? Nielsen said that the huge number of legacy applications out there made this impossible in the near future.

Nevertheless, you can see how VMware is planning for more of a pure web play longer term, with acquisitions such as SpringSource, the company behind the Spring Java application framework. One idea that was mentioned during the roundtable was a sort of server app market, where you could plug pre-built applications into VMware’s ESX platform.

Finally, one side-effect of increasing desktop virtualisation, in Nielsen’s view, is that more users will choose to run Apple Macs as the host. He also says that the number one customer request, in the weeks since Apple’s announcement, is for iPad support for their virtual clients. Make of that what you will.

New HP and Microsoft agreement commits $50 million less than similar 2006 deal

I’ve held back comment on the much-hyped HP and Microsoft three-year deal announced on Wednesday mainly because I’ve been uncertain of its significance, if any. It didn’t help that the press release was particularly opaque, full of words with many syllables but little meaning. I received the release minutes before the conference call, during which most of us were asking the same thing: how is this any different from what HP and Microsoft have always done?

It’s fun to compare and contrast with this HP and Microsoft release from December 2006 – three years ago:

We’ve agreed to a three-year, US$300 million investment between our two companies, and a very aggressive go-to-market program on top of that. What you’ll see us do is bring these solutions to the marketplace in a very aggressive way, and go after our customers with something that we think is quite unique in what it can do to change the way people work.

$300 million for three years in 2006; $250 million for three years in 2010. Hmm, not exactly the breakthrough new partnership it has been billed as. Look here for what the press release should have said: it’s mainly common-sense cooperation and joint marketing.

Still, I did have a question for CEOs Mark Hurd and Steve Ballmer: what level of cloud focus is there in this new partnership? It drew these remarks from Ballmer:

The fact that our two companies are very directed at the cloud is the driving force behind this deal at this time. The cloud really means a modern architecture for how you build and deploy applications. If you build and deploy them to our service that we operate that’s called Windows Azure. If a customer deploys them inside their own data centre or some other hosted environment, they need a stack on which to build, hardware software and services, that instances the same application model that we’ll have on Windows Azure. I think of it as the private cloud version of Windows Azure.

That thing is going to be an integrated stack from the hardware, the virtualization layer, the management layer and the app model. It’s on that that we are focusing the technical collaboration here … we at Microsoft need to evangelize that same application model whether you choose to host in the cloud or on your own premises. So in a sense this is entirely cloud motivated.

Hurd added his insistence that this is not just more of the same:

I would not want you to write that it sounds a lot like what Microsoft and HP have been talking about for years. This is the deepest level of collaboration and integration and technical work we’ve done that I’m aware of … it’s a different thing than what you’ve seen before. I guarantee Steve and I would not be on this phone call if this was just another press release from HP and Microsoft.

Well, you be the judge.

I did think Ballmer’s answer was interesting though, in that it shows how much Microsoft (and no doubt HP) are pinning their hopes on the private cloud concept. The term “private cloud” is a dubious one, in that some of the defining characteristics of cloud – exporting your infrastructure, multi-tenancy, shifting the maintenance burden to a third party – are simply not delivered by a private cloud. That said, in a large organisation the two might look similar to most users.

I can’t shake off the thought that since HP wants to carry on selling us servers, and Microsoft wants to carry on selling us licences for Windows and Office, the two are engaged in disguised cloud avoidance. Take Office Web Apps in Office 2010 for example: good enough to claim the online document editing feature; bad enough to keep us using locally installed Office.

That will not work long-term and we will see increasing emphasis on Microsoft’s hosted offerings, which means HP will sell fewer servers. Maybe that’s why the new deal is for a few dollars less than the old one.

The virtual Small Business Server 2008 backup problem

Microsoft’s Small Business Server 2008 is supported running as a Hyper-V guest; but there’s one nasty problem. The built-in backup expects external USB drives, and a Hyper-V guest does not have direct access to USB.

Here’s a solution I’ve come up with. It lets you use the built-in backup wizard, and lets users simply attach a new external USB drive each day as they expect. It is not perfect, since it requires copying the entire backup afresh to the USB drive, rather than doing a differential copy – though SBS itself still does a differential backup. It also requires Hyper-V 2008 R2, which means struggling with Server Core if you use the free version. Still, it’s better than any solution I’ve seen from Microsoft.
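For what it’s worth, the copy step itself is nothing exotic. Here is a minimal sketch of the kind of scheduled host-side job involved, assuming the guest writes its backup to a VHD stored on the host and the external drive always comes up with the same letter; every path and name below is a placeholder.

rem copy-sbs-backup.cmd, run on the Hyper-V host after the guest's backup window
robocopy "D:\VMs\SBS-Backup" "E:\SBS-Backup" *.vhd /Z /NP /R:2 /W:30

rem Schedule it daily, for example at 03:00
schtasks /create /tn "Copy SBS backup to USB" /tr "C:\Scripts\copy-sbs-backup.cmd" /sc daily /st 03:00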

Hyper-V VMs can fail to start if the host is copying a large file

I have been working with a couple of Microsoft Hyper-V servers, one of which has 20GB RAM. That machine had two virtual machine guests, one with 12GB allocated and the other with 2GB. I created a third VM with 2GB and started it up. It worked initially, but on rebooting the VM I got the message:

Failed to create partition: Insufficient system resources exist to complete the requested service. (0x800705AA)

This was puzzling. Most people consider that the Hyper-V host does not need very much RAM for its own operations – Brien Posey suggests 2GB, for example – and I am running the stripped-down Hyper-V 2008 R2. The 4GB left over should have been more than enough.

After chasing round for a bit, and wondering if it was something to do with NUMA, or WmiPrvSE.exe gobbling all the RAM, I found the reason. At the time I was trying to start the VM, the Hyper-V host was copying a large file (a .VHD) to an external drive for backup. To perform this copy, the host was using a large amount of RAM for a temporary cache, and was apparently unable to release it for a VM to use until the copy completed.
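If you suspect the same thing is happening on your own host, it is easy to check: watch free physical memory while the copy runs. A quick way from the Hyper-V Server command prompt (the figures are reported in KB):

wmic OS get FreePhysicalMemory,TotalVisibleMemorySize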

In some circumstances this could be unfortunate. If you had a scheduled task in the host for copying a large file at the same moment that a guest needed a restart, perhaps triggered by Windows Update, the guest might fail to restart.

Something worth knowing if you work with Hyper-V.


Wrestling with Windows Server Core

Windows Server Core is a stripped-down build of Windows Server 2008 which lacks most of the GUI. It’s a great idea: more lightweight, less to go wrong, and as the Unix folk have always said, who needs a GUI on a server anyway?

That said, the Windows culture has always assumed the presence of the GUI and most of the tools and utilities out there assume it. This means that you can expect some extra friction in managing your Server Core installation.

I recently attended a couple of Microsoft conferences and one of the things I was trying gently to discover was the extent of the take-up for Server Core, and to what extent hardware vendors such as HP had taken it to heart and were no longer assuming that all their Windows server customers could use GUI tools. I didn’t come away with any useful information on the subject, though perhaps that in itself says something.

I’ve been using Hyper-V Server 2008 R2, which is in effect Server Core with just one role, and a recent experience illustrates my point. After considerable effort (and help from semi-official scripts) I managed to get Hyper-V Manager working remotely, in order to create and manage the virtual machines. However, I ran into an annoying problem. There are three physical NICs in this box, and the idea was to have one for the host, and the other two for virtual switches (for use by guests). Somehow, probably as a result of an early experiment, the virtual switch configuration got slightly messed up. I only had one virtual switch, and when I tried to create a second one on an otherwise unused NIC, I got the message:

Cannot bind to [Network connection name] because it is already bound to another virtual network.

That wasn’t the case as far as I could see; but that was no consolation.

The problem led me to this blog post which says that, if you are lucky, all you need to do to resolve it is to remove the binding to Microsoft Virtual Network Switch Protocol from the affected network connection. To do this, just open Local Area Connection Properties … but wait, this is Server Core, I don’t have a Local Area Connection Properties dialog.

Luckily, the guy has thought of that and says you can use the command-line tool nvspbind.exe instead. Great. But where is it? It has a page on MSDN which documents the tool, authored by a member of the Hyper-V team called Keith Mange, but there is no download. How infuriating can you get? There are a few desperate requests for a download link, and a comment “Unfortunately the nvspbind is no longer available for download”, and that is that.

All was not lost. I poked around Mange’s other downloads on MSDN and found two other utilities, nvspscrub.js and nvspinfo.js. Nvspscrub.js is a tool of last resort: it removes all the virtual switch bindings and deletes them from Hyper-V. I did not want that, because my first virtual switch was working fine. However, I figured I could adapt nvspscrub.js to remove just the binding that was troublesome. I edited the script, deleted most of the code that makes changes to the system, and added an if condition so that only the device with the GUID I specified would be unbound.
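For anyone hitting the same error, there is also a quick way to see which physical NICs Hyper-V already believes are bound to a virtual network – essentially the information nvspinfo.js reports. From the host’s command prompt, querying the 2008 R2 virtualization WMI namespace:

wmic /namespace:\\root\virtualization path Msvm_ExternalEthernetPort get ElementName,Name,IsBound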

It worked first time, and I was able to create my second virtual switch.

Still, the fact that this problem is known, and that the only documented cure (that I can find) is in a blog post which refers to a tool that has been pulled, suggests to me that this stuff is not yet mainstream.

Love and hate for Microsoft Small Business Server

I’ve just completed a migration from Small Business Server 2003 to 2008. I’ve worked on and off with SBS since version 4.0, and have mixed feelings about the product. It has always been great value, but massive complexity lurks not far beneath its simple wizards.

The difficulty of migration is probably its worst feature: it chugs along for a few years, gradually outgrowing its hardware, and then when the time comes for a new server customers are faced with either starting from scratch with a clean install – setting up new accounts, importing mailboxes, removing every client machine from the old domain and joining it to a new one – or else a painful migration.

I took the latter route, and also decided to go virtual on Hyper-V Server 2008 R2. In most important respects it went smoothly: Active Directory behaved itself, and the Exchange mailboxes all came over cleanly.

Still, several things struck me during the migration. Microsoft has a handy 79-page step-by-step document, but anyone who thinks that carefully following the steps will guarantee success will be disappointed. There are always surprises. The document does not properly cover DHCP, for example. The migration is surprisingly messy in places. The new SBS has different sets of permissions from the old one, and after the upgrade you have to somehow merge the two. The migration is not fully automated, and there is plenty of manual editing of various settings.

Even migrating SBS 2008 to SBS 2008, for a new server, has brought forth a 58-page document from Microsoft.

Then there are the errors to deal with. There are always errors. You have to figure out which ones are significant and how to fix them. I would like to meet a Windows admin who could look me in the eye and say they have no errors in their event log.

Things got bad when applying all the updates to bring the server up to date. At one point SharePoint broke completely and could not contact its configuration database. There’s also the mystery of security update KB967723, which Windows Update installed, insisting that it was “important”, and which then generated the following logged message 79 times in the space of a few seconds:

Windows Servicing identified that package KB967723(Security Update) is not applicable for this system

Nevertheless, a little tender care and attention got the system into reasonable shape. It is even smart enough to change Outlook settings to the new server automatically. A great feature of the migration is that email flow is never interrupted.

One problem: although running SBS virtual is a supported configuration, the built-in backup system doesn’t handle it well, because it assumes use of external USB drives which Hyper-V guests cannot access directly. There are many solutions, none perfect, and it appears that Microsoft did not think this one through.

That said, the virtual solution has some inherent advantages for backup and restore, the main one being that you can guarantee identical hardware for disaster recovery. If you shut the guests down and backup the host, or export the VM, you have a reliable system backup. You can also back up a running guest from the host, though in my experience this is more fragile.

Migrating an SBS system is actually harder than working with grown-up Windows systems on separate servers (or virtual servers) because it all has to be done together. I reckon Microsoft could do a better job with the tools; but it is a complex process with multiple potential points of failure.

The experience overall does nothing to shake my view that cloud-based services are the future. I would like to see SBS become a kind of smart cache for cloud storage and services, rather than being a local all-or-nothing box that can absorb large amounts of troubleshooting time. Microsoft is going to lose a lot of this SME business, because it has ploughed on with more of the same rather than helping its existing SBS customers to move on.

Nevertheless, if you have made the decision to run your own email and collaboration services, rather than being at the mercy of a hosted service, SBS 2008 does it all.

Migrating to Hyper-V 2008 R2

I have a test setup in my office which runs mostly on Hyper-V. It is a kind of home-brew small business server, with Exchange, ISA and SharePoint all running on separate VMs. I’ve followed Microsoft’s advice and kept Active Directory on a separate physical server. Until today, Hyper-V itself was running on Server 2008.

I’m reviewing Hyper-V Server 2008 R2, so I figured it would be interesting to migrate the VMs. I attached an external USB drive, shut down the VMs and exported them. Next, I verified that there was nothing else I needed to preserve on that machine, and set about installing Hyper-V Server 2008 R2 from scratch.

Aside: when I first set this up I broke the rules by having Active Directory on the Hyper-V host. That worked well enough in my small setup; but I realised that you lose some of the benefit of virtualisation if you have anything of value on the host, so I moved Active Directory to a separate box.

I wish I could tell you that the migration went smoothly. Actually, from the Hyper-V perspective it did go smoothly. However, I had an ordeal with my server, a cheapie HP ML110 G5. The driver for the embedded Adaptec SATA RAID did not work with Hyper-V Server 2008 R2, and I couldn’t find an update, so I disabled the RAID. The driver for my second network card also didn’t work, and I had to replace the card. Finally, my efforts at updating the BIOS had landed me with a known problem on this server: the fans stuck at maximum speed and deafening volume. Fortunately I found this thread which gives a fix: installing upgraded firmware for HP’s Lights-Out Remote Management as well. Blissful (near) silence.

Once I’d got the operating system installed successfully, bringing the VMs back online was a snap. I used the console menu to join the machine to the domain, set up remote management, and configure the network cards. Next, I copied the exported VMs to the new server, imported them using Hyper-V Manager running on Windows 7, and shortly afterwards everything was up and running again. I did get a warning logged about the integration services being out of date, but they were easy to upgrade. I’m hoping to see some performance benefit, since my .vhd virtual drives are dynamic, and these are meant to be much faster in the R2 update.
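The console menu (sconfig) drives those setup steps, but they can also be done straight from the command line, which is handy if you want to script a rebuild. A rough sketch, with placeholder interface names, addresses and accounts:

rem Identify the interfaces, then give the management NIC a static address
netsh interface ipv4 show interfaces
netsh interface ipv4 set address name="Local Area Connection" static 192.168.0.10 255.255.255.0 192.168.0.1
netsh interface ipv4 set dnsservers name="Local Area Connection" static 192.168.0.2 primary

rem Join the domain and reboot
netdom join %COMPUTERNAME% /domain:example.local /userd:EXAMPLE\admin /passwordd:*
shutdown /r /t 0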

Although I’m impressed with Hyper-V itself, some aspects of Hyper-V Server 2008 R2 are lacking. Mostly this is to do with Server Core. Shipping a cut-down Server OS without a GUI is a great idea in itself, but Microsoft either needs to make it easy to manage from the command line, or easy to hook up to remote tools. Neither is the case. If you want to manage Hyper-V from the command line you need this semi-official management library, which seems to be the personal project of technical evangelist James O’Neill. Great work, but you would have thought it would be built into the product.

As for remote tools, the tools themselves exist, but getting the permissions right is such an arcane process that another dedicated Microsoft individual, program manager John Howard, wrote a script to make it possible for humans. It is not so bad with domain-joined hosts like mine, but even then I’ve had strange errors. I haven’t managed to get Device Manager working remotely yet – “Access denied” – and sometimes I get a Kerberos error “network path not found”.
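From memory – so treat the exact switches as an assumption and check the script’s own documentation – the basic HVRemote usage is one command on the server to grant an account access, plus one on a workgroup client to allow the anonymous DCOM callbacks that remote Hyper-V management needs:

rem On the Hyper-V host: grant the named account remote management rights
cscript hvremote.wsf /add:EXAMPLE\someuser

rem On a non-domain-joined client only (not needed for domain-joined setups like mine)
cscript hvremote.wsf /anondcom:grant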

Fortunately there’s only occasional need to access the host once it is up and running; it seems very stable and I doubt it will require much attention.