Category Archives: virtualization

Using backup on Windows Hyper-V Server or Server Core

Hyper-V Server 2008 R2 is a free virtualisation platform from Microsoft and an excellent bargain; I guess it is something Microsoft feels it has to do in order to compete with VMware’s vSphere Hypervisor (ESXi), which is also free. In either case, of course, Microsoft still gets your money if you run Windows Server on the VMs. Hyper-V Server is in essence Windows Server Core with just the Hyper-V role enabled, which means there is no full GUI, just a command window and a few odd GUI apps, like Task Manager, Notepad and Registry Editor, which Microsoft decided we cannot live without.

So what happens if you want to back up Hyper-V Server with built-in tools? Windows Server Backup is not available: first because it is a GUI application, and second because it is not installed.

There is a way. Windows Server Backup has a command-line version called Wbadmin. In some ways it is better, because you can script it, schedule it, and easily configure it through command-line arguments. It is not installed by default on Hyper-V Server or Server Core, but you can add it:

ocsetup WindowsServerBackup

Aside: If you want to see what else you can install with ocsetup, try oclist. You can install all sorts of things on Hyper-V Server, using this and third-party software, but note the terms of the EULA:

2(b). The instance of the server software running in the physical operating system environment may be used only to:
· provide hardware virtualization services, and/or
· run software to manage and service operating system environments on the licensed server.

Backup comes into that category in my opinion, but there could be areas of uncertainty. Using Hyper-V Server as a general-purpose file server would be a breach of the license, but using a file share on Hyper-V Server to copy some utilities with which to manage the server should be OK. I think – consult your lawyer.

Once you have Wbadmin installed you can back up the server. Attach an external hard drive, say to drive E, and run:

wbadmin start backup -backupTarget:e: -include:c:,d: -quiet

Actually that is not quite right, though it was my first effort. If you run this, even on a system with only C and D drives, you will get a warning:

Note: The list of volumes included for backup does not include all the volumes that contain operating system components. This backup cannot be used to perform a system recovery. However, you can recover other items if the destination media type supports it.

The reason for this is that current versions of Windows use a hidden system partition by default. This partition does not have a drive letter, but is needed for system recovery. In order to include it, add the -allCritical argument:

wbadmin start backup -backupTarget:e: -include:c:,d: -quiet -allCritical

This will add the hidden partition to the backup, and enable system recovery, where you can restore the OS and all its data in one operation.

Another important argument is -vssFull. This switch in effect tells the operating system it has been backed up: the archive bit on backed-up files is cleared. You want this to happen if this is your only backup, but you don’t want this to happen if you are also using another type of backup.
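Putting the arguments together, a complete standalone backup looks something like this (a sketch, assuming E is the external drive and C and D are the only lettered volumes):

wbadmin start backup -backupTarget:e: -include:c:,d: -allCritical -vssFull -quiet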

Note that you can quit the console with Ctrl-C, but the backup in fact continues running in the background. If you quit and then want to check the status, type:

wbadmin get status

and if you really want to quit:

wbadmin stop job
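And since the whole thing is scriptable, a nightly backup is one scheduled task away. A sketch, with the task name, time and volumes as my assumptions:

schtasks /create /tn "Nightly backup" /sc daily /st 23:00 /ru SYSTEM /tr "wbadmin start backup -backupTarget:e: -include:c:,d: -allCritical -quiet"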

Backing up running VMs

Now the interesting bit. Can we back up VMs while they are running?

It should be possible, though Microsoft does not make it easy. The idea is that the backup saves the state of the VM in a snapshot, and backs up the snapshot. This means it should start cleanly on restore. But there are several tricky points.

First, if you want to back up VMs from the host, you need to set a registry key – see Microsoft’s knowledgebase article on the subject. I would like to know why this is not set by default – it must be deliberate, since the requirement has stayed the same in Server 2008 and Server 2008 R2.
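For reference, the change – as I understand it from Microsoft’s knowledgebase article KB958662, so verify against the current version – registers the Hyper-V VSS writer with Windows Server Backup:

reg add "HKLM\Software\Microsoft\Windows NT\CurrentVersion\WindowsServerBackup\Application Support\{66841CD4-6DED-4F4B-8F17-FD23F8DDC3DE}" /v "Application Identifier" /t REG_SZ /d "Hyper-V"

You can check that the writer itself is present with vssadmin list writers; the registry entry simply tells Windows Server Backup to use it.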

Second, there are actually two different snapshot mechanisms: one operating entirely on the host, called “saved state”, and one operating in conjunction with integration services in the VM, called “Child VM snapshot”, according to the most detailed official article on the subject. This feature is shown in Hyper-V settings as Backup integration. For the Child VM snapshot to work, there is a further limitation:

The Snapshot File Location for the VM is set to be the same volume in the host operating system as the VHD files for the VM.

I am not sure what happens if you have VHDs in several locations, as you might do if you wanted to optimize performance by having VHDs on different physical disks. [Update – apparently in Windows Server 2008 R2 the .AVHD snapshot files are always in the same location as their parent VHD, and this is on a per-VHD basis, so it should not be a problem in R2].

Third, there is a question mark about whether either method works for VMs running Active Directory:

Active Directory does not support any method that restores a snapshot of the operating system or the volume the operating system resides on. This kind of method causes an update sequence number (USN) rollback. When a USN rollback occurs, the replication partners of the incorrectly restored domain controller may have inconsistent objects in their Active Directory databases. In this situation, you cannot make these objects consistent.

I am also not clear whether archive bits are cleared in the child VM if you use -vssFull along with the Child VM snapshot. If so, you should definitely not use -vssFull in the host if you are also backing up from the guest. I am trying to get clarification on these points.

These are more questions than I would like for something as critical as backup and restore of VMs. For peace of mind you should either shut the VMs down first, which is unacceptable in most production environments, or else back up from the guest instead of, or in addition to, backing up from the host. I’ll update this post when I get further information.

Data Protection Manager

Finally, note that in grown-up Microsoft environments you are meant to use Data Protection Manager (DPM) rather than fiddling around with wbadmin. There is even a white paper on how this integrates with Hyper-V. Ultimately though this is also based on VSS so some of the same issues may apply. However, if you are using the free Hyper-V Server 2008 R2, you are probably not in the market for DPM and its additional hardware, software and licensing requirements.

Bare-metal recovery of a Hyper-V virtual machine

Over the weekend I ran some test restores of Microsoft Hyper-V virtual machines. You can restore a Hyper-V host, complete with its VMs, using the same technique as with any Windows server; but my main focus was on a different scenario. Let’s say you have a Server 2008 VM that has been backed up from the guest using Windows Server Backup. In my case, the backup had been made to a VHD mounted for that purpose. Now the server has been stolen and all you have is your backup. How do you restore the VM?

In principle you can do a bare-metal restore in the same way as with a physical machine. Configure the VM as closely as possible to how it was before, attach the backup, boot the VM from the Server 2008 install media, and perform a system recovery.

Unfortunately this doesn’t work if your VM uses VHDs attached to the virtual SCSI controller. The reason is that the recovery console cannot see the SCSI-attached drives. This is possibly related to the Hyper-V limitation that you cannot boot from a virtual SCSI drive.

The workaround I found was first to attach the backup VHD to the virtual IDE controller (not SCSI), so that the recovery console can see it; then to do a system recovery of the IDE drives, which will include the C drive; then to shut down the VM (before the restart), mount both the backup VHD and the SCSI-attached VHDs on the host using diskpart, and use wbadmin to restore each individual volume; and finally to detach the VHDs and restart the VM.
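Here is a sketch of the host-side stage, with hypothetical paths, drive letters and backup version; diskpart’s native VHD commands need Server 2008 R2, and you repeat the attach for each VHD:

diskpart
select vdisk file="d:\vms\data.vhd"
attach vdisk
exit

wbadmin get versions -backupTarget:f:
wbadmin start recovery -version:06/14/2010-21:00 -itemType:Volume -items:e: -backupTarget:f:

Here F is the mounted backup VHD and E the SCSI-attached data volume being restored.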

It worked. One issue I noticed though is that the network adapter in the restored VM was considered different to the one in the original VM, even though I applied the same MAC address. Not a great inconvenience, but it meant fixing networking as the old settings were attached to the NIC that was now missing.

I’ve appended the details to my post on How to backup Small Business Server 2008 on Hyper-V.

USB devices and Hyper-V – remote client yes, host no

At TechEd in New Orleans, Microsoft has announced that the version of Hyper-V in Windows Server 2008 R2 Service Pack 1 – a typical Microsoft mouthful – will include support for generic USB devices. That is, you can remote into a Hyper-V VM, plug in your USB camera, scanner or bar-code reader, and it will be re-directed to the remote desktop.

It’s a welcome feature, and removes one of the annoyances of working on a remote desktop. However, there is another scenario that Microsoft has not addressed, which is support for USB devices on the Hyper-V host. For example, USB drives are often used for backup, but if you plug a USB drive into a Hyper-V host, it is not easy to use it for backup from within a Hyper-V guest. Well, there are ways, but you are not going to like any of them – mount the drive in the host, mark it as offline, attach it to the guest using pass-through, and so on.
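For what it’s worth, the host-side part of the pass-through approach is a couple of diskpart commands (disk 2 is an assumption – check the output of list disk first); the drive can then be attached to the guest as a physical disk in its settings:

diskpart
list disk
select disk 2
offline disk
exit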

So will Hyper-V ever support USB devices in the host as well as on remote clients? I asked about this, and was told that it is not a priority, because although the topic comes up regularly, it is “not in the top ten feature requests”.

That’s a shame. Even if Microsoft supported only USB storage devices, it would help significantly with tasks like backing up Small Business Server when run on a virtual machine.

VMforce: Salesforce partners VMware to run Java in the cloud

Salesforce and VMware have announced VMforce, a new cloud platform for enterprise applications. You will be able to deploy Java applications to VMforce, where they will run on a virtual platform provided by VMware. There will be no direct JDBC database access on the platform itself, but it will support the Java Persistence API, with objects stored on Force.com. Applications will have full access to the Salesforce CRM platform, including new collaboration features such as Chatter, as well as standard Java Enterprise Edition features provided by Tomcat and the Spring framework; SpringSource is a division of VMware.

A developer preview will be available in the second half of 2010; no date is yet announced for the final release.

There are a couple of different ways to look at this announcement. From the perspective of a Force.com developer, it means that full Java is now available alongside the existing Apex language. That will make it easier to port code and use existing skills. From the perspective of a Java developer looking for a hosted deployment platform, it means another strong contender alongside others such as Amazon’s Elastic Compute Cloud (EC2).

The trade-off is that with Amazon EC2 you have pretty much full control over what you deploy on Amazon’s servers. VMforce is a more restricted platform; you will not be able to install what you like, but have to run on what is provided. The advantage is that more of the management burden is lifted; VMforce will even handle backup.

I could not get any information about pricing, or even about how usage of the new platform will be charged. I suspect it will compete more on quality than on price. However I was told that smooth scalability is a key goal.

More information here.

VMware: the cloud is private

I attended this morning’s VMware roundtable, debating the rather silly proposition that IT should be removed from the boardroom agenda. To be fair, even VMware does not really believe this, but is arguing that its virtualisation technology makes IT service provision so trouble-free that the board can focus on IT as it advances their business, rather than just keeping the show on the road. I don’t believe that either, though no doubt it can help. It was nevertheless interesting to hear Jim Fennell, Information Systems Manager for the Lagan Group, explain how his virtual infrastructure allowed him to run up servers or applications such as SharePoint on demand, with internal charges based on usage.

The very definition of a private cloud, in fact; and this chimed nicely with some other research I’ve been doing on cloud security. Current cloud computing models are flawed, for the following reasons among others.

So-called private clouds do not relieve organisations of the IT burden, though they may simplify it, and do not fully yield the benefits of multi-tenancy, elasticity and economies of scale except perhaps in the case of the largest enterprises, or governments.

On the other hand, public clouds are also flawed, because the customer retains legal responsibility for their data but loses operational responsibility. That split surfaces in debates about SLAs, legal liability and consequential loss, compliance with regulations concerning data location and segregation, and conflicts over whether customers should have the right to audit their cloud provider’s technology and security practices. The public cloud is not yet mature; it lacks the standards and regulatory frameworks that it needs, though work is being done.

VMware may not mind about this, because it has positioned itself as the first choice for technology to drive private clouds. I talked to Chief Operating Officer Tod Nielsen (formerly of Microsoft) after the event, and he told me that the majority of enquiries from potential customers relate to setting up private cloud infrastructures.

Another big growth area is desktop virtualisation, where customers with thousands of aging PCs running Windows XP want their next desktop upgrade to be their last, and see virtual desktops as a route to that goal.

I am intrigued by the desktop issue, since maintaining desktop PCs remains a significant maintenance challenge. The rise of non-PC devices is also relevant here. Isn’t the future more in pure web applications – perhaps enhanced with RIA technologies like Flash and Silverlight – rather than in virtual desktops? Nielsen said that the huge numbers of legacy applications out there made this impossible in the near future.

Nevertheless, you can see how VMware is planning for more of a pure web play longer term, with acquisitions such as SpringSource, the company behind the Spring Java application framework. One idea that was mentioned during the roundtable was a sort of server app market, where you could plug pre-built applications into VMware’s ESX platform.

Finally, one side-effect of increasing desktop virtualisation, in Nielsen’s view, is that more users will choose to run Apple Macs as the host. He also says that the number one customer request, in the weeks since Apple’s announcement, is for iPad support for their virtual clients. Make of that what you will.

New HP and Microsoft agreement commits $50 million less than similar 2006 deal

I’ve held back comment on the much-hyped HP and Microsoft three-year deal announced on Wednesday mainly because I’ve been uncertain of its significance, if any. It didn’t help that the press release was particularly opaque, full of words with many syllables but little meaning. I received the release minutes before the conference call, during which most of us were asking the same thing: how is this any different from what HP and Microsoft have always done?

It’s fun to compare and contrast with this HP and Microsoft release from December 2006 – three years ago:

We’ve agreed to a three-year, US$300 million investment between our two companies, and a very aggressive go-to-market program on top of that. What you’ll see us do is bring these solutions to the marketplace in a very aggressive way, and go after our customers with something that we think is quite unique in what it can do to change the way people work.

$300 million for three years in 2006; $250 million for three years in 2010. Hmm, not exactly the new breakthrough partnership which has been billed. Look here for what the press release should have said: it’s mainly common-sense cooperation and joint marketing.

Still, I did have a question for CEOs Mark Hurd and Steve Ballmer: what level of cloud focus is there in this new partnership? It drew these remarks from Ballmer:

The fact that our two companies are very directed at the cloud is the driving force behind this deal at this time. The cloud really means a modern architecture for how you build and deploy applications. If you build and deploy them to our service that we operate that’s called Windows Azure. If a customer deploys them inside their own data centre or some other hosted environment, they need a stack on which to build, hardware software and services, that instances the same application model that we’ll have on Windows Azure. I think of it as the private cloud version of Windows Azure.

That thing is going to be an integrated stack from the hardware, the virtualization layer, the management layer and the app model. It’s on that that we are focusing the technical collaboration here … we at Microsoft need to evangelize that same application model whether you choose to host in the cloud or on your own premises. So in a sense this is entirely cloud motivated.

Hurd added his insistence that this is not just more of the same:

I would not want you to write that it sounds a lot like what Microsoft and HP have been talking about for years. This is the deepest level of collaboration and integration and technical work we’ve done that I’m aware of … it’s a different thing than what you’ve seen before. I guarantee Steve and I would not be on this phone call if this was just another press release from HP and Microsoft.

Well, you be the judge.

I did think Ballmer’s answer was interesting though, in that it shows how much Microsoft (and no doubt HP) are pinning their hopes on the private cloud concept. The term “private cloud” is a dubious one, in that some of the defining characteristics of cloud – exporting your infrastructure, multi-tenancy, shifting the maintenance burden to a third-party – are simply not delivered by a private cloud. That said, in a large organisation they might look similar to most users.

I can’t shake off the thought that since HP wants to carry on selling us servers, and Microsoft wants to carry on selling us licences for Windows and Office, the two are engaged in disguised cloud avoidance. Take Office Web Apps in Office 2010 for example: good enough to claim the online document editing feature; bad enough to keep us using locally installed Office.

That will not work long-term and we will see increasing emphasis on Microsoft’s hosted offerings, which means HP will sell fewer servers. Maybe that’s why the new deal is for a few dollars less than the old one.

The virtual Small Business Server 2008 backup problem

Microsoft’s Small Business Server 2008 is supported running as a Hyper-V guest; but there’s one nasty problem. The built-in backup expects external USB drives, and a Hyper-V guest does not have direct access to USB.

Here’s a solution I’ve come up with. It lets you use the built-in backup wizard, and lets users simply attach a new external USB drive each day as they expect. It is not perfect, since it requires copying the entire backup afresh to the USB drive, rather than doing a differential backup – though SBS itself still does a differential backup. It also requires Hyper-V 2008 R2, which means struggling with Server Core if you use the free version. Still, it’s better than any solution I’ve seen from Microsoft.
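In outline – and this is the gist rather than the full recipe – the SBS guest backs up to a VHD, and a host-side task then copies that file to whatever USB drive is attached. The copy step might be as simple as this, with hypothetical paths:

robocopy c:\vms\backup e:\ sbsbackup.vhd /z /np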

Hyper-V VMs can fail to start if the host is copying a large file

I have a couple of Microsoft Hyper-V servers I’ve been working with, one of which has 20GB RAM. That server had two virtual machine guests, one with 12GB allocated and another with 2GB. I created a third VM with 2GB and started it up. It worked initially, but on rebooting the VM I got the message:

Failed to create partition: Insufficient system resources exist to complete the requested service. (0x800705AA)

This was puzzling. Most people consider that the Hyper-V host does not need very much RAM for its own operations – Brien Posey suggests 2GB, for example – and I am running the stripped-down Hyper-V 2008 R2. With 16GB allocated to guests, the remaining 4GB should have been more than enough.

After chasing round for a bit, and wondering if it was something to do with NUMA, or WMIPrvse.exe gobbling all the RAM, I found out the reason. At the time I was trying to start the VM, the Hyper-V host was copying a large file (a .VHD) to an external drive for backup. In order to perform this action, the host was using a large amount of RAM for a temporary cache; and was apparently unable to release it for a VM to use until the copy completed.

In some circumstances this could be unfortunate. If you had a scheduled task in the host for copying a large file at the same moment that a guest needed a restart, perhaps triggered by Windows Update, the guest might fail to restart.
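One mitigation worth considering – an assumption on my part, not something I have verified on this server – is to do large copies with unbuffered I/O, so the host never builds that huge cache. On Server 2008 R2, xcopy has a /J switch for exactly this:

xcopy c:\vms\server1.vhd e:\backup\ /j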

Something worth knowing if you work with Hyper-V.

Wrestling with Windows Server Core

Windows Server Core is a stripped-down build of Windows Server 2008 which lacks most of the GUI. It’s a great idea: more lightweight, less to go wrong, and as the Unix folk have always said, who needs a GUI on a server anyway?

That said, the Windows culture has always assumed the presence of the GUI and most of the tools and utilities out there assume it. This means that you can expect some extra friction in managing your Server Core installation.

I recently attended a couple of Microsoft conferences and one of the things I was trying gently to discover was the extent of the take-up for Server Core, and to what extent hardware vendors such as HP had taken it to heart and were no longer assuming that all their Windows server customers could use GUI tools. I didn’t come away with any useful information on the subject, though perhaps that in itself says something.

I’ve been using Hyper-V 2008 R2, which is in effect Server Core with just one role, and a recent experience illustrates my point. After considerable effort (and help from semi-official scripts) I managed to get Hyper-V Manager working remotely, in order to create and manage the virtual machines. However, I ran into an annoying problem. There are three physical NICs in this box, and the idea was to have one for the host, and two others for virtual switches (for use by guests). Somehow, probably as a result of an early experiment, the virtual switch configuration got slightly messed up. I only had one virtual switch, and when I tried to create a second one on an otherwise unused NIC, I got the message:

Cannot bind to [Network connection name] because it is already bound to another virtual network.

That wasn’t the case as far as I could see; but that was no consolation.

The problem led me to this blog post which says that, if you are lucky, all you need to do to resolve it is to remove the binding to Microsoft Virtual Network Switch Protocol from the affected network connection. To do this, just open Local Area Connection Properties … but wait, this is Server Core, I don’t have a Local Area Connection Properties dialog.

Luckily, the guy has thought of that and says you can use the command-line tool nvspbind.exe instead. Great. But where is it? It has a page on MSDN which documents the tool, authored by a member of the Hyper-V team called Keith Mange, but there is no download. How infuriating can you get? There are a few desperate requests for a download link, and a comment “Unfortunately the nvspbind is no longer available for download”, and that is that.

All was not lost. I poked around Mange’s other downloads on MSDN and found two other utilities, nvspscrub.js and nvspinfo.js. Nvspscrub.js is a tool of last resort: it removes all the Virtual Switch bindings and deletes them from Hyper-V. I did not want that, because my first virtual switch was working fine. However, I figured I could modify Nvspscrub.js just to delete the one that was troublesome. I modified the script, deleted most of the code that modified the system, and added an if condition so that only the device with the GUID which I specified would be unbound.

It worked first time, and I was able to create my second virtual switch.
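As an aside, for anyone attempting the same surgery: the scripts run under Windows Script Host from the Server Core command prompt, and nvspinfo.js will list the current bindings and their GUIDs:

cscript //nologo nvspinfo.js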

Still, the fact that this problem is known, and that the only documented cure (that I can find) is in a blog post which refers to a tool that has been pulled, suggests to me that this stuff is not yet mainstream.

Love and hate for Microsoft Small Business Server

I’ve just completed a migration from Small Business Server 2003 to 2008. I’ve worked on and off with SBS since version 4.0, and have mixed feelings about the product. It has always been great value, but massive complexity lurks not far beneath its simple wizards.

The difficulty of migration is probably its worst feature: it chugs along for a few years, gradually outgrowing its hardware, and then when the time comes for a new server customers are faced with either starting from scratch with a clean install – setting up new accounts, importing mailboxes, removing every client machine from the old domain and rejoining it to a new one – or else a painful migration.

I took the latter route, and also decided to go virtual on Hyper-V Server 2008 R2. In most important respects it went smoothly: Active Directory behaved itself, and the Exchange mailboxes all came over cleanly.

Still, several things struck me during the migration. Microsoft has a handy 79-page step-by-step document, but anyone who thinks that carefully following the steps will guarantee success will be disappointed. There are always surprises. The document does not properly cover DHCP, for example. The migration is surprisingly messy in places. The new SBS has different sets of permissions than the old one, and after the upgrade you have to somehow merge the two. The migration is not fully automated, and there is plenty of manual editing of various settings.

Even migrating SBS 2008 to SBS 2008, for a new server, has brought forth a 58-page document from Microsoft.

Then there are the errors to deal with. There are always errors. You have to figure out which ones are significant and how to fix them. I would like to meet a Windows admin who could look me in the eye and say they have no errors in their event log.

Things got bad when applying all the updates to bring the server up to date. At one point SharePoint broke completely and could not contact its configuration database. There’s also the mystery of security update KB967723, which Windows Update installed insisting that it was “important”, and which then generated the following logged message 79 times in the space of a few seconds:

Windows Servicing identified that package KB967723(Security Update) is not applicable for this system

Nevertheless, a little tender care and attention got the system into reasonable shape. It is even smart enough to change Outlook settings to the new server automatically. A great feature of the migration is that email flow is never interrupted.

One problem: although running SBS virtual is a supported configuration, the built-in backup system doesn’t handle it well, because it assumes use of external USB drives which Hyper-V guests cannot access directly. There are many solutions, none perfect, and it appears that Microsoft did not think this one through.

That said, the virtual solution has some inherent advantages for backup and restore, the main one being that you can guarantee identical hardware for disaster recovery. If you shut the guests down and back up the host, or export the VM, you have a reliable system backup. You can also back up a running guest from the host, though in my experience this is more fragile.

Migrating an SBS system is actually harder than working with grown-up Windows systems on separate servers (or virtual servers) because it all has to be done together. I reckon Microsoft could do a better job with the tools; but it is a complex process with multiple potential points of failure.

The experience overall does nothing to shake my view that cloud-based services are the future. I would like to see SBS become a kind of smart cache for cloud storage and services, rather than being a local all-or-nothing box that can absorb large amounts of troubleshooting time. Microsoft is going to lose a lot of this SME business, because it has ploughed on with more of the same rather than helping its existing SBS customers to move on.

Nevertheless, if you have made the decision to run your own email and collaboration services, rather than being at the mercy of a hosted service, SBS 2008 does it all.