Category Archives: virtualization

Migrating to Hyper-V Server 2008 R2

I have a test setup in my office which runs mostly on Hyper-V. It is a kind of home-brew small business server, with Exchange, ISA and SharePoint all running on separate VMs. I’ve followed Microsoft’s advice and kept Active Directory on a separate physical server. Until today, Hyper-V itself was running on Server 2008.

I’m reviewing Hyper-V Server 2008 R2, so I figured it would be interesting to migrate the VMs. I attached an external USB drive, shut down the VMs and exported them. Next, I verified that there was nothing else I needed to preserve on that machine, and set about installing Hyper-V Server 2008 R2 from scratch.

Aside: when I first set this up I broke the rules by having Active Directory on the Hyper-V host. That worked well enough in my small setup; but I realised that you lose some of the benefit of virtualisation if you have anything of value on the host, so I moved Active Directory to a separate box.

I wish I could tell you that the migration went smoothly. Actually, from the Hyper-V perspective it did go smoothly. However, I had an ordeal with my server, a cheapie HP ML110 G5. The driver for the embedded Adaptec SATA RAID did not work with Hyper-V Server 2008 R2, and I couldn’t find an update, so I disabled the RAID. The driver for my second network card also didn’t work, and I had to replace the card. Finally, my efforts at updating the BIOS had landed me with a known problem on this server: the fans staying at maximum speed and deafening volume. Fortunately I found this thread which gives a fix: installing upgraded firmware for HP’s Lights-Out Remote Management as well. Blissful (near) silence.

Once I’d got the operating system installed successfully, bringing the VMs back online was a snap. I used the console menu to join the machine to the domain, set up remote management, and configure the network cards. Next, I copied the exported VMs to the new server, imported them using Hyper-V Manager running on Windows 7, and shortly afterwards everything was up and running again. I did get a warning logged about the integration services being out of date, but these were easy to upgrade. I’m hoping to see some performance benefit, since my .vhd virtual drives are dynamic, and these are meant to be much faster in the R2 update.
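
For anyone who prefers to script those setup steps rather than use the console menu, the underlying commands are standard ones. A rough sketch from a PowerShell or command prompt on the host – the server name, domain, account and addresses are placeholders, not my actual setup:

  # Join the host to the domain (prompts for the password)
  netdom join HYPERV01 /domain:contoso.local /userd:contoso\administrator /passwordd:*

  # Open the usual remote-management firewall rule group
  netsh advfirewall firewall set rule group="Remote Administration" new enable=yes

  # Give the LAN-facing virtual NIC a static address
  netsh interface ipv4 set address "Local Area Connection" static 192.168.0.250 255.255.255.0 192.168.0.1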

Although I’m impressed with Hyper-V itself, some aspects of Hyper-V Server 2008 R2 are lacking. Mostly this is to do with Server Core. Shipping a cut-down server OS without a GUI is a great idea in itself, but Microsoft needs to make it either easy to manage from the command line or easy to hook up to remote tools. Neither is the case. If you want to manage Hyper-V from the command line you need this semi-official management library, which seems to be the personal project of technical evangelist James O’Neill. Great work, but you would have thought it would be built into the product.
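
To give a flavour of the library: once it is installed on a management machine, working with VMs looks something like the snippet below. I’m quoting cmdlet names from memory of the CodePlex documentation, so treat them as illustrative rather than gospel; the server and VM names are placeholders.

  # Load James O'Neill's Hyper-V management library (module name may vary by release)
  Import-Module HyperV

  # List the VMs on a remote Hyper-V Server box and start one of them
  Get-VM -Server HYPERV01
  Start-VM -VM "Exchange 2007" -Server HYPERV01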

As for remote tools, the tools themselves exist, but getting the permissions right is such an arcane process that another dedicated Microsoft individual, program manager John Howard, wrote a script to make it possible for humans. It is not so bad with domain-joined hosts like mine, but even then I’ve had strange errors. I haven’t managed to get Device Manager working remotely yet – “Access denied” – and sometimes I get a Kerberos error, “network path not found”.
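
For reference, the HVRemote script is driven along these lines (going from its documentation; contoso\tim and HYPERV01 are placeholders):

  # On the Hyper-V server: grant a domain user remote management rights
  cscript hvremote.wsf /add:contoso\tim

  # On the Windows 7 client: sanity-check the configuration against the server
  cscript hvremote.wsf /show /target:HYPERV01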

Fortunately there’s only occasional need to access the host once it is up and running; it seems very stable and I doubt it will require much attention.

Fixing a VirtualBox Windows XP blue screen

The great thing about virtualisation is that virtualised hardware stays the same, so you don’t get problems when you move to new hardware, right? Unfortunately, when I ran up an XP image on VirtualBox, newly installed on Vista 64, I got this blue screen, a 0x0000007B stop error:

The problem was that VirtualBox must have changed its default virtual IDE controller since I first set up this VM. Windows hates having the storage controller changed – though there are ways to fix it. Much easier, though, to change the IDE Controller setting in VirtualBox from PIIX4 to PIIX3:

This problem would likely not have occurred if I had preserved the .xml file which defines the virtual machine settings. Unfortunately I only preserved the hard drive .vdi file, and used it in a new virtual machine. So VirtualBox is working as designed. Still, an easy fix.
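
Incidentally, the controller type can also be changed from the command line with VBoxManage. In recent VirtualBox releases the command is along these lines; the VM and controller names are whatever your setup uses, and older releases exposed this through a modifyvm switch instead, so check VBoxManage --help for your version:

  # Switch the virtual IDE controller back to PIIX3 (run with the VM powered off)
  VBoxManage storagectl "Windows XP" --name "IDE Controller" --controller PIIX3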

Windows 7 XP Mode dialogs confuse virtual with real

I was impressed with the integration between XP Mode virtual applications and native Windows 7, as I explained in this review. I’d suggest though that Microsoft needs to do better in distinguishing dialogs that come from virtual XP from dialogs displayed by native Windows 7. This may seem perverse – integration is about disguising the difference, not accentuating it. But let me give you an example of where this is a problem. I started Access 2000 as a virtual application, which worked fine, and behind the scenes Virtual XP kicked into life. Then I saw this dialog on the Windows 7 desktop:

This dialog does not mention Windows XP; it just says Windows. How am I to know that it relates to a virtual instance of XP, and not to Windows 7 itself? Well, if I am awake I might spot that the window close gadget is XP-style, and not the Windows 7 style, which is wider and has a smaller X. I am sure that is too subtle for many users.

Here is another example:

In this case, Windows 7 has popped up a notification saying my computer might be at risk, on the arguably dubious grounds that no antivirus software is installed. The balloon has (Remote) in brackets. So what does that mean? Actually, it means the virtual instance of XP, but the word Remote is not a clear way of saying so.

If I click the balloon, I get the XP security center, with no indication that it relates to virtual XP rather than to Windows 7 directly.

I’d like to see more clarity, even if it makes integration a tiny bit less seamless.

Cloud computing means exporting your IT infrastructure to the Internet

I’ve just attended my first CloudCamp unconference, held during QCon London. We ended up debating how you would explain cloud computing to a non-technical audience. The problem is that different people mean different things by the term.

The consumer perspective is to do with running applications and storing your stuff on the Internet. Gmail, Google Docs and SkyDrive are all examples of cloud computing from the consumer perspective. Somehow we brought BBC iPlayer, Facebook and YouTube into the mix as well. Some think that the home computer will disappear, replaced by Internet-connected appliances and devices.

The small business and entrepreneur’s perspective is to do with low start-up costs and low barriers to entry. Anyone can run a web site, take payments with PayPal or Amazon Payment Services or Google Checkout, and use cloud services for email and collaboration.

The larger business or enterprise perspective is to do with exporting IT infrastructure to the Internet. Close your data centre, sell your servers, and move your computing to virtual servers running on Amazon’s Elastic Compute Cloud or some such. There is not much of this happening as far as I can see, though we are seeing virtualization (which might be a first step), and some take-up for software-as-a-service (SaaS) applications like Salesforce.com.

I suppose it is appropriate that the cloud term is fluffy. To some it is synonymous with the Internet; to others it means SaaS applications; to others it means virtual servers running who knows what; to others it means a hosted application platform (platform-as-a-service, or PaaS).

The problem with vague terms is that they make discussion difficult.

My favourite usage: cloud computing means exporting IT infrastructure to the Internet.

Hyper-V disk I/O: performance of dynamic vs fixed virtual hard disks

The dynamic virtual hard drive is one of the best things about virtualization. It is like Dr Who’s Tardis. The virtualized OS thinks it has plenty of space, while on the host machine your 128GB virtual drive might occupy just 4 or 5GB – this is typical of the test VMs I set up, running say Server 2008 and a server application or two.

Trouble is, there’s a performance penalty. I first came across this with a hilariously slow Ubuntu install, where the problem is made worse by the lack of integration services, the utilities and drivers that install into the guest to enable smooth interaction with the host.

As an experiment, I created a second Ubuntu VM using a 30GB fixed-size drive. Better? Yes, much better. Here are the figures on my admittedly slow low-end HP Xeon server:

Copy an 891MB file:

  • Ubuntu 8.10 on 127GB dynamic drive with 1GB RAM: 6 min 45 secs
  • Ubuntu 8.10 on 30GB fixed drive with 1GB RAM: 3 min 15 secs

As a further test, I copied the same file in Server 2008:

  • Server 2008 on 127GB dynamic drive with 2GB RAM: 5 min 55 secs

My immediate thoughts: you would be crazy to use VMs in production with dynamic drives. Always use fixed drives. You can still expand them manually if necessary. Note that Hyper-V defaults to dynamic drives.

Still, these tests are not extensive or rigorous; I’d be interested in other results. I’ll also be creating my next Server 2008 VM on a fixed drive and will repeat the test there.
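
The test itself is nothing sophisticated: copy a big file and time it. In a Windows guest, something like this will do the timing (the paths are just examples):

  # Time a large file copy and report the elapsed seconds
  (Measure-Command { Copy-Item "C:\Test\bigfile.iso" "D:\bigfile.iso" }).TotalSeconds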

I’ve posted some further Hyper-V tips and gotchas here.

How Hyper-V can seem to lose your data

I’m sure it can really lose your data as well, but in this case “seem” is the appropriate word. I’ve been messing around with Hyper-V and one of my test machines is a SharePoint server. I started this up and found I could not access it over the network. On further investigation, it turned out to be a broken trust relationship with the Domain Controller. In other words, on attempting to log on with domain credentials I got the message:

The trust relationship between this workstation and the primary domain failed

The official advice when confronted with this problem is to remove the machine from the domain and re-join it, creating a new computer account. I did so, logged on, and was disappointed to discover that SharePoint was now empty. Worse still, even checking the SQL Server databases directly did not uncover the missing content. All my documents had vanished.

It turned out that I had done the wrong thing. What had really happened was that Hyper-V had been saving my changes on that virtual hard drive to a “differencing disk”, a file with an .avhd extension. This is part of the Hyper-V snapshot system. Somehow, Hyper-V had forgotten the differencing disk, and started up my SharePoint VM using the last fully merged copy of the drive, which was over a month old. My drive had gone back in time, so the data had gone.

The solution was to restore the old parent .vhd from backup, and then manually merge it with the differencing file. Step by step instructions are here. Since I had deleted the original computer account, I then had to remove and rejoin the machine to the domain a second time. All was well and my data reappeared.

The bug here is how Hyper-V managed to start with an old version of the virtual hard drive in the first place. I can imagine this causing panic if it occurs in production – and once you start writing new, important data to the old version you are really in trouble. I was lucky that the discrepancy was severe enough that Active Directory complained.

Virtualization may be wonderful; but it also introduces new problems of its own.

The other lesson is that those .vhd files in C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks do not necessarily contain your latest data. You also need to consider the .avhd files stored handily at C:\ProgramData\Microsoft\Windows\Hyper-V\Snapshots.
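
A quick sanity check before trusting a .vhd is to compare timestamps across both locations: if an .avhd is newer than its parent, the parent on its own is not the whole story. Something like this, using the default paths (adjust if you have moved them):

  # List virtual hard disks and snapshot differencing disks, newest first
  Get-ChildItem "C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks",
                "C:\ProgramData\Microsoft\Windows\Hyper-V\Snapshots" -Recurse -Include *.vhd,*.avhd |
      Sort-Object LastWriteTime -Descending |
      Select-Object LastWriteTime, Length, FullName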

Mixing Hyper-V, Domain Controller and DHCP server

My one-box Windows server infrastructure is working fine, but I ran into a little problem with DHCP. I’d decided to have the host operating system run not only Hyper-V, but also domain services, including Active Directory, DNS and DHCP. I’m not sure this is best practice. Sander Berkouwer has a useful couple of posts in which he explains first that making the host OS a domain controller is poor design:

From an architectural point of view this is not a desired configuration. From this point of view you want to separate the virtualization and platforms from the services and applications. This way you’re not bound to a virtualization product, a platform, certain services or applications. Microsoft’s high horse from an architectural point of view is the One Server, One Server Role thought, in which one server role per server platform gets deployed. No need for a WINS server anymore? Simply shut it down…

Next, he goes on to explain the pitfalls of having your DC in a VM:

Virtualizing a Domain Controller reintroduces possibilities to mess up the Domain Controller in ways most of the Directory Services Most Valuable Professionals (MVPs) and other Active Directory enthusiasts have been fixing since the dawn of Active Directory.

He talks about problems with time synchronization, backup and restore, saved state (don’t do it), and possible replication errors. His preference after all that:

In a Hyper-V environment I recommend placing one Domain Controller per domain outside of your virtualized platform and making this Domain Controller a Global Catalog. (especially in environments with Microsoft Exchange).

Sounds good, except that for a tiny network there are a couple of other factors. First, to avoid running multiple servers all hungry for power. Second, to make best use of limited resources on a single box. That means either risking running a Primary Domain Controller (PDC) on a VM (perhaps with the strange scenario of having the host OS joined to the domain controlled by one of its VMs), or risking making the host OS the PDC. I’ve opted for the latter for the moment, though it would be fairly easy to change course. I figure it could be good to have a VM as a backup domain controller for disaster recovery in the scenario where the host OS would not restore, but the VMs would – belt and braces within the confines of one server.

One of the essential services on a network is DHCP, which assigns IP addresses to computers. There must be one and only one on the network (unless you use static addresses everywhere, which I hate). So I disabled the existing DHCP server, and added the DHCP server role to the new server.

It was not happy. No IP addresses were served, and the error logged was 1041:

The DHCP service is not servicing any DHCPv4 clients because none of the active network interfaces have statically configured IPv4 addresses, or there are no active interfaces.

Now, this box has two real NICs (one for use by ISA), which means four virtual NICs after Hyper-V is installed. The only one that the DHCP server should see is the virtual NIC for the LAN, which is configured with a static address. So why the error?

I’m not the first to run into this problem. Various solutions are proposed, including fitting an additional NIC just for DHCP. However, this one worked for me.

I simply changed the mask on the desired interface from 255.255.255.0 to 255.255.0.0, saved it, then changed it back. Suddenly the interface appeared in the DHCP bindings.

Strange, I know. The configuration afterwards was the same as before, but the DHCP server now runs fine. Looks like a bug to me.
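
For what it’s worth, the same nudge can be applied from the command line rather than the GUI. The connection name and addresses below are placeholders for your own static configuration:

  # Temporarily widen the mask on the DHCP-serving interface...
  netsh interface ipv4 set address "Local Area Connection 3" static 192.168.0.2 255.255.0.0 192.168.0.1

  # ...then put the original mask back; the interface should now show up in the bindings
  netsh interface ipv4 set address "Local Area Connection 3" static 192.168.0.2 255.255.255.0 192.168.0.1

  # Restart the DHCP Server service for good measure
  Restart-Service DHCPServer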

Hands on with Hyper-V: it’s brilliant

I have just installed an entire Windows server setup on a single cheap box. It goes like this. Take one budget server stuffed with 8GB RAM and two network cards. Install Server 2008 with the Hyper-V, Active Directory Domain Services, DNS and DHCP roles. Install Server 2003 on a 1GB Hyper-V VM for ISA 2006. Install Server 2008 on a 4GB VM for Exchange 2007. Presto: it’s another take on Small Business Server, except that you don’t get all the wizards; but you do get the flexibility of multiple servers, and you do still have ISA (which is missing from SBS 2008).

Can ISA really secure the network in a VM (including the machine on which it is hosted)? A separate physical box would be better practice. On the other hand, Hyper-V has a neat approach to network cards. When you install Hyper-V, all bindings are removed from the “real” network card and even the host system uses a virtual network card. Hence your two NICs become four:

As you may be able to see if you squint at the image, I’ve disabled Local Area Connection 4, which is the virtual NIC for the host PC. Local Area Connection 2 represents the real NIC and is bound only to “Microsoft Virtual Network Switch Protocol”.

This enables the VM running ISA to use this as its external NIC. It strikes me as a reasonable arrangement, surely no worse than SBS 2003 which runs ISA and all your other applications on a single instance of the OS.

Hyper-V lets you set start-up and shut-down actions for the servers it is hosting. I’ve set the ISA box to start up first, with the Exchange box following on after a delay. I’ve also set Hyper-V to shut down the servers cleanly (through integration services installed into the hosted operating systems) rather than saving their state; I may be wrong but this seems more robust to me.

Even with everything running, the system is snoozing. I’m not sure that Exchange needs as much as 4GB on a small network; I could try cutting it down and making space for a virtual SharePoint box. Alternatively, I’m tempted to create a 1GB server to act as a secondary domain controller. The rationale for this is that disaster recovery from a VM may well be easier than from a native machine backup. The big dirty secret of backup and restore is that it only works for sure on identical hardware, which may not be available.

This arrangement has several advantages over an all-in-one Small Business Server. There’s backup and restore, as above. Troubleshooting is easier, because each major application is isolated and can be worked on separately. There’s no danger of notorious memory hogs like store.exe (part of Exchange) grabbing more than their fair share of RAM, because it is safely partitioned in its own VM. After all, Microsoft designed applications like Exchange, ISA and SharePoint to run on dedicated servers. If the business grows and you need to scale, just move a VM to another machine where it can enjoy more RAM and CPU.

I ran a backup from the host by enabling VSS backup for Hyper-V (which requires manual registry editing for some reason), attaching an external hard drive, and running Windows Server Backup. The big questions: would it restore successfully to the same hardware? To different hardware? Good questions; but I like the fact that you can mount the backup and copy individual files, including the virtual hard drives of your VMs. Of course you can also do backups from within the guest operating systems. There’s also a snag with Exchange, since a backup like this is not Exchange-aware and won’t truncate its logs, which will grow indefinitely. There are fixes; and Microsoft is said to be working on making Server 2008 backup Exchange-aware.
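
The registry edit in question is just registering the Hyper-V VSS writer with Windows Server Backup, as described in Microsoft’s KB article on the subject. A PowerShell version of the commonly published fix looks like this – do verify the GUID against the KB rather than trusting my memory:

  # Register the Hyper-V VSS writer with Windows Server Backup (GUID per the KB article)
  $key = "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\WindowsServerBackup\Application Support\{66841CD4-6DED-4F4B-8F17-FD23F8DDC3DE}"
  New-Item -Path $key -Force | Out-Null
  New-ItemProperty -Path $key -Name "Application Identifier" -Value "Hyper-V" -PropertyType String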

Would a system like this be suitable for production, as opposed to a test and development setup like mine? There are a couple of snags. One is licensing cost. I’ve not worked out the cost, but it is going to add up to a lot more than buying SBS. Another advantage of SBS is that it is fully supported as a complete system aimed at small businesses. Dealing with separate virtual servers is also more demanding than running SBS wizards for setup, though I’d argue it is actually easier for troubleshooting.

Still, this post is really about Hyper-V. I’ve found it great to work with. I had a few hassles, particularly with Server 2003 – I had to remember my Windows keyboard shortcuts until I could get SP2 and Hyper-V Integration Services installed. Once installed though, I log on to the VM using remote desktop and it behaves just like a dedicated box. The performance overhead of using a VM seems small enough not to be an issue.

I’ve found it an interesting experiment. Maybe some future SBS might be delivered like this.

Update: I tried reducing the RAM for the Exchange VM and it markedly reduced performance. 4GB seems to be the sweet spot.

Run a VM on your mobile phone

VMware has announced its Mobile Virtualization Platform (MVP) for mobile phones. The idea is that you run apps within a virtual machine on your device:

Because VMware MVP virtualizes the hardware, handset vendors can develop a software stack with an operating system and a set of applications not tied to the underlying hardware allowing them to deploy the same software stack on a wide variety of phones without worrying about the underlying hardware differences. At the same time, by isolating the device drivers from the operating system, handset vendors can further reduce porting costs by using the same drivers irrespective of the operating system deployed on the phone.

One of the benefits claimed is the ability to switch VMs, for example between home and work versions, and the ability to migrate to a new device by copying the VM from one to another.

VMware says MVP supports:

… a wide range of real-time and rich operating systems including Windows CE 5.0 and 6.0, Linux 2.6.x, Symbian 9.x, eCos, µITRON NORTi and µC/OS-II.

No mention of Apple or iPhone, of course.

Update: I got a little more info from VMware about this. It is a bare-metal hypervisor, so there is no host OS as such. The implication is that you cannot run both the VM and another OS, as on a PC; the VM in effect replaces the OS. This isn’t a product you will be able to buy for your mobile; it will come pre-installed, presuming VMware is successful in marketing it to mobile phone manufacturers and telecom providers.

The technology comes from a company called Trango, which VMware has acquired. There is a bit more information about the product on Trango’s site.

Windows comes to Amazon’s cloud

You will soon be able to run Windows on Amazon’s Elastic Compute Cloud (EC2), in a fully supported manner. Jeff Barr says this is scheduled for public release by the end of 2008:

The 32 and 64 bit versions of Windows Server will be available and will be able to use all existing EC2 features such as Elastic IP Addresses, Availability Zones, and the Elastic Block Store. You’ll be able to call any of the other Amazon Web Services from your application. You will, for example, be able to use the Amazon Simple Queue Service to glue cross-platform applications together.

This opens up EC2 to a substantial new group of potential customers. They will be asking, of course, if the cloud can be made reliable.

Now, how about integrating with Hyper-V and/or VMware so you could easily move your servers in and out of the cloud?