Tag Archives: hyper-v

Amazon Linux 2023 on Hyper-V

Amazon Linux 2023 came out in March 2023, somewhat late given that it was originally called Amazon Linux 2022. It took even longer for images for running it outside AWS to arrive, and when they did it was only for VMware and KVM, even though old Amazon Linux 2 does have a Hyper-V image.

I wanted to try out AL 2023, and it made sense to do that locally rather than spend money on EC2; but my server runs Windows Hyper-V. Migrating images between hypervisors is nothing new, so I gave it a try.

  • I used the KVM image here (or the version that was available at the time).
  • I used the qemu-img disk image utility to convert the .qcow2 KVM disk image to .vhdx format. I installed qemu-img by installing QEMU for Windows, without enabling the hypervisor itself (see the example commands after this list).
  • I used the seed.iso technique to initialise the VM with an ssh key and a user with sudo rights. I found it helpful to consult the cloud-init documentation linked from that page for this.
  • In Hyper-V I created a new Generation 1 VM with 4GB RAM, set it to boot from the converted drive, and put seed.iso in the virtual DVD drive. Started it up and it worked.
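
For anyone repeating this, the conversion is a one-liner; something like this, with file names as placeholders (the subformat option keeps the .vhdx dynamically expanding):

qemu-img convert -f qcow2 -O vhdx -o subformat=dynamic al2023-kvm.qcow2 al2023.vhdx

The seed.iso itself is just a small ISO containing two files, user-data and meta-data, on a volume labelled cidata. A minimal user-data, with the user name and key as placeholders, looks like this:

#cloud-config
users:
  - name: admin
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... your-public-key

meta-data can be as little as an instance-id and local-hostname. On Linux you can build the ISO with genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data; the cloud-init NoCloud documentation has the details.
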
Amazon Linux 2023 running on Hyper-V

I guess I should add the warning that installing on Hyper-V is not supported by AWS; on the other hand, installing locally has official limitations anyway. Even if you install on KVM, the notes state that the KVM guest agent is not packaged or supported, VM hibernation is not supported, VM migration is not supported, passthrough of any device is not supported, and so on.

What about the Hyper-V integration drivers? Note that “Linux Integration Services has been added to the Linux kernel and is updated for new releases.” Running lsmod shows that the essentials are there:

The Hyper-V modules are in the kernel in Amazon Linux 2023
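
If you want to check on your own VM, the quick test is:

lsmod | grep hv

which should list, among others, hv_vmbus (the VMBus transport), hv_netvsc (synthetic networking), hv_storvsc (synthetic storage) and hv_utils (heartbeat, shutdown and time synchronisation).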

Networking worked for me without resorting to legacy network card emulation.

This exercise also taught me about the different philosophy in Amazon Linux 2023 versus Amazon Linux 2. That will be the subject of another post.

Notes from the field: virtualising an existing Windows server using UEFI and Secure Boot

Over the weekend I had the task of converting an existing Windows server running on HP RAID to a virtual machine on Hyper-V. This is a very small network with only one server, so nice and simple. I used the Sysinternals tool Disk2vhd, which converts all the drives on an existing server to a single VHD or VHDX. It is a nice tool that uses shadow copy to make a consistent snapshot.

The idea is that you then take your VHDX and make it the drive for a new VM on the target host, in my case running Server 2019. Unfortunately my new VM would not boot. Generally there are three things that can happen in these cases. One is that the VM boots fine. Second, it tries to boot but comes up with a STOP error. Third, it just sits there with a flashing cursor and nothing happens.

At this point I should say that Microsoft does not really support this type of migration. It is considered something that might or might not work, at the user’s risk. However I have had success with it in the past, and when it works it saves a lot of time, especially in small setups like this, because the new VM is a clone of the old server with all the shared folders, printer drivers, applications, databases and other configuration ready to go.

Disclaimer: please consider this procedure unsupported and if you follow any tips here do not blame me if it does not work! Normally the approach is to take the existing server off the network, do the P2V (Physical to Virtual), run up the new VM and check its health. If it cannot be made to work, scrap the idea, fire up the old server again, and do a migration to a new VM using other techniques, re-install applications and so on.

In my case I got a flashing cursor. What this means, I discovered after some research, is that there is no boot device. If you get a STOP error instead, you have a boot device but there is some other problem, usually with accessing the storage (see the notes below about disabling RAID). At this point you will need an ISO of Windows Server xxxx (matching the OS you are troubleshooting) so you can run the troubleshooting tools. I downloaded Hyper-V Server 2016, which is nice and small and has the tools.

Note that if the source server uses UEFI boot you must create a generation 2 Hyper-V VM. Well, either that or go down the rabbit hole of converting the GPT partitions to MBR without wiping the data so you can use generation 1.
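
For the record, if you script the VM creation, the generation is fixed at New-VM time; a minimal PowerShell sketch, with the name, paths and switch as placeholders:

New-VM -Name "P2V-Server" -Generation 2 -MemoryStartupBytes 8GB -VHDPath "D:\VMs\server.vhdx" -SwitchName "External"
Set-VMFirmware -VMName "P2V-Server" -EnableSecureBoot On

Secure Boot on a generation 2 VM can also be toggled later with Set-VMFirmware if the guest will not boot with it enabled.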

For troubleshooting, the basic technique is to boot into the Windows recovery tools and then the command prompt.

I am not sure if this is necessary, but the first thing I did was to run regedit, load the system hive using the Load Hive option, and set the Intel RAID controller entries to zero. What this does is tell Windows not to rely on an Intel RAID for its storage. Essentially, go to ControlSetXXX\Services in the loaded hive (usually XXX is 001 but it might not be) and find the entries, if they exist, for:

iaStor

iaStorAVC

iaStorAV

iaStorV

storAHCI

and set the Start or StartOverride parameters to 0. This applies even to storAHCI, since 0 is on and 3 is off.
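
You can make the same edits from the recovery command prompt with reg.exe instead of regedit; a sketch, assuming the offline Windows installation is on D: and loading the hive under the name p2v (both placeholders):

reg load HKLM\p2v D:\Windows\System32\config\SYSTEM
reg add HKLM\p2v\ControlSet001\Services\iaStorV /v Start /t REG_DWORD /d 0 /f
reg add HKLM\p2v\ControlSet001\Services\storahci\StartOverride /v 0 /t REG_DWORD /d 0 /f
reg unload HKLM\p2v

Repeat the reg add line for whichever of the services listed above exist in your hive.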

The VM still would not boot: flashing cursor. I am grateful to this thread on the Windows Eight Forums, which explains how to fix EFI boot. My problem, I discovered via the diskpart utility, was that my EFI boot partition, which should show as a small, hidden FAT32 partition, was instead showing as RAW, meaning no filesystem.

The solution, which I am copying here in case the link fails in future, was to do as follows within the recovery command prompt for the failing VM. The bracketed comments are notes, not to be typed.

diskpart
list disk
select disk # (# = disk number for the disk with the EFI partition)
list partition (and note the size of the old EFI or presumed EFI partition, which will be small and hidden)
select partition # (# = EFI partition)
create partition efi size=# (size of old partition; mine was 99)
format quick fs=fat32 label="SYSTEM"
assign letter="S"
exit

Then, assuming C: is still the drive letter assigned to your Windows partition, type:

C:\Windows\System32\bcdboot C:\Windows

This worked perfectly for me. The VM booted, spent a while detecting devices, following which everything was straightforward.

Final comment: although it is unsupported, the Windows engineers have done an amazing job enabling Windows to boot on new hardware with relatively little fuss in most cases. You will of course end up with lots of hidden missing devices in Device Manager; these can be cleaned up with care, though I do not think they do much harm.

Configuring the Android emulator for Hyper-V

Great news that the Android emulator now supports Hyper-V, but how do you enable it?

Pretty simple. First, you have to be running at least Windows 10 1803 (April 2018 Update). Then, go into Control Panel – Programs – Turn Windows features on or off and enable both Hyper-V and the Windows Hypervisor Platform.


Note: this is not the same as just enabling Hyper-V. The Windows Hypervisor Platform, or WHPX, is an API for Hyper-V. Read about it here.

Reboot if necessary and run the emulator.


Troubleshooting? Try running the emulator from the command line.

emulator -list-avds

will list your AVDs.

emulator @avdname -qemu -enable-whpx

will run the AVD called avdname using WHPX (Windows Hypervisor Platform). If it fails, you may get a helpful error message.

Note: If you get a Qt library not found error, use the full path to the emulator executable. This should be the one in the emulator folder, not the one in the tools folder. The full command is:

[path-to-android-sdk]\emulator\emulator @[avdname] -qemu -enable-whpx

You can also use the emulator from Visual Studio, though you need Visual Studio 2017 version 15.8 Preview 1 or higher with the Xamarin tools installed. That said, I had some success with starting the Hyper-V emulator separately (use the command above), then using it with a Xamarin project in Visual Studio 15.7.5.


Hyper-V compatible Android emulator now available

An annoying issue for Android developers on Windows is that the official Android emulator uses Intel’s HAXM hypervisor platform, which is incompatible with Microsoft’s Hyper-V.

The pain of dual-boot just to run the Android emulator is coming to an end. Google has announced that the latest release of the Android Emulator will support Hyper-V on both AMD and Intel PCs. This is a relief to Docker users, for example, since Docker now uses Hyper-V by default.

Google Product Manager Jamal Eason has made a rather confusing post, positioning the new feature as mainly for the benefit of developers with AMD processors, since Intel HAXM does not work with AMD processors. “Thanks to on-going development by Intel, the fastest emulator performance on Windows is still with Intel HAXM,” says Eason, stating that HAXM remains the default on Intel PCs and is recommended.

However the new Hyper-V support works fine on Intel as well as AMD PCs. The official docs say:

Though we recommend using HAXM on Windows, it is possible to use Windows Hypervisor Platform (WHPX) with the emulator. Situations in which you should use WHPX with the emulator are the following:

  • You need to use Hyper-V at the same time.
  • You are using an AMD CPU.

The new feature is “thanks to a new Microsoft Windows Hypervisor Platform (WHPX) API and recent open-source contributions from Microsoft,” says Eason.

It is another case of Microsoft doing the hard work to make Windows a better platform for developers, even when they are targeting non-Windows platforms (as is increasingly the case).

Notes from the field: Windows Time Service interrupts email delivery

A business with Exchange Server noticed that email was not flowing. The internet connection was fine, and all the servers were up and running, including Exchange 2016. Email had been fine just a few hours earlier. What was wrong?

The answer, or the beginning of the answer, was in the Event Viewer on the Exchange Server. Event ID 1035, only a warning:

Inbound authentication failed with error UnexpectedExchangeAuthBlobCheckForClockSkew for Receive connector Default Mailbox Delivery

Hmm. A clock problem, right? It turned out that the PDC for the domain was five minutes fast. This is enough to trigger Kerberos authentication failures. Result: no email. We fixed the time, restarted Exchange, and everything worked.

Why was the PDC running fast? The PDC was configured to get time from an external source, apparently, and all other servers to get their time from the PDC. Foolproof?

Not so. If you typed:

w32tm /query /status

at a command prompt on the PDC (not the Exchange Server, note), it reported:

Source: Free-running System Clock

Oops. Despite efforts to do the right thing in the registry (setting the Type key in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Parameters to NTP, and entering a suitable list of time servers in the NtpServer key), it was actually getting its time from the local system clock. This being a Hyper-V VM, that meant the clock on the host server, which – no surprise – was five minutes fast.

You can check for this error by typing:

w32tm /resync

at the command prompt. If it says:

The computer did not resync because no time data was available.

then something is wrong with the configuration. If it succeeds, check the status as above and verify that it is querying an internet time server. If it is not querying a time server, run a command like this:

w32tm /config /update /manualpeerlist:"0.pool.ntp.org,0x8 1.pool.ntp.org,0x8 2.pool.ntp.org,0x8 3.pool.ntp.org,0x8" /syncfromflags:MANUAL

until you have it right.

Note this is ONLY for the server holding the PDC Emulator FSMO role. Other servers should be configured to get their time from the PDC.
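
For reference, the full sequence I would expect to need on the PDC emulator is something like this (peer list abbreviated; run from an elevated prompt):

w32tm /config /manualpeerlist:"0.pool.ntp.org,0x8 1.pool.ntp.org,0x8" /syncfromflags:manual /reliable:yes /update
net stop w32time && net start w32time
w32tm /resync /rediscover
w32tm /query /status

The /reliable:yes flag marks the server as a reliable time source for the rest of the domain.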

Time server problems seem to be common on Windows networks, despite the existence of lots of documentation. There are also various opinions on the best way to configure Hyper-V, which has its own time synchronization service. There is a piece by Eric Siron here on the subject, and I reckon his approach is a safe one (Hyper-V Synchronization Service OFF for the PDC Emulator, ON for every other VM).

I love his closing remarks:

The Windows Time service has a track record of occasionally displaying erratic behavior. It is possible that some of my findings are not entirely accurate. It is also possible that my findings are 100% accurate but that not everyone will be able to duplicate them with 100% precision. If working with any time sensitive servers or applications, always take the time to verify that everything is working as expected.

No more infrastructure roles for Windows Nano Server, and why I still like Server Core

Microsoft’s General Manager for Windows Server, Erin Chapple, posted last week about Nano Server (under a meaningless PR-speak headline) to explain that Nano Server, the most stripped-down edition of Windows Server, is being repositioned. When it was introduced, it was presented not only as a lightweight operating system for running within containers, but also for infrastructure roles such as hosting Hyper-V virtual machines, hosting containers, file server, web server and DNS server (but without AD integration).

In future, Nano Server will be solely for the container role, enabling it to shrink in size (for the base image) by over 50%, according to Chapple. It will no longer be possible to install Nano Server as a standalone operating system on a server or VM. 

This change prompted Microsoft MVP and Hyper-V enthusiast Aidan Finn to declare Nano Server all but dead (which I suppose it is from a Hyper-V perspective) and to repeat his belief that GUI installs of Windows Server are best, even on a server used only for Hyper-V hosting.

Finn writes:

Prepare for a return to an old message from Microsoft: “We recommend Server Core for physical infrastructure roles.” See my counter to Nano Server. PowerShell gurus will repeat their cry that the GUI prevents scripting. Would you like some baloney for your sandwich? I will continue to recommend a full GUI installation. Hopefully, the efforts by Microsoft to diminish the full installation will end with this rollback on Nano Server.

Finn’s main argument is that the full GUI makes troubleshooting easier. Server Core also introduces a certain amount of friction, as most documentation relating to Windows Server (especially from third parties) presumes you have a GUI, and you have to do some work to figure out how to do the same thing on Core.

Nevertheless I like Server Core and use it where possible. The performance overhead of the GUI is small, but running Core does significantly reduce the number of security patches and therefore required reboots. Note that you can run GUI applications on Server Core, if they are written to a subset of the Windows API, so vendors that have taken the trouble to fix their GUI setup applications can support it nicely.

Another advantage of Server Core, in the SMB world where IT policies can be harder to enforce, is that users are not tempted to install other stuff on their Server Core domain controllers or Hyper-V hosts. I guess this is also an advantage of VMware. Users log in once, see the command-line UI, and do not try installing file shares, print managers, accounting software, web browsers (I often see Google Chrome on servers because users cannot cope with IE Enhanced Security Configuration), remote access software and so on.

Only developers now need to pay attention to Nano Server, but that is no reason to give up on Server Core.

Microsoft Hyper-V vs VMware: is System Center the weak point?

The Register reports that Google now runs all its cloud apps in Docker-like containers; this is in line with what I heard at the QCon developer event earlier this year, where Docker was the hot topic. What caught my eye though was Trevor Pott’s comment comparing, not Hyper-V to VMware, but System Center Virtual Machine Manager to VMware’s management tools:

With VMware, I can go from "nothing at all" to "fully managed cluster with everything needed for a five nines private cloud setup" in well under an hour. With SCVMM it will take me over a week to get all the bugs knocked out, because even after you get the basics set up, there are an infinite number of stupid little nerd knobs and settings that need to be twiddled to make the goddamned thing actually usable.

VMware guy struggling to learn a different way of doing things? There might be a little of that; but Pott makes a fair point (in another comment) about the difficulty, with Hyper-V, of isolating the hypervisor platform from the virtual machines it is hosting. For example, if your Hyper-V hosts are domain-joined, and your Active Directory (AD) servers are virtualised, and something goes wrong with AD, then you could have difficulty logging in to fix it. Pott is talking about a 15,000 node datacenter, but I have dealt with this problem at a micro level; setting up Windows to manage a non-domain-joined host from a domain-joined client is challenging, even with the help of the scripts written by an enterprising Program Manager at Microsoft. Of course your enterprise AD setup should be so resilient that this cannot happen, but it is an awkward dependency.
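
For what it is worth, the core of those scripts boils down to trusting the workgroup host for remote management and caching credentials for it; something like this on the domain-joined client, with the host name as a placeholder:

Set-Item WSMan:\localhost\Client\TrustedHosts -Value "hvhost01" -Concatenate -Force
cmdkey /add:hvhost01 /user:hvhost01\Administrator /pass

but there are further hoops (DCOM permissions, firewall rules and so on), which is why the scripts exist.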

Writing about enterprise computing is a challenge for journalists because of the difficulty of getting hands-on experience or objective insight from practitioners; vendors of course are only too willing to show off their stuff but inevitably they paint with a broad brush and with obvious self-interest. Much of IT is about the nitty-gritty. I do a little work with small businesses partly to get some kind of real-world perspective. Even the little I do is educational.

For example, recently I renewed the certificate used by a Microsoft Dynamics CRM installation. Renewing and installing the certificate was easy; but I neglected to set permissions on the private key so that the CRM service could access it, so it did not work. There was a similar step needed on the ADFS server (because this is an internet-facing deployment). It is not an intuitive process, because the errors which surface in the Event Viewer often do not pinpoint the actual problem, but rather are a symptom of it. It does not help that the CRM Email Router, when things go wrong, logs an identical error event every few seconds, drowning out any other events.
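
For the record, the missing step is to grant the service account read access to the certificate’s private key, either in the certificates MMC via Manage Private Keys, or with a sketch like this (the subject and account are placeholders, and the right account depends on how CRM is set up):

$cert = Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Subject -like "*crm.contoso.com*" }
$keyFile = $cert.PrivateKey.CspKeyContainerInfo.UniqueKeyContainerName
icacls "$env:ProgramData\Microsoft\Crypto\RSA\MachineKeys\$keyFile" /grant "NETWORK SERVICE:R"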

In other words, I have shared some of the pain of sysadmins and know what Pott means by “stupid little nerd knobs”.

Getting back to the point, I have actually installed System Center including Virtual Machine Manager in my own lab, and it was challenging. System Center is actually a suite of products developed at different times, some originating from other companies (Orchestrator, for example), and this shows in a lack of consistency in the user interface and in occasional confusing overlap in functionality.

I have a high regard for Hyper-V itself, having found it a solid and fast performer in my own use and an enormous advance over working with physical servers. The free management tool that you can install on Windows 7 or 8 is also rather good. The free Hyper-V server you can download from Microsoft is one of the best bargains in IT. Feature-wise, Hyper-V has improved rapidly with each new release and it seems to me a strong offering.

We have also seen from Microsoft’s own Azure cloud platform, which uses Hyper-V for virtualisation, that it is possible to automate provisioning and running Hyper-V at huge scale, controlled by easy to use management tools, either browser-based or using PowerShell scripts.

Talk private cloud though, and you are back with System Center with all its challenges and complexity.

Well, now you have the option of Azure Pack, which brings some of Azure’s technology (including its user-friendly portal) to enterprise or hosting provider datacenters. Microsoft needed to harmonise System Center with Azure; and the fact that it is replacing parts of System Center with what has been developed for Azure suggests recognition that it is much better; though no doubt installing and configuring Azure Pack also has challenges.

My last reflection on the above is that ease of use matters in enterprise IT just as it does in the consumer world. Yes, the users are specialists and willing to accept a certain amount of complexity; but if you have reliable tools with clearly documented steps and which help you to do things right, then there are fewer errors and greater productivity. 

My last server? HP ML310e G8 quick review

Do small businesses still need a server? In my case, I do still run a couple, mainly for trying out new releases of server products like Windows Server 2012 R2, System Center 2012, Exchange and SharePoint. The ability to quickly run up VMs for testing software is of huge value; you can do this with just a desktop but running a dedicated hypervisor is convenient.

My servers run Hyper-V Server 2012 R2, the free version, which is essentially Server Core with just the Hyper-V role installed. I have licenses for full Windows server but have stuck with the free one partly because I like the idea of running a hypervisor that is stripped down as far as possible, and partly because dealing with Server Core has been educational; it forces you into the command line and PowerShell, which is no bad thing.

Over the years I have bought several of HP’s budget servers and have been impressed; they are inexpensive, especially if you look out for “top value” deals, and work reliably. In the past I have picked the ML110 range, but this is now discontinued (though the G7 is still around if you need it). The main choice now is either the small ProLiant Gen8 MicroServer, which packs in space for 4 SATA drives and up to 16GB RAM via 2 DDR3 DIMM slots, with support for the dual-core Intel Celeron G1610T or Pentium G2020T; or the larger ML310e Gen8 series, with space for 4 3.5" or 8 small form factor SATA drives and 4 DDR3 DIMM slots for up to 32GB RAM, with support for Core i3 or Xeon E3 processors with up to 4 cores. Both use the Intel C204 chipset.

I picked the ML310e because a 4-core processor with 32GB RAM is gold for use with a hypervisor, and there is not a huge difference in cost. While in a production environment it probably makes sense to use the official HP parts, I used non-HP RAM and paid around £600 plus VAT for a system with a Xeon E3-1220v2 4-core CPU, 32GB RAM, and a 500GB drive. I stuck in two budget 2TB SATA drives to make up a decent server for less than £800 all-in; it will probably last three years or more.

There is now an HP ML310e Gen8 v2, which might partly explain why the first version is on offer for a low price; the differences do not seem substantial, except that version 2 has two USB 3.0 ports on the rear in place of four USB 2.0 ports, and supports the Xeon E3 v3.

Will I replace this server? The shift to the cloud means that I may not bother. I was not even sure about this one. You can run up VMs in the cloud easily, on Amazon EC2 or Microsoft Azure, and for test and development that may be all you need. That said, I like the freedom to try things out without worrying about subscription costs. I have also learned a lot by setting up systems that would normally be run by larger businesses; it has given me a better understanding of the problems IT administrators encounter.


So how is the server? It is just another box of course, but feels well made. There is an annoying lock on the front cover: you cannot remove the side panel unless it is unlocked, and you cannot remove the key unless it is locked, so if you do not need this little bit of physical security the solution is to leave the key in the lock. It does not seem worth much to me, since a miscreant could easily steal the entire server and rip off the panel at leisure.

On the front you get 4 USB 2.0 ports, a UID LED button, a NIC activity LED, a system health LED and the power button.


The main purpose of the UID (Unit Identifier) button is to help identify your server from the rear if it is in a rack. You press the button on the front and an LED lights at the rear. Not that much use in a micro tower server.

Remove the front panel and you can see the drive cage.


Hard drives are in caddies which are easily pulled out for replacement. However, note the “Non hot plug” label on these units; you must turn the server off first.

You might think that you have to buy HP drives, which come packaged in caddies. This is not so; if you remove one of the caddies you find it is not just a blank, but allows any standard 3.5" drive to be installed. The metal brackets inside the caddy are removed, and you just stick the drive in their place and screw the side panels back on.

Take the side panel off and you will see a tidy construction with the 350W power supply, 4 DIMM slots, 4 PCI Express slots (one x16, two x8, one x4), and a transparent plastic baffle that ensures correct air flow.


The baffle is easily removed.


What you see is pretty much as it comes out of the box, but with RAM fitted, two additional drives, and a PCI Express USB 3.0 card, since (annoyingly) the server comes with USB 2.0 only – fixed in the version 2 edition.

On the rear are four more USB 2.0 ports, two 1Gb NIC ports, a blank where a dedicated iLO (Integrated Lights-Out) port would be, and video and serial connectors.


Although there is no dedicated iLO port on my server, iLO is installed. The luggage label shows the DNS name you need to access it. If you cannot get at the label, you can look at your DHCP server, see what address has been allocated to ILOxxxxxxxxx, and use that. Once you log in with a web browser you can change this to a fixed IP address; probably a good idea in case, in a crisis, the DHCP server is not working right.

iLO is one of the best things about HP servers. It is a little embedded system, isolated from whatever is installed on the server, which gives you access to status and troubleshooting information.


Its best feature is the remote console, which gives you access to a virtual screen, keyboard and mouse, so you can get into your OS from a remote session even when the usual remote access techniques are not working. There are now .NET and mobile options as well as Java.


Unfortunately there is a catch. Try to use this and a license will be demanded.


However, you can sign up for an evaluation that works for a few weeks. In other words, your first disaster is free; after that you have to pay. The license covers several servers and is not good value for an individual one.

Everything is fine on the hardware side, but what about the OS install? This is where things went a bit wrong. HP has a system called Intelligent Provisioning built in. You pop your OS install media in the DVD drive (or there are options for network install), run a wizard, and Intelligent Provisioning will update its firmware, set up RAID, and install your OS with the necessary drivers and HP management utilities included.

I don’t normally bother with all this, but I thought I should give it a try. Unfortunately Server 2012 R2 is not supported, so I tried it with Server 2012 x64, hoping this would also work for Hyper-V Server, but no go; it failed with an unattend script error.

Next I set up RAID manually using the nice HP management utility in the BIOS, and tried to install using the storage drivers saved to a USB pen drive. It seemed to work but was not stable; the server would sometimes fail to boot, and sometimes you could log on and do a few things before Windows crashed with a Kernel_Security_Check_Failure.

Memory problems? Drive problems? It was not clear; but I decided to disable embedded RAID in the BIOS and use standard AHCI SATA. Install proceeded perfectly with no need for additional drivers, and the OS is 100% stable.

I did not want to give up RAID though, so I wondered if I could use Storage Spaces on Hyper-V Server. Apparently you can. I joined the Hyper-V Server to my domain and then used Server Manager remotely to create a storage pool from my pair of 2TB drives, and then a mirrored virtual disk.
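
The same pool and mirror can be created in PowerShell, which is handy on Server Core; a sketch with the friendly names as placeholders (the storage subsystem name varies by Windows version, hence the wildcard):

$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName "*Storage*" -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "Mirror1" -ResiliencySettingName Mirror -UseMaximumSize
Get-VirtualDisk -FriendlyName "Mirror1" | Get-Disk | Initialize-Disk -PassThru | New-Partition -UseMaximumSize -AssignDriveLetter | Format-Volume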

My OS drive is not on resilient storage, but I am not too concerned about that. I can back up the OS (wbadmin works), and since it does nothing more than run Hyper-V, recovery should be straightforward if necessary.
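
In case it is useful, the backup is a one-liner; the target path here is a placeholder:

wbadmin start backup -backupTarget:\\nas\backups -include:C: -allCritical -quiet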

After that I moved across some VMs using a combination of Move and Export, with no real issues other than finding Move slow on my system when there is a large VHD to copy.

The server overall seems a good bargain; HP may have problems overall, but the department that turns out budget servers seems to do an excellent job. My only complaint so far is the failure of the storage drivers on Server 2012 R2, which I hope HP will fix with an update.

Hyper-V 2012 R2 Live Migration Hands On

I have two servers running Hyper-V, which I have just upgraded to Hyper-V Server 2012 R2.

I thought it was time to test live migration. I have a VM which runs ISA Server 2004. It is connected to two virtual switches, one for the internal network and one for the external network. Both servers have two identically named virtual switches.

I ran into all the errors. First, I just checked the box to enable incoming and outgoing live migrations for Hyper-V on each box.


Then I tried to move the VM. I got the error described here: The credentials supplied to the package were not recognized.

I am not using System Center VMM (Virtual Machine Manager), just the Hyper-V manager; however, the link put me on the right track. To have any hope of success with this when working remotely (and who isn’t?) you need to go into the Advanced Features of the Hyper-V Live Migration settings and check the box for Use Kerberos.

Next, you have to go into Active Directory and set up delegation using Kerberos for two services: cifs, and Microsoft Virtual System Migration Service. There is a screengrab in the comments here. Do this for both (or all) of the servers you want to participate in live migration.
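
If you prefer to script it rather than click through the Delegation tab, something along these lines should set the same two services on one host’s computer account, run from a machine with the AD PowerShell module (host and domain names are placeholders; repeat with the names swapped for the other host):

Get-ADComputer HV1 | Set-ADObject -Add @{"msDS-AllowedToDelegateTo"=@("cifs/HV2.contoso.local","cifs/HV2","Microsoft Virtual System Migration Service/HV2.contoso.local","Microsoft Virtual System Migration Service/HV2")}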

I retried the move. Still no go; I got a General Access Denied error 0x80070005 when the source server tried to create a temp folder on the destination server.

The fix, it turned out, was to add the domain administrator to the local Hyper-V Administrators group. You can do this with PowerShell as explained (in generic terms) here.
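
On Hyper-V Server 2012 R2 the quickest route is the old command line, run on each host (domain and user are placeholders):

net localgroup "Hyper-V Administrators" CONTOSO\Administrator /add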

Then reboot the source server.

I retried the move operation. It worked.

The funny thing: all my internet traffic goes through this VM. I use the internet constantly, but did not notice any downtime as the VM moved from one host to the other.

When I remembered, I checked and found that the VM had indeed moved.

Very cool.

My question though: why does getting this stuff working always seem to involve several steps (in this case Active Directory, Advanced Features settings, and of course a reboot) that are barely documented?

Why can’t some wizard check the settings for you when you enable Live Migration and offer to fix them or at least tell you what to do?

Nevertheless, once you get it working this is impressive, especially considering that I have no shared storage nor System Center VMM.

Upgrading Hyper-V Server 2012 to 2012 R2: minor hassles

I have a couple of servers running Hyper-V Server, the free version of Microsoft’s hypervisor.

Hyper-V Server 2012 R2 is now available, with some nice improvements. I tried an in-place upgrade, which you do by running setup from within a running instance of the server. This did not work when going from 2008 to 2012, but I am glad to report that it does work for 2012 to 2012 R2.

You will need to make sure that all the VMs are shut down before you run the upgrade; otherwise you get a message and the upgrade fails.

In my case the upgrade was smooth and not too lengthy. However I was warned that the pass-through drive used by one VM might cause a problem. It did: the VM failed to start after the upgrade.

The fix was trivial: remove the pass-through drive and then add it back. After that the VM started.

Then I hit another problem. Although my VMs had started, they had no network connectivity, even after I upgraded the integration components. These VMs run Server 2008 R2, in case that makes a difference (I doubt it). The virtual switch still showed in Hyper-V settings, but no traffic passed through to the VMs.

I tried two solutions. Removing the NIC from the VM and re-adding it made no difference (and this is a poor solution anyway, since you then have to reconfigure the NIC within the VM). However, deleting the virtual switch and replacing it with a new one of the same name and configuration was successful. The virtual NICs then have to be reconnected to the new virtual switch, but this is painless.
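
If you hit the same issue, the switch surgery can be scripted; a sketch with the switch and adapter names as placeholders, assuming all the VMs use the one switch:

Remove-VMSwitch -Name "External" -Force
New-VMSwitch -Name "External" -NetAdapterName "Ethernet" -AllowManagementOS $true
Get-VM | Get-VMNetworkAdapter | Connect-VMNetworkAdapter -SwitchName "External"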

The UI for the new version looks exactly the same as before. However, the Windows version number has changed from 6.2.9200 to 6.3.9600, so you can verify that the OS really was upgraded.

Is it better to avoid in-place upgrades? A clean install is safer, if you do not mind exporting and re-importing the VMs, or moving them all to another host, before the upgrade. On the other hand, with the upgrade cycle now faster than before, an in-place upgrade makes sense as a way of keeping pace with little pain.