Tag Archives: virtualization

Microsoft Hyper-V vs VMWare: is System Center the weak point?

The Register reports that Google now runs all its cloud apps in Docker-like containers; this is in line with what I heard at the QCon developer event earlier this year, where Docker was the hot topic. What caught my eye though was Trevor Pott’s comment comparing, not Hyper-V to VMWare, but System Center Virtual Machine Manager to VMWare’s management tools:

With VMware, I can go from "nothing at all" to "fully managed cluster with everything needed for a five nines private cloud setup" in well under an hour. With SCVMM it will take me over a week to get all the bugs knocked out, because even after you get the basics set up, there are an infinite number of stupid little nerd knobs and settings that need to be twiddled to make the goddamned thing actually usable.

VMWare guy struggling to learn a different way of doing things? There might be a little of that; but Pott makes a fair point (in another comment) about the difficulty, with Hyper-V, of isolating the hypervisor platform from the virtual machines it is hosting. For example, if your Hyper-V hosts are domain-joined, and your Active Directory (AD) servers are virtualised, and something goes wrong with AD, then you could have difficulty logging in to fix it. Pott is talking about a 15,000-node datacenter, but I have dealt with this problem at a micro level; setting up Windows to manage a non-domain joined host from a domain-joined client is challenging, even with the help of the scripts written by an enterprising Program Manager at Microsoft. Of course your enterprise AD setup should be so resilient that this cannot happen, but it is an awkward dependency.
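For what it is worth, the client-side part of that workgroup setup looks roughly like this. It is a sketch only: the host name and account are placeholders, and this is the sort of thing the scripts mentioned above automate for you.

```powershell
# Trust the non-domain host for WinRM/WMI connections
# (run in an elevated PowerShell on the domain-joined client):
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "HV01" -Concatenate -Force

# Store credentials for the host's local administrator account,
# so the management tools can authenticate (prompts for the password):
cmdkey /add:HV01 /user:HV01\Administrator /pass
```

There is more to it on the host side (firewall rules, DCOM permissions), which is exactly why the helper scripts exist.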

Writing about enterprise computing is a challenge for journalists because of the difficulty of getting hands-on experience or objective insight from practitioners; vendors of course are only too willing to show off their stuff but inevitably they paint with a broad brush and with obvious self-interest. Much of IT is about the nitty-gritty. I do a little work with small businesses partly to get some kind of real-world perspective. Even the little I do is educational.

For example, recently I renewed the certificate used by a Microsoft Dynamics CRM installation. Renewing and installing the certificate was easy; but I neglected to set permissions on the private key so that the CRM service could access it, so it did not work. There was a similar step needed on the ADFS server (because this is an internet-facing deployment); it is not an intuitive process because the errors which surface in the event viewer often do not pinpoint the actual problem, but rather are a symptom of the problem. It does not help that the CRM Email Router, when things go wrong, logs an identical error event every few seconds, drowning out any other events.
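For the record, the missing step can be done from the certificates MMC (All Tasks > Manage Private Keys) or sketched in PowerShell like this. The thumbprint and service account below are placeholders, and the path assumes an RSA machine key.

```powershell
# Find the renewed certificate and locate its private key file
# in the machine key store, then grant the service account read access.
$cert = Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Thumbprint -eq "THUMBPRINT" }
$keyName = $cert.PrivateKey.CspKeyContainerInfo.UniqueKeyContainerName
$keyPath = Join-Path $env:ProgramData "Microsoft\Crypto\RSA\MachineKeys\$keyName"
icacls $keyPath /grant "CONTOSO\svc-crm:R"
```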

In other words, I have shared some of the pain of sysadmins and know what Pott means by “stupid little nerd knobs”.

Getting back to the point, I have actually installed System Center including Virtual Machine Manager in my own lab, and it was challenging. System Center is really a suite of products developed at different times and sometimes originating from different companies (Orchestrator, for example), and this shows in a lack of consistency in the user interface, and in occasional confusing overlaps in functionality.

I have a high regard for Hyper-V itself, having found it a solid and fast performer in my own use and an enormous advance over working with physical servers. The free management tool that you can install on Windows 7 or 8 is also rather good. The free Hyper-V server you can download from Microsoft is one of the best bargains in IT. Feature-wise, Hyper-V has improved rapidly with each new release and it seems to me a strong offering.

We have also seen from Microsoft’s own Azure cloud platform, which uses Hyper-V for virtualisation, that it is possible to automate provisioning and running Hyper-V at huge scale, controlled by easy to use management tools, either browser-based or using PowerShell scripts.

Talk private cloud though, and you are back with System Center with all its challenges and complexity.

Well, now you have the option of Azure Pack, which brings some of Azure’s technology (including its user-friendly portal) to enterprise or hosting provider datacenters. Microsoft needed to harmonise System Center with Azure; and the fact that it is replacing parts of System Center with what has been developed for Azure suggests recognition that the Azure approach is much better; though no doubt installing and configuring Azure Pack has challenges of its own.

My last reflection on the above is that ease of use matters in enterprise IT just as it does in the consumer world. Yes, the users are specialists and willing to accept a certain amount of complexity; but if you have reliable tools with clearly documented steps that help you do things right, then there are fewer errors and greater productivity.

NVIDIA’s Visual Computing Appliance: high-end virtual graphics power on tap

NVIDIA CEO Jen-Hsun Huang has announced the Grid Visual Computing Appliance (VCA). Install one of these, and users anywhere on the network can run graphically-demanding applications on their Mac, PC or tablet. The Grid VCA is based on remote graphics technology announced at last year’s GPU Technology Conference. This year’s event is currently under way in San Jose.

The Grid VCA is a 4U rack-mounted server.


Inside are up to 2 Xeon CPUs, each supporting 16 threads, and up to 8 Grid GPU boards, each containing 2 Kepler GPUs with 4GB of GPU memory apiece. There is up to 384GB of system RAM.


There is a built-in hypervisor (I am not sure which hypervisor NVIDIA is using) which supports 16 virtual machines and therefore up to 16 concurrent users.

NVIDIA supplies a Grid client for Mac, Windows or Android (no mention of Apple iOS).

During the announcement, NVIDIA demonstrated a Mac running several simultaneous Grid sessions. The virtual machines were running Windows with applications including Autodesk 3D Studio Max and Adobe Premiere. This looks like a great way to run Windows on a Mac.


The Grid VCA is currently in beta, and when available will cost from $24,900 plus $2,400/yr for software licenses. It looks as if the software licenses are priced at $300 per concurrent user, since the price doubles to $4,800/yr for the configuration which supports 16 concurrent users.
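The back-of-envelope arithmetic is simple enough to check. Note the assumption that the base configuration covers 8 concurrent users is mine, inferred from the price doubling; NVIDIA’s official tiers may differ.

```python
# Implied per-user software licence price for the Grid VCA.
# The 8-user base tier is an inference, not a stated spec.
base_users, base_fee = 8, 2400    # $/yr, base model
max_users, max_fee = 16, 4800     # $/yr, 16-user model

print(base_fee / base_users)   # 300.0
print(max_fee / max_users)     # 300.0
```

Both tiers come out at the same $300 per user per year, which is what makes the per-user pricing theory plausible.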


Businesses will need to do the arithmetic and see if this makes sense for them. Conceptually it strikes me as excellent, enabling one centralised GPU server to provide high-end graphics to anyone on the network, subject to the concurrent user limitation. It also enables graphically demanding Windows-only applications to run well on Macs.

The Grid VCA is part of the NVIDIA GRID Enterprise Ecosystem, which the company says is supported by partners including Citrix, Dell, Cisco, Microsoft, VMWare, IBM and HP.


Microsoft’s Hyper-V Server 2012: too painful to use?

A user over on the technet forums says that the free standalone Hyper-V is too painful to use:

I was excited about the free stand-alone version and decided to try it out.  I downloaded the Hyper-V 2012 RC standalone version and installed it.  This thing is a trainwreck!  There is not a chance in hell that anyone will ever use this thing in scenarios like mine.  It obviously intended to be used by IT Geniuses in a domain only.  I would really like a version that I can up and running in less than half an hour like esxi.  How the heck is anyone going to evaluate it this in a reasonable manner? 

To be clear, this is about the free Hyper-V Server, which is essentially Server Core with only the Hyper-V role available. It is not about Hyper-V in general as a feature of Windows Server and Windows 8.

Personally I think the standalone Hyper-V Server is a fantastic offering; but at the same time I see this user’s point. If you join the Hyper-V server to a Windows domain and use the administration tools in Windows 8 everything is fine; but if you are, say, a Mac user and download Hyper-V Server to have a look, it is not obvious what to do next. As it turns out you can get started just by typing powershell at a command prompt and then New-VM, but how would you know that? Further, if Hyper-V is not joined to a domain you will have permission issues trying to manage it remotely.
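To save others the head-scratching: the console quickstart really is that terse. Something like the following works at the Hyper-V Server prompt; the VM name, memory size and paths are examples only.

```powershell
# At the Hyper-V Server console: type "powershell", then create,
# start and list a VM with the built-in Hyper-V cmdlets.
New-VM -Name "TestVM" -MemoryStartupBytes 1GB `
    -NewVHDPath "C:\VMs\TestVM.vhdx" -NewVHDSizeBytes 40GB
Start-VM -Name "TestVM"
Get-VM    # list VMs and their state
```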

Install Hyper-V Server, and the screen you see after logging on does not even mention virtualization.


By contrast, VMWare’s free ESXi has a web UI that works from any machine on the network and lets you get started creating and managing VMs. It is less capable than Hyper-V Server; but for getting up and running quickly in a non-domain environment it wins easily.

I have been working with Hyper-V Server 2012 myself recently, upgrading two servers on my own network which host a bunch of virtual servers for development and test. From my perspective the free Hyper-V Server is a great offer from Microsoft, though I am still scratching my head over how to interpret the information (or lack of it) on the new product page, which refers to the download as a trial. I am pretty sure it is still offered on similar terms to those outlined for Hyper-V Server 2008 R2 by Program Manager Jeff Woolsey, who is clear that it is a free offering:

  • Up to 8 processors
  • Up to 64 logical processors
  • Up to 1TB RAM
  • Up to 64GB RAM per VM

These specifications may have been improved for Hyper-V Server 2012; or perhaps reduced; or perhaps Microsoft really is making it a trial. It is all rather unclear, though I would guess we will get more details soon.

It is worth noting that if you do have a Windows domain and a Windows 8 client, Hyper-V Server is delightfully easy to use, especially with the newly released Remote Server Administration Tools that now work fine with Windows 8 RTM, even though at the time of writing the download page still says Release Preview. You can use Server Manager as well as Hyper-V Manager, giving immediate access to events, services and performance data, plus a bunch of useful features on a right-click menu:


In addition, File and Storage services are installed by default, which I presume means you can use Storage Spaces with Hyper-V Server, which could be handy for hosting VMs with dynamically expanding virtual hard drives. Technically you could also use it as a file server, but I imagine that would breach the license.

For working with VMs themselves of course you have the Hyper-V Manager which is a great tool and not difficult to use.


The question then: with all the work that has gone into these nice GUI tools, why does Microsoft throw out Hyper-V Server with so little help that a potential customer calls it “too painful to use”?

Normally the idea of free editions is to entice customers into upgrading to a paid-for version. That is certainly VMWare’s strategy, but Hyper-V seems to be different. It is actually good enough on its own that for many users it will be a long time before there is any need to upgrade. Microsoft’s hope, presumably, is that you will run Windows Server instances in those Hyper-V VMs, and these of course do need licenses. If you buy Windows 8 to run the GUI tools, that is another sale for Microsoft. In fact, the paid-for Windows Server 2012 can easily work out cheaper than the free editions, if you need a lot of server licenses, since they come with an allowance of licenses for virtual instances of Windows Server. Hyper-V Server is only really free if you run free software, such as Linux, in the VMs.
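As a purely illustrative sketch of that arithmetic: the prices below are assumed 2012-era list prices, and the two-guests-per-Standard-licence rule is a simplification, so check Microsoft’s current licensing terms before relying on any of it.

```python
import math

# Assumed list prices (illustrative only, not verified):
STD_PRICE = 882    # Windows Server 2012 Standard: rights to 2 guest VMs
DC_PRICE = 4809    # Datacenter: unlimited guest VMs on one host

def standard_cost(n_vms):
    """Cost of licensing n_vms Windows Server guests by stacking
    Standard licences (2 virtual instances each)."""
    return math.ceil(n_vms / 2) * STD_PRICE

for n in (4, 10, 12):
    cheaper = "Standard" if standard_cost(n) < DC_PRICE else "Datacenter"
    print(n, standard_cost(n), cheaper)
```

On these assumed figures, stacking Standard licences stops being the cheap option at around a dozen Windows guests on one host; and since the guests need licences whether the host is free Hyper-V Server or paid-for Windows Server, the "free" host saves little when the guests run Windows.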

Personally I like Hyper-V Server for another reason. Its restricted features mean that there is no temptation to run other stuff on the host, and that in itself is an advantage.

Upgrading to Hyper-V Server 2012

After discovering that in-place upgrade of Windows Hyper-V Server 2008 R2 to the 2012 version is not possible, I set about the tedious task of exporting all the VMs from a Hyper-V Server box, installing Hyper-V Server 2012, and re-importing.

There are many reasons to upgrade, not least the irritation of being unable to manage the VMs from Windows 8: Hyper-V Manager in Windows 8 only works with Windows 8/Server 2012 hosts. It does seem to work the other way round: Hyper-V Manager in Windows 7 recognises the Server 2012 VMs successfully, though of course new features are not exposed.

The export and import has worked smoothly. A couple of observations:

1. Before exporting, it pays to set the MAC address of virtual network cards to static:


The advantage is that the operating system will recognise it as the same NIC after the import.

2. Remove any snapshots before the export. In one case I had a machine with a snapshot and the import required me to delete the saved state.

3. After installing Hyper-V Server 2012, don’t forget to check the date, time and time zone and adjust if necessary. You can do this from the sconfig menu.

4. The import dialog has a new option, called Restore:


What is the difference between Register and Restore? Do not bother pressing F1, it will not tell you. Instead, check Ben Armstrong’s post here. If you choose Register, the VM will be activated where it is; not what you want if you mistakenly ran Import against a VM exported to a portable drive, for example. Restore on the other hand presents options in a further step for you to move the files to another location.

5. For some reason I got a remote procedure call failed message in Hyper-V Manager after importing a Linux VM, but then when I refreshed the console found that the import had succeeded.

6. Don’t forget to upgrade the integration services. Connect to the server using the Hyper-V Manager, then choose Insert Integration Services Setup Disk from the Action menu.
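The observations above map onto the Server 2012 Hyper-V cmdlets roughly as follows. This is a sketch only: the 2008 R2 side has no Export-VM cmdlet (the export itself was done in Hyper-V Manager), and the VM name, MAC address and paths are placeholders.

```powershell
# 1. Pin the MAC address so the guest sees the same NIC after import
#    (shown with the 2012 cmdlet; on a 2008 R2 host use the GUI instead):
Set-VMNetworkAdapter -VMName "MyVM" -StaticMacAddress "00155D010203"

# 2. Remove snapshots before exporting:
Get-VMSnapshot -VMName "MyVM" | Remove-VMSnapshot

# 4. On the new host, import with Restore-style semantics, copying the
#    files to fresh locations rather than registering them in place:
Import-VM -Path "E:\Export\MyVM\Virtual Machines\<vm-guid>.xml" -Copy `
    -VirtualMachinePath "C:\VMs" -VhdDestinationPath "C:\VMs\Disks"
```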


Cosmetically the new Hyper-V Server looks almost identical to the old: you log in and see two command prompts, one empty and one running the SConfig administration menu.

Check the Hyper-V settings though and you see all the new settings, such as Enable Replication, Virtual SAN Manager, single-root IO virtualization (SR-IOV), extension support in a virtual switch, Live Migrations and Storage Migrations, and more.

Small Businesses love the cloud says Parallels: three times more likely to choose cloud over on-premise servers

The Parallels Summit is on in Orlando, Florida, and at the event the company has released details of its “Cloud insights” research, focused on small businesses.

Most people know Parallels for its desktop virtualization for the Mac. This or an equivalent comes in handy when you need to run Windows software on a Mac, or cross-develop for Mac and Windows on one machine.

Another side of the company’s business, though, is providing virtualization software for hosting providers. The Plesk control panel for managing virtual machines and websites through a web interface is a Parallels product. Many of the customers for this type of hosting are small businesses, which means that Parallels has an indirect focus on this market.

Despite Parallels offering a “Switch to Mac” edition and perhaps competing in some circumstances with Microsoft’s Hyper-V virtualization, Parallels is a Microsoft partner and has tools which work alongside Hyper-V as well as supporting Microsoft cloud services including Office 365.

Given the company’s business, you can expect its research to come out in favour of cloud, but I was still interested in this statistic:

SMBs with less than 20 employees are at least three times more likely to choose cloud services over on-premise services

It was not long ago that SMBs of this size would almost inevitably install Microsoft’s Small Business Server once they got too big to manage with an ad-hoc network.

I would be interested to know more of course. How do they break down between, say, Google apps, Office 365, or other services such as third-party hosted Exchange? Do they go pure cloud as far as possible, or still run a local server for file shares, print management, and legacy software that expects a local Windows server? Or cloud for email, on-premise for everything else? Do they trust the cloud completely, or have a plan “B” in the event that the impossible happens and services fail?

Finally, what happens as these companies grow? Scalability and pay as you go is a primary reason for going cloud in the first place, so my expectation is that they would stay with this model, but I also suspect that there is pressure to have some on-premise infrastructure as sites get larger.

Parallels Desktop 6 for Mac: nice work but beware Windows security settings

I’ve just set up Parallels Desktop 6 on a Mac, in preparation for some development work. Installed Parallels, created a new virtual machine, and selected a Windows 7 Professional with SP1 CD image downloaded from Microsoft’s excellent MSDN subscription service.

The way this works is that you install the Parallels application and then create a new virtual machine, selecting a boot CD or image. Next, you have a dialog where you select whether or not you want an Express installation. It is checked by default. I left it checked and proceeded with the install.


The setup was delightfully smooth and I was soon running Windows on the Mac. I chose a “Like my PC” install so that Windows runs in a window. The alternative is to hide the virtual Windows desktop and simply to show Windows applications on the Mac desktop.

Everything seemed fine, but I was puzzled. Why was Windows not installing any updates? It turns out that the Express install disables this setting.


It also sets user account control to an insecure setting, where the approval dialog does not use the secure desktop.


The Parallels Express install also sets up an Administrator account with a blank password, so you log on automatically.

No anti-virus is installed, which is not surprising since Windows does not come with anti-virus software by default.

These choices make a remarkable difference to the user experience. Set up was a pleasure and I could get to work straight away, untroubled by prompts, updates or warnings.

Unfortunately Windows in this state is insecure, and I am surprised that Parallels sets this as the default. Disabling automatic updates is particularly dangerous, leaving users at the mercy of any security issues that have been discovered since the install CD was built.

In mitigation, the Parallels user guide advises that you set a password after installation – but who reads user guides?

If you uncheck the Express Install option, you get a normal Windows installation with Microsoft’s defaults.

These security settings are unlikely to matter if you do not connect your Windows virtual machine to the internet, or if you never use a web browser or other Internet-connected software such as email clients. If you do real work in Windows though, which might well include Windows Outlook since the Mac version is poor in comparison, then I suggest changing the settings so that Windows updates properly, as well as installing anti-virus software such as the free Security Essentials.

Three questions about Microsoft’s cloud play at TechEd 2011

This year’s Microsoft TechEd is subtitled Cloud Power: Delivered, and sky blue is the theme colour. Microsoft seems to be serious about its cloud play, based on Windows Azure.

Then again, Microsoft is busy redefining its on-premise solutions in terms of cloud as well. A bunch of Windows Servers on virtual machines managed by System Center Virtual Machine Manager (SCVMM) is now called a private cloud – note that the forthcoming SCVMM 2012 can manage VMWare and Citrix XenServer as well as Microsoft’s own Hyper-V. If everything is cloud then nothing is cloud, and the sceptical might wonder whether this is rebranding rather than true cloud computing.

I think there is a measure of that, but also that Microsoft really is pushing Azure heavily, as well as hosted applications like Office 365, and does intend to be a cloud computing company. Here are three side-questions which I have been mulling over; I would be interested in comments.


Microsoft gets Azure – but does its community?

At lunch today I sat next to a delegate and asked what she thought of all the Azure push at TechEd. She said it was interesting, but irrelevant to her as her organisation looks after its own IT. She then added, unprompted, that they have a 7,000-strong IT department.

How much of Microsoft’s community will actually buy into Azure?

Is Microsoft over-complicating the cloud?

One of the big announcements here at TechEd is about new features in AppFabric, the middleware part of Windows Azure. When I read about new features in the Azure service bus I think how this shows maturity in Azure; but with the niggling question of whether Microsoft is now replicating all the complexity of on-premise software in a new cloud environment, rather than bringing radical new simplicity to enterprise computing. Is Microsoft over-complicating the cloud, or is it more that the same necessity for complex solutions exists wherever you deploy your applications?

What are the implications of cloud for Microsoft partners?

TechEd 2011 has a huge exhibition and of course every stand has contrived to find some aspect of cloud that it supports or enables. However, Windows Azure is meant to shift the burden of maintenance from customers to Microsoft. If Azure succeeds, will there be room for so many third-party vendors? What about the whole IT support industry, internal and external: are their jobs at risk? It seems to me that if moving to a multi-tenanted platform really does reduce cost, there must be implications for IT jobs as well.

The stock answer for internal staff is that reducing infrastructure cost is an opportunity for new uses of IT that are beneficial to the business. Staff currently engaged in keeping the wheels turning can now deliver more and better applications. That seems to me a rose-tinted view, but there may be something in it.

Five years of Amazon Web Services

Amazon introduced its Simple Storage Service in March 2006. S3 was not the first of the Amazon Web Services (AWS); they were originally developed for affiliates who needed programmatic access to the Amazon retail store in order to use its data on third-party web sites. That said, there is a profound difference between a web service for your own affiliates, and one for generic use. I consider S3 to mark the beginning of Amazon’s venture into cloud computing as a provider.

It is also something I have tracked closely since those early days. I quickly wrote a Delphi wrapper for S3; it did not set the open source world alight but did give me some hands-on experience of the API. I was also on the early beta for EC2.

Amazon now dominates the section of the cloud computing market which is its focus, thanks to keen pricing, steady improvements, and above all the fact that the services have mostly worked as advertised. I am not sure what its market share is, or even how to measure it, since cloud computing is a nebulous concept. This Wall Street Journal article from February 2011 gives Rackspace the number two slot but with only one third of Amazon’s cloud services turnover, and includes the memorable remark by William Fellows of the 451 Group, “In terms of market share Amazon is Coke and there isn’t yet a Pepsi.”

The open source Eucalyptus platform has paid Amazon a compliment by implementing its EC2 API:

Eucalyptus is a private cloud-computing platform that implements the Amazon specification for EC2, S3, and EBS. Eucalyptus conforms to both the syntax and the semantic definition of the Amazon API and tool suite, with few exceptions.

AWS is not just EC2 and S3. Other offerings include two varieties of cloud database, services for queuing, notification and email, and the impressive Elastic Beanstalk for automatically scaling your application on demand.

Should we worry about Amazon’s dominance in cloud computing? Possibly, especially as the barriers to entry are considerable. Another concern is that as more computing infrastructure becomes dependent on Amazon, the potential disruption if the service were to break increases. How many of Amazon’s AWS customers have a plan B for when EC2 fails? Amazon defuses anti-competitive concerns by continuing to offer commodity pricing.

Amazon has quietly changed the computing landscape though; and though this is a few weeks late the 5th birthday of its cloud services deserves a mention.

Viewsonic ViewPad 10 Pro does Windows and Android – but Windows first

Viewsonic has announced the ViewPad 10 Pro, a 10” tablet that runs both Microsoft Windows 7 and Google Android 2.2.


I saw the ViewPad 10 Pro briefly this morning here at Mobile World Congress. Specs include Intel Oak Trail chipset, 2GB RAM, 32GB storage, and front-facing camera for conferencing.

The big appeal of the ViewPad 10 Pro, successor to the ViewPad 10, is that it runs Android as well as Windows. Just tap a button, and Android appears in place of Windows.

Sounds good; but as Viewsonic explained how this works I became doubtful. Apparently Android runs in a virtual machine on top of Windows. I have nothing against virtualization; but this approach does suggest some compromises in terms of Android performance and efficiency. No matter how clever Viewsonic has been in its implementation, some resources will still be devoted to Windows during an Android session, and battery life will suffer as a result.

I can see more sense in running Android first, for the sake of its speed and efficiency on low-power hardware, and Windows in virtualization for when you need to dip into Excel or some other Windows application.

The upside of Viewsonic’s approach is that you can switch between the two without having to do a hard reboot.

Viewsonic says you will be able to get one of these in your hands around May 2011.

Microsoft Hyper-V Annoyance: special permissions for VHDs

Today I needed to enlarge a virtual hard drive used by a Hyper-V virtual machine.

No problem: I used the third-party VHD Resizer which successfully copied my existing VHD to a new and larger one.

The snag: when I renamed the VHDs so that the new one took the place of the old, the VM would not start and Hyper-V reported “Access Denied”.

I looked at the permissions for the old VHD and noticed that they include full access for an account identified only by a GUID. Even more annoying, you cannot easily add those permissions to another file, as the security GUI reports the account as not found.

The solution comes from John Dombrowski in this thread:

1. Shutdown the VM
2. Detach the VHD file, apply changes
3. Reattach the VHD file, apply changes

This re-applied the correct GUID permissions to the VHD, allowing the VM to start.
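An alternative fix I have seen suggested, rather than detach and reattach, is to grant the VM’s account access to the new VHD directly with icacls. The mysterious GUID account is the virtual machine’s ID; the path and GUID below are placeholders, with the GUID taken from the old VHD’s permissions.

```powershell
# Grant the VM's own account full access to the renamed VHD:
icacls "D:\VMs\MyVM.vhd" /grant "NT VIRTUAL MACHINE\<vm-guid>:(F)"
```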

Incidentally, this might not work if you use a remote Hyper-V manager. Permissions for remote management of Hyper-V are a notoriously prickly thing to set up. I have had problems on occasion with importing VMs, where this did not work from the remote management tool but did work if done on the machine itself, with similar access denied errors reported. If you use exactly the same account it should not be a problem, but if the remote user is different then bear this in mind.