Category Archives: virtualization

How Amazon Web Services dominate infrastructure as a service

During Mobile World Congress I met with some folk from Twilio, the cloud telephony company, who said something that interested me. Twilio uses Amazon Web Services (AWS) for its infrastructure, and told me that essentially there is no choice: AWS is the only cloud provider that can scale on demand as quickly and smoothly as required.

Today at QCon London I met a guy from another major cloud-based company, which is also built on Amazon, and I put the same point to him. Not only did he agree, he said that Amazon is increasing its lead over the competition. Amazon is less visible than some, he said, because of its approach to PR. It concentrates on marketing directly to developers rather than chasing press stories or running big advertising campaigns.

Amazon has also just reduced its prices.

This is mixed news for the industry. In general developers I speak to like working with AWS, and its scalability is a huge benefit when, for example, you are entering a new market. You can do so without having to invest in IT infrastructure.

On the other hand, stronger competition would be healthy. My contact said that he reckoned his company could move away from Amazon in 12 weeks or so if necessary, so it is not an absolute lock-in.

Storage Spaces coming to Windows 8 client as well as server

Steven Sinofsky has posted about Storage Spaces on the Building Windows 8 blog, making it clear that the feature is coming to the Windows 8 client as well as to Windows Server 8.

I took a hands-on look at Storage Spaces back in October.

The feature lets you add and remove physical drives in a pool of storage, and create virtual disks in that pool with RAID-like resiliency if you have more than one physical drive available. There is also “thin provisioning”, which lets you create a virtual disk bigger than the available physical space. It sounds daft at first, but makes sense if you think of the pool as a resource to which you add media as needed, rather than paying for it all up-front.
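
Storage Spaces is also expected to be manageable through PowerShell. As a sketch only, assuming the storage cmdlets in the preview builds, and with pool and disk names of my own choosing, creating a mirrored, thinly provisioned space looks something like this:

Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName Pool1 -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
New-VirtualDisk -StoragePoolFriendlyName Pool1 -FriendlyName Data -ResiliencySettingName Mirror -ProvisioningType Thin -Size 2TB

Note that -Size is the provisioned size, which can exceed the physical capacity of the pool – thin provisioning at work.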

The server version includes data deduplication so that similar or identical files occupy less physical space. Another feature which is long overdue is the ability to allocate space to a virtual folder rather than to a drive letter.

I do not know if all these features will come to the Windows 8 client version, but as data deduplication is not mentioned in Sinofsky’s post, and the dialog he shows does not include a folder option, it may well be that these are server-only. This is the new Windows 8 dialog:

[Screenshot: the new Windows 8 Storage Spaces dialog]

Storage Spaces occupies a kind of middle ground, in that enterprises will typically have more grown-up storage systems such as a Fibre Channel or iSCSI SAN (Storage Area Network), while at the other end of the scale individual business users do not want to bother with multiple drives at all. Nevertheless, for individuals with projects like storing large amounts of video, or for small businesses looking for reliable storage at a good price based on cheap SATA drives, Storage Spaces looks like a great feature.

Most computer professionals will recall seeing users struggling with space issues on their laptop, not realising that the vendor (Toshiba was one example) had partitioned the drive and that they had a capacious D drive that was completely empty. It really is time that Microsoft figured out how to make storage management seamless and transparent for the user, and this seems to me a big step in that direction.

Installing Windows 8 developer preview on VirtualBox

I have installed the Windows 8 developer preview on Oracle VirtualBox. It does not work on Virtual PC, which does not support 64-bit guests. It is probably fine on Hyper-V, but I don’t have spare Hyper-V capacity for it at the moment.

I had a few hassles and thought it would be worth sharing my notes.

I gave the VM 2GB of RAM, 2 processors, and the maximum amount of video ram, but these settings are up to you.
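
If you prefer to script these settings, VBoxManage can apply them too; a sketch, assuming the VM is named “Windows 8”:

VBoxManage modifyvm "Windows 8" --memory 2048 --cpus 2 --vram 128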

The main problem I encountered was with the mouse. It worked in the Windows 8 guest, but only just: the pointer jumped around and was too frustrating to use.

The solution I found was to remote desktop into the VM from my Windows 7 desktop. On a brief try I could not get the remote desktop support built into VirtualBox to work, so I used pure Windows-to-Windows remote desktop.

In order to do this, I first set networking in VirtualBox to Bridged. This means it is on the same subnet as the host computer. Then I enabled remote desktop access in the Windows 8 control panel. I opened a command prompt to check the IP address – Windows key + R opens the Run prompt and is a useful combination when the mouse is not working.
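
The command to check the address is simply:

ipconfig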

Then I was able to use remote desktop to that IP address. Note that unless you join the Windows 8 machine to a domain, the username is:

machinename\email address

or alternatively

WindowsLiveID\email address

presuming you do the default thing, which is to hook up Windows 8 to a Live ID.
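
You can launch the connection itself from the Run prompt as well; for example, with a made-up address:

mstsc /v:192.168.1.50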

Now, if you do this you will have two GUIs showing, which is untidy. You can fix this by running the VM headless. Shut down the VM, navigate to the VirtualBox directory and run the following command:

VBoxHeadless --startvm yourvmname

Now you can log on to the Windows 8 VM without having any other instance on the screen.
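
Alternatively, the same can be done through VBoxManage:

VBoxManage startvm yourvmname --type headless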

You might not have the same problem with the mouse, of course.

Incidentally, I am not sure of the best way to shut down the VM, but I use a command prompt or Windows key + R and type:

shutdown /s
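
By default there is a short countdown before the machine actually shuts down; a zero timeout makes it immediate:

shutdown /s /t 0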

My final observation: Windows 8 with just mouse and keyboard is a lot less fun than on a real tablet. It raises the question of just how much value there is in Windows 8 for non-tablet users. I suspect rather little, which is why Windows 7 is set for a long life on the corporate desktop, and for other users who do not have touch screens.

Perfect virtualization is a threat to Windows: VMware aims to embrace and extinguish

One advantage Microsoft Windows has in the cloud and tablet wars is that it is, well, Windows. Microsoft’s Office 365 cloud computing product largely depends on the assumption that users want to run Office on their local PC or tablet. Windows 8 tablets will be attractive to enterprises that want to continue running custom Windows applications, though they had better be x86 tablets, since applications for Windows on ARM processors will require recompilation.

On the other hand, if you can run Windows applications just as easily on a Mac, Apple iPad or Android tablet, then Microsoft’s advantage disappears. Virtualization specialist VMware is making that point at its VMworld conference in Las Vegas this week. In a press release announcing “New Products and Services for the Post-PC Era”, VMware VP Christopher Young says:

As our customers begin to embrace this shift to the post-PC era, we offer a simple way to deliver a better Windows-based desktop-as-a-service that empowers organizations to do more with what they already have. At the same time, we are investing in expertise and delivering the open products needed to accelerate the journey to a new way to work beyond the Windows desktop.

Good for Windows, good for what comes after Windows is the message. It is based on several products:

VMware View: enables users to work on a remote Windows desktop from a machine running thick or thin client Windows, Mac, iPad or Android. Some versions let you check out a VM for offline working. Virtual desktops have advantages over local ones, in that they are more manageable, more secure, and more robust: zapping and reinstating a virtual desktop is easier than rebuilding the OS on a physical machine. The new VMware View 5.0 claims up to 75 percent bandwidth improvement and better 3D graphics. Performance is always compromised to some extent versus a local operating system, but for many business applications it is more than good enough.

VMware Horizon: a cloud identity platform which centralizes authentication and access management. You can think of it as VMware’s cloud-based replacement for Microsoft Active Directory. It is currently focused on access to web-based applications, but at VMworld the company announced its extension to virtual Windows applications, a capability due in beta by the end of 2011.

Project AppBlast: lets users run virtual Windows applications in an HTML5-capable browser running on any device.

Project Octopus: a data synchronization service to enable collaboration and data-sharing, which will link to VMware’s other services.

VMware’s advantage is its strong technology, plus the fact that Microsoft allowed its own virtualization technologies, including Hyper-V and Remote Desktop Services, to fall behind.

That said, Microsoft has made a substantial effort to catch up in the last few years. Hyper-V and System Center working together form the basis of Microsoft’s private cloud, and under the covers the Azure platform is based on Hyper-V virtual machines. Microsoft’s advantage is the notion that if you are running Windows server and Windows applications anyway, you will be better off with the built-in virtualization features than with a third-party solution. Microsoft can also afford to undercut VMware’s prices, because it is bundling virtualization with its operating system. It has also made it easier to run mixed VMware and Hyper-V systems by supporting VMware in System Center.

An entrenched competitor is hard to shift though, and VMware appears to have won the argument with Dell, which has announced the Dell Cloud based on VMware’s vCloud multi-tenant services.

What is interesting here is not so much the question of who runs Windows applications best in a variety of virtual scenarios, but the extent to which VMware succeeds in establishing its own identity system at the heart of an application platform that lets enterprises move seamlessly to a non-Windows world. In other words, VMware Horizon is now VMware’s most strategic product. If it succeeds, then it is not only Microsoft that will need to pay attention.

When remote desktop does not connect: changing Windows DNS settings remotely

This was an annoying one. I tried to remote desktop into my Hyper-V Server today and could not. The message:

Remote Desktop cannot verify the identity of the remote computer because there is a time or date difference between your computer and the remote computer.

Hmm. I typed:

net time \\myhypervbox

and it was the same as the time on my desktop.

A Google or two later, and I discovered that this message is caused by an incorrect DNS setting on the target computer. That made sense, since a DNS server died recently. I had changed the settings on the VMs but forgot to do it on the Hyper-V host. Thank you Microsoft for a misleading error message.

Of course my Hyper-V server has no screen attached. So how to change the DNS setting? Umm, not by remote desktop.

I fiddled with netsh for a bit. It looked promising, but was not playing ball: when I tried to list the interfaces, it gave an error saying it could not do so while remote access was not running. Further, I have two network cards in this machine, Hyper-V creates additional virtual interfaces, and I was not sure what the correct network interface name was.
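
For the record, when netsh does cooperate, the command to set a static DNS server is along these lines – the interface name and address here are examples only:

netsh interface ip set dns name="Local Area Connection" static 192.168.0.2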

Next up was the registry editor. Run Regedit, choose File – Connect Network Registry. That worked. I went to:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\tcpip\Parameters\Interfaces

This lists the network interfaces as GUIDs. I went through them one by one, and in the two cases where the NameServer entry was set to the dead server, I changed it to the new one.

There is also an entry for NameServer in the top level Parameters key but this was blank and I left it alone.

If you want to know what all these keys do, there is a guide here.
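
Incidentally, reg.exe accepts remote paths, so you can hunt down and fix the entries from the command line as well; something like this, where the interface GUID is a placeholder:

reg query \\myhypervbox\HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces /s /f NameServer

reg add \\myhypervbox\HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{interface-guid} /v NameServer /d 192.168.0.2 /f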

I rebooted the machine, remotely of course:

shutdown /m \\myhypervbox /r

and when it restarted remote desktop worked again.

Three questions about Microsoft’s cloud play at TechEd 2011

This year’s Microsoft TechEd is subtitled Cloud Power: Delivered, and sky blue is the theme colour. Microsoft seems to be serious about its cloud play, based on Windows Azure.

Then again, Microsoft is busy redefining its on-premises solutions in terms of cloud as well. A bunch of Windows Servers on virtual machines managed by System Center Virtual Machine Manager (SCVMM) is now called a private cloud – note that the forthcoming SCVMM 2012 can manage VMware and Citrix XenServer as well as Microsoft’s own Hyper-V. If everything is cloud then nothing is cloud, and the sceptical might wonder whether this is rebranding rather than true cloud computing.

I think there is a measure of that, but also that Microsoft really is pushing Azure heavily, as well as hosted applications like Office 365, and does intend to be a cloud computing company. Here are three side-questions which I have been mulling over; I would be interested in comments.

Microsoft gets Azure – but does its community?

At lunch today I sat next to a delegate and asked what she thought of all the Azure push at TechEd. She said it was interesting, but irrelevant to her as her organisation looks after its own IT. She then added, unprompted, that they have a 7,000-strong IT department.

How much of Microsoft’s community will actually buy into Azure?

Is Microsoft over-complicating the cloud?

One of the big announcements here at TechEd concerns new features in AppFabric, the middleware part of Windows Azure. When I read about new features in the Azure service bus, I think how this shows maturity in Azure; but there is the niggling question of whether Microsoft is now replicating all the complexity of on-premises software in a new cloud environment, rather than bringing radical new simplicity to enterprise computing. Is Microsoft over-complicating the cloud, or is it more that the same necessity for complex solutions exists wherever you deploy your applications?

What are the implications of cloud for Microsoft partners?

TechEd 2011 has a huge exhibition, and of course every stand has contrived to find some aspect of cloud that it supports or enables. However, Windows Azure is meant to shift the burden of maintenance from customers to Microsoft. If Azure succeeds, will there be room for so many third-party vendors? And what about the whole IT support industry, internal and external: are their jobs at risk? It seems to me that if moving to a multi-tenanted platform really does reduce cost, there must be implications for IT jobs as well.

The stock answer for internal staff is that reducing infrastructure cost is an opportunity for new uses of IT that are beneficial to the business. Staff currently engaged in keeping the wheels turning can now deliver more and better applications. That seems to me a rose-tinted view, but there may be something in it.

Another Windows app store – but this time it is virtual. Embarcadero’s AppWave promises instant installs

Embarcadero has announced the AppWave Store, a forthcoming app store for Windows which uses application virtualization to avoid the hassles and risks of the usual Windows install process.

The idea is that purchasing apps for Windows will be as simple as installing an app on a mobile using the Apple app store or Android Market.

The underlying technology was developed to simplify deployment of Embarcadero’s tools. The All-Access subscription includes a tool box application that lets you run tools using “InstantOn”, which means no installation, just click and run. I have used this for a while now and it works well.

When you run an InstantOn tool for the first time, you are prompted to download:

[Screenshot: the InstantOn download prompt]

There is of course a pause while the app downloads. This is not thin client technology where the app actually runs remotely. It is installed on your local machine, but isolated so there are no dependencies or conflicts.

Once downloaded, you just launch the application. No other setup is needed, apart from a software agreement and a registration prompt.

The download is cached, so you can launch next time without delay, and it works offline too. AppWave is a rebranded version of InstantOn, and is also available for internal deployment of Embarcadero tools.

The AppWave store takes this technology and applies it to a store for the general public. Developers will pay $99 per year (though the fee is waived if you sign up now) and get AppWave Studio, which lets you convert software to run under AppWave. The conversion process is called “mastering” and only takes a few hours, according to the FAQ [pdf].

Windows XP, Vista and 7 are supported clients, availability will be worldwide at launch, and Embarcadero takes 30% of your sale price. No launch date has been announced.

I guess the first big issue is whether developers will feel that the 30% fee is good value bearing in mind that there are many other ways to sell and deploy software.

Second, there are other app stores out there or coming, not least Microsoft’s own which is likely to be part of Windows 8. Will AppWave compete effectively?

Third, does Embarcadero have what it takes to market AppWave and make it a destination for Windows users looking for apps?

App virtualization is a neat trick though, and could save significant support costs as well as being appealing for customers. Deploying apps using runtimes like Silverlight or Adobe AIR can be equally seamless, but apps have to be written specifically for those runtimes, whereas AppWave works with apps written for the full Windows API.

It is surprising that Embarcadero is not also marketing the AppWave technology to developers for general purpose use. Possibly this is coming; or maybe the company will try to keep it as an exclusive benefit of the AppWave store. There are alternatives, including Microsoft App-V and VMware ThinApp.

See also Marco Cantu’s post Understanding Embarcadero AppWave, which is what alerted me to the AppWave store.

Restoring an old Small Business Server 2008 backup: beware expired Active Directory

Seemingly tricky problems sometimes have simple solutions – but you have to find them first.

So it was with this one. I was asked to recover some emails from a Small Business Server, from a backup that was about six months old. This SBS runs on Hyper-V with a proven backup system, and I decided to restore onto a test system just to recover the emails. All went well until the first boot, when the restored SBS went into a reboot cycle. Trying safe mode revealed the error:

STOP: c00002e2 Directory Services could not start because of the following error: A device attached to the system is not functioning. Error Status: 0xc0000001

The suggested fix for this is to boot into Directory Services Restore Mode. In my case this was not possible, because after the first failure the VM booted into Windows Error Recovery mode which does not offer Directory Services Restore Mode. Rather than try to get round this, I simply restored the server again, and took a snapshot before the first boot so I would not have to do so again.

I could now get into Directory Services Restore Mode, though note that you need to log on as .\administrator using the password set when SBS 2008 was installed. I tried some of the steps here, with little success. The suggested ntdsutil commands did not work at first: I had to activate an instance (which, by the way, is ntds), and then got a message saying the operation had failed because the system was in Directory Services Restore Mode, and to try rebooting. I knew what the result of that would be.
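
For anyone retracing these steps, the activation goes like this at the ntdsutil prompt:

ntdsutil
activate instance ntds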

In other words, I was getting nowhere. Then I found a user with a similar problem. The reason: Active Directory will not restore if the backup is older than the “tombstone lifetime”. This is nicely explained here, and it is to do with replication. Active Directory is designed to replicate between domain controllers, which means it has to keep a record of deleted items. If a particular instance is older than the tombstone lifetime, it cannot replicate safely, hence the error message.

Well, kind-of. Note that the error message says, “A device attached to the system is not functioning”. If only it could have said, “Active Directory is too old”, that would have saved some time. Note also that SBS is often the sole domain controller, making the problem irrelevant. Note further that in my case I did not care a jot about replication, since all I needed was some emails.

Still, it gave me an easy solution. Just set the date back in Hyper-V and reboot. Everything worked fine.

In the end it did not cost me too much time, and doing this stuff in Hyper-V while getting on with your work during the slow bits is a lot more fun than when using real systems.

I do find it interesting though how these simple problems can surface as bewildering errors that lead you through a maze of obscure technical documents before you find the simple solution.

Microsoft Hyper-V Annoyance: special permissions for VHDs

Today I needed to enlarge a virtual hard drive used by a Hyper-V virtual machine.

No problem: I used the third-party VHD Resizer which successfully copied my existing VHD to a new and larger one.

The snag: when I renamed the VHDs so that the new one took the place of the old, the VM would not start and Hyper-V reported “Access Denied”.

I looked at the permissions for the old VHD and noticed that they include full access for an account identified only by a GUID. Even more annoying, you cannot easily add those permissions to another file, as the security GUI reports the account as not found.

The solution comes from John Dombrowski in this thread:

1. Shut down the VM
2. Detach the VHD file, apply changes
3. Reattach the VHD file, apply changes

This reinstated the correct GUID-based permission for the VM.
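
If detaching and reattaching is not convenient, it should also be possible to grant the permission by hand with icacls, since the mystery account is the virtual machine’s ID; the GUID is the name of the VM’s XML configuration file in its Virtual Machines folder. A sketch, with a hypothetical path and a placeholder GUID:

icacls "D:\VHDs\mydisk.vhd" /grant "NT VIRTUAL MACHINE\your-vm-guid":F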

Incidentally, this might not work if you use a remote Hyper-V manager. Permissions for remote management of Hyper-V are a notoriously prickly thing to set up. I have had problems on occasion with importing VMs, where this did not work from the remote management tool but did work if done on the machine itself, with similar access denied errors reported. If you use exactly the same account it should not be a problem, but if the remote user is different then bear this in mind.

Migrating from physical to virtual with Hyper-V and disk2vhd

I have a PC on which I did most of my work for several years. It runs Windows XP, and although I copied any critical data off it long ago, I still wheel it out from time to time because it has Visual Studio 6 and Delphi 7 projects with various add-ins installed, and it is easier to use the existing PC than to replicate the environment in a virtual machine.

These old machines are a nuisance though, so I thought I’d try migrating this one to a virtual machine. There are numerous options for this, but I picked Microsoft Hyper-V because I already run several test servers on this platform with success. Having a VM on a server, rather than on the desktop with Virtual PC, VirtualBox or similar, means it is always easily available and can be backed up centrally.

The operation began smoothly. I installed the free Sysinternals utility Disk2vhd, which uses shadow copy so that it can create a VHD (virtual hard drive) from the system on which it is running. Next, I moved the VHD to the Hyper-V server and created a new virtual machine set to boot from that drive.
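
Incidentally, Disk2vhd can also be driven from the command line; for example, to capture all volumes to a single VHD (the output path here is just an example):

disk2vhd * e:\vhd\oldpc.vhd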

Windows XP started up first time without any blue screen problems, though it did ask to be reactivated.

I could not activate yet though, because XP could not find a driver for the network card. The solution was to install the Hyper-V integration services, and here things started to go wrong. Integration services asked to upgrade the HAL (Hardware Abstraction Layer), a key system DLL:

[Screenshot: the Integration Services prompt to upgrade the HAL]

However, on restart I got the very same dialog.

Fortunately I was not the first to have this problem. I was prepared for some hassle and had my XP with SP3 CD ready, so I copied and expanded halaacpi.dll from this CD to my system32 folder and amended boot.ini as suggested:

multi(0)disk(0)rdisk(0)partition(2)\WINDOWS="Disk2vhd Microsoft Windows XP Professional" /FASTDETECT /NOEXECUTE=OPTIN /HAL=halaacpi.dll

I rebooted and now the integration services installed OK. However, if you do this then I suggest you delete the /HAL=halaacpi.dll argument before rebooting again, as with this in place Windows would not start for me. In fact, you can delete the special Disk2vhd option in boot.ini completely; it is no longer needed.
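
For reference, the copy-and-expand step uses the expand utility; something like this, assuming the CD is drive D:

expand d:\i386\halaacpi.dl_ c:\windows\system32\halaacpi.dll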

After that everything was fine – integration worked, the network came to life and I activated Windows – but performance was poor. To be fair, it was not that good on the real hardware either. Still, I am working on it. I’ve given the virtual machine 1.5GB RAM and dual processors. Removing software made obsolete by the migration, such as the SoundMAX and NVIDIA drivers, seemed to help quite a bit. It is usable, and will improve as I fine-tune the setup.

Overall, the process was easier than I expected and getting at my old developer setup is now much more convenient.