Bare-metal recovery of a Hyper-V virtual machine

Over the weekend I ran some test restores of Microsoft Hyper-V virtual machines. You can restore a Hyper-V host, complete with its VMs, using the same technique as with any Windows server; but my main focus was on a different scenario. Let’s say you have a Server 2008 VM that has been backed up from the guest using Windows Server Backup. In my case, the backup had been made to a VHD mounted for that purpose. Now the server has been stolen and all you have is your backup. How do you restore the VM?

In principle you can do a bare-metal restore in the same way as with a physical machine. Configure the VM as closely as possible to how it was before, attach the backup, boot the VM from the Server 2008 install media, and perform a system recovery.

Unfortunately this doesn’t work if your VM uses VHDs attached to the virtual SCSI controller. The reason is that the recovery console cannot see the SCSI-attached drives. This is possibly related to the Hyper-V limitation that you cannot boot from a virtual SCSI drive.

The workaround I found was first to attach the backup VHD to the virtual IDE controller (not SCSI), so that the recovery console can see it. Then to do a system recovery of the IDE drives, which will include the C drive. Then to shut down the VM (before the restart), mount both the backup VHD and the SCSI-attached VHDs on the host using diskpart, and use wbadmin to restore each individual volume. Finally, detach the VHDs and restart the VM.
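For reference, the host-side part of that process looks roughly like this; the VHD paths, drive letters and backup version timestamp are placeholders for illustration, and you would check wbadmin get versions for the real version identifier:

    rem -- Attach the backup VHD and the SCSI-attached data VHD on the host (elevated prompt)
    diskpart
    DISKPART> select vdisk file="D:\VMs\Backup.vhd"
    DISKPART> attach vdisk
    DISKPART> select vdisk file="D:\VMs\Data.vhd"
    DISKPART> attach vdisk
    DISKPART> exit

    rem -- List the backups on the attached backup volume (mounted here as E:)
    wbadmin get versions -backupTarget:E:

    rem -- Restore one volume from the backup onto the mounted data VHD (mounted here as F:)
    wbadmin start recovery -version:07/03/2010-21:00 -itemType:Volume -items:F: -backupTarget:E: -recoveryTarget:F: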

It worked. One issue I noticed, though, is that the network adapter in the restored VM was treated as different from the one in the original VM, even though I applied the same MAC address. Not a great inconvenience, but it meant redoing the network settings, since the old ones were attached to a NIC that was now missing.

I’ve appended the details to my post on How to backup Small Business Server 2008 on Hyper-V.

Dusty PC keeps on keeping on

It is amazing how well desktop PCs work even when choked by dust. This one was working fine:

[Image: the inside of the dust-choked PC]

It is a little hard to see from the picture, but the mass of dust on the right is actually a graphics card. The card has its own fan, which was so choked that the space between the blades was completely filled with dust.

Running PCs in this state is not a good idea. If your machine is misbehaving or overheating, it is worth a look inside. If it is working fine, perhaps it is worth a look anyway. The best way to clear the dust is with one of those air duster aerosols.

Changing the motherboard under Windows 7

Today I needed to swap motherboards between a machine running Hyper-V Server 2008 R2 and another running 32-bit Windows 7. No need to go into the reason in detail; it’s to do with some testing I’m doing of Hyper-V backup and restore. The boards were similar, both Intel, though one had a Pentium D processor installed and the other a Core Duo. Anyway, I did the deed and was intrigued to see whether Windows would start in its new hardware.

Hyper-V Server – which is really 64-bit Server Core 2008 R2 – started fine, installed some new drivers, requested a restart, and all was well.

Windows 7 on the other hand did not start. It rebooted itself and offered startup repair, which I accepted. It suggested I try a system restore, which I refused, on the grounds that the problem was not some new corruption, but that I had just changed the motherboard. Next, startup repair went into a lengthy checking procedure, at the end of which it reported failure with an unknown problem possibly related to a configuration change.

That was annoying. Then I remembered the problems Windows has with changing to and from AHCI, a BIOS configuration for Serial ATA; I posted on the subject in the context of Vista. I checked the BIOS, which was set to AHCI, changed it to IDE mode, and Windows started fine. Then I made the registry change for AHCI, shut down, switched the BIOS back to AHCI, and again Windows started fine.
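For the record, the registry change is small. On this machine, with the standard Microsoft AHCI driver, it amounted to enabling the msahci service before switching the BIOS back; something along these lines from an elevated command prompt (systems using Intel's iaStorV driver may need that service enabled too):

    rem -- Enable the Microsoft AHCI driver so Windows can boot once the BIOS is set back to AHCI
    reg add "HKLM\SYSTEM\CurrentControlSet\services\msahci" /v Start /t REG_DWORD /d 0 /f

    rem -- Some Intel systems use the iaStorV driver instead, which may need the same treatment
    reg add "HKLM\SYSTEM\CurrentControlSet\services\iaStorV" /v Start /t REG_DWORD /d 0 /f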

What puzzles me is why the long-running Windows 7 startup repair sequence does not check for this problem. If the alternative is a complete reinstall of Windows, it could save a lot of time and aggravation.

It is also worth noting that Windows 7 declared itself non-genuine after this operation, though actually it re-activated OK. I guess if you had two machines with OEM versions of Windows 7, for example, and swapped the motherboards, then strictly you would need two new licenses.

Don Syme on F#

I’ve posted a lengthy interview with Don Syme, designer of Microsoft’s functional programming language F#. It covers:

  • The genesis of F#
  • Why it is in Visual Studio 2010
  • How it differs from other ML languages
  • Who should use it
  • What it brings to parallel and asynchronous programming
  • Unit testing F#
  • Future plans for F#
  • Book recommendations

One of the questions is: if I’m a C# or C++ developer, what practical, business-benefit reason is there to look at F#? Worth a read if you’ve wondered about that.

Setting up RemoteApp and secure FTP on Windows

I spent some time setting up RemoteApp and secure FTP for a small business which wanted better remote access without VPN. VPN is problematic for various reasons: it is sometimes blocked by public or hotel wi-fi providers, it performs badly over slow or unreliable connections, and it means constantly having to think about whether your VPN tunnel is open or not. When I switched from connecting Outlook over VPN to connecting over HTTP, I found the experience better in every way; it is seamless. At least, it would be if it weren’t for the connection settings bug that changes the authentication type by itself on occasion; but I digress.

Suffice it to say that VPN is not always the best approach to remote access. There is also SharePoint, of course; but there are snags with that as well – it is powerful but complex to manage, and has annoyances such as poor performance when a single folder holds a large number of documents. In addition, Explorer integration in Windows XP does not always work properly; it seems better in Vista and Windows 7.

FTP, on the other hand, can simply publish an existing file share to remote users. FTP can be horribly insecure; it is a common reason for usernames and passwords to be passed in plain text over the internet. Fortunately Microsoft now offers an FTP service for IIS 7.0 that can be configured to require SSL for both password exchange and data transmission; I would not consider it otherwise. Note that this is different from the FTP service that ships with the original Server 2008; if you don’t have 2008 R2 you need a separate download.
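For what it is worth, the SSL requirement ends up in applicationHost.config under the site’s ftpServer settings. I configured it through IIS Manager, but something like the following appcmd command (the site name here is an example) should have the same effect of requiring SSL on both the control and data channels:

    rem -- Require SSL for both the control channel (credentials) and the data channel
    %windir%\system32\inetsrv\appcmd set config -section:system.applicationHost/sites /[name='FtpSite'].ftpServer.security.ssl.controlChannelPolicy:"SslRequire" /[name='FtpSite'].ftpServer.security.ssl.dataChannelPolicy:"SslRequire" /commit:apphost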

So how was the setup? Pretty frustrating at the time; though now that it is all working it does not seem so bad. The problem is the number of moving parts, including your network configuration and firewall, Active Directory, IIS, digital certificates, and Windows security.

FTP is problematic anyway, thanks to its use of multiple ports. Another point of confusion is that FTP over SSL (FTPS) is not the same thing as Secure FTP (SFTP); Microsoft offers an FTPS implementation. A third issue is that neither of Microsoft’s FTP clients, Internet Explorer and the FTP command-line client, supports FTP over SSL, so you have to use a third-party client such as FileZilla. I also discovered that you cannot (easily) run an FTPS client behind an ISA Server firewall, which explained why my early tests failed.

Documentation for the FTP server is reasonable, though you cannot find all the information you need in one place. I also found the configuration perplexing in places. Take this dialog for example:

[Image: the FTP Firewall Support dialog in IIS Manager]

The Data Channel Port Range is disabled with no indication why – the reason is that you set it for the entire IIS server, not for a specific site. But what is the “External IP Address of Firewall”? The wording suggests the public IP address; but the example suggests an internal, private address. I used the private address and it worked.
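That division shows up if you script it: as far as I can tell, the port range lives in a server-wide section, while the firewall address is set per site. Roughly like this, with example ports, an example site name and the private address that worked for me:

    rem -- The data channel port range is a server-wide setting
    %windir%\system32\inetsrv\appcmd set config -section:system.ftpServer/firewallSupport /lowDataChannelPort:5000 /highDataChannelPort:5100 /commit:apphost

    rem -- The external firewall address is set per FTP site
    %windir%\system32\inetsrv\appcmd set config -section:system.applicationHost/sites /[name='FtpSite'].ftpServer.firewallSupport.externalIp4Address:"192.168.0.2" /commit:apphost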

As for RemoteApp, it is a piece of magic that lets you remote the UI of a Windows application, so that it runs on the server but appears to be running locally. It is essentially the same thing as remote desktop, but with the desktop part hidden so that you only see the window of the running app. One of the attractions is that it looks more secure, since you can give a semi-trusted remote user access to specified applications only; but this security is largely illusory, because under the covers it is still a remote log-in and there are ways to escalate the access to a full desktop. Open a RemoteApp link on a Mac, for example, and by default you get the full desktop; you can tweak it to show only the application, but even then it sits on a blank desktop background:

[Image: a RemoteApp session opened on a Mac]

Setup is laborious; there’s a step-by-step guide that covers it well, though note that Terminal Services is now called Remote Desktop Services. I set up TS Gateway, which tunnels the Terminal Server protocol through HTTPS, so you don’t have to open any additional ports in your firewall. I also set up TS Web Access, which lets users navigate to a web page and start apps from a list, rather than having to get hold of a .RDP configuration file or a setup application.
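For the curious, the .RDP file that TS Web Access hands out contains settings along these lines (the server, gateway and application names here are invented examples); the gateway entries are what route the connection over HTTPS rather than a directly exposed RDP port:

    full address:s:apps.internal.example.com
    remoteapplicationmode:i:1
    remoteapplicationprogram:s:||WordPad
    remoteapplicationname:s:WordPad
    gatewayhostname:s:remote.example.com
    gatewayusagemethod:i:1
    gatewaycredentialssource:i:0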

If you must run a Windows application remotely, RemoteApp is a brilliant solution, though note that you need additional Client Access Licenses for these services. Nevertheless, it is a shame that despite the high level of complexity in the configuration of TS Gateway, involving a Connection Authorization Policy and a Resource Authorization Policy, there is no setting for “only allow users to run these applications, nothing else”. You have to do this separately through Software Restriction Policies – the document Terminal Services from A to Z from Cláudio Rodrigues at WTS.Labs has a good explanation.

I noticed that Rodrigues is not impressed with the complexity of setting up RemoteApp, TS Gateway and the rest on Windows Server 2008 R2:

So years ago (2003/2004) we had all that sorted out: RDP over HTTPS, Published Applications, Resource Based Load Balancing and so on and no kidding, it would not take you more than 30 minutes to get all going. Simple and elegant design. More than that, I would say, smart design.

Today after going through all the stuff required to get RDS Web Access, RDS Gateway and RDS Session Broker up and running I am simply baffled. Stunned. This is for sure the epitome of bad design. I am still banging my head in the wall just thinking about how the setup of all this makes no sense and more than that, what a steep learning curve this will be for anyone that is now on Windows Server 2003 TS.

What amazes me the most is Microsoft had YEARS to watch what others did and learn with their mistakes and then come up with something clean. Smart. Unfortunately that was not the case … Again, I am not debating if the solution at the end works. It does. I am discussing how easy it is to setup, how smart the design is and so on. And in that respect, they simply failed to deliver. I am telling you that based on 15+ years of experience doing nothing else other than TS/RDS/Citrix deployments and starting companies focused on TS/RDS development. I may look stupid indeed but I know some shit about these things.

Simplicity and clean design are key elements on any good piece of software, what someone in Redmond seems to disagree.

My own experience was not that bad, though admittedly I did not look into load balancing for this small setup. I agree though: you have to do a lot of clicking to get this stuff up and running. I am reminded of the question I asked a few months back: Should IT administration be less annoying? I think it should, if only because complexity increases the risk of mistakes, or of taking shortcuts that undermine security.

Ten years of Microsoft .NET – but what about the next ten?

Technology products have many birthdays – do you count from first announcement, or release to manufacturing, or general availability? Still, this week is a significant one for Microsoft .NET and the C# language, which was first unveiled to the world in detail at Tech-Ed Europe on July 7th, 2000. The timing was odd; July 7th was the last day of Tech-Ed, whereas news at such events is normally reserved for the first day or two – but the reason was to preview the announcement at the Professional Developers Conference in Orlando the following week. It was one of the few occasions when Europe got the exclusive, though as I recall most of the journalists had already gone home.

It is interesting to look back, and I wrote a piece for The Register on .NET hits and misses. However you spin it, it’s fair to say that the .NET platform has proved to be one of Microsoft’s better initiatives, and has delivered on at least some of its goals.

It is even more interesting to look forward. Will we still be using .NET in 2020?

There is no sign of Microsoft announcing a replacement for .NET, and little sign of .NET catching on in a big way outside the Microsoft platform, so in part the question is about how the company will fare over the coming decade. Still, it is worth noting that the role of the .NET Framework in that platform still seems to be increasing.

Most predictions are wrong; but the general trend right now is towards the cloud+device computing model. The proposition is that both applications and data belong in the cloud, whether public, private or hybrid. Further, it seems plausible that we will fall out of love with personal computers, with all their complexity and vulnerability to malware, and embrace devices that just work, where the operating system is locked down, data is just a synchronised local cache, and applications are lightweight clients for internet services. Smartphones are already like this, but by the end of this year when Apple’s iPad has been joined by other slates and small computers running Google Android, Google ChromeOS, Intel/Nokia MeeGo and HP WebOS, it may be obvious that traditional laptop and desktop computers will decline.

It turns out that the .NET Framework is well suited to this model, so much so that Microsoft has made it the development platform for Windows Phone 7. Why stop at Windows Phone 7 – what about larger devices that run only .NET applications, sandboxed from the underlying operating system and updated automatically over the Internet? Microsoft cannot do that for Windows as we know it, because we demand compatibility with existing applications, but it could extend the Windows Phone 7 OS and application model to a wider range of devices that take over some of the tasks for which we currently use a laptop.

In theory then, with Azure in the cloud and Silverlight on devices, the next ten years could be good ones for the .NET Framework.

That said, it is also easy to build the case against. Microsoft has it all to do with Windows Phone 7; the market is happily focused on Apple and Google Android devices at the high end. Microsoft’s hardware partners are showing signs of disloyalty, after years of disappointment with Windows Mobile, and HP has acquired Palm. If Windows Phone 7 fails to capture much of the market, as it may well do, then mobile .NET will likely fail with it. Put this together with a decline in traditional Windows machines, and the attraction of .NET as a cloud-to-client framework will diminish.

Although developer platform VP Scott Guthrie, C# architect Anders Hejlsberg and others are doing an excellent job of evolving the .NET framework, it is the success or failure of the wider Microsoft platform that will determine its future.

iTunes hacks: whose fault are they?

A big story today concerns irregular activity on Apple’s iTunes store, the one and only means of purchasing applications for iPhone and iPad, and central to the company’s strategy. The reports allege that developers are hacking iTunes accounts to purchase and give favourable reviews to their own apps – which can only be a short-term strategy, since you would imagine that such activity would soon be detected and the perpetrators traced through the payment system.

As it happens I’d been meaning to post about iTunes security in any case. I blogged about an incident just over a month ago, since when there has been a steady stream of comments from other users who say that their iTunes accounts were hacked and fraudulent purchases made.

A recent comment refers to this thread, started over a year ago and now with over 200 comments from similarly afflicted users.

Despite the number of reported incidents, there is no reason to suppose that Apple’s servers have been broken into. Several other mechanisms are more likely, including malware-infected computers on which users may have stored passwords or had keystrokes logged, or successful attempts to guess passwords or the answers to so-called “security questions”, which also give access to account details.

Such questions should be called insecurity questions, since they are really designed to reduce the burden on helpdesks from users who have lost passwords or access to obsolete email accounts. Because they allow access to an account without knowing the password, they reduce security, all the more so when the questions rely on semi-public information, such as a mother’s maiden name, which is commonly used.

Given the number of iTunes accounts, it is not surprising that there are numerous successful hacks, whether or not there is some issue (other than the insecurity questions) with iTunes or Apple’s servers.

That said, there is a consistent theme running through all these threads, which is that Apple’s customer service towards victims of hacking seems poor. Contact is email-only, users are simply referred to their banks, Apple promises further contact within 24 hours that is often not forthcoming, and there are reports of users losing access to credit or previous purchases. It was an instance of the latter which prompted my earlier post.

Apple therefore should fix its customer service, even if its servers are watertight. I’d like to see it lose the insecurity questions too.


iPhone 4 Antenna: Apple wrongly calls it a software problem – but it is easily fixed with a case

Apple is sufficiently bothered by criticism of the iPhone 4 antenna – an external band around the device which gives poor reception when the phone is held in the normal way – that it has posted a letter on the subject:

We have discovered the cause of this dramatic drop in bars, and it is both simple and surprising.

Upon investigation, we were stunned to find that the formula we use to calculate how many bars of signal strength to display is totally wrong. Our formula, in many instances, mistakenly displays 2 more bars than it should for a given signal strength.

Apple’s reasoning is that because the range of values displayed by its signal bars is smaller than it should be, users can see a signal drop of two or three bars when the real drop is only a small one. So it’s apologised … for its software error:

For those who have had concerns, we apologize for any anxiety we may have caused.

However, users are not primarily concerned about the number of bars. They are concerned about calls dropping, or even being unable to make calls. The best article I have seen on the matter is Anandtech’s detailed review, which has the measurements: the iPhone 4’s signal attenuation when “holding naturally” is 19.8dB, nearly twice as severe as an HTC Nexus One, and around ten times worse than an iPhone 3GS at 1.9dB.

It is disappointing that Apple will not own up to the problem, or do anything about it for existing customers – though you can bet that future iterations of iPhone 4 will fix the issue.

Still, there is one thing in Apple’s letter that I agree with:

As a reminder, if you are not fully satisfied, you can return your undamaged iPhone to any Apple Retail Store or the online Apple Store within 30 days of purchase for a full refund.

The antenna problem is a fault and a return is justified. That said, you can fix the problem by buying a case – yes, Apple should pay, but it seems determined to avoid doing so. Since iPhone 4 is still in high demand, my assumption is that most customers feel it is worth having despite its flaw.

Kin questions as Microsoft pulls the plug

So Microsoft has stopped work on its Kin phone and cancelled plans for a European launch:

We have made the decision to focus on our Windows Phone 7 launch and we will not ship KIN in Europe this fall as planned. Additionally, we are integrating our KIN team with the Windows Phone 7 team, incorporating valuable ideas and technologies from KIN into future Windows Phone releases. We will continue to work with Verizon in the U.S. to sell current KIN phones.

The Kin went on sale in May in the US, on Verizon. I’ve never seen a Kin device; but there were several obvious problems:

  • The phones were not that good, according to reports. In perhaps the most competitive technology market that exists, a device has to be exceptional to succeed; and even then it might not. Palm webOS phones are great devices and still not really winners.
  • The Verizon plan was too expensive at $70 per month – a bewildering price for the youth market which was the supposed target.
  • Even if the phones and service had been good, the launch was puzzling in the context of the build-up to Windows Phone 7 later this year.

My initial reaction to Kin was “Whose fault is it?” and there has been no reason to change it.

The whole thing is a tragi-comedy, and joins projects like the Ultra Mobile PC, or Origami, whose failure was baked into the launch – Origami was also too expensive for its market as well as flawed in its design.

Killing the Kin after just a few weeks is embarrassing, but the right decision.

The key question though: what does the costly development, launch, and scrapping of Kin say about Microsoft’s management? If I were a shareholder I’d like to know the answer to that one.

I might also ask why Microsoft is spending big on an advertising campaign to persuade us to become the “new busy”, when we are already busy enough, for an online service that is mostly not yet launched. I wonder how many potential users took a look at the new Hotmail, observed that it was much the same as the old one, and will never come back.

In the case of Kin the company has at least recognized its mistake; but the deeper problem is an accident-prone culture that is damaging Microsoft’s prospects.