All posts by onlyconnect

How Hyper-V can seem to lose your data

I’m sure it can really lose your data as well, but in this case “seem” is the appropriate word. I’ve been messing around with Hyper-V and one of my test machines is a SharePoint server. I started this up and found I could not access it over the network. On further investigation, it turned out to be a broken trust relationship with the Domain Controller. In other words, on attempting to log on with domain credentials I got the message:

The trust relationship between this workstation and the primary domain failed

The official advice when confronted with this problem is to remove the machine from the domain and re-join it, creating a new computer account. I did so, logged on, and was disappointed to discover that SharePoint was now empty. Worse still, even digging into the SQL Server databases did not turn up the missing content. All my documents had vanished.

It turned out that I had done the wrong thing. What had really happened is that Hyper-V had been saving my changes on that virtual hard drive to a “differencing disk”, a file with an .avhd extension. This is part of the Hyper-V snapshot system. Somehow, Hyper-V had forgotten the differencing disk, and started up my SharePoint VM using the last fully merged copy of the drive, which was over a month old. My drive had gone back in time, so the data had gone.

The solution was to restore the old parent .vhd from backup, and then manually merge it with the differencing file. Step by step instructions are here. Since I had deleted the original computer account, I then had to remove and rejoin the machine to the domain a second time. All was well and my data reappeared.
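For what it’s worth, on a more recent Hyper-V host the same merge can be scripted with the Hyper-V PowerShell module. This is a sketch only – the file names are hypothetical, the VM must be shut down first, and you should keep a copy of the parent .vhd before merging:

# A minimal sketch, assuming a newer Hyper-V host with the Hyper-V PowerShell
# module installed; paths and file names are hypothetical examples.
Merge-VHD -Path 'D:\VMs\SharePoint\SharePoint_diff.avhd' `
          -DestinationPath 'D:\VMs\SharePoint\SharePoint.vhd'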

The bug here is how Hyper-V managed to start with an old version of the virtual hard drive in the first place. I can imagine this causing panic if it occurs in production – and once you start writing new, important data to the old version you are really in trouble. I was lucky that the discrepancy was severe enough that Active Directory complained.

Virtualization may be wonderful; but it also introduces new problems of its own.

The other lesson is that those .vhd files in C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks do not necessarily contain your latest data. You also need to consider the .avhd files stored handily at C:\ProgramData\Microsoft\Windows\Hyper-V\Snapshots.
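A quick way to check for lurking differencing disks is to compare timestamps – if an .avhd is newer than its parent .vhd, the .vhd on its own is not the whole story. A sketch, using the default locations (adjust to wherever your disks actually live):

# List differencing disks and parent disks with their last-write times;
# a recent .avhd next to an old .vhd means the .vhd is out of date.
Get-ChildItem 'C:\ProgramData\Microsoft\Windows\Hyper-V\Snapshots' -Recurse -Filter *.avhd |
    Select-Object FullName, LastWriteTime
Get-ChildItem 'C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks' -Filter *.vhd |
    Select-Object FullName, LastWriteTime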


Have you seen a real JavaFX app yet? Sun’s misleading 100 million claim

I haven’t – only samples and demos. Which makes Jonathan Schwartz’s claim of 100,000,000 downloads, presented as “JavaFX Hits 100,000,000 Milestone!”, suspect. Still, I reckon there is an easy explanation. JavaFX is now included with the JRE, the standard Java runtime download. So what Schwartz means – please correct me if I am wrong – is that there have been 100,000,000 downloaded updates to the JRE (no doubt partly thanks to Sun’s auto updater on Windows), since JavaFX became part of it.

In order to test this theory, I fired up a virtual machine (using Sun’s excellent Virtual Box) which runs Vista but does not have Java installed. Then I went to Java.com and went for the free Java download. At the end of the install, I saw this dialog:

Note: it says “The JavaFX runtime will be downloaded when you click Finish”. There are no buttons aside from Finish, unless you count the close gadget. Therefore, I got JavaFX by default with the JRE.

I have nothing against JavaFX, but meaningless PR spin will do nothing to help the technology.


QCon next month reports strong registrations

An email this morning reminds me that QCon takes place in London next month – this is one of my favourite developer-focused conferences, with excellent speakers covering a breadth of technology, though if you hate all things Agile it is probably not the place for you.

The organizers say that:

attendance for this year’s QCon London is actually ahead of last year’s, despite the problems in the economy

Given that it’s now common for conferences to be shrunk or cancelled, that’s impressive.


Fixing the Exchange 2007 quarantine – most obscure Outlook operation ever

I’ve been testing Exchange 2007 recently and overall I’m impressed. Smooth and powerful; and the built-in anti-spam is a great improvement on what is in Exchange 2003. One of the features lets you redirect spam to a quarantine mailbox. You know the kind of thing: it’s a junk bucket, and someone gets the job of sifting through it looking for false positives, like lotteries you really have won (still looking).
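For reference, switching the quarantine on is a one-liner in the Exchange Management Shell. A sketch – the mailbox address and SCL threshold here are hypothetical examples, not recommendations:

# Redirect messages above the chosen SCL threshold to a quarantine mailbox;
# the address and threshold are hypothetical examples.
Set-ContentFilterConfig -SCLQuarantineEnabled $true `
                        -SCLQuarantineThreshold 6 `
                        -QuarantineMailbox 'quarantine@example.com'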

It sounds like a nice feature, but apparently Microsoft did not quite finish it. The quarantine is a standard Exchange mailbox, which means you have to add a quarantine user, and to view the quarantine you log on to that mailbox. That is a bit of a nuisance, but not too bad once you have figured out the somewhat obscure means of opening another user’s mailbox within your own Outlook. Then you notice a usability issue: every entry appears as a non-delivery report from the Administrator, so you cannot see who a message is really from without opening the report, which makes it harder to scan the quarantine for genuine messages.

Another issue is when you find an email you want to pluck out of the bucket. My guess is that you will need to Google this one, or call support. The trick is to open the message, and click Send Again. It is counter-intuitive, because the message you are sending again is not the one you can see – that’s the Administrator’s report – but the original message which is otherwise hidden.

So you hit Send Again. As if by magic, the lost message appears. Great; but there’s another little issue. If you hit Send, the message will be sent from you, not from the original sender.

Both issues can be fixed. The fix for Send Again is to log on as the quarantine user – opening the mailbox is not enough. Since it is not particularly easy to switch user in Outlook, the obvious solution is Outlook Web Access; or you could use Switch User in Vista to log on with Outlook as the quarantine user. Send Again will then use the original sender by default.

How about being able to see the original sender in Outlook? No problem – just follow the instructions here. I won’t bore you by repeating them; but they form, I believe, a new winner in the Outlook obscurity hall of shame. After using Notepad to create and save a form config file, you use the UI to install it, and here’s a screenshot showing how deeply the required dialog is buried:

A few more steps involving a field picker dialog reminiscent of Windows 95, and now you can see all those faked sender email addresses:

The mitigating factor is that the anti-spam rules themselves are pretty good, and I’ve not found many false positives.

SharePoint 2007 tip: use Explorer not the browser to upload documents

I am testing SharePoint on my local network. MOSS (Microsoft Office SharePoint Server) 2007 is installed, on Hyper-V of course. I go to the default site and create a new document library. Navigate to the new library, and select multiple upload. Select all the files in an existing network share that contains just over 1000 documents. Hit upload. Files upload at impressive speed. Nice. But … the library remains empty. No error reported, just a nil result.

I suspect the speed only seems impressive because it is not really uploading the documents; it is uploading a list of documents to upload later.

I try multiple upload of just three documents. Works fine.

I go to the site administration and look at the general settings. This looks like it – a 50MB limit:

I change it to 1000MB. Retry the upload. Same result. Restart SharePoint. Same result.

Hmm, maybe this post has the answer:

Yes there is problem with WSS 3.0 and MOSS 2007 while uploading a multiple file at a time given the fact both supports the multifile uploading. [sic]

You can upload multiple file by using Explorer View not the default view (All Documents). In this way you can use the windows like functionality of dragging and dropping a file from your folders without encountering any error and added advantage will be the speed of uploading a file. This is the best way of uploading a file to a document library in WSS 3.0 or MOSS.

I try the multiple copy in Explorer view, and indeed it works perfectly. Another advantage: in Explorer view, all the uploaded documents retain the date of the file, whereas Multiple Upload gives them all today’s date.

Conclusion: use the Explorer view, not the web browser, to copy files to and from SharePoint. On Vista, you can make a SharePoint library a “favourite link” which simplifies navigation.
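Since the Explorer view is WebDAV under the hood, you can also script the copy, provided the WebClient service is running on the machine doing the copying. A sketch, with a hypothetical server name, library and source path:

# Copy files straight into the document library's WebDAV path; the server,
# library and source paths are hypothetical examples.
$library = '\\moss2007\DavWWWRoot\Shared Documents'
Copy-Item -Path 'Z:\ExistingShare\*' -Destination $library -Recurse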

Why not just use a shared folder? That’s the big question. I’ve never had this kind of problem with simple network shares. In what circumstances is the performance overhead and hassle of SharePoint justified by the extra features it offers? I’m hoping hands-on experience will help me make that judgement.


Is it OK to rip a CD, then sell it?

I’ve been mulling over this comment on the Music Magpie web site:

We originally launched musicmagpie as an easy way for everyone to turn their old CDs into cash so that they did not have to be thrown away if they had decided to go digital.

Music Magpie is a second-hand CD retailer which cleverly portrays itself as green by pointing out that it is better for the environment to sell your CDs than to chuck them away. Incidentally, I chuck away hundreds of CDs every year, but they are promotional data CDs from computer magazines, conferences and the like; I doubt Music Magpie would thank me for them.

So imagine that I’ve got a lot of CDs but have now “gone digital”. I suppose if I am not tech-savvy enough to know that CDs are also digital, I might re-buy the ones I still liked on iTunes. It is more likely though that I would rip my CDs to computer before doing anything else, or “import from CD” as Apple describes it. So is my next step to flog the redundant plastic to Music Magpie, or on Amazon or eBay if I want a better price?

Ethically, I’m pretty sure the answer is no. Legally, I’m not even sure it is OK to rip them in the first place. In practice, I’m aware that lots of people do this, and I imagine that it forms a significant part of the market for Music Magpie and other second-hand dealers. Pragmatically, collectors aside, a CD is pretty much useless once you have a lossless copy and a backup, so you can understand why people sell them.

It makes me wonder why there is so little guidance on the subject, for example on CDs themselves. If I pick up a CD, I read “Unauthorized copying, public performance, broadcasting, hiring or rental of this recording prohibited.” A reasonable person would presume that it is OK to sell the CD as a second-hand item. A reasonable person, noting the existence of prominent ripping features in software from the most reputable software companies (Apple, Microsoft, etc) would presume that it is OK to rip the CD. So why not both?

I’m guessing that the reason for the silence is that industry lawyers are reluctant to broach the subject, for fear of giving away too much. For example, if there were guidance that said, “it is OK to rip”, that would concede a point they may be unwilling to concede.


Google says top two results get most of the hits – but what about ads?

A post on the Official Google Blog says that the first two search results get most of the clicks:

This pattern suggests that the order in which Google returned the results was successful; most users found what they were looking for among the first two results and they never needed to go further down the page.

I knew you had to be on the first page – but the “top two” result is even harder to achieve.

It is significant though that Google’s post makes no mention of ads. I am quite sure that the study included research into their effectiveness. Google has chosen not to reveal this aspect of the research.

In particular, most Google search results do not look like the examples. Rather, they have ads at the top which look just like the other results, except with a different background colour and a faint “Sponsored Links” at the right:

My question: in a result list like this, which “top two” gets the eyeballs and the clicks? The search results? Or the paid links?


Kaspersky site hacked through SQL injection

There are millions of sites out there vulnerable to SQL injection; apparently one of them (at least until yesterday) was that of the security software vendor kaspersky.com. A hacker codenamed unu posted details – not all the details, but enough to show that the vulnerability was real. The hack exposed username tables and possibly personal details. Reddit has a discussion of the programming issues. According to the Reg, Kaspersky had been warned but took no action:

I have sent emails to info@kaspersky.com, forum@kaspersky.com, and webmaster@kaspersky.com warning Kasperky [sic] about the problem but I didn’t get any response," Unu, the hacker, said in an email. "After some time, still having no response from Kaspersky, I have published the article on hackersblog.org regarding the vulnerability.

The trouble with those kinds of email addresses is that they are unlikely to reach the right people. It is still disappointing; and also disappointing that there is currently no mention of the issue (that I can see) on Kaspersky’s site. The company’s response to the security hole is just as important as the vulnerability itself. When WordPress was hacked, founder Matt Mullenweg was everywhere responding to comments – on this blog, for example. I liked that a lot.
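For anyone wondering what the underlying programming issue looks like, here is an illustrative sketch – emphatically not Kaspersky’s code – of the difference between concatenating user input into a SQL statement and using a parameterised query, written against the .NET SqlClient classes:

# Illustrative only; $userId stands in for untrusted input from a web request
# and the connection string is a placeholder.
$userId = "1' OR '1'='1"
$conn = New-Object System.Data.SqlClient.SqlConnection 'Server=.;Database=demo;Integrated Security=true'
$conn.Open()

# Vulnerable pattern: the input becomes part of the SQL text itself
$bad = $conn.CreateCommand()
$bad.CommandText = "SELECT name FROM users WHERE id = '$userId'"

# Safer pattern: the input is passed as a parameter and never parsed as SQL
$good = $conn.CreateCommand()
$good.CommandText = 'SELECT name FROM users WHERE id = @id'
[void]$good.Parameters.AddWithValue('@id', $userId)
$name = $good.ExecuteScalar()
$conn.Close()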


The Exchange VSS plug-in for Server 2008 that isn’t (yet)

If you install Exchange 2007 on Server 2008, one problem to consider is that the built-in backup, Windows Server Backup (WSB), is not Exchange-aware. You have to use a third-party backup, or hack in the old ntbackup from Server 2003. Otherwise Exchange might not be restorable, and won’t truncate its logs after a backup.
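Incidentally, Exchange does register a VSS writer on the server – you can see it listed while the information store service is running – it is just that Windows Server Backup, as shipped, has nothing that talks to it for Exchange-aware backups:

# List the registered VSS writers; "Microsoft Exchange Writer" should appear
# while the Microsoft Exchange Information Store service is running.
vssadmin list writers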

In June 2008 Scott Schnoll, Principal Technical Writer on the Exchange Server product team, announced that:

As a result of the large amount of feedback we received on this issue, we have decided to ship a plug-in for WSB created by Windows and the Small Business Server (SBS) team that enables VSS-based backups of Exchange.

He is referring to the fact that Small Business Server 2008 does include a VSS (Volume Shadow Copy Service) plug-in for Exchange, so that the built-in backup works as you would expect. The plug-in was also announced at TechEd 2008, where shipping later that summer was mentioned, and the decision was generally applauded. But SBS 2008 shipped last year. So where is the plug-in?

This became the subject of a thread on TechNet, started in August 2008, in which the participants refused to accept a series of meaningless “we’re working on it” responses:

This is becoming more than a little absurd.  I understand that these things can take time, and that unexpected delays can occur, but I rather expect that more information might be provided than “we’re working on it”, because I know that already and knew it months ago.  What sort of timeframe are we looking at, broadly?  What is the most that you are able to tell us?

Then someone spotted a comment by Group Program Manager Kurt Phillips in this thread:

We’re planning on starting work on a backup solution in December – more to follow on that.

Phillips then said in the first thread mentioned above:

The SBS team did implement a plug-in for this.  In fact, we met with them to discuss some of the early design work and when we postponed doing it in late summer, they went ahead with their own plans, as it is clearly more targeted toward their customer segment (small businesses) than the overall Exchange market.

We are certainly evaluating their work in our plan.

For those anxiously awaiting the plug-in, because they either mistrust or don’t want to pay for a third-party solution, the story has changed quite a bit from the June announcement. Apparently no work was done on the plug-in for six months or so; and rather than implementing the SBS plug-in it now seems that the Exchange team is doing its own. Not good communication; and here comes Mr Fed-Up:

Like most things from this company, we can expect a beta quality “solution” by sometime in 2010. We have a few hundred small business clients that we do outsourced IT for, and as it’s come time to replace machines, we’ve been replacing Windows PCs with Macs, and Windows servers with Linux. It’s really amazing how easy it is to setup a Windows domain on a Linux server these days. The end users can’t tell a difference.

What this illustrates is that blogging, forums and open communication are great, but only when you communicate bad news as well as good. It is remarkable how much more patient users are when they feel in touch with what is happening.


Mixing Hyper-V, Domain Controller and DHCP server

My one-box Windows server infrastructure is working fine, but I ran into a little problem with DHCP. I’d decided to have the host operating system run not only Hyper-V, but also domain services, including Active Directory, DNS and DHCP. I’m not sure this is best practice. Sander Berkouwer has a useful couple of posts in which he explains first that making the host OS a domain controller is poor design:

From an architectural point of view this is not a desired configuration. From this point of view you want to separate the virtualization and platforms from the services and applications. This way you’re not bound to a virtualization product, a platform, certain services or applications. Microsoft’s high horse from an architectural point of view is the One Server, One Server Role thought, in which one server role per server platform gets deployed. No need for a WINS server anymore? Simply shut it down…

Next, he goes on to explain the pitfalls of having your DC in a VM:

Virtualizing a Domain Controller reintroduces possibilities to mess up the Domain Controller in ways most of the Directory Services Most Valuable Professionals (MVPs) and other Active Directory enthusiasts have been fixing since the dawn of Active Directory.

He talks about problems with time synchronization, backup and restore, saved state (don’t do it), and possible replication errors. His preference after all that:

In a Hyper-V environment I recommend placing one Domain Controller per domain outside of your virtualized platform and making this Domain Controller a Global Catalog. (especially in environments with Microsoft Exchange).

Sounds good, except that for a tiny network there are a couple of other factors. First, I want to avoid running multiple servers all hungry for power. Second, I want to make the best use of limited resources on a single box. That means either risking running a Primary Domain Controller (PDC) on a VM (perhaps with the strange scenario of having the host OS joined to a domain controlled by one of its VMs), or risking making the host OS the PDC. I’ve opted for the latter for the moment, though it would be fairly easy to change course. I figure it could be good to have a VM as a backup domain controller for disaster recovery, in the scenario where the host OS would not restore but the VMs would – belt and braces within the confines of one server.

One of the essential services on a network is DHCP, which assigns IP addresses to computers. There should be exactly one DHCP server on a small network like this (unless you use static addresses everywhere, which I hate). So I disabled the existing DHCP server, and added the DHCP server role to the new server.

It was not happy. No IP addresses were served, and the error logged was 1041:

The DHCP service is not servicing any DHCPv4 clients because none of the active network interfaces have statically configured IPv4 addresses, or there are no active interfaces.

Now, this box has two real NICs (one for use by ISA), which means four virtual NICs after Hyper-V is installed. The only one that the DHCP server should see is the virtual NIC for the LAN, which is configured with a static address. So why the error?

I’m not the first to run into this problem. Various solutions are proposed, including fitting an additional NIC just for DHCP. However, this one worked for me.

I simply changed the mask on the desired interface from 255.255.255.0 to 255.255.0.0, saved it, then changed it back.  Suddenly the interface appeared in the DHCP bindings.

Strange I know. The configuration afterwards was the same as before, but the DHCP server now runs fine. Looks like a bug to me.
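For what it’s worth, on more recent versions of Windows Server the DHCP interface bindings can be inspected and set directly from PowerShell, which makes this sort of thing easier to diagnose. A sketch, assuming the DhcpServer module (Server 2012 or later) and a hypothetical interface name:

# Show which interfaces the DHCP server is bound to, then bind the LAN
# interface explicitly; the interface alias is a hypothetical example.
Get-DhcpServerv4Binding
Set-DhcpServerv4Binding -InterfaceAlias 'LAN - Virtual Network' -BindingState $true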