How many people buy software from sites like this?

Over the weekend I was helping a student find the best price for some software, taking advantage of any educational discounts on offer. We tried Google and soon came up with a company offering excellent prices.

Unfortunately all the software on offer is pirated. If you make a purchase, you are doubly caught: first you have downloaded software that is no more legal than what you might get for free from a torrent; and second you have paid good money while acquiring nothing of value.

But how would you know? I admit, it did not take me long. I went to the terms and conditions, and saw stuff like this:

2.3. You have no possibility to register the software with the producer and updates are accessible for selective commodities.
2.4. The printed license documents will not be available for you.
2.5. We will not give you a disk with a copy of software.

I also noticed that the About Us page gave no address, nor even a country, for the store; and that Contact Us was just a web form, with no address or telephone number. Finally I looked up the domain name, which is registered to a gentleman in Slovenia; a web search on the name brings it up in connection with other fraudulent ecommerce sites – though frankly it would be surprising if anyone of that name exists at all.
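
If you want to run the same check yourself, the WHOIS record is the place to start. Here is a rough sketch in Python; it assumes the standard whois command-line tool is installed, and the domain is a placeholder:

# Shell out to the system "whois" tool and pick out the lines that identify
# who registered the domain, where, and when. The domain name is a placeholder.
import subprocess

def whois_summary(domain):
    output = subprocess.run(["whois", domain], capture_output=True, text=True).stdout
    keys = ("Registrant", "Registrar:", "Creation Date", "Country")
    return [line.strip() for line in output.splitlines() if any(key in line for key in keys)]

for line in whois_summary("example.com"):
    print(line)

A domain registered recently, and to an individual rather than a company, is another warning sign.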

Nevertheless, I would be intrigued to know how many people do purchase from these sites. After all, many of us don’t read the small print; and we may know that both OEM and educational discounts are a reality, and that downloaded software can be cheaper.

I was interested to see how I would be asked to pay, so I started a purchase of Adobe CS4 Master Collection for just $299.00. Proceeding to checkout, I ended up on an SSL connection which raised no alerts from the bad-site protection in any browser that I tried. In fact, the certificate checked out OK, secured by Equifax Secure Global eBusiness CA-1.

Naturally I declined to enter my credit card details.

Still, I can imagine someone who is in a hurry or not expert in the wiles of the web thinking everything is OK. For all I know, there may be many satisfied customers out there, who got cheap software and do not know or care about its origins.

It strikes me that both the search engines and especially the SSL certificate providers could do better at defending against this kind of scam.

In the meantime, how can you tell if a site is genuine? My suggestions:

1. Disregard any familiar logos – particularly if they are images without links. This example has logos from Shop.Com, PriceGrabber.com, CNet Certified Store, and BBB Accredited Business. Just images.

2. The fact that you got to the site through a Google search or even a Google ad tells you nothing about its legality.

3. Look at the About and Contact details. Genuine companies give an address, company registration and tax details, which you can also look up elsewhere. Lack of any such details is suspicious.

4. If the prices seem too good to be true, they likely are.

5. Text in broken English is a dead giveaway – but I’ve noticed the scammers getting better at this.

6. If in doubt, use a well-known and trusted retailer instead.

A few observations on King Crimson: The Court of the Crimson King

DGM has released a magnificent CD/DVD box set reissue of King Crimson’s classic debut, The Court of the Crimson King.

Maybe I will write more about this when I have listened to it properly, but in the meantime a few observations.

This is completist heaven. There is always argument about whether reissues should feature the original mix (for authenticity) or a new mix (to benefit from modern noise-free mixing techniques). The makers of the recent Genesis boxes contentiously chose the latter. DGM by contrast offers both.

Not only that, you get several versions of both. You get the new 2009 mix on CD and in several DVD versions – several DVD versions because only DVD-Audio players can cope with the highest resolution, and most people only have DVD-Video players – so we end up with a 2009 surround mix in two audio versions; a 2009 stereo mix in two audio versions; and the original mix, as mastered in 2004, in two audio versions.

It doesn’t stop there. We also get a needledrop from the first pressing of the UK vinyl release on Island Records; and an alternate take version of the album with different performances, such as an instrumental-only 21st Century Schizoid Man.

Then there are the other extras: the full version of Moonchild; a live concert from 1969 (Hyde Park on July 5th, combined with the Fillmore East, New York, in November); and a mono album mix issued for US radio.

If 5 CDs and a DVD aren’t enough for you, you can also enjoy the LP-size box, which enables the original artwork to be printed at its proper size, and inserts including a well-written 24-page booklet, two photographs from the era and, rattling aimlessly about inside, two little badges.

But I promised some observations, not just a description. I love the album; never be deceived by the opening clamour of 21st Century Schizoid Man: this is thoughtful music, not a mindless thrash. It was extraordinary hearing it for the first time; I’m not sure when that was for me – not 1969, but a couple of years later. It might have been on that wonderful Island Records sampler, Nice Enough to Eat, which I listened to in 1972 or thereabouts. If any album deserves this kind of treatment, this one does.

It was particularly thoughtful of the compilers to include the vinyl needledrop and the full-size artwork. Still … as it happens I have the record, not the first pressing, but an early ILPS, 4U matrix if you really want to know.

I played the record and then the CD needledrop. You know what? My record sounds better to my ears. Oddly, on the “declicked” needledrop you can easily hear pre-echo of the opening salvo of Schizoid, where it goes from very quiet to very loud. This is a vinyl flaw, where a quiet groove picks up a faint echo of the louder groove which follows it. My cut doesn’t have that, at least not audibly. It also sounds richer, more open, more dynamic.

Another thing I noticed: the artwork. Honestly, you have to see an early pressing of the original LP to appreciate this very striking image. The definition is much better and the colours more vibrant; those eyes stare manically out of the original, while in the new print they are muted.

I’m guessing that they didn’t manage to get hold of the original artwork, and that what we have here is a print of a print.

Never mind. If you love this album, get the box; it is fantastic. You can get it from Burning Shed.

Hyper-V VMs can fail to start if the host is copying a large file

I’ve been working with a couple of Microsoft Hyper-V servers, one of which has 20GB RAM. It had two virtual machine guests, one with 12GB allocated and another with 2GB. I created a third VM with 2GB and started it up. It worked initially, but on rebooting the VM I got the message:

Failed to create partition: Insufficient system resources exist to complete the requested service. (0x800705AA)

This was puzzling. Most people consider that the Hyper-V host does not need very much RAM for its own operations – Brien Posey suggests 2GB, for example – and I am running the stripped-down Hyper-V 2008 R2. The remaining 4GB should be more than enough.

After chasing round for a bit, and wondering whether it was something to do with NUMA, or WmiPrvSE.exe gobbling all the RAM, I found the reason. At the time I was trying to start the VM, the Hyper-V host was copying a large file (a .VHD) to an external drive for backup. To perform the copy, the host was using a large amount of RAM as a temporary cache, and was apparently unable to release it for a VM to use until the copy completed.
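
If you want to watch this happening, monitoring available memory while a large copy runs is enough to show the cache growing. A rough sketch, assuming the third-party psutil package and with placeholder paths:

# Print available RAM every few seconds while a large file is copied, to show
# how much memory the copy consumes. Paths are placeholders.
import shutil
import threading
import time
import psutil

def monitor(stop_event, interval=5):
    while not stop_event.is_set():
        free_gb = psutil.virtual_memory().available / (1024 ** 3)
        print(f"available RAM: {free_gb:.1f} GB")
        time.sleep(interval)

def copy_with_monitor(src, dst):
    stop = threading.Event()
    watcher = threading.Thread(target=monitor, args=(stop,))
    watcher.start()
    try:
        shutil.copy2(src, dst)   # the large .VHD copy
    finally:
        stop.set()
        watcher.join()

copy_with_monitor(r"D:\VMs\guest.vhd", r"E:\backup\guest.vhd")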

In some circumstances this could be unfortunate. If you had a scheduled task in the host for copying a large file at the same moment that a guest needed a restart, perhaps triggered by Windows Update, the guest might fail to restart.

Something worth knowing if you work with Hyper-V.

Wrestling with Windows Server Core

Windows Server Core is a stripped-down build of Windows Server 2008 which lacks most of the GUI. It’s a great idea: more lightweight, less to go wrong, and as the Unix folk have always said, who needs a GUI on a server anyway?

That said, the Windows culture has always assumed the presence of a GUI, and most of the tools and utilities out there depend on it. This means that you can expect some extra friction in managing your Server Core installation.

I recently attended a couple of Microsoft conferences, and one of the things I was gently trying to discover was the level of take-up for Server Core, and to what extent hardware vendors such as HP had taken it to heart and were no longer assuming that all their Windows server customers could use GUI tools. I didn’t come away with any useful information on the subject, though perhaps that in itself says something.

I’ve been using Hyper-V 2008 R2, which is in effect Server Core with just one role, and a recent experience illustrates my point. After considerable effort (and help from semi-official scripts) I managed to get Hyper-V Manager working remotely, in order to create and manage the virtual machines. However, I ran into an annoying problem. There are three physical NICs in this box, and the idea was to have one for the host, and two others for virtual switches (for use by guests). Somehow, probably as a result of an early experiment, the virtual switch configuration got slightly messed up. I only had one virtual switch, and when I tried to create a second one on an otherwise unused NIC, I got the message:

Cannot bind to [Network connection name] because it is already bound to another virtual network.

That wasn’t the case as far as I could see; but that was no consolation.

The problem led me to this blog post, which says that, if you are lucky, all you need to do to resolve it is to remove the binding to the Microsoft Virtual Network Switch Protocol from the affected network connection. To do this, just open Local Area Connection Properties … but wait, this is Server Core, and there is no Local Area Connection Properties dialog.

Luckily, the guy has thought of that and says you can use the command-line tool nvspbind.exe instead. Great. But where is it? It has a page on MSDN which documents the tool, authored by a member of the Hyper-V team called Keith Mange, but there is no download. How infuriating can you get? There are a few desperate requests for a download link, and a comment “Unfortunately the nvspbind is no longer available for download”, and that is that.

All was not lost. I poked around Mange’s other downloads on MSDN and found two other utilities, nvspscrub.js and nvspinfo.js. Nvspscrub.js is a tool of last resort: it removes all the Virtual Switch bindings and deletes them from Hyper-V. I did not want that, because my first virtual switch was working fine. However, I figured I could modify Nvspscrub.js just to delete the one that was troublesome. I modified the script, deleted most of the code that modified the system, and added an if condition so that only the device with the GUID which I specified would be unbound.

It worked first time, and I was able to create my second virtual switch.
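
For anyone in the same position, the general idea is easy to illustrate, though what follows is only a sketch and not the script I used. Tim Golden's wmi package for Python can query the same root\virtualization namespace, so you can at least list the external Ethernet ports and confirm the GUID of the adapter you mean to touch before doing anything destructive. The GUID below is a placeholder, and the unbinding step itself is deliberately left out:

# List Hyper-V external Ethernet ports and flag the one matching a target GUID.
# Illustrative only: confirm the right adapter before any unbind or scrub.
import wmi

TARGET_GUID = "{00000000-0000-0000-0000-000000000000}"  # placeholder

conn = wmi.WMI(namespace=r"root\virtualization")
for port in conn.Msvm_ExternalEthernetPort():
    marker = "  <-- target" if TARGET_GUID.lower() in str(port.DeviceID).lower() else ""
    print(port.Name, port.DeviceID, marker)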

Still, the fact that this problem is known, and that the only documented cure (that I can find) is in a blog post which refers to a tool that has been pulled, suggests to me that this stuff is not yet mainstream.

Hands on with Intel Moblin

When I saw that trying out Intel’s Moblin Linux 2.1 was as easy as downloading an image and writing it to a USB pen drive, I could not resist giving it a try.
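
Writing the image really is just a raw copy to the device – dd will do it on Linux, or a few lines of Python, as in the rough sketch below. Both paths are placeholders, and you need to be certain the destination is the pen drive, because whatever it points at will be overwritten:

# dd-style raw image write. Paths are placeholders; run with appropriate
# privileges and triple-check the device name, as its contents are destroyed.
import shutil

with open("moblin-2.1.img", "rb") as image, open("/dev/sdX", "wb") as device:
    shutil.copyfileobj(image, device, length=4 * 1024 * 1024)  # copy in 4MB chunks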

Moblin (it rhymes with Goblin) is aimed at netbooks running Intel’s Atom processor, though it also runs on other Intel processors – mine is a Core 2 Duo. The supplied intro says it is a “completely new user experience” and “the next evolution in operating systems”. Well, one thing greatly impressed me: Moblin booted perfectly from the pen drive on my Toshiba Portege M400 laptop, playing sound and video, and picking up the wi-fi card without any messing around.

Next, I spent a few minutes exploring the user interface. There are some fun, bouncy mouse-over effects, though the cutesy default imagery, featuring an unlikely friendship between what I think is a cat and some birds, did nothing for me. I discovered a browser based on Mozilla, but hiding many of its features, a media player, an application gallery with easy install of a selection of further apps (the usual Linux things), and an effort to bring social networking to the fore by integrating with Twitter and last.fm, with others presumably to follow.

I am not sure about it though; I suspect the first thing I would do with a Moblin netbook is to work out how to install Ubuntu or some other Linux that is less sugar-coated and exposes all the features I am used to; and I suspect most users (given the choice) would rather have Windows 7.

My instant and probably unfair reaction is that Microsoft has nothing to fear from Moblin, even though I can see that a lot of work has gone into making it easy to use.

It is an interesting contrast to Google Chrome OS, which I have also been trying. Although Moblin has more features right now, Chrome OS is more compelling; Chrome OS feels stripped-down rather than simplified, and embraces a new model of computing that I think can be made to work.

Incidentally, Google acknowledges Moblin as one of the open-source projects which it uses in Chrome OS.

Chrome OS: will Google keep its vision?

I spent some time with Chrome OS over the weekend and yesterday, first doing my own build of the open source Chromium OS, and then running it and writing a review.

The build process was interesting: you actually compile Chromium OS from a chroot virtual environment. My first efforts were unsuccessful, for two reasons. First, Chromium OS assumes the presence of a pre-built Chromium (the browser), so you have to either build Chromium first, or download a pre-built version. However, the Chromium build has to be customised for Chromium OS. I did manage to build Chromium, but it failed to run, with what looked like a gtk version error, so I gave up and downloaded a zip.

Second, although I did build Chromium OS itself successfully, I ran into an error that needed this patch, which I applied manually. I was using the latest code from the git repository at the time. I expect that this problem has been fixed by now, though you may run into different ones; life on the bleeding edge can be painful.

I also had difficulty logging in. You are meant to log in with a Google account, which presumes a live internet connection, at least on the first occasion. Although Chromium OS successfully used the ethernet connection on my laptop, getting an IP address and pinging internet sites, the login still failed with a “Network not connected” error. Studying the logs revealed a certificate error. You can also create a backdoor user at build time, so I did that instead.

Once I got Chromium OS up and running, booting from a USB key, I found it mostly worked OK. It is a fascinating project, because of Google’s determination to avoid local application installs, thereby gaining better security as well as driving the user towards web solutions for all their needs.

That’s a bold vision, but also an annoying one. Normally, when reviewing something like an operating system or a word processor, I try to write the review using the product I am testing. In fact, I am writing this post in Chromium OS. However, I could not write my review on Chromium OS, because I needed screenshots; and although there are excellent web-based image editing tools, I could not find a way to take screenshots and paste or upload them into those tools. The solution I adopted was to run Chromium OS in a virtual machine – I used VirtualBox – and take the screenshots from the host operating system.

It is a small point; but makes me wonder whether Google will end up bundling just a few local utilities to make the web-based life a little easier. If it does so, third parties will want to add their own; and Google will be under pressure to abandon its idea of no local application installs.

Another interesting point: the rumour is that Google will unify Chrome OS with Android, which does allow application installs. Can that happen without providing a way to run Android apps on Chrome OS?

Chromium OS includes a calculator utility, which opens in a panel. Mine does not work though; I get a blank panel with the URL http://welcome-cros.appspot.com/calculator.html – which seems to be a broken link. Still, is that really a sensible way to provide a calculator? What about offline – will it work from a Gears local web server, or as a static HTML page with a JavaScript calculator, or will it not work at all?

I will be interested to see whether Google ends up compromising a little in order to improve the usability and features of its new OS.

COM automation in Silverlight 4 is not an “edge case”

I wrote a piece for The Register about the arrival of Windows-specific features in Silverlight, which attracted some comments both on the Reg and on Slashdot. Plenty of people said it was just what they expected from Microsoft, some of them misunderstanding the point that this only applies to out-of-browser applications that are trusted: the user has to accept a dialog box granting the application permission to access the local system. A few defended Microsoft’s decision; and this Slashdot comment on COM automation in Silverlight 4 strikes me as a good encapsulation of the official line:

This is a fairly obscure feature, and I’m fairly surprised that it was included at all, but doubt it’ll be of use to the vast majority of current and future Silverlight developers out there. Like the html control, it’s a crutch, to allow developers that want to use Silverlight a way to leverage existing investments. The mantra I’ve heard out of the Silverlight team is to focus on unblocking customer scenarios (scenarios they cannot unblock themselves) without compromising the overall feature goals (like keeping the runtime download small) … it’s an edge case feature that doesn’t affect Silverlight’s over all "Cross-Platforminess".

The idea that COM automation is merely an “edge case” surprises me, even though I also recall it being described like that at PDC. Access to COM automation gives a Silverlight desktop application on Windows substantial extra capability. At PDC program manager Joe Stegman showed how Silverlight 4 can integrate with Office, sending data into an Excel spreadsheet: an example with obvious value for real applications. I also heard developers at PDC discussing how they might wrap up a Silverlight application with a COM DLL, creating an application which in effect has full access to the local operating system. Although Silverlight cannot access the Windows API directly, there are no such restrictions on the COM DLL, so the combination means that pretty much anything is possible.
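
To get a feel for what that capability means in practice, here is roughly what driving Excel over COM looks like. This is not Silverlight code – in Silverlight 4 you would go through the new automation support in a trusted out-of-browser application – but a Python sketch using the pywin32 package exercises the same COM automation surface, which is the point: once a runtime can create COM objects, most of the local machine is within reach. Excel must be installed, and the file path is a placeholder.

# Drive Excel via COM automation: create a workbook, write a few cells, save it.
# Requires Excel and the pywin32 package; the save path is a placeholder.
import win32com.client

excel = win32com.client.Dispatch("Excel.Application")
excel.Visible = True                      # show the spreadsheet as it fills in
workbook = excel.Workbooks.Add()
sheet = workbook.Worksheets(1)

sheet.Cells(1, 1).Value = "Item"
sheet.Cells(1, 2).Value = "Price"
sheet.Cells(2, 1).Value = "CS4 Master Collection"
sheet.Cells(2, 2).Value = 299.00

workbook.SaveAs(r"C:\temp\demo.xlsx")     # placeholder path
excel.Quit()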

Let’s also bear in mind that Microsoft’s Brad Becker is on record saying that one day WPF and Silverlight might simply become different .NET profiles. He told me this at Mix earlier this year; and said a similar thing to Mary Jo Foley at PDC:

Some day — Microsoft won’t say exactly when — Silverlight and WPF are going to merge into one Web programming and app delivery model that, most likely, will be known as Silverlight, Brad Becker, Director of Product Management for Microsoft’s Rich Client Platforms, told me this week

If Microsoft is contemplating such a thing, then clearly full access to the native features of Windows will have to be possible.

I am not entirely negative about this prospect. Even if you are only targeting Windows, Silverlight has a lot to commend it: a small runtime, easy setup, and options for browser-hosted or desktop deployment. If you have ever wrestled with the Windows installer or tackled a failed .NET runtime installation you will like the simplicity of running a Silverlight application.

Nevertheless, with version 4.0 Microsoft is changing its Silverlight story. It is no longer a pure cross-platform play; rather, it is a runtime where some features are cross-platform, and others Windows only. Microsoft calls this developer choice; I see it as evolving into the inverse of Sun’s aim with Java. Sun tried strenuously to guide developers towards cross-platform, but provided a way out – via Java Native Interface – if absolutely necessary. Microsoft will provide cross-platform where we really need it, but make it easy to slip into Windows-only development in order to get some nice feature like a location API, or Office integration.

I see this as an advantage for Flash, because developers know that Adobe has no incentive to prefer one operating system over another – except to the extent that minority platforms (like desktop Linux) tend to receive less investment.

Personally I think Microsoft should at least provide a way for Mac users to get similar benefits – perhaps by implementing something like the native process API in Adobe AIR 2.

I also think Microsoft will have to get real about Linux support. It is wrong that Microsoft will cheerfully state:

Silverlight 4 runs across all platforms and major browsers

as it does in the “Fact sheet” handed out at PDC; while leaving Linux implementation to a third-party process uncertain in both features and timing. Here is the reality of cross-platform Silverlight, in a screenshot taken seconds ago from Linux:

Right now it is a two-platform play – admittedly, the two platforms that matter most, especially in a Western world business context, but never forget that Google Chrome OS is coming.

Google Chrome OS – astonishing

I’m watching Google’s press briefing on the forthcoming Chrome OS. It is amazing. What Google is developing is a computer that answers several of the problems that have troubled users since the advent of the personal computer.

Exaggeration? Here’s a quick summary of what Chrome OS is. It’s a device that you will purchase, which runs in effect just the Chrome browser. All storage is solid-state, and it boots in a very short time – a few seconds.

The Chrome browser is somewhat modified. It has “application tabs” – on the top left below – which represent web applications that you use.

It also supports panels, windows which float above the browser. Use case: Google Talk, there while you browse other web pages.

All the data is online, apart from a user area that is a cache of online data. All binaries in Chrome OS are signed and inspected on start-up. They are known binaries, because the user will never install an application – only a browser extension, maybe, which will come via Google. Google is not planning to support anything other than web applications.

This has two implications. One is that stronger security is possible. If any binary is added or modified, that can easily be detected; it is a white-list approach. In the event of a problem, the machine can be re-imaged, making it clean.
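
Google’s actual mechanism is more sophisticated, but the white-list idea itself is simple enough to illustrate with a toy sketch: record a hash for every known binary, then flag anything new or changed. This is purely an illustration, not Google’s implementation.

# Toy white-list check: snapshot SHA-256 hashes of known files, then report
# anything that has been added or modified since the snapshot was taken.
import hashlib
from pathlib import Path

def snapshot(directory):
    return {str(p): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in Path(directory).rglob("*") if p.is_file()}

def check(directory, whitelist):
    for path, digest in snapshot(directory).items():
        if path not in whitelist:
            print("unknown binary:", path)
        elif whitelist[path] != digest:
            print("modified binary:", path)

known_good = snapshot("/usr/bin")   # in reality, recorded when the image is built
check("/usr/bin", known_good)       # and checked again at start-up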

Second, if your Chrome OS computer breaks, is stolen, or is re-imaged as above, it’s no hassle. You can simply buy a new device, log back on, and all your data is there.

There will be offline support, with automatic synchronization to your online store.

At top left is an app button, rather like a Start button, which opens an application-centric Favourites menu.

If you double-click a document, it opens on the Web. If it is an Excel document, for example, it might open in the Excel Web App, which Google rather gleefully demonstrated.

Will this be good? Yes. Cheap, fast, effective. Stream music. Run any web application.

What about the dark side of Chrome OS? That is easy to spot. The security model depends on Google knowing about all the binaries and browser extensions. If you have a binary which Google does not want to approve – “there is no certification process for an alternative web browser”, we were told – you have no way round Google’s control.

Alongside that, you will naturally see Google’s applications and identity management woven into the product. It gives Google huge power over its users. It could make Microsoft’s monopoly look trivial.

In mitigation, everything in Chrome OS is open source, and it draws on open source projects such as WebKit.

I am sure there will be much debate on the implications of today’s announcement, but count me highly impressed – though Google acknowledges that this is not going to be a computer for every purpose.

It could nevertheless meet a large subset of computing needs; which will gradually grow as it matures.

More info here.

PDC day two: Silverlight 4 and a free laptop

There were two big themes at PDC in Los Angeles today. One was the Silverlight 4 beta, the subject of the most impressive section of the keynote. The other was the announcement of free laptops for every attendee – aside from press and government. It is remarkable how a generous gift can change the atmosphere. The lack of breakfast or a Universal Studios party was soon forgotten as the audience cheered its own good fortune.

There is actually some justification for handing out this hardware. It’s a decent machine, a modified Acer Aspire 1420P with Windows 7 x64, 2GB RAM, multi-touch display, and accelerometer. Most of us do not have multi-touch machines, and giving them to the core Windows developers who attend PDC may help stimulate the creation of applications that properly support this feature.

Otherwise, it was a Silverlight day. Although SharePoint 2010 was also in the keynote, the cheers it received felt more like relief – that it finally has sensible development and debugging tools in Visual Studio – than real enthusiasm. Somehow the keynote did not capture the potential of the product.

Silverlight, though, was well received. It is a huge release that opens up many new possibilities, though I am discovering some details that look awkward. There is also one troubling aspect: Microsoft is introducing imbalance into its cross-platform story. The Windows version of Silverlight 4.0 supports COM automation, enabling integration with local APIs such as location on Windows 7, and with Microsoft Office. There is no equivalent in the Mac release. It would not be so bad if Microsoft offered some route to similar functionality on the Mac, but there is none that I am aware of.

Microsoft folk I spoke to about this dismissed it as a minor point, but it is not. Cross-platform is a discipline; this is a failure to observe that discipline, and it hands an advantage to Adobe Flash among developers who require broad reach.

Silverlight 4 ticks all the boxes, questions remain

Microsoft has announced Silverlight 4 here at PDC in Los Angeles. The gist of it I was expecting – device support, an option for fuller system access out of the browser – but the extent of the new features is remarkable. Here are a few highlights:

  • Improved Just-in-time compilation gives 30% faster start-up, up to 100% performance increase
  • COM automation support on Windows when out of browser with full trust
  • Access to local file system, cross-site Internet access, custom window chrome when in full trust out of browser
  • Notification pop-up support even when sandboxed
  • Drag and drop target even when sandboxed
  • HTML control (only works out of browser), supports plug-ins
  • Rich text control with right-to-left text support
  • Printing support
  • Clipboard, right-click and mouse wheel support
  • Web cam and microphone support

Of course there are a few unanswered questions, such as what level of HTML support is available, or how Microsoft is protecting users from malicious Silverlight applets; I’ll be exploring these later today.

It’s clear though that Microsoft wants to compete fully with Adobe AIR, and that its energetic Silverlight development is continuing at full pace.

The beta is available now; full release is promised for the first half of 2010.

So where is Microsoft going with this? Why would anyone develop for WPF and Windows, if good-enough features, cross-platform reach, and zero install are available through Silverlight?

Interesting times for .NET developers.
