Category Archives: hardware

Microsoft StorSimple brings hybrid cloud storage to the enterprise, but what about the rest of us?

Microsoft has released details of its StorSimple 8000 Series, the first major new release since it acquired the hybrid cloud storage appliance business back in late 2012.

I first came across StorSimple at what proved to be the last MMS (Microsoft Management Summit) event last year. The concept is brilliant: present the network with infinitely expandable storage (in reality limited to 100TB – 500TB depending on model), storing new and hot data locally for fast performance, and seamlessly migrating cold (i.e. rarely used) data to cloud storage. The appliance includes SSD as well as hard drive storage, so you get a magical combination of low latency and huge capacity. Storage is presented using iSCSI. Data deduplication and compression increase effective capacity, and cloud connectivity also enables value-add services including cloud snapshots and disaster recovery.
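The hot/cold tiering idea can be sketched in a few lines. This is my own toy illustration, not Microsoft's implementation (the class and method names are invented, and the real appliance works at a much lower level, with deduplication and compression on top): a capacity-limited local tier migrates its least recently used blocks to a simulated cloud tier, and recalls them transparently on read.

```python
import itertools

# Toy sketch of hot/cold storage tiering (names invented for illustration).
class TieredVolume:
    def __init__(self, local_capacity_blocks):
        self.local = {}    # block_id -> (data, last_use): the fast local tier
        self.cloud = {}    # block_id -> data: stands in for Azure blob storage
        self.capacity = local_capacity_blocks
        self._clock = itertools.count()   # logical clock for LRU ordering

    def write(self, block_id, data):
        self.local[block_id] = (data, next(self._clock))
        self._migrate_cold()

    def read(self, block_id):
        if block_id not in self.local:
            # Transparent recall: cold data comes back from the cloud tier.
            self.write(block_id, self.cloud[block_id])
        data, _ = self.local[block_id]
        self.local[block_id] = (data, next(self._clock))  # refresh "heat"
        return data

    def _migrate_cold(self):
        # Over local capacity: migrate least recently used blocks out.
        while len(self.local) > self.capacity:
            coldest = min(self.local, key=lambda b: self.local[b][1])
            self.cloud[coldest] = self.local.pop(coldest)[0]
```

An application reading through this volume never sees the migration; a read of a cold block simply takes longer while the data is recalled.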

image

The two new models are the 8100 and the 8600:

                            8100        8600
Usable local capacity       15TB        40TB
Usable SSD capacity         800GB       2TB
Effective local capacity    15-75TB     40-200TB
Maximum capacity
including cloud storage     200TB       500TB
Price                       $100,000    $170,000

Of course there is more to the new models than bumped-up specs. The earlier StorSimple models supported both Amazon S3 (Simple Storage Service) and Microsoft Azure; the new models support only Azure blob storage. VMware VAAI (vStorage APIs for Array Integration) is still supported.

On the positive side, StorSimple is now backed by additional Azure services – note that these only work with the new 8000 series models, not with existing appliances.

The Azure StorSimple Manager lets you manage any number of StorSimple appliances from the Azure portal – note this is in the old Azure portal, not the new preview portal, which intrigues me.

image

Backup snapshots mean you can go back in time in the event of corrupted or mistakenly deleted data.

image

The Azure StorSimple Virtual Appliance has several roles. You can use it as a kind of reverse StorSimple; the virtual device is created in Azure at which point you can use it on-premise in the same way as other StorSimple-backed storage. Data is uploaded to Azure automatically. An advantage of this approach is if the on-premise StorSimple becomes unavailable, you can recreate the disk volume based on the same virtual device and point an application at it for near-instant recovery. Only a 5MB file needs to be downloaded to make all the data available; the actual data is then downloaded on demand. This is faster than other forms of recovery which rely on recovering all the data before applications can resume.
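The recovery flow described above can be pictured like this. A hedged sketch with invented names (the real appliance's on-disk and cloud formats are proprietary): the small block map stands in for the 5MB metadata file, which is enough to mount the volume, and block contents download from the cloud only when first read.

```python
# Sketch of metadata-first recovery: mount from a tiny map, fetch lazily.
class RecoveredVolume:
    def __init__(self, block_map, cloud_fetch):
        self.block_map = block_map      # block_id -> cloud object key (~5MB file)
        self.cloud_fetch = cloud_fetch  # downloads one object on demand
        self.cache = {}                 # blocks already pulled down
        self.downloads = 0

    def read(self, block_id):
        if block_id not in self.cache:
            self.downloads += 1
            self.cache[block_id] = self.cloud_fetch(self.block_map[block_id])
        return self.cache[block_id]

# The volume is usable the moment the map arrives; repeated reads of
# the same block cost only one download.
cloud = {"obj-1": b"ledger", "obj-2": b"archive"}
vol = RecoveredVolume({"blk-a": "obj-1", "blk-b": "obj-2"}, cloud.__getitem__)
vol.read("blk-a")
vol.read("blk-a")
print(vol.downloads)  # → 1
```

This is why recovery is near-instant compared with restoring a full backup: the application can start immediately and pays only for the blocks it actually touches.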

image

The alarming check box “I understand that Microsoft can access the data stored on my virtual device” was explained by Microsoft technical product manager Megan Liese as meaning simply that data is in Azure rather than on-premise but I have not seen similar warnings for other Azure data services, which is odd. Further to this topic, another journalist asked Marc Farley, also on the StorSimple team, whether you can mark data in standard StorSimple volumes not to be copied to Azure, for compliance or security reasons. “Not right now” was the answer, though it sounds as if this is under consideration. I am not sure how this would work within a volume, since it would break backup and data recovery, but it would make sense to be able to specify volumes that must remain always on-premise.

All data transfer between Azure and on-premise is encrypted, and the data is also encrypted at rest, using a service data encryption key which according to Farley is not stored or accessible by Microsoft.

image

Another way to use a virtual appliance is to make a clone of on-premise data available, for tasks such as analysing historical data. The clone volume is based on the backup snapshot you select, and is disconnected from the live volume on which it is based.
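The key property of the clone is that it is a point-in-time copy, detached from the live volume. A minimal sketch of that idea (my illustration, nothing StorSimple-specific):

```python
import copy

# A clone built from a snapshot is isolated: later writes to the live
# volume never appear in the clone.
live = {"blocks": {"blk-1": b"v1"}}
snapshot = copy.deepcopy(live)      # point-in-time backup snapshot
live["blocks"]["blk-1"] = b"v2"     # live volume moves on

clone = copy.deepcopy(snapshot)     # clone volume for historical analysis
print(clone["blocks"]["blk-1"])     # → b'v1', unaffected by the live write
```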

image

StorSimple uses Azure blob storage but the pricing structure is different from standard blob storage; unfortunately I do not have details of this. You can access the data only through StorSimple volumes, since the data is stored using internal data objects that are StorSimple-specific. Data stored in Azure is redundant using the usual Azure “three copies” principle; I believe this includes geo-redundancy, though this may be a customer option.

StorSimple appliances are made by Xyratex (which is being acquired by Seagate) and you can find specifications and price details on the Seagate StorSimple site, though we were also told that customers should contact their Microsoft account manager for details of complete packages. I also recommend the semi-official blog by a Microsoft technical solutions professional based in Sydney which has a ton of detailed information here.

StorSimple makes huge sense, but with six-figure pricing this is an enterprise-only solution. How would it be, I muse, if the StorSimple software were adapted to run as a Windows service rather than only in an appliance, so that you could create volumes in Windows Server that use similar techniques to offer local storage that expands seamlessly into Azure? That also makes sense to me, though when I asked at a Microsoft Azure workshop about the possibility I was rewarded with blank looks; but who knows, they may know more than is currently being revealed.

CES 2014 report: robots, smart home, wearables, bendy TV, tablets, health gadgets, tubes and horns

CES in Las Vegas is an amazing event, partly through sheer scale. It is the largest trade show in Vegas, America’s trade show city. Apparently it was also the largest CES ever: two million square feet of exhibition space, 3,200 exhibitors, 150,000 industry attendees, of whom 35,000 were from outside the USA.

image

It follows that CES is beyond the ability of any one person to see in its entirety. Further, it is far from an even representation of the consumer tech industry. Notable absentees include Apple, Google and Microsoft – though Microsoft for one booked a rather large space in the Venetian hotel which was used for private meetings.  The primary purpose of CES, as another journalist explained to me, is for Asian companies to do deals with US and international buyers. The success of WowWee’s stand for app-controllable MiP robots, for example, probably determines how many of the things you will see in the shops in the 2014/15 winter season.

image

The kingmakers at CES are the people going round with badges marked Buyer. The press events are a side-show.

CES is also among the world’s biggest trade shows for consumer audio and high-end audio, which is a bonus for me as I have an interest in such things.

Now some observations. First, a reminder that CEA (the organisation behind CES) kicked off the event with a somewhat downbeat presentation showing that global consumer tech spending is essentially flat. Smartphones and tablets are growing, but prices are falling, and most other categories are contracting. Converged devices are reducing overall spend. Once you had a camera, a phone and a music player; now the phone does all three.

Second, if there is one dominant presence at CES, it is Samsung. Press counted themselves lucky even to get into the press conference. A showy presentation convinced us that we really want not only UHD (4K UHD is 3840 x 2160 resolution) video, but also a curved screen, for a more immersive experience; or even the best of both worlds, an 85” bendable UHD TV which transforms from flat to curved.

image

We already knew that 4K video will go mainstream, but there is more uncertainty about the future connected home. Samsung had a lot to say about this too, unveiling its Smart Home service. A Smart Home Protocol (SHP) will connect devices and home appliances, and an app will let you manage them. Home View will let you view your home remotely. Third parties will be invited to participate. More on the Smart Home is here.

image

The technology is there; but there are several stumbling blocks. One is political. Will Apple want to participate in Samsung’s Smart Home? Will Google? Will Microsoft? What about competitors making home appliances? The answer is that nobody will want to cede control of the Smart Home specifications to Samsung, so it can only succeed through sheer muscle, or by making some alliances.

The other question is around value for money. If you are buying a fridge freezer, how high on your list of requirements is SHP compatibility? How much extra will you spend? If the answer is that old-fashioned attributes like capacity, reliability and running cost are all more important, then the Smart Home cannot happen until there are agreed standards and a low cost of implementation. It will come, but not necessarily from Samsung.

Samsung did not say that much about its mobile devices. No Galaxy S5 yet; maybe at Mobile World Congress next month. It did announce the Galaxy Note Pro and Galaxy Tab Pro series in three sizes; the “Pro” designation intrigues me as it suggests the intention that these be business devices, part of the “death of the PC” theme which was also present at CES.

Samsung did not need to say much about mobile because it knows it is winning. Huawei proudly announced that it is third in smartphones after Samsung and Apple, with a … 4.8% market share, which says all you need to know.

That said, Huawei made a rather good presentation, showing off its forthcoming AscendMate2 4G smartphone, with 6.1” display, long battery life (more than double that of iPhone 5S is claimed, with more than 2 days in normal use), 5MP front camera for selfies, 13MP rear camera, full specs here. No price yet, but expect it to be competitive.

image

Sony also had a good CES, with indications that PlayStation 4 is besting Xbox One in the early days of the next-gen console wars, and a stylish stand reminding us that Sony knows how to design good-looking kit. Sony’s theme was 4K becoming more affordable, with its FDR-AX100 camcorder offering 4K support in a device no larger than most camcorders; unfortunately the sample video we saw did not look particularly good.

image

Sony also showed the Xperia Z1 compact smartphone, which went down well, and teased us with an introduction for Sony SmartWear wearable entertainment and “life log” capture. We saw the unremarkable “core” gadget which will capture the data but await more details.

image

Another Sony theme was high resolution audio, on which I am writing a detailed piece (not just about Sony) to follow.

As for Microsoft Windows, it was mostly lost behind a sea of Android and other devices, though I will note that Lenovo impressed with its new range of Windows 8 tablets and hybrids – like the 8” Thinkpad with Windows 8.1 Pro and full HD 1920×1200 display – more details here.

image

There is an optional USB 3.0 dock for the Thinkpad 8 but I commented to the Lenovo folk that the device really needs a keyboard cover. I mentioned this again at the Kensington stand during the Mobile Focus Digital Experience event, and they told me they would go over and have a look then and there; so if a nice Kensington keyboard cover appears for the Thinkpad 8 you have me to thank.

Whereas Lenovo strikes me as a company which is striving to get the best from Windows 8, I was less impressed by the Asus press event, mainly because I doubt the Windows/Android dual boot concept will take off. Asus showed the TD300 Transformer Book Duet which runs both. I understand why OEMs are trying to bolt together the main business operating system with the most popular tablet OS, but I dislike dual boot systems, and if the Windows 8 dual personality with Metro and desktop is difficult, then a Windows/Android hybrid is more so. I’d guess there is more future in Android emulation on Windows. Run Android apps in a window? Asus did also announce its own 8” Windows 8.1 tablet, but did not think it worth attention in its CES press conference.

Wearables was a theme at CES, especially in the health area, and there was a substantial iHealth section to browse around.

image

I am not sure where this is going, but it seems to me inevitable that self-monitoring of how well or badly our bodies are functioning will become commonplace. The result will be fodder for hypochondriacs, but I think there will be real benefits too, in terms of motivation for exercise and healthy diets, and better warning and reaction for critical problems like heart attacks. The worry is that all that data will somehow find its way to Google or health insurance companies, raising premiums for those who need it most. As to which of the many companies jostling for position in this space will survive, that is another matter.

What else? It is a matter of where to stop. I was impressed by NVidia’s demo rig showing three 4K displays driven by a GTX-equipped PC; my snap absolutely does not capture the impact of the driving game being shown.

image

I was also impressed by NVidia’s ability to befuddle the press at its launch of the Tegra K1 chipset, confusing 192 CUDA cores with CPU cores. Having said that, the CUDA support does mean you can use those cores for general-purpose programming and I see huge potential in this for more powerful image processing on the device, for example. Tegra 4 on the Surface 2 is an excellent experience, and I hope Microsoft follows up with a K1 model in due course even though that looks doubtful.

There were of course many intriguing devices on show at CES, on some of which I will report over at the Gadget Writing blog, and much wild and wonderful high-end audio.

On audio I will note this. Bang & Olufsen showed a stylish home system, largely wireless, but the sound was disappointing (it also struck me as significant that Android or iOS is required to use it). The audiophiles over in the Venetian tower may have loopy ideas, but they had the best sounds.

CES can do retro as well as next gen; the last pinball machine manufacturer displayed at Digital Experience, while vinyl, tubes and horns were on display over in the tower.

image

My last server? HP ML310e G8 quick review

Do small businesses still need a server? In my case, I do still run a couple, mainly for trying out new releases of server products like Windows Server 2012 R2, System Center 2012, Exchange and SharePoint. The ability to quickly run up VMs for testing software is of huge value; you can do this with just a desktop but running a dedicated hypervisor is convenient.

My servers run Hyper-V Server 2012 R2, the free version, which is essentially Server Core with just the Hyper-V role installed. I have licenses for full Windows server but have stuck with the free one partly because I like the idea of running a hypervisor that is stripped down as far as possible, and partly because dealing with Server Core has been educational; it forces you into the command line and PowerShell, which is no bad thing.

Over the years I have bought several of HP’s budget servers and have been impressed; they are inexpensive, especially if you look out for “top value” deals, and work reliably. In the past I’ve picked the ML110 range but this is now discontinued (though the G7 is still around if you need it). The main choice now is either the small ProLiant Gen8 MicroServer, which packs in space for 4 SATA drives, up to 16GB RAM via 2 PC3 DDR3 DIMM slots, and support for the dual-core Intel Celeron G1610T or Pentium G2020T; or the larger ML310e Gen8 series, with space for 4 3.5" or 8 small form factor SATA drives, 4 PC3 DDR3 DIMM slots for up to 32GB RAM, and support for Core i3 or Xeon E3 processors with up to 4 cores. Both use the Intel C204 chipset.

I picked the ML310e because a 4-core processor with 32GB RAM is gold for use with a hypervisor. There is not a huge difference in cost. While in a production environment it probably makes sense to use the official HP parts, I used non-HP RAM and paid around £600 plus VAT for a system with a Xeon E3-1220v2 4-core CPU, 32GB RAM, and 500GB drive. I stuck in two budget 2TB SATA drives to make up a decent server for less than £800 all-in; it will probably last three years or more.

There is now an HP ML310e Gen 8 v2 which might partly explain why the first version is on offer for a low price; the differences do not seem substantial except that version 2 has two USB 3.0 ports on the rear in place of four USB 2.0 ports and supports Xeon E3 v3.

Will I replace this server? The shift to the cloud means that I may not bother. I was not even sure about this one. You can run up VMs in the cloud easily, on Amazon EC2 or Microsoft Azure, and for test and development that may be all you need. That said, I like the freedom to try things out without worrying about subscription costs. I have also learned a lot by setting up systems that would normally be run by larger businesses; it has given me better understanding of the problems IT administrators encounter.

image

So how is the server? It is just another box of course, but feels well made. There is an annoying lock on the front cover; you can’t remove the side panel unless this is unlocked, and you can’t remove the key unless it is locked, so the solution if you do not need this little bit of physical security is to leave the key in the lock. It does not seem worth much to me since a miscreant could easily steal the entire server and rip off the panel at leisure.

On the front you get 4 USB 2.0 ports, UID LED button, NIC activity LED, system health LED and power button.

image

The main purpose of the UID (Unit Identifier) button is to help identify your server from the rear if it is in a rack. You press the button on the front and an LED lights at the rear. Not that much use in a micro tower server.

Remove the front panel and you can see the drive cage:

image

Hard drives are in caddies which are easily pulled out for replacement. However, note the “Non hot plug” label on these units; you must turn the server off first.

You might think that you have to buy HP drives which come packaged in caddies. This is not so; if you remove one of the caddies you find it is not just a blank, but allows any standard 3.5" drive to be installed. The metal brackets in the image below are removed and you just stick the drive in their place and screw the side panels on.

image

Take the side panel off and you will see a tidy construction with the 350W power supply, 4 DIMM slots, 4 PCI Express slots (one x16, two x8, one x4), and a transparent plastic baffle that ensures correct air flow.

image

The baffle is easily removed.

image

What you see is pretty much as it is out of the box, but with RAM fitted, two additional drives, and a PCI Express USB 3.0 card fitted, since (annoyingly) the server comes with USB 2.0 only – fixed in the version 2 edition.

On the rear are four more USB 2.0 ports, two 1Gb NIC ports, a blank where a dedicated ILO (Integrated Lights Out) port would be, and video and serial connectors.

image

Although there is no ILO port on my server, ILO is installed. The luggage label shows the DNS name you need to access it. If you can’t get at the label, you can look at your DHCP server and see what address has been allocated to ILOxxxxxxxxx and use that. Once you log in with a web browser you can change this to a fixed IP address; probably a good idea in case, in a crisis, the DHCP server is not working right.

ILO is one of the best things about HP servers. It is a little embedded system, isolated from whatever is installed on the server, which gets you access to status and troubleshooting information.

image

Its best feature is the remote console which gives you access to a virtual screen, keyboard and mouse so you can get into your OS from a remote session even when the usual remote access techniques are not working. There are now .NET and mobile options as well as Java.

image

Unfortunately there is a catch. Try to use this and a license will be demanded.

image

However, you can sign up for an evaluation that works for a few weeks. In other words, your first disaster is free; after that you have to pay. The license covers several servers and is not good value for an individual one.

Everything is fine on the hardware side, but what about the OS install? This is where things went a bit wrong. HP has a system called Intelligent Provisioning built in. You pop your OS install media in the DVD drive (or there are options for network install), run a wizard, and Intelligent Provisioning will update its firmware, set up RAID, and install your OS with the necessary drivers and HP management utilities included.

I don’t normally bother with all this but I thought I should give it a try. Unfortunately Server 2012 R2 is not supported, but I tried it with Server 2012 x64, hoping this would also work with Hyper-V Server, but no go; it failed with an unattend script error.

Next I set up RAID manually using the nice HP management utility in the BIOS and tried to install using the storage drivers saved to a USB pen drive. It seemed to work but was not stable; it would sometimes fail to boot, and sometimes you could log on and do a few things but Windows would crash with a Kernel_Security_Check_Failure.

Memory problems? Drive problems? It was not clear; but I decided to disable embedded RAID in the BIOS and use standard AHCI SATA. Install proceeded perfectly with no need for additional drivers, and the OS is 100% stable.

I did not want to give up RAID though, so wondered if I could use Storage Spaces on Hyper-V Server. Apparently you can. I joined the Hyper-V Server to my domain and then used Server Manager remotely to create a Storage Pool from my pair of 2TB drives, and then a mirrored virtual disk.

My OS drive is not on resilient storage but I am not too concerned about that. I can backup the OS (wbadmin works), and since it does nothing more than run Hyper-V, recovery should be straightforward if necessary.

After that I moved across some VMs using a combination of Move and Export with no real issues, other than finding Move too slow on my system when you have a large VHD to copy.

Overall the server seems a good bargain; HP may have its problems elsewhere, but the department that turns out budget servers does an excellent job. My only complaint so far is the failure of the storage drivers on Server 2012 R2, which I hope HP will fix with an update.

Lenovo’s bundled Start menu: more OEM trouble for Microsoft

Lenovo and SweetLabs announced a deal yesterday whereby the Pokki app store and Start menu replacement will be pre-installed on Windows PCs.

This has been widely interpreted as a response to user dissatisfaction with the Windows 8 Start screen, which replaces the hierarchical Windows 7 Start menu with a full-screen tiled view of application shortcuts. The press release, though, focuses more on the app store element:

Apps are dynamically recommended in the Pokki Start menu, app store, and game arcade to users by SweetLabs’ real-time app recommendation system, which matches the right apps with the right users. This system has already served one billion app recommendations this year, and the addition of Lenovo substantially extends the reach of this distribution opportunity for app developers looking to be promoted on brand new Windows 8 devices.

image

In other words, this is not just an app launcher but also a form of adware or, if you prefer, a third-party app store; the apps it installs will not be Windows Store apps running in the tablet-friendly Windows 8 environment, but desktop apps.

Microsoft could benefit I suppose if users concerned about missing the Start menu buy Windows 8, but in every other respect this is a retrograde step. Users who do want a Start menu would be better off with something like Start8 which will not nag them to install apps, which makes you wonder if Lenovo’s motivation is more to do with a lucrative deal than with pleasing its users. Microsoft’s strategy of building momentum for its own Windows 8 app store and platform will be undermined by this third-party effort.

Once again this illustrates how the relationship between Microsoft and its OEM partners can work to the detriment of both. The poor out of the box experience with Windows has been one of the factors driving users to the Mac or iPad over the years, and this is in large part due to trialware bundled by partner vendors.

Windows 8 is a special case, and there is no doubting the difficulty long-term Windows users have in getting used to the new Start screen. The new Start button in Windows 8.1 will help orient new users, but Microsoft is not backing away from Live Tiles or the Start screen. Lenovo’s efforts will make it harder for users to adjust, since they would be better off learning how to use Windows 8 as designed, rather than relying on a third-party utility.

Microsoft has Surface of course, which despite huge writedowns is well made and elegant, though too expensive; and this has not pleased the OEMs who previously had Windows to themselves.

It all makes you wonder if the famous gun-wielding cartoon of Microsoft’s organization chart should now be redrawn with the guns pointing between Microsoft and its hardware partners. After all, Windows Phone might also have gone better if the likes of Samsung and HTC had not been so focused on Android.

Windows RT and Surface RT: Why Microsoft should persevere

Microsoft has reported a $900 million write-down on Surface RT inventory in its latest financial results. Was Surface RT a big mistake?

A loss of that size is a massive blunder, but the concept behind Surface RT is good and Microsoft should persevere. Here’s why.

Surface RT is experimental in two ways:

  • It was the first Microsoft-branded PC (or tablet if you prefer).
  • It was among the first Windows RT devices. Running on the ARM processor, Windows RT is locked down so that you can only install new-style Windows 8 apps, not desktop apps. However, the desktop is still there, and Microsoft bundles Office, a desktop application suite.

Microsoft had (and has) good reason to do both of these things.

Historically, DOS and Windows prospered because any hardware manufacturer was free to build machines running Microsoft’s operating system, creating a virtuous circle in which competition drove down prices, and abundance created widespread application support.

This ecosystem is now dysfunctional. The experience of using Windows was damaged by OEM vendors churning out indifferent hardware bundled with intrusive trial applications. It is still happening, and when I have to set up a new Windows laptop it takes hours to remove unwanted software.

Unfortunately this cycle is hard to break, because OEM vendors have to compete on price, and consumers are seemingly poor at discriminating based on overall quality; too often they look for the best specification they can get for their money.

Further, Windows remains a well understood and popular target for malware. One of the reasons is that despite huge efforts from Microsoft with User Account Control (the technology behind “do you really want to do this” prompts in Windows Vista onwards), most users outside the enterprise still tend to run with full administrative rights for their local machine.

Apple exploited these weaknesses with Mac hardware that is much more expensive (and profitable), but which delivers a less frustrating user experience.

Apple has been steadily increasing its market share at the high end, but an even bigger threat to Windows comes from below. Locked-down tablets, specifically the Apple iPad and later Android tablets, also fixed the user experience but at a relatively low price. Operating systems designed for touch control mean that keyboard and mouse are no longer necessary, making them more elegant portable devices, and a wireless keyboard can easily be brought into use when needed.

Microsoft understood these trends, although late in the day. With Surface it began to manufacture its own hardware, an initiative which alongside the bricks-and-mortar Microsoft Stores (supplying trialware-free Windows PCs) aims to counter the corrosive race to the bottom among OEM vendors.

Windows 8 also introduces a new application model which is touch-friendly, secure, and offers easy app deployment via the app store.

In Windows RT the experiment is taken further, by locking down the operating system so that only these new-style apps can be installed.

Surface RT brings both these things together, solving many of the problems of Windows in a single package.

Why Surface RT failed

Surface RT is well made, though performance is disappointing; it seems that Nvidia’s Tegra 3 chipset is not quite sufficient to run Windows and Office briskly, though it is usable, and graphics performance is not bad.

There were several problems though.

  • The price was high, especially when combined with the clever keyboard cover.
  • It may solve the problems of Windows, but for many users it also lacks the benefits of Windows. They cannot run their applications, and all too often their printers will not print and other devices lack drivers.
  • Surface RT launched when the Windows 8 app store was new. The new app ecosystem also has its problems (all these things are inter-related) and in consequence few compelling apps were available.
  • Microsoft’s built-in apps were poor to indifferent, and Office was bundled without Outlook.

I was in New York for the launch of Surface RT. There were “Click In” ads everywhere and it was obvious that Microsoft had convinced itself that it could sell the device in large numbers immediately. That was a fantasy. I suppose that if consumers had taken Windows 8 to heart quickly (as opposed to resisting the changes it imposes) and if the app ecosystem had flourished quickly then it could have taken off but neither was likely.

Surface RT positives

Despite all the above, Surface RT is not a bad device. Personally I was immediately drawn to its slim size, long battery life, and high build quality. The keyboard cover design is superb, though not everyone gets on with the “touch” cover. I purchased one of the launch machines and still use it regularly for cranking out Word documents on the road.

Reviews on Amazon’s UK site are largely positive:

image 

Surface RT is also improving as the software evolves. Windows 8.1, now in preview, adds Outlook and makes the device significantly more useful for Exchange users. Performance also gets a slight lift. The built-in apps are improving and app availability in general is much better than it was at launch, though still tiny compared to iPad or Android.

I have also been trying Surface Pro since receiving one at Microsoft’s Build conference last month. The Pro device has great performance and runs everything, but it is too bulky and heavy to be a satisfying tablet, and battery life is poor. I think of it more as a laptop, whereas Surface RT is a true tablet with a battery that gives pretty much a full day’s use when out and about.

Microsoft’s biggest mistake with Surface RT was not the concept, nor the quality of the device. Rather, they manufactured far too many thanks to unrealistic expectations of the size of the initial market. The sane approach would have been a limited release with the aim of improving and refining it.

I hope Microsoft perseveres both with Windows RT and with Surface RT. Give it better performance with something like Nvidia’s Tegra 4, Windows 8.1, and improved app support, and it is near-perfect.

The future of Windows

Desktop Windows will remain forever, but its decline is inevitable. Even if it fails, we should recognise that Microsoft is trying to fix long-standing and deep-rooted problems with Windows through its Windows 8, Surface and Windows RT initiatives, and there is some sanity in the solutions it has devised. Despite a billion dollars thrown away on excess Surface RT inventory, it should follow through rather than abandon its strategy.

NVIDIA’s Visual Computing Appliance: high-end virtual graphics power on tap

NVIDIA CEO Jen-Hsun Huang has announced the Grid Visual Computing Appliance (VCA). Install one of these, and users anywhere on the network can run graphically-demanding applications on their Mac, PC or tablet. The Grid VCA is based on remote graphics technology announced at last year’s GPU Technology Conference. This year’s event is currently under way in San Jose.

The Grid VCA is a 4U rack-mounted server.

image

Inside are up to two Xeon CPUs, each supporting 16 threads, and up to eight Grid GPU boards, each containing two Kepler GPUs with 4GB of GPU memory apiece. There is up to 384GB of system RAM.

image

There is a built-in hypervisor (I am not sure which one NVIDIA is using) which supports 16 virtual machines, and therefore up to 16 concurrent users.

NVIDIA supplies a Grid client for Mac, Windows or Android (no mention of Apple iOS).

During the announcement, NVIDIA demonstrated a Mac running several simultaneous Grid sessions. The virtual machines were running Windows with applications including Autodesk 3ds Max and Adobe Premiere. This looks like a great way to run Windows on a Mac.

image

The Grid VCA is currently in beta. When available, it will cost from $24,900 plus $2,400/yr for software licenses. It looks as if the licenses are priced at $300 per concurrent user, since the price doubles to $4,800/yr for the configuration which supports 16 concurrent users.

image

Businesses will need to do the arithmetic and see if this makes sense for them. Conceptually it strikes me as excellent, enabling one centralised GPU server to provide high-end graphics to anyone on the network, subject to the concurrent user limitation. It also enables graphically demanding Windows-only applications to run well on Macs.
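The per-user arithmetic can be sketched in a few lines of Python. The prices are from NVIDIA’s announcement; the assumption that the base box licenses 8 concurrent users is my inference from the price doubling:

```python
# Grid VCA licence arithmetic (assumed breakdown, not NVIDIA's own figures)
base_price = 24_900        # entry Grid VCA hardware, USD
licence_base = 2_400       # annual software licence, assumed 8 concurrent users
licence_full = 4_800       # annual licence for the 16-concurrent-user box

per_user_base = licence_base / 8
per_user_full = licence_full / 16

# Both configurations work out to the same per-seat rate
print(per_user_base, per_user_full)  # 300.0 300.0
```

If the inference is right, the licence cost scales linearly with seats, so the decision for a business comes down to whether $300 per concurrent user per year beats buying each user a workstation-class GPU.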

The Grid VCA is part of the NVIDIA GRID Enterprise Ecosystem, which the company says is supported by partners including Citrix, Dell, Cisco, Microsoft, VMware, IBM and HP.

image

Intel Xeon Phi shines vs NVidia GPU accelerators in Ohio State University tests

Which is better for massively parallel computing: a GPU accelerator board from NVIDIA, or Intel’s new Xeon Phi? On the eve of NVIDIA’s GPU Technology Conference comes a paper which Intel will enjoy. Erik Saule, Kamer Kaya and Ümit V. Çatalyürek of Ohio State University have published a paper comparing the performance of the Xeon Phi with that of the NVIDIA Tesla C2050 and Tesla K20. The K20 has 2,496 CUDA cores, versus a mere 61 processor cores on the Xeon Phi, yet on the particular calculations under test the researchers got generally better performance from the Xeon Phi.

In the case of sparse-matrix vector multiplication (SpMV):

For GPU architectures, the K20 card is typically faster than the C2050 card. It performs better for 18 of the 22 instances. It obtains between 4.9 and 13.2GFlop/s and the highest performance on 9 of the instances. Xeon Phi reaches the highest performance on 12 of the instances and it is the only architecture which can obtain more than 15GFlop/s.

and in the case of sparse-matrix matrix multiplication (SpMM):

The K20 GPU is often more than twice faster than C2050, which is much better compared with their relative performances in SpMV. The Xeon Phi coprocessor gets the best performance in 14 instances where this number is 5 and 3 for the CPU and GPU configurations, respectively. Intel Xeon Phi is the only architecture which achieves more than 100GFlop/s.

Note that this is a limited test, and that the authors note that SpMV computation is known to be a difficult case for GPU computing:

the irregularity and sparsity of SpMV-like kernels create several problems for these architectures.
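To see where that irregularity comes from, here is a minimal sparse matrix-vector multiply over the standard CSR (compressed sparse row) layout, sketched in Python. This is textbook material rather than the paper’s actual kernel, but it shows the data-dependent indirect load that makes SpMV hard to keep fed on wide parallel hardware:

```python
def spmv_csr(values, col_idx, row_ptr, x):
    """Multiply a CSR-format sparse matrix by a dense vector x.

    values  - non-zero entries, stored row by row
    col_idx - column index of each non-zero
    row_ptr - offset into values where each row starts (length n_rows + 1)
    """
    y = [0.0] * (len(row_ptr) - 1)
    for row in range(len(y)):
        for k in range(row_ptr[row], row_ptr[row + 1]):
            # x[col_idx[k]] is an indirect, data-dependent load: the access
            # pattern depends on the matrix's sparsity structure, which is
            # the irregularity the authors describe
            y[row] += values[k] * x[col_idx[k]]
    return y

# Example: the 2x2 matrix [[1, 2], [0, 3]] times x = [1, 1]
print(spmv_csr([1.0, 2.0, 3.0], [0, 1, 1], [0, 2, 3], [1.0, 1.0]))  # [3.0, 3.0]
```

Because neighbouring rows can have wildly different lengths and touch scattered parts of x, both load balancing and memory locality suffer, on GPUs especially.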

They also note that memory latency is the biggest factor slowing performance:

At last, for most instances, the SpMV kernel appears to be memory latency bound rather than memory bandwidth bound

It is difficult to compare like with like. The Xeon Phi implementation uses OpenMP, whereas the GPU implementation uses cuSPARSE. I would also be interested to know whether as much effort went into optimising for the GPU as for the Xeon Phi.

Still, this is a real-world test that, if nothing else, demonstrates that in the right circumstances the smaller number of cores in a Xeon Phi does not prevent it from comparing favourably against a GPU accelerator:

When compared with cutting-edge processors and accelerators, its SpMV, and especially SpMM, performance are superior thanks to its wide registers and vectorization capabilities. We believe that Xeon Phi will gain more interest in HPC community in the near future.

Images of Eurora, the world’s greenest supercomputer

Yesterday I was in Bologna for the press launch of Eurora at Cineca, a non-profit consortium of universities and other public bodies. The claim is that Eurora is the world’s greenest supercomputer.

image

Eurora is a prototype deployment of Aurora Tigon, made by Eurotech. It is a hybrid supercomputer, with 128 CPUs supplemented by 128 NVidia Kepler K20 GPUs.

What makes it green? Being new helps, since processor efficiency improves with every generation; “green-ness” is measured in floating point operations per watt, and Eurora delivers 3150 MFlop/s per watt.
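The metric itself is simple: sustained floating point throughput divided by power draw. A quick back-of-envelope sketch in Python, using illustrative figures I have chosen to land near Eurora’s quoted efficiency rather than Cineca’s published measurements:

```python
# Flops-per-watt arithmetic with assumed, illustrative numbers
sustained_flops = 110e12   # assumed sustained performance, Flop/s
power_watts = 35e3         # assumed power draw, W

mflops_per_watt = sustained_flops / power_watts / 1e6
print(round(mflops_per_watt))  # 3143, in the same ballpark as Eurora's 3150
```

The division makes clear why underclocking can pay off: power rises faster than clock speed, so shaving frequency can raise the ratio even as raw throughput falls.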

There is more though. Eurotech is a believer in water cooling, which is more efficient than air cooling. Further, it is easier to do something useful, such as generating energy, with hot water than with hot air.

Other factors include underclocking slightly, and supplying 48 volt DC power in order to avoid power conversion steps.

Eurora is composed of 64 nodes. Each node has a board with two Intel Xeon E5-2687W CPUs, an Altera Stratix V FPGA (Field Programmable Gate Array), an SSD, and RAM soldered to the board; apparently soldering the RAM is more efficient than using DIMMs.

image

Here is the FPGA:

image

and one of the Intel-confidential CPUs:

image

On top of this board goes a water-cooled metal block. This presses against the CPU and other components for efficient heat exchange. There is no fan.

Then on top of that go the K20 GPU accelerator boards. The design means that these can be changed for Intel Xeon Phi accelerator boards. Eurotech is neutral in the NVidia vs Intel accelerator wars.

image

Here you can see where the water enters and leaves the heatsink. When you plug a node into the rack, you connect it to the plumbing as well as the electrics.

image

Here are 8 nodes in a rack.

image

Under the floor is a whole lot more plumbing. This is inside the Aurora cabinet where pipes and wires rise from the floor.

image

Here is a look under the floor outside the cabinet.

image

At the corner of the room is a sort of pump room which pumps the water, monitors the system, adds chemicals to prevent algae growing, and no doubt does a few other things besides.

image

The press was asked NOT to operate this big red switch:

image

I am not sure whether the switch we were not meant to operate is the upper red button, or the lower red lever. To be on the safe side, I left them both as-is.

So here is a thought. Apparently Eurora is 15 times more energy-efficient than a typical desktop. If the mobile revolution continues and we all use tablets, which also tend to be relatively energy-efficient, could we save substantial energy by using the cloud when we need more grunt (whether processing or video) than a tablet can provide?

The disruption of pay as you go hardware – and I do not mean leasing

Last week Amazon CEO Jeff Bezos spoke at a “Fireside Chat” with AWS (Amazon Web Services) chief Werner Vogels. It was an excellent and inspirational performance from Bezos.

image

If there was a common theme, it was his belief in the merit of low margins, which of necessity keep a business efficient. Low margins are also disruptive to other businesses with high margins. But how low can margins go? In some cases, almost to nothing. Talking of Kindle Fire, Bezos remarked that “We don’t get paid when you buy the device. We get paid when you use the device.” It is the same pay as you go model as Amazon Web Services, he said, trying to remain vaguely on topic since this was an AWS event.

His point is that Amazon makes money when you buy goods or services via the device, not from profit on the device itself. He adds that this makes him comfortable, since at that point the device is also proving its value to the customer.

Google has the same business model with its Nexus range, which is why Google Nexus 7 and Amazon Kindle Fire are currently the best value 7” tablets out there. For Google, there is another spin on this: it makes the OS freely available to OEMs so that they also push Google’s adware OS out to the market. If you are not making much profit on the hardware, it makes no difference whether you or someone else sells it.

We do not have to believe that either Amazon or Google really makes nothing at all on the Kindle Fire or Nexus 7. Perhaps they make a slim margin. The point, though, is that the hardware is not primarily a profit centre.

This is disruptive because other vendors such as Apple, Microsoft, Nokia or RIM are trying to make money on hardware. So too are the Android OEMs, who have to be exceptionally smart and agile to avoid simply pushing out hardware at thin margins from which Google makes all the real money.

Google can lose too, when vendors like Amazon take Android and strip out the Google sales channels leaving only their own. This is difficult to pull off if you are not Amazon though, since it relies on having a viable alternative ecosystem in place.

But where does this leave Apple and Microsoft? Apple has its own services to sell, but it is primarily a high margin hardware company selling on quality of design and service. Apple is under pressure now; but Microsoft is hardest hit, since its OEMs have to pay the Windows tax and then sell hardware into the market alongside Android.

Ah, but Android is not a full OS like Windows or OS X. Maybe not … yet … but do not be deceived. Three things will blur this distinction to nothing:

1. The tablet OS category (including iOS) will become more powerful and the capability of apps will increase

2. An increasing proportion of your work will be done online and web applications are also fast improving

3. More people will question whether they need a “full OS” with all that implies in terms of maintenance hassles

Microsoft at least has seen this coming. It is embracing services, from Office 365 to Xbox Music, and selling its own tablet OS and tablet hardware. That is an uphill struggle though, as the mixed reaction to Windows 8 and Surface demonstrates.

Most of the above, I hasten to add, is not from Bezos but is my own comment. Watch the fireside chat yourself below.

Windows 8 launches: key questions remain, but Surface shines

I am in New York for the launch of Windows 8. This morning was the general launch; the Surface RT launch follows this afternoon. Windows chief Steven Sinofsky introduced the event. I was intrigued by how dismissive he was about a key Windows 8 issue: the learning challenge it presents to new users. He gave the impression that a few minutes of experimenting will be enough, though he also referred to a guide that may be new: yesterday I picked up a small booklet introducing Windows 8 which I had not seen before.

Next, Microsoft’s Julie Larson-Green and Michael Angiulo came on to show off a few details of the Windows 8 user interface, followed by CEO Steve Ballmer, who gave what was, for him, a muted address about how great Windows 8 is going to be. Solid facts were few, but Microsoft did mention that over 1000 devices are certified for Windows 8.

So what is Windows 8 all about? It’s a tablet, it’s a laptop, it’s a PC we were told, in other words, everything. But everything is also nothing, and my sense is that even Microsoft is struggling to articulate its message, or at least, struggling to do so in ways that would not offend key partners.

Personally I like Windows 8, I find it perfectly usable and appreciate the convenience of the tablet format. That said, I look at all these hybrid devices and my heart sinks: these are devices that are neither one thing nor another, and pay for it with complexity and expense. Will they win over users who might otherwise have bought a MacBook? I am doubtful.

Windows RT and Intel Atom devices are more interesting. If Microsoft and its partners can push out Windows 8 devices that are inexpensive and work well as tablets without keyboard clutter, that is what has the potential to disrupt the market.

That brings me on to Surface. It is all in the body language: the conviction that was missing from the Windows 8 keynote in the morning was present in the Surface keynote in the afternoon. Even the room was better, with stylish Surface fake pavement art in the corridor and smart white seating.

image

General Manager Panos Panay showed off little details, like the way the rear camera angles so that it is level when the Surface is set on its kickstand. He talked about Microsoft’s drop tests, claiming that they had tested 72 different ways to drop a Surface and designed it not to break. He demonstrated this by dropping it onto a carpet, which was not too challenging, but the fact that Sinofsky successfully used it as a skateboard was more impressive.

image

No doubt then: Microsoft has more enthusiasm for Surface, described by Panay as “the perfect expression of Windows”, than it does for the 1000 certified devices from its partners, though the company would never admit that directly.

What is the significance of Surface? It goes beyond the device itself. It will impact Microsoft’s relationship with its hardware partners. It embodies an Apple-like principle that design excellence means hardware designed for software designed for hardware. It shows that the “OK but nothing special” approach of most Windows hardware vendors is no longer good enough. If Surface is popular, it will also introduce demand for more of the same: a 7” Surface, a Surface phone, and more.

Despite its quality, the success of Surface is not assured. The biggest problem with Windows 8 now is the lack of outstanding apps. That is not surprising given that the platform is new, and you would think that users would make allowance for that. On the other hand, they may lack patience and opt for better supported platforms instead, in which case building app momentum will be a challenge.