Category Archives: intel

Spectre and Meltdown woes continue as Intel confesses to broken updates

Intel’s Navin Shenoy says the company has asked PC vendors to stop shipping its microcode updates that fix the speculative execution vulnerabilities identified by Google’s Project Zero team:

We recommend that OEMs, cloud service providers, system manufacturers, software vendors and end users stop deployment of current versions, as they may introduce higher than expected reboots and other unpredictable system behavior.

This is a blow to industry efforts to fix this vulnerability, a process involving BIOS updates (to install the microcode) as well as operating system patches.

Intel says it has an “early version of the updated solution”. Given the length of time it takes for PC manufacturers to package and distribute BIOS updates for the many thousands of models affected, it looks like the moment at which the majority of active systems will be patched is now far in the future.

Vendors have not yet completed the rollout of the initial patch, which they are now being asked to withdraw.

The detailed microcode guidance is here. Intel also has a workaround which gives some protection while also preserving system stability:

For those concerned about system stability while we finalize the updated solutions, we are also working with our OEM partners on the option to utilize a previous version of microcode that does not display these issues, but removes the Variant 2 (Spectre) mitigations. This would be delivered via a BIOS update, and would not impact mitigations for Variant 1 (Spectre) and Variant 3 (Meltdown).

I am not sure who out there is not concerned about system stability. That said, public cloud vendors will accept almost any trade-off rather than risk code running in one VM getting unauthorised access to the host or to other VMs.

Right now it feels as if most of the world’s computing devices, from server to smartphone, are simply insecure. Though it should be noted that the bad guys have to get their code to run: trivial if you just need to run up a VM on a public cloud, more challenging if it is a server behind a firewall.

The mysterious microcode: Intel is issuing updates for all its CPUs from the last five years but you might not benefit

The Spectre and Meltdown security holes found in Intel and, to a lesser extent, AMD CPUs are not only among the most serious, but also among the most confusing tech issues that I can recall.

We are all used to the idea of patching to fix security holes, but normally that is all you need to do. Run Windows Update, or on Linux apt-get update, apt-get upgrade, and you are done.

This one is not like that. The reason is that you need to update the firmware; that is, the low-level software that drives the CPU. Intel calls this microcode.

So when Intel CEO Brian Krzanich says:

By Jan. 15, we will have issued updates for at least 90 percent of Intel CPUs introduced in the past five years, with updates for the remainder of these CPUs available by the end of January. We will then focus on issuing updates for older products as prioritized by our customers.

what he means is that Intel has issued new microcode for those CPUs to mitigate the newly discovered security holes, which relate to speculative execution (CPUs gain performance by making calculations ahead of time and discarding the results if they turn out not to be needed).
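
To make the speculative execution problem concrete, the Spectre paper’s Variant 1 targets ordinary bounds-checked code. The sketch below (mine, with made-up array names and sizes) shows the shape of the pattern; the code is architecturally correct, which is exactly why fixing the issue needs microcode and operating system changes rather than ordinary application patches.

```cpp
// Illustration of the bounds-check pattern that Spectre Variant 1 abuses.
// Array names and sizes here are hypothetical.
#include <cstddef>
#include <cstdint>

std::size_t array1_size = 16;
std::uint8_t array1[16];
std::uint8_t array2[256 * 512];

void victim_function(std::size_t x) {
    // Architecturally safe: array1 is never read out of bounds.
    // Speculatively, the CPU may run the body with an out-of-bounds x before
    // the comparison resolves; the dependent load from array2 then leaves a
    // cache footprint that an attacker can measure afterwards.
    if (x < array1_size) {
        volatile std::uint8_t tmp = array2[array1[x] * 512];
        (void)tmp;
    }
}

int main() {
    victim_function(0); // a legitimate, in-bounds call
    return 0;
}
```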

Intel’s customers are not you and me, the users, but rather the companies that purchase CPUs, which in most cases are the big PC manufacturers together with numerous device manufacturers. My Synology NAS has an Intel CPU, for example.

So if you have a PC or server from Vendor A, then when Intel has new microcode it is available to Vendor A. How it gets to the PC or server you bought from Vendor A is another matter.

There are several ways this can happen. One is that the manufacturer can issue a BIOS update. This is the normal approach, but it does mean that you have to wait for that update, find it and apply it. Unlike Windows patches, BIOS updates do not come down via Windows Update, but have to be applied via another route, normally a utility supplied by the manufacturer. There are thousands of different PC models; there is no guarantee that any specific model will receive an updated BIOS, and no guarantee that all users will find and apply it even if one is released. Your chances are better if your PC comes from a big-name manufacturer rather than from a brand nobody has heard of, bought from a supermarket or on eBay.

Are there other ways to apply the microcode? Yes. If you are technical you might be able to hack the BIOS, but leaving that aside, some operating systems can apply new microcode on boot. This is why VMware was able to state:

The ESXi patches for this mitigation will include all available microcode patches at the time of release and the appropriate one will be applied automatically if the system firmware has not already done so.

Linux can do this as well. Such updates are volatile; they have to be re-applied on every boot. But there is little harm in that.
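
Incidentally, on Linux you can check which microcode revision is actually in effect, whether it was loaded by the BIOS or applied by the operating system at boot. A minimal sketch of mine, assuming an x86 system where /proc/cpuinfo exposes a microcode field:

```cpp
// Print the microcode revision reported by Linux in /proc/cpuinfo.
// Assumes a typical x86 Linux system; the field appears once per logical CPU.
#include <fstream>
#include <iostream>
#include <string>

int main() {
    std::ifstream cpuinfo("/proc/cpuinfo");
    std::string line;
    while (std::getline(cpuinfo, line)) {
        if (line.rfind("microcode", 0) == 0) {  // e.g. "microcode : 0x84"
            std::cout << line << '\n';
            break;                               // first logical CPU is enough
        }
    }
    return 0;
}
```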

What about Windows? Unfortunately there is no supported way to do this. However, there is an experimental VMware utility that will do it:

This Fling is a Windows driver that can be used to update the microcode on a computer system’s central processor(s) (“CPU”). This type of update is most commonly performed by a system’s firmware (“BIOS”). However, if a newer BIOS cannot be obtained from a system vendor then this driver can be a potential substitute.

Check the comments – interest in this utility has jumped following the publicity around Spectre and Meltdown. If working exploits start circulating you can expect that interest to spike further.

This is a techie and unsupported solution though and comes with a health warning. Most users will never find it or use it.

That said, there is no inherent reason why Microsoft could not come up with a similar solution for PCs and servers for which no BIOS update is available, and even deliver it through Windows Update. If users do start to suffer widespread security problems which require Intel’s new microcode, it would not surprise me if something appears. If it does not, large numbers of PCs will remain unprotected.

Intel fights back against iOS with free tools for HTML5 cross-platform mobile development

Today at its Software Conference in Paris Intel presented its HTML5 development tools.

There are several components, starting with the XDK, a cross-platform development kit based on HTML5, CSS and JavaScript designed to be packaged as mobile apps using Cordova, the open source variant of PhoneGap.

There is an intriguing comment here:

The XDK is fully compatible with the PhoneGap HTML5 cross platform development project, providing many features that are missing from the open source project.

PhoneGap is Adobe’s commercial variant of Cordova. It looks as if Intel is doing its own implementation of features which are in PhoneGap but not Cordova, which might not please Adobe. Apparently code that Intel adds will be fed back into Cordova in due course.

Intel has its own JavaScript app framework, formerly called jqMobi and now called Intel’s App Framework. This is an open source framework hosted on GitHub.

There are also developer tools which run as an extension to Google Chrome, and a cloud-based build service which targets the following platforms:

  • Apple App Store
  • Google Play
  • Nook Store
  • Amazon Appstore for Android
  • Windows 8 Store
  • Windows Phone 8

And web applications:

  • Facebook
  • Intel AppUp
  • Chrome Store
  • Self-hosted

The build service lets you compile and deploy for these platforms without requiring a local install of the various mobile SDKs. It is free and according to Intel’s Thomas Zipplies there are no plans to charge in future. The build service is Intel’s own, and not related to Adobe’s PhoneGap Build, other than the fact that both share common source in Cordova. This also is unlikely to please Adobe.

You can start a new app in the browser, using a wizard.

Intel also has an iOS to HTML5 porting tool in beta, called the App Porter Tool. The aim is to convert Objective C to JavaScript automatically, and while the tool will not convert all the code successfully it should be able to port most of it, reducing the overall porting effort.

Given that the XDK supports Windows 8 modern apps and Windows Phone 8, this is also a route to porting from iOS to those platforms.

Why is Intel doing this, especially on a non-commercial basis? According to Zipplies, it is a reaction to “walled garden” development platforms, which, while not specified, must include Apple’s iOS and to some extent Google’s Android.

Note that both iOS and almost all Android devices run on ARM, so another way of looking at this is that Intel would rather have developers work on cross-platform apps than have them develop exclusively for ARM devices.

Zipplies also says that Intel can optimise the libraries in the XDK to improve performance on its processors.

You can access the HTML5 development tools here.

Programming NVIDIA GPUs and Intel MIC with directives: OpenACC vs OpenMP

Last month I was at Intel’s software conference learning about Many Integrated Core (MIC), the company’s forthcoming accelerator card for HPC (High Performance Computing). This month I am in San Jose for NVIDIA’s GPU Technology Conference learning about the latest developments in NVIDIA’s platform for accelerated massively parallel computing using GPU cards and the CUDA architecture. The approaches taken by NVIDIA and Intel have much in common – a focus on power efficiency, many cores, and accelerator boards with independent memory space controlled by the CPU – but also major differences. Intel’s boards have familiar x86 processors, whereas NVIDIA’s have GPUs which require developers to learn CUDA C or an equivalent such as OpenCL.

In order to simplify this, NVIDIA and its partners Cray, CAPS and PGI announced OpenACC last year, a set of directives which, when added to C/C++ code, instruct the compiler to run that code parallelised on the GPU, or potentially on other accelerators such as Intel MIC. The OpenACC folk have stated from the outset their hope and intention that OpenACC will converge with OpenMP, an existing standard for directives enabling shared-memory parallelisation. OpenMP is not currently suitable for accelerators, since these have their own memory space.
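
For a flavour of what “adding directives” means in practice, here is a sketch of mine (not taken from either specification) showing the same loop annotated for OpenACC offload and for conventional OpenMP shared-memory threading. It assumes a compiler with the relevant support, and clause details vary between implementations.

```cpp
// The same SAXPY loop with an OpenACC directive (offload to an accelerator)
// and an OpenMP directive (host threads sharing one memory space).
#include <vector>

void saxpy_acc(float a, const std::vector<float>& x, std::vector<float>& y) {
    const float* xp = x.data();
    float* yp = y.data();
    int n = static_cast<int>(x.size());
    // Copy x to the accelerator, copy y both ways, parallelise the loop there.
    #pragma acc parallel loop copyin(xp[0:n]) copy(yp[0:n])
    for (int i = 0; i < n; ++i)
        yp[i] = a * xp[i] + yp[i];
}

void saxpy_omp(float a, const std::vector<float>& x, std::vector<float>& y) {
    int n = static_cast<int>(x.size());
    // Spread the iterations across CPU threads; no data movement is needed.
    #pragma omp parallel for
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

int main() {
    std::vector<float> x(1 << 20, 1.0f), y(1 << 20, 2.0f);
    saxpy_acc(2.0f, x, y);
    saxpy_omp(2.0f, x, y);
    return 0;
}
```

A compiler that does not understand a directive can simply ignore it (usually with a warning), leaving the serial loop intact, which is part of the appeal of converging on a single set of directives.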

One thing that puzzled me though: Intel clearly stated at last month’s event that it would support OpenMP (not OpenACC) on MIC, due to go into production at the end of this year or early next. How can this be?

I took the opportunity here at NVIDIA’s conference to ask Duncan Poole, who is NVIDIA’s Senior Manager for High Performance Computing and also the President of OpenACC, about what is happening with these two standards. How can Intel implement OpenMP on MIC, if it is not suitable for accelerators?

“I think OpenMP in the form that’s being discussed inside of the sub-committee is suitable. There’s some debate about some of the specific features that continues. Also, in the OpenMP committee they’re trying to address the concerns of TI and IBM so it’s a broader discussion than just the Intel architecture. So OpenMP will be useful on this class of processor. What we needed to do is not wait for it. That standard, if we’re lucky it will be draft at the end of this year, and maybe a year later will be ratified. We want to unify this developer base now,” Poole told me.

How similar will this adapted OpenMP be to what OpenACC is now?

“It’s got the potential to be quite close. The guy that drafted OpenACC is the head of that sub-committee. There’ll probably be changes in keywords, but there’s also some things being proposed now that were not conceived of. So there’s good debate going on, and I expect that we’ll benefit from it.

“Some of the features for example that are shared by Kepler and MIC with respect to nested parallelism are very useful. Nested parallelism did not exist at the time that we started this work. So there’ll be an evolution that will happen and probably a logical convergence over time.”

If OpenMP is not set to support accelerators until two years hence, what can Intel be doing with it?

“It will be a vendor implementation of a pre-release standard. Something like that,” said Poole, emphasising that he cannot speak for Intel. “To be complimentary to Intel, they have some good ideas and it’s a good debate right now.”

Incidentally, I also asked Intel about OpenACC last month, and was told that the company has no plans to implement it on its compilers. OpenMP is the standard it supports.

The topic is significant, in that if a standard set of directives is supported across both Intel and NVIDIA’s HPC platforms, developers can easily port code from one to the other. You can do this today with OpenCL, but converting an application to use OpenCL to enhance performance is a great deal more effort than adding directives.

Multicore processor wars: NVIDIA squares up to Intel

I first became aware of NVIDIA’s propaganda war against Intel at the 2012 GPU Technology conference in Beijing. CEO Jen-Hsun Huang stated that CPUs are remarkably inefficient for multicore processing:

The CPU is fast and is terrific at single-threaded performance, but because so much of the electronics inside the CPU is dedicated to out of order execution, branch prediction, speculative execution, all of the technology that has gone into sustaining instruction throughput and making the CPU faster at single-threaded applications, the electronics necessary to enable it to do that has grown tremendously. With four cores, in order to execute an operation, a floating point add or a floating point multiply, 50 times more energy is dedicated to the scheduling of that operation than the operation itself. If you look at the silicon of a CPU, the floating point unit is only a few per cent of the overall die, and it is consistent with the usage of the energy to sequence, to schedule the instructions running complicated programs.

That figure of 50 times surprised me, and I asked Intel’s James Reinders for a comment. He was quick to respond, noting that:

50X is ridiculous if it encourages you to believe that there is an alternative which is 50X better.  The argument he makes, for a power-efficient approach for parallel processing, is worth about 2X (give or take a little). The best example of this, it turns out, is the Intel MIC [Many Integrated Core] architecture.

Reinders went on to say:

Knights Corner is superior to any GPGPU type solution for two reasons: (1) we don’t have the extra power-sucking silicon wasted on graphics functionality when all we want to do is compute in a power efficient manner, and (2) we can dedicate our design to being highly programmable because we aren’t a GPU (we’re an x86 core – a Pentium-like core for “in order” power efficiency). These two turn out to be substantial advantages that the Intel MIC architecture has over GPGPU solutions that will allow it to have the power efficiency we all want for highly parallel workloads, but able to run an enormous volume of code that will never run on GPGPUs (and every algorithm that can run on GPGPUs will certainly be able to run on a MIC co-processor).

So Intel is evangelising its MIC against GPGPU solutions such as NVIDIA’s Tesla line. Yesterday NVIDIA’s Steve Scott spoke up to put the other case. If Intel’s point is that a Tesla is really a GPU pressed into service for general computing, then Scott’s first point is that the cores in MIC are really CPUs, albeit of an older, simpler design:

They don’t really have the equivalent of a throughput-optimized GPU core, but were able to go back to a 15+ year-old Pentium design to get a simpler processor core, and then marry it with a wide vector unit to get higher flops per watt than can be achieved by Xeon processors.

Scott then takes on Intel’s most compelling claim, compatibility with existing x86 code. It does not matter much, says Scott, since you will have to change your code anyway:

The reality is that there is no such thing as a “magic” compiler that will automatically parallelize your code. No future processor or system (from Intel, NVIDIA, or anyone else) is going to relieve today’s programmers from the hard work of preparing their applications for the future.

What is the real story here? It would, of course, be most interesting to compare the performance of MIC vs Tesla, or against the next generation of NVIDIA GPGPUs based on Kepler; and may the fastest and most power-efficient win. That will have to wait though; in the meantime we can see that Intel is not enjoying seeing the world’s supercomputers install NVIDIA GPGPUs – the Oak Ridge National Laboratory Jaguar/Titan (the most powerful supercomputer in the USA) being a high profile example:

In addition, 960 of Jaguar’s 18,688 compute nodes now contain an NVIDIA graphical processing unit (GPU). The GPUs were added to the system in anticipation of a much larger GPU installation later in the year.

Equally, NVIDIA may be rattled by the prospect of Intel offering strong competition for Tesla. It has not had a lot of competition in this space.

There is an ARM factor here too. When I spoke to Scott in Beijing, he hinted that NVIDIA would one day produce GPGPUs with ARM chips embedded for CPU duties, perhaps sharing the same memory.

NVIDIA plans to merge CPU and GPU – eventually

I spoke to Dr Steve Scott, NVIDIA’s CTO for Tesla, at the end of the GPU Technology Conference which has just finished here in Beijing. In the closing session, Scott talked about the future of NVIDIA’s GPU computing chips. NVIDIA releases a new generation of graphics chips every two years:

  • 2008 Tesla
  • 2010 Fermi
  • 2012 Kepler
  • 2014 Maxwell

Yes, it is confusing that the Tesla brand, meaning cards for GPU computing, has persisted even though the Tesla family is now obsolete.

[Image: Dr Steve Scott showing off the power efficiency of GPU computing]

Scott talked a little about a topic that interests me: the convergence or integration of the GPU and the CPU. The background here is that while the GPU is fast and efficient for parallel number-crunching, it is of course still necessary to have a CPU, and there is a price to pay for the communication between the two. The GPU and the CPU each have their own memory, so data must be copied back and forth, which is an expensive operation.
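
A host-side sketch using the CUDA runtime API illustrates the copies in question (names are mine, error handling and the kernel itself are omitted):

```cpp
// The explicit allocation and copies needed when CPU and GPU have separate
// memory. Each cudaMemcpy crosses the bus between host and device.
#include <cuda_runtime.h>
#include <cstddef>
#include <vector>

void process_on_gpu(std::vector<float>& data) {
    const std::size_t bytes = data.size() * sizeof(float);

    float* d_data = nullptr;
    cudaMalloc((void**)&d_data, bytes);                              // device memory
    cudaMemcpy(d_data, data.data(), bytes, cudaMemcpyHostToDevice);  // CPU -> GPU

    // ... launch a kernel that works on d_data ...

    cudaMemcpy(data.data(), d_data, bytes, cudaMemcpyDeviceToHost);  // GPU -> CPU
    cudaFree(d_data);
}
```

Unless those transfers can be reduced, overlapped or eliminated, they can easily eat into the time saved by running the computation on the GPU.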

One solution is for GPU and CPU to share memory, so that a single pointer is valid on both. I asked CEO Jen-Hsun Huang about this, and he did not hold out much hope:

We think that today it is far better to have a wonderful CPU with its own dedicated cache and dedicated memory, and a dedicated GPU with a very fast frame buffer, very fast local memory, that combination is a pretty good model, and then we’ll work towards making the programmer’s view and the programmer’s perspective easier and easier.

Scott on the other hand was more forthcoming about future plans. Kepler, which is expected in the first half of 2012, will bring some changes to the CUDA architecture which will “broaden the applicability of GPU programming, tighten the integration of the CPU and GPU, and enhance programmability,” to quote Scott’s slides. This integration will include some limited sharing of memory between GPU and CPU, he said.

What caught my interest though was when he remarked that at some future date NVIDIA will probably build CPU functionality into the GPU. The form that might take, he said, is that the GPU will have a couple of cores that do the CPU functions. This will likely be an implementation of the ARM CPU.

Note that this is not promised for Kepler nor even for Maxwell but was thrown out as a general statement of direction.

There are a couple of further implications. One is that NVIDIA plans to reduce its dependence on Intel. ARM is a better partner, Scott told me, because its designs can be licensed by anyone. It is not surprising then that Intel’s multi-core evangelist James Reinders was dismissive when I asked him about NVIDIA’s claim that the GPU is far more power-efficient than the CPU. Reinders says that the forthcoming MIC (Many Integrated Core) processors codenamed Knights Corner are a better solution, referring to the:

… substantial advantages that the Intel MIC architecture has over GPGPU solutions that will allow it to have the power efficiency we all want for highly parallel workloads, but able to run an enormous volume of code that will never run on GPGPUs (and every algorithm that can run on GPGPUs will certainly be able to run on a MIC co-processor).

In other words, Intel foresees a future without the need for NVIDIA, at least in terms of general-purpose GPU programming, just as NVIDIA foresees a future without the need for Intel.

Incidentally, Scott told me that he left Cray for NVIDIA because of his belief in the superior power efficiency of GPUs. He also described how the Titan supercomputer operated by the Oak Ridge National Laboratory in the USA will be upgraded from its current CPU-only design to incorporate thousands of NVIDIA GPUs, with the intention of achieving twice the speed of Japan’s K computer, currently the world’s fastest.

This whole debate also has implications for Microsoft and Windows. Huang says he is looking forward to Windows on ARM, which makes sense given NVIDIA’s future plans. That said, the impression I get from Microsoft is that Windows on ARM is not intended to be the same as Windows on x86 save for the change of processor. My impression is that Windows on ARM is Microsoft’s iOS, a locked-down operating system that will be safer for users and more profitable for Microsoft as app sales are channelled through its store. That is all very well, but it suggests that we will still need x86 Windows if only to retain open access to the operating system.

Another interesting question is what will happen to Microsoft Office on ARM. It may be that x86 Windows will still be required for the full features of Office.

This means we cannot assume that Windows on ARM will be an instant hit; much is uncertain.

C++ 11 is approved by ISO: a big day for native code development

Herb Sutter reports that C++ 0x, which will be called C++ 11, has been unanimously approved by the ISO C++ committee. The “11” in the name refers to the year of approval, 2011. The current standard is C++ 98, though amended as C++ 03, so it has taken 8 or 13 years to update it depending on how you count it.

This means that compiler makers can get on with implementing the full C++ 11 standard. Most current compilers implement some of the features already. This Apache wiki shows the current status. A quick glance suggests that the open source GCC is ahead of the pack, followed by Intel C++ and then perhaps Microsoft Visual C++.

C++ 11 is pretty much compatible with C++ 03 so existing code should still work. However there are many new features, enough for Bjarne Stroustrup to say in his feature summary:

Surprisingly, C++0x feels like a new language: The pieces just fit together better than they used to and I find a higher-level style of programming more natural than before and as efficient as ever. If you timidly approach C++ as just a better C or as an object-oriented language, you are going to miss the point. The abstractions are simply more flexible and affordable than before. Rely on the old mantra: If you think of it as a separate idea or object, represent it directly in the program; model real-world objects, and abstractions directly in code. It’s easier now.

Concurrent programming is better supported in C++ 11, important for getting the best performance from modern hardware.
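
For example, the new standard library includes std::thread, std::async and std::future; combined with lambdas and auto, a small concurrent program (an illustration of mine, not taken from the standard) looks like this:

```cpp
// C++11 concurrency in miniature: sum half a vector on another thread
// via std::async while this thread sums the other half.
#include <future>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<int> values(1000000, 1);
    auto mid = values.begin() + values.size() / 2;

    auto upper = std::async(std::launch::async, [&] {
        return std::accumulate(mid, values.end(), 0);
    });
    int lower = std::accumulate(values.begin(), mid, 0);

    std::cout << "total: " << lower + upper.get() << '\n';
    return 0;
}
```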

It is curious how the programming landscape has changed in recent years. A few years back, you might have foreseen a day when most programming would be .NET, Java or JavaScript: all varieties of managed code. While those languages do still dominate, native code has come more to the fore, thanks to factors like Apple’s focus on Objective C, and signs of internal conflict at Microsoft over the best language for coding Windows applications.

That said, C++ 11 remains a demanding language to learn and use. As Stroustrup notes, since C++ 11 is a superset of C++ 98 it is technically harder to learn all of it, though new libraries and abstractions should help beginners. The reasons for using or not using C++ are not going to change significantly with this new standard.

When will Intel’s Many Integrated Core processors be mainstream?

I’m at Intel’s software tools conference in Dubrovnik, which I have attended for the last three years, and as usual the big topic is concurrent programming and how to write code that takes advantage of the multiple cores in today’s computers.

Clearly this remains a critical subject, but in some ways the progress over these last three years has been disappointing when it comes to the PCs that most of us use. Many machines are only dual-core, which is sub-optimal for concurrent programming since there is an overhead to multi-threaded programming that eats into the benefit of having two cores. Quad core is now common too, and more useful, but what about having 50 or 80 or more cores? This enables massively parallel processing of the kind that you can easily do today with general-purpose GPU programming using OpenCL or NVIDIA’s CUDA, but not yet on the CPU unless you have a supercomputer. I realise that GPU cores are not the same as CPU cores; but nevertheless they enable some spectacularly fast parallel processing.
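
The overhead point is easy to demonstrate: for a tiny piece of work, creating and joining threads can cost more than the second core saves. A rough illustration of mine (not a serious benchmark, and timings vary widely by machine):

```cpp
// Compare a serial sum with a two-thread sum of a deliberately tiny vector.
// On many machines the threaded version is slower here, because thread
// creation and joining dwarf the work being split.
#include <chrono>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    std::vector<int> small(1000, 1);   // deliberately tiny workload

    auto t0 = std::chrono::steady_clock::now();
    long serial = std::accumulate(small.begin(), small.end(), 0L);
    auto t1 = std::chrono::steady_clock::now();

    long a = 0, b = 0;
    auto mid = small.begin() + small.size() / 2;
    std::thread first([&] { a = std::accumulate(small.begin(), mid, 0L); });
    std::thread second([&] { b = std::accumulate(mid, small.end(), 0L); });
    first.join();
    second.join();
    auto t2 = std::chrono::steady_clock::now();

    using us = std::chrono::microseconds;
    std::cout << "serial sum " << serial << " took "
              << std::chrono::duration_cast<us>(t1 - t0).count() << " us\n";
    std::cout << "threaded sum " << (a + b) << " took "
              << std::chrono::duration_cast<us>(t2 - t1).count() << " us\n";
    return 0;
}
```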

I am interested therefore in Intel’s MIC or Many Integrated Core architecture, which combines 50 or more CPU cores on a single chip. MIC is already in preview, with hardware codenamed Knights Corner and a development kit called Knights Ferry. But when will MIC hit the mainstream for servers and workstations, and how long is it until we can have 50 cores on a commodity desktop PC? I spoke to Intel’s chief evangelist James Reinders.

Reinders first gave me some background on MIC:

“We’ve made those bold steps to dual core, quad core and we’ve got even ten core now, but if you look inside those microprocessors they have a very simple structure. All the cores are hooked together and share their connection to memory, through a shared cache usually that’s on the chip. It’s a simple computer structure, and we know from experience when you build computers with more and more processors, that eventually you go to more sophisticated connections between the cores. You don’t build a 1000-processor super computer and hook them all together with a bus to one memory.

“It’s inevitable that on a chip we need to design a more sophisticated connection. That’s what MIC’s about, that’s what the Larrabee project has always been about, a belief that we should take a bunch of x86 cores and hook them together with something more sophisticated. In this case it’s a ring, a bi-directional, 512-bit wide high performance ring, with multiple connections to memory off the chip, which gives us more bandwidth.

“That’s how I look at MIC, it’s putting a cluster-type of design on a chip.”

But what about timing?

“The first place you’ll see this is in servers and in workstations, where there’s a lot of demand for a lot of computation. In that case we’ll see that availability sometime by the end of 2012. The Intel product should be out late in that year.

“When will we see it in other devices? I think that’s a ways off. It’s a very high core count part, more than 50, it’s going to consume a fair amount of power. The same part 18 months later will probably consume half the power. So inside a decade we could see this being common on desktops, I don’t know about mobile devices, it might even make it to tablets. A decade’s a long time, it gives a lot of time for people to come up with innovative uses for it in software.

“We’ll see single core disappear everywhere.”

Incidentally, it is hard to judge how much computing power is “enough”. Although having many CPU cores may seem overkill for everyday computing, things like speech recognition or on-the-fly image processing make devices smarter at the expense of intense processing under the covers. From supercomputers to smartphones, history tells us that if more computing capability is available, we will find ways to use it.

Intel disappointed with Nokia’s Microsoft move, still backing MeeGo

Intel’s Suzy Ramirez has posted about the future of MeeGo Linux following Nokia’s decision to base its smartphone strategy on Microsoft’s Windows Phone operating system. Nokia was Intel’s key partner for MeeGo, which was formed by merging Intel’s Moblin with Nokia’s Maemo.

Although Nokia has been an important partner to Intel and MeeGo and we are disappointed by this decision, it’s important to know that this is by no means the end of MeeGo or the end to Intel’s commitment

says Ramirez, adding that “MeeGo is not just a phone OS”.

True; but with the focus also moved away from netbooks it is getting hard to see where MeeGo will have an opportunity to shine.

Intel promises to outline its mobile strategy this week at Mobile World Congress. I will be reporting from Barcelona in due course.

Ten big tech trends from 2010

This was an amazing year for tech. Here are some of the things that struck me as significant.

Sun Java became Oracle Java

Oracle acquired Sun and set about imposing its authority on Java. Java is still Java, but Oracle lacks Sun’s commitment to open source and community – though even in Sun days there was tension in this area. That was nothing to the fireworks we saw in 2010, with Java Community Process members resigning, IBM switching from its commitment to the Apache Harmony project to the official OpenJDK, and the Apache foundation waging a war of words against Oracle that was impassioned but, it seems, futile.

Microsoft got cloud religion

Only up to a point, of course. This is the Windows and Office company, after all. However – and this is a little subjective – this was the year when Microsoft convinced me it is serious about Windows Azure for hosting our applications and data. In addition, it seems to me that the company is willing to upset its partners if necessary for the sake of its hosted Exchange and SharePoint – BPOS (Business Productivity Online Suite), soon to become Office 365.

This is a profound change for Microsoft, bearing in mind its business model. I spoke to a few partners when researching this article for the Register and was interested by the level of unease that was expressed.

Microsoft also announced some impressive customer wins for BPOS, especially in government, though the price the customers pay for these is never mentioned in the press releases.

Microsoft Silverlight shrank towards Windows-only

Silverlight is Microsoft’s browser plug-in which delivers multimedia and the .NET Framework to Windows and Mac; it is also the development platform for Windows Phone 7. It still works on a Mac, but in 2010 Microsoft made it clear that cross-platform Silverlight is no longer its strategy (if it ever was), and undermined the Mac version by adding Windows-specific features that interoperate with the local operating system. Silverlight is still an excellent runtime, powerful, relatively lightweight, easy to deploy, and supported by strong tools in Visual Studio 2010. If you have users who do not run Windows though, it now looks a brave choice.

The Apple iPad was a hit

I still have to pinch myself when thinking about how Microsoft now needs to catch up with Apple in tablet computing. I got my first tablet in 2003, yes seven years ago, and it ran Windows. Now despite seven years of product refinement it is obvious that Windows tablets miss the mark that Apple has hit with its first attempt – though drawing heavily on what it learnt with the equally successful iPhone. I see iPads all over the place, in business as well as elsewhere, and it seems to me that the success of a touch interface on this larger screen signifies a transition in personal computing that will have a big impact.

Google Android was a hit

Just when Apple seemed to have the future of mobile computing in its hands, Google’s Android alternative took off, benefiting from mass adoption by everyone-but-Apple among hardware manufacturers. Android is not as elegantly designed or as usable as Apple’s iOS, but it is close enough; and it is a relatively open platform that runs Adobe Flash and other apps that do not meet Apple’s approval. There are other contenders: Microsoft Windows Phone 7; RIM’s QNX-based OS in the PlayBook; HP’s Palm WebOS; Nokia Symbian and Intel/Nokia MeeGo – but how many mobile operating systems can succeed? Right now, all we can safely say is that Apple has real competition from Android.

HP fell out with Microsoft

Here is an interesting one. The year kicked off with a press release announcing that HP and Microsoft love each other to the extent of $250 million over three years – but if you looked closely, that turned out to be less than a similar deal in 2006. After that, the signs were even less friendly. HP acquired Palm in April, signalling its intent to compete with Windows Mobile rather than adopting it; and later this year HP announced that it was discontinuing its Windows Home Server range. Of course HP remains a strong partner for Windows servers, desktops and laptops; but these are obvious signs of strain.

The truth though is that these two companies need one another. I think they should kiss and make up.

eBook readers were a hit

I guess this is less developer-oriented; but 2010 was the year when electronic book publishing seemed to hit the mainstream. Like any book lover I have mixed feelings about this and its implications for bookshops. I doubt we will see books disappear to the same extent as records and CDs; but I do think that book downloads will grow rapidly over the next few years and that paper-and-ink sales will diminish. It is a fascinating tech battle too: Amazon Kindle vs Apple iPad vs the rest (Sony Reader, Barnes and Noble Nook, and others which share their EPUB format). I have a suspicion that converged devices like the iPad may win this one, but displays that are readable in sunlight have special requirements so I am not sure.

HTML 5 got real

2010 was a huge year for HTML 5 – partly because Microsoft announced its support in Internet Explorer 9, currently in beta; and partly because the continued growth of browsers such as Mozilla Firefox, and the WebKit-based Google Chrome, Apple Safari and numerous mobile browsers showed that HTML 5 would be an important platform with or without Microsoft. Yes, it is fragmented and unfinished; but more and more of HTML 5 is usable now or in the near future.

Adobe Flash survived Apple and HTML 5

2010 was the year of Steve Jobs’ notorious Thoughts on Flash as well as a big year for HTML 5, which encroaches on territory that used to require the services of a browser plug-in. Many people declared Adobe Flash dead, but the reality was different and the company had a great year. Apple’s focus on design and usability helps Adobe’s design-centric approach even while Apple’s refusal to allow Flash on its mobile computers opposes it.

Windows 7 was a hit

Huge relief in Redmond as Windows 7 sold and sold. The future belongs to mobile and cloud; but Windows is not going away soon, and version 7 is driving lots of upgrades as even XP diehards move over. I’m guessing that we will get first sight of Windows 8 in 2011. Another triumph, or another Vista?