
Google Assistant was all over IFA in Berlin. What are the implications?

Last week I attended IFA in Berlin, perhaps Europe’s biggest consumer electronics event, and was struck by the ubiquity of Google Assistant. The company spent big on promoting its digital assistant both outside and inside the venue.

Mach mal, Google; or in English, Go Google.


On the stands and in press briefings I soon lost count of who was supporting Google’s voice assistant. A few examples:


JBL/Harman in its earbuds


Lenovo with its Home Control Solutions – Lenovo also uses its own cloud and will support Amazon Alexa


LG with audio, TV, kitchen, home automation and more


Bang & Olufsen with its smart speakers. No logo, but it is using Google Assistant both as a feature in itself (voice search and so on) and to control other audio devices.

And Sony with its TVs and more. For example, the new AF9 and ZF9 series: “Using the Google Assistant with both the AF9 and ZF9 will be even easier. Both models have built-in microphones that will free the hands; now you simply talk to the TV to find what you quickly want, or to ask the Google Assistant to play TV shows, movies, and more.”

I was only at IFA for the pre-conference press days so this is just a snapshot of what I saw; there were many more Google Assistant integrations on display, and quite a few (though not as many) for Amazon Alexa.

It is fair to say then that Google is treating this as a high priority and having considerable success in getting vendors to sign up.

What is Google Assistant?

Google Assistant really only needs three things in order to work. A microphone, to hear you. An internet connection, to send your voice input to its internet service for voice to text transcription, and then to its AI/Search service to find a suitable response. And a speaker, to output the result. You can get it as a product called Google Home but it is the software and internet service that counts.


Vendors of smart devices – anything that has an internet connection – can develop integrations so that Google Assistant can control them. So you can say, “Hey Google, turn on the living room light” and it will be so. Cool.
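To make the integration concrete, here is a minimal sketch of the kind of fulfilment webhook a device vendor might expose for this, assuming Google’s smart home EXECUTE intent format; the set_light function is a hypothetical stand-in for the vendor’s own device cloud, so treat it as an illustration rather than any vendor’s actual code.

```python
# Minimal sketch of a smart home fulfilment webhook, assuming Google's
# action.devices.EXECUTE intent format; set_light() is a hypothetical stand-in
# for the vendor's own device cloud.
from flask import Flask, request, jsonify

app = Flask(__name__)

def set_light(device_id: str, on: bool) -> None:
    # In a real integration this would call the vendor's device API.
    print(f"Light {device_id} -> {'on' if on else 'off'}")

@app.route("/fulfillment", methods=["POST"])
def fulfillment():
    body = request.get_json()
    results = []
    for inp in body.get("inputs", []):
        if inp.get("intent") != "action.devices.EXECUTE":
            continue
        for command in inp["payload"]["commands"]:
            want_on = command["execution"][0]["params"].get("on", False)
            for device in command["devices"]:
                set_light(device["id"], want_on)
                results.append({"ids": [device["id"]], "status": "SUCCESS"})
    # Abbreviated response; a full implementation also reports device state.
    return jsonify({"requestId": body.get("requestId"),
                    "payload": {"commands": results}})

if __name__ == "__main__":
    app.run(port=8080)
```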

Amazon Alexa has similar features and this is Google’s main competition. Alexa was first and ties in well with Amazon services such as shopping and media. However Google has the advantage of its search services, its control of Android, and its extensive personal data derived from search, Android, Google Maps and location services, GMail and more. This means Google can do better AI and richer personalisation.

Natural language UI

Back in March I attended an AI Assistant Summit in London organised by Re-Work. One of the speakers was Yariv Adan, a Product Lead at Google Assistant.


I attend lots of presentations but this one made a particular impact on me. Adan believes that natural language UI is the next big technological shift. The preceding ones he identified were the Internet in the nineties and smartphones in the early years of this century. Adan envisages an era in which we no longer constantly pull out devices.

“I believe the next revolution is happening now, powered by AI. I call it the paradigm switch to natural UI. Instead of humans adapting to machines, machines adapt to humans. What we’re trying to create is we interact with machines the same way we interact with each other, in a natural way. Meaning using natural language, showing things, pointing at things, assuming context, assuming a human-like memory, expecting personality, humour, opinion, some kind of an emotional connection, empathy.

[In future] it is not the device changing, it is the device disappearing. We are not going to interact with devices any more. We are starting to interact with this AI entity, an ambient entity that exists everywhere.”

Note: If you ever read Isaac Asimov’s science fiction novels, you will recognise this as very like his Multivac computer, which hears and responds to your questions wherever you are.

“Imagine now that everything is connected, that the entity follows you. That there is no more device that you need to take out, turn on, speak to it. It’s around you, it’s on the TV, it’s in the speakers, it’s in your headphones, it’s in the watch, it’s in the auto, it’s there. Internet of things, any connected device that only has a speaker you can actually start interacting with that thing,”

said Adan.

Adan gave a number of demonstrations. Incidentally, he never uttered the words “Hey Google”. Simply, he spoke into his phone, where I presume some special version of Google Assistant was running. In particular, he was keen to show how the AI is learning about context and memory. So he asked what is the largest castle in the UK where people live. Answer: Windsor Castle. Then, Who built it? When? Is it open now? How can I get there by public transport? What about food? In each case, the Assistant answered as a human would, understanding that the topic was Windsor Castle. “I found some restaurants within 0.4 miles,” said the Assistant, betraying a touch of computer-style logic.

“Thank you, you’re awesome,” said Adan. “Not a problem,” responded the Assistant. This is an example of personality or emotion, key factors, said Adan, in making interaction natural.

Adan also talked about personalisation. “Show me my flight”. The Assistant knows he is away from home and also has access to his mailbox, from where it has parsed the flight details. So it answers this generic question with specific details about tomorrow’s flight to Zurich.

“Where did I park my car?” In this case, Adan had taken a picture of his car after parking. The Assistant knew the location of the picture and was able to show both the image and its place on a map.

“I want to show how we use some of that power for the ecosystem that we have built … we’re trying to make that revolution to a place where you don’t need to think about the machine any more, where you just interact in a way that is natural. I am optimistic, I think the revolution is happening now.”

Implications and unintended consequences

An earlier speaker at the Re-Work event (sorry I forget who it was) noted that voice systems give simplified results compared to text-based searches. Often you only get one result. Back in the nineties, we used to talk about “10 blue links” as the typical result of a search. This meant that you had some sort of choice about where you clicked, and an easy way to get several different perspectives. Getting just one result is great if the answer is purely factual and is correct, but reinforces the winner-takes-all tendency. Instead of being on the first page of results, you have to be top. Or possibly pay for advertising; that aspect has not yet emerged in the voice assistant world.

If we get into the habit of shopping via voice assistants, it will be disruptive for brands. Maybe Amazon Basics will do well, if users simply say “get me some A4 paper” rather than specifying a brand. Maybe more and more decisions will be taken for you. “Get me a takeaway dinner”, perhaps, with the assistant knowing both what you like, and what you ate yesterday and the day before.

All this is speculation, but it is obvious that a shift from screens to voice for both transactions and information will have consequences for vendors and information providers; and that probably it will tend to reduce rather than increase diversity.

What about your personal data? This is a big question and one that the industry hates to talk about. I heard nothing about it at IFA. The assumption was that if you could turn on a light, or play some music, without leaving your chair, that must be a good thing. Yet, having a device or devices in your home listening to your every word (in case you might say “Hey Google”) is something that makes me uncomfortable. I do not want Google reading my emails or tracking my location, but it is becoming hard to avoid.

For most people, Google Assistant will just be a feature of their TV, or audio system, or a way to call up recipes in the kitchen.

From Google’s perspective though, it is safe to assume that the ability to collect data is a key reason for its strong promotion of, and drive behind, Google Assistant. That data has enormous value. Targeted advertising is the start, but it also provides deep insight into how we live, trends in human behaviour, changing patterns of consumption, and much more. When things are going wrong with our health, our finances or our relationships, it is not implausible that Google may know before we do.

This is a lot of power to give a giant US corporation; and we should also note that in some scenarios, if the US government were to demand that data be handed over, a company like Google has no choice but to comply.

Personalisation can make our lives better, but also has the potential to harm us. An area of concern is that of shared risk, such as health insurance. Insurers may be reluctant to give policies to those people most likely to make a claim. Could Google’s data store somehow end up impacting our ability to insure, or its cost?

Personalisation is always a trade-off: the organisation gets my data; I get a benefit. I shop at a supermarket and this is fairly transparent. I use a loyalty card so the shop knows what I buy; in return I get discount points and special offers.

In the case of Google Assistant it is not so transparent. The EU’s GDPR legislation has helped, giving citizens the right to access their data and the right to be forgotten. However, we are still in the era of one-sided privacy policies and in many cases the binary choice of agree, or do not use our services. This becomes a problem if the service provider has anything close to a monopoly, which is true in Google’s case. Regulation, it seems to me, is exactly the right answer to the risks inherent in putting too much power in the hands of a business entity.

For myself, I am happy to cross the room and turn on the light, and to find my flight in my calendar; the trade-off is not worth it to me. But if Adan’s “ambient entity” comes to pass (and it will most likely be Google’s), I am not sure of the extent to which I will have a choice.

Adan’s work is terrific and the ability for machines to converse with humans in something close to a natural way is a huge technical achievement. I have nothing but respect for him and his team. It is part of a wider picture though, about data gathering, personalisation, and control of information and transactions, and it seems to me that this deserves more attention.

Microsoft’s strong financials, and some notes on Azure vs AWS and the risks of losing in mobile

Microsoft delivered another strong set of figures in its latest financial results, for the period April-June 2018. Total revenue of $30.085 billion was up 17% year on year, and all three of the company’s segments (Office, Azure and consumer) showed strong growth.

What’s notable? Largely this is more of the same, but a few things stand out. LinkedIn revenue increased 37% year on year – an acquisition that seems to be making sense for the company. Dynamics 365 revenue grew by 65%. The Dynamics story is all about cloud synergy. As an on-premises product Dynamics CRM (the part of the suite I know best) was relatively undistinguished, but as a cloud product the seamless integration between Office 365 and Dynamics 365 (and Azure Active Directory) makes it compelling.

Windows 10 is doing OK, possibly as more businesses heave themselves off Windows 7 and buy new PCs with OEM licenses as they do.

Even areas in which Microsoft is far from dominant did well: gaming was up 39%, Surface up 25%, and search advertising up 17%.

The biggest growth in the quarter, according to the breakdown here, was in Azure, up 89%. This growth is not without pain; the Register reports capacity issues in the UK South region, for example, with users getting the message “Unfortunately, due to high demand for virtual machines in this region, we are not able to approve your quota request at this time.” You can still create VMs, but not necessarily in the region you want.

Will Microsoft outpace AWS? My take on this has not changed. AWS does very little wrong and remains the pre-eminent cloud for IaaS and many services by some distance. What AWS does not have is Office 365, or armies of Microsoft partners helping enterprise customers to shunt more and more of their IT infrastructure into Azure. Microsoft makes more money from licensing: Windows Server, SQL Server, Office 365 and Dynamics seats, and so on. AWS does more business at a lower margin. These are big differences. I see it as unlikely that Azure will overtake AWS in the provision of essential cloud services like VMs, containers, cloud storage and so on. AWS also has a better reliability track record. However, the success of Azure means that enterprise customers no longer need to go to AWS to get the benefits of cloud. Perhaps the more interesting question is the extent to which AWS (or Google) can persuade enterprise customers to shift away from Microsoft’s high-margin applications.

Longer term, there is significant risk for the company in its retreat from mobile. We are now seeing Google work hard in the laptop market with Chromebooks alongside Android mobile. Coming sometime is Google Fuchsia which may be a single operating system for both. It is worth recalling that Microsoft built its success on winning users for its PC operating system; and that IBM lost its IT dominance by ceding this to Microsoft.

Here is the breakdown by segment, such as it is:  

Quarter ending June 30th 2018 vs quarter ending June 30th 2017, $millions

Segment | Revenue | Change | Operating income | Change
Productivity and Business Processes | 9,668 | +1,140 | 3,466 | +575
Intelligent Cloud | 9,606 | +1,784 | 3,901 | +990
More Personal Computing | 10,811 | +1,576 | 3,012 | +826

The segments break down as:

Productivity and Business Processes: Office, Office 365, Dynamics 365 and on-premises Dynamics, LinkedIn

Intelligent Cloud: Server products, Azure cloud services

More Personal Computing: Consumer including Windows, Xbox; Bing search; Surface hardware

Ubuntu goes minimal (but still much bigger than Alpine Linux), cosies up to Google Cloud Platform

Ubuntu has announced “Minimal Ubuntu”, a cut-down server image designed for containerised deployments. The Docker image for Minimal Ubuntu 18.04 is 29MB:

Editors, documentation, locales and other user-oriented features of Ubuntu Server have been removed. What remains are only the vital components of the boot sequence.  Images still contain ssh, apt and snapd so you can connect and install any package you’re missing. The unminimize tool lets you ‘rehydrate’ your image into a familiar Ubuntu server package set, suitable for command line interaction.

says Canonical.

29MB is pretty small; but not as small as Alpine Linux images, commonly used by Docker, which are nearer 5MB. Of course these image sizes soon increase when you add the applications you need.

I pulled Ubuntu 18.04 from Docker Hub and the image size is 31.26MB so this hardly seems a breakthrough.
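For anyone who wants to reproduce the comparison, here is a quick sketch using the Docker SDK for Python; the exact figures will vary by tag and architecture, and note that the size reported by the local engine is the unpacked size, which can differ from the download size shown on Docker Hub.

```python
# Pull a couple of base images and print the size reported by the Docker engine.
# Requires the Docker SDK for Python (pip install docker) and a running daemon.
import docker

client = docker.from_env()

for ref in ["ubuntu:18.04", "alpine:3.8"]:
    image = client.images.pull(ref)
    print(f"{ref}: {image.attrs['Size'] / 1e6:.1f} MB")
```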

Canonical quotes Paul Nash, Group Product Manager for Google Cloud Platform, in its press release. The image is being made available initially for Amazon EC2, Google Compute Engine, LXD, and KVM/OpenStack. The kernel has been optimized for each deployment, so the downloadable image is optimized for KVM and slightly different from the AWS or GCP versions.

Amazon offering Linux desktops as a service in WorkSpaces

Amazon Web Services now offers Linux desktops as part of its WorkSpaces desktop-as-a-service offering.

The distribution is called Amazon Linux 2 and includes the MATE desktop environment.

[Screenshot: an Amazon Linux 2 WorkSpace running the MATE desktop]

Most virtual desktops run Windows, because most of the applications people want to run from virtual desktops are Windows applications. A virtual desktop plugs the gap between what you can do on the device you have in front of you (whether a laptop, Chromebook, iPad or whatever) and what you can do in an office with a desktop PC.

It seems that Amazon has developers in mind to some extent. Evangelist Jeff Barr (from whom I have borrowed the screenshot above) notes:

The combination of Amazon Linux WorkSpaces and Amazon Linux 2 makes for a great development environment. You get all of the AWS SDKs and tools, plus developer favorites such as gcc, Mono, and Java. You can build and test applications in your Amazon Linux WorkSpace and then deploy them to Amazon Linux 2 running on-premises or in the cloud.

Still, there is nothing to stop any user running it for productivity applications; it works out a bit cheaper than the Windows equivalent since there are no Microsoft licensing costs. Ideal for frustrated Google Chromebook users who want access to a less locked-down OS.
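Provisioning is scriptable as well. A rough sketch with boto3 requests a Linux WorkSpace for a user; the directory, user name and bundle ID below are placeholders you would replace with values from your own account.

```python
# Request an Amazon Linux WorkSpace for a user via boto3.
# DirectoryId, UserName and BundleId are placeholders for real account values.
import boto3

workspaces = boto3.client("workspaces", region_name="eu-west-1")

response = workspaces.create_workspaces(
    Workspaces=[
        {
            "DirectoryId": "d-example123",
            "UserName": "jane.doe",
            "BundleId": "wsb-examplebundle",  # an Amazon Linux 2 bundle
            "WorkspaceProperties": {"RunningMode": "AUTO_STOP"},
        }
    ]
)

print("Pending:", response["PendingRequests"])
print("Failed:", response["FailedRequests"])
```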

VMware Cloud on AWS: a game changer? What about Microsoft’s Azure Stack?

The biggest announcement from VMworld in Las Vegas and then Barcelona was VMware Cloud on AWS: essentially, VMware hosts running on AWS servers.


A key point is that this really is VMware on AWS infrastructure; the release states “Run VMware software stack directly on metal, without nested virtualization”.

Why would you use this? Because it is hybrid cloud, allowing you to plan or move workloads between on-premises and public cloud infrastructure easily, using the same familiar tools (vCenter, vSphere, PowerCLI) as you do now, presuming you use VMware.

You also get low-latency connections to other AWS services, of which there are far too many to mention.

This strikes me as significant for VMware customers; and let’s not forget that the company dominates virtualisation in business computing.

Why would you not use VMware Cloud on AWS? Price is one consideration. Each host has 2 CPUs, 36 cores, 512GB RAM, 10.71TB local flash storage. You need a minimum of 4 hosts. Each host costs from $4.1616 to $8.3681 per hour, with the lowest price if you pay up front for a 3-year subscription (a substantial investment).

Price comparisons are always difficult. A big VM of a similar spec to one of these hosts will likely cost less. Maybe the best comparison is an EC2 Dedicated Host (where you buy a host on which you can run up VM instances without extra charge). An i3 dedicated host has 2 sockets and 36 cores, similar to a VMware host. It can run 16 xlarge VMs, each with 950GB SSD storage. Cost is from $2.323 to $5.491 per hour. Again, the lowest cost is for a 3 year subscription with payment upfront.

I may have this hasty calculation wrong, but there is clearly a premium to pay for VMware; then again, customers are used to that. The way the setup is designed (a 4-host cluster minimum) also makes it hard to be as flexible with costs as you can be when running up individual VMs.
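For a rough sense of scale, here is my back-of-envelope arithmetic using the hourly list prices above and the 4-host minimum; treat the output as illustrative only, since real bills depend on region, upfront payments and storage.

```python
# Back-of-envelope annual cost comparison using the hourly list prices quoted above.
HOURS_PER_YEAR = 24 * 365

vmware_on_demand = 8.3681  # per VMware Cloud on AWS host per hour
vmware_3yr = 4.1616        # per host per hour, 3-year upfront
i3_on_demand = 5.491       # per EC2 i3 dedicated host per hour
i3_3yr = 2.323             # per host per hour, 3-year upfront

for label, hourly in [
    ("VMware Cloud on AWS, on demand (4 hosts)", vmware_on_demand * 4),
    ("VMware Cloud on AWS, 3-year (4 hosts)", vmware_3yr * 4),
    ("EC2 i3 dedicated hosts, on demand (4 hosts)", i3_on_demand * 4),
    ("EC2 i3 dedicated hosts, 3-year (4 hosts)", i3_3yr * 4),
]:
    print(f"{label}: ${hourly * HOURS_PER_YEAR:,.0f} per year")
```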

A few more observations. EC2 is the native citizen of AWS. By going for VMware on AWS instead of EC2 you are interposing a third party between you and AWS which intuitively seems to me a compromise. What you are getting though is smoother hybrid cloud which is no small thing.

What about Microsoft, previously the king of hybrid cloud? Microsoft’s hypervisor is Hyper-V and while there are a few features in VMware ESXi that Hyper-V lacks, they are not all that significant in my opinion. As a hypervisor, Hyper-V is solid. The pain points with Microsoft’s solution though are Cluster Shared Volumes, for high availability Hyper-V deployments, and System Center Virtual Machine Manager; VMware has better tools. There is a reason Azure uses Hyper-V but not SCVMM.

Hyper-V will always be cheaper than VMware (other than for small, free deployments) because it is a feature of Windows and not an add-on. Windows Server licenses are not cheap at all but that is another matter, and you have to suffer these anyway if you run Windows on VMware.

Thus far, Hyper-V has not been all that attractive to VMware shops, not only because of the cost of changing course, but also because of the shortcomings mentioned above.

Microsoft’s own game-changer here is Azure Stack, pre-packaged hardware which uses Azure rather than System Center technology, relieving admins of the burden of managing Cluster Shared Volumes and so forth. It is a great solution for hybrid since it really is the same as Microsoft’s public cloud (albeit with some missing features, and some lag in implementing features that arrive first in the public version).

Azure Stack, like VMware on AWS, is new. Further, there is much more friction in migrating an existing datacenter to use Azure Stack than in extending an existing VMware operation to use VMware Cloud on AWS.

But there is more. Is cloud computing really about running up VMs and moving them about? Arguably, not. Containers are another approach with some obvious advantages. Serverless is a big deal, and abstracts away both VMs and containers. Further, as you shift the balance of applications away from code you write and more towards use of cloud services (database, ML, BI, queuing and so on), the importance of VMs and containers lessens.

Azure Stack has an advantage here, since it gives an on-premises implementation of some Azure services, though far short of what is in Microsoft’s cloud. And VMware, of course, is not just about VMs.

Overall it seems to me that while VMware Cloud on AWS is great for VMware customers migrating towards hybrid cloud, it is unlikely to be optimal, either for cost or features, especially when you take a long view.

It remains a smart move and one that I would expect to have a rapid and significant take-up.

Amazon Web Services opens London data centers

Amazon Web Services (AWS) has opened a London Region, fulfilling its promise to open data centers in the UK, and joining Microsoft Azure which opened UK data centers in September 2016.

This is the third AWS European region, joining Ireland and Germany, and a fourth region, in France, has also been announced.

A region is not a single data center; it comprises at least two “Availability Zones”, each of which is a separate data center.
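The London region’s identifier is eu-west-2; a quick check with boto3 (assuming configured AWS credentials) lists its Availability Zones:

```python
# List the Availability Zones in the London region (eu-west-2) using boto3.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-2")

for zone in ec2.describe_availability_zones()["AvailabilityZones"]:
    print(zone["ZoneName"], zone["State"])
```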

AWS Summit London 2016: no news but strong content, and a little bit of Echo

I attended day two (the developer day) of the Amazon Web Services Summit at the ExCel conference centre in London yesterday. A few quick observations.

It was a big event. I am not sure how many attended but heard “10,000” being muttered. I was there last year as well, and the growth was obvious. The exhibition has spilled out of its space to occupy part of an upper mezzanine floor as well. The main auditorium was packed.


Amazon does not normally announce much news at these events, and this one conformed to the pattern. It is a secretive company when it comes to future plans. The closest thing to news was when AWS UK and Ireland MD Gavin Jackson said that Amazon will go ahead with its UK region despite the referendum on leaving the EU.

CTO Dr Werner Vogels gave a keynote. It was mostly marketing, which disappointed me, since Vogels is a technical guy with lots he could have said about AWS technology, but hey, this was a free event so what do you expect? That said, the latter part of the keynote was more interesting, when he talked about different models of cloud computing, and I will be writing this up for the Register shortly.

Otherwise this was a good example of a vendor technical conference, with plenty of how-to sessions that would be helpful to anyone getting started with AWS. The level of the sessions I attended was fairly high, even the ones described as “deep dive”, but you could always approach the speaker afterwards with your trickier issues. The event was just as good as some others for which you have to pay a fee.

The sessions I attended on DevOps, containers, microservices, and AWS Lambda (serverless computing) were all packed, with containers perhaps drawing the biggest crowd.

At the end of the day I went to a smaller session on programming for Amazon Echo, the home voice control device which you cannot get in the UK. The speaker refused to be drawn on when we might get it, but I suppose the fact that Amazon ran the session suggests that it will appear in the not too distant future. I found this session thought-provoking. It was all about how to register a keyword with Amazon so that when a user says “Alexa, what’s new with [mystuff]” then the mystuff service will be invoked. Amazon’s service will send your service the keywords (defined by you) that it detects in the question or interaction and you send back a response. The trigger word – called the Invocation Name – has to be registered with Amazon and I imagine there could be big competition for valuable ones. It is all rather limited at the moment; you cannot create a commercial service, for example, not even for ordering pizzas. Check out the Alexa Skills Kit for more.
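To give a flavour of what a skill handler looks like, here is a minimal sketch in the JSON request/response shape the Alexa service uses; the “WhatsNewIntent” name and the behaviour are invented for illustration and are not from the session.

```python
# Minimal sketch of an Alexa custom skill handler (e.g. running on AWS Lambda).
# "WhatsNewIntent" and the skill itself are hypothetical examples.
def lambda_handler(event, context):
    request = event.get("request", {})

    if request.get("type") == "IntentRequest" and \
            request["intent"]["name"] == "WhatsNewIntent":
        speech = "Here is what's new with my stuff today."
    else:
        speech = "Welcome. Ask me what's new."

    # Response in the standard Alexa skill format.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```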

Presuming commercial usage does come, there are some interesting issues around identity, authentication, and preventing unauthorised or inappropriate use. Echo does allow ordering from Amazon, and you can optionally set a voice PIN, but I would have thought a voice PIN is not much use if you want to stop children ordering stuff, for example, since they will hear it. If you watch your email, you will see the confirming email from Amazon and can quickly cancel if there is a problem. The security here seems weak though; it would be better to have an approval text sent to a mobile, for example, so that there is some real control.

Overall, AWS is still on a roll and I did not hear a single thing about security concerns or the risks of putting all your eggs in Amazon’s basket. I wonder if fears have gone from being overblown to under-recognised. In the end these considerations are not quantifiable, which makes the risks hard to assess.

I could not help but contrast this AWS event to one I attended on Microsoft Azure last month. AzureCraft benefited from the presence of corporate VP Scott Guthrie but it was a tiny event in comparison to Amazon’s effort. If Microsoft is serious about competing with AWS it needs to rethink its events and put them on directly rather than working through user groups that have a narrow membership (AzureCraft was put on by the UK Azure User Group).

AWS Summit London: cloud growth, understanding Lambda, Machine Learning

I attended the Amazon Web Services (AWS) London Summit. Not much news there, since the big announcements were the week before in San Francisco, but a chance to drill into some of the AWS services and keep up to date with the platform.


The keynote by CTO Werner Vogels was rather heavy on promotion for my taste, but I am interested in the idea he put forward that cloud computing will gradually take over from on-premises and that more and more organisations will go “all in” on Amazon’s cloud. He cited some examples (Netflix, Intuit, Tibco, Splunk) though I am not quite clear whether these companies have 100% of their internal IT systems on AWS, or merely that they run the entirety of their services (their product) on AWS. The general argument is compelling, especially when you consider the number of services now on offer from AWS and the difficulty of replicating them on-premises (I wrote this up briefly on the Reg). I don’t swallow it wholesale though; you have to look at the costs carefully, and even more than security, the loss of control when you base your IT infrastructure on a public cloud provider is a negative factor.

As it happens, the ticket systems for my train into London were down that morning, which meant that purchasers of advance tickets online could not collect their tickets.


The consequences of this outage were not too serious, in that the trains still ran, but of course there were plenty of people travelling without tickets (I was one of them) and ticket checking was much reduced. I am not suggesting that this service runs on AWS (I have no idea) but it did get me thinking about the impact on business when applications fail; and that led me to the question: what are the long-term implications of our IT systems and even our economy becoming increasingly dependent on a (very) small number of companies for their health? It seems to me that the risks are difficult to assess, no matter how much respect we have for the AWS engineers.

I enjoyed the technical sessions more than the keynote. I attended Dean Bryen’s session on AWS Lambda, “Event-driven code in the cloud”, where I discovered that the scope of Lambda is greater than I had previously realised. Lambda lets you write code that runs in response to events, but what is also interesting is that it is a platform as a service offering, where you simply supply the code and AWS runs it for you:

AWS Lambda runs your custom code on a high-availability compute infrastructure and administers all of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, code, and security patches.

This is a different model than running applications in EC2 (Elastic Compute Cloud) VMs or even in Docker containers, which are also VM based. Of course we know that Lambda ultimately runs in VMs as well, but these details are abstracted away and scaling is automatic, which arguably is a better model for cloud computing. Azure Cloud Services or Heroku apps are somewhat like this, but neither is very pure; with Azure Cloud Services you still have to worry about how many VMs you are using, and with Heroku you have to think about dynos (app containers). Google App Engine is another example and autoscales, though you are charged by application instance count so you still have to think in those terms. With Lambda you are charged based on the number of requests, the duration of your code, and the amount of memory allocated, making it perhaps the best abstracted of all these PaaS examples.

But Lambda is just for event-handling, right? Not quite; it now supports synchronous as well as asynchronous event handling and you could create large applications on the service if you chose. It is well suited to services for mobile applications, for example. Java support is on the way, as an alternative to the existing Node.js support. I will be interested to see how this evolves.
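To illustrate the model: a Lambda function is essentially just a handler that receives an event and returns a result, with AWS provisioning and scaling everything around it. Here is a sketch of a handler for S3 object-created events, written in Python for brevity (Node.js was the shipping runtime at the time, with Java announced); the processing step is invented.

```python
# Sketch of an AWS Lambda handler reacting to S3 "object created" events.
# The "processing" here is illustrative only.
import urllib.parse

def lambda_handler(event, context):
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # Real code might resize an image, index a document, and so on.
        results.append(f"new object: s3://{bucket}/{key}")
    return {"processed": results}
```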

I also went along to Carlos Conde’s session on Amazon Machine Learning (one instance in which AWS has trailed Microsoft Azure, which already has a machine learning service). Machine learning is not that easy to explain in simple terms, but I thought Conde did a great job. He showed us a spreadsheet which was a simple database of contacts with fields for age, income, location, job and so on. There was also a Boolean field for whether they had purchased a certain financial product after it had been offered to them. The idea was to feed this spreadsheet to the machine learning service, and then to upload a similar table but of different contacts and without the last field. The job of the service was to predict whether or not each contact listed would purchase the product. The service returned results with this field populated along with a confidence indicator. A simple example with obvious practical benefit, presuming of course that the prediction has reasonable accuracy.
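Amazon’s service hides the model-building details, but the underlying idea is ordinary supervised learning on a table. For readers who want to see the shape of it, here is the same exercise sketched with scikit-learn on an invented contacts table, rather than with Amazon Machine Learning itself.

```python
# The same idea as the Amazon ML demo, sketched with scikit-learn on invented data:
# train on contacts labelled with whether they bought the product, then predict
# purchase probability for new, unlabelled contacts.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

train = pd.DataFrame({
    "age":       [25, 47, 39, 58, 33, 61],
    "income":    [28000, 72000, 51000, 90000, 40000, 65000],
    "location":  ["London", "Leeds", "London", "Bristol", "Leeds", "Bristol"],
    "purchased": [0, 1, 0, 1, 0, 1],
})

model = Pipeline([
    ("encode", ColumnTransformer(
        [("loc", OneHotEncoder(), ["location"])], remainder="passthrough")),
    ("classify", LogisticRegression(max_iter=1000)),
])
model.fit(train[["age", "income", "location"]], train["purchased"])

new_contacts = pd.DataFrame({
    "age": [30, 55], "income": [45000, 80000], "location": ["London", "Bristol"],
})
# Probability that each new contact would purchase, plus the prediction itself.
print(model.predict_proba(new_contacts)[:, 1])
print(model.predict(new_contacts))
```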

So that was 2014: Samsung stumbles, all change for Microsoft, Sony hack, more cloud, more mobile

What happened in 2014? One thing I did not predict was that Samsung would lose its momentum. Here are Gartner’s figures for global smartphone sales by vendor, for the third quarter of 2014:

[Chart: Gartner, worldwide smartphone sales by vendor, third quarter 2014]

Samsung is still huge, of course. But in 2013, Samsung seemed to be in such control of its premium brand that it could shape Android as it wished, rather than being merely an OEM for Google’s operating system. In the enterprise, Samsung KNOX held promise as a way to bring security and manageability to Android, but only in Samsung’s flavour. Today, that seems less likely. Market share is declining, and much of KNOX has been rolled into Android Lollipop. What is going wrong? The difficulty for Samsung is how to differentiate its products sufficiently, to avoid bleeding market share to keenly priced competition from vendors such as Xiaomi and Huawei. This is difficult if you do not control the operating system.

What of the overall mobile OS wars? 2014 brought few surprises: the Apple/Android duopoly continued, Blackberry further diminished its share, and Windows Phone struggled on, though it was not looking good for Microsoft’s OS as the year closed; the Nokia acquisition may have been fumbled.

All change at Microsoft

That brings me to Microsoft, a company I watch closely. 2014 saw Satya Nadella appointed as CEO and several strategic changes, though the extent to which Nadella introduced those changes is uncertain. What changes?

Office is going truly cross-platform, with first-class support for iOS and Android. I covered this recently on the Register; the summary is that there will be mobile versions of Office for iOS, Android and Windows (this last a Store app) with similar features, and that more and more of the functionality of desktop Office will turn up in the mobile versions. I learned from my interview with Technical Product Manager Kaberi Chowdhury that ODF (Open Document) support is planned, as is some level of programmability.

The plans for Office are a clue to the company’s wider strategy, which is focused on cloud and server. Key products include Office 365, Windows Azure, Active Directory (and Azure Active Directory), SQL Server, SharePoint, and System Center as a management tool for hybrid cloud.

The Windows client strategy is to bring back users who disliked Windows 8 with a renewed focus on the desktop in the forthcoming Windows 10, while retaining the Store app model for apps that are secure, touch-friendly, and easily deployed. It is still not clear what Windows 10 phones and tablets will look like, but we can expect convergence; no more Windows RT, but perhaps tablets running Windows Phone OS that are in effect the next generation of Windows RT without a desktop personality.

The company will also hedge its bets with full app support for Office and its cloud services on iOS and Android, and in doing so will make its Windows mobile offerings less compelling.

Microsoft’s developer tools are changing in line with this strategy. The next generation of .NET is open source and cross-platform on the server side, for Windows, Mac and Linux. Xamarin plugs the gap for .NET on iOS and Android, while Microsoft is also adding native support (not .NET based) for cross-platform mobile in the next Visual Studio.

These are big changes to the developer stack, and Microsoft is forking .NET between the continuing Windows-only .NET Framework, and the new cross-platform .NET Core. Developers have many questions about this; see this interview on the Register for what I could glean about the current plans. Watch out for the Build conference at the end of April when the company will attempt to put it all together into a coherent whole for developers targeting either Windows 10, or cloud apps, or cloud services with cross-platform mobile clients.

This entire strategy is a logical progression from the company’s failure in mobile. Can it now succeed with client apps running on platforms controlled by its competitors? Alternatively, is there hope that Windows 10 can keep businesses hooked on Windows clients? Maybe 2015 will bring some answers, though with Windows 10 not expected until towards the end of the year there will be a long wait while iOS, Android and even Chrome OS (the operating system of Chromebook) continue to build.

A side effect is that C# now has a better chance of building a cross-platform user base, rather than being a Windows language. This has already happened in game development, thanks to the use of Mono and C# in the popular Unity game engine. Could it also happen with ASP.NET, deployed to Linux servers, now that this will be officially supported? Or is there little room for it alongside Java, PHP, Ruby, Node.js and the rest? 

The puzzle with Microsoft is that there is still too much mediocrity and complacency that damages the company’s offerings. How can it expect to succeed in the crowded wearable market with a band that is uncomfortable to wear? There is still an attitude in some parts of the company that the world will be happy to put up with problems that might be fixed in a future version after some long interval. Then again, the Azure team is doing great things and Windows server continues to impress. Win or lose, there will be plenty of Microsoft news this year.

A theme for 2015: cloud optimization

Late last year I attended Amazon’s re:Invent conference in Las Vegas; I wrote this up here. The key announcement for me was Amazon Aurora, a MySQL clone, not so much because of its merits as a cloud database server, but more because it represents a new breed of applications that are designed for the cloud. If you design database storage with the knowledge that it will only ever run on a huge cloud-scale infrastructure, you can make optimizations that cannot be replicated on smaller systems. I tried to summarize what this means in another Register piece here. The fact that this type of technology can be rented by any of us at commodity prices increases the advantage of public cloud, despite reservations that many still have concerning security and control. It also poses a challenge for companies like Oracle and Microsoft whose technology is designed for on-premises as well as cloud deployment; they cannot achieve the same advantage unless they fork their products, creating cloud variants that use different architecture.

The Sony hack

The cyber invasion of Sony Pictures in late November was not just another hack; it was a comprehensive takedown in which (as far as I can tell) the company’s entire IT estate was compromised and significantly damaged.

According to this report:

Mountains of documents had been stolen, internal data centers had been wiped clean, and 75 percent of the servers had been destroyed.

Most IT admins worry about disaster recovery (what to do after catastrophic system failure such as a fire in your data center) as well as about security (what to do if hackers gain access to sensitive information). In this case, both seemed to happen simultaneously. Further, as producing movies is in effect a digital business, the business suffered the loss of some of its actual products, such as the unreleased “Annie”.

The incident is fascinating in itself, especially as we do not know the identity of the hackers or their purpose, but what interests me more are the implications.

Specifically, how many companies are equally at risk? It seems clear that Sony’s security was towards the weak end of the scale, but there is plenty of weak security out there, especially but not exclusively in smaller businesses.

With the outcome of the Sony hack so spectacular, it is likely that there will be similar efforts in 2015, as well as many businesses looking nervously at their own practices and wondering what they can do to protect themselves.

Cloud may be part of the answer, though even if the cloud provider gets security right, that is no guarantee that its customers will do the same.

Looking back on looking back

Here is what I wrote a year or so ago: Reflecting on 2013 – the year of not the PC, no privacy, and the Internet of Things. Most of it still applies. I have not achieved any of the three goals I set for myself though. Maybe this year…

Quick reflections on Amazon re:Invent, open source, and Amazon Web Services

Last week I was in Las Vegas for my first visit to Amazon’s annual developer conference re:Invent. There were several announcements, the biggest being a new relational database service called RDS Aurora – a drop-in replacement for MySQL but with 3x write performance and 5x read performance as well as resiliency benefits – and EC2 Container Service, for deploying and managing Docker app containers. There is also AWS Lambda, a service which runs code in response to events.

You could read this news anywhere, but the advantage of being in Vegas was to immerse myself in the AWS culture and get to know the company better. Amazon is both distinctive and disruptive, and three things that its retail operation and its web services have in common are large scale, commodity pricing, and customer focus.

Customer focus? Every company I have ever spoken to says it is customer focused, so what is different? Well, part of the press training at Amazon seems to be that when you ask about its future plans, the invariable answer is “what customers demand.” No doubt if you could eavesdrop at an Amazon executive meeting you would find that this is not entirely true, that there are matters of strategy and profitability which come into play, but this is the story the company wants us to hear. It also chimes with that of the retail operation, where customer service is generally excellent; the company would rather risk giving a refund or replacement to an undeserving customer and annoy its suppliers than vice versa. In the context of AWS this means something a bit different, but it does seem to me part of the company culture. “If enough customers keep asking for something, it’s very likely that we will respond to that,” marketing executive Paul Duffy told me.

That said, I would not describe Amazon as an especially open company, which is one reason I was glad to attend re:Invent. I was intrigued for example that Aurora is a drop-in replacement for an open source product, and wondered if it actually uses any of the MySQL code, though it seems unlikely since MySQL’s GPL license would require Amazon to publish its own code if it used any MySQL code; that said, the InnoDB storage engine code at least used to be available under a dual license so it is possible. When I asked Duffy though he said:

We don’t … at that level, that’s why we say it is compatible with MySQL. If you run the MySQL compatibility tool that will all check out. We don’t disclose anything about the inner workings of the service.

This of course touches on the issue of whether Amazon takes more from the open source community than it gives back.

Senior VP of AWS Andy Jassy

Someone asked Senior VP of AWS Andy Jassy, “what is your strategy of contributing to the open source ecosystem”, to which he replied:

We contribute to the open source ecosystem for many years. Xen, MySQL space, Linux space, we’re very active contributors, and will continue to do so in future.

That was it, that was the whole answer. Aurora, despite Duffy’s reticence, seems to be a completely new implementation of the MySQL API and builds on its success and popularity; could Amazon do more to share some of its breakthroughs with the open source community from which MySQL came? I think that is arguable; but Amazon is hard to hate since it tends to price so competitively.

Is Amazon worried about competition from Microsoft, Google, IBM or other cloud providers? I heard this question asked on several occasions, and the answer was generally along the lines that AWS is too busy to think about it. Again this is perhaps not the whole story, but it is true that AWS is growing fast and dominates the market to the extent that, say, Azure’s growth does not keep it awake at night. That said, you cannot accuse Amazon of complacency since it is adding new services and features at a high rate; 449 so far in 2014 according to VP and Distinguished Engineer James Hamilton, who also mentioned 99% usage growth in EC2 year on year, over 1,000,000 active customers, and 132% data transfer growth in the S3 storage service.

Cloud thinking

Hamilton’s session on AWS Innovation at Scale was among the most compelling of those I attended. His theme was that cloud computing is not just a bunch of hosted servers and services, but a new model of computing that enables new and better ways to run applications that are fast, resilient and scalable. Aurora is actually an example of this. Amazon has separated the storage engine from the relational engine, he explained, so that only deltas (the bits that have changed) are passed down for storage. The data is replicated 6 times across three Amazon availability zones, making it exceptionally resilient. You could not implement Aurora on-premises; only a cloud provider with huge scale can do it, according to Hamilton.
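None of that internal architecture is visible to clients: the drop-in claim means a standard MySQL driver simply connects to the Aurora endpoint as if it were MySQL. A sketch with PyMySQL, where the endpoint, credentials and database name are placeholders:

```python
# Connect to an Aurora cluster exactly as you would to MySQL.
# Endpoint, credentials and database name below are placeholders.
import pymysql

conn = pymysql.connect(
    host="mycluster.cluster-abc123.us-east-1.rds.amazonaws.com",
    user="admin",
    password="secret",
    database="orders",
)

with conn.cursor() as cur:
    cur.execute("SELECT VERSION()")
    print(cur.fetchone())  # Reports a MySQL-compatible version string

conn.close()
```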

Distinguished Engineer James Hamilton

Hamilton was fascinating on the subject of networking gear – the cards, switches and routers that push bits across the network. Five years ago Amazon decided to build its own, partly because it considered the commercial products to be too expensive. Amazon developed its own custom network protocol stack. It worked out a lot cheaper, he said, since “even the support contract for networking gear was running into 10s of millions of dollars.” The company also found that reliability increased. Why was that? Hamilton quipped about how enterprise networking products evolve:

Enterprise customers give lots of complicated requirements to networking equipment producers who aggregate all these complicated requirements into 10s of billions of lines of code that can’t be maintained and that’s what gets delivered.

Amazon knew its own requirements and built for those alone. “Our gear is more reliable because we took on an easier problem,” he said.

AWS is also in a great position to analyse performance. It runs so much kit that it can see patterns of failure and where the bottlenecks lie. “We love metrics,” he said. There is an analogy with the way the popularity of Google search improves Google search; it is a virtuous circle that is hard for competitors to replicate.

Closing reflections

Like all vendor-specific conferences there was more marketing than I would have liked at re:Invent, but there is no doubting the excellence of the platform and its power to disrupt. There are aspects of public cloud that remain unsettling; things can go wrong and there will be nothing you can do but wait for them to be fixed. The benefits though are so great that it is worth the risk – though I would always advocate having some sort of plan B, whether off-cloud or backed up with another cloud provider, if that is feasible.