Tag Archives: amazon

Google Assistant was all over IFA in Berlin. What are the implications?

Last week I attended IFA in Berlin, perhaps Europe’s biggest consumer electronics event, and was struck by the ubiquity of Google Assistant. The company spent big on promoting its digital assistant both outside and inside the venue.

[Image: Mach mal, Google; or in English, Go Google.]


On the stands and in press briefings I soon lost count of who was supporting Google’s voice assistant. A few examples:


JBL/Harman in its earbuds


Lenovo with its Home Control Solutions – Lenovo also uses its own cloud and will support Amazon Alexa


LG with audio, TV, kitchen, home automation and more


Bang & Olufsen with its smart speakers. No logo, but it is using Google Assistant both as a feature in itself (voice search and so on) and to control other audio devices.

And Sony with its TVs and more. For example, the new AF9 and ZF9 series: “Using the Google Assistant with both the AF9 and ZF9 will be even easier. Both models have built-in microphones that will free the hands; now you simply talk to the TV to find what you quickly want, or to ask the Google Assistant to play TV shows, movies, and more.”

I was only at IFA for the pre-conference press days so this is just a snapshot of what I saw; there were many more Google Assistant integrations on display, and quite a few (though not as many) for Amazon Alexa.

It is fair to say then that Google is treating this as a high priority and having considerable success in getting vendors to sign up.

What is Google Assistant?

Google Assistant really only needs three things in order to work. A microphone, to hear you. An internet connection, to send your voice input to its internet service for voice to text transcription, and then to its AI/Search service to find a suitable response. And a speaker, to output the result. You can get it as a product called Google Home but it is the software and internet service that counts.


Vendors of smart devices – anything that has an internet connection – can develop integrations so that Google Assistant can control them. So you can say, “Hey Google, turn on the living room light” and it will be so. Cool.
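
To make that concrete, here is a minimal sketch of what the vendor side can look like: a web service that answers the Assistant’s EXECUTE intent for an on/off command. This is illustrative only; the endpoint path, device IDs and the setDevicePower helper are my own hypothetical examples, not any vendor’s actual code.

```typescript
// Minimal sketch of a smart home fulfilment webhook, assuming an
// Express-style service. Device IDs and setDevicePower() are hypothetical.
import express from "express";

const app = express();
app.use(express.json());

app.post("/fulfillment", (req, res) => {
  const { requestId, inputs } = req.body;
  const input = inputs[0];

  if (input.intent === "action.devices.EXECUTE") {
    const command = input.payload.commands[0];
    const ids: string[] = command.devices.map((d: { id: string }) => d.id);
    const exec = command.execution[0];

    // "Hey Google, turn on the living room light" arrives as an OnOff command
    if (exec.command === "action.devices.commands.OnOff") {
      ids.forEach((id) => setDevicePower(id, exec.params.on));
    }

    // Report the resulting state back so the Assistant can confirm
    res.json({
      requestId,
      payload: {
        commands: [{ ids, status: "SUCCESS", states: { on: exec.params.on, online: true } }],
      },
    });
  } else {
    res.json({ requestId, payload: {} });
  }
});

// Hypothetical helper that talks to the vendor's own device cloud
function setDevicePower(deviceId: string, on: boolean): void {
  console.log(`Setting ${deviceId} power: ${on}`);
}

app.listen(8080);
```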

Amazon Alexa has similar features and this is Google’s main competition. Alexa was first and ties in well with Amazon services such as shopping and media. However Google has the advantage of its search services, its control of Android, and its extensive personal data derived from search, Android, Google Maps and location services, GMail and more. This means Google can do better AI and richer personalisation.

Natural language UI

Back in March I attended an AI Assistant Summit in London organised by Re-Work. One of the speakers was Yariv Adan, a Product Lead at Google Assistant.


I attend lots of presentations but this one made a particular impact on me. Adan believes that natural language UI is the next big technological shift. The preceding ones he identified were the Internet in the nineties and smartphones in the early years of this century. Adan envisages an era in which we no longer constantly pull out devices.

“I believe the next revolution is happening now, powered by AI. I call it the paradigm switch to natural UI. Instead of humans adapting to machines, machines adapt to humans. What we’re trying to create is we interact with machines the same way we interact with each other, in a natural way. Meaning using natural language, showing things, pointing at things, assuming context, assuming a human-like memory, expecting personality, humour, opinion, some kind of an emotional connection, empathy.

[In future] it is not the device changing, it is the device disappearing. We are not going to interact with devices any more. We are starting to interact with this AI entity, an ambient entity that exists everywhere.”

Note: If you ever read Isaac Asimov’s science fiction novels, you will recognise this as very like his Multivac computer, which hears and responds to your questions wherever you are.

“Imagine now that everything is connected, that the entity follows you. That there is no more device that you need to take out, turn on, speak to it. It’s around you, it’s on the TV, it’s in the speakers, it’s in your headphones, it’s in the watch, it’s in the auto, it’s there. Internet of things, any connected device that only has a speaker you can actually start interacting with that thing,”

said Adan.

Adan gave a number of demonstrations. Incidentally, he never uttered the words “Hey Google”. Simply, he spoke into his phone, where I presume some special version of Google Assistant was running. In particular, he was keen to show how the AI is learning about context and memory. So he asked what is the largest castle in the UK where people live. Answer: Windsor Castle. Then, Who built it? When? Is it open now? How can I get there by public transport? What about food? In each case, the Assistant answered as a human would, understanding that the topic was Windsor Castle. “I found some restaurants within 0.4 miles,” said the Assistant, betraying a touch of computer-style logic.

“Thank you, you’re awesome,” said Adan. “Not a problem,” responded the Assistant. This is an example of personality or emotion, key factors, said Adan, in making interaction natural.

Adan also talked about personalisation. “Show me my flight”. The Assistant knows he is away from home and also has access to his mailbox, from where it has parsed flight details. So it answers this generic question with specific details about tomorrow’s flight to Zurich.

“Where did I park my car?” In this case, Adan had taken a picture of his car after parking. The Assistant knew the location of the picture and was able to show both the image and its place on a map.

“I want to show how we use some of that power for the ecosystem that we have built … we’re trying to make that revolution to a place where you don’t need to think about the machine any more, where you just interact in a way that is natural. I am optimistic, I think the revolution is happening now.”

Implications and unintended consequences

An earlier speaker at the Re-Work event (sorry I forget who it was) noted that voice systems give simplified results compared to text-based searches. Often you only get one result. Back in the nineties, we used to talk about “10 blue links” as the typical result of a search. This meant that you had some sort of choice about where you clicked, and an easy way to get several different perspectives. Getting just one result is great if the answer is purely factual and is correct, but reinforces the winner-takes-all tendency. Instead of being on the first page of results, you have to be top. Or possibly pay for advertising; that aspect has not yet emerged in the voice assistant world.

If we get into the habit of shopping via voice assistants, it will be disruptive for brands. Maybe Amazon Basics will do well, if users simply say “get me some A4 paper” rather than specifying a brand. Maybe more and more decisions will be taken for you. “Get me a takeaway dinner”, perhaps, with the assistant knowing both what you like, and what you ate yesterday and the day before.

All this is speculation, but it is obvious that a shift from screens to voice for both transactions and information will have consequences for vendors and information providers; and that probably it will tend to reduce rather than increase diversity.

What about your personal data? This is a big question and one that the industry hates to talk about. I heard nothing about it at IFA. The assumption was that if you could turn on a light, or play some music, without leaving your chair, that must be a good thing. Yet, having a device or devices in your home listening to your every word (in case you might say “Hey Google”) is something that makes me uncomfortable. I do not want Google reading my emails or tracking my location, but it is becoming hard to avoid.

For most people, Google Assistant will just be a feature of their TV, or audio system, or a way to call up recipes in the kitchen.

From Google’s perspective though, it is safe to assume that the ability to collect data is a key reason for its strong promotion of Google Assistant. That data has enormous value. Targeted advertising is the start, but it also provides deep insight into how we live, trends in human behaviour, changing patterns of consumption, and much more. When things are going wrong with our health, our finances or our relationships, it is not implausible that Google may know before we do.

This is a lot of power to give a giant US corporation; and we should also note that in some scenarios, if the US government were to demand that data be handed over, a company like Google has no choice but to comply.

Personalisation can make our lives better, but also has the potential to harm us. An area of concern is that of shared risk, such as health insurance. Insurers may be reluctant to give policies to those people most likely to make a claim. Could Google’s data store somehow end up impacting our ability to insure, or its cost?

Personalisation is always a trade-off: the organisation gets my data; I get a benefit. When I shop at a supermarket this is fairly transparent. I use a loyalty card, so the shop knows what I buy; in return I get discount points and special offers.

In the case of Google Assistant it is not so transparent. The EU’s GDPR legislation has helped, giving citizens the right to access their data and the right to be forgotten. However, we are still in the era of one-sided privacy policies and in many cases the binary choice of agree, or do not use our services. This becomes a problem if the service provider has anything close to a monopoly, which is true in Google’s case. Regulation, it seems to me, is exactly the right answer to the risks inherent in putting too much power in the hands of a business entity.

For myself, I am happy to cross the room and turn on the light, and to find my flight in my calendar. The trade-off is not worth it. But if Adan’s “ambient entity” comes to pass (and it will most likely be Google’s) I am not sure of the extent to which I will have a choice.

Adan’s work is terrific and the ability for machines to converse with humans in something close to a natural way is a huge technical achievement. I have nothing but respect for him and his team. It is part of a wider picture though, about data gathering, personalisation, and control of information and transactions, and it seems to me that this deserves more attention.

AWS embraces hybrid cloud? Meet Snowball Edge

Amazon has announced Snowball Edge, an on-premises appliance that supports Amazon EC2 (Elastic Compute Cloud), AWS Lambda (“serverless” computing) and S3 (Simple Storage Service), all running locally.


Sounds like Microsoft’s Azure Stack? A bit, but the AWS appliance is tiny by comparison and therefore more limited in scope. Nevertheless, it is a big turnaround for the company, which has previously insisted that everything belongs in the cloud. One of the Snowball Edge case studies is in the same general area as one Microsoft uses for Azure Stack: ships.

The specifications are shy about revealing what is inside, but there is 100TB of storage (82TB usable) and 10Gb, 20Gb and 40Gb network connections (10GBase-T, SFP+ and QSFP+); the size is 259x671x386mm (pretty small), and power consumption is 400 watts.

Jeff Barr’s official blog post adds that there is an “Intel Xeon D processor running at 1.8 GHz, and supports any combination of instances that consume up to 24 vCPUs and 32 GiB of memory.”

You can cluster Snowball Edge appliances, though, so substantial systems are possible.

Operating systems currently supported are Ubuntu Server and CentOS 7.

Amazon’s approach is to extend its cloud to the edge rather than vice versa. You prepare your AMIs (Amazon Machine Instances) in the cloud before the appliance is shipped. The very fast networking support shows that the intent is to maintain the best possible connectivity, even though the nature of the requirement is that internet connectivity in some scenarios will be poor.
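
One practical detail: the appliance presents an S3-compatible endpoint on the local network, so existing S3 code can simply be repointed at the device. A minimal sketch, in which the endpoint address, bucket and key are my own assumptions:

```typescript
// Sketch: pointing the standard S3 client at a Snowball Edge appliance's
// local S3-compatible endpoint. IP address, bucket and key are assumptions.
import S3 from "aws-sdk/clients/s3";

const s3 = new S3({
  endpoint: "https://192.168.1.100:8443", // the appliance on the local network
  s3ForcePathStyle: true,                 // bucket name in the path, not the hostname
});

async function main(): Promise<void> {
  // Writes land on the appliance; data migrates to the cloud when it is returned
  await s3.putObject({
    Bucket: "ship-telemetry",
    Key: "2018-09-03/engine-room.json",
    Body: JSON.stringify({ rpm: 92, temperatureC: 74 }),
  }).promise();

  const obj = await s3.getObject({
    Bucket: "ship-telemetry",
    Key: "2018-09-03/engine-room.json",
  }).promise();
  console.log((obj.Body as Buffer).toString());
}

main().catch(console.error);
```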

A point to note is that whereas the documentation emphasises use cases where there are technical advantages to on-premises (or edge) computing, Barr quotes instead a customer who wanted easier management. A side effect of the cloud computing revolution is that provisioning and managing cloud infrastructure is easier than with systems (like Microsoft’s System Center) designed for on-premises infrastructure. Otherwise they would not be viable. Having tasted what is possible in the cloud, customers want the same for on-premises.

Amazon offering Linux desktops as a service in WorkSpaces

Amazon Web Services now offers Linux desktops as part of its WorkSpaces desktop-as-a-service offering.

The distribution is called Amazon Linux 2 and includes the MATE desktop environment.

[Screenshot: Amazon Linux WorkSpaces desktop, via Jeff Barr’s blog]

Most virtual desktops run Windows, because most of the applications people want to run from virtual desktops are Windows applications. A virtual desktop plugs the gap between what you can do on the device you have in front of you (whether a laptop, Chromebook, iPad or whatever) and what you can do in an office with a desktop PC.

It seems that Amazon has developers in mind to some extent. Evangelist Jeff Barr (from whom I have borrowed the screenshot above) notes:

The combination of Amazon Linux WorkSpaces and Amazon Linux 2 makes for a great development environment. You get all of the AWS SDKs and tools, plus developer favorites such as gcc, Mono, and Java. You can build and test applications in your Amazon Linux WorkSpace and then deploy them to Amazon Linux 2 running on-premises or in the cloud.

Still, there is no problem using it for productivity applications; it works out a bit cheaper than Windows thanks to the absence of Microsoft licensing costs. Ideal for frustrated Google Chromebook users who want access to a less locked-down OS.

AWS Summit London 2016: no news but strong content, and a little bit of Echo

I attended day two (the developer day) of the Amazon Web Services Summit at the ExCel conference centre in London yesterday. A few quick observations.

It was a big event. I am not sure how many attended but heard “10,000” being muttered. I was there last year as well, and the growth was obvious. The exhibition has spilled out of its space to occupy part of an upper mezzanine floor as well. The main auditorium was packed.


Amazon does not normally announce much news at these events, and this one conformed to the pattern. It is a secretive company when it comes to future plans. The closest thing to news was when AWS UK and Ireland MD Gavin Jackson said that Amazon will go ahead with its UK region despite the referendum on leaving the EU.

CTO Dr Werner Vogels gave a keynote. It was mostly marketing, which disappointed me, since Vogels is a technical guy with lots he could have said about AWS technology; but hey, this was a free event, so what do you expect? That said, the latter part of the keynote was more interesting, when he talked about different models of cloud computing; I will be writing this up for the Register shortly.

Otherwise this was a good example of a vendor technical conference, with plenty of how-to sessions that would be helpful to anyone getting started with AWS. The level of the sessions I attended was fairly high, even the ones described as “deep dive”, but you could always approach the speaker afterwards with your trickier issues. The event was just as good as some others for which you have to pay a fee.

The sessions I attended on DevOps, containers, microservices, and AWS Lambda (serverless computing) were all packed, with containers perhaps drawing the biggest crowd.

At the end of the day I went to a smaller session on programming for Amazon Echo, the home voice control device which you cannot get in the UK. The speaker refused to be drawn on when we might get it, but I suppose the fact that Amazon ran the session suggests that it will appear in the not too distant future. I found this session thought-provoking. It was all about how to register a keyword with Amazon so that when a user says “Alexa, what’s new with [mystuff]” the mystuff service will be invoked. Amazon’s service sends your service the keywords (defined by you) that it detects in the question or interaction, and you send back a response. The trigger word – called the Invocation Name – has to be registered with Amazon, and I imagine there could be big competition for valuable ones. It is all rather limited at the moment; you cannot create a commercial service, for example, not even for ordering pizzas. Check out the Alexa Skills Kit for more.
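
For a flavour of the programming model, here is a rough sketch of a skill handler written as an AWS Lambda function, showing the shape of the JSON request and response; the GetNewsIntent name is my own hypothetical example of an intent the interaction model would map utterances to.

```typescript
// Sketch of an Alexa custom skill handler as a Lambda function.
// The GetNewsIntent intent (and the "mystuff" skill) are hypothetical.
interface AlexaRequest {
  request: {
    type: "LaunchRequest" | "IntentRequest" | "SessionEndedRequest";
    intent?: { name: string; slots?: Record<string, { name: string; value?: string }> };
  };
}

interface AlexaResponse {
  version: string;
  response: {
    outputSpeech: { type: "PlainText"; text: string };
    shouldEndSession: boolean;
  };
}

// "Alexa, what's new with mystuff" routes here once the Invocation Name
// is registered and the interaction model maps the utterance to an intent
export const handler = async (event: AlexaRequest): Promise<AlexaResponse> => {
  let text = "Welcome to my stuff.";

  if (event.request.type === "IntentRequest" && event.request.intent?.name === "GetNewsIntent") {
    text = "Here is what is new with your stuff today.";
  }

  return {
    version: "1.0",
    response: {
      outputSpeech: { type: "PlainText", text },
      shouldEndSession: true,
    },
  };
};
```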

Presuming commercial usage does come, there are some interesting issues around identity, authentication, and preventing unauthorised or inappropriate use. Echo does allow ordering from Amazon, and you can optionally set a voice PIN, but a voice PIN is not much use if you want to stop children ordering stuff, for example, since they will hear it. If you watch your email, you will see the confirming message from Amazon and can quickly cancel if there is a problem. The security here seems weak though; it would be better to have an approval text sent to a mobile, for example, so that there is some real control.

Overall, AWS is still on a roll, and I did not hear a single thing about security concerns or the risks of putting all your eggs in Amazon’s basket. I wonder if fears have gone from being overblown to under-recognised? In the end these considerations are not quantifiable, which makes the risks hard to assess.

I could not help but contrast this AWS event to one I attended on Microsoft Azure last month. AzureCraft benefited from the presence of corporate VP Scott Guthrie but it was a tiny event in comparison to Amazon’s effort. If Microsoft is serious about competing with AWS it needs to rethink its events and put them on directly, rather than working through user groups that have a narrow membership (AzureCraft was put on by the UK Azure User Group).

AWS Summit London: cloud growth, understanding Lambda, Machine Learning

I attended the Amazon Web Services (AWS) London Summit. Not much news there, since the big announcements were the week before in San Francisco, but a chance to drill into some of the AWS services and keep up to date with the platform.


The keynote by CTO Werner Vogels was a bit too much relentless promotion for my taste, but I am interested in the idea he put forward that cloud computing will gradually take over from on-premises, and that more and more organisations will go “all in” on Amazon’s cloud. He cited some examples (Netflix, Intuit, Tibco, Splunk), though I am not quite clear whether these companies have 100% of their internal IT systems on AWS, or merely run the entirety of their services (their product) on AWS. The general argument is compelling, especially when you consider the number of services now on offer from AWS and the difficulty of replicating them on-premises (I wrote this up briefly on the Reg). I don’t swallow it wholesale though; you have to look at the costs carefully, and, even more than security, the loss of control when you base your IT infrastructure on a public cloud provider is a negative factor.

As it happens, the ticket systems for my train into London were down that morning, which meant that purchasers of advance tickets online could not collect their tickets.


The consequences of this outage were not too serious, in that the trains still ran, but of course there were plenty of people travelling without tickets (I was one of them) and ticket checking was much reduced. I am not suggesting that this service runs on AWS (I have no idea) but it did get me thinking about the impact on business when applications fail; and that led me to the question: what are the long-term implications of our IT systems and even our economy becoming increasingly dependent on a (very) small number of companies for their health? It seems to me that the risks are difficult to assess, no matter how much respect we have for the AWS engineers.

I enjoyed the technical sessions more than the keynote. I attended Dean Bryen’s session on AWS Lambda, “Event-driven code in the cloud”, where I discovered that the scope of Lambda is greater than I had previously realised. Lambda lets you write code that runs in response to events, but what is also interesting is that it is a platform as a service offering, where you simply supply the code and AWS runs it for you:

AWS Lambda runs your custom code on a high-availability compute infrastructure and administers all of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, code and security patches.

This is a different model than running applications in EC2 (Elastic Compute Cloud) VMs or even in Docker containers, which are also VM based. Of course we know that Lambda ultimately runs in VMs as well, but these details are abstracted away and scaling is automatic, which arguably is a better model for cloud computing. Azure Cloud Services or Heroku apps are somewhat like this, but neither is very pure; with Azure Cloud Services you still have to worry about how many VMs you are using, and with Heroku you have to think about dynos (app containers). Google App Engine is another example and autoscales, though you are charged by application instance count so you still have to think in those terms. With Lambda you are charged based on the number of requests, the duration of your code, and the amount of memory allocated, making it perhaps the best abstracted of all these PaaS examples.

But Lambda is just for event-handling, right? Not quite; it now supports synchronous as well as asynchronous event handling and you could create large applications on the service if you chose. It is well suited to services for mobile applications, for example. Java support is on the way, as an alternative to the existing Node.js support. I will be interested to see how this evolves.
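
To give a feel for the model, here is a minimal sketch of a Lambda function triggered by an S3 upload event. I have written it in TypeScript for readability (Node.js being the supported runtime), and the processed-output bucket is a hypothetical example:

```typescript
// Sketch: event-driven code on Lambda. The function fires when an object
// lands in a bucket, does some work, and stores the result elsewhere.
// The "-processed" output bucket naming is a hypothetical convention.
import S3 from "aws-sdk/clients/s3";

const s3 = new S3();

interface S3Event {
  Records: Array<{ s3: { bucket: { name: string }; object: { key: string } } }>;
}

export const handler = async (event: S3Event): Promise<void> => {
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, " "));

    // Fetch the newly uploaded object
    const obj = await s3.getObject({ Bucket: bucket, Key: key }).promise();

    // ...transform it here (thumbnail, transcode, index)...
    // No servers to provision: AWS scales the function with the event rate
    await s3.putObject({
      Bucket: `${bucket}-processed`,
      Key: key,
      Body: obj.Body as Buffer,
    }).promise();
  }
};
```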

I also went along to Carlos Conde’s session on Amazon Machine Learning (one instance in which AWS has trailed Microsoft Azure, which already has a machine learning service). Machine learning is not that easy to explain in simple terms, but I thought Conde did a great job. He showed us a spreadsheet which was a simple database of contacts with fields for age, income, location, job and so on. There was also a Boolean field for whether they had purchased a certain financial product after it had been offered to them. The idea was to feed this spreadsheet to the machine learning service, and then to upload a similar table but of different contacts and without the last field. The job of the service was to predict whether or not each contact listed would purchase the product. The service returned results with this field populated along with a confidence indicator. A simple example with obvious practical benefit, presuming of course that the prediction has reasonable accuracy.
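
The service can also be driven programmatically; a rough sketch of requesting a real-time prediction for a single contact, in which the model ID, endpoint URL and field names are my own assumptions:

```typescript
// Sketch: asking Amazon Machine Learning whether one contact is likely to
// buy. Model ID, endpoint and field names are hypothetical examples.
import MachineLearning from "aws-sdk/clients/machinelearning";

const ml = new MachineLearning({ region: "us-east-1" });

async function willTheyBuy(): Promise<void> {
  const result = await ml.predict({
    MLModelId: "ml-EXAMPLEMODELID",
    PredictEndpoint: "https://realtime.machinelearning.us-east-1.amazonaws.com",
    Record: {
      // One row, with the same fields as the training spreadsheet
      age: "42",
      income: "52000",
      location: "London",
      job: "accountant",
    },
  }).promise();

  // For a binary model: predictedLabel is "0" or "1", with confidence scores
  console.log(result.Prediction?.predictedLabel, result.Prediction?.predictedScores);
}

willTheyBuy().catch(console.error);
```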

So that was 2014: Samsung stumbles, all change for Microsoft, Sony hack, more cloud, more mobile

What happened in 2014? One thing I did not predict was Samsung losing its momentum. Here are Gartner’s figures for global smartphone sales by vendor for the third quarter of 2014:

[Table: Gartner figures for global smartphone sales by vendor, Q3 2014]

Samsung is still huge, of course. But in 2013, Samsung seemed to be in such control of its premium brand that it could shape Android as it wished, rather than being merely an OEM for Google’s operating system. In the enterprise, Samsung KNOX held promise as a way to bring security and manageability to Android, but only in Samsung’s flavour. Today, that seems less likely. Market share is declining, and much of KNOX has been rolled into Android Lollipop. What is going wrong? The difficulty for Samsung is how to differentiate its products sufficiently, to avoid bleeding market share to keenly priced competition from vendors such as Xiaomi and Huawei. This is difficult if you do not control the operating system.

What of the overall mobile OS wars? 2014 brought few surprises: the Apple/Android duopoly continued, Blackberry further diminished its share, and Windows Phone struggled on, though it was not looking good for Microsoft’s OS as the year closed; the Nokia acquisition may have been fumbled.

All change at Microsoft

That brings me to Microsoft, a company I watch closely. 2014 saw Satya Nadella appointed as CEO and several strategic changes, though the extent to which Nadella introduced those changes is uncertain. What changes?

Office is going truly cross-platform, with first-class support for iOS and Android. I covered this recently on the Register; the summary is that there will be mobile versions of Office for iOS, Android and Windows (this last a Store app) with similar features, and that more and more of the functionality of desktop Office will turn up in the mobile versions. I learned from my interview with Technical Product Manager Kaberi Chowdhury that ODF (Open Document) support is planned, as is some level of programmability.

The plans for Office are a clue to the company’s wider strategy, which is focused on cloud and server. Key products include Office 365, Windows Azure, Active Directory (and Azure Active Directory), SQL Server, SharePoint, and System Center as a management tool for hybrid cloud.

The Windows client strategy is to bring back users who disliked Windows 8 with a renewed focus on the desktop in the forthcoming Windows 10, while retaining the Store app model for apps that are secure, touch-friendly, and easily deployed. It is still not clear what Windows 10 phones and tablets will look like, but we can expect convergence; no more Windows RT, but perhaps tablets running Windows Phone OS that are in effect the next generation of Windows RT without a desktop personality.

The company will also hedge its bets with full app support for Office and its cloud services on iOS and Android, and in doing so will make its Windows mobile offerings less compelling.

Microsoft’s developer tools are changing in line with this strategy. The next generation of .NET is open source and cross-platform on the server side, for Windows, Mac and Linux. Xamarin plugs the gap for .NET on iOS and Android, while Microsoft is also adding native support (not .NET based) for cross-platform mobile in the next Visual Studio.

These are big changes to the developer stack, and Microsoft is forking .NET between the continuing Windows-only .NET Framework, and the new cross-platform .NET Core. Developers have many questions about this; see this interview on the Register for what I could glean about the current plans. Watch out for the Build conference at the end of April, when the company will attempt to put it all together into a coherent whole for developers targeting either Windows 10, or cloud apps, or cloud services with cross-platform mobile clients.

This entire strategy is a logical progression from the company’s failure in mobile. Can it now succeed with client apps running on platforms controlled by its competitors? Alternatively, is there hope that Windows 10 can keep businesses hooked on Windows clients? Maybe 2015 will bring some answers, though with Windows 10 not expected until towards the end of the year there will be a long wait while iOS, Android and even Chrome OS (the operating system of Chromebook) continue to build.

A side effect is that C# now has a better chance of building a cross-platform user base, rather than being a Windows language. This has already happened in game development, thanks to the use of Mono and C# in the popular Unity game engine. Could it also happen with ASP.NET, deployed to Linux servers, now that this will be officially supported? Or is there little room for it alongside Java, PHP, Ruby, Node.js and the rest? 

The puzzle with Microsoft is that there is still too much mediocrity and complacency that damages the company’s offerings. How can it expect to succeed in the crowded wearable market with a band that is uncomfortable to wear? There is still an attitude in some parts of the company that the world will be happy to put up with problems that might be fixed in a future version after some long interval. Then again, the Azure team is doing great things and Windows server continues to impress. Win or lose, there will be plenty of Microsoft news this year.

A theme for 2015: cloud optimization

Late last year I attended Amazon’s re:Invent conference in Las Vegas; I wrote this up here. The key announcement for me was Amazon Aurora, a MySQL clone, not so much because of its merits as a cloud database server, but more because it represents a new breed of applications that are designed for the cloud. If you design database storage with the knowledge that it will only ever run on a huge cloud-scale infrastructure, you can make optimizations that cannot be replicated on smaller systems. I tried to summarize what this means in another Register piece here. The fact that this type of technology can be rented by any of us at commodity prices increases the advantage of public cloud, despite reservations that many still have concerning security and control. It also poses a challenge for companies like Oracle and Microsoft whose technology is designed for on-premises as well as cloud deployment; they cannot achieve the same advantage unless they fork their products, creating cloud variants that use different architecture.

The Sony hack

The cyber invasion of Sony Pictures in late November was not just another hack; it was a comprehensive takedown in which (as far as I can tell) the company’s IT systems were entirely compromised and significantly damaged.

According to this report:

Mountains of documents had been stolen, internal data centers had been wiped clean, and 75 percent of the servers had been destroyed.

Most IT admins worry about disaster recovery (what to do after catastrophic system failure such as a fire in your data center) as well as about security (what to do if hackers gain access to sensitive information). In this case, both seemed to happen simultaneously. Further, as producing movies is in effect a digital business, the business suffered loss of some of its actual products, such as the unreleased “Annie”.

The incident is fascinating in itself, especially as we do not know the identity of the hackers or their purpose, but what interests me more are the implications.

Specifically, how many companies are equally at risk? It seems clear that Sony’s security was towards the weak end of the scale, but there is plenty of weak security out there, especially but not exclusively in smaller businesses.

With the outcome of the Sony hack so spectacular, it is likely that there will be similar efforts in 2015, as well as many businesses looking nervously at their own practices and wondering what they can do to protect themselves.

Cloud may be part of the answer, though even if the cloud provider gets security right, that is no guarantee that its customers will do the same.

Looking back on looking back

Here is what I wrote a year or so ago: Reflecting on 2013 – the year of not the PC, no privacy, and the Internet of Things. Most of it still applies. I have not achieved any of the three goals I set for myself though. Maybe this year…

Quick reflections on Amazon re:Invent, open source, and Amazon Web Services

Last week I was in Las Vegas for my first visit to Amazon’s annual developer conference re:Invent. There were several announcements, the biggest being a new relational database service called RDS Aurora – a drop-in replacement for MySQL but with 3x write performance and 5x read performance as well as resiliency benefits – and EC2 Container Service, for deploying and managing Docker app containers. There is also AWS Lambda, a service which runs code in response to events.

You could read this news anywhere, but the advantage of being in Vegas was to immerse myself in the AWS culture and get to know the company better. Amazon is both distinctive and disruptive, and three things that its retail operation and its web services have in common are large scale, commodity pricing, and customer focus.

Customer focus? Every company I have ever spoken to says it is customer focused, so what is different? Well, part of the press training at Amazon seems to be that when you ask about its future plans, the invariable answer is “what customers demand.” No doubt if you could eavesdrop at an Amazon executive meeting you would find that this is not entirely true, that there are matters of strategy and profitability which come into play, but this is the story the company wants us to hear. It also chimes with that of the retail operation, where customer service is generally excellent; the company would rather risk giving a refund or replacement to an undeserving customer and annoy its suppliers than vice versa. In the context of AWS this means something a bit different, but it does seem to me part of the company culture. “If enough customers keep asking for something, it’s very likely that we will respond to that,” marketing executive Paul Duffy told me.

That said, I would not describe Amazon as an especially open company, which is one reason I was glad to attend re:Invent. I was intrigued, for example, that Aurora is a drop-in replacement for an open source product, and wondered if it actually uses any of the MySQL code; taking on the obligations of MySQL’s GPL license seems an unlikely choice for Amazon, though the InnoDB storage engine code at least used to be available under a dual license, so it is possible. When I asked Duffy, though, he said:

We don’t … at that level, that’s why we say it is compatible with MySQL. If you run the MySQL compatibility tool that will all check out. We don’t disclose anything about the inner workings of the service.

This of course touches on the issue of whether Amazon takes more from the open source community than it gives back.

[Image: Senior VP of AWS Andy Jassy]

Someone asked Senior VP of AWS Andy Jassy, “what is your strategy of contributing to the open source ecosystem”, to which he replied:

We contribute to the open source ecosystem for many years. Xen, MySQL space, Linux space, we’re very active contributors, and will continue to do so in future.

That was it, that was the whole answer. Aurora, despite Duffy’s reticence, seems to be a completely new implementation of the MySQL API and builds on its success and popularity; could Amazon do more to share some of its breakthroughs with the open source community from which MySQL came? I think that is arguable; but Amazon is hard to hate since it tends to price so competitively.

Is Amazon worried about competition from Microsoft, Google, IBM or other cloud providers? I heard this question asked on several occasions, and the answer was generally along the lines that AWS is too busy to think about it. Again this is perhaps not the whole story, but it is true that AWS is growing fast and dominates the market to the extent that, say, Azure’s growth does not keep it awake at night. That said, you cannot accuse Amazon of complacency since it is adding new services and features at a high rate; 449 so far in 2014 according to VP and Distinguished Engineer James Hamilton, who also mentioned 99% usage growth in EC2 year on year, over 1,000,000 active customers, and 132% data transfer growth in the S3 storage service.

Cloud thinking

Hamilton’s session on AWS Innovation at Scale was among the most compelling of those I attended. His theme was that cloud computing is not just a bunch of hosted servers and services, but a new model of computing that enables new and better ways to run applications that are fast, resilient and scalable. Aurora is actually an example of this. Amazon has separated the storage engine from the relational engine, he explained, so that only deltas (the bits that have changed) are passed down for storage. The data is replicated 6 times across three Amazon availability zones, making it exceptionally resilient. You could not implement Aurora on-premises; only a cloud provider with huge scale can do it, according to Hamilton.

[Image: Distinguished Engineer James Hamilton]

Hamilton was fascinating on the subject of networking gear – the cards, switches and routers that push bits across the network. Five years ago Amazon decided to build its own, partly because it considered the commercial products to be too expensive. Amazon developed its own custom network protocol stack. It worked out a lot cheaper, he said, since “even the support contract for networking gear was running into 10s of millions of dollars.” The company also found that reliability increased. Why was that? Hamilton quipped about how enterprise networking products evolve:

Enterprise customers give lots of complicated requirements to networking equipment producers who aggregate all these complicated requirements into 10s of billions of lines of code that can’t be maintained and that’s what gets delivered.

Amazon knew its own requirements and built for those alone. “Our gear is more reliable because we took on an easier problem,” he said.

AWS is also in a great position to analyse performance. It runs so much kit that it can see patterns of failure and where the bottlenecks lie. “We love metrics,” he said. There is an analogy with the way the popularity of Google search improves Google search; it is a virtuous circle that is hard for competitors to replicate.

Closing reflections

Like all vendor-specific conferences there was more marketing than I would have liked at re:Invent, but there is no doubting the excellence of the platform and its power to disrupt. There are aspects of public cloud that remain unsettling; things can go wrong and there will be nothing you can do but wait for them to be fixed. The benefits though are so great that it is worth the risk – though I would always advocate having some sort of plan B, off-cloud or backed up with another cloud provider, if that is feasible.

Amazon re:Invent: new products announced including Aurora database with claimed performance 5 times that of MySQL

Amazon is holding its third re:Invent conference in Las Vegas – 13,500 attendees catching up on Amazon’s Web Services platform. In this morning’s keynote, Amazon’s Senior VP of cloud services Andy Jassy evangelised the platform and announced a number of new services which, in typical Amazon style, are now available in preview.


Amazon is well ahead of its competitors in cloud services, in terms of market share and mindshare, and Jassy had no problems reeling off impressive statistics and case studies. A slide showing that AWS is not only larger but also growing faster year-on-year than its competition prompted a small protest. Microsoft claims that Amazon understated its rate of growth:

[Chart: Microsoft’s figures disputing the AWS growth comparison]

The refrain from those who spoke on behalf of companies such as Intuit (which intends to move 100% of its applications to AWS) was that no other cloud provider could offer a realistic alternative to AWS. With the progress being made by competitors I wonder for how long this will be true – and bear in mind that this is an Amazon conference – but it testifies to the dominance that Amazon has achieved.

Jassy made a key point about security and compliance. The relative security of public cloud versus private datacenters has long been debated, initially on the assumption that computing resources you own and guard yourself must be more secure than those hosted by third parties. The counter is that few organisations can afford the level of security that big public cloud providers can achieve. Jassy’s point, though, was that AWS has now achieved so many certifications that security and compliance have become drivers towards cloud computing.

The main news though was a series of product announcements:

Aurora relational database: a MySQL compatible database as a service for which Jassy claims 5x the performance of MySQL. He says that businesses stick with commercial, proprietary database managers because open source solutions lack the performance, but that Aurora now provides a solution at a commodity price. Unfortunately Aurora is not going to help those with applications locked into Oracle, SQL Server or others. Still, 5x performance is always welcome.

CodeDeploy: apparently based on a service Amazon uses internally, this is a deployment tool for pushing out updated applications to EC2 (Elastic Compute Cloud) VMs without downtime.

CodeCommit: a source code management service for Git repositories.

CodePipeline: automate your software release by defining a workflow of tests and approvals.

Key Management Service: if you manage encrypted data you will be familiar with the hassles of managing and rotating encryption keys. Here is a service to manage that (there is a sketch of the pattern after this list).

AWS Config: A discovery service for the AWS resources you are using.

Service Catalog: a custom portal for users to browse and use AWS resources offered by an organisation.
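
As a quick illustration of the Key Management Service pattern mentioned above: encrypt against a managed master key, store only the ciphertext, decrypt on demand. The key alias here is a hypothetical example:

```typescript
// Sketch of the KMS round trip. The key alias is hypothetical; rotation
// happens behind the alias without the application changing.
import KMS from "aws-sdk/clients/kms";

const kms = new KMS({ region: "us-east-1" });

async function roundTrip(): Promise<void> {
  const encrypted = await kms.encrypt({
    KeyId: "alias/my-app-key",
    Plaintext: "account ending 4242",
  }).promise();

  // The ciphertext embeds a reference to the key used, so decrypt
  // does not need the KeyId
  const decrypted = await kms.decrypt({
    CiphertextBlob: encrypted.CiphertextBlob as Buffer,
  }).promise();

  console.log((decrypted.Plaintext as Buffer).toString());
}

roundTrip().catch(console.error);
```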

This was day one; there is another keynote tomorrow and there may be more announcements.

There is no doubting the momentum behind AWS, and according to Jassy, there is still a long way to grow. Towards the end of the keynote he talked about businesses moving entire datacenters to AWS, for example when leases expire, and in the press Q&A session later he expressed the belief that eventually few companies will operate their own datacentres; he does not see much future for private cloud – in the sense of self-managed clouds on your own infrastructure. That is of course what you would expect Amazon to say.

Partnerships are key in this industry and I was interested to note the re:Invent sponsors:

[Image: re:Invent sponsor logos]

The Diamond sponsors (who I presume have paid the most) are Accenture, Cloudnexa (AWS consultants), CSC (also consultants), Intel (I guess Amazon buys a lot of CPUs), Trend Micro and twilio (who must be doing well to be on this list).

Amazon Mobile SDK adds login, data sync, analytics for iOS and Android apps

Amazon Web Services has announced an updated AWS Mobile SDK, which provides libraries for mobile apps using Amazon’s cloud services as a back end. Version 2.0 of the SDK, supporting iOS and Android (including Amazon Fire), is now in preview, adding several new features:

Amazon Cognito lets users log in with Amazon, Facebook or Google and then synchronize data across devices. The data is limited to 20MB, stored as up to 20 datasets of key/value pairs. All data is stored as strings, though binary data can be encoded as a base64 string up to 1MB. The intent seems to be geared to things like configuration or game state data, rather than documents.

Amazon Mobile Analytics collects data on how users are engaging with your app. You can get data on metrics including daily and monthly active users, session count and average daily sessions per active user, revenue per active user, retention statistics, and custom events defined in your app.

Other services in the SDK, which were already supported in version 1.7, include push messaging for Apple, Google, Fire OS and Windows devices; Amazon S3 storage (suitable for any amount of data, unlike the Cognito sync service); the SimpleDB and DynamoDB NoSQL database services; an email service; and SQS (Simple Queue Service) messaging.

Windows Phone developers or those using cross-platform tools to build mobile apps cannot use Amazon’s mobile SDK, though all the services are published as a REST API so you could use it from languages other than Objective-C or Java by writing your own wrapper.
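
To illustrate the point, here is a rough sketch using the JavaScript SDK’s CognitoSync client, which wraps that same REST API; the identity pool, identity ID and dataset name are my own placeholders:

```typescript
// Sketch: reading and writing a key/value record in a Cognito dataset from
// JavaScript. Pool ID, identity ID and dataset name are placeholders.
import CognitoSync from "aws-sdk/clients/cognitosync";

const sync = new CognitoSync({ region: "us-east-1" });

const dataset = {
  IdentityPoolId: "us-east-1:EXAMPLE-POOL-ID",
  IdentityId: "us-east-1:EXAMPLE-IDENTITY-ID",
  DatasetName: "game-state",
};

async function saveHighScore(score: number): Promise<void> {
  // listRecords supplies the sync session token and current sync counts,
  // which let the service detect conflicting writes from other devices
  const current = await sync.listRecords(dataset).promise();
  const existing = (current.Records ?? []).find((r) => r.Key === "highScore");

  await sync.updateRecords({
    ...dataset,
    SyncSessionToken: current.SyncSessionToken as string,
    RecordPatches: [{
      Op: "replace",
      Key: "highScore",
      Value: String(score),           // all values are stored as strings
      SyncCount: existing?.SyncCount ?? 0,
    }],
  }).promise();
}

saveHighScore(9001).catch(console.error);
```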

The list of supported identity providers for Cognito is short though, with notable exclusions being Microsoft accounts and Azure Active Directory. Getting round this is harder since the federated identity services are baked into the server-side API.


Microsoft Azure: growing but still has image problems

I attended a Microsoft Cloud Day in London organised by the Azure User Group; I booked this when Technical Fellow Mark Russinovich was set to attend, but regrettably he cancelled at a late stage. I skipped the substitute keynote by UK Microsoftie Dave Coplin as I had heard the very same talk earlier this month, so arrived mid-morning at the venue in Whitechapel; not that easy to find amid the stalls of Whitechapel Market (well, not quite), but if you seek out the Whitechapel branch of the Foxcroft and Ginger cafe (not known to Here Maps on Windows Phone, incidentally) then you will find premises upstairs with logos for Barclays Accelerator and Microsoft Ventures; something to do with assisting the flow of cash from corporate giants desperate for community engagement to business start-ups desperate for cash.

Giving technical presentations is hard, and while I admired Richard Conway’s efforts at showing how, with some PowerShell, he could transform a large dataset into rows of numbers using the magic of Azure HDInsight, I didn’t think it quite worked. Beat Schwegler dived into code to explain the how and why of Azure Notification Hubs, a service which delivers push notifications to mobile apps; useful material, but it could have been compressed. Then there was Richard Astbury of software development company two10degrees, who talked about Project Orleans, high-scale applications via “an Actor Model framework of programmable in-memory objects”; we learned about grains and silos (or their software equivalents) in a session that was mostly new to me.

At the break I chatted with a somewhat bemused attendee who had come in the hope of learning about whether he should migrate some or all of his small company’s server requirements to Azure. I explained about Office 365 and Azure Active Directory which he said was more relevant to him than the intricacies of software development. It turns out that the Azure User Group is really about software development using Azure services, which is only one perspective on Microsoft’s cloud platform.

For me the most intriguing presentation was from Michael Delaney at ElevateDirect, a young business which has a web application to assist businesses in finding employees directly rather than via recruitment agencies. His company picked Amazon Web Services (AWS) over Azure two and a half years ago, but is now moving to Microsoft’s cloud.

[Image: Michael Delaney, CTO and co-founder, ElevateDirect]

Why did he pick AWS? He is not a typical Microsoft-platform person, preferring open source products including Linux, Apache Solr, Python and MySQL. When he chose AWS, Azure was not a suitable platform for a mainly Linux-based application. However, he does prefer C# to Java. According to Delaney, AWS is a Java-first platform and he found this getting in the way of development.

Azure today, says Delaney, has the first-class support for Linux that it lacked a few years back, and is a better platform for C# applications than AWS even though AWS does support Windows servers.

Migrating the application was relatively straightforward, he said, with the biggest issue being the move from Amazon S3 (Simple Storage Service) to Azure Storage, though he overcame this by abstracting the storage API behind his own wrapper code.
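
Such a wrapper need not be elaborate. Here is a hypothetical sketch of the shape, with an S3 implementation; an Azure Storage class would implement the same interface, so the rest of the application never touches either vendor’s API directly:

```typescript
// Hypothetical sketch: abstracting blob storage behind an interface so the
// application does not care whether S3 or Azure Storage sits underneath.
import S3 from "aws-sdk/clients/s3";

interface BlobStore {
  put(container: string, key: string, data: Buffer): Promise<void>;
  get(container: string, key: string): Promise<Buffer>;
}

class S3Store implements BlobStore {
  private s3 = new S3();

  async put(container: string, key: string, data: Buffer): Promise<void> {
    await this.s3.putObject({ Bucket: container, Key: key, Body: data }).promise();
  }

  async get(container: string, key: string): Promise<Buffer> {
    const obj = await this.s3.getObject({ Bucket: container, Key: key }).promise();
    return obj.Body as Buffer;
  }
}

// An AzureBlobStore implementing the same interface is swapped in at
// composition time; migration then touches one class, not the application.
```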

Azure is not all the way there though. Delaney is disappointed with the relational database options on offer, essentially SQL Server or third-party managed MySQL from ClearDB. He would like to see options for PostgreSQL and others. He would also like the open source Elasticsearch to be offered as an Azure service.

There was a panel discussion later at which the question of Azure’s market perception was discussed. Most businesses, according to one attendee, think of AWS as the only option for cloud, even if they are Microsoft-platform businesses for whom Azure might be more suitable. It is a branding problem caused by the AWS first-mover advantage and market dominance, said Microsoft’s Steve Plank.

I would add that Azure is relatively new, at least in its new incarnation offering full IaaS (infrastructure as a service). AWS is also ahead on the number and variety of services on offer, and has not really messed up, which means there is little incentive for existing users to move unless, like Delaney, they find some aspect of Microsoft’s platform (in his case C#) particularly compelling.

This leads me back to the bemused attendee. It seems to me that Azure’s biggest advantage is Azure Active Directory and seamless integration with Office 365. Having said that, it is not difficult to host an application on AWS that uses Azure Active Directory, but there may be some advantage in working with a single cloud provider (and you can expect fast low-latency networking between Azure and Office 365).