Tag Archives: aws

The AWS re:Invent 5K run 2023

Sunrise over Las Vegas – at the re:Invent 5K run 2023

It happens that, a little later in life than most, I have taken up running, and during the recent AWS re:Invent in Las Vegas I was one of 978 attendees to take part in the official event 5K run.

If there were around 50,000 at the conference, that would make nearly 2% of us – not bad considering the first coaches to the venue left our hotel at 5.15am. The idea, I guess, was that you could do the run and still make the keynote – which I did.

I would not call myself an experienced runner but I have taken part in a few races and this one seemed to have all the trimmings. The run was up and down Frank Sinatra Drive, which was closed for the event, and the start and finish was at the Michelob ULTRA Arena at Mandalay Bay. Snacks and drinks were available; there was a warm-up; there was a bag drop; there was a guy who kept up an enthusiastic commentary both for the start and the finish. The race was chip timed.

We started in three waves: fast-ish, medium, and run/walk. I started, perhaps optimistically, in the fast-ish group and did what for me was a decent time; it was a quick course, with the only real impediments being two u-turns at the ends of the loop.

Overall a lot of fun and I am grateful to the organisers for arranging it (it does seem to be a regular re:Invent feature).

Here is where it gets a bit odd though. The event is pushed quite hard; it is a big focus at the community stand outside the registration/swag hall and elsewhere at the other official re:Invent hotels. It is also a charity event, supporting the Fred Hutchinson Cancer Center. All good; but I was surprised never to be officially told my result.

I was curious about it and eventually tracked down the results – I figured that with chip timing they were probably posted somewhere – and yes, here they are. You will notice though that no names are included, only the bib numbers. If you know your bib number you can look up your time. This was mine.

It seems that AWS do not really publish the results, which would have disappointed me had I been the first finisher, who achieved an excellent time of 16:23 – well done, 1116!

I can’t pretend to understand why one would organise a chip-timed race but then not publish the results. Perhaps in the interests of inclusivity one could give people the option to remain anonymous, but for most runners the time achieved is part of the fun. I think we were meant to be emailed our results, but mine never came; and even if it had, I would still like to browse the full table and see how I did overall.

Microsoft’s strong financials, and some notes on Azure vs AWS and the risks of losing in mobile

Microsoft delivered another strong set of figures in its latest financial results, for the period April-June 2018. Total revenue of $30,085 million was up 17% year on year, and all three of the company’s segments (Office, Azure and consumer) showed strong growth.

What’s notable? Largely this is more of the same, but a few things stand out. LinkedIn revenue increased 37% year on year – an acquisition that seems to be making sense for the company. Dynamics 365 revenue grew by 65%. The Dynamics story is all about cloud synergy. As an on-premises product Dynamics CRM (the part of the suite I know best) was relatively undistinguished, but as a cloud product the seamless integration between Office 365 and Dynamics 365 (and Azure Active Directory) makes it compelling.

Windows 10 is doing OK, possibly as more businesses heave themselves off Windows 7 and buy new PCs with OEM licenses as they do.

Even areas in which Microsoft is far from dominant did well: gaming was up 39%, Surface up 25%, and search advertising up 17%.

The biggest growth in the quarter, according to the breakdown here, was in Azure, up 89%. This growth is not without pain; The Register reports capacity issues in the UK South region, for example, with users getting the message “Unfortunately, due to high demand for virtual machines in this region, we are not able to approve your quota request at this time.” You can still create VMs, but not necessarily in the region you want.

Will Microsoft outpace AWS? My take on this has not changed. AWS does very little wrong and remains the pre-eminent cloud for IaaS and many services by some distance. What AWS does not have is Office 365, or armies of Microsoft partners helping enterprise customers to shunt more and more of their IT infrastructure into Azure. Microsoft makes more money from licensing: Windows Server, SQL Server, Office 365 and Dynamics seats, and so on. AWS does more business at a lower margin. These are big differences. I see it as unlikely that Azure will overtake AWS in the provision of essential cloud services like VMs, containers and cloud storage. AWS also has a better reliability track record. However, the success of Azure means that enterprise customers no longer need to go to AWS to get the benefits of cloud. Perhaps the more interesting question is the extent to which AWS (or Google) can persuade enterprise customers to shift away from Microsoft’s high-margin applications.

Longer term, there is significant risk for the company in its retreat from mobile. We are now seeing Google work hard in the laptop market with Chromebooks alongside Android mobile. Coming sometime is Google Fuchsia which may be a single operating system for both. It is worth recalling that Microsoft built its success on winning users for its PC operating system; and that IBM lost its IT dominance by ceding this to Microsoft.

Here is the breakdown by segment, such as it is:  

Quarter ending June 30th 2018 vs quarter ending June 30th 2017, $millions

Segment                               Revenue   Change   Operating income   Change
Productivity and Business Processes     9,668   +1,140              3,466     +575
Intelligent Cloud                       9,606   +1,784              3,901     +990
More Personal Computing                10,811   +1,576              3,012     +826

The segments break down as:

Productivity and Business Processes: Office, Office 365, Dynamics 365 and on-premises Dynamics, LinkedIn

Intelligent Cloud: Server products, Azure cloud services

More Personal Computing: Consumer including Windows, Xbox; Bing search; Surface hardware
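Those segment figures also show the margin story discussed above; here is the arithmetic, taken straight from the table (revenue and operating income in $millions):

```python
# Operating margin per segment, from the quarterly table above
# (revenue, operating income) in $millions.
segments = {
    "Productivity and Business Processes": (9668, 3466),
    "Intelligent Cloud": (9606, 3901),
    "More Personal Computing": (10811, 3012),
}

for name, (revenue, op_income) in segments.items():
    margin = op_income / revenue * 100
    print(f"{name}: {margin:.1f}% operating margin")
```

Intelligent Cloud, where Azure sits alongside high-margin server licensing, comes out at around 40%, which illustrates why the licensing-versus-infrastructure comparison with AWS matters.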

AWS embraces hybrid cloud? Meet Snowball Edge

Amazon has announced Snowball Edge, an on-premises appliance that supports Amazon EC2 (Elastic Compute Cloud), AWS Lambda (“serverless” computing) and S3 (Simple Storage Service), all running locally.

image

Sounds like Microsoft’s Azure Stack? A bit, but the AWS appliance is tiny by comparison and therefore more limited in scope. Nevertheless, it is a big turnaround for a company which has previously insisted that everything belongs in the cloud. One of the Snowball Edge case studies covers the same general area as one used by Microsoft for Azure Stack: ships.

The specifications are shy about revealing what is inside, but there is 100TB of storage (82TB usable); 10Gb, 20Gb and 40Gb network connections (10GBase-T, SFP+ and QSFP+); a size of 259x671x386mm (pretty small); and power consumption of 400 watts.

Jeff Barr’s official blog post adds that there is an “Intel Xeon D processor running at 1.8 GHz”, and that the appliance “supports any combination of instances that consume up to 24 vCPUs and 32 GiB of memory.”
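Barr’s figures amount to a simple packing constraint: whatever mix of instances you run must fit within 24 vCPUs and 32 GiB. A trivial sketch of that check (the instance shapes below are made-up examples for illustration, not real Snowball Edge instance types):

```python
# Check whether a mix of instances fits the Snowball Edge budget Barr quotes:
# any combination consuming up to 24 vCPUs and 32 GiB of memory.
# The shapes below are invented for illustration, not official AWS types.
MAX_VCPUS = 24
MAX_MEM_GIB = 32

shapes = {"small": (1, 2), "medium": (2, 4), "large": (4, 8)}  # (vCPU, GiB)

def fits(mix):
    """mix: dict of shape name -> count; True if within the appliance budget."""
    vcpus = sum(shapes[s][0] * n for s, n in mix.items())
    mem = sum(shapes[s][1] * n for s, n in mix.items())
    return vcpus <= MAX_VCPUS and mem <= MAX_MEM_GIB

print(fits({"large": 4}))              # 16 vCPU, 32 GiB: fits
print(fits({"large": 4, "small": 1}))  # 17 vCPU, 34 GiB: memory exceeded
```

Notice that with these numbers memory, not CPU, is likely to be the first limit you hit.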

You can cluster Snowball Edge appliances, though, so substantial systems are possible.

Operating systems currently supported are Ubuntu Server and CentOS 7.

Amazon’s approach is to extend its cloud to the edge rather than vice versa. You prepare your AMIs (Amazon Machine Images) in the cloud before the appliance is shipped. The very fast networking support shows that the intent is to maintain the best possible connectivity, even though the nature of the requirement is that internet connectivity in some scenarios will be poor.

A point to note is that whereas the documentation emphasises use cases where there are technical advantages to on-premises (or edge) computing, Barr quotes instead a customer who wanted easier management. A side effect of the cloud computing revolution is that provisioning and managing cloud infrastructure is easier than with systems (like Microsoft’s System Center) designed for on-premises infrastructure. Otherwise they would not be viable. Having tasted what is possible in the cloud, customers want the same for on-premises.

Ubuntu goes minimal (but still much bigger than Alpine Linux), cosies up to Google Cloud Platform

Ubuntu has announced “Minimal Ubuntu”, a cut-down server image designed for containerised deployments. The Docker image for Minimal Ubuntu 18.04 is 29MB:

Editors, documentation, locales and other user-oriented features of Ubuntu Server have been removed. What remains are only the vital components of the boot sequence.  Images still contain ssh, apt and snapd so you can connect and install any package you’re missing. The unminimize tool lets you ‘rehydrate’ your image into a familiar Ubuntu server package set, suitable for command line interaction.

says Canonical.

29MB is pretty small; but not as small as Alpine Linux images, commonly used by Docker, which are nearer 5MB. Of course these image sizes soon increase when you add the applications you need.

I pulled Ubuntu 18.04 from Docker Hub and the image size is 31.26MB, so this hardly seems a breakthrough.

Canonical quotes Paul Nash, Group Product Manager for Google Cloud Platform, in its press release. The image is being made available initially for Amazon EC2, Google Compute Engine, LXD, and KVM/OpenStack. The kernel has been optimized for each deployment, so the downloadable image is optimized for KVM and slightly different from the AWS or GCP versions.

Amazon offering Linux desktops as a service in WorkSpaces

Amazon Web Services now offers Linux desktops as part of its WorkSpaces desktop-as-a-service offering.

The distribution is called Amazon Linux 2 and includes the MATE desktop environment.

image

Most virtual desktops run Windows, because most of the applications people want to run from virtual desktops are Windows applications. A virtual desktop plugs the gap between what you can do on the device you have in front of you (whether a laptop, Chromebook, iPad or whatever) and what you can do in an office with a desktop PC.

It seems that Amazon have developers in mind to some extent. Evangelist Jeff Barr (from whom I have borrowed the screenshot above) notes:

The combination of Amazon Linux WorkSpaces and Amazon Linux 2 makes for a great development environment. You get all of the AWS SDKs and tools, plus developer favorites such as gcc, Mono, and Java. You can build and test applications in your Amazon Linux WorkSpace and then deploy them to Amazon Linux 2 running on-premises or in the cloud.

Still, there is no problem using it for productivity applications for any user; it works out a bit cheaper than Windows thanks to the removal of Microsoft licensing costs. Ideal for frustrated Google Chromebook users who want access to a less locked-down OS.

VMware Cloud on AWS: a game changer? What about Microsoft’s Azure Stack?

The biggest announcement from VMworld in Las Vegas and then Barcelona was VMware Cloud on AWS; essentially, VMware hosts on AWS servers.

image

A key point is that this really is VMware on AWS infrastructure; the release states “Run VMware software stack directly on metal, without nested virtualization”.

Why would you use this? Because it is hybrid cloud, allowing you to plan or move workloads between on-premises and public cloud infrastructure easily, using the same familiar tools (vCenter, vSphere, PowerCLI) as you do now, presuming you use VMware.

You also get low-latency connections to other AWS services, of which there are far too many to mention.

This strikes me as significant for VMware customers; and let’s not forget that the company dominates virtualisation in business computing.

Why would you not use VMware Cloud on AWS? Price is one consideration. Each host has 2 CPUs, 36 cores, 512GB RAM, 10.71TB local flash storage. You need a minimum of 4 hosts. Each host costs from $4.1616 to $8.3681 per hour, with the lowest price if you pay up front for a 3-year subscription (a substantial investment).

Price comparisons are always difficult. A big VM of a similar spec to one of these hosts will likely cost less. Maybe the best comparison is an EC2 Dedicated Host (where you buy a host on which you can run up VM instances without extra charge). An i3 dedicated host has 2 sockets and 36 cores, similar to a VMware host. It can run 16 xlarge VMs, each with 950GB SSD storage. Cost is from $2.323 to $5.491 per hour. Again, the lowest cost is for a 3-year subscription with payment upfront.
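For a rough sense of scale, here is the arithmetic behind that comparison, using the per-hour prices quoted above for a like-for-like four-host cluster (a back-of-envelope sketch; real billing has more moving parts):

```python
# Back-of-envelope annual cost comparison using the per-hour figures quoted
# in the text: VMware Cloud on AWS (4-host minimum) vs 4 EC2 i3 dedicated
# hosts of similar hardware. Real bills include storage, egress, support etc.
HOURS_PER_YEAR = 24 * 365  # 8,760

vmware_low = 4 * 4.1616 * HOURS_PER_YEAR   # 3-year committed rate, per year
vmware_high = 4 * 8.3681 * HOURS_PER_YEAR  # on-demand rate, per year

ec2_low = 4 * 2.323 * HOURS_PER_YEAR
ec2_high = 4 * 5.491 * HOURS_PER_YEAR

print(f"VMware 4-host cluster: ${vmware_low:,.0f} to ${vmware_high:,.0f} per year")
print(f"EC2 dedicated hosts:   ${ec2_low:,.0f} to ${ec2_high:,.0f} per year")
print(f"VMware premium at committed rates: {vmware_low / ec2_low:.1f}x")
```

At committed rates the VMware option works out at roughly 1.8 times the cost of the comparable dedicated hosts.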

I may have this hasty calculation wrong, but there has to be a premium paid for VMware – and customers are used to that. The way the setup is designed (a 4-host cluster minimum) also makes it hard to be as flexible with costs as you can be when running up individual VMs.

A few more observations. EC2 is the native citizen of AWS. By going for VMware on AWS instead of EC2 you are interposing a third party between you and AWS, which intuitively seems to me a compromise. What you are getting, though, is smoother hybrid cloud, which is no small thing.

What about Microsoft, previously the king of hybrid cloud? Microsoft’s hypervisor is Hyper-V, and while there are a few features in VMware ESXi that Hyper-V lacks, they are not all that significant in my opinion. As a hypervisor, Hyper-V is solid. The pain points with Microsoft’s solution, though, are Cluster Shared Volumes, for high-availability Hyper-V deployments, and System Center Virtual Machine Manager; VMware has better tools. There is a reason Azure uses Hyper-V but not SCVMM.

Hyper-V will always be cheaper than VMware (other than for small, free deployments) because it is a feature of Windows and not an add-on. Windows Server licenses are not cheap at all but that is another matter, and you have to suffer these anyway if you run Windows on VMware.

Thus far, Hyper-V has not been all that attractive to VMware shops, not only because of the cost of changing course, but also because of the shortcomings mentioned above.

Microsoft’s own game-changer here is Azure Stack, pre-packaged hardware which uses Azure rather than System Center technology, relieving admins of the burden of managing Cluster Shared Volumes and so forth. It is a great solution for hybrid cloud since it really is the same as Microsoft’s public cloud (albeit with some missing features, and some lag in implementing features that come first to the public version).

Azure Stack, like VMware on AWS, is new. Further, there is much more friction in migrating an existing datacenter to use Azure Stack than in extending an existing VMware operation to use VMware Cloud on AWS.

But there is more. Is cloud computing really about running up VMs and moving them about? Arguably, not. Containers are another approach with some obvious advantages. Serverless is a big deal, and abstracts away both VMs and containers. Further, as you shift the balance of applications away from code you write and more towards use of cloud services (database, ML, BI, queuing and so on), the importance of VMs and containers lessens.

Azure Stack has an advantage here, since it gives an on-premises implementation of some Azure services, though far short of what is in Microsoft’s cloud. And VMware, of course, is not just about VMs.

Overall it seems to me that while VMware Cloud on AWS is great for VMware customers migrating towards hybrid cloud, it is unlikely to be optimal, either for cost or features, especially when you take a long view.

It remains a smart move and one that I would expect to have a rapid and significant take-up.

Amazon Web Services opens London data centers

Amazon Web Services (AWS) has opened a London Region, fulfilling its promise to open data centers in the UK, and joining Microsoft Azure which opened UK data centers in September 2016.

This is the third AWS European region, joining Ireland and Germany, and a fourth region, in France, has also been announced.

A region is not a single data center, but is comprised of at least two “Availability Zones”, each of which is a separate data center.

AWS Summit London 2016: no news but strong content, and a little bit of Echo

I attended day two (the developer day) of the Amazon Web Services Summit at the ExCel conference centre in London yesterday. A few quick observations.

It was a big event. I am not sure how many attended, but I heard “10,000” being muttered. I was there last year as well, and the growth was obvious. The exhibition had spilled out of its space to occupy part of an upper mezzanine floor, and the main auditorium was packed.

image

Amazon does not normally announce much news at these events, and this one conformed to the pattern. It is a secretive company when it comes to future plans. The closest thing to news was when AWS UK and Ireland MD Gavin Jackson said that Amazon would go ahead with its UK region despite the referendum on leaving the EU.

CTO Dr Werner Vogels gave a keynote. It was mostly marketing, which disappointed me, since Vogels is a technical guy with lots he could have said about AWS technology; but hey, this was a free event, so what do you expect? That said, the latter part of the keynote was more interesting, when he talked about different models of cloud computing, and I will be writing this up for The Register shortly.

Otherwise this was a good example of a vendor technical conference, with plenty of how-to sessions that would be helpful to anyone getting started with AWS. The level of the sessions I attended was fairly high, even the ones described as “deep dive”, but you could always approach the speaker afterwards with your trickier issues. The event was just as good as some others for which you have to pay a fee.

The sessions I attended on DevOps, containers, microservices, and AWS Lambda (serverless computing) were all packed, with containers perhaps drawing the biggest crowd.

At the end of the day I went to a smaller session on programming for Amazon Echo, the home voice control device which you cannot get in the UK. The speaker refused to be drawn on when we might get it, but I suppose the fact that Amazon ran the session suggests that it will appear in the not too distant future. I found this session thought-provoking. It was all about how to register a keyword with Amazon so that when a user says “Alexa, what’s new with [mystuff]”, the mystuff service is invoked. Amazon’s service sends your service the keywords (defined by you) that it detects in the question or interaction, and you send back a response. The trigger word – called the Invocation Name – has to be registered with Amazon, and I imagine there could be big competition for valuable ones. It is all rather limited at the moment; you cannot create a commercial service, for example, not even for ordering pizzas. Check out the Alexa Skills Kit for more.
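The request/response cycle described in the session boils down to: Amazon POSTs your service a JSON request containing the detected intent and slot values, and you return JSON with the speech to say. A minimal sketch of the response side (the intent name is my own invention, and field names should be treated as approximate rather than a definitive rendering of the Alexa Skills Kit format):

```python
# Sketch of an Alexa Skills Kit handler: Amazon sends a JSON request with the
# detected intent; you return JSON containing the reply speech. The intent
# name "WhatsNewIntent" is a hypothetical example defined by the skill author.

def handle_alexa_request(request):
    """Build a spoken response for 'Alexa, what's new with mystuff'."""
    intent = request["request"]["intent"]["name"]
    if intent == "WhatsNewIntent":
        speech = "Here is what is new with my stuff today."
    else:
        speech = "Sorry, I did not understand that."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }

reply = handle_alexa_request({"request": {"intent": {"name": "WhatsNewIntent"}}})
print(reply["response"]["outputSpeech"]["text"])
```

The interesting design point is that all the hard work – wake word, speech recognition, intent matching – happens on Amazon’s side; your service only ever sees structured JSON.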

Presuming commercial usage does come, there are some interesting issues around identity, authentication, and preventing unauthorised or inappropriate use. Echo does allow ordering from Amazon, and you can optionally set a voice PIN, but a voice PIN is not much use if you want to stop children ordering stuff, for example, since they will hear it. If you watch your email you will see the confirming message from Amazon and can quickly cancel if there is a problem. The security here seems weak, though; it would be better to have an approval text sent to a mobile, for example, so that there is some real control.

Overall, AWS is still on a roll, and I did not hear a single thing about security concerns or the risks of putting all your eggs in Amazon’s basket. I wonder if fears have gone from being overblown to under-recognised? In the end these considerations are not quantifiable, which makes the risks hard to assess.

I could not help but contrast this AWS event to one I attended on Microsoft Azure last month. AzureCraft benefited from the presence of corporate VP Scott Guthrie, but it was a tiny event in comparison to Amazon’s effort. If Microsoft is serious about competing with AWS it needs to rethink its events and put them on directly, rather than working through user groups that have a narrow membership (AzureCraft was put on by the UK Azure User Group).

AWS Summit London: cloud growth, understanding Lambda, Machine Learning

I attended the Amazon Web Services (AWS) London Summit. Not much news there, since the big announcements were the week before in San Francisco, but a chance to drill into some of the AWS services and keep up to date with the platform.

image

The keynote by CTO Werner Vogels was a bit too much relentless promotion for my taste, but I am interested in the idea he put forward that cloud computing will gradually take over from on-premises and that more and more organisations will go “all in” on Amazon’s cloud. He cited some examples (Netflix, Intuit, Tibco, Splunk), though I am not quite clear whether these companies have 100% of their internal IT systems on AWS, or merely run the entirety of their services (their product) on AWS. The general argument is compelling, especially when you consider the number of services now on offer from AWS and the difficulty of replicating them on-premises (I wrote this up briefly on the Reg). I don’t swallow it wholesale though; you have to look at the costs carefully, and even more than security, the loss of control when you base your IT infrastructure on a public cloud provider is a negative factor.

As it happens, the ticket systems for my train into London were down that morning, which meant that purchasers of advance tickets online could not collect their tickets.

image

The consequences of this outage were not too serious, in that the trains still ran, but of course there were plenty of people travelling without tickets (I was one of them) and ticket checking was much reduced. I am not suggesting that this service runs on AWS (I have no idea) but it did get me thinking about the impact on business when applications fail; and that led me to the question: what are the long-term implications of our IT systems and even our economy becoming increasingly dependent on a (very) small number of companies for their health? It seems to me that the risks are difficult to assess, no matter how much respect we have for the AWS engineers.

I enjoyed the technical sessions more than the keynote. I attended Dean Bryen’s session on AWS Lambda, “Event-driven code in the cloud”, where I discovered that the scope of Lambda is greater than I had previously realised. Lambda lets you write code that runs in response to events, but what is also interesting is that it is a platform as a service offering, where you simply supply the code and AWS runs it for you:

AWS Lambda runs your custom code on a high-availability compute infrastructure and administers all of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, code, and security patches.

This is a different model from running applications in EC2 (Elastic Compute Cloud) VMs or even in Docker containers, which are also VM-based. Of course we know that Lambda ultimately runs in VMs as well, but these details are abstracted away and scaling is automatic, which arguably is a better model for cloud computing. Azure Cloud Services or Heroku apps are somewhat like this, but neither is very pure; with Azure Cloud Services you still have to worry about how many VMs you are using, and with Heroku you have to think about dynos (app containers). Google App Engine is another example and autoscales, though you are charged by application instance count so you still have to think in those terms. With Lambda you are charged based on the number of requests, the duration of your code, and the amount of memory allocated, making it perhaps the best abstracted of all these PaaS examples.

But Lambda is just for event-handling, right? Not quite; it now supports synchronous as well as asynchronous event handling, and you could create large applications on the service if you chose. It is well suited to services for mobile applications, for example. Java support is on the way, as an alternative to the existing Node.js support. I will be interested to see how this evolves.
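The programming model itself is very small: you supply a handler function, and AWS invokes it once per event, managing everything else. A sketch of the shape (at the time of this post Lambda was Node.js-only and Python support came later, but the model is the same; the S3 event structure here follows AWS’s documented format):

```python
# The Lambda model: you supply a handler; AWS calls it once per event and
# owns all provisioning and scaling. (Lambda was Node.js-only when this was
# written; Python support arrived later, but the shape is identical.)

def lambda_handler(event, context):
    """Invoked by AWS for each event, e.g. an object landing in S3."""
    # For an S3 event, pull out which object(s) triggered the invocation.
    records = event.get("Records", [])
    keys = [r["s3"]["object"]["key"] for r in records if "s3" in r]
    return {"processed": keys}

# Locally we can simulate an invocation with a hand-built event:
fake_event = {"Records": [{"s3": {"object": {"key": "uploads/report.csv"}}}]}
print(lambda_handler(fake_event, None))
```

Everything outside the function body – servers, scaling, patching – is AWS’s problem, which is exactly the abstraction being described above.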

I also went along to Carlos Conde’s session on Amazon Machine Learning (one instance in which AWS has trailed Microsoft Azure, which already has a machine learning service). Machine learning is not that easy to explain in simple terms, but I thought Conde did a great job. He showed us a spreadsheet which was a simple database of contacts with fields for age, income, location, job and so on. There was also a Boolean field for whether they had purchased a certain financial product after it had been offered to them. The idea was to feed this spreadsheet to the machine learning service, and then to upload a similar table but of different contacts and without the last field. The job of the service was to predict whether or not each contact listed would purchase the product. The service returned results with this field populated along with a confidence indicator. A simple example with obvious practical benefit, presuming of course that the prediction has reasonable accuracy.
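Conde’s demo is a textbook binary-classification workflow: train on labelled rows, then predict the missing label, with a confidence score, for new rows. As a toy illustration of the same idea in plain Python (a deliberately crude nearest-neighbour stand-in with invented data, nothing to do with how Amazon Machine Learning actually works internally):

```python
# Toy version of Conde's demo: predict a yes/no "purchased" label for new
# contacts from labelled examples. This is a crude 3-nearest-neighbour
# stand-in for the idea, not Amazon Machine Learning's actual algorithm.

# Training rows (invented): (age, income in $k) -> purchased the product?
training = [
    ((25, 30), False),
    ((40, 80), True),
    ((55, 120), True),
    ((30, 35), False),
]

def predict(contact):
    """Return (label, confidence) for a new (age, income_k) contact."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    ranked = sorted(training, key=lambda row: dist(row[0], contact))
    nearest = [label for _, label in ranked[:3]]
    votes = nearest.count(True)
    label = votes >= 2                      # majority vote of 3 neighbours
    confidence = max(votes, 3 - votes) / 3  # how lopsided the vote was
    return label, confidence

print(predict((50, 100)))  # a well-off 50-year-old: likely a purchaser
```

The service’s confidence indicator plays the same role as the vote margin here: a prediction is only useful alongside some measure of how sure the model is.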

Quick reflections on Amazon re:Invent, open source, and Amazon Web Services

Last week I was in Las Vegas for my first visit to Amazon’s annual developer conference re:Invent. There were several announcements, the biggest being a new relational database service called RDS Aurora – a drop-in replacement for MySQL but with 3x write performance and 5x read performance as well as resiliency benefits – and EC2 Container Service, for deploying and managing Docker app containers. There is also AWS Lambda, a service which runs code in response to events.

You could read this news anywhere, but the advantage of being in Vegas was to immerse myself in the AWS culture and get to know the company better. Amazon is both distinctive and disruptive, and three things that its retail operation and its web services have in common are large scale, commodity pricing, and customer focus.

Customer focus? Every company I have ever spoken to says it is customer focused, so what is different? Well, part of the press training at Amazon seems to be that when you ask about its future plans, the invariable answer is “what customers demand.” No doubt if you could eavesdrop at an Amazon executive meeting you would find that this is not entirely true, that there are matters of strategy and profitability which come into play, but this is the story the company wants us to hear. It also chimes with that of the retail operation, where customer service is generally excellent; the company would rather risk giving a refund or replacement to an undeserving customer and annoy its suppliers than vice versa. In the context of AWS this means something a bit different, but it does seem to me part of the company culture. “If enough customers keep asking for something, it’s very likely that we will respond to that,” marketing executive Paul Duffy told me.

That said, I would not describe Amazon as an especially open company, which is one reason I was glad to attend re:Invent. I was intrigued, for example, that Aurora is a drop-in replacement for an open source product, and wondered if it actually uses any of the MySQL code – though that seems unlikely, since MySQL’s GPL license would require Amazon to publish its own code if it did; that said, the InnoDB storage engine code at least used to be available under a dual license, so it is possible. When I asked Duffy, though, he said:

We don’t … at that level, that’s why we say it is compatible with MySQL. If you run the MySQL compatibility tool that will all check out. We don’t disclose anything about the inner workings of the service.

This of course touches on the issue of whether Amazon takes more from the open source community than it gives back.

image
Senior VP of AWS Andy Jassy

Someone asked Senior VP of AWS Andy Jassy, “what is your strategy of contributing to the open source ecosystem”, to which he replied:

We contribute to the open source ecosystem for many years. Xen, MySQL space, Linux space, we’re very active contributors, and will continue to do so in future.

That was it, that was the whole answer. Aurora, despite Duffy’s reticence, seems to be a completely new implementation of the MySQL API and builds on its success and popularity; could Amazon do more to share some of its breakthroughs with the open source community from which MySQL came? I think that is arguable; but Amazon is hard to hate since it tends to price so competitively.

Is Amazon worried about competition from Microsoft, Google, IBM or other cloud providers? I heard this question asked on several occasions, and the answer was generally along the lines that AWS is too busy to think about it. Again this is perhaps not the whole story, but it is true that AWS is growing fast and dominates the market to the extent that, say, Azure’s growth does not keep it awake at night. That said, you cannot accuse Amazon of complacency since it is adding new services and features at a high rate; 449 so far in 2014 according to VP and Distinguished Engineer James Hamilton, who also mentioned 99% usage growth in EC2 year on year, over 1,000,000 active customers, and 132% data transfer growth in the S3 storage service.

Cloud thinking

Hamilton’s session on AWS Innovation at Scale was among the most compelling of those I attended. His theme was that cloud computing is not just a bunch of hosted servers and services, but a new model of computing that enables new and better ways to run applications that are fast, resilient and scalable. Aurora is actually an example of this. Amazon has separated the storage engine from the relational engine, he explained, so that only deltas (the bits that have changed) are passed down for storage. The data is replicated 6 times across three Amazon availability zones, making it exceptionally resilient. You could not implement Aurora on-premises; only a cloud provider with huge scale can do it, according to Hamilton.
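Hamilton’s 6-way replication across three Availability Zones is usually described alongside Aurora’s quorum scheme: writes must reach 4 of the 6 copies and reads 3, which is what lets the system survive the loss of an entire AZ. A sketch of why those numbers work (the 4-of-6 and 3-of-6 figures come from Amazon’s public descriptions of Aurora, not from Hamilton’s talk directly):

```python
# Aurora keeps 6 copies of each data segment, two per Availability Zone.
# Amazon's public descriptions give a 4-of-6 write quorum and 3-of-6 read
# quorum; this sketch shows what failures the cluster can ride out.
COPIES, WRITE_QUORUM, READ_QUORUM = 6, 4, 3

def survives(copies_lost):
    """Which operations still have quorum after losing this many copies?"""
    alive = COPIES - copies_lost
    return {"writes": alive >= WRITE_QUORUM, "reads": alive >= READ_QUORUM}

print(survives(2))  # a whole AZ down (2 copies lost): writes and reads OK
print(survives(3))  # AZ down plus one more copy: reads OK, writes blocked
```

So the cluster keeps writing through the loss of a whole AZ, and keeps reading (and can repair itself) even through an AZ loss plus one further failure – the kind of design Hamilton argues only makes sense at cloud scale.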

image
Distinguished Engineer James Hamilton

Hamilton was fascinating on the subject of networking gear – the cards, switches and routers that push bits across the network. Five years ago Amazon decided to build its own, partly because it considered the commercial products to be too expensive. Amazon developed its own custom network protocol stack. It worked out a lot cheaper, he said, since “even the support contract for networking gear was running into 10s of millions of dollars.” The company also found that reliability increased. Why was that? Hamilton quipped about how enterprise networking products evolve:

Enterprise customers give lots of complicated requirements to networking equipment producers who aggregate all these complicated requirements into 10s of billions of lines of code that can’t be maintained and that’s what gets delivered.

Amazon knew its own requirements and built for those alone. “Our gear is more reliable because we took on an easier problem,” he said.

AWS is also in a great position to analyse performance. It runs so much kit that it can see patterns of failure and where the bottlenecks lie. “We love metrics,” he said. There is an analogy with the way the popularity of Google search improves Google search; it is a virtuous circle that is hard for competitors to replicate.

Closing reflections

Like all vendor-specific conferences there was more marketing than I would have liked at re:Invent, but there is no doubting the excellence of the platform and its power to disrupt. There are aspects of public cloud that remain unsettling; things can go wrong and there will be nothing you can do but wait for them to be fixed. The benefits though are so great that it is worth the risk – though I would always advocate having some sort of plan B, off-cloud or backed up with another cloud provider, if that is feasible.