Tag Archives: aws

VMware Cloud on AWS: a game changer? What about Microsoft’s Azure Stack?

The biggest announcement from VMworld in Las Vegas and then Barcelona was VMware Cloud on AWS; essentially VMware hosts on AWS servers.


A key point is that this really is VMware on AWS infrastructure; the release states “Run VMware software stack directly on metal, without nested virtualization”.

Why would you use this? Because it is hybrid cloud, allowing you to plan or move workloads between on-premises and public cloud infrastructure easily, using the same familiar tools (vCenter, vSphere, PowerCLI) as you do now, presuming you use VMware.

You also get low-latency connections to other AWS services, of which there are far too many to mention.

This strikes me as significant for VMware customers; and let’s not forget that the company dominates virtualisation in business computing.

Why would you not use VMware Cloud on AWS? Price is one consideration. Each host has 2 CPUs, 36 cores, 512GB RAM, 10.71TB local flash storage. You need a minimum of 4 hosts. Each host costs from $4.1616 to $8.3681 per hour, with the lowest price if you pay up front for a 3-year subscription (a substantial investment).

Price comparisons are always difficult. A big VM of a similar spec to one of these hosts will likely cost less. Maybe the best comparison is an EC2 Dedicated Host (where you buy a host on which you can run up VM instances without extra charge). An i3 dedicated host has 2 sockets and 36 cores, similar to a VMware host. It can run 16 xlarge VMs, each with 950GB SSD storage. Cost is from $2.323 to $5.491 per hour. Again, the lowest cost is for a 3-year subscription with payment upfront.

I may have this hasty calculation wrong, but there is clearly a premium to be paid for VMware; then again, customers are used to that. The way the setup is designed (a 4-host cluster minimum) also makes it harder to be as flexible with costs as you can be when running up individual VMs.
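For anyone who wants to check the arithmetic, here is a rough back-of-envelope comparison in Python using the hourly figures quoted above. It ignores storage, data transfer, support and inevitable price changes, so treat it as a sketch rather than a quote:

    # Rough 3-year cost comparison using the hourly prices quoted above.
    HOURS_PER_YEAR = 24 * 365
    YEARS = 3

    def three_year_cost(hosts, price_per_hour):
        return hosts * price_per_hour * HOURS_PER_YEAR * YEARS

    # VMware Cloud on AWS: minimum 4-host cluster
    vmc_prepaid = three_year_cost(4, 4.1616)    # 3-year subscription paid up front
    vmc_on_demand = three_year_cost(4, 8.3681)  # on-demand hourly rate

    # EC2 i3 Dedicated Hosts: 4 hosts for a like-for-like comparison
    dh_prepaid = three_year_cost(4, 2.323)
    dh_on_demand = three_year_cost(4, 5.491)

    print(f"VMware Cloud on AWS, 3-yr prepaid: ${vmc_prepaid:,.0f}")
    print(f"VMware Cloud on AWS, on-demand:    ${vmc_on_demand:,.0f}")
    print(f"EC2 Dedicated Hosts, 3-yr prepaid: ${dh_prepaid:,.0f}")
    print(f"EC2 Dedicated Hosts, on-demand:    ${dh_on_demand:,.0f}")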

A few more observations. EC2 is the native citizen of AWS. By going for VMware on AWS instead of EC2 you are interposing a third party between you and AWS, which intuitively seems to me a compromise. What you are getting, though, is smoother hybrid cloud, which is no small thing.

What about Microsoft, previously the king of hybrid cloud? Microsoft’s hypervisor is Hyper-V and while there are a few features in VMware ESXi that Hyper-V lacks, they are not all that significant in my opinion. As a hypervisor, Hyper-V is solid. The pain points with Microsoft’s solution though are Cluster Shared Volumes, for high availability Hyper-V deployments, and System Center Virtual Machine Manager; VMware has better tools. There is a reason Azure uses Hyper-V but not SCVMM.

Hyper-V will always be cheaper than VMware (other than for small, free deployments) because it is a feature of Windows and not an add-on. Windows Server licenses are not cheap at all but that is another matter, and you have to suffer these anyway if you run Windows on VMware.

Thus far, Hyper-V has not been all that attractive to VMware shops, not only because of the cost of changing course, but also because of the shortcomings mentioned above.

Microsoft’s own game-changer here is Azure Stack, pre-packaged hardware which uses Azure rather than System Center technology, relieving admins of the burden of managing Cluster Shared Volumes and so forth. It is a great solution for hybrid since it really is the same as Microsoft’s public cloud, albeit with some missing features and a lag in implementing features that arrive in the public version.

Azure Stack, like VMware on AWS, is new. Further, there is much more friction in migrating an existing datacenter to use Azure Stack, than in extending an existing VMware operation to use VMware Cloud on AWS.

But there is more. Is cloud computing really about running up VMs and moving them about? Arguably, not. Containers are another approach with some obvious advantages. Serverless is a big deal, and abstracts away both VMs and containers. Further, as you shift the balance of applications away from code you write and more towards use of cloud services (database, ML, BI, queuing and so on), the importance of VMs and containers lessens.

Azure Stack has an advantage here, since it gives an on-premises implementation of some Azure services, though far short of what is in Microsoft’s cloud. And VMware, of course, is not just about VMs.

Overall it seems to me that while VMware Cloud on AWS is great for VMware customers migrating towards hybrid cloud, it is unlikely to be optimal, either for cost or features, especially when you take a long view.

It remains a smart move and one that I would expect to have a rapid and significant take-up.

Amazon Web Services opens London data centers

Amazon Web Services (AWS) has opened a London Region, fulfilling its promise to open data centers in the UK, and joining Microsoft Azure which opened UK data centers in September 2016.

This is the third AWS European region, joining Ireland and Germany, and a fourth region, in France, has also been announced.

A region is not a single data center, but comprises at least two “Availability Zones”, each of which is a separate data center.

AWS Summit London 2016: no news but strong content, and a little bit of Echo

I attended day two (the developer day) of the Amazon Web Services Summit at the ExCel conference centre in London yesterday. A few quick observations.

It was a big event. I am not sure how many attended but heard “10,000” being muttered. I was there last year as well, and the growth was obvious. The exhibition has spilled out of its space to occupy part of an upper mezzanine floor as well. The main auditorium was packed.


Amazon does not normally announce much news at these events, and this one conformed to the pattern. It is a secretive company when it comes to future plans. The closest thing to news was when AWS UK and Ireland MD Gavin Jackson said that Amazon will go ahead with its UK region despite the referendum on leaving the EU.

CTO Dr Werner Vogels gave a keynote. It was mostly marketing, which disappointed me, since Vogels is a technical guy with lots he could have said about AWS technology, but hey, this was a free event so what do you expect? That said, the latter part of the keynote was more interesting, when he talked about different models of cloud computing, and I will be writing this up for the Register shortly.

Otherwise this was a good example of a vendor technical conference, with plenty of how-to sessions that would be helpful to anyone getting started with AWS. The sessions I attended were pitched at a fairly high level, even the ones described as “deep dive”, but you could always approach the speaker afterwards with your trickier issues. The event was just as good as some others for which you have to pay a fee.

The sessions I attended on DevOps, containers, microservices, and AWS Lambda (serverless computing) were all packed, with containers perhaps drawing the biggest crowd.

At the end of the day I went to a smaller session on programming for Amazon Echo, the home voice control device which you cannot get in the UK. The speaker refused to be drawn on when we might get it, but I suppose the fact that Amazon ran the session suggests that it will appear in the not too distant future. I found this session thought-provoking. It was all about how to register a keyword with Amazon so that when a user says “Alexa, what’s new with [mystuff]” the mystuff service will be invoked. Amazon’s service will send your service the keywords (defined by you) that it detects in the question or interaction, and you send back a response. The trigger word – called the Invocation Name – has to be registered with Amazon, and I imagine there could be big competition for valuable ones. It is all rather limited at the moment; you cannot create a commercial service, for example, not even for ordering pizzas. Check out the Alexa Skills Kit for more.
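To make the interaction model concrete, here is a minimal sketch of the kind of handler your service exposes. It is written in Python for consistency with the other sketches in this piece (the session itself used Node.js), and the skill, intent and slot names are hypothetical; the request and response shapes follow the Alexa Skills Kit JSON format, so check the kit’s documentation for the current details:

    def handle_alexa_request(event, context):
        """Respond to 'Alexa, what's new with mystuff' style requests.

        Alexa sends a JSON request identifying the intent and any keywords
        (slots) it detected; we send back the text for Alexa to speak.
        """
        request = event.get("request", {})
        if request.get("type") == "IntentRequest":
            slots = request["intent"].get("slots", {})
            topic = slots.get("Topic", {}).get("value", "everything")
            speech = f"Here is what is new with {topic}."
        else:
            speech = "Welcome to mystuff. Ask me what is new."

        return {
            "version": "1.0",
            "response": {
                "outputSpeech": {"type": "PlainText", "text": speech},
                "shouldEndSession": True,
            },
        }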

Presuming commercial usage does come, there are some interesting issues around identity, authentication, and preventing unauthorised or inappropriate use. Echo does allow ordering from Amazon, and you can optionally set a voice PIN, but I would have thought a voice PIN is not much use if you want to stop children ordering stuff, for example, since they will hear it. If you watch your email, you will see the confirming email from Amazon and can quickly cancel if there is a problem. The security here seems weak though; it would be better to have an approval text sent to a mobile, for example, so that there is some real control.

Overall, AWS is still on a roll and I did not hear a single thing about security concerns or the risks of putting all your eggs in Amazon’s basket. I wonder if fears have gone from being overblown to under-recognised? In the end these considerations are not quantifiable, which makes the risks hard to assess.

I could not help but contrast this AWS event to one I attended on Microsoft Azure last month. AzureCraft benefited from the presence of corporate VP Scott Guthrie but it was a tiny event in comparison to Amazon’s effort. If Microsoft is serious about competing with AWS it needs to rethink its events and put them on directly rather than working through user groups that have a narrow membership (AzureCraft was put on by the UK Azure User Group).

AWS Summit London: cloud growth, understanding Lambda, Machine Learning

I attended the Amazon Web Services (AWS) London Summit. Not much news there, since the big announcements were the week before in San Francisco, but a chance to drill into some of the AWS services and keep up to date with the platform.


The keynote by CTO Werner Vogels was a bit too much relentless promotion for my taste, but I am interested in the idea he put forward that cloud computing will gradually take over from on-premises and that more and more organisations will go “all in” on Amazon’s cloud. He instanced some examples (Netflix, Intuit, Tibco, Splunk) though I am not quite clear whether these companies have 100% of their internal IT systems on AWS, or merely that they run the entirety of their services (their product) on AWS. The general argument is compelling, especially when you consider the number of services now on offer from AWS and the difficulty of replicating them on-premises (I wrote this up briefly on the Reg). I don’t swallow it wholesale though; you have to look at the costs carefully, but even more than security, the loss of control when you base your IT infrastructure on a public cloud provider is a negative factor.

As it happens, the ticket systems for my train into London were down that morning, which meant that purchasers of advance tickets online could not collect their tickets.


The consequences of this outage were not too serious, in that the trains still ran, but of course there were plenty of people travelling without tickets (I was one of them) and ticket checking was much reduced. I am not suggesting that this service runs on AWS (I have no idea) but it did get me thinking about the impact on business when applications fail; and that led me to the question: what are the long-term implications of our IT systems and even our economy becoming increasingly dependent on a (very) small number of companies for their health? It seems to me that the risks are difficult to assess, no matter how much respect we have for the AWS engineers.

I enjoyed the technical sessions more than the keynote. I attended Dean Bryen’s session on AWS Lambda, “Event-driven code in the cloud”, where I discovered that the scope of Lambda is greater than I had previously realised. Lambda lets you write code that runs in response to events, but what is also interesting is that it is a platform as a service offering, where you simply supply the code and AWS runs it for you:

AWS Lambda runs your custom code on a high-availability compute infrastructure and administers all of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, code, and security patches.

This is a different model than running applications in EC2 (Elastic Compute Cloud) VMs or even in Docker containers, which are also VM based. Of course we know that Lambda ultimately runs in VMs as well, but these details are abstracted away and scaling is automatic, which arguably is a better model for cloud computing. Azure Cloud Services or Heroku apps are somewhat like this, but neither is very pure; with Azure Cloud Services you still have to worry about how many VMs you are using, and with Heroku you have to think about dynos (app containers). Google App Engine is another example and autoscales, though you are charged by application instance count so you still have to think in those terms. With Lambda you are charged based on the number of requests, the duration of your code, and the amount of memory allocated, making it perhaps the best abstracted of all these PaaS examples.

But Lambda is just for event-handling, right? Not quite; it now supports synchronous as well as asynchronous event handling and you could create large applications on the service if you chose. It is well suited to services for mobile applications, for example. Java support is on the way, as an alternative to the existing Node.js support. I will be interested to see how this evolves.
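To give a flavour of what event-driven code looks like on the service, here is a minimal handler sketch, written in Python for consistency with the other sketches in this piece (at the time of writing Lambda offers Node.js, with Java on the way); the event shape is the standard S3 notification format and the processing step is hypothetical:

    def lambda_handler(event, context):
        """Invoked by AWS whenever a matching event occurs, for example an
        object being uploaded to an S3 bucket. There is no server to manage:
        you supply the function and Lambda provisions and scales the compute.
        """
        records = event.get("Records", [])
        for record in records:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            # Hypothetical processing step: just log what arrived.
            print(f"New object s3://{bucket}/{key}")
        return {"processed": len(records)}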

I also went along to Carlos Conde’s session on Amazon Machine Learning (one instance in which AWS has trailed Microsoft Azure, which already has a machine learning service). Machine learning is not that easy to explain in simple terms, but I thought Conde did a great job. He showed us a spreadsheet which was a simple database of contacts with fields for age, income, location, job and so on. There was also a Boolean field for whether they had purchased a certain financial product after it had been offered to them. The idea was to feed this spreadsheet to the machine learning service, and then to upload a similar table but of different contacts and without the last field. The job of the service was to predict whether or not each contact listed would purchase the product. The service returned results with this field populated along with a confidence indicator. A simple example with obvious practical benefit, presuming of course that the prediction has reasonable accuracy.
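The underlying pattern is easy to sketch outside Amazon’s service. The following is not the Amazon Machine Learning API, just an illustration of the same train-then-predict workflow using scikit-learn, with hypothetical file and column names (and only the numeric fields, for simplicity):

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Historical contacts, including whether each one purchased the product.
    train = pd.read_csv("contacts_with_outcome.csv")   # hypothetical file
    X = train[["age", "income"]]                       # hypothetical numeric fields
    y = train["purchased"]                             # the Boolean outcome column

    model = LogisticRegression()
    model.fit(X, y)

    # New contacts without the outcome field: predict it, with a confidence score.
    new = pd.read_csv("contacts_to_score.csv")
    new["purchased_prediction"] = model.predict(new[["age", "income"]])
    new["confidence"] = model.predict_proba(new[["age", "income"]]).max(axis=1)
    print(new.head())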

Quick reflections on Amazon re:Invent, open source, and Amazon Web Services

Last week I was in Las Vegas for my first visit to Amazon’s annual developer conference re:Invent. There were several announcements, the biggest being a new relational database service called RDS Aurora – a drop-in replacement for MySQL but with 3x write performance and 5x read performance as well as resiliency benefits – and EC2 Container Service, for deploying and managing Docker app containers. There is also AWS Lambda, a service which runs code in response to events.

You could read this news anywhere, but the advantage of being in Vegas was to immerse myself in the AWS culture and get to know the company better. Amazon is both distinctive and disruptive, and three things that its retail operation and its web services have in common are large scale, commodity pricing, and customer focus.

Customer focus? Every company I have ever spoken to says it is customer focused, so what is different? Well, part of the press training at Amazon seems to be that when you ask about its future plans, the invariable answer is “what customers demand.” No doubt if you could eavesdrop at an Amazon executive meeting you would find that this is not entirely true, that there are matters of strategy and profitability which come into play, but this is the story the company wants us to hear. It also chimes with that of the retail operation, where customer service is generally excellent; the company would rather risk giving a refund or replacement to an undeserving customer and annoy its suppliers than vice versa. In the context of AWS this means something a bit different, but it does seem to me part of the company culture. “If enough customers keep asking for something, it’s very likely that we will respond to that,” marketing executive Paul Duffy told me.

That said, I would not describe Amazon as an especially open company, which is one reason I was glad to attend re:Invent. I was intrigued for example that Aurora is a drop-in replacement for an open source product, and wondered if it actually uses any of the MySQL code, though it seems unlikely since MySQL’s GPL license would require Amazon to publish its own code if it used any MySQL code; that said, the InnoDB storage engine code at least used to be available under a dual license so it is possible. When I asked Duffy though he said:

We don’t … at that level, that’s why we say it is compatible with MySQL. If you run the MySQL compatibility tool that will all check out. We don’t disclose anything about the inner workings of the service.

This of course touches on the issue of whether Amazon takes more from the open source community than it gives back.

Senior VP of AWS Andy Jassy

Someone asked Senior VP of AWS Andy Jassy, “what is your strategy of contributing to the open source ecosystem”, to which he replied:

We contribute to the open source ecosystem for many years. Xen, MySQL space, Linux space, we’re very active contributors, and will continue to do so in future.

That was it, that was the whole answer. Aurora, despite Duffy’s reticence, seems to be a completely new implementation of the MySQL API and builds on its success and popularity; could Amazon do more to share some of its breakthroughs with the open source community from which MySQL came? I think that is arguable; but Amazon is hard to hate since it tends to price so competitively.

Is Amazon worried about competition from Microsoft, Google, IBM or other cloud providers? I heard this question asked on several occasions, and the answer was generally along the lines that AWS is too busy to think about it. Again this is perhaps not the whole story, but it is true that AWS is growing fast and dominates the market to the extent that, say, Azure’s growth does not keep it awake at night. That said, you cannot accuse Amazon of complacency since it is adding new services and features at a high rate; 449 so far in 2014 according to VP and Distinguished Engineer James Hamilton, who also mentioned 99% usage growth in EC2 year on year, over 1,000,000 active customers, and 132% data transfer growth in the S3 storage service.

Cloud thinking

Hamilton’s session on AWS Innovation at Scale was among the most compelling of those I attended. His theme was that cloud computing is not just a bunch of hosted servers and services, but a new model of computing that enables new and better ways to run applications that are fast, resilient and scalable. Aurora is actually an example of this. Amazon has separated the storage engine from the relational engine, he explained, so that only deltas (the bits that have changed) are passed down for storage. The data is replicated 6 times across three Amazon availability zones, making it exceptionally resilient. You could not implement Aurora on-premises; only a cloud provider with huge scale can do it, according to Hamilton.

Distinguished Engineer James Hamilton

Hamilton was fascinating on the subject of networking gear – the cards, switches and routers that push bits across the network. Five years ago Amazon decided to build its own, partly because it considered the commercial products to be too expensive. Amazon developed its own custom network protocol stack. It worked out a lot cheaper, he said, since “even the support contract for networking gear was running into 10s of millions of dollars.” The company also found that reliability increased. Why was that? Hamilton quipped about how enterprise networking products evolve:

Enterprise customers give lots of complicated requirements to networking equipment producers who aggregate all these complicated requirements into 10s of billions of lines of code that can’t be maintained and that’s what gets delivered.

Amazon knew its own requirements and built for those alone. “Our gear is more reliable because we took on an easier problem,” he said.

AWS is also in a great position to analyse performance. It runs so much kit that it can see patterns of failure and where the bottlenecks lie. “We love metrics,” he said. There is an analogy with the way the popularity of Google search improves Google search; it is a virtuous circle that is hard for competitors to replicate.

Closing reflections

Like all vendor-specific conferences there was more marketing than I would have liked at re:Invent, but there is no doubting the excellence of the platform and its power to disrupt. There are aspects of public cloud that remain unsettling; things can go wrong and there will be nothing you can do but wait for them to be fixed. The benefits though are so great that it is worth the risk – though I would always advocate having some sort of plan B, whether off-cloud or a backup with another cloud provider, if that is feasible.

Amazon Reinvent: new products announced including Aurora database with claimed performance 5 times that of MySQL

Amazon is holding its third Reinvent conference in Las Vegas – 13,500 attendees catching up on Amazon’s Web Services platform. In this morning’s keynote, Amazon’s Senior VP of cloud services Andy Jassy evangelised the platform and announced a number of new services which, in typical Amazon style, are now available in preview.


Amazon is well ahead of its competitors in cloud services, in terms of market share and mindshare, and Jassy had no problems reeling off impressive statistics and case studies. A slide showing that AWS is not only larger but also growing faster year-on-year than its competition prompted a small protest. Microsoft claims that Amazon understated its rate of growth:


The refrain from those who spoke on behalf of companies such as Intuit (which intends to move 100% of its applications to AWS) was that no other cloud provider could offer a realistic alternative to AWS. With the progress being made by competitors I wonder for how long this will be true – and bear in mind that this is an Amazon conference – but it testifies to the dominance that Amazon has achieved.

Jassy made a key point about security and compliance. The relative security of public cloud versus private datacenters has long been debated, initially on the assumption that computing resources you own and guard yourself must be more secure than those hosted by third parties. The counter is that few organisations can afford the level of security that big public cloud providers can achieve. Jassy’s point though was that the number of certifications achieved by AWS is now such that security and compliance have become a driver towards cloud computing.

The main news though was a series of product announcements:

Aurora relational database: a MySQL compatible database as a service for which Jassy claims 5x the performance of MySQL. He says that businesses stick with commercial, proprietary database managers because open source solutions lack the performance, but that Aurora now provides a solution at a commodity price. Unfortunately Aurora is not going to help those with applications locked into Oracle, SQL Server or others. Still, 5x performance is always welcome.

CodeDeploy: apparently based on a service Amazon uses internally, this is a deployment tool for pushing out updated applications to EC2 (Elastic Compute Cloud) VMs without downtime.

CodeCommit: a source code management service for Git repositories.

CodePipeline: automate your software release by defining a workflow of tests and approvals.

Key Management Service: if you manage encrypted data you will be familiar with the hassles of managing and rotating encryption keys. Here is a service to manage that (see the short sketch after this list).

AWS Config: A discovery service for the AWS resources you are using.

Service Catalog: a custom portal for users to browse and use AWS resources offered by an organisation.
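To illustrate the Key Management Service announcement above: a minimal sketch using the boto3 SDK (which post-dates this announcement), with a hypothetical key alias. In practice you would usually encrypt data keys rather than bulk data this way:

    import boto3

    kms = boto3.client("kms")

    # Encrypt a small secret under a KMS-managed key; the service stores the
    # key and can rotate it, so your application never handles the key itself.
    encrypted = kms.encrypt(
        KeyId="alias/my-app-key",        # hypothetical key alias
        Plaintext=b"database password",
    )["CiphertextBlob"]

    # Later, decrypt it; KMS works out which key material to use.
    plaintext = kms.decrypt(CiphertextBlob=encrypted)["Plaintext"]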

This was day one; there is another keynote tomorrow and there may be more announcements.

There is no doubting the momentum behind AWS, and according to Jassy, there is still a long way to grow. Towards the end of the keynote he talked about businesses moving entire datacenters to AWS, for example when leases expire, and in the press Q&A session later he expressed the belief that eventually few companies will operate their own datacentres; he does not see much future for private cloud – in the sense of self-managed clouds on your own infrastructure. That is of course what you would expect Amazon to say.

Partnerships are key in this industry and I was interested to note the Reinvent sponsors:


The Diamond sponsors (who I presume have paid the most) are Accenture, Cloudnexa (AWS consultants), CSC (also consultants), Intel (I guess Amazon buys a lot of CPUs), Trend Micro and twilio (who must be doing well to be on this list).

Amazon AWS and the continuing trend towards cloud services. Desktops next?

It was a lightbulb moment. The problem: how to migrate a document store from one Office 365 (hosted SharePoint) instance to another. Copy it all out and copy it back in, obviously, but that is painful over ADSL (which is all I had at my disposal) since the “asymmetric” part of ADSL means slow uploads; and download from Office 365 was not that fast either.

Solution: use an Azure virtual machine. VM hosted by Microsoft, SharePoint hosted by Microsoft, result – a fast connection between the two. I ran up the VM in a few minutes using Microsoft’s nice Azure portal, used Remote Desktop to connect, and copied the documents out and back in no time.

There is a general point here. If you are contemplating cloud-hosted VDI (Virtual Desktop Infrastructure), there is huge advantage in having the server applications and data close to the VDI instances. All you then need is a connection good enough to work on that remote desktop, which is relatively lightweight. If the cloud vendor is doing its job, the internal connections in that cloud should be fast. In addition, from the client’s perspective, most of the data is download, transferring the screen image to the client, rather than upload, transmitting mouse and keyboard interactions, so that is a good use case for ADSL.

The further implication is that the more you use cloud services, the more attractive hosted desktops become. Desktops are expensive to manage, which is why I would expect a service like Amazon Workspaces, hosted Windows desktops as a service, to find a ready market – even at $600 per year for a desktop with Office Professional 2010 preinstalled, or $420 per year if you install and license Office yourself, or use Open Office or some other alternative.

Workspaces are currently in limited preview, which means a closed beta, but there are hints that a public beta is coming soon.

Adopting this kind of setup means a massive dependency on Amazon of course, which is a concern if you worry about that kind of thing (and I think you should); but how much business is now dependent on one of the major cloud providers (I tend to think of Amazon, Microsoft and Google as the top three) already?

Thinking back to my Office 365 example, it also seems to me that Microsoft will make a serious play for cloud VDI in the not too distant future, since it makes so much sense. The problem for Microsoft is further cannibalisation of its on-premise business, and further disruption for Microsoft partners, but if the alternative is giving away business to Amazon, it has little choice.

I was at an Amazon Web Services briefing today and asked whether we might see an Office 365-like package from AWS in future. Unlikely, I was told; but many customers do use AWS for hosting the likes of Exchange and SharePoint.

The really clever thing for Amazon would be a package that looked like Office 365, but using either open source or internally developed applications that removed the need to pay license fees to Microsoft.

What else is new from AWS? I have no exclusives to share, since Amazon has a policy of never pre-announcing new features or services. There were a few statistics, one of which is that Redshift, hosted data warehousing, is Amazon’s fastest-growing product.

Amazon also talked about Kinesis, which lets you analyse streams of data in a 24-hour window. For example, if you wanted to analyse the output from thousands of sensors (say, weather) but do not need to store the data, you can use Kinesis. If you do want to store the data, you can integrate with Redshift or DynamoDB, two of Amazon’s database services.
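As a rough illustration of the producer side, here is a sketch using the boto3 SDK with a hypothetical stream and record format; consumers read and analyse the stream within its retention window without having to store the raw data themselves:

    import json
    import boto3

    kinesis = boto3.client("kinesis")

    # Each sensor pushes its readings into the stream as they arrive.
    reading = {"sensor_id": "weather-042", "temp_c": 18.4, "humidity": 0.61}
    kinesis.put_record(
        StreamName="sensor-readings",              # hypothetical stream name
        Data=json.dumps(reading).encode("utf-8"),
        PartitionKey=reading["sensor_id"],         # keeps a sensor's readings together
    )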

The company also talked up its Relational Database Service (RDS), where you purchase a managed database service which can currently be MySQL, PostgreSQL, Oracle or Microsoft SQL Server. Amazon handles all the infrastructure management so you only need worry about your data and applications.

RDS pricing starts from $25 a month for MySQL, rising to $514 a month for SQL Server Standard (which is actually more expensive than Oracle at $223 per month for the same instance size). Higher capacity instances cost more of course. SQL Server Web edition comes down below Oracle at $194 per month, but I was surprised to see how high the SQL Server costs are. Note that these prices include all the CALs (Client Access Licenses). The prices are actually per hour, e.g. $0.715 for SQL Server Standard, so you could save money if your business can turn off or reduce the service out of working hours, for example.
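A quick sum, using the SQL Server Standard rate quoted above, shows why the hourly billing matters:

    # SQL Server Standard on RDS at the hourly rate quoted above.
    rate_per_hour = 0.715

    always_on = rate_per_hour * 24 * 30      # roughly the $514 monthly figure
    office_hours = rate_per_hour * 12 * 22   # ~12 hours a day, ~22 working days

    print(f"Running 24x7:       ${always_on:,.0f} per month")
    print(f"Working hours only: ${office_hours:,.0f} per month")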

How much premium does Amazon charge for its managed RDS versus what you would pay for equivalent capacity in a VM that you manage yourself? I asked this question but did not receive a meaningful reply; you need to do your own homework.

My reflection on this is that just as supermarkets make more money from pre-packaged ready meals than from basic groceries, so too the cloud providers can profit by bundling management and applications into their products rather than offering only basic infrastructure services. You still have the choice; but database admin costs money too.

Finally, we took a quick look at AppStream, which is a proprietary protocol, SDK and service for multimedia applications. You write applications such as games that render video on the server and stream it efficiently to the client, which could be a smartphone or low-power tablet. In this case again, you are taking a total dependency on Amazon to enable your application to run.

If you are interested in AWS, look out for a summit near you. There is one in London on 30th April. Or go to the Reinvent conference in Las Vegas in November.

My overall reflection is that the momentum behind AWS and its pace of innovation is impressive; yet it also seems to me that rivals like Microsoft and Google are becoming more effective. The cloud computing market is such that there is room for all to grow.

Catching up with Amazon’s cloud services

I attended Amazon’s AWS (Amazon Web Services) Update in London. This was not a major news event; more a chance to catch up on what is new with Amazon’s cloud services, the dominant force in cloud computing infrastructure.

One thing that caught my interest is the speed with which Amazon is rolling out new features. The pattern seems to be that one or more significant features are rolled out each month. The session in London covered announcements since July 2012, with new stuff including:

  • DKIM signing for the Simple Email Service
  • High I/O EC2 (Elastic Compute Cloud) instances
  • Cross-origin resource sharing for S3 (Simple Storage Service), lets web apps interact directly with S3 content
  • Amazon Glacier service for archival storage
  • Binary data support in DynamoDB
  • SQL Server 2012 in RDS (Relational Database Service)
  • Provisioned IOPS (1,000 to 10,000 IOPS) storage for RDS
  • New instance types and price reductions – there are now seventeen types of VM, see the current range here.
  • General availability of Storage Gateway, which lets you attach cloud storage to your local network via iSCSI, with local caching for performance.
  • Ruby support in Elastic Beanstalk
  • Completely rewritten SDK for PHP using modern coding style
  • Consistent BatchGet for DynamoDB
  • Increased Provisioned IOPS for EBS (Elastic Block Storage) to a maximum of 2000 IOPS

What I want to highlight is not so much the features themselves as the pace of development, which is impressive.

There was considerable discussion of Provisioned IOPS, which lets you pay for guaranteed I/O performance between your application and your storage. This can have a dramatic impact. Netflix used it to reduce the instance count and eliminate Memcached caching from their application. Increasing performance is another route to scalability.

Reserved instances are interesting. If you reserve an instance for a period, rather than paying as you go, you save up to 63% but lose the benefit of down-sizing on demand. However Amazon has also created a marketplace where you can sell unused reserved instances. It is all smoke and mirrors for Amazon; a reserved instance is just a billing mechanism. It collects 12% of any resale though.

Elastic Beanstalk also got some attention. I have always thought of this primarily as an auto-scaling feature. However, the discussion focused more on ease of deployment. The two are related, since Elastic Beanstalk has to know how to automatically deploy your application in order to scale it automatically. It is “AWS for the lazy”, we were told.

Amazon is getting high demand for node.js on Elastic Beanstalk – not available yet but watch this space.

There was a session on CloudSearch which left me unexcited. This is in effect another type of cloud database designed for search with relevance ranking, field weighting and so on. However it is not trivial to implement; you will have to work out how to feed CloudSearch with data in its SDF format, matching what you want to search, and how to keep it up to date.
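For a flavour of what feeding CloudSearch involves, an SDF batch is JSON along roughly these lines; the field names here are hypothetical, and the exact required attributes (such as per-document version numbers in older API versions) depend on your domain’s API version, so check the SDF documentation:

    import json

    # An SDF batch describing documents to add to or delete from the domain.
    batch = [
        {
            "type": "add",
            "id": "article-123",
            "fields": {
                "title": "Catching up with Amazon's cloud services",
                "tags": ["aws", "cloud"],
            },
        },
        {"type": "delete", "id": "article-099"},
    ]

    # The batch is posted, as JSON, to the domain's document service endpoint;
    # keeping the index up to date means generating a batch whenever data changes.
    payload = json.dumps(batch)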

I would have liked to hear more about the DynamoDB NoSQL database manager which is proving a popular service.

If you want to track AWS as it evolves, I recommend following the official blog.

Amazon entices Android developers with $50 incentive

Amazon is offering Android developers $50 of AWS (Amazon Web Services) credit if they submit an app to the Amazon Android app store.

Although the announcement refers to apps that actually make use of AWS, this does not seem to be a pre-condition:

September 7 – November 15: Android developers who submit an app that is approved to the Amazon Appstore for Android through October 15 will receive a $50 promotional code towards the use of AWS products and services

The move ties in with reports of Amazon developing its own Android-based tablet/Kindle. Exactly what Amazon will offer is still under wraps.

Amazon is an interesting contender in the mobile wars because it has its own instant ecosystem – millions of customers who are already signed up with accounts and stored credit card details. Add in Kindle eBooks, the MP3 store, and the Amazon Instant Video Store for streaming video, and it amounts to a comprehensive content offering that approaches that of Apple.

The AWS element is also significant, and in this respect Amazon is ahead of Apple. Of course there is nothing to stop you using AWS with apps for iOS or other platforms, though there is synergy when it comes to payments.

The relationship with Google is interesting, in that Google controls Android but Amazon is not hooking into Google services or the official Android Marketplace. Amazon is showing no sign of developing its own search engine though, so Google will still get some benefit if Amazon devices are popular, provided Google remains the default for search.