Catching up with Amazon’s cloud services

I attended Amazon’s AWS (Amazon Web Services) Update in London. This was not a major news event; more a chance to catch up on what is new with Amazon’s cloud services, the dominant force in cloud computing infrastructure.

One thing that caught my interest is the speed with which Amazon is rolling out new features. The pattern seems to be that one or more significant features ship each month. The session in London covered announcements since July 2012, with new stuff including:

  • DKIM signing for the Simple Email Service
  • High I/O EC2 (Elastic Compute Cloud) instances
  • Cross-origin resource sharing (CORS) for S3 (Simple Storage Service), which lets web apps interact directly with S3 content (see the sketch after this list)
  • Amazon Glacier service for archival storage
  • Binary data support in DynamoDB
  • SQL Server 2012 in RDS (Relational Database Service)
  • Provisioned IOPS (1,000 to 10,000 IOPS) storage for RDS
  • New instance types and price reductions – there are now seventeen instance types in the current range
  • General availability of Storage Gateway, which lets you attach cloud storage to your local network via iSCSI, with local caching for performance.
  • Ruby support in Elastic Beanstalk
  • Completely rewritten SDK for PHP using modern coding style
  • Consistent BatchGet for DynamoDB
  • Increased Provisioned IOPS for EBS (Elastic Block Store) to a maximum of 2,000 IOPS
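
One item worth a quick illustration is the S3 CORS support. Below is a minimal sketch of enabling it on a bucket using the Python boto library; the bucket name and allowed origin are invented, and in practice you would tailor the rules to your own web app.

    from boto.s3.connection import S3Connection

    # An S3 CORS configuration is an XML document listing the origins,
    # HTTP methods and headers that browsers may use against the bucket.
    cors_xml = """<CORSConfiguration>
      <CORSRule>
        <AllowedOrigin>https://www.example.com</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
      </CORSRule>
    </CORSConfiguration>"""

    conn = S3Connection()                      # credentials from environment or boto config
    bucket = conn.get_bucket('my-web-assets')  # hypothetical bucket name
    bucket.set_cors_xml(cors_xml)              # apply the CORS rules

With rules like these in place, JavaScript served from the allowed origin can read objects in the bucket directly, without routing requests through your own server.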

What I want to highlight is not so much the features themselves as the pace of development, which is impressive.

There was considerable discussion of Provisioned IOPS, which lets you pay for a guaranteed rate of I/O between your application and its storage. This can have a dramatic impact. Netflix used it to reduce its instance count and eliminate Memcached caching from its application. Increasing performance is another route to scalability.
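
For EBS, provisioning IOPS is an option you set when creating a volume. Here is a rough sketch using the Python boto library; the region, availability zone and instance id are placeholders rather than anything from the talk.

    import boto.ec2

    # Connect to a region; credentials are read from the environment or boto config.
    conn = boto.ec2.connect_to_region('us-east-1')

    # Create a 100 GB Provisioned IOPS (io1) volume with 1,000 guaranteed IOPS...
    volume = conn.create_volume(size=100, zone='us-east-1a',
                                volume_type='io1', iops=1000)

    # ...and attach it to a running instance (hypothetical instance id).
    conn.attach_volume(volume.id, 'i-12345678', '/dev/sdf')

The provisioned rate is billed whether or not you use it, which is why it suits steady database workloads rather than occasional bursts.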

Reserved instances are interesting. If you reserve an instance for a period, rather than paying as you go, you save up to 63% but lose the benefit of downsizing on demand. However, Amazon has also created a marketplace where you can sell unused reserved instances. It is all smoke and mirrors for Amazon; a reserved instance is just a billing mechanism. Amazon collects 12% of any resale, though.

Elastic Beanstalk also got some attention. I have always thought of this primarily as an auto-scaling feature. However, the discussion focused more on ease of deployment. The two are related, since Elastic Beanstalk has to know how to deploy your application automatically before it can scale it automatically. It is “AWS for the lazy”, we were told.

Amazon reports high demand for Node.js support on Elastic Beanstalk – it is not available yet, but watch this space.

There was a session on CloudSearch which left me unexcited. This is in effect another type of cloud database, designed for search with relevance ranking, field weighting and so on. However, it is not trivial to implement; you will have to work out how to feed CloudSearch with data in its SDF (Search Data Format), matching what you want to search, and how to keep it up to date.
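
To give a flavour of what that involves, here is a sketch of an SDF "add" batch posted to a search domain's document endpoint. The endpoint, document id and field names are invented; you would generate batches like this from your own data, and send further add or delete operations to keep the index current.

    import json
    import urllib2

    # An SDF batch is a JSON array of add/delete operations.
    batch = [{
        "type": "add",
        "id": "article_123",
        "version": 1,
        "lang": "en",
        "fields": {
            "title": "Catching up with Amazon's cloud services",
            "content": "Notes from the AWS Update in London...",
        },
    }]

    # Post the batch to the (hypothetical) document endpoint for the domain.
    endpoint = "http://doc-mydomain-abc123.us-east-1.cloudsearch.amazonaws.com"
    request = urllib2.Request(endpoint + "/2011-02-01/documents/batch",
                              json.dumps(batch),
                              {"Content-Type": "application/json"})
    response = urllib2.urlopen(request)
    print(response.read())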

I would have liked to hear more about the DynamoDB NoSQL database manager, which is proving to be a popular service.

If you want to track AWS as it evolves, I recommend following the official blog.