Tag Archives: web services

Amazon Glacier: archiving on demand at low prices

Amazon has announced a new product in its Amazon Web Services cloud suite. Amazon Glacier is designed for archiving. According to the service description, you get redundant storage over “multiple facilities and on multiple devices within each facility” with regular data integrity checks, giving annual durability which Amazon works out somehow as 99.999999999%.

Storage pricing is $0.011 per GB / month. So keeping a cloud-based copy of that 1TB drive you just bought costs $11.00 per month, or $132 per year. Not a bad price considering that it solves both the redundancy and the off-site storage problems, as long as you can live with sub-contracting the task.

For comparison, Amazon S3, which is designed for day to day storage, costs $0.125 per GB for the first 1TB, falling to $0.055 per GB for 5000TB or more, or $0.037 per GB for what Amazon calls “reduced redundancy storage”. Glacier is less than a third of the price of even reduced redundancy storage.
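For the arithmetic-minded, here is the comparison as a small Python script, taking 1TB as 1,000GB and using the per-GB prices quoted above:

# Monthly and annual cost of 1TB at the quoted per-GB prices
TB = 1000  # GB

prices = {
    "S3 standard (first 1TB)": 0.125,
    "S3 reduced redundancy": 0.037,
    "Glacier": 0.011,
}

for name, per_gb in prices.items():
    monthly = per_gb * TB
    print(f"{name}: ${monthly:.2f}/month, ${monthly * 12:.2f}/year")

which works out at $125.00, $37.00 and $11.00 per month respectively.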

Note that Glacier is not suitable if you need to get at the data quickly:

You can download data directly from the service using the service’s REST API. When you make a request to retrieve data from Glacier, you initiate a retrieval job. Once the retrieval job completes, your data will be available to download for 24 hours. Retrieval jobs typically complete within 3-5 hours.

In other words, you cannot retrieve data directly. You have to ask for it to be made available first. Glacier is not a cheap alternative to S3, other than for archiving.
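To make the two-step retrieval concrete, here is a minimal sketch using boto3, the current Python SDK for AWS (which postdates this announcement); the vault name and archive ID are placeholders, and credentials are assumed to be configured in the environment:

import time
import boto3

glacier = boto3.client("glacier")

# Step 1: ask Glacier to stage the archive for download.
job = glacier.initiate_job(
    accountId="-",  # "-" means the account that owns the credentials
    vaultName="my-vault",
    jobParameters={"Type": "archive-retrieval", "ArchiveId": "ARCHIVE_ID"},
)
job_id = job["jobId"]

# Step 2: poll until the job completes (typically 3-5 hours), then
# download within the 24-hour availability window.
while True:
    status = glacier.describe_job(accountId="-", vaultName="my-vault", jobId=job_id)
    if status["Completed"]:
        break
    time.sleep(900)  # check every 15 minutes

output = glacier.get_job_output(accountId="-", vaultName="my-vault", jobId=job_id)
with open("restored-archive", "wb") as f:
    f.write(output["body"].read())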

There are additional charges for retrieving data beyond 1GB per month: $0.12 per GB, falling to $0.05 per GB for over 350TB, and less still for very large retrievals. It is well known that beyond a certain amount, it is quicker and cheaper to send data on the back of a truck than over the internet.

Five years of Amazon Web Services

Amazon introduced its Simple Storage Service in March 2006. S3 was not the first of the Amazon Web Services (AWS); they were originally developed for affiliates who needed programmatic access to the Amazon retail store in order to use its data on third-party web sites. That said, there is a profound difference between a web service for your own affiliates, and one for generic use. I consider S3 to mark the beginning of Amazon’s venture into cloud computing as a provider.

It is also something I have tracked closely since those early days. I quickly wrote a Delphi wrapper for S3; it did not set the open source world alight but did give me some hands-on experience of the API. I was also on the early beta for EC2.

Amazon now dominates the section of the cloud computing market which is its focus, thanks to keen pricing, steady improvements, and above all the fact that the services have mostly worked as advertised. I am not sure what its market share is, or even how to measure it, since cloud computing is a nebulous concept. This Wall Street Journal article from February 2011 gives Rackspace the number two slot but with only one third of Amazon’s cloud services turnover, and includes the memorable remark by William Fellows of the 451 Group, “In terms of market share Amazon is Coke and there isn’t yet a Pepsi.”

The open source Eucalyptus platform has paid Amazon a compliment by implementing its EC2 API:

Eucalyptus is a private cloud-computing platform that implements the Amazon specification for EC2, S3, and EBS. Eucalyptus conforms to both the syntax and the semantic definition of the Amazon API and tool suite, with few exceptions.
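In practice that compatibility means the same client code can target either cloud by switching endpoints. A sketch, assuming boto3 and a hypothetical Eucalyptus endpoint URL and credentials:

import boto3

# Point the standard EC2 client at a private Eucalyptus endpoint;
# the URL and keys here are made-up examples.
ec2 = boto3.client(
    "ec2",
    region_name="us-east-1",
    endpoint_url="https://cloud.example.com:8773/services/compute",
    aws_access_key_id="EUCA_ACCESS_KEY",
    aws_secret_access_key="EUCA_SECRET_KEY",
)

# Because Eucalyptus implements the EC2 API, standard calls work unchanged.
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])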

AWS is not just EC2 and S3. Other offerings include two varieties of cloud database, services for queuing, notification and email, and the impressive Elastic Beanstalk for automatically scaling your application on demand.
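To give a flavour of the queuing service (SQS), here is a minimal sketch in Python with boto3; the queue name is a placeholder:

import boto3

sqs = boto3.client("sqs")

# Create a queue, send a message, then receive and delete it.
queue_url = sqs.create_queue(QueueName="work-items")["QueueUrl"]
sqs.send_message(QueueUrl=queue_url, MessageBody="process order 1234")

response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for msg in response.get("Messages", []):
    print(msg["Body"])
    # Delete after successful processing so the message is not redelivered.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])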

Should we worry about Amazon’s dominance in cloud computing? Possibly, especially as the barriers to entry are considerable. Another concern is that as more computing infrastructure becomes dependent on Amazon, the potential disruption if the service were to break increases. How many of Amazon’s AWS customers have a plan B for when EC2 fails? Amazon defuses anti-competitive concerns by continuing to offer commodity pricing.

Amazon has quietly changed the computing landscape though; and although this post is a few weeks late, the fifth birthday of its cloud services deserves a mention.

WS-I closes its doors–the end of WS-* web services?

The Web Services Interoperability Organization has announced [pdf] the “completion” of its work:

After nearly a decade of work and industry cooperation, the Web Services Interoperability Organization (WS-I; http://www.ws-i.org) has successfully concluded its charter to document best practices for Web services interoperability across multiple platforms, operating systems and programming languages.

In the wacky world of software though, completion is not a good thing when it means, as it seems to here, an end to active development. The WS-I is closing its doors and handing maintenance of the WS interoperability profiles to OASIS:

Stewardship over WS-I’s assets, operations and mission will transition to OASIS (Organization for the Advancement of Structured Information Standards), a group of technology vendors and customers that drive development and adoption of open standards.

Simon Phipps blogs about the passing of WS-I and concludes:

Fine work, and many lessons learned, but sadly irrelevant to most of us. Goodbye, WS-I. I know and respect many of your participants, but I won’t mourn your passing.

Phipps worked for Sun when the WS-* activity was at its height and WS-I was set up, and describes its formation thus:

Formed in the name of "preventing lock-in" mainly as a competitive action by IBM and Microsoft in the midst of unseemly political knife-play with Sun, they went on to create massively complex layered specifications for conducting transactions across the Internet. Sadly, that was the last thing the Internet really needed.

However, Phipps links to this post by Mike Champion at Microsoft which represents a more nuanced view:

It might be tempting to believe that the lessons of the WS-I experience apply only to the Web Services standards stack, and not the REST and Cloud technologies that have gained so much mindshare in the last few years. Please think again: First, the WS-* standards have not in any sense gone away, they’ve been built deep into the infrastructure of many enterprise middleware products from both commercial vendors and open source projects. Likewise, the challenges of WS-I had much more to do with the intrinsic complexity of the problems it addressed than with the WS-* technologies that addressed them. William Vambenepe made this point succinctly in his blog recently.

It is also important to distinguish between the work of the WS-I, which was about creating profiles and testing tools for web service standards, and the work of other groups such as the W3C and OASIS, which specify the standards themselves. While activity on the WS-* specifications is much reduced, some work continues: see for example the W3C’s Web Services Resource Access Working Group.

I partly disagree with Phipps about the work of the WS-I being “sadly irrelevant to most of us”. It depends who he means by “most of us”. Granted, all this stuff is meaningless to the world at large; but there are a significant number of developers who use SOAP and WS-* at least to some extent, and interoperability is key to the usefulness of those standards.

The Salesforce.com API is mainly SOAP based, for example, and although there is a REST API in preview it is not yet supported for production use. I have been told that a large proportion of the transactions on Salesforce.com are made programmatically through the API, so here is one place at least where SOAP is heavily used.
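For a flavour of what programmatic SOAP access looks like, here is a sketch using the Python zeep library with Salesforce’s partner WSDL; the WSDL file path and credentials are placeholders, and Salesforce expects the security token appended to the password:

from zeep import Client

# zeep reads the WSDL and exposes its operations as ordinary methods,
# so there is no hand-editing of SOAP envelopes or WSDL.
client = Client("partner.wsdl")  # local copy of the org's partner WSDL

result = client.service.login("user@example.com", "password-plus-token")
print(result.sessionId, result.serverUrl)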

WS-* web services are also built into Microsoft’s Visual Studio and .NET Framework, and are widely used in my experience. Visual Studio does a good job of wrapping them so that developers do not have to edit WSDL or SOAP requests and responses by hand. I’d also suggest that web services in .NET are more robust than DCOM (Distributed COM) ever was, and work successfully over the internet as well as on a local network, so the technology is not a failure.

That said, I am sure it is true that only a small subset of the WS-* specifications are widely used, which implies a large amount of wasted effort.

Are SOAP and WS-* dying, and is REST the future? The evidence points that way to me, but I would be interested in other opinions.

SOA, REST and Flash/Flex – why Flash does not PUT

Adobe’s Duane Nickull has an illuminating post on how the Flash player handles REST. Nickull is responding to a post by Malcolm Box, in which Box complains about how hard it is to use Flash with a REST web service. He observes that Flash cannot send PUT and DELETE requests when running in the browser, and does not send cookies.

Nickull defends the Flash behaviour:

Flash’s HTTP libraries currently support GET and POST. My architectural view of this is that the HTTP libraries only should really support these and not worry about the others.

He also notes that cookies are a poor way to manage state:

Cookies are for the browser and belong in the browser. Having Flash Player able to access cookies would be a mistake in my own opinion. Any logic that is facilitated by a browser should probably be dealt with at the browser layer before Flash Player is used.

Now, I think the comments on REST are important to read if you are engaged in designing a web service, as many of us are in these days of cloud+device. There is a kind-of “word on the street” approach to web services which says that REST is good, SOA/SOAP is bad; but in reality it is not so simple, and these distinctions are muddled. REST is arguably a form of SOA, you can do SOAP with REST, and so on.

One factor is that reading data in a web client is far more common than writing data. It is easy to be an advocate of the simplicity of REST if all you are doing is GET.

The question Nickull asks is whether the transport protocol has any business dictating how the data it transports should be processed, for example whether it is an operation to retrieve or to write data:

In an SOA world, the transport functionality (usually implemented using SOAP) should focus on just delivering the message and it’s associated payload(s) to the destination(s), optionally enforcing rules of reliability and security rather than declaring to the application layer processing instructions to the service endpoint.

Read the post for more of the rationale behind this. Maybe, even if you are doing REST, restricting your web service to GET and POST is not such a bad idea after all.
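If you do design a service around GET and POST, one widely used convention lets clients express other intents anyway: tunnel the verb in an override header which the server agrees to honor. A sketch using Python’s requests library against a made-up URL:

import requests

# The client sends POST, but asks the server to treat it as DELETE.
# This only works if the service is configured to honor the header.
resp = requests.post(
    "https://api.example.com/documents/42",
    headers={"X-HTTP-Method-Override": "DELETE"},
)
print(resp.status_code)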

That said, whatever you think about the architectural principles, you may find yourself having to write a browser-hosted Flash client for a service that requires an HTTP verb other than GET or POST. There are ways round it: see this discussion of Amazon S3 (which uses PUT) and Flash for an example.
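One workaround worth knowing: S3 itself also accepts browser-style POST uploads, so a POST-only client can still store objects. A sketch using boto3 to sign the upload form server-side, with requests standing in for the client; the bucket name and file are placeholders:

import boto3
import requests

s3 = boto3.client("s3")
post = s3.generate_presigned_post(Bucket="my-bucket", Key="uploads/report.pdf")

# A client limited to POST (such as browser-hosted Flash) submits the
# signed fields as an ordinary multipart form.
with open("report.pdf", "rb") as f:
    resp = requests.post(post["url"], data=post["fields"], files={"file": f})
print(resp.status_code)  # 204 No Content on success by default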