Tag Archives: cloud computing

Disappearing cloud APIs: a new legacy software problem in the making

Today Google announced that a number of its APIs are to be withdrawn.

This highlights an issue with platform as a service (PaaS). If you build an app on a set of cloud services provided by a third party, there is a risk that those services will change or disappear so that your app no longer works.

Many of Google’s APIs are free, or free for all but the heaviest users, so you can argue that you get what you pay for. Developers are particularly frustrated by the disappearance of the Translate API, where Google says:

Due to the substantial economic burden caused by extensive abuse, the number of requests you may make per day will be limited and the API will be shut off completely on December 1, 2011.

Poor old Google with its “substantial economic burden”. Oddly though, some developers are willing to pay, but this is not going to be an option:

It would be much better if Google charges using Translate API than shutting it down. Now i will have to remove this functionality from my projects. Nicely done Google 🙁

says one developer. Further, Google’s rationale for withdrawing APIs is likely multi-faceted. If I write an app that calls a Google API in the background, there is no direct benefit to Google unless it makes a charge. From Google’s perspective it is better to supply a widget, that can be branded, or to get users logging onto its site so it can gather stats and display advertising.

Still, my main interest here is in the implications for cloud computing, particularly PaaS. I frequently see old apps running in businesses. In some cases the original developer is long gone and nobody understands the code, if they have it at all. These apps keep running though. In a PaaS world that will no longer be possible: apps will stop working if the APIs they depend on disappear.

The other side of this coin is that responsible service providers have to keep old APIs supported in order not to break applications. I have an application which I wrote 5 years ago using Amazon’s Simple Storage Service (S3). It still works today. There must be thousands of other applications which also use it. Amazon is now stuck with that API, and I guess that unless some major security flaw is discovered it will continue to work for the foreseeable future.
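To illustrate how stable that API has proved, here is a sketch in Python of the classic S3 request signing scheme, essentially unchanged since launch – the keys and bucket name are placeholders:

    import base64, hmac, hashlib, time

    # Placeholder credentials and bucket; the signing scheme is the point.
    ACCESS_KEY = "AKIA-EXAMPLE"
    SECRET_KEY = "secret-example"
    bucket, key = "my-bucket", "hello.txt"

    # The original S3 authentication: sign a canonical string with HMAC-SHA1.
    date = time.strftime("%a, %d %b %Y %H:%M:%S GMT", time.gmtime())
    string_to_sign = "GET\n\n\n%s\n/%s/%s" % (date, bucket, key)
    signature = base64.b64encode(
        hmac.new(SECRET_KEY.encode(), string_to_sign.encode(), hashlib.sha1).digest()
    ).decode()

    # Send GET https://my-bucket.s3.amazonaws.com/hello.txt with these headers.
    headers = {"Date": date, "Authorization": "AWS %s:%s" % (ACCESS_KEY, signature)}

A request signed this way in 2006 is still a valid request today.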

Three questions about Microsoft’s cloud play at TechEd 2011

This year’s Microsoft TechEd is subtitled Cloud Power: Delivered, and sky blue is the theme colour. Microsoft seems to be serious about its cloud play, based on Windows Azure.

Then again, Microsoft is busy redefining its on-premise solutions in terms of cloud as well. A bunch of Windows Servers on virtual machines managed by System Center Virtual Machine Manager (SCVMM) is now called a private cloud – note that the forthcoming SCVMM 2012 can manage VMware and Citrix XenServer as well as Microsoft’s own Hyper-V. If everything is cloud then nothing is cloud, and the sceptical might wonder whether this is rebranding rather than true cloud computing.

I think there is a measure of that, but also that Microsoft really is pushing Azure heavily, as well as hosted applications like Office 365, and does intend to be a cloud computing company. Here are three side-questions which I have been mulling over; I would be interested in comments.


Microsoft gets Azure – but does its community?

At lunch today I sat next to a delegate and asked what she thought of all the Azure push at TechEd. She said it was interesting, but irrelevant to her as her organisation looks after its own IT. She then added, unprompted, that they have a 7,000-strong IT department.

How much of Microsoft’s community will actually buy into Azure?

Is Microsoft over-complicating the cloud?

One of the big announcements here at TechEd is about new features in AppFabric, the middleware part of Windows Azure. When I read about new features in the Azure service bus I think how this shows maturity in Azure; but with the niggling question of whether Microsoft is now replicating all the complexity of on-premise software in a new cloud environment, rather than bringing radical new simplicity to enterprise computing. Is Microsoft over-complicating the cloud, or is it more that the same necessity for complex solutions exists wherever you deploy your applications?

What are the implications of cloud for Microsoft partners?

TechEd 2011 has a huge exhibition and of course every stand has contrived to find some aspect of cloud that it supports or enables. However, Windows Azure is meant to shift the burden of maintenance from customers to Microsoft. If Azure succeeds, will there be room for so many third-party vendors? And what about the whole IT support industry, internal and external – are their jobs at risk? It seems to me that if moving to a multi-tenanted platform really does reduce cost, there must be implications for IT jobs as well.

The stock answer for internal staff is that reducing infrastructure cost is an opportunity for new uses of IT that are beneficial to the business. Staff currently engaged in keeping the wheels turning can now deliver more and better applications. That seems to me a rose-tinted view, but there may be something in it.

Microsoft’s Azure toolkit for Apple iOS and Android is a start, but nothing like enough

Microsoft’s Jamin Spitzer has announced toolkits for Apple iOS, Google Android and Windows Phone to support its Azure cloud computing platform.

I downloaded the toolkit for iOS and took a look. It is a start, but it is really only a toolkit for Azure storage, excluding SQL Azure.


What would I hope for from an iOS toolkit for Azure? Access to SQL Server in Azure would be useful, as would a client for WCF (Windows Communication Foundation). In fact, I would suggest that the WCF RIA Services which Microsoft has built for Silverlight and other .NET clients has a more useful scope than the Azure toolkit; I realise it is not exactly comparing like with like, but most applications built on Azure will be .NET applications and iOS lacks the handy .NET libraries.

A few other observations. The rich documentation for WCF RIA Services is quite a contrast to the Doxyfile docs for the iOS toolkit and its few samples, though Wade Wegner has a walkthrough. One comment asks, reasonably enough, why the toolkit does not use a two- or three-letter prefix for its classes, as Apple recommends for third-party developers, in order to avoid the naming conflicts invited by Objective-C’s lack of namespace support.

The development tool for Azure is Visual Studio, which does not run on a Mac. Microsoft offers a workaround: a Cloud Ready Package which is a pre-baked Azure application; you just have to amend the configuration in a text editor to point to your own storage account, so developers without Visual Studio can get started. That is all very well; but I cannot imagine that many developers will deploy Azure services on this basis.

I never know quite what to make of these little open source projects that Microsoft comes up with from time to time. It looks like a great start, but what is its long-term future? Will it be frozen if its advocate within Microsoft happens to move on?

In other words, this looks like a project, not a strategy.

The Windows Azure Tools for Eclipse, developed by Soyatec and funded by Microsoft, is another example. I love the FAQ:

[Screenshot of the project’s FAQ page]

This sort of presentation says to developers: Microsoft is not serious about this, avoid.

That is a shame, because a strategy for making Azure useful across a broad range of Windows and non-Windows clients and devices is exactly what Microsoft should be working on, in order to compete effectively with other cloud platforms out there. A strategy means proper resources, a roadmap, and integration into the official Microsoft site rather than quasi-independent sites strewn over the web.

Microsoft’s Scott Guthrie moving to Windows Azure

According to an internal memo leaked to ZDNet’s Mary Jo Foley, Microsoft’s Scott Guthrie, currently Corporate VP of the .NET Developer Platform, is moving to lead the Azure Application Platform team. This means he will report to Ted Kummert, who is in charge of the Business Platform Division, instead of S Somasegar, who runs the Developer Division; both, however, are part of the overall Server and Tools Division. This is the division from which Bob Muglia was ousted as president in January; the reason for that is still not clear to me, though I would guess at some significant strategy disagreement with CEO Steve Ballmer.

Guthrie was co-inventor of ASP.NET and is one of the most approachable of senior Microsoft execs; he is popular and respected by developers and his blog is one of the first places I look for in-depth and hands-on explanations of new features in Microsoft’s developer platform, such as ASP.NET MVC and Entity Framework.

I have spent a lot of time researching and using Visual Studio 2010, and while not perfect it is among the most impressive developer products I know, from the detail of the editor and debug features right through to ALM (Application Lifecycle Management) aspects like Team Foundation Server, testing in various forms, and build management. Some of that quality is likely due to Guthrie’s influence. The successful evolution of ASP.NET from web forms towards the leaner and more flexible ASP.NET MVC is another achievement in which I am sure he played a significant role.

Is it wise to take Guthrie away from his first love and over to the Azure platform? Only Microsoft can answer that, and of course he will still be responsible for an ASP.NET platform. I’d guess that we will see further improvement in the Visual Studio tools for Azure as well.

Still, it is a bold move and one that underlines the importance of Azure to the company. In my own research I have gained increasing respect for Azure and I would expect Guthrie’s arrival there to be successful in winning attention from the Microsoft platform developer community.

Implications of Amazon’s cloud failure

Amazon is into day three of a major failure of its Elastic Compute Cloud at its Northern Virginia datacenter, and at the time of writing it is still not fully recovered.


I am reminded of a prescient remark by Tony Lucas at Flexiant, a UK cloud provider, who told me a couple of years ago (with commendable honesty) that cloud failures will be rare, but that when they occur they will be on a grand scale.

It seems that it is hard to engineer around the possibility of cascading failure. I am not sure exactly what happened in Northern Virginia, but Amazon says on its status page that:

A networking event early this morning triggered a large amount of re-mirroring of EBS volumes in US-EAST-1. This re-mirroring created a shortage of capacity in one of the US-EAST-1 Availability Zones, which impacted new EBS volume creation as well as the pace with which we could re-mirror and recover affected EBS volumes. Additionally, one of our internal control planes for EBS has become inundated such that it’s difficult to create new EBS volumes and EBS backed instances.

It sounds like an automated recovery system built into the compute cloud actually became the problem, as a large number of volumes tried to fix themselves at the same time.

This is not the first Amazon outage, but I believe it is the most severe; though it could have been worse and I have not heard that any data was lost. What are the implications?

Any computer system can fail. There will be a lot of companies reflecting on this though, both those directly affected and others, and realising that the cloud can be a single point of failure, despite the scale and expertise which a company like Amazon invests in high availability.

Is Amazon EC2 more or less likely to fail for an extended period than Salesforce.com? Or Microsoft Azure? Or Google App Engine, or Gmail, or IBM’s evolving SmartCloud? Clearly an excellent question; but I am not sure how we go about answering it other than by reviewing historical performance. I do not expect any of these companies to take advantage of Amazon’s problems to proclaim their own superior resiliency; they will all be worrying too much about the same thing happening on their platforms.

My guess is that the industry will get better at this, and that at some unspecified future moment the chance of one of these cloud platforms failing for three days will become exceedingly small – of course risk can never be eliminated, only reduced.

It seems that the risk is not exceedingly small on Amazon’s cloud today; and we should probably assume that the same applies to other providers.

That is something we have always known, so in one sense nothing has changed. This outage is a sharp reminder though; and planning for failure is a hidden cost of cloud computing that has now been brought into the light.

Getting started with VMware Cloud Foundry

I have been meaning to post about VMware’s Cloud Foundry, a new cloud platform for Spring Java, Ruby on Rails and Node.js applications. Good, incidentally, to see Node.js getting increasing attention – see my post from December when I heard Ryan Dahl present on the subject. I signed up for Cloud Foundry when it was announced, since participation in the current beta is free.

Today my account went live and I spent just a few minutes creating a first application using the steps here. Well, my first app is indistinguishable from a static web page; but having been immersed in Windows Azure for the last few days, with its accounts and subscriptions and certificates and the portal and the ServiceConfiguration.cscfg and the ServiceDefinition.csdef and all the rest of it, I admit to having a pleasant sense of “is that it?” when deploying the Cloud Foundry app.

I typed a few lines of Ruby code, saved the file to a directory, and then typed vmc push.
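For context, a complete Cloud Foundry Ruby app really can be that small. A sketch of the sort of thing I deployed – not my exact code, but the same shape, assuming the Sinatra gem:

    # hello.rb – a minimal Sinatra app; from its directory, "vmc push"
    # prompts for a name and URL and deploys it to Cloud Foundry.
    require 'sinatra'

    get '/' do
      'Hello from Cloud Foundry'
    end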


Let me add that I like what Microsoft is doing with Azure, and that in combination with Visual Studio it makes an impressive cloud development platform that, from what I have seen, works very well; I think it is of major importance. I also like the fact that Microsoft has pushed down the cost of entry, with its Extra Small instances and more generous offers to MSDN subscribers, partners and others. It is easier to experiment a bit with Azure than it used to be.

Nevertheless, Azure lacks the delightful simplicity of getting started that you can see in Cloud Foundry. After doing my Hello World app I look forward to trying something more advanced; and while I am sure that the delightful simplicity soon disappears as the lines of Ruby code increase, and as you try to make your application scalable and transactional and secure and all those good things, the fact that it draws you in is an advantage.

Hands on with Office 365 – great service, some hassles

I have been trying Microsoft’s Office 365 which has recently gone into public beta, and is expected to go live later this year.

This cloud service combines Exchange 2010, SharePoint 2010 with Office Web Apps, and Lync Server into a complete collaboration service for organisations that prefer not to run these servers themselves – understandable, given their cost and complexity.

Trying the beta is a little complex when you already have a working email and collaboration infrastructure. I chose to use a virtual machine running Windows 7 Professional. I also pre-installed Office 2010 Professional in an attempt to get the best experience.

Initial sign-up is easy and I was soon online looking at the admin screen. I could also sign into Outlook Web Access and view my SharePoint site.


Hassles started when I clicked to set up desktop applications. This is done by a helper application which configures and updates Outlook, SharePoint and Lync on your desktop PC. At this point I had not configured my own domain; I was simply username@username.onmicrosoft.com.


The wizard successfully configured SharePoint and Lync, but not Outlook.


There was a “Learn more” link; but I was in a maze of twisty passages, all alike, none of which seemed to lead to the information I needed.

Part of the problem – and I have noticed this with BPOS as well – is that the style of the online help is masterful at telling you things you know already, while neglecting to tell you what you need to know. It also has a patronising style that I find infuriating, and a habit of showing you marketing videos at every opportunity.

I did eventually find instructions for configuring Outlook manually for Office 365, but they did not work. I also noticed discrepancies in the instructions. For example, this document says that the Exchange server is ch1prd0201.mailbox.outlook.com and that the proxy server for Outlook over HTTP is pod51004.outlook.com. That did not match with the server given in my online account for IMAP, POP3 and SMTP use, which was a different podnnnnn.outlook.com. I tried all sorts of combinations and none worked.

Eventually I found this comment in another help document:

Currently, the only supported scenario for configuring Outlook to work with Office 365 is a fully migrated environment.

I am not sure if this is true, but it did seem to explain my problems. Of course it would be easy for Microsoft to surface this information in a more obvious place, such as building it into the setup wizard. Anyway, I decided to go for the full Office 365 experience and to set up a domain.

Fortunately I have a domain which I obtained for a bright idea that I have yet to find time for. I added it to Office 365. This is a process which involves first adding a CNAME record to the DNS in order to prove ownership, and then making Office 365 the authoritative nameserver for the domain. I was not impressed by the process, because when Microsoft took over the nameserver role it threw away existing settings. This means that if you have a web site or blog at that domain, for example, it will disappear from the internet after the transfer. Once transferred, you can reinstate custom records.
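For illustration, the ownership check amounts to publishing a DNS record of roughly this shape – the record name and target below are hypothetical placeholders, since the real values are generated for you in the Office 365 admin screens:

    ; hypothetical zone entries – actual values come from the Office 365 portal
    ms1234567.example.com.   IN   CNAME   verifydomain.office365.example.
    ; after verification, the domain's NS records are repointed at Microsoft's nameservers

It is that second step, changing the NS records, which hands the whole zone to Microsoft – so note down any existing records before transferring.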

Still, I had chosen an unused domain so that I did not care about this. I set up a new user with an email address at the new domain, and I amended the default SharePoint web site address to use the domain as well.


That all worked fine; but what about Outlook? The bad news was that the setup wizard still failed to configure Outlook, and I still did not know the correct server settings.

I could have contacted support; but I had one last try. I went into the mail applet in control panel and deleted the Outlook profile, so Outlook had no profile at all. Then I ran Outlook, went through the setup wizard, and it all worked, using autodiscover. Out of interest, I then checked the server settings that the wizard had found, which were indeed different in every case from those in the various help documents I had seen.

A few hassles then, and I am not happy with the way this stuff is documented, but nevertheless it all looks good once set up. The latest Exchange and SharePoint make a capable collaboration platform, the storage limits are generous – up to 25GB per Exchange mailbox – and I think it makes a lot of sense. I expect Microsoft’s online services to win huge amounts of business that is currently going to Small Business Server, and some business from larger organisations too. Migration from existing Microsoft-platform servers should be smooth.

The biggest disappointment so far is that in Lync online the Enterprise Voice feature is disabled. This means no general-purpose voice over IP, though you can call PC to PC. To get this you have to install Lync on-premise:

Organizations that want to leverage the full benefits of Microsoft Unified Communications can purchase and deploy Microsoft Lync Server 2010 on their premises as part of Microsoft Office 365. Lync Server 2010 on-premises delivers full enterprise voice and premises-based, dial-in audio conferencing, enabling customers to reduce costs and increase productivity by replacing or enhancing traditional PBX systems.

though it is confusing since Enterprise Voice is listed as a feature of the high-end E4 edition; I believe this implies an on-premise server alongside Office 365 in the cloud.

Perhaps the biggest question is the unknown: will Office 365 live up to its promised 99.9% scheduled uptime SLA, and how will its reliability compare to that of Google Apps?

Office 365 is priced at $10 per user per month for the basic service (E1); $16 adds Office Web Apps (E2); $24 adds licenses for Office Professional, Exchange archiving and voicemail (E3); and $27 adds Enterprise Voice (E4). The version in beta is E3.

Trying out Remote Desktop to a Microsoft Azure virtual machine

I have been trying out Visual Studio LightSwitch, which has an option to deploy apps to Windows Azure.

Of course I wanted to try this, and after a certain amount of hassle generating certificates and switching between Visual Studio LightSwitch and the Azure management portal I succeeded.


I doubt I would have made it without this step by step guide by Andy Kung. The article begins:

One of the many features introduced in Visual Studio LightSwitch Beta 2 is the ability to publish your app directly to Windows Azure with storage in SQL Azure. We have condensed many steps one would typically have to go through to deploy an application to the cloud manually.

Somewhere between 30 and 40 screens later he writes:

The last step shows you a summary of what you’re about to publish. FINALLY! Click Publish.

We just have to imagine how many screens there would have been if Microsoft had not condensed the “many steps”. The result is also not quite right, because it uses self-signed certificates that will present security warnings when you use the app. For a product supposedly aimed at non-developers it is all hopelessly difficult; but I guess techies are used to this kind of thing.

I was not content though. First, I wanted to use an Extra Small instance, and LightSwitch defaults to a Small instance with no obvious way to change it. I cracked that one. You switch the view in Solution Explorer to the File view, then find the file ServiceDefinition.csdef and edit the vmsize attribute:
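The fragment in question looks something like this – a sketch, since the service and role names are whatever your project generated:

    <!-- ServiceDefinition.csdef: vmsize on the role element selects the instance size -->
    <ServiceDefinition name="MyLightSwitchApp"
        xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
      <WebRole name="LightSwitchWebRole" vmsize="ExtraSmall">
        <!-- sites, endpoints and so on unchanged -->
      </WebRole>
    </ServiceDefinition>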


It worked and I had an Extra Small instance.

I was still not satisfied though. I wanted to use Remote Desktop so I could check out the VM Azure had created for me. I could not see any easy way to do this in the LightSwitch project, so I created another Azure project and configured it for Remote Desktop access using the guide on MSDN. More certificate fun, more passwords. I then started to publish the project, but bailed out when it warned me that I was overwriting a previous deployment.

Then I copied the likely looking parts of ServiceDefinition.csdef and ServiceConfiguration.cscfg from the standard Azure project to the LightSwitch project. In ServiceDefinition.csdef that was the Imports section and the Certificates section. In ServiceConfiguration.cscfg it was all the settings starting Microsoft.WindowsAzure.Plugins.Remote…; and again the Certificates section. I think that was it.
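For the record, here is the shape of what I copied – a sketch from memory, with placeholders for the username, encrypted password, expiry date and certificate thumbprint:

    <!-- ServiceDefinition.csdef: inside the WebRole element -->
    <Imports>
      <Import moduleName="RemoteAccess" />
      <Import moduleName="RemoteForwarder" />
    </Imports>
    <Certificates>
      <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption"
                   storeLocation="LocalMachine" storeName="My" />
    </Certificates>

    <!-- ServiceConfiguration.cscfg: inside the Role element -->
    <ConfigurationSettings>
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="azureuser" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" value="BASE64-PLACEHOLDER" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2011-12-31T23:59:59.0000000+00:00" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" />
    </ConfigurationSettings>
    <Certificates>
      <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption"
                   thumbprint="0000000000000000000000000000000000000000" thumbprintAlgorithm="sha1" />
    </Certificates>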

It worked. I published the LightSwitch app, went to the Azure management portal, selected the instance, and clicked Connect.


What I found was a virtual Quad-Core Opteron with 767MB RAM and running Windows Server 2008 Enterprise SP2. It seems Azure does not use Server 2008 R2 yet – at least, not for Extra Small instances.


767MB RAM is less than I would normally consider for Server 2008 – this is Extra Small, remember – but I tried using my simple LightSwitch app and it seemed to cope OK, though memory is definitely tight.


This VM is actually not that small in relation to many Linux VMs out there, happily running Apache, PHP, MySQL and numerous web applications. Note that my Azure VM is not running SQL Server; SQL Azure runs on separate servers. I am not 100% sure why Azure does not use Server Core for VMs like this. It may be because Server Core is usually used in conjunction with GUI tools running remotely, and setting up all the permissions for this to work is a hassle.

I took a look at the Event Viewer. I have never seen a Windows event log without at least a few errors, and I was interested to see if a Microsoft-managed VM would be the first. It was not, though there are a mere 16 “Administrative events”, which is pretty good, though the VM has only been running for an hour or so. The errors included a bunch of boot-start drivers which failed to load, plus one of those typical obscure, probably-unimportant-but-who-knows Windows errors.

The Azure VM is not domain-joined, but is in a workgroup. It is also not activated; I presume it will become activated if I leave it running for more than 14 days.

Internet Explorer is installed but I was unable to browse the web, and attempting to ping out gave me “Request timed out”. Possibly strict firewall rules prevent this. It must be carefully balanced, since applications will need to connect out.

The DNS suffix is reddog.microsoft.com – a remnant of the Red Dog code name which was originally used for Azure.

As I understand it, the main purpose of remote desktop access is for troubleshooting, not so that you can install all sorts of extra stuff on your VM. But what if you did install all sorts of extra stuff? It would not be a good idea, since – again as I understand it – the VM could be zapped by Azure at any time, and replaced with a new one that had reverted to the original configuration. You are not meant to keep any data that matters on the VM itself; that is what the Azure storage services are for.

Five years of Amazon Web Services

Amazon introduced its Simple Storage Service in March 2006. S3 was not the first of the Amazon Web Services (AWS); they were originally developed for affiliates who needed programmatic access to the Amazon retail store in order to use its data on third-party web sites. That said, there is a profound difference between a web service for your own affiliates, and one for generic use. I consider S3 to mark the beginning of Amazon’s venture into cloud computing as a provider.

It is also something I have tracked closely since those early days. I quickly wrote a Delphi wrapper for S3; it did not set the open source world alight but did give me some hands-on experience of the API. I was also on the early beta for EC2.

Amazon now dominates the section of the cloud computing market which is its focus, thanks to keen pricing, steady improvements, and above all the fact that the services have mostly worked as advertised. I am not sure what its market share is, or even how to measure it, since cloud computing is a nebulous concept. This Wall Street Journal article from February 2011 gives Rackspace the number two slot but with only one third of Amazon’s cloud services turnover, and includes the memorable remark by William Fellows of the 451 Group, “In terms of market share Amazon is Coke and there isn’t yet a Pepsi.”

The open source Eucalyptus platform has paid Amazon a compliment by implementing its EC2 API:

Eucalyptus is a private cloud-computing platform that implements the Amazon specification for EC2, S3, and EBS. Eucalyptus conforms to both the syntax and the semantic definition of the Amazon API and tool suite, with few exceptions.
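That compatibility is easy to demonstrate: standard Amazon client libraries can talk to a Eucalyptus cloud just by changing the connection details. A sketch using the boto library for Python, with placeholder credentials and Eucalyptus’s typical default port and path – adjust for your own installation:

    import boto
    from boto.ec2.regioninfo import RegionInfo

    # Point boto's EC2 client at a private Eucalyptus front end instead of AWS.
    region = RegionInfo(name="eucalyptus", endpoint="eucalyptus.example.com")
    conn = boto.connect_ec2(
        aws_access_key_id="PLACEHOLDER-ACCESS-KEY",
        aws_secret_access_key="PLACEHOLDER-SECRET-KEY",
        is_secure=False,
        region=region,
        port=8773,
        path="/services/Eucalyptus",
    )

    # The same EC2 calls work as they would against Amazon.
    for reservation in conn.get_all_instances():
        print(reservation.instances)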

AWS is not just EC2 and S3. Other offerings include two varieties of cloud database, services for queuing, notification and email, and the impressive Elastic Beanstalk for automatically scaling your application on demand.

Should we worry about Amazon’s dominance in cloud computing? Possibly, especially as the barriers to entry are considerable. Another concern is that as more computing infrastructure becomes dependent on Amazon, the potential disruption if the service were to break increases. How many of Amazon’s AWS customers have a plan B for when EC2 fails? Amazon defuses anti-competitive concerns by continuing to offer commodity pricing.

Amazon has quietly changed the computing landscape though; and although this is a few weeks late, the fifth birthday of its cloud services deserves a mention.

Fit for business? Google updates App Engine with the Enterprise in mind

Google has updated App Engine to 1.4.3. The new version adds:

Prospective Search API for Python – this lets you register a large set of queries which are executed against a flow of data so you can create notifications or other actions whenever a match is found.

Testbed Unit Test Framework for Python – this lets you create stubs for Google services for lightweight unit tests; there is a sketch of it in use after this list.

Concurrent requests for Java – a single application instance can now serve multiple requests provided it is marked threadsafe. An important feature.

Java Remote API – the remote API lets you access an App Engine datastore from your local machine.
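To show what the testbed framework looks like in practice, here is a minimal sketch – it assumes the App Engine Python SDK is on your path, and the Greeting model is invented for illustration:

    import unittest
    from google.appengine.ext import db, testbed

    class Greeting(db.Model):  # hypothetical model, for illustration only
        content = db.StringProperty()

    class DatastoreTest(unittest.TestCase):
        def setUp(self):
            # Activate a testbed with an in-memory datastore stub, so the
            # test runs locally without touching the real service.
            self.testbed = testbed.Testbed()
            self.testbed.activate()
            self.testbed.init_datastore_v3_stub()

        def tearDown(self):
            self.testbed.deactivate()

        def test_put_and_count(self):
            Greeting(content="hello").put()
            self.assertEqual(1, Greeting.all().count())

    if __name__ == "__main__":
        unittest.main()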

I have had the sense that Google App Engine is more attractive to start-ups and small organisations than to enterprise customers. It is interesting to see Google working on bringing the Java and Python runtimes closer to parity, as Java is more widely used for enterprise development.

Another initiative aimed at enterprise customers is App Engine for Business, currently in preview. What you get is:

An Enterprise Administration Console for managing all apps built by your company, with access control lists.

99.9% service level agreement

Hosted SQL:

While many applications can be built on the App Engine Datastore (which uses Google’s BigTable database system), we know SQL is the industry standard for the enterprise, so we’ve got you covered. SQL database support on App Engine gives enterprise developers access to the full capabilities of a dedicated relational database, without the headache of managing it.

SSL to a URL that uses your domain, such as https://myapp.apps.example.com.

Pricing – $8 per user up to a maximum of $1000 per month. In other words, if you have more than 125 users the cost per user starts coming down; if you have 1000 users it is a bargain.

Has Google done enough to make App Engine attractive to enterprise customers? This post from a frustrated developer back in November 2010 complained about stability issues and other annoyances that do not really exist on Amazon or Microsoft Azure; the Salesforce.com platform does have some throttling limitations. But it does seem that Google is determined to address the issues and App Engine for Business looks promising.