Tag Archives: cloud computing

The Microsoft Azure VM role and why you might not want to use it

I’ve spent the morning talking to Microsoft’s Steve Plank – whose blog you should follow if you have an interest in Azure – about Azure roles and virtual machines, among other things.

Windows Azure applications are deployed to one of three roles, where each role is in fact a Windows Server virtual machine instance. The three roles are the web role for IIS (Internet Information Services) applications, the worker role for general applications, and, newly announced at the recent PDC, the VM role, which you can configure any way you like. The normal route to deploying a VM role is to build a VM on your local system and upload it, though in future you will be able to configure and deploy a VM role entirely online.
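To make the role concept concrete: a web or worker role is essentially a class that derives from RoleEntryPoint in the Azure SDK, with Azure calling your overrides as the instance starts and runs. A minimal worker role sketch of my own, not production code:

```csharp
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Called once when the instance starts; return false to abort startup.
        return base.OnStart();
    }

    public override void Run()
    {
        // If Run ever returns, Azure recycles the instance, so loop indefinitely.
        while (true)
        {
            // Typically you would dequeue a message from Azure queue storage
            // and process it here.
            Thread.Sleep(10000);
        }
    }
}
```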

It’s obvious that the VM role is the most flexible. You will even be able to use 64-bit Windows Server 2003 if necessary. However, there is a critical distinction between the VM role and the other two. With the web and worker roles, Microsoft will patch and update the operating system for you, but with the VM role it is up to you.

That does not sound too bad, but it gets worse. To understand why, you need to think in terms of a golden image for each role, which is stored somewhere safe in Azure and gets deployed to your instance as required.

In the case of the web and worker roles, that golden image is constantly updated as the system gets patched. In addition, Microsoft takes responsibility for backing up the system state of your instance and restoring it if necessary.

In the case of the VM role, the golden image is formed by your upload and only changes if you update it.

The reason this is important is that Azure might at any time replace your running VM (whichever role it is running) with the golden image. For example, if the VM crashes, or the machine hosting it suffers a power failure, then it will be restarted from the golden image.

Now imagine that Windows server needs an emergency patch because of a newly-discovered security issue. If you use the web or worker role, Microsoft takes responsibility for applying it. If you use the VM role, you have to make sure it is applied not only to the running VM, but also to the golden image. Otherwise, you might apply the patch, and then Azure might replace the VM with the unpatched golden image.

Therefore, to maintain a VM role properly you need to keep a local copy patched and refresh the uploaded golden image with your local copy, as well as updating the running instance. Apparently there is a differential upload, to reduce the upload time.

The same logic applies to any other changes you make to the VM. It is actually more complex than managing VMs in other scenarios, such as the Linux VM on which this blog is hosted.

Another feature which all Azure developers must understand is that you cannot safely store data on your Azure instance, whichever role it is running. Microsoft does not guarantee the safety of this data, and it might get zapped if, for example, the VM crashes and gets reverted to the golden image. You must store data in SQL Azure or in Azure blob storage instead.
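In practice that means writing even small amounts of persistent state out to storage rather than to the local disk. A sketch, assuming the StorageClient library that ships with the Azure SDK (the container and blob names are mine):

```csharp
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class BlobDemo
{
    static void Main()
    {
        // "UseDevelopmentStorage=true" targets the local dev fabric;
        // substitute a real storage account connection string to run on Azure.
        var account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
        CloudBlobClient client = account.CreateCloudBlobClient();

        CloudBlobContainer container = client.GetContainerReference("documents");
        container.CreateIfNotExist(); // harmless if the container already exists

        CloudBlob blob = container.GetBlobReference("report.txt");
        blob.UploadText("This text survives the instance being re-imaged");
    }
}
```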

This also impacts the extent to which you can customize the web and worker VMs. Microsoft will be allowing full administrative access to the VMs if you require it, but it is no good making extensive changes to an individual instance, since it could get reverted back to the golden image. The guidance is that if manual changes take more than 5 minutes to do, you are better off using the VM role.

A further implication is that you cannot realistically use an Azure VM role to run Active Directory, since Active Directory does not take kindly to being reverted to an earlier state. Plank says that third parties may come up with solutions that involve persisting Active Directory data to Azure storage.

Although I’ve talked about golden images above, I’m not sure exactly how Azure implements them. However, if I have understood Plank correctly, it is conceptually accurate.

The bottom line is that the best scenario is to live with a standard Azure web or worker role, as configured by you and by Azure when you created it. The VM role is a compromise that carries a significant additional administrative burden.

UK business applications stagger towards the cloud

I spent today evaluating several competing vertical applications for a small business working in a particular niche – I am not going to identify it or the vendors involved. The market is formed by a number of companies which have been serving it for some years, and which have Windows applications born in the desktop era and still being maintained and enhanced, plus some newer companies which have entered more recently with web-based solutions.

Several things interested me. The desktop applications seemed to suffer from all the bad habits of application development before design for usability became fashionable, and I saw forms with a myriad of fields and controls, each one no doubt satisfying a feature request, but forming a confusing and ugly user interface when put together. The web applications were not great, but seemed more usable, because a web UI encourages a simpler page-based approach.

Next, I noticed that the companies providing desktop applications talking to on-premise servers had found a significant number of their customers asking for a web-hosted option, but were having difficulty fulfilling the request. Typically they adopted a remote application approach using something like Citrix XenApp, so that they could continue to use their desktop software. In this type of solution, a desktop application runs on a remote machine but its user interface is displayed on the user’s desktop. It is a clever solution, but it is really a desktop/web hybrid and tends to be less convenient than a true web application. I felt that they needed to discard their desktop legacy and start again, but of course that is easier said than done when you have an existing application widely deployed, and limited development resources.

Even so, my instinct is to be wary of vendors who call desktop applications served by XenApp or the like cloud computing.

Finally, there was friction around integrating with Outlook and Exchange. Most users have Microsoft Office and use Outlook and Exchange for email, calendar and tasks. The vendors with web applications found their users demanding integration, but it is not easy to do this seamlessly and we saw a number of imperfect attempts at synchronisation. The vendors with desktop applications had an easier task, except when these were repurposed as remote applications on a hosted service. In that scenario the vendors insisted that customers also use their hosted Exchange, so they could make it work. In other words, customers have to build almost their entire IT infrastructure around the requirements of this single application.

It was all rather unsatisfactory. The move towards the cloud is real, but in this particular small industry sector it seems slow and painful.

Reflections on Microsoft PDC 2010

I’m in Seattle airport waiting to head home – so here are some quick reflections on Microsoft’s Professional Developers Conference 2010.

Let’s start with the content. There was a clear focus on two things: Windows Azure, and Windows Phone 7.

On Azure, its cloud platform, Microsoft impressed. Features are being added rapidly, and it looks solid and interesting. The announcements at PDC mean that Azure provides pretty much the complete Windows Server platform, should you want it. You will get elevated privileges for complete control over a server instance; and full IIS functionality including support for multiple web sites and the ability to install modules. You will also be able to remote desktop into your Azure servers, which is going to make Windows admins feel more comfortable with Azure.

The new virtual machine role is also a big deal, even though in some ways it goes against the multi-tenanted philosophy by leaving the customer responsible for patches and updates. Businesses with existing virtual servers can simply move them to Azure if they no longer wish to run their own hardware. There are also existing tools for migrating physical servers to virtual.

I asked Bob Muglia, president of server and tools at Microsoft, whether having all these VMs maintained by customers and potentially compromised with malware posed a security threat to the platform. He assured me that they are fully isolated, and that the main danger is to the customer who might consume unexpected amounts of bandwidth.

Simply running on an Azure VM does not take full advantage of the platform though. It makes more sense to hook into Azure services such as SQL Azure, or the non-relational storage services, and deploy to Azure web or worker roles where Microsoft takes care of maintenance. There is also a range of middleware services called AppFabric; see here for a few notes on these.

If there was one gap in the Azure story at PDC, it was a lack of partner announcements. Microsoft says there are more than 20,000 applications running on Azure, but we did not hear much about them, or about notable large customers embracing Azure. There is still a lot of resistance to the cloud among customers. I asked some attendees at lunch whether they expect to use Azure; the answer was “no, we have our own datacenter”.

I think the partner announcements will come. Microsoft is firmly behind Azure now, and it makes sense for its customers. I expect Azure to succeed; but whether it will do well enough to counter-balance the cost to Microsoft of migration away from on-premise servers is an open question.

Alongside Azure, though hardly mentioned at PDC, is the hosted application business originally called BPOS and now called Office 365. This is not currently hosted on Azure, though Muglia told me that most of it will in time move there. There are some potential synergies here, for example in Azure workflow applications that handle SharePoint forms or documents.

Microsoft’s business is primarily based on partners selling Windows hardware and licenses for on-premise or client software. Another open question is how easily the company can re-orient itself to be a cloud platform and services company. It is a massive shift.

What about Windows Phone? Microsoft has some problems here, and they are not primarily to do with the phone itself, which is decent. There are a few issues over the design of the launch devices, and features that are lacking initially. Further, while the Silverlight and XNA SDK forms a strong development platform, there is a need for a native code SDK and I expect this will follow at some point.

The key issue though is that outside the Microsoft bubble there is not much interest in the phone. Google Android meets the needs of the OEM hardware and operator partners, being open and easily customised. Apple owns the market for high-end devices with the design quality and ease of use that comes from single-vendor control of the whole stack. The momentum behind these platforms is such that it will not be easy for Microsoft to grab much market share, or attention from third-party app developers. It deserves to do well; but I will not be surprised if it under-performs relative to its quality.

There was also some good material to be found on the PDC sidelines, as it were. Anders Hejlsberg presented on new asynchronous features coming in C# 5.0, which look like a breakthrough in making concurrent programming safer and easier. He also showed a bit of Microsoft’s work on compiler as a service, which has huge potential. Patrick Smacchia has an enthusiastic report on the C# presentation. Herb Sutter gave a brilliant talk on lambdas.

The PDC site lets you stream pretty much all the sessions and seems to work very well. The player application is written in Silverlight. Note that there are twice as many sessions as appear in the schedule, since many were pre-recorded and only show in the full session list.

Why did Microsoft run such a small event, with only around 1000 attendees? I asked a couple of people about this; the answer seems to be partly as a cost-saving measure – it is much cheaper to run an event on the Microsoft campus than to hire an external venue and pay transport and expenses for all the speakers and staff – and partly to emphasise the virtual aspect of PDC, with a global audience tuning in.

This does not altogether make sense to me. First, Microsoft is still generating a ton of cash, as we heard in the earnings call at the event, and PDC is a key opportunity to market its platform to developers and influencers, so it should not worry too much about the cost. Second, you can do virtual as well as physical; they are not alternatives. You get more engagement from people who are actually present.

One of the features of the player is that you see how many are currently streaming the content. I tuned into Mark Russinovich’s excellent session on Azure – he says he has “drunk the cloud kool-aid” – while it was being streamed live, and was surprised to see only around 300 virtual attendees. If that figure is accurate, it is disappointing, though I am sure there will be thousands of further views after the event.

Finally, what about all the IE9/HTML 5 vs Silverlight discussion generated at PDC? Clearly Microsoft’s messaging went badly awry here, and frankly the company has only itself to blame. It cannot be surprised if, after it makes a huge noise about how IE9 forms a great client for web applications – standards-based and integrated with Windows – people question what sort of role is envisaged for Silverlight. It did not help that a planned session on Silverlight futures was apparently cancelled, probably for innocent reasons such as not being quite ready to show, but the cancellation increased speculation that Silverlight is now being downplayed.

Microsoft chose to say nothing on the subject, other than some remarks by Bob Muglia to journalist Mary Jo Foley which seem to confirm that yes, Silverlight is no longer Microsoft’s key technology for cross-platform web applications.

If that was not quite the message Microsoft intended, then why not clarify the matter to press, myself included, as we sat in the press room on Microsoft’s campus?

My take is that while Silverlight is by no means dead, it seems destined for a lesser role than was once envisaged – a shame, as it is an excellent cross-platform .NET client.

Microsoft PDC big on Azure, quiet on Silverlight

I’m at Microsoft PDC in Seattle. The keynote, introduced by CEO Steve Ballmer, started with a recap of the company’s success with Windows 7 – 240 million sold, we were told, and adoption plans among 88% of businesses – and showing off Windows Phone 7 (all attendees will receive a device) and Internet Explorer 9.

IE9 guy Dean Hachamovitch demonstrated the new browser’s hardware acceleration, and made an intriguing comment. When highlighting IE9’s embrace of web standards, he noted that “accelerating only pieces of the browser holds back the web.” It sounded like a jab at plug-ins, but what about Microsoft’s own plug-in, Silverlight? A good question. You could put this together with Ballmer’s comment that “we’ve tried to make the web feel more like native applications” as evidence that Microsoft sees HTML 5 rather than Silverlight as its primary web application platform.

Then again you can argue that it just happens Microsoft had nothing to say about Silverlight, other than in the context of Windows Phone 7 development, and that its turn will come. The new Azure portal is actually built in Silverlight.

The messaging is tricky, and I found it intriguing, especially coming after the Adobe MAX conference, where there were public sessions on Flash vs HTML and a day two keynote emphasising the importance of both. All of which shows that Adobe has a tricky messaging problem as well; but it is at least addressing it, whereas Microsoft so far is not.

The keynote moved on to Windows Azure, and this is where the real news was centered. Bob Muglia, president of the Server and Tools business, gave a host of announcements on the subject. Azure is getting a Virtual Machine role, which will allow you to upload server images to run on Microsoft’s cloud platform, and to create new virtual machines with full control over how they are configured. Server 2008 R2 is the only supported OS initially, but Server 2003 will follow.

Remote Desktop is also coming to Azure, which will mean instant familiarity for Windows admins and developers.

Another key announcement was Windows Azure Marketplace, where third parties will be able to sell “building block components, training, services, and finished services and applications.” This includes DataMarket, the new name for the Dallas project, which is for delivering live data as a service using the OData protocol. An OData library has been added to the Windows Phone 7 SDK, making the two a natural fit.
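For the curious, consuming an OData feed from .NET is straightforward: the client library translates LINQ into OData query options on the URL. A sketch of my own against the public Northwind sample service (the Product class is a minimal hand-written mapping):

```csharp
using System;
using System.Data.Services.Client;
using System.Linq;

// Minimal entity class matching the feed's Products entity set.
public class Product
{
    public int ProductID { get; set; }
    public string ProductName { get; set; }
    public decimal? UnitPrice { get; set; }
}

class ODataDemo
{
    static void Main()
    {
        var ctx = new DataServiceContext(
            new Uri("http://services.odata.org/Northwind/Northwind.svc/"));

        // On the wire this becomes something like:
        //   Products?$filter=UnitPrice lt 20&$top=5
        var cheap = ctx.CreateQuery<Product>("Products")
                       .Where(p => p.UnitPrice < 20)
                       .Take(5);

        foreach (var p in cheap)
            Console.WriteLine(p.ProductName);
    }
}
```

The Windows Phone 7 OData library works along the same lines, though with asynchronous loading, since the phone does not allow blocking network calls.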

Microsoft is also migrating Team Foundation Server (TFS) to Azure, interesting both as a case study in moving a complex application, and as a future option for development teams who would rather not wrestle with the complexities of deploying this product.

Next came Windows Azure AppFabric Access Control, which despite its boring name has huge potential. This is about federated identity – both with Active Directory and other identity services. In the example we saw, Facebook was used as an identity provider alongside Microsoft’s own Active Directory, and users got different access rights according to the login they used.

In another guise, Azure AppFabric – among the most confusing Microsoft product names ever – is a platform for hosting composite workflow applications.

Java support is improving and Microsoft says that you will be able to run the Java environment of your choice from 2011.

Finally, there is a new “Extra small” option for Azure instances, aimed at developers, priced at $0.05 per compute hour. This is meant to make the platform more affordable for small developers, though if you calculate the cost over a year (8,760 hours at $0.05) it still amounts to over $400; not too much perhaps, but still significant.

Attendees were left in no doubt about Microsoft’s commitment to Azure. As for Silverlight, watch this space.

Ray Ozzie no longer to be Microsoft’s Chief Software Architect

A press release, in the form of a memo from CEO Steve Ballmer, tells us that Ray Ozzie is to step down from his role as Chief Software Architect. He is not leaving the company:

Ray and I are announcing today Ray’s intention to step down from his role as chief software architect. He will remain with the company as he transitions the teams and ongoing strategic projects within his organization … Ray will be focusing his efforts in the broader area of entertainment where Microsoft has many ongoing investments.

It is possible that I have not seen the best of Ozzie. His early Internet Services Disruption memo was impressive, but the public appearances I have seen at events like PDC have been less inspiring. He championed Live Mesh, which I thought had promise but proved disappointing on further investigation, and was later merged with Live Sync, becoming a smaller initiative than was once envisaged. Ballmer says Ozzie was also responsible for “conceiving, incubating and shepherding” Windows Azure, in which case he deserves credit for what seems to be a solid platform.

Ozzie may have done great work out of public view; but my impression is that Microsoft lacks the ability to articulate its strategy effectively, with neither Ozzie nor Ballmer succeeding in this. Admittedly it is a difficult task for such a diffuse company; but it is a critical one. Ballmer says he won’t refill the CSA role, which is a shame in some ways. A gifted strategist and communicator in that role could bring the company considerable benefit.

Salesforce.com is the wrong kind of cloud says Oracle’s Larry Ellison

Oracle CEO Larry Ellison took multiple jabs at Salesforce.com in the welcome keynote at OpenWorld yesterday.

He said it was old, not fault tolerant, not elastic, and built on a bad security model since all customers share the same application. “Elastic” in this context means able to scale on demand.

Ellison was introducing Oracle’s new cloud-in-a-box, the Exalogic Elastic Cloud. This features 30 servers and 360 cores packaged in a single cabinet. It is both a hardware and software product, using InfiniBand networking internally for fast communication and Oracle VM for hosting virtual machines running either Oracle Linux or Solaris. Oracle is positioning Exalogic as the ideal machine for Java applications, especially if they use the Oracle WebLogic application server, and as a natural partner for the Exadata Database Machine.

Perhaps the most interesting aspect of Exalogic is that it uses the Amazon EC2 (Elastic Compute Cloud) API. This is also used by Eucalyptus, the open source cloud infrastructure adopted by Canonical for its Ubuntu Enterprise Cloud. With these major players adopting the Amazon API, you could almost call it a standard.

Ellison’s Exalogic cloud is a private cloud, of course, and although he described it as low maintenance it is nevertheless the customer’s responsibility to provide the site, the physical security and to take responsibility for keeping it up and running. Its elasticity is also open to question. It is elastic from the perspective of an application running on the system, presuming that there is spare capacity to run up some more VMs as needed. It is not elastic if you think of it as a single powerful server that will be eye-wateringly expensive; you pay for all of it even though you might not need all of it, and if your needs grow to exceed its capacity you have to buy another one – though Ellison claimed you could run the entire Facebook web layer on just a couple of Exalogics.

In terms of elasticity, there is actually an advantage in the Salesforce.com approach. If you share a single multi-tenanted application with others, then elasticity is measured by the ability of that application to scale on demand. Behind the scenes, new servers or virtual servers may come into play, but that is not something that need concern you. The Amazon approach is more hands-on, in that you have to work out how to spin up (or down) VMs as needed. In addition, running separate application instances for each customer means a larger burden of maintenance falling on the customer – which with a private cloud might mean an internal customer – rather than on the cloud provider.

In the end it is not a matter of right and wrong, more that the question of what is the best kind of cloud is multi-faceted. Do not believe all that you hear, whether the speaker is Oracle’s Ellison or Marc Benioff from Salesforce.com.

Incidentally, Salesforce.com runs on Oracle and Benioff is a former Oracle VP.

Postscript: as Dennis Howlett observes, the high capacity of Exalogic is actually a problem – he estimates that only 5% at most of Oracle’s customers could make use of such an expensive box. Oracle will address this by offering public cloud services, presumably sharing some of the same technology.

Cloud users get Microsoft Office Web Apps update first

Users of Office Web Apps have just been given some minor but welcome updates, described here.

They include printing in Word when in edit mode, new chart tools in Excel, and, again in Excel, the handy autofill tool, which lets you drag the fill handle at the bottom right corner of a selection to extend it automatically. Select cells containing the first few months of the year and drag down, for example, and the blank cells fill with the remaining months.


Office Web Apps also work on SharePoint 2010 deployed internally. However, the version of Office Web Apps for SharePoint has not been updated, so these users (who have to pay for Office licenses) now have an inferior version to the one available to free users on SkyDrive.

Automatic and incremental bug-fixes and updates are one of the inherent advantages of cloud computing.

Office and Windows Live SkyDrive – don’t miss unlucky Clause 13

How secure is Windows Live SkyDrive?

One of the most notable features of Office 2010 is that you can save directly to the Web, without any fuss. In most of the applications this option is accessed via the File menu and the Save & Send submenu. Incidentally, this submenu used to be called Share, but someone decided that was confusing and that Save & Send is less confusing. I think they are both confusing; I would put the Save options under the Save submenu but there it is; it is not too hard to find.


Microsoft does not like to be too consistent; so OneNote 2010 has separate Share and Send menus. The Share menu has a Share On Web option.


What Save to Web actually does is to put your document on Windows Live SkyDrive. I am a fan of SkyDrive; it is capacious (25GB), performs OK, has been reliable in my experience, and is free.

The way the sharing works is based on Microsoft Live IDs and Live Messenger. You can only set permissions for a folder, not for an individual document, and you have options ranging from private to public. Usually the most useful way to set permissions is not through the slider but by adding specific people. Provided they have a Live ID matching the email address they give, they will then get access.


You can also specify whether the access is view only, or “add, edit details, and delete files” – a bit all-or-nothing, but still useful.


SkyDrive hooks in with Office Web Apps so you can create and edit documents directly in the browser – provided it is a supported browser and that the Web App doesn’t detect you are on a mobile device, in which case it is view-only. The view-only thing is a shame when it comes to a large screen device like an iPad, though the full version nearly works.


Overall it’s a major change for Office, even though similar functionality has been around for a while from the likes of Zoho and Google Docs. This is Office, after all, the world’s most popular office suite; and plenty of users will be trying out these features because they are there, and thinking that they could be pretty useful.

There is one awkward question though. Is Windows Live SkyDrive secure? It turns out that this is not an easy question to answer. Of course it cannot be 100% secure; but even assessing its security is not easy. If you try to find out you are likely to end up here – the Microsoft Service Agreement. Which says, in bold type so you don’t miss it:

13. WE MAKE NO WARRANTY.

We provide the service ‘as-is,’ ‘with all faults’ and ‘as available.’ We do not guarantee the accuracy or timeliness of information available from the service. We and our affiliates, resellers, distributors and vendors (collectively, the ‘Microsoft parties’) give no express warranties, guarantees or conditions. You may have additional consumer rights under your local laws that this contract cannot change. We exclude any implied warranties including those of merchantability, fitness for a particular purpose, workmanlike effort and non-infringement.

14. LIABILITY LIMITATION.

You can recover from the Microsoft parties only direct damages up to an amount equal to your service fee for one month. You cannot recover any other damages, including consequential, lost profits, special, indirect, incidental or punitive damages.

I guess Clause 13 could be called the unlucky clause. If you are unlucky, don’t come crying to Microsoft.

There are two big questions here. One is how secure your documents are against unauthorised access. The other is how reliable the service is. Might you log on one day and find you cannot get access, or that all your documents have disappeared?

Three observations. First, despite clause 13, Microsoft has a lot to lose if its service fails. It has to succeed in cloud computing to have a profitable future, and a major data-losing catastrophe is costly, in that it drives customers away. The Danger episode was bad enough; though even then Microsoft eventually recovered the data it initially said had been lost.

Second, it may well be that the biggest security risk is from careless users, not from Microsoft. If your password (or that of a friend to whom you have given read or write access) is a favourite football team it won’t be surprising if somebody guesses.

Third, I have no idea how to quantify the risk of Microsoft losing data or denying access to my documents. That suggests it would be foolish to keep data there without backing it up elsewhere from time to time. The same applies to other cloud services. If you pay for a service, know how it is backed up to a different location, have tested the effectiveness of that backup, and know that there are archives as well as backups – in other words, that you can go back in time – then you might reasonably feel more confident. Otherwise, well, see clause 13 above.

Microsoft TechEd 2010 wrap-up: cloud benefits, cloud sceptics

Microsoft TechEd in New Orleans continues today, but I’m back in the UK; unfortunately I was not able to stay for the whole event.

So aside from discovering that walking the streets of New Orleans in June is like taking a Turkish bath, what did I learn? The biggest takeaway for me is that Microsoft is now serious about cloud computing, at least on the server and tools side represented here. As I put it in my report for The Register, the body language has changed: instead of “we do cloud if you must”, Microsoft is now pushing hard to promote Windows Azure and BPOS – hosted Exchange, SharePoint and Live Meeting – while still emphasising that Windows continues to give you a choice of on-premise servers.

That does not mean Microsoft is winning in the cloud, of course. There is a question in my mind about whether Microsoft is merely exporting the complexity of on-premise to serve it over the Internet, rather than developing low-touch cloud systems. I think there is a bit of both. Windows Intune is an interesting case. This is a sort of cloud version of System Center, for managing laptops and desktop PCs. On the one hand, I was impressed with its ease of use in the demos we saw. On the other hand, what does managing the intricacies of desktop PCs have to do with cloud computing? Not much, perhaps, except that it is a task that still needs to be done, and if the cloud can make it easier then I’m all in favour.

Although Microsoft was talking up the cloud at TechEd, many of the attendees I spoke to were less enthusiastic. One telling point: I spoke to a training company representative in the vast exhibition hall and asked which courses were most popular. Among other things, he said he was doing a lot of Silverlight, a little WPF, and that there was little interest in Windows Azure.

I also attended an “expert panel” on cloud security, which proved an entertaining affair. The lively Laura Chappell said the whole thing was a nightmare, and none of the other experts dared to disagree. I chatted to her afterwards about some of the issues. Here is a sample:

One of the things is e-discovery. You have something on your computer that indicates someone is planning something against the President of the United States. With the Patriot Act, they can immediately go to that service provider, and they don’t care if it’s virtualised across 10 different systems; they are going to shut them down, and they do not care who else’s stuff is on there – the Patriot Act gives them the power to do that. You went out of business, so did 7 other companies, and they don’t have a timeline, with the Patriot Act, for them to bring their servers back up.

If anyone sceptical of the benefits of cloud went along, they would not have come away reassured.

Finally, there was a ton of good stuff announced at TechEd. I attended a press briefing the day before, with sessions on Server 2008 R2 SP1, Intune, and other topics. The most interesting part of the day was a session which I am not allowed to talk about; but I will say mysteriously that Microsoft’s strategy for the product was not too far removed from one that I proposed on this blog, though I am sure there is no connection.

The other announcements were public. If you have not checked out the new Azure Tools, don’t hesitate; they are much improved. Unfortunately I hardly dare to use Azure, because although I have some free hours from MSDN I’m worried about leaving some app running by mistake and ending up with a big credit card bill. Microsoft needs to make Azure more friendly for developers experimenting.

Windows AppFabric is now released and pretty interesting, though it was not prominent at TechEd. Given that many business processes are essentially workflows, and that this in combination with Visual Studio 2010 makes building and deploying a workflow app much easier, I am surprised it does not get more attention.
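Part of the appeal is how little code a basic workflow needs in .NET 4, which is what AppFabric hosts and monitors. A trivial sketch of my own, composing activities in code rather than in the Visual Studio 2010 designer:

```csharp
using System.Activities;
using System.Activities.Statements;

class WorkflowDemo
{
    static void Main()
    {
        // Two steps standing in for a real business process; a real
        // workflow would add persistence and be hosted in AppFabric/IIS.
        Activity workflow = new Sequence
        {
            Activities =
            {
                new WriteLine { Text = "Expense claim received" },
                new WriteLine { Text = "Expense claim approved" }
            }
        };

        WorkflowInvoker.Invoke(workflow); // runs the workflow synchronously
    }
}
```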

Serena flip-flops: goes Google, then back to Microsoft

Interesting story from Serena Software, an 800-employee company with 29 offices around the globe, whose products cover application lifecycle management and business process management.

In June 2009 the company switched to Google Apps, meriting a post on the Official Google Enterprise Blog. Ron Brister, Senior Manager of Global IT Operations, talks about the change:

it was becoming increasingly clear that our messaging infrastructure was lacking. Inbox storage space was a constant complaint. Server maintenance was extremely time-consuming, and backups were inconsistent. Then we found that – calculating additional licenses of Microsoft Exchange, client access licenses for users, disaster recovery software, and additional disk storage space to increase mailbox quotas to 1.5GB – staying with our existing provider would have cost us upwards of $1 million. That was a nearly impossible number to justify with executives.

We thought about replacing our on-premise solution, but to tell the truth, we were skeptical. I, personally, had been a Microsoft admin for 15 years, and Microsoft technologies were ingrained in my thought processes. But Google Apps provided many pluses: Gmail, Google’s Postini messaging security software and 25 GB of mailbox space, as well as greater uptime and 24/7 phone support.

The overall move to Google Apps took all of six hours. We waited for the phones to ring, but all we heard was silence – in fact, we sat there playing meebo for quite a while – and still, nothing happened. We cut the cord all in one stroke to avoid the hassle of living in two environments at once. We made the switch globally, all in one day – and, due to the advantages of this cloud computing solution, we’ve never looked back.

Sounds good – the perfect PR story for Google. Until this happened, one year on – it’s Brister again:

We work closely with our 15,000 worldwide customers to deliver solutions that help them be more successful.  As a result, we rely heavily on collaboration tools for our employees to share information and work together with customers and partners. 

This is one of the chief reasons we’ve chosen to adopt Exchange Online and SharePoint Online together with Office 2010. They deliver trustworthy, enterprise-class solutions – with the performance, security, privacy, reliability and support we require. We know that Microsoft is a leader in providing these kinds of solutions, and in our discussions with them, it became clear that they are 100% committed to Serena’s success and delivering solutions that drive the future of collaboration.

Using Office, SharePoint and Exchange will allow us to collaborate more effectively internally and with customers and partners, many of whom use the same technologies, and we can do so without having to deal with content loss or clients being unable to open or edit a document. In particular, Exchange is unchallenged in its calendaring and contact management abilities, mission critical functions for a global company such as Serena.

Big change. Leaving aside the fluff about “trustworthy, enterprise-class solutions”, what went wrong? Did the phones start ringing?

I’m guessing that the biggest clue here is the point about many of Serena’s customers using “the same technologies”. Apparently there was friction between Office and Exchange used elsewhere and Google Apps used at Serena. Of course this could work the other way, if the day comes when more of your customers are on Google.

Here’s a few more clues from Brister:

There are alternatives on the market that promise lower costs, but in our experience, this is a fallacy.  When looking at alternatives, CIOs should really evaluate the total cost of ownership as well as the impact on user productivity and satisfaction, as there can be hidden costs and higher TCO.  For instance, slow performance and/or lack of enterprise-class features (e.g., with calendaring and contact management) will torpedo the value of such a backbone system, and may get the CIO fired.

We are currently upgrading to Office 2010, and look forward to taking advantage of its hybrid nature – enabling us to embrace the cloud for scale and more rapid technology innovation while preserving what we like about software, including powerful capabilities and the ability to work anywhere – even offline.

Brister again mentions calendaring and contact management. I guess things like those meeting invitations that automatically populate your calendar and which you accept or reject with a click or two. Offline gets a plug too.

Note that Serena has not gone back to on-premise. I’d be interested to know how the cost of the new BPOS solution compares to the “upwards of $1 million” cost which Brister complained about in 2009, for staying on-premise.

Did Microsoft simply buy Serena back? Brister says no:

Since this blog posted, there has been some speculation that our decision to migrate from Google Apps to Microsoft BPOS was based solely on price, and that Microsoft, to quote a favorite film, made us an offer we couldn’t refuse.  This is 100% false.  Microsoft is not giving us anything for free. 

It’s important not to make too much of one case study. Who knows, Brister may be back a year from now with another story. But it shows that Microsoft cannot be counted out when it comes to cloud-hosted Enterprise software. I’d be interested in hearing other accounts of how the “Go Google” switch works out in practice.