Category Archives: azure

The Microsoft Azure VM role and why you might not want to use it

I’ve spent the morning talking to Microsoft’s Steve Plank – whose blog you should follow if you have an interest in Azure – about Azure roles and virtual machines, among other things.

Windows Azure applications are deployed to one of three roles, where each role is in fact a Windows Server virtual machine instance. The three roles are the web role for IIS (Internet Information Services) applications, the worker role for general applications, and, newly announced at the recent PDC, the VM role, which you can configure any way you like. The normal route to deploying a VM role is to build a VM on your local system and upload it, though in future you will be able to configure and deploy a VM role entirely online.

It’s obvious that the VM role is the most flexible. You will even be able to use 64-bit Windows Server 2003 if necessary. However, there is a critical distinction between the VM role and the other two. With the web and worker roles, Microsoft will patch and update the operating system for you, but with the VM role it is up to you.

That does not sound too bad, but it gets worse. To understand why, you need to think in terms of a golden image for each role, which is stored somewhere safe in Azure and is deployed to your instance as required.

In the case of the web and worker roles, that golden image is constantly updated as the system gets patched. In addition, Microsoft takes responsibility for backing up the system state of your instance and restoring it if necessary.

In the case of the VM role, the golden image is formed by your upload and only changes if you update it.

The reason this is important is that Azure might at any time replace your running VM (whichever role it is running) with the golden image. For example, if the VM crashes, or the machine hosting it suffers a power failure, then it will be restarted from the golden image.

Now imagine that Windows Server needs an emergency patch because of a newly-discovered security issue. If you use the web or worker role, Microsoft takes responsibility for applying it. If you use the VM role, you have to make sure it is applied not only to the running VM, but also to the golden image. Otherwise, you might apply the patch, and then Azure might replace the VM with the unpatched golden image.

Therefore, to maintain a VM role properly you need to keep a local copy patched and refresh the uploaded golden image with your local copy, as well as updating the running instance. Apparently there is a differential upload, to reduce the upload time.

The same logic applies to any other changes you make to the VM. It is actually more complex than managing VMs in other scenarios, such as the Linux VM on which this blog is hosted.

Another feature which all Azure developers must understand is that you cannot safely store data on your Azure instance, whichever role it is running. Microsoft does not guarantee the safety of this data, and it might get zapped if, for example, the VM crashes and gets reverted to the golden image. You must store data in SQL Azure or in Azure blob storage instead.
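To make the point concrete, here is a minimal sketch of writing to blob storage in C#, assuming the StorageClient library that ships with the Azure SDK; the connection string, container and blob names are placeholders:

    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    class BlobSketch
    {
        static void Main()
        {
            // Placeholder credentials; a real connection string carries your account key.
            CloudStorageAccount account = CloudStorageAccount.Parse(
                "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey");

            CloudBlobClient client = account.CreateCloudBlobClient();

            // Containers group blobs; create it if it does not yet exist.
            CloudBlobContainer container = client.GetContainerReference("appdata");
            container.CreateIfNotExist();

            // Write state to a blob rather than the instance's local disk,
            // which can be wiped when the VM is reverted to its image.
            CloudBlob blob = container.GetBlobReference("state.txt");
            blob.UploadText("some state worth keeping");
        }
    }

Reads work the same way in reverse, with DownloadText; the point is that the blob survives even if the instance does not.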

This also impacts the extent to which you can customise the web and worker VMs. Microsoft will be allowing full administrative access to the VMs if you require it, but it is no good making extensive changes to an individual instance, since it could be reverted to the golden image. The guidance is that if manual changes take more than five minutes to do, you are better off using the VM role.

A further implication is that you cannot realistically use an Azure VM role to run Active Directory, since Active Directory does not take kindly to being reverted to an earlier state. Plank says that third parties may come up with solutions that involve persisting Active Directory data to Azure storage.

Although I’ve talked about golden images above, I’m not sure exactly how Azure implements them. However, if I have understood Plank correctly, it is conceptually accurate.

The bottom line is that the best scenario is to live with a standard Azure web or worker role, as configured by you and by Azure when you created it. The VM role is a compromise that carries a significant additional administrative burden.

Reflections on Microsoft PDC 2010

I’m in Seattle airport waiting to head home – so here are some quick reflections on Microsoft’s Professional Developers Conference 2010.

Let’s start with the content. There was a clear focus on two things: Windows Azure, and Windows Phone 7.

On the Azure front – the cloud platform – Microsoft impressed. Features are being added rapidly, and it looks solid and interesting. The announcements at PDC mean that Azure provides pretty much the complete Windows Server platform, should you want it. You will get elevated privileges for complete control over a server instance, and full IIS functionality including support for multiple web sites and the ability to install modules. You will also be able to remote desktop into your Azure servers, which is going to make Windows admins feel more comfortable with Azure.

The new virtual machine role is also a big deal, even though in some ways it goes against the multi-tenanted philosophy by leaving the customer responsible for patches and updates. Businesses with existing virtual servers can simply move them to Azure if they no longer wish to run their own hardware. There are also existing tools for migrating physical servers to virtual.

I asked Bob Muglia, president of the Server and Tools business at Microsoft, whether having all these VMs maintained by customers and potentially compromised with malware posed a security threat to the platform. He assured me that they are fully isolated, and that the main danger is to the customer, who might consume unexpected amounts of bandwidth.

Simply running on an Azure VM does not take full advantage of the platform though. It makes more sense to hook into Azure services such as SQL Azure, or the non-relational storage services, and to deploy to Azure web or worker roles where Microsoft takes care of maintenance. There is also a range of middleware services called AppFabric; see here for a few notes on these.

If there was one gap in the Azure story at PDC, it was a lack of partner announcements. Microsoft says there are more than 20,000 applications running on Azure, but we did not hear much about them, or about notable large customers embracing Azure. There is still a lot of resistance to the cloud among customers. I asked some attendees at lunch whether they expect to use Azure; the answer was “no, we have our own datacenter”.

I think the partner announcements will come. Microsoft is firmly behind Azure now, and it makes sense for its customers. I expect Azure to succeed; but whether it will do well enough to counter-balance the cost to Microsoft of migration away from on-premise servers is an open question.

Alongside Azure, though hardly mentioned at PDC, is the hosted application business originally called BPOS and now called Office 365. This is not currently hosted on Azure, though Muglia told me that most of it will in time move there. There are some potential synergies here, for example in Azure workflow applications that handle SharePoint forms or documents.

Microsoft’s business is primarily based on partners selling Windows hardware and licenses for on-premise or client software. Another open question is how easily the company can re-orient itself to be a cloud platform and services company. It is a massive shift.

What about Windows Phone? Microsoft has some problems here, and they are not primarily to do with the phone itself, which is decent. There are a few issues over the design of the launch devices, and features that are lacking initially. Further, while the Silverlight and XNA SDK forms a strong development platform, there is a need for a native code SDK and I expect this will follow at some point.

The key issue though is that outside the Microsoft bubble there is not much interest in the phone. Google Android meets the needs of the OEM hardware and operator partners, being open and easily customised. Apple owns the market for high-end devices with the design quality and ease of use that comes from single-vendor control of the whole stack. The momentum behind these platforms is such that it will not be easy for Microsoft to grab much market share, or attention from third-party app developers. It deserves to do well; but I will not be surprised if it under-performs relative to its quality.

There was also some good material to be found on the PDC sidelines, as it were. Anders Hejlsberg presented on new asynchronous features coming in C# 5.0, which look like a breakthrough in making concurrent programming safer and easier. He also showed a bit of Microsoft's work on compiler as a service, which has huge potential. Patrick Smacchia has an enthusiastic report on the C# presentation. Herb Sutter gave a brilliant talk on lambdas.
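For the curious, the pattern Hejlsberg demonstrated looks something like this – a sketch in the style of the preview, with a placeholder URL. The async keyword marks a method the compiler rewrites into a state machine, and await yields control rather than blocking the thread:

    using System;
    using System.Net;
    using System.Threading.Tasks;

    class AsyncSketch
    {
        // Compiles against the task-based WebClient methods; in the
        // preview these were supplied as extension methods.
        static async Task<int> GetPageLengthAsync(string url)
        {
            var client = new WebClient();
            string page = await client.DownloadStringTaskAsync(url);
            return page.Length;
        }

        static void Main()
        {
            Task<int> task = GetPageLengthAsync("http://example.com/");
            Console.WriteLine(task.Result); // blocking here only for the demo
        }
    }

The download runs without tying up a thread, yet reads like straight-line code; that is the breakthrough.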

The PDC site lets you stream pretty much all the sessions and seems to work very well. The player application is written in Silverlight. Note that there are twice as many sessions as appear in the schedule, since many were pre-recorded and only show in the full session list.

Why did Microsoft run such a small event, with only around 1000 attendees? I asked a couple of people about this; the answer seems to be partly as a cost-saving measure – it is much cheaper to run an event on the Microsoft campus than to hire an external venue and pay transport and expenses for all the speakers and staff – and partly to emphasise the virtual aspect of PDC, with a global audience tuning in.

This does not altogether make sense to me. For one thing, Microsoft is still generating a ton of cash, as we heard in the earnings call at the event, and PDC is a key opportunity to market its platform to developers and influencers, so it should not worry too much about the cost. For another, you can do virtual as well as physical; they are not alternatives. You get more engagement from people who are actually present.

One of the features of the player is that you see how many are currently streaming the content. I tuned into Mark Russinovich’s excellent session on Azure – he says he has “drunk the cloud kool-aid” – while it was being streamed live, and was surprised to see only around 300 virtual attendees. If that figure is accurate, it is disappointing, though I am sure there will be thousands of further views after the event.

Finally, what about all the IE9/HTML 5 vs Silverlight discussion generated at PDC? Clearly Microsoft’s messaging went badly awry here, and frankly the company has only itself to blame. It cannot be surprised that, after it made a huge noise about how IE9 forms a great client for web applications, standards-based and integrated with Windows, people question what sort of role is envisaged for Silverlight. It did not help that a planned session on Silverlight futures was apparently cancelled, probably for innocent reasons such as not being quite ready to show, but the cancellation increased speculation that Silverlight is now being downplayed.

Microsoft chose to say nothing on the subject, other than some remarks by Bob Muglia to freelance journalist Mary Jo Foley which seem to confirm that yes, Silverlight is no longer Microsoft’s key technology for cross-platform web applications.

If that was not quite the message Microsoft intended, then why not clarify the matter to press, myself included, as we sat in the press room on Microsoft’s campus?

My take is that while Silverlight is by no means dead, it seems destined for a lesser role than was once envisaged – a shame, as it is an excellent cross-platform .NET client.

AppFabric – Microsoft’s new middleware

I took the opportunity here at Microsoft PDC to find out what Microsoft means by AppFabric. Is it a product? a brand? a platform?

The explanation I was given is that AppFabric is Microsoft’s middleware brand. You will normally see the word in conjunction with something more specific, as in “AppFabric Caching” (once known as Project Velocity) or “AppFabric Composition Runtime” (once known as Project Dublin). The chart below was shown at a PDC AppFabric session:

[Chart: the AppFabric middleware components, as shown at the PDC session]

Of course if you add in the Windows Azure prefix you get a typical Microsoft mouthful such as “Windows Azure AppFabric Access Control Service.”

Various AppFabric pieces run on Microsoft’s on-premise servers, though the emphasis here at PDC is on AppFabric as part of the Windows Azure cloud platform. On the AppFabric stand in the PDC exhibition room, I was told that AppFabric in Azure is now likely to get new features ahead of the on-premise versions. The interesting reflection is that cloud customers may be getting a stronger and more up-to-date platform than those on traditional on-premise servers.

Microsoft PDC big on Azure, quiet on Silverlight

I’m at Microsoft PDC in Seattle. The keynote, introduced by CEO Steve Ballmer, started with a recap of the company’s success with Windows 7 – 240 million sold, we were told, and adoption plans among 88% of businesses – before showing off Windows Phone 7 (all attendees will receive a device) and Internet Explorer 9.

IE9 guy Dean Hachamovitch demonstrated the new browser’s hardware acceleration, and made an intriguing comment. When highlighting IE9’s embrace of web standards, he noted that “accelerating only pieces of the browser holds back the web.” It sounded like a jab at plug-ins, but what about Microsoft’s own plug-in, Silverlight? A good question. You could put this together with Ballmer’s comment that “We’ve tried to make the web feel more like native applications” as evidence that Microsoft sees HTML 5 rather than Silverlight as its primary web application platform.

Then again you can argue that it just happens Microsoft had nothing to say about Silverlight, other than in the context of Windows Phone 7 development, and that its turn will come. The new Azure portal is actually built in Silverlight.

The messaging is tricky, and I found it intriguing, especially coming after the Adobe MAX conference, where there were public sessions on Flash vs HTML and a day two keynote emphasising the importance of both. All of which shows that Adobe has a tricky messaging problem as well; but it is at least addressing it, whereas Microsoft so far is not.

The keynote moved on to Windows Azure, and this is where the real news was centered. Bob Muglia, president of the Server and Tools business, gave a host of announcements on the subject. Azure is getting a Virtual Machine role, which will allow you to upload server images to run on Microsoft’s cloud platform, and to create new virtual machines with full control over how they are configured. Server 2008 R2 is the only supported OS initially, but Server 2003 will follow.

Remote Desktop is also coming to Azure, which will mean instant familiarity for Windows admins and developers.

Another key announcement was Windows Azure Marketplace, where third parties will be able to sell “building block components, training, services, and finished services and applications.” This includes DataMarket, the new name for the Dallas project, which is for delivering live data as a service using the OData protocol. An OData library has been added to the Windows Phone 7 SDK, making the two a natural fit.
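As a rough illustration of what consuming an OData feed looks like from .NET, here is a sketch using the WCF Data Services client; the service address, dataset name and entity shape are all hypothetical, and a real DataMarket feed would also require account-key credentials:

    using System;
    using System.Data.Services.Client; // WCF Data Services (OData) client

    // Hypothetical entity; the client infers the key from the ID property.
    public class CrimeStat
    {
        public int ID { get; set; }
        public string City { get; set; }
        public int Incidents { get; set; }
    }

    class ODataSketch
    {
        static void Main()
        {
            // Hypothetical service root; for a real feed, context.Credentials
            // would carry the DataMarket account key.
            var context = new DataServiceContext(
                new Uri("https://api.datamarket.example.com/SomeDataset/"));

            // Execute issues an HTTP GET and materialises the OData response
            // into CLR objects with matching property names.
            foreach (var stat in context.Execute<CrimeStat>(
                new Uri("CrimeStats?$top=10", UriKind.Relative)))
            {
                Console.WriteLine("{0}: {1}", stat.City, stat.Incidents);
            }
        }
    }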

Microsoft is also migrating Team Foundation Server (TFS) to Azure, interesting both as a case study in moving a complex application, and as a future option for development teams who would rather not wrestle with the complexities of deploying this product.

Next came Windows Azure AppFabric Access Control, which despite its boring name has huge potential. This is about federated identity – both with Active Directory and other identity services. In the example we saw, Facebook was used as an identity provider alongside Microsoft’s own Active Directory, and users got different access rights according to the login they used.

In another guise, Azure AppFabric – among the most confusing Microsoft product names ever – is a platform for hosting composite workflow applications.

Java support is improving and Microsoft says that you will be able to run the Java environment of your choice from 2011.

Finally, there is a new “Extra small” option for Azure instances, aimed at developers, priced at $0.05 per compute hour. This is meant to make the platform more affordable for small developers, though if you calculate the cost over a year it still amounts to over $400 ($0.05 x 24 x 365 is around $438); not too much perhaps, but still significant.

Attendees were left in no doubt about Microsoft’s commitment to Azure. As for Silverlight, watch this space.

QCon London 2010 report: fix your code, adopt simplicity, cool .NET things

I’m just back from QCon London, a software development conference with an agile flavour that I enjoy because it is not vendor-specific. Conferences like this are energising; they make you re-examine what you are doing and may kick you into a better place. Here’s what I noticed this year.

Robert C Martin from Object Mentor gave the opening keynote, on software craftsmanship. His point is that code should not just work; it should be good. He is delightfully opinionated. Certification, he says, provides value only to certification bodies. If you want to know whether someone has the skills you want, talk to them.

Martin also came up with a bunch of tips for how to write good code, things like not having more than two arguments to a function and never a boolean. I’ve written these up elsewhere.


Next I looked into the non-relational database track and heard Geir Magnusson explain why he needed Project Voldemort, a distributed key-value storage system, to get his ecommerce site to scale. Non-relational storage, or NoSQL, is a big theme these days; database managers like CouchDB and MongoDB are getting a lot of attention. I would like to have spent more time on this track, but there was too much else on – a problem with QCon.

I therefore headed for the functional programming track, where Don Syme from Microsoft Research gave an inspiring talk on F#, Microsoft’s new functional language. He has a series of hilarious slides showing F# code alongside its equivalent in C#. Here is an example:

[Slide: F# code alongside the equivalent C# code]

The white panel is the F# code; the rest of the slide is C#.

Seeing a slide like this makes you wonder why we use C# at all, though of course Syme has chosen tasks like asynchronous IO and concurrent programming for which F# is well suited. Syme also observed that F# is ideal for working with immutable data, which is common in internet programming. I grabbed a copy of Programming F# for further reading.

Over on the Architecture track, Andres Kütt spoke on Five Years as a Skype Architect. His main theme: most of a software architect’s job is communication, not poring over diagrams and devising code structures. This is a consistent theme at QCon and in the Agile movement; get the communication right and all else follows. I was also interested in the technical side though. Skype started with SOAP but switched to a REST model for web services. Kütt also told us about the languages Skype uses: PHP for the web site, C or C++ for heavy lifting and peer-to-peer networking; Delphi for the Windows interface; PostgreSQL for the database.

Day two of QCon was even better. I’ve written up Martin Fowler’s talk on the ethics of software development in a separate post. Following that, I heard Canonical’s Simon Wardley speak about cloud computing. Canonical is making a big push for Ubuntu’s cloud package, available both for private use and hosted on Amazon’s servers; and attendees at the QCon CloudCamp later on were given a lavish, pointless cardboard box with promotional details. To be fair to Wardley, he did not talk much about Ubuntu’s cloud solution, though he did make the point that open source makes transitions between providers much cheaper.

Wardley’s most striking point, repeated perhaps too many times, is that we have no choice about whether to adopt cloud computing, since we will be too much disadvantaged if we reject it. He says it is now more a management issue than a technical one.

Dan North from ThoughtWorks gave a funny and excellent session on simplicity in architecture. He used pseudo-biblical language to describe the progress of software architecture for distributed systems, finishing with

On the seventh day God created REST

Very good; but his serious point is that the shortest, simplest route to solving a problem is often the best one, and that we constantly make the mistake of using over-generalised solutions which add a counter-productive burden of complexity.

North talked about techniques for lateral thinking, finding solutions from which we are mentally blocked, by chunking up, which means merging details into bigger ideas, ending up with “what is this thing for anyway”; and chunking down, the reverse process, which breaks a problem down into blocks small enough to comprehend. Another idea is to articulate a problem to a colleague, which exercises different parts of the brain and often stimulates a solution – one of the reasons pair programming can be effective.

A common mistake, he said, is to keep using the same old products or systems or architectures because we always do, or because the organisation is already heavily invested in them, meaning that better alternatives do not get considered. He also talked about simple tools: a whiteboard rather than a CASE tool, for example.

Much of North’s talk was a variant of YAGNI – you ain’t gonna need it – an agile principle of not implementing something until/unless you actually need it.

I’d like to put this together with something from later in the day, a talk on cool things in the .NET platform. One of these was Guerrilla SOA, though it is not really specific to .NET. To get the idea, read this blog post by Jim Webber, another from the ThoughtWorks team (yes, there are a lot of them at QCon). Here’s a couple of quotes:

Prior to our first project starting, that client had already undertaken some analysis of their future architecture (which needs scalability of 1 billion transactions per month) using a blue-chip consultancy. The conclusion from that consultancy was to deploy a bus to patch together the existing systems, and everything else would then come together. The upfront cost of the middleware was around £10 million. Not big money in the grand scheme of things, but this £10 million didn’t provide a working solution, it was just the first step in the process that would some day, perhaps, deliver value back to the business, with little empirical data to back up that assertion.

My (small) team … took the time to understand how to incrementally alter the enterprise architecture to release value early, and we proposed doing this using commodity HTTP servers at £0 cost for middleware. Importantly we backed up our architectural approach with numbers: we measured the throughput and latency characteristics of a representative spike (a piece of code used to answer a question) through our high level design, and showed that both HTTP and our chosen Web server were suitable for the volumes of traffic that the system would have to support … We performance tested the solution every single day to ensure that we would always be able to meet the SLAs imposed on us by the business. We were able to do that because we were not tightly coupled to some overarching middleware, and as a consequence we delivered our first service quickly and had great confidence in its ability to handle large loads. With middleware in the mix, we wouldn’t have been so successful at rapidly validating our service’s performance. Our performance testing would have been hampered by intricate installations, licensing, ops and admin, difficulties in starting from a clean state, to name but a few issues … The last I heard a few weeks back, the system as a whole was dealing with several hundred percent more transactions per second than before we started. But what’s particularly interesting, coming back to the cost of people versus cost of middleware argument, is this: we spent nothing on middleware. Instead we spent around £1 million on people, which compares favourably to the £10 million up front gamble originally proposed.

This strikes me as an example of the kind of approach North advocates.

You may be wondering what other cool .NET things were presented. This session, called State of the Art .NET, was given by Amanda Laucher and Josh Graham. They offered a dozen items which they considered .NET folk should be using or learning about:

  1. F# (again)
  2. M – modelling/DSL language
  3. Boo – static Python for .NET
  4. NUnit – unit testing. Little regard for Microsoft’s test framework in Team System, which is seen as a wasted and inferior effort.
  5. RhinoMocks – mocking library
  6. Moq – another mocking library
  7. NHibernate – object-relational mapping
  8. Windsor – dependency injection, part of Castle project. Controversial; some attendees thought it too complex.
  9. NVelocity – .NET template engine
  10. Guerrilla SOA – see above
  11. Azure – Microsoft’s cloud platform – surprisingly good thanks to David Cutler’s involvement, we were told
  12. MEF – Managed Extensibility Framework as found in Visual Studio 2010, won high praise from those who have tried it

That was my last session (I missed Friday) though I did attend the first part of CloudCamp, an unconference for cloud early adopters. I am not sure there is much point in these now. The cloud is no longer subversive and the next new thing; all the big enterprise vendors are onto it. Look at the CloudCamp sponsor list if you doubt me. There are of course still plenty of issues to talk about, but maybe not like this; I stayed for the first hour but it was dull.

For more on QCon you might also want to read back through my Twitter feed or search the entire #qcon tag for what everyone else thought.

Microsoft maybe gets the cloud – maybe too late

Microsoft CEO Steve Ballmer gave a talk on the company’s cloud strategy at the University of Washington yesterday. Although a small event, the webcast was widely publicised and coincides with a leaked internal memo on “how cloud computing will change the way people and businesses use technology”, a new Cloud website, and a Cloud Computing press portal, so it is fair to assume that this represents a significant strategy shift.

According to Ballmer:

about 70 percent of our folks are doing things that are entirely cloud-based, or cloud inspired. And by a year from now that will be 90 percent

I watched the webcast, and it struck me as significant that Ballmer kicked off with a vox pop video where various passers by were asked what they thought about cloud computing. Naturally they had no idea, the implication being, I suppose, that the cloud is some new thing that most people are not yet aware of. Ballmer did not spell out why Microsoft made the video, but I suspect he was trying to reassure himself and others that his company is not too late.

I thought the vox pop was misconceived. Cloud computing is a technical concept. What if you did a vox pop on the graphical user interface? Or concurrency? Or Unix? Or SQL? You would get equally baffled responses.

It was an interesting contrast with Google’s Eric Schmidt, who gave what was also a big strategy talk at last month’s Mobile World Congress; I posted about it here. Schmidt takes the cloud for granted. He does not treat it as the next big thing, but as something that is already here. His talk was both inspiring and chilling. It was inspiring in the sense of what is now possible – for example, that you can go into a restaurant, point your mobile at a foreign-language menu, and get back an instant translation, thanks to Google’s ability to mine its database of human activity. It was chilling in its implications for privacy and Schmidt’s seeming disregard for them.

Ballmer on the other hand is focused on how to transition a company whose business is primarily desktop operating systems and software to one that can prosper in the cloud era:

If you think about where we grew up, other than Windows, we grew up with this product called Microsoft Office. And it’s all about expressing yourself. It’s e-mail, it’s Word, it’s PowerPoint. It’s expression, and interaction, and collaboration. And so really taking Microsoft Office to the cloud, letting it run in the cloud, letting it run from the cloud, helping it let people connect and communicate, and express themselves. That’s one of the core kind of technical ambitions behind the next release of our Office product, which you’ll see coming to market this June.

Really? That’s not my impression of Office 2010. It’s the same old desktop suite, with a dollop of new features and a heavily cut-down online version called Office Web Apps. The problem is not only that Office Web Apps is designed to keep you dependent on offline Office. The problem is that the whole model is wrong. The business model is still based on the three-year upgrade cycle. The real transition comes when the Web Apps are the main version, to which we subscribe, which get constant incremental updates and have an API that lets them participate in mash-ups across the internet.

That said, there are parallels between Ballmer’s talk and that of Schmidt. Ballmer spoke of 5 dimensions:

  • The cloud creates opportunities and responsibilities
  • The cloud learns and helps you learn, decide and take action
  • The cloud enhances your social and professional interactions
  • The cloud wants smarter devices
  • The cloud drives server advances

In the most general sense, those are similar themes. I can even believe that Ballmer, and by implication Microsoft, now realises the necessity of a deep transition, not just adding a few features to Office and Windows. I am not sure though that it is possible for Microsoft as we know it, which is based on Windows, Office and Partners.

Someone asks if Microsoft is just reacting to others. Ballmer says:

You know, if I take a look and say, hey, look, where am I proud of where we are relative to other guys, I’d point to Azure. I think Azure is very different than anything else on the market. I don’t think anybody else is trying to redefine the programming model. I think Amazon has done a nice job of helping you take the server-based programming model, the programming model of yesterday that is not scale agnostic, and then bringing it into the cloud. They’ve done a great job; I give them credit for that. On the other hand, what we’re trying to do with Azure is let you write a different kind of application, and I think we’re more forward-looking in our design point than on a lot of things that we’re doing, and at least right now I don’t see the other guy out there who’s doing the equivalent.

Sorry, I don’t buy this either. Azure does have distinct advantages, mainly to do with porting your existing ASP.NET application and integrating with existing Windows infrastructure. I don’t believe it is “scale agnostic”; something like Google App Engine is better in that respect. With Azure you have to think about how many virtual machines you want to purchase. Nor do I think Azure lets you write “a different kind of application.” There is too little multi-tenancy, and too much of the old Windows server model remains in Azure.
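The instance count I mention is not something the platform manages for you; it is a number you choose, and pay for, in the service configuration. A sketch of the relevant fragment, with placeholder names:

    <?xml version="1.0"?>
    <!-- ServiceConfiguration.cscfg: the count below is a fixed purchase,
         not something Azure scales on your behalf. -->
    <ServiceConfiguration serviceName="MyService"
        xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
      <Role name="WebRole1">
        <Instances count="2" />
      </Role>
    </ServiceConfiguration>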

Finally, I am surprised how poor Microsoft has become at articulating its message. Azure was badly presented at last year’s PDC, which Ballmer did not attend. It is not an attractive platform for small-scale developers, which makes it hard to get started.

Google storage 10 times cheaper than Azure – but not as cheap as SkyDrive

According to Jerry Huang of Gladinet, whose Cloud Desktop exposes a variety of cloud storage services as mapped drives in Windows Explorer, Google storage is “about 10 times cheaper” than Windows Azure. Since Amazon S3 has similar prices to Azure, I imagine Google undercuts that by some margin as well.

Gladinet compares Google and Azure using some other criteria as well. On speed, it gave the edge to Azure but observed that it might just depend which data center was nearest. On SLA, the two seem similar.  On API, it says Azure is easier if you use Visual Studio, but not if you work with “PHP, Ruby or anything other than .NET”.

In another post, Huang has a nice summary of accessing Azure storage from C#.

It’s worth noting that Microsoft SkyDrive offers a relatively generous 25GB of storage for free, but there is no way to extend this limit. There is also no official SkyDrive API, though one has been hacked unofficially. Gladinet supports SkyDrive too, using either this or the unofficial WebDAV support.

I am a fan of Gladinet. There is a free starter edition, or a paid-for version with extra features.


Explorer integration is a big deal, since it means any application with a standard open or save dialog can access the files. Imagine for example that you need to upload a document from cloud storage to a web site. Without Explorer integration, you have to extract the file from cloud storage to your local drive, then upload it from there. The same is true of SharePoint, which is why it is unfortunate that Explorer integration is so difficult to get working.

Windows Azure is too expensive for small apps

I’m researching Windows Azure development; and as soon as you check out early feedback one problem jumps out immediately. Azure is prohibitively expensive for small applications.

Here’s a thread that makes the point:

Currently I’m hosting 3 relatively small ASP.net web applications on a VPS. This is costing about $100 per month. I’m considering transitioning to Azure.
Q: Will I need to have 1 azure instance per each ASP.net application? So if I have 3 web apps, then I will need to run 3 instances which costs about $300 per month minimum, correct?

The user is correct. Each application consumes an “instance”, costing from $0.12 per hour, and this cost is incurred whenever the application is available.
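The arithmetic is easy to check: at $0.12 per hour, a single always-on instance costs 0.12 x 24 x 30, or around $86 per month, so three applications in three instances come to roughly $260 before storage and data transfer charges – close to the $300 the user estimates.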

Amazon also charges $0.12 per hour for a Windows instance; but the Amazon instance is a virtual machine. You can run as many applications on there as you like, until it chokes.

Google App Engine has a free quota for getting started, and then it is charged according to CPU time. If the app is idle, you don’t pay.

In addition, all these services charge extra for storage and data transfer; but in a low-usage application these are likely to be a small proportion of the total.

Summary: Azure’s problem is that it does not scale down in a way that makes business sense. There is no free quota, unless you count what is bundled with an MSDN subscription.

I realise that it is hard to compare like with like. A cheap Windows plan with a commodity ISP will cost less than either Amazon EC2 or Azure, but it is worth less, because you don’t get a complete VM as with Amazon, or a managed platform as with Azure, or the scalability of either platform. The point though is that by cutting out smaller businesses, and making small apps excessively expensive for customers of any size – even enterprises run small apps – Azure is creating a significant deterrent to adoption and will lose out to its rivals.

Check out the top feature request for Azure right now: Make it less expensive to run my very small service.

New HP and Microsoft agreement commits $50 million less than similar 2006 deal

I’ve held back comment on the much-hyped HP and Microsoft three-year deal announced on Wednesday mainly because I’ve been uncertain of its significance, if any. It didn’t help that the press release was particularly opaque, full of words with many syllables but little meaning. I received the release minutes before the conference call, during which most of us were asking the same thing: how is this any different from what HP and Microsoft have always done?

It’s fun to compare and contrast with this HP and Microsoft release from December 2006 – three years ago:

We’ve agreed to a three-year, US$300 million investment between our two companies, and a very aggressive go-to-market program on top of that. What you’ll see us do is bring these solutions to the marketplace in a very aggressive way, and go after our customers with something that we think is quite unique in what it can do to change the way people work.

$300 million for three years in 2006; $250 million for three years in 2010. Hmm, not exactly the new breakthrough partnership which has been billed. Look here for what the press release should have said: it’s mainly common-sense cooperation and joint marketing.

Still, I did have a question for CEOs Mark Hurd and Steve Ballmer: what level of cloud focus is there in this new partnership? It drew these remarks from Ballmer:

The fact that our two companies are very directed at the cloud is the driving force behind this deal at this time. The cloud really means a modern architecture for how you build and deploy applications. If you build and deploy them to our service that we operate that’s called Windows Azure. If a customer deploys them inside their own data centre or some other hosted environment, they need a stack on which to build – hardware, software and services – that instances the same application model that we’ll have on Windows Azure. I think of it as the private cloud version of Windows Azure.

That thing is going to be an integrated stack from the hardware, the virtualization layer, the management layer and the app model. It’s on that that we are focusing the technical collaboration here … we at Microsoft need to evangelize that same application model whether you choose to host in the cloud or on your own premises. So in a sense this is entirely cloud motivated.

Hurd added his insistence that this is not just more of the same:

I would not want you to write that it sounds a lot like what Microsoft and HP have been talking about for years. This is the deepest level of collaboration and integration and technical work we’ve done that I’m aware of … it’s a different thing than what you’ve seen before. I guarantee Steve and I would not be on this phone call if this was just another press release from HP and Microsoft.

Well, you be the judge.

I did think Ballmer’s answer was interesting though, in that it shows how much Microsoft (and no doubt HP) are pinning their hopes on the private cloud concept. The term “private cloud” is a dubious one, in that some of the defining characteristics of cloud – exporting your infrastructure, multi-tenancy, shifting the maintenance burden to a third-party – are simply not delivered by a private cloud. That said, in a large organisation they might look similar to most users.

I can’t shake off the thought that since HP wants to carry on selling us servers, and Microsoft wants to carry on selling us licences for Windows and Office, the two are engaged in disguised cloud avoidance. Take Office Web Apps in Office 2010 for example: good enough to claim the online document editing feature; bad enough to keep us using locally installed Office.

That will not work long-term and we will see increasing emphasis on Microsoft’s hosted offerings, which means HP will sell fewer servers. Maybe that’s why the new deal is for a few dollars less than the old one.

PDC day one: Windows in the cloud

Today was cloud day at PDC. Microsoft announced that Windows Azure will become a production platform on January 1st, with billing starting from February 1st. It also announced the beta of Windows Server AppFabric, for on-premise apps that can either stay on-premise or be deployed to Azure later; and some new developments like the Windows Server Virtual Machine role on Azure, a pre-configured Windows Server VM into which you will be able to deploy an application.

Azure was first announced at the 2008 PDC, and had a stuttering start, with a CTP (Community Tech Preview) that was difficult to use, major changes to SQL Server Data Services – a simplified cloud database that was scrapped and replaced with full SQL Server – and generally poor marketing from Microsoft. I was not sure whether the company was serious about Azure, or merely trying to tick the cloud box.

I do now think it is serious, and delivering some interesting technology for easily scalable cloud-hosted applications. Microsoft does not see its cloud services as replacing your in-house servers (no surprise there), but more as a way of deploying certain kinds of web applications. A great feature is that, thanks to Active Directory Federation Services in combination with the new .NET library called Windows Identity Foundation, you can relatively easily have your Azure applications authenticate against your internal Active Directory.
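Once the federation plumbing is in place, the application sees the user as a set of claims rather than a raw Windows login. Here is a minimal sketch using the Windows Identity Foundation types (namespaces as in the 1.0 library; which claims appear depends entirely on what your identity provider issues):

    using System;
    using System.Threading;
    using Microsoft.IdentityModel.Claims; // Windows Identity Foundation

    class ClaimsSketch
    {
        static void ShowClaims()
        {
            // After an ADFS sign-in, the thread principal carries the
            // claims issued by the identity provider.
            var identity = Thread.CurrentPrincipal.Identity as IClaimsIdentity;
            if (identity == null) return;

            foreach (Claim claim in identity.Claims)
            {
                // For example, role or email claims mapped from AD groups.
                Console.WriteLine("{0} = {1}", claim.ClaimType, claim.Value);
            }
        }
    }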

The surprise of the day was when Matt Mullenweg of WordPress fame turned up to demo WordPress running on Azure, which now supports PHP and MySQL as well as Java applications. Another unexpected guest was Loic Le Meur of Seesmic, who introduced Seesmic for Windows and also talked about a coming Silverlight version.

That said, the keynote did not exactly crackle with excitement. Microsoft seemed almost to downplay what is now possible with Azure, perhaps sensing that it could be disruptive to its own business model. A telling moment came during a press briefing when Doug Hauger, Azure General Manager, denied that Windows or Office were in any sort of decline. Despite his position he seems to be under the illusion that we will happily continue with our fragile on-premise, single platform, micro-managed IT systems.

I enjoyed the day though. The beauty of PDC is that Microsoft rolls out its best speakers; it was great to hear Mark Russinovich explain the kernel changes in Windows 7 and Server 2008 R2 – same kernel of course – and I will be writing more about the session shortly.

I’m expecting more focus on Office, Silverlight and Visual Studio tomorrow, when Steven Sinofsky, Scott Guthrie and Kurt DelBene will be giving the keynote, and hoping for some compelling announcements.