Tag Archives: cloud computing

Adobe opens up Creative Cloud to app developers

At the Adobe Max conference in Los Angeles, Adobe has announced enhancements and additions to its Creative Cloud service, which includes core applications such as Photoshop, Illustrator, InDesign and Dreamweaver, mobile apps for Apple’s iPad, and the online portfolio site Behance. Creative Cloud is also the mechanism by which Adobe has switched its customers from perpetual software licences to subscription, even for desktop applications.

One of today’s announcements is a public preview version of the Creative SDK for iOS, with an Android version also available on request. Nothing for Windows Phone, though Adobe does seem interested in supporting high-end Windows tablets such as Surface Pro 3, thanks to their high quality screens and pen input support.

image

The Creative SDK lets developers integrate apps with Adobe’s cloud, including access to cloud storage, import and export of PSD (Photoshop) layers, and image processing using cloud services. It also gives developers the ability to support Adobe hardware such as Ink and Slide, which offers accurate drawing even on iOS tablets designed exclusively for touch control.

Adobe’s brand guidelines forbid the use of Adobe product names like Photoshop or Illustrator in your app name, but do allow phrases such as “Photoshop enabled” and “Creative Cloud connected.”

Other Adobe announcements today include:

Mobile app changes

Adobe’s range of mobile apps has been revised:

  • Adobe Sketch is now Photoshop Sketch and lets you send drawings to Photoshop.
  • Adobe Line is now Illustrator Line and lets you send sketches to Illustrator.
  • Adobe Ideas is now Illustrator Draw, again with Illustrator integration.
  • Adobe Kuler is now Adobe Color CC and lets you capture colours and save them as themes for use elsewhere.
  • Adobe Brush CC and Adobe Shape CC are new apps for creating brushes and shapes respectively. For example, you could convert a photo into vector art that you can use for drawing in Illustrator.
  • Adobe Premiere Clip is a simple video editor for iOS that allows export to Premiere Pro CC.
  • Lightroom Mobile has been updated to enable comments on photos shared online, and synchronisation with Lightroom desktop.

There are now a confusingly large number of ways you can draw or paint on the iPad using an Adobe app, but the common theme is better integration with the desktop Creative Cloud applications.

Desktop app enhancements

On the desktop app side, Adobe announcements include Windows 8 touch support in Illustrator, Photoshop, Premiere Pro and After Effects; 3D print features in Photoshop CC; a new curvature tool in Illustrator; and HiDPI (high resolution display support) in After Effects.

New cloud services

New Adobe cloud services include Creative Cloud Libraries, a design asset management service that connects with both mobile and desktop Adobe apps, and Creative Cloud Extract, which converts Photoshop PSD images into assets that web designers and developers can use, such as colours, fonts and CSS files.

Adobe’s Creative Cloud is gradually growing its capabilities, even though Adobe’s core products remain desktop applications, and its move to subscription licensing has been executed smoothly and effectively despite annoying some users. The new SDK is mainly an effort to hook more third-party apps into the Adobe design workflow, though the existence of hosted services for image processing is an intriguing development.

It is a shame though that the new SDK is so platform-specific, delaying the Android version and leaving other platforms such as Windows Phone unsupported.

Adobe actually has its own cross-platform mobile toolkit, called PhoneGap, though I imagine Adobe’s developers feel that native code rather than JavaScript is the best fit for design-oriented apps.

Microsoft Azure: new preview portal is “designed like an operating system” but is it better?

How important is the Azure portal, the web-based user interface for managing Microsoft’s cloud computing platform? You can argue that it is not all that important. Developers and users care more about the performance and reliability of the services themselves. You can also control Azure services through PowerShell scripts.

My view is the opposite though. The portal is the entry point for Azure and a good experience makes developers more likely to continue. It is also a dashboard, with an overview of everything you have running (or not running) on Azure, the health of your services, and how much they are costing you. I also think of the portal as an index of resources. Can you do this on Azure? Browsing through the portal gives you a quick answer.

The original Azure portal was pretty bad. I wish I had more screenshots; this 2009 post comparing getting started on Google App Engine with Azure may bring back some memories. In 2011 there were some big management changes at Microsoft, and Scott Guthrie moved over to Azure along with various other executives. Usability and capability improved fast, and one of the notable changes was the appearance of a new portal. Written in HTML 5, it was excellent, showing all the service categories in a left-hand column. Select a category, and all your services in that category are listed. Select a service and you get a detailed dashboard. This portal has evolved somewhat since it was introduced, notably through the addition of many more services, but the design is essentially the same.

image

The New button lets you create a new service:

image

The portal also shows credit status right there – no need to hunt through links to account management pages:

image

It is an excellent portal, in other words, logically laid out, easy to use, and effective.

That is the old portal though. Microsoft has introduced a new portal, first demonstrated at the Build conference in April. The new portal is at http://portal.azure.com, versus http://manage.windowsazure.com for the old one.

The new portal is different in look and feel:

image

Why a new portal and how does it work? Microsoft’s Justin Beckwith, a program manager, has a detailed explanatory post. He says that the old portal worked well at first but became difficult to manage:

As we started ramping up the number of services in Azure, it became infeasible for one team to write all of the UI. The teams which owned the service were now responsible (mostly) for writing their own UI, inside of the portal source repository. This had the benefit of allowing individual teams to control their own destiny. However – it now mean that we had hundreds of developers all writing code in the same repository. A change made to the SQL Server management experience could break the Azure Web Sites experience. A change to a CSS file by a developer working on virtual machines could break the experience in storage. Coordinating the 3 week ship schedule became really hard. The team was tracking dependencies across multiple organizations, the underlying REST APIs that powered the experiences, and the release cadence of ~40 teams across the company that were delivering cloud services.

The new portal is the outcome of some deep thinking about the future. It is architected, according to Beckwith, more like an operating system than like a web application.

The new portal is designed like an operating system. It provides a set of UI widgets, a navigation framework, data management APIs, and other various services one would expect to find with any UI framework. The portal team is responsible for building the operating system (or the shell, as we like to call it), and for the overall health of the portal.

Each service has its own extension, or “application”, which runs in an iframe (inline frame) and is isolated from other extensions. Unusually, the iframes are not used to render content, but only to run scripts. These scripts communicate with the main frame using the window.postMessage API call – familiar territory for Windows developers, since messages also drive the Windows desktop operating system.

Microsoft is also using TypeScript, a high-level language that compiles to JavaScript, and open source resources including Less and Knockout.

Beckwith’s post is good reading, but the crunch question is this: how does the new portal compare to the old one?

I get the sense that Microsoft has put a lot of effort into the new portal (which is still in preview) and that it is responsive to feedback. I expect that the new portal will in time be excellent. Currently though I have mixed feelings about it, and often prefer to use the old portal. The new portal is busier, slower and more confusing. Here is the equivalent to the previous New screen shown above:

image

The icons are prettier, but there is something suspiciously like an ad at top right. I would rather see more services listed, with bigger text and smaller icons, since the text conveys more information than the icons do.

Let’s look at scaling a website. In the old portal, you select a website, then click Scale in the top menu to get to a nice scaling screen where you can set up autoscaling, define the number of instances and so on.

How do you find this in the new portal? You get this screen when you select a website (I have blanked out the name of the site).

image

This screen scrolls vertically and if you scroll down you can find a small Scale panel. Click it and you get to the scaling panel, which has a nicely done UI though the way panels constantly appear and disappear is something you have to get used to.

There are also additional scaling options in the preview portal (the old one only offers scaling based on CPU usage):

image

The preview portal also integrates with Visual Studio Online for cloud-based devops.

The challenge for Microsoft is that the old portal set a high bar for clarity and usability. The preview portal does more than the old, and is more fit for purpose as the number and capability of Azure services increases, but its designers need to resist the temptation to let prettiness obstruct performance and efficiency.

Developers can give feedback on the portal here.

Developing an app on Microsoft Azure: a few quick reflections

I have recently completed (if applications are ever completed) an application which runs on Microsoft’s Azure platform. I used lots of Microsoft technology:

  • Visual Studio 2013
  • Visual Studio Online with Team Foundation version control
  • ASP.NET MVC 4.0
  • Entity Framework 4.0
  • Azure SQL
  • Azure Active Directory
  • Azure Web Sites
  • Azure Blob Storage
  • Microsoft .NET 4.5 with C#

The good news: the app works well and performance is good. The application handles the upload and download of large files by authorised users, and replaces a previous solution using a public file sending service. We were pleased to find that the new application is a little faster for upload and download, as well as offering better control over user access and a more professional appearance.

There were some complications though. The requirement was for internal users to log in with their Office 365 (Azure Active Directory) credentials, but for external users (the company’s customers) to log in with credentials stored in a SQL Server database – in other words, hybrid authentication. It turns out you can do this reasonably seamlessly by implementing IPrincipal in a custom class to support the database login. This is largely uncharted territory though in terms of official documentation and took some effort.
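
Stripped right down, the pattern looks something like this – a sketch only, with invented class names rather than the actual project code:

using System;
using System.Security.Principal;

public class DatabaseIdentity : IIdentity
{
    private readonly string userName;

    public DatabaseIdentity(string userName)
    {
        this.userName = userName;
    }

    public string Name { get { return userName; } }
    public string AuthenticationType { get { return "DatabaseLogin"; } }
    public bool IsAuthenticated { get { return !String.IsNullOrEmpty(userName); } }
}

public class DatabasePrincipal : IPrincipal
{
    private readonly IIdentity identity;
    private readonly string[] roles;

    public DatabasePrincipal(IIdentity identity, string[] roles)
    {
        this.identity = identity;
        this.roles = roles ?? new string[0];
    }

    public IIdentity Identity { get { return identity; } }
    public bool IsInRole(string role) { return Array.IndexOf(roles, role) >= 0; }
}

Once the external user’s credentials have been checked against the database, you assign an instance of this principal to HttpContext.User (and Thread.CurrentPrincipal), and from then on the rest of the MVC pipeline, including the Authorize attribute, treats the login like any other.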

Second, Microsoft’s Azure Active Directory support for custom applications is half-baked. You can create an application that supports Azure AD login in a few moments with Visual Studio, but it does not give you any access to metadata such as which security groups the user belongs to. I have posted about this in more detail here. There is an API of course, but it is currently a moving target: be prepared for some hassle if you try this.

Third, while Azure Blob Storage itself seems to work well, most of the resources for developers seem to have little idea of what a large file is. Since a primary use case for cloud storage is to cover scenarios where email attachments are not good enough, it seems to me that handling large files (by which I mean multiple GB) should be considered normal rather than exceptional. By way of mitigation, the API itself has been written with large files in mind, so it all works fine once you figure it out. More on this here.

What about Visual Studio? The experience has been good overall. Once you have configured the project correctly, you can update the site on Azure simply by hitting Publish and clicking Next a few times. There is some awkwardness over configuration for local debugging versus deployment. You probably want to connect to a local SQL Server and the Azure storage emulator when debugging, and the Azure hosted versions after publishing. Visual Studio has a Web.Debug.Config and a Web.Release.Config which let you apply a transformation to your main Web.Config when publishing – though note that these do not have any effect when you simply run your project in Release mode. The correct usage is to set Web.Config to what you want for debugging, and apply the deployment configuration in Web.Release.Config; then it all works.

The piece that caused me most grief was a setting for <wsFederation>. When a user logs in with Azure AD, they get redirected to a Microsoft site to log in, and then back to the application. Applications have to be registered in Azure AD for this to work. There is some uncertainty though about whether the reply attribute, which specifies the redirection back to the app, needs to be set explicitly or not. In practice I found that it does need to be explicit, otherwise you get redirected to the deployed site even when debugging locally – not good.

I have mixed feelings about Team Foundation version control. It works, and I like having a web-based repository for my code. On the other hand, it is slow, and Visual Studio sulks from time to time and requires you to re-enter credentials (Microsoft seems to love making you do that). If you have a less than stellar internet connection (or even a good one), Visual Studio freezes from time to time since the source control stuff is not good at working in the background. It usually unfreezes eventually.

As an experiment, I set the project to require a successful build before check-in. The idea is that you cannot check in a broken build. However, this build has to take place on the server, not locally. So you try to check in, Visual Studio says a build is required, and prompts you to initiate it. You do so, and a build is queued. Some time later (5-10 minutes) the build completes and a dialog appears behind the IDE saying that you need to reconcile changes – even if there are none. Confusing.

What about Entity Framework? I have mixed feelings here too, and have posted separately on the subject. I used code-first: just create your classes and add them to your DbContext and all the data access code is handled for you, kind-of. It makes sense to use EF in an ASP.NET MVC project since the framework expects it, though it is not compulsory. I do miss the control you get from writing your own SQL though; and found myself using the SqlQuery method on occasion to recover some of that control.
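
For anyone unfamiliar with code-first, the gist is something like this – a minimal sketch with made-up class names, not the actual application:

using System;
using System.Data.Entity;
using System.Linq;

public class StoredFile
{
    public int Id { get; set; }
    public string FileName { get; set; }
    public DateTime Uploaded { get; set; }
}

public class FileStoreContext : DbContext
{
    public DbSet<StoredFile> Files { get; set; }
}

public static class Example
{
    public static void ListRecentFiles()
    {
        var cutoff = DateTime.UtcNow.AddDays(-7);

        using (var db = new FileStoreContext())
        {
            // The usual way: a LINQ query which Entity Framework translates to SQL
            var recent = db.Files.Where(f => f.Uploaded > cutoff).ToList();

            // Dropping down to hand-written SQL when you want more control
            var alsoRecent = db.Files.SqlQuery(
                "SELECT * FROM dbo.StoredFiles WHERE Uploaded > @p0", cutoff).ToList();
        }
    }
}

The SqlQuery route still materialises StoredFile objects, so you keep change tracking while writing your own SQL.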

Finally, a few notes on ASP.NET MVC. I mostly like it; the separation between Razor views (essentially HTML templates into which you pour your data at runtime) and the code which implements your business logic and data access is excellent. The code can get convoluted though. Have a look at this useful piece on the ASP.NET MVC WebGrid and this remark:

grid.Column("Name",
  format: @<text>@Html.ActionLink((string)item.Name,
  "Details", "Product", new { id = item.ProductId }, null)</text>),

The format parameter is actually a Func, but the Razor view engine hides that from us. But you’re free to pass a Func—for example, you could use a lambda expression.

The code works fine but is it natural and intuitive? Why, for example, do you have to cast the first argument to ActionLink to a string for it to work (I can confirm that it is necessary), and would you have worked this out without help?
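
For what it is worth, the same column can be written with the delegate spelled out, which at least makes the Func visible. A sketch only; I have not tested it against every WebGrid version:

grid.Column("Name",
    format: (Func<dynamic, object>)(item => Html.ActionLink(
        (string)item.Name, "Details", "Product", new { id = item.ProductId }, null))),

The cast on item.Name is still needed: item is dynamic, and extension methods such as ActionLink cannot be dispatched on dynamic arguments, which as far as I can tell is the real reason for the cast.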

I also hit a problem restyling the pages generated by Visual Studio, which use the Twitter Bootstrap framework. The problem is that bootstrap.css is a generated file and it does not make sense to edit it directly. Rather, you should edit some variables and use them as input to regenerate it. I came up with a solution which I posted on Stack Overflow, but no comments yet – perhaps this post will stimulate some, as I am not sure if I found the best approach.

My sense is that while ASP.NET MVC is largely a thing of beauty, it has left behind more casual developers who want a quick and easy way to write business applications. Put another way, the framework is somewhat challenging for newcomers, and that in turn affects the breadth of its adoption.

Developing on Azure and using Azure AD makes perfect sense for businesses which are using the Microsoft platform, especially if they use Office 365, and the level of integration on offer, together with the convenience of cloud hosting and anywhere access, is outstanding. There remain some issues with the maturity of the frameworks, ever-changing libraries, and poor or confusing documentation.

Since this area is strategic for Microsoft, I suggest that it would benefit the company to work hard on pulling it all together more effectively.

Notes from the field: putting Azure Blob storage into practice

I rashly agreed to create a small web application that uploads files into Azure storage. Azure Blob storage is Microsoft’s equivalent to Amazon’s S3 (Simple Storage Service), a cloud service for storing files of up to 200GB.

File upload performance can be an issue, though if you want to test how fast your application can go, try it from an Azure VM: performance is fantastic, as you would expect from an Azure to Azure connection in the same region.

I am using ASP.NET MVC and thought a sample like this official one, Uploading large files using ASP.NET Web API and Azure Blob Storage, would be all I needed. It is a start, but the method used only works for small files. What it does is:

1. Receive a file via HTTP POST.

2. Once the file has been received by the web server, call CloudBlob.UploadFile to upload the file to Azure blob storage.

What’s the problem? Leaving aside the fact that CloudBlob is deprecated (you are meant to use CloudBlockBlob), there are obvious problems with files that are more than a few MB in size. The expectation today is that users see some sort of progress bar when uploading, and a well-written application will be resistant to brief connection breaks. Many users have asymmetric internet connections (such as ADSL) with slow upload speeds; large files will take a long time and something can easily go wrong. The sample is not resilient at all.

Another issue is that web servers do not appreciate receiving huge files in one operation. Imagine you are uploading the ISO for a DVD, perhaps a 3GB file. The simple approach of posting the file and having the web server upload it to Azure blob storage introduces obvious strain and probably will not work, even if you do mess around with maxRequestLength and maxAllowedContentLength in ASP.NET and IIS. I would not mind so much if the sample were not called “Uploading large files”; the author perhaps has a different idea of what is a large file.

Worth noting too that one developer hit a bug with blobs greater than 5.5MB when uploaded over HTTPS, which most real-world businesses will require.

What then are you meant to do? The correct approach, as far as I can tell, is to send your large files in small chunks called blocks. These are uploaded to Azure using CloudBlockBlob.PutBlock. You identify each block with an ID string, and when all the blocks are uploaded, call CloudBlockBlob.PutBlockList with a list of IDs in the correct order.
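
In outline the flow looks something like this – a simplified sketch reading from a local file, with the retry and progress logic a real application needs left out, and with an arbitrary container name and block size:

using System;
using System.Collections.Generic;
using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class BlockUploader
{
    public static void UploadInBlocks(string connectionString, string localPath)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudBlobContainer container = account.CreateCloudBlobClient().GetContainerReference("uploads");
        CloudBlockBlob blob = container.GetBlockBlobReference(Path.GetFileName(localPath));

        var blockIds = new List<string>();
        var buffer = new byte[4 * 1024 * 1024]; // 4MB blocks
        int blockNumber = 0, read;

        using (FileStream fs = File.OpenRead(localPath))
        {
            while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
            {
                // Block IDs must be base64 strings, all of the same length
                string blockId = Convert.ToBase64String(BitConverter.GetBytes(blockNumber++));
                blob.PutBlock(blockId, new MemoryStream(buffer, 0, read), null);
                blockIds.Add(blockId);
            }
        }

        // Nothing is visible in the blob until the block list is committed, in order
        blob.PutBlockList(blockIds);
    }
}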

This is the approach taken by Suprotim Agarwal in his example of uploading big files, which works and is a great deal better than the Microsoft sample. It even has a progress bar and some retry logic. I tried this approach, with a few tweaks. Using a 35MB file, I got about 80 KB/s with my ADSL broadband, a bit worse than the performance I usually get with FTP.

Can performance be improved? I wondered what benefit you get from uploading blocks in parallel. Azure Storage does not mind what order the blocks are uploaded. I adapted Agarwal’s sample to use multiple AJAX calls each uploading a block, experimenting with up to 8 simultaneous uploads from the browser.

The initial results were disappointing. Eventually I figured out that I was not actually achieving parallel uploads at all. The reason is that the application uses ASP.NET session state, and IIS will block multiple connections in the same session unless you mark your ASP.NET MVC controller class with the SessionState attribute set to SessionStateBehavior.ReadOnly.
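
The attribute goes on the controller class handling the upload calls; something like this (the controller name here is invented):

using System.Web.Mvc;
using System.Web.SessionState;

[SessionState(SessionStateBehavior.ReadOnly)]
public class UploadController : Controller
{
    // upload actions go here; with read-only session state, IIS no longer
    // serialises concurrent requests from the same session
}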

I fixed that, and now I do get multiple parallel uploads. Performance improved to around 105 KB/s, worthwhile though not dramatic.

What about using a Windows desktop application to upload large files? I was surprised to find little improvement. But can parallel uploading help here too? The answer is that it should happen anyway, handled by the .NET client library, according to this document:

If you are writing a block blob that is no more than 64 MB in size, you can upload it in its entirety with a single write operation. Storage clients default to a 32 MB maximum single block upload, settable using the SingleBlobUploadThresholdInBytes property. When a block blob upload is larger than the value in this property, storage clients break the file into blocks. You can set the number of threads used to upload the blocks in parallel using the ParallelOperationThreadCount property.
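
In other words, something like the following should be enough, reusing the blob reference and file path from the sketch above. A hedged example: in the version of the storage client library I was using these properties live on BlobRequestOptions, though exactly where they sit has moved around between library releases:

var options = new BlobRequestOptions
{
    // Split anything bigger than this into blocks rather than a single write
    SingleBlobUploadThresholdInBytes = 4 * 1024 * 1024,
    // How many blocks to upload in parallel
    ParallelOperationThreadCount = 4
};

using (FileStream fs = File.OpenRead(localPath))
{
    blob.UploadFromStream(fs, accessCondition: null, options: options);
}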

It sounds as if there is little advantage in writing your own chunking code, except that if you just call the UploadFromFile or UploadFromStream methods of CloudBlockBlob, you do not get any progress notification event (though you can get a retry notification from an OperationContext object passed to the method). Therefore I looked around for a sample using parallel uploads, and found this one from Microsoft MVP Tyler Doerksen, using C#’s Parallel.For.

Be warned: it does not work! Doerksen’s approach is to upload the entire file into memory (not great, but not as bad as on a web server), send it in chunks using CloudBlockBlob.PutBlock, adding the block ID to a collection at the same time, and then to call CloudBlockBlob.PutBlockList. The reason it does not work is that the order of the loop iterations in Parallel.For is indeterminate, so the block IDs are unlikely to be added to the collection in the right order.

I fixed this, it tested OK, and then I decided to further improve it by reading each chunk from the file within the loop, rather than loading the entire file into memory. I then puzzled over why my code was broken. The files uploaded, but they were corrupt. I worked it out. In the following code, fs is a FileStream object:

fs.Position = x * blockLength;
bytesread = fs.Read(chunk, 0, currentLength);

Spot the problem? Since fs is a variable declared outside the loop, other threads were setting its position during the read operation, with random results. I fixed it like this:

lock (fs)
{
    fs.Position = x * blockLength;
    bytesread = fs.Read(chunk, 0, currentLength);
}

and the file corruption disappeared.
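
For completeness, the working version of the loop looked roughly like this – condensed, with invented variable names, error handling and retries left out, and again reusing the blob reference from the earlier sketch (Parallel.For needs System.Threading.Tasks):

const int blockLength = 4 * 1024 * 1024;
long fileLength = new FileInfo(localPath).Length;
int blockCount = (int)Math.Ceiling((double)fileLength / blockLength);

// Index the IDs by loop counter, so the commit order is deterministic
// whatever order the parallel iterations actually run in
string[] blockIds = new string[blockCount];

using (FileStream fs = File.OpenRead(localPath))
{
    Parallel.For(0, blockCount, x =>
    {
        int currentLength = (int)Math.Min(blockLength, fileLength - (long)x * blockLength);
        var chunk = new byte[currentLength];
        int bytesread;

        lock (fs) // one thread at a time: Position and Read must not interleave
        {
            fs.Position = (long)x * blockLength;
            bytesread = fs.Read(chunk, 0, currentLength);
        }

        string blockId = Convert.ToBase64String(BitConverter.GetBytes(x));
        blob.PutBlock(blockId, new MemoryStream(chunk, 0, bytesread), null);
        blockIds[x] = blockId;
    });
}

blob.PutBlockList(blockIds);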

I am not sure why, but the manually coded parallel uploads seem to slightly but not dramatically improve performance, to around 100-105 KB/s, almost exactly what my ASP.NET MVC application achieves over my broadband connection.

image

There is another approach worth mentioning. It is possible to bypass the web server and upload directly from the browser to Azure storage. To do this, you need to allow cross-origin resource sharing (CORS) as explained here. You also need to issue a Shared Access Signature, a temporary key that allows read-write access to Azure storage. A guy called Blair Chen seems to have this all figured out, as you can see from his Azure speed test and jazure JavaScript library, which makes it easy to upload a blob from the browser.
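
Issuing the Shared Access Signature is the server-side part and is straightforward enough; a sketch, assuming a container reference and file name are already to hand, and with an arbitrary one-hour lifetime:

// Give the client a URL it can write to directly, valid for a limited time
CloudBlockBlob blob = container.GetBlockBlobReference(fileName);

string sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
{
    Permissions = SharedAccessBlobPermissions.Write,
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(1)
});

string uploadUrl = blob.Uri + sas;

The browser-side JavaScript then does the block upload against that URL, which is the part Chen’s library handles.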

I was contemplating going that route, but it seems that performance is no better (judging by the Test Upload Big Files section of Chen’s speed test), so I should probably be content with the parallel JavaScript upload solution, which avoids fiddling with CORS.

Overall, has my experience with the Blob storage API been good? I have not found any issues with the service itself so far, but the documentation and samples could be better. This page should be the jumping off point for all you need to know for a basic application like mine, but I did not find it easy to find good samples or documentation for what I thought would be a common scenario, uploading large files with ASP.NET MVC.

Update: since writing this post I have come across this post by Rob Gillen which addresses the performance issue in detail (and links to working Parallel.For code); however I suspect that since the post is four years old the conclusions are no longer valid, because of improvements to the Azure storage client library.

Microsoft Azure: growing but still has image problems

I attended a Microsoft Cloud Day in London organised by the Azure User Group; I booked this when Technical Fellow Mark Russinovich was set to attend, but regrettably he cancelled at a late stage. I skipped the substitute keynote by UK Microsoftie Dave Coplin as I heard the very same talk earlier this month, so arrived mid-morning at the venue in Whitechapel; not that easy to find amid the stalls of Whitechapel Market (well, not quite), but if you seek out the Whitechapel branch of the Foxcroft and Ginger cafe (not known to Here Maps on Windows Phone, incidentally) then you will find premises upstairs with logos for Barclays Accelerator and Microsoft Ventures; something to do with assisting the flow of cash from corporate giants desperate for community engagement to business start-ups desperate for cash.

Giving technical presentations is hard, and while I admired Richard Conway’s efforts at showing how, with some PowerShell, he could transform some large dataset into rows of numbers using the magic of Azure HDInsight, I didn’t think it quite worked. Beat Schwegler dived into code to explain the how and why of Azure Notification Hubs, a service which delivers push notifications to mobile apps; useful material, but could have been compressed. Then there was Richard Astbury at software development company two10degrees, who talked about Project Orleans, high-scale applications via “an Actor Model framework of programmable in-memory objects”; we learned about grains and silos (or software equivalents) in a session that was mostly new to me.

At the break I chatted with a somewhat bemused attendee who had come in the hope of learning about whether he should migrate some or all of his small company’s server requirements to Azure. I explained about Office 365 and Azure Active Directory which he said was more relevant to him than the intricacies of software development. It turns out that the Azure User Group is really about software development using Azure services, which is only one perspective on Microsoft’s cloud platform.

For me the most intriguing presentation was from Michael Delaney at ElevateDirect, a young business which has a web application to assist businesses in finding employees directly rather than via recruitment agencies. His company picked Amazon Web Services (AWS) over Azure two and a half years ago, but is now moving to Microsoft’s cloud.

image
Michael Delaney, CTO and co-founder of ElevateDirect

Why did he pick AWS? He is not a typical Microsoft-platform person, preferring open source products including Linux, Apache Solr, Python and MySQL. When he chose AWS, Azure was not a suitable platform for a mainly Linux-based application. However, he does prefer C# to Java. According to Delaney, AWS is a Java-first platform and he found this getting in the way of development.

Azure today, says Delaney, has the first-class support for Linux that it lacked a few years back, and is a better platform for C# applications than AWS even though AWS does support Windows servers.

Migrating the application was relatively straightforward, he said, with the biggest issue being the move from Amazon S3 (Simple Storage Service) to Azure Storage, though he overcame this by abstracting the storage API behind his own wrapper code.

Azure is not all the way there though. Delaney is disappointed with the relational database options on offer, essentially SQL Server or third-party managed MySQL from ClearDB. He would like to see options for PostgreSQL and others. He would also like the open source Elastic Search to be offered as an Azure service.

There was a panel discussion later at which the question of Azure’s market perception was discussed. Most businesses, according to one attendee, think of AWS as the only option for cloud, even if they are Microsoft-platform businesses for whom Azure might be more suitable. It is a branding problem caused by the AWS first-mover advantage and market dominance, said Microsoft’s Steve Plank.

I would add that Azure is relatively new, at least in its new incarnation offering full IaaS (infrastructure as a service). AWS is also ahead on the number and variety of services on offer, and has not really messed up, which means there is little incentive for existing users to move unless, like Delaney, they find some aspect of Microsoft’s platform (in his case C#) particularly compelling.

This leads me back to the bemused attendee. It seems to me that Azure’s biggest advantage is Azure Active Directory and seamless integration with Office 365. Having said that, it is not difficult to host an application on AWS that uses Azure Active Directory, but there may be some advantage in working with a single cloud provider (and you can expect fast low-latency networking between Azure and Office 365).

Amazon AWS and the continuing trend towards cloud services. Desktops next?

It was a lightbulb moment. The problem: how to migrate a document store from one Office 365 (hosted SharePoint) instance to another. Copy it all out and copy it back in, obviously, but that is painful over ADSL (which is all I had at my disposal) since the “asymmetric” part of ADSL means slow uploads; and download from Office 365 was not that fast either.

Solution: use an Azure virtual machine. VM hosted by Microsoft, SharePoint hosted by Microsoft, result – a fast connection between the two. I ran up the VM in a few minutes using Microsoft’s nice Azure portal, used Remote Desktop to connect, and copied the documents out and back in no time.

There is a general point here. If you are contemplating cloud-hosted VDI (Virtual Desktop Infrastructure), there is huge advantage in having the server applications and data close to the VDI instances. All you then need is a connection good enough to work on that remote desktop, which is relatively lightweight. If the cloud vendor is doing its job, the internal connections in that cloud should be fast. In addition, from the client’s perspective most of the traffic is downstream (the screen image sent to the client) rather than upstream (mouse and keyboard interactions), which suits ADSL well.

The further implication is that the more you use cloud services, the more attractive hosted desktops become. Desktops are expensive to manage, which is why I would expect a service like Amazon Workspaces, hosted Windows desktops as a service, to find a ready market – even at $600 per year for a desktop with Office Professional 2010 preinstalled, or $420 per year if you install and license Office yourself, or use Open Office or some other alternative.

Workspaces are currently in limited preview, which means a closed beta, but there are hints that a public beta is coming soon.

Adopting this kind of setup means a massive dependency on Amazon of course, which is a concern if you worry about that kind of thing (and I think you should); but how much business is now dependent on one of the major cloud providers (I tend to think of Amazon, Microsoft and Google as the top three) already?

Thinking back to my Office 365 example, it also seems to me that Microsoft will make a serious play for cloud VDI in the not too distant future, since it makes so much sense. The problem for Microsoft is further cannibalisation of its on-premises business, and further disruption for Microsoft partners, but if the alternative is giving away business to Amazon, it has little choice.

I was at an Amazon Web Services briefing today and asked whether we might see an Office 365-like package from AWS in future. Unlikely, I was told; but many customers do use AWS for hosting the likes of Exchange and SharePoint.

The really clever thing for Amazon would be a package that looked like Office 365, but using either open source or internally developed applications that removed the need to pay license fees to Microsoft.

What else is new from AWS? I have no exclusives to share, since Amazon has a policy of never pre-announcing new features or services. There were a few statistics, one of which is that Redshift, hosted data warehousing, is Amazon’s fastest-growing product.

Amazon also talked about Kinesis, which lets you analyse streams of data in a 24-hour window. For example, if you want to analyse the output from thousands of sensors (say, weather data) but do not need to store it, you can use Kinesis. If you do want to store the data, you can integrate with Redshift or DynamoDB, two of Amazon’s database services.

The company also talked up its Relational Database Service (RDS), where you purchase a managed database service which can currently be MySQL, PostgreSQL, Oracle or Microsoft SQL Server. Amazon handles all the infrastructure management so you only need worry about your data and applications.

RDS pricing ranges from $25 a month for MySQL to $514 a month for SQL Server Standard (which is actually more expensive than Oracle at $223 per month for the same instance size). Higher capacity instances cost more of course. SQL Server Web edition comes down below Oracle at $194 per month, but I was surprised to see how high the SQL Server costs are. Note that these prices include all the CALs (Client Access Licenses). The prices are actually per hour, eg $0.715 for SQL Server Standard, so you could save money if your business can turn off or reduce the service out of working hours, for example.

How much premium does Amazon charge for its managed RDS versus what you would pay for equivalent capacity in a VM that you manage yourself? I asked this question but did not receive a meaningful reply; you need to do your own homework.

My reflection on this is that just as supermarkets make more money from pre-packaged ready meals than from basic groceries, so too the cloud providers can profit by bundling management and applications into their products rather than offering only basic infrastructure services. You still have the choice; but database admin costs money too.

Finally, we took a quick look at AppStream, which is a proprietary protocol, SDK and service for multimedia applications. You write applications such as games that render video on the server and stream it efficiently to the client, which could be a smartphone or low-power tablet. In this case again, you are taking a total dependency on Amazon to enable your application to run.

If you are interested in AWS, look out for a summit near you. There is one in London on 30th April. Or go to the Reinvent conference in Las Vegas in November.

My overall reflection is that the momentum behind AWS and its pace of innovation is impressive; yet it also seems to me that rivals like Microsoft and Google are becoming more effective. The cloud computing market is such that there is room for all to grow.

SQL Server 2014 is done: Hekaton, Azure integration

Microsoft has released SQL Server 2014 to manufacturing – an odd phrase in these diskless days, but one which signifies that it is code complete for the initial release. General availability is April 1st.

What do you do if hardware trends enable you to stuff vast amounts of RAM into your server, along with many CPU cores? The answer is that you optimize applications to work mostly in RAM, with disk important as a persistence layer. This contrasts to the approach when you have large amounts of disk space and little RAM, when you focus on loading only as much data into memory as you absolutely need.

The implications for a database server are profound. Instead of logic that goes something like “read from disk, do something, write to disk”, you can address the data directly; it is just a memory pointer.

Now combine that with stored procedures compiled to native code. Performance leaps up, and by much more than you get simply by caching data in RAM, or using fast SSD storage, but still using the old disk-based approach in the database engine.

This is the reasoning behind “Hekaton”, properly known as In-Memory OLTP (online transaction processing), which is a new in-memory database engine that comes with SQL Server 2014.

It is fully integrated. You just have to add a filegroup to a SQL Server database with the keyword CONTAINS MEMORY_OPTIMIZED_DATA, and then create a table with the keyword WITH (MEMORY_OPTIMIZED=ON). For the stored procedures, use WITH NATIVE_COMPILATION.

The speed-up is as great as you would expect. I have seen demonstrations of 30x or more performance increases, such as this one, based on a demo from the SQL Pass conference, which I ran for myself in one of Microsoft’s “Hands On Labs”:

image

In another demo, on an Azure VM, I got a speed up of 7x. Only seven times faster! Still, hard to complain about those sorts of numbers.

Unfortunately, in-memory OLTP is spoilt by some rather severe limitations in this release. The first problem is that a combination of the need to support native compilation of stored procedures, and other limitations, means that only a subset of T-SQL (the query and management language of SQL Server) is supported. You can see the list of what is not supported here; and it is depressing reading, with lots of keywords that you likely do use at the moment; even IDENTITY is on the list of what does not work.

Another issue is that the ability of In-Memory OLTP to take advantage of hardware is not as extensive as you might hope. Lead program manager Kevin Liu told me at a recent press workshop that the team recommends restricting total data size to 256GB, and that the recommended number of CPU sockets is two. You can get servers today with much more memory and more sockets. It gets complicated though: in a multi-socket server memory has processor affinity and there is a thing called NUMA (Non-Uniform Memory Access) that describes the way memory is shared between processors.

According to Liu, Microsoft expects to lift these limitations in future releases, as well as improving T-SQL support, but things like this remind you that it is a version one release.

What else is in SQL Server 2014? There is some neat Azure integration, including a managed backup tool that is almost one click to have your data backed up to Azure storage; a brilliant facility for small businesses. You can also use Azure for high availability, creating always-on replicas in Azure VMs.

Data warehouse users will like the new clustered columnstore indexes, which allow you to use a column-oriented table structure for much faster processing of typical report and analysis queries. Columnstore indexes first appeared in SQL Server 2012 but were not updateable. Now they are.

SQL Server is well liked, licensing hassles aside; and even on licensing, Microsoft can always point at Oracle and claim, rightly, to be cheaper and less complex. It has earned a reputation for solid performance. SQL Server 2014 looks as good as ever, even if the management tools now look rather dated – the shell for SQL Server Management Studio uses an old version of Visual Studio, which is one of the reasons. I also suspect the SQL Server team lacks a dialog designer, but doubt that the average database admin cares one jot.

That said, it is difficult to describe this as a must-have upgrade, unless you can make good use of “Hekaton” in-memory OLTP. The porting effort will be worth it presuming you can get it to work. One of the good fits for the technology is managing web app session data, or, as in the example above, rapid processing to display recommendations or customisations on a web site.

I can imagine though that many users will look at Hekaton and decide that it is too much work or too immature for immediate use. What is left for them, apart from some nice Azure integration?

Not a huge amount, it seems to me, making this to my mind a transitional release.

Are you planning to upgrade? I would be interested to know your reasons why or why not.

Microsoft OneDrive and Office Online is Office 365 lite

Microsoft has renamed its cloud storage service from SkyDrive to OneDrive.

Is OneDrive just cloud storage though? Not really. It is part of a suite of cloud applications. Go to OneDrive, drop down the Create menu, and you see this:

image

These links to Office document types open in Office Online, formerly Office Web Apps, which is a browser version of Microsoft Office, and now pretty good.

image

No offline functionality, and if you print you just generate a PDF, but not bad for free.

Drop down the OneDrive menu and there are the other apps in Microsoft’s consumer cloud suite, including Outlook.com, People and Calendar.

image

The functionality parallels that in Office 365, where you get Exchange online in place of Outlook.com and hosted SharePoint in place of OneDrive.

Microsoft also has Skype, which is the consumer version of Lync in Office 365.

It all looks rather coherent, though Microsoft has a bit of work to do under the covers. It makes little sense for OneDrive to use different technology than SharePoint for online storage, though frankly OneDrive beats SharePoint in some respects so it would be good to see some of the consumer tech migrating into the enterprise offering. Lync and Skype are also separate products though work is under way to bring them together.

Microsoft’s big problem is this: to what extent can it continue to improve the browser-based apps before it threatens its desktop Office business? Its dilemma is that if it holds back the browser versions, it will cede market share to Google, which has no qualms about crushing Microsoft Office.

OneDrive, SkyDrive, whatever: Microsoft needs to make it better – especially in Office 365

This week brought the news that SkyDrive is to be renamed OneDrive:

For current users of either SkyDrive or SkyDrive Pro, you’re all set. The service will continue to operate as you expect and all of your content will be available on OneDrive and OneDrive for Business respectively as the new name is rolled out across the portfolio.

I have no strong views on whether OneDrive or SkyDrive is a better name (the reason for the change was a legal challenge from the UK’s BSkyB).

I do have views on SkyDrive – sorry, OneDrive – though.

First, it is confusing that OneDrive and OneDrive for Business share the same name. I have been told by Microsoft that they are completely different platforms. OneDrive is the consumer offering, and OneDrive for Business is hosted SharePoint in Office 365. It is this paid offering that interests me most in a business context.

SharePoint is, well, SharePoint, and it seems fairly solid even though it is slow and over-complex. The Office Web Apps are rather good. The client integration is substandard though. A few specifics:

Yesterday I assisted a small business which has upgraded to full-fat Office 365, complete with subscription to the Office 2013 Windows applications. We set up the team site and created a folder, and used the Open in Explorer feature for convenient access in Windows. Next, run Word, type a new document, choose Save As, and attempt to save to that folder.

Word thought for a long time, then popped up a password dialog (Microsoft seems to love these password dialogs, which pop up from time to time no matter how many times you check Remember Me). We entered the correct credentials; it thought for a bit, then prompted again, this time with a CAPTCHA added as a further annoyance. Eventually we hit cancel out of frustration, and lo, the document was saved correctly after all.

Another time and it might work perfectly, but I have seen too many of these kinds of problems to believe that it was a one-off.

Microsoft offers another option, which is called SkyDrive Pro – sorry, OneDrive Pro. This is our old friend Groove, also once known as Microsoft SharePoint Workspace 2010, but now revamped to integrate with Explorer. This guy is a sync engine, whereas “Open in Explorer” uses WebDAV.

Synchronisation has its place, especially if you want to work offline, but unfortunately SkyDrive Pro is just not reliable. All the businesses I know that have attempted to use it in anger gave up. They get endless upload errors that are hard to resolve, from the notorious Office Upload Center. The recommended fix is to “clear the cache”, ie wipe and start again, with no clarity about whether work may be lost. Avoid.

One of the odd things is that there seems to be a sync element even if you are NOT using SkyDrive Pro. The Upload Center manages a local cache. Potentially that could be a good thing, if it meant fast document saving and seamless online/offline use. Instead though, Microsoft seems to have implemented it for the worst of every world. You get long delays and sign-in problems when saving, sometimes, as well as cache issues like apparently successful saves followed by upload failures.

OK, let’s use an iPad instead. There is an app called SkyDrive Pro which lets you access your Office 365 documents. It is more or less OK unless you want to share a document – one of the main reasons to use a cloud service. There is no way to access a folder someone else has shared in SkyDrive Pro on an iPad, nor can you access the Team Site which is designed for sharing documents in Office 365. Is Microsoft serious about supporting iPad users?

Office 365 is strategic for Microsoft, and SharePoint is its most important feature after Exchange. The customers are there; but with so many frustrations in trying to use Office 365 SharePoint clients other than the browser, it will not be surprising if many of them turn to other solutions.

Microsoft financials: record revenue, consumer sales declining in drift towards Enterprise

Microsoft has announced record revenue for its second financial quarter, October-December 2013. Revenue was bumped up by the launch of Xbox One (3.9 million sold) and new Surface hardware. The real stars though were the server products:

  • SQL Server continued to gain market share with revenue growing double-digits.

  • System Center showed continued strength with double-digit revenue growth.

  • Commercial cloud services revenue more than doubled.

  • Office 365 commercial seats and Azure customers both grew triple-digits.

says the press release.

Another plus point is Bing, which Microsoft says now has 18.2% market share in the USA. Search advertising revenue is up 34%.

It is not all good news. While Microsoft is doing fine in server and cloud, the consumer market is not going well, leaving aside the expected boost from a new Xbox launch:

  • Windows OEM non-pro revenue down 20% year on year (that’s consumer PCs)
  • Office consumer revenue down 24% year on year – partly attributed to the shift towards subscription sales of Office 365 Home Premium

As usual, I have put the results into a quick table for easier viewing:

Quarter ending December 31st 2013 vs quarter ending December 31st 2012, $millions

Segment                            Revenue    Change    Gross margin    Change
Devices and Consumer Licensing       5,384      -319           4,978      -153
Devices and Consumer Hardware        4,729    +1,921             411      -351
Devices and Consumer Other           1,793      -206             431      -455
Commercial Licensing                10,888      +753          10,077      +751
Commercial Other                     1,780      +391             415      +199

The categories are opaque so here is a quick summary:

Devices and Consumer Licensing: non-volume and non-subscription licensing of Windows, Office, Windows Phone, and “related patent licensing; and certain other patent licensing revenue” – all those Android royalties?

Devices and Consumer Hardware: the Xbox 360, Xbox Live subscriptions, Surface, and Microsoft PC accessories.

Devices and Consumer Other: Resale, including Windows Store, Xbox Live transactions (other than subscriptions), Windows Phone Marketplace; search advertising; display advertising; Office 365 Home Premium subscriptions; Microsoft Studios (games), retail stores.

Commercial Licensing: server products, including Windows Server, Microsoft SQL Server, Visual Studio, System Center, and Windows Embedded; volume licensing of Windows, Office, Exchange, SharePoint, and Lync; Microsoft Dynamics business solutions, excluding Dynamics CRM Online; Skype.

Commercial Other: Enterprise Services, including support and consulting; Office 365 (excluding Office 365 Home Premium), other Microsoft Office online offerings, and Dynamics CRM Online; Windows Azure.

Here is what is notable. Looking at these figures, Microsoft’s cash cow is obvious: licensing server products, Windows and Office to businesses, which is profitable almost to the point of disgrace, with a gross margin of $10,077 million on sales of $10,888 million. Microsoft breaks this down a little. Hyper-V has gained 5 points of share, it says, and Windows volume licensing is up 10%.

Cloud (Office 365, Azure, Dynamics CRM online) may be growing strongly, but it is a sideshow relative to the on-premises licensing.

How do we reconcile yet another bumper quarter with the Microsoft/Windows is dead meme? The answer is that it is not dead yet, but the shift away from the consumer market and the deep dependency on on-premises licensing are long-term concerns. Microsoft remains vulnerable to disruption from cheap and easy to maintain clients like Google’s Chromebook, tied to non-Microsoft cloud services.

Nevertheless, these figures do show that, for the moment at least, Microsoft can continue to thrive despite the declining PC market, more so than most of its hardware partners.

Postscript: Microsoft’s segments disguise the reality of its gross margins. The cost of “licensing” is small, but it is obvious from its figures that Microsoft is not including all the costs of creating and maintaining the products being licensed. If we look at the figures from a year ago, for example, Microsoft reported a gross margin of $2,121 million on revenue of $5,186 million for Server and Tools. That information is no longer provided and, as far as I can tell, we can only guess at the cost per segment of its software products. However, looking at the income statements, you can see that overall Microsoft spent $2,748 million on Research and Development, $4,283 million on Sales and Marketing, and $1,235 million on General and administrative in the quarter.