Microsoft repositions for a post-Windows client world

Microsoft CEO Satya Nadella has penned a rather long public letter which sets out his ambitions for the company. It is not full of surprises for those who have been paying attention, but confirms what we are already seeing in projects such as Office for iPad: Microsoft is positioning itself for a world in which the Windows client does not dominate.

The statement that stands out most to me is this one:

Apps will be designed as dual use with the intelligence to partition data between work and life and with the respect for each person’s privacy choices. All of these apps will be explicitly engineered so anybody can find, try and then buy them in friction-free ways. They will be built for other ecosystems so as people move from device to device, so will their content and the richness of their services.

Microsoft is saying that it will build work/personal data partitioning into its applications, particularly one would imagine Office, and that it will write them for ecosystems other than its own, particularly one would imagine iOS and Android.

This is a big change from the Windows company, and one that I expect to see reflected in the tools it offers to developers. If Microsoft is not trying to acquire Xamarin, you have to wonder why not. It has to make Visual Studio a premier tool for writing cross-platform mobile applications. It also has to address the problem that an increasingly large proportion of developers now use Macs (I do not know the figures, but observe that at some developer conferences Windows machines are a rarity), perhaps via improved online developer tools or new tools that themselves run cross-platform.

Nadella is careful to avoid giving the impression that Microsoft is abandoning its first-party device efforts, making specific mention of Windows Phone, Surface, Cortana and Xbox, for example.

Our first-party devices will light up digital work and life. Surface Pro 3 is a great example – it is the world’s best productivity tablet. In addition, we will build first-party hardware to stimulate more demand for the entire Windows ecosystem. That means at times we’ll develop new categories like we did with Surface. It also means we will responsibly make the market for Windows Phone, which is our goal with the Nokia devices and services acquisition.

Here is another statement that caught my eye:

We will increase the fluidity of information and ideas by taking actions to flatten the organization and develop leaner business processes.

The company has become increasingly bureaucratic over the years, and that is holding back its ability to be agile (though some teams seem to move at high speed regardless; the Azure team is one example).

Nadella’s letter has too many flowery passages of uncertain meaning – “We will reinvent productivity for people who are swimming in a growing sea of devices, apps, data and social networks. We will build the solutions that address the productivity needs of groups and entire organizations as well as individuals by putting them at the center of their computing experiences.” – but I do not doubt that major change is under way.

Review: Kingston HyperX Cloud headset, excellent sound and comfort

Beautifully packaged and presented (strong inner box with outer sleeve) this gaming headset has a real premium feel to it, further enhanced by a high-quality drawstring bag which includes an outer pocket to store the heap of supplied cables and adaptors.

What is a “gaming headset”? Essentially, simply including a microphone is enough for some, though you might expect a gaming headset to be tilted towards a more exciting presentation with deep bass and sharp treble. Personally I favour a neutral presentation since getting an exciting sound is the job of those producing and mastering the audio for the game, not the headset, though an extended frequency response is needed. Fortunately the HyperX Cloud gets this mostly right, which is why it is decent for music as well as games.

“You are now on the way to the ultimate gaming experience,” proclaims the letter on the inner box (though that is all the documentation I could find, save what is printed on the outside of the box itself – you can download a manual from the HyperX site if you want).

But is the claim justified?

Despite the futuristic brand name, this is a traditional over-ear, closed-back headset with analogue-only connections. This means you have a jack plug for the headphones and a second jack plug for the microphone. There is also an adapter that combines them into the four-pole jack used by smartphones, tablets and the PlayStation 4. A further cable lets you add an in-line control box with passive volume control, call/answer button and microphone mute. The closed-back design means good noise isolation and less disturbance for others in the same room.

Analogue connections are essential for smartphone use, but on a PC it means you are reliant on the quality of the audio out and mic in on the soundcard. The microphone input is often a weak point. You can avoid this by using a USB headset, so don’t get this unless you are confident of the quality of your soundcard. Further, with an analogue headset there are no whizzy virtual effects, no great loss in my opinion.

Here is what you get in the box:

  • Adapter for smartphones and tablets
  • 1m extension cable with inline control box
  • 2m extension cable
  • Aeroplane adapter (for old-style aeroplane seats)
  • Detachable microphone
  • Generous drawstring bag
  • A pair of spare earpads, with a fabric finish in place of the smooth finish on the pre-fitted earpads. Both are comfortable.

The main cable is braided, as is the control box extension, but the other cables are not braided, which is odd.

If you use all the cables you end up with around 4m of cable; even with just the control box extension, you have 2m. Too long is better than too short, but you might find it getting in the way.

It is a tiny detail, but I would have liked colour coding on the floating jack sockets to match the colour coding on the plugs. The sockets are marked if you look closely, but it is easy to mix them up.

Another slight nit is that the socket for the detachable microphone has a small cover that I will probably lose. I would prefer this to be a hinged flap.

The control box is OK but not up to the standard of the rest of the kit.

The microphone mute button is stiff and awkward, and the volume control feels cheap. Both worked fine though.

The good news is that sound quality is exceptional. There is a real three-dimensionality to the sound, which together with extended frequency response (15Hz to 25,000 Hz is claimed) makes for a great experience.

Compared to the very best (and generally more expensive) headphones the HyperX is slightly coarse, and the tone is slightly weighted towards the bass, but I find the headset fine for music (especially pop/rock; they are less suitable for classical) as well as gaming, and for the money this is one of the best I have heard.

The headset is comfortable enough that I can happily wear it for a long session, whether gaming or listening to music.

The microphone is also of reasonable quality, with a high enough output for my PC soundcard to get decent volume, though with some hiss. It is good enough for Skype, dictation software and so on, as well as gaming.

Overall I recommend this headset, if you are looking for an analogue rather than a USB connection. It is well made, well presented, and ticks the two most important boxes: comfort and sound quality.

More details on the HyperX site here.

Supporting developers: how could Microsoft improve?

Microsoft invests substantial resources in supporting developers; yet the last two topics I have explored in earnest – the Azure blob storage service, and ASP.NET MVC with Azure Active Directory integration – have been frustrating and difficult. Admittedly I am only an occasional developer, but I suspect my experience is common. What is going wrong, and how could Microsoft improve?

Among the problems I have encountered:

  • Abundant documentation of simple first steps with a vacuum for anything more advanced
  • Samples that do not run without tweaking
  • Samples designed for old versions of Visual Studio
  • Samples which use obsolete or deprecated libraries
  • Samples which are poor solutions for the problem they are supposed to address
  • Documentation or samples which use preview, beta or even alpha libraries. Microsoft sometimes seems to make more effort documenting what is in preview than what is fully released.
  • Posts on a topic which are out of date, but for which it is hard to find something current
  • Circular links, where “click here for more information” leads to another article which links back to the first one, perhaps with an intermediate step
  • Poor quality responses to questions on official Microsoft forums

On the positive side, the reference documentation is not too bad. StackOverflow is a great resource and seems to attract higher quality responses (even sometimes from Microsoft staff) than the company’s own forums.

Here then are some of the improvements I would like to see:

1. A sharper distinction between what is in preview and what is production-ready. For any given problem, it would be great to find a clear statement of how you should address it for production now, with fully released and supported libraries, and another statement showing how you will be able to address it with the latest and greatest (but perhaps less stable) technology which is in preview.

2. For key teams in Microsoft to maintain sites which offer clearly delineated production and preview sections and which are kept rigorously up to date.

3. More short samples and fewer “this demonstrates everything” samples. Large samples are more difficult to install and study and have more complex dependencies.

4. Posts and their accompanying code inevitably go out of date. I do not favour removing them, since that causes more difficulties than it solves (broken links); but it seems to me reasonable for teams to maintain a number of key samples for their product area and keep them up to date.

What am I missing – or am I complaining too much about what is normal in software development? As ever, I welcome your views.

Developing an ASP.NET MVC app with Azure Active Directory: an ordeal

Regular readers will know that I am working on a simple (I thought) ASP.NET MVC application which is hosted on Azure and uses Azure Blob Storage.

So far so good; but since this business uses Office 365 it seemed to me logical to have users log in using Azure Active Directory (AD). Visual Studio 2013, with the latest update, has a nice wizard to set this up. Just complete the following dialog when starting your new project:

[screenshot: the Visual Studio new project dialog, configuring Azure AD (organizational account) authentication]

This worked fairly well, and users can log in successfully using Azure AD and their normal Office 365 credentials.

I love this level of integration and it seems to me key and strategic for the Microsoft platform. If an employee leaves, or changes role, just update Active Directory and all application access comes into line automatically, whether on-premises or in the cloud.

The next stage though was to define some user types; to keep things simple, let us say we have an AppAdmin role for users with full access to the application, and an AppUser role for users with limited access. Other users in the organisation do not need access at all and should not be able to log in.
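
In ASP.NET MVC terms, what I am after is for the standard role-based authorisation to work against those groups, something like the following sketch (the controller and role names are just my examples):

using System.Web.Mvc;

// Only members of the AppAdmin role can reach this controller at all
[Authorize(Roles = "AppAdmin")]
public class AdminController : Controller
{
    public ActionResult Index()
    {
        return View();
    }
}

public class HomeController : Controller
{
    public ActionResult Index()
    {
        // Tailor the page according to the user's role
        ViewBag.CanEdit = User.IsInRole("AppAdmin") || User.IsInRole("AppUser");
        return View();
    }
}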

The obvious way to do this is with AD groups, but I was surprised to discover that there is no easy way to find out to which groups an AD user belongs. The Azure AD integration which the wizard generates is only half done. Users can log in, and you can programmatically retrieve basic information including the first name, last name, User Principal Name and object ID, but nothing further.
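
To show what I mean, this is about all you can usefully read from the signed-in user (a minimal sketch; the claim types shown are the ones issued in my project and may vary, and in my case the Name claim held the User Principal Name):

using System.Security.Claims;

public static class CurrentUserInfo
{
    // The claim type Azure AD uses for the directory object ID
    private const string ObjectIdClaimType =
        "http://schemas.microsoft.com/identity/claims/objectidentifier";

    private static string GetClaim(string claimType)
    {
        Claim claim = ClaimsPrincipal.Current.FindFirst(claimType);
        return claim == null ? null : claim.Value;
    }

    public static string FirstName { get { return GetClaim(ClaimTypes.GivenName); } }
    public static string LastName { get { return GetClaim(ClaimTypes.Surname); } }
    public static string UserPrincipalName { get { return GetClaim(ClaimTypes.Name); } }
    public static string ObjectId { get { return GetClaim(ObjectIdClaimType); } }
}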

Fair enough, I thought, there will be some libraries out there that fill the gap; and this is how the nightmare begins. The problem is that this is the cutting edge of .NET cloud development and is an area of rapid change. Yes there are samples out there, but each one (including the official ones on MSDN) seems to be written at a different time, with a different approach, with different .NET assembly dependencies, and varying levels of alpha/beta/experimental status.

The one common thread is that to get the AD group information you need to use the Graph API, a REST API for querying and even writing to Azure Active Directory. In January 2013, Microsoft identity expert Vittorio Bertocci (Principal Program Manager in the Windows Azure Active Directory team at Microsoft) wrote a helpful post about how to restore IsInRole() and [Authorize] in ASP.NET apps using Azure AD – exactly what I wanted to do. He describes essentially a manual approach, though he does make use of a library called Azure Authentication Library (AAL) which you can find on NuGet (the package manager for .NET libraries used by Visual Studio) described as a beta.

That would probably work, but AAL is last year’s thing and you are meant to use ADAL (Active Directory Authentication Library) instead. ADAL is available in various versions, ranging from 1.0.3, which is a finished release, to 2.6.2, which is an alpha release. Of course Bertocci has not updated his post, so you can use the obsolete AAL beta if you dare, or use ADAL if you can figure out how to amend the code and which version is the best/safest to employ. Or you can write your own wrapper for the Graph API and bypass all the NuGet packages.

I searched for a better sample, but it gets worse. If you browse around MSDN you will probably come across this article along with this sample which is a Task Tracker application using Azure AD, though note the warnings:

NOTE: This sample is outdated. Its technology, methods, and/or user interface instructions have been replaced by newer features. To see an updated sample that builds a similar application, see WebApp-GraphAPI-DotNet.

Despite the warnings, the older sample is widely referenced in Microsoft posts like this one by Rick Anderson.

OK then, let’s look at the shiny new sample, even though it is less well documented. It is called WebApp-GraphAPI-DotNet and includes code to get the user profile, roles, contacts and groups from Azure AD using the latest Graph API client: Microsoft.Azure.ActiveDirectory.GraphClient. This replaces an older effort called the GraphHelper, which you will find widely used elsewhere.

If you dig into this new sample though, you will find a ton of dependencies on pre-release assemblies. You are not just dealing with the Graph API, but also with OWIN (Open Web Interface for .NET), a standard interface between .NET web servers and web applications which seems to be Microsoft’s current direction for its web stack.

After messing around with NuGet packages and trying to get WebApp-GraphAPI-DotNet working, I realised that I was not happy with all this preview code which is likely to break as further updates come along. Further, it does far more than I want. All I need is actually contained in Bertocci’s January 2013 post about getting back IsInRole.

I ended up patching together some code using the older GraphHelper (as found in the obsolete Task Tracker application) and it is working. I can now use IsInRole based on AD groups.
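
Stripped of any particular helper library, the general shape of the solution is: after sign-in, call the Graph API over REST to fetch the user’s group memberships, then add each group as a role claim so that IsInRole and [Authorize] behave as normal. Here is a rough sketch rather than my exact code; it assumes you already have an access token for the Graph API (for example via ADAL), and the endpoint format and api-version shown were current at the time of writing:

using System.Net.Http;
using System.Net.Http.Headers;
using System.Security.Claims;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

public static class GroupClaimsLoader
{
    // Query the Graph API for the signed-in user's groups and add them as role claims
    public static async Task AddGroupRoleClaimsAsync(
        ClaimsIdentity identity, string tenant, string userObjectId, string graphAccessToken)
    {
        string url = string.Format(
            "https://graph.windows.net/{0}/users/{1}/memberOf?api-version=2013-04-05",
            tenant, userObjectId);

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", graphAccessToken);

            JObject result = JObject.Parse(await client.GetStringAsync(url));

            foreach (JToken item in result["value"])
            {
                // memberOf returns groups and directory roles; I only want the groups
                if ((string)item["objectType"] == "Group")
                {
                    identity.AddClaim(new Claim(ClaimTypes.Role, (string)item["displayName"]));
                }
            }
        }
    }
}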

This is a mess. It is a simple requirement and it should not be necessary to plough through all these complicated and conflicting documents and samples to achieve it.

Notes from the field: putting Azure Blob storage into practice

I rashly agreed to create a small web application that uploads files into Azure storage. Azure Blob storage is Microsoft’s equivalent of Amazon’s S3 (Simple Storage Service), a cloud service for storing files; a single block blob can be up to 200GB.

File upload performance can be an issue, though if you want to test how fast your application can go, try it from an Azure VM: performance is fantastic, as you would expect from an Azure to Azure connection in the same region.

I am using ASP.NET MVC and thought a sample like this official one, Uploading large files using ASP.NET Web API and Azure Blob Storage, would be all I needed. It is a start, but the method used only works for small files. What it does is:

1. Receives a file via HTTP POST.

2. Once the file has been received by the web server, calls CloudBlob.UploadFile to upload it to Azure blob storage (roughly as sketched below).
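
The pattern, updated to use CloudBlockBlob rather than the deprecated CloudBlob, looks something like this minimal sketch (the container and connection string names are mine):

using System.Configuration;
using System.Web;
using System.Web.Mvc;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public class UploadController : Controller
{
    [HttpPost]
    public ActionResult Upload(HttpPostedFileBase file)
    {
        // By this point the whole file has already been received by the web server
        CloudStorageAccount account = CloudStorageAccount.Parse(
            ConfigurationManager.ConnectionStrings["StorageConnection"].ConnectionString);
        CloudBlobContainer container =
            account.CreateCloudBlobClient().GetContainerReference("uploads");
        container.CreateIfNotExists();

        CloudBlockBlob blob = container.GetBlockBlobReference(file.FileName);
        blob.UploadFromStream(file.InputStream); // one shot: no progress, no resume

        return RedirectToAction("Index");
    }
}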

What’s the problem? Leaving aside the fact that CloudBlob is deprecated (you are meant to use CloudBlockBlob), there are obvious problems with files that are more than a few MB in size. The expectation today is that users see some sort of progress bar when uploading, and a well-written application will be resilient to brief connection breaks. Many users have asymmetric internet connections (such as ADSL) with slow upload speeds; large files will take a long time and something can easily go wrong. The sample is not resilient at all.

Another issue is that web servers do not appreciate receiving huge files in one operation. Imagine you are uploading the ISO for a DVD, perhaps a 3GB file. The simple approach of posting the file and having the web server upload it to Azure blob storage introduces obvious strain and probably will not work, even if you do mess around with maxRequestLength and maxAllowedContentLength in ASP.NET and IIS. I would not mind so much if the sample were not called “Uploading large files”; the author perhaps has a different idea of what is a large file.

Worth noting too that one developer hit a bug with blobs greater than 5.5MB when uploaded over HTTPS, which most real-world businesses will require.

What then are you meant to do? The correct approach, as far as I can tell, is to send your large files in small chunks called blocks. These are uploaded to Azure using CloudBlockBlob.PutBlock. You identify each block with an ID string, and when all the blocks are uploaded, call CloudBlockBlob.PutBlockList with a list of IDs in the correct order.
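
The core of it is only a few lines. Here is a sequential sketch (note that block IDs must be Base64-encoded strings, all of the same length before encoding):

using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using Microsoft.WindowsAzure.Storage.Blob;

public static class BlockUploader
{
    // Upload a stream to a block blob in 1MB blocks, one block at a time
    public static void Upload(CloudBlockBlob blob, Stream source, int blockSize = 1024 * 1024)
    {
        var blockIds = new List<string>();
        var buffer = new byte[blockSize];
        int blockNumber = 0;
        int bytesRead;

        while ((bytesRead = source.Read(buffer, 0, blockSize)) > 0)
        {
            string blockId = Convert.ToBase64String(
                Encoding.UTF8.GetBytes(blockNumber.ToString("d6")));
            blockIds.Add(blockId);

            blob.PutBlock(blockId, new MemoryStream(buffer, 0, bytesRead), null);
            blockNumber++;
        }

        // Commit the blocks in the right order
        blob.PutBlockList(blockIds);
    }
}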

This is the approach taken by Suprotim Agarwal in his example of uploading big files, which works and is a great deal better than the Microsoft sample. It even has a progress bar and some retry logic. I tried this approach, with a few tweaks. Using a 35MB file, I got about 80 KB/s with my ADSL broadband, a bit worse than the performance I usually get with FTP.

Can performance be improved? I wondered what benefit you get from uploading blocks in parallel. Azure Storage does not mind in what order the blocks are uploaded, since the final order is determined by the block list you commit. I adapted Agarwal’s sample to use multiple AJAX calls, each uploading a block, experimenting with up to 8 simultaneous uploads from the browser.

The initial results were disappointing. Eventually I figured out that I was not actually achieving parallel uploads at all. The reason is that the application uses ASP.NET session state, and ASP.NET serialises concurrent requests in the same session unless you mark your MVC controller class with a SessionState attribute specifying SessionStateBehavior.ReadOnly.
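
The fix is a single attribute on the controller (a sketch; the controller name is just for illustration):

using System.Web.Mvc;
using System.Web.SessionState;

// Read-only session state lets ASP.NET service requests from the same
// session concurrently, so the parallel block uploads really run in parallel
[SessionState(SessionStateBehavior.ReadOnly)]
public class UploadController : Controller
{
    // ... upload actions as before ...
}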

I fixed that, and now I do get multiple parallel uploads. Performance improved to around 105 KB/s, worthwhile though not dramatic.

What about using a Windows desktop application to upload large files? I was surprised to find little improvement. But can parallel uploading help here too? The answer is that it should happen anyway, handled by the .NET client library, according to this document:

If you are writing a block blob that is no more than 64 MB in size, you can upload it in its entirety with a single write operation. Storage clients default to a 32 MB maximum single block upload, settable using the SingleBlobUploadThresholdInBytes property. When a block blob upload is larger than the value in this property, storage clients break the file into blocks. You can set the number of threads used to upload the blocks in parallel using the ParallelOperationThreadCount property.
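
In code, letting the client library do the chunking looks something like this sketch (written against the storage client library current at the time; the threshold and thread count values are just examples):

using System.IO;
using Microsoft.WindowsAzure.Storage.Blob;

public static class LibraryUploader
{
    public static void Upload(CloudBlockBlob blob, string path)
    {
        var options = new BlobRequestOptions
        {
            // Anything larger than 4MB is split into blocks...
            SingleBlobUploadThresholdInBytes = 4 * 1024 * 1024,
            // ...and up to four blocks are uploaded in parallel
            ParallelOperationThreadCount = 4
        };

        using (FileStream fs = File.OpenRead(path))
        {
            blob.UploadFromStream(fs, accessCondition: null, options: options);
        }
    }
}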

It sounds as if there is little advantage in writing your own chunking code, except that if you just call the UploadFromFile or UploadFromStream methods of CloudBlockBlob, you do not get any progress notification event (though you can get a retry notification from an OperationContext object passed to the method). Therefore I looked around for a sample using parallel uploads, and found this one from Microsoft MVP Tyler Doerksen, using C#’s Parallel.For.

Be warned: it does not work! Doerksen’s approach is to upload the entire file into memory (not great, but not as bad as on a web server), send it in chunks using CloudBlockBlob.PutBlock, adding each block ID to a collection at the same time, and then to call CloudBlockBlob.PutBlockList. The reason it does not work is that the order in which Parallel.For runs its iterations is indeterminate, so the block IDs are unlikely to be added to the collection in the right order.

I fixed this, it tested OK, and then I decided to further improve it by reading each chunk from the file within the loop, rather than loading the entire file into memory. I then puzzled over why my code was broken. The files uploaded, but they were corrupt. I worked it out. In the following code, fs is a FileStream object:

fs.Position = x * blockLength;
bytesread = fs.Read(chunk, 0, currentLength);

Spot the problem? Since fs is a variable declared outside the loop, other threads were setting its position during the read operation, with random results. I fixed it like this:

lock (fs)
{
    fs.Position = x * blockLength;
    bytesread = fs.Read(chunk, 0, currentLength);
}

and the file corruption disappeared.
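
Putting the two fixes together (block IDs stored by index so the committed order is right, and the shared FileStream protected by a lock), the working version looks roughly like this sketch rather than production code; the block size and degree of parallelism are arbitrary:

using System;
using System.IO;
using System.Text;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Blob;

public static class ParallelBlockUploader
{
    public static void Upload(CloudBlockBlob blob, string path, int blockLength = 1024 * 1024)
    {
        using (FileStream fs = File.OpenRead(path))
        {
            long fileLength = fs.Length;
            int blockCount = (int)Math.Ceiling((double)fileLength / blockLength);

            // Store block IDs by index, not in completion order, so that
            // PutBlockList commits them in the right sequence
            var blockIds = new string[blockCount];

            Parallel.For(0, blockCount, new ParallelOptions { MaxDegreeOfParallelism = 4 }, x =>
            {
                int currentLength = (int)Math.Min(blockLength, fileLength - (long)x * blockLength);
                var chunk = new byte[currentLength];

                // Serialise access to the shared stream: setting Position and
                // reading must happen as one atomic operation per block
                lock (fs)
                {
                    fs.Position = (long)x * blockLength;
                    int bytesread = 0;
                    while (bytesread < currentLength)
                    {
                        bytesread += fs.Read(chunk, bytesread, currentLength - bytesread);
                    }
                }

                string blockId = Convert.ToBase64String(Encoding.UTF8.GetBytes(x.ToString("d6")));
                blockIds[x] = blockId;

                blob.PutBlock(blockId, new MemoryStream(chunk), null);
            });

            blob.PutBlockList(blockIds);
        }
    }
}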

I am not sure why, but the manually coded parallel uploads seem to slightly but not dramatically improve performance, to around 100-105 KB/s, almost exactly what my ASP.NET MVC application achieves over my broadband connection.

There is another approach worth mentioning. It is possible to bypass the web server and upload directly from the browser to Azure storage. To do this, you need to allow cross-origin resource sharing (CORS) as explained here. You also need to issue a Shared Access Signature, a temporary key that allows read-write access to Azure storage. A guy called Blair Chen seems to have this all figured out, as you can see from his Azure speed test and jazure JavaScript library, which makes it easy to upload a blob from the browser.
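
Generating the Shared Access Signature server-side is only a few lines (a minimal sketch; the container name and the one-hour expiry are my choices):

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class SasIssuer
{
    // Return a URL the browser can use to write a blob directly, valid for one hour
    public static string GetUploadUrl(CloudStorageAccount account, string blobName)
    {
        CloudBlobContainer container =
            account.CreateCloudBlobClient().GetContainerReference("uploads");
        CloudBlockBlob blob = container.GetBlockBlobReference(blobName);

        var policy = new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Write,
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(1)
        };

        return blob.Uri.AbsoluteUri + blob.GetSharedAccessSignature(policy);
    }
}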

I was contemplating going that route, but it seems that performance is no better (judging by the Test Upload Big Files section of Chen’s speed test), so I should probably be content with the parallel JavaScript upload solution, which avoids fiddling with CORS.

Overall, has my experience with the Blob storage API been good? I have not found any issues with the service itself so far, but the documentation and samples could be better. This page should be the jumping off point for all you need to know for a basic application like mine, but I did not find it easy to find good samples or documentation for what I thought would be a common scenario, uploading large files with ASP.NET MVC.

Update: since writing this post I have come across this post by Rob Gillen which addresses the performance issue in detail (and links to working Parallel.For code); however I suspect that since the post is four years old the conclusions are no longer valid, because of improvements to the Azure storage client library.