Category Archives: .net

Book review: Professional ASP.NET MVC 5. Is this the way to learn ASP.NET MVC?

This book caught my eye because while I like ASP.NET MVC, Microsoft’s modern web application framework, it seems to be badly documented. Even the word “badly” is not quite right; there is lots of documentation, some of high quality, but finding your way around it is challenging, thanks to the many different pieces involved. When I completed an ASP.NET MVC project recently, I found it frustrating thanks to over-reliance on sample projects (hey, here is an application we did that works, see if you can figure out how we did it), many out-of-date articles relating to old versions, and the opposite: posts and samples built on preview software that would be unwise to use in production.

In my experience ASP.NET MVC is both cleaner and faster than ASP.NET Web Forms, the older .NET web framework, but there is more to learn before you can go ahead and write an application.

Professional ASP.NET MVC 5 gives you nearly 600 pages on the subject. It is aimed at a broad readership: the introduction states:

Professional ASP.NET MVC 5 is designed to teach ASP.NET MVC, from a beginner level through advanced topics.

Perhaps that is too broad, though the idea is that the first six chapters (about 150 pages) cover the basics, and that the later chapters are more advanced, so if you are not a beginner you can start at chapter 7.

The main author is Jon Galloway who is a Technical Evangelist at Microsoft. The other authors are Brad Wilson, formerly at Microsoft and now at CenturyLink Cloud; K Scott Allen at OdeToCode, David Matson who is on the ASP.NET MVC team at Microsoft, and Phil Haack formerly at Microsoft and now at GitHub. I get the impression that Haack wrote several chapters in an earlier edition of the book, but did not work directly on this one; Galloway brought his chapters up to date.

Be in no doubt: there are plenty of well-informed ASP.NET MVC people on this team.

The earlier part of the book uses a sample Music Store application, a version of which is publicly available here. You can also download a tutorial, based on the sample, written by Galloway. The public tutorial however dates from 2011 and is based on ASP.NET MVC 3 and Visual Studio 2010. The book uses Visual Studio 2013.

Chapters 1 to 6, the beginner section, do a decent job of talking you through how to build a first application. There are chapters on Controllers, Views, Models, Forms and HTML Helpers, and finally Data Annotations and Validation. It’s a good basic introduction, but if you are like me you will come out with many questions, such as: what is an ActionResult (the return type of most controller actions)? You have to wait until chapter 16 for a full description.

Chapter 7 is on Membership, Authorization and Security. That is too much for one chapter. It is mostly on security, and inadequate on membership. One of my disappointments with this book is that Azure Active Directory hardly gets a mention; yet to my mind integration of web applications with Office 365 (which uses Azure AD) is a huge feature for Microsoft.

On security though, this is a useful chapter, with handy coverage of Cross-Site Request Forgery and other common vulnerabilities.

Next comes a chapter on Ajax with a little bit on jQuery, client-side validation, and Ajax ActionLinks. Here is the dilemma though. Does it make sense to cover jQuery in detail, when this very popular open source library is widely documented elsewhere? On the other hand, does it make sense not to cover jQuery in detail, when it is usually a vital part of your ASP.NET MVC application?

I would add that this title is poor on design aspects of a web application. That said, I was not expecting much on the design side; but what would help would be coverage of how to work with designers: what is safe to hand over to designers, and how does a typical designer/developer workflow play out with ASP.NET MVC?

I would also like to see more coverage of how to work with Bootstrap, the CSS framework which is integrated with ASP.NET MVC 5 in Visual Studio. I found it a challenge, for example, to discover the best way to change the default fonts and colours – a rather basic requirement.

Chapter 9 is on routing: dry, but essential background. Chapter 10 is on NuGet, the Visual Studio package manager – a good chapter, given how important NuGet now is for most Visual Studio work.

Incidentally, many of the samples for the book can be installed via NuGet. It’s not completely obvious how to do this. I found the best way is to go to http://www.nuget.org and search for Wrox.ProMvc5 – here is the link to the search results. This lists all the packages available; note the package names. Then open the NuGet Package Manager Console and type:

install-package [packagename]

to get the sample.

Chapter 11 is a too-brief chapter on the Web API. I would like to see more on this, maybe even walking through a complete application with clients for, say, Windows Phone and a web application – though the following chapter does present a client example using AngularJS.

Chapter 13 is a somewhat theoretical look at dependency injection and inversion of control; handy, as Microsoft developers talk a lot about this.

Next comes a very brief introduction to unit testing, intended I think only as a starting point.

For me, the next two chapters are the most valuable. Chapter 15 concerns extending MVC: you learn about extending models with value providers and model binders; validating models; writing HTML helpers and Razor (the ASP.NET MVC view engine) helpers; and authentication and authorization filters. Chapter 16, on advanced topics, looks in more detail at Razor, routing, templates, ActionResult and a few other things.
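To give a flavour of the extensibility material: a custom HTML helper is just an extension method on HtmlHelper. Here is a minimal sketch of my own (the Badge helper and its CSS class are illustrative, not from the book):

using System.Web;
using System.Web.Mvc;

public static class LabelHelpers
{
    // Usage in a Razor view: @Html.Badge("New")
    public static IHtmlString Badge(this HtmlHelper html, string text)
    {
        var tag = new TagBuilder("span");
        tag.AddCssClass("badge");
        tag.SetInnerText(text); // HTML-encodes the text for safety
        return new HtmlString(tag.ToString());
    }
}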

Finally, we get a look at how the NuGet.org application was put together, and an appendix covering some miscellaneous details such as what is new in ASP.NET MVC 5.1.

Conclusions

I find this one hard to summarise. There is too much missing to give this an unreserved recommendation. I would like more on topics including ASP.NET Identity, Azure AD integration, Entity Framework, Bootstrap, and more. Trying to cover every developer from beginner to advanced is too much; removing some of the introductory material would have left more room for the more interesting sections. The book is also rather weighted towards theory rather than hands-on coding. At some points it felt more like an explanation from the ASP.NET MVC team on “why we did it this way”, than a developer tutorial.

That said, having those insights from the team is valuable in itself. As someone who has only recently engaged with ASP.NET MVC in a real application, I did find the book useful and will come back to some of those explanations in future.

Looking at what else is available, it seems to me that there is a shortage of books on this subject and that a “what you need to know” title aimed at professional developers would be widely welcomed. It would pay Microsoft to sponsor it, since my sense is that some developers stick with ASP.NET Web Forms not because it is better, but because it is more approachable.

Microsoft introduces a new 2D graphics API for the Windows Runtime

Microsoft has announced Win2D, a Windows Runtime API that wraps Direct2D (part of DirectX), for accelerated graphics in Windows Store apps.

The new API is described here and you can download the current binary here. It is in its early stages, but already supports basic drawing, bitmap loading and some image effects, and includes a vector and matrix math library. Here is some sample code:

void canvasControl_Draw(CanvasControl sender, CanvasDrawEventArgs args)
{
    args.DrawingSession.Clear(Colors.CornflowerBlue);
    args.DrawingSession.DrawEllipse(190, 125, 140, 40, Colors.Black, 6);
    args.DrawingSession.DrawText("Hello, world!", 100, 100, Colors.Yellow);
}

Although this hardly looks exciting, it is important because it enables accelerated custom drawing from languages other than C++, and without needing to learn Direct2D itself. It will be easier to make rich custom controls, or casual 2D games.

That said, there are already alternative C# wrappers for DirectX in Windows Runtime apps, such as SharpDX.

Some of the comments on the MSDN post are sceptical:

Managed DirectX and XNA were however cancelled despite the frustration from the community which in response created open source alternatives to save the projects and customers that had invested in technology Microsoft introduced.

I understand that the future is "uncertain", but is this technology something that we should dare invest in or will it see the same fate as it’s earlier incarnations?

Microsoft’s Shawn Hargreaves offers reassurance:

Win2D is absolutely not a side project or some kind of stop gap that will later be replaced by anything different.

The target here is universal apps, so not just Windows Store apps but also Windows Phone. Despite the hesitant reception for the Windows Runtime in Windows 8, it looks as if Microsoft is still committed to the platform and that it will remain centre stage in Windows vNext.

Bing Developer Assistant adds code samples to Visual Studio IntelliSense, with mixed results

Microsoft has updated its Bing Developer Assistant Beta, a Visual Studio 2013 add-in which hooks into IntelliSense so that you get code samples as well as brief documentation. For example, in an Entity Framework project, if you select dbContext.SaveChanges, you get a code sample which uses that method.
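By way of illustration, the kind of snippet you would hope to see for SaveChanges is something like this (a minimal sketch; ShopContext and Product are hypothetical names):

using (var db = new ShopContext())
{
    var product = db.Products.Find(1);
    product.Price = 9.99m;
    db.SaveChanges(); // one call persists all tracked changes
}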

There is no guarantee of course that the sample is relevant to what you are trying to accomplish. You can hit Search More though and get a selection of code snippets and sample projects, drawn from sites including MSDN, StackOverflow and Codeproject.

Developer beware though. Looking at the code samples, the top one is from a 2011 blog post relating to CTP (Community Tech Preview) 5 of Entity Framework 4.1. If you hit the link, you get this:

“The information in this post is out of date”, it says, followed by a link to what is in fairness a rather helpful article on using SaveChanges.

Hmm, maybe Bing Developer Assistant should try filtering the search to eliminate samples based on preview or obsolete APIs? A snag here though is that on occasion the blogs and samples on preview frameworks are all you can get, because by the time the thing is actually released, the developer evangelists have moved on to blog about the next up-and-coming cool thing.

If you choose an object member for which Bing finds no code sample, you are prompted to add one of your own.

This takes you to the Developer Network sample upload page.

This form is quite a lot of work, but lets you add a code snippet or sample project together with title and comments explaining what it does.

The Bing Developer Assistant also searches for sample projects.

Again it is a case of picking and choosing what is really relevant; but developers are experts and expected to use common sense.

A drawback with Bing Developer Assistant is that only one add-on can extend IntelliSense, so if you use Resharper or another tool which also does this, you have to choose which one to allow.

In the end, this is all about integrating web search into the IDE. Is that a good idea, or is it better simply to have your web browser open, perhaps on another display, and type “dbContext SaveChanges EF6” or some such into your favourite search engine?

There is some merit in a search engine that automatically filters to show only code samples – hey, that is what Google’s popular Code Search did, until it was mysteriously shut down – though I’m not sure how much I like the idea of possibly obsolete and deprecated samples showing up in Visual Studio as you are coding.

Still, the truth is that web search is critical to software development today and it is good to see that recognised.

Developing an app on Microsoft Azure: a few quick reflections

I have recently completed (if applications are ever completed) an application which runs on Microsoft’s Azure platform. I used lots of Microsoft technology:

  • Visual Studio 2013
  • Visual Studio Online with Team Foundation version control
  • ASP.NET MVC 4.0
  • Entity Framework 4.0
  • Azure SQL
  • Azure Active Directory
  • Azure Web Sites
  • Azure Blob Storage
  • Microsoft .NET 4.5 with C#

The good news: the app works well and performance is good. The application handles the upload and download of large files by authorised users, and replaces a previous solution using a public file sending service. We were pleased to find that the new application is a little faster for upload and download, as well as offering better control over user access and a more professional appearance.

There were some complications though. The requirement was for internal users to log in with their Office 365 (Azure Active Directory) credentials, but for external users (the company’s customers) to log in with credentials stored in a SQL Server database – in other words, hybrid authentication. It turns out you can do this reasonably seamlessly by implementing IPrincipal in a custom class to support the database login. This is largely uncharted territory though in terms of official documentation and took some effort.
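For what it is worth, the shape of the solution is something like the following sketch – illustrative names only, not production code:

using System;
using System.Security.Principal;

public class DatabasePrincipal : IPrincipal
{
    private readonly string[] roles;

    public DatabasePrincipal(IIdentity identity, string[] roles)
    {
        Identity = identity;
        this.roles = roles ?? new string[0];
    }

    public IIdentity Identity { get; private set; }

    public bool IsInRole(string role)
    {
        return Array.IndexOf(roles, role) >= 0;
    }
}

// Assigned once the database login succeeds, for example:
// HttpContext.Current.User = new DatabasePrincipal(new GenericIdentity(userName), roles);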

Second, Microsoft’s Azure Active Directory support for custom applications is half-baked. You can create an application that supports Azure AD login in a few moments with Visual Studio, but it does not give you any access to metadata such as the security groups to which the user belongs. I have posted about this in more detail here. There is an API of course, but it is currently a moving target: be prepared for some hassle if you try this.

Third, while Azure Blob Storage itself seems to work well, most of the resources for developers seem to have little idea of what a large file is. Since a primary use case for cloud storage is to cover scenarios where email attachments are not good enough, it seems to me that handling large files (by which I mean multiple GB) should be considered normal rather than exceptional. By way of mitigation, the API itself has been written with large files in mind, so it all works fine once you figure it out. More on this here.

What about Visual Studio? The experience has been good overall. Once you have configured the project correctly, you can update the site on Azure simply by hitting Publish and clicking Next a few times. There is some awkwardness over configuration for local debugging versus deployment. You probably want to connect to a local SQL Server and the Azure storage emulator when debugging, and the Azure-hosted versions after publishing. Visual Studio has a Web.Debug.Config and a Web.Release.Config which let you apply transformations to your main Web.Config when publishing – though note that these have no effect when you simply run your project in Release mode. The correct usage is to set Web.Config to what you want for debugging, and apply the deployment configuration in Web.Release.Config; then it all works.
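As an illustration, a Web.Release.Config transform that swaps in the Azure SQL connection string on publish might look like this (names and values are placeholders):

<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <add name="DefaultConnection"
         connectionString="Server=tcp:yourserver.database.windows.net,1433;Database=yourdb;..."
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>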

The piece that caused me most grief was a setting for <wsFederation>. When a user logs in with Azure AD, they get redirected to a Microsoft site to log in, and then back to the application. Applications have to be registered in Azure AD for this to work. There is some uncertainty though about whether the reply attribute, which specifies the redirection back to the app, needs to be set explicitly or not. In practice I found that it does need to be explicit, otherwise you get redirected to the deployed site even when debugging locally – not good.
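For the record, the element in question lives in web.config and, with the reply attribute set explicitly, looks something like this (the tenant and URLs are placeholders):

<system.identityModel.services>
  <federationConfiguration>
    <wsFederation passiveRedirectEnabled="true"
                  issuer="https://login.windows.net/yourtenant.onmicrosoft.com/wsfed"
                  realm="https://localhost:44300/"
                  reply="https://localhost:44300/" />
  </federationConfiguration>
</system.identityModel.services>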

I have mixed feelings about Team Foundation version control. It works, and I like having a web-based repository for my code. On the other hand, it is slow, and Visual Studio sulks from time to time and requires you to re-enter credentials (Microsoft seems to love making you do that). If you have a less than stellar internet connection (or even a good one), Visual Studio freezes from time to time since the source control stuff is not good at working in the background. It usually unfreezes eventually.

As an experiment, I set the project to require a successful build before check-in. The idea is that you cannot check in a broken build. However, this build has to take place on the server, not locally. So you try to check in, Visual Studio says a build is required, and prompts you to initiate it. You do so, and a build is queued. Some time later (5-10 minutes) the build completes and a dialog appears behind the IDE saying that you need to reconcile changes – even if there are none. Confusing.

What about Entity Framework? I have mixed feelings here too, and have posted separately on the subject. I used code-first: just create your classes and add them to your DbContext and all the data access code is handled for you, kind-of. It makes sense to use EF in an ASP.NET MVC project since the framework expects it, though it is not compulsory. I do miss the control you get from writing your own SQL though; and found myself using the SqlQuery method on occasion to recover some of that control.
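That escape hatch looks something like this – a sketch, with an illustrative query and a hypothetical OrderTotal class to receive the results:

// Drop down to raw SQL where EF's generated queries fall short
var totals = db.Database.SqlQuery<OrderTotal>(
    "SELECT CustomerId, SUM(Amount) AS Total FROM Orders GROUP BY CustomerId")
    .ToList();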

Finally, a few notes on ASP.NET MVC. I mostly like it; the separation between Razor views (essentially HTML templates into which you pour your data at runtime) and the code which implements your business logic and data access is excellent. The code can get convoluted though. Have a look at this useful piece on the ASP.NET MVC WebGrid and this remark:

grid.Column("Name",
  format: @<text>@Html.ActionLink((string)item.Name,
  "Details", "Product", new { id = item.ProductId }, null)</text>),

The format parameter is actually a Func, but the Razor view engine hides that from us. But you’re free to pass a Func—for example, you could use a lambda expression.

The code works fine but is it natural and intuitive? Why, for example, do you have to cast the first argument to ActionLink to a string for it to work (I can confirm that it is necessary), and would you have worked this out without help?

I also hit a problem restyling the pages generated by Visual Studio, which use the Twitter Bootstrap framework. The problem is that bootstrap.css is a generated file and it does not make sense to edit it directly. Rather, you should edit some variables and use them as input to regenerate it. I came up with a solution which I posted on Stack Overflow, but there are no comments yet – perhaps this post will stimulate some, as I am not sure whether I found the best approach.

My sense is that while ASP.NET MVC is largely a thing of beauty, it has left behind more casual developers who want a quick and easy way to write business applications. Put another way, the framework is somewhat challenging for newcomers, and that in turn affects the breadth of its adoption.

Developing on Azure and using Azure AD makes perfect sense for businesses which are using the Microsoft platform, especially if they use Office 365, and the level of integration on offer, together with the convenience of cloud hosting and anywhere access, is outstanding. There remain some issues with the maturity of the frameworks, ever-changing libraries, and poor or confusing documentation.

Since this area is strategic for Microsoft, I suggest that it would benefit the company to work hard on pulling it all together more effectively.

Should you use Entity Framework for .NET applications?

I have been working on a project which I thought would be simpler than it turned out to be – nothing new there, most software projects are like that.

The project involves upload and download of large files from Azure storage. There is a database as part of the application, nothing too demanding, but requiring some typical CRUD (Create, Retrieve, Update, Delete) functionality. I had to decide how to implement this.

First, a confession. I am comfortable using SQL and my normal approach to a database application is to use ADO.NET DataReaders to read data. They are brilliant; you just send some SQL to the database and back comes the data in a form that is easy to consume in C# code.

When I need to update the data, I use SqlCommand.ExecuteNonQuery which executes arbitrary SQL. It is easy to use parameters and transactions, and I get full control over how many connections are open and so on.
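To make the comparison concrete, here is the kind of code I mean – a minimal sketch with illustrative table and variable names:

using System;
using System.Data.SqlClient;

// Reading rows with a DataReader
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("SELECT Id, Name FROM Customers WHERE Region = @region", conn))
{
    cmd.Parameters.AddWithValue("@region", region);
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            Console.WriteLine("{0}: {1}", reader.GetInt32(0), reader.GetString(1));
        }
    }
}

// Updating via ExecuteNonQuery, with parameters
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("UPDATE Customers SET Email = @email WHERE Id = @id", conn))
{
    cmd.Parameters.AddWithValue("@email", email);
    cmd.Parameters.AddWithValue("@id", id);
    conn.Open();
    cmd.ExecuteNonQuery();
}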

This approach has always worked well for me and I get excellent performance and complete flexibility.

However, when coding in ASP.NET MVC and Visual Studio you are now steered firmly towards Entity Framework (EF), Microsoft’s object-relational mapping library. You can use a code-first approach. Simply create a C# class for the object you want to store, and EF handles all the drudgery of creating tables and building SQL queries, letting you concentrate on the unique features of your application.
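The minimum you need is strikingly small – something like this (the class and property names are my own example):

using System.Data.Entity;

public class Product
{
    public int Id { get; set; }       // becomes the primary key, by convention
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public class StoreContext : DbContext
{
    public DbSet<Product> Products { get; set; } // EF creates a Products table for you
}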

In addition, you can right-click in the Solution Explorer, choose Add Controller, and a wizard will generate all the code for listing, creating, editing and deleting those objects.

Well, that is the idea, and it does work, but I soon ran into issues that made me wonder if I had made the right decision.

One of the issues is what happens when you change your mind. Maybe that field should be an Int rather than a String. Maybe you need a second phone number field. Maybe you need to create new tables. How do you keep the database in sync with your classes?

This is called Code First Migrations and involves running commands that work out how the database needs to change and generate code to update it. It’s clever stuff, but the downside is that I now have a bunch of generated classes and a generated _MigrationHistory table which I did not need before. In addition, something went slightly wrong in my case and I ended up having to comment out some of the generated code in order to make the migration work.
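For reference, the Package Manager Console commands involved are along these lines (the migration name is whatever you choose to call the change):

Enable-Migrations
Add-Migration AddCustomerPhoneNumber
Update-Database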

At this point EF is creating work for me, rather than saving it.

Another issue I encountered was puzzling out how to do stuff beyond the most trivial. How do you replace an HTML edit box with a dropdown list? How do you exclude fields from being saved when you call dbContext.SaveChanges? What is the correct way to retrieve and modify data in pure code, without data binding?
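For what it is worth, one answer to the field-exclusion question that I believe works is along these lines (EF6; entity and property names are illustrative):

// Requires using System.Data.Entity; for EntityState
var product = db.Products.Find(id);
product.Name = newName;
db.Entry(product).State = EntityState.Modified;
db.Entry(product).Property(p => p.CreatedOn).IsModified = false; // leave this column alone
db.SaveChanges();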

I am not the first to have questions. I came across this documentation: an article promisingly entitled How to: Add, Modify, and Delete Objects, which tells you nothing of value – and judging by its feedback ratings, few readers found it helpful.

You should probably start here instead. Still, be aware that EF is by no means straightforward. Instead of having to know SQL and the basics of ADO.NET commands and DataReaders, you now have to know EF, and I am not sure it is any less intricate. You also need to be comfortable with data binding and LINQ (Language Integrated Query) to make sense of it all, though I will add that strong data binding support is one reason why EF is a good fit for ASP.NET MVC.

Should you use Entity Framework? It remains, as far as I can tell, the strategic direction for data access on Microsoft’s platform, and once you have worked out the basics you should be able to put together simple database applications more quickly and more naturally than with manually coded SQL.

I am not sure it makes sense for heavy-duty data access, since it is harder to fine-tune performance and if you hit subtle bugs, you may end up in the depths of EF rather than debugging your own code.

I would be interested in hearing from other developers. Do you love EF, avoid it, or is it just about OK?

The UK government is adopting Open Document: some observations

The UK government is adopting the Open Document Format for Office Applications, for documents that are editable (read-only documents will be PDF or HTML). You can read Mike Bracken’s (Government Digital Service) blog on the subject here, and the details of the new requirements here. If you want to see the actual standards, they are on the OASIS site here.

I followed the XML document standards wars in some detail back in 2006-2008. The origins of ODF go back to Sun Microsystems (a staunch opponent of Microsoft), which acquired an Office suite called StarOffice, made it open source, and supported OpenOffice.org. My impression was that Sun’s intentions were in part to disrupt the market for Microsoft Office, and in part to promote a useful open standard out of conviction. OpenOffice eventually found its way to the Apache Foundation after Oracle’s acquisition of Sun. You can find it here.

During that time, Microsoft responded by shifting Office to use XML formats by default – these are the formats we know as .docx, .xlsx and so on. It also made the formats an open standard via ECMA and ISO, to the indignation of ODF advocates, who found every possible fault in the standards and the process. There were and are faults; but it has always seemed to me that an open XML standard for Microsoft Office documents was a real step forward from the wholly proprietary (but reverse-engineered) binary formats.

The standards wars are to some extent a proxy for the effort to shift Microsoft from its dominance of business document authoring. Microsoft charges a lot for Office, particularly for businesses, and arguably this is an unnecessary burden. On the other hand, it is a good product which I personally prefer to the alternatives on Windows (on the Mac I am not so sure), and considering the amount of use Office gets during the working day even a small improvement in productivity is worth paying for.

As a further precaution, Microsoft added ODF support into its own Office suite. This was poor at first, though it has no doubt improved since 2007. However I would not advise anyone to set Microsoft Office to use ODF by default, unless mandated by some requirement such as government regulation. It is not the native format and I would expect a greater likelihood that something could go slightly wrong in formatting or metadata.

Bracken does not mention Microsoft Office in his blog; but as ever, the interesting part of this decision is how it will impact Office users in government, or working with government. If it is a matter of switching defaults in Office, that is no big deal, but if it means replacing Microsoft Office with Open Office or its fork, Libre Office, that will have more impact.

The problem with abandoning Microsoft Office is not only that the alternatives may fall short, but also that the ecosystem around Microsoft Office and its document formats is richer – in other words, tools that consume or generate Office documents, add-ins for Office, and so on.

This also means that Microsoft Office documents are, in my experience, more interoperable (not less) than ODF documents.

That does not in itself make the UK government’s decision a bad one, because in making the decision it is helping to promote an alternative ecosystem. On the other hand, it does mean that the decision could be costly in constraining the choice of tools while the ODF ecosystem catches up (if it does).

How does the move towards cloud services like Office 365 and Google Docs impact on all this? Microsoft says it supports ODF in SharePoint; but for sure it is better to use Microsoft’s own formats there. For example, check the specifications for Office Online. You can edit docx in the browser, but not odt (Open Document Text); it is the same story with spreadsheets and presentations.

Google has recently added native support for the Microsoft formats to Google Docs.

Amazon’s Zocalo service, which I have just reviewed for the Register, can preview Microsoft’s formats in the browser, but while it also supports odt for preview, it does not support ods (Open Document Spreadsheet).

A good decision then by the UK government? Your answer may be partly ideological, but as a UK taxpayer, my feelings are mixed.

For more information on this and other government IT matters, I recommend Bryan Glick’s pieces over on Computer Weekly, like this one.

RemObjects previews native Apple Mac IDE for C#, .NET, Oxygene

RemObjects is previewing a new native Mac IDE for its Oxygene and C# compilers. Oxygene is a Delphi-like language (in other words, a variant of Object Pascal) which targets iOS, Mac, Android, Windows Phone and Windows. RemObjects C# shares the same targets. Both can compile to .NET assemblies for Windows, or to Mono for cross-platform .NET, or to a Mac or iOS executable (using the LLVM compiler), or to Java bytecode for the Android Dalvik runtime. You can get both Oxygene and RemObjects C# bundled in a product called Elements.

In the past, RemObjects has used Visual Studio as its IDE. While this is a natural choice for Windows users, much development today is done on the Mac. Requiring Mac users to develop in a Windows Virtual Machine adds friction, so RemObjects is now working on a native IDE for the Mac codenamed Fire.

I gave Fire the briefest of looks. Among the options for a new .NET application, note the appearance of ASP.NET MVC 4, and even Silverlight.

There is a similar set of options for a new Cocoa application.

If you are developing for Cocoa, you can edit the resource file in Apple’s Xcode and use it in your application. I started a new C# Cocoa app, made a few changes, and then ran it from the IDE.

I imagine Microsoft will be keeping an eye on tools like this – if it is not, it should be – since they fit with the strategy of supporting Microsoft services on multiple devices. Visual Studio is a fine tool, but if Microsoft is serious about cross-platform, it needs strong Mac-native development tools. Xamarin came up with Xamarin Studio, which is cross-platform for Windows and Mac, but the RemObjects approach also looks worth investigating.

PS The first release of RemObjects C# lacked full generics support, a failing for which Xamarin and Mono founder Miguel de Icaza took RemObjects to task on Twitter. I was amused to see this in the changelog for April 2014:

65764 Full support for Generics on Cocoa, as requested by Miguel

For more details on Fire, see here.

Farewell Nokia X? Not quite, but the signs are clear as Microsoft bets on Universal Apps

I could never make sense of Nokia X, the Android-with-Microsoft-services device which Nokia announced less than a year ago at Mobile World Congress in Barcelona:

If Nokia X is a worse Android than Android, and a worse Windows Phone than Windows Phone, what is the point of it and why will anyone buy?

Nokia X is Android without Google’s Play Store; if Amazon struggles to persuade developers to port apps to Kindle Fire (another non-Google Android) then the task for Nokia, lacking Amazon’s ecosystem, is even harder. Now, following Microsoft’s acquisition, it makes even less sense: how can Microsoft simultaneously evangelise both Windows Phone and an Android fork with its own incompatible platform and store?

Nokia X was meant to be a smartphone at feature phone prices, or something like that, but since Windows Phone runs well on low-end hardware, that argument does not stand up either.

Now Nokia X is all but dead. Microsoft CEO Satya Nadella:

Second, we are working to integrate the Nokia Devices and Services teams into Microsoft. We will realize the synergies to which we committed when we announced the acquisition last September. The first-party phone portfolio will align to Microsoft’s strategic direction. To win in the higher price tiers, we will focus on breakthrough innovation that expresses and enlivens Microsoft’s digital work and digital life experiences. In addition, we plan to shift select Nokia X product designs to become Lumia products running Windows. This builds on our success in the affordable smartphone space and aligns with our focus on Windows Universal Apps.

and former Nokia CEO Stephen Elop, now in charge of Microsoft devices:

In addition to the portfolio already planned, we plan to deliver additional lower-cost Lumia devices by shifting select future Nokia X designs and products to Windows Phone devices. We expect to make this shift immediately while continuing to sell and support existing Nokia X products.

Nadella has also announced a huge round of job cuts, mainly of former Nokia employees: around 12,500, which is roughly 50% of those who came over. Nokia’s mobile phone business is not all Windows Phone (Lumia) and Nokia X. In addition, it sells really low-end phones, the kind you can pick up for £10 at a supermarket, and the Asha range of budget smartphones. Does Microsoft have any interest in Asha? Elop does not even mention it.

It seems then that Microsoft is focusing on what it considers strategic: Windows Phone at every price point, and Universal Apps which let developers create apps for both Windows Phone and full Windows (8 and higher) from a single code base.

Microsoft does also intend to support Android and iOS with apps, but has no need to make its own Android phones in order to do so.

My view is that Nokia did a good job with Windows Phone within the constraints of a difficult market; not perfect (the early Lumia 800 devices were buggy, for example), but better by far than Microsoft managed with any other OEM partner. I currently use a Lumia 1020, which I regard as something of a classic, with its excellent camera and general high quality.

It seems to me reassuring (from a Windows Phone perspective) that Microsoft is keeping Windows Phone engineering in Finland:

Our phone engineering efforts are expected to be concentrated in Salo, Finland (for future, high-end Lumia products) and Tampere, Finland (for more affordable devices). We plan to develop the supporting technologies in both locations.

says Elop, who also notes that Surface and Xbox teams will be little touched by today’s announcements.

Incidentally, I wrote recently about Universal Apps here (free registration required) and expressed the view that Microsoft cannot afford yet another abrupt shift in its developer platform; the continuing support for Universal Apps in the Nadella era makes that less likely.

Speculating a little, it also would not surprise me if Universal Apps were extended via Xamarin support to include Android and iOS – now that is really a universal app.

Will Microsoft add some kind of Android support to Windows Phone itself? This is rumoured, though it could be counter-productive in terms of winning over developers: why bother to create a Windows Phone app if your Android app will kind-of run?

Further clarification of Microsoft’s strategy is promised in the public earnings call on July 22nd.

A note on Azure storage and downloading large files

I have written a simple ASP.NET MVC application for upload and download of files to/from Azure storage.

Getting large file upload to work was the first exercise, described here. That is working well; but what about download?

If your files in Azure storage are public, you can simply serve a URL to the file. If a file is not public though, you have a couple of choices:

1. Download the file under application control, by writing to Response.OutputStream or using a FileResult action.

2. Issue a Shared Access Signature (SAS) to the client, which enables it to retrieve the file directly from Azure storage. The SAS is sent as a URL query parameter which tells Azure storage that the request is authorised. The browser downloads the file directly, so it makes no difference to your web application if the file is large.

Note that if you use the first option, it will not work with large files if you simply call DownloadToStream or similar:

container.GetBlockBlobReference(FileName).DownloadToStream(Response.OutputStream);

Why not? Well, the way this code works is that it downloads the large file to the web server, then sends it to the browser. What if your large file is 5GB? The browser will wait a long time for the first byte to be served (giving the user an unresponsive page); but before that happens, the web application will probably throw an exception because it does not like downloading such a large file.

This means the SAS option is a good one, though note that you have to specify an expiry time which could cause problems for users on a slow connection.
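For illustration, issuing a SAS with the storage client library looks roughly like this (assuming container is a CloudBlobContainer; the one-hour expiry is arbitrary):

var blob = container.GetBlockBlobReference(fileName);
string sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1)
});
string downloadUrl = blob.Uri + sas; // hand this URL to the browser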

Another option is to serve the file in chunks. Use CloudBlockBlob.DownloadRangeToStream to write to Response.OutputStream in a loop until the download is complete. Call Response.Flush() after each chunk to send the chunk to the browser immediately.
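A sketch of that loop (the chunk size is arbitrary; blob is a CloudBlockBlob):

blob.FetchAttributes(); // populates blob.Properties.Length
long remaining = blob.Properties.Length;
long offset = 0;
const int chunkSize = 4 * 1024 * 1024; // 4MB per round trip

Response.BufferOutput = false;
while (remaining > 0)
{
    long length = Math.Min(chunkSize, remaining);
    blob.DownloadRangeToStream(Response.OutputStream, offset, length);
    Response.Flush(); // send the chunk to the browser immediately
    offset += length;
    remaining -= length;
}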

This gives the user a nice responsive download experience complete with a cancel option as provided by the browser, and does not crash the application on the server. It seems to me a reasonable approach if the web application is also hosted on Azure and therefore has a fast connection to Azure storage.

What about resuming a failed download? The SAS approach should work, as Azure supports it. You could also support this in your app with some additional work, since resuming means reading the Range header in a GET request. I have not tried doing this, but you might find some clues here.

Supporting developers: how could Microsoft improve?

Microsoft invests substantial resources in supporting developers; yet the last two topics I have explored in earnest – the Azure blob storage service, and ASP.NET MVC with Azure Active Directory integration – have been frustrating and difficult. Admittedly I am only an occasional developer, but I suspect my experience is common. What is going wrong, and how could Microsoft improve?

Among the problems I have encountered:

  • Abundant documentation of simple first steps with a vacuum for anything more advanced
  • Samples that do not run without tweaking
  • Samples designed for old versions of Visual Studio
  • Samples which use obsolete or deprecated libraries
  • Samples which are poor solutions for the problem they are supposed to address
  • Documentation or samples which use preview, beta or even alpha libraries. Microsoft sometimes seems to make more effort documenting what is in preview than what is fully released.
  • Posts on a topic which are out of date, but for which it is hard to find something current
  • Circular links – click here for more information – you get another article which links back to the first one, perhaps with an intermediate step
  • Poor quality responses to questions on official Microsoft forums

On the positive side, the reference documentation is not too bad. StackOverflow is a great resource and seems to attract higher quality responses (even sometimes from Microsoft staff) than the company’s own forums.

Here then are some of the improvements I would like to see:

1. A sharper distinction between what is in preview and what is production-ready. For any given problem, it would be great to find a clear statement of how you should address it for production now, with fully released and supported libraries, and another statement showing how you will be able to address it with the latest and greatest (but perhaps less stable) technology which is in preview.

2. For key teams in Microsoft to maintain sites which offer clearly delineated production and preview sections and which are kept rigorously up to date.

3. More short samples and fewer “this demonstrates everything” samples. Large samples are more difficult to install and study and have more complex dependencies.

4. Posts and their accompanying code inevitably go out of date and I do not favour removing them, which causes more difficulties than it solves (broken links). However it seems to me reasonable for teams to maintain a number of key samples for their product area and keep them up to date.

What am I missing – or am I complaining too much about what is normal in software development? As ever, I welcome your views.