Category Archives: visual studio

Book review: Professional ASP.NET MVC 5. Is this the way to learn ASP.NET MVC?

This book caught my eye because while I like ASP.NET MVC, Microsoft’s modern web application framework, it seems to be badly documented. Even the word “badly” is not quite right; there is lots of documentation, some of high quality, but finding your way around it is challenging, thanks to the many different pieces involved. When I completed an ASP.NET MVC project recently, I found it frustrating thanks to over-reliance on sample projects (hey, here is an application we did that works, see if you can figure out how we did it), many out-of-date articles relating to old versions, and the opposite: posts and samples which include preview software that does not seem wise to use in production.


In my experience ASP.NET MVC is both cleaner and faster than ASP.NET Web Forms, the older .NET web framework, but there is more to learn before you can go ahead and write an application.

Professional ASP.NET MVC 5 gives you nearly 600 pages on the subject. It is aimed at a broad readership: the introduction states:

Professional ASP.NET MVC 5 is designed to teach ASP.NET MVC, from a beginner level through advanced topics.

Perhaps that is too broad, though the idea is that the first six chapters (about 150 pages) cover the basics, and that the later chapters are more advanced, so if you are not a beginner you can start at chapter 7.

The main author is Jon Galloway who is a Technical Evangelist at Microsoft. The other authors are Brad Wilson, formerly at Microsoft and now at CenturyLink Cloud; K Scott Allen at OdeToCode, David Matson who is on the ASP.NET MVC team at Microsoft, and Phil Haack formerly at Microsoft and now at GitHub. I get the impression that Haack wrote several chapters in an earlier edition of the book, but did not work directly on this one; Galloway brought his chapters up to date.

Be in no doubt: there are plenty of well-informed ASP.NET MVC people on this team.

The earlier part of the book uses a sample Music Store application, a version of which is publicly available here. You can also download a tutorial, based on the sample, written by Galloway. The public tutorial however dates from 2011 and is based on ASP.NET MVC 3 and Visual Studio 2010. The book uses Visual Studio 2013.

Chapters 1 to 6, the beginner section, do a decent job of talking you through how to build a first application. There are chapters on Controllers, Views, Models, Forms and HTML Helpers, and finally Data Annotations and Validation. It’s a good basic introduction, but if you are like me you will come out with many questions, such as: what is an ActionResult (the return type of most controller actions)? You have to wait until chapter 16 for a full description.
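In the meantime, here is a minimal sketch of my own (not code from the book): ActionResult is the abstract base type that controller actions return, and the MVC pipeline executes whatever concrete result you hand back.

using System.Web.Mvc;

public class ProductController : Controller
{
    public ActionResult Details(int id)
    {
        var product = ProductRepository.Find(id); // ProductRepository is hypothetical
        if (product == null)
            return HttpNotFound(); // returns an HttpNotFoundResult
        return View(product);      // returns a ViewResult, rendering Details.cshtml
    }
}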

Chapter 7 is on Membership, Authorization and Security. That is too much for one chapter. It is mostly on security, and inadequate on membership. One of my disappointments with this book is that Azure Active Directory hardly gets a mention; yet to my mind integration of web applications with Office 365 (which uses Azure AD) is a huge feature for Microsoft.

On security though, this is a useful chapter, with handy coverage of Cross-Site Request Forgery and other common vulnerabilities.
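To give a flavour of the defence involved (my sketch, not the book’s example): ASP.NET MVC pairs a hidden token, emitted in the Razor form by @Html.AntiForgeryToken(), with a validation attribute on the action that receives the post.

[HttpPost]
[ValidateAntiForgeryToken] // rejects posts whose anti-forgery token is missing or wrong
public ActionResult Transfer(TransferViewModel model) // TransferViewModel is hypothetical
{
    // this runs only if the token in the posted form matches the cookie
    return RedirectToAction("Index");
}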

Next comes a chapter on AJAX with a little bit on jQuery, client-side validation, and Ajax ActionLinks. Here is the dilemma though. Does it make sense to cover jQuery in detail, when this very popular open source library is widely documented elsewhere? On the other hand, does it make sense not to cover jQuery in detail, when it is usually a vital part of your ASP.NET MVC application?

I would add that this title is poor on design aspects of a web application. That said, I was not expecting much on the design side; but what would help would be coverage of how to work with designers: what is safe to hand over to designers, and how does a typical designer/developer workflow play out with ASP.NET MVC?

I would also like to see more coverage of how to work with Bootstrap, the CSS framework which is integrated with ASP.NET MVC 5 in Visual Studio. I found it a challenge, for example, to discover the best way to change the default fonts and colours used, which is rather basic.

Chapter 9 is on routing: dry but essential background. Chapter 10 is on NuGet, the Visual Studio package manager, and is a good chapter given how important NuGet now is for most Visual Studio work.

Incidentally, many of the samples for the book can be installed via NuGet. It’s not completely obvious how to do this. I found the best way is to go to http://www.nuget.org and search for Wrox.ProMvc5 – here is the link to the search results. This lists all the packages available; note the package names. Then open the NuGet Package Manager Console and type:

install-package [packagename]

to get the sample.

Chapter 11 is a too-brief chapter on the Web API. I would like to see more on this, maybe even walking through a complete application with clients for, say, Windows Phone and a web application – though the following chapter does present a client example using AngularJS.

Chapter 13 is a somewhat theoretical look at dependency injection and inversion of control; handy as Microsoft developers talk a lot about this.

Next comes a very brief introduction to unit testing, intended I think only as a starting point.

For me, the next two chapters are the most valuable. Chapter 15 concerns extending MVC: you learn about extending models with value providers and model binders; validating models; writing HTML helpers and Razor (the view engine in ASP.NET MVC) helpers; authentication filters and authorization filters. Chapter 16, on advanced topics, looks in more detail at Razor, routing, templates, ActionResult and a few other things.

Finally, we get a look at how the NuGet.org application was put together, and an appendix covering some miscellaneous details like what is new in ASP.NET MVC 5.1.

Conclusions

I find this one hard to summarise. There is too much missing to give this an unreserved recommendation. I would like more on topics including ASP.NET Identity, Azure AD integration, Entity Framework, Bootstrap, and more. Trying to cover every developer from beginner to advanced is too much; removing some of the introductory material would have left more room for the more interesting sections. The book is also rather weighted towards theory rather than hands-on coding. At some points it felt more like an explanation from the ASP.NET MVC team on “why we did it this way”, than a developer tutorial.

That said, having those insights from the team is valuable in itself. As someone who has only recently engaged with ASP.NET MVC in a real application, I did find the book useful and will come back to some of those explanations in future.

Looking at what else is available, it seems to me that there is a shortage of books on this subject and that a “what you need to know” title aimed at professional developers would be widely welcomed. It would pay Microsoft to sponsor it, since my sense is that some developers stick with ASP.NET Web Forms not because it is better, but because it is more approachable.


Xamarin announces large round of funding, plans international expansion

It is a case of “right time, right place” for Xamarin, as it scoops up Windows developers who need either to transition to iOS and Android, or to add mobile support to existing applications. You can also port applications to the Mac with its cross-platform development framework based on C#; no bad thing as Mac sales continue to boom.


Xamarin also fits with Microsoft’s new strategy, as I understand it, which is to provide strong support for iOS and Android for applications such as Microsoft Office, and services such as those hosted on Microsoft Azure.

Now the company has announced an additional $54 million of funding, which CEO Nat Friedman tells me is “the largest round of financing achieved by any mobile platform company ever”.

The financing comes from “new and existing investors, including Lead Edge Capital, Insight Venture Partners, Charles River Ventures, Ignition Partners, and Floodgate.”

What will the money be spent on? “Two things,” says Friedman. “We’re planning to expand our sales and marketing into Europe. We’re opening a sales office in London in the Fall. We did a roadshow with Microsoft in Europe and it was extremely successful. Second, we’re going to invest in improving the quality of our platforms.”

Friedman notes that mobile should not be considered a development niche. “Our view is that in the future all software will be mobile software in some way or another, when you build an application it will have to have some kind of mobile surface area.”

A few other points to note. One is that Xamarin Forms, recently introduced, has been a big hit with developers. “The Xamarin Forms forum has been our most popular forum,” says Friedman. “We’ve been really surprised.”

The company used to promote the idea of avoiding cross-platform code for the user interface, but then introduced Xamarin Forms as a cross-platform GUI framework, arguing that because it uses only native controls, it avoids the main drawbacks of the idea.

Some of the funding then will go into improving Xamarin Forms and tools to work with the framework.

Another key area is Visual Studio integration. The acquisition of the Visual Studio integration team from Clarius Consulting in May 2014 is significant here, since Clarius had strong expertise in this field.

Might Microsoft try to acquire Xamarin? Interesting question, and one which Friedman is not in a position to discuss. I am not a financial expert but would guess that Xamarin’s expansion strengthens its ability to remain independent, though investors may be hoping to reap the rewards of an acquisition; who knows?

Bing Developer Assistant adds code samples to Visual Studio IntelliSense, with mixed results

Microsoft has updated its Bing Developer Assistant Beta, a Visual Studio 2013 add-in which hooks into IntelliSense so that you get code samples as well as brief documentation. For example, in an Entity Framework project, if you select dbContext.SaveChanges, you get a code sample which uses that method.
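Typical usage looks something like this (my own sketch, with a hypothetical StoreContext; not necessarily the sample the assistant shows):

using (var db = new StoreContext()) // StoreContext is a hypothetical DbContext
{
    var album = db.Albums.Find(id);
    album.Price = 9.99m;
    db.SaveChanges(); // persists all pending tracked changes in a single call
}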


There is no guarantee, of course, that the sample is relevant to what you are trying to accomplish. You can hit Search More though and get a selection of code snippets and sample projects, drawn from sites including MSDN, StackOverflow and CodeProject.


Developer beware though. Looking at the code samples, the top one is from a 2011 blog post relating to CTP (Community Technology Preview) 5 of Entity Framework 4.1. If you hit the link, you find a warning.


“The information in this post is out of date”, it says, followed by a link to what is in fairness a rather helpful article on using SaveChanges.

Hmm, maybe Bing Developer Assistant should try filtering the search to eliminate samples based on preview or obsolete APIs? A snag here though is that on occasion the blogs and samples on preview frameworks are all you can get, because by the time the thing is actually released, the developer evangelists have moved on to blog about the next up-and-coming cool thing.

If you choose an object member for which Bing finds no code sample, you are prompted to add one of your own.

This takes you to the Developer Network sample upload page.

This form is quite a lot of work, but lets you add a code snippet or sample project together with title and comments explaining what it does.

The Bing Developer Assistant also searches for sample projects.

Again it is a case of picking and choosing what is really relevant; but developers are experts and expected to use common sense.

A drawback with Bing Developer Assistant is that only one add-on can extend IntelliSense, so if you use ReSharper or another tool which also does this, you have to choose which one to allow.

In the end, this is all about integrating web search into the IDE. Is that a good idea, or is it better simply to have your web browser open, perhaps on another display, and type “dbContext SaveChanges EF6” or some such into your favourite search engine?

There is some merit in a search engine that automatically filters to show only code samples – hey, that is what Google’s popular Code Search did, until it was mysteriously shut down – though I’m not sure how much I like the idea of possibly obsolete and deprecated samples showing up in Visual Studio as you are coding.

Still, the truth is that web search is critical to software development today and it is good to see that recognised.

Developing an app on Microsoft Azure: a few quick reflections

I have recently completed (if applications are ever completed) an application which runs on Microsoft’s Azure platform. I used lots of Microsoft technology:

  • Visual Studio 2013
  • Visual Studio Online with Team Foundation version control
  • ASP.NET MVC 4.0
  • Entity Framework 4.0
  • Azure SQL
  • Azure Active Directory
  • Azure Web Sites
  • Azure Blob Storage
  • Microsoft .NET 4.5 with C#

The good news: the app works well and performance is good. The application handles the upload and download of large files by authorised users, and replaces a previous solution using a public file sending service. We were pleased to find that the new application is a little faster for upload and download, as well as offering better control over user access and a more professional appearance.

There were some complications though. The requirement was for internal users to log in with their Office 365 (Azure Active Directory) credentials, but for external users (the company’s customers) to log in with credentials stored in a SQL Server database – in other words, hybrid authentication. It turns out you can do this reasonably seamlessly by implementing IPrincipal in a custom class to support the database login. This is largely uncharted territory though in terms of official documentation and took some effort.
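In outline, the custom principal looks something like this (a simplified sketch with hypothetical names; a real application would also persist the login, for example in a forms authentication cookie):

using System.Collections.Generic;
using System.Security.Principal;

public class DatabasePrincipal : IPrincipal
{
    private readonly HashSet<string> roles;

    public DatabasePrincipal(string userName, IEnumerable<string> userRoles)
    {
        Identity = new GenericIdentity(userName, "DatabaseLogin");
        roles = new HashSet<string>(userRoles);
    }

    public IIdentity Identity { get; private set; }

    public bool IsInRole(string role)
    {
        return roles.Contains(role);
    }
}

// inside the login action, after validating credentials against the SQL database:
HttpContext.User = new DatabasePrincipal(user.Email, user.Roles); // user object is hypothetical
System.Threading.Thread.CurrentPrincipal = HttpContext.User;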

Second, Microsoft’s Azure Active Directory support for custom applications is half-baked. You can create an application that supports Azure AD login in a few moments with Visual Studio, but it does not give you access to metadata such as the security groups to which the user belongs. I have posted about this in more detail here. There is an API of course, but it is currently a moving target: be prepared for some hassle if you try this.

Third, while Azure Blob Storage itself seems to work well, most of the resources for developers seem to have little idea of what a large file is. Since a primary use case for cloud storage is to cover scenarios where email attachments are not good enough, it seems to me that handling large files (by which I mean multiple GB) should be considered normal rather than exceptional. By way of mitigation, the API itself has been written with large files in mind, so it all works fine once you figure it out. More on this here.

What about Visual Studio? The experience has been good overall. Once you have configured the project correctly, you can update the site on Azure simply by hitting Publish and clicking Next a few times. There is some awkwardness over configuration for local debugging versus deployment. You probably want to connect to a local SQL Server and the Azure storage emulator when debugging, and the Azure hosted versions after publishing. Visual Studio has a Web.Debug.Config and a Web.Release.Config which let you apply a transformation to your main Web.Config when publishing – though note that these have no effect when you simply run your project in Release mode. The correct usage is to set Web.Config to what you want for debugging, and apply the deployment configuration in Web.Release.Config; then it all works.
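For example, a Web.Release.Config along these lines (server and database names are placeholders) swaps the local connection string for the Azure one at publish time:

<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- replaces the local SQL Server connection defined in Web.config -->
    <add name="DefaultConnection"
         connectionString="Server=tcp:yourserver.database.windows.net,1433;Database=yourdb;User ID=youruser@yourserver;Password=placeholder;Encrypt=True"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>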

The piece that caused me most grief was a setting for <wsFederation>. When a user logs in with Azure AD, they get redirected to a Microsoft site to log in, and then back to the application. Applications have to be registered in Azure AD for this to work. There is some uncertainty though about whether the reply attribute, which specifies the redirection back to the app, needs to be set explicitly or not. In practice I found that it does need to be explicit, otherwise you get redirected to the deployed site even when debugging locally – not good.
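For reference, the setting lives under system.identityModel.services in Web.config and looks something like this (all URLs are placeholders); a Web.Release.Config transform can then swap reply for the deployed address:

<system.identityModel.services>
  <federationConfiguration>
    <wsFederation passiveRedirectEnabled="true" requireHttps="true"
                  issuer="https://login.windows.net/yourtenant.onmicrosoft.com/wsfed"
                  realm="https://yourapp.example.com/"
                  reply="https://localhost:44300/" />
  </federationConfiguration>
</system.identityModel.services>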

I have mixed feelings about Team Foundation version control. It works, and I like having a web-based repository for my code. On the other hand, it is slow, and Visual Studio sulks from time to time and requires you to re-enter credentials (Microsoft seems to love making you do that). If you have a less than stellar internet connection (or even a good one), Visual Studio freezes from time to time since the source control stuff is not good at working in the background. It usually unfreezes eventually.

As an experiment, I set the project to require a successful build before check-in. The idea is that you cannot check in a broken build. However, this build has to take place on the server, not locally. So you try to check in, Visual Studio says a build is required, and prompts you to initiate it. You do so, and a build is queued. Some time later (5-10 minutes) the build completes and a dialog appears behind the IDE saying that you need to reconcile changes – even if there are none. Confusing.

What about Entity Framework? I have mixed feelings here too, and have posted separately on the subject. I used code-first: just create your classes and add them to your DbContext and all the data access code is handled for you, kind-of. It makes sense to use EF in an ASP.NET MVC project since the framework expects it, though it is not compulsory. I do miss the control you get from writing your own SQL though; and found myself using the SqlQuery method on occasion to recover some of that control.
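By way of illustration (a sketch rather than my production code), code-first amounts to little more than this, with SqlQuery available when you want the SQL back:

// requires System.Data.Entity and System.Linq
public class Album
{
    public int AlbumId { get; set; }
    public string Title { get; set; }
    public decimal Price { get; set; }
}

public class StoreContext : DbContext // hypothetical context name
{
    public DbSet<Album> Albums { get; set; }
}

// EF generates the schema and the data access, but raw SQL is still there if you want it:
var bargains = db.Database.SqlQuery<Album>(
    "SELECT * FROM Albums WHERE Price < @p0", 5.0m).ToList();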

Finally, a few notes on ASP.NET MVC. I mostly like it; the separation between Razor views (essentially HTML templates into which you pour your data at runtime) and the code which implements your business logic and data access is excellent. The code can get convoluted though. Have a look at this useful piece on the ASP.NET MVC WebGrid and this remark:

grid.Column("Name",
  format: @<text>@Html.ActionLink((string)item.Name,
  "Details", "Product", new { id = item.ProductId }, null)</text>),

The format parameter is actually a Func, but the Razor view engine hides that from us. But you’re free to pass a Func—for example, you could use a lambda expression.

The code works fine but is it natural and intuitive? Why, for example, do you have to cast the first argument to ActionLink to a string for it to work (I can confirm that it is necessary), and would you have worked this out without help?

I also hit a problem restyling the pages generated by Visual Studio, which use the Twitter Bootstrap framework. The problem is that bootstrap.css is a generated file and it does not make sense to edit it directly. Rather, you should edit some variables and use them as input to regenerate it. I came up with a solution which I posted on stackoverflow but no comments yet – perhaps this post will stimulate some, as I am not sure if I found the best approach.

My sense is that while ASP.NET MVC is largely a thing of beauty, it has left behind more casual developers who want a quick and easy way to write business applications. Put another way, the framework is somewhat challenging for newcomers, and that in turn affects the breadth of its adoption.

Developing on Azure and using Azure AD makes perfect sense for businesses which are using the Microsoft platform, especially if they use Office 365, and the level of integration on offer, together with the convenience of cloud hosting and anywhere access, is outstanding. There remain some issues with the maturity of the frameworks, ever-changing libraries, and poor or confusing documentation.

Since this area is strategic for Microsoft, I suggest that it would benefit the company to work hard on pulling it all together more effectively.

A note on Azure storage and downloading large files

I have written a simple ASP.NET MVC application for upload and download of files to/from Azure storage.

Getting large file upload to work was the first exercise, described here. That is working well; but what about download?

If your files in Azure storage are public, you can simply serve a URL to the file. If a file is not public though, you have a couple of choices:

1. Download the file under application control, by writing to Response.OutputStream or using a FileResult action.

2. Issue a Shared Access Signature (SAS) to the client which enables it to retrieve the file directly from Azure storage. The SAS is sent as a URL argument which tells Azure storage that the request is authorised. The browser downloads the file directly, so it makes no difference to your web application if the file is large.

Note that if you use the first option, it will not work with large files if you simply call DownloadToStream or similar:

container.GetBlockBlobReference(FileName).DownloadToStream(Response.OutputStream);

Why not? Well, the way this code works is that it downloads the large file to the web server, then sends it to the browser. What if your large file is 5GB? The browser will wait a long time for the first byte to be served (giving the user an unresponsive page); but before that happens, the web application will probably throw an exception because it does not like downloading such a large file.

This means the SAS option is a good one, though note that you have to specify an expiry time which could cause problems for users on a slow connection.
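Issuing the SAS is straightforward with the storage client library. A minimal sketch (permissions and expiry chosen for illustration):

CloudBlockBlob blob = container.GetBlockBlobReference(fileName);
string sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(2) // be generous for slow connections
});
string downloadUrl = blob.Uri.AbsoluteUri + sas; // hand this URL to the browser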

Another option is to serve the file in chunks. Use CloudBlockBlob.DownloadRangeToStream to write to Response.OutputStream in a loop until the download is complete. Call Response.Flush() after each chunk to send the chunk to the browser immediately.
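A rough sketch of the loop, assuming the current Microsoft.WindowsAzure.Storage client:

CloudBlockBlob blob = container.GetBlockBlobReference(fileName);
blob.FetchAttributes(); // populates blob.Properties.Length

const int chunkSize = 4 * 1024 * 1024; // 4MB chunks; an arbitrary choice
long offset = 0;
long remaining = blob.Properties.Length;
Response.BufferOutput = false;

while (remaining > 0)
{
    int length = (int)Math.Min(chunkSize, remaining);
    blob.DownloadRangeToStream(Response.OutputStream, offset, length);
    Response.Flush(); // send the chunk to the browser immediately
    offset += length;
    remaining -= length;
}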

This gives the user a nice responsive download experience complete with a cancel option as provided by the browser, and does not crash the application on the server. It seems to me a reasonable approach if the web application is also hosted on Azure and therefore has a fast connection to Azure storage.

What about resuming a failed download? The SAS approach should work, as Azure supports it. You could also support this in your app with some additional work, since resuming means reading the Range header in a GET request. I have not tried doing this but you might find some clues here.

Developing an ASP.NET MVC app with Azure Active Directory: an ordeal

Regular readers will know that I am working on a simple (I thought) ASP.NET MVC application which is hosted on Azure and uses Azure Blob Storage.

So far so good; but since this business uses Office 365 it seemed to me logical to have users log in using Azure Active Directory (AD). Visual Studio 2013, with the latest update, has a nice wizard to set this up: just choose Organizational Accounts in the authentication options when starting your new project.

This worked fairly well, and users can log in successfully using Azure AD and their normal Office 365 credentials.

I love this level of integration and it seems to me key and strategic for the Microsoft platform. If an employee leaves, or changes role, just update Active Directory and all application access comes into line automatically, whether on premise or in the cloud.

The next stage though was to define some user types; to keep things simple, let us say we have an AppAdmin role for users with full access to the application, and an AppUser role for users with limited access. Other users in the organisation do not need access at all and should not be able to log in.

The obvious way to do this is with AD groups, but I was surprised to discover that there is no easy way to find out to which groups an AD user belongs. The Azure AD integration which the wizard generates is only half done. Users can log in, and you can programmatically retrieve basic information including the first name, last name, User Principal Name and object ID, but nothing further.

Fair enough, I thought, there will be some libraries out there that fill the gap; and this is how the nightmare begins. The problem is that this is the cutting edge of .NET cloud development and is an area of rapid change. Yes there are samples out there, but each one (including the official ones on MSDN) seems to be written at a different time, with a different approach, with different .NET assembly dependencies, and varying levels of alpha/beta/experimental status.

The one common thread is that to get the AD group information you need to use the Graph API, a REST API for querying and even writing to Azure Active Directory. In January 2013, Microsoft identity expert Vittorio Bertocci (Principal Program Manager in the Windows Azure Active Directory team at Microsoft) wrote a helpful post about how to restore IsInRole() and [Authorize] in ASP.NET apps using Azure AD – exactly what I wanted to do. He describes essentially a manual approach, though he does make use of a library called Azure Authentication Library (AAL) which you can find on Nuget (the package manager for .NET libraries used by Visual Studio) described as a Beta.

That would probably work, but AAL is last year’s thing and you are meant to use ADAL (Active Directory Authentication Library) instead. ADAL is available in various versions ranging from 1.0.3 which is a finished release, to 2.6.2 which is an alpha release. Of course Bertocci has not updated his post so you can use the obsolete AAL beta if you dare, or use ADAL if you can figure out how to amend the code and which version is the best/safest to employ. Or you can write your own wrapper for the Graph API and bypass all the Nuget packages.

I searched for a better sample, but it gets worse. If you browse around MSDN you will probably come across this article along with this sample which is a Task Tracker application using Azure AD, though note the warnings:

NOTE: This sample is outdated. Its technology, methods, and/or user interface instructions have been replaced by newer features. To see an updated sample that builds a similar application, see WebApp-GraphAPI-DotNet.

Despite the warnings, the older sample is widely referenced in Microsoft posts like this one by Rick Anderson.

OK then, let’s look at the shiny new sample, even though it is less well documented. It is called WebApp-GraphAPI-DotNet and includes code to get the user profile, roles, contacts and groups from Azure AD using the latest Graph API client: Microsoft.Azure.ActiveDirectory.GraphClient. This replaces an older effort called the GraphHelper, which you will find widely used elsewhere.

If you dig into this new sample though, you will find a ton of dependencies on pre-release assemblies. You are not just dealing with the Graph API, but also with OWIN (Open Web Interface for .NET), which seems to be Microsoft’s current direction for communication between web applications.

After messing around with NuGet packages and trying to get WebApp-GraphAPI-DotNet working, I realised that I was not happy with all this preview code, which is likely to break as further updates come along. Further, it does far more than I want. All I need is actually contained in Bertocci’s January 2013 post about getting back IsInRole.

I ended up patching together some code using the older GraphHelper (as found in the obsolete Task Tracker application) and it is working. I can now use IsInRole based on AD groups.
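With the group information mapped to roles, the familiar mechanisms work as expected; for example (AppAdmin and AppUser being the groups described above):

[Authorize(Roles = "AppAdmin")] // only full-access users reach this controller
public class AdminController : Controller
{
    public ActionResult Index()
    {
        return View();
    }
}

// or check imperatively inside an action:
if (!User.IsInRole("AppUser") && !User.IsInRole("AppAdmin"))
    return new HttpUnauthorizedResult();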

What a mess. It is a simple requirement and it should not be necessary to plough through all these complicated and conflicting documents and samples to achieve it.

Notes from the field: putting Azure Blob storage into practice

I rashly agreed to create a small web application that uploads files into Azure storage. Azure Blob storage is Microsoft’s equivalent to Amazon’s S3 (Simple Storage Service), a cloud service for storing files of up to 200GB.

File upload performance can be an issue, though if you want to test how fast your application can go, try it from an Azure VM: performance is fantastic, as you would expect from an Azure to Azure connection in the same region.

I am using ASP.NET MVC and thought a sample like this official one, Uploading large files using ASP.NET Web API and Azure Blob Storage, would be all I needed. It is a start, but the method used only works for small files. What it does is:

1. Receive a file via HTTP Post.

2. Once the file has been received by the web server, call CloudBlob.UploadFile to upload it to Azure blob storage.

What’s the problem? Leaving aside the fact that CloudBlob is deprecated (you are meant to use CloudBlockBlob), there are obvious problems with files that are more than a few MB in size. The expectation today is that users see some sort of progress bar when uploading, and a well-written application will be resilient to brief connection breaks. Many users have asymmetric internet connections (such as ADSL) with slow upload speeds; large files will take a long time and something can easily go wrong. The sample is not resilient at all.

Another issue is that web servers do not appreciate receiving huge files in one operation. Imagine you are uploading the ISO for a DVD, perhaps a 3GB file. The simple approach of posting the file and having the web server upload it to Azure blob storage introduces obvious strain and probably will not work, even if you do mess around with maxRequestLength and maxAllowedContentLength in ASP.NET and IIS. I would not mind so much if the sample were not called “Uploading large files”; the author perhaps has a different idea of what is a large file.

Worth noting too that one developer hit a bug with blobs greater than 5.5MB when uploaded over HTTPS, which most real-world businesses will require.

What then are you meant to do? The correct approach, as far as I can tell, is to send your large files in small chunks called blocks. These are uploaded to Azure using CloudBlockBlob.PutBlock. You identify each block with an ID string, and when all the blocks are uploaded, call CloudBlockBlob.PutBlockList with a list of IDs in the correct order.
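In outline the technique looks like this (a sketch rather than tested code; block IDs must be Base64-encoded and all the same length):

CloudBlockBlob blob = container.GetBlockBlobReference(fileName);
var blockIds = new List<string>();
var buffer = new byte[4 * 1024 * 1024]; // 4MB blocks
int bytesRead, index = 0;

using (var fs = File.OpenRead(path))
{
    while ((bytesRead = fs.Read(buffer, 0, buffer.Length)) > 0)
    {
        // 4-byte indexes give equal-length Base64 IDs for every block
        string blockId = Convert.ToBase64String(BitConverter.GetBytes(index++));
        blob.PutBlock(blockId, new MemoryStream(buffer, 0, bytesRead), null);
        blockIds.Add(blockId);
    }
}
blob.PutBlockList(blockIds); // commits the blocks in the order given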

This is the approach taken by Suprotim Agarwal in his example of uploading big files, which works and is a great deal better than the Microsoft sample. It even has a progress bar and some retry logic. I tried this approach, with a few tweaks. Using a 35MB file, I got about 80 KB/s with my ADSL broadband, a bit worse than the performance I usually get with FTP.

Can performance be improved? I wondered what benefit you get from uploading blocks in parallel. Azure Storage does not mind in what order the blocks are uploaded. I adapted Agarwal’s sample to use multiple AJAX calls, each uploading a block, experimenting with up to 8 simultaneous uploads from the browser.

The initial results were disappointing. Eventually I figured out that I was not actually achieving parallel uploads at all. The reason is that the application uses ASP.NET session state, and IIS will block multiple connections in the same session unless you mark your ASP.NET MVC controller class with the [SessionState(SessionStateBehavior.ReadOnly)] attribute, as shown below.
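The fix is a single attribute on the controller:

using System.Web.Mvc;
using System.Web.SessionState;

[SessionState(SessionStateBehavior.ReadOnly)] // lets IIS serve concurrent requests in the same session
public class UploadController : Controller
{
    // AJAX actions that upload individual blocks go here
}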

I fixed that, and now I do get multiple parallel uploads. Performance improved to around 105 KB/s, worthwhile though not dramatic.

What about using a Windows desktop application to upload large files? I was surprised to find little improvement. But can parallel uploading help here too? The answer is that it should happen anyway, handled by the .NET client library, according to this document:

If you are writing a block blob that is no more than 64 MB in size, you can upload it in its entirety with a single write operation. Storage clients default to a 32 MB maximum single block upload, settable using the SingleBlobUploadThresholdInBytes property. When a block blob upload is larger than the value in this property, storage clients break the file into blocks. You can set the number of threads used to upload the blocks in parallel using the ParallelOperationThreadCount property.
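Those two properties are set via BlobRequestOptions; roughly like this (client library versions differ, so treat it as a sketch):

var options = new BlobRequestOptions
{
    SingleBlobUploadThresholdInBytes = 4 * 1024 * 1024, // above this size, split into blocks
    ParallelOperationThreadCount = 4                    // blocks uploaded in parallel
};
blob.UploadFromStream(fileStream, accessCondition: null, options: options);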

It sounds as if there is little advantage in writing your own chunking code, except that if you just call the UploadFromFile or UploadFromStream methods of CloudBlockBlob, you do not get any progress notification event (though you can get a retry notification from an OperationContext object passed to the method). Therefore I looked around for a sample using parallel uploads, and found this one from Microsoft MVP Tyler Doerksen, using C#’s Parallel.For.

Be warned: it does not work! Doerksen’s approach is to upload the entire file into memory (not great, but not as bad as on a web server), send it in chunks using CloudBlockBlob.PutBlock, adding the block ID to a collection at the same time, and then to call CloudBlockBlob.PutBlockList. The reason it does not work is that the order of the loops in Parallel.For is indeterminate, so the block IDs are unlikely to be in the right order.
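The fix is to store each block ID at its index rather than appending it, so that completion order no longer matters. In outline (GetChunk being a hypothetical helper that returns the bytes of block x):

int blockCount = (int)Math.Ceiling((double)fileLength / blockLength);
var blockIds = new string[blockCount]; // one slot per block, filled by index

Parallel.For(0, blockCount, x =>
{
    byte[] chunk = GetChunk(x); // hypothetical: block x, from memory or the file
    string blockId = Convert.ToBase64String(BitConverter.GetBytes(x));
    blob.PutBlock(blockId, new MemoryStream(chunk), null);
    blockIds[x] = blockId; // correct position regardless of which thread finishes first
});

blob.PutBlockList(blockIds); // committed in file order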

I fixed this, it tested OK, and then I decided to further improve it by reading each chunk from the file within the loop, rather than loading the entire file into memory. I then puzzled over why my code was broken. The files uploaded, but they were corrupt. I worked it out. In the following code, fs is a FileStream object:

fs.Position = x * blockLength;
bytesread = fs.Read(chunk, 0, currentLength);

Spot the problem? Since fs is a variable declared outside the loop, other threads were setting its position during the read operation, with random results. I fixed it like this:

lock (fs)
{
    fs.Position = x * blockLength;
    bytesread = fs.Read(chunk, 0, currentLength);
}

and the file corruption disappeared.

I am not sure why, but the manually coded parallel uploads seem to improve performance slightly but not dramatically, to around 100-105 KB/s, almost exactly what my ASP.NET MVC application achieves over my broadband connection.


There is another approach worth mentioning. It is possible to bypass the web server and upload directly from the browser to Azure storage. To do this, you need to allow cross-origin resource sharing (CORS) as explained here. You also need to issue a Shared Access Signature, a temporary key that allows read-write access to Azure storage. A guy called Blair Chen seems to have this all figured out, as you can see from his Azure speed test and jazure JavaScript library, which makes it easy to upload a blob from the browser.

I was contemplating going that route, but it seems that performance is no better (judging by the Test Upload Big Files section of Chen’s speed test), so I should probably be content with the parallel JavaScript upload solution, which avoids fiddling with CORS.

Overall, has my experience with the Blob storage API been good? I have not found any issues with the service itself so far, but the documentation and samples could be better. This page should be the jumping-off point for all you need to know for a basic application like mine, but I did not find it easy to find good samples or documentation for what I thought would be a common scenario: uploading large files with ASP.NET MVC.

Update: since writing this post I have come across this post by Rob Gillen which addresses the performance issue in detail (and links to working Parallel.For code); however I suspect that since the post is four years old the conclusions are no longer valid, because of improvements to the Azure storage client library.

Having it both ways: can Microsoft equally back Windows Phone and “Any device”?

I attended an event in London which was a kind-of UK launch for Windows Phone 8.1. The first Lumia device running 8.1, the Lumia 630, is now on sale, though this was not the main focus. It was more about asking businesses to take another look at Windows Phone (and Windows tablets), following improvements Microsoft has made. The company is particularly pleased with a new white paper from MobileIron, a well-known company in mobile device management, praising the new security and manageability features:

Windows Phone 8 did not meet the stringent policies some enterprises required for protecting corporate data and resources. The release of Windows Phone 8.1 changes the game. Microsoft is delivering a rich new feature-set for business users, and providing IT departments with the compliance and security they require. These new security and management features, called the Enterprise Feature Pack, are included as a core component of Windows Phone 8.1. When combined with an enterprise mobility management (EMM) platform, these capabilities make it much easier for enterprises to adopt the Windows Phone platform.

Fair enough, though from what I can tell Windows Phone is still struggling to get the momentum it needs. Too many companies perceive that if they support iOS and Android then that is it, job done, as evidenced by this advertisement I saw recently. This in turn dampens sales. It is an unfortunate position to be in, particularly given the good work Microsoft (and Nokia) has done on the phone OS itself. I prefer the Windows Phone user interface to that in Android, but still need an Android device in order to try out new apps.

This could change if Microsoft can continue gradually bumping up its market share, but it is tough. The wider company is now side-stepping the problem by focusing on its strengths in Office, Active Directory and Office 365, and offering first-class support for these on iOS and Android, as evidenced by the excellent Office for iPad launched earlier this year.

There is a dilemma here though. Some Windows Phone users choose the phone because they feel it will work best with Microsoft’s business platform. Could the “any device” policy end up undermining Microsoft’s efforts to promote Windows Phone?

I put this to Chris Weber, Microsoft’s Corporate Vice President of Mobile Device Sales, who has come to the company from Nokia (before which he was at Microsoft, so a true Windows veteran).


From a business perspective, providing cloud services, management, security, it is a multi-platform world. It is a great business decision for Microsoft to be multi-platform. Customers demand it as well. That doesn’t mean we don’t want to create the most compelling platform and set of devices that bring Windows to life. I think the cross-platform thing is a great story … but the benefit of us [Nokia and Microsoft] coming together is now we have hardware, software and services that can be integrated in a totally different way, and we’re one of the few players that have all those components. The level of integration is much greater on the Windows platform. For example, Office is built in, you don’t have to go to a store and download it. The Lync client is built into the calendar. The email client, being able to have rights protection. The mail client itself is the best of any of them. The ability to access a SharePoint site across the firewall without a VPN connection, unique to Windows Phone.

Then we also have to win the end user. We have to win IT and those requirements, but you also have to get end users excited. Things that you see in 8.1, like Cortana, there’s a huge benefit there. And we’re bringing that across every price point.

Fair points; yet currently the iPad has a better touch-friendly Office than Windows tablets or Windows Phone; and Windows Phone users have frustrations where the integration falls short. One remarkable thing, for example, is that there is no way to use a shared Exchange or SharePoint calendar on Windows Phone other than in the browser: no integration with the built-in calendar and no offline support.

What Weber describes, near-perfect integration between Windows mobile devices and Microsoft’s server applications, ought to be the reality; making it so should be a high priority for CEO Satya Nadella’s new Microsoft.

Weber makes the bold claim that he can convert any user to Windows Phone, but says the challenge is to make this happen at retail level, when the customer wanders in looking for a smartphone:

If you give me fifteen minutes, I think I can convince any iPhone or Android user to move to Windows Phone. We have to do this not in fifteen minutes but in probably a minute and a half, at retail, with people who are selling multiple devices and are used to selling the competitor platform more than us.

Focusing on enterprise integration is in my view long overdue, and a few large enterprise adoptions would give Windows Phone a significant boost. At retail though, my guess is that Microsoft’s main hope is what Nokia did so well: delivering a good smartphone experience in budget devices – the “every price point” to which Weber refers.

Visual Studio “14” announced, preview available with “Roslyn” open source compiler

Microsoft’s Soma Somasegar has announced the next version of Visual Studio, currently known as Visual Studio 14, but likely to be fully released in 2015 (and, I am guessing, likely to be called Visual Studio 2015).

This is a major release. It includes a new VB and C# compiler which is itself written in managed code, codenamed Roslyn. The open source Roslyn project provides new APIs that enable more powerful IDE features. Visual Basic is getting refactoring support for the first time.

The preview also includes a major update to ASP.NET that unifies ASP.NET MVC and the ASP.NET Web API, and has a new deployment model and developer experience:

Thanks to the Roslyn compiler, if you change “.cs” files or the project.json file and want to see the change in the browser, you don’t need to build the project any more. Just refresh the browser.

There is no IIS Express, nor IIS, involved when you run from the command line. It means that you can publish your website to a USB drive, and run it by double clicking the web.cmd file!

On the C++ side, there is improved C++11 support and more features from C++14:

The Visual Studio "14" CTP includes support for user-defined literals, noexcept, alignof and alignas, and inheriting constructors from C++11, generalized lambda capture, auto function return type deduction, and generic lambdas from C++14, as well as many more new C++ features.

says Somasegar. There is also a refactored C Runtime (CRT):

msvcr140.dll no longer exists. It is replaced by a trio of DLLs: vcruntime140.dll, appcrt140.dll, and desktopcrt140.dll.

If you install the CTP (mine is downloading) use a spare machine or VM; it is an early preview that does not work side-by-side with other versions and the only uninstall may be to flatten the machine:

Installing a CTP release will place a computer in an unsupported state. For that reason, we recommend only installing CTP releases in a virtual machine, or on a computer that is available for reformatting.

Xamarin 3.0 brings iOS visual design to Visual Studio, cross-platform XAML, F#, NuGet and more

Xamarin has announced the third version of its cross-platform tools, which use C# and .NET to target multiple platforms, including iOS, Android and Mac OS X.

Xamarin 3.0 is a big release. In summary:

Xamarin Designer for iOS

Using a visual designer for iOS Storyboard projects, you can create and modify a GUI in both Visual Studio and Xamarin Studio (Xamarin’s own IDE). The designer uses the native Storyboard format, so you can open and modify existing files created in Xcode on the Mac. The technology here is amazing, since your iOS controls are rendered remotely on a Mac and transmitted to the designer on Windows. See here for a quick hands-on.

Xamarin Forms

Xamarin has created the cross-platform GUI framework that it said it did not believe in. It is based on XAML though not compatible with Microsoft’s existing XAML implementations. There is no visual designer yet.

Why has Xamarin changed its mind? It was pressure from enterprise customers, from what I heard from CEO Nat Friedman. They want to make internal mobile apps with many forms, and do not want to rewrite the GUI code for every mobile platform they support.

Friedman made the point that Xamarin Forms still render as native controls. There is no drawing code in Xamarin Forms.

“The challenge for us in building Xamarin Forms was to give people enhanced productivity without compromising the native approach. The mix and match approach, where you can mix in native code at any point, you can get a handle for the native control, we think we’ve got the right compromise. And we’re not forcing Xamarin Forms on you, this is just an option,”

he told me.

Again, there is a quick hands-on here.

F# support

F# is now officially supported in Xamarin projects. This brings functional programming to Xamarin, and will be warmly welcomed by the small but enthusiastic F# community (including, as I understand it, key .NET users in the financial world).

Portable Class Libraries

Xamarin now supports Microsoft’s Portable Class Libraries, which let you state what targets you want to support, and have Visual Studio ensure that you write compatible code. This also means that library vendors can easily support Xamarin if they choose to do so.

NuGet Packages

The NuGet package manager has transformed the business of getting hold of new libraries for use in Visual Studio. Now you can use it with Xamarin in both Visual Studio and Xamarin Studio.

Microsoft partnership

Perhaps the most interesting part of my interview with Nat Friedman was what he said about the company’s partnership with Microsoft. Apparently this is now close both from a technical perspective, and for business, with Microsoft inviting Xamarin for briefings with key customers.