Category Archives: Visual Studio

Windows 10 at Mobile World Congress 2015: a quick reflection

I attended Mobile World Congress in Barcelona last week – with 93,000 attendees and 2,100 exhibitors according to the latest figures.

It was a big event for Microsoft’s new Windows. It started for me on the Saturday before, when Acer unveiled a low-end Windows Phone (write-up on the Reg). Next was Microsoft’s press conference; Stephen Elop was on stage, presenting two new mid-range Lumias as if nothing had changed since last year when he announced the now-defunct Nokia X.

The Lumia 640 looks good value, especially in its XL guise: 5.7” 1280 x 720 display, 8GB storage plus microSD slot, 13MP camera, 4G LTE, quad-core 1.2GHz CPU, €189 ex VAT. The smaller Lumia 640 is now on presale at £169.99; we were told €139 ex VAT at MWC, so I guess the real price of the 640XL may be something like £230, though there will be deals.

These phones will ship with Windows Phone 8.1 but get Windows 10 when available.

The big Windows 10 event was elsewhere though, and not mentioned at the press conference. This was the developer event, where General Manager Todd Brix, Director of Program Management Kevin Gallo and others presented the developer story behind the new Universal App Platform (not the same as the old Universal Apps, as I explain here).

This was the real deal, with lots of code. There was even a hands-on session where we built our own Universal Apps in Visual Studio 2015. Note that the Visual Studio build we used featured an additional application type for Windows 10; this is not the same as a Store app in Windows 8, though both use the Windows Runtime.

As someone with hands-on experience of developing a Store app, I am optimistic that the new platform will achieve more success. It is a second attempt with a bit more maturity, and much greater effort to integrate with the Windows desktop, whereas the first iteration went out of its way not to integrate.

Much of the focus was on the Adaptive UX, creating layouts that resize intelligently on different devices. The cross-platform UI concept is controversial, with strong arguments that you only get an excellent UI if you design specifically for a device, rather than trying to make one that runs everywhere. The Universal App Platform is a bit different though, since it is all Windows Runtime. Microsoft’s pitch is that by writing to the UAP you can target desktop, Windows Phone, tablet and Xbox One, with a single code base; and without a cross-device UI this pitch would lose much of its force. Windows 7 legacy is a problem of course; but if we see Windows 10 adopted as rapidly as Windows 7 (following the Vista hiccup) this may not be a deal-breaker.

The official account of the MWC event is in Gallo’s blog post which went out on the same day. There was much more detail at the event, but Microsoft is holding this back, perhaps for its Build conference at the end of April. So in this case you had to be there.

Aside: if you look at the publicity Microsoft got from MWC, you will note that it is mostly based on the press conference and the launch of two mid-range Lumias, hardly ground-breaking. The fact that a ton of new stuff got presented at the developer event got far less attention, though of course sharp eyes like those of Mary Jo Foley were onto it. I have a bias towards developer content; but even so, it strikes me that a session of new content that is critical to the future of Windows counts for more than a couple of new Lumias. This demonstrates the extent to which the big vendors control the news that is written about them – most of the time.

Microsoft takes its .NET runtime open source and cross-platform, announces new C++ compilers for iOS and Android: unpacking today’s news

Microsoft announced today that the .NET runtime will be open source and cross-platform for Linux and Mac. There are several announcements and the details are potentially confusing, so here is a quick summary.

The .NET runtime, also known as the CLR (Common Language Runtime), is the virtual machine that runs Microsoft’s C#, F# and Visual Basic .NET languages, performing just-in-time compilation to native code and providing interop between the application code and the operating system APIs. It is distinct from the .NET Framework, which is the library of mostly C# code that underlies application platforms like ASP.NET, Windows Presentation Foundation (WPF), Windows Forms, Windows Communication Foundation and more.

There is already a cross-platform version of .NET, an open source project called Mono founded by Miguel de Icaza in 2001, not long after the first preview release of C# in 2000. Mono runs on Linux, Mac and Windows. In addition, de Icaza is co-founder of Xamarin, which uses Mono together with its own technology to compile C# for iOS, Android and Mac OS X.

Further, some of .NET is already open source. At Microsoft’s Build conference earlier this year, Anders Hejlsberg made the Roslyn project, the next generation of the C# and Visual Basic compilers, open source under the Apache 2.0 license. I spoke to Hejlsberg about the announcement and wrote it up on the Register here. Note the key point:

Since Roslyn is the compiler for the forthcoming C# 6.0, does that mean C# itself is now an open source language? “Yes, absolutely,” says Hejlsberg.

What then is today’s news? Blow by blow, here are what seem to me the main pieces:

  • The CLR itself will be open source. This is the C++ code from which the CLR is compiled.
  • Microsoft will provide a full open source server stack for Mac and Linux including the CLR. This will not include the frameworks for client applications; no Windows Forms or WPF. Rather, it is the “.NET Core Runtime” and “.NET Core Framework”. That said, Microsoft is working with the Mono team, which does support client applications, so there could be some interesting permutations (bear in mind that Mono also has its own runtime); for now, though, Microsoft is focused on the server stack.
  • Microsoft will release C++ frameworks and compilers for iOS and Android, using the open source Clang (C and C++ compiler front-end) and LLVM (code generation back-end), but with Visual Studio as the IDE. If you are targeting iOS you will need a Mac with a build agent, or you can use a cloud build service (see below). The Android compiler is available now in preview, the iOS compiler is coming soon. “You can edit and debug a single set of C++ source code, and build it for iOS, Android and Windows,” says Microsoft’s Soma Somasegar, corporate VP of the developer division.
  • Microsoft has a new Android emulator for Windows based on Hyper-V. This will assist with Android development using Cordova (the HTML and JavaScript approach also used by PhoneGap) as well as the new C++ option.

  • The next Visual Studio will be called Visual Studio 2015 and is now available in preview; download it here.
  • There will be a thing called Connected Services to make it easier to code against Office 365, Salesforce and Azure
  • A new edition of Visual Studio 2013, called the Community Edition, is now available for free, download it here. The big difference between this and the current Express editions is first that the Community Edition supports multiple target types, whereas you needed a different Express edition for Web applications, Windows Store and Phone apps, and Windows desktop apps.  Second, the Community Edition is extensible so that third parties can create plug-ins; today Xamarin was among the first to announce support. There may be some license restrictions; I am clarifying and will update later.
  • New Cloud Deployment Projects for Azure enable the cloud infrastructure associated with a project to be captured as code.
  • Release Management is being added to Visual Studio Online, Microsoft’s cloud-hosted Team Foundation Server.
  • Enhancements to the Visual Studio Online build service will support builds for iOS and OS X
  • Visual Studio 2013 Update 4 is complete. This is not a big update but adds fixes for TFS and Visual C++ as well as some new features in TFS and in GPU performance diagnostics.

The process by which these new .NET projects will interact with the open source community will be handled by the .NET Foundation.

What is Microsoft up to?

Today’s announcements are extensive, but with two overall themes.

The first is about open sourcing .NET,  a process that was already under way, and the second is about cross-platform.

It is the cross-platform announcements that are more notable, though they go hand in hand with the open source process, partly because of Microsoft’s increasingly close relationship with Mono and Xamarin. Note that Microsoft is doing its own C++ compilers for iOS and Android, but leaving the mobile C# and .NET space open for Xamarin.

By adding native code iOS and Android mobile into Visual Studio, Microsoft is signalling real commitment to these platforms. You could interpret this as an admission that Windows Phone and Windows tablets will never reach parity with their rivals, but it is more a consequence of the company’s focus on cloud, and in particular Office 365 and Azure. The company is prioritising the promotion of its cloud services by providing strong tooling for all major client platforms.

The provision of new Microsoft server-side .NET runtimes for Mac and Linux is a surprise to me. The Mac is not much used as a server but very widely used for development. Linux is an increasingly important operating system within the Azure cloud platform.

A side effect of all this is that the .NET Framework may finally fulfil its cross-platform promise, something Microsoft suppressed for years by only supporting it on Windows. That is good news for those who like programming in C#.

The .NET Framework is changing substantially in its next version. This is partly because of the Roslyn compiler, which is itself written in C# and opens up new possibilities for rich refactoring and code transformation; and partly because of .NET Core and major changes in the forthcoming version of ASP.NET.

Is Microsoft concerned that by supporting Linux it might reduce the usage of Windows Server? “In Azure, Windows and Linux are a core part of our platform,” Somasegar told me. “Helping developers by providing a good set of tools and letting them decide what server they run on, we feel is all goodness. If you want a complete open source platform, we have the tools for them.”

How big are these announcements? “I would say huge,” Somasegar told me. “What it shows is that we are not being constrained by any one platform. We are doing more open source, more cross-platform, delivering Visual Studio free to a broader set of people. It’s all about having a great developer offering irrespective of what platform they are targeting or what kind of app they are building.”

That’s Microsoft’s perspective then. In the end, whether you interpret these moves as a sign of strength or weakness for Microsoft, developers will gain from these enhancements to Visual Studio and the .NET platform.

Coding Office for cross platform: Microsoft explains its approach

At last month’s @Scale conference in San Francisco, developers from a number of well-known companies (Google, Facebook, Twitter, Dropbox and others) spoke about the challenge of scaling applications and services to millions or even billions of users.

Among the speakers was Igor Zaika, Distinguished Engineer in the Microsoft Office team, and the video (embedded below) is illuminating not only as an example of how to code across multiple platforms, but also as an insight into where the company is taking Office.

Zaika gives a brief résumé of the history of Office, mentioning how the team has experienced the highs and lows of cross-platform code. Word 6.0 (1993) was great on Windows but a disaster on the Mac. The team built an entire Win32 emulation layer for the Mac, enabling a high level of code reuse, but resulting in a poor user experience and lots of platform-specific bugs and performance issues in the Mac version.

Next came Word 98 for the Mac, which took the opposite approach, forking the code to create an optimized Mac-specific version. It was well received and great for user experience, but “it was only fun for the first couple of years,” says Zaika. As the Windows version evolved, merging code from the main trunk into the Mac version became increasingly difficult.

Today Microsoft is committed not only to Mac and Windows versions of Word, but to all the major platforms, by which Zaika means Apple (including iOS), Android, Windows (desktop and WinRT) and Web. “If we don’t, we are not going to have a sustainable business,” he says.

WinRT is short for the Windows Runtime, also known as Metro, or as the Store App platform. Zaika says that the relationship between WinRT and Win32 (desktop Windows) is similar to that between Apple’s OS X and iOS.

Time for a brief digression of my own: some observers have said that Microsoft should have made a dedicated version of Windows for touch/mobile rather than attempting to do both at once in Windows 8. The truth is that it did, but Microsoft chose to bundle both into one operating system in Windows 8. Windows RT (the ARM version used in Surface RT) is a close parallel to the iPad, since only WinRT apps can be installed. What seems to be happening now is that Windows Phone and Windows RT will be merged, so that the equivalence of WinRT and iOS will be closer and more obvious.

Microsoft’s goal with Office is to achieve high content fidelity and consistency of functionality across all platforms, but to use native UX/UI frameworks so that each version integrates properly with the operating system on which it runs. The company also wants to achieve a faster shipping cycle; the traditional two-year cycle is not fast enough, says Zaika.

What then is Microsoft’s technical strategy for cross-platform Office now? The starting point, Zaika explains, is a shared core of C++ code. Office has always been written in C/C++, and “that has worked out well for us,” he says, since it is the only language that compiles to native code across all the platforms (web is an exception, and one that Zaika did not talk much about, except to note the importance of “shared service code,” cloud-based code that is used for features that do not need to work offline).

In order for the shared non-visual code to work correctly cross-platform, Microsoft has a number of platform abstraction layers (PALs). No #ifdefs (to handle platform differences) are allowed in the shared code itself. However, rather than a monolithic Win32 emulation as used in Word 6.0 for the Mac, Microsoft now has numerous mini-PALs. There is also a willingness to compromise, abandoning shared code if it is necessary for a good platform experience.

How do you ensure cross-platform fidelity in places where you cannot share code? The answer is unit testing, says Zaika, and there is a strong reliance on this in Office development.

There is also an abstraction layer for document rendering. Office requires composition, animation and touch APIs on each platform. Microsoft uses DirectX on Win32, a thin layer over Apple’s CoreAnimation API on Mac and iOS, a thin layer over XAML on WinRT, and a thinnish layer over Java on Android.

The outcome of Microsoft’s architectural work is a high level of code sharing, despite the commitment to native frameworks for UX. Zaika showed a slide revealing code sharing of over 95% for PowerPoint on WinRT and Android.

What can Microsoft-watchers infer from this about the future of Office? While there are no revelations here, it does seem that work on Office for WinRT and for Android is well advanced.

Office for WinRT has implications for future Windows tablets. If a version of Office with at least the functionality of Office for iPad runs on WinRT, there is no longer any need to include the Windows desktop on future Windows tablets – by which I mean not laptop replacements like Surface 3.0, but smaller tablets. That will make such devices less perplexing for users than Surface RT, though with equivalent versions of Office on both Android and iOS tablets, the unique advantages of Windows tablets will be harder to identify.

Thanks to WalkingCat on Twitter for alerting me to this video.

VLC is coming to Windows Phone

The popular media playback app VLC is coming to Windows Phone as well as Windows tablets, according to an email sent to supporters of its Kickstarter project for VLC for WinRT (Windows Runtime).

A new preview release has been made available on the Windows Store, with the following changes:

  • using libVLC 2.2.0 core,
  • redesign of the interface,
  • huge performance improvements,
  • use of Winsock for networking instead of WinRTsock,
  • use of Windows 8.1 widgets,
  • move the interface code to Universal to prepare Windows Phone 8.1 port.

The app is currently x86 only, and this will have to change before a Windows Phone version is possible, since Windows Phone currently runs only on ARM chipsets. The VLC developers say:

While this release is still x86-only, we’ve made great advances on the ARM port. More news soon.

The progress of apps like VLC will be interesting to watch following the release of Windows 10 next year, which (from the user’s perspective) blurs the distinction between desktop apps (like the old version of VLC) and Store apps (like this new one). Can the Store app be good enough that users will not feel the need to have the desktop version installed? Even if it is, of course, the desktop version will remain the only choice for those on Windows 7 and earlier. In fact, make that Windows 8.0 and earlier, since the new version requires Windows 8.1.

Book review: Professional ASP.NET MVC 5. Is this the way to learn ASP.NET MVC?

This book caught my eye because while I like ASP.NET MVC, Microsoft’s modern web application framework, it seems to be badly documented. Even the word “badly” is not quite right; there is lots of documentation, some of high quality, but finding your way around it is challenging, thanks to the many different pieces involved. When I completed an ASP.NET MVC project recently, I found it frustrating thanks to over-reliance on sample projects (hey, here is an application we did that works, see if you can figure out how we did it), many out of date articles relating to old versions, and the opposite: posts and samples which include preview software that does not seem wise to use in production.

In my experience ASP.NET MVC is both cleaner and faster than ASP.NET Web Forms, the older .NET web framework, but there is more to learn before you can go ahead and write an application.

Professional ASP.NET MVC 5 gives you nearly 600 pages on the subject. It is aimed at a broad readership: the introduction states:

Professional ASP.NET MVC 5 is designed to teach ASP.NET MVC, from a beginner level through advanced topics.

Perhaps that is too broad, though the idea is that the first six chapters (about 150 pages) cover the basics, and that the later chapters are more advanced, so if you are not a beginner you can start at chapter 7.

The main author is Jon Galloway, who is a Technical Evangelist at Microsoft. The other authors are Brad Wilson, formerly at Microsoft and now at CenturyLink Cloud; K. Scott Allen at OdeToCode; David Matson, who is on the ASP.NET MVC team at Microsoft; and Phil Haack, formerly at Microsoft and now at GitHub. I get the impression that Haack wrote several chapters in an earlier edition of the book, but did not work directly on this one; Galloway brought his chapters up to date.

Be in no doubt: there are plenty of well-informed ASP.NET MVC people on this team.

The earlier part of the book uses a sample Music Store application, a version of which is publicly available here. You can also download a tutorial, based on the sample, written by Galloway. The public tutorial however dates from 2011 and is based on ASP.NET MVC 3 and Visual Studio 2010. The book uses Visual Studio 2013.

Chapters 1 to 6, the beginner section, do a decent job of talking you through how to build a first application. There are chapters on Controllers, Views, Models, Forms and HTML Helpers, and finally Data Annotations and Validation. It’s a good basic introduction but if you are like me you will come out with many questions, like what is an ActionResult (the type of most Controller methods)? You have to wait until chapter 16 for a full description.

Chapter 7 is on Membership, Authorization and Security. That is too much for one chapter. It is mostly on security, and inadequate on membership. One of my disappointments with this book is that Azure Active Directory hardly gets a mention; yet to my mind integration of web applications with Office 365 (which uses Azure AD) is a huge feature for Microsoft.

On security though, this is a useful chapter, with handy coverage of Cross-Site Request Forgery and other common vulnerabilities.

Next comes a chapter on AJAX with a little bit on jQuery, client-side validation, and Ajax ActionLinks. Here is the dilemma though. Does it make sense to cover jQuery in detail, when this very popular open source library is widely documented elsewhere? On the other hand, does it make sense not to cover jQuery in detail, when it is usually a vital part of your ASP.NET MVC application?

I would add that this title is poor on design aspects of a web application. That said, I was not expecting much on the design side; but what would help would be coverage of how to work with designers: what is safe to hand over to designers, and how does a typical designer/developer workflow play out with ASP.NET MVC?

I would also like to see more coverage of how to work with Bootstrap, the CSS framework which is integrated with ASP.NET MVC 5 in Visual Studio. I found it a challenge, for example, to discover the best way to change the default fonts and colours used, which is rather basic.

Chapter 9 is on routing: dry but essential background. Chapter 10 is on NuGet, the Visual Studio package manager; a good chapter, given how important NuGet now is for most Visual Studio work.

Incidentally, many of the samples for the book can be installed via NuGet. It’s not completely obvious how to do this. I found the best way is to go to http://www.nuget.org and search for Wrox.ProMvc5 – here is the link to the search results. This lists all the packages available; note the package names. Then open the NuGet package manager console and type:

install-package [packagename]

to get the sample.

Chapter 11 is a too-brief chapter on the Web API. I would like to see more on this, maybe even walking through a complete application with clients for, say, Windows Phone and a web application – though the following chapter does present a client example using AngularJS.

Chapter 13 is a somewhat theoretical look at dependency injection and inversion of control; handy as Microsoft developers talk a lot about this.

Next comes a very brief introduction to unit testing, intended I think only as a starting point.

For me, the next two chapters are the most valuable. Chapter 15 concerns extending MVC: you learn about extending models with value providers and model binders; validating models; writing HTML helpers and Razor (the view engine in ASP.NET MVC) helpers; and authentication and authorization filters. Chapter 16 on advanced topics looks in more detail at Razor, routing, templates, ActionResult and a few other things.

Finally, we get a look at how the Nuget.org application was put together, and an appendix covering some miscellaneous details like what is new in ASP.NET MVC 5.1.

Conclusions

I find this one hard to summarise. There is too much missing to give this an unreserved recommendation. I would like more on topics including ASP.NET Identity, Azure AD integration, Entity Framework, Bootstrap, and more. Trying to cover every developer from beginner to advanced is too much; removing some of the introductory material would have left more room for the more interesting sections. The book is also rather weighted towards theory rather than hands-on coding. At some points it felt more like an explanation from the ASP.NET MVC team on “why we did it this way”, than a developer tutorial.

That said, having those insights from the team is valuable in itself. As someone who has only recently engaged with ASP.NET MVC in a real application, I did find the book useful and will come back to some of those explanations in future.

Looking at what else is available, it seems to me that there is a shortage of books on this subject and that a “what you need to know” title aimed at professional developers would be widely welcomed. It would pay Microsoft to sponsor it, since my sense is that some developers stick with ASP.NET Web Forms not because it is better, but because it is more approachable.

 

Xamarin announces large round of funding, plans international expansion

It is a case of “right time, right place” for Xamarin, as it scoops up Windows developers who need either to transition to iOS and Android, or to add mobile support to existing applications. You can also port applications to the Mac with its cross-platform development framework based on C#; no bad thing as Mac sales continue to boom.

Xamarin also fits with Microsoft’s new strategy, as I understand it, which is to provide strong support for iOS and Android for applications such as Microsoft Office, and services such as those hosted on Microsoft Azure.

Now the company has announced an additional $54 million of funding, which CEO Nat Friedman tells me is “the largest round of financing achieved by any mobile platform company ever”.

The financing comes from “new and existing investors, including Lead Edge Capital, Insight Venture Partners, Charles River Ventures, Ignition Partners, and Floodgate.”

What will the money be spent on? “Two things,” says Friedman. “We’re planning to expand our sales and marketing into Europe. We’re opening a sales office in London in the Fall. We did a roadshow with Microsoft in Europe and it was extremely successful. Second, we’re going to invest in improving the quality of our platforms.”

Friedman notes that mobile should not be considered a development niche. “Our view is that in the future all software will be mobile software in some way or another, when you build an application it will have to have some kind of mobile surface area.”

A few other points to note. One is that Xamarin Forms, recently introduced, has been a big hit with developers. “The Xamarin Forms forum has been our most popular forum,” says Friedman. “We’ve been really surprised.”

The company used to promote the idea of avoiding cross-platform code for the user interface, but then introduced Xamarin Forms as a cross-platform GUI framework, arguing that because it uses only native controls, it avoids the main drawbacks of the idea.

Some of the funding then will go into improving Xamarin Forms and tools to work with the framework.

Another key area is Visual Studio integration. The acquisition of the Visual Studio integration team from Clarius Consulting, in May 2014, is also significant here, since Clarius had strong expertise in this area.

Might Microsoft try to acquire Xamarin? Interesting question, and one which Friedman is not in a position to discuss; I am not a financial expert, but would guess that this round of funding increases Xamarin’s ability to remain independent, though investors may be hoping to reap the rewards of an acquisition, who knows?

Bing Developer Assistant adds code samples to Visual Studio IntelliSense, with mixed results

Microsoft has updated its Bing Developer Assistant Beta, a Visual Studio 2013 add-in which hooks into IntelliSense so that you get code samples as well as brief documentation. For example, in an Entity Framework project, if you select dbContext.SaveChanges, you get a code sample which uses that method.

There is no guarantee of course that the sample is relevant to what you are trying to accomplish. You can hit Search More though and get a selection of code snippets and sample projects, drawn from sites including MSDN, StackOverflow and Codeproject.

Developer beware though. Looking at the code samples, the top one is from a 2011 blog post relating to CTP (Community Tech Preview) 5 of Entity Framework 4.1. If you hit the link, you get this:

“The information in this post is out of date”, it says, followed by a link to what is in fairness a rather helpful article on using SaveChanges.

Hmm, maybe Bing Developer Assistant should try filtering the search to eliminate samples on preview or obsolete APIs? A snag here though is that on occasion the blogs and samples on preview frameworks are all you can get, because by the time the thing is actually released, the developer evangelists have moved on to blog about the next up-and-coming cool thing.

If you choose an object member for which Bing finds no code sample, you are prompted to add one of your own.

This takes you to the Developer Network sample upload page.

This form is quite a lot of work, but lets you add a code snippet or sample project together with title and comments explaining what it does.

The Bing Developer Assistant also searches for sample projects.

Again it is a case of picking and choosing what is really relevant; but developers are experts and expected to use common sense.

A drawback with Bing Developer Assistant is that only one add-in can extend IntelliSense, so if you use ReSharper or another tool which also does this, you have to choose which one to allow.

In the end, this is all about integrating web search into the IDE. Is that a good idea, or is it better simply to have your web browser open, perhaps on another display, and type “dbContext SaveChanges EF6” or some such into your favourite search engine?

There is some merit in a search engine that automatically filters to show only code samples – hey, that is what Google’s popular Code Search did, until it was mysteriously shut down – though I’m not sure how much I like the idea of possibly obsolete and deprecated samples showing up in Visual Studio as you are coding.

Still, the truth is that web search is critical to software development today and it is good to see that recognised.

Developing an app on Microsoft Azure: a few quick reflections

I have recently completed (if applications are ever completed) an application which runs on Microsoft’s Azure platform. I used lots of Microsoft technology:

  • Visual Studio 2013
  • Visual Studio Online with Team Foundation version control
  • ASP.NET MVC 4.0
  • Entity Framework 4.0
  • Azure SQL
  • Azure Active Directory
  • Azure Web Sites
  • Azure Blob Storage
  • Microsoft .NET 4.5 with C#

The good news: the app works well and performance is good. The application handles the upload and download of large files by authorised users, and replaces a previous solution using a public file sending service. We were pleased to find that the new application is a little faster for upload and download, as well as offering better control over user access and a more professional appearance.

There were some complications though. The requirement was for internal users to log in with their Office 365 (Azure Active Directory) credentials, but for external users (the company’s customers) to log in with credentials stored in a SQL Server database – in other words, hybrid authentication. It turns out you can do this reasonably seamlessly by implementing IPrincipal in a custom class to support the database login. This is largely uncharted territory though in terms of official documentation and took some effort.
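
As an illustration of the general shape – and not the actual code from this application – here is a minimal sketch of a custom principal for the database-backed logins. The class names are mine, and the assumption is that you construct one of these after validating the database user (for example in Application_PostAuthenticateRequest, assigning it to HttpContext.Current.User), so that the rest of the pipeline treats database users and Azure AD users uniformly.

using System;
using System.Security.Principal;

// Minimal identity for users who log in against the SQL Server database
// rather than Azure AD. "DatabaseLogin" is just an illustrative label.
public class DatabaseIdentity : IIdentity
{
    public DatabaseIdentity(string userName)
    {
        Name = userName;
    }

    public string Name { get; private set; }

    public string AuthenticationType
    {
        get { return "DatabaseLogin"; }
    }

    public bool IsAuthenticated
    {
        get { return !string.IsNullOrEmpty(Name); }
    }
}

// Custom principal so that User.IsInRole and [Authorize(Roles = "...")]
// work for database users as well as for Azure AD users.
public class DatabasePrincipal : IPrincipal
{
    private readonly string[] roles;

    public DatabasePrincipal(string userName, string[] roles)
    {
        Identity = new DatabaseIdentity(userName);
        this.roles = roles ?? new string[0];
    }

    public IIdentity Identity { get; private set; }

    public bool IsInRole(string role)
    {
        return Array.IndexOf(roles, role) >= 0;
    }
}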

Second, Microsoft’s Azure Active Directory support for custom applications is half-baked. You can create an application that supports Azure AD login in a few moments with Visual Studio, but it does not give you any access to metadata such as the security groups to which the user belongs. I have posted about this in more detail here. There is an API of course, but it is currently a moving target: be prepared for some hassle if you try this.

Third, while Azure Blob Storage itself seems to work well, most of the resources for developers seem to have little idea of what a large file is. Since a primary use case for cloud storage is to cover scenarios where email attachments are not good enough, it seems to me that handling large files (by which I mean multiple GB) should be considered normal rather than exceptional. By way of mitigation, the API itself has been written with large files in mind, so it all works fine once you figure it out. More on this here.

What about Visual Studio? The experience has been good overall. Once you have configured the project correctly, you can update the site on Azure simply by hitting Publish and clicking Next a few times. There is some awkwardness over configuration for local debugging versus deployment. You probably want to connect to a local SQL Server and the Azure storage emulator when debugging, and the Azure hosted versions after publishing. Visual Studio has a Web.Debug.Config and a Web.Release.Config which let you apply a transformation to your main Web.Config when publishing – though note that these do not have any effect when you simply run your project in Release mode. The correct usage is to set Web.Config to what you want for debugging, and apply the deployment configuration in Web.Release.Config; then it all works.

The piece that caused me most grief was a setting for <wsFederation>. When a user logs in with Azure AD, they get redirected to a Microsoft site to log in, and then back to the application. Applications have to be registered in Azure AD for this to work. There is some uncertainty though about whether the reply attribute, which specifies the redirection back to the app, needs to be set explicitly or not. In practice I found that it does need to be explicit, otherwise you get redirected to the deployed site even when debugging locally – not good.

I have mixed feelings about Team Foundation version control. It works, and I like having a web-based repository for my code. On the other hand, it is slow, and Visual Studio sulks from time to time and requires you to re-enter credentials (Microsoft seems to love making you do that). If you have a less than stellar internet connection (or even a good one), Visual Studio freezes from time to time since the source control stuff is not good at working in the background. It usually unfreezes eventually.

As an experiment, I set the project to require a successful build before check-in. The idea is that you cannot check in a broken build. However, this build has to take place on the server, not locally. So you try to check in, Visual Studio says a build is required, and prompts you to initiate it. You do so, and a build is queued. Some time later (5-10 minutes) the build completes and a dialog appears behind the IDE saying that you need to reconcile changes – even if there are none. Confusing.

What about Entity Framework? I have mixed feelings here too, and have posted separately on the subject. I used code-first: just create your classes and add them to your DbContext and all the data access code is handled for you, kind-of. It makes sense to use EF in an ASP.NET MVC project since the framework expects it, though it is not compulsory. I do miss the control you get from writing your own SQL though; and found myself using the SqlQuery method on occasion to recover some of that control.
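
To make the code-first flow concrete, here is a minimal sketch using illustrative names (FileTransfer and TransferDb are not the real application’s model): a plain class, a DbContext that exposes it, and the SqlQuery escape hatch back to hand-written SQL mentioned above.

using System.Data.Entity;
using System.Linq;

// A plain class becomes a table by convention once it is exposed on the context.
public class FileTransfer
{
    public int Id { get; set; }
    public string FileName { get; set; }
    public long SizeBytes { get; set; }
}

public class TransferDb : DbContext
{
    public DbSet<FileTransfer> FileTransfers { get; set; }
}

public class TransferQueries
{
    public void Examples()
    {
        using (var db = new TransferDb())
        {
            // LINQ query translated to SQL by Entity Framework
            var big = db.FileTransfers.Where(f => f.SizeBytes > 1000000000).ToList();

            // Dropping down to raw SQL when you want more control
            var recent = db.FileTransfers
                .SqlQuery("SELECT TOP 10 * FROM FileTransfers ORDER BY Id DESC")
                .ToList();
        }
    }
}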

Finally, a few notes on ASP.NET MVC. I mostly like it; the separation between Razor views (essentially HTML templates into which you pour your data at runtime) and the code which implements your business logic and data access is excellent. The code can get convoluted though. Have a look at this useful piece on the ASP.NET MVC WebGrid and this remark:

grid.Column("Name",
  format: @<text>@Html.ActionLink((string)item.Name,
  "Details", "Product", new { id = item.ProductId }, null)</text>),

The format parameter is actually a Func, but the Razor view engine hides that from us. But you’re free to pass a Func—for example, you could use a lambda expression.

The code works fine but is it natural and intuitive? Why, for example, do you have to cast the first argument to ActionLink to a string for it to work (I can confirm that it is necessary), and would you have worked this out without help?

I also hit a problem restyling the pages generated by Visual Studio, which use the Twitter Bootstrap framework. The problem is that bootstrap.css is a generated file and it does not make sense to edit it directly. Rather, you should edit some variables and use them as input to regenerate it. I came up with a solution which I posted on Stack Overflow but no comments yet – perhaps this post will stimulate some, as I am not sure if I found the best approach.

My sense is that while ASP.NET MVC is largely a thing of beauty, it has left behind more casual developers who want a quick and easy way to write business applications. Put another way, the framework is somewhat challenging for newcomers and that in turn affects the breadth of its adoption.

Developing on Azure and using Azure AD makes perfect sense for businesses which are using the Microsoft platform, especially if they use Office 365, and the level of integration on offer, together with the convenience of cloud hosting and anywhere access, is outstanding. There remain some issues with the maturity of the frameworks, ever-changing libraries, and poor or confusing documentation.

Since this area is strategic for Microsoft, I suggest that it would benefit the company to work hard on pulling it all together more effectively.

A note on Azure storage and downloading large files

I have written a simple ASP.NET MVC application for upload and download of files to/from Azure storage.

Getting large file upload to work was the first exercise, described here. That is working well; but what about download?

If your files in Azure storage are public, you can simply serve a URL to the file. If a file is not public though, you have a couple of choices:

1. Download the file under application control, by writing to Response.OutputStream or using a FileResult action.

2. Issue a Shared Access Signature (SAS) to the client which enables it to retrieve the file directly from Azure storage. The SAS is sent as a URL argument which tells Azure storage that the request is authorised. The browser downloads the file directly, so it makes no difference to your web application if the file is large (a code sketch follows below).
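
Here is a rough sketch of option 2 using the Azure storage client library. It assumes you already have a reference to the private CloudBlobContainer; the four-hour expiry and the helper name are illustrative choices, not recommendations.

using System;
using Microsoft.WindowsAzure.Storage.Blob;

public static class SasHelper
{
    public static string GetDownloadUrl(CloudBlobContainer container, string fileName)
    {
        CloudBlockBlob blob = container.GetBlockBlobReference(fileName);

        var policy = new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Read,
            // The link stops working after this time, which is the catch
            // for users on slow connections mentioned below.
            SharedAccessExpiryTime = DateTime.UtcNow.AddHours(4)
        };

        string sasToken = blob.GetSharedAccessSignature(policy);

        // Hand this URL to the browser; it downloads directly from Azure storage.
        return blob.Uri + sasToken;
    }
}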

Note that if you use the first option, it will not work with large files if you simply call DownloadToStream or similar:

container.GetBlockBlobReference(FileName).DownloadToStream(Response.OutputStream);

Why not? Well, the way this code works is that it downloads the large file to the web server, then sends it to the browser. What if your large file is 5GB? The browser will wait a long time for the first byte to be served (giving the user an unresponsive page); but before that happens, the web application will probably throw an exception because it does not like downloading such a large file.

This means the SAS option is a good one, though note that you have to specify an expiry time which could cause problems for users on a slow connection.

Another option is to serve the file in chunks. Use CloudBlockBlob.DownloadRangeToStream to write to Response.OutputStream in a loop until the download is complete. Call Response.Flush() after each chunk to send the chunk to the browser immediately.
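
A simplified sketch of that loop – not the production code – might look like this; the 4MB chunk size and the controller shape are assumptions for the example.

using System;
using System.Web.Mvc;
using Microsoft.WindowsAzure.Storage.Blob;

public class DownloadController : Controller
{
    // Assumed to be initialised elsewhere, e.g. in the constructor.
    private CloudBlobContainer container;

    public void LargeFile(string fileName)
    {
        CloudBlockBlob blob = container.GetBlockBlobReference(fileName);
        blob.FetchAttributes(); // populates blob.Properties.Length

        long remaining = blob.Properties.Length;
        long offset = 0;
        const int chunkSize = 4 * 1024 * 1024; // 4MB per round trip

        Response.ContentType = "application/octet-stream";
        Response.AddHeader("Content-Disposition", "attachment; filename=" + fileName);
        Response.AddHeader("Content-Length", blob.Properties.Length.ToString());

        while (remaining > 0)
        {
            long length = Math.Min(chunkSize, remaining);
            blob.DownloadRangeToStream(Response.OutputStream, offset, length);
            Response.Flush(); // push this chunk to the browser immediately
            offset += length;
            remaining -= length;
        }
    }
}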

This gives the user a nice responsive download experience complete with a cancel option as provided by the browser, and does not crash the application on the server. It seems to me a reasonable approach if the web application is also hosted on Azure and therefore has a fast connection to Azure storage.

What about resuming a failed download? The SAS approach should work as Azure supports it. You could also support this in your app with some additional work since Resume means reading the Range header in a GET request. I have not tried doing this but you might find some clues here.
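
For what it is worth, here is a rough, untested sketch of honouring the simple “bytes=N-” resume case; it could be combined with the chunked loop above by starting at the returned offset. The helper is my own invention for illustration, not part of any library.

using System.Web;

public static class RangeHelper
{
    // Returns the requested start offset, or 0 if there is no usable Range header.
    public static long GetStartOffset(HttpRequestBase request, long totalLength, HttpResponseBase response)
    {
        string range = request.Headers["Range"]; // e.g. "bytes=1048576-"
        if (string.IsNullOrEmpty(range) || !range.StartsWith("bytes=")) return 0;

        string value = range.Substring("bytes=".Length).TrimEnd('-');

        long start;
        if (!long.TryParse(value, out start) || start <= 0 || start >= totalLength) return 0;

        response.StatusCode = 206; // Partial Content
        response.AddHeader("Content-Range",
            string.Format("bytes {0}-{1}/{2}", start, totalLength - 1, totalLength));
        return start;
    }
}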

Developing an ASP.NET MVC app with Azure Active Directory: an ordeal

Regular readers will know that I am working on a simple (I thought) ASP.NET MVC application which is hosted on Azure and uses Azure Blob Storage.

So far so good; but since this business uses Office 365 it seemed to me logical to have users log in using Azure Active Directory (AD). Visual Studio 2013, with the latest update, has a nice wizard to set this up: you just complete the relevant authentication dialog when starting your new project.

This worked fairly well, and users can log in successfully using Azure AD and their normal Office 365 credentials.

I love this level of integration and it seems to me key and strategic for the Microsoft platform. If an employee leaves, or changes role, just update Active Directory and all application access comes into line automatically, whether on premise or in the cloud.

The next stage though was to define some user types; to keep things simple, let us say we have an AppAdmin role for users with full access to the application, and an AppUser role for users with limited access. Other users in the organisation do not need access at all and should not be able to log in.

The obvious way to do this is with AD groups, but I was surprised to find that there is no easy way to discover to which groups an AD user belongs. The Azure AD integration which the wizard generates is only half done. Users can log in, and you can programmatically retrieve basic information including the first name, last name, User Principal Name and object ID, but nothing further.

Fair enough, I thought, there will be some libraries out there that fill the gap; and this is how the nightmare begins. The problem is that this is the cutting edge of .NET cloud development and is an area of rapid change. Yes there are samples out there, but each one (including the official ones on MSDN) seems to be written at a different time, with a different approach, with different .NET assembly dependencies, and varying levels of alpha/beta/experimental status.

The one common thread is that to get the AD group information you need to use the Graph API, a REST API for querying and even writing to Azure Active Directory. In January 2013, Microsoft identity expert Vittorio Bertocci (Principal Program Manager in the Windows Azure Active Directory team at Microsoft) wrote a helpful post about how to restore IsInRole() and [Authorize] in ASP.NET apps using Azure AD – exactly what I wanted to do. He describes essentially a manual approach, though he does make use of a library called Azure Authentication Library (AAL), which you can find on NuGet (the package manager for .NET libraries used by Visual Studio) described as a Beta.

That would probably work, but AAL is last year’s thing and you are meant to use ADAL (Active Directory Authentication Library) instead. ADAL is available in various versions ranging from 1.0.3 which is a finished release, to 2.6.2 which is an alpha release. Of course Bertocci has not updated his post so you can use the obsolete AAL beta if you dare, or use ADAL if you can figure out how to amend the code and which version is the best/safest to employ. Or you can write your own wrapper for the Graph API and bypass all the Nuget packages.

I searched for a better sample, but it gets worse. If you browse around MSDN you will probably come across this article along with this sample which is a Task Tracker application using Azure AD, though note the warnings:

NOTE: This sample is outdated. Its technology, methods, and/or user interface instructions have been replaced by newer features. To see an updated sample that builds a similar application, see WebApp-GraphAPI-DotNet.

Despite the warnings, the older sample is widely referenced in Microsoft posts like this one by Rick Anderson.

OK then, let’s look at the shiny new sample, even though it is less well documented. It is called WebApp-GraphAPI-DotNet and includes code to get the user profile, roles, contacts and groups from Azure AD using the latest Graph API client: Microsoft.Azure.ActiveDirectory.GraphClient. This replaces an older effort called the GraphHelper which you will find widely used elsewhere.

If you dig into this new sample though, you will find a ton of dependencies on pre-release assemblies. You are not just dealing with the Graph API, but also with OWIN (Open Web Interface for .NET), which seems to be Microsoft’s current direction for decoupling web applications from the servers that host them.

After messing around with NuGet packages and trying to get WebApp-GraphAPI-DotNet working I realised that I was not happy with all this preview code, which is likely to break as further updates come along. Further, it does far more than I want. All I need is actually contained in Bertocci’s January 2013 post about getting back IsInRole.

I ended up patching together some code using the older GraphHelper (as found in the obsolete Task Tracker application) and it is working. I can now use IsInRole based on AD groups.
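
For anyone attempting the same thing, the general shape – simplified, and not my actual code – is to look up the user’s groups after sign-in and add them to the principal as Role claims, for example via a ClaimsAuthenticationManager registered under system.identityModel in web.config. The GraphGroupHelper class below is a hypothetical stand-in for whichever Graph API client or helper you end up using.

using System.Collections.Generic;
using System.Security.Claims;

// Hypothetical stand-in for your Graph API code of choice.
public static class GraphGroupHelper
{
    public static IEnumerable<string> GetGroupNames(string userObjectId)
    {
        // Call the Graph API here and return the display names of the user's groups.
        return new string[0];
    }
}

// Runs after Azure AD sign-in and adds one Role claim per AD group, so that
// User.IsInRole("AppAdmin") and [Authorize(Roles = "AppAdmin")] work again.
public class GroupToRoleClaimsManager : ClaimsAuthenticationManager
{
    public override ClaimsPrincipal Authenticate(string resourceName, ClaimsPrincipal incomingPrincipal)
    {
        var identity = incomingPrincipal == null ? null : incomingPrincipal.Identity as ClaimsIdentity;
        if (identity != null && identity.IsAuthenticated)
        {
            // Azure AD puts the user's object ID in this claim.
            var objectId = identity.FindFirst(
                "http://schemas.microsoft.com/identity/claims/objectidentifier");

            if (objectId != null)
            {
                foreach (string group in GraphGroupHelper.GetGroupNames(objectId.Value))
                {
                    identity.AddClaim(new Claim(ClaimTypes.Role, group));
                }
            }
        }
        return incomingPrincipal;
    }
}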

This is a mess. It is a simple requirement and it should not be necessary to plough through all these complicated and conflicting documents and samples to achieve it.