Category Archives: visual studio

Bing Developer Assistant adds code samples to Visual Studio IntelliSense, with mixed results

Microsoft has updated its Bing Developer Assistant Beta, a Visual Studio 2013 add-in which hooks into IntelliSense so that you get code samples as well as brief documentation. For example, in an Entity Framework project, if you select dbContext.SaveChanges, you get a code sample which uses that method.

image

There is no guarantee of course that the sample is relevant to what you are trying to accomplish. You can hit Search More though and get a selection of code snippets and sample projects, drawn from sites including MSDN, Stack Overflow and CodeProject.

image

Developer beware though. Looking at the code samples, the top one is from a 2011 blog post relating to CTP (Community Tech Preview) 5 of Entity Framework 4.1. If you hit the link, you get this:

image

“The information in this post is out of date”, it says, followed by a link to what is in fairness a rather helpful article on using SaveChanges.

Hmm, maybe Bing Developer Assistant should try filtering the search to eliminate samples based on preview or obsolete APIs? A snag here though is that on occasion the blogs and samples covering preview frameworks are all you can get, because by the time the thing is actually released, the developer evangelists have moved on to blog about the next up-and-coming cool thing.

If you choose an object member for which Bing finds no code sample, you are prompted to add one of your own:

image

This takes you to the Developer Network sample upload page:

image

This form is quite a lot of work, but lets you add a code snippet or sample project together with title and comments explaining what it does.

The Bing Developer Assistant also searches for sample projects:

image

Again it is a case of picking and choosing what is really relevant; but developers are experts and are expected to use common sense.

A drawback with Bing Developer Assistant is that only one add-on can extend IntelliSense, so if you use Resharper or another tool which also does this, you have to choose which one to allow.

In the end, this is all about integrating web search into the IDE. Is that a good idea, or is it better simply to have your web browser open, perhaps on another display, and type “dbContext SaveChanges EF6” or some such into your favourite search engine?

There is some merit in a search engine that automatically filters to show only code samples – hey, that is what Google’s popular Code Search did, until it was mysteriously shut down – though I’m not sure how much I like the idea of possibly obsolete and deprecated samples showing up in Visual Studio as you are coding.

Still, the truth is that web search is critical to software development today and it is good to see that recognised.

Developing an app on Microsoft Azure: a few quick reflections

I have recently completed (if applications are ever completed) an application which runs on Microsoft’s Azure platform. I used lots of Microsoft technology:

  • Visual Studio 2013
  • Visual Studio Online with Team Foundation version control
  • ASP.NET MVC 4.0
  • Entity Framework 4.0
  • Azure SQL
  • Azure Active Directory
  • Azure Web Sites
  • Azure Blob Storage
  • Microsoft .NET 4.5 with C#

The good news: the app works well and performance is good. The application handles the upload and download of large files by authorised users, and replaces a previous solution using a public file sending service. We were pleased to find that the new application is a little faster for upload and download, as well as offering better control over user access and a more professional appearance.

There were some complications though. The requirement was for internal users to log in with their Office 365 (Azure Active Directory) credentials, but for external users (the company’s customers) to log in with credentials stored in a SQL Server database – in other words, hybrid authentication. It turns out you can do this reasonably seamlessly by implementing IPrincipal in a custom class to support the database login. This is largely uncharted territory though in terms of official documentation and took some effort.
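By way of illustration, here is a minimal sketch of the idea (class, property and role names are my own, not the application's actual code): an IPrincipal implementation backed by the SQL user store, so that role checks work the same way for database users as for Azure AD users.

using System;
using System.Security.Principal;

// Hypothetical identity for users authenticated against the SQL database
// rather than Azure AD. Names are illustrative only.
public class DatabaseIdentity : IIdentity
{
    public DatabaseIdentity(string userName) { Name = userName; }

    public string Name { get; private set; }
    public string AuthenticationType { get { return "DatabaseLogin"; } }
    public bool IsAuthenticated { get { return !string.IsNullOrEmpty(Name); } }
}

public class DatabasePrincipal : IPrincipal
{
    private readonly string[] roles;

    public DatabasePrincipal(string userName, string[] roles)
    {
        Identity = new DatabaseIdentity(userName);
        this.roles = roles ?? new string[0];
    }

    public IIdentity Identity { get; private set; }

    // Roles come from the application database, so IsInRole and
    // [Authorize(Roles = "...")] behave the same for external users
    // as they do for internal (Azure AD) users.
    public bool IsInRole(string role)
    {
        return Array.IndexOf(roles, role) >= 0;
    }
}

After validating the database credentials you would assign an instance of this class to HttpContext.Current.User (and Thread.CurrentPrincipal), typically in Application_PostAuthenticateRequest, while Azure AD users keep the principal that the federated sign-in sets up.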

Second, Microsoft’s Azure Active Directory support for custom applications is half-baked. You can create an application that supports Azure AD login in a few moments with Visual Studio, but it does not give you access to metadata such as the security groups to which the user belongs. I have posted about this in more detail here. There is an API of course, but it is currently a moving target: be prepared for some hassle if you try this.

Third, while Azure Blob Storage itself seems to work well, most of the resources for developers seem to have little idea of what a large file is. Since a primary use case for cloud storage is to cover scenarios where email attachments are not good enough, it seems to me that handling large files (by which I mean multiple GB) should be considered normal rather than exceptional. By way of mitigation, the API itself has been written with large files in mind, so it all works fine once you figure it out. More on this here.

What about Visual Studio? The experience has been good overall. Once you have configured the project correctly, you can update the site on Azure simply by hitting Publish and clicking Next a few times. There is some awkwardness over configuration for local debugging versus deployment. You probably want to connect to a local SQL Server and the Azure storage emulator when debugging, and the Azure hosted versions after publishing. Visual Studio has a Web.Debug.Config and a Web.Release.Config which lets you apply a transformation to your main Web.Config when publishing – though note that these do not have any effect when you simply run your project in Release mode. The correct usage is to set Web.Config to what you want for debugging, and apply the deployment configuration in Web.Release.Config; then it all works.

The piece that caused me most grief was a setting for <wsFederation>. When a user logs in with Azure AD, they get redirected to a Microsoft site to log in, and then back to the application. Applications have to be registered in Azure AD for this to work. There is some uncertainty though about whether the reply attribute, which specifies the redirection back to the app, needs to be set explicitly or not. In practice I found that it does need to be explicit, otherwise you get redirected to the deployed site even when debugging locally – not good.

I have mixed feelings about Team Foundation version control. It works, and I like having a web-based repository for my code. On the other hand, it is slow, and Visual Studio sulks from time to time and requires you to re-enter credentials (Microsoft seems to love making you do that). If you have a less than stellar internet connection (or even a good one), Visual Studio freezes from time to time since the source control stuff is not good at working in the background. It usually unfreezes eventually.

As an experiment, I set the project to require a successful build before check-in. The idea is that you cannot check in a broken build. However, this build has to take place on the server, not locally. So you try to check in, Visual Studio says a build is required, and prompts you to initiate it. You do so, and a build is queued. Some time later (5-10 minutes) the build completes and a dialog appears behind the IDE saying that you need to reconcile changes – even if there are none. Confusing.

What about Entity Framework? I have mixed feelings here too, and have posted separately on the subject. I used code-first: just create your classes and add them to your DbContext and all the data access code is handled for you, kind-of. It makes sense to use EF in an ASP.NET MVC project since the framework expects it, though it is not compulsory. I do miss the control you get from writing your own SQL though; and found myself using the SqlQuery method on occasion to recover some of that control.
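To make the point concrete, here is a generic sketch (not code from the application) of the code-first pattern plus a SqlQuery call:

using System.Data.Entity;
using System.Linq;

public class TransferFile
{
    public int Id { get; set; }
    public string FileName { get; set; }
    public long SizeBytes { get; set; }
}

// Code-first: declare your classes, expose them as DbSet properties,
// and EF generates the schema and the data access plumbing.
public class AppDbContext : DbContext
{
    public DbSet<TransferFile> TransferFiles { get; set; }
}

class Example
{
    static void Demo()
    {
        using (var db = new AppDbContext())
        {
            // Ordinary LINQ-to-Entities query
            var recent = db.TransferFiles.OrderByDescending(f => f.Id).Take(10).ToList();

            // SqlQuery recovers some of the control of hand-written SQL
            var big = db.Database.SqlQuery<TransferFile>(
                "SELECT * FROM TransferFiles WHERE SizeBytes > @p0", 1000000L).ToList();

            db.SaveChanges();
        }
    }
}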

Finally, a few notes on ASP.NET MVC. I mostly like it; the separation between Razor views (essentially HTML templates into which you pour your data at runtime) and the code which implements your business logic and data access is excellent. The code can get convoluted though. Have a look at this useful piece on the ASP.NET MVC WebGrid and this remark:

grid.Column("Name",
  format: @<text>@Html.ActionLink((string)item.Name,
  "Details", "Product", new { id = item.ProductId }, null)</text>),

The format parameter is actually a Func, but the Razor view engine hides that from us. But you’re free to pass a Func—for example, you could use a lambda expression.

The code works fine but is it natural and intuitive? Why, for example, do you have to cast the first argument to ActionLink to a string for it to work (I can confirm that it is necessary), and would you have worked this out without help?
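For what it is worth, the cast appears to be needed because item is dynamic, and extension methods such as Html.ActionLink cannot be dispatched with dynamic arguments; the cast lets the call bind statically. You can also skip the <text> syntax and pass the Func explicitly, along these lines (a sketch of the same column, not something I have verified against every WebGrid version):

grid.Column("Name",
    format: (Func<dynamic, object>)(item =>
        Html.ActionLink((string)item.Name,
            "Details", "Product", new { id = item.ProductId }, null)))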

I also hit a problem restyling the pages generated by Visual Studio, which use the Twitter Bootstrap framework. The problem is that bootstrap.css is a generated file and it does not make sense to edit it directly. Rather, you should edit some variables and use them as input to regenerate it. I came up with a solution which I posted on Stack Overflow, but there are no comments yet – perhaps this post will stimulate some, as I am not sure whether I found the best approach.

My sense is that while ASP.NET MVC is largely a thing of beauty, it has left behind more casual developers who want a quick and easy way to write business applications. Put another way, the framework is somewhat challenging for newcomers and that in turn affects the breadth of its adoption.

Developing on Azure and using Azure AD makes perfect sense for businesses which are using the Microsoft platform, especially if they use Office 365, and the level of integration on offer, together with the convenience of cloud hosting and anywhere access, is outstanding. There remain some issues with the maturity of the frameworks, ever-changing libraries, and poor or confusing documentation.

Since this area is strategic for Microsoft, I suggest that it would benefit the company to work hard on pulling it all together more effectively.

A note on Azure storage and downloading large files

I have written a simple ASP.NET MVC application for upload and download of files to/from Azure storage.

Getting large file upload to work was the first exercise, described here. That is working well; but what about download?

If your files in Azure storage are public, you can simply serve a URL to the file. If a file is not public though, you have a couple of choices:

1. Download the file under application control, by writing to Response.OutputStream or using a FileResult action.

2. Issue a Shared Access Signature (SAS) to the client which enables it to retrieve the file directly from Azure storage. The SAS is sent as an URL argument which tells Azure storage that the request is authorised. The browser downloads the file directly, so it makes no difference to your web application if the file is large.

Note that if you use the first option, it will not work with large files if you simply call DownloadToStream or similar:

container.GetBlockBlobReference(FileName).DownloadToStream(Response.OutputStream);

Why not? Well, the way this code works is that it downloads the large file to the web server, then sends it to the browser. What if your large file is 5GB? The browser will wait a long time for the first byte to be served (giving the user an unresponsive page); but before that happens, the web application will probably throw an exception because it does not like downloading such a large file.

This means the SAS option is a good one, though note that you have to specify an expiry time which could cause problems for users on a slow connection.
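For reference, issuing a time-limited read-only SAS with the Microsoft.WindowsAzure.Storage client library looks something like this (container, blob name and expiry are illustrative):

using System;
using Microsoft.WindowsAzure.Storage.Blob;

class SasExample
{
    // Returns a URL the browser can use to fetch the blob directly from
    // Azure storage, bypassing the web application entirely.
    static string GetDownloadUrl(CloudBlobContainer container, string fileName)
    {
        CloudBlockBlob blob = container.GetBlockBlobReference(fileName);

        var policy = new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Read,
            // Needs to be generous enough for users on slow connections
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(4)
        };

        // GetSharedAccessSignature returns the query string; append it to the blob URI
        return blob.Uri.AbsoluteUri + blob.GetSharedAccessSignature(policy);
    }
}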

Another option is to serve the file in chunks. Use CloudBlockBlob.DownloadRangeToStream to write to Response.OutputStream in a loop until the download is complete. Call Response.Flush() after each chunk to send the chunk to the browser immediately.
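A rough sketch of that loop in an MVC controller action, again using the Microsoft.WindowsAzure.Storage client (chunk size, connection string and container name are my own):

using System;
using System.Configuration;
using System.Web.Mvc;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public class DownloadController : Controller
{
    const int ChunkSize = 4 * 1024 * 1024; // 4MB per chunk; an arbitrary choice

    public void LargeFile(string fileName)
    {
        var account = CloudStorageAccount.Parse(
            ConfigurationManager.ConnectionStrings["StorageConnection"].ConnectionString);
        CloudBlobContainer container =
            account.CreateCloudBlobClient().GetContainerReference("uploads");
        CloudBlockBlob blob = container.GetBlockBlobReference(fileName);

        blob.FetchAttributes(); // populates blob.Properties.Length

        Response.ContentType = "application/octet-stream";
        Response.AddHeader("Content-Disposition", "attachment; filename=" + fileName);
        Response.AddHeader("Content-Length", blob.Properties.Length.ToString());

        long offset = 0;
        long remaining = blob.Properties.Length;
        while (remaining > 0 && Response.IsClientConnected)
        {
            long length = Math.Min(ChunkSize, remaining);
            blob.DownloadRangeToStream(Response.OutputStream, offset, length);
            Response.Flush(); // send this chunk to the browser immediately
            offset += length;
            remaining -= length;
        }
    }
}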

This gives the user a nice responsive download experience complete with a cancel option as provided by the browser, and does not crash the application on the server. It seems to me a reasonable approach if the web application is also hosted on Azure and therefore has a fast connection to Azure storage.

What about resuming a failed download? The SAS approach should work, as Azure supports it. You could also support this in your app with some additional work, since resuming means reading the Range header in a GET request. I have not tried doing this but you might find some clues here.

Developing an ASP.NET MVC app with Azure Active Directory: an ordeal

Regular readers will know that I am working on a simple (I thought) ASP.NET MVC application which is hosted on Azure and uses Azure Blob Storage.

So far so good; but since this business uses Office 365 it seemed to me logical to have users log in using Azure Active Directory (AD). Visual Studio 2013, with the latest update, has a nice wizard to set this up. Just complete the following dialog when starting your new project:

image

This worked fairly well, and users can log in successfully using Azure AD and their normal Office 365 credentials.

I love this level of integration and it seems to me key and strategic for the Microsoft platform. If an employee leaves, or changes role, just update Active Directory and all application access comes into line automatically, whether on premise or in the cloud.

The next stage though was to define some user types; to keep things simple, let us say we have an AppAdmin role for users with full access to the application, and an AppUser role for users with limited access. Other users in the organisation do not need access at all and should not be able to log in.

The obvious way to do this is with AD groups, but I was surprised to find that there is no easy way to discover to which groups an AD user belongs. The Azure AD integration which the wizard generates is only half done. Users can log in, and you can programmatically retrieve basic information including the first name, last name, User Principal Name and object ID, but nothing further.

Fair enough, I thought, there will be some libraries out there that fill the gap; and this is how the nightmare begins. The problem is that this is the cutting edge of .NET cloud development and is an area of rapid change. Yes there are samples out there, but each one (including the official ones on MSDN) seems to be written at a different time, with a different approach, with different .NET assembly dependencies, and varying levels of alpha/beta/experimental status.

The one common thread is that to get the AD group information you need to use the Graph API, a REST API for querying and even writing to Azure Active Directory. In January 2013, Microsoft identity expert Vittorio Bertocci (Principal Program Manager in the Windows Azure Active Directory team at Microsoft) wrote a helpful post about how to restore IsInRole() and [Authorize] in ASP.NET apps using Azure AD – exactly what I wanted to do. He describes essentially a manual approach, though he does make use of a library called Azure Authentication Library (AAL) which you can find on Nuget (the package manager for .NET libraries used by Visual Studio) described as a Beta.

That would probably work, but AAL is last year’s thing and you are meant to use ADAL (Active Directory Authentication Library) instead. ADAL is available in various versions ranging from 1.0.3 which is a finished release, to 2.6.2 which is an alpha release. Of course Bertocci has not updated his post so you can use the obsolete AAL beta if you dare, or use ADAL if you can figure out how to amend the code and which version is the best/safest to employ. Or you can write your own wrapper for the Graph API and bypass all the Nuget packages.

I searched for a better sample, but it gets worse. If you browse around MSDN you will probably come across this article along with this sample which is a Task Tracker application using Azure AD, though note the warnings:

NOTE: This sample is outdated. Its technology, methods, and/or user interface instructions have been replaced by newer features. To see an updated sample that builds a similar application, see WebApp-GraphAPI-DotNet.

Despite the warnings, the older sample is widely referenced in Microsoft posts like this one by Rick Anderson.

OK then, let’s look at the shiny new sample, even though it is less well documented. It is called WebApp-GraphAPI-DotNet and includes code to get the user profile, roles, contacts and groups from Azure AD using the latest Graph API client: Microsoft.Azure.ActiveDirectory.GraphClient. This replaces an older effort called the GraphHelper which you will find widely used elsewhere.

If you dig into this new sample though, you will find a ton of dependencies on pre-release assemblies. You are not just dealing with the Graph API, but also with OWIN (Open Web Interface for .NET), which seems to be Microsoft’s current direction for communication between web applications.

After messing around with Nuget packages and trying to get WebApp-GraphAPI-DotNet working I realised that I was not happy with all this preview code which is likely to break as further updates come along. Further, it does far more than I want. All I need is actually contained in Bertocci’s January 2013 post about getting back IsInRole.

I ended up patching together some code using the older GraphHelper (as found in the obsolete Task Tracker application) and it is working. I can now use IsInRole based on AD groups.
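The last step is simple enough once you have the group names, however you end up fetching them from the Graph API. A sketch (the Graph call itself is not shown; groupNames stands in for whatever your helper returns):

using System.Collections.Generic;
using System.Security.Claims;

public static class RoleClaimHelper
{
    // groupNames: display names of the user's Azure AD groups, as returned
    // by whatever Graph API client you end up using (not shown here).
    public static void AddGroupRoleClaims(ClaimsIdentity identity, IEnumerable<string> groupNames)
    {
        foreach (string group in groupNames)
        {
            // Each group becomes a role claim, which is what makes
            // User.IsInRole("AppAdmin") and [Authorize(Roles = "AppAdmin")] work.
            identity.AddClaim(new Claim(ClaimTypes.Role, group));
        }
    }
}

You would call this as sign-in completes, for example from a custom ClaimsAuthenticationManager, so that the role checks work throughout the application.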

This is a mess. It is a simple requirement and it should not be necessary to plough through all these complicated and conflicting documents and samples to achieve it.

Notes from the field: putting Azure Blob storage into practice

I rashly agreed to create a small web application that uploads files into Azure storage. Azure Blob storage is Microsoft’s equivalent to Amazon’s S3 (Simple Storage Service), a cloud service for storing files of up to 200GB.

File upload performance can be an issue, though if you want to test how fast your application can go, try it from an Azure VM: performance is fantastic, as you would expect from an Azure to Azure connection in the same region.

I am using ASP.NET MVC and thought a sample like this official one, Uploading large files using ASP.NET Web API and Azure Blob Storage, would be all I needed. It is a start, but the method used only works for small files. What it does is:

1. Receives a file via HTTP POST.

2. Once the file has been received by the web server, calls CloudBlob.UploadFile to upload the file to Azure blob storage.

What’s the problem? Leaving aside the fact that CloudBlob is deprecated (you are meant to use CloudBlockBlob), there are obvious problems with files that are more than a few MB in size. The expectation today is that users see some sort of progress bar when uploading, and a well-written application will be resistant to brief connection breaks. Many users have asymmetric internet connections (such as ADSL) with slow upload speeds; large files will take a long time and something can easily go wrong. The sample is not resilient at all.

Another issue is that web servers do not appreciate receiving huge files in one operation. Imagine you are uploading the ISO for a DVD, perhaps a 3GB file. The simple approach of posting the file and having the web server upload it to Azure blob storage introduces obvious strain and probably will not work, even if you do mess around with maxRequestLength and maxAllowedContentLength in ASP.NET and IIS. I would not mind so much if the sample were not called “Uploading large files”; the author perhaps has a different idea of what is a large file.

Worth noting too that one developer hit a bug with blobs greater than 5.5MB when uploaded over HTTPS, which most real-world businesses will require.

What then are you meant to do? The correct approach, as far as I can tell, is to send your large files in small chunks called blocks. These are uploaded to Azure using CloudBlockBlob.PutBlock. You identify each block with an ID string, and when all the blocks are uploaded, call CloudBlockBlob.PutBlockList with a list of IDs in the correct order.
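A bare-bones sketch of that sequence (block size and naming are my own; retries and error handling omitted):

using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using Microsoft.WindowsAzure.Storage.Blob;

public static class BlockUploader
{
    const int BlockSize = 4 * 1024 * 1024; // 4MB blocks; an arbitrary choice

    // Uploads a stream to a block blob in small blocks, then commits them.
    public static void UploadInBlocks(CloudBlockBlob blob, Stream source)
    {
        var blockIds = new List<string>();
        var buffer = new byte[BlockSize];
        int blockNumber = 0;
        int bytesRead;

        while ((bytesRead = source.Read(buffer, 0, BlockSize)) > 0)
        {
            // Block IDs must be Base64-encoded and the same length for every block
            string blockId = Convert.ToBase64String(
                Encoding.UTF8.GetBytes(blockNumber.ToString("d6")));

            using (var ms = new MemoryStream(buffer, 0, bytesRead))
            {
                blob.PutBlock(blockId, ms, null); // null: no MD5 check
            }

            blockIds.Add(blockId);
            blockNumber++;
        }

        // Nothing is visible until the block list is committed, in order
        blob.PutBlockList(blockIds);
    }
}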

This is the approach taken by Suprotim Agarwal in his example of uploading big files, which works and is a great deal better than the Microsoft sample. It even has a progress bar and some retry logic. I tried this approach, with a few tweaks. Using a 35MB file, I got about 80 KB/s with my ADSL broadband, a bit worse than the performance I usually get with FTP.

Can performance be improved? I wondered what benefit you get from uploading blocks in parallel. Azure Storage does not mind what order the blocks are uploaded. I adapted Agarwal’s sample to use multiple AJAX calls each uploading a block, experimenting with up to 8 simultaneous uploads from the browser.

The initial results were disappointing. Eventually I figured out that I was not actually achieving parallel uploads at all. The reason is that the application uses ASP.NET session state, and IIS will block multiple connections in the same session unless you mark your ASP.NET MVC controller class with a SessionState attribute specifying SessionStateBehavior.ReadOnly.
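In other words, something like this on the controller (the class name is illustrative; the attribute is the standard MVC one):

using System.Web.Mvc;
using System.Web.SessionState;

// Read-only session state means IIS no longer serialises requests from
// the same session, so the parallel AJAX uploads really do run in parallel.
[SessionState(SessionStateBehavior.ReadOnly)]
public class UploadController : Controller
{
    // upload actions here
}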

I fixed that, and now I do get multiple parallel uploads. Performance improved to around 105 KB/s, worthwhile though not dramatic.

What about using a Windows desktop application to upload large files? I was surprised to find little improvement. But can parallel uploading help here too? The answer is that it should happen anyway, handled by the .NET client library, according to this document:

If you are writing a block blob that is no more than 64 MB in size, you can upload it in its entirety with a single write operation. Storage clients default to a 32 MB maximum single block upload, settable using the SingleBlobUploadThresholdInBytes property. When a block blob upload is larger than the value in this property, storage clients break the file into blocks. You can set the number of threads used to upload the blocks in parallel using the ParallelOperationThreadCount property.
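If I read that correctly, the configuration is along these lines; a sketch using the Microsoft.WindowsAzure.Storage client, with arbitrary values:

using System;
using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class ParallelUploadExample
{
    static void Upload(CloudBlockBlob blob, string path)
    {
        var options = new BlobRequestOptions
        {
            // Above this size the client splits the upload into blocks...
            SingleBlobUploadThresholdInBytes = 16 * 1024 * 1024, // 16MB
            // ...and uploads this many blocks at a time
            ParallelOperationThreadCount = 4
        };

        // No progress event, but you can get a retry notification
        var context = new OperationContext();
        context.Retrying += (sender, args) => Console.WriteLine("Retrying...");

        using (var fs = File.OpenRead(path))
        {
            blob.UploadFromStream(fs, accessCondition: null, options: options, operationContext: context);
        }
    }
}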

It sounds as if there is little advantage in writing your own chunking code, except that if you just call the UploadFromFile or UploadFromStream methods of CloudBlockBlob, you do not get any progress notification event (though you can get a retry notification from an OperationContext object passed to the method). Therefore I looked around for a sample using parallel uploads, and found this one from Microsoft MVP Tyler Doerksen, using C#’s Parallel.For.

Be warned: it does not work! Doerksen’s approach is to read the entire file into memory (not great, but not as bad as on a web server), send it in chunks using CloudBlockBlob.PutBlock, adding each block ID to a collection at the same time, and then call CloudBlockBlob.PutBlockList. The reason it does not work is that the order of the loops in Parallel.For is indeterminate, so the block IDs are unlikely to be in the right order.

I fixed this, it tested OK, and then I decided to further improve it by reading each chunk from the file within the loop, rather than loading the entire file into memory. I then puzzled over why my code was broken. The files uploaded, but they were corrupt. I worked it out. In the following code, fs is a FileStream object:

fs.Position = x * blockLength;
bytesread = fs.Read(chunk, 0, currentLength);

Spot the problem? Since fs is a variable declared outside the loop, other threads were setting its position during the read operation, with random results. I fixed it like this:

lock (fs)
{
    fs.Position = x * blockLength;
    bytesread = fs.Read(chunk, 0, currentLength);
}

and the file corruption disappeared.
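Putting the two fixes together (block IDs derived from the loop index rather than from completion order, plus the lock around the seek-and-read), the heart of the parallel upload looks something like this; a simplified sketch rather than my exact code:

using System;
using System.IO;
using System.Text;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Blob;

public static class ParallelBlockUploader
{
    public static void Upload(CloudBlockBlob blob, string path, int blockLength = 4 * 1024 * 1024)
    {
        using (FileStream fs = File.OpenRead(path))
        {
            int blockCount = (int)Math.Ceiling((double)fs.Length / blockLength);

            // Block IDs are fixed up front from the index, so the committed
            // order does not depend on which thread finishes first.
            var blockIds = new string[blockCount];
            for (int i = 0; i < blockCount; i++)
                blockIds[i] = Convert.ToBase64String(Encoding.UTF8.GetBytes(i.ToString("d6")));

            Parallel.For(0, blockCount, new ParallelOptions { MaxDegreeOfParallelism = 4 }, x =>
            {
                int currentLength = (int)Math.Min((long)blockLength, fs.Length - (long)x * blockLength);
                var chunk = new byte[currentLength];

                // The FileStream is shared between threads, so the seek and
                // the read must happen as one atomic operation.
                lock (fs)
                {
                    fs.Position = (long)x * blockLength;
                    fs.Read(chunk, 0, currentLength);
                }

                using (var ms = new MemoryStream(chunk, 0, currentLength))
                {
                    blob.PutBlock(blockIds[x], ms, null);
                }
            });

            // Commit in index order
            blob.PutBlockList(blockIds);
        }
    }
}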

I am not sure why, but the manually coded parallel uploads seem to improve performance slightly but not dramatically, to around 100-105 KB/s, almost exactly what my ASP.NET MVC application achieves over my broadband connection.

image

There is another approach worth mentioning. It is possible to bypass the web server and upload directly from the browser to Azure storage. To do this, you need to allow cross-origin resource sharing (CORS) as explained here. You also need to issue a Shared Access Signature, a temporary key that allows read-write access to Azure storage. A guy called Blair Chen seems to have this all figured out, as you can see from his Azure speed test and jazure JavaScript library, which makes it easy to upload a blob from the browser.

I was contemplating going that route, but it seems that performance is no better (judging by the Test Upload Big Files section of Chen’s speed test), so I should probably be content with the parallel JavaScript upload solution, which avoids fiddling with CORS.

Overall, has my experience with the Blob storage API been good? I have not found any issues with the service itself so far, but the documentation and samples could be better. This page should be the jumping off point for all you need to know for a basic application like mine, but I did not find it easy to find good samples or documentation for what I thought would be a common scenario, uploading large files with ASP.NET MVC.

Update: since writing this post I have come across this post by Rob Gillen which addresses the performance issue in detail (and links to working Parallel.For code); however I suspect that since the post is four years old the conclusions are no longer valid, because of improvements to the Azure storage client library.

Having it both ways: can Microsoft equally back Windows Phone and “Any device”?

I attended an event in London which was a kind-of UK launch for Windows Phone 8.1. The first Lumia device running 8.1, the Lumia 630, is now on sale, though this was not the main focus. It was more about asking businesses to take another look at Windows Phone (and Windows tablets), following improvements Microsoft has made. The company is particularly pleased with a new white paper from MobileIron, a well-known company in mobile device management, praising the new security and manageability features:

Windows Phone 8 did not meet the stringent policies some enterprises required for protecting corporate data and resources. The release of Windows Phone 8.1 changes the game. Microsoft is delivering a rich new feature-set for business users, and providing IT departments with the compliance and security they require. These new security and management features, called the Enterprise Feature Pack, are included as a core component of Windows Phone 8.1. When combined with an enterprise mobility management (EMM) platform, these capabilities make it much easier for enterprises to adopt the Windows Phone platform.

Fair enough, though from what I can tell Windows Phone is still struggling to get the momentum it needs. Too many companies perceive that if they support iOS and Android then that is it, job done, as evidenced by this advertisement I saw recently. This in turn dampens sales. It is an unfortunate position to be in, particularly given the good work Microsoft (and Nokia) has done on the phone OS itself. I prefer the Windows Phone user interface to that in Android, but still need an Android device in order to try out new apps.

This could change if Microsoft can continue gradually bumping up its market share, but it is tough. The wider company is now side-stepping the problem by focusing on its strengths in Office, Active Directory and Office 365, and offering first-class support for these on iOS and Android, as evidenced by the excellent Office for iPad launched earlier this year.

There is a dilemma here though. Some Windows Phone users choose the phone because they feel it will work best with Microsoft’s business platform. Could the “any device” policy end up undermining Microsoft’s efforts to promote Windows Phone?

I put this to Chris Weber, Microsoft’s Corporate Vice President of Mobile Device Sales, who has come to the company from Nokia (before which he was at Microsoft, so a true Windows veteran).

image

From a business perspective, providing cloud services, management, security, it is a multi-platform world. It is a great business decision for Microsoft to be multi-platform. Customers demand it as well. That doesn’t mean we don’t want to create the most compelling platform and set of devices that bring Windows to life. I think the cross-platform thing is a great story … but the benefit of us [Nokia and Microsoft] coming together is now we have hardware, software and services that can be integrated in a totally different way, and we’re one of the few players that have all those components. The level of integration is much greater on the Windows platform. For example, Office is built in, you don’t have to go to a store and download it. The Lync client is built into the calendar. The email client, being able to have rights protection. The mail client itself is the best of any of them. The ability to access a SharePoint site across the firewall without a VPN connection, unique to Windows Phone.

Then we also have to win the end user. We have to win IT and those requirements, but you also have to get end users excited. Things that you see in 8.1, like Cortana, there’s a huge benefit there. And we’re bringing that across every price point.

Fair points; yet currently the iPad has a better touch-friendly Office than Windows tablets or Windows Phone; and Windows phone users have frustrations where the integration falls short. One remarkable thing, for example, is that there is no way to use a shared Exchange or SharePoint calendar on Windows Phone other than in the browser, so no integration with the built-in calendar or offline support.

What Weber describes, near-perfect integration between Windows mobile devices and Microsoft’s server applications, should be the case though; making this even better should be a high priority for CEO Satya Nadella’s new Microsoft.

Weber makes the bold claim that he can convert any user to Windows Phone, but says the challenge is to make this happen at retail level, when the customer wanders in looking for a smartphone:

If you give me fifteen minutes, I think I can convince any iPhone or Android user to move to Windows Phone. We have to do this not in fifteen minutes but in probably a minute and a half, at retail, with people who are selling multiple devices and are used to selling the competitor platform more than us.

Focusing on enterprise integration is in my view long overdue, and a few large enterprise adoptions would give Windows Phone a significant boost. At retail though, my guess is that Microsoft’s main hope is what Nokia did so well: delivering a good smartphone experience in budget devices – the “every price point” to which Weber refers.

Visual Studio “14” announced, preview available with “Roslyn” open source compiler

Microsoft’s Soma Somasegar has announced the next version of Visual Studio, currently known as Visual Studio 14, but likely to be fully released in 2015 (and, I am guessing, likely to be called Visual Studio 2015).

This is a major release. It includes a new VB and C# compiler which is itself written in managed code, codenamed Roslyn. The open source Roslyn project provides new APIs that enable more powerful IDE features. Visual Basic is getting refactoring support for the first time.

The preview also includes a major update to ASP.NET that unifies ASP.NET MVC and the ASP.NET Web API, and has a new deployment model and developer experience:

Thanks to the Roslyn compiler, if you change ".cs” files or project.json file and want to see the change in the browser, you don’t need to build the project any more. Just refresh the browser.

There is no IIS express, nor IIS involved when you run from the command line. It means that you can publish your website to a USB drive, and run it by double clicking the web.cmd file!

On the C++ side, there is improved C++ 11 support and more features from C++ 14:

The Visual Studio "14" CTP includes support for user-defined literals, noexcept, alignof and alignas, and inheriting constructors from C++11, generalized lambda capture, auto function return type deduction, and generic lambdas from C++14, as well as many more new C++ features.

says Somasegar. There is also a refactored C Runtime (CRT):

msvcr140.dll no longer exists. It is replaced by a trio of DLLs: vcruntime140.dll, appcrt140.dll, and desktopcrt140.dll.

If you install the CTP (mine is downloading) use a spare machine or VM; it is an early preview that does not work side-by-side with other versions and the only uninstall may be to flatten the machine:

Installing a CTP release will place a computer in an unsupported state. For that reason, we recommend only installing CTP releases in a virtual machine, or on a computer that is available for reformatting.

Xamarin 3.0 brings iOS visual design to Visual Studio, cross-platform XAML, F#, NuGet and more

Xamarin has announced the third version of its cross-platform tools, which use C# and .NET to target multiple platforms, including iOS, Android and Mac OS X.

Xamarin 3.0 is a big release. In summary:

Xamarin Designer for iOS

Using a visual designer for iOS Storyboard projects, you can create and modify a GUI in both Visual Studio and Xamarin Studio (Xamarin’s own IDE). The designer uses the native Storyboard format, so you can open and modify existing files created in Xcode on the Mac. The technology here is amazing, since your iOS controls are rendered remotely on a Mac and transmitted to the designer on Windows. See here for a quick hands-on.

Xamarin Forms

Xamarin has created the cross-platform GUI framework that it said it did not believe in. It is based on XAML though not compatible with Microsoft’s existing XAML implementations. There is no visual designer yet.

Why has Xamarin changed its mind? It was pressure from enterprise customers, from what I heard from CEO Nat Friedman. They want to make internal mobile apps with many forms, and do not want to rewrite the GUI code for every mobile platform they support.

Friedman made the point that Xamarin Forms still render as native controls. There is no drawing code in Xamarin Forms.

“The challenge for us in building Xamarin Forms was to give people enhanced productivity without compromising the native approach. The mix and match approach, where you can mix in native code at any point, you can get a handle for the native control, we think we’ve got the right compromise. And we’re not forcing Xamarin Forms on you, this is just an option,”

he told me.

Again, there is a quick hands-on here.

F# support

F# is now officially supported in Xamarin projects. This brings functional programming to Xamarin, and will be warmly welcomed by the small but enthusiastic F# community (including, as I understand it, key .NET users in the financial world).

Portable Class Libraries

Xamarin now supports Microsoft’s Portable Class Libraries, which let you state what targets you want to support, and have Visual Studio ensure that you write compatible code. This also means that library vendors can easily support Xamarin if they choose to do so.

NuGet Packages

The NuGet package manager has transformed the business of getting hold of new libraries for use in Visual Studio. Now you can use it with Xamarin in both Visual Studio and Xamarin Studio.

Microsoft partnership

Perhaps the most interesting part of my interview with Nat Friedman was what he said about the company’s partnership with Microsoft. Apparently this is now close both from a technical perspective, and for business, with Microsoft inviting Xamarin for briefings with key customers.

Hands on with Xamarin 3.0: a cross-platform breakthrough for Visual Studio

Today Xamarin announced version 3.0 of its cross-platform mobile development tools, which let you target Android and iOS with C# and .NET. I have been trying a late beta preview.

In order to use Xamarin 3.0 with iOS support you do need a Mac. However, you can do essentially all of your development in Visual Studio, and just use the Mac for debugging.

To get started, I installed Xamarin 3.0 on both Windows (with Visual Studio 2013 installed) and on a Mac Mini on the same network.

image

Unfortunately I was not able to sit back and relax. I got an error installing Xamarin Studio, following which the installer would not proceed further. My solution was to download the full DMG (Mac virtual disk image) for Xamarin Studio and run that separately. This worked, and I was able to complete the install with the combined installer.

When you start a Visual Studio iOS project, you are prompted to pair with a Mac. To do this, you run a utility on the Mac called Xamarin.iOS Build Host, which generates a PIN. You enter the PIN in Visual Studio and then pairing is active.

image

Once paired, you can create or open iOS Storyboard projects in Visual Studio, and use Xamarin’s amazing visual designer.

image

Please click this image to open it full-size. What you are seeing is a native iOS Storyboard file open in Visual Studio 2013 and rendering the iOS controls. On the left is a palette of visual components I can add to the Storyboard. On the right is the normal Visual Studio solution explorer and property inspector.

The way this works, according to what Xamarin CEO Nat Friedman told me, is that the controls are rendered using the iOS simulator on the Mac, and then transmitted to the Windows designer. Thus, what you see is exactly what the simulator will render at runtime. Friedman says it is better than the Xcode designer.

“The way we do event handling is far more intuitive than Xcode. It supports the new iOS 7 auto-layout feature. It allows you to live preview custom controls. Instead of getting a grey rectangle you can see it live rendered inside the canvas. We use the iOS native format for Storyboard files so you can open existing Storyboard files and edit them.”

I made a trivial change to the project, configured the project to debug on the iOS simulator, and hit Start. On the Mac side, the app opened in the simulator. On the Windows side, I have breakpoint debugging.

image

Now, I will not pretend that everything ran smoothly in the short time I have had the preview. I have had problems with the pairing after switching projects in Visual Studio. I also had to quit and restart the iOS Simulator in order to get rendering working again. This is an amazing experience though, combining remote debugging with a visual designer on Visual Studio in Windows that remote-renders design-time controls.

Still, time to look at another key new feature in Xamarin 3: Xamarin Forms. This is none other than our old friend XAML, implemented for iOS and Android. The Mono team has some experience implementing XAML on Linux, thanks to the Moonlight project which did Silverlight on Linux, but this is rather different. Xamarin Forms does not do any custom drawing, but wraps native controls. In other words, it is like the Eclipse SWT approach for Java, and not like the Swing approach which does its own drawing. This is in keeping with Xamarin’s philosophy of keeping apps as native as possible, even though the very existence of a cross-platform GUI framework is something of a compromise.

I have not had long to play with this. I did create a new Xamarin Forms project, and copy a few lines of XAML from a sample into a shared XAML file. Note that Xamarin Forms uses Shared Projects in Visual Studio, the same approach used by Microsoft’s Universal Apps. However, Xamarin Forms apps are NOT Universal Apps, since they do not support Windows 8 (yet).

image 

In a Shared Project, you have some code that is shared, and other code that is target-specific. By default hardly any code is shared, but you can move code to the shared node, or create new items there. I created XamFormsExample.xaml in the shared node, and amended App.cs so that it loads automatically. Then I ran the project in the Android emulator.

image

I was also able to run this on iOS using the remote connection.

I noticed a few things about the XAML. The namespace is:

xmlns="http://xamarin.com/schemas/2014/forms"
xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"

I have not seen this before. Microsoft’s XAML always seems to have a “2006” namespace. For example, this is for a Universal App:

xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"

However, XAML 2009 does exist and apparently can be used in limited circumstances:

In WPF, you can use XAML 2009 features, but only for XAML that is not WPF markup-compiled. Markup-compiled XAML and the BAML form of XAML do not currently support the XAML 2009 language keywords and features.

It’s odd, because of course Xamarin’s XAML is cut-down compared to Microsoft’s XAML. That said, I am not sure of the exact specification of XAML in Xamarin Forms. I have a draft reference but it is incomplete. I am not sure that styles are supported, which would be a major omission. However you do get layout managers including AbsoluteLayout, Grid, RelativeLayout and StackLayout. You also get controls (called Views) including Button, DatePicker, Editor, Entry (single line editor), Image, Label, ListView, OpenGLView, ProgressBar, SearchBar, Slider, TableView and WebView.
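To give a flavour of the API, here is a minimal page using a couple of the Views listed above, written in C# rather than XAML; a sketch of my understanding rather than official sample code:

using Xamarin.Forms;

// A trivial Xamarin.Forms page: a StackLayout containing a Label and a Button.
public class HelloPage : ContentPage
{
    public HelloPage()
    {
        var label = new Label { Text = "Hello from Xamarin.Forms" };
        var button = new Button { Text = "Click me" };
        button.Clicked += (sender, e) => label.Text = "Clicked";

        Content = new StackLayout
        {
            Padding = new Thickness(20),
            Children = { label, button }
        };
    }
}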

Xamarin is not making any claims for compatibility in its XAML implementation. There is no visual designer, and you cannot port from existing XAML code. The commitment to wrapping native controls may limit prospects for compatibility. However, Friedman did say that Xamarin hopes to support Universal Apps, ie. to run on Windows 8 as well as Windows Phone, iOS and Android. He said:

I think it is the right strategy, and if it does take off, which I think it will, we will support it.

Friedman says the partnership with Microsoft (which begin in November 2013) is now close, and it would be reasonable to assume that greater compatibility with Microsoft XAML is a future goal. Note that Xamarin 3 also supports Portable Class Libraries, so on the non-visual side sharing code with Microsoft projects should be straightforward.

Personally I think both Xamarin Forms and the iOS visual designer (which, note, does NOT support Xamarin Forms) are significant features. The iOS designer matters because you can now do almost all of your cross-platform mobile development within Visual Studio, even if you want to follow the old Xamarin model of a different, native user interface for each platform; and Xamarin Forms matters because it enables a new level of code sharing for Xamarin projects, as well as making XAML into a GUI language that you can use across all the most popular platforms. Note that I do have reservations about XAML; but it does tick the boxes for scaling to multiple form factors and for enormous flexibility.

Microsoft’s new open source direction for C# and .NET (and native compilation too): Anders Hejlsberg explains

At the April 2014 Build conference Microsoft made some far-reaching announcements about its .NET platform and the C# programming language. Yes, there was talk of C# 6.0, the next version, but the real changes are more profound. Specifically:

C# and Visual Basic have a new compiler, itself written in C#, code-named Roslyn. Roslyn is not just a new compiler; Microsoft now calls it the “.NET Compiler Platform”.

There is a new commitment to open source for .NET projects. Microsoft formed the .NET Foundation to oversee existing open source projects, including  ASP.NET, Entity Framework, the Azure .NET SDK, and now Roslyn as well. “When it comes to development projects we are going to operate from the premise that open source is the default. Unless there are reasons why it does not work,” said C# lead architect Anders Hejlsberg.

image

Note that open source does not mean chaos. It does mean that you can fork the project if you want – the Roslyn license is Apache 2.0 – but getting Microsoft to accept new features you have contributed will not be trivial. Hejlsberg makes the point that language features are easy to add, but impossible to take away, so extreme care is necessary.

Microsoft is also supporting cross-platform C# to a greater extent than it has done in the past. The most obvious sign of this is its cooperation with Xamarin, which provides C# compilers for iOS and Android. Xamarin’s Miguel de Icaza got a top billing at Build, and is also involved in the .NET Foundation.

There is more though. The idea of standardised C# is re-emerging:

“The last ECMA standard was C# 2.0. There wasn’t a lot of demand for it, but that demand has recently risen and we have re engaged with the ECMA community to produce a standard for C# 5.0,” said Hejlsberg.

This bears some unpacking. Why was there little demand for ECMA C#? Partly, I would guess, from the assumption that C# was firmly in Microsoft’s grip, with Java the obvious choice for cross-platform development. The main interest was from the Mono folk (Miguel de Icaza again), who implemented .NET for Linux and the Mac with some success, but nothing to disturb Java’s momentum.

The focus now though is on mobile, and interest in C# is stronger, mainly from Microsoft-platform developers reaching beyond Windows. There is also Unity, which uses C# as a scripting language for developing games for multiple platforms, including iOS, Android, Windows, Mac, Linux, Xbox, PS3 and Wii – PS4 is coming very soon.

Microsoft has now consciously embraced multiple platforms, as evidenced by Office for iOS as well as the Xamarin collaboration. “We want C# developers to build great applications across different form factors and different device platforms,” said Jay Schmelzer, Director of Program Management for Visual Studio.

You might observe that this position has been forced on the company by the rise of iOS and Android, a view which likely has some merit, but the impact it has on C# and .NET itself is still real.

I asked Hejlsberg to unpack the difference between the Roslyn project and C# 6.0, bearing in mind that both are covered on the Roslyn open source site; you can see the current status of C# 6.0 and the next Visual Basic here.

“Roslyn is the name for the project that encompasses the new C# compiler and the new VB compiler and the new language services that they share. C# 6.0 is the name of the next version of the C# language, which will have a specification and which will have an implementation. We are implementing C# 6.0 on the Roslyn platform. We are not going to continue to evolve our old C++ C# compiler – the C# compiler was originally written in C++ and has been evolved up through C# 5.0. That is where we are going to retire that code base, and going forward versions of C# will be built on Roslyn and therefore will be built open source. Unlike previously where, boom, C# came down from the sky with a set of features, it is going to happen more organically now, people will submit pull requests, open up issues, and you will see us work on these features. You will see them from inception to fruition.

“The C# team, the Roslyn team, the VB team, their day to day workplace now is the open source site. That is where they check in code. It is a community in the making.”

Even that is not all. At Build, Microsoft also announced .NET Native, which is a native compiler for C# and Visual Basic, now in preview for x64 Store apps. What is the difference between .NET Native and the existing NGen native compiler for .NET? Over to Hejlsberg:

NGen is the native feature that we currently support. NGen is really, “I’m going to JIT [Just in time compile] your code and then snapshot all the data structures and dump them in a file so that I can quickly rebuild that file later when you run this particular application”. But it is the same code generator and all the same features, and JIT is still there. NGen is really a way to pre-cache the JIT output and therefore get better performance, but it adds to the size of your app because you still have all the assemblies and metadata and then the NGen image as well.

.NET Native is a completely different approach. Instead of the JIT we use the backend from the C++ compiler. You can think of it as a linker that takes as input assemblies, and as output produces a PE [Portable Executable] executable. In the process this linker or code generator will analyse all the IL [Intermediate Language] that goes into the application and it will apply a thing known as tree-shaking where it eliminates all of the code that will never execute based on known execution roots.

In other words, the public static main of your program and also whatever pieces of your app that you designate as reflectable, they also become roots. Based on that we produce an optimised exe, and into that exe we link the pieces of the framework that you are referencing. We link in a garbage collector [GC], and it looks to the operating system just like an exe. When you run it, it runs a local GC in there and it is as efficient really as C++ code.

There are some restrictions associated with .NET Native, mainly that you can’t just willy-nilly reflect on the whole world. You can’t just generate new code and ask for that to be jitted, because there may not be a JIT compiler. We are considering allowing you to link in a JIT compiler, but there are certain execution environments which don’t permit jitting, like Xbox. If you use reflection in your app you have to tell us what to keep reflectable, because otherwise we will optimise it away.

According to Schmelzer:

The preview out today is scoped to Store app x64 and ARM. We haven’t run into any technical limitation that shows it can’t be done across the breadth, it is just a matter of request and need.

Open source, native code compilation, and an innovative compiler: it adds up to huge changes for C# and .NET, positive ones as far as I can tell.

The Xamarin connection is intriguing though. Developers in general admire the technology as far as I can tell, but it is expensive, and paying out for a Xamarin subscription on top of maybe MSDN for Visual Studio is too much for some smaller organisations and does not encourage experimentation. Might Microsoft acquire Xamarin and build Visual Studio into an IDE targeting all the major mobile platforms, but with special hooks to Azure-hosted services?

That prospect makes sense to me, though it would be a shame if the energetic Xamarin culture became bogged down in big-company bureaucracy. Currently though: no news to report.