Tag Archives: c++

What the Blazor! After Silverlight, .NET in the browser reappears by another route

Silverlight, Microsoft’s browser plug-in which included a cut-down .NET runtime, once seemed full of promise for developers looking for an end-to-end .NET solution, cross-platform on Windows and Mac, and with support for “out of browser” applications for a native-like experience.

Silverlight was killed by various factors, including the industry’s rejection of old-style browser plug-ins, and warring factions at Microsoft which resulted in Silverlight on Windows Phone but not on Windows 8. The Windows 8 model won, becoming the Universal Windows Platform (UWP) in Windows 10, but this is quite a different thing with no cross-platform support. Then there is Xamarin, which is cross-platform .NET, and one day perhaps Microsoft will figure out what to do about having both UWP and Xamarin.

Yesterday though Microsoft announced (though it was already known to those paying attention) Blazor, an experimental project for hosting the .NET Runtime in the browser via WebAssembly. The name derives from “Browser + Razor”, Razor being the syntax used by ASP.NET to combine HTML and C# in a web application. C# in Razor executes on the server, whereas in Blazor it executes on the client.

Blazor is enabled by work the Xamarin team has done to compile the Mono runtime to WebAssembly. Although a .NET runtime sounds like a relatively large download, the team is hoping that a combination of smart linking (to strip out unnecessary code in both applications and the runtime) with caching and HTTP compression will make the size acceptable.

This post by Steve Sanderson is a good technical overview. Some key points:

– You can run applications either as interpreted .NET IL (intermediate language) or pre-compiled

– Blazor is an SPA (Single Page Application) framework with solutions for routing, state management, dependency injection, unit testing and more

– UI components use HTML and CSS

– There will be a browser API which you can call from C# code

– You will be able to interop with JavaScript libraries

– Microsoft will provide ASP.NET libraries that integrate with Blazor, but you can use Blazor with any server-side technology
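
To give a flavour of what Blazor code looks like, here is the canonical counter component. Note this is the syntax as it settled in later, released versions of Blazor; the experimental preview differed in details, so treat it as illustrative:

@page "/counter"

<h1>Counter</h1>
<p>Current count: @currentCount</p>
<button @onclick="IncrementCount">Click me</button>

@code {
    private int currentCount = 0;
    private void IncrementCount() => currentCount++;
}

The C# in the @code block executes in the browser, not on the server.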

What version of .NET will be supported? This is where it gets messy. Sanderson says Blazor will support .NET Standard 2.0 or higher, but not completely, in that some functions will throw a PlatformNotSupportedException. The reason is that not all functions make sense in the context of a Blazor application.
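
For example (my illustration, not from the Blazor documentation), an API that cannot work in the browser sandbox still compiles, so defensive code looks like this:

using System;
using System.IO;

class SandboxDemo
{
    static void Main()
    {
        try
        {
            // Watching the file system makes no sense inside a browser sandbox,
            // so a browser-hosted runtime can reasonably throw here. The exact
            // set of unsupported APIs is Blazor's to define; this is illustrative.
            var watcher = new FileSystemWatcher(@"C:\temp");
            Console.WriteLine("Watching " + watcher.Path);
        }
        catch (PlatformNotSupportedException ex)
        {
            Console.WriteLine("Not supported here: " + ex.Message);
        }
    }
}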

Blazor sounds promising, if developers can get past the Silverlight history – though the demo application on Azure currently gives me a 403 error, so there is this video from NDC Oslo instead.

The other question is whether Blazor has a future or will join Silverlight and other failed attempts to create a new application platform that works. Microsoft demands much patience from its .NET community.

C# and .NET: good news and bad as Python rises

Two pieces of .NET news recently:

Microsoft has published a .NET Core 2.1 roadmap and says:

We intend to start shipping .NET Core 2.1 previews on a monthly basis starting this month, leading to a final release in the first half of 2018.

.NET Core is the cross-platform, open source implementation of the .NET Framework. It provides a future for C# and .NET even if Windows declines.

Then again, StackOverflow has just published a report on the most sought-after programming languages in the UK and Ireland, based on the tags on job advertisements on its site. C# has declined to fourth place, now below Python and attracting only half the demand of JavaScript:


To be fair, this is more about increased demand for Python, probably driven by interest in AI, rather than a decline in C#. If you look at traffic on the StackOverflow site, C# is steady, but Python is growing fast:


The point that interests me, though, is the extent to which Microsoft can establish .NET Core beyond the Microsoft-platform community. Personally I like C# and would like to see it have a strong future.

There is plenty of goodness in .NET Core. Performance seems to be better in many cases, and cross-platform support is a big advantage.

That said, there is plenty of confusion too. Microsoft has three major implementations of .NET: the .NET Framework for Windows, Xamarin/Mono for cross-platform, and .NET Core for, umm, cross-platform. If you want cross-platform ASP.NET you will use .NET Core. If you want cross-platform Windows/iOS/macOS/Android, then it’s Xamarin/Mono.

The official line is that by targeting a specification (a version of .NET Standard), you can get cross-platform irrespective of the implementation. It’s still rather opaque:

The specification is not singular, but an incrementally growing and linearly versioned set of APIs. The first version of the standard establishes a baseline set of APIs. Subsequent versions add APIs and inherit APIs defined by previous versions. There is no established provision for removing APIs from the standard.

.NET Standard is not specific to any one .NET implementation, nor does it match the versioning scheme of any of those runtimes.

APIs added to any of the implementations (such as, .NET Framework, .NET Core and Mono) can be considered as candidates to add to the specification, particularly if they are thought to be fundamental in nature.
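
To make the targeting idea concrete, here is a minimal sketch of my own: a library compiled against .NET Standard 2.0 can be referenced from .NET Framework 4.6.1+, .NET Core 2.0+ and Mono 5.4+, because all of them implement that version of the standard.

// Greeter.cs, in a class library whose csproj declares
// <TargetFramework>netstandard2.0</TargetFramework>
using System.Runtime.InteropServices;

public static class Greeter
{
    // RuntimeInformation is part of .NET Standard, so this reports whichever
    // implementation actually loaded the assembly.
    public static string Describe() =>
        "Running on: " + RuntimeInformation.FrameworkDescription;
}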

Microsoft also says that plenty of code is shared between the various implementations. True, but it still strikes me that having both Xamarin/Mono and .NET Core is one cross-platform implementation too many.

Time for another look at “pure .NET”

Back in the Nineties there was a lot of fuss about “pure Java”. This meant Java code without any native code invocations that tie the application to a specific operating system.

It is possible to write cross-platform Java code that invokes native code, but it adds to the complexity. If it is an operating system API, you need conditional code so that the right API is called on each platform. If it is a custom library, it will have to be compiled separately for each platform.

Over on the Microsoft .NET side, developers have tended to take a more casual approach. After all, in the great majority of cases the code would only ever run on Windows. Further, Microsoft tended to steer developers towards Windows-only dependencies like SQL Server. After all, that is the value of owning a developer platform.

Times change. Microsoft has got the cross-platform bug, with its business strategy based on attracting businesses to its cloud properties (Office 365 and Azure) rather than Windows. The .NET Framework has been forked to create .NET Core, which runs on Mac and Linux as well as Windows. SQL Server is coming to Linux.

Another issue is porting applications from 32-bit to 64-bit, as I was reminded recently when migrating some ASP.NET applications to a new site. If your .NET code avoids P/Invoke (Platform Invoke) then you can compile for “Any CPU” and 64-bit will just work. If you used P/Invoke and want to support both 32-bit and 64-bit, it requires more care. IntPtr, used frequently in P/Invoke calls, is a different size on each. If you have custom native libraries, you need to compile them separately for each platform. The lazy solution is always to run as 32-bit, but that is a shame.
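
A short sketch of why (the Windows API chosen is just an example): IntPtr adapts to the bitness of the process, which is what makes “Any CPU” work, but any code that assumes a fixed pointer size, or a native DLL built for the wrong architecture, will break.

using System;
using System.Runtime.InteropServices;

static class Program
{
    // A typical P/Invoke declaration. The IntPtr handle is 4 bytes in a 32-bit
    // process and 8 bytes in a 64-bit one; the marshaller adapts automatically.
    [DllImport("user32.dll")]
    static extern int GetWindowTextLength(IntPtr hWnd);

    static void Main()
    {
        Console.WriteLine("Process is {0}-bit; IntPtr.Size = {1}",
            Environment.Is64BitProcess ? 64 : 32, IntPtr.Size);
    }
}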

What this means is that P/Invoke should only be used as a last resort. Arguably this has always been true, but the reasons are stronger today.

This is also an issue for libraries and components intended for general use, whether open source or commercial. It is early days for .NET Core support, but any native code dependencies will be a problem.

Breaking the P/Invoke habit will not be easy but “Pure .NET” is the way to go whenever possible.

Xamarin announces large round of funding, plans international expansion

It is a case of “right time, right place” for Xamarin, as it scoops up Windows developers who need either to transition to iOS and Android, or to add mobile support to existing applications. You can also port applications to the Mac with its cross-platform development framework based on C#; no bad thing as Mac sales continue to boom.


Xamarin also fits with Microsoft’s new strategy, as I understand it, which is to provide strong support for iOS and Android for applications such as Microsoft Office, and services such as those hosted on Microsoft Azure.

Now the company has announced an additional $54 million of funding, which CEO Nat Friedman tells me is “the largest round of financing achieved by any mobile platform company ever”.

The financing comes from “new and existing investors, including Lead Edge Capital, Insight Venture Partners, Charles River Ventures, Ignition Partners, and Floodgate.”

What will the money be spent on? “Two things,” says Friedman. “We’re planning to expand our sales and marketing into Europe. We’re opening a sales office in London in the Fall. We did a roadshow with Microsoft in Europe and it was extremely successful. Second, we’re going to invest in improving the quality of our platforms.”

Friedman notes that mobile should not be considered a development niche. “Our view is that in the future all software will be mobile software in some way or another, when you build an application it will have to have some kind of mobile surface area.”

A few other points to note. One is that Xamarin Forms, recently introduced, has been a big hit with developers. “The Xamarin Forms forum has been our most popular forum,” says Friedman. “We’ve been really surprised.”

The company used to promote the idea of avoiding cross-platform code for the user interface, but then introduced Xamarin Forms as a cross-platform GUI framework, arguing that because it uses only native controls, it avoids the main drawbacks of the idea.

Some of the funding then will go into improving Xamarin Forms and tools to work with the framework.

Another key area is Visual Studio integration. The acquisition of the Visual Studio integration team from Clarius Consulting, in May 2014, is also significant here, since Clarius had strong expertise in this area.

Might Microsoft try to acquire Xamarin? It is an interesting question, and one which Friedman is not in a position to discuss. I am not a financial expert, but would guess that Xamarin’s expansion increases its ability to remain independent, though investors may be hoping to reap the rewards of an acquisition; who knows?

RemObjects previews native Apple Mac IDE for C#, .NET, Oxygene

RemObjects is previewing a new native Mac IDE for its Oxygene and C# compilers. Oxygene is a Delphi-like language (in other words, a variant of Object Pascal) which targets iOS, Mac, Android, Windows Phone and Windows. RemObjects C# shares the same targets. Both can compile to .NET assemblies for Windows, or to Mono for cross-platform .NET, or to a Mac or iOS executable (using the LLVM compiler), or to Java bytecode for the Android Dalvik runtime. You can get both Oxygene and RemObjects C# bundled in a product called Elements.

In the past, RemObjects has used Visual Studio as its IDE. While this is a natural choice for Windows users, much development today is done on the Mac. Requiring Mac users to develop in a Windows Virtual Machine adds friction, so RemObjects is now working on a native IDE for the Mac codenamed Fire.


I gave Fire the briefest of looks. Here are some of the options for a new .NET application:


Note the appearance of ASP.NET MVC 4, and even Silverlight.

Here are the options for a new Cocoa application:


If you are developing for Cocoa, you can edit the resource file in Apple’s Xcode and use it in your application. I started a new C# Cocoa app, made a few changes and then ran it from the IDE:


I imagine Microsoft will be keeping an eye on tools like this – if it is not, it should be – since they fit with the strategy of supporting Microsoft services on multiple devices. Visual Studio is a fine tool, but if Microsoft is serious about cross-platform, it needs strong Mac-native development tools. Xamarin came up with Xamarin Studio, which is cross-platform for Windows and Mac, but the RemObjects approach also looks worth investigating.

PS The first release of RemObjects C# lacked full generics support, a gap for which Xamarin and Mono founder Miguel de Icaza took RemObjects to task on Twitter. I was amused to see this in the changelog for April 2014:


65764 Full support for Generics on Cocoa, as requested by Miguel

For more details on Fire, see here.

Notes from the field: putting Azure Blob storage into practice

I rashly agreed to create a small web application that uploads files into Azure storage. Azure Blob storage is Microsoft’s equivalent to Amazon’s S3 (Simple Storage Service), a cloud service for storing files of up to 200GB.

File upload performance can be an issue, though if you want to test how fast your application can go, try it from an Azure VM: performance is fantastic, as you would expect from an Azure to Azure connection in the same region.

I am using ASP.NET MVC and thought a sample like this official one, Uploading large files using ASP.NET Web API and Azure Blob Storage, would be all I needed. It is a start, but the method used only works for small files. What it does is:

1. Receive a file via HTTP Post.

2. Once the file has been received by the web server, it calls CloudBlob.UploadFile to upload the file to Azure blob storage.

What’s the problem? Leaving aside the fact that CloudBlob is deprecated (you are meant to use CloudBlockBlob), there are obvious problems with files that are more than a few MB in size. The expectation today is that users see some sort of progress bar when uploading, and a well-written application will be resilient to brief connection breaks. Many users have asymmetric internet connections (such as ADSL) with slow upload speeds; large files will take a long time and something can easily go wrong. The sample is not resilient at all.

Another issue is that web servers do not appreciate receiving huge files in one operation. Imagine you are uploading the ISO for a DVD, perhaps a 3GB file. The simple approach of posting the file and having the web server upload it to Azure blob storage introduces obvious strain and probably will not work, even if you do mess around with maxRequestLength and maxAllowedContentLength in ASP.NET and IIS. I would not mind so much if the sample were not called “Uploading large files”; the author perhaps has a different idea of what is a large file.

Worth noting too that one developer hit a bug with blobs greater than 5.5MB when uploaded over HTTPS, which most real-world businesses will require.

What then are you meant to do? The correct approach, as far as I can tell, is to send your large files in small chunks called blocks. These are uploaded to Azure using CloudBlockBlob.PutBlock. You identify each block with an ID string, and when all the blocks are uploaded, call CloudBlockBlob.PutBlockList with a list of IDs in the correct order.
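
Here is a minimal sketch of that flow, using the storage client library of the time (Microsoft.WindowsAzure.Storage); the method and variable names are mine:

using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using Microsoft.WindowsAzure.Storage.Blob;

static class BlockUploader
{
    public static void UploadInBlocks(CloudBlockBlob blob, string path)
    {
        var blockIds = new List<string>();
        var buffer = new byte[1024 * 1024]; // 1MB per block
        using (var fs = File.OpenRead(path))
        {
            int read, index = 0;
            while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
            {
                // Block IDs must be base64-encoded strings of equal length.
                var blockId = Convert.ToBase64String(
                    Encoding.UTF8.GetBytes(index++.ToString("d6")));
                using (var ms = new MemoryStream(buffer, 0, read))
                {
                    blob.PutBlock(blockId, ms, null); // third argument is an optional MD5
                }
                blockIds.Add(blockId);
            }
        }
        // Nothing is readable until the block list is committed, in order.
        blob.PutBlockList(blockIds);
    }
}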

This is the approach taken by Suprotim Agarwal in his example of uploading big files, which works and is a great deal better than the Microsoft sample. It even has a progress bar and some retry logic. I tried this approach, with a few tweaks. Using a 35MB file, I got about 80 KB/s with my ADSL broadband, a bit worse than the performance I usually get with FTP.

Can performance be improved? I wondered what benefit you get from uploading blocks in parallel. Azure Storage does not mind what order the blocks are uploaded. I adapted Agarwal’s sample to use multiple AJAX calls each uploading a block, experimenting with up to 8 simultaneous uploads from the browser.

The initial results were disappointing. Eventually I figured out that I was not actually achieving parallel uploads at all. The reason is that the application uses ASP.NET session state, and IIS will serialise multiple requests in the same session unless you mark your ASP.NET MVC controller class with a SessionState attribute specifying SessionStateBehavior.ReadOnly.
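
Concretely, the attribute looks like this (the controller name is hypothetical):

using System.Web.Mvc;
using System.Web.SessionState;

// Read-only session state stops IIS serialising requests within a session,
// so simultaneous AJAX block uploads really do run in parallel.
[SessionState(SessionStateBehavior.ReadOnly)]
public class UploadController : Controller
{
    // upload actions go here
}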

I fixed that, and now I do get multiple parallel uploads. Performance improved to around 105 KB/s, worthwhile though not dramatic.

What about using a Windows desktop application to upload large files? I was surprised to find little improvement. But can parallel uploading help here too? The answer is that it should happen anyway, handled by the .NET client library, according to this document:

If you are writing a block blob that is no more than 64 MB in size, you can upload it in its entirety with a single write operation. Storage clients default to a 32 MB maximum single block upload, settable using the SingleBlobUploadThresholdInBytes property. When a block blob upload is larger than the value in this property, storage clients break the file into blocks. You can set the number of threads used to upload the blocks in parallel using the ParallelOperationThreadCount property.

It sounds as if there is little advantage in writing your own chunking code, except that if you just call the UploadFromFile or UploadFromStream methods of CloudBlockBlob, you do not get any progress notification event (though you can get a retry notification from an OperationContext object passed to the method). Therefore I looked around for a sample using parallel uploads, and found this one from Microsoft MVP Tyler Doerksen, using C#’s Parallel.For.
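
In other words, something like the following should get parallel uploads without custom chunking code. This is a sketch against the client library of the era; blob and path are assumed, and property availability varies by library version:

using System.IO;
using Microsoft.WindowsAzure.Storage.Blob;

static class LibraryManagedUpload
{
    public static void Upload(CloudBlockBlob blob, string path)
    {
        var options = new BlobRequestOptions
        {
            // Files above this threshold are split into blocks automatically.
            SingleBlobUploadThresholdInBytes = 4 * 1024 * 1024,
            // How many blocks the client uploads concurrently.
            ParallelOperationThreadCount = 4
        };
        using (var fs = File.OpenRead(path))
        {
            blob.UploadFromStream(fs, null, options, null);
        }
    }
}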

Be warned: it does not work! Doerksen’s approach is to read the entire file into memory (not great, but not as bad as on a web server), send it in chunks using CloudBlockBlob.PutBlock, adding each block ID to a collection at the same time, and then to call CloudBlockBlob.PutBlockList. The reason it does not work is that the order in which Parallel.For iterations execute is indeterminate, so the block IDs are unlikely to be added in the right order.

I fixed this, it tested OK, and then I decided to further improve it by reading each chunk from the file within the loop, rather than loading the entire file into memory. I then puzzled over why my code was broken. The files uploaded, but they were corrupt. I worked it out. In the following code, fs is a FileStream object:

fs.Position = x * blockLength;
bytesread = fs.Read(chunk, 0, currentLength);

Spot the problem? Since fs is a single variable shared across threads, another thread could move its position between the seek and the read, with random results. I fixed it like this:

lock (fs)
{
    fs.Position = x * blockLength;
    bytesread = fs.Read(chunk, 0, currentLength);
}

and the file corruption disappeared.
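
Putting the pieces together, the corrected parallel loop looks something like this sketch (blob is a CloudBlockBlob and blockLength the chunk size); the key points are that each block ID is derived from the loop index and stored by index:

using System;
using System.IO;
using System.Text;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Blob;

static class ParallelBlockUploader
{
    public static void Upload(CloudBlockBlob blob, FileStream fs, int blockLength)
    {
        long fileLength = fs.Length;
        int blockCount = (int)((fileLength + blockLength - 1) / blockLength);
        var blockIds = new string[blockCount]; // indexed by position, not completion order

        Parallel.For(0, blockCount, x =>
        {
            int currentLength = (int)Math.Min(blockLength, fileLength - (long)x * blockLength);
            var chunk = new byte[currentLength];
            int bytesread;
            lock (fs) // one thread at a time may seek and read the shared stream
            {
                fs.Position = (long)x * blockLength;
                bytesread = fs.Read(chunk, 0, currentLength);
            }
            var blockId = Convert.ToBase64String(Encoding.UTF8.GetBytes(x.ToString("d6")));
            using (var ms = new MemoryStream(chunk, 0, bytesread))
            {
                blob.PutBlock(blockId, ms, null);
            }
            blockIds[x] = blockId;
        });

        blob.PutBlockList(blockIds); // committed in file order, however the uploads finished
    }
}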

I am not sure why, but the manually coded parallel uploads seem to improve performance only slightly, not dramatically, to around 100-105 KB/s, almost exactly what my ASP.NET MVC application achieves over my broadband connection.


There is another approach worth mentioning. It is possible to bypass the web server and upload directly from the browser to Azure storage. To do this, you need to allow cross-origin resource sharing (CORS) as explained here. You also need to issue a Shared Access Signature, a temporary key that allows read-write access to Azure storage. A guy called Blair Chen seems to have this all figured out, as you can see from his Azure speed test and jazure JavaScript library, which makes it easy to upload a blob from the browser.
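
Issuing the signature server-side is simple enough; a sketch with the same client library (container and blob setup assumed):

using System;
using Microsoft.WindowsAzure.Storage.Blob;

static class SasIssuer
{
    public static string GetUploadUrl(CloudBlockBlob blob)
    {
        // A write-only signature valid for 30 minutes: the browser appends the
        // token to the blob URL and uploads directly, bypassing the web server.
        var policy = new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Write,
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(30)
        };
        return blob.Uri + blob.GetSharedAccessSignature(policy);
    }
}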

I was contemplating going that route, but it seems that performance is no better (judging by the Test Upload Big Files section of Chen’s speed test), so I should probably be content with the parallel JavaScript upload solution, which avoids fiddling with CORS.

Overall, has my experience with the Blob storage API been good? I have not found any issues with the service itself so far, but the documentation and samples could be better. This page should be the jumping off point for all you need to know for a basic application like mine, but I did not find it easy to find good samples or documentation for what I thought would be a common scenario, uploading large files with ASP.NET MVC.

Update: since writing this post I have come across this post by Rob Gillen which addresses the performance issue in detail (and links to working Parallel.For code); however I suspect that since the post is four years old the conclusions are no longer valid, because of improvements to the Azure storage client library.

Embarcadero AppMethod: another route to cross-platform mobile, now with C++ support

Embarcadero has updated AppMethod, its IDE for cross-platform mobile and desktop applications. The IDE now supports C++ and, as a special offer, you can develop for Android phones “free forever”, according to the web site.

AppMethod is none other than our old friend Delphi, combined with the FireMonkey cross-platform framework. The difference between AppMethod and the older RAD Studio product line (current version is XE6) is twofold:

1. AppMethod does not include the VCL, the Delphi framework for Windows applications. It does let you develop for Windows or Mac OS X using FireMonkey.

2. You can buy RAD Studio outright with a perpetual license, from £1342.00 plus VAT for a new user (RAD Studio Professional). AppMethod is only available on subscription.

AppMethod pricing is per developer per platform per year. Currently this is £179.83 plus VAT for individuals (very small businesses up to a maximum of 5 employees in the entire organisation) or £600 for larger businesses (a rather large premium).

C++ support is new in AppMethod 1.14 and covers all target platforms except the iOS Simulator (an annoying limitation). It supports ARC (Automatic Reference Counting) on Android as well as iOS. Mac OS X is supported from 10.8 (Mountain Lion) and up.

There are also a few changes in FireMonkey. You can load HTML into the TWebBrowser component using LoadFromStrings. There is a new date picker component.

Another new feature is in the RTL (run time library). Called App Tethering, it lets applications communicate with each other, for example using TCP. These can be apps on the same device or remote apps. Once paired, apps can run remote actions and share standard data types and streams.

There are also updates to push notifications for iOS and Android, Google Glass support, updated OpenGL and DirectX support on Windows, and more: see here for the complete documentation of what is new.

A Quick Hands-on

I installed the latest AppMethod on Windows 8. The install warns that AppMethod cannot co-exist with RAD Studio XE6, presumably because it is essentially the same thing re-wrapped. The product name is relatively new, but there is plenty of old stuff under the covers: AppMethod still has a dependency on J# (JSharp), Microsoft’s Java implementation for .NET, which suggests Java code in the IDE dating back to who knows when.


There is a 10-field dialog confirming paths for the Android tools, which is a reminder of how many moving parts there are here. It is more complex than most Android development environments because it uses the NDK (Native Development Kit) as well as the usual SDK.


Once up and running, you can start a new project such as a FireMonkey mobile application:


and then you are in an IDE which would not be entirely unfamiliar to a Delphi user in 1995 (or I suppose, a C++ Builder user in 1997) – I am not saying this is a bad thing, though the IDE feels dated in comparison to Microsoft’s Visual Studio.


After coming from a spell of development with XAML it feels odd to have a form builder that defaults to xy layout, but layout managers are available:


Compile and run, and after the usual slow initialization of the Android emulator, the app appeared.


Why AppMethod?

In the crowded world of cross-platform mobile development, why use AppMethod?

Embarcadero makes a big play of its native development, though it is “native” in respect of code execution but not of GUI fidelity, since by default visual controls are custom-drawn by the framework. This is in contrast to Xamarin (the obvious alternative for developers from a Windows background), which does no custom drawing but only uses native controls; however for raw performance AppMethod may have the edge (I have not done comparisons).

Delphi developers should also look at RemObjects Oxygene, which uses a Delphi-like language but is hosted in Visual Studio and, like Xamarin, uses native UI components.

The AppMethod approach does make sense if you prioritise maximum code-sharing over getting exactly the right look and feel for each supported platform, and need better performance or more capability than HTML and JavaScript can get you. There is no support for Windows Phone though; if that is in your plans, Xamarin or HTML and JavaScript development is a better fit.

Apple’s Swift programming language: easy coding for OS X and iOS at last?

Apple has announced a new programming language, called Swift. (There was already a language called Swift, used for parallel scripting, but Apple links to the other Swift in case you land on the wrong page. So far it looks like the other Swift has not returned the favour).

For as long as I can remember, serious Apple developers have had to use Objective-C, an object-oriented flavour of C that is quite different from C++. I have only dabbled in Objective-C but when I last tried it I was pleasantly surprised: memory management was no hassle and I found it productive. Nevertheless it is an intimidating language if you come from a background of, say, JavaScript or Microsoft .NET. Apple’s focus on Objective-C has left a gap for easier-to-use alternatives, though the main reason developers use something other than Objective-C, as far as I am aware, is for cross-platform projects. Companies such as Xamarin and Embarcadero (with Delphi) have had some success, and of course Adobe PhoneGap (or the open source Cordova) has had significant take-up for cross-platform code based on HTML and JavaScript.

I should mention that RAD (Rapid Application Development) on OS X has long been possible using FileMaker (a wholly-owned Apple subsidiary), a database manager with a powerful scripting language, but this is not suitable for general-purpose apps.

Overall, it is fair to say that coding for OS X and iOS has a higher bar than for Windows because Apple has not provided anything like Microsoft’s C# or Visual Basic, type-safe languages with easy form builders that let you snap together an application in a short time, while still being powerful enough for almost any purpose. This has been a differentiator for Windows. Visual Basic is almost as old as Windows itself, and C# was introduced in 2000.

Now Apple has come up with its own equivalent. I am new to Swift as are most people outside Apple, but took a quick look at the book, The Swift Programming Language, along with the announcement details. A few highlights:

  • Swift is a type-safe language that compiles to native code using LLVM.
  • The IDE for Swift is Xcode. It supports Cocoa development (Apple’s user interface framework) via import of the existing Objective-C frameworks, which become Swift APIs via the import keyword:

import UIKit

  • You can mix Swift and Objective-C in a single project. In Objective C you can use #import to make Swift code visible and usable.
  • Swift is a C-family language and you will find familiar features like curly braces and semi-colons to terminate statements (though semi-colons are optional).
  • Swift uses reference counting for automatic memory management. There is a rather complex section in the book about weak references and unowned references, which solve some of the problems inherent in reference counting.
  • Type inference is the preferred approach to declaring the type of a variable, but you can state the type if required. You can also declare constants.
  • Swift supports single inheritance for classes and multiple inheritance for protocols (protocols are more or less equivalent to interfaces in other languages).
  • There are advanced features including closures, generics, tuples, and variadic parameters. (I am not sure if “advanced” is the right word, but other languages such as C# and Java took a while to get these).
  • Swift has something like destructors which it calls deinitializers.
  • There is an interesting feature called Extensions which lets you add methods to any existing type. For example, you could extend Int with a prettyprint method and then call 3.prettyprint.
  • Swift variables are not normally nullable; they must have a value. However you can declare optional types (add a ?, such as Int?) that can be set to nil. You can also declare implicitly unwrapped optionals which can be nil, but once assigned a value cannot be nil thereafter.
  • Swift includes the AnyObject type which can represent anything.

Swift seems to me to have similar goals to Microsoft’s C#: easier and safer than C or C++, but intended for any use right up to large and complex applications. One of the best things about it is the smooth interoperability with Objective-C; this also saves Apple from having to write native Swift frameworks for its entire stack.

A smart move? I think so, though Swift is different enough from any other language that developers have some learning to do.

What difference will Swift make? Initially, not that much. Objective-C developers now have a choice and some will move over or start mixing and matching, but Swift is still single-platform and will not change the developer landscape. That said, Swift may make Apple’s platform more attractive to business developers, for whom C# or Java is currently more productive; and perhaps Apple could find ways of using Swift in places where previously you would have to use AppleScript, extending its usefulness.

If Apple developers were tempted towards Xamarin or Delphi for productivity, as opposed to cross-platform, they will probably now use Swift; but I doubt there were all that many in that particular group.

I would be interested to hear from developers though: what do you think of Swift?

Xamarin 3.0 brings iOS visual design to Visual Studio, cross-platform XAML, F#, NuGet and more

Xamarin has announced the third version of its cross-platform tools, which use C# and .NET to target multiple platforms, including iOS, Android and Mac OS X.

Xamarin 3.0 is a big release. In summary:

Xamarin Designer for iOS

Using a visual designer for iOS Storyboard projects, you can create and modify a GUI in both Visual Studio and Xamarin Studio (Xamarin’s own IDE). The designer uses the native Storyboard format, so you can open and modify existing files created in Xcode on the Mac. The technology here is amazing, since the iOS controls are rendered remotely on a Mac and transmitted to the designer on Windows. See here for a quick hands-on.

Xamarin Forms

Xamarin has created the cross-platform GUI framework that it said it did not believe in. It is based on XAML though not compatible with Microsoft’s existing XAML implementations. There is no visual designer yet.

Why has Xamarin changed its mind? It was pressure from enterprise customers, from what I heard from CEO Nat Friedman. They want to make internal mobile apps with many forms, and do not want to rewrite the GUI code for every mobile platform they support.

Friedman made the point that Xamarin Forms still render as native controls. There is no drawing code in Xamarin Forms.

“The challenge for us in building Xamarin Forms was to give people enhanced productivity without compromising the native approach. With the mix and match approach, where you can mix in native code at any point and get a handle for the native control, we think we’ve got the right compromise. And we’re not forcing Xamarin Forms on you, this is just an option,”

he told me.

Again, there is a quick hands-on here.
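
For a flavour of the programming model, here is a minimal Xamarin Forms page in C# (a sketch; I have not verified it against the exact 3.0 API):

using Xamarin.Forms;

public class HelloPage : ContentPage
{
    public HelloPage()
    {
        // Label and Button render as the native widgets on each platform;
        // Xamarin Forms does no custom drawing of its own.
        var label = new Label { Text = "Hello, Xamarin Forms" };
        var button = new Button { Text = "Click me" };
        button.Clicked += (sender, e) => label.Text = "Clicked, from shared code";
        Content = new StackLayout { Padding = 40, Children = { label, button } };
    }
}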

F# support

F# is now officially supported in Xamarin projects. This brings functional programming to Xamarin, and will be warmly welcomed by the small but enthusiastic F# community (including, as I understand it, key .NET users in the financial world).

Portable Class Libraries

Xamarin now supports Microsoft’s Portable Class Libraries, which let you state what targets you want to support, and have Visual Studio ensure that you write compatible code. This also means that library vendors can easily support Xamarin if they choose to do so.

NuGet Packages

The NuGet package manager has transformed the business of getting hold of new libraries for use in Visual Studio. Now you can use it with Xamarin in both Visual Studio and Xamarin Studio.

Microsoft partnership

Perhaps the most interesting part of my interview with Nat Friedman was what he said about the company’s partnership with Microsoft. Apparently this is now close both from a technical perspective, and for business, with Microsoft inviting Xamarin for briefings with key customers.

Microsoft’s new open source direction for C# and .NET (and native compilation too): Anders Hejlsberg explains

At the April 2014 Build conference Microsoft made some far-reaching announcements about its .NET platform and the C# programming language. Yes, there was talk of C# 6.0, the next version, but the real changes are more profound. Specifically:

C# and Visual Basic have a new compiler, itself written in C#, code-named Roslyn. Roslyn is not just a new compiler; Microsoft now calls it the “.NET Compiler Platform”.
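
What “compiler platform” means in practice is that the compiler’s internals are exposed as a public API. A small sketch, using the Microsoft.CodeAnalysis.CSharp package as it was later published on NuGet:

using System;
using Microsoft.CodeAnalysis.CSharp;

class RoslynDemo
{
    static void Main()
    {
        // The same parser the compiler itself uses is available to any tool
        // that wants syntax trees: analyzers, refactorings, IDEs, scripting.
        var tree = CSharpSyntaxTree.ParseText("class C { static void M() { } }");
        Console.WriteLine(tree.GetRoot().ToFullString());
    }
}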

There is a new commitment to open source for .NET projects. Microsoft formed the .NET Foundation to oversee existing open source projects, including ASP.NET, Entity Framework, the Azure .NET SDK, and now Roslyn as well. “When it comes to development projects we are going to operate from the premise that open source is the default. Unless there are reasons why it does not work,” said C# lead architect Anders Hejlsberg.


Note that open source does not mean chaos. It does mean that you can fork the project if you want – the Roslyn license is Apache 2.0 – but getting Microsoft to accept new features you have contributed will not be trivial. Hejlsberg makes the point that language features are easy to add, but impossible to take away, so extreme care is necessary.

Microsoft is also supporting cross-platform C# to a greater extent than it has done in the past. The most obvious sign of this is its cooperation with Xamarin, which provides C# compilers for iOS and Android. Xamarin’s Miguel de Icaza got a top billing at Build, and is also involved in the .NET Foundation.

There is more though. The idea of standardised C# is re-emerging:

“The last ECMA standard was C# 2.0. There wasn’t a lot of demand for it, but that demand has recently risen and we have re-engaged with the ECMA community to produce a standard for C# 5.0,” said Hejlsberg.

This bears some unpacking. Why was there little demand for ECMA C#? Partly, I would guess, from the assumption that C# was firmly in Microsoft’s grip, with Java the obvious choice for cross-platform development. The main interest was from the Mono folk (Miguel de Icaza again), who implemented .NET for Linux and the Mac with some success, but nothing to disturb Java’s momentum.

The focus now though is on mobile, and interest in C# is stronger, mainly from Microsoft-platform developers reaching beyond Windows. There is also Unity, which uses C# as a scripting language for developing games for multiple platforms, including iOS, Android, Windows, Mac, Linux, Xbox, PS3 and Wii – PS4 is coming very soon.

Microsoft has now consciously embraced multiple platforms, as evidenced by Office for iOS as well as the Xamarin collaboration. “We want C# developers to build great applications across different form factors and different device platforms,” said Jay Schmelzer, Director of Program Management for Visual Studio.

You might observe that this position has been forced on the company by the rise of iOS and Android, a view which likely has some merit, but the impact it has on C# and .NET itself is still real.

I asked Hejlsberg to unpack the difference between the Roslyn project and C# 6.0, bearing in mind that both are covered on the Roslyn open source site; you can see the current status of C# 6.0 and the next Visual Basic here.

“Roslyn is the name for the project that encompasses the new C# compiler and the new VB compiler and the new language services that they share. C# 6.0 is the name of the next version of the C# language, which will have a specification and which will have an implementation. We are implementing C# 6.0 on the Roslyn platform. We are not going to continue to evolve our old C++ C# compiler – the C# compiler was originally written in C++ and has been evolved up through C# 5.0. That is where we are going to retire that code base, and going forward versions of C# will be built on Roslyn and therefore will be built open source. Unlike previously where, boom, C# came down from the sky with a set of features, it is going to happen more organically now: people will submit pull requests, open up issues, and you will see us work on these features. You will see them from inception to fruition.

“The C# team, the Roslyn team, the VB team, their day to day workplace now is the open source site. That is where they check in code. It is a community in the making.”

Even that is not all. At Build, Microsoft also announced .NET Native, which is a native compiler for C# and Visual Basic, now in preview for x64 Store apps. What is the difference between .NET Native and the existing NGen native compiler for .NET? Over to Hejlsberg:

NGen is the native feature that we currently support. NGen is really, “I’m going to JIT [Just in time compile] your code and then snapshot all the data structures and dump them in a file so that I can quickly rebuild that file later when you run this particular application”. But it is the same code generator and all the same features, and JIT is still there. NGen is really a way to pre-cache the JIT output and therefore get better performance, but it adds to the size of your app because you still have all the assemblies and metadata and then the NGen image as well.

.NET Native is a completely different approach. Instead of the JIT we use the backend from the C++ compiler. You can think of it as a linker that takes as input assemblies, and as output produces a PE [Portable Executable] executable. In the process this linker or code generator will analyse all the IL [Intermediate Language] that goes into the application and it will apply a thing known as tree-shaking where it eliminates all of the code that will never execute based on known execution roots.

In other words, the public static main of your program and also whatever pieces of your app that you designate as reflectable, they also become roots. Based on that we produce an optimised exe, and into that exe we link the pieces of the framework that you are referencing. We link in a garbage collector [GC], and it looks to the operating system just like an exe. When you run it, it runs a local GC in there and it is as efficient really as C++ code.

There are some restrictions associated with .NET Native, mainly that you can’t just willy-nilly reflect on the whole world. You can’t just generate new code and ask for that to be jitted because there may not be a JIT compiler. We are considering allowing you to link in a JIT compiler, but there are certain execution environments which don’t permit jitting, like Xbox. If you use reflection in your app you have to tell us what to keep reflectable, because otherwise we will optimise it away.

According to Schmelzer:

The preview out today is scoped to Store app x64 and ARM. We haven’t run into any technical limitation that shows it can’t be done across the breadth, it is just a matter of request and need.

Open source, native code compilation, and an innovative compiler: it adds up to huge changes for C# and .NET, positive ones as far as I can tell.

The Xamarin connection is intriguing though. Developers in general admire the technology as far as I can tell, but it is expensive, and paying out for a Xamarin subscription on top of maybe MSDN for Visual Studio is too much for some smaller organisations and does not encourage experimentation. Might Microsoft acquire Xamarin and build Visual Studio into an IDE targeting all the major mobile platforms, but with special hooks to Azure-hosted services?

That prospect makes sense to me, though it would be a shame if the energetic Xamarin culture became bogged down in big-company bureaucracy. Currently though: no news to report.