Tag Archives: .net

Time for another look at “pure .NET”

Back in the Nineties there was a lot of fuss about “pure Java”. This meant Java code without any native code invocations that tie the application to a specific operating system.

It is possible to write cross-platform Java code that invokes native code, but it adds to the complexity. If it is an operating system API you need conditional code so that the right API is called on each platform. If it is a custom library it will have to be compiled separately for each platform.

On the Microsoft .NET side, developers have tended to take a more casual approach. After all, in the great majority of cases the code would only ever run on Windows. Further, Microsoft tended to steer developers towards Windows-only dependencies like SQL Server; that, after all, is the value of owning a developer platform.

Times change. Microsoft has got the cross-platform bug, with its business strategy based on attracting businesses to its cloud properties (Office 365 and Azure) rather than Windows. The .NET Framework has been forked to create .NET Core, which runs on Mac and Linux as well as Windows. SQL Server is coming to Linux.

Another issue is porting applications from 32-bit to 64-bit, as I was reminded recently when migrating some ASP.NET applications to a new site. If your .NET code avoids P/Invoke (Platform Invoke) then you can compile for “Any CPU” and 64-bit will just work. If you use P/Invoke and want to support both 32-bit and 64-bit, more care is required. IntPtr, used frequently in P/Invoke calls, is a different size on each platform. If you have custom native libraries, you need to compile them separately for each platform. The lazy solution is always to run as 32-bit, but that is a shame.
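To illustrate the kind of declaration at issue, here is a sketch using a well-known Windows API (not code from my applications):

using System;
using System.Runtime.InteropServices;

static class NativeMethods
{
    // IntPtr is 4 bytes in a 32-bit process and 8 bytes in a 64-bit one,
    // so this declaration compiles happily for "Any CPU"; trouble starts
    // when code assumes a fixed pointer size, or when the DLL being
    // called only exists in a 32-bit build.
    [DllImport("user32.dll", CharSet = CharSet.Unicode)]
    public static extern IntPtr FindWindow(string className, string windowName);
}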

What this means is that P/Invoke should only be used as a last resort. Arguably this has always been true, but the reasons are stronger today.

This is also an issue for libraries and components intended for general use, whether open source or commercial. It is early days for .NET Core support, but any native code dependencies will be a problem.

Breaking the P/Invoke habit will not be easy but “Pure .NET” is the way to go whenever possible.

Running ASP.NET 5.0 on Nano Server preview

I have been trying out Microsoft’s Nano Server Preview and wrote up initial experiences for The Register. One of the things I mentioned is that I could not get an ASP.NET app successfully deployed. After a bit more effort, and help from a member of the team, I am glad to say that I have been successful.


What was the problem? First, a bit of background. Nano Server does not run the .NET Framework, presumably because it has too many dependencies on pieces of Windows which Microsoft wanted to omit from this cut-down deployment. Nano Server does support .NET Core, also known as CoreCLR, which is the open source fork of the .NET Framework. This enables it to run PowerShell, although with a limited range of cmdlets, and my two main ways of interacting with Nano Server are PowerShell remoting, and Windows file sharing for copying files across.
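For anyone trying the same thing, the connection sequence from the development machine is along these lines (the IP address here is invented, and adding the server to TrustedHosts should only be needed for a workgroup machine):

# Trust the Nano Server for WS-Man, then open a remote session
Set-Item WSMan:\localhost\Client\TrustedHosts "192.168.0.10" -Force
Enter-PSSession -ComputerName "192.168.0.10" -Credential "192.168.0.10\Administrator"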

On your development machine, you need several pieces in order to code for ASP.NET 5.0. Just installing Visual Studio 2015 RC will do, except that there is currently an incompatibility between the version of the ASP.NET 5.0 .NET Core runtime shipped with Visual Studio, and what works on Nano Server. This meant that my first effort, which was to build an empty ASP.NET 5.0 template app and publish it to the file system, failed on Nano Server with a NativeCommandError.

This meant I had to dig a bit more deeply into ASP.NET 5.0 running on .NET Core. Note that when you deploy one of these apps, you can include all the dependencies in the app directory. In other words, apps are self-hosting. The binary that enables this bit of magic is called DNX (.NET Execution Environment); it was formerly known as the K runtime.

Developers need to install the DNX SDK on their machines (Windows, Mac or Linux). There is currently a getting started guide here, though note that many of the topics in this promising documentation are as yet unwritten.


However, after installation you will be able to use several handy commands:

dnvm This is the .NET Version Manager. You can have several versions of the DNX runtime installed, and this utility lets you list them, set aliases to save typing full paths, and manage defaults.


dnu This is the .NET Development Utility (formerly kpm) that builds and publishes .NET Core projects. The two commands I found myself using regularly are dnu restore, which downloads NuGet packages, and dnu publish, which packages an app for deployment. Once published, you will find .cmd files in the output which you use to start the app.

dnx This is the binary which you call to run an app. On the development machine, you can use dnx . run to run the console app in the current directory and dnx . web to run the web app in the current directory.
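Putting these together, a typical session on the development machine looks something like this (version numbers and flags vary between builds, so treat this as indicative):

dnvm list
dnvm use 1.0.0-beta5 -r coreclr -arch x64 -p
dnu restore
dnu publish
dnx . web

The -p flag on dnvm use makes the selection persistent, which matters later (see the footnote below).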

Now, back to my deployment issues. The Visual Studio templates are all hooked to DNX beta 4, and I was informed that I needed DNX beta 5 for Nano Server. I played around with trying to get Visual Studio to target the updated DNX but ran into problems, so I decided to ignore Visual Studio and do everything from the command line. This should also mean that it would all work on Mac and Linux.

I had a bit of trouble persuading DNX to update itself to the latest unstable builds; the main issue I recall is targeting the correct repository. Your NuGet sources must include (currently) https://www.myget.org/F/aspnetvnext/api/v2.

Since I was not using Visual Studio, I based my samples on these Hello World console, MVC and web apps, which you can use to test that everything works. My technique was to test on the development machine using dnx . web, then to use dnu publish and copy the output to Nano Server, where I could run ./web.cmd in a remote PowerShell session.

Note that I found it necessary to specify the CoreCLR 64-bit runtime in order to get dnu to publish the correct files. I tried to make this the default but for some reason* it reverted itself to x86:

dnu publish --runtime "c:\users\[USERNAME]\.dnx\runtime\dnx-coreclr-win-x64.1.0.0-beta5-11701"

Of course the exact runtime version to use will change soon.

If you run this command and look in the /bin/output folder you will find web.cmd, and running this should start the app. The port on which the app listens is set in project.json, in the top level directory of the project source. I set this to 5001, opened that port in the Windows Firewall on the Nano Server, and got a “started” message on the command line. However, I still could not browse to the app running on Nano Server; I got a 400 error. Even on the development machine it did not work; the browser just timed out.
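(As an aside, opening the port on Nano Server can be done from within the remote PowerShell session, assuming the NetSecurity cmdlets are available there:

New-NetFirewallRule -DisplayName "ASP.NET 5 app" -Direction Inbound -Protocol TCP -LocalPort 5001 -Action Allow

)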

It turned out that there were several issues here. On the development machine, which is running Windows 10 build 10074, I discovered to my annoyance that the web app worked fine with Internet Explorer, but not in Project Spartan, sorry Edge. I do not know why.

Support also gave me some tips to get this working on Nano Server. In order for the app to work across the network, you have to edit project.json so that localhost is replaced either with the IP address of the server, or with a *. I was also advised to add dnx.exe to the allowed apps in the firewall, but I do not think this is necessary if the port is open (it is a nuisance, since the location of dnx.exe changes for every app).
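For reference, the relevant section of project.json ends up looking something like this (the hosting arguments are from the beta templates and may well change):

"commands": {
  "web": "Microsoft.AspNet.Hosting --server Microsoft.AspNet.Server.WebListener --server.urls http://*:5001"
}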

Finally I was successful.

Final observations

It seems to me that ASP.NET vNext running on .NET Core has the characteristic of many open source projects: a few dedicated people who have little time for documentation and are so close to the project that their public communications assume a fair amount of pre-knowledge. The site I referenced above does have helpful documentation, though, for the few topics that are complete. Some other posts I found helpful are this series by Steve Perkins, and the troubleshooting suggestions here, especially David Fowler’s post.

I like the .NET Core initiative overall, since I like C# and ASP.NET MVC and it is now becoming a true cross-platform framework. That said, the code does seem to be in rapid flux and I doubt it will really be ready when Visual Studio 2015 ships. The danger, I suppose, is that developers will try it in the first release, find lots of problems, and never go back.

I also like the idea of running apps in Nano Server, a low-maintenance environment where you can get the isolation of a dedicated server for your app at low cost in terms of resources.

No doubt though, the lack of pieces that you expect to find on Windows Server will be an issue and I am not sure that the mainstream Microsoft developer ecosystem will take to it. Aidan Finn is not convinced, for example:

Am I really expected to deploy a headless OS onto hardware where the HCL certification has the value of a bucket with a hole in it? If I was to deploy Nano, even in cloud-scale installations, then I would need a super-HCL that stress tests all of the hardware enhancements. And I would want ALL of those hardware offloads turned OFF by default so that I can verify functionality for myself, because clearly, neither Microsoft’s HCL testers nor the OEMs are capable of even the most basic test right now.

Finn’s point is that if your headless server is having networking issues it is hard to troubleshoot, since of course remote tools will not work reliably. That said, I have personally run Hyper-V Server (which is essentially Server Core with just the Hyper-V role) with great success for several years; I started keeping notes on how to troubleshoot from the command line and found solutions to common problems. If networking fails with Nano Server then yes, you have a problem, but there is always something you can do, even if it means mounting the Nano Server VHD or VHDX on another VM. Windows Server admins have become accustomed to a local GUI, though, and adjusting even to Server Core has not been easy.

*The reason was that I did not use the -p argument with dnvm use, which would have made it persistent.

Microsoft open sources heart of .NET: CoreCLR runtime now on GitHub

Microsoft’s CoreCLR is now available on GitHub. We knew this was coming, but it is still a significant step, since this piece is the very heart of .NET: the execution engine that consumes a .NET IL (Intermediate Language) executable and compiles it to machine code for execution. The IL can easily be decompiled back to C#; it is in a sense fairly close to what you wrote in the editor. The CLR compiles it to native code at runtime, and also handles garbage collection (automatic memory management) and interop with other native code libraries. The just-in-time compiler in CoreCLR is called RyuJIT.

CoreCLR is not the same as the .NET Framework CLR (as found in the Windows desktop today), though one thing we now learn is that it is a true subset:

CoreCLR is a subset of the .NET Framework CLR. They share the same codebase and are updated together. For example, an update to the .NET GC improves both CoreCLR and the .NET Framework CLR.

We setup a live 2-way mirror between the coreclr repo on GitHub and the .NET Framework TFS server within Microsoft. The latency of the mirror is low, measurable in minutes.

Contributions made to the coreclr repo are integrated to the Microsoft TFS server automatically and will become part of both the .NET Framework and .NET Core products. The same is true in reverse, that .NET Framework CLR changes (within the CoreCLR subset) are mirrored to the CoreCLR repo. These changes will sometimes result in large commits to unrelated components.

This is good news since it reduces the risk of fragmentation between the .NET Framework and the CoreCLR. Note that the same does not apply to the framework libraries, which are forked between .NET Framework and CoreFX. The reason for the fork is to enable cross-platform .NET and to benefit from greater modularity in the Framework without breaking the existing .NET Framework.

Some other points of interest:

  • CoreCLR will run on Linux and Mac, but not yet; this is work in progress
  • CoreCLR powers Windows Phone apps as well as ASP.NET 5
  • CoreCLR uses the CMake build system rather than MSBuild, because it runs cross-platform

There is a key architectural difference between CoreCLR and the .NET Framework, which is that in CoreCLR each application is deployed with the runtime and libraries it requires, whereas in the .NET Framework applications depend on a system-managed runtime and shared libraries. This has the advantage that applications are standalone, and you could run one from, say, a portable USB drive on a system which did not have .NET or Mono installed.

The disadvantage, aside from greater use of disk space, is that patching the same libraries across multiple applications is hard. In the interview here Microsoft offers a clue about how it might come up with a solution for this. Jan Kotas on the CLR team talks about an ideal scenario where identical copies of the same DLL are in fact shared even though each application appears to have its own copy. This sounds similar to the mechanism used by de-duplication in Windows Server. The file system makes it look as if several copies of a file exist in different directories, but in fact there is only one. If you update a file though, the right thing happens and only the virtual copy that you overwrite is changed. It sounds as if Kotas has in mind a variant where you could say, “update this file and all its instances elsewhere.” This would of course somewhat undermine the concept of app-isolated dependencies; but you know what they say about cakes and eating them:

“The ideal we should get to is every application has a local copy of everything. People eventually get to a point where through some OS mechanisms or through some other means the DLLs that are the same between different applications would get shared. That way nobody needs to worry about is this shared, or is it not shared. The ideal place that we’d like to get to is that sharing happens under the hood. It can happen through different mechanisms for different applications. [That would be the] ideal place for the runtime and how to version it.”

said Kotas. Possibly I am misinterpreting this; but it does sound like some kind of sharing-but-not-sharing solution to the patching problem.

Another point to note: a managed code application cannot execute without help. In order to run, every managed application needs three things:

1. The application code

2. The CLR – either CoreCLR or the .NET Framework CLR

3. A CLR host which loads the CLR and instructs it to execute the application. The CLR host has to be native code, for obvious reasons.

In the .NET Framework this third piece is invisible, since it is handled by the operating system (though apparently SQL Server is a special case). In the CoreCLR world though, you need to think about the CLR host. ASP.NET 5.0 has the KRuntime (K probably stands for Katana) which I think is the same as Project K. If you want to test CoreCLR today, you can use a host called CoreConsole which (as its name implies) lets you run console apps. Apparently there are a few technical problems using CoreCLR with ASP.NET 5 at the moment.


What is .NET Core, “the foundation of all future .NET platforms”?

I have been looking at .NET Core, an official Microsoft open source project which you can find on GitHub and which is at the heart of Microsoft’s plans to open source most of its .NET technology.

Currently there are three Microsoft repositories for the .NET Core platform: the .NET Compiler Platform (“Roslyn”), ASP.NET 5, and the .NET Core Framework. Note that these are all v.Next versions of the .NET Framework. ASP.NET 5 and the .NET Core Framework are on GitHub, but Roslyn is on CodePlex, Microsoft’s open source repository site. There is also a GitHub repository for Entity Framework 7, currently part of ASP.NET, though I am not sure that it belongs there. The current version of EF is 6.1.1, but the code for this is on CodePlex. The KRuntime, which is the implementation of the parts of the .NET Runtime needed to host an ASP.NET application, is also in the ASP.NET repository. Its full name is the K Runtime Environment (KRE); I am not sure what K stands for. Note that Microsoft has only promised to open source the .NET server stack, not desktop frameworks like Windows Presentation Foundation.

I had a look at the .NET Core Framework. This is the key set of libraries for .NET applications. The easiest way to build the core libraries is from the command line. Open a Visual Studio 2013 Developer Command Prompt (which sets up the path and environment for command line builds), go to your clone of the GitHub repository and type build.


Cool. But what is in it? Not that much: System.Collections, Parallel LINQ, Vectors and XML libraries.

“More is coming soon. Stay tuned!” say the docs. And in this blog post by Microsoft’s Immo Landwerth:

Consider the subset we have today a down-payment on what is to come. Our goal is to open source the entire .NET Core library stack by Build 2015.

Landwerth says that Microsoft is “currently figuring out the plan for open sourcing the runtime”; this is the native code that creates the .NET Virtual Machine which executes .NET code.

Of course there is also Mono, the old open source implementation of .NET which is from an independent code base.

This is exciting stuff for .NET developers, especially since official runtimes for Linux and Mac are also promised, but also somewhat confusing. What is .NET Core versus what we have known as the .NET Framework?

Here is a diagram from Landwerth’s blog:

[diagram from Landwerth’s blog showing the .NET Framework, .NET Core, and ASP.NET 5]

I presume that the top left box (.NET Framework) has not been promised as open source, but the other two boxes have. Note that ASP.NET 5 will run on either .NET Core or the full .NET Framework; and that .NET Native – the project to compile a .NET application as true native code – sits as part of .NET Core.

Store apps (also known as Windows Runtime apps, or Metro apps) are not covered in the above diagram, but since .NET Native currently only works for Store apps, maybe .NET Core is also the .NET runtime for Store apps. Landwerth says:

.NET Core is a modular development stack that is the foundation of all future .NET platforms. It’s already used by ASP.NET 5 and .NET Native.

There are also some clues about .NET Core on the home page of the GitHub repository:

.NET Core and the .NET Framework have (for the most part) a subset-superset relationship. .NET Core is named "Core" since it contains the core features from the .NET Framework, for both the runtime and framework libraries. For example, .NET Core and the .NET Framework share the GC, the JIT and types such as String and List<T>. We’ll continue improving these components for both .NET Core and .NET Framework.

.NET Core was created so that .NET could be open source, cross platform and be used in more resource-constrained environments. We have also published a subset of the .NET Reference Source under the MIT license, so that you and the community can port additional .NET Framework features to .NET Core.

The second paragraph is intriguing. Microsoft has posted parts of the source for the .NET Framework library so that the community can port some of it to .NET Core. What this means I think is not that this code should be part of .NET Core (otherwise it becomes more than just core) but rather that it would run on .NET Core.

It seems, contrary to what you might have thought, that the full .NET Framework is not a superset of .NET Core, although it is intended to be close to that. This has interesting implications for future compatibility. If .NET Core is intended to be more agile and to evolve more rapidly than the .NET Framework, since it is somewhat free of backwards compatibility constraints, we will soon find that there are features in .NET Core that do not exist in the .NET Framework, as well as vice versa; in other words, two incompatible stacks. That could be a problem.

Despite Microsoft’s impressive openness in publishing much of its .NET work and forming the .NET Foundation, I for one would appreciate a clearer presentation of the plans for .NET Core and the .NET Framework, and the extent to which the .NET Framework should now be considered a legacy, or Windows desktop-only, technology. I suspect the answer for the moment is “wait for Build.”

Xamarin announces large round of funding, plans international expansion

It is a case of “right time, right place” for Xamarin, as it scoops up Windows developers who need either to transition to iOS and Android, or to add mobile support to existing applications. You can also port applications to the Mac with its cross-platform development framework based on C#; no bad thing as Mac sales continue to boom.


Xamarin also fits with Microsoft’s new strategy, as I understand it, which is to provide strong support for iOS and Android for applications such as Microsoft Office, and services such as those hosted on Microsoft Azure.

Now the company has announced an additional $54 million of funding, which CEO Nat Friedman tells me is “the largest round of financing achieved by any mobile platform company ever”.

The financing comes from “new and existing investors, including Lead Edge Capital, Insight Venture Partners, Charles River Ventures, Ignition Partners, and Floodgate.”

What will the money be spent on? “Two things,” says Friedman. “We’re planning to expand our sales and marketing into Europe. We’re opening a sales office in London in the Fall. We did a roadshow with Microsoft in Europe and it was extremely successful. Second, we’re going to invest in improving the quality of our platforms.”

Friedman notes that mobile should not be considered a development niche. “Our view is that in the future all software will be mobile software in some way or another, when you build an application it will have to have some kind of mobile surface area.”

A few other points to note. One is that Xamarin Forms, recently introduced, has been a big hit with developers. “The Xamarin Forms forum has been our most popular forum,” says Friedman. “We’ve been really surprised.”

The company used to promote the idea of avoiding cross-platform code for the user interface, but then introduced Xamarin Forms as a cross-platform GUI framework, arguing that because it uses only native controls, it avoids the main drawbacks of the idea.

Some of the funding then will go into improving Xamarin Forms and tools to work with the framework.

Another key area is Visual Studio integration. The acquisition of the Visual Studio integration team from Clarius Consulting in May 2014 is significant here, since Clarius had strong expertise in this field.

Might Microsoft try to acquire Xamarin? It is an interesting question, and one which Friedman is not in a position to discuss. I am not a financial expert, but would guess that Xamarin’s expansion increases its ability to remain independent, though investors may be hoping to reap the rewards of an acquisition; who knows?

Should you use Entity Framework for .NET applications?

I have been working on a project which I thought would be simpler than it turned out to be – nothing new there, most software projects are like that.

The project involves upload and download of large files from Azure storage. There is a database as part of the application, nothing too demanding, but requiring some typical CRUD (Create, Retrieve, Update, Delete) functionality. I had to decide how to implement this.

First, a confession. I am comfortable using SQL and my normal approach to a database application is to use ADO.NET DataReaders to read data. They are brilliant; you just send some SQL to the database and back comes the data in a format that is easy to read back in C# code.

When I need to update the data, I use SqlCommand.ExecuteNonQuery which executes arbitrary SQL. It is easy to use parameters and transactions, and I get full control over how many connections are open and so on.

This approach has always worked well for me and I get excellent performance and complete flexibility.
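By way of illustration, here is the pattern (a generic sketch with an invented connection string, table and columns, not code from the project):

using System;
using System.Data.SqlClient;

class DataAccessDemo
{
    static void Main()
    {
        var connectionString = "Server=.;Database=MyDb;Integrated Security=true";

        // Reading rows with a DataReader
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT Id, FileName FROM Uploads WHERE Size > @size", conn))
        {
            cmd.Parameters.AddWithValue("@size", 1000000);
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0}: {1}", reader.GetInt32(0), reader.GetString(1));
            }
        }

        // Updating with ExecuteNonQuery
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "UPDATE Uploads SET FileName = @name WHERE Id = @id", conn))
        {
            cmd.Parameters.AddWithValue("@name", "renamed.dat");
            cmd.Parameters.AddWithValue("@id", 42);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}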

However, when coding in ASP.NET MVC and Visual Studio you are now steered firmly towards Entity Framework (EF), Microsoft’s object-relational mapping library. You can use a code-first approach. Simply create a C# class for the object you want to store, and EF handles all the drudgery of creating tables and building SQL queries, letting you concentrate on the unique features of your application.

In addition, you can right-click in the Solution Explorer, choose Add Controller, and a wizard will generate all the code for listing, creating, editing and deleting those objects.
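The class itself could hardly be simpler. A sketch of the kind of entity and context involved (names invented):

using System.Data.Entity;

public class Upload
{
    public int Id { get; set; }            // by convention, Id becomes the primary key
    public string FileName { get; set; }
    public long Size { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<Upload> Uploads { get; set; }  // EF creates an Uploads table from this
}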


Well, that is the idea, and it does work, but I soon ran into issues that made me wonder if I had made the right decision.

One of the issues is what happens when you change your mind. Maybe that field should be an Int rather than a String. Maybe you need a second phone number field. Maybe you need to create new tables. How do you keep the database in sync with your classes?

This is handled by Code First Migrations, which involves running commands that work out how the database needs to change and generate code to update it. It’s clever stuff, but the downside is that I now have a bunch of generated classes and a generated _MigrationHistory table which I did not need before. In addition, something went slightly wrong in my case and I ended up having to comment out some of the generated code in order to make the migration work.
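The commands themselves are run in the Package Manager Console and are short enough (migration name invented):

Enable-Migrations
Add-Migration AddSecondPhoneNumber
Update-Database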

At this point EF is creating work for me, rather than saving it.

Another issue I encountered was puzzling out how to do stuff beyond the most trivial. How do you replace an HTML edit box with a dropdown list? How do you exclude fields from being saved when you call dbContext.SaveChanges? What is the correct way to retrieve and modify data in pure code, without data binding?

I am not the first to have questions. I came across this documentation: an article promisingly entitled How to: Add, Modify, and Delete Objects, which tells you nothing of value; its own feedback rating suggests that few readers found it helpful either.

You should probably start here instead. Still, be aware that EF is by no means straightforward. Instead of having to know SQL and the basics of ADO.NET commands and DataReaders, you now have to know EF, and I am not sure it is any less intricate. You also need to be comfortable with data binding and LINQ (Language Integrated Query) to make sense of it all, though I will add that strong data binding support is one reason why EF is a good fit for ASP.NET MVC.
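For what it is worth, retrieving and modifying data in pure code turns out to be straightforward once you find the right pattern (a sketch reusing the invented classes above):

static void RenameUpload(int id, string newName)
{
    using (var db = new AppDbContext())
    {
        var upload = db.Uploads.Find(id);   // fetch by primary key
        if (upload != null)
        {
            upload.FileName = newName;
            db.SaveChanges();               // EF generates and executes the UPDATE
        }
    }
}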

Should you use Entity Framework? It remains, as far as I can tell, the strategic direction for data access on Microsoft’s platform, and once you have worked out the basics you should be able to put together simple database applications more quickly and more naturally than with manually coded SQL.

I am not sure it makes sense for heavy-duty data access, since it is harder to fine-tune performance and if you hit subtle bugs, you may end up in the depths of EF rather than debugging your own code.

I would be interested in hearing from other developers. Do you love EF, avoid it, or is it just about OK?

Xamarin 3.0 brings iOS visual design to Visual Studio, cross-platform XAML, F#, NuGet and more

Xamarin has announced the third version of its cross-platform tools, which use C# and .NET to target multiple platforms, including iOS, Android and Mac OS X.

Xamarin 3.0 is a big release. In summary:

Xamarin Designer for iOS

Using a visual designer for iOS Storyboard projects, you can create and modify a GUI in both Visual Studio and Xamarin Studio (Xamarin’s own IDE). The designer uses the native Storyboard format, so you can open and modify existing files created in Xcode on the Mac. The technology here is amazing, since the iOS controls are rendered remotely on a Mac and transmitted to the designer on Windows. See here for a quick hands-on.

Xamarin Forms

Xamarin has created the cross-platform GUI framework that it said it did not believe in. It is based on XAML though not compatible with Microsoft’s existing XAML implementations. There is no visual designer yet.

Why has Xamarin changed its mind? Pressure from enterprise customers, from what I heard from CEO Nat Friedman. They want to make internal mobile apps with many forms, and do not want to rewrite the GUI code for every mobile platform they support.

Friedman made the point that Xamarin Forms still render as native controls. There is no drawing code in Xamarin Forms.

“The challenge for us in building Xamarin Forms was to give people enhanced productivity without compromising the native approach. The mix and match approach, where you can mix in native code at any point, you can get a handle for the native control, we think we’ve got the right compromise. And we’re not forcing Xamarin Forms on you, this is just an option,”

he told me.

Again, there is a quick hands-on here.

F# support

F# is now officially supported in Xamarin projects. This brings functional programming to Xamarin, and will be warmly welcomed by the small but enthusiastic F# community (including, as I understand it, key .NET users in the financial world).

Portable Class Libraries

Xamarin now supports Microsoft’s Portable Class Libraries, which let you state what targets you want to support, and have Visual Studio ensure that you write compatible code. This also means that library vendors can easily support Xamarin if they choose to do so.

NuGet Packages

The NuGet package manager has transformed the business of getting hold of new libraries for use in Visual Studio. Now you can use it with Xamarin in both Visual Studio and Xamarin Studio.

Microsoft partnership

Perhaps the most interesting part of my interview with Nat Friedman was what he said about the company’s partnership with Microsoft. Apparently this is now close both from a technical perspective, and for business, with Microsoft inviting Xamarin for briefings with key customers.

Microsoft’s new open source direction for C# and .NET (and native compilation too): Anders Hejlsberg explains

At the April 2014 Build conference Microsoft made some far-reaching announcements about its .NET platform and the C# programming language. Yes, there was talk of C# 6.0, the next version, but the real changes are more profound. Specifically:

C# and Visual Basic have a new compiler, itself written in C#, code-named Roslyn. Roslyn is not just a new compiler; Microsoft now calls it the “.NET Compiler Platform”.

There is a new commitment to open source for .NET projects. Microsoft formed the .NET Foundation to oversee existing open source projects, including ASP.NET, Entity Framework, the Azure .NET SDK, and now Roslyn as well. “When it comes to development projects we are going to operate from the premise that open source is the default. Unless there are reasons why it does not work,” said C# lead architect Anders Hejlsberg.


Note that open source does not mean chaos. It does mean that you can fork the project if you want – the Roslyn license is Apache 2.0 – but getting Microsoft to accept new features you have contributed will not be trivial. Hejlsberg makes the point that language features are easy to add, but impossible to take away, so extreme care is necessary.

Microsoft is also supporting cross-platform C# to a greater extent than it has done in the past. The most obvious sign of this is its cooperation with Xamarin, which provides C# compilers for iOS and Android. Xamarin’s Miguel de Icaza got a top billing at Build, and is also involved in the .NET Foundation.

There is more though. The idea of standardised C# is re-emerging:

“The last ECMA standard was C# 2.0. There wasn’t a lot of demand for it, but that demand has recently risen and we have re-engaged with the ECMA community to produce a standard for C# 5.0,” said Hejlsberg.

This bears some unpacking. Why was there little demand for ECMA C#? Partly, I would guess, from the assumption that C# was firmly in Microsoft’s grip, with Java the obvious choice for cross-platform development. The main interest was from the Mono folk (Miguel de Icaza again), who implemented .NET for Linux and the Mac with some success, but nothing to disturb Java’s momentum.

The focus now though is on mobile, and interest in C# is stronger, mainly from Microsoft-platform developers reaching beyond Windows. There is also Unity, which uses C# as a scripting language for developing games for multiple platforms, including iOS, Android, Windows, Mac, Linux, Xbox, PS3 and Wii – PS4 is coming very soon.

Microsoft has now consciously embraced multiple platforms, as evidenced by Office for iOS as well as the Xamarin collaboration. “We want C# developers to build great applications across different form factors and different device platforms,” said Jay Schmelzer, Director of Program Management for Visual Studio.

You might observe that this position has been forced on the company by the rise of iOS and Android, a view which likely has some merit, but the impact it has on C# and .NET itself is still real.

I asked Hejlsberg to unpack the difference between the Roslyn project and C# 6.0, bearing in mind that both are covered on the Roslyn open source site; you can see the current status of C# 6.0 and the next Visual Basic here.

Roslyn is the name for the project that encompasses the new C# compiler and the new VB compiler and the new language services that they share. C# 6.0 is the name of the next version of the C# language, which will have a specification and which will have an implementation. We are implementing C# 6.0 on the Roslyn platform. We are not going to continue to evolve our old C++ C# compiler – the C# compiler was originally written in C++ and has been evolved up through C# 5.0. That is where we are going to retire that code base, and going forward versions of C# will be built on Roslyn and therefore will be built open source. Unlike previously where, boom, C# came down from the sky with a set of features, it is going to happen more organically now. People will submit pull requests, open up issues, and you will see us work on these features. You will see them from inception to fruition.

“The C# team, the Roslyn team, the VB team, their day to day workplace now is the open source site. That is where they check in code. It is a community in the making.

Even that is not all. At Build, Microsoft also announced .NET Native, which is a native compiler for C# and Visual Basic, now in preview for x64 Store apps. What is the difference between .NET Native and the existing NGen native compiler for .NET? Over to Hejlsberg:

NGen is the native feature that we currently support. NGen is really, “I’m going to JIT [Just in time compile] your code and then snapshot all the data structures and dump them in a file so that I can quickly rebuild that file later when you run this particular application”. But it is the same code generator and all the same features, and JIT is still there. NGen is really a way to pre-cache the JIT output and therefore get better performance, but it adds to the size of your app because you still have all the assemblies and metadata and then the NGen image as well.

.NET Native is a completely different approach. Instead of the JIT we use the backend from the C++ compiler. You can think of it as a linker that takes as input assemblies, and as output produces a PE [Portable Executable] executable. In the process this linker or code generator will analyse all the IL [Intermediate Language] that goes into the application and it will apply a thing known as tree-shaking where it eliminates all of the code that will never execute based on known execution roots.

In other words, the public static main of your program and also whatever pieces of your app that you designate as reflectable, they also become roots. Based on that we produce an optimised exe, and into that exe we link the pieces of the framework that you are referencing. We link in a garbage collector [GC], and it looks to the operating system just like an exe. When you run it, it runs a local GC in there and it is as efficient really as C++ code.

There are some restrictions associated with .NET Native, mainly that you can’t just willy-nilly reflect on the whole world. You can’t just generate new code and ask for that to be JITed because there may not be a JIT compiler. We are considering allowing you to link in a JIT compiler, but there are certain execution environments which don’t permit JITing, like Xbox. If you use reflection in your app you have to tell us what to keep reflectable, because otherwise we will optimise it away.
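In the current preview, what you “tell us” to keep reflectable goes into a Runtime Directives (rd.xml) file in the project. As I understand it, the default generated by the Visual Studio template is along these lines (treat this as indicative; the schema may change):

<Directives xmlns="http://schemas.microsoft.com/netfx/2013/01/metadata">
  <Application>
    <!-- Keep all types in the application's own assemblies available for reflection -->
    <Assembly Name="*Application*" Dynamic="Required All" />
  </Application>
</Directives>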

According to Schmelzer:

The preview out today is scoped to Store app x64 and ARM. We haven’t run into any technical limitation that shows it can’t be done across the breadth, it is just a matter of request and need.

Open source, native code compilation, and an innovative compiler: it adds up to huge changes for C# and .NET, positive ones as far as I can tell.

The Xamarin connection is intriguing though. Developers in general admire the technology as far as I can tell, but it is expensive, and paying out for a Xamarin subscription on top of maybe MSDN for Visual Studio is too much for some smaller organisations and does not encourage experimentation. Might Microsoft acquire Xamarin and build Visual Studio into an IDE targeting all the major mobile platforms, but with special hooks to Azure-hosted services?

That prospect makes sense to me, though it would be a shame if the energetic Xamarin culture became bogged down in big-company bureaucracy. Currently though: no news to report.

A quick hands-on with native code compilation for .NET

I had a quick look at the .NET Native Preview. I am interested to see what the benefits might be. Note that currently the preview only supports 64-bit Windows Store apps.

Here is what is promised:

For users of your apps, .NET Native offers these advantages:

  • Fast execution times
  • Consistently speedy startup times
  • Low deployment and update costs
  • Optimized app memory usage

I created a small C# app that counts prime numbers using a simple approach. I created it first as a Universal App that does not use .NET Native, and then as a second app that does use .NET Native.
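The “simple approach” in question is essentially trial division, something along these lines (a sketch, not the actual app):

// Deliberately naive, CPU-bound code: count the primes below max
static int CountPrimes(int max)
{
    int count = 0;
    for (int n = 2; n < max; n++)
    {
        bool isPrime = true;
        for (int d = 2; d * d <= n; d++)
        {
            if (n % d == 0) { isPrime = false; break; }
        }
        if (isPrime) count++;
    }
    return count;
}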


My initial results are disappointing. The time taken to count prime numbers is similar in both apps.


As a further test, I added code that adds the prime numbers found to a ListView control using an async task. The idea was to see if GUI interaction and multi-threading would be more revealing than simply crunching numbers in a tight loop. The apps take much longer with this enabled but, again, there is nothing to choose between the two.

Did I really succeed in compiling the app to native code? I think so. Comparing the contents of the AppX folders for the two builds, the natively compiled executable is accompanied by several additional DLLs that do not appear alongside the standard .NET executable.

I am not actually surprised that there is no performance benefit in my example. JIT (just-in-time) compilation means that any .NET application is native code at runtime, though optimization might be different.

I have ended up with a much larger app to deploy, though the relative difference would be less, I imagine, with an app that contains more code.

I can also readily believe that start-up time will be better for a native compiled app, since there is no need for the JIT compiler. However my app is so small that this is not significant.

My question though is what kind of app will benefit most from native compilation? I would be interested to see guidance on this.

Of course it is also possible that later iterations of the technology will yield different results.

Infragistics building cross-platform development strategy on XAML says CEO

I spoke to Dean Guida, CEO at Infragistics, maker of components for Windows, web and mobile development platforms. Windows developers with long memories will remember Sheridan Software, which created products including Data Widgets and VBAssist. Infragistics was formed in 2000 when Sheridan merged with another company, ProtoView.

In other words, this is a company with roots in the Microsoft developer platform, though for a few years now it has been madly diversifying in order to survive in the new world of mobile. Guida particularly wanted to talk about IgniteUI, a set of jQuery controls which developers use either for web applications or for mobile web applications wrapped as native with PhoneGap/Cordova.

“The majority of the market is looking at doing hybrid apps because it is so expensive to do native,” Guida told me.

Infragistics has also moved into the business iOS market, with SharePlus for SharePoint access on an iPad, and ReportPlus for reporting from SQL Server or SharePoint to iPad clients. Infragistics is building on what appears to be a growing trend: businesses which run Microsoft on the server, but are buying in iPads as mobile clients.


Other products include Nuclios, a set of native iOS components for developers, and IguanaUI for Android.

I asked Guida how the new mobile markets compared to the traditional Windows platform, for Infragistics as a component vendor.

“The whole market’s in transition,” he says. “People are looking at mobility strategy and how to support BYOD [Bring Your Own Device], all these different platforms, and a lot of our conversations are around IgniteUI. We need to reach the iPad, and more than the iPad as well.”

“There’s still a huge market doing ASP.NET, Windows Forms, WPF. It’s still a bigger market, but the next phase is around mobility.”

What about Windows 8, does he think Microsoft has got it right? Guida’s first reaction to my question is to state that the traditional Windows platform is by no means dead. “[Microsoft] may have shifted the focus away from Silverlight and WPF, but the enterprise hasn’t, in terms of WPF. The enterprise has not shifted away from WPF. We’ve brought some of our enterprise customers to Microsoft to show them that, some of the largest banks in the world, the insurance industry, the retail industry. These companies are making a multi-year investment decision on WPF, where the life of the application is 5 years plus.

“Silverlight, nobody was really happy about that, but Microsoft made that decision. We’re going to continue to support Silverlight, because it makes sense for us. We have a codebase of XAML that covers both WPF and Silverlight.”

Guida adds that Windows 8 and Windows Phone 8 are “great innovation”, mentioning features like Live Tiles and people hub social media aggregation, which has application in business as well. “They’re against a lot of headwind of momentum and popularity, but because Microsoft is such an enterprise company, they are going to be successful.”

How well does the XAML in Infragistics components, built for WPF and Silverlight, translate to XAML on the Windows Runtime, for Windows 8 store apps?

“It translates well now, it did not translate well in the beginning,” Guida says, referring to the early previews. “We’re moving hundreds of our HTML and XAML components to WinJS and WinRT XAML. We’re able to reuse our code. We have to do more work with touch, and we want to maintain performance. We’re in beta now with a handful of components, but we’ll get up to 100s of components available.”

It turns out that XAML is critical to the Infragistics development strategy for iOS as well as Windows. “We wrote a translator that translates XAML code to iOS and XAML code to HTML and JavaScript. We can code in XAML, add new features, fix bugs, and then it moves over to these other platforms. It’s helped us move as quickly as we’ve moved.”

What about Windows on ARM, as in Surface RT? “We fully support it,” says Guida, though “with a straight port, you lose performance. That’s what we’re working on.”