Tag Archives: .net

A glimpse into Microsoft history which goes some way to explaining the decline of Windows

Why is Windows in decline today? Short answer: because Microsoft lost out and/or gave up on Windows Phone / Mobile.

But how did it get to that point? A significant part of the story is the failure of Longhorn (when two to three years of Windows development was wasted in a big reset), and the failure of Windows 8.

In fact these two things are related. Here’s a post from Justin Chase; it is from back in May but only caught my attention when Jose Fajardo put it on Twitter. Chase was a software engineer at Microsoft between 2008 and 2014.

Chase notes that Internet Explorer (IE) stagnated because many of the developers working on it switched over to work on Windows Presentation Foundation, one of the “three pillars” of Longhorn. I can corroborate this to the extent that I recall a conversation with a senior Microsoft executive at Tech Ed Europe, in pre-Longhorn days, when I asked why not much was happening with IE. He said that the future lay in rich internet-connected applications rather than browser applications. Insightful perhaps, if you look at mobile apps today, but no doubt Microsoft also had in mind locking people into Windows.

WPF, based on .NET and DirectX, was intended to be used for the entire Windows shell in Longhorn. It was too slow, memory hungry, and buggy, eventually leading to the Longhorn reset.

“Ever since Longhorn the Windows team has had an extremely bitter attitude towards .NET. I don’t think its completely fair as they essentially went all in on a brand new technology and .NET has done a lot of evolving since then but nonetheless that sentiment remains among some of the now top players in Microsoft. So effectively there is a sentiment that some of the largest disasters in Microsoft history (IE’s fall from grace and multiple “bad” versions of Windows) are, essentially, totally the fault of gambling on .NET and losing (from their perspective).”

writes Chase.

This went on to impact Windows 8. You will recall that Windows Phone development was once based on Silverlight. Windows 8 however did not use Silverlight but instead had its own flavour of XAML. At the time I was bemused that Microsoft, with an empty Windows 8 app store, had not enabled compatibility with Windows Phone applications which would have given Windows 8 a considerable boost as well as helping developers port their code. Chase explains:

“So when Microsoft went to make their new metro apps for windows 8/10, they almost didn’t even support XAML apps but only C++ and JavaScript. It was only the passion of the developer community that pushed it over the edge and let it in.”

That was a shame because Silverlight was a great bit of technology, lightweight, powerful, graphically rich, and even cross-platform to some extent. If Microsoft had given developers a consistent and largely compatible path from Silverlight to Windows Phone to Windows 8 to Windows 10, rather than the endless changes of direction that happened instead, its modern Windows development platform would be stronger. Perhaps, even, Windows Phone / Mobile would not have been abandoned; and we would not have to choose today between the Apple island and the ad-driven Android.

State of Microsoft .NET: transition to .NET Core or be left behind

The transition of Microsoft’s .NET platform from Windows-only to cross-platform (and open source) is the right thing. Along with Xamarin (.NET for mobile platforms), it means that developers with skills in C#, F# and Visual Basic can target new platforms, and that existing applications can with sufficient effort be migrated to Linux on the server or to mobile clients.

That does not mean it is easy. Microsoft forked .NET to create .NET Core (it is only four years since I wrote up one of the early announcements on The Register) and the problem with forks is that you get divergence, making it harder to jump from one fork to the other.

At first this was disguised. The idea was that .NET Framework (the old Windows-only .NET) would be evolved alongside .NET Core and new language features would apply to both, at least initially. In addition, ASP.NET Core (the web framework) runs on either .NET Framework or .NET Core.

This is now changing. Microsoft has shifted its position so that .NET Framework is in near-maintenance mode and that new features come only to .NET Core. Last month, Microsoft’s Damian Edwards stated that ASP.NET Core will only run on .NET Core starting from 3.0, the next major version.

This week Mads Torgersen, C# Program Manager, summarised new features in the forthcoming C# 8.0. Many of these features will only work on .NET Core:

Async streams, indexers and ranges all rely on new framework types that will be part of .NET Standard 2.1. As Immo describes in his post Announcing .NET Standard 2.1, .NET Core 3.0 as well as Xamarin, Unity and Mono will all implement .NET Standard 2.1, but .NET Framework 4.8 will not. This means that the types required to use these features won’t be available when you target C# 8.0 to .NET Framework 4.8.

Default interface member implementations rely on new runtime enhancements, and we will not make those in the .NET Runtime 4.8 either. So this feature simply will not work on .NET Framework 4.8 and on older versions of .NET.
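
To make the dependency concrete, here is a rough sketch of the three features in question; the types involved (IAsyncEnumerable&lt;T&gt;, Index and Range) are exactly the .NET Standard 2.1 additions that .NET Framework 4.8 will not get:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

//Default interface member implementations need runtime support
//that .NET Framework 4.8 will not receive
interface ILogger
{
    void Log(string message);
    void LogError(string message) => Log("ERROR: " + message);
}

class CSharp8Demo
{
    //Async streams: IAsyncEnumerable<T> is a .NET Standard 2.1 type
    static async IAsyncEnumerable<int> CountAsync(int limit)
    {
        for (int i = 0; i < limit; i++)
        {
            await Task.Delay(100);
            yield return i;
        }
    }

    static async Task Main()
    {
        await foreach (int n in CountAsync(3))
            Console.WriteLine(n);

        //Indexers and ranges rely on the new Index and Range types
        string[] words = { "one", "two", "three", "four" };
        Console.WriteLine(words[^1]);                     //"four", counting from the end
        Console.WriteLine(string.Join(",", words[1..3])); //"two,three"
    }
}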

The obvious answer is to switch to .NET Core. Microsoft is making this more feasible by supporting WPF and Windows Forms with .NET Core, on Windows only. Entity Framework 6 will also be supported. It is likely that this will work on Windows 7 as well as Windows 10.

This move will not be welcome to all developers. Servicing for .NET Framework is automatic, via Windows Update or on-premises equivalents, but servicing for .NET Core requires developer attention. Inevitably some things will not work quite the same on .NET Core, and for long-term stability it may be preferable to stay with .NET Framework. The more rapid release cycle of .NET Core is not necessarily a good thing if you prioritise reliability over new features.

The problem though: from now on, .NET Framework will not evolve much. There are a few new things in .NET Framework 4.8, like high DPI support, an Edge-based browser control, and better touch support, but these are minimal, essential updates. In time, maintaining applications on .NET Framework will look like a mistake as application capabilities and performance fall behind. That means, if you are a .NET developer, .NET Core is in your future.

Should you convert your Visual Basic .NET project to C#? Why and why not…

When Microsoft first started talking about Roslyn, the .NET compiler platform, one of the features described was the ability to take some Visual Basic code and “paste as C#”, or vice versa.

Some years later, I wondered how easy it is to convert a VB project to C# using Roslyn. The SharpDevelop team has a nice tool for this, CodeConverter, which promises to “Convert code from C# to VB.NET and vice versa using Roslyn”. You can also find this on the Visual Studio marketplace. I installed it to try out.


Why would you do this though? There are several reasons, the foremost of which is cross-platform support. The Xamarin framework can use VB to some extent, but it is primarily a C# framework. .NET Core was developed first for C#. Microsoft has stated that “with regard to the cloud and mobile, development beyond Visual Studio on Windows and for non-Windows platforms, and bleeding edge technologies we are leading with C#.”

Note though that Visual Basic is still under active development and history suggests that your Windows VB.NET project will continue running almost forever (in IT terms that is). Even Visual Basic 6.0 applications still run, though you might find it convenient to keep an old version of Windows running for the IDE.

Still, if converting a project is just a right-click in Visual Studio, you might as well do it, right?


I tried it, on a moderately-sized VB DLL project. Based on my experience, I advise caution – though acknowledging that the converter does an amazing job, and is free and open source. There were thousands of errors which will take several days of effort to fix, and the generated code is not as elegant as code written for C#. In fact, I was surprised at how many things went wrong. Here are some of the issues:

The tool makes use of the Microsoft.VisualBasic namespace to simplify the conversion. This namespace provides handy VB features like DateDiff, which calculates the difference between two dates. The generated project failed to set a reference to this assembly, generating lots of errors about unknown objects called Information, Strings and so on. This is quick to fix. Less good is that statements using this assembly tend to be more convoluted, making maintenance harder. You can often simplify the code and remove the reference; but of course you might introduce a bug with careless typing. It is probably a good idea to remove this dependency, but it is not a problem if you want the quickest possible port.
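
As a hedged illustration (the names are invented), here is the shape of the problem, and the kind of simplification available:

using System;
using Microsoft.VisualBasic; //the extra reference the converter forgets to add

class DateDiffDemo
{
    static void Main()
    {
        var start = new DateTime(2018, 1, 1);
        var finish = new DateTime(2018, 6, 1);

        //The kind of code the converter produces, keeping the VB helper:
        long days = DateAndTime.DateDiff(DateInterval.Day, start, finish);

        //The idiomatic C# equivalent, with the dependency removed:
        double daysIdiomatic = (finish - start).TotalDays;

        Console.WriteLine(days + " / " + daysIdiomatic);
    }
}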

Moving from a case-insensitive language to a case-sensitive language is a problem. Visual Studio does a good job of making your VB code mostly consistent with regard to case, but that is not a fix. The converter was unable to fix case-sensitivity issues, and introduced some of its own (Imports System.Text became using System.text and threw an error). There were problems with inheritance, and even subtle bugs. Consider the following, admittedly ugly and contrived, code:

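(The original snippet was shown as an image; what follows is a reconstruction of the pattern described below, with invented names.)

Public Class Calculator
    Private Total As Integer = 100

    Public Function AddOne(ByVal total As Integer) As Integer
        'In case-insensitive VB, "Total" binds to the parameter,
        'which shadows the field: AddOne(5) returns 6
        Return Total + 1
    End Function
End Class

A literal conversion preserves the casing, so the C# version binds to the field instead:

public class Calculator
{
    private int Total = 100;

    public int AddOne(int total)
    {
        //In case-sensitive C#, "Total" is the field, so AddOne(5) returns 101
        return Total + 1;
    }
}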

Here, the VB coder has used different case for a parameter and for referencing the parameter in the body of the method. Unfortunately another variable with the different case is also accessible. The VB code and the converted C# code both compile but return different results. Incidentally, the VB editor will work very hard to prevent you writing this code! However it does illustrate the kind of thing that can go wrong and similar issues can arise in less contrived cases.

C# is stricter than VB, which causes errors in conversion. In most cases this is not a bad thing, but it can cause headaches. For example, VB will let you pass object members ByRef but C# will not. In fact, VB will let you pass anything ByRef, even literal values, which is a puzzle! So this compiles and runs:

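(Again the original was an image; this is a reconstruction of the sort of thing VB accepts, with invented names.)

Module ByRefDemo
    Sub Increment(ByRef value As Integer)
        value = value + 1
    End Sub

    Sub Main()
        'VB passes the literal ByRef via a hidden temporary variable,
        'so the "change" is silently discarded - C# refuses to compile this
        Increment(42)
    End Sub
End Module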

Another example is that in VB you can use an existing variable as the iteration variable, but in C# foreach you cannot.

Collections often go wrong. In VB you use an Item property to access the members of a collection like a DataReader. In C# the same default property is exposed only as an indexer, but the converter does not pick this up. For illustration, see the hypothetical fragment below.
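
using System.Data;

class ReaderDemo
{
    static string ReadName(IDataReader reader)
    {
        //VB accepts reader.Item("Name") or just reader("Name");
        //in C# the default property is only reachable as an indexer,
        //so converted code that still references .Item will not compile
        return (string)reader["Name"];
    }
}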

Overloading sometimes goes wrong. The converter does not always successfully convert overloaded methods. Sometimes parameters get stripped away and a spurious new modifier is added.

Bitwise operators are not correctly converted.

VB allows indexed properties and properties with parameters. C# does not. The converter simply strips out the parameters so you need to fix this by hand. See https://stackoverflow.com/questions/2806894/why-c-sharp-doesnt-implement-indexed-properties if the language choices interest you.

There is more, but the above gives some idea about why this kind of conversion may not be straightforward.

It is probably true that the higher the standard of coding in the original project, the more straightforward the conversion is likely to be, the caveat being that more advanced language features are perhaps more likely to go wrong.

Null strings behave differently

Another oddity is that VB treats a String set to null (Nothing) as equivalent to an empty string:

Dim s As String = Nothing

If (s = String.Empty) Then 'TRUE in VB
    MsgBox("TRUE!")
End If

C# does not:

String s = null;

if (s == String.Empty) //FALSE in C#
{
    //won't run
}

Same code, different result, which can have unfortunate consequences.
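
One way to sidestep the difference when porting (my suggestion, not something the converter does) is to make the null case explicit:

string s = null;

if (string.IsNullOrEmpty(s))
{
    //true for both null and empty, in C# and VB alike
}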

Worth it?

So is it worth it? It depends on the rationale. If you do not need cross-platform, it is doubtful. The VB code will continue to work fine, and you can always add C# projects to a VB solution if you want to write most new code in C#.

If you do need to move outside Windows though, conversion is worthwhile, and automated conversion will save you a ton of manual work even if you have to fix up some errors.

There are two things to bear in mind though.

First, have lots of unit tests. Strange things can happen when you port from one language to another. Porting a project well covered by tests is much safer.

Second, be prepared for lots of refactoring after the conversion. Aim to get rid of the Microsoft.VisualBasic dependency, and use the stricter standards of C# as an opportunity to improve the code.

What is happening with desktop development on Windows and will WPF be upgraded at last?

Once upon a time all Windows development was desktop development. Then there was web development, but that was a server thing. Then in October 2012 Windows 8 arrived, and it was all about full-screen, touch control and Store-delivered applications that were sandboxed and safe to run. Underneath this there was a new platform-within-a-platform called the Windows Runtime or WinRT (or sometimes Metro). Developing for Windows became a choice: new WinRT platform, or old-style desktop development, the latter remaining necessary if your application needed more features than were available in WinRT, or to run on Windows 7.

Windows 8 failed and was replaced by Windows 10 (July 2015), in large part a return to the desktop. The Start menu returned, and each application again had a window. WinRT lived on though, now rebranded as UWP (Universal Windows Platform). The big selling point was that your UWP app would run on phones, Xbox and HoloLens as well as PCs. It was still locked down, though less so, and still Store-delivered.

Then Microsoft decided to abandon Windows Phone, a decision obvious to Microsoft-watchers in June 2015 when ex-Nokia CEO Stephen Elop left Microsoft, just before the launch of Windows 10, even though Windows Phone was not formally killed off until much later. UWP now had a rather small u (that is, not very universal).

In addition, Microsoft decided that locking down UWP was not the way forward, and opened up more and more Windows APIs to the platform. The distinction between UWP and desktop applications was further blurred by Project Centennial, now known as Desktop Bridge, which lets you wrap desktop applications for Store delivery.

Perhaps the whole WinRT/UWP thing was not such a good idea. A side-effect though of all the focus on UWP was that the old development frameworks, such as Windows Forms (WinForms) and Windows Presentation Foundation (WPF), received little attention – even though they were more widely used. Some Windows 10 APIs were only available in UWP, while other features only worked in WinForms or WPF, giving developers a difficult decision.

The Build 2018 event, which was on last week in Seattle, was the moment Microsoft announced that it would endeavour to undo the damage by bringing UWP and desktop development together. “We’ve taken all the UI stacks and merged them together” said Mike Harsh and Scott Hunter in a session on “Modernizing desktop apps” (BRK3501 if you want to look it up).

According to Harsh and Hunter, Windows desktop application development is increasing, despite the decline of the PC (note that this is hardly a neutral source).


So what was actually announced? Here is a quick summary. Note that the announced features are for the most part applicable to future versions of Windows 10. As ever, Build is for the initial announcement. So features are subject to change and will not work yet, other than possibly in pre-release form.

Greater information density in UWP applications. WinRT/UWP was originally designed for touch control, and so has lots of white space. Most Windows users though have mouse and keyboard. The spacious UWP layout looked wrong on big desktop displays, and it made porting applications harder. The standard layout is getting denser, and a new Compact Size, an application setting, will pack still more information into the same space.


More controls for UWP. New DataGrid, Forms with data validation, Menu bar, and coming in future, Status bar, tab controls and Ribbon. The idea is to make UWP more suitable for line-of-business applications, which accounts for a large part of Windows application development overall.

New Windowing APIs for UWP. WinRT/UWP was designed for full-screen applications, not the popup-dialogs or floating windows possible in desktop applications. Those capabilities are coming though. We will get tool windows, light-dismiss windows (eg type and press Enter), and multiple windows on one thread so that they work like a single application when minimized or cycled through with alt-tab. Coming in future are topmost windows, modal windows, custom title bars, and maybe even MDI (Multiple Document Interface), though this last seems surprising since it is discouraged even in the desktop frameworks.

What many developers will care about more though is new features coming to desktop applications. There are two big announcements.

.NET Core 3.0 will support WinForms and WPF. This is big news, partly because .NET Core performs better than the Windows-only .NET Framework, but more importantly because it allows side-by-side deployment of the .NET runtime. Even better, a linker will let you deliver a .NET Core desktop application as a single executable with no dependencies. What performance gain? An example shown at Build was an application which uses File APIs running nearly three times faster on .NET Core 3.0.
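
How will such a project be declared? A reasonable guess, based on the current .NET Core project format (the SDK and property names are my assumption, and subject to change):

<Project Sdk="Microsoft.NET.Sdk.WindowsDesktop">
  <PropertyGroup>
    <OutputType>WinExe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <UseWPF>true</UseWPF>
  </PropertyGroup>
</Project>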


XAML Islands enabling UWP features in WinForms and WPF. The idea is that you can pop a UWP host control in your WinForms or WPF application, and show UWP content there. Microsoft is also preparing wrapper controls that you can use directly. Mentioned were WebView, MediaPlayer, InkCanvas, InkToolBar, Map and SwapChainPanel (for DirectX content). There will be a few compromises. The XAML host window will be rectangular (based on an HWND) which means non-rectangular and transparent content will not work correctly. There is also the Windows 7 problem: no UWP on Windows 7, so what happens to your XAML Islands? They will not run, though Microsoft is working on a mechanism that lets your application substitute compatible Windows 7 content rather than crashing.
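
As a sketch of the idea, hosting a wrapped UWP InkCanvas in a WPF window might look like this; the namespace and assembly names are assumptions, since none of this has shipped yet:

<Window x:Class="IslandsDemo.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:uwp="clr-namespace:Microsoft.Toolkit.Wpf.UI.Controls;assembly=Microsoft.Toolkit.Wpf.UI.Controls"
        Title="XAML Island" Height="400" Width="600">
    <Grid>
        <!--Rendered as UWP content inside a rectangular, HWND-based host-->
        <uwp:InkCanvas x:Name="Ink" />
    </Grid>
</Window>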

MSIX deployment. MSIX is Microsoft’s latest deployment technology. It will work with both UWP and Desktop applications, will support Windows 7 and 10, will provide for auto-updates, and will have tooling built into Visual Studio, as well as a packager for both your own and third-party applications. Applications installed with MSIX are managed and updated by Windows, have tamper protection, and are installed per-user. It seems to build upon the Desktop Bridge concept, the aim being to make Windows more manageable in the Enterprise as well as safer for all users, if Microsoft can get widespread adoption. The packaging format will also work on Android, Mac and Linux and you can check out the SDK here.


Will WPF or WinForms be updated?

The above does not quite answer the question, will WPF or Windows Forms be significantly updated, other than with the ability to use UWP content? I could not get a clear answer on this question at Build, though I was told that adding support for .NET Core 3.0 required significant changes to these frameworks so it is no longer true to say they are frozen. With regard to WPF Microsoft Corporate VP Julia Liuson told me:

“We will be looking at more controls, more capabilities. It is widely recognised that WPF is the best framework for desktop development on Windows. The fact that we’re moving on top of .NET Core 3.0 gives us a path forward.”

That said, I also heard that the team would rather write code once and use it across UWP, WPF and WinForms via XAML Islands, than write new controls for each framework. That makes sense, the difficulty being Windows 7. Microsoft would rather promote migration to Windows 10, than write new UI components that work across both Windows 7 and Windows 10.

What the Blazor! After Silverlight, .NET in the browser reappears by another route

Silverlight, Microsoft’s browser plug-in which included a cut-down .NET runtime, once seemed full of promise for developers looking for an end-to-end .NET solution, cross-platform on Windows and Mac, and with support for “out of browser” applications for a native-like experience.

Silverlight was killed by various factors, including the industry’s rejection of old-style browser plug-ins, and warring factions at Microsoft which resulted in Silverlight on Windows Phone, but not on Windows 8. The Windows 8 model won, with what became the Universal Windows Platform (UWP) in Windows 10, but this is quite a different thing with no cross-platform support. Or there is Xamarin which is cross-platform .NET, and one day perhaps Microsoft will figure out what to do about having both UWP and Xamarin.

Yesterday though Microsoft announced (though it was already known to those paying attention) Blazor, an experimental project for hosting the .NET Runtime in the browser via WebAssembly. The name derives from “Browser + Razor”, Razor being the syntax used by ASP.NET to combine HTML and C# in a web application. C# in Razor executes on the server, whereas in Blazor it executes on the client.
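
For a flavour, here is a minimal Blazor component in the current experimental syntax (which may well change); the C# in the @functions block runs in the browser:

@page "/counter"

<h1>Counter</h1>
<p>Current count: @currentCount</p>
<button onclick="@IncrementCount">Click me</button>

@functions {
    int currentCount = 0;

    void IncrementCount()
    {
        currentCount++;
    }
}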

Blazor is enabled by work the Xamarin team has done to compile the Mono runtime to WebAssembly. Although shipping a .NET runtime to the browser sounds like a relatively large download, the team is hoping that a combination of smart linking (to strip out unnecessary code in both applications and the runtime), caching and HTTP compression will make this acceptable.

This post by Steve Sanderson is a good technical overview. Some key points:

– You can run applications either as interpreted .NET IL (intermediate language) or pre-compiled

– Blazor is an SPA (Single Page Application) framework with solutions for routing, state management, dependency injection, unit testing and more

– UI components use HTML and CSS

– There will be a browser API which you can call from C# code

– You will be able to interop with JavaScript libraries

– Microsoft will provide ASP.NET libraries that integrate with Blazor, but you can use Blazor with any server-side technology

What version of .NET will be supported? This is where it gets messy. Sanderson says Blazor will support .NET Standard 2.0 or higher, but not completely, in that some functions will throw a PlatformNotSupportedException. The reason is that not all functions make sense in the context of a Blazor application.

Blazor sounds promising, if developers can get past the name, though the demo application on Azure currently gives me a 403 error. So there is this video from NDC Oslo instead.

The other question is whether Blazor has a future, or will join Silverlight and other failed attempts to create a new application platform that lasts. Microsoft demands much patience from its .NET community.

C# and .NET: good news and bad as Python rises

Two pieces of .NET news recently:

Microsoft has published a .NET Core 2.1 roadmap and says:

We intend to start shipping .NET Core 2.1 previews on a monthly basis starting this month, leading to a final release in the first half of 2018.

.NET Core is the cross-platform, open source implementation of the .NET Framework. It provides a future for C# and .NET even if Windows declines.

Then again, StackOverflow has just published a report on the most sought-after programming languages in the UK and Ireland, based on the tags on job advertisements on its site. C# has declined to fourth place, now below Python and with half the demand of JavaScript:

[chart: most in-demand languages in UK and Ireland job advertisements]

To be fair, this is more about increased demand for Python, probably driven by interest in AI, rather than decline in C#. If you look at traffic on the StackOverflow site C# is steady, but Python is growing fast:

[chart: StackOverflow traffic by language, with C# steady and Python growing]

The point that interests me, though, is the extent to which Microsoft can establish .NET Core beyond the Microsoft-platform community. Personally I like C# and would like to see it have a strong future.

There is plenty of goodness in .NET Core. Performance seems to be better in many cases, and cross-platform support is a big advantage.

That said, there is plenty of confusion too. Microsoft has three major implementations of .NET: the .NET Framework for Windows, Xamarin/Mono for cross-platform, and .NET Core for, umm, cross-platform. If you want cross-platform ASP.NET you will use .NET Core. If you want cross-platform Windows/iOS/macOS/Android, then it’s Xamarin/Mono.

The official line is that by targeting a specification (a version of .NET Standard), you can write cross-platform code irrespective of the implementation. It’s still rather opaque:

The specification is not singular, but an incrementally growing and linearly versioned set of APIs. The first version of the standard establishes a baseline set of APIs. Subsequent versions add APIs and inherit APIs defined by previous versions. There is no established provision for removing APIs from the standard.

.NET Standard is not specific to any one .NET implementation, nor does it match the versioning scheme of any of those runtimes.

APIs added to any of the implementations (such as, .NET Framework, .NET Core and Mono) can be considered as candidates to add to the specification, particularly if they are thought to be fundamental in nature.
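
In practical terms, targeting the specification is a one-line declaration in the project file; a library built like this should load on .NET Framework, .NET Core and Xamarin/Mono alike:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>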

Microsoft also says that plenty of code is shared between the various implementations. True, but it still strikes me that having both Xamarin/Mono and .NET Core is one cross-platform implementation too many.

Time for another look at “pure .NET”

Back in the Nineties there was a lot of fuss about “pure Java”. This meant Java code without any native code invocations that tie the application to a specific operating system.

It is possible to write cross-platform Java code that invokes native code, but it adds to the complexity. If it is an operating system API, you need conditional code so that the right API is called on each platform. If it is a custom library, it will have to be compiled separately for each platform.

Over on the Microsoft .NET side, developers have tended to take a more casual approach. After all, in the great majority of cases the code would only ever run on Windows. Further, Microsoft tended to steer developers towards Windows-only dependencies like SQL Server. After all, that is the value of owning a developer platform.

Times change. Microsoft has got the cross-platform bug, with its business strategy based on attracting businesses to its cloud properties (Office 365 and Azure) rather than Windows. The .NET Framework has been forked to create .NET Core, which runs on Mac and Linux as well as Windows. SQL Server is coming to Linux.

Another issue is porting applications from 32-bit to 64-bit, as I was reminded recently when migrating some ASP.NET applications to a new site. If your .NET code avoids P/Invoke (Platform Invoke) then you can compile for “Any CPU” and 64-bit will just work. If you use P/Invoke and want to support both 32-bit and 64-bit, it requires more care. IntPtr, used frequently in P/Invoke calls, is a different size. If you have custom native libraries, you need to compile them separately for each platform. The lazy solution is always to run as 32-bit, but that is a shame.
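
Here is a small example of the kind of declaration at issue; FindWindow is a real Windows API, and the IntPtr it returns is the size trap mentioned above:

using System;
using System.Runtime.InteropServices;

class PInvokeDemo
{
    //A typical P/Invoke declaration: the window handle is an IntPtr,
    //which is 4 bytes in a 32-bit process and 8 bytes in a 64-bit one
    [DllImport("user32.dll", CharSet = CharSet.Unicode)]
    static extern IntPtr FindWindow(string lpClassName, string lpWindowName);

    static void Main()
    {
        Console.WriteLine("IntPtr.Size: " + IntPtr.Size); //4 or 8

        IntPtr hwnd = FindWindow(null, "Untitled - Notepad");
        Console.WriteLine(hwnd == IntPtr.Zero ? "Not found" : "Found: " + hwnd);

        //This code is now tied to Windows: it will not run on .NET Core
        //on Linux or Mac, which is the cross-platform point above
    }
}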

What this means is that P/Invoke should only be used as a last resort. Arguably this has always been true, but the reasons are stronger today.

This is also an issue for libraries and components intended for general use, whether open source or commercial. It is early days for .NET Core support, but any native code dependencies will be a problem.

Breaking the P/Invoke habit will not be easy but “Pure .NET” is the way to go whenever possible.

Running ASP.NET 5.0 on Nano Server preview

I have been trying out Microsoft’s Nano Server Preview and wrote up initial experiences for the Register. One of the things I mentioned is that I could not get an ASP.NET app successfully deployed. After a bit more effort, and help from a member of the team, I am glad to say that I have been successful.


What was the problem? First, a bit of background. Nano Server does not run the .NET Framework, presumably because it has too many dependencies on pieces of Windows which Microsoft wanted to omit from this cut-down deployment. Nano Server does support .NET Core, also known as the Core CLR, which is the open source fork of the .NET Framework. This enables it to run PowerShell, although with a limited range of cmdlets. The two main ways I interact with Nano Server are PowerShell remoting, and Windows file sharing for copying files across.

On your development machine, you need several pieces in order to code for ASP.NET 5.0. Just installing Visual Studio 2015 RC will do, except that there is currently an incompatibility between the version of the ASP.NET 5.0 .NET Core runtime shipped with Visual Studio, and what works on Nano Server. This meant that my first effort, which was to build an empty ASP.NET 5.0 template app and publish it to the file system, failed on Nano Server with a NativeCommandError.

This meant I had to dig a bit more deeply into ASP.NET 5.0 running on .NET Core. Note that when you deploy one of these apps, you can include all the dependencies in the app directory. In other words, apps are self-hosting. The binary that enables this bit of magic is called DNX (.NET Execution Environment); it was formerly known as the K runtime.

Developers need to install the DNX SDK on their machines (Windows, Mac or Linux). There is currently a getting started guide here, though note that many of the topics in this promising documentation are as yet unwritten.


However, after installation you will be able to use several handy commands:

dnvm This is the .NET Version manager. You can have several versions of the DNX runtime installed and this utility lets you list them, set aliases to save typing full paths, and manage defaults.


dnu This is the .NET Development Utility (formerly kpm) that builds and publishes .NET Core projects. The two commands I found myself using regularly are dnu restore, which downloads NuGet (.NET repository) packages, and dnu publish, which packages an app for deployment. Once published, you will find .cmd files in the output which you use to start the app.

dnx This is the binary which you call to run an app. On the development machine, you can use dnx . run to run the console app in the current directory and dnx . web to run the web app in the current directory.
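
Putting those together, a typical inner loop looks something like this (beta-era commands and version numbers, liable to change):

dnvm list                 (show installed DNX runtimes)
dnvm use 1.0.0-beta5 -p   (select a runtime; -p makes the choice persistent)
dnu restore               (download the NuGet packages the project needs)
dnx . web                 (run the web app in the current directory)
dnu publish               (package the app, with .cmd launchers, for deployment)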

Now, back to my deployment issues. The Visual Studio templates are all hooked to DNX beta 4, and I was informed that I needed DNX beta 5 for Nano Server. I played around with trying to get Visual Studio to target the updated DNX but ran into problems so decided to ignore Visual Studio and do everything from the command line. This should mean that it would all work on Mac and Linux as well.

I had a bit of trouble persuading DNX to update itself to the latest unstable builds; the main issue I recall is targeting the correct repository. Your NuGet sources must include (currently) https://www.myget.org/F/aspnetvnext/api/v2.

Since I was not using Visual Studio, I based my samples on these, Hello World Console, MVC and Web apps that you can use for testing that everything works. My technique was to test on the development machine using dnx . web, then to use dnu publish and copy the output to Nano Server where I could run ./web.cmd in a remote PowerShell session.

Note that I found it necessary to specify the CoreClr 64-bit runtime in order to get dnu to publish the correct files. I tried to make this the default but for some reason* it reverted itself to x86:

dnu publish --runtime "c:\users\[USERNAME]\.dnx\runtime\dnx-coreclr-win-x64.1.0.0-beta5-11701"

Of course the exact runtime version to use will change soon.

If you run this command and look in the /bin/output folder you will find web.cmd, and running this should start the app. The port on which the app listens is set in project.json in the top level directory of the project source. I set this to 5001, opened that port in the Windows Firewall on the Nano Server, and got a started message on the command line. However I still could not browse to the app running on Nano Server; I got a 400 error. Even on the development machine it did not work; the browser just timed out.

It turned out that there were several issues here. On the development machine, which is running Windows 10 build 10074, I discovered to my annoyance that the web app worked fine with Internet Explorer, but not in Project Spartan, sorry Edge. I do not know why.

Support also gave me some tips to get this working on Nano Server. In order for the app to work across the network, you have to edit project.json so that localhost is replaced either with the IP number of the server, or with a *. I was also advised to add dnx.exe to the allowed apps in the firewall, but I do not think this is necessary if the port is open (it is a nuisance, since the location of dnx.exe changes for every app).
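
For reference, the relevant fragment of project.json looks something like this (a sketch; the schema is still in flux), with * in place of localhost so that remote requests are accepted:

{
  "commands": {
    "web": "Microsoft.AspNet.Hosting --server Microsoft.AspNet.Server.WebListener --server.urls http://*:5001"
  }
}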

Finally I was successful.

Final observations

It seems to me that ASP.NET vNext running on .NET Core has the characteristic of many open source projects: a few dedicated people who have little time for documentation and are so close to the project that their public communications assume a fair amount of pre-knowledge. The site I referenced above does have helpful documentation though, for the few topics that are complete. Some other posts I found helpful are this series by Steve Perkins, and the troubleshooting suggestions here, especially David Fowler’s post.

I like the .NET Core initiative overall, since I like C# and ASP.NET MVC and now it is becoming a true cross-platform framework. That said, the code does seem to be in rapid flux and I doubt it will really be ready when Visual Studio 2015 ships. The danger I suppose is that developers will try it in the first release, find lots of problems, and never go back.

I also like the idea of running apps in Nano Server, a low-maintenance environment where you can get the isolation of a dedicated server for your app at low cost in terms of resources.

No doubt though, the lack of pieces that you expect to find on Windows Server will be an issue and I am not sure that the mainstream Microsoft developer ecosystem will take to it. Aidan Finn is not convinced, for example:

Am I really expected to deploy a headless OS onto hardware where the HCL certification has the value of a bucket with a hole in it? If I was to deploy Nano, even in cloud-scale installations, then I would need a super-HCL that stress tests all of the hardware enhancements. And I would want ALL of those hardware offloads turned OFF by default so that I can verify functionality for myself, because clearly, neither Microsoft’s HCL testers nor the OEMs are capable of even the most basic test right now.

Finn’s point is that if your headless server is having networking issues it is hard to troubleshoot, since of course remote tools will not work reliably. That said, I have personally run Hyper-V Server (which is essentially Server Core with just the Hyper-V role) with great success for several years; I started keeping notes on how to troubleshoot from the command line and found solutions to common problems. If networking fails with Nano Server then yes, you have a problem, but there is always something you can do, even if it means mounting the Nano Server VHD or VHDX on another VM. Windows Server admins have become accustomed to a local GUI though and adjusting even to Server Core has not been easy.

*the reason was that I did not use the -p argument with dnvm use, which would have made it persistent

Microsoft open sources heart of .NET: CoreCLR runtime now on GitHub

Microsoft’s CoreCLR is now available on GitHub. We knew this was coming, but it is still a significant step, since this piece is the very heart of .NET: the execution engine that consumes a .NET IL (Intermediate Language) executable and compiles it to machine code for execution. The IL can easily be decompiled back to C#; it is in a sense fairly close to what you wrote in the editor. The CLR piece compiles it to a native executable, and also handles garbage collection (automatic memory management) and interop with other native code libraries. The just-in-time compiler in CoreCLR is called RyuJIT.

CoreCLR is not the same as the .NET Framework CLR (as found in the Windows desktop today), though one thing we now learn is that it is a true subset:

CoreCLR is a subset of the .NET Framework CLR. They share the same codebase and are updated together. For example, an update to the .NET GC improves both CoreCLR and the .NET Framework CLR.

We setup a live 2-way mirror between the coreclr repo on GitHub and the .NET Framework TFS server within Microsoft. The latency of the mirror is low, measurable in minutes.

Contributions made to the coreclr repo are integrated to the Microsoft TFS server automatically and will become part of both the .NET Framework and .NET Core products. The same is true in reverse, that .NET Framework CLR changes (within the CoreCLR subset) are mirrored to the CoreCLR repo. These changes will sometimes result in large commits to unrelated components.

This is good news since it reduces the risk of fragmentation between the .NET Framework and the CoreCLR. Note that the same does not apply to the framework libraries, which are forked between .NET Framework and CoreFX. The reason for the fork is to enable cross-platform .NET and to benefit from greater modularity in the Framework without breaking the existing .NET Framework.

Some other points of interest:

  • CoreCLR will run on Linux and Mac, but not yet; this is work in progress
  • CoreCLR powers Windows Phone apps as well as ASP.NET 5
  • CoreCLR uses the CMake build system rather than MSBuild, because it runs cross-platform

There is a key architectural difference between CoreCLR and the .NET Framework, which is that in CoreCLR each application is deployed with the runtime and libraries it requires, whereas in the .NET Framework applications depend on a system-managed runtime and shared libraries. This has the advantage that applications are standalone, and you could run one from say a portable USB drive on a system which did not have .NET or Mono installed.

The disadvantage, aside from greater use of disk space, is that patching the same libraries across multiple applications is hard. In the interview here Microsoft offers a clue about how it might come up with a solution for this. Jan Kotas on the CLR team talks about an ideal scenario where identical copies of the same DLL are in fact shared even though each application appears to have its own copy. This sounds similar to the mechanism used by de-duplication in Windows Server. The file system makes it look as if several copies of a file exist in different directories, but in fact there is only one. If you update a file though, the right thing happens and only the virtual copy that you overwrite is changed. It sounds as if Kotas has in mind a variant where you could say, “update this file and all its instances elsewhere.” This would of course somewhat undermine the concept of app-isolated dependencies; but you know what they say about cakes and eating them:

“The ideal we should get to is every application has a local copy of everything. People eventually get to a point where through some OS mechanisms or through some other means the DLLs that are the same between different applications would get shared. That way nobody needs to worry about is this shared, or is it not shared. The ideal place that we’d like to get to is that sharing happens under the hood. It can happen through different mechanisms for different applications. [That would be the] ideal place for the runtime and how to version it.”

said Kotas. Possibly I am misinterpreting this; but it does sound like some kind of sharing-but-not-sharing solution to the patching problem.

Another point to note: a managed code application cannot execute without help. In order to run, every managed application needs three things:

1. The application code

2. The CLR – either CoreCLR or the .NET Framework CLR

3. A CLR host which loads the CLR and instructs it to execute the application. The CLR host has to be native code, for obvious reasons.

In the .NET Framework this third piece is invisible, since it is handled by the operating system (though apparently SQL Server is a special case). In the CoreCLR world though, you need to think about the CLR host. ASP.NET 5.0 has the KRuntime (K probably stands for Katana), which I think is the same as Project K. If you want to test CoreCLR today, you can use a host called CoreConsole which (as its name implies) lets you run console apps. Apparently there are a few technical problems using CoreCLR with ASP.NET 5 at the moment.


What is .NET Core, “the foundation of all future .NET platforms”?

I have been looking at .NET Core, an official Microsoft open source project which you can find on github and which is at the heart of Microsoft’s plans to open source most of its .NET technology.

Currently there are three Microsoft repositories for the .NET Core platform: the .NET Compiler Platform (“Roslyn”), ASP.NET 5, and the .NET Core Framework. Note that these are all v.Next versions of the .NET Framework. ASP.NET 5 and the .NET Core Framework are on github, but Roslyn is on CodePlex, Microsoft’s open source repository site. There is also a github repository for Entity Framework 7, currently part of ASP.NET though I am not sure that it belongs there. The current version of EF is 6.1.1, but the code for this is on CodePlex. The KRuntime, which is the implementation of the parts of the .NET Runtime needed to host an ASP.NET application, is also in the ASP.NET repository. Its full name is the K Runtime Environment (KRE); I am not sure what K stands for. Note that Microsoft has only promised to open source the .NET server stack, not desktop frameworks like Windows Presentation Foundation.

I had a look at the .NET Core Framework. This is the key set of libraries for .NET applications. The easiest way to build the core libraries is from the command line. Open a Visual Studio 2013 Developer Command Prompt (which sets up the path and environment for command line builds), go to your clone of the github repository and type build.


Cool. But what is in it? Not that much: System.Collections, Parallel LINQ, Vectors and XML libraries.

“More is coming soon. Stay tuned!” say the docs. And in this blog post by Microsoft’s Immo Landwerth:

Consider the subset we have today a down-payment on what is to come. Our goal is to open source the entire .NET Core library stack by Build 2015.

Landwerth says that Microsoft is “currently figuring out the plan for open sourcing the runtime”; this is the native code that creates the .NET Virtual Machine which executes .NET code.

Of course there is also Mono, the old open source implementation of .NET which is from an independent code base.

This is exciting stuff for .NET developers, especially since official runtimes for Linux and Mac are also promised, but also somewhat confusing. What is .NET Core versus what we have known as the .NET Framework?

Here is a diagram from Landwerth’s blog:

[diagram from Landwerth’s blog]

I presume that the top left box (.NET Framework) has not been promised as open source, but the other two boxes have. Note that ASP.NET 5 will run on either .NET Core or the full .NET Framework; and that .NET Native – the project to compile a .NET application as true native code – sits as part of .NET Core.

Store apps (also known as Windows Runtime apps, or Metro apps) are not covered in the above diagram, but since .NET Native currently only works for Store apps, maybe .NET Core is also the .NET runtime for Store apps. Landwerth says:

.NET Core is a modular development stack that is the foundation of all future .NET platforms. It’s already used by ASP.NET 5 and .NET Native.

There are also some clues about .NET Core in the home page for the github repository:

.NET Core and the .NET Framework have (for the most part) a subset-superset relationship. .NET Core is named "Core" since it contains the core features from the .NET Framework, for both the runtime and framework libraries. For example, .NET Core and the .NET Framework share the GC, the JIT and types such as String and List<T>. We’ll continue improving these components for both .NET Core and .NET Framework.

.NET Core was created so that .NET could be open source, cross platform and be used in more resource-constrained environments. We have also published a subset of the .NET Reference Source under the MIT license, so that you and the community can port additional .NET Framework features to .NET Core.

The second paragraph is intriguing. Microsoft has posted parts of the source for the .NET Framework library so that the community can port some of it to .NET Core. What this means I think is not that this code should be part of .NET Core (otherwise it becomes more than just core) but rather that it would run on .NET Core.

It seems, contrary to what you might have thought, that the full .NET Framework is not a superset of .NET Core, although it is intended to be close to that. This has interesting implications for future compatibility. If .NET Core is intended to be more agile and to evolve more rapidly than the .NET Framework, since it is somewhat free of backwards compatibility constraints, we will soon find that there are features in .NET Core that do not exist in the .NET Framework as well as vice versa; in other words, two incompatible stacks. That could be a problem.

Despite Microsoft’s impressive openness in publishing much of its .NET work and forming the .NET Foundation, I for one would appreciate a clearer presentation of the plans for .NET Core and .NET Framework and the extent to which .NET Framework should now be considered a legacy or Windows desktop only technology. I suspect the answer for the moment is “wait for Build.”