Running ASP.NET 5.0 on Nano Server preview

I have been trying out Microsoft’s Nano Server Preview and wrote up my initial experiences for The Register. One of the things I mentioned is that I could not get an ASP.NET app successfully deployed. After a bit more effort, and help from a member of the team, I am glad to say that I have been successful.


What was the problem? First, a bit of background. Nano Server does not run the .NET Framework, presumably because it has too many dependencies on pieces of Windows which Microsoft wanted to omit from this cut-down deployment. Nano Server does support .NET Core (also known as CoreCLR), the open source fork of the .NET Framework. This enables it to run PowerShell, albeit with a limited range of cmdlets. My two main ways of interacting with Nano Server are PowerShell remoting, and Windows file sharing for copying files across.
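For reference, connecting from the development machine looks something like the following; the IP address is just an example, and since a fresh Nano Server typically sits in a workgroup rather than a domain, you may first need to add it to your TrustedHosts list:

# On the development machine: trust the Nano Server, then open a remote session
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "192.168.1.50" -Force
Enter-PSSession -ComputerName 192.168.1.50 -Credential (Get-Credential)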

On your development machine, you need several pieces in order to code for ASP.NET 5.0. Installing Visual Studio 2015 RC will give you all of them, except that there is currently an incompatibility between the version of the ASP.NET 5.0 .NET Core runtime shipped with Visual Studio and the one that works on Nano Server. This meant that my first effort, which was to build an empty ASP.NET 5.0 template app and publish it to the file system, failed on Nano Server with a NativeCommandError.

This meant I had to dig a bit more deeply into ASP.NET 5.0 running on .NET Core. Note that when you deploy one of these apps, you can include all the dependencies in the app directory. In other words, apps are self-hosting. The binary that enables this bit of magic is called DNX (.NET Execution Environment); it was formerly known as the K runtime.

Developers need to install the DNX SDK on their machines (Windows, Mac or Linux). There is currently a getting started guide here, though note that many of the topics in this promising documentation are as yet unwritten.


However, after installation you will be able to use several handy commands:

dnvm This is the .NET Version Manager. You can have several versions of the DNX runtime installed, and this utility lets you list them, set aliases to save typing full paths, and manage defaults.
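For illustration, typical commands look like this (the version string and switches here are from the beta builds and will no doubt change):

dnvm list
dnvm install 1.0.0-beta5-11701 -r coreclr -arch x64
dnvm use 1.0.0-beta5-11701 -r coreclr -arch x64 -p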


dnu This is the .NET Development Utility (formerly kpm) that builds and publishes .NET Core projects. The two commands I found myself using regularly are dnu restore, which downloads NuGet (.NET repository) packages, and dnu publish, which packages an app for deployment. Once published, you will find .cmd files in the output which you use to start the app.

dnx This is the binary which you call to run an app. On the development machine, you can use dnx . run to run the console app in the current directory and dnx . web to run the web app in the current directory.
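Putting those together, the day-to-day inner loop on the development machine looks roughly like this:

dnu restore
dnx . web
dnu publish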

Now, back to my deployment issues. The Visual Studio templates are all hooked to DNX beta 4, and I was informed that I needed DNX beta 5 for Nano Server. I played around with trying to get Visual Studio to target the updated DNX but ran into problems so decided to ignore Visual Studio and do everything from the command line. This should mean that it would all work on Mac and Linux as well.

I had a bit of trouble persuading DNX to update itself to the latest unstable builds; the main issue I recall is targeting the correct repository. Your NuGet sources must include (currently) https://www.myget.org/F/aspnetvnext/api/v2.
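On Windows the simplest place to add the feed is the per-user NuGet.config (in %AppData%\NuGet), or a NuGet.config alongside the solution; a minimal example, keeping the official feed as well, would be something like:

<configuration>
  <packageSources>
    <add key="AspNetVNext" value="https://www.myget.org/F/aspnetvnext/api/v2" />
    <add key="NuGet.org" value="https://www.nuget.org/api/v2/" />
  </packageSources>
</configuration>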

Since I was not using Visual Studio, I based my samples on these: Hello World console, MVC and web apps that you can use to test that everything works. My technique was to test on the development machine using dnx . web, then to use dnu publish and copy the output to Nano Server, where I could run ./web.cmd in a remote PowerShell session.
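Assuming the file share is reachable from the development machine, the copy-and-run step looks roughly like this (the server address and paths are examples only):

# From the development machine
Copy-Item -Recurse .\bin\output \\192.168.1.50\c$\aspnetapp

# Then, in a remote PowerShell session on the Nano Server
cd C:\aspnetapp
./web.cmd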

Note that I found it necessary to specify the CoreClr 64-bit runtime in order to get dnu to publish the correct files. I tried to make this the default but for some reason* it reverted itself to x86:

dnu publish --runtime "c:\users\[USERNAME]\.dnx\runtime\dnx-coreclr-win-x64.1.0.0-beta5-11701"

Of course the exact runtime version to use will change soon.

If you run this command and look in the /bin/output folder you will find web.cmd, and running this should start the app. The port on which the app listens is set in project.json in the top level directory of the project source. I set this to 5001, opened that port in the Windows Firewall on the Nano Server, and got a started message on the command line. However I still could not browse to the app running on Nano Server; I got a 400 error. Even on the development machine it did not work; the browser just timed out.
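For reference, opening the port can be done from the remote PowerShell session, assuming the NetSecurity cmdlets are included in your Nano Server build:

New-NetFirewallRule -DisplayName "ASP.NET 5 app" -Direction Inbound -Protocol TCP -LocalPort 5001 -Action Allow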

It turned out that there were several issues here. On the development machine, which is running Windows 10 build 10074, I discovered to my annoyance that the web app worked fine with Internet Explorer, but not in Project Spartan (sorry, Edge). I do not know why.

Support also gave me some tips to get this working on Nano Server. In order for the app to work across the network, you have to edit project.json so that localhost is replaced either with the IP address of the server, or with a *. I was also advised to add dnx.exe to the allowed apps in the firewall, but I do not think this is necessary if the port is open (it would be a nuisance, since the location of dnx.exe changes for every app).
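For completeness, in the beta templates the listening address lives in the web command in project.json; after replacing localhost, the relevant section looks something like this (the hosting server and argument names come from the template and have varied between betas):

"commands": {
  "web": "Microsoft.AspNet.Hosting --server Microsoft.AspNet.Server.WebListener --server.urls http://*:5001"
}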

Finally I was successful.

Final observations

It seems to me that ASP.NET vNext running on .NET Core has the characteristics of many open source projects: a few dedicated people who have little time for documentation and are so close to the project that their public communications assume a fair amount of pre-knowledge. The site I referenced above does have helpful documentation though, for the few topics that are complete. Some other posts I found helpful are this series by Steve Perkins, and the troubleshooting suggestions here, especially David Fowler’s post.

I like the .NET Core initiative overall, since I like C# and ASP.NET MVC, and now it is becoming a true cross-platform framework. That said, the code does seem to be in rapid flux and I doubt it will really be ready when Visual Studio 2015 ships. The danger, I suppose, is that developers will try it in the first release, find lots of problems, and never go back.

I also like the idea of running apps in Nano Server, a low-maintenance environment where you can get the isolation of a dedicated server for your app at low cost in terms of resources.

No doubt though, the lack of pieces that you expect to find on Windows Server will be an issue and I am not sure that the mainstream Microsoft developer ecosystem will take to it. Aidan Finn is not convinced, for example:

Am I really expected to deploy a headless OS onto hardware where the HCL certification has the value of a bucket with a hole in it? If I was to deploy Nano, even in cloud-scale installations, then I would need a super-HCL that stress tests all of the hardware enhancements. And I would want ALL of those hardware offloads turned OFF by default so that I can verify functionality for myself, because clearly, neither Microsoft’s HCL testers nor the OEMs are capable of even the most basic test right now.

Finn’s point is that if your headless server is having networking issues it is hard to troubleshoot, since of course remote tools will not work reliably. That said, I have personally run Hyper-V Server (which is essentially Server Core with just the Hyper-V role) with great success for several years; I started keeping notes on how to troubleshoot from the command line and found solutions to common problems. If networking fails with Nano Server then yes, you have a problem, but there is always something you can do, even if it means mounting the Nano Server VHD or VHDX on another VM. Windows Server admins have become accustomed to a local GUI though and adjusting even to Server Core has not been easy.

*The reason was that I did not use the -p argument with dnvm use, which would have made it persistent.

Cloud storage sums: how does the cost compare to backing up to your own drives?

Google now offers Cloud Storage Nearline (CSN) at $0.01 per GB per month.

Let’s say you have 1TB of data to store. That will cost $10 per month. Getting the data there is free if you have unlimited broadband, but getting it all back out (in the event of a disaster) costs $0.12 per GB, i.e. $120.

A 1TB external drive is around £45 or $58 (quick prices from Amazon for USB 3.0 drives). CSN is not an alternative to local storage, but a backup; you will still have something like network attached storage, preferably with RAID resilience, to actually use the data day to day. The 1TB external drive would be your additional and preferably off-site backup. For the $120 per annum that CSN will cost, you can buy two or three of these.

The advantage of the CSN solution is that it is off-site, without the hassle of managing off-site drives, and probably more secure (cloud hack risks versus the chances of leaving a backup drive in a bus or taxi, or having it nabbed from a car, say). Your 1TB drive could go clunk, whereas Google will manage resilience.

Cloud-based backup is also more amenable to automation, unless you have the luxury of a connection to some other office or datacentre.

Still, even at these low prices you are paying a premium versus a DIY solution. And let’s not forget performance; anyone still on ADSL or other asymmetric connections will struggle with large uploads (typically 1-2 Mb/s upstream) while USB 3.0 is pretty fast (typically around 100 MB/s for an external hard drive, and the interface itself is faster still). If you have the misfortune to have data that changes frequently – a particularly awkward case is the VHDs (virtual hard drives) that back virtual machines – then cloud backup becomes difficult.
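As a rough illustration, assuming a 2 Mb/s ADSL uplink: 1TB is about 8,000,000 megabits, which works out at around 4,000,000 seconds, or some 46 days of continuous transfer, whereas copying the same data to a USB 3.0 drive at 100 MB/s takes under three hours.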

Windows 10: Moving Windows into the mobile and app era, take 2, and why Windows 8 is not so bad

I attended Microsoft’s Build conference last week where there was a big focus on Windows 10. I spent some time with the latest build, 10074, which came out last week, as well as attending various sessions on developing for the upcoming OS. I also spoke to Corporate VP Joe Belfiore, and I recommend this interview on the Reg which says a lot about Microsoft’s approach. Note that the company is determined to appeal to Windows 7 users who largely rejected Windows 8; Windows 10 is meant to feel more familiar to them.


That said, Microsoft is not backtracking on the core new feature in Windows 8, which is its new app platform, called the Windows Runtime (WinRT). In fact, in its latest guise as the Universal App Platform (UAP) it is more than ever at the forefront of Microsoft’s marketing effort.

Why is this? In essence, Microsoft needs a strong app ecosystem for Windows if it is to escape legacy status. That means apps which are store-delivered, run in a secure sandbox, install and uninstall easily, update automatically, and work on tablets as well as with keyboard and mouse. Interaction and data transfer between apps are managed through OS-controlled channels, called Contracts. Another advantage is that you do not need setup CDs or downloads when you get a new PC; your apps flow down automatically. When you think of it like this, the advantages are huge; but nevertheless the Windows 8 app platform largely failed. It is easy to enumerate some of the reasons:

  • Most users live in the Windows desktop and rarely transition to the “Metro” or “Modern” environment
  • Lack of Windows 7 compatibility makes the Windows 8 app platform unattractive to developers who want to target the majority of Windows users
  • Many users simply avoided upgrading to Windows 8, especially in business environments where they have more choice, reducing the size of the Windows 8 app market
  • Microsoft made a number of mistakes in its Windows 8 launch, including an uncompromising approach that put off new users (who felt, rightly, that “Metro” was forced upon them), lack of compelling first-party apps, and encouraging a flood of abysmal apps into the Store by prioritising quantity over quality

History will judge Windows 8 harshly, but I have some admiration for what Microsoft achieved. It is in my experience the most stable and best performing version of Windows, and despite what detractors tell you it works fine with keyboard and mouse. You have to learn a new way of doing a few things, such as finding apps in the Start screen, following which it works well.

The designers of Windows 8 took the view that the desktop and app environments should be separate. This has the advantage that apps appear in the environment they are designed for. Modern apps open up full-screen, desktop apps in a window (unless they are games that are designed to run full-screen). The disadvantage is that integration between the two environments is poor, and you lose one of the key benefits of Windows (from which it got its name), the ability to run multiple apps in resizable and overlapping windows.

Windows 10 takes the opposite approach. Modern apps run in a window just like desktop apps. The user might not realise that they are modern apps at all; they simply get the benefits of store delivery, isolation and so on, without having to think about it.


This sounds good, and following the failure of the first approach it is probably the right thing for Microsoft to do. However, there are a couple of problems. One is the risk of what has been called the “uncanny valley” in an app context, where apps nearly but not quite work in the way you expect, leading to a feeling of unease or confusion. Modern apps look a little different from true desktop apps in Windows 10, and behave a little differently as well. Modern apps have a different lifecycle: for example, they can enter a suspended state when they do not have the focus, or even be terminated by the OS if the memory is needed. A minimized desktop app keeps running, but a minimized modern app is suspended, and the developer has to take special steps if a task is to keep running in the background.
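To make the lifecycle difference concrete, here is a minimal sketch of the suspension pattern as it appears in a Windows 10 universal app, based on the UWP suspension and extended execution APIs; SaveStateAsync is a hypothetical helper, and extended execution (for keeping work going in the background) is a request the OS can refuse:

using System.Threading.Tasks;
using Windows.ApplicationModel;
using Windows.ApplicationModel.ExtendedExecution;
using Windows.UI.Xaml;

sealed partial class App : Application
{
    public App()
    {
        InitializeComponent();
        Suspending += OnSuspending; // hook the lifecycle event in the app's constructor
    }

    // Raised when the app loses focus and is about to be frozen; any state must be
    // saved before the deferral completes, because the process may later be terminated.
    private async void OnSuspending(object sender, SuspendingEventArgs e)
    {
        var deferral = e.SuspendingOperation.GetDeferral();
        await SaveStateAsync(); // hypothetical helper that persists app state
        deferral.Complete();
    }

    // Background work must be requested explicitly; the OS decides whether to allow it.
    private async Task<bool> RequestBackgroundWorkAsync()
    {
        var session = new ExtendedExecutionSession
        {
            Reason = ExtendedExecutionReason.Unspecified,
            Description = "Finish a long-running operation"
        };
        return await session.RequestExtensionAsync() == ExtendedExecutionResult.Allowed;
    }

    private Task SaveStateAsync() => Task.CompletedTask; // stand-in for real persistence
}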

Another issue with Windows 10 is that its attempt to recreate a Windows 8-like tablet experience is currently rather odd. Windows 10 “Tablet Mode” makes all apps run full screen, even desktop apps for which this is wholly inappropriate. Here is the Snipping Tool in Tablet Mode:

[Image: the Snipping Tool running full screen in Tablet Mode]

and here is the desktop Remote Desktop Connection:

[Image: Remote Desktop Connection running full screen in Tablet Mode]

Personally I find that Tablet Mode trips me up and adds little value, even when I am using a tablet without a keyboard, so I tend not to use it at all. I would prefer the Windows 8 behaviour, where Modern apps run full screen (or in a split view), but desktop apps open in a window on the desktop. Still, it illustrates the point, which is that integrating the modern and desktop environments has a downside; it is just a different set of compromises than those made for Windows 8.

Now, I do think that Microsoft is putting a more wholehearted effort into its UAP than it did for Windows 8 modern apps (even though both run on WinRT). This time around, the Store is better, the first-party apps are better (not least because we have Office), and the merging of the Windows Phone and Xbox platforms with the PC platform gives developers more incentive to come up with apps. Windows 10 is also a free upgrade for many users which must help with adoption. Even with all this, though, Microsoft has an uphill task creating a strong modern app ecosystem for Windows, and a lot of developers will take a wait and see approach.

The other huge question is how well users will take to Windows 10. Any OS upgrade has a problem to contend with, which is that users dislike change – perhaps especially what has become the Windows demographic, with business users who are by nature cautious, and many conservative consumer users. Users are contradictory of course; they dislike change, but they like things to be made better. It will take more than a Cortana demo to persuade a contented Windows 7 user that Windows 10 is something for them.

Note that I say that in full knowledge of how much potential the modern app model has to improve the Windows experience – see my third paragraph above.

Microsoft told me in San Francisco that things including Tablet Mode are still being worked on so a little time remains. It was clear at Build that there is a lot of energy and determination behind Windows 10 and the UAP so there is still room for optimism, even though it is also obvious that Windows 10 has to improve substantially on the current preview to have a chance of meeting the company’s goals.

HoloLens: a developer hands-on

I attended the “Holographic Academy” during Microsoft’s Build conference in San Francisco. It was aimed at developers, and we got a hands-on experience of coding a simple HoloLens app and viewing the results. We were forbidden from taking pictures so you will have to make do with my words; this also means I do not have to show myself wearing a bulky headset and staring at things you cannot see.


First, a word about HoloLens itself. The gadget is a headset that augments the real world with a 3D “projected” image. It is not really a hologram (otherwise everyone would see it), but a virtual one, created by combining what you see with digital images.

The effect is uncanny, since the image you see appears to stay in one place. You can walk around it, seeing it from different angles, close up or far away, just as you could with a real object.

That said, there were a couple of issues with the experience. One was that if you went too close to a projected image, it disappeared. From memory, the minimum distance was about 18 inches. Second, the viewport where you see the augmented reality was fairly small, and you could easily see around it. This was detrimental to the illusion, and sometimes made it a struggle to see as much of your hologram as you might want.

I asked about both issues and got the same response, essentially “no comment”. This is prototype hardware, so anything could change. However, according to another journalist who attended a hands-on demo in January, the viewport has got smaller, suggesting that Microsoft is compromising in its effort to make the technology into a commercially viable product.

Another odd thing about the demo was that after every step, we were encouraged to whoop and cheer. There was a Microsoft “mentor” for every pair of journalists, and it seemed to me that the mentors were doing most of the whooping and cheering. It is obvious that this is a big investment for the company, and I am guessing that this kind of forced enthusiasm is an effort to ensure a positive impression.

Lest you think I am too sceptical, let me add that the technology is genuinely amazing, with obvious potential both for gaming and business use.

The developer story

The development process involves Unity, Visual Studio, and of course the HoloLens device itself. The workflow is like this. You create an interactive 3D scene in Unity and build it, whereupon it becomes a Visual Studio project. You open the project in Visual Studio, and deploy it to HoloLens (connected over USB), just as you would to a smartphone. Once deployed, you disconnect the HoloLens and wear it in order to experience the scene you have created. Unity supports scripting in C#, running on Mono, which makes the development platform easy and familiar for Windows developers.

Our first “Holo World” project displayed a hologram at a fixed position determined by where you are when the app first runs. Next, we added the ability to move the hologram, selecting it with a wagging finger gesture, shifting our gaze to some other spot, and placing it with another wagging finger gesture. Note that for this to work, HoloLens needs to map the real world, and we tried turning on wire framing so you could see the triangles which show where HoloLens is detecting objects.

We also added a selection cursor, an image that looks like a red bagel (you can design your own cursor and import it into Unity). Other embellishments were the ability to select a sphere and make it fall to the floor and roll around, voice control to drop a sphere and then reset it back to the starting point, and “spatial audio” that appears to emanate from the hologram.
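A gaze cursor of that kind needs only a few lines of Unity script. The following is a minimal sketch of the idea rather than the Academy’s actual code: it casts a ray from the camera (which in HoloLens tracks your head) and parks the cursor object on whatever surface it hits.

using UnityEngine;

public class GazeCursor : MonoBehaviour
{
    void Update()
    {
        // The main camera follows the user's head, so its forward vector is the gaze.
        var head = Camera.main.transform;
        RaycastHit hit;
        if (Physics.Raycast(head.position, head.forward, out hit))
        {
            // Sit the cursor on the surface the user is looking at, facing outward.
            transform.position = hit.point;
            transform.rotation = Quaternion.LookRotation(hit.normal);
        }
    }
}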

All of this was accomplished with a few lines of C# imported as scripts into Unity. The development was all guided so we did not have to think for ourselves, though I did add a custom voice command so I could say “abracadabra” instead of “reset scene”; this worked perfectly first time.
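The tooling at the event was pre-release, so treat this as a sketch of the idea rather than the Academy script, using the KeywordRecognizer API from Unity’s Windows speech support; ResetScene is a hypothetical stand-in for whatever the command should actually do.

using UnityEngine;
using UnityEngine.Windows.Speech;

public class VoiceCommands : MonoBehaviour
{
    private KeywordRecognizer recognizer;

    void Start()
    {
        // Register the phrases to listen for; "abracadabra" replaces "reset scene".
        recognizer = new KeywordRecognizer(new[] { "abracadabra" });
        recognizer.OnPhraseRecognized += OnPhraseRecognized;
        recognizer.Start();
    }

    private void OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        if (args.text == "abracadabra")
        {
            ResetScene(); // hypothetical helper that puts the hologram back at its start position
        }
    }

    private void ResetScene()
    {
        // Placeholder for the actual reset logic.
    }
}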

For the last experiment, we added a virtual underworld. When the sphere dropped, it exploded, making a virtual pit in the floor through which you could see a virtual world with red birds flapping around. It was also possible to enter this world, by positioning the hologram above your head and dropping a sphere from there.

HoloLens has three core inputs: gaze (where you are looking), gesture (like the finger wag) and voice. Of these, gaze and voice worked really well in our hands-on, but gesture was more difficult and sometimes took several tries to get right.

At the end of the session, I had no doubt about the value of the technology. The development process looks easily accessible to developers who have the right 3D design skills, and Unity seems ideally suited for the project.

The main doubts are about how close HoloLens is to being a viable commercial product, at least in the mass market. The headset is bulky, the viewport too small, and there were some other little issues like lag between the HoloLens detection of physical objects and their actual position, if they were moving, as with a person walking around.

Watch this space though; it is going to be most interesting.