Category Archives: .net

Should you go to Microsoft Build?

image

In the beginning there was the Professional Developers Conference (PDC) – the first was in 1992. They were fantastic events, with deep dives into the innards of Windows and how to develop applications on Microsoft’s platform. Much of the technology presented was in early preview and often did not work quite right; some things that were presented never made it to production, famous examples being “Hailstorm”, also known as .NET My Services, and the WinFS file system originally slated for Windows Vista.

These events seemed to be a critical part of Microsoft’s development cycle. Internally teams would ready their latest stuff for a PDC session, which was then adapted and re-presented at other events around the world.

Then PDC kind-of morphed into Microsoft Build, the first of which took place in September 2011. Unlike PDC, Build was specifically focused on Windows, and was originally associated with Windows 8 and its new app platform, WinRT. Part of the vision behind Windows 8 was that it would have a strong app ecosystem and Build was about enthusing and informing developers about the possibilities.

As it turned out, the Windows 8 app ecosystem was a bit of a disaster for various reasons. Microsoft had another go with UWP (Universal Windows Platform) in Windows 10. Build in April 2015 was an amazing event, where the company appeared to be going all-out to make UWP on both desktop and mobile a success. Not only was the platform itself being enhanced, but we also got Project Centennial (deliver desktop applications via the Store), Project Astoria (compile Android apps to UWP) and Project Islandwood (compile iOS code for UWP).

Just a few months later the company made a huge about-turn. CEO Satya Nadella’s Aligning Engineering to Strategy memo signalled the beginning of the end for Windows Phone, and the departure of Stephen Elop and the dismantling of the Nokia devices acquisition. That was the end of the universal part of UWP.

Project Astoria was scrapped. The Windows Bridge for iOS (Project Islandwood) still just about exists, but its core rationale (get iOS apps to Windows Phone) is now irrelevant.

Nadella steered the company instead towards “the intelligent cloud” and to date that strategy has been successful, with impressive growth for Office 365 and Microsoft Azure.

Microsoft has announced Build 2018, which takes place in early May. I find it intriguing, given the history of Build, that Windows is not currently mentioned on the event’s home page. The page description metadata says:

Microsoft Build 2018, Seattle, WA May 7-9, 2018. Microsoft’s ultimate developer conference focused on cloud, artificial intelligence, mixed reality, and more.

In the main text of the page, about the only specific topics mentioned are these:

Take in keynotes by Microsoft CEO Satya Nadella and other visionaries behind the Intelligent Cloud and Intelligent Edge

That sounds like a focus on Azure and AI/ML/IoT/Big Data cloud services, and on mobile and IoT devices. It is a long way removed from the original concept of Build as all about Windows and its application platform.

Windows remains important to Microsoft, and to all of us who use it day to day. Still, if you think about cutting-edge software development today, Windows desktop applications are probably not the first thing to come to mind, nor UWP for that matter.

This being the case, it does make sense for the company to focus on its cloud services at Build, and on diverse mobile platforms through what is now an amazing range of cross-platform tools in Visual Studio.

Of course there will also be Windows content at Build, including Mixed Reality on Windows and HoloLens, Cortana skills and UWP improvements.

Still, if you can only get to one big Microsoft event in the year, Ignite in September is now a bigger deal and closer to the heart of the new (or current) Microsoft.

Let me add that these Microsoft events, whether Build or PDC, have on occasion seen some stunning announcements. Examples include the unveiling of C# and the .NET Framework, the 2003 Longhorn reveal (yes it all turned to dust), Windows 7 in 2008, and Windows 8 in 2011.

I would like to think that the company still has the capacity to surprise and amaze us; but it must be admitted that the current Build pitch is rather unexciting. Google I/O, incidentally, is on at the same time.

Microsoft introduces new feedback system for technical documentation, will delete existing comments

Microsoft is introducing a new feedback system for https://docs.microsoft.com, used for its technical documentation.

The new system, which you can already see for certain topics such as the Visual Studio IDE, is based on GitHub issues. When you leave a comment, you can specify whether it concerns documentation or product functionality.

image

So far so good, but the downside is that all existing comments will be deleted:

image

The statement “Old comments will not be carried over. If content within a comment thread is important to you, please save a copy.” is unhelpful: nobody knows which comments will be useful to them in the future.

Few things sap enthusiasm for community participation more than having all the past contributions into which you have put effort suddenly zapped. Nor is this the first time, as user guibirow notes:

As much I like the new system idea, I hate the fact that this is happening over and over.
It used to be a Disqus comment system, then moved to LiveFyre, then moved now to this new system, what will be the next?
The worst part of this all is that MS does not care about past content lost on these discussions, so many times I found issues described in the docs that are gone now.
Please, pay attention to your previous mistakes, don’t let the information be lost again, at lest import them as closed issue in the new system.

Sometimes progress has a cost and that is understood. However it is not impossible to migrate content from one system to another. It just takes effort.

Update: Microsoft’s Rob Eisenberg has responded with an explanation and mitigation plan regarding existing comments. He says that a straight migration of the comments is impossible:

    • There is a lot of garbage, spam and even dangerous content within existing LiveFyre comments which would violate GitHub terms of usage and our open source code of conduct, as well as cause security problems.
    • There isn’t a good way to map LiveFyre users to GitHub users and using a bot account to anonymously add comments is questionable with respect to OSS practices and GitHub terms of use.
    • For legal and privacy reasons, we cannot move user-associated data from one system to another without consent from users (GDPR).
    • LiveFyre conversations are threaded, while GitHub issues are not.
    • Placing the old comments into the GitHub Issues system would derail the entire GitHub Issues workflow for both customers and employees and muddle the data.
    • It isn’t clear whether there is a way to invoke GitHub APIs for a migration of this scale such that it wouldn’t violate GitHub API terms of use.

He also has an archiving proposal:

We would take the comments from an article on docs.microsoft.com and then convert them into a Markdown file. During this process, we would strip all user info (remember GDPR). The Markdown file would then be committed to a GitHub repo. Finally, at the bottom of the feedback section, next to the link that says "View on GitHub" we would add a second link that said something like "View Comment Archive". This link would connect you directly to the Markdown comment file for that page.

This sounds positive. At the same time, it is a mess that illustrates some of the disadvantages of a “best of breed” approach to solving technical problems. If Microsoft could use its own technology to host a documentation and commenting system, and a source code management and issue tracking system for that matter, this issue would not occur, and users would not need multiple accounts, which is what causes the legal issues mentioned above.

Microsoft in fact used to use its own platform for all the above but decided to shift to using third-party solutions because they worked better. That seemed to be a good thing, improving user experience and productivity, but becomes a problem when what seems to be the best third-party option changes.

What the Blazor! After Silverlight, .NET in the browser reappears by another route

Silverlight, Microsoft’s browser plug-in which included a cut-down .NET runtime, once seemed full of promise for developers looking for an end-to-end .NET solution, cross-platform on Windows and Mac, and with support for “out of browser” applications for a native-like experience.

Silverlight was killed by various factors, including the industry’s rejection of old-style browser plug-ins, and warring factions at Microsoft which resulted in Silverlight on Windows Phone, but not on Windows 8. The Windows 8 model won, with what became the Universal Windows Platform (UWP) in Windows 10, but this is quite a different thing with no cross-platform support. Or there is Xamarin which is cross-platform .NET, and one day perhaps Microsoft will figure out what to do about having both UWP and Xamarin.

Yesterday Microsoft announced (though it was already known to those paying attention) Blazor, an experimental project for hosting the .NET Runtime in the browser via WebAssembly. The name derives from “Browser + Razor”, Razor being the syntax used by ASP.NET to combine HTML and C# in a web application. C# in Razor executes on the server, whereas in Blazor it executes on the client.
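
To give a flavour, here is a minimal component sketch of my own, based on the early preview samples (the exact syntax may well change); the markup is HTML, while the code inside @functions is ordinary C# running in the browser:

@page "/counter"

<h1>Counter</h1>
<p>Current count: @currentCount</p>
<button onclick="@IncrementCount">Click me</button>

@functions {
    // plain C#, executed client-side by the .NET runtime on WebAssembly
    int currentCount = 0;

    void IncrementCount()
    {
        currentCount++;
    }
}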

Blazor is enabled by work the Xamarin team has done to compile the Mono runtime to WebAssembly. Although shipping a .NET runtime to the browser sounds like a relatively large download, the team is hoping that a combination of smart linking (to strip out unnecessary code in both applications and the runtime), caching and HTTP compression will make this acceptable.

This post by Steve Sanderson is a good technical overview. Some key points:

– you can run applications either as interpreted .NET IL (intermediate language) or pre-compiled

– Blazor is an SPA (Single Page Application) framework with solutions for routing, state management, dependency injection, unit testing and more

– UI components use HTML and CSS

– There will be a browser API which you can call from C# code

– you will be able to interop with JavaScript libraries

– Microsoft will provide ASP.NET libraries that integrate with Blazor, but you can use Blazor with any server-side technology

What version of .NET will be supported? This is where it gets messy. Sanderson says Blazor will support .NET Standard 2.0 or higher, but not completely, in the sense that some functions will throw a PlatformNotSupportedException. The reason is that not all functions make sense in the context of a Blazor application.

Blazor sounds promising, though the demo application on Azure currently gives me a 403 error, so there is this video from NDC Oslo instead.

The other question is whether Blazor has a future or will join Silverlight and other failed attempts to create a new application platform that works. Microsoft demands much patience from its .NET community.

Which .NET framework for Windows: UWP, WPF or Windows Forms?

Yes, mobile is the future of client applications, cross-platform is cool, web applications are amazing; but out there in the real world, there are still a ton of people who work all day with a Windows PC, and businesses that want PC applications in order to get their work done.

So when a business comes to you and says, we want a new Windows application to do this or that, and presuming they do not care about mobile or Macs or access over the internet but just want something that runs on their internal network, what framework do you choose?

image

Let us even assume that they all run Windows 10 so that UWP (Universal Windows Platform) is a realistic option.

If you want to code in .NET (which is a great choice for a Windows-only application, and with the possibility of migrating code to cross-platform via Xamarin’s compiler later), then you have three obvious choices:

Windows Forms

This is the framework for Windows desktop applications that was introduced at the same time as .NET itself, back in 2002. Of course it has been revised many times since. There was a big update in 2006 with .NET 2.0. That said, Microsoft intended it to be replaced by Windows Presentation Foundation (WPF, see below), so it has not been a focus of attention. In 2014, High DPI support was improved, with .NET 4.5.2, reflecting the fact that this ancient framework is still widely used.

Windows Forms is a nice wrapper around the Windows API, and easy to use in that it essentially uses X,Y layout. In other words, you can think of your form as a grid of pixels, with the position of each control determined at design time by its size and coordinates. This is great if you are designing and running on the same PC, but not so good when you deploy to other PCs with different display settings. It does kind-of scale if you follow certain rules, but successful scaling in a Windows Forms application is often difficult to achieve, so users may suffer chopped-off controls and text, or just ugly screens. Read this carefully if you use Windows Forms. And then read about High DPI support, which was improved again in .NET Framework 4.7.
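
By way of illustration, here is a minimal sketch of my own (the form and control names are invented) showing two of those rules: letting the form scale automatically and anchoring controls rather than fixing their size in pixels:

using System.Drawing;
using System.Windows.Forms;

public class MainForm : Form
{
    public MainForm()
    {
        // Scale the form with the system font/DPI rather than assuming 96 DPI
        AutoScaleMode = AutoScaleMode.Font;

        var notes = new TextBox
        {
            Multiline = true,
            Location = new Point(12, 12),
            Size = new Size(360, 200),
            // Resize with the form instead of staying at a fixed pixel size
            Anchor = AnchorStyles.Top | AnchorStyles.Bottom | AnchorStyles.Left | AnchorStyles.Right
        };
        Controls.Add(notes);
    }
}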

If you are writing a database application, you can generate datasets by drag and drop from the Server Explorer in Visual Studio and bind them to controls. I am not a fan of this database framework, which quickly gets convoluted, but you do not have to use it. However the ability to bind list and grid controls to any kind of .NET collection is fantastically useful.
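
For example, binding a plain collection to a grid takes only a few lines; a minimal sketch (the Customer class is invented for illustration):

using System.ComponentModel;
using System.Windows.Forms;

public class Customer
{
    public string Name { get; set; }
    public string City { get; set; }
}

public class CustomerForm : Form
{
    public CustomerForm()
    {
        var customers = new BindingList<Customer>
        {
            new Customer { Name = "Contoso", City = "Seattle" },
            new Customer { Name = "Fabrikam", City = "London" }
        };

        var grid = new DataGridView { Dock = DockStyle.Fill };
        grid.DataSource = customers;   // columns are generated from the public properties
        Controls.Add(grid);
    }
}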

Why is Windows Forms still in use? It is partly legacy and the fact that it is easier to maintain and enhance an existing application than to start again. It is also because, scaling issues aside, Windows Forms is reliable, well supported by both built-in and third-party controls, and easy to learn.

Windows Presentation Foundation

This was Microsoft’s second go at a GUI framework for .NET and in many respects a great improvement. It was introduced with .NET Framework 3.0 in 2006, part of the Vista wave of technology. Unlike Windows Forms, it is based on the DirectX graphics API, so great for multimedia and special effects. Scaling is built-in and based on layout managers. The underlying presentation language is based on XAML, an XML language. As with Windows Forms, there is deep support for binding data to controls.
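
You would normally declare WPF layout and bindings in XAML, but the same binding support is available from code; a minimal sketch of my own (the Person class is invented):

using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;

public class Person
{
    public string Name { get; set; }
}

public class MainWindow : Window
{
    public MainWindow()
    {
        var text = new TextBlock { FontSize = 24 };
        // Bind TextBlock.Text to the Name property of the DataContext
        text.SetBinding(TextBlock.TextProperty, new Binding("Name"));

        Content = new StackPanel { Children = { text } };
        DataContext = new Person { Name = "Hello WPF" };
    }
}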

Why would you not always use WPF rather than Windows Forms? The main issue is that the time you save on figuring out scaling is more than consumed by the time you spend on design. WPF is a designer-centric framework. It will repay your efforts, but if you just want to slap a couple of grids and a few buttons on a form to get a working business application, Windows Forms remains tempting.

Universal Windows Platform

Both Windows Forms and WPF are old, and Microsoft is pointing developers towards its Universal Windows Platform (UWP) instead. UWP is an evolution of the new application platform introduced in Windows 8 in 2012. If WPF was all about scaling and multimedia, the Windows 8 modern app platform is about touch support and Store-based deployment. The application model was also service based, the idea being that your app consumes services published over the internet. Until the Windows 10 Fall Creators Update, you could not use the .NET SQLClient to connect directly to a SQL Server database (you can now). The app platform became UWP with the launch of Windows 10 in 2015. UWP can use XAML for layout design, but it is not compatible with WPF.

Personally I have mixed feelings about UWP. Unfortunately it has suffered from Microsoft’s ever-changing development strategy. The Windows 8 app platform made sense to me as a way of bringing Windows into the tablet era and enabling applications that were more secure and more easily deployed, even if it tended to result in applications that were blocky and ugly. Microsoft then changed its mind about full-screen touch applications and came up with the UWP for Windows 10, where applications again run in a window, but with a new selling point: you could run your application on Windows Phone as well as desktop. Then the company canned Windows Phone, before UWP had properly launched, in effect deleting the “Universal” part of the platform.

UWP still offers Store delivery and isolation from other applications, better for security and stability. However there are a few things against it. First, users require Windows 10. Second, like WPF it is a designer-centric platform and not so good for running up quick business applications. Third, UWP apps behave differently from standard desktop applications, sometimes not in a good way.

I was using Microsoft’s bundled Photos application recently. I work a lot with images so this often pops up, as the default image viewer on Windows 10. I was not stressing it, but it crashed which, as is typical for a UWP app, means it just disappeared without any message or warning.

UWP will be three years old this summer, but I am not convinced that the platform is quite there yet. I find it hard to think of UWP apps that I love. The apps I know best are the built-in ones, Mail, Photos, Groove Music, Calculator, and I do not love any of them. Paint 3D is amazing but not my thing.

At the same time I do see the merits of UWP versus traditional Windows application deployment. The existence of the Desktop Bridge (formerly Project Centennial) means you can get many of those benefits while still using WPF or Windows Forms.

Closing thoughts

Perhaps something like Power Apps will render this discussion irrelevant before long. There are also other options for the desktop, such as Xamarin Forms if you still want to use .NET, or Electron for using web technologies for desktop applications.

Still, while it may seem surprising, even in 2018 I can think of reasons why you might use any of the above frameworks, even Windows Forms, for a business app targeting Windows.

Generating code for simple SQL Server data access without Entity Framework, works with .NET Core

I realise that Microsoft’s Entity Framework is the most common approach for data access in the .NET world, but I have also always had good results from a simple manual approach using DbConnection, DbCommand and DataReader objects, and like the fact that I can see and control exactly what SQL gets executed. If you prefer using Entity Framework or another abstraction that is fine and please stop reading now!

One snag with this more manual approach is that you have to write tedious code building SQL statements. I figured that someone must have written a utility application to generate this code but could not find one quickly so I did my own. It supports both C# and Visual Basic. The utility connects to a database and lets you generate a class for each table along with code for retrieving and saving these objects, ready for modification. Here you can see a generated class:

image

and here is an example of the generated data access code:

image

This is NOT complete code (otherwise I would be perilously close to writing my own ORM) but simply automates creating SQL parameters and SQL statements.

One of my thoughts was that this code should work well with .NET Core. The SqlClient implements the required classes. Here is my code for retrieving an author object, mostly generated by my utility:

public static ClsAuthor GetAuthor(string authorID)
{
    SqlConnection conn = new SqlConnection(ConnectString);
    SqlCommand cmd = new SqlCommand();
    SqlDataReader dr;
    ClsAuthor TheAuthor = new ClsAuthor();
    try
    {
        cmd.CommandText = "Select * from Authors where au_id = @auid";
        cmd.Parameters.Add("@auid", SqlDbType.Char);
        cmd.Parameters[0].Value = authorID;
        cmd.Connection = conn;
        cmd.Connection.Open();

        dr = cmd.ExecuteReader();

        if (dr.Read())
        {
            //Get Function
            TheAuthor.Auid = GetSafeDbString(dr, "au_id");
            TheAuthor.Aulname = GetSafeDbString(dr, "au_lname");
            TheAuthor.Aufname = GetSafeDbString(dr, "au_fname");
            TheAuthor.Phone = GetSafeDbString(dr, "phone");
            TheAuthor.Address = GetSafeDbString(dr, "address");
            TheAuthor.City = GetSafeDbString(dr, "city");
            TheAuthor.State = GetSafeDbString(dr, "state");
            TheAuthor.Zip = GetSafeDbString(dr, "zip");
            TheAuthor.Contract = GetSafeDbBool(dr, "contract");
        }
    }
    finally
    {
        conn.Close();
        conn.Dispose();
    }

    return TheAuthor;
}

Everything worked perfectly and I soon had a table showing the authors, using ASP.NET MVC.

In order to verify that it really does work with .NET Core I moved the project to Visual Studio for Mac and ran it there:

image

I may be unusual, but I am reassured that I have a relatively painless way to write a database application for .NET Core without using Entity Framework.

Microsoft updates the .NET stack with .NET Core 2.0 and updated Visual Studio. Should you use it?

Microsoft has released .NET Core 2.0, a major update to its open source, cross-platform version of the .NET runtime and C# language.

New features include implementation of .NET Standard 2.0 (a way of targeting code to run under multiple .NET platforms), and new platform support including Debian Stretch, macOS High Sierra and SUSE Linux Enterprise Server 12 SP2. There is preview support for both Linux and Windows on ARM32.

.NET Core 2.0 now supports Visual Basic as well as C# and F#. The version of C# has been bumped to 7.1, including async Main method support, inferred tuple names and default expressions.
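
A minimal sketch of my own showing the three C# 7.1 features mentioned (async Main, inferred tuple element names and the default literal):

using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    // async Main is new in C# 7.1
    static async Task Main(string[] args)
    {
        int count = 3;
        var result = (count, label: "items");   // the element name "count" is inferred
        CancellationToken token = default;      // default expression without repeating the type
        await Task.Delay(10, token);
        Console.WriteLine($"{result.count} {result.label}");
    }
}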

Microsoft has also released Visual Studio 2017 15.3, which is required if you want to use .NET Core 2.0. New Visual Studio features include Azure Stack support, C# 7.1 support, .NET Framework 4.7 support, and various other features and fixes.

I updated Visual Studio and downloaded the new .NET Core 2.0 SDK and was soon up and running.

image

Note the statement about “This product collects usage data” of which more below.

image

The sample ASP.NET MVC application worked first time.

image

How is .NET Core doing? The whole .NET picture is desperately confusing and I get the impression that most .NET developers, while they may have paid some attention to what is happening, have concluded that the safe path is to continue with the Windows-only .NET Framework.

At the same time, .NET Core is strategically important to Microsoft. Cross-platform support means that C# has a life on the Mac and on Linux, which is vital to its health considering the popularity of the Mac amongst developers, and of Linux as a deployment platform for web applications. Visual Studio for Mac has also been updated and supports .NET Core 2.0 in the new version.

Another key piece is the container trend. .NET Core is ideal for container deployment, and the only version of .NET supported in Windows Nano Server. If you want to embrace microservices running in containers, while still developing with C#, .NET Core and Nano Server is the optimum solution.

Why not use .NET Core, especially since ASP.NET Core is faster than ASP.NET? In these comparisons, .NET Core comes out as substantially faster than the .NET Framework for various algorithms – 600 times faster in one case.

The main issue is compatibility. .NET Core is a subset of the .NET Framework, and being a relative newcomer, it lacks the same level of third-party support.

Another factor is that there is no support for desktop applications, though some solutions have been devised. Microsoft does have a cross-platform GUI story, in Xamarin Forms, which is now in preview for macOS alongside iOS, Android, Windows and Tizen. If Xamarin used .NET Core that would be a great solution, but it does not (though it does support .NET Standard 2.0).

One of the pieces that most concerns developers is data access. If you use .NET Core you are strongly guided towards Entity Framework Core, a fork of Microsoft’s ORM (Object-Relational Mapping) framework. Someone asked on this page, is EF Core usable? Here’s an answer from one user (11 days ago):

Answering 4 months later but people should know: Definitely not, it is still not usable unless you are doing something very trivial and/or have very small DB.
I don’t understand how it is possible for MS to ship it, act like it’s OK and sparsely here and there provide shallow information about its limitations like in this article without warning clearly and explicitly about the serious issues this “v1 product” has.

Someone may jump in and say no, it is fine; but there are undoubtedly missing pieces and I would suggest caution.

You can also access data using the Connection/Command/DataReader approach which avoids EF, and although this is more work, this is what I would be inclined to do personally since you get the best performance and flexibility. Here is an example for SQL Server.

Who is using .NET Core? Controversially, Microsoft gathers telemetry from your use of the command-line tools though you can opt out by setting an environment variable. This means we have some data on .NET Core usage, though unfortunately it excludes Visual Studio usage. I downloaded the most recent dataset and imported it into a database. Here are the figures for OS family:

Total rows 5,036,981
Windows 3,841,922 (76.27%)
Linux 887,750 (17.62%)
Mac 307,309 (6.1%)

image

Given that this excludes Visual Studio users, who are also on Windows, we can conclude that the great majority of .NET Core developers use Windows, and only a tiny minority use a Mac (I do not know if Visual Studio for Mac usage is included). This is evidence that .NET Core has so far failed in its goal of persuading Mac-using developers to adopt .NET. It does show interest in deploying .NET applications to Linux, which is an obvious win in licensing costs as well as performance.

I would be interested in comments from developers on whether or not they use .NET Core and why.

Notes from the field: unexpected villain breaks Dynamics CRM and IIS on Windows Server 2012

Yesterday I was asked to convert a Dynamics CRM 2013 installation from an internal to an Internet Facing Deployment (IFD). It is a bit fiddly, but I have done this before so I was confident.

The installation in question is only for test; the company has its production CRM 2011 on another server. Because it is for test though, it is a small deployment on a single server.

I got to work running the Claims Based Authentication wizard in the CRM Deployment Manager but also noticed something odd about the server. WSUS (Windows Server Update Services) was installed though it was not in use. This seems a bad idea so I asked if I could remove it. Sure, it was just a quick experiment. I removed WSUS and got on with the next steps of configuring IFD.

Unfortunately ADFS 2.0 (in this case) would not play ball. It could not communicate with CRM. I quickly saw why: attempting to browse to the special FederationMetadata.xml URL raised a 500 error.

I tried a few things. There are plenty of odd things that can go wrong: permissions on the private keys of the certificate used for the CRM web site, Service Principal Names, incorrect DNS entries and so on. All seemed fine. Still the error.

I decided to backtrack and temporarily disable Claims Based Authentication. Unfortunately it appeared that I had broken CRM completely. All access to the site raised the same 500.19 IIS error.

image

The web page IIS delivers says that the most likely causes are that the worker process is unable to read the ApplicationHost.config or web.config file, malformed XML in one of those files, or incorrect NTFS permissions.

I did a repair install on CRM. I reapplied the rollups. No difference.

I ran Process Monitor to try to figure out what configuration file was causing the problem. It was not a great help, but did point me in the right direction to the extent that it seemed that ASP.NET was not working properly at all. I now focused on this rather than CRM itself, observing also that there were not many CRM-related errors in the event log and I would expect more if it was really broken.

I created a hello world ASP.NET application and installed it in a separate site on a different port. Same error.

Searching for help on this particular error was not particularly helpful. In the context of CRM, the few users that encountered something similar had reinstalled everything from scratch. However, now at least I knew that IIS rather than CRM was broken. This helpful MSDN article actually includes a hint to the solution:

For above specific error (mentioned in this example), DynamicCompressionModule module is causing the trouble. This is because of the XPress compression scheme module (suscomp.dll) which gets installed with WSUS. Since Compression schemes are defined globally and try to load in every application Pool, it will result in this error when 64bit version of suscomp.dll attempts to load in an application pool which is running in 32bit mode.

which is also referenced here. These refer to WSUS breaking 32-bit applications, but in my case after removing WSUS neither 64-bit nor 32-bit apps were running.

Let me put it more clearly. If you remove WSUS using the role wizard in Server Manager, a number of bits get left behind, including a setting in ApplicationHost.config (in /System32/Inetsrv) that breaks IIS.

So it was my attempt to clean up the server that had made it worse.

That said, this is also a Windows Server failure. Adding and removing a role should result as far as possible in no change.

Once identified, the problem is easy to fix (this is often true). Still, several hours wasted and more evidence for Martin Fowler’s assertion that you should automate server configuration and spin up a new one from scratch when you want to make a change, to avoid configuration drift. There is a more detailed post on the same theme – Phoenix servers that rise from the ashes, not snowflake servers that are unique and ugly – here.

In a small business context this perhaps is harder to achieve – though the cost of entry gets lower all the time, through either cloud computing or internal virtualization platforms.

Running ASP.NET 5.0 on Nano Server preview

I have been trying out Microsoft’s Nano Server Preview and wrote up initial experiences for the Register. One of the things I mentioned is that I could not get an ASP.NET app successfully deployed. After a bit more effort, and help from a member of the team, I am glad to say that I have been successful.

image

What was the problem? First, a bit of background. Nano Server does not run the .NET Framework, presumably because it has too many dependencies on pieces of Windows which Microsoft wanted to omit from this cut-down deployment. Nano Server does support .NET Core, also known as CoreCLR, which is the open source fork of the .NET Framework. This enables it to run PowerShell, although with a limited range of cmdlets, and my two main ways of interacting with Nano Server are PowerShell remoting, and Windows file sharing for copying files across.

On your development machine, you need several pieces in order to code for ASP.NET 5.0. Just installing Visual Studio 2015 RC will do, except that there is currently an incompatibility between the version of the ASP.NET 5.0 .NET Core runtime shipped with Visual Studio, and what works on Nano Server. This meant that my first effort, which was to build an empty ASP.NET 5.0 template app and publish it to the file system, failed on Nano Server with a NativeCommandError.

This meant I had to dig a bit more deeply into ASP.NET 5.0 running on .NET Core. Note that when you deploy one of these apps, you can include all the dependencies in the app directory. In other words, apps are self-hosting. The binary that enables this bit of magic is called DNX (.NET Execution Environment); it was formerly known as the K runtime.

Developers need to install the DNX SDK on their machines (Windows, Mac or Linux). There is currently a getting started guide here, though note that many of the topics in this promising documentation are as yet unwritten.

image

However, after installation you will be able to use several handy commands:

dnvm: the .NET Version Manager. You can have several versions of the DNX runtime installed, and this utility lets you list them, set aliases to save typing full paths, and manage defaults.

image

dnu: the .NET Development Utility (formerly kpm), which builds and publishes .NET Core projects. The two commands I found myself using regularly are dnu restore, which downloads NuGet (.NET repository) packages, and dnu publish, which packages an app for deployment. Once published, you will find .cmd files in the output which you use to start the app.

dnx: the binary which you call to run an app. On the development machine, you can use dnx . run to run the console app in the current directory and dnx . web to run the web app in the current directory.

Now, back to my deployment issues. The Visual Studio templates are all hooked to DNX beta 4, and I was informed that I needed DNX beta 5 for Nano Server. I played around with trying to get Visual Studio to target the updated DNX but ran into problems so decided to ignore Visual Studio and do everything from the command line. This should mean that it would all work on Mac and Linux as well.

I had a bit of trouble persuading DNX to update itself to the latest unstable builds; the main issue I recall is targeting the correct repository. Your NuGet sources must include (currently) https://www.myget.org/F/aspnetvnext/api/v2.

Since I was not using Visual Studio, I based my samples on these, Hello World Console, MVC and Web apps that you can use for testing that everything works. My technique was to test on the development machine using dnx . web, then to use dnu publish and copy the output to Nano Server where I could run ./web.cmd in a remote PowerShell session.

Note that I found it necessary to specify the CoreClr 64-bit runtime in order to get dnu to publish the correct files. I tried to make this the default but for some reason* it reverted itself to x86:

dnu publish --runtime "c:\users\[USERNAME]\.dnx\runtime\dnx-coreclr-win-x64.1.0.0-beta5-11701"

Of course the exact runtime version to use will change soon.

If you run this command and look in the /bin/output folder you will find web.cmd, and running this should start the app. The port on which the app listens is set in project.json in the top level directory of the project source. I set this to 5001, opened that port in the Windows Firewall on the Nano Server, and got a started message on the command line. However I still could not browse to the app running on Nano Server; I got a 400 error. Even on the development machine it did not work; the browser just timed out.

It turned out that there were several issues here. On the development machine, which is running Windows 10 build 10074, I discovered to my annoyance that the web app worked fine with Internet Explorer, but not in Project Spartan, sorry Edge. I do not know why.

Support also gave me some tips to get this working on Nano Server. In order for the app to work across the network, you have to edit project.json so that localhost is replaced either with the IP number of the server, or with a *. I was also advised to add dnx.exe to the allowed apps in the firewall, but I do not think this is necessary if the port is open (it is a nuisance, since the location of dnx.exe changes for every app).

Finally I was successful.

Final observations

It seems to me that ASP.NET vNext running on .NET Core has the characteristic of many open source projects: a few dedicated people who have little time for documentation and are so close to the project that their public communications assume a fair amount of pre-knowledge. The site I referenced above does have helpful documentation though, for the few topics that are complete. Some other posts I found helpful are this series by Steve Perkins, and the troubleshooting suggestions here, especially David Fowler’s post.

I like the .NET Core initiative overall, since I like C# and ASP.NET MVC and now it is becoming a true cross-platform framework. That said, the code does seem to be in rapid flux and I doubt it will really be ready when Visual Studio 2015 ships. The danger I suppose is that developers will try it in the first release, find lots of problems, and never go back.

I also like the idea of running apps in Nano Server, a low-maintenance environment where you can get the isolation of a dedicated server for your app at low cost in terms of resources.

No doubt though, the lack of pieces that you expect to find on Windows Server will be an issue and I am not sure that the mainstream Microsoft developer ecosystem will take to it. Aidan Finn is not convinced, for example:

Am I really expected to deploy a headless OS onto hardware where the HCL certification has the value of a bucket with a hole in it? If I was to deploy Nano, even in cloud-scale installations, then I would need a super-HCL that stress tests all of the hardware enhancements. And I would want ALL of those hardware offloads turned OFF by default so that I can verify functionality for myself, because clearly, neither Microsoft’s HCL testers nor the OEMs are capable of even the most basic test right now.

Finn’s point is that if your headless server is having networking issues it is hard to troubleshoot, since of course remote tools will not work reliably. That said, I have personally run Hyper-V Server (which is essentially Server Core with just the Hyper-V role) with great success for several years; I started keeping notes on how to troubleshoot from the command line and found solutions to common problems. If networking fails with Nano Server then yes, you have a problem, but there is always something you can do, even if it means mounting the Nano Server VHD or VHDX on another VM. Windows Server admins have become accustomed to a local GUI though and adjusting even to Server Core has not been easy.

*the reason was that I did not use the -p argument with dnvm use, which would have made it persistent

HoloLens: a developer hands-on

I attended the “Holographic Academy” during Microsoft’s Build conference in San Francisco. It was aimed at developers, and we got a hands-on experience of coding a simple HoloLens app and viewing the results. We were forbidden from taking pictures so you will have to make do with my words; this also means I do not have to show myself wearing a bulky headset and staring at things you cannot see.

image

First, a word about HoloLens itself. The gadget is a headset that augments the real world with a 3D “projected” image. It is not really a hologram, otherwise everyone would see it, but it is a virtual hologram created by combining what you see with digital images.

The effect is uncanny, since the image you see appears to stay in one place. You can walk around it, seeing it from different angles, close up or far away, just as you could with a real image.

That said, there were a couple of issues with the experience. One is that if you went too close to a projected image, it disappeared. From memory, the minimum distance was about 18 inches. Second, the viewport where you see the augmented reality was fairly small and you could easily see around it. This is detrimental to the illusion, and sometimes made it a struggle to see as much of your hologram as you might want.

I asked about both issues and got the same response, essentially “no comment”. This is prototype hardware, so anything could change. However, according to another journalist who attended a hands-on demo in January, the viewport has gotten smaller, suggesting that Microsoft is compromising in its effort to make the technology into a commercially viable product.

Another odd thing about the demo was that after every step, we were encouraged to whoop and cheer. There was a Microsoft “mentor” for every pair of journalists, and it seemed to me that the mentors were doing most of the whooping and cheering. It is obvious that this is a big investment for the company and I am guessing that this kind of forced enthusiasm is an effort to ensure a positive impression.

Lest you think I am too sceptical, let me add that the technology is genuinely amazing, with obvious potential both for gaming and business use.

The developer story

The development process involves Unity, Visual Studio, and of course the HoloLens device itself. The workflow is like this. You create an interactive 3D scene in Unity and build it, whereupon it becomes a Visual Studio project. You open the project in Visual Studio, and deploy it to HoloLens (connected over USB), just as you would to a smartphone. Once deployed, you disconnect the HoloLens and wear it in order to experience the scene you have created. Unity supports scripting in C#, running on Mono, which makes the development platform easy and familiar for Windows developers.

Our first “Holo World” project displayed a hologram at a fixed position determined by where you are when the app first runs. Next, we added the ability to move the hologram, selecting it with a wagging finger gesture, shifting our gaze to some other spot, and placing it with another wagging finger gesture. Note that for this to work, HoloLens needs to map the real world, and we tried turning on wire framing so you could see the triangles which show where HoloLens is detecting objects.

We also added a selection cursor, an image that looks like a red bagel (you can design your own cursor and import it into Unity). Other embellishments were the ability to select a sphere and make it fall to the floor and roll around, voice control to drop a sphere and then reset it back to the starting point, and then “spatial audio” that appears to emit from the hologram.

All of this was accomplished with a few lines of C# imported as scripts into Unity. The development was all guided so we did not have to think for ourselves, though I did add a custom voice command so I could say “abracadabra” instead of “reset scene”; this worked perfectly first time.
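
By way of example, here is a sketch of my own (not the tutorial’s actual code) of how a custom voice command like that can be wired up in a Unity C# script using the Windows speech APIs:

using UnityEngine;
using UnityEngine.Windows.Speech;

public class VoiceCommands : MonoBehaviour
{
    private KeywordRecognizer recognizer;

    void Start()
    {
        // Listen for the custom phrase instead of the stock "reset scene" command
        recognizer = new KeywordRecognizer(new[] { "abracadabra" });
        recognizer.OnPhraseRecognized += OnPhraseRecognized;
        recognizer.Start();
    }

    private void OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        if (args.text == "abracadabra")
        {
            // Tell any interested scripts to reset the scene
            BroadcastMessage("OnReset", SendMessageOptions.DontRequireReceiver);
        }
    }
}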

For the last experiment, we added a virtual underworld. When the sphere dropped, it exploded making a virtual pit in the floor, through which you could see a virtual world with red birds flapping around. It was also possible to enter this world, by positioning the hologram above your head and dropping a sphere from there.

HoloLens has three core inputs: gaze (where you are looking), gesture (like the finger wag) and voice. Of these, gaze and voice worked really well in our hands on, but gesture was more difficult and sometimes took several tries to get right.

At the end of the session, I had no doubt about the value of the technology. The development process looks easily accessible to developers who have the right 3D design skills, and Unity seems ideally suited for the project.

The main doubts are about how close HoloLens is to being a viable commercial product, at least in the mass market. The headset is bulky, the viewport too small, and there were some other little issues, such as lag between where HoloLens detects physical objects and their actual position when they are moving, as with a person walking around.

Watch this space though; it is going to be most interesting.