Category Archives: software development

Microsoft updates the .NET stack with .NET Core 2.0 and updated Visual Studio. Should you use it?

Microsoft has released .NET Core 2.0, a major update to its open source, cross-platform version of the .NET runtime and C# language.

New features include implementation of .NET Standard 2.0 (a way of targeting code to run on multiple .NET platforms) and new platform support including Debian Stretch, macOS High Sierra and SUSE Linux Enterprise Server 12 SP2. There is preview support for both Linux and Windows on ARM32.

.NET Core 2.0 now supports Visual Basic as well as C# and F#. The version of C# has been bumped to 7.1, including async Main method support, inferred tuple names and default expressions.
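
To illustrate, here is a small console program exercising those three C# 7.1 features; it is just a sketch (the URL is a placeholder) and it needs the project's language version set to 7.1 or "latest":

using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    // async Main: the entry point can now be async (C# 7.1)
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            string html = await client.GetStringAsync("https://example.com");

            // Inferred tuple element names: "html" and "Length" are taken
            // from the expressions on the right (C# 7.1)
            var page = (html, html.Length);
            Console.WriteLine($"Fetched {page.Length} characters");
        }

        // Default literal: no need to write default(CancellationToken) (C# 7.1)
        CancellationToken token = default;
        Console.WriteLine(token.IsCancellationRequested);
    }
}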

Microsoft has also released Visual Studio 2017 15.3, which is required if you want to use .NET Core 2.0. New Visual Studio features include Azure Stack support, C# 7.1 support and .NET Framework 4.7 support, along with various other improvements and fixes.

I updated Visual Studio and downloaded the new .NET Core 2.0 SDK and was soon up and running.

Note the statement that “This product collects usage data”, of which more below.

The sample ASP.NET MVC application worked first time.

How is .NET Core doing? The whole .NET picture is desperately confusing, and I get the impression that most .NET developers, while they may have paid some attention to what is happening, have concluded that the safe path is to continue with the Windows-only .NET Framework.

At the same time, .NET Core is strategically important to Microsoft. Cross-platform support means that C# has a life on the Mac and on Linux, which is vital to its health considering the popularity of the Mac amongst developers, and of Linux as a deployment platform for web applications. Visual Studio for Mac has also been updated and supports .NET Core 2.0 in the new version.

Another key piece is the container trend. .NET Core is ideal for container deployment, and it is the only version of .NET supported on Windows Nano Server. If you want to embrace microservices running in containers, while still developing with C#, .NET Core and Nano Server are the optimum solution.

Why not use .NET Core, especially since it is faster than the .NET Framework? In these comparisons, .NET Core comes out as substantially faster for various algorithms – 600 times faster in one case.

The main issue is compatibility. .NET Core is a subset of the .NET Framework, and being a relative newcomer, it lacks the same level of third-party support.

Another factor is that there is no support for desktop applications, though some solutions have been devised. Microsoft does have a cross-platform GUI story, in Xamarin Forms, which is now in preview for macOS alongside iOS, Android, Windows and Tizen. If Xamarin used .NET Core that would be a great solution, but it does not (though it does support .NET Standard 2.0).

One of the pieces that most concerns developers is data access. If you use .NET Core you are strongly guided towards Entity Framework Core, a fork of Microsoft’s ORM (Object-Relational Mapping) framework. Someone asked on this page, is EF Core usable? Here’s an answer from one user (11 days ago):

Answering 4 months later but people should know: Definitely not, it is still not usable unless you are doing something very trivial and/or have very small DB.
I don’t understand how it is possible for MS to ship it, act like it’s OK and sparsely here and there provide shallow information about its limitations like in this article without warning clearly and explicitly about the serious issues this “v1 product” has.

Someone may jump in and say no, it is fine; but there are undoubtedly missing pieces and I would suggest caution.

You can also access data using the Connection/Command/DataReader approach, which avoids EF; although this is more work, it is what I would be inclined to do personally, since you get the best performance and flexibility. Here is an example for SQL Server.
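
The linked example aside, a minimal sketch of the pattern looks something like this; the connection string and Customers table are placeholders, and on .NET Core the System.Data.SqlClient types come from the NuGet package of the same name:

using System;
using System.Data.SqlClient;

class Program
{
    static void Main()
    {
        // Placeholder connection string and table; substitute your own
        const string connectionString = "Server=.;Database=MyDb;Integrated Security=true";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT Id, Name FROM Customers WHERE Name LIKE @pattern", conn))
        {
            cmd.Parameters.AddWithValue("@pattern", "A%");
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine($"{reader.GetInt32(0)}: {reader.GetString(1)}");
                }
            }
        }
    }
}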

Who is using .NET Core? Controversially, Microsoft gathers telemetry from your use of the command-line tools, though you can opt out by setting an environment variable. This means we have some data on .NET Core usage, though unfortunately it excludes Visual Studio usage. I downloaded the most recent dataset and imported it into a database. Here are the figures by OS family:

Total rows 5,036,981
Windows 3,841,922 (76.27%)
Linux 887,750 (17.62%)
Mac 307,309 (6.1%)

Given that this excludes Visual Studio users, who are also on Windows, we can conclude that the great majority of .NET Core developers use Windows, and only a tiny minority use a Mac (I do not know if Visual Studio for Mac usage is included). This is evidence that .NET Core has so far failed in its goal of persuading Mac-using developers to adopt .NET. It does show interest in deploying .NET applications to Linux, which is an obvious win in licensing costs as well as performance.

I would be interested in comments from developers on whether or not they use .NET Core and why.

QCon London 2017: IoT insecurity, serverless computing, predicting technical debt, and why .NET Core depends on a 36,000 line C++ file

I’m at the QCon event in London, a multi-vendor conference aimed primarily at enterprise developers and architects.

Adam Tornhill speaks at QCon London 2017

A few notes on day one. Alasdair Allan gave a keynote on security and the internet of things; it was an entertaining and disturbing résumé of all that is wrong with the mad rush to connect everything to the internet, though it was short on answers. Our culture has to change so that organisations such as hotels, toy manufacturers, appliance vendors and even makers of medical equipment take security seriously, but it is not clear how this will come about unless so many bad things happen that customers start to insist on it.

Michael Feathers spoke on strategic code deletion, part of a track on “Dark code: the legacy/tech debt dilemma.” This was an excellent session; code is added to projects more often than it is removed, and lack of hygiene in this regard has risks including security, reliability and performance. But discovering which code is safe to remove is not always trivial, and Feathers explored some of the nuances and suggested some techniques.

Steve Faulkner gave a session on serverless JavaScript, or more specifically, using Amazon Web Services (AWS) Lambda and API Gateway. Faulkner said that the API Gateway was the piece that made Lambda viable for them; he is Director of Platform Engineering at Bustle, a busy content site based in the USA. In a nutshell, moving from EC2 VMs to Lambda has yielded both financial savings and easier management. The only downside is performance; each call to a Lambda function takes a minimum of 100ms whereas the same function on a VM might take 20ms. In the end this is not critical as performance remains satisfactory.

Faulkner said that AWS is ahead of its competitors (Microsoft, Google and IBM were mentioned) but when pressed said that both Microsoft and Google offered strong alternatives. Microsoft’s Azure Functions are spoilt by the need to specify a maximum scale, rather than scaling automatically, but its routing solution is in some ways ahead of AWS, he said. Google’s Functions will be great when out of beta.

Adam Tornhill spoke on A Crystal Ball to prioritise Technical Debt, another session in the dark code track. This was my favourite of the day. Tornhill presented a relatively simple way to discover what code you should refactor now in order to avoid future issues. His method is based on looking for files with many lines of code (a way of measuring complexity) and many commits (suggesting high importance and activity), the “hotspots” in your projects. For more detail and some utilities see Tornhill’s blog.
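
To make the idea concrete, here is a minimal sketch – not Tornhill's own tooling – that ranks files by commit count multiplied by line count, assuming git is on the PATH and the program is run from the repository root:

// A rough sketch of the "hotspot" idea: rank files by the number of
// commits that touched them multiplied by their current line count.
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using System.Linq;

class Hotspots
{
    static void Main()
    {
        // List every file touched by every commit, one path per line
        var psi = new ProcessStartInfo("git", "log --name-only --pretty=format:")
        {
            RedirectStandardOutput = true,
            UseShellExecute = false
        };

        var commitsPerFile = new Dictionary<string, int>();
        using (var git = Process.Start(psi))
        {
            string line;
            while ((line = git.StandardOutput.ReadLine()) != null)
            {
                if (string.IsNullOrWhiteSpace(line)) continue;
                int n;
                commitsPerFile.TryGetValue(line, out n);
                commitsPerFile[line] = n + 1;
            }
            git.WaitForExit();
        }

        var hotspots = commitsPerFile
            .Where(kv => File.Exists(kv.Key))              // ignore files no longer present
            .Select(kv => new
            {
                File = kv.Key,
                Commits = kv.Value,
                Lines = File.ReadLines(kv.Key).Count()     // crude complexity proxy
            })
            .OrderByDescending(x => (long)x.Commits * x.Lines)
            .Take(20);

        foreach (var h in hotspots)
            Console.WriteLine(h.File + ": " + h.Commits + " commits, " + h.Lines + " lines");
    }
}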

Why do we end up with bad or risky code in our software? Tornhill said that developers often mistake organisational problems for technical problems and try unsuccessfully to fix them with tools.

He also mentioned an example of high-risk code: the file gc.cpp, which performs garbage collection in .NET Core, the next generation of Microsoft’s .NET Framework. This file is over 36,000 lines and should be refactored. There is a discussion on the subject here, and it exactly bears out Tornhill’s point. A developer proposed refactoring the file back in March 2015; Microsoft’s Karel Zikmund defended the status quo:

Why it is this way? … Partly historical reasons (it is this way since the start). Partly because devs working on it didn’t feel the urge to refactor it. Partly because splitting of gc.cpp is non-trivial and risky and because it does not bring too big value (ramp up in the code base can be gained also in the combination of reading BOTR and debugging the code). Why it is staying this way? … Cost/benefit/risk ratio is IMO not in favor of a change here.

Few additional thoughts:
Am I happy that there is only 1 large file? No, but it doesn’t hurt me much either.
Do I see the disadvantages of large file? Yes, but I don’t think they are huge. More like minor annoyances with easy workarounds.
And to turn it around: Do you see the risk of any changes here? Do you see the cost of extra careful code reviews to mitigate the risk?

Strictly technically, we truly believe this is a formatting change. If it was simple to split it up and if it would be low risk and if it would be very easy to review, it might be worth the ‘minor’ improvements mentioned above … but I don’t see that combo happening (not on a noticeable scale in gc.cpp).
On a personal note: I also trust CLR team that if all these three things were true, the refactoring would have happened long time ago.

Note that some of this code goes back beyond .NET Core to the .NET Framework, the “historical reasons” that Zikmund mentions. We can see that the factors preventing change are as much organisational as technical.

Finally I attended a session on Microsoft’s Cognitive Services. Note this was in the “Sponsored solution track”. Microsoft also has a stand here focused on its Cognitive Services.

There is not much Microsoft Platform content at QCon and it seems under-represented, though many of the sessions are applicable to developers on any platform. I am not sure of all the reasons for this; there used to be an Advanced .NET track at QCon. It does reflect some overall development trends as well as the history and evolution of QCon itself. That said, there is a session on SQL Server on Linux so the company is not completely invisible here.

As for the session, it was a reasonable overview of Microsoft’s expanding Cognitive Services APIs, which cover things like image recognition, speech recognition and more. I would have liked more depth and would have preferred to hear from a practitioner, in other words, “we built an application on Cognitive Services and this is what we learned.” I am not altogether clear why the company is pushing this so hard, except that it is a driver for developers to use Azure. I asked about how developers should deal with the problem of uncertainty*, in other words, that Cognitive Services does not deliver absolute results but rather draws conclusions with a confidence score – eg it might be pretty sure that an image contains a human face, fairly sure that it is male, and somewhat confident that the age of the person is mid-forties. When the speaker demoed speech recognition it went pretty well, except that “Start” was transcribed as “Stop.” This stuff is difficult.
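
To illustrate the confidence score point, here is a purely hypothetical sketch; the result type is invented rather than the real Cognitive Services response shape, and the threshold is an arbitrary choice the application has to make:

using System;

// Invented result shape for illustration; the real APIs return richer JSON
class FaceResult
{
    public double FaceConfidence;   // e.g. 0.96
    public double MaleConfidence;   // e.g. 0.80
    public int EstimatedAge;        // e.g. 45
}

class Demo
{
    const double Threshold = 0.85;  // arbitrary cut-off chosen by the application

    static string Describe(FaceResult r)
    {
        if (r.FaceConfidence < Threshold)
            return "No face detected with sufficient confidence";

        string gender = r.MaleConfidence >= Threshold ? "male"
                      : r.MaleConfidence <= 1 - Threshold ? "female"
                      : "gender unclear";
        return $"Face detected ({gender}), estimated age {r.EstimatedAge}";
    }

    static void Main()
    {
        var result = new FaceResult { FaceConfidence = 0.96, MaleConfidence = 0.80, EstimatedAge = 45 };
        Console.WriteLine(Describe(result));
    }
}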

Looking forward now to Day Two: Containers, Machine Learning, and more.

*More concisely expressed as “Systems are moving from the deterministic to the probabilistic” by Stephen Whitworth, who is now speaking on Machine Learning.

Microsoft to release Visual Studio for the Mac – except it is not

Microsoft’s Mikayla Hutchinson (ex Xamarin) has announced Visual Studio for the Mac:

This is an exciting development, evolving the mobile-centric Xamarin Studio IDE into a true mobile-first, cloud-first development tool for .NET and C#, and bringing the Visual Studio development experience to the Mac.

I tend to agree that it is a significant piece of news. It signals Microsoft’s intent to offer first-class support for Mac developers. Other than at Microsoft events, the majority of the developers I see at conferences carry Macs rather than Windows laptops, and if the company is to have any hope of winning them over to its cross-platform ASP.NET web application framework, getting excellent development support on Macs is a critical step.

Naming things is not Microsoft’s greatest strength though. Sometimes it gives different things the same name, such as with OneDrive and OneDrive for Business, or Outlook for Windows and Outlook for iOS and Android. It makes sense from a marketing perspective, but it is also confusing.

This is another example. No, Microsoft has not ported Visual Studio to the Mac. This is a rebrand of Xamarin Studio, originally a cross-platform IDE for its C# mobile app framework, but more recently Mac-only.

Hutchinson makes the best of it:

Its UX is inspired by Visual Studio, yet designed to look and feel like a native citizen of macOS …. Below the surface, Visual Studio for Mac also has a lot in common with its siblings in the Visual Studio family. Its IntelliSense and refactoring use the Roslyn Compiler Platform; its project system and build engine use MSBuild; and its source editor supports TextMate bundles. It uses the same debugger engines for Xamarin and .NET Core apps, and the same designers for Xamarin.iOS and Xamarin.Android.

The common use of MSBuild is a key point. “Although it’s a new product and doesn’t support all of the Visual Studio project types, for those it does have in common it uses the same MSBuild solution and project format. If you have team members on macOS and Windows, or switch between the two OSes yourself, you can seamlessly share your projects across platforms,” says Hutchinson.

The origins of what will now be Visual Studio for the Mac actually go back to the early days of the .NET Framework. Developer Mike Kruger decided to write an IDE in C# in order to work more easily with a pre-release of .NET Framework 1.0. His IDE was called SharpDevelop. Here is an early version, from 2001:

Of course by then most developers used Visual Studio to work with C#, but there were several reasons why SharpDevelop continued to have a following. Unlike Visual Studio, it was built in C# and you could get all the code. It was free. It was also of interest to Mono users, Mono being the open source implementation of the .NET Framework originated by Miguel de Icaza (also now at Microsoft). In 2003, Mono developers started work on porting SharpDevelop to run on Linux using the GNOME toolkit (Gtk#). This forked project became MonoDevelop.

Xamarin (the framework) of course has its roots in Mono and when Xamarin (the company) decided to create its own IDE it based it on MonoDevelop. So MonoDevelop evolved into Xamarin Studio.

Incidentally, SharpDevelop is still available and you can get it here. MonoDevelop is also still available and you can get it here.

So now some sort of circle is complete and what began as SharpDevelop, a rebel imitation of Visual Studio, will now be an official Microsoft product called Visual Studio for the Mac – though how much SharpDevelop code remains (if any) is another matter.

Historical digression aside, the differences between Visual Studio and Visual Studio for the Mac are not the only point of confusion. There is also Visual Studio Code, an editor with some IDE features, which is cross-platform on Windows, Mac and Linux. This one is based on the Google-sponsored Chromium project and has won quite a few friends.

Should Mac users now use Visual Studio Code, or Visual Studio for the Mac, for their .NET Core or ASP.NET Core development? Microsoft will say “your choice” but it is a good question. The key here is which project will now get more attention from both Microsoft and other open source contributors.

Still, we should not complain. Two rival Microsoft IDEs for the Mac are a considerable advance on none, which was the answer until Visual Studio Code went into preview in April 2015.

Microsoft improves Windows Subsystem for Linux: launch Windows apps from Linux and vice versa

The Windows 10 anniversary update introduced a major new feature: a subsystem for Linux. Microsoft marketing execs call this Bash on Windows; Ubuntu calls it Ubuntu on Windows; but Windows Subsystem for Linux is the most accurate description. You run a Linux binary and the subsystem redirects system calls so that it behaves like Linux.

The first implementation (which is designated Beta) has an obvious limitation. Linux works great, and Windows works great, but they do not interoperate, other than via the file system or networking. This means you cannot do what I used to do in the days of Services for Unix: type ls at the command prompt to get a directory listing.

That is now changing. Version 14951 introduces interop so that you can launch Windows executables from the Bash shell and vice versa. This is particularly helpful since the subsystem does not support GUI applications. One of the obvious use cases is launching a GUI editor from Bash, such as Visual Studio Code or Notepad++.

The nitty-gritty on how it works is here.

Limitations? A few. Environment variables are not shared so an executable that is on the Windows PATH may not be on the Linux PATH. The executable also needs to be on a filesystem compatible with DrvFs, which means NTFS or ReFS, not FAT32 or exFAT.

This is good stuff though. If you work on Windows, but love Linux utilities like grep, now you can use them seamlessly from Windows. And if you are developing Linux applications with say PHP or Node.js, now you can develop in the Linux environment but use Windows editors.

Note that this is all still in preview and I am not aware of an announced date for the first non-beta release.

On GitHub and GitHub Universe

I’ve been at GitHub Universe in San Francisco for the last few days. Around 1,500 developers (not sure if that figure includes staff and exhibitors) gathered in a warehouse at Pier 70. The venue was beautifully converted into an interaction space. Here is the view from outside as we were leaving; Octocat seems to be waving goodbye:

The main stage done up like a spaceship:

There was a large area for mingling, overseen by Octocat:

Plenty of space outside too, with a high standard of food and drink on offer.

There was also a “send a postcard” area where you could write a card; with cards, pens, stamps and postbox supplied there was no excuse not to do so:

You are probably thinking, when do we get to the techie stuff; but in some ways it is better to look at the space GitHub created and ask what it tells you about the company.

Running an event like this is not cheap, and I think we can conclude that GitHub has a business model that works. Further, there was a generous and inclusive spirit to the event which was good to experience. Kimberly Bryant from Black Girls Code spoke at the opening keynote – the event concert was a Black Girls Code benefit – and while there is often an element of PR in the causes which businesses choose to sponsor, I don’t question the authenticity of GitHub’s efforts to promote both coding and diversity in our sadly imbalanced software industry.

In some ways then the actual technical content was not the most important thing about this event. That said, there was some excellent content on themes including how GitHub scales its own service, new project management and code review features in GitHub, and how the product is evolving its add-in or “integrations” platform.

I also learned a bit about Electron, a framework for “creating native Desktop applications with web technologies” based on Chromium and Node.js. Microsoft’s Visual Studio Code uses Electron, as does GitHub’s own Atom editor.

If you are a developer, you will be familiar with GitHub; it is the obvious choice for hosting an open source project (free) and a popular option for private repositories. When Google Code closed in 2015, the announcement cited the migration of developers to GitHub as the key reason and acknowledged that it was among “a wide variety of better hosting services” than Google’s own. “To meet developers where they are, we ourselves migrated nearly a thousand of our own open source projects from Google Code to GitHub,” remarked Google’s Chris DiBona. That was a pivotal moment, showing how GitHub has become a core part of the open source ecosystem as well as a strong commercial product for private and enterprise repositories.

GitHub does seem to take its responsibilities seriously, and the fact that it has found a successful balance between free and commercial services is something to be thankful for.

Adapting a native code DLL to be called from a Store or Universal Windows app

I am writing a Bridge game in C# – yes, I have been doing this for some time; it does run now, but it is not ready for public unveiling.

It is good fun though and a learning experience, as I am writing it as a Windows 8 Store app. This means it can also be a Universal Windows Platform app, but I have kept it compatible with Windows 8.1 as I don’t want to lose that large market of Windows 8 users who have not upgraded to 10. Hmm.

Bridge is a card game in which a pack of 52 cards is dealt into 4 hands of 13 cards. Each hand is played as a sequence of 13 four-card “tricks”, and each trick is won by one of two opposing pairs of players according to the cards played. Each pair of course tries to win as many tricks as possible, so one of the points of interest is how many tricks can be won with perfect play (ie with full knowledge of all four hands). Another point of interest is how each card played affects the potential number of tricks you can win with best play. For example, leading a King might cost you a trick (or more) if your opponents hold both the Ace and the Queen of that suit.
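
For readers unfamiliar with the game, here is a minimal sketch of the raw material the software deals with – not my actual data structures – encoding the 52 cards as numbers and dealing four hands of 13:

using System;
using System.Linq;

class DealExample
{
    static void Main()
    {
        // Encode the 52 cards as 0-51, shuffle, and deal 13 to each of 4 players
        var rng = new Random();
        int[] pack = Enumerable.Range(0, 52).OrderBy(_ => rng.Next()).ToArray();

        for (int player = 0; player < 4; player++)
        {
            int[] hand = pack.Skip(player * 13).Take(13).OrderBy(c => c).ToArray();
            Console.WriteLine($"Hand {player}: {string.Join(",", hand)}");
        }
    }
}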

This is called “double dummy” analysis and smart people have written algorithms to calculate the answers. A double dummy analysis is useful in a bridge game for two reasons. One is that users may like to know, after playing a hand, what their best score could have been, or even to analyse the hand and see how the outcome would have varied if they had played this card rather than that card at trick such-and-such. The other is that you can use it to assist the software in finding the best play. Of course it is important that the software plays fair by not using knowledge of all four hands beyond what would be known by human players; but it is legitimate to try out various possible hands that match what is currently known and use double dummy analysis on these hands.

One such smart person is Bo Haglund who wrote a C++ Windows library for double dummy analysis, called Double Dummy Solver (DDS) and released it as open source under the Apache 2 license. It works very well and is widely used in the Bridge software community, and has now been ported to Mac and Linux; you can find the latest code on Github.

Modifying a native code DLL to use with a Store app

I wanted to use the library in my own Bridge game but faced a compatibility problem. Windows Store apps can only call into DLLs that meet certain requirements, such as using only a subset of the Windows API, and DDS did not meet those requirements. My choice was either to port the DLL to C#, or to modify the code so that it would work as a Windows Runtime native DLL.

I have no doubt that the code could be ported to C# but it looks like rather a long job that would result in a library with slower performance (please feel free to prove me wrong). I thought it would be more realistic to modify the code, so I created a new Windows 8.1 DLL project in Visual Studio 2013 (I am now using Visual Studio 2015 but it is the same for this) and set about modifying the code so that it would compile.

In no particular order, here are some notes on what I learned.

I was able to get the DLL to compile after disabling the multi-threading support (more on this later), and commenting out some functions that I don’t yet need.

Another issue I hit was that Visual C++ by default performs “Security Development Lifecycle” checks (compile with /sdl). This means that common functions like strcpy, strcat, sprintf and others will not compile. You have to use “secure” versions of those functions: strcpy_s, strcat_s, sprintf_s and so on. These are specific to Microsoft’s libraries though. Of course you can just not compile with /sdl, or define _CRT_SECURE_NO_WARNINGS, but I chose to fix all of these. Now the library compiled.

But did it work? No. I had introduced a stupid bug which took me a while to fix. Did it then work? Yes, but it took me some time to get it working from C#.

Next, I kept getting DLLNotFound exceptions. OK, so you have to add the DLL as content in your C# project, and make sure it is set to copy to your output. I still got DLLNotFound exceptions. It turns out that you get this exception even when the DLL is present, if there is a dependency in the DLL which is not found. What dependency was not found? I downloaded the Sysinternals Process Monitor utility and set the filter to monitor my C# game. I excluded SUCCESS results. Then I tried to load the DLL. This told me that it was looking for the file msvcr120_app.dll (the Windows Runtime version of the Visual C++ runtime library). My first thought was to add runtime libraries from the appx deployment packages, in:

C:\Program Files (x86)\Microsoft SDKs\Windows\v8.1\ExtensionSDKs\Microsoft.VCLibs\12.0

Then I discovered that all you need to do is to add a reference to the Visual C++ runtime packages, much easier. That fixed DLLNotFound.

Next, I had some problems calling the 64-bit DLL with Platform Invoke (PInvoke) from C#. I found it easier to compile both my C# app and the DLL itself as 32-bit code. I may go back to the 64-bit option later.
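
For anyone attempting something similar, the general shape of a PInvoke declaration is sketched below; the function name, signature and error convention are placeholders rather than the real DDS exports, which are declared in the DDS headers:

using System;
using System.Runtime.InteropServices;

static class NativeSolver
{
    // Placeholder import: substitute the real export name and signature
    // from the DDS headers. Assumes dds.dll is deployed with the app and
    // that the app and the DLL target the same bitness (x86 here).
    [DllImport("dds.dll", CallingConvention = CallingConvention.StdCall)]
    private static extern int SolveDeal(int trumpSuit, int leader, int[] cards, out int tricks);

    public static int TricksAvailable(int trumpSuit, int leader, int[] cards)
    {
        int tricks;
        int status = SolveDeal(trumpSuit, leader, cards, out tricks);
        if (status < 0) // hypothetical error convention
            throw new InvalidOperationException("DDS call failed: " + status);
        return tricks;
    }
}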

Concurrency issues

Now I had everything working; except that my DDS port was far inferior to the standard one because it was single-threaded. The original used QueueUserWorkItem which is not available in a Windows Runtime DLL. I searched for what to do, and came across this MSDN article which recommends using RunAsync, WorkItemHandler and IAsyncAction. However my DLL was not currently compiled using /ZW for “Consume Windows Runtime Extension”. I could add that of course; but then my DLL would have a dependency on the Windows Runtime, and if I wanted to use the code for, say, Windows 7, it would not work, or not without yet more #ifdef blocks. No big deal perhaps; but my preference was to avoid this dependency.

There may be other solutions, but the one that I found was to use the Concurrency Runtime. Previously, QueueUserWorkItem was called in a for loop. I simply modified this to use a parallel_for loop instead, using the example here for guidance. I also added:

#include <ppltasks.h>

using namespace concurrency;

to the top of the code. It works well, improving performance by roughly a factor of three on my quad-core desktop. Of course I was greatly helped by the fact that the code was already written with concurrency in mind.

The effect is spoiled by the time it takes to load the DLL, but fortunately you can get DDS to solve multiple boards in one call, though I have yet to experiment with this.

Running ASP.NET 5.0 on Nano Server preview

I have been trying out Microsoft’s Nano Server Preview and wrote up initial experiences for the Register. One of the things I mentioned is that I could not get an ASP.NET app successfully deployed. After a bit more effort, and help from a member of the team, I am glad to say that I have been successful.

What was the problem? First, a bit of background. Nano Server does not run the .NET Framework, presumably because it has too many dependencies on pieces of Windows which Microsoft wanted to omit from this cut-down deployment. Nano Server does support .NET Core, also known as Core CLR, which is the open source fork of the .NET Framework. This enables it to run PowerShell, although with a limited range of cmdlets, and my main two ways of interacting with Nano Server are with PowerShell remoting, and Windows file sharing for copying files across.

On your development machine, you need several pieces in order to code for ASP.NET 5.0. Just installing Visual Studio 2015 RC will do, except that there is currently an incompatibility between the version of the ASP.NET 5.0 .NET Core runtime shipped with Visual Studio, and what works on Nano Server. This meant that my first effort, which was to build an empty ASP.NET 5.0 template app and publish it to the file system, failed on Nano Server with a NativeCommandError.

This meant I had to dig a bit more deeply into ASP.NET 5.0 running on .NET Core. Note that when you deploy one of these apps, you can include all the dependencies in the app directory. In other words, apps are self-hosting. The binary that enables this bit of magic is called DNX (.NET Execution Environment); it was formerly known as the K runtime.

Developers need to install the DNX SDK on their machines (Windows, Mac or Linux). There is currently a getting started guide here, though note that many of the topics in this promising documentation are as yet unwritten.

However, after installation you will be able to use several handy commands:

dnvm This is the .NET Version manager. You can have several versions of the DNX runtime installed and this utility lets you list them, set aliases to save typing full paths, and manage defaults.

dnu This is the .NET Development Utility (formerly kpm) that builds and publishes .NET Core projects. The two commands I found myself using regularly are dnu restore, which downloads NuGet (.NET package repository) packages, and dnu publish, which packages an app for deployment. Once published, you will find .cmd files in the output which you use to start the app.

dnx This is the binary which you call to run an app. On the development machine, you can use dnx . run to run the console app in the current directory and dnx . web to run the web app in the current directory.

Now, back to my deployment issues. The Visual Studio templates are all hooked to DNX beta 4, and I was informed that I needed DNX beta 5 for Nano Server. I played around with trying to get Visual Studio to target the updated DNX but ran into problems so decided to ignore Visual Studio and do everything from the command line. This should mean that it would all work on Mac and Linux as well.

I had a bit of trouble persuading DNX to update itself to the latest unstable builds; the main issue I recall is targeting the correct repository. Your NuGet sources must include (currently) https://www.myget.org/F/aspnetvnext/api/v2.

Since I was not using Visual Studio, I based my samples on these, Hello World Console, MVC and Web apps that you can use for testing that everything works. My technique was to test on the development machine using dnx . web, then to use dnu publish and copy the output to Nano Server where I could run ./web.cmd in a remote PowerShell session.

Note that I found it necessary to specify the CoreClr 64-bit runtime in order to get dnu to publish the correct files. I tried to make this the default but for some reason* it reverted itself to x86:

dnu publish --runtime "c:\users\[USERNAME]\.dnx\runtime\dnx-coreclr-win-x64.1.0.0-beta5-11701"

Of course the exact runtime version to use will change soon.

If you run this command and look in the /bin/output folder you will find web.cmd, and running this should start the app. The port on which the app listens is set in project.json in the top level directory of the project source. I set this to 5001, opened that port in the Windows Firewall on the Nano Server, and got a started message on the command line. However I still could not browse to the app running on Nano Server; I got a 400 error. Even on the development machine it did not work; the browser just timed out.

It turned out that there were several issues here. On the development machine, which is running Windows 10 build 10074, I discovered to my annoyance that the web app worked fine with Internet Explorer, but not in Project Spartan, sorry Edge. I do not know why.

Support also gave me some tips to get this working on Nano Server. In order for the app to work across the network, you have to edit project.json so that localhost is replaced either with the IP number of the server, or with a *. I was also advised to add dnx.exe to the allowed apps in the firewall, but I do not think this is necessary if the port is open (it is a nuisance, since the location of dnx.exe changes for every app).

Finally I was successful.

Final observations

It seems to me that ASP.NET vNext running on .NET Core has the characteristic of many open source projects: a few dedicated people who have little time for documentation and are so close to the project that their public communications assume a fair amount of pre-knowledge. The site I referenced above does have helpful documentation though, for the few topics that are complete. Some other posts I found helpful are this series by Steve Perkins, and the troubleshooting suggestions here, especially David Fowler’s post.

I like the .NET Core initiative overall, since I like C# and ASP.NET MVC and now it is becoming a true cross-platform framework. That said, the code does seem to be in rapid flux and I doubt it will really be ready when Visual Studio 2015 ships. The danger I suppose is that developers will try it in the first release, find lots of problems, and never go back.

I also like the idea of running apps in Nano Server, a low-maintenance environment where you can get the isolation of a dedicated server for your app at low cost in terms of resources.

No doubt though, the lack of pieces that you expect to find on Windows Server will be an issue and I am not sure that the mainstream Microsoft developer ecosystem will take to it. Aidan Finn is not convinced, for example:

Am I really expected to deploy a headless OS onto hardware where the HCL certification has the value of a bucket with a hole in it? If I was to deploy Nano, even in cloud-scale installations, then I would need a super-HCL that stress tests all of the hardware enhancements. And I would want ALL of those hardware offloads turned OFF by default so that I can verify functionality for myself, because clearly, neither Microsoft’s HCL testers nor the OEMs are capable of even the most basic test right now.

Finn’s point is that if your headless server is having networking issues it is hard to troubleshoot, since of course remote tools will not work reliably. That said, I have personally run Hyper-V Server (which is essentially Server Core with just the Hyper-V role) with great success for several years; I started keeping notes on how to troubleshoot from the command line and found solutions to common problems. If networking fails with Nano Server then yes, you have a problem, but there is always something you can do, even if it means mounting the Nano Server VHD or VHDX on another VM. Windows Server admins have become accustomed to a local GUI though and adjusting even to Server Core has not been easy.

*The reason was that I did not use the -p argument with dnvm use, which would have made it persistent.

HoloLens: a developer hands-on

I attended the “Holographic Academy” during Microsoft’s Build conference in San Francisco. It was aimed at developers, and we got a hands-on experience of coding a simple HoloLens app and viewing the results. We were forbidden from taking pictures so you will have to make do with my words; this also means I do not have to show myself wearing a bulky headset and staring at things you cannot see.

First, a word about HoloLens itself. The gadget is a headset that augments the real world with a 3D “projected” image. It is not really a hologram, otherwise everyone would see it, but it is a virtual hologram created by combining what you see with digital images.

The effect is uncanny, since the image you see appears to stay in one place. You can walk around it, seeing it from different angles, close up or far away, just as you could with a real image.

That said, there were a couple of issues with the experience. One is that if you went too close to a projected image, it disappeared. From memory, the minimum distance was about 18 inches. Second, the viewport where you see the augmented reality was fairly small and you could easily see around it. This is detrimental to the illusion, and sometimes made it a struggle to see as much of your hologram as you might want.

I asked about both issues and got the same response, essentially “no comment”. This is prototype hardware, so anything could change. However, according to another journalist who attended a hands-on demo in January, the viewport has gotten smaller, suggesting that Microsoft is compromising in its effort to make the technology into a commercially viable product.

Another odd thing about the demo was that after every step, we were encouraged to whoop and cheer. There was a Microsoft “mentor” for every pair of journalists, and it seemed to me that the mentors were doing most of the whooping and cheering. It is obvious that this is a big investment for the company and I am guessing that this kind of forced enthusiasm is an effort to ensure a positive impression.

Lest you think I am too sceptical, let me add that the technology is genuinely amazing, with obvious potential both for gaming and business use.

The developer story

The development process involves Unity, Visual Studio, and of course the HoloLens device itself. The workflow is like this. You create an interactive 3D scene in Unity and build it, whereupon it becomes a Visual Studio project. You open the project in Visual Studio, and deploy it to HoloLens (connected over USB), just as you would to a smartphone. Once deployed, you disconnect the HoloLens and wear it in order to experience the scene you have created. Unity supports scripting in C#, running on Mono, which makes the development platform easy and familiar for Windows developers.

Our first “Holo World” project displayed a hologram at a fixed position determined by where you are when the app first runs. Next, we added the ability to move the hologram, selecting it with a wagging finger gesture, shifting our gaze to some other spot, and placing it with another wagging finger gesture. Note that for this to work, HoloLens needs to map the real world, and we tried turning on wire framing so you could see the triangles which show where HoloLens is detecting objects.

We also added a selection cursor, an image that looks like a red bagel (you can design your own cursor and import it into Unity). Other embellishments were the ability to select a sphere and make it fall to the floor and roll around, voice control to drop a sphere and then reset it back to the starting point, and “spatial audio” that appears to emanate from the hologram.

All of this was accomplished with a few lines of C# imported as scripts into Unity. The development was all guided so we did not have to think for ourselves, though I did add a custom voice command so I could say “abracadabra” instead of “reset scene”; this worked perfectly first time.
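
The voice command piece, for example, needs only a few lines in Unity. This is a sketch of the general pattern using Unity’s KeywordRecognizer rather than the exact tutorial code; ResetScene here is a placeholder for whatever the scene actually does:

using UnityEngine;
using UnityEngine.Windows.Speech;

public class VoiceCommands : MonoBehaviour
{
    private KeywordRecognizer recognizer;

    void Start()
    {
        // Listen for the custom phrase instead of (or alongside) "reset scene"
        recognizer = new KeywordRecognizer(new[] { "abracadabra" });
        recognizer.OnPhraseRecognized += args =>
        {
            if (args.text == "abracadabra")
                ResetScene();
        };
        recognizer.Start();
    }

    private void ResetScene()
    {
        // Placeholder: move the hologram back to its starting position
        transform.localPosition = Vector3.zero;
    }

    void OnDestroy()
    {
        if (recognizer != null)
            recognizer.Dispose();
    }
}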

For the last experiment, we added a virtual underworld. When the sphere dropped, it exploded making a virtual pit in the floor, through which you could see a virtual world with red birds flapping around. It was also possible to enter this world, by positioning the hologram above your head and dropping a sphere from there.

HoloLens has three core inputs: gaze (where you are looking), gesture (like the finger wag) and voice. Of these, gaze and voice worked really well in our hands-on, but gesture was more difficult and sometimes took several tries to get right.

At the end of the session, I had no doubt about the value of the technology. The development process looks easily accessible to developers who have the right 3D design skills, and Unity seems ideally suited for the project.

The main doubts are about how close HoloLens is to being a viable commercial product, at least in the mass market. The headset is bulky, the viewport too small, and there were some other little issues like lag between the HoloLens detection of physical objects and their actual position, if they were moving, as with a person walking around.

Watch this space though; it is going to be most interesting.

StackOverflow developer survey shows decline in C#, Windows

StackOverflow, a popular (and the best) site for programming queries, has published its annual developer survey. Respondents included:

26,086 people from 157 countries participated in our 45-question survey. 6,800 identified as full-stack developers, 1,900 as mobile developers, 1,200 as front-end developers, 2 as farmers, and 12,000 as something else.

That is a decent sample size, though not necessarily representative of the entire developer community.

What is notable? Here are a few things that stood out for me:

Developers are young. The largest group is 25-29, and the average age is 28.9 years.

92.1% of respondents are male. Ouch.

Software is still a good bet for a career even if you have no qualifications. 41.8% declared themselves self-taught. That said, it is not clear to me what proportion of respondents do programming as their main job. Presumably not the two farmers?

If you look at the “Most popular technologies”, there is a striking decline in C# over the last three years:

2013: 44.7%

2014: 37.6%

2015: 31.6%

That’s a shame because C# is an excellent language. The reason? It’s speculation, but it probably reflects less Windows development, whether server or desktop.

Swift is top of the “most loved” list, meaning a language that developers intend to continue with. Salesforce tops the “most dreaded”, meaning a platform that developers cannot wait to abandon, followed by Visual Basic.

What OS do developers use on the desktop? Here, Windows remains the biggest, but is declining:

2013: 60.4%

2014: 57.9%

2015: 54.5%

Windows XP has declined dramatically, down from 10.8% in 2013 to 1.0% today.

Where have developers gone, if they no longer use Windows? Mac is up over the period, but only by 2.8% share. 3.5% are using “Other”, which is interesting (Chromebook?).

I’ll stop there; I don’t want to spoil the survey.

Conclusions? This puts some data (albeit imperfect) on the theory that Microsoft is losing its grip on the developer community – though note that Microsoft’s technology in general remains popular, just less so than before.

Postscript: Several on Twitter have observed that most languages have declined over the period, not just C#. Here’s the difference in share from 2013 to 2015 for some of them:

JavaScript: –2.2%

SQL: –11.6%

Java: –5.1%

C#: –13.1%

PHP: –5.1%

In other words, all of the top 5 have declined, though C# has declined the most.

What does this mean? Since respondents could select multiple technologies (the numbers sum to more than 100%), a general decline might imply more specialisation. Or it might just say something about how the StackOverflow community has evolved, since that is the source of the data. Still, it seems to me that you cannot spin this as good news for Microsoft, though it might be less bad than it first appears.

Delphi and RAD Studio 2015 roadmap: no Universal Apps?

Embarcadero has posted a roadmap for RAD Studio 2015, its suite of tools for building apps for Windows, Mac, iOS and Android.

Note that the company says the (sketchy) plans outlined are “not a promise, or a contract”.

I will be interested to see if the company intends to support the Windows 10 Universal App Platform (UAP), which Microsoft is pushing as the future of Windows client app development. UAP apps run on the Windows Runtime, a sandboxed environment introduced in Windows 8. In Windows 10, UAP apps are integrated with the Windows desktop, and run on Windows Phone and Xbox as well as on PCs and tablets.

When Windows 8 came out, Embarcadero came up with a project type called “Metropolis”, which simulated the Windows 8 Metro environment but with a Win32 executable. It was neither one thing nor the other, and mostly ignored as far as I can tell. That said, lack of support for Windows 8 Store apps proved to be no big deal, because of the low take-up for the platform in general. At this stage, nobody knows whether the UAP may be similarly unsuccessful, though it seems to me that it has a better chance thanks to its broader scope and the changes that have been made.

The roadmap promises “Integration with new Windows 10 platform technologies” but does not promise support for the Windows Runtime or UAP, so my assumption for the moment is that Embarcadero is steering clear for the time being. There may also be technical challenges.

Not much new is promised for the venerable VCL (Windows-only apps), and only a little more for the cross-platform FireMonkey: new mobile components including Maps, a WebBrowser component for desktop apps, and more iOS platform (real native) controls.

A new iOS 64-bit compiler is promised, as well as moving the Win32 compiler to an LLVM-based toolchain, as is already the case for 64-bit Windows.

There is an Internet of Things slide which promises “mobile proximity integration” and components for connecting to different devices. Exactly what is new compared to the IoT support described here for XE7 is not clear to me.

Under consideration, Embarcadero says, is Linux server-side support for its middle-tier technologies like DataSnap, support for Intel Android, and a 64-bit toolchain for Mac OS X.

Since it is on SlideShare, I can embed the whole thing here:

This is some help I guess; though I recall much past angst expressed on the Embarcadero forums about these roadmaps, or the lack/lateness of them. The problem, I guess, is that roadmaps are of little benefit to the tools vendors, since they have potential to fuel discontent, set expectations that may later prove unrealistic, and give away plans to competitors.

This may explain why this one has so little content. Embarcadero could work a bit harder on the presentation as well; this really does not have the look of being the exciting next generation of a powerful cross-platform toolkit.