Category Archives: development

What the Blazor! After Silverlight, .NET in the browser reappears by another route

Silverlight, Microsoft’s browser plug-in which included a cut-down .NET runtime, once seemed full of promise for developers looking for an end-to-end .NET solution, cross-platform on Windows and Mac, and with support for “out of browser” applications for a native-like experience.

Silverlight was killed by various factors, including the industry’s rejection of old-style browser plug-ins, and warring factions at Microsoft which resulted in Silverlight on Windows Phone, but not on Windows 8. The Windows 8 model won, with what became the Universal Windows Platform (UWP) in Windows 10, but this is quite a different thing with no cross-platform support. Or there is Xamarin which is cross-platform .NET, and one day perhaps Microsoft will figure out what to do about having both UWP and Xamarin.

Yesterday though Microsoft announced (though it was already known to those paying attention) Blazor, an experimental project for hosting the .NET Runtime in the browser via WebAssembly. The name derives from “Browser + Razor”, Razor being the syntax used by ASP.NET to combine HTML and C# in a web application. C# in Razor executes on the server, whereas in Blazor it executes on the client.

Blazor is enabled by work the Xamarin team has done to compile the Mono runtime to WebAssembly. Although a .NET runtime compiled to WebAssembly sounds like a relatively large download, the team is hoping that a combination of smart linking (to strip out unnecessary code in both applications and the runtime), caching and HTTP compression will make this acceptable.

This post by Steve Sanderson is a good technical overview. Some key points:

– You can run applications either as interpreted .NET IL (intermediate language) or pre-compiled

– Blazor is an SPA (Single Page Application) framework with solutions for routing, state management, dependency injection, unit testing and more

– UI components use HTML and CSS

– There will be a browser API which you can call from C# code

– You will be able to interop with JavaScript libraries

– Microsoft will provide ASP.NET libraries that integrate with Blazor, but you can use Blazor with any server-side technology

What version of .NET will be supported? This is where it gets messy. Sanderson says Blazor will support .NET Standard 2.0 or higher, but not completely: some functions will throw a PlatformNotSupportedException. The reason is that not all functions make sense in the context of a Blazor application.
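
As an illustration of what that means in practice (this is my own hypothetical snippet, not from the Blazor documentation): code that compiles happily against .NET Standard may still fail at runtime if it calls into areas the browser sandbox cannot support.

using System;
using System.Net.Sockets;

class SandboxCheck
{
    static void Main()
    {
        try
        {
            // Opening a raw TCP socket is the sort of API that compiles against
            // .NET Standard but cannot realistically be implemented inside the
            // browser sandbox, so a runtime like Blazor has to throw instead.
            using (var client = new TcpClient("example.com", 80))
            {
                Console.WriteLine("Connected");
            }
        }
        catch (PlatformNotSupportedException ex)
        {
            Console.WriteLine("Not supported on this runtime: " + ex.Message);
        }
    }
}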

Blazor sounds promising, though the demo application on Azure currently gives me a 403 error. So there is this video from NDC Oslo instead.

The other question is whether Blazor has a future or will join Silverlight and other failed attempts to create a new application platform that works. Microsoft demands much patience from its .NET community.

C# and .NET: good news and bad as Python rises

Two pieces of .NET news recently:

Microsoft has published a .NET Core 2.1 roadmap and says:

We intend to start shipping .NET Core 2.1 previews on a monthly basis starting this month, leading to a final release in the first half of 2018.

.NET Core is the cross-platform, open source implementation of the .NET Framework. It provides a future for C# and .NET even if Windows declines.

Then again, StackOverflow has just published a report on the most sought-after programming languages in the UK and Ireland, based on the tags on job advertisements on its site. C# has declined to fourth place, now below Python, and with around half the demand of JavaScript:

image

To be fair, this is more about increased demand for Python, probably driven by interest in AI, than about a decline in C#. If you look at traffic on the StackOverflow site, C# is steady, but Python is growing fast:

image

The point that interests me, though, is the extent to which Microsoft can establish .NET Core beyond the Microsoft-platform community. Personally I like C# and would like to see it have a strong future.

There is plenty of goodness in .NET Core. Performance seems to be better in many cases, and cross-platform support is a big advantage.

That said, there is plenty of confusion too. Microsoft has three major implementations of .NET: the .NET Framework for Windows, Xamarin/Mono for cross-platform, and .NET Core for, umm, cross-platform. If you want cross-platform ASP.NET you will use .NET Core. If you want cross-platform Windows/iOS/macOS/Android, then it’s Xamarin/Mono.

The official line is that by targeting a specification (a version of .NET Standard), you can write cross-platform code irrespective of the implementation. It’s still rather opaque:

The specification is not singular, but an incrementally growing and linearly versioned set of APIs. The first version of the standard establishes a baseline set of APIs. Subsequent versions add APIs and inherit APIs defined by previous versions. There is no established provision for removing APIs from the standard.

.NET Standard is not specific to any one .NET implementation, nor does it match the versioning scheme of any of those runtimes.

APIs added to any of the implementations (such as, .NET Framework, .NET Core and Mono) can be considered as candidates to add to the specification, particularly if they are thought to be fundamental in nature.
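
To make the idea concrete, here is a trivial illustration of my own (not from the documentation): a library compiled against .NET Standard can run unchanged on any of the implementations, and can even report which one loaded it.

using System;
using System.Runtime.InteropServices;

public static class WhereAmI
{
    // Compiles against .NET Standard; at runtime it reports .NET Framework,
    // .NET Core or Mono depending on which implementation is hosting it.
    public static void Report()
    {
        Console.WriteLine(RuntimeInformation.FrameworkDescription);
        Console.WriteLine(RuntimeInformation.OSDescription);
    }
}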

Microsoft also says that plenty of code is shared between the various implementations. True, but it still strikes me that having both Xamarin/Mono and .NET Core is one cross-platform implementation too many.

Generating code for simple SQL Server data access without Entity Framework, works with .NET Core

I realise that Microsoft’s Entity Framework is the most common approach for data access in the .NET world, but I have also always had good results from a simple manual approach using DbConnection, DbCommand and DataReader objects, and like the fact that I can see and control exactly what SQL gets executed. If you prefer using Entity Framework or another abstraction that is fine and please stop reading now!

One snag with this more manual approach is that you have to write tedious code building SQL statements. I figured that someone must have written a utility application to generate this code but could not find one quickly so I did my own. It supports both C# and Visual Basic. The utility connects to a database and lets you generate a class for each table along with code for retrieving and saving these objects, ready for modification. Here you can see a generated class:

image
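
The screenshot does not reproduce here, but judging from the data access code further down, a generated class for the sample Authors table looks something like this (my reconstruction, with property names inferred from that code):

public class ClsAuthor
{
    public string Auid { get; set; }
    public string Aulname { get; set; }
    public string Aufname { get; set; }
    public string Phone { get; set; }
    public string Address { get; set; }
    public string City { get; set; }
    public string State { get; set; }
    public string Zip { get; set; }
    public bool Contract { get; set; }
}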

and here is an example of the generated data access code:

image

This is NOT complete code (otherwise I would be perilously close to writing my own ORM) but simply automates creating SQL parameters and SQL statements.
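
For the saving side, the generated code follows the same pattern: build the SQL statement and its parameters by hand. This is my own illustrative sketch rather than the utility’s exact output:

public static void SaveAuthor(ClsAuthor TheAuthor)
{
    SqlConnection conn = new SqlConnection(ConnectString);
    SqlCommand cmd = new SqlCommand();
    try
    {
        // Parameterised update; the utility's job is to churn out this boilerplate
        cmd.CommandText = "Update Authors set au_lname = @aulname, au_fname = @aufname, phone = @phone where au_id = @auid";
        cmd.Parameters.Add("@aulname", SqlDbType.VarChar).Value = TheAuthor.Aulname;
        cmd.Parameters.Add("@aufname", SqlDbType.VarChar).Value = TheAuthor.Aufname;
        cmd.Parameters.Add("@phone", SqlDbType.Char).Value = TheAuthor.Phone;
        cmd.Parameters.Add("@auid", SqlDbType.Char).Value = TheAuthor.Auid;
        cmd.Connection = conn;
        cmd.Connection.Open();
        cmd.ExecuteNonQuery();
    }
    finally
    {
        conn.Close();
        conn.Dispose();
    }
}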

One of my thoughts was that this code should work well with .NET Core. The SqlClient library implements the required classes. Here is my code for retrieving an author object, mostly generated by my utility:

public static ClsAuthor GetAuthor(string authorID)
{
    SqlConnection conn = new SqlConnection(ConnectString);
    SqlCommand cmd = new SqlCommand();
    SqlDataReader dr;
    ClsAuthor TheAuthor = new ClsAuthor();
    try
    {
        cmd.CommandText = "Select * from Authors where au_id = @auid";
        cmd.Parameters.Add("@auid", SqlDbType.Char);
        cmd.Parameters[0].Value = authorID;
        cmd.Connection = conn;
        cmd.Connection.Open();

        dr = cmd.ExecuteReader();

        if (dr.Read())
        {
            //Get Function
            TheAuthor.Auid = GetSafeDbString(dr, "au_id");
            TheAuthor.Aulname = GetSafeDbString(dr, "au_lname");
            TheAuthor.Aufname = GetSafeDbString(dr, "au_fname");
            TheAuthor.Phone = GetSafeDbString(dr, "phone");
            TheAuthor.Address = GetSafeDbString(dr, "address");
            TheAuthor.City = GetSafeDbString(dr, "city");
            TheAuthor.State = GetSafeDbString(dr, "state");
            TheAuthor.Zip = GetSafeDbString(dr, "zip");
            TheAuthor.Contract = GetSafeDbBool(dr, "contract");
        }
    }
    finally
    {
        conn.Close();
        conn.Dispose();
    }

    return TheAuthor;
}

Everything worked perfectly and I soon had a table showing the authors, using ASP.NET MVC.

In order to verify that it really does work with .NET Core I moved the project to Visual Studio Mac and ran it there:

image

I may be unusual, but I am reassured that I have a relatively painless way to write a database application for .NET Core without using Entity Framework.

QCon London 2017: IoT insecurity, serverless computing, predicting technical debt, and why .NET Core depends on a 36,000 line C++ file

I’m at the QCon event in London, a multi-vendor conference aimed primarily at enterprise developers and architects.

image
Adam Tornhill speaks at QCon London 2017

A few notes on day one. Alasdair Allan gave a keynote on security and the internet of things; it was an entertaining and disturbing résumé of all that is wrong with the mad rush to connect everything to the internet, though short on answers. Our culture has to change so that organisations such as hotels, toy manufacturers, appliance vendors and even makers of medical equipment take security seriously, but it is not clear how this will come about unless so many bad things happen that customers start to insist on it.

Michael Feathers spoke on strategic code deletion, part of a track on “Dark code: the legacy/tech debt dilemma.” This was an excellent session; code is added to projects more often than it is removed, and lack of hygiene in this regard has risks including security, reliability and performance. But discovering which code is safe to remove is not always trivial, and Feathers explored some of the nuances and suggested some techniques.

Steve Faulkner gave a session on serverless JavaScript, or more specifically, using Amazon Web Services (AWS) Lambda and API Gateway. Faulkner said that the API Gateway was the piece that made Lambda viable for them; he is Director of Platform Engineering at Bustle, a busy content site based in the USA. In a nutshell, moving from EC2 VMs to Lambda has yielded both financial savings and easier management. The only downside is performance; each call to a Lambda function takes a minimum of 100ms whereas the same function on a VM might take 20ms. In the end it is not critical as performance remains satisfactory.

Faulkner said that AWS is ahead of its competitors (Microsoft, Google and IBM were mentioned) but when pressed said that both Microsoft and Google offered strong alternatives. Microsoft’s Azure Functions are spoilt by the need to specify a maximum scale, rather than scaling automatically, but its routing solution is in some ways ahead of AWS, he said. Google’s Functions will be great when out of beta.

Adam Tornhill spoke on A Crystal Ball to prioritise Technical Debt, another session in the dark code track. This was my favourite of the day. Tornhill presented a relatively simple way to discover what code you should refactor now in order to avoid future issues. His method is based on looking for files with many lines of code (a way of measuring complexity) and many commits (suggesting high importance and activity), the “hotspots” in your projects. For more detail and some utilities see Tornhill’s blog.
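
To illustrate the idea (this is my own quick sketch, not Tornhill’s tooling): combine the number of commits per file from the git history with the current line count, and rank by the product.

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using System.Linq;

class HotspotSketch
{
    static void Main(string[] args)
    {
        string repo = args.Length > 0 ? args[0] : ".";

        // Count commits per file by reading the file lists from git log
        var psi = new ProcessStartInfo("git", "log --name-only --pretty=format:")
        {
            WorkingDirectory = repo,
            RedirectStandardOutput = true,
            UseShellExecute = false
        };
        var commits = new Dictionary<string, int>();
        using (var p = Process.Start(psi))
        {
            string line;
            while ((line = p.StandardOutput.ReadLine()) != null)
            {
                if (string.IsNullOrWhiteSpace(line)) continue;
                commits[line] = commits.TryGetValue(line, out var n) ? n + 1 : 1;
            }
        }

        // Hotspot score = change frequency (commits) x size (lines of code)
        var hotspots = commits
            .Where(kv => File.Exists(Path.Combine(repo, kv.Key)))
            .Select(kv => new
            {
                File = kv.Key,
                Commits = kv.Value,
                Lines = File.ReadLines(Path.Combine(repo, kv.Key)).Count()
            })
            .OrderByDescending(x => x.Commits * x.Lines)
            .Take(10);

        foreach (var h in hotspots)
            Console.WriteLine($"{h.File}: {h.Commits} commits, {h.Lines} lines");
    }
}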

Why do we end up with bad or risky code in our software? Tornhill said that developers often mistake organisational problems for technical problems and try unsuccessfully to fix them with tools.

He also mentioned an example of high-risk code, the file gc.cpp which performs garbage collection in .NET Core, the next generation of Microsoft’s .NET Framework. This file is over 36,000 lines and should be refactored. There is a discussion on the subject here. It exactly bears out Tornhill’s point. A developer proposes to refactor the file, back in March 2015. Microsoft’s Karel Zikmund defends the status quo:

Why it is this way? … Partly historical reasons (it is this way since the start). Partly because devs working on it didn’t feel the urge to refactor it. Partly because splitting of gc.cpp is non-trivial and risky and because it does not bring too big value (ramp up in the code base can be gained also in the combination of reading BOTR and debugging the code). Why it is staying this way? … Cost/benefit/risk ratio is IMO not in favor of a change here.

Few additional thoughts:
Am I happy that there is only 1 large file? No, but it doesn’t hurt me much either.
Do I see the disadvantages of large file? Yes, but I don’t think they are huge. More like minor annoyances with easy workarounds.
And to turn it around: Do you see the risk of any changes here? Do you see the cost of extra careful code reviews to mitigate the risk?

Strictly technically, we truly believe this is a formatting change. If it was simple to split it up and if it would be low risk and if it would be very easy to review, it might be worth the ‘minor’ improvements mentioned above … but I don’t see that combo happening (not on a noticeable scale in gc.cpp).
On a personal note: I also trust CLR team that if all these three things were true, the refactoring would have happened long time ago.

Note that some of this code goes back beyond .NET Core to the .NET Framework, the “historical reasons” that Zikmund mentions. We can see that the factors preventing change are as much organisational as technical.

Finally I attended a session on Microsoft’s Cognitive Services. Note this was in the “Sponsored solution track”. Microsoft also has a stand here focused on its Cognitive Services.

Microsoft platform content is under-represented at QCon, though many of the sessions are applicable to developers on any platform. I am not sure of all the reasons for this; there used to be an Advanced .NET track at QCon. It does reflect some overall development trends as well as the history and evolution of QCon itself. That said, there is a session on SQL Server on Linux so the company is not completely invisible here.

As for the session, it was a reasonable overview of Microsoft’s expanding Cognitive Services APIs, which cover things like image recognition, speech recognition and more. I would have liked more depth and would have preferred to hear from a practitioner, in other words, “we built an application on Cognitive Services and this is what we learned.” I am not altogether clear why the company is pushing this so hard, except that it is a driver for developers to use Azure. I asked how developers should deal with the problem of uncertainty*, in other words, the fact that Cognitive Services does not deliver absolute results but rather draws conclusions with a confidence score – eg it might be pretty sure that an image contains a human face, fairly sure that it is male, and somewhat confident that the age of the person is mid forties. When the speaker demoed speech recognition it went pretty well except that “Start” was transcribed as “Stop.” This stuff is difficult.
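
The practical point for developers is that every result is a score to be thresholded rather than a fact. This is purely my own illustrative sketch of the shape of the problem, not the Cognitive Services SDK:

// Illustrative only: not the Cognitive Services API.
class Prediction
{
    public string Label { get; set; }
    public double Confidence { get; set; }   // 0.0 to 1.0
}

static class Uncertainty
{
    // Application code has to choose thresholds and handle the uncertain middle ground
    public static string Interpret(Prediction p)
    {
        if (p.Confidence >= 0.9) return "Probably " + p.Label;
        if (p.Confidence >= 0.6) return "Possibly " + p.Label;
        return "Unsure - ask the user or fall back to a default";
    }
}

class UncertaintyDemo
{
    static void Main()
    {
        var p = new Prediction { Label = "male, mid forties", Confidence = 0.62 };
        System.Console.WriteLine(Uncertainty.Interpret(p));  // "Possibly male, mid forties"
    }
}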

Looking forward now to Day Two: Containers, Machine Learning, and more.

*More concisely expressed as “Systems are moving from the deterministic to the probabilistic” by Stephen Whitworth, who is now speaking on Machine Learning.

Microsoft sets Visual Studio LightSwitch to off

Microsoft has officially announced the end of development of LightSwitch, a rapid application builder for desktop and mobile applications.

LightSwitch was introduced in July 2011 as a tool to build multi-tier applications using a data-first approach. You can design your database using an excellent visual designer, design screens for viewing and editing the data using a non-visual designer, and generate applications with the server-side code hosted either on your own server or on Microsoft Azure. The client application in the original LightSwitch was based on Silverlight, but this was later extended with an option for HTML. You can get a feel for the general approach from my early hands-on here.

As I noted at the time, LightSwitch abstracts a number of difficult tasks. This is a good thing, though as with any application generator you have to take time to learn its quirks. That said, it is more usable than most model-driven development tools, in my experience.

LightSwitch had some bad luck. It was conceived at a time when Silverlight looked like the future of Microsoft’s client development platform, but by the time it launched Silverlight was heading for obsolescence. It also fell victim to ideologies within Microsoft (which persist today) that chase the dream of code-free application development that anyone can do. The documentation for LightSwitch on launch was dreadful, a series of how-tos that neglected to explain how the tool worked. You had to get the software development kit, aimed at those building LightSwitch components, to have any hope of understanding the tool.

image

The abandonment of LightSwitch is not a surprise. Microsoft had stopped talking about it and adoption was poor. There will be no tooling for it in the next Visual Studio, though you can keep using it for a while if you want.

I think it is a shame, since it was a promising tool and I cannot help thinking that with more intelligent positioning and a few tweaks to the product and its documentation it could have been a success. Those who did get to grips with it found it very good.

What is unfortunate is that Microsoft has lost the faith of many developers thanks to the many shifts in its development strategy. I know component vendors have also been caught out by the Silverlight and then LightSwitch debacle. Here is one of the comments on the announcement:

Microsoft keeps doing this over and over, we invest months even years to master a technology, just to find out it’s being phased out prematurely. Perfectly good, one-of-a-kind niche tools too. So much investment on both sides (both MS and customers) down the drain. What’s worse, it is done is a non-transparent, dishonest manner, letting things dry up over a couple years so that when the announcement comes, no-one really cares any more, no more noise – just look at this blog.

This makes it hard for the company to convince developers that its new strategies du jour have a longer life ahead of them. I am thinking of the UWP (Universal Windows Platform), which has already changed substantially since its first conception, and of PowerApps, the supposed replacement for LightSwitch, and yet another attempt to promote code-free development.

Developers do not want code-free development. They like tools that do stuff for them, if they are intuitive and transparent, but they also like an easy route to adding and modifying code in order to have the application work the way they want.

On GitHub and GitHub Universe

I’ve been at GitHub Universe in San Francisco for the last few days. Around 1500 developers (not sure if that figure includes staff and exhibitors) in a warehouse at Pier 70. The venue was beautifully converted into an interaction space. Here is the view from outside as we were leaving; Octocat seems to be waving goodbye:

image

The main stage done up like a spaceship:

image

There was a large area for mingling, overseen by Octocat:

image

Plenty of space outside too, with a high standard of food and drink on offer.

image

There was also a “send a postcard” area where you could write a card; with cards, pens, stamps and postbox supplied there was no excuse not to do so:

image

You are probably thinking, when do we get to the techie stuff; but in some ways it is better to look at the space GitHub created and ask what it tells you about the company.

Running an event like this is not cheap, and I think we can conclude that GitHub has a business model that works. Further, there was a generous and inclusive spirit to the event which was good to experience. Kimberly Bryant from Black Girls Code spoke at the opening keynote – the event concert was a Black Girls Code benefit – and while there is often an element of PR in the causes which businesses choose to sponsor, I don’t question the authenticity of GitHub’s efforts to promote both coding and diversity in our sadly imbalanced software industry.

image

In some ways then the actual technical content was not the most important thing about this event. That said, there was some excellent content on themes including how GitHub scales its own service, new project management and code review features in GitHub, and how the product is evolving its add-in or “integrations” platform.

I also learned a bit about Electron, a framework for “creating native Desktop applications with web technologies” based on Chromium and Node.js. Microsoft’s Visual Studio Code uses Electron, as does GitHub’s own Atom editor.

If you are a developer, you will be familiar with GitHub; it is the obvious choice for hosting an open source project (free) and a popular option for private repositories. When Google Code closed in 2015, the announcement cited the migration of developers to GitHub as the key reason and acknowledged that it was among “a wide variety of better hosting services” than Google’s own. “To meet developers where they are, we ourselves migrated nearly a thousand of our own open source projects from Google Code to GitHub,” remarked Google’s Chris DiBona. That was a pivotal moment, showing how GitHub has become a core part of the open source ecosystem as well as a strong commercial product for private and enterprise repositories.

GitHub does seem to take its responsibilities seriously, and the fact that it has found a successful balance between free and commercial services is something to be thankful for.

Time for another look at “pure .NET”

Back in the Nineties there was a lot of fuss about “pure Java”. This meant Java code without any native code invocations that tie the application to a specific operating system.

It is possible to write cross-platform Java code that invokes native code, but it adds to the complexity. If it is an operating system API, you need conditional code so that the right API is called on each platform. If it is a custom library, it will have to be compiled separately for each platform.

Over on the Microsoft .NET side, developers have tended to take a more casual approach. After all, in the great majority of cases the code would only ever run on Windows. Further, Microsoft tended to steer developers towards Windows-only dependencies like SQL Server. After all, that is the value of owning a developer platform.

Times change. Microsoft has got the cross-platform bug, with its business strategy based on attracting businesses to its cloud properties (Office 365 and Azure) rather than Windows. The .NET Framework has been forked to create .NET Core, which runs on Mac and Linux as well as Windows. SQL Server is coming to Linux.

Another issue is porting applications from 32-bit to 64-bit, as I was reminded recently when migrating some ASP.NET applications to a new site. If your .NET code avoids P/Invoke (Platform Invoke) then you can compile for “Any CPU” and 64-bit will just work. If you use P/Invoke and want to support both 32-bit and 64-bit, more care is required: IntPtr, used frequently in P/Invoke calls, is a different size in 32-bit and 64-bit processes. If you have custom native libraries, you need to compile them separately for each platform. The lazy solution is always to run as 32-bit, but that is a shame.
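
For example, a minimal sketch of my own using a standard Win32 call: declare handle-sized values as IntPtr, and the same “Any CPU” assembly runs correctly in both 32-bit and 64-bit processes.

using System;
using System.Runtime.InteropServices;

static class NativeMethods
{
    // IntPtr is 4 bytes in a 32-bit process and 8 bytes in a 64-bit one,
    // so window handles must be declared as IntPtr, never as int.
    [DllImport("user32.dll", CharSet = CharSet.Unicode)]
    public static extern IntPtr FindWindow(string lpClassName, string lpWindowName);
}

class Program
{
    static void Main()
    {
        Console.WriteLine("IntPtr.Size = " + IntPtr.Size);   // 4 on x86, 8 on x64
        IntPtr hWnd = NativeMethods.FindWindow(null, "Untitled - Notepad");
        Console.WriteLine(hWnd == IntPtr.Zero ? "Window not found" : "Found handle " + hWnd);
    }
}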

What this means is that P/Invoke should only be used as a last resort. Arguably this has always been true, but the reasons are stronger today.

This is also an issue for libraries and components intended for general use, whether open source or commercial. It is early days for .NET Core support, but any native code dependencies will be a problem.

Breaking the P/Invoke habit will not be easy but “Pure .NET” is the way to go whenever possible.

Reflections on QCon London 2016 – part one

I attended QCon in London last week. This is a software development conference focused on large-scale projects and with a tradition oriented towards Agile methodology. It is always one of the best events I get to attend, partly because it is vendor-neutral (it is organised by InfoQ), and partly because of the way it is structured. The schedule is divided into tracks, such as “Back to Java” or “Architecting for failure”, each of which has a track leader, and the track leader gets to choose who speaks on their track. This means you get a more diverse range of speakers than is typical; you also tend to hear from practitioners or academics rather than product managers or evangelists.

image

The 2016 event was well up to standard from my perspective – though bear in mind that with 6 tracks on each day I only got to attend a small fraction of the sessions.

This post is just to mention a few highlights, starting with the opening keynote from Adrian Colyer, who specialises in finding interesting IT-related research papers and writing them up on his blog. He seems to enjoy being contrarian and noted, for example, that you might be doing too much software testing – drawing, I guess, on this post about the art of testing less without sacrificing quality. The takeaway for me is that it is always worth analysing what you do and trying to avoid the point where the cost exceeds the benefit.

Next up was Gavin Stevenson on “love failure” – I wrote this up on the Reg – there is a perhaps obvious point here that until you break something, you don’t know its limitations.

On Monday evening we got a light-hearted (virtual) look at Babbage’s Analytical Engine (1837) which was never built but was interesting as a mechanical computer, and Ada Lovelace’s attempts to write code for it, thanks to John Graham-Cumming and illustrator Sydney Padua (author of The Thrilling Adventures of Lovelace and Babbage).

image

Tuesday and the BBC’s Stephen Godwin spoke on Microservices powering BBC iPlayer. This was a compelling talk for several reasons. The BBC is hooked on AWS (Amazon Web Services) apparently and stores 21TB daily into S3 (Simple Storage Service). This includes safety copies. iPlayer was rebuilt in 2013, Godwin told us, and the team of 25 developers achieves 34 live deployments per week on average; clearly the DevOps stuff is working here. Godwin advocates genuinely “micro” services. “How big should a microservice be? For us, about 600 Java statements,” he said.

Martin Thompson spoke on the characteristics of a good software engineer, though oddly the statement that has stayed with me is that an ORM (Object-Relational Mapping) “is the wrong abstraction for a database”, something that chimes with me even though I get the value of ORMs like Microsoft’s Entity Framework for rapid development where database performance is non-critical.

Then came another highlight: Google’s Micah Lemonik on Architecting Google Docs. This talk sadly was not recorded; a touch of paranoia from Google? It was fascinating from a historical perspective: Lemonik was involved in a small company called 2Web Technologies, which developed an Excel-like engine in 2003-4, and joined Google (which acquired 2Web) in 2005 to work on Google Sheets. The big story here was how Google Sheets became collaborative, so that more than one person could work on a spreadsheet simultaneously. “Google didn’t like it initially,” said Lemonik. “They thought it was too weird.” The team persisted though, thinking about the editing process as “messages being transferred between collaborators” rather than as file updates; and it worked.

You can actually use today’s version in your own projects, with Google’s Realtime API, provided that you are happy to have your stuff on Google Drive.

I particularly enjoyed Lemonik’s question to the audience. Two people are working on a sheet, and one types “6” into a cell. Then the same person overtypes this with “7”. Then the collaborator overtypes the same cell with “8”. Next, the first person presses Ctrl-z for undo. What should be the result?

The audience split neatly into “6”, “7”, and just a few “8” (the rationale for “8” is that undo should only undo your own changes and not touch those made by others).

Google, incidentally, settled on “6”, maintaining a separate undo stack for each user. But there is no right answer.
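
As a toy illustration of that design (my own sketch, nothing to do with Google’s actual implementation): keep an undo stack per user, and undo restores whatever value that user’s last edit overwrote.

using System;
using System.Collections.Generic;

class PerUserUndoSheet
{
    private readonly Dictionary<string, string> cells = new Dictionary<string, string>();
    private readonly Dictionary<string, Stack<(string cell, string previous)>> undoStacks =
        new Dictionary<string, Stack<(string cell, string previous)>>();

    public void Edit(string user, string cell, string value)
    {
        cells.TryGetValue(cell, out string previous);
        if (!undoStacks.TryGetValue(user, out var stack))
            undoStacks[user] = stack = new Stack<(string cell, string previous)>();
        stack.Push((cell, previous ?? ""));   // remember what this user overwrote
        cells[cell] = value;
    }

    public void Undo(string user)
    {
        if (undoStacks.TryGetValue(user, out var stack) && stack.Count > 0)
        {
            var (cell, previous) = stack.Pop();
            cells[cell] = previous;           // restore the value this user's edit replaced
        }
    }

    public string Read(string cell) => cells.TryGetValue(cell, out string v) ? v : "";
}

class UndoDemo
{
    static void Main()
    {
        var sheet = new PerUserUndoSheet();
        sheet.Edit("alice", "A1", "6");
        sheet.Edit("alice", "A1", "7");
        sheet.Edit("bob", "A1", "8");
        sheet.Undo("alice");                  // reverts alice's 6 -> 7 edit
        Console.WriteLine(sheet.Read("A1"));  // prints 6
    }
}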

Lemonik also discussed the problem of consistency when there are large numbers of contributors. A hard problem. “There have to be bounds to the system in order for it to perform well,” he said. “The biggest takeaway for me in building the system is that you just can’t have it all. All of engineering is this trade-off.”

image

I have more to say about QCon so look out for part two shortly.

Adapting a native code DLL to be called from a Store or Universal Windows app

I am writing a Bridge game in C# – yes, I have been doing this for some time, it does run now but it is not ready for public unveiling.

It is good fun though, and a learning experience, as I am writing it as a Windows 8 Store app. This means it can also be a Universal Windows Platform app, but I have kept it compatible with Windows 8.1 as I don’t want to lose that large market of Windows 8 users who have not upgraded to 10. Hmm.

Bridge is a card game in which a pack of 52 cards is dealt into 4 hands of 13 cards. Each hand is played as a sequence of 13 4-card “tricks”, and each trick is won by one of two opposing pairs of players according to the cards played. Each pair of course tries to win as many tricks as possible, so one of the points of interest is how many tricks can be won if you play perfectly (ie with full knowledge of all four hands). Another point of interest is how each card played affects the potential number of tricks you can win with best play. For example, leading a King might cost you a trick (or more) if your opponents hold both the Ace and the Queen of that suit.

This is called “double dummy” analysis and smart people have written algorithms to calculate the answers. A double dummy analysis is useful in a bridge game for two reasons. One is that users may like to know, after playing a hand, what their best score could have been, or even to analyse the hand and see how the outcome would have varied if they had played this card rather than that card at trick such-and-such. The other is that you can use it to assist the software in finding the best play. Of course it is important that the software plays fair by not using knowledge of all four hands beyond what would be known by human players; but it is legitimate to try out various possible hands that match what is currently known and use double dummy analysis on these hands.

One such smart person is Bo Haglund, who wrote a C++ Windows library for double dummy analysis, called Double Dummy Solver (DDS), and released it as open source under the Apache 2 license. It works very well, is widely used in the Bridge software community, and has now been ported to Mac and Linux; you can find the latest code on GitHub.

Modifying a native code DLL to use with a Store app

I wanted to use the library in my own Bridge game but faced a compatibility problem. Windows Store apps can only call into DLLs that meet certain requirements, such as using only a subset of the Windows API, and DDS did not meet those requirements. My choice was either to port the DLL to C#, or to modify the code so that it would work as a Windows Runtime native DLL.

I have no doubt that the code could be ported to C# but it looks like rather a long job that would result in a library with slower performance (please feel free to prove me wrong). I thought it would be more realistic to modify the code, so I created a new Windows 8.1 DLL project in Visual Studio 2013 (I am now using Visual Studio 2015 but it is the same for this) and set about modifying the code so that it would compile.

In no particular order, here are some notes on what I learned.

I was able to get the DLL to compile after disabling the multi-threading support (more on this later), and commenting out some functions that I don’t yet need.

Another issue I hit was that Visual C++ by default performs “Security Development Lifecycle” checks (compile with /sdl). This means that common functions like strcpy, strcat, sprintf and others will not compile. You have to use “secure” versions of those functions: strcpy_s, strcat_s, sprintf_s and so on. These are specific to Microsoft’s libraries though. Of course you can just not compile with /sdl, or define _CRT_SECURE_NO_WARNINGS, but I chose to fix all of these. Now the library compiled.

But did it work? No. I had introduced a stupid bug which took me a while to fix. Did it then work? Yes, but it took me some time to get it working from C#.

Next, I kept getting DLLNotFound exceptions. OK, so you have to add the DLL as content in your C# project, and make sure it is set to copy to your output. I still got DLLNotFound exceptions. It turns out that you get this exception even when the DLL is present, if there is a dependency in the DLL which is not found. What dependency was not found? I downloaded the Sysinternals Process Monitor utility and set the filter to monitor my C# game. I excluded SUCCESS results. Then I tried to load the DLL. This told me that it was looking for the file msvcr120_app.dll (the Windows Runtime version of the Visual C++ runtime library). My first thought was to add runtime libraries from the appx deployment packages, in:

C:\Program Files (x86)\Microsoft SDKs\Windows\v8.1\ExtensionSDKs\Microsoft.VCLibs\12.0

image

Then I discovered that all you need to do is to add a reference to the Visual C++ runtime packages, much easier. That fixed DLLNotFound.

Next, I had some problems calling the 64-bit DLL with Platform Invoke (PInvoke) from C#. I found it easier to compile both my C# app and the DLL itself as 32-bit code. I may go back to the 64-bit option later.
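
On the C# side, the interop code itself is straightforward. Here is a hedged sketch of the general shape; the function name and signature are invented for illustration and are not the actual DDS exports.

using System;
using System.Runtime.InteropServices;

static class DdsInterop
{
    // Illustrative declaration only: the real DDS entry points and parameters differ.
    // The calling convention must match the native build of the DLL.
    [DllImport("dds.dll", CallingConvention = CallingConvention.StdCall)]
    private static extern int SolveBoardStub(int trump, int leader, int target);

    public static int TrySolve(int trump, int leader, int target)
    {
        try
        {
            return SolveBoardStub(trump, leader, target);
        }
        catch (DllNotFoundException ex)
        {
            // Raised not only when dds.dll itself is missing, but also when a
            // dependency such as the Visual C++ runtime cannot be found.
            Console.WriteLine("Native solver unavailable: " + ex.Message);
            return -1;
        }
    }
}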

Concurrency issues

Now I had everything working; except that my DDS port was far inferior to the standard one because it was single-threaded. The original used QueueUserWorkItem which is not available in a Windows Runtime DLL. I searched for what to do, and came across this MSDN article which recommends using RunAsync, WorkItemHandler and IAsyncAction. However my DLL was not currently compiled using /ZW for “Consume Windows Runtime Extension”. I could add that of course; but then my DLL would have a dependency on the Windows Runtime, and if I wanted to use the code for, say, Windows 7, it would not work, or not without yet more #ifdef blocks. No big deal perhaps; but my preference was to avoid this dependency.

There may be other solutions, but the one that I found was to use the Concurrency Runtime. Previously, QueueUserWorkItem was called in a for loop. I simply modified this to use a parallel_for loop instead, using the example here for guidance. I also added:

#include <ppltasks.h>

using namespace concurrency;

to the top of the code. It works well, speeding performance by about three times on my quad-core desktop. Of course I was greatly helped by the fact that the code was already written with concurrency in mind.

image

The effect is spoiled by the time it takes to load the DLL, but fortunately you can get DDS to solve multiple boards in one call, though I have yet to experiment with this.

Running ASP.NET 5.0 on Nano Server preview

I have been trying out Microsoft’s Nano Server Preview and wrote up initial experiences for the Register. One of the things I mentioned is that I could not get an ASP.NET app successfully deployed. After a bit more effort, and help from a member of the team, I am glad to say that I have been successful.

image

What was the problem? First, a bit of background. Nano Server does not run the .NET Framework, presumably because it has too many dependencies on pieces of Windows which Microsoft wanted to omit from this cut-down deployment. Nano Server does support .NET Core, also known as Core CLR, which is the open source fork of the .NET Framework. This enables it to run PowerShell, although with a limited range of cmdlets, and my main two ways of interacting with Nano Server are with PowerShell remoting, and Windows file sharing for copying files across.

On your development machine, you need several pieces in order to code for ASP.NET 5.0. Just installing Visual Studio 2015 RC will do, except that there is currently an incompatibility between the version of the ASP.NET 5.0 .NET Core runtime shipped with Visual Studio, and what works on Nano Server. This meant that my first effort, which was to build an empty ASP.NET 5.0 template app and publish it to the file system, failed on Nano Server with a NativeCommandError.

This meant I had to dig a bit more deeply into ASP.NET 5.0 running on .NET Core. Note that when you deploy one of these apps, you can include all the dependencies in the app directory. In other words, apps are self-hosting. The binary that enables this bit of magic is called DNX (.NET Execution Environment); it was formerly known as the K runtime.

Developers need to install the DNX SDK on their machines (Windows, Mac or Linux). There is currently a getting started guide here, though note that many of the topics in this promising documentation are as yet unwritten.

image

However, after installation you will be able to use several handy commands:

dnvm This is the .NET Version manager. You can have several versions of the DNX runtime installed and this utility lets you list them, set aliases to save typing full paths, and manage defaults.

image

dnu This is the .NET Development Utility (formerly kpm) that builds and publishes .NET Core projects. The two commands I found myself using regularly are dnu restore which downloads Nuget (.NET repository) packages and dnu publish which packages an app for deployment. Once published, you will find .cmd files in the output which you use to start the app.

dnx This is the binary which you call to run an app. On the development machine, you can use dnx . run to run the console app in the current directory and dnx . web to run the web app in the current directory.

Now, back to my deployment issues. The Visual Studio templates are all hooked to DNX beta 4, and I was informed that I needed DNX beta 5 for Nano Server. I played around with trying to get Visual Studio to target the updated DNX but ran into problems so decided to ignore Visual Studio and do everything from the command line. This should mean that it would all work on Mac and Linux as well.

I had a bit of trouble persuading DNX to update itself to the latest unstable builds; the main issue I recall is targeting the correct repository. Your NuGet sources must include (currently) https://www.myget.org/F/aspnetvnext/api/v2.

Since I was not using Visual Studio, I based my samples on these: Hello World console, MVC and web apps that you can use for testing that everything works. My technique was to test on the development machine using dnx . web, then to use dnu publish and copy the output to Nano Server, where I could run ./web.cmd in a remote PowerShell session.
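
For context, the minimal web app at this stage is little more than a Startup class. Roughly like this, using the beta-era Microsoft.AspNet.* namespaces (which have since been renamed, so treat the details as indicative rather than definitive):

using Microsoft.AspNet.Builder;
using Microsoft.AspNet.Http;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        // Respond to every request with a plain text greeting
        app.Run(async context =>
        {
            await context.Response.WriteAsync("Hello from ASP.NET 5 on .NET Core");
        });
    }
}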

Note that I found it necessary to specify the CoreClr 64-bit runtime in order to get dnu to publish the correct files. I tried to make this the default but for some reason* it reverted itself to x86:

dnu publish --runtime "c:\users\[USERNAME]\.dnx\runtime\dnx-coreclr-win-x64.1.0.0-beta5-11701"

Of course the exact runtime version to use will change soon.

If you run this command and look in the /bin/output folder you will find web.cmd, and running this should start the app. The port on which the app listens is set in project.json in the top level directory of the project source. I set this to 5001, opened that port in the Windows Firewall on the Nano Server, and got a started message on the command line. However I still could not browse to the app running on Nano Server; I got a 400 error. Even on the development machine it did not work; the browser just timed out.

It turned out that there were several issues here. On the development machine, which is running Windows 10 build 10074, I discovered to my annoyance that the web app worked fine with Internet Explorer, but not in Project Spartan, sorry Edge. I do not know why.

Support also gave me some tips to get this working on Nano Server. In order for the app to work across the network, you have to edit project.json so that localhost is replaced either with the IP number of the server, or with a *. I was also advised to add dnx.exe to the allowed apps in the firewall, but I do not think this is necessary if the port is open (it is a nuisance, since the location of dnx.exe changes for every app).

Finally I was successful.

Final observations

It seems to me that ASP.NET vNext running on .NET Core has the characteristic of many open source projects, a few dedicated people who have little time for documentation and are so close to the project that their public communications assume a fair amount of pre-knowledge. The site I referenced above does have helpful documentation though, for the few topics that are complete. Some other posts I found helpful are this series by Steve Perkins, and the troubleshooting suggestions here especially David Fowler’s post.

I like the .NET Core initiative overall, since I like C# and ASP.NET MVC and now it is becoming a true cross-platform framework. That said, the code does seem to be in rapid flux and I doubt it will really be ready when Visual Studio 2015 ships. The danger I suppose is that developers will try it in the first release, find lots of problems, and never go back.

I also like the idea of running apps in Nano Server, a low-maintenance environment where you can get the isolation of a dedicated server for your app at low cost in terms of resources.

No doubt though, the lack of pieces that you expect to find on Windows Server will be an issue and I am not sure that the mainstream Microsoft developer ecosystem will take to it. Aidan Finn is not convinced, for example:

Am I really expected to deploy a headless OS onto hardware where the HCL certification has the value of a bucket with a hole in it? If I was to deploy Nano, even in cloud-scale installations, then I would need a super-HCL that stress tests all of the hardware enhancements. And I would want ALL of those hardware offloads turned OFF by default so that I can verify functionality for myself, because clearly, neither Microsoft’s HCL testers nor the OEMs are capable of even the most basic test right now.

Finn’s point is that if your headless server is having networking issues it is hard to troubleshoot, since of course remote tools will not work reliably. That said, I have personally run Hyper-V Server (which is essentially Server Core with just the Hyper-V role) with great success for several years; I started keeping notes on how to troubleshoot from the command line and found solutions to common problems. If networking fails with Nano Server then yes, you have a problem, but there is always something you can do, even if it means mounting the Nano Server VHD or VHDX on another VM. Windows Server admins have become accustomed to a local GUI though and adjusting even to Server Core has not been easy.

*the reason was that I did not use the -p argument with dnvm use, which would have made it persistent