Category Archives: software development

What is happening with desktop development on Windows and will WPF be upgraded at last?

Once upon a time all Windows development was desktop development. Then there was web development, but that was a server thing. Then in October 2012 Windows 8 arrived, and it was all about full-screen, touch control and Store-delivered applications that were sandboxed and safe to run. Underneath this there was a new platform-within-a-platform called the Windows Runtime or WinRT (or sometimes Metro). Developing for Windows became a choice: new WinRT platform, or old-style desktop development, the latter remaining necessary if your application needed more features than were available in WinRT, or to run on Windows 7.

Windows 8 failed and was replaced by Windows 10 (July 2015), in large part a return to the desktop. The Start menu returned, and each application again had a window. WinRT lived on though, now rebranded as UWP (Universal Windows Platform). The big selling point was that your UWP app would run on phones, Xbox and HoloLens as well as PCs. It was still locked down, though less so, and still Store-delivered.

Then Microsoft decided to abandon Windows Phone, a decision obvious to Microsoft-watchers in June 2015 when ex-Nokia CEO Stephen Elop left Microsoft, just before the launch of Windows 10, even though Windows Phone was not formally killed off until much later. UWP now had a rather small u (that is, not very universal).

In addition, Microsoft decided that locking down UWP was not the way forward, and opened up more and more Windows APIs to the platform. The distinction between UWP and desktop applications was further blurred by Project Centennial, now known as Desktop Bridge, which lets you wrap desktop applications for Store delivery.

Perhaps the whole WinRT/UWP thing was not such a good idea. A side-effect though of all the focus on UWP was that the old development frameworks, such as Windows Forms (WinForms) and Windows Presentation Foundation (WPF), received little attention – even though they were more widely used. Some Windows 10 APIs were only available in UWP, while other features only worked in WinForms or WPF, giving developers a difficult decision.

The Build 2018 event, held last week in Seattle, was the moment Microsoft announced that it would endeavour to undo the damage by bringing UWP and desktop development together. “We’ve taken all the UI stacks and merged them together,” said Mike Harsh and Scott Hunter in a session on “Modernizing desktop apps” (BRK3501 if you want to look it up).

According to Harsh and Hunter, Windows desktop application development is increasing, despite the decline of the PC (note that this is hardly a neutral source).

So what was actually announced? Here is a quick summary. Note that the announced features are for the most part applicable to future versions of Windows 10. As ever, Build is for initial announcements, so features are subject to change and will not work yet, other than possibly in pre-release form.

Greater information density in UWP applications. WinRT/UWP was originally designed for touch control, hence lots of white space. Most Windows users though have a mouse and keyboard. The spacious UWP layout looked wrong on big desktop displays, and it made porting applications harder. The standard layout is becoming denser, and a new Compact Size, an application setting, will pack still more information into the same space.

More controls for UWP. New DataGrid, Forms with data validation, and Menu bar; coming in future are Status bar, tab controls and Ribbon. The idea is to make UWP more suitable for line-of-business applications, which account for a large part of Windows application development overall.

New Windowing APIs for UWP. WinRT/UWP was designed for full-screen applications, not the popup dialogs or floating windows possible in desktop applications. Those capabilities are coming though. We will get tool windows, light-dismiss windows (which close when you click elsewhere), and multiple windows on one thread, so that they work like a single application when minimized or cycled through with Alt-Tab. Coming in future are topmost windows, modal windows, custom title bars, and maybe even MDI (Multiple Document Interface), though this last seems surprising since it is discouraged even in the desktop frameworks.

What many developers will care about more though is new features coming to desktop applications. There are two big announcements.

.NET Core 3.0 will support WinForms and WPF. This is big news, partly because .NET Core performs better than the Windows-only .NET Framework, but more importantly because it allows side-by-side deployment of the .NET runtime. Even better, a linker will let you deliver a .NET Core desktop application as a single executable with no dependencies. What performance gain? An example shown at Build was an application which uses File APIs running nearly three times faster on .NET Core 3.0.

XAML Islands enabling UWP features in WinForms and WPF. The idea is that you can pop a UWP host control in your WinForms or WPF application, and show UWP content there. Microsoft is also preparing wrapper controls that you can use directly. Mentioned were WebView, MediaPlayer, InkCanvas, InkToolBar, Map and SwapChainPanel (for DirectX content). There will be a few compromises. The XAML host window will be rectangular (based on an HWND) which means non-rectangular and transparent content will not work correctly. There is also the Windows 7 problem: no UWP on Windows 7, so what happens to your XAML Islands? They will not run, though Microsoft is working on a mechanism that lets your application substitute compatible Windows 7 content rather than crashing.
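
As a rough sketch of how this might look from a Windows Forms application, assuming the preview WindowsXamlHost wrapper control that Microsoft has shown for the Windows Community Toolkit (class and package names may change before release):

using System;
using System.Windows.Forms;
using Microsoft.Toolkit.Forms.UI.XamlHost; // preview package: an assumption

static class Program
{
    [STAThread]
    static void Main()
    {
        var form = new Form { Text = "WinForms hosting UWP content" };

        // The host instantiates the named UWP control inside a rectangular,
        // HWND-based island within the form.
        var host = new WindowsXamlHost
        {
            InitialTypeName = "Windows.UI.Xaml.Controls.InkCanvas",
            Dock = DockStyle.Fill
        };

        form.Controls.Add(host);
        Application.Run(form);
    }
}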

MSIX deployment. MSIX is Microsoft’s latest deployment technology. It will work with both UWP and Desktop applications, will support Windows 7 and 10, will provide for auto-updates, and will have tooling built into Visual Studio, as well as a packager for both your own and third-party applications. Applications installed with MSIX are managed and updated by Windows, have tamper protection, and are installed per-user. It seems to build upon the Desktop Bridge concept, the aim being to make Windows more manageable in the Enterprise as well as safer for all users, if Microsoft can get widespread adoption. The packaging format will also work on Android, Mac and Linux and you can check out the SDK here.

Will WPF or WinForms be updated?

The above does not quite answer the question: will WPF or Windows Forms be significantly updated, other than with the ability to use UWP content? I could not get a clear answer on this question at Build, though I was told that adding support for .NET Core 3.0 required significant changes to these frameworks, so it is no longer true to say they are frozen. With regard to WPF, Microsoft Corporate VP Julia Liuson told me:

“We will be looking at more controls, more capabilities. It is widely recognised that WPF is the best framework for desktop development on Windows. The fact that we’re moving on top of .NET Core 3.0 gives us a path forward.”

That said, I also heard that the team would rather write code once and use it across UWP, WPF and WinForms via XAML Islands, than write new controls for each framework. That makes sense, the difficulty being Windows 7. Microsoft would rather promote migration to Windows 10, than write new UI components that work across both Windows 7 and Windows 10.

A week of QCon: introduction

I attended QCon London last week and found it fascinating, but have not written as much about it as I intended because of various other deadlines. In order to address this I will do a quick daily post for the next week or so.

QCon is a software development conference run by InfoQ. It is vendor-neutral and focuses on large-scale enterprise development as well as future trends, language choices and changes, software architecture and more. If you delve into the history of the event, it has championed techniques including Agile development, Service Oriented Architecture, Microservices, and now AI. The event has a culture and an ethos, which is something to do with human-centred software, team communications, taking the side of the user, aversion to unnecessary complexity, and constant exploration of emerging technology.

Laura Bell of SafeStack speaks at QCon London on Architecting a Culture of Secure Software.

QCon, like many other events, encourages attendees to give feedback on the sessions they attend. At other events I have often seen forms with several categories and questions like “How well did the speaker know their subject?” and “What was your biggest takeaway from this session?” While such questions are reasonable, they are too demanding and time-consuming, so few attendees respond, or the responses are of low quality. The QCon organisers decided years ago that the only feedback system that works is to have attendees vote good, indifferent or poor as they leave. This used to be done with coloured paper and is now electronic. I mention this because it says something about the event culture: let’s prefer something that works and is not a burden, despite the seeming crudity of a 1-2-3 scoring system. And of course even such basic information is highly valuable in discerning which sessions were most appreciated.

The event prefers practitioners, engineers and team leads over evangelists, trainers and consultants, and it attracts a particularly able audience.

Of course you can learn plenty outside the actual sessions by chatting to other attendees.

Up next: technical ethics at QCon London.

Setting up PHP for development on Windows Subsystem for Linux in Windows 10

I have been working a little with PHP, for the first time in a while, and soon found it annoying not to have the convenience of instant application testing and line-by-line debugging. I have set up a PHP development environment before, using XAMPP for Windows and Eclipse, but it was fiddly. I also prefer PHP on Linux, which is where my scripts will be running.

Since Windows 10 now has a Linux environment built-in, called Windows Subsystem for Linux (WSL), I decided to set this up to run Apache, PHP and MySQL and to try debugging my scripts there.

My PC is a recent installation and I had not yet installed WSL. To do so, you have to both download a Linux distribution from the Store (I chose Ubuntu), and enable WSL in Windows features. Then restart, launch Ubuntu, set a username and password, and you are up and running.

Note: the Linux commands that follow should be run as root, using sudo.

Before doing anything else, I got Ubuntu up to date:

apt-get update

apt-get upgrade

Then I installed the LAMP suite:

apt-get install lamp-server^

(the final ^ is intentional; see the guide here).

To check that everything is working, I created the file phpinfo.php in /var/www/html with the following contents:

<?php phpinfo(); ?>

and restarted Apache:

/etc/init.d/apache2 restart

Note: if you have IIS running in Windows, or another web server, Apache will not be able to listen on port 80. Change the port in /etc/apache2/ports.conf and in /etc/apache2/sites-enabled/000-default.conf.
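
For example, to move Apache to port 8080 (any free port will do), change the Listen directive in /etc/apache2/ports.conf:

Listen 8080

and change the opening line of /etc/apache2/sites-enabled/000-default.conf to match:

<VirtualHost *:8080>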

Then I opened a web browser on the Windows side and browsed to localhost, checking both the Apache default page and phpinfo.php.

We are up and running, but not debugging PHP yet. Remember the basic rules of WSL:

  • you cannot change Linux files from Windows.
  • you can access Windows files from Linux.

We want to edit PHP from Windows, so we’ll define a site that uses Windows files. Windows files are under /mnt/c (or whatever drive letter you are using).

So if, for example, you have your PHP website in a folder called c:\websites\mysite, you can have Apache serve files from that folder.

The quickest way to get up and running is to create a symbolic link in the Apache home directory, in my case /var/www/html. Change to that directory and type:

ln -s /mnt/c/websites/mysite mysite

Now you can view the site at http://localhost/mysite/

This worked first time for me, complete with PHP running. You could also set up multiple virtual hosts in Apache, and use the hosts file in Windows to map other host names to localhost.
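
If you go the virtual host route, a minimal site definition (mysite.local is just an illustrative name), saved under /etc/apache2/sites-available/ and enabled with a2ensite, would look like this:

<VirtualHost *:80>
    ServerName mysite.local
    DocumentRoot /mnt/c/websites/mysite
    <Directory /mnt/c/websites/mysite>
        Require all granted
    </Directory>
</VirtualHost>

with a matching line in C:\Windows\System32\drivers\etc\hosts on the Windows side:

127.0.0.1 mysite.local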

Next, you probably want PHP to show error messages. To do this, replace the default php.ini with the development version (or tweak it according to your own preferences). At the time of writing, on Ubuntu, the default PHP version is 7.0 and php.ini-development is located at /usr/lib/php/7.0/php.ini-development. So I backed up the ini file at /etc/php/7.0/apache2, replaced it with the development version, and restarted Apache. My PHP form immediately showed me a non-fatal undefined index error, so it worked.

There is one small inconvenience. Apache in WSL will only run during the session. So before starting work, you have to open Ubuntu and type:

sudo apache2ctl start

Background task support is coming to WSL, but I do not regard this as a big problem.

OK, this is cool: we can make changes to the PHP code in our favourite Windows editor, save, and view the results directly in the browser. But what about line-by-line debugging? For this, we are going to use Visual Studio Code with the PHP Debug extension.

Then on the Ubuntu side:

apt-get install php-xdebug

Restart Apache:

apache2ctl restart

Check that phpinfo.php now shows an Xdebug section. Then edit php.ini and add the following:

[XDebug]
xdebug.remote_enable = 1
xdebug.remote_autostart = 1

Restart Apache again and XDebug is ready to go.

Over in Visual Studio Code there is a little more work to do. The problem is that although everything is running on localhost, the location of the files looks different to Linux than to Windows. We can fix this with a pathMappings setting. In Visual Studio Code, open the PHP file you want to debug. Click the Debug icon and then the little gearwheel near top left; this will open launch.json. By default there are a couple of settings for XDebug. These are OK for a default setup, but we need to add path mapping so that the debugger knows where to find the files.
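
For example, something like this (the paths are illustrative; the server-side path on the left should be whatever path Xdebug reports, which with the symbolic link above may be the resolved /mnt/c path):

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Listen for XDebug",
            "type": "php",
            "request": "launch",
            "port": 9000,
            "pathMappings": {
                "/mnt/c/websites/mysite": "c:\\websites\\mysite"
            }
        }
    ]
}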

Now you can set a breakpoint, start debugging, and open the page in your browser.

More guidance on the PHP Debug extension by Felix Becker is here.

Final thoughts

This is cool; but is it better or worse than an old-style VM running Linux and PHP? The WSL solution is lightweight and convenient, but unlike a VM it is not isolated, and you may hit issues unique to WSL, because not everything runs in it. I did happen to suffer crashes in Visual Studio and in Outlook while WSL was running; it may well be coincidence, but I cannot help wondering if WSL might be to blame.

Still, a great feature of WSL is that when you exit your session, it goes away, so it is not too intrusive. I plan to use it for PHP debugging and will see how it goes.

What the Blazor! After Silverlight, .NET in the browser reappears by another route

Silverlight, Microsoft’s browser plug-in which included a cut-down .NET runtime, once seemed full of promise for developers looking for an end-to-end .NET solution, cross-platform on Windows and Mac, and with support for “out of browser” applications for a native-like experience.

Silverlight was killed by various factors, including the industry’s rejection of old-style browser plug-ins, and warring factions at Microsoft which resulted in Silverlight on Windows Phone, but not on Windows 8. The Windows 8 model won, with what became the Universal Windows Platform (UWP) in Windows 10, but this is quite a different thing with no cross-platform support. Or there is Xamarin which is cross-platform .NET, and one day perhaps Microsoft will figure out what to do about having both UWP and Xamarin.

Yesterday though Microsoft announced (though it was already known to those paying attention) Blazor, an experimental project for hosting the .NET Runtime in the browser via WebAssembly. The name derives from “Browser + Razor”, Razor being the syntax used by ASP.NET to combine HTML and C# in a web application. C# in Razor executes on the server, whereas in Blazor it executes on the client.

Blazor is enabled by work the Xamarin team has done to compile the Mono runtime to WebAssembly. Although this sounds like a relatively large download, the team is hoping that a combination of smart linking (to strip out unnecessary code in both applications and the runtime) with caching and HTTP compression will make this acceptable.

This post by Steve Sanderson is a good technical overview. Some key points:

– you can run applications either as interpreted .NET IL (intermediate language) or pre-compiled

– Blazor is an SPA (Single Page Application) framework with solutions for routing, state management, dependency injection, unit testing and more

– UI components use HTML and CSS

– There will be a browser API which you can call from C# code

– you will be able to interop with JavaScript libraries

– Microsoft will provide ASP.NET libraries that integrate with Blazor, but you can use Blazor with any server-side technology
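
To give a flavour, this is roughly what a Blazor component looks like, based on the counter sample in the current experimental release (the syntax may well change):

@page "/counter"

<h1>Counter</h1>

<p>Current count: @currentCount</p>

<button onclick="@IncrementCount">Click me</button>

@functions {
    int currentCount = 0;

    void IncrementCount()
    {
        currentCount++;
    }
}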

What version of .NET will be supported? This is where it gets messy. Sanderson says Blazor will support .NET Standard 2.0 or higher, but not completely, in that some functions will throw a PlatformNotSupportedException. The reason is that not all functions make sense in the context of a Blazor application.

Blazor sounds promising, if developers can get past the name, though the demo application on Azure currently gives me a 403 error. There is this video from NDC Oslo instead.

The other question is whether Blazor has a future or will join Silverlight and other failed attempts to create a new application platform that works. Microsoft demands much patience from its .NET community.

HackerRank survey shows programming divides in more ways than one

Developer recruitment company HackerRank has published a survey of developer skills. The first place I look in any survey is who took part, and how many:

HackerRank conducted a study of developers to identify trends in developer education, skills and hiring practices. A total of 39,441 professional and student developers completed the online survey from October 16 to November 1, 2017. The survey was hosted by SurveyMonkey and HackerRank recruited respondents via email from their community of 3.2 million members and through social media sites.

I would like to see the professional and student responses shown separately; the world of work and the world of learning are different. This statement may also be incomplete, since several of the questions analyse what employers want, which suggests another source of data (not difficult to find for a recruitment company).

It is still a good read. It is notable, for example, that the youngest generation is learning to code later in life than those who are now over 35.

I am not sure how to interpret these figures, but can think of some factors. One is that the amount of stuff you can do with a computer without coding has risen. In the earliest days when computing became affordable for anyone (late seventies/early eighties), you could not do much without coding. This was the era of type-in listings for kids wanting to play games. That soon changed, but coding remained important to getting things done if you wanted to make a business database useful, or create a website. Today though you can do all kinds of business, leisure and internet computing without needing to see code, so the incentive to learn is lower. It has become a more specialist skill. It remains valuable though, so older people have reason to be grateful.

How do people learn to code? The most popular resource is Stack Overflow, followed by YouTube, with books coming in third. In truth the most popular resource must be Google search. Credit to Stack Overflow though: like Wikipedia, it offers a good browsing experience at a time when the web has become increasingly unpleasant to use, infected by pop-up surveys, autoplay videos and intrusive advertising, not to mention the actual malware out there.

No surprises in language popularity, though oddly the survey does not tell us directly which languages are most used or best known by the respondents. The most in-demand languages are apparently:

1. JavaScript
2. Java
3. Python
4. C++
5. C
6. C#
7. PHP
8. Ruby
9. Go
10. Swift

If you ask what languages developers plan to learn next, Go, Python and Scala head the list. And then there is a fascinating chart showing which languages developers prefer grouped by age. Swift, apparently, is loved by 75% of those over 55, but only by 15% of those under 25, the opposite of what I would expect (though I don’t know if this is a percentage of those who use the language, or includes those who do not know it at all).

Frameworks are another notable topic. Everyone loves Node.js; but two of the frameworks on offer are “.NET Core” and “ASP”. This is odd, since .NET Core is not really a framework, and ASP normally refers to the ancient Active Server Pages framework, which nobody uses any longer; and ASP.NET runs on .NET Core, so it is not an alternative to it.

This may be a clue that the HackerRank company or community is not well attuned to the Microsoft platform. That itself is of interest, but makes me question the validity of the survey results in that area.

C# and .NET: good news and bad as Python rises

Two pieces of .NET news recently:

Microsoft has published a .NET Core 2.1 roadmap and says:

We intend to start shipping .NET Core 2.1 previews on a monthly basis starting this month, leading to a final release in the first half of 2018.

.NET Core is the cross-platform, open source implementation of the .NET Framework. It provides a future for C# and .NET even if Windows declines.

Then again, Stack Overflow has just published a report on the most sought-after programming languages in the UK and Ireland, based on the tags on job advertisements on its site. C# has declined to fourth place, now below Python, and with half the demand of JavaScript.

To be fair, this is more about increased demand for Python, probably driven by interest in AI, than about a decline in C#. If you look at traffic on the Stack Overflow site, C# is steady, but Python is growing fast.

The point that interests me though is the extent to which Microsoft can establish .NET Core beyond the Microsoft-platform community. Personally I like C# and would like to see it have a strong future.

There is plenty of goodness in .NET Core. Performance seems to be better in many cases, and cross-platform support is a big advantage.

That said, there is plenty of confusion too. Microsoft has three major implementations of .NET: the .NET Framework for Windows, Xamarin/Mono for cross-platform, and .NET Core for, umm, cross-platform. If you want cross-platform ASP.NET you will use .NET Core. If you want cross-platform Windows/iOS/macOS/Android, then it’s Xamarin/Mono.

The official line is that by targeting a specification (a version of .NET Standard), you can get cross-platform irrespective of the implementation. It’s still rather opaque:

The specification is not singular, but an incrementally growing and linearly versioned set of APIs. The first version of the standard establishes a baseline set of APIs. Subsequent versions add APIs and inherit APIs defined by previous versions. There is no established provision for removing APIs from the standard.

.NET Standard is not specific to any one .NET implementation, nor does it match the versioning scheme of any of those runtimes.

APIs added to any of the implementations (such as, .NET Framework, .NET Core and Mono) can be considered as candidates to add to the specification, particularly if they are thought to be fundamental in nature.
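
In practical terms, a library opts in simply by targeting the standard in its project file; a minimal .NET Standard 2.0 library project, consumable from .NET Framework, .NET Core or Mono/Xamarin, looks like this:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>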

Microsoft also says that plenty of code is shared between the various implementations. True, but it still strikes me that having both Xamarin/Mono and .NET Core is one cross-platform implementation too many.

Which .NET framework for Windows: UWP, WPF or Windows Forms?

Yes, mobile is the future of client applications, cross-platform is cool, web applications are amazing; but out there in the real world, there are still a ton of people who work all day with a Windows PC, and businesses that want PC applications in order to get their work done.

So when a business comes to you and says, we want a new Windows application to do this or that, and presuming they do not care about mobile or Macs or access over the internet but just want something that runs on their internal network, what framework do you choose?

Let us even assume that they all run Windows 10 so that UWP (Universal Windows Platform) is a realistic option.

If you want to code in .NET (which is a great choice for a Windows-only application, and with the possibility of migrating code to cross-platform via Xamarin’s compiler later), then you have three obvious choices:

Windows Forms

This is the framework for Windows desktop applications that was introduced at the same time as .NET itself, back in 2002. Of course it has been revised many times since; there was a big update in 2005 with .NET 2.0. That said, Microsoft intended it to be replaced by Windows Presentation Foundation (WPF, see below), so it has not been a focus of attention. In 2014, High DPI support was improved in .NET 4.5.2, reflecting the fact that this ancient framework is still widely used.

Windows Forms is a nice wrapper around the Windows API, and easy to use in that it is based on a simple X,Y layout. In other words, you can think of your form as a grid of pixels, with the position of each control determined at design time by its size and coordinates. This is great if you are designing and running on the same PC, but not so good when you deploy to other PCs with different display settings. It does kind-of scale if you follow certain rules, but successful scaling in a Windows Forms application is often difficult to achieve, so users may suffer chopped-off controls and text, or just ugly screens. Read this carefully if you use Windows Forms. And then read about High DPI support, which was improved again in .NET Framework 4.7.
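
A minimal sketch of the X,Y style, together with two of the scaling aids (autoscaling and anchoring); the names, sizes and coordinates are arbitrary:

using System.Drawing;
using System.Windows.Forms;

public class OrderForm : Form
{
    public OrderForm()
    {
        // Font-based autoscaling is one of the rules that helps a form
        // adapt to different DPI and font settings.
        AutoScaleMode = AutoScaleMode.Font;
        ClientSize = new Size(400, 200);

        var okButton = new Button
        {
            Text = "OK",
            // A fixed pixel position chosen at design time: classic X,Y layout
            Location = new Point(300, 160),
            // Anchoring keeps the button at the bottom right when resized
            Anchor = AnchorStyles.Bottom | AnchorStyles.Right
        };
        Controls.Add(okButton);
    }
}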

If you are writing a database application, you can generate datasets by drag and drop from the Server Explorer in Visual Studio and bind them to controls. I am not a fan of this database framework, which quickly gets convoluted, but you do not have to use it. However the ability to bind list and grid controls to any kind of .NET collection is fantastically useful.
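
For instance, binding a grid to a generic collection takes only a couple of lines; a sketch, with Customer as a hypothetical class:

using System.Collections.Generic;
using System.ComponentModel;
using System.Windows.Forms;

public class Customer
{
    public string Name { get; set; }
    public string City { get; set; }
}

public static class CustomerGrid
{
    public static DataGridView Create(IList<Customer> customers)
    {
        var grid = new DataGridView { Dock = DockStyle.Fill };
        // BindingList raises list-changed events, so rows added or removed
        // in code show up in the grid automatically.
        grid.DataSource = new BindingList<Customer>(customers);
        return grid;
    }
}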

Why is Windows Forms still in use? It is partly legacy and the fact that it is easier to maintain and enhance an existing application than to start again. It is also because, scaling issues aside, Windows Forms is reliable, well supported by both built-in and third-party controls, and easy to learn.

Windows Presentation Foundation

This was Microsoft’s second go at a GUI framework for .NET and in many respects a great improvement. It was introduced with .NET Framework 3.0 in 2006, part of the Vista wave of technology. Unlike Windows Forms, it is based on the DirectX graphics API, so great for multimedia and special effects. Scaling is built in and based on layout managers. The underlying presentation language is XAML, an XML-based language. As with Windows Forms, there is deep support for binding data to controls.
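
To see why scaling comes for free, here is a tiny illustrative window: the star-sized row grows and shrinks with the window, and no pixel coordinates are involved (the bound Orders property is hypothetical):

<Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="Orders" Width="500" Height="350">
  <Grid>
    <Grid.RowDefinitions>
      <RowDefinition Height="Auto"/> <!-- sized to its content -->
      <RowDefinition Height="*"/>    <!-- takes all remaining space -->
    </Grid.RowDefinitions>
    <TextBlock Grid.Row="0" Text="Orders" Margin="8" FontSize="20"/>
    <DataGrid Grid.Row="1" Margin="8" ItemsSource="{Binding Orders}"/>
  </Grid>
</Window>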

Why would you not always use WPF rather than Windows Forms? The main issue is that the time you save on figuring out scaling is more than consumed by the time you spend on design. WPF is a designer-centric framework. It will repay your efforts, but if you just want to slap a couple of grids and a few buttons on a form to get a working business application, Windows Forms remains tempting.

Universal Windows Platform

Both Windows Forms and WPF are old, and Microsoft is pointing developers towards its Universal Windows Platform (UWP) instead. UWP is an evolution of the new application platform introduced in Windows 8 in 2012. If WPF was all about scaling and multimedia, the Windows 8 modern app platform is about touch support and Store-based deployment. The application model was also service based, the idea being that your app consumes services published over the internet. Until the Windows 10 Fall Creators Update, you could not use the .NET SQLClient to connect directly to a SQL Server database (you can now). The app platform became UWP with the launch of Windows 10 in 2015. UWP can use XAML for layout design, but it is not compatible with WPF.

Personally I have mixed feelings about UWP. Unfortunately it has suffered from Microsoft’s ever-changing development strategy. The Windows 8 app platform made sense to me as a way of bringing Windows into the tablet era and enabling applications that were more secure and more easily deployed, even if it tended to result in applications that were blocky and ugly. Microsoft then changed its mind about full-screen touch applications and came up with the UWP for Windows 10, where applications again run in a window, but with a new selling point: you could run your application on Windows Phone as well as desktop. Then the company canned Windows Phone, before UWP had properly launched, in effect deleting the “Universal” part of the platform.

UWP still offers Store delivery and isolation from other applications, better for security and stability. However there are a few things against it. First, users require Windows 10. Second, like WPF it is a designer-centric platform and not so good for running up quick business applications. Third, UWP apps behave differently from standard desktop applications, sometimes not in a good way.

I was using Microsoft’s bundled Photos application recently. I work a lot with images so this often pops up, as the default image viewer on Windows 10. I was not stressing it, but it crashed, which, as is typical for a UWP app, means it just disappeared without any message or warning.

UWP will be three years old this summer, but I am not convinced that the platform is quite there yet. I find it hard to think of UWP apps that I love. The apps I know best are the built-in ones, Mail, Photos, Groove Music, Calculator, and I do not love any of them. Paint 3D is amazing but not my thing.

At the same time I do see the merits of UWP versus traditional Windows application deployment. The existence of the Desktop Bridge (formerly Project Centennial) means you can get many of those benefits while still using WPF or Windows Forms.

Closing thoughts

Perhaps something like Power Apps will render this discussion irrelevant before long. There are also other options for the desktop, such as Xamarin Forms if you still want to use .NET, or Electron for using web technologies for desktop applications.

Still, while it may seem surprising, even in 2018 I can think of reasons why you might use any of the above frameworks, even Windows Forms, for a business app targeting Windows.

Microsoft updates the .NET stack with .NET Core 2.0 and updated Visual Studio. Should you use it?

Microsoft has released .NET Core 2.0, a major update to its open source, cross-platform version of the .NET runtime and C# language.

New features include implementation of .NET Standard 2.0 (a way of targeting code to run under multiple .NET platforms), new platform support including Debian Stretch, macOS High Sierra and Suse Linux Enterprise Server 12 SP2. There is preview support for both Linux and Windows on ARM32.

.NET Core 2.0 now supports Visual Basic as well as C# and F#. The version of C# has been bumped to 7.1, including async Main method support, inferred tuple names and default expressions.
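
For a flavour of the C# 7.1 additions (note that the language version must be enabled, for example with <LangVersion>7.1</LangVersion> in the project file):

using System;
using System.Threading.Tasks;

class Program
{
    // async Main is new in C# 7.1
    static async Task Main()
    {
        int timeout = default;       // default literal expression
        var host = "localhost";
        var port = 8080;
        var endpoint = (host, port); // tuple element names are inferred
        Console.WriteLine($"{endpoint.host}:{endpoint.port} timeout={timeout}");
        await Task.Delay(100);
    }
}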

Microsoft has also released Visual Studio 2017 15.3, which is required if you want to use .NET Core 2.0. New Visual Studio features include Azure Stack support, C# 7.1 support, .NET Framework 4.7 support, and other new features and fixes.

I updated Visual Studio and downloaded the new .NET Core 2.0 SDK and was soon up and running.

The installer notes that “This product collects usage data”, of which more below.

The sample ASP.NET MVC application worked first time.

How is .NET Core doing? The whole .NET picture is desperately confusing, and I get the impression that most .NET developers, while they may have paid some attention to what is happening, have concluded that the safe path is to continue with the Windows-only .NET Framework.

At the same time, .NET Core is strategically important to Microsoft. Cross-platform support means that C# has a life on the Mac and on Linux, which is vital to its health considering the popularity of the Mac amongst developers, and of Linux as a deployment platform for web applications. Visual Studio for Mac has also been updated and supports .NET Core 2.0 in the new version.

Another key piece is the container trend. .NET Core is ideal for container deployment, and the only version of .NET supported in Windows Nano Server. If you want to embrace microservices running in containers, while still developing with C#, .NET Core and Nano Server is the optimum solution.

Why not use .NET Core, especially since it is faster? In these comparisons, .NET Core comes out as substantially faster than the .NET Framework for various algorithms – 600 times faster in one case.

The main issue is compatibility. .NET Core is a subset of the .NET Framework, and being a relative newcomer, it lacks the same level of third-party support.

Another factor is that there is no support for desktop applications, though some solutions have been devised. Microsoft does have a cross-platform GUI story, in Xamarin Forms, which is now in preview for macOS alongside iOS, Android, Windows and Tizen. If Xamarin used .NET Core that would be a great solution, but it does not (though it does support .NET Standard 2.0).

One of the pieces that most concerns developers is data access. If you use .NET Core you are strongly guided towards Entity Framework Core, a fork of Microsoft’s ORM (Object-Relational Mapping) framework. Someone asked on this page, is EF Core usable? Here’s an answer from one user (11 days ago):

Answering 4 months later but people should know: Definitely not, it is still not usable unless you are doing something very trivial and/or have very small DB.
I don’t understand how it is possible for MS to ship it, act like it’s OK and sparsely here and there provide shallow information about its limitations like in this article without warning clearly and explicitly about the serious issues this “v1 product” has.

Someone may jump in and say no, it is fine; but there are undoubtedly missing pieces and I would suggest caution.

You can also access data using the Connection/Command/DataReader approach which avoids EF, and although this is more work, this is what I would be inclined to do personally since you get the best performance and flexibility. Here is an example for SQL Server.
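
A minimal sketch of that approach for SQL Server (the connection string and schema are illustrative):

using System;
using System.Data.SqlClient;

class CustomerQuery
{
    static void Main()
    {
        const string connectionString =
            "Server=.;Database=Shop;Trusted_Connection=True;";
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT Id, Name FROM Customers", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Read columns by ordinal with typed getters
                    Console.WriteLine($"{reader.GetInt32(0)}: {reader.GetString(1)}");
                }
            }
        }
    }
}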

Who is using .NET Core? Controversially, Microsoft gathers telemetry from your use of the command-line tools, though you can opt out by setting an environment variable (DOTNET_CLI_TELEMETRY_OPTOUT=1). This means we have some data on .NET Core usage, though unfortunately it excludes Visual Studio usage. I downloaded the most recent dataset and imported it into a database. Here are the figures for OS family:

Total rows: 5,036,981
Windows: 3,841,922 (76.27%)
Linux: 887,750 (17.62%)
Mac: 307,309 (6.10%)

Given that this excludes Visual Studio users, who are also on Windows, we can conclude that the great majority of .NET Core developers use Windows, and only a tiny minority Mac (I do not know if Visual Studio for Mac usage is included). This is evidence that .NET Core has so far failed in its goal of persuading Mac-using developers to adopt .NET. It does show interest in deploying .NET applications to Linux, which is an obvious win in licensing costs as well as performance.

I would be interested in comments from developers on whether or not they use .NET Core and why.

QCon London 2017: IoT insecurity, serverless computing, predicting technical debt, and why .NET Core depends on a 36,000 line C++ file

I’m at the QCon event in London, a multi-vendor conference aimed primarily at enterprise developers and architects.

Adam Tornhill speaks at QCon London 2017

A few notes on day one. Alasdair Allan gave a keynote on security and the internet of things; it was an entertaining and disturbing résumé of all that is wrong with the mad rush to connect everything to the internet, though short on answers. Our culture has to change so that organisations such as hotels, toy manufacturers, appliance vendors and even makers of medical equipment take security seriously, but it is not clear how this will come about unless so many bad things happen that customers start to insist on it.

Michael Feathers spoke on strategic code deletion, part of a track on “Dark code: the legacy/tech debt dilemma.” This was an excellent session; code is added to projects more often than it is removed, and lack of hygiene in this regard has risks including security, reliability and performance. But discovering which code is safe to remove is not always trivial, and Feathers explored some of the nuances and suggested some techniques.

Steve Faulkner gave a session on serverless JavaScript, or more specifically, using Amazon Web Services (AWS) Lambda and API Gateway. Faulkner said that the API Gateway was the piece that made Lambda viable for them; he is Director of Platform Engineering at Bustle, a busy content site based in the USA. In a nutshell, moving from EC2 VMs to Lambda has yielded both financial savings and easier management. The only downside is performance; each call to a Lambda function takes a minimum of 100ms, whereas the same function on a VM might take 20ms. In the end it is not critical, as performance remains satisfactory.

Faulkner said that AWS is ahead of its competitors (Microsoft, Google and IBM were mentioned) but when pressed said that both Microsoft and Google offered strong alternatives. Microsoft’s Azure Functions are spoilt by the need to specify a maximum scale, rather than scaling automatically, but its routing solution is in some ways ahead of AWS, he said. Google’s Functions will be great when out of beta.

Adam Tornhill spoke on A Crystal Ball to prioritise Technical Debt, another session in the dark code track. This was my favourite of the day. Tornhill presented a relatively simple way to discover what code you should refactor now in order to avoid future issues. His method is based on looking for files with many lines of code (a way of measuring complexity) and many commits (suggesting high importance and activity), the “hotspots” in your projects. For more detail and some utilities see Tornhill’s blog.

Why do we end up with bad or risky code in our software? Tornhill said that developers often mistake organisational problems for technical problems and try unsuccessfully to fix them with tools.

He also mentioned an example of high-risk code, the file gc.cpp which performs garbage collection in .NET Core, the next generation of Microsoft’s .NET Framework. This file is over 36,000 lines and should be refactored. There is a discussion on the subject here. It exactly bears out Tornhill’s point. A developer proposes to refactor the file, back in March 2015. Microsoft’s Karel Zikmund defends the status quo:

Why it is this way? … Partly historical reasons (it is this way since the start). Partly because devs working on it didn’t feel the urge to refactor it. Partly because splitting of gc.cpp is non-trivial and risky and because it does not bring too big value (ramp up in the code base can be gained also in the combination of reading BOTR and debugging the code). Why it is staying this way? … Cost/benefit/risk ratio is IMO not in favor of a change here.

Few additional thoughts:
Am I happy that there is only 1 large file? No, but it doesn’t hurt me much either.
Do I see the disadvantages of large file? Yes, but I don’t think they are huge. More like minor annoyances with easy workarounds.
And to turn it around: Do you see the risk of any changes here? Do you see the cost of extra careful code reviews to mitigate the risk?

Strictly technically, we truly believe this is a formatting change. If it was simple to split it up and if it would be low risk and if it would be very easy to review, it might be worth the ‘minor’ improvements mentioned above … but I don’t see that combo happening (not on a noticeable scale in gc.cpp).
On a personal note: I also trust CLR team that if all these three things were true, the refactoring would have happened long time ago.

Note that some of this code goes back beyond .NET Core to the .NET Framework, the “historical reasons” that Zikmund mentions. We can see that the factors preventing change are as much organisational as technical.

Finally I attended a session on Microsoft’s Cognitive Services. Note this was in the “Sponsored solution track”. Microsoft also has a stand here focused on its Cognitive Services.

There is not much Microsoft Platform content at QCon and it seems under-represented, though many of the sessions are applicable to developers on any platform. I am not sure of all the reasons for this; there used to be an Advanced .NET track at QCon. It does reflect some overall development trends as well as the history and evolution of QCon itself. That said, there is a session on SQL Server on Linux so the company is not completely invisible here.

As for the session, it was a reasonable overview of Microsoft’s expanding Cognitive Services APIs, which covers things like image recognition, speech recognition and more. I would have liked more depth and would have preferred to hear from a practitioner, in other words, “we built an application on Cognitive Services and this is what we learned.” I am not altogether clear why the company is pushing this so hard, except that it is a driver for developers to use Azure. I asked about how developers should deal with the problem of uncertainty*, in other words, that Cognitive Services does not deliver absolute results but rather draws conclusions with a confidence score – eg it might be pretty sure that an image contains a human face, fairly sure that it is male, and somewhat confident that the age of the person is mid forties. When the speaker demoed speech recognition it went pretty well except that “Start” was transcribed as “Stop.” This stuff is difficult.

Looking forward now to Day Two: Containers, Machine Learning, and more.

*More concisely expressed as “Systems are moving from the deterministic to the probabilistic” by Stephen Whitworth, who is now speaking on Machine Learning.

Microsoft to release Visual Studio for the Mac – except it is not

Microsoft’s Mikayla Hutchinson (ex Xamarin) has announced Visual Studio for the Mac:

This is an exciting development, evolving the mobile-centric Xamarin Studio IDE into a true mobile-first, cloud-first development tool for .NET and C#, and bringing the Visual Studio development experience to the Mac.

I tend to agree that it is a significant piece of news. It signals Microsoft’s intent to offer first-class support for Mac developers. Other than at Microsoft events, the majority of the developers I see at conferences carry Macs rather than Windows laptops, and if the company is to have any hope of winning them over to its cross-platform ASP.NET web application framework, getting excellent development support on Macs is a critical step.

Naming things is not Microsoft’s greatest strength though. Sometimes it gives different things the same name, such as with OneDrive and OneDrive for Business, or Outlook for Windows and Outlook for iOS and Android. It makes sense from a marketing perspective, but it is also confusing.

This is another example. No, Microsoft has not ported Visual Studio to the Mac. This is a rebrand of Xamarin Studio, originally a cross-platform IDE for its C# mobile app framework, but more recently Mac-only.

Hutchinson makes the best of it:

Its UX is inspired by Visual Studio, yet designed to look and feel like a native citizen of macOS …. Below the surface, Visual Studio for Mac also has a lot in common with its siblings in the Visual Studio family. Its IntelliSense and refactoring use the Roslyn Compiler Platform; its project system and build engine use MSBuild; and its source editor supports TextMate bundles. It uses the same debugger engines for Xamarin and .NET Core apps, and the same designers for Xamarin.iOS and Xamarin.Android.

The common use of MSBuild is a key point. “Although it’s a new product and doesn’t support all of the Visual Studio project types, for those it does have in common it uses the same MSBuild solution and project format. If you have team members on macOS and Windows, or switch between the two OSes yourself, you can seamlessly share your projects across platforms,” says Hutchinson.

The origins of what will now be Visual Studio for the Mac actually go back to the early days of the .NET Framework. Developer Mike Kruger decided to write an IDE in C# in order to work more easily with a pre-release of .NET Framework 1.0. His IDE was called SharpDevelop, and early versions date from 2001.

Of course by then most developers used Visual Studio to work with C#, but there were several reasons why SharpDevelop continued to have a following. Unlike Visual Studio, it was built in C# and you could get all the code. It was free. It was also of interest to Mono users, Mono being the open source implementation of the .NET Framework originated by Miguel de Icaza (also now at Microsoft). In 2003, Mono developers started work on porting SharpDevelop to run on Linux using the GNOME toolkit (Gtk#). This forked project became MonoDevelop.

Xamarin (the framework) of course has its roots in Mono and when Xamarin (the company) decided to create its own IDE it based it on MonoDevelop. So MonoDevelop evolved into Xamarin Studio.

Incidentally, SharpDevelop is still available and you can get it here. MonoDevelop is still available and you can get it here.

So now some sort of circle is complete and what began as SharpDevelop, a rebel imitation of Visual Studio, will now be an official Microsoft product called Visual Studio for the Mac – though how much SharpDevelop code remains (if any) is another matter.

Historical digression aside, the differences between Visual Studio and Visual Studio for the Mac are not the only point of confusion. There is also Visual Studio Code, an editor with some IDE features, which is cross-platform on Windows, Mac and Linux. This one is built on the Electron shell, which uses the Google-sponsored Chromium project, and it has won quite a few friends.

Should Mac users now use Visual Studio Code, or Visual Studio for the Mac, for their .NET Core or ASP.NET Core development? Microsoft will say “your choice” but it is a good question. The key here is which project will now get more attention from both Microsoft and other open source contributors.

Still, we should not complain. Two rival Microsoft IDEs for the Mac are a considerable advance on none, which was the answer until Visual Studio Code went into preview in April 2015.