Category Archives: software development


Instant applications considered harmful?

Adrian Colyer, formerly of SpringSource, VMware, and Pivotal, is running an excellent blog where he looks at recent technical papers. A few days ago he covered The Rise of the Citizen Developer – assessing the security impact of online app generators. This was about online app generators for Android, things like Andromo which let you create an app with a few clicks. Of course the scope of such apps is rather limited, but they have appeal as a quick way to get something into the Play Store that will promote your brand, broadcast your blog, convert your website into an app, or help customers find your office.

It turns out that there are a few problems with these app generators. Andromo is one of the better ones. Some of them just download a big generic application with a configuration file that customises it to your requirements. Often this configuration is loaded from the internet, in some cases over HTTP with no encryption. API keys used for interaction with other services such as Twitter and Google can easily leak. They do not conform to Android security best practices and request more permissions than are needed.

Low-code and no-code development is not confined to Android. Appian promises “enterprise-grade” apps via its platform. Microsoft PowerApps claims to “solve business problems with intuitive visual tools that don’t require code.” It is an idea that will not go away: an easy-to-use visual environment that will enable any business person to build productive applications.

Some are better than others; but there are inherent problems with all these kinds of tools. Three big issues come to mind:

  1. Bloat. You only require a subset of what the application generator can do, but by trying to be universal there is a mass of code that comes along with it, which you do not require but someone else may. This inevitably impacts performance, and not in a good way.
  2. Brick walls. Everything is going well until you require some feature that the platform does not support. What now? Often the only solution is to trash it and start again with a more flexible tool.
  3. Black box. Your app mostly works, but for some reason in certain cases it gives the wrong result. Lack of visibility into what is happening behind the scenes makes problems like this hard to fix.

It is possible to imagine an ideal tool that overcomes these issues: one that generates human-understandable code and lets you go beyond the limitations of the generator by exporting and editing the project in a full programming environment. Most of the tools I have seen do not allow this; and even if they do, it is still hard for the generator to avoid producing a ton of code that you do not really need.

The more I have seen of different kinds of custom applications, the more I appreciate projects with nicely commented textual code that you can trace through and understand.

The possibility of near-instant applications has huge appeal, but beware the hidden costs.

Inside Azure Cosmos DB: Microsoft’s preferred database manager for its own high-scale applications

At Microsoft’s Build event in May this year I interviewed Dharma Shukla, Technical Fellow for the Azure Data group, about Cosmos DB. I enjoyed the interview but have not made use of the material until now, so even though Build was some time back I wanted to share some of his remarks.

Cosmos DB is Microsoft’s cloud-hosted NoSQL database. It began life as DocumentDB, and was re-launched as Cosmos DB at Build 2017. There are several things I did not appreciate at the time. One was how much use Microsoft itself makes of Cosmos DB, including for Azure Active Directory, the identity provider behind Office 365. Another was how low Cosmos DB sits in the overall Azure cloud system. It is a foundational piece, as Shukla explains below.


There were several Cosmos DB announcements at Build. What’s new?

“Multi-master is one of the capabilities that we announced yesterday. It allows developers to scale writes all around the world. Until yesterday Cosmos DB allowed you to scale writes in a single region but reads all around the world. Now we allow developers to scale reads and writes homogeneously all round the world. This is a huge deal for apps like IoT, connected cars, sensors, wearables. The amount of writes are far more than the amount of reads.

“The second thing is that now you get single-digit millisecond write latencies at the 99 percentile not just in one region.

“And the third piece is what falls out of this: high availability. The window of failover, the time it takes to fail over from one region to the other when a disaster happens, has shrunk significantly.

“It’s the only system I know of that has married the high consistency models that we have exposed with multi-master capability as well. It had to reach a certain level of maturity, testing it with first-party Microsoft applications at scale and then with a select set of external customers. That’s why it took us a long time.

“We also announced the ability to have your Cosmos DB database in your own VNet (virtual network). It’s a huge deal for enterprises where they want to make sure that no data leaks out of that VNet. To do it for a globally distributed database is especially hard because you have to close all the transitive networking dependencies.”
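As an aside (not something from the interview), here is a rough sketch of how a .NET client of that era opts in to multi-region writes, using the 2.x DocumentDB SDK; the account URI, key, database, collection and region are placeholders, and the settings assume multi-master is enabled on the Cosmos DB account.

using System;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

class MultiMasterSketch
{
    static void Main()
    {
        // Opt in to multi-region writes and state a preferred region (placeholder values throughout).
        var policy = new ConnectionPolicy { UseMultipleWriteLocations = true };
        policy.PreferredLocations.Add(LocationNames.WestEurope);

        var client = new DocumentClient(
            new Uri("https://myaccount.documents.azure.com:443/"),
            "<primary-key>",
            policy);

        // Writes now go to the nearest write region rather than a single primary.
        client.CreateDocumentAsync(
            UriFactory.CreateDocumentCollectionUri("telemetry", "readings"),
            new { id = Guid.NewGuid().ToString(), deviceId = "sensor-42", reading = 20.5 }).Wait();
    }
}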

[Photo: Technical Fellow Dharma Shukla]

Does Cosmos DB work on Azure Stack?

“We are in the process of going to Azure Stack. Azure Stack is one of the top customer asks. A lot of customers want a hybrid Cosmos DB on Azure Stack as well as in Azure and then have Active – Active. One of the design considerations for multi master is for edge devices. Right now Azure has about 50 regions. Azure’s going to expand to let’s say 200 regions. So a customer’s single Cosmos DB table spanning all these regions is one level of scalability. But the architecture is such that if you directly attach lots of Azure Stack devices, or you have sensors and edge devices, they can also pretend to be replicas. They can also pretend to be an Azure region. So you can attach billions of endpoints to your table. Some of those endpoints could be Azure regions, some of them could be instances of Azure Stack, or IoT hub, or edge devices. This kind of scalability is core to the system.”

Have customers asked for any additional APIs into Cosmos DB?

“There is a list of APIs: HBase, richer SQL, there are a number of such API requests. The good news is that the system has been built in a way that adding new APIs is a relatively easy addition. So depending on the demand we continue to add APIs.”

Can you tell me anything about how you’ve implemented Cosmos DB? I know you use Service Fabric. Do you use other Azure services?

“We have dedicated clusters of compute machines. Cosmos DB is a Ring 0 service, so any time Azure opens a new region, Cosmos DB clusters are provisioned by default. Just like compute and storage, Cosmos DB is one of the Ring 0 services, which is the bottommost layer. Azure Active Directory for example depends on Cosmos DB. So Cosmos DB cannot take a dependency on Active Directory.

“The dependency that we have is our own clusters and machines, on which we put Service Fabric. For deployment of Cosmos DB code itself, we use Service Fabric. For some of the load balancing aspects we use Service Fabric. The partition management, global distribution, replication, is our own. So Cosmos DB is layered on top of Service Fabric, it is a Service Fabric application. But then it takes over. Once the Cosmos DB bits are laid out on the machine then its replication and partition management and distribution pieces take over. So that is the layering.

“Other than that there is no dependency on Azure. And that is why one of the salient aspects of this is that you can take the system and host it easily in places like Azure Stack. The dependencies are very small.

“We don’t use Azure Storage because of that dependency. So we store the data locally and then replicate it. And all of that data is also encrypted at rest.”

So when you say it is not currently in Azure Stack, it’s there underneath, but you haven’t surfaced it?

“It is in a defunct mode. We have to do a lot of work to light it up. When we light it up on such on-prem or private cloud devices, we want to enable this active-active pathway. So you are replicating your data and that is getting synchronized with the cloud, and Azure Stack is one of the sockets.”

Microsoft itself is using Cosmos DB. How far back does this go? Azure AD is quite old now. Was it always on Cosmos DB / DocumentDB?

“Over the years Office 365, Xbox, Skype, Bing, and more and more of Azure services, have started moving. Now it has almost become ubiquitous. Because it’s at the bottom of the stack, taking a dependency on it is very easy.

“Azure Active Directory consists of a set of microservices. So they progressively have moved to Cosmos DB. Same situation with Dynamics, and our slew of such applications. Skype is by and large on Cosmos DB now. There are still some fragments of the past.  Xbox and the Microsoft Store and others are running on it.”

Do you think your customers are good at making the right choices over which database technology to use? I do pick up some uncertainty about this.

“We are working on making sure that we provide that clarity. Postgres and MySQL and MariaDB and SQL Server, Azure SQL and elastic pools, managed instances, there is a whole slew of relational offerings. Then we have Cosmos DB and then lots of analytical offerings as well.

“If you are a relational app, and if you are using a relational database, and you are migrating from on-prem to Azure, then we recommend the relational family. It comes with this fundamental scale caveat, which is that it goes up to 4TB. Most of those customers are settled because they have designed the app around those sorts of scalability limitations.

“A subset of those customers, and a whole bunch of brand new customers, are willing to re-write the app. They know that they want to come to the cloud for scale. So then we pitch Cosmos DB.

“Then there are customers who want to do massive scale offline analytical processing. So there is, Databricks, Spark, HD Insight, and that set of services.

“We realise there are grey lines between these offerings. We’re tightening up the guidance, it’s valid feedback.”

Any numbers to flesh out the idea that this is a fast-growing service for Microsoft?

“I can tell you that the number of new clusters we provision every week is far more than the total number of clusters we had in the first month. The growth is staggering.”

Is Ron Jeffries right about the shortcomings of Agile?

A post from InfoQ alerted me to this post by Agile Manifesto signatory Ron Jeffries with the rather extreme title “Developers should abandon Agile”.

If you read the post, you discover that what Jeffries really objects to is the assimilation of Agile methodology into the old order of enterprise software development, complete with expensive consultancy, expensive software that claims to manage Agile for you, and the usual top-down management.

All this goes to show that it is possible to do Agile badly; or more precisely, to adopt something that you call Agile but in reality is not. Jeffries concludes:

Other than perhaps a self-chosen orientation to the ideas of Extreme Programming — as an idea space rather than a method — I really am coming to think that software developers of all stripes should have no adherence to any “Agile” method of any kind. As those methods manifest on the ground, they are far too commonly the enemy of good software development rather than its friend.

However, the values and principles of the Manifesto for Agile Software Development still offer the best way I know to build software, and based on my long and varied experience, I’d follow those values and principles no matter what method the larger organization used.

I enjoyed a discussion on the subject of Agile with some of the editors and writers at InfoQ during the last London QCon event. Why is it, I asked, that Agile is no longer at the forefront of QCon, when a few years back it was at the heart of these events?

The answer, broadly, was that the key concepts behind Agile are now taken for granted so that there are more interesting things to discuss.

While this makes sense, it is also true (as Jeffries observes) that large organizations will tend to absorb these ideas in name only, and continue with dark methods if that is in their culture.

The core ideas in Extreme Programming are (it seems to me) sound. Working in small chunks, forming a team that includes the customer, releasing frequently and delivering tangible benefits, automated tests and continuous refactoring, planning future releases as you go rather than in one all-encompassing plan at the beginning of a project; these are fantastic principles and revolutionary when you first come across them. See here for Jeffries’ account of what Extreme Programming is.

These ideas have everything to do with how the team works and little to do with specific tools (though it is obvious that things like a test framework, DevOps strategy and so on are needed).

Equally, you can have all the best tools but if the team is not functioning as envisaged, the methodology will fail. This is why software development methodology and the psychology of human relationships are intimately linked.

Real change is hard, and it is easy to slip back into bad practices, which is why we need to rediscover Agile, or something like it, repeatedly. Maybe the Agile word itself is not so helpful now; but the ideas are as strong as ever.

What is happening with desktop development on Windows and will WPF be upgraded at last?

Once upon a time all Windows development was desktop development. Then there was web development, but that was a server thing. Then in October 2012 Windows 8 arrived, and it was all about full-screen, touch control and Store-delivered applications that were sandboxed and safe to run. Underneath this there was a new platform-within-a-platform called the Windows Runtime or WinRT (or sometimes Metro). Developing for Windows became a choice: new WinRT platform, or old-style desktop development, the latter remaining necessary if your application needed more features than were available in WinRT, or to run on Windows 7.

Windows 8 failed and was replaced by Windows 10 (July 2015), in large part a return to the desktop. The Start menu returned, and each application again had a window. WinRT lived on though, now rebranded as UWP (Universal Windows Platform). The big selling point was that your UWP app would run on phones, Xbox and HoloLens as well as PCs. It was still locked down, though less so, and still Store-delivered.

Then Microsoft decided to abandon Windows Phone, a decision obvious to Microsoft-watchers in June 2015 when ex-Nokia CEO Stephen Elop left Microsoft, just before the launch of Windows 10, even though Windows Phone was not formally killed off until much later. UWP now had a rather small u (that is, not very universal).

In addition, Microsoft decided that locking down UWP was not the way forward, and opened up more and more Windows APIs to the platform. The distinction between UWP and desktop applications was further blurred by Project Centennial, now known as Desktop Bridge, which lets you wrap desktop applications for Store delivery.

Perhaps the whole WinRT/UWP thing was not such a good idea. A side-effect though of all the focus on UWP was that the old development frameworks, such as Windows Forms (WinForms) and Windows Presentation Foundation (WPF), received little attention – even though they were more widely used. Some Windows 10 APIs were only available in UWP, while other features only worked in WinForms or WPF, giving developers a difficult decision.

The Build 2018 event, held last week in Seattle, was the moment Microsoft announced that it would endeavour to undo the damage by bringing UWP and desktop development together. “We’ve taken all the UI stacks and merged them together” said Mike Harsh and Scott Hunter in a session on “Modernizing desktop apps” (BRK3501 if you want to look it up).

According to Harsh and Hunter, Windows desktop application development is increasing, despite the decline of the PC (note that this is hardly a neutral source).


So what was actually announced? Here is a quick summary. Note that the announced features are for the most part applicable to future versions of Windows 10. As ever, Build is for the initial announcement. So features are subject to change and will not work yet, other than possibly in pre-release form.

Greater information density in UWP applications. WinRT/UWP was originally designed for touch control, and so has lots of white space. Most Windows users though have mouse and keyboard. The spacious UWP layout looked wrong on big desktop displays, and it made porting applications harder. The standard layout is becoming less spacious, and a new Compact Size, an application setting, will pack more information into the same space.


More controls for UWP. New DataGrid, Forms with data validation, Menu bar, and coming in future, Status bar, tab controls and Ribbon. The idea is to make UWP more suitable for line-of-business applications, which accounts for a large part of Windows application development overall.

New Windowing APIs for UWP. WinRT/UWP was designed for full-screen applications, not the popup-dialogs or floating windows possible in desktop applications. Those capabilities are coming though. We will get tool windows, light-dismiss windows (eg type and press Enter), and multiple windows on one thread so that they work like a single application when minimized or cycled through with alt-tab. Coming in future are topmost windows, modal windows, custom title bars, and maybe even MDI (Multiple Document Interface), though this last seems surprising since it is discouraged even in the desktop frameworks.

What many developers will care about more though is new features coming to desktop applications. There are two big announcements.

.NET Core 3.0 will support WinForms and WPF. This is big news, partly because .NET Core performs better than the Windows-only .NET Framework but, more importantly, because it allows side-by-side deployment of the .NET runtime. Even better, a linker will let you deliver a .NET Core desktop application as a single executable with no dependencies. What performance gain? An example shown at Build was an application using the File APIs that ran nearly three times faster on .NET Core 3.0.
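For a flavour of what single-executable deployment looks like in practice, here is the publish command with the flags as they eventually shipped in the .NET Core 3.x SDK (not something demonstrated at Build, so treat it as illustrative):

dotnet publish -c Release -r win-x64 /p:PublishSingleFile=true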


XAML Islands enabling UWP features in WinForms and WPF. The idea is that you can pop a UWP host control in your WinForms or WPF application, and show UWP content there. Microsoft is also preparing wrapper controls that you can use directly. Mentioned were WebView, MediaPlayer, InkCanvas, InkToolBar, Map and SwapChainPanel (for DirectX content). There will be a few compromises. The XAML host window will be rectangular (based on an HWND) which means non-rectangular and transparent content will not work correctly. There is also the Windows 7 problem: no UWP on Windows 7, so what happens to your XAML Islands? They will not run, though Microsoft is working on a mechanism that lets your application substitute compatible Windows 7 content rather than crashing.
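To give an idea of the shape of this, here is a sketch only: the host control and package are as they later appeared in Windows Community Toolkit previews, and the RootGrid element is assumed to be defined in the window’s XAML.

using System.Windows;
using Microsoft.Toolkit.Wpf.UI.XamlHost;   // NuGet package of the same name

public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();

        // Host a UWP control inside this WPF window via the XAML Islands host control.
        var host = new WindowsXamlHost
        {
            InitialTypeName = "Windows.UI.Xaml.Controls.CalendarView"
        };
        RootGrid.Children.Add(host);
    }
}

The wrapper controls mentioned above (WebView, InkCanvas and so on) work in a similar way, but without needing to name the UWP type yourself.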

MSIX deployment. MSIX is Microsoft’s latest deployment technology. It will work with both UWP and Desktop applications, will support Windows 7 and 10, will provide for auto-updates, and will have tooling built into Visual Studio, as well as a packager for both your own and third-party applications. Applications installed with MSIX are managed and updated by Windows, have tamper protection, and are installed per-user. It seems to build upon the Desktop Bridge concept, the aim being to make Windows more manageable in the Enterprise as well as safer for all users, if Microsoft can get widespread adoption. The packaging format will also work on Android, Mac and Linux and you can check out the SDK here.


Will WPF or WinForms be updated?

The above does not quite answer the question, will WPF or Windows Forms be significantly updated, other than with the ability to use UWP content? I could not get a clear answer on this question at Build, though I was told that adding support for .NET Core 3.0 required significant changes to these frameworks so it is no longer true to say they are frozen. With regard to WPF Microsoft Corporate VP Julia Liuson told me:

“We will be looking at more controls, more capabilities. It is widely recognised that WPF is the best framework for desktop development on Windows. The fact that we’re moving on top of .NET Core 3.0 gives us a path forward.”

That said, I also heard that the team would rather write code once and use it across UWP, WPF and WinForms via XAML Islands, than write new controls for each framework. That makes sense, the difficulty being Windows 7. Microsoft would rather promote migration to Windows 10, than write new UI components that work across both Windows 7 and Windows 10.

A week of QCon: introduction

I attended QCon London last week and found it fascinating, but have not written as much about it as I intended because of various other deadlines. In order to address this I will do a quick daily post for the next week or so.

QCon is a software development conference run by InfoQ. It is vendor-neutral and focuses on large-scale enterprise development as well as future trends, language choices and changes, software architecture and more. If you delve into the history of the event it has championed techniques including Agile development, Service Oriented Architecture, Microservices, and now AI. The event has a culture and an ethos, which is something to do with human-centred software, team communications, taking the side of the user, aversion to unnecessary complexity, and constant exploration of emerging technology.

[Photo: Laura Bell of SafeStack speaks at QCon London on Architecting a Culture of Secure Software.]

QCon, like many other events, encourages attendees to give feedback on sessions they attend. At other events I have often seen forms with several categories and questions like “How well did the speaker know their subject” and “What was your biggest takeaway from this session”? While such questions are reasonable, the problem is that they are too difficult and time-consuming and therefore not many respond, or the responses are of low quality. The QCon organisers decided years ago that the only feedback system that works is to have attendees vote good, indifferent or poor as they leave. This used to be done with coloured paper and is now electronic. I mention this because it says something about the event culture: let’s prefer something that works and is not a burden, despite the seeming crudity of a 1-2-3 scoring system. And of course even such basic information is highly valuable in discerning which sessions were most appreciated.

The event prefers practitioners, engineers and team leads over evangelists, trainers and consultants, and it attracts a particularly able audience.

Of course you can learn plenty outside the actual sessions by chatting to other attendees.

Up next: technical ethics at QCon London.

Setting up PHP for development on Windows Subsystem for Linux in Windows 10

I have been working a little with PHP, for the first time for a while, and soon found it annoying not to have the convenience of instant application testing and line by line debugging. I have set up a PHP development environment before using XAMPP for Windows and Eclipse, but it was fiddly. I also prefer PHP on Linux, which is where my scripts will be running.

Since Windows 10 now has a Linux environment built-in, called Windows Subsystem for Linux (WSL), I decided to set this up to run Apache, PHP and MySQL and to try debugging my scripts there.

My PC is a recent installation and I had not yet installed WSL. To do so, you have to both download a Linux distribution from the Store (I chose Ubuntu), and enable WSL in Windows features. Then restart, launch Ubuntu, set a username and password, and you are up and running.

Note the Linux commands that follow should be run as root using sudo.

Before doing anything else, I got Ubuntu up to date:

apt-get update

apt-get upgrade

Then I installed the LAMP suite:

apt-get install lamp-server^

(the final ^ is intentional; see the guide here).

To check that everything is working, I created the file phpinfo.php in /var/www/html with the following contents:

<?php phpinfo(); ?>

and restarted Apache:

/etc/init.d/apache2 restart

Note: if you have IIS running in Windows, or another web server, Apache will not be able to listen on port 80. Change the port in /etc/apache2/ports.conf and in /etc/apache2/sites-enabled/000-default.conf
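For instance, if you move Apache to port 8080 (an arbitrary choice), the edits would look something like this, and you would then browse to localhost:8080 instead:

# In /etc/apache2/ports.conf
Listen 8080

# In /etc/apache2/sites-enabled/000-default.conf, change the opening line to
<VirtualHost *:8080>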

Then I opened a web browser on the Windows side and browsed to localhost to see the Apache welcome page, and then to the phpinfo.php page to confirm that PHP was working.

We are up and running, but not debugging PHP yet. Remember the basic rules of WSL:

  • you cannot change Linux files from Windows.
  • you can access Windows files from Linux.

We want to edit PHP from Windows, so we’ll define a site that uses Windows files. Windows files are under /mnt/c (or whatever drive letter you are using).

So if, for example, you have your PHP website in a folder called c:\websites\mysite, you can have Apache serve files from that folder.

The quickest way to get up and running is to create a symbolic link in the Apache home directory, in my case /var/www/html. Change to that directory and type:

ln -s /mnt/c/websites/mysite mysite

Now you can view the site at http://localhost/mysite/

This worked first time for me, complete with PHP running. You could also set up multiple virtual hosts in Apache, and use the hosts file in Windows to map other host names to localhost.
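Here is a sketch of that approach; the mysite.local name is invented for illustration. On the Linux side, create and enable a site definition, then reload Apache; on the Windows side, add a line to the hosts file.

# /etc/apache2/sites-available/mysite.conf (enable with: a2ensite mysite)
<VirtualHost *:80>
    ServerName mysite.local
    DocumentRoot /mnt/c/websites/mysite
    <Directory /mnt/c/websites/mysite>
        Require all granted
    </Directory>
</VirtualHost>

# Add to C:\Windows\System32\drivers\etc\hosts on the Windows side (edit as administrator)
127.0.0.1 mysite.local

Then browse to http://mysite.local/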

Next, you probably want PHP to show error messages. To do this, replace the default php.ini with the development version (or tweak it according to your own preferences). At the time of writing, on Ubuntu, the default PHP version is 7.0 and php.ini-development is located at /usr/lib/php/7.0/php.ini-development. So I backed up the ini file at /etc/php/7.0/apache2, replaced it with the development version, and restarted Apache. My PHP form immediately showed me a non-fatal undefined index error, so it worked.
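Spelled out as commands (using the PHP 7.0 paths mentioned above; adjust for your version), the swap looks roughly like this:

cd /etc/php/7.0/apache2
cp php.ini php.ini.bak
cp /usr/lib/php/7.0/php.ini-development php.ini
/etc/init.d/apache2 restart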

There is one small inconvenience. Apache in WSL will only run during the session. So before starting work, you have to open Ubuntu and type:

sudo apache2ctl start

Well, background task support is coming to WSL but I do not regard this as a big problem.

OK, this is cool: we can make changes to the PHP code in our favourite Windows editor, save, and view the results directly in the browser. But what about line-by-line debugging? For this, we are going to use Visual Studio Code with the PHP Debug extension.

Then on the Ubuntu side:

apt-get install php-xdebug

Restart Apache:

apache2ctl restart

Check that phpinfo.php now shows an Xdebug section. Then edit php.ini and add the following:

[XDebug]
xdebug.remote_enable = 1
xdebug.remote_autostart = 1

Restart Apache again and XDebug is ready to go.

Over in Visual Studio Code there is a little more work to do. The problem is that although everything is running on localhost, the location of the files looks different to Linux than to Windows. We can fix this with a pathMappings setting. In Visual Studio Code, open the PHP file you want to debug. Click the Debug icon and then the little gearwheel near top left; this will open launch.json. By default there are a couple of settings for XDebug. These are OK for a default setup, but we need to add path mapping so that the debugger knows where to find the files.
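As an illustration only, reusing the hypothetical c:\websites\mysite folder from earlier: the left-hand side of each mapping is the path to the scripts as the Linux side sees them, and the right-hand side is the folder open in Visual Studio Code. Depending on how the symlink is resolved, you may need the /var/www/html path instead.

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Listen for XDebug",
            "type": "php",
            "request": "launch",
            "port": 9000,
            "pathMappings": {
                "/mnt/c/websites/mysite": "${workspaceRoot}"
            }
        }
    ]
}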

Now you can set a breakpoint, start debugging, and open the page in your browser to hit the breakpoint.

More guidance on the PHP Debug extension by Felix Becker is here.

Final thoughts

This is cool; but is it better or worse than an old-style VM running Linux and PHP? The WSL solution is lightweight and convenient, but unlike a VM it is not isolated, and you may hit issues that are unique to WSL because not everything runs as it would on a full Linux installation. I did happen to suffer crashes in Visual Studio and in Outlook while WSL was running; it may well be coincidence, but I cannot help wondering if WSL might be to blame.

Still, a great feature of WSL is that when you exit your session, it goes away, so it is not too intrusive. I plan to use it for PHP debugging and will see how it goes.

What the Blazor! After Silverlight, .NET in the browser reappears by another route

Silverlight, Microsoft’s browser plug-in which included a cut-down .NET runtime, once seemed full of promise for developers looking for an end-to-end .NET solution, cross-platform on Windows and Mac, and with support for “out of browser” applications for a native-like experience.

Silverlight was killed by various factors, including the industry’s rejection of old-style browser plug-ins, and warring factions at Microsoft which resulted in Silverlight on Windows Phone, but not on Windows 8. The Windows 8 model won, with what became the Universal Windows Platform (UWP) in Windows 10, but this is quite a different thing with no cross-platform support. Or there is Xamarin which is cross-platform .NET, and one day perhaps Microsoft will figure out what to do about having both UWP and Xamarin.

Yesterday though Microsoft announced (though it was already known to those paying attention) Blazor, an experimental project for hosting the .NET Runtime in the browser via WebAssembly. The name derives from “Browser + Razor”, Razor being the syntax used by ASP.NET to combine HTML and C# in a web application. C# in Razor executes on the server, whereas in Blazor it executes on the client.

Blazor is enabled by work the Xamarin team has done to compile the Mono runtime to WebAssembly. Although this sounds like it would mean a relatively large download, the team is hoping that a combination of smart linking (to strip out unnecessary code in both applications and the runtime) with caching and HTTP compression will make this acceptable.

This post by Steve Sanderson is a good technical overview. Some key points:

– you can run applications either as interpreted .NET IL (intermediate language) or pre-compiled

– Blazor is an SPA (Single Page Application) framework with solutions for routing, state management, dependency injection, unit testing and more

– UI components use HTML and CSS

– There will be a browser API which you can call from C# code

– you will be able to interop with JavaScript libraries

– Microsoft will provide ASP.NET libraries that integrate with Blazor, but you can use Blazor with any server-side technology
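To make that concrete, here is roughly what a Blazor component looks like (approximately the counter page from the early project template; the exact syntax has shifted between preview releases):

@page "/counter"

<h1>Counter</h1>

<p>Current count: @currentCount</p>

<button onclick="@IncrementCount">Click me</button>

@functions {
    int currentCount = 0;

    void IncrementCount()
    {
        currentCount++;
    }
}

The markup is ordinary HTML, while the @ expressions and the @functions block are C# that runs in the browser on the WebAssembly-hosted runtime.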

What version of .NET will be supported? This is where it gets messy. Sanderson says Blazor will support .NET Standard 2.0 or higher, but not completely: some functions will throw a PlatformNotSupportedException. The reason is that not all functions make sense in the context of a Blazor application.

Blazor sounds promising, though the demo application on Azure currently gives me a 403 error; so there is this video from NDC Oslo instead.

The other question is whether Blazor has a future or will join Silverlight and other failed attempts to create a new application platform that works. Microsoft demands much patience from its .NET community.

HackerRank survey shows programming divides in more ways than one

Developer recruitment company HackerRank has published a survey of developer skills. The first place I look in any survey is who took part, and how many:

HackerRank conducted a study of developers to identify trends in developer education, skills and hiring practices. A total of 39,441 professional and student developers completed the online survey from October 16 to November 1, 2017. The survey was hosted by SurveyMonkey and HackerRank recruited respondents via email from their community of 3.2 million members and through social media sites.

I would like to see the professional and student responses shown separately; the world of work and the world of learning are different. This statement may also be incomplete, since several of the questions analyse what employers want, which suggests another source of data (not difficult to find for a recruitment company).

It is still a good read. It is notable for example that the youngest generation is learning to code later in life than those who are now over 35.

I am not sure how to interpret these figures, but can think of some factors. One is that the amount of stuff you can do with a computer without coding has risen. In the earliest days when computing became affordable for anyone (late seventies/early eighties), you could not do much without coding. This was the era of type-in listings for kids wanting to play games. That soon changed, but coding remained important to getting things done if you wanted to make a business database useful, or create a website. Today though you can do all kinds of business, leisure and internet computing without needing to see code, so the incentive to learn is lower. It has become a more specialist skill. It remains valuable though, so older people have reason to be grateful.

How do people learn to code? The most popular resource is Stack Overflow, followed by YouTube, with books coming in third. In truth the most popular resource must be Google search. Credit to Stack Overflow though: like Wikipedia, it offers a good browsing experience at a time when the web has become increasingly unpleasant to use, infected by pop-up surveys, autoplay videos and intrusive advertising, not to mention the actual malware out there.

No surprises in language popularity, though oddly the survey does not tell us directly what languages are most used or best known by the respondents. The most in demand languages are apparently:

1. JavaScript
2. Java
3. Python
4. C++
5. C
6. C#
7. PHP
8. Ruby
9. Go
10. Swift

If you ask what languages developers plan to learn next, Go, Python and Scala head the list. And then there is a fascinating chart showing which languages developers prefer grouped by age. Swift, apparently, is loved by 75% of those over 55, but only by 15% of those under 25, the opposite of what I would expect (though I don’t know if this is a percentage of those who use the language, or includes those who do not know it at all).

Frameworks are another notable topic. Everyone loves Node.js; but two of the frameworks on offer are “.NET Core” and “ASP”. This is odd, since .NET Core is not really a framework, and ASP normally refers to the ancient “Active Server Pages” framework which nobody uses any longer, while ASP.NET runs on .NET Core so is not an alternative to it.

This may be a clue that the HackerRank company or community is not well attuned to the Microsoft platform. That itself is of interest, but makes me question the validity of the survey results in that area.

C# and .NET: good news and bad as Python rises

Two pieces of .NET news recently:

Microsoft has published a .NET Core 2.1 roadmap and says:

We intend to start shipping .NET Core 2.1 previews on a monthly basis starting this month, leading to a final release in the first half of 2018.

.NET Core is the cross-platform, open source implementation of the .NET Framework. It provides a future for C# and .NET even if Windows declines.

Then again, StackOverflow has just published a report on the most sought-after programming languages in the UK and Ireland, based on the tags on job advertisements on its site. C# has declined to fourth place, now below Python and with half the demand of JavaScript.

To be fair, this is more about increased demand for Python, probably driven by interest in AI, rather than decline in C#. If you look at traffic on the StackOverflow site, C# is steady, but Python is growing fast.

The point that interests me though is the extent to which Microsoft can establish .NET Core beyond the Microsoft-platform community. Personally I like C# and would like to see it have a strong future.

There is plenty of goodness in .NET Core. Performance seems to be better in many cases, and cross-platform support is a big advantage.

That said, there is plenty of confusion too. Microsoft has three major implementations of .NET: the .NET Framework for Windows, Xamarin/Mono for cross-platform, and .NET Core for, umm, cross-platform. If you want cross-platform ASP.NET you will use .NET Core. If you want cross-platform Windows/iOS/macOS/Android, then it’s Xamarin/Mono.

The official line is that by targeting a specification (a version of .NET Standard), you can get cross-platform irrespective of the implementation. It’s still rather opaque:

The specification is not singular, but an incrementally growing and linearly versioned set of APIs. The first version of the standard establishes a baseline set of APIs. Subsequent versions add APIs and inherit APIs defined by previous versions. There is no established provision for removing APIs from the standard.

.NET Standard is not specific to any one .NET implementation, nor does it match the versioning scheme of any of those runtimes.

APIs added to any of the implementations (such as, .NET Framework, .NET Core and Mono) can be considered as candidates to add to the specification, particularly if they are thought to be fundamental in nature.
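In concrete terms, targeting the standard is just a property in the project file. A minimal class library project might look like this (a sketch, not taken from the documentation quoted above); the resulting assembly can then be referenced from .NET Framework, .NET Core or Mono/Xamarin projects that support that version of the standard.

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>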

Microsoft also says that plenty of code is shared between the various implementations. True, but it still strikes me that having both Xamarin/Mono and .NET Core is one cross-platform implementation too many.

Which .NET framework for Windows: UWP, WPF or Windows Forms?

Yes, mobile is the future of client applications, cross-platform is cool, web applications are amazing; but out there in the real world, there are still a ton of people who work all day with a Windows PC, and businesses that want PC applications in order to get their work done.

So when a business comes to you and says, we want a new Windows application to do this or that, and presuming they do not care about mobile or Macs or access over the internet but just want something that runs on their internal network, what framework do you choose?


Let us even assume that they all run Windows 10 so that UWP (Universal Windows Platform) is a realistic option.

If you want to code in .NET (which is a great choice for a Windows-only application, and with the possibility of migrating code to cross-platform via Xamarin’s compiler later), then you have three obvious choices:

Windows Forms

This is the framework for Windows desktop applications that was introduced at the same time as .NET itself, back in 2002. Of course it has been revised many times since. There was a big update in 2005 with .NET 2.0. That said, Microsoft intended it to be replaced by Windows Presentation Foundation (WPF, see below), so it has not been a focus of attention. In 2014, High DPI support was improved, with .NET 4.5.2, reflecting the fact that this ancient framework is still widely used.

Windows Forms is a nice wrapper around the Windows API, and easy to use in that it uses an essentially X-Y layout. In other words, you can think of your form as a grid of pixels, with the position of each control determined at design time by its size and coordinates. This is great if you are designing and running on the same PC, but not so good when you deploy to other PCs with different display settings. It does kind-of scale if you follow certain rules, but successful scaling in a Windows Forms application is often difficult to achieve, so users may suffer chopped-off controls and text, or just ugly screens. Read this carefully if you use Windows Forms. And then read about High DPI support, which was improved again in .NET Framework 4.7.

If you are writing a database application, you can generate datasets by drag and drop from the Server Explorer in Visual Studio and bind them to controls. I am not a fan of this database framework, which quickly gets convoluted, but you do not have to use it. However the ability to bind list and grid controls to any kind of .NET collection is fantastically useful.
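As a minimal sketch of that collection binding (not from the article; the Customer type and sample data are invented for illustration):

using System;
using System.ComponentModel;
using System.Windows.Forms;

public class Customer
{
    public string Name { get; set; }
    public string City { get; set; }
}

public class MainForm : Form
{
    public MainForm()
    {
        Text = "Customers";
        var grid = new DataGridView { Dock = DockStyle.Fill, AutoGenerateColumns = true };
        Controls.Add(grid);

        // Bind the grid straight to an ordinary .NET collection;
        // BindingList<T> also tells the grid when items are added or removed.
        grid.DataSource = new BindingList<Customer>
        {
            new Customer { Name = "Contoso", City = "London" },
            new Customer { Name = "Fabrikam", City = "Leeds" }
        };
    }

    [STAThread]
    static void Main()
    {
        Application.EnableVisualStyles();
        Application.Run(new MainForm());
    }
}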

Why is Windows Forms still in use? It is partly legacy and the fact that it is easier to maintain and enhance an existing application than to start again. It is also because, scaling issues aside, Windows Forms is reliable, well supported by both built-in and third-party controls, and easy to learn.

Windows Presentation Foundation

This was Microsoft’s second go at a GUI framework for .NET and in many respects a great improvement. It was introduced with .NET Framework 3.0 in 2006, part of the Vista wave of technology. Unlike Windows Forms, it is based on the DirectX graphics API, so great for multimedia and special effects. Scaling is built-in and based on layout managers. The underlying presentation language is based on XAML, an XML language. As with Windows Forms, there is deep support for binding data to controls.

Why would you not always use WPF rather than Windows Forms? The main issue is that the time you save on figuring out scaling is more than consumed by the time you spend on design. WPF is a designer-centric framework. It will repay your efforts, but if you just want to slap a couple of grids and a few buttons on a form to get a working business application, Windows Forms remains tempting.

Universal Windows Platform

Both Windows Forms and WPF are old, and Microsoft is pointing developers towards its Universal Windows Platform (UWP) instead. UWP is an evolution of the new application platform introduced in Windows 8 in 2012. If WPF was all about scaling and multimedia, the Windows 8 modern app platform is about touch support and Store-based deployment. The application model was also service based, the idea being that your app consumes services published over the internet. Until the Windows 10 Fall Creators Update, you could not use the .NET SQLClient to connect directly to a SQL Server database (you can now). The app platform became UWP with the launch of Windows 10 in 2015. UWP can use XAML for layout design, but it is not compatible with WPF.

Personally I have mixed feelings about UWP. Unfortunately it has suffered from Microsoft’s ever-changing development strategy. The Windows 8 app platform made sense to me as a way of bringing Windows into the tablet era and enabling applications that were more secure and more easily deployed, even if it tended to result in applications that were blocky and ugly. Microsoft then changed its mind about full-screen touch applications and came up with the UWP for Windows 10, where applications again run in a window, but with a new selling point: you could run your application on Windows Phone as well as desktop. Then the company canned Windows Phone, before UWP had properly launched, in effect deleting the “Universal” part of the platform.

UWP still offers Store delivery and isolation from other applications, better for security and stability. However there are a few things against it. First, users require Windows 10. Second, like WPF it is a designer-centric platform and not so good for running up quick business applications. Third, UWP apps behave differently from standard desktop applications, sometimes not in a good way.

I was using Microsoft’s bundled Photos application recently. I work a lot with images so this often pops up, as the default image viewer on Windows 10. I was not stressing it, but it crashed which, as is typical for a UWP app, means it just disappeared without any message or warning.

UWP will be three years old this summer, but I am not convinced that the platform is quite there yet. I find it hard to think of UWP apps that I love. The apps I know best are the built-in ones, Mail, Photos, Groove Music, Calculator, and I do not love any of them. Paint 3D is amazing but not my thing.

At the same time I do see the merits of UWP versus traditional Windows application deployment. The existence of the Desktop Bridge (formerly Project Centennial) means you can get many of those benefits while still using WPF or Windows Forms.

Closing thoughts

Perhaps something like Power Apps will render this discussion irrelevant before long. There are also other options for the desktop, such as Xamarin Forms if you still want to use .NET, or Electron for using web technologies for desktop applications.

Still, while it may seem surprising, even in 2018 I can think of reasons why you might use any of the above frameworks, even Windows Forms, for a business app targeting Windows.