Category Archives: visual studio

The era of tiny PCs: 400g and smaller than a paperback book

My work PC for the last few years has been a 2018 HP Omen gaming PC which has been brilliant; I have replaced the GPU and added storage but everything still works fine. That is, it was my work PC until I reviewed a mini PC which has surprised me with its capability – not because it is exceptional, but because everyday technology has reached the point where anything bigger is unnecessary for most purposes other than gaming.

Mini PC with paperback book and CD to show the size

The new PC is a Trigkey S5 with an AMD Ryzen 5560 CPU, 500GB NVMe SSD and 16GB DDR4 RAM, and currently costs around £320. Its Geekbench CPU score is better than that of my five-year-old HP with a Core i7.

Its GPU score, though, is way below that of the old HP.

Still, there is support for three displays via HDMI, DisplayPort and USB-C and 4K/60Hz is no problem.

Inside we find branded RAM, and it does not look as if the components are shoe-horned in; there is plenty of space.

The power supply is external and rated at 19V and 64.98W.

Expansion is via four USB-A ports, one USB-C port, and the aforementioned HDMI and DisplayPort sockets. There is also an Ethernet port, and of course Bluetooth and Wi-Fi.

Operating system? Interesting. It is not mentioned in the blurb, but Windows 11 happens to be installed – with one of those volume MAK (Multiple Activation Key) licenses that is not suitable for this kind of distribution (though it costs the vendor hardly anything). When first run, Windows setup states that “you may not use this software if you have not validly acquired a license for the software from Microsoft or its licensed distributors,” which you likely have not, but Trigkey may presume that most of its customers will not care. I recommend installing your own licensed copy of Windows as I have done, or your preferred Linux distribution.

Windows does run well, however, and 16GB RAM is enough for Hyper-V and Windows Subsystem for Linux (WSL) 2. Visual Studio 2022, VS Code and Microsoft Office all run fine.

I am not suggesting that this particular model is the one to get, but I do think that something like this, small, light, and power-sipping, is now the sane choice for most desktop PC users.

.NET P/Invoke on Azure App Service for Linux

I have an online bridge game in development (yes, still!) and it is written in ASP.NET Core with C#. One of the things that interests bridge players is called double-dummy analysis; this is where you look at what would be the best play in a game if you knew where all the cards were, whereas when actually playing bridge you only see your own cards and, during play, another hand called Dummy, so half the cards are hidden.

Double-dummy analysis is a solved problem and bridge programmers benefit from an open source library called DDS (Double Dummy Solver) written primarily by Bo Haglund and Soren Hein. This is a C++ DLL that can also be compiled for Linux and macOS.

I wanted to integrate DDS into the bridge game in order to give players information at the end of a game, including whether they were in the optimum contract and whether they beat the optimum score. I started by writing a new C# wrapper for DDS, though borrowing from the work here. My version is 64-bit and wraps a few more functions. I compiled the native DLL for Windows and Linux using OpenMP for concurrency, which considerably improves performance (Boost is another option but I did not find much difference).
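
For illustration, here is roughly the shape of the P/Invoke declarations involved. This is a hedged sketch rather than my actual wrapper: the struct sizes (an 80-character PBN string, 5 strains x 4 hands) follow the DDS headers, but check dll.h in the DDS source before relying on them.

using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
public struct ddTableDealPBN
{
    // The deal in PBN format as a null-terminated string
    [MarshalAs(UnmanagedType.ByValArray, SizeConst = 80)]
    public byte[] cards;
}

[StructLayout(LayoutKind.Sequential)]
public struct ddTableResults
{
    // int resTable[DDS_STRAINS][DDS_HANDS] flattened: 5 strains x 4 hands
    [MarshalAs(UnmanagedType.ByValArray, SizeConst = 20)]
    public int[] resTable;
}

public static class Dds
{
    // .NET resolves "dds" to dds.dll on Windows and libdds.so on Linux
    [DllImport("dds")]
    public static extern int CalcDDtablePBN(ddTableDealPBN tableDealPBN, ref ddTableResults tablep);
}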

Note: the usual caveats about P/Invoke apply here. During one of my tests I actually crashed the container running the app. The ASP.NET developers do a lot of work to make the platform reliable, and doing P/Invoke may introduce instability.

I added my wrapper into the ASP.NET application and it worked fine on my development machine. I deployed it to App Service and the P/Invoke calls did not work. Fixing this required a bit of a deep dive into Azure App Service for Linux.

I am deploying the native code .so library into the same directory as the compiled .NET code for the rest of the application. The error I got was:

Cannot open shared object file: No such file or directory

I raised the topic on Stack Overflow.

One of the things that puzzled me was that the unit tests, which include the P/Invoke code, ran OK in Azure Pipelines, which I use for deployment. But not when deployed.

The first point is that you get the “No such file” error not only when the file itself is not present (it was) but also when a dependency is missing. So step one is to SSH into the container running the ASP.NET app, which you can do with the Development Tools in the Azure portal. Note that with Azure App Service for Linux the app always runs in a container.

This gives you root permissions in the container though not to the host operating system. Navigate to the directory with the troublesome library and type:

ldd libdds.so

(or the name of your library). This will reveal any missing dependencies and other issues. I noticed two things. One was a missing dependency, libgomp.so.1, which is the OpenMP runtime library. Second, ldd reported that my library required at least GLIBC 2.29 whereas the available version was 2.28.

How could I fix the GLIBC version? This is determined by the version of Linux and you can use

ldd --version

to check the version you have. In my case it reported Debian with GLIBC 2.28.

I did some more research. If you really want to know about Azure App Service for Linux, there are a few key documents.

The basics here: Operating system functionality – Azure App Service | Microsoft Learn

The FAQ here: App Service on Linux FAQ | Microsoft Learn

Here you will learn details like why you cannot use a file-based database like SQLite in Azure App Service for Linux:

“The file system of your application is a mounted network share. This enables scale out scenarios where your code needs to be executed across multiple hosts. Unfortunately this blocks the use of file-based database providers like SQLite since it’s not possible to acquire exclusive locks on the database file.”

But I digress. To go deeper still, check this post by Jim Cheshire:

Things You Should Know: Web Apps and Linux – Microsoft Tech Community

which has lots of critical information, like why a custom container on App Service must respond to ping.

So after reading through all this and greatly improving my understanding of how App Service for Linux works, I got to the heart of my problem. When you deploy a .NET Core application to App Service for Linux, it will by default use a container from the Microsoft Artifact Registry that matches the version of .NET you are using. If you check this page you will see that the current version for ASP.NET Core 6.0 is tagged mcr.microsoft.com/dotnet/aspnet:6.0

If you examine this container you will find that it runs Debian Buster, which uses GLIBC 2.28. That is a matter of slight concern, since Debian Buster is shown on the Debian releases wiki as having an approximate end of life of August 2022, though the LTS project extends that to June 2024.

Still, now I knew how to fix my problem. Either use a custom container image, or upgrade to .NET 7, or recompile libdds.so to run on Debian Buster.

I decided that the easiest short-term solution was to recompile. I downloaded Buster and recompiled the library.
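
If you have Docker to hand, a Buster container is a quick way to get a matching build environment; something like this (the packages listed are just the usual build tools, and the actual build steps depend on the DDS makefiles):

docker run -it --rm -v "$(pwd)":/src -w /src debian:buster bash
apt-get update && apt-get install -y build-essential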

What about libgomp.so.1? This was kind-of fixable by using SSH to run:

apt-get update

apt-get install libgomp1

This is not great though, since Azure could replace the container at any time – and always will if you do something like scale the plan up or down to change the specification of the VM. I tried copying the Buster version of libgomp.so.1 to the application directory. It works, but I also needed to add a linker option to enable DDS to use a library in the same directory:

 -Wl,-rpath='${ORIGIN}'

as explained here.
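
Assuming a single-step link for simplicity (the real DDS build uses its own makefiles), the option goes on the link line something like this:

g++ -shared -fPIC -fopenmp -o libdds.so *.o -Wl,-rpath='${ORIGIN}'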

I think a better solution is to move to deploying a custom container to App Service, which is an option.

Care is needed though as there is a bit of special sauce in the official container images if you want features like SSH in the portal to work properly. It also means revisiting my deployment scripts, so the above hack was an easier and quicker workaround for me.

Multi-page ASP.NET Core, TypeScript, and ES JavaScript Modules

One of the messier aspects of the modern web is the situation with JavaScript modules. Modules, and the ability to import code from one module into another in a coherent and efficient manner, are fundamental programming concepts, but JavaScript originally had no concept of them. Developers came up with CommonJS, originally for server-side JavaScript, using the keyword require to reference one module from another. Node.js borrowed and refined this system. It does not work in web browsers, but can be made to do so by processing the code before deployment to make it browser compatible, or by using require.js or an equivalent.

In the meantime the ECMAScript standard evolved its own module system, often referred to as ES modules or ES6 modules (modules arrived as a language feature in ES6, also known as ES2015). Browsers implement ES6 modules in their JavaScript engines, not CommonJS. The two systems are not compatible.

The situation today is that although most agree that ES6 modules are the way forward, Node.js and a huge amount of existing code use CommonJS modules. The Node.js team is trying to migrate towards ES6 but it is inherently a difficult path. Deno, an alternative to Node.js but with a tiny userbase by comparison, uses ES6 modules, and that is one of its attractions.

ASP.NET Core and JavaScript

ASP.NET was originally designed to be a server-side code generator like PHP. You write code in C# or VB.NET but what gets sent to the browser is just HTML, CSS and JavaScript. The JavaScript piece was not too important at first, just handy for the occasional client-side confirmation dialog or the like.

This is no longer the case and increasing numbers of applications make heavy use of client-side JavaScript. I am working on a multi-user game for example and have written a ton of JavaScript. Perhaps I should have started with a single-page application (SPA) and used React or Vue, but I did not; I did what I was most familiar with and started with a basic ASP.NET Core application. I was in a hurry and took full advantage of everything I could get built-in, including ASP.NET Core Identity and the SignalR real-time communications library.

Everything went fine but I wanted to shift to TypeScript and take advantage of JavaScript minification. There is WebOptimizer, an unofficial project with involvement from the .NET team at Microsoft, but I started going down the WebPack path for reasons you can read here. I got this working more or less, but abandoned it: essentially, WebPack is not designed for multipage projects and I was running into some awkward problems and spending more time on WebPack configuration than on developing my application.

Using ECMAScript modules

I am in favour both of simplicity and of keeping up to date, so I took a closer look at using the JavaScript emitted by the TypeScript compiler more directly, rather than transpiling it to browser-compatible JavaScript (one of the things WebPack does). The main issue is that the JavaScript code will now include import and export statements. You can try to use TypeScript without ever using import or export, but I do not recommend it. Browser compatibility is pretty good if you can manage without Internet Explorer.

Quite a lot changes though when you start using import and export and your JavaScript files become modules. Here are a few things I found.

1. Any links to JavaScript files will now need to include type="module" like so:

<script type="module" src="~/js/myscript.js"></script>

2. Any scripts that are imported by other scripts must not use the asp-append-version Tag Helper for cachebusting. Cachebusting is to prevent old versions of JavaScript files being used because they are cached by the browser. The asp-append-version helper adds a hash value as an argument when retrieving the script. The reason it causes problems is that the scripts that import that file do not know about the hash value and use its unadorned name. This means the browser loads the script twice, with unpredictable results. Removing asp-append-version is not as bad as it first appears, thanks to ETags that inform the browser whether the file has been modified. See the discussion here.
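
For reference, the helper being removed looks like this in the Razor markup (the path is illustrative):

<script src="~/js/shared.js" asp-append-version="true"></script>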

3. If you have controls that call JavaScript functions on your web page, they will no longer work unless you import them. That is how modules work. There are a few solutions. The best is to attach things like click handlers in JavaScript rather than coding them in the HTML. This can be problematic though especially if you have server-side ASP.NET code that creates controls that call JavaScript programmatically. An alternative is to add the function to the window object, which you can do either in the ASP.NET Razor .cshtml page or in the TypeScript/JavaScript. I find it easiest to have an initialisation function in the TypeScript that I call from the web page. Scripts defined as modules never run until the page has loaded.

4. You need to be aware of side effects. Imagine you have three JavaScript files, page1.js, page2.js and shared.js. Your web page page1.cshtml uses page1.js and page2.cshtml uses page2.js. Both files import functions from shared.js. Everything works fine, but then you find that shared.js needs to import a function from page2.js. You run the application and find that page2.js has been loaded by page1.cshtml. This is by design: when you import the function you are telling the browser to load that file. It could catch you out though if you have initialisation code in both page1.js and page2.js and do not want them both to run.

The solution is either to plan for this and code accordingly, or not to import functions from page1.js or page2.js in shared.js. Of course if you follow the path of least resistance in an ASP.NET Core application and the only JavaScript code directly referenced in .cshtml is site.js then it is not a problem.

A working example with Visual Studio 2022

Imagine you have a multi-page ASP.NET Core application such as the one created by default in Visual Studio. It has site.js in wwwroot/js and that is about it. Here is what you might do:

a. Create a directory called Scripts in your project and add a file demo.ts

b. Add a file called tsconfig.json to the Scripts folder. If you use the Add Item wizard it will be prepopulated with some defaults. You will need to add as a minimum a compiler option to support ES modules and an outDir, for example:

{
  "compilerOptions": {
    "module": "es2015",
    "noImplicitAny": false,
    "noEmitOnError": true,
    "removeComments": false,
    "sourceMap": true,
    "target": "es5",
    "outDir": "../wwwroot/js"
  },
  "exclude": [
    "node_modules",
    "wwwroot"
  ]
}

c. demo.ts looks like this:

export function clickme() {
    alert("You clicked");
}

d. Add the following to Index.cshtml:

<script type="module" src="js/demo.js"></script>
<script type="module">
    import { clickme } from './js/demo.js';
    window.clickme = clickme;
</script>

e. Now a button on the page will work, for example:

<p><button onclick="clickme()">Click me</button></p>

Note: when you add a TypeScript file to a Visual Studio 2022 project you get a message inviting you to install a NuGet package.

The TypeScript will still get compiled by Visual Studio, with or without this package. Without it, however, the .NET Core compiler (dotnet build and so on) will not compile the TypeScript.

Minification

Minifying the JavaScript is pretty easy. For the time being I am just running terser in a script called by a post-build event. I am deploying to a Linux Azure app service using Azure DevOps Pipelines and have had to work around the issue that build events do not seem to handle the cross-platform scenario very well, and Visual Studio does not provide much of an editor for build events in ASP.NET Core projects, but it is working.
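
The command the script runs is essentially a terser invocation per file, something like this (paths illustrative):

terser wwwroot/js/demo.js --compress --mangle -o wwwroot/js/demo.min.js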

I hope this proves a better long-term solution for me than WebPack.

A UI lesson: do not ask users to choose between Register and Login

I am developing a web site for playing bridge, a project which kicked off in March when lockdown caused bridge clubs everywhere to close. There are lots of sites where you can play bridge online, but not many options (particularly back in March) for clubs that wanted to run their own online sessions.

It’s going OK with a number of clubs now using it every week, though it is still in development. I have learned a painful lesson though. In order to proceed as quickly as possible, I started my project with the Visual Studio template for an ASP.NET Core application with ASP.NET Core Identity – the latter an easy decision since it handles all the complications of registration, password reset and so on. (I did end up having to re-plumb it to use int rather than GUID for the primary key, but that is another story.)

The default home page the template generates looks like this, with options in the menu to Register or Login.

Registration and login are fundamental concepts that have been part of the web forever. It’s simple for a developer to understand: you register to create an account, you login if you already have an account.

The painful discovery is that this is not obvious to everyone, particularly to an older demographic that did not grow up with computers. Another factor is that cookies, browser-saved passwords and single sign-on with Google/Facebook etc mean that this whole area is a bit of a mess and there are people who just kinda expect web sites to know who they are (which in one way horrifies me but I do see the massive convenience).

The consequence is that a surprising (to me) number of people had difficulty knowing whether Registration or Login was what they needed. They would Register, then return to the site and hit Register again. Why is this site asking for my details again? Maybe a security thing? Oh no, why does it now say username not available?

This is because asking the user to make this choice is not good design. Registration is rare, login is common. Further, Register is a confusing word. We sometimes use the word register when accounts already exist. Create Account is better. And a better UI is just Login: I need to access this website. Then, underneath the username/password request, an option that says “I need to create an account”. The two options should not be equally prominent; and if you look at how prominent sites design this, that is what they do.

Lesson learned; but I wish this had occurred to me sooner!

Hands On ASP.NET Core

I’ve been putting together a quick web application (well, I thought it would be quick) in my spare time (hah!) and I picked ASP.NET Core on Linux as a sensible option given that I like working in C#. Overall it has been a reasonable experience so far and I still love the language. This is the most extensive work I have done so far with ASP.NET Core though and I have a few observations.

It is not a difficult framework to work with but I believe it could be made more approachable. This is largely a matter of documentation, though another point of confusion is the transition Microsoft has been making from ASP.NET MVC to Razor Pages. These two frameworks are similar but different: they share a lot of technology, but some things work in one and not the other, and sometimes it is not clear whether what you are reading applies just to ASP.NET MVC, or just to Razor Pages, or to both, or to both but with a little tweaking to account for differences. I started with MVC because I am more familiar with it but have shifted to Razor Pages because that seems to be the preferred direction; really I am equally happy with either.

If you are thinking of getting started with ASP.NET Core I recommend you start not with the framework, but with making sure that you are familiar with the following topics:

Dependency injection. If you are puzzling about something in the framework the answer may be “add it to the constructor and it magically works.” This is obvious if you are familiar with it but not otherwise; there is a minimal sketch of what I mean after this list.

Anonymous types. These seem to crop up quite a lot.

Lambdas and the arrow operator =>

LINQ queries
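
Here is that sketch: an invented GameService class showing constructor injection. The framework supplies the constructor arguments, provided the types involved are registered with the container:

using Microsoft.Extensions.Logging;

public class GameService
{
    private readonly ILogger<GameService> _logger;

    // ASP.NET Core constructs GameService and supplies the logger,
    // assuming something like services.AddScoped<GameService>() at startup
    public GameService(ILogger<GameService> logger)
    {
        _logger = logger;
    }
}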

Now, the documentation. Unless you have perhaps found a good and up-to-date book, you will probably start here.

Now, I do think there are lots of good things about docs.microsoft.com, the fact that it is all on GitHub and open for comment and improvement, the fact that it performs well, and the obvious effort that has gone into many of the topics.

That said, I do not much like this page. My biggest problem with it is that there is no simple link to a comprehensive reference. It is a bunch of little tutorials which may or may not tell you what you need to know. It gets better if you click into one of the topics and I like this page, for example, much better, with the hierarchical list of topics on the left.

It is still not great though. There is a big emphasis on tutorials, and while I agree that learning through doing is a great way to learn, the problem with the tutorials is that they tend to leave you with lots of questions and no obvious route to answers.

I will give you an example. I decided to use the ASP.NET Identity system in my application, because it saves a ton of tedious work doing registration, password reset, login, and so on, plus it is security-critical code that I would likely get wrong if I did it myself.

The problem you will immediately hit though is that you want to store additional data about users. This could be any kind of data but let’s call it additional profile data. For example, you want to let users upload an image which is then displayed in the application. There are some heavy articles about customizing identity but there is also this one on adding custom user data to an ASP.NET Core web app. It’s great but it does not actually tell you how to retrieve the custom user data in your application. Eventually I figured out a way of doing it. You just have to use dependency injection to get an instance of the UserManager class. So you pop this in the constructor for one of your classes:

UserManager<YourCustomUser> UserManager

and store it in a private variable. Then you can do:

var MyTask = _userManager.GetUserAsync(User);
MyTask.Wait();
var MyUser = MyTask.Result;

or something similar (the Wait/Result dance is only needed if you are in a synchronous method) and it just works.
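
In an asynchronous method you can simply await the call. A minimal Razor Pages sketch, assuming the custom user class is called YourCustomUser (the page model is invented for illustration):

using System.Threading.Tasks;
using Microsoft.AspNetCore.Identity;
using Microsoft.AspNetCore.Mvc.RazorPages;

public class IndexModel : PageModel
{
    private readonly UserManager<YourCustomUser> _userManager;

    // Dependency injection supplies the UserManager instance
    public IndexModel(UserManager<YourCustomUser> userManager)
    {
        _userManager = userManager;
    }

    public async Task OnGetAsync()
    {
        // User is the ClaimsPrincipal for the current request; the returned
        // object includes the custom profile data added to the Identity user
        var myUser = await _userManager.GetUserAsync(User);
    }
}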

Let me add something else. The actual API reference for ASP.NET Core is almost useless. It faithfully documents each class and method while often saying nothing about how or why to use it.

Data access

My application is really forms over data, as so many are, so data access plays a big role. There seem to be plenty of tutorials on data access in the ASP.NET Core documentation but I don’t much like them. The problem is Entity Framework. Most of the documentation assumes it. It is not that Entity Framework is bad; it does seem to work well, and while there is debate about how well it performs, in many cases it does not matter, and in other cases you can fine-tune it. My problem rather is that what Microsoft calls a “complex data model” is actually the normal case, where you have many-to-many relationships, and dealing with this in Entity Framework soon gets fiddly.

I am guilty of lacking patience, but being familiar with SQL it is easier for me just to write the SQL and to know exactly what data is being saved and what data is being retrieved. I have left Entity Framework in place because the Identity system uses it (and it looks non-trivial to replace) but for the rest I have migrated to Dapper, which seems ideal. It is not a full-featured ORM and it expects you to write the SQL, but it does a lot that saves time. My only complaint about Dapper is that (again) the documentation isn’t great, but I’ve found it much simpler to grok than the more advanced aspects of Entity Framework.
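
A hedged sketch of the kind of Dapper call I mean; the Player class, table and column names are invented for illustration:

using System.Collections.Generic;
using System.Threading.Tasks;
using Dapper;
using Microsoft.Data.SqlClient;

public class Player
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class PlayerStore
{
    private readonly string _connectionString;

    public PlayerStore(string connectionString) => _connectionString = connectionString;

    public async Task<IEnumerable<Player>> GetPlayersAsync(int clubId)
    {
        // Dapper opens the connection if needed and maps columns to Player properties
        using var conn = new SqlConnection(_connectionString);
        return await conn.QueryAsync<Player>(
            "SELECT Id, Name FROM Players WHERE ClubId = @ClubId",
            new { ClubId = clubId });
    }
}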

One thing I do like about Entity Framework is data migrations. Like most developers I have a local database and another one online and code-first data migrations save a lot of work creating database tables and keeping the schema in sync. Dapper does not have this.
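
For anyone unfamiliar with it, the workflow is along these lines (the migration name is invented):

dotnet ef migrations add AddPlayerRating
dotnet ef database update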

StackOverflow

Of course it is true that no matter what your question is, someone has asked it before, and often the best place to find the answer is on StackOverflow. Big appreciation for the folk who take the time to answer questions there, though I’d add that it is not a place from which to copy code, but a place to understand a solution. Out of date information is a problem, as it is in Microsoft’s own documentation.

Finally

I think ASP.NET Core is a great framework (or frameworks) but not as approachable as it could be. Documenting it in the best way is not an easy problem to solve, and every developer comes with different skills and requirements. Perhaps Microsoft could get someone suitable to write a nice book aimed at intermediate coders, and one that does not assume you want to use Entity Framework. Then offer it as a free download and/or publish it online as part of the documentation, and keep it up to date as new versions appear.

Wrestling with Visual Basic 6 (really!) and how knowledge on the internet gets harder to find

I have not done a thing with Visual Basic 6 for years, but a contact of mine has written a very useful utility in a certain niche (it is for teaching Contract Bridge, though you do not need to know about bridge to follow this post) and his preferred tools are VBA and Visual Basic 6.

Visual Basic 6 is thoroughly obsolete, but apps compiled with it still run fine on Windows 10 and Microsoft probably knows better than to stop them working. The IDE is painful on versions of Windows later than XP, but that is what VMs are for. VBA, which uses essentially the same runtime (though updated and also available in 64-bit), is not really obsolete at all; it is still the macro language of Office, though Microsoft would prefer you to use a modern add-in model that works in the cloud.

Specifically, we thought it would be great to use Bo Haglund’s excellent Double Dummy Solver, which is an open source library and runs cross-platform. So it was just a matter of writing a VB6 wrapper to call this DLL.

I did not set my sights high, I just wanted to call one function which looks like this:

EXTERN_C DLLEXPORT int STDCALL CalcDDtablePBN(
   struct ddTableDealPBN tableDealPBN,
   struct ddTableResults * tablep);

and the ddTableResults struct looks like this:

struct ddTableResults
{
   int resTable[DDS_STRAINS][DDS_HANDS];
};

I also forked the source and created a Visual C++ project for it for a familiar debugging experience.

So the first problem I ran into (before I compiled the DLL for myself) is that VB6 struggles with passing a struct (user-defined type or UDT in VB) ByVal. Maybe there is a way to do it, but this was when I ran up Visual C++ and decided to modify the source to make it more VB friendly, creating a version of the function that has pointer arguments so that you can pass the UDT ByRef.

A trivial change, but at this point I discovered that there is some mystery about __declspec(dllexport) in Visual C++, which is meant to export undecorated functions for use in a DLL but does not always do so. The easy solution is to go back to using a DEF file, and after fiddling for a bit I did that.
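
For anyone who has not seen one for a while, a DEF file is just a named list of exports, along these lines (showing only the function used here):

LIBRARY dds
EXPORTS
    CalcDDtablePBN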

Now the head-scratching started as the code seemed to run fine but I got the wrong results. My C++ code was OK and the unit test worked perfectly. Further, VB6 did not crash or report any error. Just that the values in the ddTableResults.resTable array after the function returned were wrong.

Of course I searched for help but it is somewhat hard to find help with VB6 and calling DLLs especially since Microsoft has broken the links or otherwise removed all the helpful documents and MSDN articles that existed 20 years ago when VB6 was hot.

I actually dug out my old copy of Daniel Appleman’s Visual Basic Programmer’s Guide to the Windows API where he assured me that arrays in UDTs should work OK, especially since it is just an array of integers rather than strings.

Eventually I noticed what was happening. When I passed my two-dimensional array to the DLL it worked fine, but in the DLL the indexes were inverted. So all I needed to do was to fix this up in my wrapper. Then it all worked fine. Who knows what convoluted stuff happens under the surface to give this result – yes I know that VB6 uses SAFEARRAYs, and the likely explanation is that C and C++ store two-dimensional arrays in row-major order while SAFEARRAYs are column-major, so the same block of memory reads with its indexes swapped on each side.

I do not miss VB6 at all and personally I moved on to VB.NET and C# at the earliest opportunity. I do understand however that some people like to stay with what is familiar, and also that legacy software has to be maintained. It would be interesting to know how many VB6 projects are still being actively maintained; my guess is quite a few. Which, if you read Bruce McKinney’s well-argued rants here, must be somewhat frustrating.

Stack Overflow survey shows leap in popularity for Visual Studio Code

Stack Overflow has released the results of its annual developer survey. I took a quick look, comparing the figures to those for 2018.

The survey is based on results from 88,883 developers from 179 countries. 400 responses were excluded because they took less than 3 minutes to complete!

A few things caught my eye.

Visual Studio Code is the most popular development tool, with over 50% of developers saying they use it. This is up from 34.9% last year. Visual Studio (which I guess includes Visual Studio for Mac) is second but has gone down from 34.3% to 31.5%.

Visual Studio Code is an amazing success; it is only four years old. What is the benefit to Microsoft though? There must be some benefit via the branding and gentle steers towards Microsoft services such as Azure; but this is mainly about Microsoft delivering a great free and open source developer tool and getting kudos in return.

Few other technologies have moved by such a dramatic amount. JavaScript remains top in languages but slightly down, 69.8% to 67.8%. Python is gently up, 38.8% to 41.7%. C# slightly down, 34.4% to 31%. Swift is down a bit, 8.1% to 6.6%, which is a little surprising to me.

TypeScript is up from 17.4% to 21.2%. Another impressive open source project from Microsoft and the great Anders Hejlsberg (Object Pascal, Delphi, C#, TypeScript).

In frameworks, last year StackOverflow had a single category for .NET Core (27.2%) while ignoring .NET Framework. This year it has 23.7% for .NET Core and 37.4% for .NET – no trends are therefore visible but next year perhaps.

There is an overly broad platform category including everything from Raspberry Pi to WordPress to AWS. I do not have much confidence in these figures, but notice that AWS is up from 24.1% to 29.5%, Google Cloud Platform up from 8% to 12.8%, and Azure up from 11% to 11.9%. Not good figures for Azure, now third behind AWS and Google. But Microsoft can take comfort from Windows, supposedly up from 35.4% to 50.7%.

Both Android (29% to 27%) and iOS (15.5% to 13%) are down slightly. I do not think this is meaningful movement, but it does suggest that mobile app development is no longer a big growth area; perhaps more attention is being paid to server apps.

RemObjects Elements: mix and match languages and platforms as you like

The world of software development has changed profoundly in the last decade or so. Once it was mainly desktop Windows development on the client, and mainly Java for server-based applications with web or Windows clients. Then came mobile and cloud – the iPhone SDK was released in March 2008, kicking off a new wave of mobile applications, while Amazon EC2 (Elastic Compute Cloud) came out of beta in October 2008. Microsoft tussled within itself about what to do with Windows Mobile and ended up ceding the entire market to Android and iOS.

The consequence of these changes is that business developers who once happily developed Windows desktop applications have had to diversify, as their customers demand applications for mobile and web as well. The PC market has not gone away, so there has been growing interest in both cross-platform development and in how to port Windows code to other platforms.

Embarcadero took Delphi, a favourite development tool based on an Object Pascal compiler, down a cross-platform path but not to the satisfaction of all Delphi developers, some of whom looked for other ways to transition to the new world.

Founded in 2002, RemObjects had a project called Chrome, which compiled Delphi’s Object Pascal to .NET executables. This product was later rebranded Oxygene. For a while Embarcadero bundled a version of this with Delphi, calling it Prism, after abandoning its own .NET compilation tools.

The partnership with Embarcadero ended, but RemObjects pressed on, adding language features to its flavour of Object Pascal and adding support for Mac OS X, iPhone and Java.

In February 2015 the company was an early adopter of Apple’s Swift language, introducing a Swift compiler called Silver that targets Android, .NET and native Mac OS X executables.

The company now offers a remarkable set of products for developers who want to target new platforms but in a familiar language:

  • Oxygene: Object Pascal
  • Silver: Swift 3 (and most of Swift 4)
  • Hydrogene: C# 7
  • Iodine: Java 8

Each language can import APIs from the others, and compile to all the platforms – well, there are exceptions, but this is the general approach.

More precisely, RemObjects defines four target platforms:

  • Echoes: .NET and .NET Core including ASP.NET and Mono
  • Cooper: Java and Android
  • Toffee: Mac, iOS, tvOS
  • Island: CPU native and WebAssembly

So if you fancy writing a WPF (Windows Presentation Foundation) application in Java, you can.

As you may spot from the above screenshot, the RemObjects tools use Visual Studio as the IDE. This is a limitation for Mac developers, so the company also developed a Mac IDE called Fire, and now a Windows IDE called Water (in preview) for those who dislike the Visual Studio dependency.

Important to note: RemObjects does not address the problem of cross-platform user interfaces. In this respect it is similar to the approach taken by Xamarin before that company came up with the idea of Xamarin Forms. So this is about sharing non-visual code and libraries, not cross-platform GUI (Graphical User Interface). If you are targeting Cocoa, you can use Apple’s Interface Builder to design your user interface, for example.

Of course WebAssembly and HTML is an interesting option in this respect.

A notable absentee from the list of RemObjects targets is UWP (Universal Windows Platform), a shame given the importance Microsoft still attaches to this.

RemObjects is mainly focused on languages and compilers rather than libraries and frameworks. The idea is that you use the existing libraries and frameworks that are native to the platform you are targeting. This is a smart approach for a small company that does not wish to reinvent the wheel.

That said, there is a separate product called Data Abstract which is a multi-tier database framework.

These are interesting products, but as a journalist I have struggled to give them much coverage, because of their specialist nature and also the demands on my time as someone who prefers to try things out rather than simply relay news from press releases. I also appreciate that the above information is sketchy and encourage you to check out the website if these tools pique your interest.

Microsoft announces Visual Studio 2019, but pleasing developers is a tough challenge

Microsoft’s John Montgomery has announced Visual Studio 2019, in a post which is short on any details of what might be in the product, other than to continue evolving features that we already know about, such as Live Share, AI-powered IntelliCode, more refactorings and so on.

The acquisition of GitHub is bound to impact both Visual Studio and Visual Studio Team Services, but Montgomery does not talk about this.

Note there is already a Visual Studio roadmap which gives some clues about what is coming. A common theme is integration with Azure services such as Azure Key Vault (for app secrets), Azure Functions, and Azure Container Service (Kubernetes).

It is more illuminating to read the comments to Montgomery’s post. Montgomery says that Visual Studio 2017 is “our most popular Visual Studio release ever,” which I presume is a count of how many times it has been downloaded or installed. It is not the most reliable though; one comment says “2017 has been buggier than all of the bugs 2015 and 2013 had combined.” I imagine every Visual Studio developer, myself included, has to exit and reload the IDE from time to time to fix odd behaviour. Other comments include:

– Reporting components have to be added per project rather than being integrated into the toolbox

– SQL Server Data Tools (SSDT) lagged behind the 2017 release and still have issues

– the XAML designer has performance and behaviour issues and the new XAML designer in preview is missing many features

In general, Microsoft struggles to keep Visual Studio up to date with its constantly-changing developer platform while also working well with the older technologies that are still widely used. The transition from .NET Framework to .NET Core is a tricky issue for the team to solve.

User Benjamin Callister says this:

I have been developing professionally with VS for 20 years now. honestly, the experience seems to get worse with each new release. the amount of time wasted in my day working with XAML alone makes me more than frustrated. The feeling is mutual among my peers as well – and it has been for years now. VS Code is such a fresh breath of air because of its speed. VS full has become so bloated, working with UWP/XAML so slow, and build times so slow. Also, imo profiling tools should be turned OFF by default, with a simple button to toggle them back on when needed. As a developer, I don’t want them on all the time – rather, just when I want to profile.

The mention of Visual Studio Code is an interesting one. Code is cross-platform, has an increasing number of extensions, and will be an increasingly popular choice for developers who can live without the vast range of features in Visual Studio.

What is happening with desktop development on Windows and will WPF be upgraded at last?

Once upon a time all Windows development was desktop development. Then there was web development, but that was a server thing. Then in October 2012 Windows 8 arrived, and it was all about full-screen, touch control and Store-delivered applications that were sandboxed and safe to run. Underneath this there was a new platform-within-a-platform called the Windows Runtime or WinRT (or sometimes Metro). Developing for Windows became a choice: new WinRT platform, or old-style desktop development, the latter remaining necessary if your application needed more features than were available in WinRT, or to run on Windows 7.

Windows 8 failed and was replaced by Windows 10 (July 2015), in large part a return to the desktop. The Start menu returned, and each application again had a window. WinRT lived on though, now rebranded as UWP (Universal Windows Platform). The big selling point was that your UWP app would run on phones, Xbox and HoloLens as well as PCs. It was still locked down, though less so, and still Store-delivered.

Then Microsoft decided to abandon Windows Phone, a decision obvious to Microsoft-watchers in June 2015 when ex-Nokia CEO Stephen Elop left Microsoft, just before the launch of Windows 10, even though Windows Phone was not formally killed off until much later. UWP now had a rather small u (that is, not very universal).

In addition, Microsoft decided that locking down UWP was not the way forward, and opened up more and more Windows APIs to the platform. The distinction between UWP and desktop applications was further blurred by Project Centennial, now known as Desktop Bridge, which lets you wrap desktop applications for Store delivery.

Perhaps the whole WinRT/UWP thing was not such a good idea. A side-effect though of all the focus on UWP was that the old development frameworks, such as Windows Forms (WinForms) and Windows Presentation Foundation (WPF), received little attention – even though they were more widely used. Some Windows 10 APIs were only available in UWP, while other features only worked in WinForms or WPF, giving developers a difficult decision.

The Build 2018 event, held last week in Seattle, was the moment Microsoft announced that it would endeavour to undo the damage by bringing UWP and desktop development together. “We’ve taken all the UI stacks and merged them together,” said Mike Harsh and Scott Hunter in a session on “Modernizing desktop apps” (BRK3501 if you want to look it up).

According to Harsh and Hunter, Windows desktop application development is increasing, despite the decline of the PC (note that this is hardly a neutral source).

So what was actually announced? Here is a quick summary. Note that the announced features are for the most part applicable to future versions of Windows 10. As ever, Build is for the initial announcement. So features are subject to change and will not work yet, other than possibly in pre-release form.

Greater information density in UWP applications. WinRT/UWP was originally designed for touch control, so it has lots of white space. Most Windows users, though, have a mouse and keyboard. The spacious UWP layout looked wrong on big desktop displays, and it made porting applications harder. The standard layout is getting less dense, and a new Compact Size, an application setting, will pack more information into the same space.

More controls for UWP. New DataGrid, Forms with data validation, Menu bar, and coming in future, Status bar, tab controls and Ribbon. The idea is to make UWP more suitable for line-of-business applications, which accounts for a large part of Windows application development overall.

New Windowing APIs for UWP. WinRT/UWP was designed for full-screen applications, not the popup-dialogs or floating windows possible in desktop applications. Those capabilities are coming though. We will get tool windows, light-dismiss windows (eg type and press Enter), and multiple windows on one thread so that they work like a single application when minimized or cycled through with alt-tab. Coming in future are topmost windows, modal windows, custom title bars, and maybe even MDI (Multiple Document Interface), though this last seems surprising since it is discouraged even in the desktop frameworks.

What many developers will care about more though is new features coming to desktop applications. There are two big announcements.

.NET Core 3.0 will support WinForms and WPF. This is big news, partly because it performs better than the Windows-only .NET Framework, but more importantly because it allows side-by-side deployment of the .NET runtime. Even better, a linker will let you deliver a .NET Core desktop application as a single executable with no dependencies. What performance gain? An example shown at Build was an application which uses File APIs running nearly three times faster on .NET Core 3.0.

XAML Islands enabling UWP features in WinForms and WPF. The idea is that you can pop a UWP host control in your WinForms or WPF application, and show UWP content there. Microsoft is also preparing wrapper controls that you can use directly. Mentioned were WebView, MediaPlayer, InkCanvas, InkToolBar, Map and SwapChainPanel (for DirectX content). There will be a few compromises. The XAML host window will be rectangular (based on an HWND) which means non-rectangular and transparent content will not work correctly. There is also the Windows 7 problem: no UWP on Windows 7, so what happens to your XAML Islands? They will not run, though Microsoft is working on a mechanism that lets your application substitute compatible Windows 7 content rather than crashing.

MSIX deployment. MSIX is Microsoft’s latest deployment technology. It will work with both UWP and Desktop applications, will support Windows 7 and 10, will provide for auto-updates, and will have tooling built into Visual Studio, as well as a packager for both your own and third-party applications. Applications installed with MSIX are managed and updated by Windows, have tamper protection, and are installed per-user. It seems to build upon the Desktop Bridge concept, the aim being to make Windows more manageable in the Enterprise as well as safer for all users, if Microsoft can get widespread adoption. The packaging format will also work on Android, Mac and Linux and you can check out the SDK here.

Will WPF or WinForms be updated?

The above does not quite answer the question: will WPF or Windows Forms be significantly updated, other than with the ability to use UWP content? I could not get a clear answer to this question at Build, though I was told that adding support for .NET Core 3.0 required significant changes to these frameworks, so it is no longer true to say they are frozen. With regard to WPF, Microsoft Corporate VP Julia Liuson told me:

“We will be looking at more controls, more capabilities. It is widely recognised that WPF is the best framework for desktop development on Windows. The fact that we’re moving on top of .NET Core 3.0 gives us a path forward.”

That said, I also heard that the team would rather write code once and use it across UWP, WPF and WinForms via XAML Islands, than write new controls for each framework. That makes sense, the difficulty being Windows 7. Microsoft would rather promote migration to Windows 10, than write new UI components that work across both Windows 7 and Windows 10.