Category Archives: development

Amazon Linux 2023: designed to be disposable

Amazon Linux 2023 is the default for Linux VMs on AWS EC2 (Elastic Compute Cloud). Should you use it? It is a DevOps decision, and the main reason you might is that it feels like playing safe: AWS support will understand it, it should be performance-optimised for EC2, and it should work smoothly with other AWS services.

Amazon Linux 2 was released in June 2018 and was the latest production version until March 2023, by which time it was very out of date. Based on CentOS 7, it was pretty standard and you could easily use additional repositories such as EPEL (Extra Packages for Enterprise Linux). It is easy to keep up to date with sudo yum update, though there is no in-place upgrade to a newer major version.

Amazon Linux 2023 is different in character. It was released in March 2023 and the idea is to have a major release every 2 years, and to support each release for 5 years. It does not support EPEL or repositories other than Amazon’s own. The docs say:

At this time, there are no additional repositories that can be added to AL2023. This might change in the future.

Confusingly, the docs also describe how to add an external repository. You can also compile your own RPMs and install them that way; but if you do, keeping them up to date is down to you.

The key to why this is lies in what AWS calls deterministic upgrades. Each version, including minor versions, is locked to a specific repository snapshot. You can upgrade to a new release, but you have to specify it explicitly. This is what I got today from my installation on Hyper-V:

Amazon Linux 2023 offering a new release

The command dnf check-release-update looks for a new release and tells you how to upgrade to it, but does not perform the upgrade by default.
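In practice the flow looks something like the following sketch; the release string here is only an example, and dnf check-release-update reports the exact value to use:

# Report whether a newer AL2023 release is available (does not install anything)
sudo dnf check-release-update

# Move to a specific release by naming it explicitly
# (version string is illustrative; use the one reported above)
sudo dnf upgrade --releasever=2023.3.20240219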

The reason, the docs explain, is that:

With AL2023, you can ensure consistency between package versions and updates across your environment. You can also ensure consistency for multiple instances of the same Amazon Machine Image (AMI). With the deterministic upgrades through versioned repositories feature, which is turned on by default, you can apply updates based on a schedule that meets your specific needs.

The idea is that if you have a fleet of Amazon Linux 2023 instances they all work the same. This is ideal for automation. The environment is predictable.

It is not ideal though if you have, say, one server, or a few servers doing different things, and you want to run them for a long time and keep them up to date. This will work, but the operating system is designed to be disposable. In fact, the docs say:

To apply both security and bug fixes to an AL2023 instance, update the DNF configuration. Alternatively, launch a newer AL2023 instance.

The key phrase is “launch a newer AL2023 instance”: if you have automation so that a new instance can be fired up configured as you want it, launching a new instance is just as logical as updating an existing one, and arguably safer.
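If you do want a long-lived instance that always tracks the newest release, my understanding of “update the DNF configuration” in the quote above is that you point the dnf releasever variable at latest, something like this (a sketch; check the AL2023 docs for the supported way to do it):

# Persistently track the newest release rather than the one the AMI shipped with
# (files in /etc/dnf/vars set dnf repository variables)
echo latest | sudo tee /etc/dnf/vars/releasever

# A plain upgrade then picks up security and bug fixes from the newest release
sudo dnf upgrade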

Amazon Linux 2023 on Hyper-V

Amazon Linux 2023 came out in March 2023, somewhat late given that it was originally called Amazon Linux 2022. Images for running it outside AWS took even longer to appear, and when they did arrive they were only for VMware and KVM, even though the older Amazon Linux 2 does have a Hyper-V image.

Update: Hyper-V is now officially supported, making this post obsolete, but it may still be of interest!

I wanted to try out AL 2023 and it makes sense to do that locally rather than spend money on EC2; but my server runs Windows Hyper-V. Migrating images between hypervisors is nothing new so I gave it a try.

  • I used the KVM image here (or the version that was available at the time).
  • I used the qemu-img disk image utility to convert the .qcow2 KVM disk image to .vhdx format; both this and the seed.iso step are sketched after this list. I got qemu-img by installing QEMU for Windows, without enabling the hypervisor itself.
  • I used the seed.iso technique to initialise the VM with an SSH key and a user with sudo rights. I found it helpful to consult the cloud-init documentation linked from that page.
  • In Hyper-V I created a new Generation 1 VM with 4GB RAM and set it to boot from the converted drive, with seed.iso in the virtual DVD drive. I started it up and it worked.
Amazon Linux 2023 running on Hyper-V
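For reference, this is roughly what the conversion and seed.iso steps looked like. File names, the user name and the key are placeholders, and I built the ISO with genisoimage; any tool that produces an ISO with the volume label cidata containing user-data and meta-data files should do:

# Convert the KVM disk image to Hyper-V's VHDX format
qemu-img convert -f qcow2 -O vhdx -o subformat=dynamic al2023-kvm.qcow2 al2023.vhdx

# Build a NoCloud seed.iso for cloud-init
cat > meta-data <<'EOF'
local-hostname: al2023-hyperv
EOF

cat > user-data <<'EOF'
#cloud-config
# Example user; substitute your own user name and public SSH key
users:
  - name: admin
    groups: wheel
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAA...your-public-key...
EOF

genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data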

I guess I should add the warning that installing on Hyper-V is not supported by AWS; on the other hand, installing locally has official limitations anyway. Even if you install on KVM, the notes state that the KVM guest agent is not packaged or supported, VM hibernation is not supported, VM migration is not supported, passthrough of any device is not supported, and so on.

What about the Hyper-V integration drivers? Note that “Linux Integration Services has been added to the Linux kernel and is updated for new releases.” Running lsmod shows that the essentials are there:

The Hyper-V modules are in the kernel in Amazon Linux 2023

Networking worked for me without resorting to legacy network card emulation.

This exercise also taught me about the different philosophy in Amazon Linux 2023 versus Amazon Linux 2. That will be the subject of another post.

The era of tiny PCs: 400g and smaller than a paperback book

My work PC for the last few years has been a 2018 HP Omen gaming PC, which has been brilliant; I have replaced the GPU and added storage, but everything still works fine. At least, it was my work PC until I reviewed a mini PC which surprised me with its capability – not because it is exceptional, but because everyday technology has reached the point where anything bigger is unnecessary for most purposes other than gaming.

Mini PC with paperback book and CD to show the size

The new PC is a Trigkey S5 with an AMD Ryzen 5560 CPU, 500GB NVMe SSD and 16GB DDR4 RAM, and currently costs around £320. Its Geekbench CPU score is better than that of my five-year-old HP with a Core i7.

The GPU score, though, is way below the old HP's.

Still, there is support for three displays via HDMI, DisplayPort and USB-C, and 4K at 60Hz is no problem.

Inside we find branded RAM, and it does not look as if the components are shoe-horned in; there is plenty of space.

The power supply is external and rated at 19V and 64.98W.

Expansion is via four USB-A ports, one USB-C port, and the aforementioned HDMI and DisplayPort sockets. There is also an Ethernet port, and of course Bluetooth and Wi-Fi.

Operating system? Interesting. It is not mentioned in the blurb, but Windows 11 happens to be installed – with one of those volume MAK (Multiple Activation Key) licenses that is not suitable for this kind of distribution (though it costs the vendor hardly anything). When first run, Windows setup states that “you may not use this software if you have not validly acquired a license for the software from Microsoft or its licensed distributors,” which you likely have not; Trigkey presumably reckons that most of its customers will not care. I recommend installing your own licensed copy of Windows, as I have done, or your preferred Linux distribution.

Windows does run well, however, and 16GB RAM is enough for Hyper-V and Windows Subsystem for Linux (WSL) 2 to work comfortably. Visual Studio 2022, VS Code and Microsoft Office all run fine.

I am not suggesting that this particular model is the one to get, but I do think that something like this, small, light, and power-sipping, is now the sane choice for most desktop PC users.

Thoughtworks: do not choose to develop Single Page Applications by default

I have a lot of time for Thoughtworks, a global software development company, and always look at its Technology Radar, the latest version of which appeared recently. Plenty to digest, but what caught my eye was this comment regarding SPAs (Single Page Applications):

The sheer prevalence of teams choosing a single-page application (SPA) by default when they need a website has us concerned that people aren’t even recognizing SPAs as an architectural style to begin with, instead immediately jumping into framework selection. SPAs incur complexity that simply doesn’t exist with traditional server-based websites: search engine optimization, browser history management, web analytics, first page load time, etc. That complexity is often warranted for user experience reasons, and tooling continues to evolve to make those concerns easier to address (although the churn in the React community around state management hints at how hard it can be to get a generally applicable solution). Too often, though, we don’t see teams making that tradeoff analysis, blindly accepting the complexity of SPAs by default even when the business needs don’t justify it.

This struck a chord with me because of my adventures creating an online bridge-playing platform using ASP.NET Core. I picked the platform because I was in a hurry, like C#, and had some existing code for implementing a bridge game, written for Windows. Any online game, though, needs lots of JavaScript, and I soon became aware that the traditional ASP.NET approach, where each web page is a separate .cshtml file with server-side rendering and C# code-behind, is at odds with trends towards SPAs and JAMstack (JavaScript, API and Markup, where “Markup” is HTML and CSS).

Note that you can of course do SPAs and JAMstack with ASP.NET; ASP.NET is a nice technology for implementing an API, and there are Visual Studio templates for things including “ASP.NET Core with React.js and Redux”. A Razor Pages application is still the default though, and gives you a UI for ASP.NET Core Identity for free, which saved me a lot of gruntwork. Still, as I got deeper into JavaScript libraries, including the AWS JavaScript SDK which I am using for audio and video, I found myself being steered towards React.js (resisted so far) and JavaScript bundling with Webpack (tried, but it was not a good fit). I also found that even switching my JavaScript code to TypeScript was surprisingly awkward, considering that the creator of TypeScript works for Microsoft. I found myself wondering if I should have started with an SPA, or should convert my application to an SPA, in order to fit in with the new world.

Separately, I’ve been involved with another project, in PHP and JavaScript, which is an SPA, and I have hit some of those potential issues. For example, the application made a ton of database queries on first load, the data from which was in most cases never used, as users did not visit the parts of the application that required it. Refactoring to load this data on demand has made the application faster and more efficient.

A problem, which Thoughtworks alludes to in a remark about “closing the gap on user experience,” is that staying in JavaScript rather than loading a new page from the server generally makes for a smoother application. The way my bridge application has evolved is that the main play screen is a kind of SPA: everything is done in JavaScript and API calls, and I have written a ton of JavaScript code for things like rendering HTML tables, where server-side rendering with Razor would be much easier to code but would give unacceptable usability. However, different parts of the application still use separate Razor pages, for things like viewing results, configuring a user profile, finding a game, and admin screens for managing members and running sessions.

JavaScript, now TypeScript, has exceeded my expectations in terms of performance and capability. It is annoying at times, but a modern web browser is a phenomenal platform. I was glad, though, to see Thoughtworks noting that going the SPA route is not always the right decision.

Debugging Safari on an old iPad

Someone was trying to use the bridge application I have in progress on an iPad 2. There were a couple of interesting things about this. One was that I had to rethink the warning, based on Modernizr, that the application throws up when it detects an incompatible web browser. The problem (obvious when you think about it) is that if you use potentially incompatible features in the same page where you test for them, then on an old browser the JavaScript fails with a syntax error and the warning never appears. The fix: I now show the warning by default, and the compatibility check hides it.

Still, I was interested in the Safari error and wanted to debug it, in case it was something I could fix. How do you debug Safari on an iPad?  The way it is meant to work is this:

– On a Mac, enable the Safari Develop menu (in Safari preferences, Advanced, Show Develop menu).

– On iOS, enable Safari Web Inspector (Settings – Safari – Advanced – Web Inspector).

– Connect the iPad to the Mac via USB. You can now use Web Inspector on the Mac to debug the Safari iOS pages and scripts.

This did not work for me on my Catalina Mac: the iPad's Safari did not show up in the Web Inspector on the Mac. I could get it to appear briefly by switching Web Inspector on the iPad off and on again, but after that, no go. I tried a few things, but none of the proposed solutions I could find for this issue fixed it for me.

I have an older 2011 Mac Mini in a drawer, so I thought that might work, being a similar age to the iPad. I fired it up, marvelled at how old-fashioned the UI looked (I had reset it to OS X Lion), and connected the iPad. No go. Same problem as with Catalina.

Surprisingly, what did work was following the instructions here (more or less) for debugging Safari iOS on Windows. This is based on the RemoteDebug iOS WebKit Adapter described here, a project which originated as an internal Microsoft experiment.
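Roughly, the setup on Windows looked like this – from memory of the project's instructions, so check the README for the current steps; it assumes iTunes (for the Apple USB drivers), Node.js and the Scoop package manager are installed:

# ios-webkit-debug-proxy provides the connection to the device
scoop bucket add extras
scoop install ios-webkit-debug-proxy

# Install and run the adapter itself
npm install remotedebug-ios-webkit-adapter -g
remotedebug_ios_webkit_adapter --port=9000

# Then, with Web Inspector enabled on the iPad and the iPad connected over USB,
# open chrome://inspect in Chrome and add localhost:9000 as a discovery target.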


I did find it amusing that I could do this on Windows, having failed with the Mac.

The next generation of this is Inspect, which is in private beta; the GitHub page for RemoteDebug says the adapter has been superseded and recommends using Inspect instead.

RemoteDebug worked for me, though.

Wrestling with Azure DevOps Pipelines

Pipelines is an Azure DevOps service that enables a powerful feature: the ability to set up continuous integration and deployment (CI/CD). I have tangled with it before, in the context of trying Azure Kubernetes Service, but managed to avoid getting deep into YAML, which is the language of Pipelines. I am working on a web application and trying to get it up to scratch as quickly as possible, especially as there are now a bunch of users who are being patient over glitches during development but whose patience may run out.

The application uses .NET Core, which for the most part is working well for me. I am using Visual Studio 2019, with occasional forays into Visual Studio Code (VS Code), and deploying to a Linux Azure App Service. Everything was fine until one day the Web Deploy feature in Visual Studio stopped working with “could not complete the request to remote agent … the operation has timed out.” I appealed for help but with no result yet.

All was not lost as I found that the VS Code Deploy to Azure extension worked pretty well. All I needed to do was to open the solution folder in VS Code, run:

dotnet publish -c Release -o ./publish

in the terminal, then right-click the publish folder and choose Deploy to Web App. There are a few annoyances but it solved the immediate problem.
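For what it is worth, the same publish-and-deploy can also be scripted from a terminal with the Azure CLI. This is only a sketch, and the resource group and app names are placeholders:

# Publish, zip the output, and push it to the App Service using zip deploy
dotnet publish -c Release -o ./publish
(cd publish && zip -r ../publish.zip .)
az webapp deployment source config-zip \
    --resource-group my-resource-group \
    --name my-web-app \
    --src publish.zip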

One can do better though. Rather than deploying manually, you can create a pipeline using Azure DevOps (the thing that was once called Visual Studio Online and is the cloud version of Team Foundation Server). An attraction of using Azure DevOps is that you get “1 Microsoft-hosted job with 1,800 minutes per month for CI/CD” for free, which seems decent.

I got started, creating an Azure DevOps project and adding a pipeline. You authorize it to access your GitHub repository (if that is what you have, as I do) and then end up in an editor that looks like this:


I soon got frustrated. The Pipelines service seems fundamentally excellent but is spoilt by poor documentation and some odd behaviour – at least for .NET Core. It took me hours to achieve a basic setup that would build, test and deploy my simple web application. Most of the time was spent watching pipelines fail to run and trying to figure out why.

When you run a pipeline you may notice that it uses .NET Core 2.1 and warns that 2.2 and 3.0 are end of life:

How do you get it to use .NET Core 3.1? You can add a task called UseDotNet@2 and specify the version. I put 3.1 and it was rejected, as it wants a full version number. I put 3.1.301 and it worked.

The most time-consuming thing for me was running tests. The application uses an Azure SQL database, and it is unwise to put the database password in appsettings.json in the GitHub repository. How then do you connect to the database? The docs anticipate this: you can use a feature called variable substitution. In the Pipeline editor, you can add variables and mark them as secret, so they are not included in logs. You can also use variables from Azure Key Vault. Then you can use the FileTransform@2 task to replace the connection string in appsettings.json with the one you need, including the password. I do not think this is ideal from a security perspective – you are still putting the password in plain text in a configuration file – but it beats having it in the GitHub repository.

I had many issues. The main documentation on variable substitution is here. It is terrible. Note that the YAML example for JSON file substitution (which is what we need) does not even use FileTransform@2; it uses AzureRmWebAppDeployment@4, which does a whole lot of other stuff, as described here. Maybe I should have tried that, but FileTransform@2 looked like the right thing. Unfortunately it generally gives the error “Cannot perform XML transformations on a non-Windows platform.” No, I am not trying to do an XML transformation. Even if you specify the fileType as json and set enableXmlTransform to false, you still get the error. Later research suggests you can beat this error by setting xmlTransformationRules to an empty string. I gave up, though, and used FileTransform@1 (an older version of the task), which works as expected.

I still did not get the result I wanted though: all the tests using the database failed. Eventually I figured out that I had to set the folderPath to $(Build.SourcesDirectory). Then it worked.

This was good. Now my tests run in Linux rather than on Windows, matching the deployed environment. In a full production environment I would use a second Azure SQL database for the tests, but for development this will do.

I then created a staging slot in the App Service and added a deployment step to deploy the application to that slot. Again, this is good. The application will not deploy unless it passes all the tests (a built-in feature of Pipelines, since each step only runs if the previous steps succeeded). It deploys to staging, which has a separate URL, so you can try it out and not swap it into production until you are ready.
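Creating and swapping the slot can also be done from the command line, incidentally. Another sketch, again with placeholder resource group, app and slot names:

# Create a staging slot alongside the production App Service
az webapp deployment slot create \
    --resource-group my-resource-group \
    --name my-web-app \
    --slot staging

# When the staged deployment looks good, swap it into production
az webapp deployment slot swap \
    --resource-group my-resource-group \
    --name my-web-app \
    --slot staging \
    --target-slot production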

Overall, it is a better solution than the Visual Studio web deploy which it replaces, so perhaps the error did me a favour. It will work with Visual Studio as well as with VS Code, since it triggers automatically on every code commit. The Publish option in Visual Studio becomes redundant.

Note that Visual Studio also has an option to set this up automatically.


I tried it, letting the wizard do what it wanted, including creating a new Azure DevOps project and a new App Service plan. Notable things:

– It created a pipeline using the Classic UI rather than the YAML based editor

– It uses an agent (the VM where the pipeline runs) called vs2017-win2016

– The pipeline did not get very far, failing on NuGet restore

No, I am not going to bother troubleshooting this.

This time yesterday I hated Azure DevOps pipelines. Nothing worked first time, YAML is a hostile editing environment (whitespace matters), and the documentation frustrated me. Now I feel pleased and I have this nice badge in my repository.


I am left with a nagging feeling though that all this is more difficult than it should be. It seems to me that what I wanted to do is commonplace: use .NET Core, use Azure App Service, have my pipeline build the project, run tests and deploy to staging. In many cases you could add applying Entity Framework migrations to that list. I did not find this documented in any one place, and as a result it took more time than it should have to figure out.

Five facts about Rust

Rust is a programming language aimed at system programming – for which high performance and low-level system access are essential – but with safety features that make it harder to write dangerous or insecure code (though it is still possible). Since all programmers value both speed and stability, Rust is being used for tasks other than system programming as well. Rust is open source and sponsored by Mozilla, which uses Rust in its own development, including parts of the Firefox web browser.

First, Rust is not one of the most-used programming languages; according to a Stack Overflow survey, only 3.2% of developers use it, and among professional developers that figure drops to 3.0%.

Yet Rust comfortably tops the list of most loved languages.


Second, Rust has built-in support for unit tests, in conjunction with Cargo, the Rust build system and package manager. Cargo will both generate test functions and run tests for you. You can do unit tests in any language, but this is a great way to prompt developers to use them. Tests are a big deal. I recall SQLite developer Dr D. Richard Hipp telling me that testing was core to the project and that without it, it could not progress as it does. SQLite has 662 times more test code than the code in the SQLite library itself.
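To illustrate how little ceremony is involved, here is a minimal sketch (the crate name is arbitrary): a new library crate generated by Cargo already contains a sample test module, and cargo test builds and runs it:

# Create a new library crate; Cargo generates src/lib.rs including a
# #[cfg(test)] module with a sample #[test] function
cargo new my_library --lib
cd my_library

# Build the crate and run every test it can find
cargo test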

Third, Rust can be compiled to WebAssembly so you can run it in a web browser.

Fourth, Microsoft is considering using Rust on the basis that it “could eliminate an entire class of vulnerabilities before they ever happened”.

Fifth, work is under way to build a new operating system with Rust, called Redox. I wrote about this briefly for The Register.

If asked to think of a language that is as efficient and powerful as C++ but nicer and, for many of us, more productive to use, I think of Delphi (or Object Pascal). Delphi has an ardent niche following but is unlikely to grow its usage much beyond it, and it is not open source, unless you mean Lazarus. Rust, on the other hand, is a modern language that benefits from what we have learned about programming in the last forty years (C++ grew out of ideas Bjarne Stroustrup formed while writing his PhD thesis, though the name dates from 1983), and comes with a refreshing lack of legacy.

Worth a look if you have a moment – see here for how Verity Stob got on.

Xcode on Catalina update hassles

I have a Mac running Catalina. It is almost new and I did not migrate anything from the old Mac, so it should be a very clean install.

I installed Xcode 11 from the App Store. All fine.

Yesterday it wanted to update to Xcode 11.1. But the update took a long time and then failed. Try again later. I did. Same. The App Store UI gives you no clue what is not working.

I ran the Console app to check the log. The install failed with “The package is attempting to install content to the system volume.”

Annoying. One suggested fix is to download the DMG. Another is to uninstall and then reinstall from the App Store. I like having it managed by the App Store, so I did the latter and it worked.

Together with the GIMP permission problems, it looks like permission issues in Catalina are a considerable annoyance. Which is OK if security is better as a result; but that does not excuse this kind of arbitrary behaviour.

The future of WPF for developers who need to support Windows 7

If you talk to Microsoft about what is new for Windows Presentation Foundation (WPF), a framework for Windows desktop applications, the answer tends to revolve around the Windows UI Library (WinUI), user interface controls for the Universal Windows Platform and therefore Windows 10, which you can use with WPF. That is no use if you need to compile applications that work on Windows 7. Is WPF on Windows 7 in effect frozen?

Not quite. First, note that WPF (and Windows Forms) was updated for .NET Framework 4.8, with High DPI enhancements and bug fixes. The complete list of fixes is here. So there have been recent updates.

Microsoft says though that .NET Framework 4.8 is the “last major version” of .NET Framework. This suggests that WPF on .NET Framework will not change much in future. WPF is open source; but the open source project targets .NET Core, the cross-platform version of .NET. In addition, there are a few features in WPF for .NET Framework that will never be ported, including XBAPs (XAML Browser Applications) – probably not something you care about.

The good news though is that .NET Core does run on Windows 7 (currently SP1 is required). You can see the progress of WPF on .NET Core here. It is not yet done and there are a few things that will never be supported. But when this is production-ready, it is likely that the open source WPF will run on Windows 7 and thus benefit from any updates and fixes made to the code.
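If you want to experiment, trying WPF on .NET Core is already straightforward; a sketch, assuming a .NET Core 3.0 preview SDK on Windows and an arbitrary project name:

# Requires a .NET Core 3.0 SDK (preview at the time of writing), Windows only
dotnet new wpf -o HelloWpfCore
cd HelloWpfCore
dotnet run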

From what I have learned here at Build, Microsoft's developer conference, it is the .NET Core work that is currently top of mind for the WPF team. This means that WPF on Windows 7 does have a future – provided that .NET Core continues to support Windows 7. This proviso is important, since that is the decision of a different team. At some point there will be a version of .NET Core that does not support Windows 7, and that will be the moment when WPF cannot really progress on that operating system.

There may also be a special case: presuming the Chromium-based Edge runs on Windows 7, WPF may get a new Edge-based WebView control that works there too.

Summary: WPF (and Windows Forms) on .NET Framework is not going to change much in future. If you can transition to using these frameworks on .NET Core, there is more hope of improvements – though there is no magic that will make Windows 10 features available on Windows 7.

One .NET: unification of .NET for Windows and .NET Core, Xamarin too

Microsoft’s forking of the .NET development platform into the Windows-only .NET Framework on one side and the cross-platform .NET Core on the other has caused considerable confusion. Which should you target? What is the compatibility story? And where does Mono, the older cross-platform .NET, fit in? Xamarin, partly based on Mono, is another piece of the puzzle.

Now Microsoft has announced that .NET 5, coming in November 2020, will unify these diverse .NET versions.

“There will be just one .NET going forward, and you will be able to use it to target Windows, Linux, macOS, iOS, Android, tvOS, watchOS and WebAssembly and more,” says Microsoft’s Rich Turner.


Following the release of .NET 5.0, the framework will have a major release every November, says Turner, with a long-term support release every two years.

Some other key announcements:

  • CoreCLR (the .NET Core runtime) and Mono will become drop-in replacements for one another.
  • Java interoperability will be available on all platforms.
  • Objective-C and Swift interoperability will be supported on multiple operating systems.
  • CoreFX will be extended to support static compilation of .NET and support for more operating systems.

A note of caution though. Turner says there are a number of issues still to be resolved. There is room for scepticism about how complete this unification will be.

More details in the official announcement here.

Update: having looked at these plans in a little more detail, it is wrong to say that Microsoft is unifying .NET Framework and .NET Core. Rather, Microsoft is saying that .NET Core is the replacement for .NET Framework for new applications, whether on Windows or elsewhere. Certain parts of .NET Framework, including WCF, Web Forms and Windows Workflow, will never be migrated to .NET 5. .NET Framework 4.8 will still be maintained and is recommended for existing applications.