Category Archives: development


Thoughtworks: do not choose to develop Single Page Applications by default

I have a lot of time for Thoughtworks, a global software development company, and always look at its Technology Radar, the latest version of which appeared recently. Plenty to digest, but what caught my eye was this comment regarding SPAs (Single Page Applications):

The sheer prevalence of teams choosing a single-page application (SPA) by default when they need a website has us concerned that people aren’t even recognizing SPAs as an architectural style to begin with, instead immediately jumping into framework selection. SPAs incur complexity that simply doesn’t exist with traditional server-based websites: search engine optimization, browser history management, web analytics, first page load time, etc. That complexity is often warranted for user experience reasons, and tooling continues to evolve to make those concerns easier to address (although the churn in the React community around state management hints at how hard it can be to get a generally applicable solution). Too often, though, we don’t see teams making that tradeoff analysis, blindly accepting the complexity of SPAs by default even when the business needs don’t justify it.

This struck a chord with me because of my adventures creating an online bridge playing platform using ASP.NET Core. I picked the platform because I was in a hurry, I like C#, and I had some existing code for implementing a bridge game, written for Windows. Any online game needs lots of JavaScript though, and I soon became aware that the traditional ASP.NET approach, where each web page is a separate .cshtml file with server-side rendering and C# code-behind, is at odds with trends towards SPAs and JAMstack (JavaScript, API and Markup, where “Markup” means HTML and CSS).

Note that you can of course do SPAs and JAMstack with ASP.NET; ASP.NET is a nice technology for implementing an API, and there are Visual Studio templates for things including “ASP.NET Core with React.js and Redux”. A Razor Pages application is still the default though, and gives you a UI for ASP.NET Core Identity for free, which saved me a lot of gruntwork. Still, as I got deeper into JavaScript libraries, including the AWS JavaScript SDK which I am using for audio and video, I found myself being steered towards React.js (resisted so far) and JavaScript bundling with Webpack (which I tried, but it was not a good fit). I also found that even switching my JavaScript code to TypeScript was surprisingly awkward, considering that the creator of TypeScript works for Microsoft. I found myself wondering if I should have started with an SPA, or should convert my application to an SPA, in order to fit in well with the new world.

Separately, I’ve been involved with another project, in PHP and JavaScript, which is an SPA, and hitting some of the potential issues. For example, the application made a ton of database queries on first load, the data from which was in most cases never used, as users did not visit the parts of the application that required them. Refactoring to load this data on demand has made the application faster and more efficient.

A complication, which Thoughtworks alludes to in a remark about “closing the gap on user experience,” is that staying in JavaScript rather than loading a new page from the server generally makes for a smoother application. The way my bridge application has evolved is that the main play screen is a kind of SPA: everything is done in JavaScript and API calls, and I have written a ton of JavaScript code for things like rendering HTML tables, where server-side rendering with Razor would be much easier for me but unacceptable for usability. However, different parts of the application still use separate Razor pages, for things like viewing results, configuring a user profile, finding a game, and admin screens for managing members and running sessions.
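
For a flavour of what that involves, here is a trivial sketch of the pattern – the endpoint and field names are invented for illustration, not taken from my application:

// Fetch results from the API and render them as an HTML table –
// the kind of work Razor would otherwise do on the server.
async function renderScores(containerId) {
  const response = await fetch('/api/scores'); // hypothetical endpoint
  const scores = await response.json();
  const table = document.createElement('table');
  for (const row of scores) {
    const tr = table.insertRow();
    tr.insertCell().textContent = row.player; // hypothetical field names
    tr.insertCell().textContent = row.score;
  }
  const container = document.getElementById(containerId);
  container.innerHTML = '';
  container.appendChild(table);
}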

JavaScript, now TypeScript, has exceeded my expectations in terms of performance and capability. It is annoying at times, but a modern web browser is a phenomenal platform. I was glad though to see Thoughtworks noting that going the SPA route is not always the right decision.

Debugging Safari on an old iPad

Someone was trying to use the bridge application I have in progress, on an iPad 2. There were a couple of interesting things about this. One was that I had to rethink the warning, based on Modernizr, which the application throws up when it detects an incompatible web browser. The problem (obvious when you think about it) is that if you use some potentially incompatible features in the same page where you are testing for them, then with an old web browser the JavaScript fails with a syntax error and the warning does not appear. The fix: I now show the warning by default, and the compatibility check hides it.
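
As a sketch of the pattern (the element id and feature checks are illustrative, not my actual code): the warning markup is visible by default, and the detection script hides it, so on a browser too old to parse the script the warning simply stays put.

<div id="browser-warning">
  This application may not work in your web browser. Please upgrade to a current browser.
</div>
<script>
  // Modern syntax is used deliberately: if an old browser throws a
  // syntax error here, the warning above remains visible.
  const ok = Modernizr.flexbox && Modernizr.fetch && Modernizr.promises;
  if (ok) {
    document.getElementById('browser-warning').style.display = 'none';
  }
</script>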

Still, I was interested in the Safari error and wanted to debug it, in case it was something I could fix. How do you debug Safari on an iPad?  The way it is meant to work is this:

– On a Mac, enable the Safari Develop menu (in Safari preferences, Advanced, Show Develop menu).

– On iOS, enable Safari Web Inspector (Settings – Safari – Advanced – Web Inspector).

– Connect the iPad to the Mac via USB. You can now use Web Inspector on the Mac to debug the Safari iOS pages and scripts.

This did not work for me on my Catalina Mac. The iOS Safari did not show up in the Web Inspector on Safari Mac. I could get it to show briefly, by switching Web Inspector on the iPad off and on again, but after that, no go. I tried a few things, but none of the proposed solutions I could find for this issue fixed it for me.

I have an older 2011 Mac Mini in a drawer, so I thought that might work, being a similar age to the iPad. I fired it up, marvelled at how old-fashioned the UI looked (I had reset it to OS X Lion), and connected the iPad. No go. Same problem as with Catalina.

Surprisingly, what did work were the instructions here (more or less) for debugging Safari iOS on Windows. This is based on the RemoteDebug iOS WebKit Adapter described here, a project which originated as an internal Microsoft experiment.

[Screenshot: debugging iOS Safari from Windows with the RemoteDebug adapter]

I did find it amusing that I could do this on Windows, having failed with the Mac.

The next generation of this is Inspect. Inspect is still in private beta, even though the GitHub page for RemoteDebug says the adapter has been superseded and that you should use Inspect instead.

The RemoteDebug adapter worked for me, though.

Wrestling with Azure DevOps Pipelines

Pipelines is an Azure DevOps service that enables a powerful feature: the ability to set up continuous integration and deployment. I have tangled with it before, in the context of trying Azure Kubernetes Service, but managed to avoid getting deep into the YAML which is the language of Pipelines. I am working on a web application and trying to get it up to scratch as quickly as possible, especially as there are now a bunch of users who are being patient over glitches during development but whose patience may run out.

The application uses .NET Core which for the most part is working well for me. I am using Visual Studio 2019, with occasional forays into Visual Studio Code (VS Code), and deploying to a Linux Azure App Service. Everything was fine until one day when the Web Deploy feature in Visual Studio stopped working with “could not complete the request to remote agent … the operation has timed out.” I appealed for help but with no result yet.

All was not lost as I found that the VS Code Deploy to Azure extension worked pretty well. All I needed to do was to open the solution folder in VS Code, run:

dotnet publish -c Release -o ./publish

in the terminal, then right-click the publish folder and choose Deploy to web app. There are a few annoyances but it solved the immediate problem.

One can do better though. Rather than deploying manually, you can create a pipeline using Azure DevOps (the thing that was once called Visual Studio Online and is the cloud version of Team Foundation Server). An attraction of using Azure DevOps is that you get “1 Microsoft-hosted job with 1,800 minutes per month for CI/CD” free, which seems decent.

I got started, creating an Azure DevOps project and adding a pipeline. You authorize it to access your GitHub repository (if that is what you have, as I do) and then end up in an editor that looks like this:

[Screenshot: the Azure Pipelines YAML editor]

I soon got frustrated. The Pipelines service seems fundamentally excellent but spoilt by poor documentation and some odd behaviour – at least for .NET Core. It took me hours to achieve a basic setup that would upload, test and deploy my simple web application. Most of the time was spent observing pipelines fail to run and trying to figure out why.

When you run a pipeline you may notice that it uses .NET Core 2.1 and warns that 2.2 and 3.0 are end of life.

How do you get it to use .NET Core 3.1? You can add a task called UseDotNet@2 and specify the version. I put 3.1 and it was rejected, as the task likes a full version. I put 3.1.301 and it worked.
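
For reference, the step I ended up with looks something like this minimal sketch (3.1.301 was simply the current SDK version at the time):

- task: UseDotNet@2
  displayName: 'Install .NET Core SDK'
  inputs:
    packageType: 'sdk'
    version: '3.1.301' # a full version number; plain '3.1' was rejected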

The most time-consuming thing for me was running tests. The application uses an Azure SQL database. It is unwise to put the database password in appsettings.json in the GitHub repository. How then do you connect to the database? The docs anticipate this, and you can use a feature called variable substitution. In the Pipeline editor, you can add variables and mark them secret, so they are not included in logs. You can also use variables from Azure Key Vault. Then you can use the FileTransform@2 task to replace the connection string in appsettings.json with the one you need, including the password. I do not think this is ideal from a security perspective – you are still putting the password in plain text in a configuration file – but it beats having it in the GitHub repository.

I had many issues. The main documentation on variable substitution is here. This is terrible. Note that if you look at the YAML example for JSON file substitution (which is what we need) it does not even use FileTransform@2. It uses AzureRmWebAppDeployment@4 which does a whole lot of other stuff as described here. Maybe I should have tried that. But FileTransform@2 looked like the right thing. Unfortunately it generally gives the error “Cannot perform XML transformations on a non-Windows platform.” No, I am not trying to do an XML transformation. Even if you specify the fileType as json and set enableXmlTransform to false, you still get the error. Later research suggests you can beat this error by setting xmlTransformationRules to an empty string. I gave up though and used FileTransform@1 (an older version of the task) which works as expected.

I still did not get the result I wanted though. All the tests using the database failed. Eventually I figured out that I had to set the folderPath to $(Build.SourcesDirectory). Then it worked.
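
Putting those findings together, here is a sketch of the task that worked for me. For JSON substitution, a pipeline variable whose name matches the JSON path replaces that value, so a secret variable named ConnectionStrings.DefaultConnection (an example name, not necessarily yours) supplies the real connection string:

- task: FileTransform@1
  displayName: 'Substitute secrets into appsettings.json'
  inputs:
    folderPath: '$(Build.SourcesDirectory)' # without this, my database tests failed
    fileType: 'json'
    targetFiles: '**/appsettings.json'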

This was good. Now my tests run in Linux rather than on Windows, matching the deployed environment. In a full production environment I would use a second Azure SQL database for the tests, but for development this will do.

I then created a staging slot in the App Service and added a deployment step to deploy the application to that slot. Again, this is good. The application will not deploy unless it passes all the tests (this is a built-in feature of Pipelines, as each step does not run unless the previous steps succeeded). It deploys to staging which has a separate URL so you can try it out and not swap it to production until you are ready.
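
The deployment step looks something like the following sketch; the service connection, app name, resource group and slot are placeholders for whatever you have set up:

- task: AzureWebApp@1
  displayName: 'Deploy to staging slot'
  inputs:
    azureSubscription: 'my-service-connection' # placeholder: your Azure Resource Manager service connection
    appType: 'webAppLinux'
    appName: 'my-app-service' # placeholder
    deployToSlotOrASE: true
    resourceGroupName: 'my-resource-group' # placeholder
    slotName: 'staging'
    package: '$(System.DefaultWorkingDirectory)/**/*.zip'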

Overall, it is a better solution than the Visual Studio web deploy which it replaces, so perhaps the error did me a favour. It will work with Visual Studio as well as with VS Code, since it triggers automatically on every code commit. The Publish option in Visual Studio becomes redundant.

Note that Visual Studio also has an option to set this up automatically.


I tried it, letting the wizard do what it wanted including creating a new Azure DevOps project and a new App Service plan. Notable things:

– It created a pipeline using the Classic UI rather than the YAML based editor

– It uses an agent (the VM where the pipeline runs) called vs2017-win2016

– The pipeline did not get very far, failing on NuGet restore

No, I am not going to bother troubleshooting this.

This time yesterday I hated Azure DevOps pipelines. Nothing worked first time, YAML is a hostile editing environment (whitespace matters), and the documentation frustrated me. Now I feel pleased and I have this nice badge in my repository.

[Image: the Azure Pipelines build status badge]

I am left with a nagging feeling though that all this is more difficult than it should be. It seems to me that what I wanted to do is commonplace: use .NET Core, use Azure App Service, have my pipeline build the project, run tests and deploy to staging. In many cases you could add applying Entity Framework migrations to that list. I did not find this documented in any one place, and the result was that it took more time than it should have to figure out.

Five facts about Rust

Rust is a programming language aimed at system programming – for which high performance and low-level system access are essential – but with safety features that make it harder to write dangerous or insecure code (though it is still possible). Since all programmers value both speed and stability, Rust is being used for tasks other than system programming as well. Rust is open source and sponsored by Mozilla, which uses Rust in its own development, including parts of the Firefox web browser.

First, Rust is not one of the most-used programming languages; according to a StackOverflow survey, only 3.2% of developers use it. Among professional developers that figure drops to 3.0%.

Yet Rust comfortably tops the list of most loved languages.

[Chart: most loved languages, from the StackOverflow developer survey]

Second, Rust has built-in support for unit tests, in conjunction with Cargo, the Rust build system and package manager. Cargo will both generate test functions and run tests for you. You can do unit tests in any language, but this is a great way to prompt developers to use them. Tests are a big deal. I recall SQLite developer Dr D. Richard Hipp telling me that testing was core to the project and that without it, it could not progress as it does. SQLite has 662 times more test code than the code in the SQLite library itself.
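
By way of illustration, this is the kind of test module cargo new generates, runnable with cargo test; a trivial function and its unit test live in the same file, as is conventional in Rust:

pub fn add(a: i32, b: i32) -> i32 {
    a + b
}

#[cfg(test)]
mod tests {
    use super::*;

    // cargo test finds and runs this automatically.
    #[test]
    fn adding_works() {
        assert_eq!(add(2, 2), 4);
    }
}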

Third, Rust can be compiled to WebAssembly so you can run it in a web browser.
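
One common route is the wasm-bindgen crate; here is a minimal sketch (built with a tool such as wasm-pack) that makes a Rust function callable from JavaScript:

use wasm_bindgen::prelude::*;

// Exported to JavaScript when compiled for the wasm32 target.
#[wasm_bindgen]
pub fn greet(name: &str) -> String {
    format!("Hello from Rust, {}!", name)
}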

Fourth, Microsoft is considering using Rust on the basis that it “could eliminate an entire class of vulnerabilities before they ever happened”.

Fifth, work is under way to build a new operating system with Rust, called Redox. I wrote about this briefly for the Register.

If asked to think of a language that is as efficient and powerful as C++ but nicer and, for many of us, more productive to use, I think of Delphi (or Object Pascal). Delphi has an ardent niche following but is unlikely to grow its usage much beyond it. Rust on the other hand is a modern language that benefits from things we have learned about programming in the last forty years (C++ was first conceived by Bjarne Stroustrup while writing his PhD thesis, though the name dates from 1983), and with a refreshing lack of legacy. And Delphi is not open source, unless you count Lazarus.

Worth a look if you have a moment – see here for how Verity Stob got on.

Xcode on Catalina update hassles

I have a Mac running Catalina. It is almost new and I did not migrate anything from the old Mac, so it should be a very clean install.

I installed Xcode 11 from the App Store. All fine.

Yesterday it wanted to update to Xcode 11.1. But the update took a long time and then failed. Try again later. I did. Same. The App Store UI gives you no clue what is not working.

I ran the Console app to check the log. The install failed with: “The package is attempting to install content to the system volume.”

Annoying. One suggested fix is to download the DMG instead. Another idea is to uninstall and then reinstall from the App Store. I like having it managed by the App Store, so I did the latter and it worked.

Together with the GIMP permission problems, it looks like permission issues in Catalina are a considerable annoyance. Which is OK if security is better as a result; but that does not excuse this kind of arbitrary behaviour.

The future of WPF for developers who need to support Windows 7

If you talk to Microsoft about what is new for Windows Presentation Foundation (WPF), a framework for Windows desktop applications, the answer tends to revolve around the Windows UI Library (WinUI), user interface controls for the Universal Windows Platform and therefore Windows 10, which you can use with WPF. That is no use if you need to compile applications that work on Windows 7. Is WPF on Windows 7 in effect frozen?

Not quite. First, note that WPF (and Windows Forms) was updated for .NET Framework 4.8, with High DPI enhancements and bug fixes. The complete list of fixes is here. So there have been recent updates.

Microsoft says though that .NET Framework 4.8 is the “last major version” of .NET Framework. This suggests that WPF on .NET Framework will not change much in future. WPF is open source; but the open source project targets .NET Core, the cross-platform version of .NET. In addition, there are a few features in WPF for .NET Framework that will never be ported, including XBAPs (XAML Browser Applications) – probably not something you care about.

The good news though is that .NET Core does run on Windows 7 (currently SP1 is required). You can see the progress of WPF on .NET Core here. It is not yet done and there are a few things that will never be supported. But when this is production-ready, it is likely that the open source WPF will run on Windows 7 and thus benefit from any updates and fixes made to the code.
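
If you want to experiment, targeting WPF from .NET Core 3.0 (in preview at the time of writing) is mainly a matter of the project file; a minimal sketch:

<Project Sdk="Microsoft.NET.Sdk.WindowsDesktop">
  <PropertyGroup>
    <OutputType>WinExe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <UseWPF>true</UseWPF>
  </PropertyGroup>
</Project>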

From what I have learned here at Build, Microsoft’s developer conference, it is the .NET Core work that is currently top of mind for the WPF team. This means that WPF on Windows 7 does have a future – provided that .NET Core continues to support Windows 7. This proviso is important, since it is the decision of a different team. At some point there will be a version of .NET Core that does not support Windows 7, and that will be the moment when WPF cannot really progress on that operating system.

There may also be a special case. Presuming Edge Chromium comes to Windows 7, WPF may get a new Edge-based WebView control that runs there too.

Summary: WPF (and Windows Forms) on .NET Framework is not going to change much in future. If you can transition to using these frameworks on .NET Core though, there is more hope of improvements, though there is no magic that will make Windows 10 features available on Windows 7.

One .NET: unification of .NET for Windows and .NET Core, Xamarin too

Microsoft’s forking of the .NET development platform into the Windows-only .NET Framework on one side, and the cross-platform .NET Core on the other, has caused considerable confusion. Which should you target? What is the compatibility story? And where does Mono, the older cross-platform .NET, fit in? Xamarin, partly based on Mono, is another piece of the puzzle.

Now Microsoft has announced that .NET 5, coming in November 2020, will unify these diverse .NET versions.

“There will be just one .NET going forward, and you will be able to use it to target Windows, Linux, macOS, iOS, Android, tvOS, watchOS and WebAssembly and more,” says Microsoft’s Rich Turner.


Following the release of .NET 5.0, the framework will have a major release every November, says Turner, with a long-term support release every two years.

Some other key announcements:

  • CoreCLR (the .NET Core runtime) and Mono will become drop-in replacements for one another.
  • Java interoperability will be available on all platforms.
  • Objective-C and Swift interoperability will be supported on multiple operating systems.
  • CoreFX will be extended to support static compilation of .NET and support for more operating systems.

A note of caution though. Turner says there are a number of issues still to be resolved. There is room for scepticism about how complete this unification will be.

More details in the official announcement here.

Update: having looked at these plans in a little more detail, it is wrong to say that Microsoft is unifying .NET Framework and .NET Core. Rather, Microsoft is saying that .NET Core is the replacement for .NET Framework for new applications whether on Windows or elsewhere. Certain parts of .NET Framework, including WCF, Web Forms, and Windows Workflow, will never be migrated to .NET 5. .NET Framework 4.8 will still be maintained and is recommended for existing applications.

Progressive Delivery: the next step in DevOps?

I attended the always-excellent QCon developer conference in London earlier this week. James Governor from RedMonk was there, presenting what he calls Progressive Delivery, the idea being that rather than rolling out continuous and (mostly) small changes to everyone, you segment your deployments. Progressive deployment, see.


It is not really a new idea and might even be considered a rediscovery of what we already knew: that it makes sense to deploy new stuff to a small sample first. However it is true that tools are constantly evolving, and Progressive Delivery is perhaps best seen as a necessary refinement to the Continuous Delivery concept. In particular, LaunchDarkly exhibited at QCon; the product is a feature management platform which lets you create groups of users and toggle features on or off for particular groups. Needless to say, the LaunchDarkly folk love the Progressive Delivery concept.

Why Progressive Delivery? My first reaction is that this is about caution: if stuff breaks, let us make sure it only breaks for a few users. Then I saw that it can be equally about bold experimentation, trying new ideas with small groups so you can observe what works and what does not.

Of course you can do this anyway and in the end there is no magic in LaunchDarkly; it is still down to the developer to write the code:

[Slide: example feature flag code]
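
The slide showed the gist of it; here is a minimal sketch using LaunchDarkly’s server-side Node SDK, with the SDK key, flag key, user and page functions all placeholders:

const LaunchDarkly = require('launchdarkly-node-server-sdk');

const client = LaunchDarkly.init('YOUR_SDK_KEY');

async function renderCheckout(user) {
  await client.waitForInitialization();
  // variation() returns this user's value for the flag, falling back
  // to the default (false) if the flag cannot be evaluated.
  const useNewCheckout = await client.variation(
    'new-checkout-flow', { key: user.id }, false); // placeholder flag key
  return useNewCheckout ? newCheckoutPage() : oldCheckoutPage(); // hypothetical pages
}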

This stuff can also easily become non-trivial; one attendee asked about managing database structure and it is obvious that not all features are equally amenable to being switched on or off for groups of users.

Still, I reckon “how do you manage features?” is a good question to add to the list when considering DevOps tools.

You can read most of what Governor talked about in his post from last year here.

Adobe announces extensibility for XD design and prototyping tool, integration with Microsoft Teams, Slack and Jira

Adobe XD (Experience Design) is a tool for prototyping apps and web applications. The full application runs on Windows and Mac, as part of Adobe’s Creative Cloud, and there are apps for iOS and Android that let you preview your designs on a device. Note that it is only a prototyping tool: you still have to re-implement the design in Android Studio, Xcode, Visual Studio or your preferred development tool. However the ability to create and share prototypes is a critical part of the workflow for many applications.


Adobe has now announced extensibility for XD via an API. This opens the door to third-party plugins, which will enable “adding new features, automating workflows and connecting XD to tools and services,” according to the press release.

There are also new integrations with collaboration tools including Microsoft Teams and Slack, and Jira (Atlassian’s software development management tool).

The release emphasises that Microsoft Teams is Adobe’s “preferred collaboration service”, showing that the company’s alliance with Microsoft is still on.

These are not the only tools which integrate with XD. Others were announced in January this year, including Dropbox and Sketch.

What do these integrations do? It is mainly a matter of rich preview within the tool, and the ability to receive notifications, such as when someone comments on an XD design.

Adobe has a generous free starter plan for XD. This includes:

  • Adobe XD
  • 1 active shared prototype
  • 1 active shared design spec
  • 2 GB cloud storage
  • Typekit Free (limited set of fonts)

You can get the free plan here, play around with the tool, and upgrade to the full plan (with unlimited prototypes) if you need to, at $9.99 per month.

RemObjects Elements: mix and match languages and platforms as you like

The world of software development has changed profoundly in the last decade or so. Once it was mainly a matter of desktop Windows development for the client, and mainly Java for server-based applications with web or Windows clients. Then came mobile and cloud – the iPhone SDK was released in March 2008, kicking off a new wave of mobile applications, while Amazon EC2 (Elastic Compute Cloud) came out of beta in October 2008. Microsoft tussled within itself about what to do with Windows Mobile and ended up ceding the entire market to Android and iOS.

The consequence of these changes is that business developers who once happily developed Windows desktop applications have had to diversify, as their customers demand applications for mobile and web as well. The PC market has not gone away, so there has been growing interest in both cross-platform development and in how to port Windows code to other platforms.

Embarcadero took Delphi, a favourite development tool based on an Object Pascal compiler, down a cross-platform path but not to the satisfaction of all Delphi developers, some of whom looked for other ways to transition to the new world.

Founded in 2002, RemObjects had a project called Chrome, which compiled Delphi’s Object Pascal to .NET executables. This product was later rebranded Oxygene. For a while Embarcadero bundled a version of this with Delphi, calling it Prism, after abandoning its own .NET compilation tools.

The partnership with Embarcadero ended, but RemObjects pressed on, adding language features to its flavour of Object Pascal and adding support for Mac OS X, iPhone and Java.

In February 2015 the company was an early adopter of Apple’s Swift language, introducing a Swift compiler called Silver that targets Android, .NET and native Mac OS X executables.

The company now offers a remarkable set of products for developers who want to target new platforms but in a familiar language:

  • Oxygene: Object Pascal
  • Silver: Swift 3 (and most of Swift 4)
  • Hydrogene: C# 7
  • Iodine: Java 8

Each language can import APIs from the others, and compile to all the platforms – well, there are exceptions, but this is the general approach.

More precisely, RemObjects defines four target platforms:

  • Echoes: .NET and .NET Core including ASP.NET and Mono
  • Cooper: Java and Android
  • Toffee: Mac, iOS, tvOS
  • Island: CPU native and WebAssembly

So if you fancy writing a WPF (Windows Presentation Foundation) application in Java, you can:

[Screenshot: a WPF application written in Java, open in Visual Studio]

As you may spot from the above screenshot, the RemObjects tools use Visual Studio as the IDE. This is a limitation for Mac developers, so the company also developed a Mac IDE called Fire, and now a Windows IDE called Water (in preview) for those who dislike the Visual Studio dependency.


Important to note: RemObjects does not address the problem of cross-platform user interfaces. In this respect it is similar to the approach taken by Xamarin before that company came up with the idea of Xamarin Forms. So this is about sharing non-visual code and libraries, not cross-platform GUI (Graphical User Interface). If you are targeting Cocoa, you can use Apple’s Interface Builder to design your user interface, for example.

Of course, WebAssembly with HTML is an interesting option in this respect.

A notable absentee from the list of RemObjects targets is UWP (Universal Windows Platform), a shame given the importance Microsoft still attaches to this.

RemObjects is mainly focused on languages and compilers rather than libraries and frameworks. The idea is that you use the existing libraries and frameworks that are native to the platform you are targeting. This is a smart approach for a small company that does not wish to reinvent the wheel.

That said, there is a separate product called Data Abstract which is a multi-tier database framework.

These are interesting products, but as a journalist I have struggled to give them much coverage, because of their specialist nature and also the demands on my time as someone who prefers to try things out rather than simply relay news from press releases. I also appreciate that the above information is sketchy and encourage you to check out the website if these tools pique your interest.