All posts by Tim Anderson

Thoughtworks: do not choose to develop Single Page Applications by default

I have a lot of time for Thoughtworks, a global software development company, and always look at its Technology Radar, the latest version of which appeared recently. Plenty to digest, but what caught my eye was this comment regarding SPAs (Single Page Applications):

The sheer prevalence of teams choosing a single-page application (SPA) by default when they need a website has us concerned that people aren’t even recognizing SPAs as an architectural style to begin with, instead immediately jumping into framework selection. SPAs incur complexity that simply doesn’t exist with traditional server-based websites: search engine optimization, browser history management, web analytics, first page load time, etc. That complexity is often warranted for user experience reasons, and tooling continues to evolve to make those concerns easier to address (although the churn in the React community around state management hints at how hard it can be to get a generally applicable solution). Too often, though, we don’t see teams making that tradeoff analysis, blindly accepting the complexity of SPAs by default even when the business needs don’t justify it.

This struck a chord with me because of my adventures creating an online bridge playing platform using ASP.NET Core. I picked the platform because I was in a hurry, like C#, and had some existing code for implementing a bridge game, done for Windows. Any online game though needs lots of JavaScript and I soon became aware that the traditional ASP.NET approach, where each web page is a separate .cshtml file with server-side rendering and C# code-behind, is at odds with trends towards SPAs and JAMstack (JavaScript, API and Markup, where “Markup” is HTML and CSS).

Note that you can of course do SPAs and JAMstack with ASP.NET; ASP.NET is a nice technology for implementing an API and there are Visual Studio templates for things including “ASP.NET Core with React.js and Redux”. A Razor Pages application is still the default though, and gives you a UI for ASP.NET Core Identity for free, which saved me a lot of gruntwork. Still, as I got deeper into JavaScript libraries, including the AWS JavaScript SDK which I am using for audio and video, I found myself being steered towards React.js (resisted so far) and JavaScript bundling with Webpack (tried, but it was not a good fit). I also found that even switching my JavaScript code to TypeScript was surprisingly awkward, considering that the creator of TypeScript works for Microsoft. I found myself wondering if I should have started with an SPA, or should convert my application to an SPA, in order to fit in well with the new world.

Separately, I’ve been involved with another project, in PHP and JavaScript, which is an SPA, and hitting some of the potential issues. For example, the application made a ton of database queries on first load, the data from which was in most cases never used, as users did not visit the parts of the application that required them. Refactoring to load this data on demand has made the application faster and more efficient.

A problem, which Thoughtworks alludes to in a remark about “closing the gap on user experience,” is that staying in JavaScript rather than loading a new page from the server generally makes for a smoother application. The way my bridge application has evolved is that the main play screen is a kind of SPA: everything is done in JavaScript and API calls, and I have written a ton of JavaScript code for things like rendering HTML tables where server-side rendering with Razor would be much easier, but unacceptable for usability. However, different parts of the application still use separate Razor pages, for things like viewing results, configuring a user profile, finding a game, and admin screens for managing members and running sessions.

JavaScript, now TypeScript, has exceeded my expectations in terms of performance and capability. It is annoying at times but a modern web browser is a phenomenal platform. I was glad though to see Thoughtworks noting that going the SPA route is not always the right decision.

Drupal 7 is the version that refuses to die as the majority of sites have not upgraded


Drupal, which may be the second most popular content management system after WordPress according to these stats, is now at version 9.2. Version 7.0 was released 11 years ago but when 8.0 was being developed (it was released in 2015) the team decided that there were so many key improvements, including mobile-first design, multi-language support and HTML 5 forms, that in-place upgrade from 7.0 was too hard. In addition, some modules (used to extend Drupal) had no Drupal 8 version. Read all about the migration story here. It is not trivial.

From 8 on, the team promised, compatibility would be preserved so that upgrades would be easier.

What happened? Did every Drupal 7 site migrate to version 8 in order to enjoy the new features and promised future upgrade path?

No. Last month the team confessed that “a majority of all sites in the Drupal project are still on Drupal 7.” The date for ending support for Drupal 7 keeps getting pushed back and is now November 1 2023, but to be reviewed annually. “We will announce by July 2023 whether we will extend Drupal 7 community support an additional year,” said the post.

While this is good news in one sense for Drupal 7 site maintainers, it is not good news for the Drupal project. Having more than half of Drupal sites on what is now an ancient version is unhealthy, and maintaining it is a distraction.

Should the team have compromised the improvements in Drupal 8 for the sake of compatibility? It is imponderable but underlines a general truth in software development: breaking compatibility in major ways is expensive and can only be worth it if the benefits are correspondingly huge.

Another example that comes to mind is Visual Basic .NET, which was incompatible with Visual Basic 6.0; in consequence there are many VB 6.0 applications still out there that have never been upgraded.

Python 2 is another example.

What this also means is that time invested in making upgrade easy, or preserving compatibility in a widely-used project, may seem unrewarding but has a big payback.

Multi-page ASP.NET Core, TypeScript, and ES JavaScript Modules

One of the messier aspects of the modern web is the situation with JavaScript modules. Modules, and the ability to import code from one module into another in a coherent and efficient manner, are fundamental programming concepts, but JavaScript originally had no notion of them. Developers came up with CommonJS, originally for server-side JavaScript, using the keyword require to reference one module from another. Node.js borrowed and refined this system. It does not work in web browsers but can be made to do so by processing the code before deployment to make it browser compatible, or by using require.js or an equivalent.

In the meantime the ECMAScript standard evolved to develop its own module system, often referred to as ES modules or ES6 modules (modules were introduced in ES6, also known as ES2015). Browsers implement ES6 modules in their JavaScript engines, not CommonJS. The two systems are not compatible.

The situation today is that although most agree that ES6 modules are the way forward, Node.js and a huge amount of existing code use CommonJS modules. The Node.js team is trying to migrate towards ES6 but it is inherently a difficult path. Deno, an alternative to Node.js but with a tiny userbase by comparison, uses ES6 modules and that is one of its attractions.
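
For anyone who has not seen the two systems side by side, here is a minimal contrast using hypothetical file names; the logic is identical and only the module plumbing differs.

// CommonJS, as used by Node.js
// math.js
function add(a, b) { return a + b; }
module.exports = { add };

// app.js
const { add } = require("./math.js");
console.log(add(2, 3));

// ES modules, as understood natively by modern browsers
// math.mjs
export function add(a, b) { return a + b; }

// app.mjs
import { add } from "./math.mjs";
console.log(add(2, 3));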

ASP.NET Core and JavaScript

ASP.NET was originally designed to be a server-side code generator like PHP. You write code in C# or VB.NET but what gets sent to the browser is just HTML, CSS and JavaScript. The JavaScript piece was not too important at first, just handy for the occasional client-side confirmation dialog or the like.

This is no longer the case and increasing numbers of applications make heavy use of client-side JavaScript. I am working on a multi-user game for example and have written a ton of JavaScript. Perhaps I should have started with a single-page application (SPA) and used React or Vue, but I did not; I did what I was most familiar with and started with a basic ASP.NET Core application. I was in a hurry and took full advantage of everything I could get built-in, including ASP.NET Identity and the SignalR real-time communications library.

Everything went fine but I wanted to shift to TypeScript and take advantage of JavaScript minification. There is WebOptimizer, an unofficial project with involvement from the .NET team at Microsoft, but I started going down the WebPack path for reasons you can read here. I got this working more or less, but abandoned it: essentially, WebPack is not designed for multipage projects and I was running into some awkward problems and spending more time on WebPack configuration than on developing my application.

Using ECMAScript modules

I am in favour both of simplicity and of keeping up to date, so I took a closer look at using the JavaScript emitted by the TypeScript compiler more directly, rather than transpiling it to browser-compatible JavaScript (one of the things WebPack does). The main issue is that the JavaScript code will now include import and export statements. You can try and use TypeScript without ever using import or export but I do not recommend it. Browser compatibility is pretty good if you can manage without Internet Explorer.

Quite a lot changes though when you start using import and export and your JavaScript files become modules. Here are a few things I found.

1. Any links to JavaScript files will now need to include type="module" like so:

<script type="module" src="~/js/myscript.js"></script>

2. Any scripts that are imported by other scripts must not use the asp-append-version Tag Helper for cachebusting. Cachebusting is to prevent old versions of JavaScript files being used because they are cached by the browser. The asp-append-version helper adds a hash value as an argument when retrieving the script. The reason it causes problems is that the scripts that import that file do not know about the hash value and use its unadorned name. This means the browser loads the script twice with unpredictable results. Removing asp-append-version is not as bad as it first appears, thanks to ETags that inform the browser whether the file has been modified. See the discussion here.

3. If you have controls that call JavaScript functions on your web page, they will no longer work unless you import them. That is how modules work. There are a few solutions. The best is to attach things like click handlers in JavaScript rather than coding them in the HTML. This can be problematic though, especially if you have server-side ASP.NET code that creates controls that call JavaScript programmatically. An alternative is to add the function to the window object, which you can do either in the ASP.NET Razor .cshtml page or in the TypeScript/JavaScript. I find it easiest to have an initialisation function in the TypeScript that I call from the web page (see the sketch after this list). Scripts defined as modules never run until the page has loaded.

4. You need to be aware of side effects. Imagine you have three JavaScript files, page1.js, page2.js and shared.js. Your web page page1.cshtml uses page1.js and page2.cshtml uses page2.js. Both files import functions from shared.js. Everything works fine, but then you find that shared.js needs to import a function from page2.js. You run the application and find that page2.js has been loaded by page1.cshtml. This is by design: when you import the function you are telling the browser to load that file. It could catch you out though if you have initialisation code in both page1.js and page2.js and do not want them both to run.

The solution is either to plan for this and code accordingly, or not to import functions from page1.js or page2.js in shared.js. Of course if you follow the path of least resistance in an ASP.NET Core application and the only JavaScript code directly referenced in .cshtml is site.js then it is not a problem.
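
Returning to point 3 above, here is a minimal sketch of the approach I prefer, with hypothetical function and element names: attach the handler in the TypeScript and expose a single initialisation function for the page to call.

// page1.ts (hypothetical) - attach handlers in code rather than in HTML attributes
function dealCards(): void {
    // ... page logic goes here ...
}

export function initPage1(): void {
    const button = document.getElementById("dealButton");
    if (button) {
        button.addEventListener("click", dealCards);
    }
}

// Module functions are not globals, so expose the initialisation
// function for the Razor page to call once the module has loaded
(window as any).initPage1 = initPage1;

The Razor page then only needs to call initPage1(), much like the window.clickme line in the working example below.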

A working example with Visual Studio 2022

Imagine you have a multi-page ASP.NET Core application such as the one created by default in Visual Studio. It has site.js in wwwroot/js and that is about it. Here is what you might do:

a. Create a directory called Scripts in your project and add a file demo.ts

b. Add a file called tsconfig.json to the Scripts folder. If you use the Add Item wizard it will be prepopulated with some defaults. You will need to add as a minimum a compiler option to support ES modules and an outDir, for example:

{
  "compilerOptions": {
    "module": "es2015",
    "noImplicitAny": false,
    "noEmitOnError": true,
    "removeComments": false,
    "sourceMap": true,
    "target": "es5",
    "outDir": "../wwwroot/js"
  },
  "exclude": [
    "node_modules",
    "wwwroot"
  ]
}

c. demo.ts looks like this:

export function clickme() {
    alert("You clicked");
}

d. Add the following to Index.cshtml:

<script type="module" src="js/demo.js"></script>
<script type="module">
    import { clickme } from './js/demo.js';
    window.clickme = clickme;
</script>

e. Now a button on the page will work, for example:

<p><button onclick="clickme()">Click me</button></p>


Note: when you add a TypeScript file to a Visual Studio 2022 project you get a message inviting you to install a NuGet package.


The TypeScript will still get compiled by Visual Studio, with or without this package. However, without it the .NET Core compiler will not compile the TypeScript (dotnet build and so on).

Minification

Minifying the JavaScript is pretty easy. For the time being I am just running terser in a script called by a post-build event. I am deploying to a Linux Azure app service using Azure DevOps Pipelines and have had to work around the issue that build events do not seem to handle the cross-platform scenario very well, and Visual Studio does not provide much of an editor for build events in ASP.NET Core projects, but it is working.
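
For what it is worth, the post-build script amounts to something like the sketch below, written here as a small Node script using terser's JavaScript API rather than the command line; the wwwroot/js path reflects my layout and the in-place overwrite is a simplification.

// minify.mjs (sketch) - run after build with: node minify.mjs
import { readdir, readFile, writeFile } from "fs/promises";
import { join } from "path";
import { minify } from "terser";

const dir = "wwwroot/js";

for (const name of await readdir(dir)) {
    if (!name.endsWith(".js")) continue;
    const source = await readFile(join(dir, name), "utf8");
    const result = await minify(source, { compress: true, mangle: true });
    await writeFile(join(dir, name), result.code);
    console.log(`minified ${name}`);
}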

I hope this proves a better long-term solution for me than WebPack.

Microsoft’s “new commerce experience” for 365 services: not just price increases

Microsoft stated in August that it is increasing prices for Microsoft 365 (formerly known as Office 365), the increase being around 20%, from March 1 2022. The company argues that prices have not changed substantially for ten years – perhaps contentious since it has introduced premium plans that are more expensive – and that “this updated pricing reflects the increased value we have delivered to our customers over the past 10 years.”

There has been inflation of around 2% per annum since 2011 and there have been new features, so a price increase is not unreasonable. However, there are some other changes in the pipeline that are more difficult. This is the thing called the New Commerce Experience (NCE) that impacts both customers and resellers. Finding out what has really changed is not that easy but if you dig through the fluff about “agility” and “alignment” and “streamlining”, there are some standout changes:

  • Customers that want the flexibility to reduce seat count will pay 20% more. Until now, it has been possible to reduce seat count without penalty, even though Microsoft presents its pricing as for an “annual term.” With NCE, customers can either pay month by month at premium prices, with the ability to reduce seat count at a month’s notice, or pay less but commit to seats for one or three years. During that period, seat count can be increased but not decreased.

    Reasonable? The problem perhaps is that it means giving up one of the benefits of cloud, which is elasticity. Or at least, you can still have elasticity but it is going to cost more. We have also seen this with reserved instance pricing on AWS, Azure and Google Cloud Platform: the price comes down substantially if you commit to paying for one year or more.

  • There will be no cancellation allowed after the first 72 hours of a term, as explained here. This may impact partners more than customers. Scenario: partner sells 1,000 seats of Microsoft 365 for a 3-year term to some company. Three months into the term, the company goes bust. Partners are saying that this leaves them on the hook for the remaining cost. Here, for example, Australian distributor Dicker Data states that “If a customer (who has the agreement with Microsoft) no longer want or can finish the payment of the contract (bankruptcy for example), the partner will incur the costs of paying the remainder of the contract to Microsoft.”

One hopes that such matters are negotiable, but it is a significant risk especially in these unpredictable times of pandemic and climate change.

Converting a scanned image to text in Office 365

I was emailed an attachment scanned from a magazine; it was a nuisance and I wanted to convert it to text. There are of course a million ways to do this and I recall that every multifunction printer used to come with an OCR facility, but what is the easiest way now? For a while I’ve used Microsoft OneNote for this: you just paste in an image, right-click, and there is a Copy Text from Picture option.


This normally works OK but not this time. The results were not completely useless but included lots of errors: words missing and words wrongly recognised or scrambled. I am not sure, for example, how the word “score” got recognised as “scMe”.

So I looked for a better solution online, trying to avoid ad-laden free OCR sites of unknown quality. I found Convertio which has a straightforward introductory service with no registration or ads for the first 10 pages. It did a much better job with only 3 or 4 errors, text converted correctly to two columns in a Word document, and a table converted to a Word table. The main issue was that the text was tiny – 4pt – but that was reasonably easy to fix up. It seems that it has a much better recognition engine than OneNote.

I’ll be inclined to use Convertio again, but it also seems that Microsoft has got behind with this little corner of Office 365. Perhaps it should do something based on its Cognitive Services.

All change for the New Year

I have been working happily at The Register four days a week since mid-2019 but it is time for a change. Incidentally I very much enjoyed working for the Reg: it was consistently interesting work, I was given a lot of freedom to write what I wanted, I was well treated, and I recommend it highly as a great place to work. The idea is to find material that is interesting for a technical readership without any pressure to please vendors and I found that to be 100% true.

Why change? The main reason is that near-full-time journalism is rightly a demanding role and I found it taking most of my energy; and I have other things I want to do. I will still be writing but once again on a freelance basis, and not at the two or three posts per day that I have been doing. I will also be indulging my enthusiasm for bridge, hopefully improving the online bridge playing and teaching platform I started coding during lockdown, as well as helping the English Bridge Union with its technology. I expect to be more active here on itwriting.com and have plans to experiment with a redesign, perhaps using Next.js and headless WordPress.

Visual Studio, TypeScript, WebPack and ASP.NET Core: somewhat awkward

It is always good to learn a new language so I took advantage of the holiday season to look more closely at TypeScript. At least, that was the original intent. So far I have spent longer on configuring stuff to work than I have on actual coding. I think of it as time invested rather than wasted.

As long-term readers will know I am working on a bridge (the card game) website which has been used successfully over the lockdown period. I put this together quickly in the first half of 2020, reusing an unfinished Windows project and taking advantage of everything I could get without having to code it myself, like the ASP.NET identity system. So it is C#, ASP.NET Core, SignalR, runs on Linux on Azure App Service, and mostly coded in Visual Studio, with a few detours into Visual Studio Code.

Of course there is a ton of JavaScript involved and since the user interface for a bridge-playing game is fairly custom I did not use a JavaScript framework, unless you count jQuery and Bootstrap. I wrote a separate JavaScript file for each page (possibly a mistake). I also started using the AWS Chime SDK for JavaScript which means referencing a huge 680K JavaScript file.

I therefore had several goals in mind. One was to code in TypeScript rather than JavaScript in order to take advantage of its features and catch more mistakes at compile time. Second, I wanted to optimize the JavaScript better, with automatic minification. Third, I wanted to align my project more closely with the JavaScript ecosystem. The AWS SDK, for example, is written in TypeScript using modules, but I have been using some demo code provided to compile a single JavaScript file. Maybe I can get better optimization by coding my own project in TypeScript, and importing only the modules I need.

Visual Studio is not well aligned with the modern JavaScript ecosystem, as you can tell if you read this article on bundling and minification of static assets. “ASP.NET Core doesn’t provide a native bundling and minification solution,” it says, and refers developers to the WebOptimizer project or other tools such as Gulp and Webpack.

I did want to start with TypeScript though, and to begin with this looked easy. All you have to do is to add the TypeScript NuGet package, do some minimal configuration by creating and editing tsconfig.json, and you can write TypeScript and have it transpiled to JavaScript in your preferred target directory whenever the project is built. I moved a bunch of my JavaScript files to a directory of TypeScript files, renamed them from .js to .ts, and set to work making the TypeScript compiler happy.

When you do this you discover that the TypeScript compiler considers all .ts files that are not modules to be in the same scope. So if you have two JavaScript files and they both contain functions called DoSomething(), the compiler throws a duplicate function error, even if you will never reference them both from the same web page. You can fix this by making them modules – it feels like TypeScript is designed on this basis – but now you have the opposite problem, that if JavaScript file A references functions or variables in JavaScript file B, they have to be exported and imported. A good thing in principle, but now you have import statements in the code. The TypeScript compiler does not transpile these for compatibility with browsers that do not support import, and in addition, you now have to use type="module" on script references in HTML. I also ran into issues with the libraries I use, primarily SignalR and the AWS Chime SDK. You can either npm install these and import them in the proper way, if the developers have provided TypeScript definition files (with a d.ts extension), or find a type library via DefinitelyTyped which provides only the types; you still need to reference the library separately. There is an obvious potential version issue if you go the DefinitelyTyped route.
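
As an example of the npm route, the SignalR client is published as @microsoft/signalr and ships its own type definitions, so after npm install it can be imported directly; the hub URL below is hypothetical.

// npm install @microsoft/signalr
import { HubConnectionBuilder, LogLevel } from "@microsoft/signalr";

const connection = new HubConnectionBuilder()
    .withUrl("/gamehub")                      // hypothetical hub route
    .configureLogging(LogLevel.Information)
    .build();

connection.start().catch(err => console.error(err));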

In other words, what starts out as a simple idea of writing TypeScript instead of JavaScript soon becomes a complete refactor of the code to be modular and use imports and exports. Again, this is not a bad thing, but it is more work and not quite the incremental transition that I had in mind. I had over 1000 errors reported by the TypeScript compiler but gradually whittled them down (and this is with TypeScript set with strict off, intended to be a temporary expedient).

So I did all that but had a problem with these import statements when it came to using them in the browser. It seemed that WebPack could fix this for me, plus I could configure it to do tree-shaking to reduce code size and to use a minifier (it uses terser by default). There is a slight issue though since modern JavaScript tools like WebPack and terser are geared towards bundling all your JavaScript into a single file, and/or having a single-page application, which is not how my bridge site works. Still, it looked like it could be configured to work for me so I started down the track, using a post build step in Visual Studio to run WebPack.

I am sure this is obvious to people familiar with WebPack, but I still had problems getting my HTML pages to talk to the JavaScript. By default terser will mangle and shorten all the function names, but that is easily configured. The HTML still could not call any JavaScript functions: function not defined. Eventually I discovered that you have to configure WebPack to output a library. So if you have an HTML button that calls a JavaScript function in its onclick event handler, there are several things you need to do. First, export the function in the JavaScript (or TypeScript) code. Second, add a library name and preferably type 'umd' in webpack.config.js. Third, add the library name as a prefix to the function you are calling from HTML, for example mylibrary.myfunction.

I also had issues with code splitting – essential to avoid bloated JavaScript bundles. This is done in WebPack by configuring SplitChunks. If set to 'all' then my library exports stopped working. After much trial and error, I found a fix. First, set chunkLoading to 'jsonp'. Second, if your library variable is set to "undefined" at runtime there is a problem with one of the bundled JavaScript files. Unfortunately this was not reported as an error in the browser console – that is, the undefined library variable was reported, but not the reason for it. I tracked it down to a call to document.readyState or possibly document.addEventListener; using jQuery instead fixed it.
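
Pulling those settings together, this is roughly the shape of the webpack.config.js that resulted; treat it as a sketch rather than my exact configuration, and note that the entry names and the library name are placeholders.

// webpack.config.js (sketch)
const path = require("path");

module.exports = {
    mode: "production",
    // one entry per page, since this is a multi-page site
    entry: {
        page1: "./Scripts/page1.ts",
        page2: "./Scripts/page2.ts"
    },
    module: {
        rules: [{ test: /\.ts$/, loader: "ts-loader", exclude: /node_modules/ }]
    },
    resolve: { extensions: [".ts", ".js"] },
    optimization: {
        // share common code between the per-page bundles
        splitChunks: { chunks: "all" }
    },
    output: {
        path: path.resolve(__dirname, "wwwroot/js"),
        filename: "[name].js",
        chunkLoading: "jsonp",
        // expose exports to the HTML as mylibrary.myfunction
        library: { name: "mylibrary", type: "umd" }
    }
};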

Another tip: do not call any JavaScript code directly from cshtml, other than via event handlers. It might try to run before the JavaScript is loaded. I found it easiest to put initialization code in a function and call it from JavaScript. You can put the function declaration into index.d.ts to keep TypeScript happy, since it is external.
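
To make that concrete, the declaration is a one-liner; the function name here is hypothetical.

// index.d.ts - tells the TypeScript compiler about a function
// that is defined and wired up elsewhere
declare function initPlayPage(): void;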

Is it worth it? Watch this space. It has pushed me into refactoring which is improving the structure of my code – but I have also added complexity with a build process that compiles TypeScript to JavaScript (using ts-loader), then merging and splitting files with WebPack, while along the way minifying and mangling it with terser. Yes I have encountered unexpected behaviour, partly thanks to my inexperience, but the interactions between jQuery and WebPack and library exports, for example, are quite complex. I have spent time and energy wrestling with WebPack, instead of coding my application. There is a lot to be said for my old approach, where you code in JavaScript and it runs as JavaScript and it is easier to trace what is going on.

Still, it is working and I have achieved some of my goals – but the AWS Chime SDK file is still huge, just 30K smaller than before, which is disappointing. Perhaps there is something I have missed. I will be coding in TypeScript now and look forward to further refactoring as I get to know the language.

Update: I have abandoned this experiment. There were niggling problems with the WebPack bundles and I came to the conclusion that it is unsuitable for a multi-page application. A shame as I wanted to use WebPack; but for the time being I am just using TypeScript with terser. This means I am using native ES modules in the browser and I intend to write up the experience soon.

Exchange emails stuck in queue because “message deferred by categorizer agent” – Happy New Year, admins!

The first day of a new year is a great moment to relax and prepare for what is ahead – but spare a thought for Microsoft Exchange administrators who may have woken up to seized-up installations of their on-premises email servers. I was among those affected, but only on my tiny system. Messages were stuck in the submission queue, suspiciously since midnight or thereabouts (somehow a message sneaked through timed 12.14 am) and the last error reported by the queue viewer was “Messages deferred by categorizer agent.”

As usual I went down a number of rabbit holes. Restart the Exchange Transport service. Reboot the server. Delete the first message not to be delivered in case it was corrupt and somehow clogging up the queue. Check for certificate issues.

It was none of these. Here is the guilty party in the event viewer:


The FIP-FS Microsoft Scan Engine failed to load, with the error that it can’t convert “2201010001” to long.

The impact was that the malware filter could not check the message, hence the error from the categorizer agent.

The solution is to run the Exchange Shell on the server and navigate to the Scripts directory where Exchange is installed, for example C:\Program Files\Microsoft\Exchange Server\V15\Scripts. Here you will find a script called Disable-AntimalwareScanning.ps1.

& $env:ExchangeInstallPath\Scripts\Disable-AntimalwareScanning.ps1

should work. Run it, restart the Exchange Transport service, and email will start to flow.

Once the problem is patched, there is a companion script called Enable-AntimalwareScanning.ps1 which restores it, though I am not sure of the value of the Exchange malware filter: Microsoft considers that even on-premises installations should use the Microsoft 365 services for spam and malware scanning, and the on-premises protection features are not kept up to date, meaning that a third-party or open source spam and malware filter is a necessity anyway, unless you go the Office 365 route.

Another reason not to run Exchange on-premises – but Microsoft still says that hybrid systems using Azure Active Directory Connect should do so in order to manage mailboxes.

Note: the maximum value for a 32-bit signed integer is 2,147,483,647. Yesterday, which was perhaps represented as 2,112,310,001, would have fitted within that, whereas today’s 2,201,010,001 did not. Dates and times are awkward for programmers.
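
A quick way to see the overflow, using the values above:

// 32-bit signed integers top out at 2,147,483,647
const int32Max = 2147483647;
console.log(2112310001 <= int32Max);  // true:  31 December 2021 fits
console.log(2201010001 <= int32Max);  // false: 1 January 2022 does not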

Update: Microsoft has an official fix here. Thanks to Erik in the comments for the link.

Notes from the field: virtualising an existing Windows server using UEFI and Secure Boot

Over the weekend I had the task of converting an existing Windows server running on HP RAID to a virtual machine on Hyper-V. This is a very small network with only one server so nice and simple. I used the sysinternals tool Disk2vhd which converts all the drives on an existing server to a single VHD or VHDX. It’s a nice tool that uses shadow copy to make a consistent snapshot.

The idea is that you then take your VHDX and make it the drive for a new VM on the target host, in my case running Server 2019. Unfortunately my new VM would not boot. Generally there are three things that can happen in these cases. First, the VM boots fine. Second, it tries to boot but comes up with a STOP error. Third, it just sits there with a flashing cursor and nothing happens.

At this point I should say that Microsoft does not really support this type of migration. It is considered something that might or might not work and is at the user’s risk. However I have had success with it in the past and when it works, it does save a lot of time especially in small setups like this, because the new VM is a clone of the old server with all the shared folders, printer drivers, applications, databases and other configuration ready to go.

Disclaimer: please consider this procedure unsupported and if you follow any tips here do not blame me if it does not work! Normally the approach is to take the existing server off the network, do the P2V (Physical to Virtual), run up the new VM and check its health. If it cannot be made to work, scrap the idea, fire up the old server again, and do a migration to a new VM using other techniques, re-install applications and so on.

In my case I got a flashing cursor. What this means, I discovered after some research, is that there is no boot device. If you get a STOP error instead, you have a boot device but there is some other problem, usually with accessing the storage (see notes below about disabling RAID). At this point you will need an ISO of Windows Server xxxx (matching the OS you are troubleshooting) so you can run the troubleshooting tools. I downloaded the Hyper-V Server 2016 ISO, which is nice and small and has the tools.

Note that if the source server uses UEFI boot you must create a generation 2 Hyper-V VM. Well, either that or go down the rabbit hole of converting the GPT partitions to MBR without wiping the data so you can use generation 1.

For troubleshooting, the basic technique is to boot into the Windows recovery tools and then the command prompt.

I am not sure if this is necessary, but the first thing I did was to run regedit, load the system hive using the Load Hive option, and set the Intel RAID controller entries to zero. What this does is to tell Windows not to look for an Intel RAID for its storage. Essentially go to HKEY_LOCAL_MACHINE\SYSTEM\ControlSetXXX\Services (usually XXX is 001 but it might not be) and find the entries if they exist for:

iaStor

iaStorAVC

iaStorAV

iaStorV

storAHCI

and set the Start or StartOverride parameters to 0. This even works for storAHCI since 0 is on and 3 is off.

The VM still would not boot. Flashing cursor. I am grateful for this thread in the Windows EightForums which explains how to fix EFI boot. My problem, I discovered via the diskpart utility, was that my EFI boot partition, which should show as a small, hidden, FAT32 partition, was instead showing as RAW, meaning no filesystem.

The solution, which I am copying here just in case the link fails in future, was (within the recovery command prompt for the failing VM) to do as follows – the bracketed comments are not to be typed; they are notes.

diskpart
list disk
select disk # ( # = disk number for the disk with the efi partition)
list partition (and note size of old efi or presumed efi partition, which will be small and hidden)
select partition # (# = efi partition)
create partition efi size=# (size of old partition, mine was 99)
format quick fs=fat32 label="SYSTEM"
assign letter="S"
exit

Assuming C is still the drive letter assigned to your Windows partition, type:

C:\Windows\System32\bcdboot C:\Windows

This worked perfectly for me. The VM booted, spent a while detecting devices, following which everything was straightforward.

Final comment: although it is unsupported, the Windows engineers have done an amazing job enabling Windows to boot on new hardware with relatively little fuss in most cases – you will end up of course with lots of hidden missing devices in Device Manager, which you can clean up with care, though I don’t think they do much harm.

Funbridge abandons its Windows app

It appears that Funbridge, an online bridge game, is discontinuing its app for Windows.


There is a bit of a sad story here. Funbridge used to have a Windows app that was a little messy but excellent. The company (GOTO Games) then came up with a mobile app for iOS and Android, which worked well on iOS and a bit less well on Android. This mobile app then migrated to Windows and Mac, in terms of look and feel; I am not sure what programming framework it uses. The new-style Windows version has always been worse than the mobile versions for me: the UI is not really suitable for Windows, and I mainly play on iPad. Now it is going altogether, with users directed towards the web site.

I have always liked the Funbridge user interface on mobile and the asynchronous approach it uses, so players can take as long as they like. Everyone plays against the computer and then compares their score with other humans playing the same cards. Funbridge is adding new real-time play though, and will soon be adding audio and video online; this may relate to its retirement of the Windows application.

The abandonment of the Windows app is interesting in the context of Microsoft’s hope to boost Windows apps and the Microsoft Store in Windows 11. It looks as if GOTO Games will not be playing.