Using an M1 Mac after a lifetime of mainly Windows

So I got an M1 MacBook Pro back in April and it is time for a quick brain dump on my experience. I am not travelling as much as I did pre-lockdown, so although I got the Mac as a replacement for an ancient Windows laptop it gets used at home too. My usual desktop PC is a few years old but a decent-spec gaming PC with a Core i7-7700 at 3.6 GHz, 16GB RAM and an Nvidia RTX 2060 GPU. I have been happy with it; but I do find myself thinking “why not just use the MacBook” when I need to fire up a computer, a subconscious preference that bears examination. Most of my work is writing, web browsing and coding.

I do not particularly prefer the macOS UI to that of Windows. It is more consistent, because Apple managed iOS vs macOS sensibly whereas Microsoft made a hash of Windows desktop vs Windows CE vs Windows Phone vs Windows 8, and has now settled on a thing called WinUI; but scratch the surface of Windows and you still find UI that has not changed for decades.

I digress though. I do not mind the Windows UI; I am used to it. What I do mind are annoyances like the always-broken Windows search, and the way certain actions cause lengthy pauses that make me wonder what my PC is doing. In my case, sorting a large directory in Windows Explorer takes an age. Another little issue is that creating a new folder works fine, but renaming it causes a long pause. There also seem to be some focus issues: I create a new folder, rename it and press Enter. Eventually it renames, but half the time the focus mysteriously switches to a different folder.

I realise that these problems do not occur with a new install of Windows and that I could pop out and buy a Surface laptop and it would be fine. For a bit. Windows, it seems to me, still suffers from the cruft problem beautifully described by Verity Stob 20 years ago. I do not think Macs are completely immune (I had a Mac Mini where I upgraded the OS once too often and it crawled) but macOS does seem to me more resistant.

There is another thing that I like about the MacBook. You close the lid and it sleeps. You open the lid minutes, hours or days later, and it wakes. This has never worked well for me on Windows, though it is meant to do the same. I can believe that it is hard to implement, but when it works it is a huge benefit.

There is also the unwanted advertising that has crept into the Windows UI especially since Windows 11. Working on the MacBook I do notice its absence; I can better focus on what I want to do.

From a developer perspective, the performance of the M1 Pro is a delight. I work mostly in Visual Studio Code on both platforms; even on Windows I have come to prefer VS Code for most types of work. There is also the fact that Unix-like operating systems have won in server and web applications, so there is less friction there.

Launchpad: reminiscent of the Windows 8 Start screen?

Microsoft came up with a great application launcher in the Windows 95 Start menu – and improved it until it reached its peak in Windows 7. I also like the Windows 8 full-screen version. Windows 10 and 11 are not so good though. You get inadvertent web searches, as well as the problem of apps that you search for not appearing for strange reasons. The Mac Launchpad, which reminds me of the Windows 8 full-screen Start menu, seems to work well. You type what you want and all the matches appear.

What do I miss when not using Windows? It is mainly a matter of working out new ways to do certain tasks. I do miss Hyper-V and WSL (Windows Subsystem for Linux), though I have had success with UTM for running both Windows and Ubuntu on the Mac; the integration of WSL with the desktop OS is great, though. Microsoft Office still works best on Windows, though not to the extent of a few years back. There is no Paint or Notepad, and favourites like Notepad++ do not run natively, but Preview works for cropping images and alternatives to Windows utilities exist.

Sometimes you are pushed towards the command line, which is not a bad thing. There is no WinSCP, for example, so use scp instead and write helper scripts for common tasks; you end up saving time. (I realise you can script WinSCP as well.) And no need for PuTTY; just type ssh or script the command line you need.
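For example, a minimal helper script along these lines saves retyping the same command (the user, host and paths are hypothetical):

#!/bin/sh
# deploy.sh – copy the local publish output to the web server in one step
scp -r ./publish/* myuser@example.com:/var/www/myapp

Make it executable with chmod +x deploy.sh and deployment becomes a single command.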

I do expect though to use Windows less in future, and for me that is a big change.

Book review: After Steve by Tripp Mickle

This is billed as a book about a company, but is more accurately described as about two people, Tim Cook and Jony Ive, respectively CEO and former Chief Design Officer at Apple, one of the world’s biggest and most profitable companies. The author Tripp Mickle is a reporter at the Wall Street Journal where he covered Apple for four years.

Mickle has a thesis: that under Cook Apple’s profitability has flourished but its design-led innovation has faltered, damaged in 2011 when co-founder Steve Jobs died at the age of 56. “It’s unclear if design will ever regain its position as the dominant voice over product direction,” he writes. In his epilogue, Mickle says that “Cook’s aloofness and unknowability made him an imperfect partner for an artist who wanted to bring empathy to every product.” The author mentions several times that Cook “seldom went to the design studio to see Ive’s team work.”

The book has amazing detail and represents the outcome of interviews with “more than two hundred current and former Apple employees” supplemented by further interviews with their family members, friends, suppliers of Apple, competitors, and government officials. There is lots of dialogue in the accounts of key incidents, drawn either from recordings or “reconstructed based on the recollections of people familiar with the events described.” As you read, you feel immersed in the company. It is a great achievement, particularly (as the author also notes) considering that “at Apple, current and former employees adhere to a strict code of silence.” There is a thick section of notes and references.

After Steve, then, is essential reading for Apple watchers. That said, I have a couple of reservations. At 512pp this is a lengthy work and, for me, too long. It is occasionally repetitive, and the writing is professional but at times pedestrian. Further, if your interest is in Apple the company rather than in Cook and Ive, it is overly focused on those two people.

This last point is perhaps why Mickle misses the impact of Apple Silicon, the series of ARM-based processors which began with the A series and took over from Intel as the technology in Mac computers from November 2020 with the launch of the M1. Recently Apple has announced the M2 with claimed performance improvements of up to 18% for the CPU and 35% for the GPU, compared to the M1.

Apple Silicon matters because it dramatically improves over x86 in its power/performance ratio, making the company’s laptops and iPads a delight compared to their competition. It may not be design-based, and it builds on ARM and the work of others, but it is a huge advance and gives the company’s hardware an edge over its Windows and Android competition that is hard to counter. Johny Srouji, in charge of Apple Silicon? Not mentioned by Mickle.

I would have preferred the book to be shorter (though researchers may be glad of its detail). What of its central thesis? Mickle makes the point that Apple Watch has a disappointing lack of focus, which I agree with, and that projects like the Apple electric car appear to have faltered. The Beats acquisition had a mixed outcome, and this was a puzzle to me too. Apple did not need Beats, its culture was alien, and my sense is that Apple Music would have flourished equally well without it.

I do think though that, since Jobs, Apple has developed something with iPhone-level impact: Apple Silicon, and the M series in particular. I also think that Mickle misses something of the big picture. Buying a smartphone or computer? There is the Android jungle, or the Windows jungle, or Apple. For many it is hardly a choice; and the fact that this is more than ever true, more than a decade after the passing of Jobs, is huge credit to those involved and makes the accusation “how Apple became a trillion-dollar company and lost its soul” ring just a bit hollow.

Installing Ubuntu 22.04 on Apple M1 with UTM

I started with Arch Linux for Linux development on M1, which works, but succumbed to Ubuntu just because it is so widely used and therefore it is easier to find help. It is also supported by VS Code for remote development, which matters as I am aiming for something similar to a WSL setup on Windows, using VS Code on the host side. I had problems installing 22.04 though; the install completed but trying to boot resulted in:

EFI stub: booting Linux Kernel

EFI stub: Using DTB from configuration table

EFI stub: Exiting boot services

and there it would stay.

The fix I found was to update QEMU, in my case installed via MacPorts:

sudo port selfupdate

sudo port install qemu

after which Ubuntu started without issue (no need to reinstall).

Following this I was able to install and use the Remote – SSH extension in VS Code which worked first time.

Small points to note:

  • I accepted the option in Ubuntu to install the OpenSSH server during installation
  • In UTM I changed the networking for the VM to Emulated VLAN rather than Shared Network, in order to use port forwarding. I forwarded both SSH (port 22) and HTTP (port 80) to different ports on the Mac, so that I can test web applications running on Ubuntu; see the example below.
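With SSH forwarded to, say, port 2222 on the Mac (the port numbers here are illustrative), connecting from the macOS Terminal looks like this:

ssh -p 2222 username@localhost

and a web server on the VM’s port 80, forwarded to port 8080, is then reachable at http://localhost:8080.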

Thanks also to Liz Rice for her post here.

Fixing an Xbox controller broken by Elden Ring

I have been enjoying Elden Ring on the Xbox, though not so much when my controller broke. I recall the same thing happening with Dark Souls. Maybe it is the way I play, but the problem is that the right bumper is used for a quick attack, which I use constantly. The bumpers seem to be less robust than the triggers, so after a while one breaks.

Fortunately the current Xbox controllers (I have a Carbon Black) are easy to fix. The hardest part is getting the textured panels off the controller handles; like so many modern electronics cases, these are a press fit and have to be levered off while trying not to break or scratch them. Then you undo two screws on each side using a Torx T8 security screwdriver, plus another screw under the label in the battery compartment. After that you can carefully remove first a central rear panel and then the bumpers.

This revealed the problem: a small plastic tab had broken.

Gluing the tab back probably would not last long; but fortunately compatible bumper parts are available for a few pounds on eBay. I bought two (one for next time) and everything is fine.

The two key things I do with a new Mac

My Windows laptop is ancient (2015) and my company decided to replace it with a MacBook Pro, especially since we need to develop software compatible with Apple Silicon. The new Mac works well and I have been busy putting the essentials (for me) on it: Xcode, Visual Studio Code, .NET 6.0 SDK, Microsoft Office and so on.

Tip for .NET developers: when you put .NET and VS Code on an M1 Mac, you might get a CPU not supported error from Mono, at one time a dependency of the OmniSharp language server. You can fix this either by installing Rosetta 2, the x86 translation layer for Apple Silicon, or by setting omnisharp.useModernNet – see here for details.
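If you take the settings route, it is a one-line change in the VS Code settings.json (a minimal sketch):

{
  "omnisharp.useModernNet": true
}

This tells OmniSharp to run on the modern .NET runtime rather than depending on Mono.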

There are two things I always have to do with a new Mac. The first is to go to System Preferences – Trackpad – Scroll & Zoom and uncheck the mischievously worded option Scroll direction: Natural. This seems to set it for the mouse too. The reason is that, as far as I can tell, the Apple preference is no more or less natural than the older approach, and having it differ depending on which operating system you are using is confusing. My suspicion is that Apple introduced this in order to make it harder to switch between Mac and Windows.
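For what it is worth, the same setting can be flipped from the Terminal; this is a sketch assuming the long-standing global defaults key, and you may need to log out and back in for it to apply everywhere:

defaults write NSGlobalDomain com.apple.swipescrolldirection -bool false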


The second thing is a bit trickier: installing my password manager. I use Password Safe, which offers neither an up-to-date Mac download nor an Apple Silicon version. There is a commercial version in the App Store, but since it is open source my solution is to build from source. I recall doing some tweaking the last time I did this, a couple of years back for an Intel Mac, but the process seems smooth now as a few fixes have been added for Xcode and arm64 support. I used the latest development release of wxWidgets, 3.1.6, which has to be built first. My build declares itself to be v 0.01 OSX.


Without the password manager a laptop is almost unusable for me, since I don’t know many of the passwords I use and generally prefer not to save them in the browser.

My desktop PC, which I use for the majority of my work, remains Windows, and I am a fan of WSL (Windows Subsystem for Linux), which from my perspective is the best new feature of Windows since the release of Windows 7. I miss WSL on the Mac, though it is less necessary because macOS is a Unix-like operating system.

In general I do not have a strong preference between Mac and Windows, though I feel that Microsoft and its OEM partners have some work to do to get Windows on Arm working as well as M1 Macs. I was also disappointed by Windows 11, particularly by its lack of support for slightly older CPUs, and by the new Start menu and taskbar, which are a step backwards from Windows 10. The appearance of ads in the user interface is a concern too, though it is minimal if carefully configured.

Microsoft moves towards UDP in place of TCP for Azure Virtual Desktop, claims lower latency and higher reliability

Microsoft has announced the public preview of Azure Virtual Desktop RDP Shortpath for public networks – a bit of a mouthful, but what this really means is a switch towards UDP as the first choice transport for remote desktop sessions on the Azure cloud.

“Long running TCP sessions are problematic” said Senior Program Manager Denis Gundarev. “UDP is more tolerant to the temporary network interruptions caused by wireless interference or by changes in dynamic routing.”

UDP in itself is not enough; for example, UDP “does not care about each individual packet’s packet order or delivery. It does not have built-in congestion or rate control,” explains Gundarev. The implementation for RDP (Remote Desktop Protocol) uses a thing called URCP (Universal Rate Control Protocol) which Microsoft developed back in 2013, for real-time communications.

AVD already supported UDP for private networks, but many users do not have a private connection to Azure like ExpressRoute, hence the introduction of the public network version. Microsoft says that the benefits include lower latency, better network utilization, and high tolerance to packet loss.

Implementing the preview is done by setting a registry key on the AVD session host, so it can be tried experimentally on just a few hosts. That said, it will not always work: “RDP Shortpath may fail if you use double NAT setups,” said Gundarev. Users should not notice, as the old TCP-based connection will be used automatically instead.

Thoughtworks: do not choose to develop Single Page Applications by default

I have a lot of time for Thoughtworks, a global software development company, and always look at its Technology Radar, the latest version of which appeared recently. Plenty to digest, but what caught my eye was this comment regarding SPAs (Single Page Applications):

The sheer prevalence of teams choosing a single-page application (SPA) by default when they need a website has us concerned that people aren’t even recognizing SPAs as an architectural style to begin with, instead immediately jumping into framework selection. SPAs incur complexity that simply doesn’t exist with traditional server-based websites: search engine optimization, browser history management, web analytics, first page load time, etc. That complexity is often warranted for user experience reasons, and tooling continues to evolve to make those concerns easier to address (although the churn in the React community around state management hints at how hard it can be to get a generally applicable solution). Too often, though, we don’t see teams making that tradeoff analysis, blindly accepting the complexity of SPAs by default even when the business needs don’t justify it.

This struck a chord with me because of my adventures creating an online bridge-playing platform using ASP.NET Core. I picked the platform because I was in a hurry, like C#, and had some existing code for implementing a bridge game, written for Windows. Any online game though needs lots of JavaScript, and I soon became aware that the traditional ASP.NET approach, where each web page is a separate .cshtml file with server-side rendering and C# code-behind, is at odds with trends towards SPAs and JAMstack (JavaScript, API and Markup, where “Markup” is HTML and CSS).

Note that you can of course do SPAs and JAMstack with ASP.NET; it is a nice technology for implementing an API, and there are Visual Studio templates for things including “ASP.NET Core with React.js and Redux”. A Razor Pages application is still the default though, and gives you a UI for ASP.NET Core Identity for free, which saved me a lot of gruntwork. Still, as I got deeper into JavaScript libraries, including the AWS JavaScript SDK which I am using for audio and video, I found myself being steered towards React.js (resisted so far) and JavaScript bundling with Webpack (tried, but it was not a good fit). I also found that even switching my JavaScript code to TypeScript was surprisingly awkward, considering that the creator of TypeScript works for Microsoft. I found myself wondering if I should have started with an SPA, or should convert my application to an SPA, in order to fit in well with the new world.

Separately, I’ve been involved with another project, in PHP and JavaScript, which is an SPA, and which hits some of the potential issues. For example, the application made a ton of database queries on first load, the data from which was in most cases never used, as users did not visit the parts of the application that required it. Refactoring to load this data on demand has made the application faster and more efficient.
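As a sketch of that refactoring, with invented names since the project is not public, the idea is to fetch the data only the first time the relevant view is opened, then cache it:

// Hypothetical example: load report data only when the user first opens that view
let reportData: unknown = null;

async function showReports(): Promise<void> {
    if (reportData === null) {
        const response = await fetch('/api/reports'); // endpoint name is illustrative
        reportData = await response.json();
    }
    renderReports(reportData);
}

function renderReports(data: unknown): void {
    console.log(data); // stand-in for the real rendering code
}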

A complication, which Thoughtworks alludes to in its remark about “closing the gap on user experience,” is that staying in JavaScript rather than loading a new page from the server generally makes for a smoother application. The way my bridge application has evolved is that the main play screen is a kind of SPA: everything is done in JavaScript and API calls, and I have written a ton of JavaScript code for things like rendering HTML tables where server-side rendering with Razor would be much easier, but unacceptable for usability. However, different parts of the application still use separate Razor pages, for things like viewing results, configuring a user profile, finding a game, and admin screens for managing members and running sessions.

JavaScript, now TypeScript, has exceeded my expectations in terms of performance and capability. It is annoying at times, but a modern web browser is a phenomenal platform. I was glad though to see Thoughtworks noting that going the SPA route is not always the right decision.

Drupal 7 is the version that refuses to die as the majority of sites have not upgraded


Drupal, which may be the second most popular content management system after WordPress according to these stats, is now at version 9.2. Version 7.0 was released 11 years ago, but when 8.0 was being developed (it was released in 2015) the team decided that there were so many key improvements, including mobile-first design, multi-language support and HTML5 forms, that in-place upgrade from 7.0 was too hard. In addition, some modules (used to extend Drupal) had no Drupal 8 version. Read all about the migration story here. It is not trivial.

From 8 on, the team promised, compatibility would be preserved so that upgrades would be easier.

What happened? Did every Drupal 7 site migrate to version 8 in order to enjoy the new features and promised future upgrade path?

No. Last month the team confessed that “a majority of all sites in the Drupal project are still on Drupal 7.” The date for ending support for Drupal 7 keeps getting pushed back and is now November 1 2023, to be reviewed annually. “We will announce by July 2023 whether we will extend Drupal 7 community support an additional year,” said the post.

While this is good news in one sense for Drupal 7 site maintainers, it is not good news for the Drupal project. Having more than half of Drupal sites on what is now an ancient version is unhealthy, and maintaining it is a distraction.

Should the team have compromised the improvements in Drupal 8 for the sake of compatibility? It is imponderable, but it underlines a general truth in software development: breaking compatibility in major ways is expensive and can only be worth it if the benefits are correspondingly huge.

Another example that comes to mind is Visual Basic .NET, which was incompatible with Visual Basic 6.0; in consequence there are many VB 6.0 applications still out there that have never been upgraded.

Python 2 is another example.

What this also means is that time invested in making upgrade easy, or preserving compatibility in a widely-used project, may seem unrewarding but has a big payback.

Multi-page ASP.NET Core, TypeScript, and ES JavaScript Modules

One of the messier aspects of the modern web is the situation with JavaScript modules. Modules, and the ability to import code from one module into another in a coherent and efficient manner, are fundamental programming concepts, but JavaScript originally had no concept of them. Developers came up with CommonJS, originally for server-side JavaScript, using the keyword require to reference one module from another. Node.js borrowed and refined this system. It does not work in web browsers, but can be made to do so by processing the code before deployment to make it browser compatible, or by using require.js or an equivalent.

In the meantime the ECMAScript standard evolved to develop its own module system, often referred to as ES modules or ES6 modules (modules changed a lot in ES6, also known as ES2015). Browsers implement ES6 in their JavaScript engines, not CommonJS. The two systems are not compatible.
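A minimal illustration of the two syntaxes (file names invented for the example):

// CommonJS, as consumed by Node.js – greet.js:
function greet(name) { console.log('Hello ' + name); }
module.exports = { greet };

// app.js:
const { greet } = require('./greet');
greet('world');

// The ES module equivalent – greet.mjs:
export function greet(name) { console.log('Hello ' + name); }

// app.mjs:
import { greet } from './greet.mjs';
greet('world');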

The situation today is that although most agree that ES6 is the way forward, Node.js and a huge amount of existing code uses Node.js modules. The Node.js team is trying to migrate towards ES6 but it is inherently a difficult path. Deno, an alternative to Node but with a tiny userbase by comparison, uses ES6 and that is one of its attractions.

ASP.NET Core and JavaScript

ASP.NET was originally designed to be a server-side code generator like PHP. You write code in C# or VB.NET but what gets sent to the browser is just HTML, CSS and JavaScript. The JavaScript piece was not too important at first, just handy for the occasional client-side confirmation dialog or the like.

This is no longer the case, and increasing numbers of applications make heavy use of client-side JavaScript. I am working on a multi-user game, for example, and have written a ton of JavaScript. Perhaps I should have started with a single-page application (SPA) and used React or Vue, but I did not; I did what I was most familiar with and started with a basic ASP.NET Core application. I was in a hurry and took full advantage of everything I could get built in, including ASP.NET Identity and the SignalR real-time communications library.

Everything went fine but I wanted to shift to TypeScript and take advantage of JavaScript minification. There is WebOptimizer, an unofficial project with involvement from the .NET team at Microsoft, but I started going down the WebPack path for reasons you can read here. I got this working more or less, but abandoned it: essentially, WebPack is not designed for multipage projects and I was running into some awkward problems and spending more time on WebPack configuration than on developing my application.

Using ECMAScript modules

I am in favour both of simplicity and of keeping up to date, so I took a closer look at using the JavaScript emitted by the TypeScript compiler more directly, rather than transpiling it to browser-compatible JavaScript (one of the things WebPack does). The main issue is that the JavaScript code will now include import and export statements. You can try to use TypeScript without ever using import or export, but I do not recommend it. Browser compatibility is pretty good if you can manage without Internet Explorer.

Quite a lot changes though when you start using import and export and your JavaScript files become modules. Here are a few things I found.

1. Any links to JavaScript files will now need to include type="module" like so:

<script type="module" src="~/js/myscript.js"></script>

2. Any scripts that are imported by other scripts must not use the asp-append-version Tag Helper for cachebusting. Cachebusting prevents old versions of JavaScript files being used because they are cached by the browser; the asp-append-version helper adds a hash value as an argument when retrieving the script. The reason it causes problems is that the scripts that import that file do not know about the hash value and use its unadorned name, which means the browser loads the script twice, with unpredictable results. Removing asp-append-version is not as bad as it first appears, thanks to ETags that inform the browser whether the file has been modified. See the discussion here.
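For reference, this is the kind of script tag I mean; with asp-append-version="true" the rendered src gains a ?v=hash argument:

<script src="~/js/site.js" asp-append-version="true"></script>
<!-- renders as something like <script src="/js/site.js?v=...hash..."></script> -->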

3. If you have controls that call JavaScript functions on your web page, they will no longer work unless you import them: that is how modules work. There are a few solutions. The best is to attach things like click handlers in JavaScript rather than coding them in the HTML, though this can be problematic if you have server-side ASP.NET code that creates controls that call JavaScript programmatically. An alternative is to add the function to the window object, which you can do either in the ASP.NET Razor .cshtml page or in the TypeScript/JavaScript. I find it easiest to have an initialisation function in the TypeScript that I call from the web page. Scripts defined as modules never run until the page has loaded.
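As a sketch of that pattern (the element id and function names are invented for illustration), the module attaches its own handlers in an initialisation function:

// page1.ts
export function initPage(): void {
    const button = document.getElementById('saveButton'); // id is illustrative
    button?.addEventListener('click', () => {
        alert('Saved');
    });
}

and the page calls it:

<script type="module">
    import { initPage } from './js/page1.js';
    initPage();
</script>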

4. You need to be aware of side effects. Imagine you have three JavaScript files, page1.js, page2.js and shared.js. Your web page page1.cshtml uses page1.js and page2.cshtml uses page2.js. Both files import functions from shared.js. Everything works fine, but then you find that shared.js needs to import a function from page2.js. You run the application and find that page2.js has been loaded by page1.cshtml. This is by design: when you import the function you are telling the browser to load that file. It could catch you out though if you have initialisation code in both page1.js and page2.js and do not want them both to run.

The solution is either to plan for this and code accordingly, or not to import functions from page1.js or page2.js in shared.js. Of course, if you follow the path of least resistance in an ASP.NET Core application and the only JavaScript code directly referenced in .cshtml is site.js, then it is not a problem.
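To make the side effect concrete, here is a sketch using the file names from the scenario above (the code itself is invented):

// shared.js
import { formatScore } from './page2.js'; // this import causes page2.js to load wherever shared.js is used
export function showScore(score) { return 'Score: ' + formatScore(score); }

// page2.js
console.log('page2 initialising'); // top-level code: now also runs on page1, via shared.js
export function formatScore(score) { return score.toFixed(1); }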

A working example with Visual Studio 2022

Imagine you have a multi-page ASP.NET Core application such as the one created by default in Visual Studio. It has site.js in wwwroot/js and that is about it. Here is what you might do:

a. Create a directory called Scripts in your project and add a file demo.ts.

b. Add a file called tsconfig.json to the Scripts folder. If you use the Add Item wizard it will be prepopulated with some defaults. You will need to add as a minimum a compiler option to support ES modules and an outDir, for example:

{
  "compilerOptions": {
    "module": "es2015",
    "noImplicitAny": false,
    "noEmitOnError": true,
    "removeComments": false,
    "sourceMap": true,
    "target": "es5",
    "outDir": "../wwwroot/js"
  },
  "exclude": [
    "node_modules",
    "wwwroot"
  ]
}

c. demo.ts looks like this:

export function clickme() {
    alert("You clicked");
}

d. Add the following to Index.cshtml:

<script type="module" src="js/demo.js"></script>
<script type="module">
    import { clickme } from './js/demo.js';
    window.clickme = clickme;
</script>

e. Now a button on the page will work, for example:

<p><button onclick="clickme()">Click me</button></p>


Note: when you add a TypeScript file to a Visual Studio 2022 project, you get a message inviting you to install a NuGet package.


The TypeScript will still get compiled by Visual Studio, with or without this package. Without it, however, the TypeScript will not be compiled during a .NET build (dotnet build and so on).

Minification

Minifying the JavaScript is pretty easy. For the time being I am just running terser in a script called by a post-build event. I am deploying to a Linux Azure app service using Azure DevOps Pipelines and have had to work around the issue that build events do not seem to handle the cross-platform scenario very well, and Visual Studio does not provide much of an editor for build events in ASP.NET Core projects, but it is working.
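As an illustration of the kind of script I mean (the paths are hypothetical; terser’s -c and -m switches enable compression and name mangling):

#!/bin/sh
# minify.sh – run terser over each compiled script
for f in wwwroot/js/*.js; do
    terser "$f" -c -m -o "${f%.js}.min.js"
done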

I hope this proves a better long-term solution for me than WebPack.

Microsoft’s “new commerce experience” for 365 services: not just price increases

Microsoft stated in August that it is increasing prices for Microsoft 365 (formerly known as Office 365), the increase being around 20%, from March 1 2022. The company argues that prices have not changed substantially for ten years – perhaps contentious since it has introduced premium plans that are more expensive – and that “this updated pricing reflects the increased value we have delivered to our customers over the past 10 years.”

There has been inflation of around 2% per annum since 2011 and there have been new features, so a price increase is not unreasonable. However there are some other changes in the pipeline that are more difficult. This is the thing called the New Commerce Experience (NCE), which impacts both customers and resellers. Finding out what has really changed is not that easy, but if you dig through the fluff about “agility” and “alignment” and “streamlining”, there are some standout changes:

  • Customers that want the flexibility to reduce seat count will pay 20% more. Until now, it has been possible to reduce seat count without penalty, even though Microsoft presents its pricing as for an “annual term.” With NCE, customers can either pay month by month at premium prices, with the ability to reduce seat count at a month’s notice, or pay less but commit to their seats for one or three years. During that period, seat count can be increased but not decreased.

    Reasonable? The problem perhaps is that it means giving up one of the benefits of cloud, which is elasticity. Or at least, you can still have elasticity but it is going to cost more. We have also seen this with reserved instance pricing on AWS, Azure and Google Cloud Platform: the price comes down substantially if you commit to paying for one year or more.

  • There will be no cancellation allowed after the first 72 hours of a term, as explained here. This may impact partners more than customers. Scenario: a partner sells 1,000 seats of Microsoft 365 for a three-year term to a company; three months into the term, the company goes bust. Partners are saying that this leaves them on the hook for the remaining cost. Here, for example, Australian distributor Dicker Data states that “If a customer (who has the agreement with Microsoft) no longer want or can finish the payment of the contract (bankruptcy for example), the partner will incur the costs of paying the remainder of the contract to Microsoft.”

One hopes that such matters are negotiable, but it is a significant risk especially in these unpredictable times of pandemic and climate change.
