All posts by Tim Anderson

Rhys Bowen: A love story for Venice

image

I loved this book; if anything it is too short. It begins with Lettie, a young English girl, and her visit to Venice in 1928, accompanied by her prim and stuffy aunt, who is determined to protect her charge from anything modern or lively, and especially from interacting with strangers. Needless to say, the delightfully named Aunt Hortensia does not altogether succeed in her endeavours, and Lettie forms a brief but life-changing connection with a Venetian, leading us into a story that spans the decades as we jump to the modern day and follow Caroline, who is charged by her great aunt Lettie to visit Venice to uncover … something.

This is an easy and compelling read which I finished in one sitting. The bulk of the book concerns Lettie and her time in Venice as Europe was plunged into the tragedy of the second world war. The war is mostly at a distance but casts a shadow over everything, and author Rhys Bowen recounts some dark moments.

Reflecting on the book I am struck by how Bowen transports us to Venice, the city and its people, which in many ways is the heart of the story. She has obvious affection for this unique and wonderful place, its delicious food (if you know where to go), its beautiful works of art, its addiction to religion, and even its less savoury aspects, smells, frequent rain, and occasional “acqua alta” when the city is flooded.

The charm of the ancient and magical city more than makes up for what are perhaps slightly thin (though still likeable) characters. The only negative for me is that the ending was a little abrupt; I would have liked a little more detail about the aftermath.

No complaints though; I reviewed this book on a drab November day in lockdown and spending a few hours in Venice was a most welcome and enjoyable respite.

A Venice Sketchbook is published by Lake Union Publishing on 13th April 2021.

A UI lesson: do not ask users to choose between Register and Login

I am developing a web site for playing bridge, a project which kicked off in March when lockdown caused bridge clubs everywhere to close. There are lots of sites where you can play bridge online, but not many options (particularly back in March) for clubs that wanted to run their own online sessions.

It’s going OK, with a number of clubs now using it every week, though it is still in development. I have learned a painful lesson though. In order to proceed as quickly as possible, I started my project with the Visual Studio template for an ASP.NET Core application with ASP.NET Core Identity – the latter an easy decision since it handles all the complications of registration, password reset and so on. (I did end up having to re-plumb it to use int rather than GUID for the primary key, but that is another story).

The default home page the template generates looks like this, with options in the menu to Register or Login.

image

Registration and login are fundamental concepts that have been part of the web forever. It’s simple for a developer to understand: you register to create an account, you login if you already have an account.

The painful discovery is that this is not obvious to everyone, particularly to an older demographic that did not grow up with computers. Another factor is that cookies, browser-saved passwords and single sign-on with Google/Facebook etc mean that this whole area is a bit of a mess, and there are people who just kinda expect web sites to know who they are (which in one way horrifies me, but I do see the massive convenience).

The consequence is that a surprising (to me) number of people had difficulty knowing whether Registration or Login was what they needed. They would Register, then return to the site and hit Register again. Why is this site asking for my details again? Maybe a security thing? Oh no, why does it now say username not available?

The underlying problem is that asking the user to make this choice up front is not good design. Registration is rare; login is common. Further, Register is a confusing word: we sometimes use it even when an account already exists, and Create Account is clearer. A better UI leads with Login (I need to access this website) and then, underneath the username/password prompt, offers an option that says “I need to create an account”. The two options should not be equally prominent; and if you look at how many prominent sites design this, that is what they do:

image image

Lesson learned; but I wish this had occurred to me sooner!

Cyber Privacy by April Falcon Doss

This is a book about pervasive data collection and its implications. The author, April Falcon Doss, is a lawyer who spent 13 years at the US National Security Agency (NSA), itself an organization controversial for phone-tapping and other covert surveillance practices. Disturbing though that is, one of Doss’s observations is that “in democratic countries … the government doesn’t have nearly as much data as private companies do.” She argues that government-held data is less troubling since its usage is well regulated, unlike privately held data – though these safeguards do not apply in authoritarian regimes.

Government use, then, is just one piece of something much bigger: the colossal amount of personal data gathered on so much of what we do, our buying habits, what we search for on the internet, our health, our location, our contacts, tastes and preferences, all tracked, stored, and used in ways that we might not expect. Most of the book simply describes what is happening, and this will be eye-opening to anyone who has not followed the growth of data collection and its use in marketing and advertising over the last twenty years or so. Doss describes how a researcher analyzed his iPhone activity and found that “within seven days, the phone had exported data via 5,400 hidden app trackers” – and Google’s Android is even worse.

How much do we care and how much should we care? Doss looks at this question, which to me is of particular interest. We like getting stuff for free, like social media, search, maps and directions; but how aware are we of hidden costs like compromised privacy, and would we be willing to pay in other ways? Studies on the subject are contradictory; humans are not very logical on the matter, and much depends on exactly how the trade-off between privacy and cost is presented. The tech giants know this, and in general we easily succumb to the temptation to hand over personal information when signing up for free services.

Doss makes some excellent and succinct points, as when she writes that “privacy policies offer little more than a fig leaf of user notice and consent since they are cumbersome to read, difficult to understand, and individuals have few alternatives when it comes to using the major digital platforms.” She also takes aim at well-intentioned but ineffective cookie legislation – which has given rise to the banners you see, especially in the EU, inviting you to accept all manner of cookies when you visit a web site for the first time. “A great deal of energy and attention has gone into drafting and implementing cookie notice laws,” she says. “But it is an open question whether anyone’s privacy has actually increased.”

She also observes that we are in uncharted territory. “It turns out that all of us have been unwitting participants in a multifaceted, loosely designed program of unregulated research,” she writes.

Personally I agree that the issue is super-important and deserves more attention than it gets, so I am grateful for the book. There are a couple of issues though. One is that the reason personal data gathering has escalated so fast is that we’ve seen benefits – like free services and personalisation of advertising which reduces the amount of irrelevant material we see – but the harms are more hidden. What are the harms? Doss does identify some harms, such as reduced freedom in authoritarian regimes, or higher prices for things like Uber transport when algorithms decide what offers to show based on our willingness to pay. I would like to have seen more attention paid though to the most obvious harm of the moment, the fact that abuse of personal data and social media may have resulted in political upheavals like the election of Donald Trump as US president, or the result of the Brexit referendum in the UK. Whatever your political views, those who value democracy should be concerned; Doss gives this matter some attention but not as much as it merits, in my opinion.

Second, the big question is what can be done; and here the book is short of answers. Doss ends up arguing that we have passed the point of no return in terms of data collection. “The real challenge lies in creating sufficient restrictions to rein in the human tendency to misuse information for purposes that we’ve collectively decided are unacceptable in society,” she writes, acknowledging that how we do so remains an open question.

She says that her ambitions for the book became more modest as the research continued, ending with the hope that she has provided “a catalogue of risks and relevant questions, along with a useful framework for thinking about the future” which “may spark further, future discussions.”

Fair enough, but I would like to have seen more practical suggestions. Should we regulate more? Should Google or Facebook be broken up? As individuals, does it help if we close social media accounts and become more wary about the data that we give away?

Nevertheless I welcome this thought-provoking book and hope that it does help to stimulate the debate the author is looking for.

BenBella Books (3 Nov. 2020)

The Whole Truth by Cara Hunter

Set in Oxford, this crime novel continues Hunter’s series based on the cases of DI Adam Fawley. A student has accused a professor of sexual assault – and unusually, the accused is female. Separately, an old case returns to haunt Fawley and his pregnant wife Alex: a criminal whom he put away has done his time; will he attempt the revenge he swore to exact when he was convicted?

It is a great read, a book which drew me in quickly and kept me absorbed. I love the fact that the author is a Colin Dexter fan who uses an anagram of Morse for the surname of one of her own fictitious detectives. The plot is full of twists, it’s super-clever, and I particularly enjoyed the last few chapters when the pieces slot into place, worked out by someone unexpected.

That said, I do have a few niggles. One is that the two separate stories here are essentially unrelated and get almost equal attention, despite the fact that it is the incident with the professor and her student that is highlighted in the blurb and cover picture. Two plots for the price of one isn’t a bad thing, except that the second plot about Fawley’s old case is quite a bit more interesting and exciting than the one which is meant to be the main one. It’s just as well, since I doubt the book would have held my interest without it, but I do wonder if it would have been better to make this more compelling plot the main theme.

Second, I found it odd that the book is written partly in first person, from Fawley’s perspective, and partly in third person. There is a bit of chronological jumping around too, but I have no problem with that. There are also illustrations featuring lots of text which are quite hard to read on a Kindle.

Still, these little annoyances did not stop me enjoying the book which was a welcome distraction in these strange days of pandemic.

Penguin. Pub Date 25 Feb 2021

Flashbacks of a Fool, a film inspired by a song

In 2008 Bond actor Daniel Craig starred in a film called Flashbacks of a Fool, about a failing Hollywood actor (Joe) who returns to England after the death of a childhood friend.

Except it is not really about that. It is about regret, and it struck a chord with me, not only because of its nuanced, open approach to its subject, but also because the film is inspired by a song that is also one of my favourites, “If There Is Something” from Roxy Music’s first and most experimental album. And it is perhaps no coincidence that director Baillie Walsh, who is also a music video director, is the same generation as me and, it seems, shares some of my taste in music.

The film was critically panned on release and scores just 38% on Rotten Tomatoes; I feel it deserves better, with some magical moments including a wonderful scene with Felicity Jones as young Ruth, Joe’s first love, a scene which really is a music video but one into which Walsh threw all his passion for the song.

It would be wrong though just to watch this scene and think that you have seen the best of the movie. There is more to enjoy; sharply-observed humour (such as lunch with Joe and his agent at a smart LA restaurant), and other scenes which evoke the agony caused by humans behaving badly.

The closing scene returns to the same song and is again full of passion for what is lost and what might have been.

The film is what you get when someone with the means to make a film reflects on a song he loves and what it means to him. I am not sure how often this has been done; but in this case it worked for me.

Debugging Safari on an old iPad

Someone was trying to use the bridge application I have in progress, using an iPad 2. There were a couple of interesting things about this. One was that I had to rethink the warning thrown up, based on Modernizr, which detects incompatible web browsers. The problem (obvious when you think about it) is that if you use some potentially incompatible features in the same page where you are testing for them, then with an old web browser the JavaScript fails with a syntax error and the warning does not appear. The fix: I now show the warning by default, and the compatibility check hides it.

Still, I was interested in the Safari error and wanted to debug it, in case it was something I could fix. How do you debug Safari on an iPad?  The way it is meant to work is this:

– On a Mac, enable the Safari Develop menu (in Safari preferences, Advanced, Show Develop menu).

– On iOS, enable Safari Web Inspector (Settings – Safari – Advanced – Web Inspector).

– Connect the iPad to the Mac via USB. You can now use Web Inspector on the Mac to debug the Safari iOS pages and scripts.

This did not work for me on my Catalina Mac. The iOS Safari did not show up in the Web Inspector on Safari for Mac. I could get it to show briefly, by switching Web Inspector on the iPad off and on again, but after that, no go. I tried a few things, but none of the proposed solutions I could find for this issue fixed it for me.

I have an older 2011 Mac Mini in a drawer, so I thought that might work, being a similar age to the iPad. I fired it up, marvelled at how old-fashioned the UI looked (I had reset it to OS X Lion), and connected the iPad. No go. Same problem as with Catalina.

Surprisingly, what did work were the instructions here (more or less) for debugging Safari iOS on Windows. This is based on the RemoteDebug iOS WebKit Adapter described here, a project which originated as an internal Microsoft experiment.

image

I did find it amusing that I could do this on Windows, having failed with the Mac.

The next generation of this is Inspect, which is in private beta; the GitHub page for RemoteDebug says the adapter has been superseded and recommends using Inspect instead.

It worked for me though.

Point-in-time restore: a handy built-in feature in Azure SQL

I am working on a project that is hosted in Azure and I made a mistake, running a SQL script that was dependent on another SQL script that I had forgotten to run. It messed up the foreign keys and I would have to restore a backup … but my most recent backup was from the day before. Annoying.

But wait. Looking at the Azure portal I saw this:

image

This is a plain Azure SQL instance with no extras, but look, you can restore the database from 6 minutes ago.

I did it; it restored to a second database. I deleted the bad one, renamed the restored one, ran my scripts in the right order, and all was well.

I recommend you do not run scripts in the wrong order … but if you do, or make some other error, this is a handy feature of Azure SQL which I was not aware of before.

Wrestling with Azure DevOps Pipelines

Pipelines is an Azure DevOps service that enables a powerful feature: the ability to set up continuous integration. I have tangled with it before, in the context of trying Azure Kubernetes Service, but managed to avoid getting deep into the YAML which is the language of Pipelines. I am working on a web application and trying to get it up to scratch as quickly as possible, especially as there are now a bunch of users who are being patient over glitches during development but whose patience may run out.

The application uses .NET Core which for the most part is working well for me. I am using Visual Studio 2019, with occasional forays into Visual Studio Code (VS Code), and deploying to a Linux Azure App Service. Everything was fine until one day when the Web Deploy feature in Visual Studio stopped working with “could not complete the request to remote agent … the operation has timed out.” I appealed for help but with no result yet.

All was not lost as I found that the VS Code Deploy to Azure extension worked pretty well. All I needed to do was to open the solution folder in VS Code, run:

dotnet publish -c Release -o ./publish

in the terminal, then right-click the publish folder and choose Deploy to web app. There are a few annoyances but it solved the immediate problem.

One can do better though. Rather than manually deploying, you can create a pipeline using Azure DevOps (the thing that was once called Visual Studio Online and is the cloud version of Team Foundation Server). An attraction of using Azure DevOps is that you get “1 Microsoft-hosted job with 1,800 minutes per month for CI/CD” free, which seems decent.

I got started, creating an Azure DevOps project and adding a pipeline. You authorize it to access your GitHub repository (if that is what you have, as I do) and then end up in an editor that looks like this:

image

I soon got frustrated. The Pipelines service seems fundamentally excellent but spoilt by poor documentation and some odd behaviour – at least for .NET Core. It took me hours to achieve a basic setup that would upload, test and deploy my simple web application. Most of the time was spent observing pipelines fail to run and trying to figure out why.

When you run a pipeline you may notice that it uses .NET Core 2.1 and warns that 2.2 and 3.0 are end of life.

How do you get it to use .NET Core 3.1? You can add a task called UseDotNet@2 and specify the version. I put 3.1 and it was rejected as it likes a full version number. I put 3.1.301 and it worked.

The most time-consuming thing for me was running tests. The application uses an Azure SQL database. It is unwise to put the database password in appsettings.json in the GitHub repository. How then do you connect to the database? The docs anticipate this and you can use a feature called variable substitution. In the Pipeline editor, you can add variables and mark them secret, so they are not included in logs. You can also use variables from Azure Key Vault. Then you can use the FileTransform@2 task to replace the connection string in appsettings.json with the one you need, including the password. I do not think this is ideal from a security perspective – you are still putting the password in plain text in a configuration file – but it beats having it in the GitHub repository.

I had many issues. The main documentation on variable substitution is here. This is terrible. Note that if you look at the YAML example for JSON file substitution (which is what we need) it does not even use FileTransform@2. It uses AzureRmWebAppDeployment@4 which does a whole lot of other stuff as described here. Maybe I should have tried that. But FileTransform@2 looked like the right thing. Unfortunately it generally gives the error “Cannot perform XML transformations on a non-Windows platform.” No, I am not trying to do an XML transformation. Even if you specify the fileType as json and set enableXmlTransform to false, you still get the error. Later research suggests you can beat this error by setting xmlTransformationRules to an empty string. I gave up though and used FileTransform@1 (an older version of the task) which works as expected.

I still did not get the result I wanted though. All the tests using the database failed. Eventually I figured out that I had to set the folderPath to $(Build.SourcesDirectory). Then it works.

This was good. Now my tests run in Linux rather than on Windows, matching the deployed environment. In a full production environment I would use a second Azure SQL database for the tests, but for development this will do.

I then created a staging slot in the App Service and added a deployment step to deploy the application to that slot. Again, this is good. The application will not deploy unless it passes all the tests (this is a built-in feature of Pipelines, as each step does not run unless the previous steps succeeded). It deploys to staging which has a separate URL so you can try it out and not swap it to production until you are ready.

Overall, it is a better solution than the Visual Studio web deploy which it replaces, so perhaps the error did me a favour. It will work with Visual Studio as well as with VS Code, since it triggers automatically on every code commit. The Publish option in Visual Studio becomes redundant.

Note that Visual Studio also has an option to set this up automatically.

image

I tried it, letting the wizard do what it wanted including creating a new Azure DevOps project and a new App Service plan. Notable things:

– It created a pipeline using the Classic UI rather than the YAML based editor

– It uses an agent (the VM where the pipeline runs) called vs2017-win2016

– The pipeline did not get very far, failing on NuGet restore

No, I am not going to bother troubleshooting this.

This time yesterday I hated Azure DevOps pipelines. Nothing worked first time, YAML is a hostile editing environment (whitespace matters), and the documentation frustrated me. Now I feel pleased and I have this nice badge in my repository.

image

I am left with a nagging feeling though that all this is more difficult than it should be. It seems to me that what I wanted to do was commonplace: use .NET Core, use Azure App Service, have my pipeline build the project, run tests and deploy to staging. In many cases you could add applying Entity Framework migrations to that list. I did not find this documented in any one place, and the result was that it took more time than it should have to figure out.

Using Windows 10 on a 4K display: issues in multi-monitor setups

I made the mistake of reading this post where programmer Nikita Prokopov explains why it is time to upgrade your monitor, particularly if you are a software developer. “I optimize my setup to showing really, really good letters. A good monitor is essential for that. Not nice to have,” he says, going on to explain why standard 1080p (1920 x 1080 pixel) displays have insufficient resolution to display text nicely (unless the display is also small, such as on a 13” laptop). You can use the tool here to calculate the PPI (pixels per inch). You should aim for 150 or more PPI; a 27″ 1080p display will get you 81.59 PPI.
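The arithmetic behind such calculators is simple: PPI is the diagonal resolution in pixels divided by the diagonal size in inches. A quick C# sketch just to illustrate the sums (the sizes here are examples, nothing more):

using System;

class PpiDemo
{
    // PPI = diagonal resolution in pixels / diagonal size in inches.
    static double Ppi(int width, int height, double inches) =>
        Math.Sqrt((double)width * width + (double)height * height) / inches;

    static void Main()
    {
        Console.WriteLine(Ppi(1920, 1080, 27).ToString("F2")); // 81.59 for a 27" 1080p display
        Console.WriteLine(Ppi(3840, 2160, 27).ToString("F2")); // about 163 for the same size at 4K
    }
}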

Prokopov’s point is that if you spend all day looking at text (and I do), then you should make the effort (and bear the expense) of getting text to display properly; your eyes will thank you and you can work with less strain.

One of my displays is dying (needs new capacitors, I suspect) so I took the bait and stumped up for a 4K screen. I did not do what Prokopov also suggested, which is to get a display with a 120Hz refresh rate. I looked into it; but you need to get a TN (twisted nematic) display, which involves some compromises in viewing angles and colours. I also did not want to spend £1700 or more. So I went for a 4K IPS display.

It has been educational. Prokopov is right; text looks much better. Note though that you cannot run at full resolution unless your display is huge; mine is 27” because it has to fit on the desk. It is quite fun at full resolution but the text is too small to read.

image

What you should do, says Prokopov, is to scale the display by an integer value. Therefore I scale 200% (Windows display settings). The display is now back to 1080p in terms of the size of most text but at higher resolution and text looks great.

There is a snag though, actually a couple of snags. One is that occasionally you hit an application that does not understand the scaling – like Open Live Writer – and the text is tiny. More significant for me though is what happens if you have multiple displays. Windows is smart enough to let you have different display settings for each screen. The problems come in two cases though:

– if you move the mouse from the 4K screen to the 1080p screen, it jumps vertically. Essentially, it retains the pixel coordinates from the 4K display and applies them to the 1080p display. So if the mouse is halfway down the 4K display, and you move it right onto a 1080p display, it jumps to the bottom of the screen.

– if you drag an application so it straddles the two displays, it all goes wrong. I cannot use a screen grab for this.

Exhibit one: what happens if the 4K display is set to 100% scaling and you drag an application to straddle across to a 1080p display (ignore the mottling effect, that’s just an artefact from snapping the screen):

image

Exhibit two: what happens if you have the 4K display set to 200% scaling and perform the same straddling act:

image

I appreciate the difficulties here, but possibly Windows could do better? Incidentally the weirdness fixes itself when you drag the window fully across to the 1080p display; it snaps back to normal.

The solution of course would be to get two or three 4K displays. An expensive solution though.

Developing software for playing bridge

I am a duplicate bridge player in my spare time and enjoyed playing in my local club once or twice a week. That was before COVID-19 and then, in March this year, lockdown. Bridge clubs were no longer able to meet. There are more important things in the world; but bridge is both a lot of fun and a welcome distraction from weightier matters, and my thoughts soon turned to what we could do to continue playing in these new circumstances.

The answer was to play online; but while there are plenty of ways to play bridge online, the existing systems were not designed as a way for bridge clubs to meet in a new context. If anything, the reverse is true: online bridge sites were designed for people who could not easily get to a club or wanted to play at any time with whoever else happened to be available. Clubs like my own, by contrast, wanted to replicate their face-to-face meetings with an online equivalent. A further complication back in March was that the biggest online bridge site, called Bridgebase, was immediately overloaded and declared that it was unwilling to allow new people to qualify as directors, the people allowed to run online bridge sessions.

My immediate instinct was to build a new site for playing bridge. I was not quite starting from scratch. Back in the early days of Windows 8, I started work on a bridge game for Microsoft’s new and, as it turned out, ill-fated platform. I had got some way with it; I had created a bridge engine that understood cards and hands and tricks and shuffling and scoring and all the various elements that go into playing bridge. It was written in C# and what is now UWP XAML. It was designed, of course, for a solo player. Here is the bidding screen:

image

and the play screen:

image

This is how it looks on Windows 10; it looked a bit better on Windows 8, though it would not win any prizes for design. My software could play bridge though; the reason I never finished it was that I never cracked getting the AI working. But for human-to-human play that did not matter. A weekend or two of coding, I thought, and I could have a website up and running so our club could play bridge online. I made an immediate start, registering the domain name YourBridgeClubOnline.co.uk.

Well, three months later and here we are.

image

image

It is, I have to say, still under development. But it works and we have been able to play bridge again, as a club.

What took you so long? Ha! Much of my old bridge engine code remains untouched and has proved useful; it all runs fine on .NET Core. Even the (useless) AI has been handy, as I can test the mechanics of play without involving others. But I had, of course, wildly underestimated the problem of converting a game for solo play on Windows into a multi-player web application. There is much to think about:

The UI. I am not a designer (I am sure you can tell) but spent ages puzzling over how to get a workable user interface in the browser for everything from tablets to desktops. Not smartphones yet, but it is coming. I decided early on to take a view on compatibility: no Internet Explorer, and the JavaScript fetch API is required. When time is against you, it is easier to say “just use another browser” than to waste too much time supporting old browsers.

Messaging – both the API kind, and the chat kind. I am using C#, ASP.NET Core and SignalR. In general it works well. SignalR uses WebSockets as first preference, but falls back to Server Sent Events or long polling where necessary. In my first experiments I did my own polling and switching to SignalR was a great relief.
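For a flavour of how this fits together, here is a minimal sketch of a SignalR hub along these lines; the class, method and event names are made up for illustration, not the application’s actual code.

using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

// Hypothetical hub: each bridge table maps onto a SignalR group.
public class TableHub : Hub
{
    // A player joins the group for their table.
    public async Task JoinTable(string tableId) =>
        await Groups.AddToGroupAsync(Context.ConnectionId, tableId);

    // Broadcast a chat message to everyone seated at (or watching) the table.
    public async Task SendChat(string tableId, string user, string message) =>
        await Clients.Group(tableId).SendAsync("ReceiveChat", user, message);
}

The hub is then mapped to an endpoint (endpoints.MapHub<TableHub>("/table"), with that path again just illustrative) and the JavaScript client connects to it and listens for the “ReceiveChat” event.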

Registration and login. I am using the stuff that comes in the box, ASP.NET Core Identity. It has saved me a ton of work. It’s a bit annoying and not too well documented. I don’t really like using GUIDs for the primary key, for example, and I believe there is a way to avoid it, but it isn’t top priority when you are going for Minimum Viable Product.
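For what it’s worth, there is a way: Identity is generic over the key type, so you can supply your own user class. A minimal sketch of the approach, with illustrative type names rather than anything from my project:

using Microsoft.AspNetCore.Identity;
using Microsoft.AspNetCore.Identity.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore;

// Use int rather than the default GUID string for primary keys.
public class AppUser : IdentityUser<int> { }

public class AppDbContext : IdentityDbContext<AppUser, IdentityRole<int>, int>
{
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }
}

// In ConfigureServices:
// services.AddDefaultIdentity<AppUser>()
//     .AddRoles<IdentityRole<int>>()
//     .AddEntityFrameworkStores<AppDbContext>();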

JavaScript. I’ve written tons of it and I don’t even like the language. I have a new respect for it though. The thing is, it is very fast and there is nothing you cannot do. The worst thing is the friction of doing some debugging in the browser, and some in Visual Studio. I am thinking of switching to VS Code for development since it works nicely with ASP.NET Core and is better for JavaScript than Visual Studio.

Scoring. My Windows software could score a hand of bridge. But duplicate is different; you have to compare the scores with others who played the same hands and work out the percentages, then export the results to standard formats for display on club websites and submission to the English Bridge Union. It was more work than I had expected and I am not done yet; the system only understands Pairs at the moment, not Teams (a different way of scoring).
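The core pairs calculation is at least easy to state: on each board a pair scores two matchpoints for every pair they beat and one for every pair they tie with, and the percentage is matchpoints out of the maximum available. A rough sketch of that idea, illustrative only and not the site’s actual scoring code:

using System.Collections.Generic;
using System.Linq;

static class Matchpoints
{
    // scores: the raw score achieved by each pair who played a given board.
    // Returns each pair's percentage on that board.
    public static double[] Percentages(IList<int> scores)
    {
        int top = 2 * (scores.Count - 1); // maximum matchpoints available on the board
        return scores.Select(score =>
        {
            // Two matchpoints per pair beaten, one per tie; subtract one for the self-comparison.
            int mp = scores.Sum(other => score > other ? 2 : score == other ? 1 : 0) - 1;
            return top == 0 ? 100.0 : 100.0 * mp / top;
        }).ToArray();
    }
}

With eight results on a board, for example, the top is 14 matchpoints and an outright best score earns 100%. The fiddly part is everything around this: collating boards, handling adjustments, and exporting to the standard formats.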

Directing. Someone has to manage an online bridge session, settle any arguments, and fix errors like cards played by accident. It all needs coding and there was nothing like it in the Windows version.

Movements. Imagine you have 28 people playing bridge (or 14 pairs). They need to all play the same hands, but never play the same hand twice, and it has to be so arranged that each pair plays against other pairs in a defined sequence so it is balanced and fair. We call this the movement. Online, you have a bit more flexibility because you don’t need to share physical cards: everyone can play the same hand at the same time if you like. It is still quite fiddly though, and I did not do any of this in the old Windows version. I saved some time by writing an import function to enable re-use of movements made for EBUScore, a widely used scoring and bridge session management application. There is more to do though.

Claims. This is where, half way through the hand, a player says, “There’s no point in playing on, I’m obviously going to win all the remaining tricks.” A trick is a sequence of four cards played one from each hand, which is won by one of the pairs. This statement is called a claim, and has to be agreed by the other players. Getting this working was more difficult than I had expected – because built into my bridge engine was the idea that you could score by counting the tricks each side had won. But claimed tricks are never played. With hindsight, I should have allowed for this from the beginning.

Database. Every detail of play has to be stored on the server. I am using Dapper and SQL Server currently, though it is possible that PostgreSQL would work just as well. I started with Entity Framework Core, which is still there because ASP.NET Core Identity uses it, but I am happier with Dapper.
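The Dapper pattern is pleasantly thin: you write the SQL and it maps the rows onto your classes. A minimal sketch of the kind of query involved; the table and class names here are invented, not the real schema.

using System.Collections.Generic;
using System.Threading.Tasks;
using Dapper;
using Microsoft.Data.SqlClient;

// Hypothetical record of one card played.
public class PlayedCard
{
    public int BoardId { get; set; }
    public int TrickNumber { get; set; }
    public string Seat { get; set; }
    public string Card { get; set; }
}

public static class PlayStore
{
    public static async Task<IEnumerable<PlayedCard>> GetPlaysAsync(string connectionString, int boardId)
    {
        using var conn = new SqlConnection(connectionString);
        return await conn.QueryAsync<PlayedCard>(
            "SELECT BoardId, TrickNumber, Seat, Card FROM PlayedCards WHERE BoardId = @BoardId ORDER BY TrickNumber",
            new { BoardId = boardId });
    }
}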

Things that worked well

Three months is longer than I had thought it would take to get to a playable system, but I suppose as a spare time project it is not too bad. It would not be possible without the likes of ASP.NET Core and Dapper and SignalR doing so much for you. C# is a delight for coding. I am also using an Azure App Service for all this testing and development, and that has worked well. I am deploying to a Linux container of course; but the nice thing about App Service is that it will scale to a considerable extent without the hassle of Kubernetes. If the project succeeds and needs to scale up, there is an Azure SignalR service ready and waiting. I was nevertheless interested to see that AWS now offers .NET Core on Elastic Beanstalk, complete with some nice Visual Studio integration. Trying it there would be an interesting experiment, though I’m not sure AWS is so savvy about SignalR.

Open Source?

Could this have been done quicker by making it open source and seeking collaborators early on? Will it become open source? I need help for sure, though I also feel the code needs some cleaning up before it is fit to share more widely. You will recall though that I had started out thinking that it would be a small matter to convert my solo bridge game to an online multiplayer web application. I figured it would be better to get something working and then ask for help. But I am open to offers! Note: this is not a commercial project.

Rewarding

Most of the software projects I have been involved in have been business applications. Bridge is a lot more fun. I do see software development as a creative act. I recall starting work on the bridge game back in 2011 (I think); starting a new blank project in Visual Studio and thinking, hmm, I had better write a class to represent a pack of cards. From that beginning I ended up with an application that could play bridge, after a fashion, and now one that multiple people can play concurrently. It is rewarding and I will not regret the time spent on it, irrespective of how much actual use it gets.
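And that first class, the pack of cards, is still the natural starting point. Something along these lines, offered as a simplified sketch rather than the engine’s actual code:

using System;
using System.Collections.Generic;
using System.Linq;

public enum Suit { Clubs, Diamonds, Hearts, Spades }

public class Card
{
    public Suit Suit { get; }
    public int Rank { get; } // 2..14, where 11, 12, 13, 14 are J, Q, K, A
    public Card(Suit suit, int rank) { Suit = suit; Rank = rank; }
}

public class Pack
{
    private readonly List<Card> cards;
    private static readonly Random rng = new Random();

    public Pack()
    {
        // 4 suits x 13 ranks = 52 cards.
        cards = Enum.GetValues(typeof(Suit)).Cast<Suit>()
            .SelectMany(suit => Enumerable.Range(2, 13).Select(rank => new Card(suit, rank)))
            .ToList();
    }

    // Fisher-Yates shuffle.
    public void Shuffle()
    {
        for (int i = cards.Count - 1; i > 0; i--)
        {
            int j = rng.Next(i + 1);
            var tmp = cards[i]; cards[i] = cards[j]; cards[j] = tmp;
        }
    }

    // Deal four bridge hands of 13 cards each.
    public List<List<Card>> Deal() =>
        Enumerable.Range(0, 4)
            .Select(hand => cards.Skip(hand * 13).Take(13).ToList())
            .ToList();
}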