Too hard for AI? Get me a picture of people playing bridge …

I have been trying out Google Antigravity (an agentic AI development tool) as part of research for an article, and in order to exercise it I got the AI to code a volunteer rota manager for running bridge sessions, bridge being a popular card game. The LLM (large language model) was Gemini 3.1 Pro.

The tool created a Next.js application with functionality such as an admin dashboard, but the home page was the default generated by create-next-app. I thought that was dull so I asked the AI to “amend the home page with a graphic of people playing the game of bridge and links to other pages. If the user is not logged in it should just show a login link. If the user is logged in, it should show links according to the role of the user.”

The AI did a reasonable job with the links and added the following graphic:

People playing bridge: attempt 1

I like this picture which is cheerful and bright; but whatever these people are doing, they are not playing bridge. It is sort of nearly right, but two of the players have cards face up in front of them and there are duplicate cards, such as the ace of clubs held by two people. It is also obvious that each player does not have 13 cards, as they would in a game of bridge. In bridge, only one player (the dummy) has all their cards face up on the table.

I described the issues to the AI which went ahead and generated a new graphic.

People playing bridge: attempt 2

Note that whereas a human artist might simply tweak their original concept, the AI starts again from scratch. It did not improve on its first attempt. There seem to be two players who think they are dummy, there are four instances of the ace of diamonds, and the number of cards is wrong. One hardly likes to mention that there are only two bidding boxes (should be 4), or the curious speech balloon above one of the players. Still, it is another cheery picture.

I prompted the AI that the graphic was still wrong, saying that “There are duplicate cards; there are no duplicate cards in a pack of 52. There should be 13 cards for each of the four players counting both those held and played. Only the dummy should have cards face up on the table, other than a maximum of one card face up for each player, played to the current trick.”

The AI came back with a third attempt. Rather than removing duplicate cards, it reported that:

“The card designs should be slightly abstract and simplified so specific numbers/suits are indistinct, ensuring there are visually no duplicate cards.”

People playing bridge: attempt 3

The image is better in some ways, worse in others. It is less cheerful and the background is more plain. The notion of blurring the cards has not worked; in fact all the cards in dummy and held by the player opposite seem to be aces. The bit about 13 cards each has not been implemented. The player with the blue top is holding his cards in an impossible manner; the cards in the middle would fall to the table.

Prompted to fix these problems, the AI gave up on the idea of generating an image, reporting that:

“Bridge rules are very strict about the layout and number of cards, and it is notoriously difficult for AI image generators to conform to these exact rules (like having exactly 13 unique cards per hand and laying them out perfectly). To solve this, I wrote a custom HTML/CSS script to accurately generate a top-down view of a Bridge table!”

I thought that sounded complex but asked the AI to go ahead.

People playing bridge: attempt 4

Oh dear. This is possibly the worst effort yet. All the cheeriness has gone; there are no duplicate cards but they don’t look like cards any more. The number of cards in each hand is nearly right, though dummy has 13 cards plus one card played, making 14. On the bright side, the basic layout is pretty accurate. One more try …

“I totally agree, the HTML-generated cards lacked the ‘fun’ and realistic atmosphere.

I have generated a third and final try at the graphic. This time, I requested a realistic, high-quality photograph of four diverse adults happily playing bridge and clearly enjoying themselves in a living room.

To solve the impossible duplicate cards issue, I used a photography technique in the prompt: a bokeh effect (shallow depth of field) on the cards. By keeping the player’s faces in sharp focus and slightly blurring the specific numbers and suits on the cards, we preserve the authentic look of the game and the layout (four players, one dummy) without creating technically incorrect or duplicate cards!”

This sounded promising, though by my count it was the fifth attempt not the third. Here is what came back:

People playing bridge: attempt 5

The smiles are back but the AI has got the layout wrong again. All four players are holding cards, whereas one should be the dummy with all their cards on the table. And the dummy cards are never laid out like that.

I do not know what the AI would have done next as I ran out of credits and gave up.

I found the exercise amusing, but it also reflects some truths about AI in its current form. The graphics are amazing in some ways; four of the five look good. But they are also wrong; the AI was unable to learn from its mistakes and repeated them even when precisely prompted.

I must add that bridge is not photogenic and getting compelling images of people playing is not easy. Here is an actual photo, courtesy of the English Bridge Union:

People actually playing bridge

The case of .Net Central Package Management and Azure DevOps Pipelines

I have a .Net project which is largely developed on a Mac using Visual Studio Code and deployed to an Azure Web App service using Azure DevOps Pipelines.

This works pretty well, though occasionally a pipeline fails because of a timeout, and works next time round.

Recently though I made some changes including upgrading to .Net 10 and migrating to Central Package Management (CPM). CPM is to my mind a great feature, though seemingly little used. Package versions are defined in a single file at the root of the solution and applied to all projects; they are no longer specified in the individual .csproj project files. The result is consistent package versions across the solution and simpler upgrades. If for some compatibility reason you need different versions, there are ways to override CPM.
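As a sketch, this is roughly what the two halves of CPM look like; the package name and version here are illustrative, not from my project:

```xml
<!-- Directory.Packages.props at the solution root; note the lower-case "props" -->
<Project>
  <PropertyGroup>
    <ManagePackageVersionsCentrally>true</ManagePackageVersionsCentrally>
  </PropertyGroup>
  <ItemGroup>
    <PackageVersion Include="Newtonsoft.Json" Version="13.0.3" />
  </ItemGroup>
</Project>

<!-- In each .csproj, PackageReference items no longer carry a Version attribute -->
<ItemGroup>
  <PackageReference Include="Newtonsoft.Json" />
</ItemGroup>
```

A VersionOverride attribute on an individual PackageReference is one of the escape hatches if a particular project needs a different version.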

Everything worked fine in development but the pipeline no longer worked. The pipeline runs on Ubuntu and failed with the error:

NU1015: The following PackageReference item(s) do not have a version specified

for all the Nuget packages in the solution.

I loaded the solution in Visual Studio 2026 and everything worked fine there. I resorted to using the Visual Studio publish tool instead of the pipeline as a workaround. The pipeline works better for me though, as it runs automatically on all commits, running a suite of tests before actually deploying, and lets me work entirely on a Mac.

I made a bug report and got some well-intended advice that did not fix the problem.

Then, scrutinising the project one more time, I noticed that the documented name for the critical file that enables CPM is:

Directory.Packages.props

whereas I had:

Directory.Packages.Props

This worked fine on Windows and Mac, whose file systems are not case-sensitive by default, but not on Ubuntu. I am not sure why I typed it wrong, though looking around I have noticed other references to using Directory.Packages.Props, so I am not the only one.

After that I just had to figure out how to change the case of the filename on GitHub (temporarily set core.ignorecase to false) and everything worked.
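For the record, the rename goes something like this from the repository root (the file name is the one from my project; on Linux the config step is unnecessary since the file system is case-sensitive anyway):

```shell
# Tell git to notice case-only renames on a case-insensitive file system
git config core.ignorecase false
# Rename the file so the case matches what NuGet expects on Linux
git mv Directory.Packages.Props Directory.Packages.props
git commit -m "Fix Directory.Packages.props casing"
```

Pushing that commit updates the case of the filename on GitHub.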

PS: I am not sure why Visual Studio 2026 does not provide any UI for setting Central Package Management. This is likely a major reason why it is little used.

Strava sues Garmin for patent infringement, but says it is really about API access

Strava, a social network for runners, cyclists and other sportspeople, has filed a lawsuit against Garmin, makers of sports watches and other devices, alleging that Garmin has infringed Strava patents on heatmaps and segments. However, Strava’s Chief Product Officer Matt Salazar said that the lawsuit is an attempt to put pressure on Garmin over new API brand guidelines which Strava does not wish to implement.

The lawsuit, filed on Sept 30th in the US District Court in Colorado, claims that Garmin infringes two US patents granted to Strava, 9297651 and 9778053. The first relates to heatmaps, which show the popularity of routes. The second relates to segments, usually partial routes defined by users, on which athletes can compare their own performance with past attempts and compete with others. Strava is asking that Garmin be prohibited from using allegedly infringing features in its products. Luis Mendoza has written in more detail about the legal aspect here.

Yesterday though, Salazar said on Reddit that the legal action relates to Garmin’s new API developer guidelines, published on July 1st. Strava has not complied and apparently now has a deadline of November 1st, following which Strava’s access to the Garmin API may be cut off.

The point of contention is Garmin’s logo requirement, which according to Salazar “required the Garmin logo to be present on every single activity post, screen, graph, image, sharing card etc.”

Salazar said that “we consider this blatant advertising. These new guidelines actively degrade your user experience on Strava (and the other 150M+ athletes). We already provide attribution for every data partner.”

He added that “we consider this to be your data … we believe you should be able to freely transfer or upload that data without requiring logos to be present.”

The background is that most Garmin users who also use Strava set up an integration such that each activity is automatically uploaded.

Reddit users were not much impressed with Salazar’s comments, considering that Strava introduced restrictions on how third parties access its own API last November, and that the terms of use for Strava grant the company full rights to the data “without compensation to you,” making Salazar’s claim that it is regarded as belonging to the user doubtful.

Users do not seem to mind the logo. “Runna already includes the Garmin logo on activities when used. It acknowledges the device and IMO it’s nice to know which device is used if you’re using multiple. It’s not offensive and doesn’t degrade the user experience at all,” said one.

Salazar does not actually explain why Strava is complaining about one thing but taking legal action about another; one presumes that the idea is for Garmin to negotiate for an end to the lawsuit in return for removing the logo requirement but these things create a lot of bad feeling and it may not work out as intended.

Strava is the company with more to lose. Garmin has an alternative to Strava called Garmin Connect, which has social media features, but most people use Strava as it is well established and has a better user interface. The existence of Garmin Connect, though, gives users an exit route from Strava; leaving Garmin is harder, short of replacing an expensive watch.

Garmin is also a much bigger company, with revenue of $6.3 billion in the 2024 financial year and operating income of over $1.5 billion, whereas Strava “expects to hit $500M in annual revenue soon” according to a recent report. Strava’s profitability is not known, but it is planning to go public early next year.

From the point of view of runners like myself, there is no possible benefit from the lawsuit (other than the possibility of fewer small logos) and plenty of possible inconvenience if Garmin does block its API from Strava.

My own view of Strava has until now been favourable, especially because it does not show advertising (other than for itself) to free users, apparently because it values the user experience; in this case I think most would support that. From a business point of view the company perhaps gives too good an experience for free, and the added value of a subscription is not that great, especially as the “athlete intelligence” via AI is poor, doing little more than stating the obvious as far as I can tell.

Incidentally, I am one of the few Strava and Garmin users who does not use the integration, mainly because I do not want every single activity on the platform. One can instead download the .fit file from Garmin Connect and upload it manually to Strava. The annoyance here is that the Garmin Connect mobile app does not include the download feature, and the Strava mobile app does not include the upload feature. Both are possible though via the web browser, so that is what I use.

Creating a secure ASP.Net Core web application with Entra ID (formerly Azure AD) auth by group membership – harder than it should be

Several years ago I created a web application using ASP.NET and Azure AD authentication. The requirement was that only members of certain security groups could access the application, and the level of access varied according to the group membership. Pretty standard one would think.

The application has been running well but stopped working, because it used ADAL (Azure Active Directory Authentication Library) and the Microsoft Graph API with the old graph.windows.net URL, both of which are deprecated.

No problem, I thought, I’ll quickly run up a new application using the latest libraries and port the code across. Easier than trying to re-plumb the existing app because this identity stuff is so fiddly.

Latest Visual Studio 2022 17.13.6. Create new application and choose ASP.NET Core Web App (Razor Pages) which is perhaps the primary .NET app framework.

Next the wizard tells me I need to install the dotnet msidentity tool – a dedicated tool for configuring ASP.NET projects to use the Microsoft identity platform. OK.

I have to sign in to my Azure tenancy (expected) and register the app. Here I can see existing registrations or create a new one. I create a new one:

I continue in the wizard but it errors:

This does not appear to be an easy fix. I click Back and ask the wizard just to update the project code; I will add packages and do other configuration manually. As it turns out, though, the failed step had actually added the packages and the app already works. However, Visual Studio warns me that the installed version of Microsoft.Identity.Web has a critical security vulnerability, so I edit the Nuget packages and update to version 3.8.3.

The app works and I can sign in, but it is necessary to take a close look at the app registration. By default my app allows anyone with any Entra ID or personal Microsoft account to sign in. I feel this is unlikely to be what many devs intend, and that the default should be more restricted. If this is not what you want, head to the Azure portal, Entra ID, App registrations, find your app, and edit the manifest. I edited the signInAudience from AzureADandPersonalMicrosoftAccount to AzureADMyOrg:
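For reference, the edited fragment of the manifest looks like this (the manifest contains many other properties, omitted here):

```json
{
  "signInAudience": "AzureADMyOrg"
}
```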

noting that Microsoft has not been able to eliminate AzureAD from its code despite the probably misguided rename to Entra ID.

However my application has no concept of restriction by security group. I’ve added a group called AccessITWritingApp and made myself a member, but getting the app to read this turns out to be awkward. There are a couple of things to be done.

First, while we are in the App Registration, find the bit that says Token Configuration and click Edit Groups Claim. This will instruct Azure to send group membership with the access token so our app can read it. Here we have a difficult decision.

If we choose all security groups, this will send all groups with the token including users who are in a group within a group – but only up to a limit of somewhere between 6 and 200. If we choose Groups assigned to the application we can limit this to just AccessITWritingApp but this will only work for direct members. By the way, you will have to assign the group to the app in Enterprise applications in the Azure portal but the app might not appear there. You can do this though via the Overview in the App registration and clicking the link for Manage application in local directory. Why two sections for app registrations? Why is the app both in and not in Enterprise applications? I am sure it makes sense to someone.

In the enterprise application you can click Assign users and groups and add the AccessITWritingApp group – though only if you have a premium Entra ID directory plan, which the portal still calls a premium “Active Directory plan level”. There is some confusion about this.

Another option is App Roles. You can assign App Roles to a user of the application with a standard (P1) Entra ID subscription. Information on using App Roles rather than groups, or alongside them, is here. Though note:

“Currently, if you add a service principal to a group, and then assign an app role to that group, Microsoft Entra ID doesn’t add the roles claim to tokens it issues.”

Note that assigning a group or a user here does not by default either allow or deny access for other users; it links the user or group with the application and makes it visible to them. If you want to restrict access you can do so by checking the Assignment required option in the enterprise application properties. That is not super clear either. Read up on the details here, and note once again that nested group memberships are not supported “at this time”, which is a bit rubbish.

OK, so we are going down the groups route. What I want to do is to use ASP.NET Core role-based authorization. I create a new Razor page called SecurePage and at the top of the code-behind class I stick this attribute:

[Authorize(Roles = "AccessITWritingApp,[yourGroupIDhere]")]
public class SecurePageModel : PageModel

Notice I am using the group ID alongside the group name, as the ID seems to be what arrives in the token.

Now I run the app, I can sign in, but when I try to access SecurePage I get Access Denied.

We have to make some changes for this to work. First, add a Groups section to appsettings.json like this:

"Groups": {
  "AccessITWritingApp": "[yourGroupIDhere]"
},

Next, in Program.cs, find the bit that says:

// Add services to the container.
builder.Services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
.AddMicrosoftIdentityWebApp(builder.Configuration.GetSection("AzureAd"));

and change it to:

// Add services to the container.
builder.Services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApp(options =>
    {
        // Ensure default token validation is carried out
        builder.Configuration.Bind("AzureAd", options);
        // Treat the "groups" claim as the role claim, so its data is used by the
        // [Authorize] attribute, policy.RequireRole() and User.IsInRole()
        // See https://docs.microsoft.com/aspnet/core/security/authorization/roles for more info.
        options.TokenValidationParameters.RoleClaimType = "groups";
        options.Events.OnTokenValidated = async context =>
        {
            if (context != null)
            {
                List<string> requiredGroupsIds = builder.Configuration.GetSection("Groups")
                    .AsEnumerable().Select(x => x.Value).Where(x => x != null).ToList();
                // Calls method to process groups overage claim (before policy checks kick in)
                //await GraphHelper.ProcessAnyGroupsOverage(context, requiredGroupsIds, cacheSettings);
            }
            await Task.CompletedTask;
        };
    });

Run the app, and now I can access SecurePage:

There are a few things to add though. Note I have commented out the call to GraphHelper; you might want to uncomment this, but there are further steps if you do. GraphHelper is custom code in this sample https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/ – specifically the one in 5-WebApp-AuthZ/5-2-Groups. I do not think I could have got this working without this sample.

The sample does something clever though. If the token does not supply all the groups of which the user is a member, it calls a method called ProcessAnyGroupsOverage which eventually calls graphClient.Me.GetMemberGroups to get all the groups of which the user is a member. As far as I can tell this does retrieve membership via nested groups though note there is a limit of 11,000 results.

Note that in the above I have not described how to install the GraphClient as there are a few complications, mainly regarding permissions.

It is all rather gnarly and I was surprised that years after I coded the first version of this application there is still no simple method such as graphClient.isMemberOf() that discovers if a user is a member of a specific group; or a simple way of doing this that supports nested groups which are often easier to manage than direct membership.

Further it is disappointing to get errors with Visual Studio templates that one would have thought are commonly used.

And another time perhaps I will describe the issues I had deploying the application to Azure App Service – yes, more errors despite a very simple application and using the latest Visual Studio wizard.

Garmin Connect+: new subscription will be a hard sell

Garmin, makers of sports watches which gather health and performance data on your activities, has announced Connect+, a subscription offering with “premium features and more personalised insights.”

Garmin Connect+

Garmin Connect is the cloud-based application that stores and manages user data, such as the route, pace and heart rate, on runs, cycle rides and other workouts, as well as providing a user interface which lets you browse and analyse this data. The mobile app is a slightly cut-down version of the web app. Until now, this service has been free to all customers of Garmin wearable devices.

The company stated that Garmin Connect+ is a “premium plan that provides new features and even more personalized insights … with Active Intelligence insights powered by AI.” It also promised customers that “all existing features and data in Garmin Connect will remain free.” The subscription costs $6.99 per month or $69.99 per year. UK price is £6.99 per month or £69.99 per year, which is a bit more expensive.

The reaction from Garmin’s considerable community has been largely negative. The Garmin forum on Reddit, which has over 266,000 members, is full of complaints, not only because the subscription is considered poor value but also from fear that, despite the company’s reassurance, the free Garmin Connect service will get worse, perhaps becoming ad-laden or just less useful as all the investment in improvements is switched to the premium version.

On the official Garmin forums an initial thread filled quickly with complaints and was locked; and a new thread is going in the same direction. For example:

“I paid £800 for my Descent Mk2s with the understanding that there WAS NO SUBSCRIPTION and the high cost of my device subsidised the Connect platform. The mere existence of the paid platform is a clear sign that all/most new features will go to the paid version and the base platform will get nothing. You’ve broken all trust here Garmin, I was waiting for the next Descent to upgrade but I will look elsewhere now.”

A few observations:

  1. Companies love subscriptions because they give a near-guaranteed and continuous revenue stream.
  2. The subscription model combined with hardware can have a strange and generally negative impact on the customer. The obvious example is printers, where selling ink has proved more profitable than selling printers, to the point where some printers are designed with deliberately small-capacity cartridges and sold cheaply; the sale of the hardware can then be seen as the purchase of an income stream from ink sales.
  3. A Garmin wearable is a cloud-connected device and is inconvenient to use without the cloud service behind it. For example, I am a runner with a Garmin watch; when I add a training schedule I do so in the Connect web application, which then syncs with the watch so that while I am training the watch tells me how I am doing, too fast, too slow, heart rate higher than planned, and so on. That service costs money to provide so it may seem reasonable for Garmin to charge for it.
  4. The counter-argument is that customers have purchased Garmin devices, which are more expensive than similar hardware from other vendors, in part on the basis that they include a high quality cloud service for no additional cost. Such customers now feel let down.
  5. We need to think about how the subscription changes the incentives for the company. The business model until now has included the idea that more expensive watches light up different data-driven features. Sometimes these features depend on hardware sensors that only exist in the premium devices, but sometimes it is just that the device operating system is deliberately crippled on the cheaper models. Adding the subscription element to the mix gives Garmin an incentive to improve the premium cloud service to add features, rather than improving the hardware and on-device software.
  6. It follows from this that owners of the cheapest Garmin watches will get the least value from the subscription, because their hardware does not support as many features. Will the company now aim to sell watches with hitherto premium features more cheaply, to improve the value of the subscription? Or will it be more concerned to preserve the premium features of its more expensive devices to justify their higher price?
  7. It was predictable that breaking this news would be difficult: it is informing customers that a service that was previously completely free will now have a freemium model. The promise that existing free features would remain free has done little to reassure users, who assume either that this promise will not be kept, or that the free version will become gradually worse in comparison with the paid option. Could the company have handled this better? More engagement with users would perhaps help.

Finally, it seems to me that Connect+ will be a hard sell, for two reasons. First, Strava has already largely captured the social connection aspect of this type of service, and many Garmin users primarily use Strava as a result. Remarkably, even the free Strava is ad-free (other than for prompts to subscribe) and quite feature-rich. Few will want to subscribe both to Strava and Connect+, and Strava is likely to win this one.

Second, the AI aspect (which is expensive for the provider) has yet to prove its worth. From what I have seen, Strava’s Athlete Intelligence mostly provides banal feedback that offers no in-depth insight.

While one understands the reasons which are driving Garmin towards a subscription model, it has also given the company a tricky path to navigate.

Changing my mind about open ear earphones

I have become a fan of bone conduction earphones. Initially this was because they are great for running, since they let you hear everything going on around you, which is important for safety. I also came to realise that pushing earbuds into your ear to form a seal is not the best thing for comfort, even though it can deliver excellent sound quality. Bone conduction earphones sound OK rather than great, but I found myself willing to sacrifice audio quality for these other characteristics.

That said, bone conduction earphones do have some problems. In particular, if you attempt to wind up the volume you get an unpleasant physical vibration, especially on tracks that have extended bass.

There is another option, which I have seen described as air conduction or open ear. In this design, the sound driver sits adjacent to your ear canal. I tried one of these a couple of years ago and found the audio unbearably tinny. Unfortunately I concluded that this was inherent to this type of earphone and dismissed them.

Recently I was able to review another pair of open ear earphones which has changed my mind. The actual product is a Baseus Bowie MF1 though I do not think it is extra special in itself; however it is pretty good and the sound is excellent, better I think than my usual bone conduction earphones and without any vibration issues.

I notice that market leader Shokz has cottoned on to this: the Openrun Pro 2 (at a much higher price than the Baseus) has dual drivers, with the low bass handled by air conduction, again avoiding the vibration problem.

The more I think about it, the more I like the open ear or air conduction idea. No fuss about ear sleeve size or needing a perfect seal; no discomfort from jamming something tightly into your ear; and, I now realise, very acceptable sound quality.

Facebook vs Reddit as discussion forums: why is Facebook so poor?

Facebook’s user interface for discussions is terrible. Here are some of the top annoyances for me:

  • Slow. Quite often I get those blank rectangles which seem to be a React thing when content is pending.
  • UI shift. When you go to a post it shows the number of comments with some algorithmically selected comments below the post. When you click on “View more answers” or the comments link, the UI changes to show the comments in a new panel.
  • Difficult navigation. Everything defaults to the algorithm’s idea of what you want to see (or what Facebook wants you to see). So we get “Most relevant” and “Top comments.” I always want to see all the comments (spam aside) with the most recent comment threads at the top. To get something approaching that view I have to click View more answers first, and then use the “Top comments” drop-down to select one of the other options.
  • Even “All comments” does not show all comments, but only the top level without the replies.

Facebook is also a horrible experience for me thanks to the news feed concept, which pushes all manner of things at me which I do not want to see or which waste time. I have learned that the only way I can get a sane experience is to ignore the news feed and click the search icon at top left, then I get a list of groups or pages I have visited showing which have new posts.

Do not use Facebook then? The problem is that if the content one wants to see is only on Facebook it presents a bad choice: use Facebook, or do not see the content.

Reddit by contrast is pretty good. You can navigate directly to a subreddit. Tabs for “hot” and “new” work as advertised and you can go directly to “new” by the logical url for example:

https://www.reddit.com/r/running/new/

Selecting a post shows the post with comments below and includes comment threads (replies to comments) and the threads can be expanded or hidden with +/- links.

The site is not ad-laden and the user experience is generally pleasant. The way a subreddit is moderated makes a big difference of course.

The above is why, I presume, reddit is the best destination for many topics including running, a current interest of mine.

Why is Facebook so poor in this respect? I do not know whether it is accident or design, but the more I think about it, the more I suspect it is design. Facebook is designed to distract you, to show you ads, and to keep you flitting between topics. These characteristics prevent it from being any use for discussions.

If I view the HTML for a reddit page I also notice that it is more human-readable, and clicking a random topic I see in the Network tab of the Safari debugger that 30.7 KB was transferred in 767ms.

Navigate back to Facebook and I see 6.96 MB transferred in 1.52s.

These figures will of course vary according to the page you are viewing, the size of the comment thread, the quality of your connection, and so on. Reddit though is much quicker and more responsive for me.

Of course I am on “Old reddit.” New reddit, the revamped user interface (since 2017!) that you get by default if not logged in with an account that has opted for old reddit, is bigger and slower and with no discernible advantages. Even new reddit though is smaller and faster than Facebook.

Tip: If you are on new reddit you can get the superior old version from https://old.reddit.com/

Fixing a .NET P/Invoke issue on Apple Silicon – including a tangle with Xcode

I have a bridge platform (yes the game) written in C# which I am gradually improving. Like many bridge applications it makes use of the open source double dummy solver (DDS) by Bo Haglund and Soren Hein.

My own project started out on Windows and is deployed to Linux (on Azure) but I now develop it mostly on a Mac with Visual Studio Code. The DDS library is cross-platform and I have compiled it for Windows, Linux and Mac – I had some issues with a dependency, described here, which taught me a lot about the Linux app service on Azure, among other things.

Unfortunately though the library has never worked with C# on the Mac – until today, that is. I could compile it successfully with Xcode, and it worked with its own test application dtest, but not from C#. This weekend I decided to investigate and see if I could fix it.

I have an Xcode project which includes both dtest and the DDS library, which is configured as a dynamic library. What I wanted to do was to debug the C++ code from my .NET application. For this purpose I did not use the ASP.NET bridge platform but a simple command line wrapper for DDS which I wrote some time back as a utility. It uses the same .NET wrapper DLL for DDS as the bridge platform. The problem I had was that when the application called a function from the DDS native library, it printed: Memory::GetPtr 0 vs. 0 and then quit.

The error from my .NET wrapper

I am not all that familiar with Xcode and do not often code in C++ so debugging this was a bit of an adventure. In Xcode, I went to Product – Scheme – Edit Scheme, checked Debug executable under Info, and then selected the .NET application which is called ddscs.

Adding the .NET application as the executable for debugging.

I also had to add an argument under Arguments passed on Launch, so that my application would exercise the library.

Then I went to Product – Run and, success: I could step through the C++ code called by my .NET application. I could see that the marshalling was working fine.

Stepping through the C++ code in Xcode

Now I could see where my error message came from:

The source of my error message.

So how to fix it? The problem was that DDS sets how much memory it allows itself to use and it was set to zero. I looked at the dtest application and noticed this line of code:

SetResources(options.memoryMB, options.numThreads);

This is closely related to another DDS function called SetMaxThreads. I looked at the docs for DDS and found this remark:

The number of threads is automatically configured by DDS on Windows, taking into account the number of processor cores and available memory. The number of threads can be influenced by calling SetMaxThreads. This function should probably always be called on Linux/Mac, with a zero argument for auto-configuration.

“Probably” huh! So I added this to my C# wrapper, using something I have not used before, a static constructor. It just called SetMaxThreads(0) via P/Invoke.
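A minimal sketch of the shape of that fix, assuming a wrapper class over the native library (the class name here is illustrative, not my actual wrapper; SetMaxThreads is the real DDS entry point):

```csharp
using System.Runtime.InteropServices;

// Illustrative P/Invoke wrapper for the DDS native library.
// On macOS/Linux, .NET resolves "dds" to libdds.dylib / libdds.so.
public static class Dds
{
    [DllImport("dds", CallingConvention = CallingConvention.Cdecl)]
    public static extern void SetMaxThreads(int userThreads);

    // Static constructor: runs exactly once, before any member of the
    // class is first used, so DDS is configured before the first real call.
    static Dds()
    {
        // Zero asks DDS to auto-configure threads and memory,
        // which the docs say should "probably always" be done on Linux/Mac.
        SetMaxThreads(0);
    }
}
```

The static constructor is a convenient place for this because the runtime guarantees it executes before any other P/Invoke call on the class, with no explicit initialisation needed by callers.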

Everything started working. Like so many programming issues, simple when one has figured out the problem!

Amazon Linux 2023: designed to be disposable

Amazon Linux 2023 is the default for Linux VMs on AWS EC2 (Elastic Compute Cloud). Should you use it? It is a DevOps choice; the main reason you might use it is that it feels like playing safe. AWS support will understand it, it should be performance-optimised for EC2, and it should work smoothly with AWS services.

Amazon Linux 2 was released in June 2018 and was the latest production version until March 2023, by which time it was very out of date. Based on CentOS 7, it was pretty standard and you could easily use additional repositories such as EPEL (Extra Packages for Enterprise Linux). It was easy to keep up to date with sudo yum update; however, there is no in-place upgrade to a newer major version.

Amazon Linux 2023 is different in character. It was released in March 2023 and the idea is to have a major release every 2 years, and to support each release for 5 years. It does not support EPEL or repositories other than Amazon’s own. The docs say:

At this time, there are no additional repositories that can be added to AL2023. This might change in the future.

The docs also describe how to add an external repository, so it is a bit confusing. You can also compile your own rpms and install them that way; but if you do, keeping them up to date is down to you.

The key to why this is, though, lies in what AWS calls deterministic upgrades. Each version, including minor versions, is locked to a specific repository. You can upgrade to a new release, but it has to be specified explicitly. This is what I got today from my installation on Hyper-V:

Amazon Linux 2023 offering a new release

The command dnf check-release-update looks for a new release and tells you how to upgrade to it, but does not do so by default.
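The flow looks something like this (the release version string is illustrative; use whatever check-release-update actually reports):

```shell
# See whether a newer AL2023 release is available (reports only, installs nothing)
sudo dnf check-release-update

# Move to a specific newer release by naming it explicitly.
# The version string below is illustrative, not a real release number.
sudo dnf upgrade --releasever=2023.x.20240101

# Without --releasever, plain upgrade stays within the currently locked release
sudo dnf upgrade
```

This is the deterministic part: nothing moves to a newer repository snapshot unless you name it.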

The reason, the docs explain, is that:

With AL2023, you can ensure consistency between package versions and updates across your environment. You can also ensure consistency for multiple instances of the same Amazon Machine Image (AMI). With the deterministic upgrades through versioned repositories feature, which is turned on by default, you can apply updates based on a schedule that meets your specific needs.

The idea is that if you have a fleet of Amazon Linux 2023 instances they all work the same. This is ideal for automation. The environment is predictable.

It is not ideal though if you have, say, one server, or a few servers doing different things, and you want to run them for a long time and keep them up to date. This will work, but the operating system is designed to be disposable. In fact, the docs say:

To apply both security and bug fixes to an AL2023 instance, update the DNF configuration. Alternatively, launch a newer AL2023 instance.

The bolding is mine; but if you have automation so that a new instance can be fired up configured as you want it, launching a new instance is just as logical as updating an existing one, and arguably safer.

Amazon Linux 2023 on Hyper-V

Amazon Linux 2023 came out in March 2023, somewhat late as it was originally called Amazon Linux 2022. It took even longer for images to appear for running it outside AWS, but these did eventually arrive – though only for VMware and KVM, even though old Amazon Linux 2 does have a Hyper-V image.

Update: Hyper-V is now officially supported making this post obsolete but it may be of interest!

I wanted to try out AL 2023 and it makes sense to do that locally rather than spend money on EC2; but my server runs Windows Hyper-V. Migrating images between hypervisors is nothing new so I gave it a try.

  • I used the KVM image here (or the version that was available at the time).
  • I used the qemu disk image utility to convert the .qcow2 KVM disk image to .vhdx format. I installed qemu-img by installing QEMU for Windows without enabling the hypervisor itself.
  • I used the seed.iso technique to initialise the VM with an ssh key and a user with sudo rights. I found it helpful to consult the cloud-init documentation linked from that page for this.
  • In Hyper-V I created a new Generation 1 VM with 4GB RAM and set it to boot from the converted drive, with seed.iso in the virtual DVD drive. I started it up and it worked.
Amazon Linux 2023 running on Hyper-V
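For reference, the conversion and seed.iso steps above can be sketched roughly as follows (file names and the user name are mine, and the key is a placeholder; this assumes qemu-img and genisoimage are available):

```shell
# Convert the downloaded KVM qcow2 disk image to a dynamic VHDX for Hyper-V
qemu-img convert -O vhdx -o subformat=dynamic \
    al2023-kvm.qcow2 al2023.vhdx

# Minimal cloud-init seed: hostname, plus a user with sudo rights and an ssh key
cat > meta-data <<'EOF'
local-hostname: al2023-hyperv
EOF

cat > user-data <<'EOF'
#cloud-config
users:
  - name: admin
    groups: wheel
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... your-key-here
EOF

# The volume label must be 'cidata' so cloud-init's NoCloud datasource finds it
genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data
```

Attach al2023.vhdx as the boot disk and seed.iso as the virtual DVD, and cloud-init picks up the user and key on first boot.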

I guess I should add the warning that installing on Hyper-V is not supported by AWS; on the other hand, installing locally has official limitations anyway. Even if you install on KVM, the notes state that the KVM guest agent is not packaged or supported, VM hibernation is not supported, VM migration is not supported, passthrough of any device is not supported, and so on.

What about the Hyper-V integration drivers? Note that “Linux Integration Services has been added to the Linux kernel and is updated for new releases.” Running lsmod shows that the essentials are there:

The Hyper-V modules are in the kernel in Amazon Linux 2023
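A quick way to check this on the guest is to filter lsmod for the Hyper-V modules; on a working Generation 1 VM you would expect to see entries such as hv_vmbus, hv_netvsc, hv_storvsc and hv_utils:

```shell
# List loaded kernel modules and keep only the Hyper-V guest drivers
lsmod | grep -E 'hv_|hyperv'
```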

Networking worked for me without resorting to a legacy network card emulation.

This exercise also taught me about the different philosophy in Amazon Linux 2023 versus Amazon Linux 2. That will be the subject of another post.
