All posts by Tim Anderson

The era of tiny PCs: 400g and smaller than a paperback book

My work PC for the last few years has been a 2018 HP Omen gaming PC which has served me brilliantly; I have replaced the GPU and added storage, but otherwise everything still works fine. At least, it was my work PC, until I reviewed a mini PC which surprised me with its capability – not because it is exceptional, but because everyday technology has reached the point where anything bigger is unnecessary for most purposes other than gaming.

Mini PC with paperback book and CD to show the size

The new PC is a Trigkey S5 with an AMD Ryzen 5560 CPU, 500GB NVMe SSD and 16GB DDR4 RAM, and currently costs around £320. Its Geekbench CPU score is better than that of my five-year-old HP with a Core i7.

The GPU score, on the other hand, is way below that of the old HP.

Still, there is support for three displays via HDMI, DisplayPort and USB-C, and 4K at 60Hz is no problem.

Inside we find branded RAM, and it does not look as if the components are shoe-horned in; there is plenty of space.

The power supply is external and rated at 19V and 64.98W.

Expansion is via 4 USB-A ports, one USB-C, and the aforementioned HDMI and DisplayPort sockets. There is also an Ethernet port, and of course Bluetooth and Wi-Fi.

Operating system? Interesting. It is not mentioned in the blurb, but Windows 11 comes installed – with one of those volume MAK (Multiple Activation Key) licenses that is not suitable for this kind of distribution (though it costs the vendor hardly anything). When first run, Windows setup states that “you may not use this software if you have not validly acquired a license for the software from Microsoft or its licensed distributors,” which you likely have not, but Trigkey may presume that most of its customers will not care. I recommend installing your own licensed copy of Windows, as I have done, or your preferred Linux distribution.
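
If you want to check what sort of license a pre-installed copy of Windows is using, the built-in licensing script will tell you. This is a standard Windows command, run from an administrative command prompt:

slmgr /dlv

Look at the Product Key Channel in the dialog that appears; a volume MAK install typically shows Volume:MAK, whereas a properly licensed machine shows an OEM or Retail channel.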

Windows does run well, however, and 16GB RAM is enough for Hyper-V and Windows Subsystem for Linux (WSL) 2 to work smoothly. Visual Studio 2022, VS Code and Microsoft Office all run fine.

I am not suggesting that this particular model is the one to get, but I do think that something like this, small, light, and power-sipping, is now the sane choice for most desktop PC users.

The AWS re:Invent 5K run 2023

Sunrise over Las Vegas – at the re:Invent 5K run 2023

It happens that, a little later in life than most, I have taken up running, and during the recent AWS re:Invent in Las Vegas I was one of 978 attendees to take part in the official event 5K run.

If there were around 50,000 at the conference, that would be nearly 2% of us, which is not bad considering that the first coaches to the venue left our hotel at 5.15am. The idea, I guess, was that you could do the run and still make the keynote – which I did.

I would not call myself an experienced runner but I have taken part in a few races and this one seemed to have all the trimmings. The run was up and down Frank Sinatra Drive, which was closed for the event, and the start and finish was at the Michelob ULTRA Arena at Mandalay Bay. Snacks and drinks were available; there was a warm-up; there was a bag drop; there was a guy who kept up an enthusiastic commentary both for the start and the finish. The race was chip timed.

We started in three waves, being fast-ish, medium, and run/walk. I started perhaps optimistically in the fast-ish group and did what for me was a decent time; it was a quick course with the only real impediments being two u-turns at the ends of the loop.

Overall a lot of fun and I am grateful to the organisers for arranging it (it does seem to be a regular re:Invent feature).

Here is where it gets a bit odd though. The event is pushed quite hard; it is a big focus at the community stand outside the registration/swag hall and elsewhere at the other official re:Invent hotels. It is also a charity event, supporting the Fred Hutchinson Cancer Center. All good; but I was surprised never to be officially told my result.

I was curious about it and eventually tracked down the results – I figured that with chip timing they were probably posted somewhere – and yes, here they are. You will notice though that no names are included, only the bib numbers. If you know your bib number you can look up your time. This was mine.

It seems that AWS do not really publish the results, which would have disappointed me had I been the first finisher, who achieved an excellent time of 16:23 – well done, 1116!

I can’t pretend to understand why one would organise a chip-timed race but then not publish the results. Perhaps in the interests of inclusivity one could give people the option to remain anonymous, but for most runners the time achieved is part of the fun. I think we were meant to be emailed our results but mine never came; and even if it had, I would still like to browse the full table and see how I did overall.

AverMedia Live Gamer Ultra 2.1 – excellent capture card and getting better

This is a neat capture device packaged in an unnecessarily bulky box – though to be fair, the cables take more space than the capture box itself. It is called Ultra 2.1 because it supports HDMI 2.1, though not at the highest resolutions of which HDMI 2.1 is capable. However, since an Xbox Series X or a PlayStation 5 outputs at up to 4K 120Hz, the Ultra 2.1 with passthrough at 4K 144Hz and support for HDR (High Dynamic Range) and VRR (Variable Refresh Rate) seems plenty good enough. I was able to capture at 3840 x 2160 at 60Hz using OBS (Open Broadcaster Software) with very low latency.

AverMedia Live Gamer Ultra 2.1

Some features of the product are not quite ready though. Support for AverMedia’s easy-to-use RECentral software is not coming until the first half of 2024, according to the support page, and passthrough resolution will be enhanced to add 3440 x 1440 at 100Hz in a forthcoming firmware update. Similarly, macOS support is promised before the end of 2023.

Even with the product as it is, though, I got good results. The device is very easy to use (even if OBS is a bit fiddly) and I was glad to see that the supplied HDMI cable is fully certified. The box is powered over USB, requiring a USB 3.2 Type-C port, and needs no additional power supply. There is also a 4-pole audio cable supplied which can be used with a headset or controller, though I did not try this.

The box has lighting effects which to my mind are rather pointless, but you can control these through the AverMedia Gaming Utility, a download from the AverMedia site. This utility can also update the firmware, which was the first thing I did. Downloads are available here.

A high quality capture box which gave me excellent results from a PS5.

Full specs:

  • Interface: USB 3.2 Gen 2 Type-C (10Gbps)
  • Input & Output (Pass-through): HDMI 2.1
  • Max Pass-Through Resolution: 2160p144 HDR/VRR, 1440p240 HDR/VRR, 1080p360 HDR/VRR
  • 3440 x 1440 at 100Hz and other ultrawide resolutions promised via firmware update on Nov 16th, with more to follow
  • Max Capture Resolution: 2160p60
  • Supported Resolution: 2160p, 1440p, 1080p, 1080i, 720p, 576p, 576i, 480p, 480i
  • Video Format: YUY2, NV12, RGB24, P010(HDR)
  • Dimension (W x D x H): 120 x 70 x 27.6 mm (4.72 x 2.76 x 1.09 in.)
  • Weight: 115 g (4.06 oz.)

System requirements:

  • Windows® 10 x64 / 11 x64 or later
  • macOS support promised by the end of 2023.
  • Desktop: Intel® Core™ i5-6XXX + NVIDIA® GeForce® GTX 1060 or above
  • Laptop: Intel® Core™ i7-7700HQ + NVIDIA® GeForce® GTX 1050Ti or above
  • 8 GB RAM recommended (Dual-channel)

A mild case of Azure bill shock: is this the most over-priced service on Microsoft’s cloud?

I have been experimenting with accessing Azure storage from remote PCs and tried out the option to use SFTP which was introduced last year. It works, though there are limitations, like no support for SSH commands after connecting, no resume support for uploads, and no support for Azure AD authentication – this last is a bit of an issue, since fine-grained permissions can only be set up with local users specific to the blob storage.

I thought I had turned this off after my experiment, but I had not. So I had SFTP enabled on a test storage account, doing nothing. I spotted it, of course, when I got a large (for my usage) bill. Simply having SFTP enabled on a storage account costs around $220 per month.
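
If you want to check whether SFTP has been quietly left enabled on any of your storage accounts, the Azure CLI exposes the setting, and the same switch that turns it on can turn it off again. A quick sketch, with placeholder account and resource group names:

az storage account show --name mystorageacct --resource-group my-rg --query isSftpEnabled

az storage account update --name mystorageacct --resource-group my-rg --enable-sftp false

The charge is based on the hours the feature is enabled, so disabling it stops the meter without affecting the account or its data.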

To be fair to Microsoft, the cost is documented and there is a notice in the portal, in the details for the storage account, that enabling SFTP incurs a charge, though it does not say how much.

The cost for enabling SFTP

The price is remarkable though, especially given that it seems that the SFTP support is a bit of a hack. Perhaps Microsoft actually runs up a dedicated VM for this in the background, who knows?

“The cost is astronomical considering the service, it’s like $7.20 a day to use and roughly $220 a Month. It’s WAY cheaper to use a VM. This service is like 3x too much,” said a comment from another sufferer.

My advice is not to do this. My further advice is to track closely the actual spend on any new services you run up, since it is the only reliable way to avoid this kind of problem.

Windows Server 2022 Essentials – a good deal for small businesses but what is it really?

I have just installed Windows Server 2022 Essentials on a Gen 10 Plus HPE server – a somewhat arduous experience, mainly thanks to what seem to me to be HPE’s buggy firmware and utilities. I optimistically tried to use Intelligent Provisioning; this is meant to update itself before use but got into a loop where it would not update, the solution being to download the latest version from HPE and install it from a USB stick. That worked, but I still could not get Intelligent Provisioning to install Windows Server and ended up going a more manual route. Once installed, you will need HPE’s SUM (Smart Update Manager) to install drivers and update other bits of firmware; this runs as a local web application, but when it attempts to open in the default browser (Edge) it hangs on “Loading”; the solution was to use Firefox. I also hit a documented problem where Windows reports virtualization as not enabled and Hyper-V therefore does not work. All fixed now, and one thing that I do like about HPE servers is iLO (Integrated Lights-Out) and the ability to do everything remotely, including changing BIOS settings.

The main focus of this post though is Windows Server 2022 Essentials, which I purchased with the new server. Curiously, it installs as Windows Server Standard and at first I thought something must be wrong. Not so; this is quite a different thing from previous versions. Windows Server Essentials is two things: a role in Windows Server 2012 R2 and 2016; and an edition of Windows Server aimed at small businesses. The edition is a good deal for organizations that fit within its limitations since it is modestly priced and does not require CALs (Client Access Licenses), though it seems you can now only buy it as OEM software. If you exceed the limitations, you have to upgrade to full Windows Server and add the CALs too.
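
Incidentally, if you want to confirm what you have actually ended up with after activation, DISM reports the installed edition; for example, from an elevated command prompt:

DISM /online /Get-CurrentEdition

On an Essentials 2022 install I would expect this to report ServerStandard, which – as discussed below – is by design rather than a licensing mistake.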

The fact that Server Essentials is both a role and an edition leads to some hilarious confusion, including a remark in the official documentation.


All that is irrelevant now, though, as the role is gone as of Server 2019.

The consequence of these changes is that Server Essentials now has very little specific documentation. The features are the same as Windows Server Standard, other than the stringent hardware limits which are:

For Windows Server 2022 Essentials:

1 CPU socket, 10 CPU cores, 128GB RAM

For Windows Server 2019 Essentials:

2 CPU sockets, no core limit, 64GB RAM

In addition, the licensing terms state that “Up to either 25 unique users or 50 unique devices may access and use the software at one time” and that “Windows Server CALs are not needed to access the server software.  Some server software functionality may require special CALs.”

Finally, there is provision for virtualization of the server by installing both directly on the hardware and a further instance as a VM, provided that “if you run both permitted instances at the same time, the instance of the server software running in the physical operating system environment may be used only to run hardware virtualization software or provide hardware virtualization services.”

In every other respect, it is Windows Server Standard. A note here states:

With Windows Server 2022, the Essentials edition is available to purchase from OEMs only, however there is no specific installation media. Instead, an Essentials edition product key is used to activate the Standard edition of Windows Server 2022. You get all the same features.

I cannot see any requirement for it to be a domain controller or other such restrictions which apply to earlier versions – though in most cases it probably would be. You can also run Azure AD Connect on versions since 2019.

Windows Server Essentials is the last remnant of what used to be Small Business Server, which in its time was a great solution for small organizations when properly installed and managed. Microsoft now expects such businesses to use Microsoft 365, though a local server is still handy for things like local user management, print management, local file shares, or applying group policy if you do not use Intune. Further, there is still plenty of business software that expects to run on Windows Server.

Remote Desktop on Mac fails to connect with 0x207 error

I am setting up a new Mac and got this annoying error from the Microsoft Remote Desktop client.

Worse, a number of people have complained about this error, but there is a lot of useless advice out there, including the bad advice to disable NLA (Network Level Authentication) on the Windows PC. Don’t do that; it is bad for security.

One of the few helpful threads on the topic is this one, which points to this article on the subject of how to enable integrated authentication on Mac and Linux using Kerberos. I followed the advice there and it worked. I am not sure whether ALL CAPS is strictly necessary for the domain, but I used it – and it only worked as long as I entered user@ALLCAPS as the username in the RDP client as well.
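
For what it is worth, the gist of the Kerberos side is to obtain a ticket for your Active Directory account, with the realm written in upper case, before connecting. A minimal sketch from the Mac Terminal, using a made-up domain CONTOSO.LOCAL:

kinit someuser@CONTOSO.LOCAL

klist

kinit prompts for the domain password and klist confirms that a ticket was granted; you can then use someuser@CONTOSO.LOCAL as the username in the Remote Desktop client, matching the user@ALLCAPS format mentioned above.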

StackOverflow still delivers, fixes my Rollup.js problem

I hear plenty of complaints about StackOverflow, the developer Q&A site. Top of the list is unfriendliness and/or arguing about whether a question is well put rather than, well, answering the question.

For example, this Reddit post yesterday:

“It’s been three years since a question I posted to SO wasn’t closed within the first ten minutes of posting it and downvoted for good measure (that’ll teach me to use the site like it’s intended!).”

No doubt the complaints have some validity. StackOverflow is a kind of social media as well as a technology site and all social media sites have their problems.

Nevertheless, it remains a wonderful resource.

I am using the AWS Chime SDK for JavaScript in a project. I am using ASP.NET; it is not an SPA (Single Page Application), and it does not use React. The SDK, though, is primarily designed to be used with Node.js and a bundler like Webpack, fitting I guess with the majority of web applications being built today. The solution AWS provides for developers like myself is presented as a demo called singlejs. This uses rollup.js to bundle the Chime SDK for JavaScript into a single file that can be used in any plain, ordinary web page. A couple of weeks ago, someone got around to updating the demo to use version 3.x of the SDK.

I guess the committer checked that the demo worked and generated a JavaScript file, but did not actually check that it was usable. The way it is meant to work:

In a browser environment, window.ChimeSDK will be available. You can access Chime SDK components by component name. For example, you can create a meeting session and configure the meeting session using window.ChimeSDK.

Unfortunately the newly generated amazon-chime-sdk.min.js did not create this global variable, even though it is specified in the name property in rollup.config.js. I puzzled over this problem and tried to fix it in various ways, using different versions of rollup and the plugins it uses. I noted the problem in a GitHub issue on the relevant AWS repository, but there has been no response yet. Since it made it impossible for me to upgrade my project to version 3.x of the SDK, it was a significant problem.

I posted a question on StackOverflow. It did not attract much attention; at the time of writing it has been viewed only 31 times. I was not optimistic.

As so often with these kinds of problems, the fix is super simple. The source file in the singlejs demo, the one which Rollup bundles, has just one line of JavaScript:

export * from 'amazon-chime-sdk-js';

Someone popped up, 2 days after my post, and commented:

Update the src/index.js file with the following code and then rebuild the code with npm run bundle. Rollup recommends a default export if we have only single export.

export * as default from 'amazon-chime-sdk-js';

Indeed, that fixes it. So thank you to Vishnu S Krish, who for no reward has posted a solution that works, ahead of those busy AWS developers working on the project who have so far ignored it.

Thank you too to StackOverflow. It is imperfect; but it is not full of spam, it is a pleasant site to use and not afflicted by intrusive ads and popups, and it has a ton of good solutions.

Surface Pro 9 with Windows on Arm

I have had a short time with a loan Surface Pro 9 running Windows on Arm.

My review sample came without a keyboard case. I do not recommend this unless you have very specific tablet-y requirements. It is hard to use without a keyboard. This of course means it costs more than it first appears, because the cheapest keyboard is £129.99 inc VAT. Since most people I see using a Surface use it like a laptop, I do wonder about the value of the kickstand design, which harks back to the earliest Surface devices when Microsoft was taking on the iPad. That battle was lost with the failure of the tablet personality in Windows 8. Desktop Windows won; and it needs a keyboard.


That aside, it’s a lovely device: great screen, and great for video conferencing thanks to the smart camera. AI makes it appear that you are looking at the camera even if you are not. Good feature or deception? I am not sure, but I err more towards deception. It is a hard one though, because when paying attention in a video conference you are looking at the video of the speaker, not at the camera, which makes it appear that you are looking elsewhere even though you are not.

Lower energy use than x64, longer battery life. Perfect Windows device? It might be, except that the vast majority of Windows applications are compiled for x64 only. This means some applications might not work, and in other areas there is friction. A contact of mine bought a Surface Pro 9 with the SQ (Arm) chipset for work. It came with Windows 11 Home on Arm. The tech specs say that “At this time, Surface Pro 9 (SQ® 3/5G) with Windows 11 Home on ARM will not install some games and CAD software, and some third-party drivers or anti-virus software. Certain features require specific hardware … find out more in the FAQ.” Where is this FAQ? It is not linked from the tech specs as far as I can tell. Maybe this is it. Windows 11 Pro is not mentioned. My contact should of course have purchased Surface Pro 9 for Business. Windows Home has too many annoyances and limitations to be usable for business.

What to do? Fortunately there is a Microsoft 365 upgrade to Windows 11 Pro, which is a cost-effective option. The upgrade was delivered to the Microsoft 365 portal as a license key with a link to an ISO to download. The key did not work. The ISO did not work either, as it was x64 only. Rumour has it that a Windows 11 Pro ARM build from UUP dump worked fine with the key, even as an in-place upgrade. Maybe Microsoft support could also sort this out. But it is friction, and I doubt it will be the last.

It seems obvious to me that if you want an Arm-based laptop with excellent performance and long battery life, a MacBook Pro is a better option. You can run Windows in a VM via VMware Fusion 13 or Parallels and it performs well. Or if you want a Windows on Arm box for test and development, the Dev Kit is a good option.

There is still a niche for the Surface Pro 9 with SQ, if you are confident that everything you need will run. It is more efficient than an x64 device, and it has 5G. It is a nicely built device, even if not the best value. I think Windows on Arm will continue to improve. There is a way to go, though, before it is really mainstream.

.NET P/Invoke on Azure App Service for Linux

I have an online bridge game in development (yes, still!) and it is written in ASP.NET Core with C#. One of the things that interests bridge players is called double-dummy analysis; this is where you look at what would be the best play in a game if you knew where all the cards were, whereas when actually playing bridge you only see your own cards and, during play, another hand called Dummy, so half the cards are hidden.

Double-dummy analysis is a solved problem and bridge programmers benefit from an open source library called DDS (Double Dummy Solver), written primarily by Bo Haglund and Soren Hein. This is a C++ DLL that can also be compiled for Linux and macOS.

I wanted to integrate DDS into the bridge game in order to give players information at the end of a game including whether they were in the optimum contract and whether they beat the optimum score. I started by doing a new C# wrapper for DDS though borrowing from the work here. My version is 64-bit and wraps a few more functions. I compiled the native DLL for Windows and Linux using OpenMP for concurrency, which considerably improves performance (Boost is another option but I did not find much difference).

Note: the usual caveats about P/Invoke apply here. During one of my tests I actually crashed the container running the app. The ASP.NET developers do a lot of work to make the platform reliable, and doing P/Invoke may introduce instability.
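
For anyone unfamiliar with P/Invoke, the wrapper boils down to DllImport declarations along these lines. This is an illustrative sketch rather than my actual wrapper – SetMaxThreads is, I believe, one of the simpler DDS exports, and the more interesting functions take structs that need matching C# layout declarations:

using System.Runtime.InteropServices;

internal static class Dds
{
    // .NET resolves the bare library name to dds.dll on Windows
    // and libdds.so on Linux at runtime.
    private const string LibName = "dds";

    // Simple example export; most DDS functions take and return structs,
    // which must be declared on the C# side with a matching layout.
    [DllImport(LibName)]
    internal static extern void SetMaxThreads(int userThreads);
}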

I added my wrapper into the ASP.NET application and it worked fine on my development machine. I deployed it to App Service and the P/Invoke calls did not work. Fixing this required a bit of a deep dive into Azure App Service for Linux.

I am deploying the native code .so library into the same directory as the compiled .NET code for the rest of the application. The error I got was:

Cannot open shared object file: No such file or directory

I raised the topic on Stack Overflow.

One of the things that puzzled me was that the unit tests, which include the P/Invoke code, ran OK in Azure Pipelines, which I use for deployment. But not when deployed.

The first point is that you get the “No such file” error not only when the file itself is not present (it was) but also when a dependency is missing. So step one is to SSH into the container running the ASP.NET app, which you can do with the Development Tools in the Azure portal. Note that with Azure App Service for Linux the app always runs in a container.


This gives you root permissions in the container though not to the host operating system. Navigate to the directory with the troublesome library and type:

ldd libdds.so

(or the name of your library). This will tell you if any dependencies are missing, along with other issues. I noticed two things. One was that the dependency libgomp.so.1, which is the OpenMP runtime library, was missing. Second, ldd reported that my library required at least GLIBC 2.29, whereas the available version was 2.28.

How could I fix the GLIBC version? This is determined by the version of Linux and you can use

ldd --version

to check the version you have. In my case it reported Debian with GLIBC 2.28.


I did some more research. If you really want to know about Azure App Service for Linux, there are a few key documents.

The basics here: Operating system functionality – Azure App Service | Microsoft Learn

The FAQ here: App Service on Linux FAQ | Microsoft Learn

Here you will learn details like why you cannot use a file-based database like SQLite in Azure App Service for Linux:

“The file system of your application is a mounted network share. This enables scale out scenarios where your code needs to be executed across multiple hosts. Unfortunately this blocks the use of file-based database providers like SQLite since it’s not possible to acquire exclusive locks on the database file.”

But I digress. To go deeper still, check this post by Jim Cheshire:

Things You Should Know: Web Apps and Linux – Microsoft Tech Community

which has lots of critical information, like why a custom container on App Service must respond to ping.

So after reading through all this and greatly improving my understanding of how App Service for Linux works, I got to the heart of my problem. When you deploy a .NET Core application to App Service for Linux, it will by default use a container from the Microsoft Artifact Registry that matches the version of .NET you are using. If you check this page you will see that the current version for ASP.NET Core 6.0 is tagged mcr.microsoft.com/dotnet/aspnet:6.0


If you examine this container you will find that it runs Debian Buster, which uses GLIBC 2.28. This is a matter of slight concern, since Debian Buster is shown on the Debian releases wiki as having an approximate end of life of August 2022, though the LTS project extends that to June 2024.
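
You do not need an Azure deployment to check this, incidentally; pulling the same image locally and inspecting it shows which Debian release and GLIBC it ships with. Something like:

docker run --rm mcr.microsoft.com/dotnet/aspnet:6.0 cat /etc/os-release

docker run --rm mcr.microsoft.com/dotnet/aspnet:6.0 ldd --version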

Still, now I knew how to fix my problem. Either use a custom container image, or upgrade to .NET 7, or recompile libdds.so to run on Debian Buster.

I decided that the easiest short-term solution was to recompile. I downloaded Buster and recompiled the library.

What about libgomp.so.1? This was kind-of fixable by using SSH to run:

apt-get update

apt-get install libgomp1

This is not great though, since Azure could replace the container at any time – and it always will if you do something like scale the plan up or down to change the specification of the VM. I tried copying the Buster version of libgomp.so.1 to the application directory. That works, but I also needed to add a linker option so that DDS can load a library from its own directory:

 -Wl,-rpath='${ORIGIN}'

as explained here.
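
For reference, the rpath flag belongs on the link step when building the shared library, so that the dynamic loader searches the library’s own directory for dependencies such as libgomp. A rough sketch of the kind of link command involved (the object files are illustrative):

g++ -shared -fPIC -fopenmp -o libdds.so *.o -Wl,-rpath='${ORIGIN}'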

I think a better solution is to move to deploying a custom container to App Service, which is an option.


Care is needed though as there is a bit of special sauce in the official container images if you want features like SSH in the portal to work properly. It also means revisiting my deployment scripts, so the above hack was an easier and quicker workaround for me.

Microsoft to remove Azure “Basic” IP number and load balancer in favour of pricier options

Microsoft is removing some features from Azure which were called “Basic,” in favour of alternatives which have more features but are also more expensive.

A load balancer is a network component which balances traffic to virtual machines. The Basic load balancer is free but has a few limitations, such as no compatibility with availability zones, support for only 300 instances, no SLA (Service Level Agreement), and no support for NAT Gateway. Microsoft has emailed customers saying:

On 30 September 2025, Azure Basic Load Balancer will be retired. You can continue to use your existing Basic Load Balancers until then, but you’ll no longer be able to deploy new ones after 31 March 2025.

The Standard load balancer routes to availability zones, supports up to 5000 instances, is secure by default, and has a 99.9% SLA, but it costs $0.025 per hour, or around $18 per month, for up to 5 rules.

A Basic public IP number costs $0.0036 per hour or about $2.60 per month. It is a perfectly good IP number but does not support zone resiliency. A Standard public IP number costs $0.005 per hour or about $3.60 per month, and does support zone resiliency. A similar email has been sent to users, with the same dates.
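
If you are unsure whether you have any Basic resources to worry about, the Azure CLI can list them by SKU. A quick sketch – the JMESPath queries assume the default property names:

az network lb list --query "[?sku.name=='Basic'].{name:name, resourceGroup:resourceGroup}" -o table

az network public-ip list --query "[?sku.name=='Basic'].{name:name, resourceGroup:resourceGroup}" -o table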

Although these extra charges will not make much of a ripple in enterprise accounts, they can be noticeable, for example if you are an individual developing an application and trying to keep within a strict budget.