Category Archives: tech

Converting a scanned image to text in Office 365

I was emailed an attachment scanned from a magazine; working from an image was a nuisance and I wanted to convert it to text. There are of course a million ways to do this, and I recall that every multifunction printer used to come with an OCR facility, but what is the easiest way now? For a while I’ve used Microsoft OneNote for this: you just paste in an image, right-click, and there is a Copy Text from Picture option.

This normally works OK but not this time. The results were not completely useless but included lots of errors: words missing, and words wrongly recognised or scrambled. I am not sure, for example, how the word “score” got recognised as “scMe”.

So I looked for a better solution online, trying to avoid ad-laden free OCR sites of unknown quality. I found Convertio, which has a straightforward introductory service with no registration or ads for the first 10 pages. It did a much better job, with only 3 or 4 errors, text converted correctly to two columns in a Word document, and a table converted to a Word table. The main issue was that the text was tiny – 4pt – but that was reasonably easy to fix up. It seems to have a much better recognition engine than OneNote.

I’ll be inclined to use Convertio again, but it also seems that Microsoft has fallen behind in this little corner of Office 365. Perhaps it should do something based on its Cognitive Services.

All change for the New Year

I have been working happily at The Register four days a week since mid-2019 but it is time for a change. Incidentally, I very much enjoyed working for the Reg: it was consistently interesting work, I was given a lot of freedom to write what I wanted, and I was well treated; I recommend it highly as a great place to work. The editorial idea is to find material that is interesting for a technical readership without any pressure to please vendors, and I found that to be 100% true.

Why change? The main reason is that near-full-time journalism is rightly a demanding role and I found it taking most of my energy; and I have other things I want to do. I will still be writing, but once again on a freelance basis, and not at the two or three posts per day that I have been doing. I will also be indulging my enthusiasm for bridge, hopefully improving the online bridge playing and teaching platform I started coding during lockdown, as well as helping the English Bridge Union with its technology. I expect to be more active here on itwriting.com and have plans to experiment with a redesign, perhaps using Next.js and headless WordPress.

Visual Studio, TypeScript, WebPack and ASP.NET Core: somewhat awkward

It is always good to learn a new language so I took advantage of the holiday season to look more closely at TypeScript. At least, that was the original intent. So far I have spent longer on configuring stuff to work than I have on actual coding. I think of it as time invested rather than wasted.

As long-term readers will know, I am working on a bridge (the card game) website which has been used successfully over the lockdown period. I put this together quickly in the first half of 2020, reusing an unfinished Windows project and taking advantage of everything I could get without having to code it myself, like the ASP.NET identity system. So it is C#, ASP.NET Core, SignalR, runs on Linux on Azure App Service, and is mostly coded in Visual Studio, with a few detours into Visual Studio Code.

Of course there is a ton of JavaScript involved and, since the user interface for a bridge-playing game is fairly custom, I did not use a JavaScript framework, unless you count jQuery and Bootstrap. I wrote a separate JavaScript file for each page (possibly a mistake). I also started using the AWS Chime SDK for JavaScript, which means referencing a huge 680K JavaScript file.

I therefore had several goals in mind. One was to code in TypeScript rather than JavaScript in order to take advantage of its features and catch more mistakes at compile time. Second, I wanted to optimize the JavaScript better, with automatic minification. Third, I wanted to align my project more closely with the JavaScript ecosystem. The AWS SDK, for example, is written in TypeScript using modules, but I have been using some demo code provided to compile a single JavaScript file. Maybe I can get better optimization by coding my own project in TypeScript, and importing only the modules I need.
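
For example, assuming the SDK is installed via npm, importing just the pieces a page needs looks something like this – a sketch, with the class names taken from the amazon-chime-sdk-js package:

import { ConsoleLogger, DefaultDeviceController, LogLevel } from 'amazon-chime-sdk-js';

// Only these classes, and what they depend on, need end up in the bundle
const logger = new ConsoleLogger('ChimeLogs', LogLevel.WARN);
const deviceController = new DefaultDeviceController(logger);

In principle a bundler can then drop the unused parts of the SDK, though as it turned out the gains were modest (see below).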

Visual Studio is not well aligned with the modern JavaScript ecosystem, as you can tell if you read this article on bundling and minification of static assets. “ASP.NET Core doesn’t provide a native bundling and minification solution,” it says, and refers developers to the WebOptimizer project or other tools such as Gulp and Webpack.

I did want to start with TypeScript though, and to begin with this looked easy. All you have to do is to add the TypeScript NuGet package, do some minimal configuration by creating and editing tsconfig.json, and you can write TypeScript and have it transpiled to JavaScript in your preferred target directory whenever the project is built. I moved a bunch of my JavaScript files to a directory of TypeScript files, renamed them from .js to .ts, and set to work making the TypeScript compiler happy.
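
For reference, a minimal tsconfig.json for this kind of setup looks something like the following – the directory names are illustrative rather than those of my project:

{
  "compilerOptions": {
    "target": "es2017",
    "module": "es2015",
    "strict": false,
    "sourceMap": true,
    "outDir": "wwwroot/js"
  },
  "include": ["Scripts/**/*.ts"]
}

Each .ts file under Scripts is then transpiled to a .js file under wwwroot/js on build.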

When you do this you discover that the TypeScript compiler considers all .ts files that are not modules to be in the same scope. So if you have two JavaScript files that both contain functions called DoSomething(), the compiler throws a duplicate function error, even if you will never reference them both from the same web page. You can fix this by making them modules – it feels like TypeScript is designed on this basis – but now you have the opposite problem: if JavaScript file A references functions or variables in JavaScript file B, they have to be exported and imported. A good thing in principle, but now you have import statements in the code. The TypeScript compiler does not transpile these for compatibility with browsers that do not support import, and in addition, you now have to use type="module" on script references in HTML.

I also ran into issues with the libraries I use, primarily SignalR and the AWS Chime SDK. You can either npm install these and import them in the proper way, if the developers have provided TypeScript definition files (with a .d.ts extension), or find a type library via DefinitelyTyped, which provides only the types; you still need to reference the library separately. There is an obvious potential version issue if you go the DefinitelyTyped route.
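
To illustrate the pattern, with hypothetical file names: a function exported from one module, imported by another, and a script tag marked as a module. Note that the import path references the compiled .js file, since that is what the browser loads:

// utils.ts
export function doSomething(): void {
    console.log("doing something");
}

// play.ts
import { doSomething } from "./utils.js";
doSomething();

<!-- in the page -->
<script type="module" src="/js/play.js"></script>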

In other words, what starts out as the simple idea of writing TypeScript instead of JavaScript soon becomes a complete refactor of the code to be modular and use imports and exports. Again, this is not a bad thing, but it is more work and not quite the incremental transition that I had in mind. I had over 1000 errors reported by the TypeScript compiler but gradually whittled them down (and this is with TypeScript’s strict mode off, intended as a temporary expedient).

So I did all that but had a problem with these import statements when it came to using them in the browser. It seemed that WebPack could fix this for me, plus I could configure it to do tree-shaking to reduce code size and to use a minifier (it uses terser by default). There is a slight issue though since modern JavaScript tools like WebPack and terser are geared towards bundling all your JavaScript into a single file, and/or having a single-page application, which is not how my bridge site works. Still, it looked like it could be configured to work for me so I started down the track, using a post build step in Visual Studio to run WebPack.
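
The multi-page requirement translates into multiple entry points, one per page, in webpack.config.js – a sketch with illustrative names:

// webpack.config.js
const path = require('path');

module.exports = {
  mode: 'production', // enables tree-shaking and terser minification
  entry: {
    play: './Scripts/play.ts',
    scores: './Scripts/scores.ts'
  },
  module: {
    rules: [{ test: /\.ts$/, use: 'ts-loader' }]
  },
  resolve: { extensions: ['.ts', '.js'] },
  output: {
    filename: '[name].bundle.js',
    path: path.resolve(__dirname, 'wwwroot/js')
  }
};

Each page then references only its own bundle.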

I am sure this is obvious to people familiar with WebPack, but I still had problems getting my HTML pages to talk to the JavaScript. By default terser will mangle and shorten all the function names, but that is easily configured. The HTML still could not call any JavaScript functions: function not defined. Eventually I discovered that you have to configure WebPack to output a library. So if you have an HTML button that called a JavaScript function in its onclick event handler, there are several things you need to do. First, export the function in the JavaScript (or TypeScript) code. Second, add a library name and preferably type ‘umd’ in webpack.config.js. Third, add the library name as a prefix to the function you are calling from HTML, for example mylibrary.myfunction.
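
Putting those three steps together, roughly – mylibrary and myfunction are placeholders:

// webpack.config.js: extend the output section
output: {
  library: { name: 'mylibrary', type: 'umd' }
}

// in the TypeScript
export function myfunction(): void {
    // handle the click
}

<!-- in the HTML -->
<button onclick="mylibrary.myfunction()">Play</button>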

I also had issues with code splitting – essential to avoid bloated JavaScript bundles. This is done in WebPack by configuring SplitChunks. When I set it to ‘all’, my library exports stopped working. After much trial and error I found the fix: set chunkLoading to ‘jsonp’. A debugging tip: if your library variable is “undefined” at runtime, there is a problem with one of the bundled JavaScript files. Unfortunately this was not reported as an error in the browser console – that is, the undefined library variable was reported, but not the reason for it. I tracked it down to a call to document.readyState or possibly document.addEventListener; using jQuery instead fixed it.
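
In webpack.config.js terms, the combination described above looks something like this sketch:

optimization: {
  splitChunks: { chunks: 'all' } // share common code across the page bundles
},
output: {
  chunkLoading: 'jsonp', // without this, my library exports broke
  library: { name: 'mylibrary', type: 'umd' }
}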

Another tip: do not call any JavaScript code directly from cshtml, other than via event handlers. It might try to run before the JavaScript is loaded. I found it easiest to put initialization code in a function and call it from JavaScript. You can put the function declaration into index.d.ts to keep TypeScript happy, since it is external.
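
As an example of the pattern, with hypothetical names: declare the page-defined function in index.d.ts so the TypeScript compiler accepts the call, define it in the .cshtml where Razor values are available, and call it from the bundle once the DOM is ready (using jQuery, as elsewhere in the project):

// index.d.ts
declare function initPage(): void;

// in the bundled TypeScript
$(function () {
    initPage();
});

<!-- in the .cshtml -->
<script>
    function initPage() {
        // initialization that needs values from Razor
    }
</script>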

Is it worth it? Watch this space. It has pushed me into refactoring which is improving the structure of my code – but I have also added complexity, with a build process that compiles TypeScript to JavaScript (using ts-loader), then merges and splits files with WebPack, while along the way minifying and mangling them with terser. Yes, I have encountered unexpected behaviour, partly thanks to my inexperience, but the interactions between jQuery, WebPack and library exports, for example, are quite complex. I have spent time and energy wrestling with WebPack instead of coding my application. There is a lot to be said for my old approach, where you code in JavaScript and it runs as JavaScript and it is easier to trace what is going on.

Still, it is working and I have achieved some of my goals – but the AWS Chime SDK file is still huge, just 30K smaller than before, which is disappointing. Perhaps there is something I have missed. I will be coding in TypeScript now and look forward to further refactoring as I get to know the language.

Update: I have abandoned this experiment. There were niggling problems with the WebPack bundles and I came to the conclusion that it is unsuitable for a multi-page application. A shame as I wanted to use WebPack; but for the time being I am just using TypeScript with terser. This means I am using native ES modules in the browser and I intend to write up the experience soon.

Exchange emails stuck in queue because “message deferred by categorizer agent” – Happy New Year, admins!

The first day of a new year is a great moment to relax and prepare for what is ahead – but spare a thought for Microsoft Exchange administrators who may have woken up to seized-up installations of their on-premises email servers. I was among those affected, but only on my tiny system. Messages were stuck in the submission queue, suspiciously since midnight or thereabouts (somehow a message sneaked through timed at 12.14 am), and the last error reported by the queue viewer was “Messages deferred by categorizer agent.”

As usual I went down a number of rabbit holes. Restart the Exchange Transport service. Reboot the server. Delete the first message not to be delivered in case it was corrupt and somehow clogging up the queue. Check for certificate issues.

It was none of these. The guilty party, recorded in the event viewer, was the FIP-FS Microsoft Scan Engine, which failed to load with the error: can’t convert “2201010001” to long.

The impact was that the malware filter could not check the message, hence the error from the categorizer agent.

The solution is to run the Exchange Shell on the server and navigate to the Scripts directory where Exchange is installed, for example C:\Program Files\Microsoft\Exchange Server\V15\Scripts. Here you will find a script called Disable-AntimalwareScanning.ps1.

& $env:ExchangeInstallPath\Scripts\Disable-AntimalwareScanning.ps1

should work. Run it, restart the Exchange Transport service, and email will start to flow.

Once the problem is patched, a companion script called Enable-AntimalwareScanning.ps1 restores the filter. I am not sure of the value of the Exchange malware filter though, since Microsoft considers that even on-premises installations should use the Microsoft 365 services for spam and malware scanning, and the on-premises protection features are not kept up to date; a third-party or open source spam and malware filter is therefore a necessity anyway, unless you go the Office 365 route.

Another reason not to run Exchange on-premises – but Microsoft still says that hybrid systems using Azure Active Directory Connect should do so in order to manage mailboxes.

Note: the maximum value for a 32-bit signed integer is 2,147,483,647. Yesterday, perhaps represented as 2,112,310,001, fitted within that, whereas today’s 2,201,010,001 did not. Dates and times are awkward for programmers.
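
A quick sketch of the arithmetic (illustrative only, not Exchange’s actual code):

const INT32_MAX = 2147483647;        // 2^31 - 1
const yesterday = 2112310001;        // 31 December 2021 as a date-based version number
const today = 2201010001;            // 1 January 2022
console.log(yesterday <= INT32_MAX); // true: fits in a signed 32-bit integer
console.log(today <= INT32_MAX);     // false: too big, hence the conversion error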

Update: Microsoft has an official fix here. Thanks to Erik in the comments for the link.

Notes from the field: virtualising an existing Windows server using UEFI and Secure Boot

Over the weekend I had the task of converting an existing Windows server running on HP RAID to a virtual machine on Hyper-V. This is a very small network with only one server, so nice and simple. I used the Sysinternals tool Disk2vhd, which converts all the drives on an existing server to a single VHD or VHDX. It’s a nice tool that uses shadow copy to make a consistent snapshot.

The idea is that you then take your VHDX and make it the drive for a new VM on the target host, in my case running Server 2019. Unfortunately my new VM would not boot. Generally there are three things that can happen in these cases. One is that the VM boots fine. Second, it tries to boot but comes up with a STOP error. Third, it just sits there with a flashing cursor and nothing happens.

At this point I should say that Microsoft does not really support this type of migration. It is considered something that might or might not work, at the user’s risk. However I have had success with it in the past and, when it works, it saves a lot of time, especially in small setups like this, because the new VM is a clone of the old server with all the shared folders, printer drivers, applications, databases and other configuration ready to go.

Disclaimer: please consider this procedure unsupported and if you follow any tips here do not blame me if it does not work! Normally the approach is to take the existing server off the network, do the P2V (Physical to Virtual), run up the new VM and check its health. If it cannot be made to work, scrap the idea, fire up the old server again, and do a migration to a new VM using other techniques, re-install applications and so on.

In my case I got a flashing cursor. What this means, I discovered after some research, is that there is no boot device. If you get a STOP error instead, you have a boot device but there is some other problem, usually with accessing the storage (see notes below about disabling RAID). At this point you will need an ISO of Windows Server xxxx (matching the OS you are troubleshooting) so you can run the troubleshooting tools. I downloaded the Microsoft Hyper-V Server 2016 ISO, which is nice and small and has the tools.

Note that if the source server uses UEFI boot you must create a generation 2 Hyper-V VM. Well, either that or go down the rabbit hole of converting the GPT partitions to MBR without wiping the data so you can use generation 1.

For troubleshooting, the basic technique is to boot into the Windows recovery tools and then the command prompt.

I am not sure if this is necessary, but the first thing I did was to run regedit, load the system hive using the Load Hive option, and set the Intel RAID controller entries to zero. What this does is tell Windows not to look for an Intel RAID for its storage. Essentially, go to the ControlSetXXX\Services key within the loaded hive – the equivalent of HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services on a running system (usually XXX is 001 but it might not be) – and find the entries, if they exist, for:

iaStor

iaStorAVC

iaStorAV

iaStorV

storAHCI

and set the Start or StartOverride parameters to 0. This even works for storAHCI, since a Start value of 0 means the driver loads at boot and 3 means load on demand.

The VM still would not boot. Flashing cursor. I am grateful for this thread in the Windows EightForums which explains how to fix EFI boot. My problem, I discovered via the diskpart utility, was that my EFI boot partition, which should show as a small, hidden, FAT32 partition, was instead showing as RAW, meaning no filesystem.

The solution, which I am copying here just in case the link fails in future, was (within the recovery command prompt for the failing VM) to do as follows – the bracketed comments are not to be typed, they are notes.

diskpart
list disk
select disk # (# = disk number of the disk with the EFI partition)
list partition (note the size of the old EFI or presumed EFI partition, which will be small and hidden)
select partition # (# = the EFI partition)
create partition efi size=# (size of the old partition; mine was 99)
format quick fs=fat32 label="SYSTEM"
assign letter="S"
exit

Then, assuming C is still the drive letter assigned to your Windows partition, type:

C:\Windows\System32\bcdboot C:\Windows

This worked perfectly for me. The VM booted, spent a while detecting devices, following which everything was straightforward.

Final comment: although it is unsupported, the Windows engineers have done an amazing job enabling Windows to boot on new hardware with relatively little fuss in most cases. You will of course end up with lots of hidden missing devices in Device Manager, which you can clean up with care, though I don’t think they do much harm.

Funbridge abandons its Windows app

It appears that Funbridge, an online bridge game, is discontinuing its app for Windows.


There is a bit of a sad story here. Funbridge used to have a Windows app that was a little messy but excellent. The company (GOTO Games) then came up with a mobile app for iOS and Android, which worked well on iOS and a bit less well on Android. The look and feel of this mobile app then migrated to Windows and Mac; I am not sure what programming framework it uses. The new-style Windows version has always been worse than the mobile versions for me – the UI is not really suitable for Windows – and I mainly play on iPad. Now it is going altogether, with users directed towards the web site.

I have always liked the Funbridge user interface on mobile and the asynchronous approach it uses, so players can take as long as they like. Everyone plays against the computer and then compares their score with other humans playing the same cards. Funbridge is adding new real-time play though, and will soon be adding audio and video online; this may relate to its retirement of the Windows application.

The abandonment of the Windows app is interesting in the context of Microsoft’s hope to boost Windows apps and the Microsoft Store in Windows 11. It looks as if GOTO Games will not be playing.

Bang & Olufsen HX wireless headphones: a delight

I’ve reviewed dozens of wireless headphones and earphones in the last six months, enough that I’m not easily impressed. The B&O HX wireless headphones are an exception; as soon as I heard them I was delighted with the sound and have been using them frequently ever since.


These are premium wireless headphones; they are over-ear but relatively compact and lightweight (285g). They are an upgrade of the previous model, Beoplay H9, with longer battery life (35 hours), upgraded ANC (Active Noise Cancellation), and four microphones in place of two. It is not top of the B&O range; for that you need to spend quite a bit more for the H95. Gamers are directed to the similarly-priced Portal model which has a few more features (Dolby Atmos, Xbox Wireless) but shorter battery life and no case; worth considering as it probably sounds equally good but I do not have a Portal to compare.


What you are paying for with the HX is a beautiful minimalist design and excellent sound quality. The sound is clean, sweet and exceptionally clear, perfect for extended listening sessions. Trying these out is a matter of “I just want to hear how they sound on [insert another favourite track]”; they convey every detail and are superbly tuneful. It is almost easier to describe what they don’t do: the bass is not distorted or exaggerated, notes are not smeared, they are never harsh. Listening to an old favourite like Kind of Blue by Miles Davis you can follow the bass lines easily, hear every nuance of the percussion. Applause on Alison Krauss and the Union Station live sounds like it does at a concert, many hands clapping. Listening to Richard Thompson’s guitar work on Acoustic Classics you get a sense of the texture of the strings plus amazing realism from the vocals. The drama on the opening bars of Beethoven’s 9th symphony, performed by the New York Philharmonic Orchestra conducted by Leonard Bernstein, is wonderfully communicated.

In other words, if you are a hi-fi enthusiast, the HX will remind you of why. I could also easily hear the difference between the same tracks on Spotify, and CD or high-res tracks on my Sony player. AptX HD is supported.


That said, there are unfortunately a few annoyances; not deal-breakers but they must be mentioned. First, the HX has touch controls for play/pause, volume, and skip track. I dislike touch controls because you get no tactile feedback and it is easy to trigger them accidentally. With the HX you play/pause by tapping the right earcup; it’s not pleasant even when it works, because you get a thump sound in your ear, and half the time you don’t hit it quite right – or you think you haven’t, tap again, then realise you did and have now tapped twice. Swipe for skip track works better, but volume is not so good: you have to use a circular motion and it is curiously difficult to get a small change; nothing seems to happen, then it jumps up or down. Or you trigger skip track by mistake. Ugh.

Luckily there are some real buttons on the HX. These cover on/off and ANC control, which toggles between on, transparent (hear external sounds) and inactive. There is also a multi-button which activates voice assistants if you use these.

Want to use your headphones? Please sign in

Second, there is an app. I tried it on an iPad. The major annoyance here is that it does not work at all unless you create an account with B&O and sign in. Why should you have to sign in to use your headphones? What are the privacy implications? That aside, I had a few issues getting the app to find the HX, but once it connected there are a few extra features. In particular, you can set listening modes, which are really custom EQ, or create your own EQ using an unusual graphic controller that sets a balance between Bright, Energetic, Warm and Relaxed. You can also update firmware and set wear detection on or off (this pauses play when the headphones are removed). Finally, and perhaps most important, you can tune the ANC, though changes don’t seem to persist if you then operate the button. You can also enable Adaptive ANC, which is meant to adjust the level of ANC automatically according to the surroundings. It didn’t seem to me to make much difference, but maybe it does if you are moving about.


The ANC is pretty good though. I have a simple test: I work in a room with a constant hum from servers, and ANC should cut out this noise. It does. Further, engaging ANC doesn’t change the sound much, other than cutting out noise, which is how it should work.

There is a 3.5mm jack connection for wired use, but with two important limitations. First, it does not work at all if the battery is flat; the headphones must be turned on. Second, the jack connection lacks the extra connection that enables use for calling, so it is for listening only. The sound quality was no better wired – perhaps slightly worse – so it is of limited value.

Despite a few annoyances I really like these headphones. I doubt I will use the app, other than for firmware updates, and rarely bother with the touch controls, which means I can enjoy the lovely sound, elegant design, good noise cancelling, and comfortable wear.

HP unhinged, refuses to honour warranty on its defective laptop

My son bought an HP laptop. He is a student and bought it for his studies. It was an HP ENVY 13-aq0002na. It was expensive – approaching £1000 as I recall – and he took care of it, buying an official HP case. He bought it directly from HP’s site, and a 3-year extended warranty in the form of an HP Care Pack was included.

After a few months, the hinge of the laptop screen started coming away from the base of the laptop at one side, causing the base to start splitting from the top part.

He was certain that it was a manufacturing defect. However he did not make a claim immediately, because he needed the laptop for his studies, and he knew he had a long warranty. (I think this was a mistake, but I understand his position.) The December vacation approached and he had time to have the laptop sorted, so he raised the claim.

HP closed the case. He had a confusing conversation with HP, who offered to put him through to sales for paid service; he agreed (another mistake) and ended up talking to someone who quoted hundreds of pounds for the repair, which he could not afford.

He attempted to follow up but had no success. He did engage with HP Support on Twitter, who seemed helpful at first but ended by insisting the damage was accidental or customer-induced – not covered by the Care Pack – and that nothing had been reported concerning a similar defect.

At no point did HP even offer to look at the laptop.

My son replied:

“The problem is absolutely not accidental damage or customer-induced – I’ve never dropped the laptop, and have always taken very good care of it (there are no marks on the chassis, for example). Whenever I’ve moved it around, it’s been in an HP case designed for the laptop. The problem emerged during normal use within eight months of me purchasing the laptop, and others have reported the same issue – for example, see this post on the HP forum:

https://h30434.www3.hp.com/t5/Notebook-Hardware-and-Upgrade-Questions/HP-Envy-13-2019-Hinge-Issues/td-p/7590894

The problem is the sole result of HP’s hinge design being inadequate. It’s therefore a manufacturing fault and should be covered by the care pack.”

Follow that link and you find the same model, with the same problem at an earlier stage – and 10 people have clicked to say they “have the same question”. So HP’s claim that “nothing has been reported” concerning a similar defect is false.

See also here for another case https://h30434.www3.hp.com/t5/Notebook-Hardware-and-Upgrade-Questions/HP-Envy-13-hinge-issue/m-p/7911883

“I bought my HP Envy in 2017 for university, unfortunately that means it is now out of warranty. One of the hinges is loose and pops out of place every time I open the laptop”

and here for another https://h30434.www3.hp.com/t5/Notebook-Hardware-and-Upgrade-Questions/Hinge-Split-issue-HP-Envy-13-2019/m-p/7991805

“I purchased HP ENVY 13-aq0011ms Laptop in December 2019. After using about 1 year, (I treated it very gently all the time), the issue started: when the screen of the laptop is being moved to open/close, the chassis of the bottom case pops and split-opens”

and here https://h30434.www3.hp.com/t5/Business-Notebooks/Broken-hinge-in-HP-ENVY-13-Laptop/td-p/6749446

“Just after 13 months of buying my new HP Envy and spending 1400£ for it, the right hinge broke. There were no falls/accidents”

My son did attempt to follow up with customer service as suggested but got nowhere.

He knows he could take it further. He could get an independent inspection. He could raise a small claim – subject to finding the right entity to pursue, as that isn’t necessarily easy. But he found the whole process exhausting. He raised a claim, which was rejected; he escalated it, and it was rejected again. He decided life was too short. He is still using the broken laptop for his studies, and when he starts work and has a bit of money he plans to buy a Mac.

Personally I’ve got a lot of respect for HP. I have an HP laptop myself (x360) which has lasted for years. I was happy for my son to buy an HP laptop. I did not believe the company would work so hard to avoid its warranty responsibilities. I guess those charged with minimising the cost to the company of warranty claims have done a good job. There is a hidden cost though. Why would he ever buy or recommend HP again? Why would I?

Note: HP Inc is the vendor who supplies PCs and printers, like my son’s laptop. HP Enterprise sells servers, storage and networking products and is a separate company. None of the above has any relevance to HP Enterprise.

PS I posted a link to this blog on the HP support forum, where others are complaining about this issue. I was immediately banned.


Odd behaviour from Azure App Service: production version sometimes serves app from staging slot

I am developing an application which is deployed to Azure App Service. It runs on .NET 5.0 on Linux. I have set up a simple DevOps process so that committing changes to GitHub runs an Azure DevOps pipeline that deploys the application to a staging slot on Azure App Service for Linux. Then I can use Swap in the Azure portal to update the production slot. Swap simply exchanges the content of the staging slot with that in the production slot, so there is a route back in the event of disaster. Swap also restarts the application and forces users to log back in.

Yesterday I fixed a bug, deployed the change to the staging slot, and performed a swap. I logged back into the application, but the bug was still there, though intermittent. That was the bit I could not figure out: what was causing the code to behave differently on different requests? I became suspicious that it was sometimes serving the old version, and proved it by refreshing a page that demonstrated the bug. My page has an application version in the footer, and I could see that when the bug appeared, the version was older.


Well this is odd. In the App Service Deployment slot settings I have traffic set to 100% for the production slot.

In general I tend to assume a bug in my code or an error in my configuration settings is more likely than an issue with the Azure App Service. This does look odd though: why, if traffic is going 100% to the production slot, does the application sometimes serve the old version?

The pragmatic fix was easy. A second deployment to the staging slot means both slots now have code that works, so the bug no longer appears; but I have kept the version numbers different, and I can see that the underlying issue is actually still occurring.

I will update this post when I have more information, just in case anyone else hits this issue.

Some of my favourite earbuds and headphones of 2020: part 1

I love technology that endures, and one of my most treasured gadgets is a pair of Sennheiser HD600 headphones. Mine are not quite that old, but this is a model that was first released in 1997 and remains on sale; they sound fantastic, especially with a high quality headphone amplifier, and I use them as a kind of reference for other headphones. The ear cushions and headband perished on mine and I bought the official spares at an unreasonable price because I so much wanted to keep them going.

Still on sale today: Sennheiser HD600

They are as good as ever; but it happens that this year I have reviewed a large number of headsets and earbuds, some of which I also like very much. First though, a few observations.

Passive headphones like my HD600s are in decline. I call them passive because they have no built-in electronics other than the speaker drivers. They can only work when wired to the output from an amplifier. Wires can be a nuisance; and as wireless options become cheaper, better and more abundant, wired is in decline (though it will never go away completely).

Once you go wireless, something else happens. The signal the headset receives over Bluetooth or wi-fi is a digital one. Packed into the receiving electronics is not just a pair of speaker drivers, but also a DAC (Digital to Analogue Converter) and an amplifier. Further, the amplifier can be optimized for the drivers, and the output can be equalized to compensate for any deficiencies in the drivers or the acoustics formed by their case and fit. This is an active configuration and has obvious advantages, getting the best possible performance from the drivers and also enabling smart features like active noise cancellation (if you add microphones into the mix). It should not be surprising that the better wireless devices sound very good. I have also noticed that headsets which offer both wired and wireless modes often sound better wireless, despite the theoretical advantages of a wired connection.

Another significant development is that off-the-shelf chipsets for wireless audio have got better. The leader in this is Qualcomm, whose Bluetooth audio chipsets are packed with advanced features. SoCs like the QCC 3506 include adaptive active noise cancellation, 24-bit 96kHz audio, low power consumption, voice assistant support, fast pairing, built-in DSP (Digital Signal Processing), and of course programmability for custom features.

It is because of this that there are now numerous budget true wireless earbuds and headsets from brands you might not have heard of, with these exact features.

The importance of the fit

A few years ago I attended a press event at CES in Las Vegas. Shure was exhibiting there and I got to try a pair of its premium IEMs (in-ear monitors). Until then I had been convinced that earbuds could never be as good as headphones; but what I heard that day changed my mind. The audio amazed me, sounding full, rich, spacious, detailed and realistic. I thought about it for a moment and realised I should not be surprised. IEMs are designed to be fitted directly to your ears; why should they not be of the highest quality? The best ones, like those at CES, have multiple drivers.

There is something else though. Getting the best sound from IEMs requires getting an excellent fit, since they are generally designed to seal into your ears; the ones with expandable foam ear seals are perhaps the best for achieving this. If they are badly fitted then you will hear sound that is tinny or bass-light.

This means that it is worth taking extreme care with the selection of the right ear seals or ear sleeves, as they are sometimes called. Most earbuds come with a few sizes to accommodate different ear sizes. If the sound changes dramatically when you press the earbuds in slightly, they are not fitted right.

Something else regarding Shure that I did not realise until recently is that all its earbuds are designed to have a cable or (in the case of the wireless range) a clip that fits behind your ears. The reason is that it is easier to get a good fit when the cable is not constantly pulling the IEMs out of position. Check out this video for details.

Earbuds that do not fall out

It is a common problem. You fit your earbuds and then go for a run or to the gym. With all that movement, one or both of the earbuds falls out. This can be a serious problem with wireless models: if you lose an earbud in the long grass or on a busy street, you might never find it again, or someone might trample on it.

There are a few solutions. The Shure approach makes it unlikely that the device will fall out. Another idea is to have a neckband that connects the two earbuds and hangs at the back of your head. I quite like this approach, since manufacturers can sneak some batteries into the neckband that give a longer play time than is typical with true wireless. It does not stop an earbud falling out, but it does mean you are less likely to lose it. Some people I’ve chatted to, though, feel it is the worst of both worlds: the battery and pairing issues of Bluetooth with the inconvenience or ugliness of wires. Up to you.

Airloop Snap 3-in-one

One brand, Airloop, claims to solve this by offering 3-in-one earbuds that can be used true wireless, or with a neckband, or with a lightweight “sports band” that has no batteries but does give a bit of security. The idea was good enough to raise hundreds of thousands in crowdfunding on Indiegogo and Kickstarter; they are nice devices but did not quite make my favourites list, as I found them a little bulky for comfort, the firmware a little buggy, and the sound not quite what it should be at the price.

Don’t fret about the codec

One last remark before we get to my list of favourites. You will find a few variations when it comes to wireless standards and codecs. Headsets used for gaming generally use USB dongles for low latency. Other headsets generally use Bluetooth and support various codecs. The minimum today is SBC (low-complexity subband codec), which is part of A2DP (Advanced Audio Distribution Profile). Apple devices support AAC. Qualcomm’s aptX has advantages over SBC in compression, higher bitrate and lower latency. Sony has LDAC, which supports higher resolutions.

In my experience though, the codec support is relatively unimportant to the sound quality. Yes, the better codecs like aptX or LDAC are superior to SBC; but compared to other factors like the number and quality of the drivers, the ease of getting a good fit (see above), and the quality of the electronics in other respects, the codec is less important. “As you may have noticed, it’s difficult to tell the difference between SBC and aptX by ear”, observes this article.

You can bypass these concerns by going wired; but when you consider the benefits of active amplification as well as the convenience of wireless, it is not a simple decision.

See part two (coming soon) for some of the headsets I have enjoyed in 2020.