Microsoft’s “new commerce experience” for 365 services: not just price increases

Microsoft stated in August that it is increasing prices for Microsoft 365 (formerly known as Office 365) by around 20% from March 1, 2022. The company argues that prices have not changed substantially for ten years – perhaps contentious, since it has introduced more expensive premium plans – and that “this updated pricing reflects the increased value we have delivered to our customers over the past 10 years.”

There has been inflation of around 2% per annum since 2011 and new features have been added, so a price increase is not unreasonable. However, there are other changes in the pipeline that are more difficult: the New Commerce Experience (NCE), which impacts both customers and resellers. Finding out what has really changed is not easy, but if you dig through the fluff about “agility” and “alignment” and “streamlining”, there are some standout changes:

  • Customers who want the flexibility to reduce seat count will pay 20% more. Until now, it has been possible to reduce seat count without penalty, even though Microsoft presents its pricing as for an “annual term.” With NCE, customers can either pay month by month at premium prices, with the ability to reduce seat count on a month’s notice, or pay less but commit to seats for one or three years. During that period, seat count can be increased but not decreased.

    Reasonable? The problem perhaps is that it means giving up one of the benefits of cloud, which is elasticity. Or at least, you can still have elasticity but it is going to cost more. We have also seen this with reserved instance pricing on AWS, Azure and Google Cloud Platform: the price comes down substantially if you commit to paying for one year or more.

  • There will be no cancellation allowed after the first 72 hours of a term, as explained here. This may impact partners more than customers. Scenario: partner sells 1,000 seats of Microsoft 365 for a 3-year term to some company. Three months into the term, the company goes bust. Partners are saying that this leaves them on the hook for the remaining cost. Here, for example, Australian distributor Dicker Data states that “If a customer (who has the agreement with Microsoft) no longer want or can finish the payment of the contract (bankruptcy for example), the partner will incur the costs of paying the remainder of the contract to Microsoft.”

One hopes that such matters are negotiable, but it is a significant risk especially in these unpredictable times of pandemic and climate change.

Converting a scanned image to text in Office 365

I was emailed an attachment scanned from a magazine; it was a nuisance and I wanted to convert it to text. There are of course a million ways to do this, and I recall that every multifunction printer used to come with an OCR facility, but what is the easiest way now? For a while I have used Microsoft OneNote for this: you just paste in an image, right-click, and there is a Copy Text from Picture option.

This normally works OK, but not this time. The results were not completely useless but included lots of errors: words missing and words wrongly recognised or scrambled. I am not sure, for example, how the word “score” got recognised as “scMe”.

So I looked for a better solution online, trying to avoid ad-laden free OCR sites of unknown quality. I found Convertio which has a straightforward introductory service with no registration or ads for the first 10 pages. It did a much better job with only 3 or 4 errors, text converted correctly to two columns in a Word document, and a table converted to a Word table. The main issue was that the text was tiny – 4pt – but that was reasonably easy to fix up. It seems that it has a much better recognition engine than OneNote.

I’ll be inclined to use Convertio again, but it also seems that Microsoft has fallen behind in this little corner of Office 365. Perhaps it should do something based on its Cognitive Services.
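
Out of curiosity, here is a minimal sketch of what OCR via Cognitive Services might look like, calling the Computer Vision Read API (v3.2) from TypeScript. The endpoint and key are placeholders, and this is an illustration rather than anything Microsoft ships in Office:

const endpoint = "https://YOUR-RESOURCE.cognitiveservices.azure.com";
const key = "YOUR-KEY";

async function ocr(imageBytes: ArrayBuffer): Promise<string> {
  // Submit the image; the service replies 202 with an Operation-Location header
  const submit = await fetch(`${endpoint}/vision/v3.2/read/analyze`, {
    method: "POST",
    headers: {
      "Ocp-Apim-Subscription-Key": key,
      "Content-Type": "application/octet-stream",
    },
    body: imageBytes,
  });
  const operationUrl = submit.headers.get("Operation-Location")!;

  // The Read operation is asynchronous, so poll until it completes
  while (true) {
    await new Promise((resolve) => setTimeout(resolve, 1000));
    const poll = await fetch(operationUrl, {
      headers: { "Ocp-Apim-Subscription-Key": key },
    });
    const result = await poll.json();
    if (result.status === "succeeded") {
      // Concatenate the recognised lines from each page
      return result.analyzeResult.readResults
        .flatMap((page: any) => page.lines.map((line: any) => line.text))
        .join("\n");
    }
    if (result.status === "failed") throw new Error("OCR failed");
  }
}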

All change for the New Year

I have been working happily at The Register four days a week since mid-2019, but it is time for a change. Incidentally, I very much enjoyed working for the Reg: it was consistently interesting work, I was given a lot of freedom to write what I wanted, I was well treated, and I recommend it highly as a great place to work. Its approach is to find material that is interesting for a technical readership without any pressure to please vendors, and I found that to be 100% true.

Why change? The main reason is that near-full-time journalism is, rightly, a demanding role, and I found it taking most of my energy; and I have other things I want to do. I will still be writing, but once again on a freelance basis, and not at the two or three posts per day that I have been doing. I will also be indulging my enthusiasm for bridge, hopefully improving the online bridge playing and teaching platform I started coding during lockdown, as well as helping the English Bridge Union with its technology. I expect to be more active here on itwriting.com and have plans to experiment with a redesign, perhaps using Next.js and headless WordPress.

Visual Studio, TypeScript, WebPack and ASP.NET Core: somewhat awkward

It is always good to learn a new language, so I took advantage of the holiday season to look more closely at TypeScript. At least, that was the original intent. So far I have spent longer on configuring stuff to work than I have on actual coding. I think of it as time invested rather than wasted.

As long-term readers will know, I am working on a bridge (the card game) website which has been used successfully over the lockdown period. I put this together quickly in the first half of 2020, reusing an unfinished Windows project and taking advantage of everything I could get without having to code it myself, like the ASP.NET identity system. So it is C#, ASP.NET Core and SignalR, runs on Linux on Azure App Service, and is mostly coded in Visual Studio, with a few detours into Visual Studio Code.

Of course there is a ton of JavaScript involved, and since the user interface for a bridge-playing game is fairly custom, I did not use a JavaScript framework, unless you count jQuery and Bootstrap. I wrote a separate JavaScript file for each page (possibly a mistake). I also started using the AWS Chime SDK for JavaScript, which means referencing a huge 680K JavaScript file.

I therefore had several goals in mind. One was to code in TypeScript rather than JavaScript, in order to take advantage of its features and catch more mistakes at compile time. Second, I wanted to optimize the JavaScript better, with automatic minification. Third, I wanted to align my project more closely with the JavaScript ecosystem. The AWS SDK, for example, is written in TypeScript using modules, but I have been using some demo code provided to compile a single JavaScript file. Maybe I can get better optimization by coding my own project in TypeScript and importing only the modules I need.
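
To illustrate that last point, a sketch of importing just a few pieces of the Chime SDK as modules instead of referencing the whole prebuilt bundle. ConsoleLogger, LogLevel and DefaultDeviceController are real exports of the amazon-chime-sdk-js package, though which pieces a page actually needs will vary:

import { ConsoleLogger, DefaultDeviceController, LogLevel } from "amazon-chime-sdk-js";

// With imports like this, a bundler can tree-shake the parts of the SDK not used
const logger = new ConsoleLogger("ChimeLogs", LogLevel.WARN);
const deviceController = new DefaultDeviceController(logger);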

Visual Studio is not well aligned with the modern JavaScript ecosystem, as you can tell if you read this article on bundling and minification of static assets. “ASP.NET Core doesn’t provide a native bundling and minification solution,” it says, and refers developers to the WebOptimizer project or other tools such as Gulp and Webpack.

I did want to start with TypeScript though, and to begin with this looked easy. All you have to do is to add the TypeScript NuGet package, do some minimal configuration by creating and editing tsconfig.json, and you can write TypeScript and have it transpiled to JavaScript in your preferred target directory whenever the project is built. I moved a bunch of my JavaScript files to a directory of TypeScript files, renamed them from .js to .ts, and set to work making the TypeScript compiler happy.
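
For reference, a minimal tsconfig.json along those lines – the directory names are examples rather than anything the tooling mandates:

{
  "compilerOptions": {
    "target": "es2017",
    "module": "esnext",
    "outDir": "wwwroot/js", // transpiled output lands here on build
    "sourceMap": true,
    "strict": false // off to begin with, as discussed below
  },
  "include": ["ts/**/*"]
}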

When you do this you discover that the TypeScript compiler considers all .ts files that are not modules to be in the same scope. So if you have two JavaScript files which both contain functions called DoSomething(), the compiler throws a duplicate function error, even if you will never reference them both from the same web page. You can fix this by making them modules – it feels like TypeScript is designed on this basis – but now you have the opposite problem: if JavaScript file A references functions or variables in JavaScript file B, they have to be exported and imported. A good thing in principle, but now you have import statements in the code. The TypeScript compiler does not transpile these for compatibility with browsers that do not support import, and in addition, you now have to use type="module" on script references in HTML.

I also ran into issues with the libraries I use, primarily SignalR and the AWS Chime SDK. You can either npm install these and import them in the proper way, if the developers have provided TypeScript definition files (with a .d.ts extension), or find a type library via DefinitelyTyped, which provides only the types; you still need to reference the library separately. There is an obvious potential version issue if you go the DefinitelyTyped route.
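
A tiny example of the refactoring this forces, with invented file names. A function moves from global scope into a module, every consumer has to import it, and the import path refers to the transpiled .js file, since the TypeScript compiler does not rewrite paths:

// utils.ts
export function showScore(score: number): void {
  document.getElementById("score")!.textContent = String(score);
}

// table.ts
import { showScore } from "./utils.js";
showScore(420);

The page then references the transpiled file with <script type="module" src="/js/table.js"></script>.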

In other words, what starts out as a simple idea of writing TypeScript instead of JavaScript soon becomes a complete refactor of the code to be modular and use imports and exports. Again, this is not a bad thing, but it is more work, and not quite the incremental transition I had in mind. I had over 1,000 errors reported by the TypeScript compiler but gradually whittled them down (and this is with TypeScript’s strict mode off, intended as a temporary expedient).

So I did all that, but had a problem with these import statements when it came to using them in the browser. It seemed that WebPack could fix this for me; plus, I could configure it to do tree-shaking to reduce code size and to use a minifier (it uses terser by default). There is a slight issue though: modern JavaScript tools like WebPack and terser are geared towards bundling all your JavaScript into a single file, and/or having a single-page application, which is not how my bridge site works. Still, it looked like it could be configured to work for me, so I started down the track, using a post-build step in Visual Studio to run WebPack.

I am sure this is obvious to people familiar with WebPack, but I still had problems getting my HTML pages to talk to the JavaScript. By default terser will mangle and shorten all the function names, but that is easily configured. Even so, the HTML still could not call any JavaScript functions: function not defined. Eventually I discovered that you have to configure WebPack to output a library. So if you have an HTML button that calls a JavaScript function in its onclick event handler, there are several things you need to do. First, export the function in the JavaScript (or TypeScript) code. Second, add a library name and preferably type ‘umd’ in webpack.config.js. Third, add the library name as a prefix to the function you are calling from HTML, for example mylibrary.myfunction.
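
Putting that together, a sketch of webpack.config.js for a multi-page site along these lines – the entry names and paths are invented, while mylibrary and myfunction echo the example above:

const path = require("path");

module.exports = {
  mode: "production",
  // one entry per page rather than a single bundle
  entry: {
    home: "./ts/home.ts",
    table: "./ts/table.ts",
  },
  module: {
    rules: [{ test: /\.ts$/, use: "ts-loader", exclude: /node_modules/ }],
  },
  resolve: { extensions: [".ts", ".js"] },
  output: {
    path: path.resolve(__dirname, "wwwroot/js"),
    filename: "[name].bundle.js",
    // expose the entry's exports to the HTML as mylibrary.*
    library: { name: "mylibrary", type: "umd" },
  },
};

The HTML side then looks like <button onclick="mylibrary.myfunction()">Bid</button>.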

I also had issues with code splitting – essential to avoid bloated JavaScript bundles. This is done in WebPack by configuring SplitChunks. If set to ‘all’, my library exports stopped working. After much trial and error, I found a fix. First, set chunkLoading to ‘jsonp’. Second, if your library variable is undefined at runtime, there is a problem with one of the bundled JavaScript files. Unfortunately this was not reported as an error in the browser console – that is, the undefined library variable was reported, but not the reason for it. I tracked it down to a call to document.readyState or possibly document.addEventListener; using jQuery instead fixed it.
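
The relevant additions to the config sketched above would look something like this:

module.exports = {
  // ...as above, plus:
  optimization: {
    splitChunks: { chunks: "all" },
  },
  output: {
    // ...as above, plus:
    chunkLoading: "jsonp",
  },
};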

Another tip: do not call any JavaScript code directly from cshtml, other than via event handlers. It might try to run before the JavaScript is loaded. I found it easiest to put initialization code in a function and call it from JavaScript. You can put the function declaration into index.d.ts to keep TypeScript happy, since it is external.
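
A sketch of that pattern, with pageInit and @Model.TableId as invented names. The function is defined inline in the .cshtml page (where Razor can inject server-side values), declared in index.d.ts so the TypeScript compiler accepts the call, and invoked from the page’s bundle once it has loaded.

In the .cshtml page, before the bundle’s script tag:

<script>
  function pageInit() {
    // values injected by Razor for the bundle to pick up
    window.tableId = "@Model.TableId";
  }
</script>

In index.d.ts:

declare function pageInit(): void;

And in the page’s TypeScript module:

pageInit();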

Is it worth it? Watch this space. It has pushed me into refactoring which is improving the structure of my code – but I have also added complexity, with a build process that compiles TypeScript to JavaScript (using ts-loader), then merges and splits files with WebPack, minifying and mangling with terser along the way. Yes, I have encountered unexpected behaviour, partly thanks to my inexperience, but the interactions between jQuery, WebPack and library exports, for example, are quite complex. I have spent time and energy wrestling with WebPack instead of coding my application. There is a lot to be said for my old approach, where you code in JavaScript, it runs as JavaScript, and it is easier to trace what is going on.

Still, it is working and I have achieved some of my goals – though the AWS Chime SDK file is still huge, just 30K smaller than before, which is disappointing. Perhaps there is something I have missed. I will be coding in TypeScript from now on and look forward to further refactoring as I get to know the language.

Update: I have abandoned this experiment. There were niggling problems with the WebPack bundles and I came to the conclusion that it is unsuitable for a multi-page application. A shame, as I wanted to use WebPack; but for the time being I am just using TypeScript with terser. This means I am using native ES modules in the browser, and I intend to write up the experience soon.

Exchange emails stuck in queue because “message deferred by categorizer agent” – Happy New Year, admins!

The first day of a new year is a great moment to relax and prepare for what is ahead – but spare a thought for Microsoft Exchange administrators who may have woken up to seized-up installations of their on-premises email servers. I was among those affected, though only on my tiny system. Messages were stuck in the submission queue, suspiciously since midnight or thereabouts (somehow one message sneaked through, timed 12.14am), and the last error reported by the queue viewer was “Messages deferred by categorizer agent.”

As usual I went down a number of rabbit holes. Restart the Exchange Transport service. Reboot the server. Delete the first message not to be delivered in case it was corrupt and somehow clogging up the queue. Check for certificate issues.

It was none of these. The guilty party turned up in the event viewer: the FIPS-FS Microsoft Scan Engine failed to load, with the error “can’t convert ‘2201010001’ to long”.

The impact was that the malware filter could not check the message, hence the error from the categorizer agent.

The solution is to run the Exchange Shell on the server and navigate to the Scripts directory where Exchange is installed, for example C:\Program Files\Microsoft\Exchange Server\V15\Scripts. Here you will find a script called Disable-AntimalwareScanning.ps1.

& $env:ExchangeInstallPath\Scripts\Disable-AntimalwareScanning.ps1

should work. Run it, restart the Exchange Transport service, and email will start to flow.

Once the problem is patched, there is a companion script called Enable-AntimalwareScanning.ps1 which restores it. I am not sure of the value of the Exchange malware filter, though, since Microsoft considers that even on-premises installations should use the Microsoft 365 services for spam and malware scanning, and the on-premises protection features are not kept up to date – meaning that a third-party or open source spam and malware filter is a necessity anyway, unless you go the Office 365 route.

Another reason not to run Exchange on-premises – though Microsoft still says that hybrid systems using Azure Active Directory Connect should keep an on-premises Exchange server in order to manage mailboxes.

Note: the maximum value for a 32-bit signed integer is 2,147,483,647. The scan engine version number appears to encode the date as yymmdd plus a counter, so yesterday, represented as 2,112,310,001, fitted within that, whereas today, 2,201,010,001, did not. Dates and times are awkward for programmers.
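
A couple of lines of TypeScript make the arithmetic plain:

const INT32_MAX = 2 ** 31 - 1; // 2,147,483,647

for (const version of ["2112310001", "2201010001"]) {
  const value = Number(version);
  console.log(`${version} ${value <= INT32_MAX ? "fits in" : "overflows"} a signed 32-bit integer`);
}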

Update: Microsoft has an official fix here. Thanks to Erik in the comments for the link.

Notes from the field: virtualising an existing Windows server using UEFI and Secure Boot

Over the weekend I had the task of converting an existing Windows server running on HP RAID hardware to a virtual machine on Hyper-V. This is a very small network with only one server, so nice and simple. I used the Sysinternals tool Disk2vhd, which converts the drives on an existing server to a single VHD or VHDX. It’s a nice tool that uses shadow copy to make a consistent snapshot.

The idea is that you then take your VHDX and make it the drive for a new VM on the target host, in my case running Server 2019. Unfortunately my new VM would not boot. Generally there are three things that can happen in these cases. One is that the VM boots fine. Second, it tries to boot but comes up with a STOP error. Third, it just sits there with a flashing cursor and nothing happens.

At this point I should say that Microsoft does not really support this type of migration. It is considered something that might or might not work, at the user’s risk. However I have had success with it in the past, and when it works it saves a lot of time, especially in small setups like this, because the new VM is a clone of the old server with all the shared folders, printer drivers, applications, databases and other configuration ready to go.

Disclaimer: please consider this procedure unsupported and if you follow any tips here do not blame me if it does not work! Normally the approach is to take the existing server off the network, do the P2V (Physical to Virtual), run up the new VM and check its health. If it cannot be made to work, scrap the idea, fire up the old server again, and do a migration to a new VM using other techniques, re-install applications and so on.

In my case I got a flashing cursor. What this means, I discovered after some research, is that there is no boot device. If you get a STOP error instead, you have a boot device but there is some other problem, usually with accessing the storage (see notes below about disabling RAID). At this point you will need an ISO of Windows Server xxxx (matching the OS you are troubleshooting) so you can run the troubleshooting tools. I downloaded Hyper-V Server 2016, which is nice and small and has the tools.

Note that if the source server uses UEFI boot you must create a generation 2 Hyper-V VM. Well, either that or go down the rabbit hole of converting the GPT partitions to MBR without wiping the data so you can use generation 1.

For troubleshooting, the basic technique is to boot into the Windows recovery tools and then the command prompt.

I am not sure if this is necessary, but the first thing I did was to run regedit, load the system hive using the Load Hive option, and set the Intel RAID controller entries to zero. This tells Windows not to look for an Intel RAID for its storage. Essentially, go to ControlSetXXX\Services within the loaded hive (usually XXX is 001 but it might not be) and find the entries, if they exist, for:

iaStor

iaStorAVC

iaStorAV

iaStorV

storAHCI

and set the Start or StartOverride parameters to 0. This even works for storAHCI since 0 is on and 3 is off.

The VM still would not boot: flashing cursor. I am grateful for this thread on EightForums, which explains how to fix EFI boot. My problem, I discovered via the diskpart utility, was that my EFI boot partition, which should show as a small, hidden FAT32 partition, was instead showing as RAW, meaning no filesystem.

The solution, which I am copying here in case the link fails in future, was to do as follows within the recovery command prompt for the failing VM – the bracketed comments are notes, not to be typed.

diskpart
list disk
select disk # (# = disk number of the disk with the EFI partition)
list partition (note the size of the old or presumed EFI partition, which will be small and hidden)
select partition # (# = the EFI partition)
delete partition override (the old RAW partition may need removing first so its space can be reused; skip this if create succeeds without it)
create partition efi size=# (# = size of old partition, mine was 99)
format quick fs=fat32 label="SYSTEM"
assign letter="S"
exit

Assuming C is still the drive letter assigned to your Windows partition, type:

C:\Windows\System32\bcdboot C:\Windows

This worked perfectly for me. The VM booted, spent a while detecting devices, following which everything was straightforward.

Final comment: although it is unsupported, the Windows engineers have done an amazing job enabling Windows to boot on new hardware with relatively little fuss in most cases. You will of course end up with lots of hidden missing devices in Device Manager; these can be cleaned up with care, though I don’t think they do much harm.

Funbridge abandons its Windows app

It appears that Funbridge, an online bridge game, is discontinuing its app for Windows.

There is a bit of a sad story here. Funbridge used to have a Windows app that was a little messy but excellent. The company (GOTO Games) then came up with a mobile app for iOS and Android, which worked well on iOS and a bit less well on Android. This mobile app then migrated, in terms of look and feel, to Windows and Mac; I am not sure what programming framework it uses. The new-style Windows version has always been worse than the mobile versions for me – the UI is not really suitable for Windows – and I mainly play on iPad. Now it is going altogether, with users directed towards the web site.

I have always liked the Funbridge user interface on mobile and the asynchronous approach it uses, so players can take as long as they like. Everyone plays against the computer and then compares their score with other humans playing the same cards. Funbridge is adding new real-time play though, and will soon add audio and video online; this may relate to its retirement of the Windows application.

The abandonment of the Windows app is interesting in the context of Microsoft’s hope to boost Windows apps and the Microsoft Store in Windows 11. It looks as if GOTO Games will not be playing.

Bang & Olufsen HX wireless headphones: a delight

I’ve reviewed dozens of wireless headphones and earphones in the last six months, enough that I’m not easily impressed. The B&O HX wireless headphones are an exception; as soon as I heard them I was delighted with the sound and have been using them frequently ever since.

These are premium wireless headphones; they are over-ear but relatively compact and lightweight (285g). They are an upgrade of the previous model, Beoplay H9, with longer battery life (35 hours), upgraded ANC (Active Noise Cancellation), and four microphones in place of two. It is not top of the B&O range; for that you need to spend quite a bit more for the H95. Gamers are directed to the similarly-priced Portal model which has a few more features (Dolby Atmos, Xbox Wireless) but shorter battery life and no case; worth considering as it probably sounds equally good but I do not have a Portal to compare.

What you are paying for with the HX is a beautiful minimalist design and excellent sound quality. The sound is clean, sweet and exceptionally clear, perfect for extended listening sessions. Trying these out is a matter of “I just want to hear how they sound on [insert another favourite track]”; they convey every detail and are superbly tuneful. It is almost easier to describe what they don’t do: the bass is not distorted or exaggerated, notes are not smeared, they are never harsh. Listening to an old favourite like Kind of Blue by Miles Davis, you can follow the bass lines easily and hear every nuance of the percussion. Applause on Alison Krauss and Union Station’s live album sounds like it does at a concert: many hands clapping. Listening to Richard Thompson’s guitar work on Acoustic Classics, you get a sense of the texture of the strings, plus amazing realism from the vocals. The drama of the opening bars of Beethoven’s 9th symphony, performed by the New York Philharmonic Orchestra conducted by Leonard Bernstein, is wonderfully communicated.

In other words, if you are a hi-fi enthusiast, the HX will remind you of why. I could also easily hear the difference between the same tracks on Spotify, and CD or high-res tracks on my Sony player. AptX HD is supported.

That said, there are unfortunately a few annoyances; not deal-breakers, but they must be mentioned. First, the HX has touch controls for play/pause, volume, and skip track. I dislike touch controls because you get no tactile feedback and it is easy to trigger them accidentally. With the HX you play/pause by tapping the right earcup; it is not pleasant even when it works, because you get a thump sound in your ear, and half the time you don’t hit it quite right – or you think you haven’t, tap again, then realise you did and have now tapped twice. Swipe for skip track works better, but volume is not so good: you have to use a circular motion and it is curiously difficult to get a small change; nothing seems to happen, then it jumps up or down. Or you trigger skip track by mistake. Ugh.

Luckily there are some real buttons on the HX. These cover on/off, and ANC control, which toggles between on, transparent (hear external sounds) and inactive. There is also a multi-button which activates voice assistants if you use these.

Want to use your headphones? Please sign in

Second, there is an app. I tried it on an iPad. The major annoyance here is that it does not work at all unless you create an account with B&O and sign in. Why should you have to sign in to use your headphones? What are the privacy implications? That aside, I had a few issues getting the app to find the HX, but once I succeeded there are a few extra features. In particular, you can set listening modes, which are really custom EQ, or create your own EQ using an unusual graphic controller that sets a balance between Bright, Energetic, Warm and Relaxed. You can also update firmware and set wear detection on or off (this pauses play when the headphones are removed). Finally, and perhaps most important, you can tune the ANC, though changes do not seem to persist if you then operate the button. You can also enable Adaptive ANC, which is meant to adjust the level of ANC automatically according to the surroundings. It did not seem to me to make much difference, but maybe it does if you are moving about.

The ANC is pretty good though. I have a simple test: I work in a room with a constant hum from servers, and ANC should cut out this noise. It does. Further, engaging ANC does not change the sound much, other than cutting out noise, which is how it should work.

There is a 3.5mm jack connection for wired use, but with two important limitations. First, it does not work at all if the battery is flat; the headphones must be turned on. Second, the jack connection lacks the extra connection that enables use for calls, so it is for listening only. The sound quality was no better wired – perhaps slightly worse – so it is of limited value.

Despite a few annoyances I really like these headphones. I doubt I will use the app, other than for firmware updates, and rarely bother with the touch controls, which means I can enjoy the lovely sound, elegant design, good noise cancelling, and comfortable wear.

HP unhinged, refuses to honour warranty on its defective laptop

My son bought an HP laptop. He is a student and bought it for his studies. It was an HP ENVY 13-aq0002na. It was expensive – approaching £1,000 as I recall – and he took care of it, buying an official HP case. He bought it directly from HP’s site, and a 3-year extended warranty, in the form of an HP Care Pack, was included.

After a few months, the hinge of the laptop screen started coming away from the base of the laptop at one side, causing the base to start splitting from the top part.

He was certain that it was a manufacturing defect. However he did not make a claim immediately, because he needed the laptop for his studies and he knew he had a long warranty. (I think this was a mistake, but I understand his position.) The December vacation approached and he had time to have the laptop sorted, so he raised the claim.

HP closed the case. He had a confusing conversation with HP, who offered to put him through to sales for paid service; he agreed (another mistake) and ended up talking to someone who quoted hundreds of pounds for the repair, which he could not afford.

He attempted to follow up but had no success. He did engage with HP Support on Twitter, who seemed helpful at first but ended by dismissing the problem as accidental or customer-induced damage, claiming that nothing had been reported concerning a similar defect. At no point did HP even offer to look at the laptop.

My son replied:

“The problem is absolutely not accidental damage or customer-induced – I’ve never dropped the laptop, and have always taken very good care of it (there are no marks on the chassis, for example). Whenever I’ve moved it around, it’s been in an HP case designed for the laptop. The problem emerged during normal use within eight months of me purchasing the laptop, and others have reported the same issue – for example, see this post on the HP forum:

https://h30434.www3.hp.com/t5/Notebook-Hardware-and-Upgrade-Questions/HP-Envy-13-2019-Hinge-Issues/td-p/7590894

The problem is the sole result of HP’s hinge design being inadequate. It’s therefore a manufacturing fault and should be covered by the care pack.”

Follow that link and you find the same model, with the same problem at an earlier stage – and 10 people have clicked to say they “have the same question”. So HP’s claim that “nothing has been reported” concerning a similar defect is false.

See also here for another case https://h30434.www3.hp.com/t5/Notebook-Hardware-and-Upgrade-Questions/HP-Envy-13-hinge-issue/m-p/7911883

“I bought my HP Envy in 2017 for university, unfortunately that means it is now out of warranty. One of the hinges is loose and pops out of place every time I open the laptop”

and here for another https://h30434.www3.hp.com/t5/Notebook-Hardware-and-Upgrade-Questions/Hinge-Split-issue-HP-Envy-13-2019/m-p/7991805

“I purchased HP ENVY 13-aq0011ms Laptop in December 2019. After using about 1 year, (I treated it very gently all the time), the issue started: when the screen of the laptop is being moved to open/close, the chassis of the bottom case pops and split-opens”

and here https://h30434.www3.hp.com/t5/Business-Notebooks/Broken-hinge-in-HP-ENVY-13-Laptop/td-p/6749446

“Just after 13 months of buying my new HP Envy and spending 1400£ for it, the right hinge broke. There were no falls/accidents”

My son did attempt to follow up with customer service as suggested but got nowhere.

He knows he could take it further. He could get an independent inspection. He could raise a small claim – subject to finding the right entity to pursue as that isn’t necessarily easy. But he found the whole process exhausting. He raised a claim which was rejected, he escalated it and it was rejected again. He decided life was too short, he is still using the broken laptop for his studies, and when he starts work and has a bit of money he plans to buy a Mac.

Personally I’ve got a lot of respect for HP. I have an HP laptop myself (x360) which has lasted for years. I was happy for my son to buy an HP laptop. I did not believe the company would work so hard to avoid its warranty responsibilities. I guess those charged with minimising the cost to the company of warranty claims have done a good job. There is a hidden cost though. Why would he ever buy or recommend HP again? Why would I?

Note: HP Inc is the vendor who supplies PCs and printers, like my son’s laptop. HP Enterprise sells servers, storage and networking products and is a separate company. None of the above has any relevance to HP Enterprise.

PS: I posted a link to this post on the HP support forum, where others are complaining about this issue. I was immediately banned.

A surprising favourite: Shure’s Aonic 215 True Wireless Sound Isolating Earbuds

I have reviewed numerous wireless earbuds over the last six months, but the real test is which ones I pick out of the pile when going out for a walk or run. Often it is the Shure Aonic 215, despite some limitations. They have an unusual design which hooks right over the ear instead of just fitting within it; I like this because it is more secure than most designs, and I don’t like the inconvenience or potential expense of losing a valuable gadget when out and about. Plus, they sound good.

How good? It was some years back that Shure opened my eyes (or ears) to how good in-ear monitors (IEMs) could sound. It was at a show, only a demo, and the IEMs had a four-figure price, but it made me realise the potential of in-ear electronics to sound better than any headphones I have heard.

I also have some lesser but much-used wired Shure IEMs which are a few years old but still sound good. I’m happy to say that the Aonic 215s sound substantially better in every way: clarity, frequency response, realism.

That said, the Aonic 215 true wireless has had a chequered history. Launched in April 2020, they were actually recalled by Shure because of problems with one earphone not playing, or battery issues. Shure fixed the issues to the extent of resuming supply but they are still a little troublesome.

If you value convenience above sound quality, you can get other earbuds that sound fine, have more features, and cost less – so go and do that.

Still reading? Well, if you like the Shure sound, the True Wireless does have a lot going for it. It’s important to understand that this is a modular system. The Aonic 215 has Shure’s standard MMCX connector, and you can get a cable that lets you use these wired. You can also get other Shure IEMs to connect to the True Wireless earhooks, letting you use them wireless.

This package of course combines the two. You get a charging case too, which when fully charged will recharge the IEMs three times. Play time is up to 8 hours (7 hours is probably more realistic) – long enough for me.

Task number one is selecting the right ear sleeve. The aim is to create a seal in your ear. Six pairs are supplied, including the pair pre-attached. The best in my opinion are the foam type, which form themselves to the shape of your ear and come in small, medium and large. Changing these is a little awkward and, as ever, one worries about damaging the unit, but with a little twisting and tugging it is not too bad. Most other earbuds do not have these foam-type sleeves.

That done, you fit the earbuds and turn on. Now the fun starts. There is a single button on each ear hook, positioned on a circular piece which hangs behind your ear. You operate it either by squeezing this piece or by pressing the button, which then presses into your head. I found the squeezing option better, but it is not super convenient. Don’t worry too much though, as the functionality is limited: you can power on and off, pause, answer or end a call, and turn environment mode on or off. A triple click activates a voice assistant (I didn’t try this).

Environment mode? This is pretty useful, and lets you hear what is going on around you. If you want to have a conversation while listening, it is pretty much essential.

What’s missing though? Well, volume control and track skip are the key ones. You will have to use your player for that. Shure is still working on the firmware so this might improve, but one button is not much to play with.

Another potentially big deal is that calls only work in the right ear. This doesn’t bother me much, but for some it is a deal-breaker, depending on how you want to use them. I expect to use them almost entirely for music.

There is a Shure Play app for iOS and Android which describes itself as a high-res audio player. This has a graphic equalizer but it only works when playing music through the app, which excludes streaming sources. You can also adjust the environment mode. You need the app to update the firmware – which given the history of the product is quite important; the update history makes reference to “bug fixes.” The update is done over Bluetooth and takes around 30 minutes; my first effort failed because the mobile went to sleep. I got this done in the end by keeping the app open and touching it from time to time to stop it sleeping. Such are the things that lovers of high quality audio endure in pursuit of the best sound!

I use these earbuds mainly with a Sony Walkman music player and like the excellent sound quality, secure fit, and very useful environment mode. A few things to note about the sound, which is the main benefit here. You get aptX, AAC and SBC, with aptX best for quality but AAC important for Apple devices. There is only a single driver, which limits the quality compared to high-end Shure devices like the ones I heard years ago, but it is still excellent. I would characterise the sound as neutral in tone, with particularly good separation and bass that is clean and not at all boomy. The lack of boom may come over as bass-light at first, but persevere and you will appreciate it. It is important to have a good fit, and if you don’t get the seal they will sound thin; every ear is different, so how easy this is will vary. The design of the Shure also means that the sleeves can clog with wax; a little tool is supplied to help with cleaning.

Not for everyone then; but these suit me well. One last thing to mention: Shure unfortunately has a reputation as not the most reliable of earbud brands. In my case, one wireless unit went dead after a couple of weeks. Shure replaced it and all is well, but it is somewhat concerning.
