The decline of high end audio at CES and what it says about the audiophile market

I am not a regular at CES, the huge consumer technology trade fair in Las Vegas, but I well recall my last visit, in 2014. I did the usual round of press conferences from various technology vendors, but reserved some time towards the end of my stay for the high-end audio rooms at the Venetian, one of the more civilized hotels in Vegas despite the fake canals.

There was plenty of activity there, floor upon floor of exhibitors showing all kinds of audio exotica, from cables thicker than your arm to amplifiers that would test the strength of your flooring. Of course there was plenty of audio on the main CES exhibits as well, but my observation at the time was that while the mainstream manufacturers like B&W and Sony delivered good sound at relatively affordable prices, the crazy folk in the Venetian did achieve the best sonics, if you closed your eyes to the wild theories and bank-busting prices.

I was ushered into a room to hear a preview of Naim’s Statement amplifiers and heard a sound that was “muscular, etched and authoritative”, no less than it should be at £150,000 for a set.

image

It appears that memories will now be all we have of these great days in the Venetian. Last year CEPro reported:

Maybe the writing was on the wall last year at CES 2017 when two of the suites in the high-performance area were occupied by AARP and Serta Mattress. The running joke among attendees was the elderly audiophiles there could take a nap and check in on their retirement status while listening to audio …. “This is the end of high-performance audio at CES,” said one exhibitor bluntly.

This year it has played out more or less as expected:

The impact of the high-fidelity corner of CES was certainly diminished by any standard. Actual listening rooms were reduced to a single hallway, with some stragglers to be found a few floors upward.

says AudioStream.

The word is that High-End Munich has replaced CES to some extent; but this is not just a matter of which industry show is more fashionable. You only have to look around at a hi-fi show to note that these enthusiasts are mostly of an older generation. The future does not look good.

There is no decline in music appreciation, so what is wrong? There are several factors which come to mind.

The first and most important is that technology has made high quality audio cheap and ubiquitous. Plug a decent pair of headphones into the smartphone you already have, and the quality is more than satisfactory for most listeners. Spend a bit on powered wireless speakers and you can get superb sound. In other words, the excellent performance of mainstream audio has pushed the high-end market into a smaller and smaller niche.

The industry has also harmed itself by seemingly embracing every opportunity for hype, regardless of what science and engineering tell us. Exotic cables, digital resolutions beyond anything that human ears can hear, unwarranted fuss about jitter or mysterious timing issues (MQA anyone?), and more.

In the meantime, the music companies have done their best to make high resolution audio even more pointless by excessive dynamic range compression engineered into the music they release, wasting the fantastic dynamic range that is now possible and even on occasion introducing audible distortion.

I became an audio enthusiast when I heard how much I was missing by using mainstream budget equipment. I recall listening sessions in hi-fi shops where I was stunned by the realism, musicality and detail that was to be heard from familiar records when played back on high-end systems.

Such experiences are less likely today.

Fixing OneDrive Camera upload on Android

A feature of Microsoft’s OneDrive cloud storage is that you can set it to upload photos from your smartphone automatically. It is a handy feature, in part as a backup in case you lose your mobile, and in part because it lets you easily get to them on your PC or Mac, for editing, printing or sharing.

This feature used to work reliably on Windows Phone but I have not found it so good on Android. Photos never seem to upload in the background, but only when you open the OneDrive app and tap Photos. Even then, it seems to stop uploading from time to time, as if everything is up to date when it is not.

The fix that I have found is to open OneDrive settings by tapping the Me icon (not a particularly intuitive place to find settings, but never mind).

image

Then I turn Camera upload off, go back to Photos, then return to settings and turn Camera upload on again. This always kicks the upload back into life.

image
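Incidentally, if you want to verify what has actually reached OneDrive, rather than trusting the app’s own idea of its state, the camera roll is also reachable through the Microsoft Graph API. Here is a minimal C# sketch, assuming you already have an OAuth access token with rights to read the user’s files; the GRAPH_TOKEN environment variable and the raw JSON dump are just for illustration:

// List the newest items in the OneDrive camera roll via Microsoft Graph,
// to check server-side what has really been uploaded.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class CameraRollCheck
{
    static async Task Main()
    {
        var token = Environment.GetEnvironmentVariable("GRAPH_TOKEN"); // illustrative
        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", token);

        // "cameraroll" is one of the well-known special folders in the drive API
        var url = "https://graph.microsoft.com/v1.0/me/drive/special/cameraroll/children"
                + "?$select=name,lastModifiedDateTime&$top=10";
        Console.WriteLine(await client.GetStringAsync(url)); // raw JSON; parse properly in real code
    }
}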

It is worth noting of course that Google Photos also has this feature and it is likely to be enabled, unless you specifically took care not to enable it. And cloud storage of photos on Google is free if you choose “High quality” for upload size; if you choose “Original”, uploads count against the 15GB of free storage that comes with a Google account.

This being the case, why bother with OneDrive camera upload? A few reasons I can think of:

1. The Windows 10 Photos app integrates with OneDrive, showing previews of your images without downloading them and letting you download on demand.

2. You might have more space on OneDrive, especially if you use OneDrive for Business, for which camera upload support is currently in beta.

3. In a business context, automatic upload to OneDrive for Business has great potential. Think surveyors, engineers, medics, anyone who does site visits for work.

4. For consumers, it probably does not make sense to spread your stuff across both a Microsoft account and a Google account. If you have picked Microsoft, maybe because you use Windows or because you would rather trust Microsoft than Google with your personal data, then you would want your photos to be in OneDrive rather than Google Photos.

It is therefore unfortunate that in my experience it does not work right. I am not sure if this is just a bug in the app, or something to do with Android. In the end though, it is just another niggly thing that pushes Android users away from Microsoft and towards Google services.

The best apps for a Windows 10 PC? Disappointing list shows key Windows weakness

I happened across Tom Warren’s list of 9 best apps for your new Windows PC and it gave me pause for thought. You may love some of those apps – Tweeten, Wox, ShareX, for example – but as it happens I don’t use any of them and it strikes me as a weak list.

There are reasons for this and it is not Warren’s fault (though of course you can argue with his selection, that’s really the point of this kind of post).

The most essential app for Windows is Microsoft Office. In business environments a new Windows 10 installation may only need Office, or Office and perhaps a few custom business applications, and it is ready to go.

You might add Chrome or Firefox if you want to avoid Edge (I use Edge and find it pretty good), and you probably want Adobe Reader or equivalent as Edge is not that good for PDFs.

There are other fantastic commercial applications of course, not least Adobe’s amazing Creative Cloud, and of course stalwarts like AutoCAD.

These expensive business applications are not the kind of thing you want to list in a consumer-oriented post though. So you end up desperately searching the Windows Store for apps that deserve to be on a “best apps” list. It is not easy.

The core problem is that Microsoft expended considerable energy telling developers not to bother with classic Windows desktop applications but to target the Windows Runtime, later reworked as UWP (Universal Windows Platform). Then, with the abandonment of Windows 10 Mobile, much of UWP’s cross-device rationale disappeared and it became rather pointless. You can debate this back and forth, but the net result is that much of the life was sucked out of the Windows developer ecosystem, even though Windows remains popular.

I don’t see this changing and it will not help Microsoft sustain Windows market share versus Google Chrome OS and Apple iPad Pro. From a consumer perspective, an iPad now has vastly better apps than Windows.

Incidentally, my favourite free Windows apps are Visual Studio Code, Filezilla, Putty, Notepad++, Paint.NET, Audacity, Foobar2000 and Open Live Writer. And stuff I have installed in Windows Subsystem for Linux (Ubuntu), though I am not sure if that counts.

David Bowie Is app: Floating in a most peculiar way

The exhibition David Bowie Is, originally at the Victoria and Albert museum in London and subsequently on tour around the world, has proved an enormous success with over 2 million visitors in 12 locations. Sony Entertainment has now released David Bowie Is AR Exhibition, an app for iOS and Android that uses Augmented Reality to enable users to enjoy the exhibition at home and whenever they like.

I found the app thought-provoking. I am a fan of course, so keen to see the material; and I attended the London exhibition twice, so I have some context.

image

I tried the app on an Honor 10 AI – note that you have to download the Google ARCore library first, if it is not already installed. Then I ran the app and found it somewhat frustrating. When the app starts up, you get a calibrating screen and this has to complete before you can progress.

image

If you struggle at all with this, I recommend having a look at the help, which says to “Find a well-lit surface with a visible pattern or a few flat items on it. A magazine on a desk or table works well.” Another tip is that the app is designed for a table-top experience. So sit at a desk, do not try walking around and using a wall.

The app streams a lot of data. So if you are on a poor connection, expect to wait while the orange thermometer bar fills up at the bottom of the screen. The streaming/caching could probably be much improved.

Once I got the app working I began to warm to it. You can think of it as a series of pages or virtual rooms. Each room has an array of objects in it, and you tap an object to bring it into view. Once an object is focused like this, you can zoom in by moving the phone. Pinch to zoom should work too, though I had some problems with it.

Here is a view of the recording page:

image

and here I’ve brought a page of Bowie’s notes into view (note the caption which appears) and zoomed in; the resolution is good.

image

The clever bit is that you can move objects around by tap and drag. This is a nice feature when viewing Bowie’s cut-up lyric technique, since you can drag the pieces around to exercise your own creativity.

image

Fair enough, but is this really Augmented Reality? I’d argue not, since it does not mix the real world with the virtual world. It just uses the AR platform as a viewer into this virtualised environment.

The experience is good when it works, but not if you get disappearing content, endless “calibration”, stuttering videos, or content that is too small and stubbornly refuses to come into view – all issues which I encountered. It also requires a fairly high-end phone or tablet. So your environment has to be just right for it to work; not ideal for enjoying on a train journey, for example. And some of the content is literally shaky; I think this is a bug and may improve with an update.

Would it be better if it were presented in a more traditional way, as a database of items which you could search and view? Unfortunately I think it would. This would also reduce the system requirements and enable more people to enjoy it.

It does look as if there is a lot here. According to the site:

56 costumes
38 songs
23 music videos
60 original lyric sheets
50 photos
33 drawings and sketches
7 paintings

I would love to be able to look up these items easily. Instead I have to hunt through the virtual rooms and hope I can find what I am looking for. Just like a real exhibition, complete with crowds and kids wanting toilets, I guess.

Unlimited free private repositories come to GitHub

When I was looking for an online code repository some years back, I picked Visual Studio Online (now called Azure DevOps) over GitHub. The main reason was the ability to host private repositories with a free account. The projects I work on typically only have one or two developers.

Microsoft acquired GitHub last year and has now announced free private repositories on GitHub – provided you have no more than three collaborators. You can see all the plans here.

image

There is still a bias towards open source, in that open source developers can use the Team plan for free. This is essential for GitHub to fulfil its role as the home of many widely used open source projects.

The addition of free private repositories is significant though. There are plenty of developers like myself who will now look again at hosting code on GitHub.
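Creating one of the new private repositories needs nothing special: the ordinary REST API call works, with the private flag set. A minimal C# sketch, assuming a personal access token with repo scope in a GITHUB_TOKEN environment variable; the repository name is illustrative:

// Create a private repository via the GitHub REST API.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class NewPrivateRepo
{
    static async Task Main()
    {
        var token = Environment.GetEnvironmentVariable("GITHUB_TOKEN"); // illustrative
        using var client = new HttpClient();
        client.DefaultRequestHeaders.UserAgent.ParseAdd("repo-sketch"); // GitHub requires a User-Agent
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("token", token);

        var body = new StringContent(
            "{\"name\":\"my-private-project\",\"private\":true}",
            Encoding.UTF8, "application/json");
        var response = await client.PostAsync("https://api.github.com/user/repos", body);
        Console.WriteLine(response.StatusCode); // Created (201) on success
    }
}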

What is Microsoft’s strategy? There seem to me to be two important reasons why Microsoft acquired GitHub. One was as a defensive measure. Microsoft now has a ton of open source projects that are critical to its platform, things like .NET Core and now most of the .NET frameworks as well. It would have been uncomfortable if a rival like Google had acquired GitHub.

The second is to promote Azure. GitHub’s infrastructure will no doubt move to Azure, and all going well the service will promote Azure both as an example of a successful at-scale service, and by little ads and signposts that Microsoft can include. The developer audience is influential when it comes to platform choices.

Microsoft therefore does not need GitHub to be profitable, which is just as well, since it has now removed one of the main incentives to get a paid account.

I will be interested to see how the company moves to further integrate GitHub and Azure DevOps. There is currently quite a lot of overlap and it would make sense to streamline the offerings to share the same back-end technology, or even to fold Azure DevOps services into GitHub.

There is no hurry. Microsoft’s priority will be to keep existing GitHub developers happy and to convince them that the acquisition will do no harm.

Desktop development: is Electron the answer, or a tragedy?

A few weeks ago InfoQ posted a session by Paul Betts on Desktop Applications in Electron. Betts worked on Slack Desktop, which he says was one of the first Electron apps after the Atom editor. There is a transcript as well as a video (which is great for text-oriented people like myself).

Electron, in case you missed it, is a framework for building desktop applications with Chromium, Google’s open source browser on which Chrome is based, and Node.js. In that it uses web technology for desktop applications, it is a similar concept to older frameworks like Apache Cordova/PhoneGap, though Electron only targets Windows, macOS and Linux, not mobile platforms, and is specific to a particular browser engine and JavaScript runtime.

image

Electron is popular as a quick route to cross-platform desktop applications. It is particularly attractive if you come from a web development background since you can use many of the same libraries and skills.

Betts says:

Electron is a way to build desktop applications that run on Mac and Linux and Windows PCs using web technologies. So we don’t have to use things like Cocoa or WPF or Windows Forms; these things from the 90s. We can use web technology and reuse a lot of the pieces we’ve used to build our websites, to build desktop applications. And that’s really cool because it means that we can do interesting desktop-y things like, open users’ files and documents and stuff like that, and show notifications and kind of do things that desktop apps can do. But we can do them in less than the bazillion years it will take you to write WPF and Coco apps. So that’s cool.

There are many helpful tips in this session, but the comment posted above gave me pause for thought. You can get excellent results from Electron: look no further than Visual Studio Code which in just a few years (first release was April 2015) has become one of the most popular development tools of all time.

At the same time, I am reluctant to dismiss native code desktop development as yesterday’s thing. John Gruber articulates the problem in his piece about Electron and the decline of native apps.

As un-Mac-like as Word 6 was, it was far more Mac-like then than Google Docs running inside a Chrome tab is today. Google Docs on Chrome is an un-Mac-like word processor running inside an ever-more-un-Mac-like web browser. What the Mac market flatly rejected as un-Mac-like in 1996 was better than what the Mac market tolerates, seemingly happily, today. Software no longer needs to be Mac-like to succeed on the Mac today. That’s a tragedy.

Unlike Gruber I am not a Mac person but even on Windows I love the performance and integration of native applications that look right, feel right, and take full advantage of the platform.

As a developer I also prefer C# to JavaScript but that is perhaps more incidental – though it shows how far-sighted C# inventor Anders Hejlsberg was when he shifted to work on TypeScript, another super popular open source project from Microsoft.

A glimpse into Microsoft history which goes some way to explaining the decline of Windows

Why is Windows in decline today? Short answer: because Microsoft lost out and/or gave up on Windows Phone / Mobile.

But how did it get to that point? A significant part of the story is the failure of Longhorn (when two to three years of Windows development was wasted in a big reset), and the failure of Windows 8.

In fact these two things are related. Here’s a post from Justin Chase; it is from back in May but only caught my attention when Jose Fajardo put it on Twitter. Chase was a software engineer at Microsoft between 2008 and 2014.

Chase notes that Internet Explorer (IE) stagnated because many of the developers working on it switched over to work on Windows Presentation Foundation, one of the “three pillars” of Longhorn. I can corroborate this to the extent that I recall a conversation with a senior Microsoft executive at Tech Ed Europe, in pre-Longhorn days, when I asked why not much was happening with IE. He said that the future lay in rich internet-connected applications rather than browser applications. Insightful perhaps, if you look at mobile apps today, but no doubt Microsoft also had in mind locking people into Windows.

WPF, based on .NET and DirectX, was intended to be used for the entire Windows shell in Longhorn. It was too slow, memory hungry, and buggy, eventually leading to the Longhorn reset.

“Ever since Longhorn the Windows team has had an extremely bitter attitude towards .NET. I don’t think its completely fair as they essentially went all in on a brand new technology and .NET has done a lot of evolving since then but nonetheless that sentiment remains among some of the now top players in Microsoft. So effectively there is a sentiment that some of the largest disasters in Microsoft history (IE’s fall from grace and multiple “bad” versions of Windows) are, essentially, totally the fault of gambling on .NET and losing (from their perspective). “

writes Chase.

This went on to impact Windows 8. You will recall that Windows Phone development was once based on Silverlight. Windows 8 however did not use Silverlight but instead had its own flavour of XAML. At the time I was bemused that Microsoft, with an empty Windows 8 app store, had not enabled compatibility with Windows Phone applications which would have given Windows 8 a considerable boost as well as helping developers port their code. Chase explains:

“So when Microsoft went to make their new metro apps for windows 8/10, they almost didn’t even support XAML apps but only C++ and JavaScript. It was only the passion of the developer community that pushed it over the edge and let it in.”

That was a shame because Silverlight was a great bit of technology, lightweight, powerful, graphically rich, and even cross-platform to some extent. If Microsoft had given developers a consistent and largely compatible path from Silverlight to Windows Phone to Windows 8 to Windows 10, rather than the endless changes of direction that happened instead, its modern Windows development platform would be stronger. Perhaps, even, Windows Phone / Mobile would not have been abandoned; and we would not have to choose today between the Apple island and the ad-driven Android.

The end of the Edge browser engine. Another pivotal moment in Microsoft’s history

Microsoft’s Joe Belfiore has announced that future versions of its Edge web browser will be built on Chromium. Chromium is an open source browser project originated by Google, which uses it for Chrome. The browser engine is Blink, which was forked from WebKit in April 2013.

image

Belfiore does not specify what will happen to Chakra, the JavaScript engine used by Edge, but it seems likely that future versions of Edge will use the Chrome V8 engine instead.

There is plenty of logic behind the move. The immediate benefit to Microsoft in having its own browser engine is rather small. Chromium-based Edge will still have Microsoft’s branding and can still have unique features. It opens an easy route to cross-platform Edge, not only for Android, but also for MacOS and potentially Linux. It will improve web compatibility because all web developers know their stuff has to run properly in Chrome.

This is still a remarkable moment. The technology behind Edge goes right back to Trident, the Internet Explorer engine introduced in 1997. In the Nineties, winning the browser wars was seen as crucial to the future of the company, as Microsoft feared that users working mostly in the browser would no longer be hooked to Windows.

Today those fears have somewhat come to pass, and Windows does indeed face a threat, especially from Chrome OS in laptops and of course from iOS and Android on mobile. It turns out though that internet-connected apps are just as important as the browser, and since Microsoft is not doing too well with its app store either, there are challenges ahead for Microsoft’s desktop operating system.

The difference is that today Microsoft cares more about its cloud platform. Replacing a Windows-only building block with a cross-platform one is therefore strategically more valuable than the opportunity to make Edge a key attraction of Windows, which was in any case unsuccessful.

The downside though (and it is a big one) is that the disappearance of the Edge engine means there is only Mozilla’s Gecko (used by Firefox), and WebKit, used by Apple’s Safari browser, remaining as mainstream alternatives to Chromium. Browser monoculture is drawing closer then, though the use of open source lessens the risk that any one company (it would be Google in this instance) will be able to take advantage.

Internet Explorer was an unhealthy monoculture during its years of domination, oddly not because of all its hooks into Windows, but because Microsoft let its development stagnate in order to promote its Windows-based application platform (at least, that is my interpretation of what happened).

Let me add that this is a sad moment for the Edge team. I like Edge and there was lots of good work done to make it an excellent web browser.

State of Microsoft .NET: transition to .NET Core or be left behind

The transition of Microsoft’s .NET platform from Windows-only to cross-platform (and open source) is the right thing. Along with Xamarin (.NET for mobile platforms), it means that developers with skills in C#, F# and Visual Basic can target new platforms, and that existing applications can with sufficient effort be migrated to Linux on the server or to mobile clients.

That does not mean it is easy. Microsoft forked .NET to create .NET Core (it is only four years since I wrote up one of the early announcements on The Register) and the problem with forks is that you get divergence, making it harder to jump from one fork to the other.

At first this was disguised. The idea was that .NET Framework (the old Windows-only .NET) would be evolved alongside .NET Core and new language features would apply to both, at least initially. In addition, ASP.NET Core (the web framework) runs on either .NET Framework or .NET Core.

This is now changing. Microsoft has shifted its position so that .NET Framework is in near-maintenance mode and that new features come only to .NET Core. Last month, Microsoft’s Damian Edwards stated that ASP.NET Core will only run on .NET Core starting from 3.0, the next major version.

This week Mads Torgersen, C# Program Manager, summarised new features in the forthcoming C# 8.0. Many of these features will only work on .NET Core:

Async streams, indexers and ranges all rely on new framework types that will be part of .NET Standard 2.1. As Immo describes in his post Announcing .NET Standard 2.1, .NET Core 3.0 as well as Xamarin, Unity and Mono will all implement .NET Standard 2.1, but .NET Framework 4.8 will not. This means that the types required to use these features won’t be available when you target C# 8.0 to .NET Framework 4.8.

Default interface member implementations rely on new runtime enhancements, and we will not make those in the .NET Runtime 4.8 either. So this feature simply will not work on .NET Framework 4.8 and on older versions of .NET.
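To make the distinction concrete, here is a short sketch of my own (not Microsoft’s example) showing two of those features. It compiles and runs on .NET Core 3.0, but not on .NET Framework 4.8, because IAsyncEnumerable<T>, Index and Range are .NET Standard 2.1 types:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class CSharp8Sketch
{
    // An async stream: an iterator that is free to await between yields
    static async IAsyncEnumerable<int> SlowNumbersAsync()
    {
        for (var i = 1; i <= 3; i++)
        {
            await Task.Delay(100); // stands in for a network or disk call
            yield return i;
        }
    }

    static async Task Main()
    {
        // Consume the stream with await foreach (new in C# 8.0)
        await foreach (var n in SlowNumbersAsync())
            Console.WriteLine(n);

        // Indexes and ranges: ^1 counts from the end, 1..4 takes a slice
        var days = new[] { "Mon", "Tue", "Wed", "Thu", "Fri" };
        Console.WriteLine(days[^1]);                     // Fri
        Console.WriteLine(string.Join(" ", days[1..4])); // Tue Wed Thu
    }
}

Async streams are the headline feature here: a method can now await inside an iterator, which previously needed hand-rolled plumbing or third-party libraries.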

The obvious answer is to switch to .NET Core. Microsoft is making this more feasible by supporting WPF and Windows Forms on .NET Core, on Windows only. Entity Framework 6 will also be supported. It is likely that this will work on Windows 7 as well as Windows 10.

This move will not be welcome to all developers. Servicing for .NET Framework is automatic, via Windows Update or on-premises equivalents, but servicing for .NET Core requires developer attention. Inevitably some things will not work quite the same on .NET Core, and for long-term stability it may be preferable to stay with .NET Framework. The more rapid release cycle of .NET Core is not necessarily a good thing if you prioritise reliability over new features.

The problem though: from now on, .NET Framework will not evolve much. There are a few new things in .NET Framework 4.8, like high DPI support, an Edge-based browser control, and better touch support, but these are minimal updates rather than essential ones. In time, maintaining applications on .NET Framework will look like a mistake as application capabilities and performance fall behind. That means that if you are a .NET developer, .NET Core is in your future.

From Big Blue to Big Red? IBM to acquire Red Hat

image

IBM has agreed to acquire Red Hat:

IBM will acquire all of the issued and outstanding common shares of Red Hat for $190.00 per share in cash, representing a total enterprise value of approximately $34 billion.

IBM is presenting this as a hybrid cloud play, with the claim that businesses are held back from cloud migration “by the proprietary nature of today’s cloud market.”

IBM and Red Hat will be strongly positioned to address this issue and accelerate hybrid multi-cloud adoption. Together, they will help clients create cloud-native business applications faster, drive greater portability and security of data and applications across multiple public and private clouds, all with consistent cloud management. In doing so, they will draw on their shared leadership in key technologies, such as Linux, containers, Kubernetes, multi-cloud management, and cloud management and automation.

Notably, the announcement specifically refers to multi-cloud adoption, and that the company intends to “build and enhance” partnerships with Amazon Web Services (AWS), Microsoft Azure, Google Cloud and Alibaba.

Red Hat will be a “distinct unit” within IBM, the intention being to preserve its open source culture and independence.

My own instinct is that we will see more IBM influence on Red Hat than Microsoft influence on GitHub, to take another recent example of an established tech giant acquiring a company with an open source culture.

IBM is coming from behind in the cloud wars, but with Linux ascendant, and Red Hat the leader in enterprise Linux, the acquisition gives the company a stronger position in today’s technology landscape.
