Yamaha’s vinyl revival on display at IFA in Berlin including GT-5000 turntable

At IFA in Berlin, Europe’s biggest consumer electronics show, there is no doubting that the vinyl revival is real.

At times it did feel like going back in time. On the Teac stand there were posters for Led Zeppelin and The Who, records by Deep Purple and the Velvet Underground, and of course lots of turntables.

image

Why all the interest in vinyl? Nostalgia is a factor but there is a little more to it. A record satisfies a psychological urge to collect, to own, to hold a piece of music that you admire, and streaming or downloading does not meet that need.

There is also the sound. At their best, records have an organic realism that digital audio rarely matches. Sometimes that is because of the freedom digital audio gives mastering engineers to crush all the dynamics out of music in a quest to make everything as LOUD as possible. Other factors are the possibility of euphonic distortion in vinyl playback, or that excessive digital processing damages the purity of the sound. Records also have plenty of drawbacks, including vulnerability to physical damage, dust which collects on the needle, geometric issues which mean that the arm is (most of the time) not exactly parallel to the groove, and the fact that the quality of reproduction drops near the centre of the record, where the groove speed is lower.

Somehow all these annoyances have not prevented vinyl sales from increasing, and audio companies are taking advantage. It is a gift for them, some slight relief from the trend towards smartphones, streaming, earbuds and wireless speakers in place of traditional hi-fi systems.

One of the craziest things I saw at IFA was Crosley’s RDS3, a miniature turntable too small even for a 7” single. It plays one-sided 3” records, of which there are hardly any available to buy. Luckily it is not very expensive, and is typically sold on Record Store Day complete with a collectible 3” record which you can play again and again.

image

Moving from the ridiculous to the sublime, I was also intrigued by Yamaha’s GT-5000. It is a high-end turntable which is not yet in full production. I was told there are only three in existence at the moment, one on the stand at IFA, one in a listening room at IFA, and one at Yamaha’s head office in Japan.

image

Before you ask, price will be around €7000, complete with arm. A lot, but in the world of high-end audio, not completely unaffordable.

There was a Yamaha GT-2000 turntable back in the eighties, the GT standing for “Gigantic and Tremendous”. Yamaha told me that engineers in retirement were consulted on this revived design.

The GT-5000 is part of a recently introduced 5000 series, including amplifier and loudspeakers, which takes a 100% analogue approach. The turntable is belt drive, and features a very heavy two-piece platter: the brass inner platter weighs 2kg and the aluminium outer platter 5.2kg. The high mass of the platter stabilises the rotation. The straight tonearm features a copper-plated aluminium inner tube and a carbon outer tube. The headshell is cut aluminium and is replaceable. You can adjust the speed ±1.5% in 0.1% increments. Output is via balanced XLR terminals or unbalanced RCA. Yamaha do not supply a cartridge but recommend the Ortofon Cadenza Black.

Partnering the GT-5000 are the C-5000 pre-amplifier, the M-5000 100W per channel stereo power amplifier, and the NS-5000 three-way loudspeakers. Both amplifiers have balanced connections and Yamaha has implemented what it calls “floating and balanced technology”:

Floating and balanced power amplifier technology delivers fully balanced amplification, with all amplifier circuitry including the power supply ‘floating’ from the electrical ground … one of the main goals of C-5000 development was to have completely balanced transmission of phono equaliser output, including the MC (moving coil) head amp … balanced transmission is well-known to be less susceptible to external noise, and these qualities are especially dramatic for minute signals between the phono cartridge and pre-amplifier.

In practice I suspect many buyers will partner the GT-5000 with their own choice of amplifier, but I do like the pure analogue approach which Yamaha has adopted. If you are going to pretend that digital audio does not exist you might as well do so consistently (I use Naim amplifiers from the eighties with my own turntable setup).

I did get a brief chance to hear the GT-5000 in the listening room at IFA. I was not familiar with the recording and cannot make meaningful comment except to say that yes, it sounded good, though perhaps slightly bright. I would need longer and to play some of my own familiar records to form a considered opinion.

What I do know is that if you want to play records, it really is worth investing in a high quality turntable, arm and cartridge; and that the pre-amplifier as well is critically important because of the low output, especially from moving coil cartridges.

GT-5000 arm geometry

There is one controversial aspect to the GT-5000, which is its arm geometry. All tonearms are a compromise. The ideal tonearm has zero friction, perfect rigidity, and parallel tracking at all points, unfortunately impossible to achieve. The GT-5000 has a short, straight arm, whereas most arms have an angled headshell and slightly overhang the centre of the platter. The problem with a short, straight arm is that its deviation from parallel is greater than that of a longer arm with an angled headshell, so much so that it may only be suitable for a conical stylus. On the other hand, it does not require any bias adjustment, simplifying the design. With a straight arm it would be geometrically preferable to have a very long arm, but that may tend to resonate more as well as requiring a large plinth. I am inclined to give the GT-5000 the benefit of the doubt; it will be interesting to see detailed listening and performance tests in due course.
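If you want a feel for the numbers, the tracking error of a pivoted arm follows from simple triangle geometry, and a few lines of Python make the trade-off concrete. This is only a sketch: the effective length, underhang, overhang and offset values below are illustrative, not Yamaha’s published figures.

# Rough tracking-error calculator for a pivoted tonearm. All dimensions in mm
# and purely illustrative; they are not GT-5000 specifications.
import math

def tracking_error_deg(r, arm_length, pivot_to_spindle, offset_deg=0.0):
    """Angle between cartridge axis and groove tangent at groove radius r."""
    # Angle at the stylus in the spindle/pivot/stylus triangle (law of cosines)
    cos_a = (r * r + arm_length * arm_length - pivot_to_spindle ** 2) / (2 * r * arm_length)
    stylus_angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
    # Perfect tangency puts the cartridge at 90 degrees to the groove radius
    return 90.0 - stylus_angle - offset_deg

L = 223.0  # effective length (illustrative)
for r in (146, 120, 90, 66):  # outer to inner groove radii
    straight = tracking_error_deg(r, L, L + 16)                 # straight arm, underhung, no offset
    angled = tracking_error_deg(r, L, L - 18, offset_deg=23.5)  # conventional overhung arm, angled headshell
    print(f"r={r:3d}mm  straight: {straight:+5.1f} deg   offset arm: {angled:+5.1f} deg")

With these made-up but plausible figures the straight arm’s error reaches roughly ten degrees or more at the outer groove, against two or three degrees for the conventional geometry, which is the compromise described above.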

More information on the GT-5000 is here.

Saving documents in Office 365 desktop applications

Those readers who also follow The Register may have noticed that I am writing more for that publication now, though be assured that I will still post here from time to time. My most recent piece is on saving documents in Office and reflects a longstanding annoyance that in applications like Word and Excel Microsoft mostly bypasses the standard Windows file save dialog in favour of its own Backstage, now supplemented by an additional dialog which the team says will help us “save your files to the cloud more easily.”

image

Admittedly the new dialog is small and neat relative to the cluttered Backstage, but it is not very flexible, and if you use multiple sub-folders to organize your files you will be clicking More save options half the time, defeating the point.

There is also a suspicion that rather than helping us with something most of us do not need help with, Microsoft is trying to promote OneDrive – which it is entitled to do, but it is an annoyance if the software you have paid for is being used as a surreptitious marketing tool.

Microsoft earnings: strong quarter, but Xbox revenue dives

Microsoft has announced its quarterly financial statements, reporting revenue of $33.7 billion, up 12% on the same period last year.

The company stated that Azure revenue is up 64% year on year. The Intelligent Cloud segment, which includes Azure, has overtaken the other two segments and is now the biggest, by a small margin. In addition, Azure gross margin has improved by 6% year on year.

Office 365 revenue is up 31% year on year.

Gaming was a black spot, declining 10% year on year – though Xbox Live monthly active users are at a record 65 million. The main problem is a 48% decline in the volume of Xbox consoles sold.

Quarter ending June 30th 2019 vs quarter ending June 30th 2018, $millions

Segment Revenue Change Operating income Change
Productivity and Business Processes 11047 +1379 4344 +878
Intelligent Cloud 11391 +1785 4502 +601
More Personal Computing 11279 +468 3559 +547

The segments break down as:

Productivity and Business Processes: Office, Office 365, Dynamics 365 and on-premises Dynamics, LinkedIn

Intelligent Cloud: Server products, Azure cloud services

More Personal Computing: Consumer including Windows, Xbox; Bing search; Surface hardware
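Out of interest, the absolute changes in the table convert to year-on-year growth rates with a little arithmetic; here is a quick sketch using the figures above (in $ millions):

# Year-on-year growth rates derived from the table above ($ millions).
segments = {
    "Productivity and Business Processes": (11047, 1379, 4344, 878),
    "Intelligent Cloud": (11391, 1785, 4502, 601),
    "More Personal Computing": (11279, 468, 3559, 547),
}
for name, (rev, rev_chg, op, op_chg) in segments.items():
    rev_growth = 100 * rev_chg / (rev - rev_chg)   # change over the prior-year figure
    op_growth = 100 * op_chg / (op - op_chg)
    print(f"{name}: revenue +{rev_growth:.1f}%, operating income +{op_growth:.1f}%")

That puts revenue growth at roughly 14% for Productivity and Business Processes, 18.6% for Intelligent Cloud and 4.3% for More Personal Computing.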

Not OK Google

Views on privacy vary. Most people either do not think about it, or trust that big tech companies will do no harm with knowledge of their location, who their friends are, what they like to view on the internet and so on.

That trust may be shaken by disturbing revelations last week from Belgian broadcaster VRT. The report states that:

  • Google records speech heard by its Google Assistant or Google Home devices
  • Google passes on a proportion of these recordings to third parties to assist with transcription; this is done to improve the speech recognition
  • Many of these recordings – it is not known exactly how many – are recorded unintentionally, rather than being triggered by the “OK Google” wake words. This could be because of some sound that the device incorrectly interprets as “OK Google”, or because of a mis-tap on a smartphone.
  • The recordings are not effectively anonymised. They include addresses, names, business names and so on. Identity is often easy to work out.
  • The recordings are personal. They include medical queries, domestic arguments, even on one occasion “a woman who was in definite distress.”

Google’s response? Its main concern is to prevent future leaks of audio files, rather than the fact that these recordings should not have been in the hands of third parties in the first place. “We are conducting a full review of our safeguards in this space to prevent misconduct like this from happening again,” says Google’s David Monsees.

Did users consent, somewhere in the miasma and dark UI patterns of “Accept” buttons that now bombard us on the web? Maybe, but I do not think this is what was expected by those users whose identifiable private moments were first recorded and then passed around by Google. They have been let down.

Chromium and Microsoft annoyances: Dynamics CRM issues like broken downloads, Chromium team “won’t fix”

Microsoft Dynamics CRM (which exists in both cloud-hosted and on-premises versions) is not working well with Chromium, the open source browser engine used by Google Chrome.

I discovered one obvious issue using Edge Preview, which is based on Chromium. If you download a file, for example one generated from a Word template, Microsoft Office does not recognise it. It turns out the filename has single quotes around it. I imagine the quotes are there to allow for document names which include spaces, but the header should use double quotes. Chromium (and Chrome) used to tolerate single quotes but now does not, and it is causing quite a bit of grief for CRM users in businesses that have standardised on Chrome.

You can read all the details here. Here’s a user report by Troy Siegert, whose organization frequently downloads files from Dynamics:

This week when the Chrome beta build went mainstream, my 30 users suddenly had Windows 10 unable to determine what to do with the files they were so dutifully downloading and trying to look at. Instead of *Report.pdf* the file was named *’Report.pdf’* and of course Windows 10 has no idea what a *.pdf’* file is or what to do with it, so it started asking users questions for which they weren’t prepared and that they didn’t understand. Some of them got confused and tried to associate .xlsx files with Adobe and then became unhappy when Adobe was throwing up messages about corrupt files.

Google’s Abdul Syed responds:

For any server operators running into this issue, the way to fix for this is to use double quotes around any quoted string in the Content-Disposition header (And, more generally, in any HTTP header).

Translation: fix your stuff, don’t expect us to fix our stuff. And in fact the issue has been marked WontFix (Closed).
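For what it is worth, the fix Syed describes is simple on the server side: quote the filename with double quotes, as RFC 6266 expects. Here is a minimal sketch using Python’s standard library; the handler and filename are illustrative, not Dynamics CRM code.

# Minimal sketch: serve a download with a correctly quoted Content-Disposition
# header. The wrongly quoted variant that trips up Chromium is shown in a comment.
from http.server import BaseHTTPRequestHandler, HTTPServer

class DownloadHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"%PDF-1.4 ..."  # placeholder content
        self.send_response(200)
        self.send_header("Content-Type", "application/pdf")
        # Right: double quotes around the quoted-string
        self.send_header("Content-Disposition", 'attachment; filename="Report.pdf"')
        # Wrong (roughly what Dynamics CRM sends): attachment; filename='Report.pdf'
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), DownloadHandler).serve_forever()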

There was actually a bit of a battle about this. The original commit here (Oct 2018) was reverted here (Feb 12 2019) and unreverted here (Feb 19 2019). In other words, the Chromium team knew it broke downloads for Dynamics CRM users but were not willing to compromise.

I am in two minds about this one. Dynamics CRM is sloppy in places and part of me favours giving Microsoft’s team a kick to make them fix things that should have been fixed years back.

On the other hand, Mozilla Firefox works fine with the CRM single quotes, and you cannot help wondering whether Google’s attitude would be different were it a Google application that was affected.

Two Factor Authentication is great–but what if you lose your phone or have your number hijacked?

Account hijack is a worry for anyone. What kind of chaos could someone cause simply by taking over your email or social media account? Or how about spending money on your behalf on Amazon, eBay or other online retailers?

The obvious fraud will not be long-lasting, but there is an aftermath too: changing passwords, and getting back into accounts that have been compromised and had their security information changed.

In the worst cases you might lose access to an account permanently. Organisations like Google, Microsoft, Facebook or eBay are not easy to deal with in cases where your account is thoroughly compromised. They may not be sure whether you are the victim attempting to recover an account, or the imposter attempting to compromise it. Even getting to speak to a human can be challenging, as they rely on automated systems, and when you do, you may not get the answer you want.

The solution is stronger security so that account hijack is less common, but security is never easy. It is a system, and like any system, any change you make can impact other parts of the system. In the old world, the most common approach had three key parts: username, password and email. The username was often the email address, so perhaps make that two key parts. Lose the password, and you can reset it by email.

Two problems with this approach. First, the password might be stolen or guessed (rather easy considering the massive databases of username/password combinations readily available online). And second, the email is security-critical, and email can be intercepted as it often travels the internet in plain text, for at least part of its journey. If you use Office 365, for example, your connection to Office 365 is encrypted, but an email sent to you may still be plain text until it arrives on Microsoft’s servers.

There is therefore a big trend towards 2-factor authentication (2FA): something you have as well as something you know. This is not new, and many of us have used things like little devices that display one-time pass codes that you use in addition to a password, such as the RSA SecurID key fob devices.

image

Another common approach is a card and a card reader. The card readers are all the same, and you use something like a bank card, put in your PIN, and it displays a code. An imposter would need to clone your card, or steal it and know the PIN.

In the EU, everyone is becoming familiar with 2FA thanks to the revised Payment Services Directive (PSD2), which comes into effect in September 2019 and requires Strong Customer Authentication (SCA). Details of what this means in the UK are here. See chapter 20:

Under the PSRs 2017, strong customer authentication means authentication based on the use of two or more independent elements (factors) from the following categories:

• something known only to the payment service user (knowledge)

• something held only by the payment service user (possession)

• something inherent to the payment service user (inherence)

So will we see a lot more card readers and token devices? Maybe not. They offer decent security, but they are expensive, and when users lose them or they wear out or the battery goes, they have to be replaced, which means more admin and expense. Giant companies like security, but they care almost as much about keeping costs down and automating password reset and account recovery.

Instead, the favoured approach is to use your mobile phone. There are several ways to do this, of which the simplest is where you are sent a one-time code by SMS. Another is where you install an app that generates codes, just like the key fob devices, but with support for multiple accounts and no need to clutter up your pocket or bag.

These are not bad solutions – some better than others – but this is a system, remember. It used to be your email address, but now it is your phone and/or your phone number that is critical to your security. All of us need to think carefully about a couple of things:

– if our phone is lost or broken, can we still get our work done?

– if a bad guy steals our phone or hijacks the number (not that difficult in many cases, via a little social engineering), what are the consequences?

Note that the SCA regulations insist that the factors are each independent of the other, but that can be difficult to achieve. There you are with your authenticator app, your password manager, your web browser with saved usernames and passwords, your email account – all on your phone.

Personally I realised recently that I now have about a dozen authenticator accounts on a phone that is quite old and might break; I started going through them and evaluating what would happen if I lost access to the app. Unlike many apps, most authenticator apps (for example those from Google and Microsoft) do not automatically reinstall complete with account data when you get a new phone.

Here are a few observations.

First, SMS codes are relatively easy from a recovery perspective (you just need a new phone with the same number), but not good for security. Simon Thorpe at Authy has a good outline of the issues with it here and concludes:

Essentially SMS is great for finding out your Uber is arriving, or when your restaurant table is ready. But SMS was never designed to provide a secure way for you to login to your online banking account.

Yes, Authy is pitching its alternative solutions, but the issues are real. So try to avoid SMS codes where you can; though, as Thorpe notes, they are still much stronger security than a password alone.

Second, the authenticator app problem. Each of those accounts is really just a long secret code. So you can back them up by storing that code. However it is not easy to extract the code from the app later, unless you hack your phone, for example by getting root access to an Android device.

What you can do though is to use the “manually enter code” option when setting up an account, and copy the code somewhere safe. Yes you are undermining the security, but you can then easily recover the account. Up to you.
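To see why storing that code is enough, it helps to know that the code is simply a shared secret from which the app derives each six-digit value. Here is a minimal sketch of the standard TOTP algorithm (RFC 6238) using only Python’s standard library; the secret shown is made up.

# Minimal TOTP sketch (RFC 6238). The base32 "secret" you are shown during
# authenticator setup is all you need to regenerate codes on another device.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32.replace(" ", "").upper())
    counter = int(time.time()) // interval            # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # hypothetical secret, prints a six-digit code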

If you use the (free) Authy app, the accounts do roam between your various devices. This must mean Authy keeps a copy on its cloud services, hopefully suitably encrypted. So it must be a bit less secure, but it is another solution.

Third, check out the recovery process for those accounts where you rely on your authenticator app or smartphone number. In Google’s case, for example, you can access backup codes – they are in the same place where you set up the authenticator account. These will get you back into your account, once for each code. I highly recommend that you do something to cover yourself against the possibility of losing access to your authenticator app, as Google is not easy to deal with in account recovery cases.

A password manager or an encrypted device is a good place to store backup codes, or you may have better ideas.

The important thing is this: a smartphone is an easy thing to lose, so it pays to plan ahead.

Wrestling with Visual Basic 6 (really!) and how knowledge on the internet gets harder to find

I have not done a thing with Visual Basic 6 for years, but a contact of mine has written a very useful utility in a certain niche (the teaching of Contract Bridge, though you do not need to know about bridge to follow this post) and his preferred tools are VBA and Visual Basic 6.

Visual Basic 6 is thoroughly obsolete, but apps compiled with it still run fine on Windows 10 and Microsoft probably knows better than to stop them working. The IDE is painful on versions of Windows later than XP, but that is what VMs are for. VBA, which uses essentially the same runtime (though updated and also available in 64-bit), is not really obsolete at all; it is still the macro language of Office, though Microsoft would prefer you to use a modern add-in model that works in the cloud.

Specifically, we thought it would be great to use Bo Haglund’s excellent Double Dummy Solver, which is an open source library and runs cross-platform. So it was just a matter of writing a VB6 wrapper to call this DLL.

I did not set my sights high; I just wanted to call one function, which looks like this:

EXTERN_C DLLEXPORT int STDCALL CalcDDtablePBN(
   struct ddTableDealPBN tableDealPBN,
   struct ddTableResults * tablep);

and the ddTableResults struct looks like this:

struct ddTableResults
{
   int resTable[DDS_STRAINS][DDS_HANDS];
};

I also forked the source and created a Visual C++ project for it for a familiar debugging experience.

So the first problem I ran into (before I compiled the DLL for myself) is that VB6 struggles with passing a struct (user-defined type or UDT in VB) ByVal. Maybe there is a way to do it, but this was when I ran up Visual C++ and decided to modify the source to make it more VB friendly, creating a version of the function that has pointer arguments so that you can pass the UDT ByRef.

A trivial change, but at this point I discovered that there is some mystery about __declspec(dllexport) in Visual C++, which is meant to export undecorated functions for use in a DLL but does not always do so. The easy solution is to go back to using a DEF file, and after fiddling for a bit I did that.

Now the head-scratching started as the code seemed to run fine but I got the wrong results. My C++ code was OK and the unit test worked perfectly. Further, VB6 did not crash or report any error. Just that the values in the ddTableResults.resTable array after the function returned were wrong.

Of course I searched for help, but it is somewhat hard to find help with VB6 and calling DLLs, especially since Microsoft has broken the links or otherwise removed all the helpful documents and MSDN articles that existed 20 years ago when VB6 was hot.

I actually dug out my old copy of Daniel Appleman’s Visual Basic Programmer’s Guide to the Windows API where he assured me that arrays in UDTs should work OK, especially since it is just an array of integers rather than strings.

Eventually I noticed what was happening. When I passed my two-dimensional array to the DLL it worked fine, but in the DLL the indexes were inverted. So all I needed to do was to fix this up in my wrapper, and then it all worked. The explanation is almost certainly that VB6 SAFEARRAYs store multi-dimensional arrays in column-major order, whereas C arrays are row-major, so the same block of memory reads back with the indices swapped.
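Here is a rough illustration of the effect (in Python for brevity, since it is only the memory layout that matters); the dimensions are illustrative.

# The same flat block of memory, read back two ways. C lays out resTable[i][j]
# row-major; a VB6 SAFEARRAY is stored column-major, so the natural VB
# declaration ends up with its indices inverted relative to the C side.
ROWS, COLS = 5, 4                     # e.g. strains x hands (illustrative sizes)
flat = list(range(ROWS * COLS))       # what actually sits in memory

# C view: element [i][j] lives at offset i*COLS + j
c_view = [[flat[i * COLS + j] for j in range(COLS)] for i in range(ROWS)]
# Column-major view of the same memory: element (a, b) lives at offset b*COLS + a
vb_view = [[flat[b * COLS + a] for b in range(ROWS)] for a in range(COLS)]

print(c_view[2][3] == vb_view[3][2])  # True: swap the indices and they line up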

image

I do not miss VB6 at all, and personally I moved on to VB.NET and C# at the earliest opportunity. I do understand however that some people like to stay with what is familiar, and also that legacy software has to be maintained. It would be interesting to know how many VB6 projects are still being actively maintained; my guess is quite a few. Which, if you read Bruce McKinney’s well-argued rants here, must be somewhat frustrating.

Microsoft’s Pipelines for Azure Kubernetes Service: fixing COPY failed

I like to try new technology when I can, so following the Build conference I decided to deploy a Hello World app to Azure Kubernetes Service (AKS). I made a one-node AKS cluster in no time. I built a .NET Core app in Visual Studio, deployed to a Linux Docker container, no problem. I pushed the container into ACR (Azure Container Registry), though it turns out I did not really need to do that. The tricky bit is getting the container deployed to the AKS cluster. There is a thing called Dev Spaces but it does not work in UK South:

image

I was contemplating the necessity of building a Helm chart when I tried a thing called Deployment Center (Preview) in the Azure portal.

Click Add Project and it builds a pipeline in Azure DevOps for you.

image

It worked but the pipeline failed when building the container.

COPY failed: stat /var/lib/docker/tmp/docker-builder088029891/AKS-Example/AKS-Example.csproj: no such file or directory

I spent some time puzzling over this error. You can view the exact logs of the build failure, and I worked out that it was executing these Dockerfile steps:

COPY ["AKS-Example/AKS-Example.csproj", "AKS-Example/"]
RUN dotnet restore "AKS-Example/AKS-Example.csproj"
COPY . .

This is failing because the code in my repository is not nested like that. I eventually fixed it by amending the lines to:

COPY ["AKS-Example.csproj", "AKS-Example/"]
RUN dotnet restore "AKS-Example/AKS-Example.csproj"
COPY . AKS-Example/

Now the pipeline completed and the container was deployed. I had to look at the Load Balancer Azure had generated for me to find the public IP address, but it worked.

image

Now the Dockerfile has a different path for local development than when deployed, which is annoying. I found I could fix this by changing a step in the Deployment Center wizard:

image

Where it says /AKS-Example in Docker build context I replaced it with /. Now the build worked with the original Dockerfile.

I also noticed that the Deployment Center (Preview) used a sample YAML template which is linked directly from GitHub and refers, confusingly, to deploying sampleapp. It worked but felt like a bit of a crude solution.

At this point I realised that I was not really using the latest and greatest, which is the pipeline wizard in Azure DevOps. So I deleted everything and tried that.

image

This was great, but I could not see an equivalent setting for the Docker build context. And indeed, the new build failed with the same COPY failed error I got originally. Luckily I knew the workaround and was up and running in no time.

This different approach also has a slightly different shape than the Deployment Center pipeline, using Environments in Azure DevOps.

Currently therefore I have two questions:

  • Why does Azure offer both the Deployment Center (Preview) and the multi-stage pipeline which seem to have overlapping functionality?
  • What is the correct way to modify the generated YAML to fix the path issue?

I suppose it would also be good if the path problem were picked up by the wizard in the first place.

Automatic transcription for journalists: still not viable despite Microsoft push for “Modern journalism”

I am just back from Microsoft’s developer-focused Build event, where some special sessions were laid on for press, on the subject of “Modern journalism.”

Led by Microsoft’s Ben Rudolph, Modern Journalism is described on his public LinkedIn profile as “a new program committed to helping the news industry fight fake news, tell stories that resonate with modern audiences, and succeed financially.”

The sessions appealed to me for one particular reason, which was the promise of automatic transcription. We were given a leaflet which says:

Tired of digging through hours of recordings to find that one quote? When you record a Teams interview, it’s saved to Microsoft Stream. Here you’ll get game-changing AI features: searchable transcript to jump to exact moments a key word or phrase was used.

Before the transcription thing though, we were taken on a tour of OneNote and Word with AI. The latest AI Editor in Word will tighten up your prose and find gaffes like non-inclusive language. There is a lack of clarity over the privacy implications (these features work by uploading everything you type to Microsoft) but perhaps it is useful. I make plenty of typographical errors and would welcome help, though I remain sceptical about the extent to which AI can deliver this.

On to transcription though. Just hit record during a voice or video meeting in Teams, Microsoft’s Office 365 collaboration tool, and it gets automatically transcribed.

Unfortunately I do not use Teams for interviews, though it is possible to use it even for in-person interviews by having a meeting of one and recording it. I am wary though. I normally use an external recording device. Many years ago my device failed one day (I forget whether it was battery or something else) and I used my Tablet PC to record an interview with the game inventor Peter Molyneux. My expectations were not particularly high – I just wanted something good enough that I could transcribe it later. Unfortunately the recording was so poor that you can only make out about one word in ten. This, combined with my written notes and memory, was just about sufficient to write up my piece; but it was not an experiment I felt inclined to repeat – though recording quality has improved since that early disaster.

Still, automatic transcription would be an amazing time-saver. Further, I respect what can be achieved. Nuance Dragon Dictate can give superb results after a bit of training. What about Teams?

Today I put the idea to the test. I took a recorded interview from Build, made with a dedicated device, and uploaded it to Microsoft Stream. I tried uploading an audio file directly, but it would not accept it. I then created a “video” by importing my audio into a one-slide PowerPoint presentation and exporting it as a video. The quality is fine, easily intelligible. Stream chewed on it for maybe 30 minutes, and then my transcript was ready. The subject was the Azure Kubernetes Service. Here is a snippet of what Stream came up with:

 image 

There is an unnecessary annoyance here, which is that you cannot easily select and copy the entire transcript. Notice that it is in short snippets. The best way to get the whole thing is to click the three dots under the video, choose Update Video Details, and then download the caption file.

image

Now you get something like this:

image

The format is, shall we say, sub-optimal for journalists, though it would not take too long to write a script that would extract the text.
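As a sketch of what such a script might look like: assuming the caption download is a WebVTT (.vtt) file, stripping the cue identifiers and timestamps and joining the snippets is only a few lines of Python. Adjust if your caption file differs.

# Quick-and-dirty WebVTT-to-text converter: keeps only the caption text,
# dropping the header, cue identifiers and timing lines.
import re, sys

TIMING = re.compile(r"\d{2}:\d{2}(:\d{2})?[.,]\d{3}\s*-->")

def vtt_to_text(path: str) -> str:
    snippets, in_cue = [], False
    with open(path, encoding="utf-8-sig") as f:
        for raw in f:
            line = raw.strip()
            if not line:
                in_cue = False          # a blank line ends the current cue
            elif TIMING.search(line):
                in_cue = True           # a timing line starts the cue text
            elif in_cue:
                snippets.append(line)   # everything else inside a cue is caption text
    return " ".join(snippets)

if __name__ == "__main__":
    print(vtt_to_text(sys.argv[1]))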

The bigger problem is the actual transcription. The section I have chosen is wrong in an interesting way. Here is part of what was said:

With the KEDA announcement today, what you’re seeing is us working with the ecosystem, in this case Red Hat, to solve some tricky problems around how to autoscale containers.

and here is the transcription:

with
the Kate Announcement. Today, which are seeing is also
actually working with the ecosystem in this case. We had
to sell some tricky problems around how to autoscale containers

Many of the words are correct, but the meaning is scrambled. Red Hat has been transcribed as “we had”, losing a critical part of the content.

It is not my intention to rubbish this technology. Automatic transcription is very challenging, especially with specialist content. It is not unreasonable for the system to transcribe KEDA as “Kate”: it is a brand new acronym (Kubernetes-based event-driven autoscaling).

Still, the question I ask myself is whether fixing up the auto transcription will save me any time versus the old-fashioned approach. I use a Word macro that plays back the interview with hot keys to pause and backtrack, editing as I go.

The answer is no. It will take me as long or longer to make sense of the automatic transcription, by comparing it to the original, than to type it from scratch.

This might not always be the case. Perhaps with a more AI-friendly subject the transcription will be good enough to save some time. It could also help to find where in the recording a particular quote appears. So it is not altogether useless.

Transcription is difficult, but there are some simpler matters which Microsoft could improve. Enabling upload of audio files rather than video, and providing a continuous transcript that can easily be copied, for example.

Having a team within Microsoft rooting for journalists strikes me as a good thing in that an internal team may have more influence over the products.

It may be more a matter of some bright spark thinking: hey, if we get more journalists using Office 365, that will help to promote the product. A strategy which will be more successful if effort goes into making the product fit better with the way journalists actually work.

image

The future of WPF for developers who need to support Windows 7

If you talk to Microsoft about what is new for Windows Presentation Foundation (WPF), a framework for Windows desktop applications, the answer tends to revolve around the Windows UI Library (WinUI), user interface controls for the Universal Windows Platform and therefore Windows 10, which you can use with WPF. That is no use if you need to compile applications that work on Windows 7. Is WPF on Windows 7 in effect frozen?

Not quite. First, note that WPF (and Windows Forms) was updated for .NET Framework 4.8, with High DPI enhancements and bug fixes. The complete list of fixes is here. So there have been recent updates.

Microsoft says though that .NET Framework 4.8 is the “last major version” of .NET Framework. This suggests that WPF on .NET Framework will not change much in future. WPF is open source; but the open source project targets .NET Core, the cross-platform version of .NET. In addition, there are a few features in WPF for .NET Framework that will never be ported, including XBAPs (XAML Browser Applications) – probably not something you care about.

The good news though is that .NET Core does run on Windows 7 (currently SP1 is required). You can see the progress of WPF on .NET Core here. It is not yet done and there are a few things that will never be supported. But when this is production-ready, it is likely that the open source WPF will run on Windows 7 and thus benefit from any updates and fixes made to the code.

From what I have learned here at Build, Microsoft’s developer conference, it is the .NET Core work that is currently top of mind for the WPF team. This means that WPF on Windows 7 does have a future – provided that .NET Core continues to support Windows 7. This proviso is important, since it is the decision of a different team. At some point there will be a version of .NET Core that does not support Windows 7, and that will be the moment when WPF cannot really progress on that operating system.

There may also be a special case. Presuming Edge Chromium runs on Windows 7, WPF may get a new Edge-based WebView control that works there too.

Summary: WPF (and Windows Forms) on .NET Framework is not going to change much in future. If you can transition to using these frameworks on .NET Core though, there is more hope of improvements, though there is no magic that will make Windows 10 features available on Windows 7.
