Disabling automatic update restarts in Windows Server 2016

Windows Server 2016 is in effect the Windows 10 version of the server OS. If you look in Settings, it seems to have the same attitude to updates; in other words, you get them automatically whether you like it or not. Currently my server is even offering me the Windows 10 Creators Update:

[Screenshot: Windows Update settings offering the Windows 10 Creators Update]

However, I prefer to have servers just download updates and let me decide when to install them. There can be good reasons for this. For example, I run Exchange Server on a machine that is not really up to spec, and the Exchange services have to be manually started every time it reboots. Well, there are ways round this, but it makes the point.

It turns out that you can after all set Windows Server 2016 to download-only. Just run sconfig from the command line and choose option 5:

[Screenshot: the sconfig menu, with option 5 for Windows Update settings]

The sconfig menu will be familiar if you have worked with Server Core or other variants of Windows Server without a GUI.
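If you would rather script the change than pick through the menu, the same download-only behaviour can be set in the registry. The following is a minimal sketch, assuming the standard Windows Update group policy values (AUOptions = 3 meaning "download and notify for install"); I have not checked exactly which key sconfig itself writes, so run sconfig afterwards to confirm the setting took effect:

```python
import winreg

# Group policy values that control Windows Update behaviour.
# NOTE: this path is an assumption on my part; sconfig may write a
# different key, so verify with sconfig option 5 afterwards.
KEY_PATH = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU"

# Requires an elevated (administrator) prompt.
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    # 3 = automatically download updates and notify before installing
    winreg.SetValueEx(key, "AUOptions", 0, winreg.REG_DWORD, 3)
    # Make sure automatic updating itself stays enabled
    winreg.SetValueEx(key, "NoAutoUpdate", 0, winreg.REG_DWORD, 0)
```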

Incidentally, I tried to install Exchange 2016 on Server 2016 without a GUI but it appears not to be supported. A shame.

Returning to the subject of updates, Brendan Power at Microsoft popped up on Reddit to say that this is a bug in the settings:

The "Available updates will be downloaded…" text in the UI is a bug that doesn’t represent the actual automatic update settings.

To verify the actual server settings, you can open the command prompt and run sconfig.cmd; in the menu, you should see option 5 set to Manual.

A bug? I am not sure. If so, it seems an odd and obvious one. I think Microsoft is keen to have us update automatically. That said, Windows Server 2016 is meant to follow the Long Term Servicing Branch (LTSB) model rather than the “Windows as a service” approach in Windows 10, unless you run Nano Server, according to this post. So compulsory updates to retain a supported configuration do not apply here.

Of course you should patch your Windows Server installations in a timely manner, however you choose to do it.

How to get a Bitlocker recovery key on Android

Scenario: you are out and about with your laptop and phone. Laptop protected with Bitlocker encryption. You start up your laptop and it decides for no obvious reason to demand your Bitlocker key before booting up.

If your laptop is domain-joined, this key is normally stored in Active Directory. If it is not domain-joined, it is normally stored in the OneDrive linked to your Microsoft account. These are options offered when Bitlocker encryption is applied.

This laptop is not domain-joined and the key is in OneDrive. So I pick up my phone and find the link, which is here.

Android helpfully opened this link in the OneDrive app, but at the app’s home page rather than the location of the keys. I had no idea how to find the recovery keys from there.

The solution I found was to copy the link and paste it directly into the browser. You might need to get a code sent to your phone if you have 2-factor authentication. I found I needed to check the box for “I log on frequently with this device” before it worked.

Then I could see the key in the Android web browser. Phew.

QCon London 2017: IoT insecurity, serverless computing, predicting technical debt, and why .NET Core depends on a 36,000 line C++ file

I’m at the QCon event in London, a multi-vendor conference aimed primarily at enterprise developers and architects.

[Image: Adam Tornhill speaks at QCon London 2017]

A few notes on day one. Alasdair Allan gave a keynote on security and the internet of things; it was an entertaining and disturbing résumé of all that is wrong with the mad rush to connect everything to the internet, though it was short on answers. Our culture has to change so that organisations such as hotels, toy manufacturers, appliance vendors and even makers of medical equipment take security seriously, but it is not clear how this will come about unless so many bad things happen that customers start to insist on it.

Michael Feathers spoke on strategic code deletion, part of a track on “Dark code: the legacy/tech debt dilemma.” This was an excellent session; code is added to projects more often than it is removed, and lack of hygiene in this regard carries risks to security, reliability and performance. But discovering which code is safe to remove is not always trivial, and Feathers explored some of the nuances and suggested some techniques.

Steve Faulkner gave a session on serverless JavaScript, or more specifically, using Amazon Web Services (AWS) Lambda and API Gateway. Faulkner said that the API Gateway was the piece that made Lambda viable for them; he is Director of Platform Engineering at Bustle, a busy content site based in the USA. In a nutshell, moving from EC2 VMs to Lambda has yielded both financial savings and easier management. The only downside is performance; each call to a Lambda function takes a minimum of 100ms whereas the same function on a VM might take 20ms. In the end it is not critical as performance remains satisfactory.

Faulkner said that AWS is ahead of its competitors (Microsoft, Google and IBM were mentioned) but when pressed said that both Microsoft and Google offered strong alternatives. Microsoft’s Azure Functions are spoilt by the need to specify a maximum scale, rather than scaling automatically, but its routing solution is in some ways ahead of AWS, he said. Google’s Functions will be great when out of beta.
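Bustle’s code is JavaScript, but to illustrate the shape of the Lambda plus API Gateway approach Faulkner described, here is a minimal sketch of a handler using the API Gateway proxy integration, written in Python for brevity; the function and parameter names are my own, not Bustle’s:

```python
import json

def handler(event, context):
    """Minimal Lambda handler behind an API Gateway proxy integration.

    API Gateway delivers the HTTP request as `event` and expects a dict
    containing statusCode, headers and body in return.
    """
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": "Hello, " + name}),
    }
```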

Adam Tornhill spoke on “A Crystal Ball to Prioritise Technical Debt”, another session in the dark code track. This was my favourite of the day. Tornhill presented a relatively simple way to discover what code you should refactor now in order to avoid future issues. His method is based on looking for files with many lines of code (a way of measuring complexity) and many commits (suggesting high importance and activity): the “hotspots” in your projects. For more detail and some utilities see Tornhill’s blog.
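Tornhill has his own tooling for this (see his blog), but the basic idea is easy to approximate. Here is a rough sketch of my own, not his code, which ranks the files in a Git repository by commit count multiplied by lines of code:

```python
import subprocess
from collections import Counter
from pathlib import Path

# Rank files by commits x lines of code, a crude approximation of the
# hotspot analysis described above. Run from the root of a Git working copy.
log = subprocess.run(
    ["git", "log", "--name-only", "--pretty=format:"],
    capture_output=True, text=True, check=True,
).stdout

# Each non-blank line of the log is a file touched by some commit.
commits_per_file = Counter(line for line in log.splitlines() if line.strip())

def line_count(path):
    try:
        with open(path, errors="ignore") as f:
            return sum(1 for _ in f)
    except OSError:
        return 0

hotspots = sorted(
    ((name, n, line_count(name))
     for name, n in commits_per_file.items() if Path(name).is_file()),
    key=lambda item: item[1] * item[2],
    reverse=True,
)

for name, commits, loc in hotspots[:10]:
    print("%s: %d commits, %d lines" % (name, commits, loc))
```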

Why do we end up with bad or risky code in our software? Tornhill said that developers often mistake organisational problems for technical problems and try unsuccessfully to fix them with tools.

He also mentioned an example of high-risk code: the file gc.cpp, which performs garbage collection in .NET Core, the next generation of Microsoft’s .NET Framework. This file is over 36,000 lines long and should be refactored. There is a discussion on the subject here, and it exactly bears out Tornhill’s point. Back in March 2015 a developer proposed refactoring the file; Microsoft’s Karel Zikmund defended the status quo:

Why it is this way? … Partly historical reasons (it is this way since the start). Partly because devs working on it didn’t feel the urge to refactor it. Partly because splitting of gc.cpp is non-trivial and risky and because it does not bring too big value (ramp up in the code base can be gained also in the combination of reading BOTR and debugging the code). Why it is staying this way? … Cost/benefit/risk ratio is IMO not in favor of a change here.

Few additional thoughts:
Am I happy that there is only 1 large file? No, but it doesn’t hurt me much either.
Do I see the disadvantages of large file? Yes, but I don’t think they are huge. More like minor annoyances with easy workarounds.
And to turn it around: Do you see the risk of any changes here? Do you see the cost of extra careful code reviews to mitigate the risk?

Strictly technically, we truly believe this is a formatting change. If it was simple to split it up and if it would be low risk and if it would be very easy to review, it might be worth the ‘minor’ improvements mentioned above … but I don’t see that combo happening (not on a noticeable scale in gc.cpp).
On a personal note: I also trust CLR team that if all these three things were true, the refactoring would have happened long time ago.

Note that some of this code goes back beyond .NET Core to the .NET Framework, the “historical reasons” that Zikmund mentions. We can see that the factors preventing change are as much organisational as technical.

Finally I attended a session on Microsoft’s Cognitive Services. Note this was in the “Sponsored solution track”. Microsoft also has a stand here focused on its Cognitive Services.

There is not much Microsoft platform content at QCon; the company seems under-represented, though many of the sessions are applicable to developers on any platform. I am not sure of all the reasons for this; there used to be an Advanced .NET track at QCon. It does reflect some overall development trends as well as the history and evolution of QCon itself. That said, there is a session on SQL Server on Linux, so the company is not completely invisible here.

As for the session, it was a reasonable overview of Microsoft’s expanding Cognitive Services APIs, which cover things like image recognition, speech recognition and more. I would have liked more depth and would have preferred to hear from a practitioner, in other words, “we built an application on Cognitive Services and this is what we learned.” I am not altogether clear why the company is pushing this so hard, except that it is a driver for developers to use Azure. I asked how developers should deal with the problem of uncertainty*, in other words, that Cognitive Services does not deliver absolute results but rather draws conclusions with a confidence score – e.g. it might be pretty sure that an image contains a human face, fairly sure that it is male, and somewhat confident that the age of the person is mid-forties. When the speaker demoed speech recognition it went pretty well, except that “Start” was transcribed as “Stop.” This stuff is difficult.
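To make the point about probabilistic results concrete, here is a sketch of calling the Computer Vision analyze endpoint and keeping only the tags above a confidence threshold. The region, key and threshold are placeholders of my own; check the current API reference before relying on the details:

```python
import requests

# Placeholder key and region: substitute your own Azure values.
SUBSCRIPTION_KEY = "<your-cognitive-services-key>"
ENDPOINT = "https://westeurope.api.cognitive.microsoft.com/vision/v1.0/analyze"

def confident_tags(image_url, min_confidence=0.7):
    """Return only the tags the service is reasonably confident about."""
    resp = requests.post(
        ENDPOINT,
        params={"visualFeatures": "Tags"},
        headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
        json={"url": image_url},
    )
    resp.raise_for_status()
    # Every tag comes with a confidence score rather than a yes/no answer;
    # the application decides what threshold is acceptable for its purpose.
    return [(t["name"], t["confidence"])
            for t in resp.json().get("tags", [])
            if t["confidence"] >= min_confidence]

print(confident_tags("https://example.com/photo.jpg"))
```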

Looking forward now to Day Two: Containers, Machine Learning, and more.

*More concisely expressed as “Systems are moving from the deterministic to the probabilistic” by Stephen Whitworth, who is now speaking on Machine Learning.