Strong financial results from Microsoft as it aims for breadth of services

Microsoft reported a big quarter (in terms of revenue) for the three months ending December 31st, with revenue of $28,918 million.

What’s notable? Mainly the big jump in Microsoft’s recent success stories: year on year Office 365 up by 41%, Azure up by 98%, Dynamics 365 up by 67%.

Windows is flat/weak as you would expect, and Surface hardware is standing still. Xbox grew a bit following the launch of Xbox One X.

LinkedIn is growing: revenue of $1.3 billion and “sessions growth of over 20%” in the quarter. In the earnings webcast, Microsoft CFO Amy Hood said that the LinkedIn acquisition has both performed better and come to seem more strategic than it did at the time of the deal.

Hood also made reference to the company’s ability to up-sell cloud users to higher-margin services. “Office 365 commercial revenue increased 41 percent from installed base growth across all customer segments, and ARPU [Average Revenue per User] expansion from continued customer migration to higher value offers in the E3 and E5 workloads.”

This point is key, and is the answer (from the provider’s point of view) to the lower margins implicit in moving from software to services. When Microsoft sells a licence for you to use Windows or Office, the margin is huge because reproducing the software, or providing it for download, costs almost nothing; whereas with a subscription there is a significant cost to providing the service. However, the subscription has advantages which offset this, in particular the continuing interaction with the customer: it generates data which both the provider and the customer can mine (subject to appropriate privacy controls), and it gives the provider the opportunity to extend the relationship into new or upgraded services.

CEO Satya Nadella fielded a good question about Microsoft losing out to Sony in gaming and to Alexa and Google Home in voice devices. On gaming, Nadella referred to the PC alongside Xbox as a strategic asset. “PC gaming is a growth market,” he said, and pointed to software such as Minecraft, now on mobile devices, as giving the company broad reach. He also remarked on Azure as a gaming back end.

As for Cortana in the home (or its absence from it), Nadella said that the focus is on the server-side cognitive services. He also talked about voice input and control of Office 365. The key point though was that Microsoft wants to work both with its own and other voice assistant devices, so it can win on services even when competitor devices are in use. “One-turn dialogs on one speaker in one home, that’s just not our vision,” he said.

Nadella made another key point in the webcast, in answer to a question about how Azure Stack (a packaged version of Azure for installation on-premises) will impact Azure. “Computing is becoming more distributed, not less distributed,” he said. IoT and sensors play a large part in this. Everything goes to the cloud but computing on the edge (the new buzzword for local processing) is important for efficiency.

It is easy to see ways in which Microsoft could stumble. The PC will decline as the number of users who need a desktop or laptop computer diminishes. Microsoft’s failure in mobile could prove costly as competitors use synergy with their own applications and cloud services to steer customers away. There are opportunities such as home automation and payments which seem closed to the company now.

Then again, strong results such as these show how the company can succeed by continuing to migrate its business users to cloud services. It remains deeply embedded in business computing.

Here is my chart summarising Microsoft’s performance:   

Quarter ending December 31st 2017 vs quarter ending December 31st 2016, $millions

Segment | Revenue | Change | Operating income | Change
Productivity and Business Processes | 8953 | +1774 | 3337 | +284
Intelligent Cloud | 7795 | +1037 | 2832 | +541
More Personal Computing | 12170 | +281 | 2510 | -51

The segments break down as:

Productivity and Business Processes: Office, Office 365, Dynamics 365 and on-premises Dynamics, LinkedIn

Intelligent Cloud: Server products, Azure cloud services

More Personal Computing: Consumer including Windows, Xbox; Bing search; Surface hardware

Which .NET framework for Windows: UWP, WPF or Windows Forms?

Yes, mobile is the future of client applications, cross-platform is cool, web applications are amazing; but out there in the real world, there are still a ton of people who work all day with a Windows PC, and businesses that want PC applications in order to get their work done.

So when a business comes to you and says, we want a new Windows application to do this or that, and presuming they do not care about mobile or Macs or access over the internet but just want something that runs on their internal network, what framework do you choose?

Let us even assume that they all run Windows 10 so that UWP (Universal Windows Platform) is a realistic option.

If you want to code in .NET (which is a great choice for a Windows-only application, and with the possibility of migrating code to cross-platform via Xamarin’s compiler later), then you have three obvious choices:

Windows Forms

This is the framework for Windows desktop applications that was introduced at the same time as .NET itself, back in 2002. Of course it has been revised many times since. There was a big update in 2006 with .NET 2.0. That said, Microsoft intended it to be replaced by Windows Presentation Foundation (WPF, see below), so it has not been a focus of attention. In 2014, High DPI support was improved, with .NET 4.5.2, reflecting the fact that this ancient framework is still widely used.

Windows Forms is a nice wrapper around the Windows API, and easy to use in that it essentially uses X, Y layout. In other words, you can think of your form as a grid of pixels, with the position of each control determined at design time by its coordinates and size. This is great if you are designing and running on the same PC, but not so good when you deploy to other PCs with different display settings. It does kind-of scale if you follow certain rules, but successful scaling in a Windows Forms application is often difficult to achieve, so users may suffer chopped-off controls and text, or just ugly screens. Read this carefully if you use Windows Forms. And then read about high DPI support, which was improved again in .NET Framework 4.7.
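
To make the X, Y layout point concrete, here is a minimal hand-written sketch (the form and control names are invented for illustration) that positions a control at fixed pixel coordinates, which is essentially what the designer generates for you:

using System;
using System.Drawing;
using System.Windows.Forms;

public class InvoiceForm : Form
{
    public InvoiceForm()
    {
        Text = "Invoices";
        ClientSize = new Size(400, 200);

        var okButton = new Button();
        okButton.Text = "OK";
        // Position and size are fixed in pixels at design time
        okButton.Location = new Point(300, 160);
        okButton.Size = new Size(80, 28);
        // Anchoring keeps the button near the bottom-right corner when the
        // form is resized, but it does not solve DPI scaling on its own
        okButton.Anchor = AnchorStyles.Bottom | AnchorStyles.Right;
        Controls.Add(okButton);
    }

    [STAThread]
    public static void Main()
    {
        Application.Run(new InvoiceForm());
    }
}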

If you are writing a database application, you can generate datasets by drag and drop from the Server Explorer in Visual Studio and bind them to controls. I am not a fan of this database framework, which quickly gets convoluted, but you do not have to use it. However the ability to bind list and grid controls to any kind of .NET collection is fantastically useful.
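
As a rough sketch of collection binding (types and data invented for illustration), a DataGridView will generate its columns from any list you hand it:

using System.Collections.Generic;
using System.ComponentModel;
using System.Windows.Forms;

public class Customer
{
    public string Name { get; set; }
    public string City { get; set; }
}

public class CustomerForm : Form
{
    public CustomerForm()
    {
        var grid = new DataGridView { Dock = DockStyle.Fill, AutoGenerateColumns = true };
        Controls.Add(grid);

        // Any IList will display; a BindingList also tells the grid about adds and removes
        grid.DataSource = new BindingList<Customer>(new List<Customer>
        {
            new Customer { Name = "Contoso", City = "London" },
            new Customer { Name = "Fabrikam", City = "Leeds" }
        });
    }

    [System.STAThread]
    public static void Main()
    {
        Application.Run(new CustomerForm());
    }
}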

Why is Windows Forms still in use? It is partly legacy and the fact that it is easier to maintain and enhance an existing application than to start again. It is also because, scaling issues aside, Windows Forms is reliable, well supported by both built-in and third-party controls, and easy to learn.

Windows Presentation Foundation

This was Microsoft’s second go at a GUI framework for .NET and in many respects a great improvement. It was introduced with .NET Framework 3.0 in 2006, part of the Vista wave of technology. Unlike Windows Forms, it is based on the DirectX graphics API, so great for multimedia and special effects. Scaling is built-in and based on layout managers. The underlying presentation language is based on XAML, an XML language. As with Windows Forms, there is deep support for binding data to controls.

Why would you not always use WPF rather than Windows Forms? The main issue is that the time you save on figuring out scaling is more than consumed by the time you spend on design. WPF is a designer-centric framework. It will repay your efforts, but if you just want to slap a couple of grids and a few buttons on a form to get a working business application, Windows Forms remains tempting.

Universal Windows Platform

Both Windows Forms and WPF are old, and Microsoft is pointing developers towards its Universal Windows Platform (UWP) instead. UWP is an evolution of the application platform introduced with Windows 8 in 2012. If WPF was all about scaling and multimedia, the Windows 8 modern app platform was about touch support and Store-based deployment. The application model was also service-based, the idea being that your app consumes services published over the internet. Until the Windows 10 Fall Creators Update, you could not use the .NET SqlClient to connect directly to a SQL Server database (you can now). The app platform became UWP with the launch of Windows 10 in 2015. UWP can use XAML for layout design, but it is not compatible with WPF.
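
As an aside, the direct database access now possible is just standard ADO.NET; a minimal sketch (connection string and table are placeholders) of the kind of code that, since the Fall Creators Update, can run inside a UWP app as well as on the desktop:

using System;
using System.Data.SqlClient;
using System.Threading.Tasks;

static class CustomerQueries
{
    // Placeholder connection string; a real app would keep this in configuration
    const string ConnectionString =
        "Server=myserver;Database=Sales;User Id=appuser;Password=example";

    public static async Task<int> CountCustomersAsync()
    {
        using (var conn = new SqlConnection(ConnectionString))
        {
            await conn.OpenAsync();
            using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Customers", conn))
            {
                // ExecuteScalarAsync returns object; COUNT(*) comes back as an int
                return (int)await cmd.ExecuteScalarAsync();
            }
        }
    }
}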

Personally I have mixed feelings about UWP. Unfortunately it has suffered from Microsoft’s ever-changing development strategy. The Windows 8 app platform made sense to me as a way of bringing Windows into the tablet era and enabling applications that were more secure and more easily deployed, even if it tended to result in applications that were blocky and ugly. Microsoft then changed its mind about full-screen touch applications and came up with the UWP for Windows 10, where applications again run in a window, but with a new selling point: you could run your application on Windows Phone as well as desktop. Then the company canned Windows Phone, before UWP had properly launched, in effect deleting the “Universal” part of the platform.

UWP still offers Store delivery and isolation from other applications, which is better for security and stability. However there are a few things against it. First, it requires Windows 10. Second, like WPF it is a designer-centric platform and not so good for running up quick business applications. Third, UWP apps behave differently from standard desktop applications, sometimes not in a good way.

I was using Microsoft’s bundled Photos application recently. I work a lot with images so this often pops up, as the default image viewer on Windows 10. I was not stressing it, but it crashed which, as is typical for a UWP app, means it just disappeared without any message or warning.

UWP will be three years old this summer, but I am not convinced that the platform is quite there yet. I find it hard to think of UWP apps that I love. The apps I know best are the built-in ones, Mail, Photos, Groove Music, Calculator, and I do not love any of them. Paint 3D is amazing but not my thing.

At the same time I do see the merits of UWP versus traditional Windows application deployment. The existence of the Desktop Bridge (formerly Project Centennial) means you can get many of those benefits while still using WPF or Windows Forms.

Closing thoughts

Perhaps something like Power Apps will render this discussion irrelevant before long. There are also other options for the desktop, such as Xamarin.Forms if you still want to use .NET, or Electron for using web technologies for desktop applications.

Still, while it may seem surprising, even in 2018 I can think of reasons why you might use any of the above frameworks, even Windows Forms, for a business app targeting Windows.

Spectre and Meltdown woes continue as Intel confesses to broken updates

Intel’s Navin Shenoy says the company has asked PC vendors to stop shipping its microcode updates that fix the speculative execution vulnerabilities identified by Google’s Project Zero team:

We recommend that OEMs, cloud service providers, system manufacturers, software vendors and end users stop deployment of current versions, as they may introduce higher than expected reboots and other unpredictable system behavior.

This is a blow to industry efforts to fix this vulnerability, a process involving BIOS updates (to install the microcode) as well as operating system patches.

Intel says it has an “early version of the updated solution”. Given the length of time it takes for PC manufacturers to package and distribute BIOS updates for the many thousands of models affected, it looks like the moment at which the majority of active systems will be patched is now far in the future.

Vendors have not yet completed the rollout of the initial patch, which they are now being asked to withdraw.

The detailed microcode guidance is here. Intel also has a workaround which gives some protection while also preserving system stability:

For those concerned about system stability while we finalize the updated solutions, we are also working with our OEM partners on the option to utilize a previous version of microcode that does not display these issues, but removes the Variant 2 (Spectre) mitigations. This would be delivered via a BIOS update, and would not impact mitigations for Variant 1 (Spectre) and Variant 3 (Meltdown).

I am not sure who out there is not concerned about system stability. That said, public cloud vendors would rather accept almost anything than risk code running in one VM getting unauthorised access to the host or to other VMs.

Right now it feels as if most of the world’s computing devices, from server to smartphone, are simply insecure. Though it should be noted that the bad guys have to get their code to run: trivial if you just need to run up a VM on a public cloud, more challenging if it is a server behind a firewall.

Office 2016 now “built out of one codebase for all platforms” says Microsoft engineer

Microsoft’s Erik Schweibert, principal engineer in the Apple Productivity Experiences group, says that with the release of Office 2016 version 16 for the Mac, the productivity suite is now “for the first time in 20 years, built out of one codebase for all platforms (Windows, Mac, iOS, Android).”

This is not the first time I have heard of substantial code-sharing between the various versions of Office, but this claim goes beyond that. Of course there is still platform-specific code, and it is worth reading the Twitter thread for more background.

“The shared code is all C++. Each platform has native code interfacing with the OS (ie, Objective C for Mac and iOS, Java for Android, C/C++ for Windows, etc),” says Schweibert.

Does this mean that there is exact feature parity? No. The mobile versions remain cut-down, and some features remain platform-specific. “We’re not trying to provide uniform “lowest common denominator” support across all platforms so there will always be disparate feature gaps,” he says.

Even the online version of Office shares much of the code. “Web components share some code (backend server is shared C++ compiled code, front end is HTML and script)”, Schweibert says.

There is more news on what is new in Office for the Mac here. The big feature is real-time collaborative editing in Word, Excel and PowerPoint. 

What about 20 years ago? Schweibert is thinking about Word 6 for the Mac in 1994, a terrible release about which you can read more here:

“Shipping a crappy product is a lot like beating your head against the wall.  It really does feel good when you ship a great product as a follow-up, and it really does motivate you to spend some time trying to figure out how not to ship a crappy product again.

Mac Word 6.0 was a crappy product.  And, we spent some time trying to figure out how not to do that again.  In the process, we learned a few things, not the least of which was the meaning of the term “Mac-like.”

Word 6.0 for the Mac was poor for all sorts of reasons, as explained by Rick Schaut in the post above. The performance was poor, and the look and feel was too much like the Windows version – because it was the Windows code, recompiled. “Dialog boxes had "OK" and "Cancel" exactly reversed compared to the way they were in virtually every other Mac application — because that was the convention under Windows,” says one comment.

This is not the case today. Thanks to its lack of a mobile platform, Microsoft has a strong incentive to create excellent cross-platform applications.

There is more about the new cross-platform engineering effort in the video below.

The mysterious microcode: Intel is issuing updates for all its CPUs from the last five years but you might not benefit

The Spectre and Meltdown security holes, found in Intel and to a lesser extent AMD CPUs, form not only one of the most serious, but also one of the most confusing tech issues that I can recall.

We are all used to the idea of patching to fix security holes, but normally that is all you need to do. Run Windows Update, or on Linux apt-get update, apt-get upgrade, and you are done.

This one is not like that. The reason is that you need to update the firmware; that is, the low-level software that drives the CPU. Intel calls this microcode.

So when Intel CEO Brian Krzanich says:

By Jan. 15, we will have issued updates for at least 90 percent of Intel CPUs introduced in the past five years, with updates for the remainder of these CPUs available by the end of January. We will then focus on issuing updates for older products as prioritized by our customers.

what he means is that Intel has issued new microcode for those CPUs, to mitigate against the newly discovered security holes, related to speculative execution (CPUs getting a performance gain by making calculations ahead of time and throwing them away if you don’t use them).

Intel’s customer are not you and I, the users, but rather the companies who purchase CPUs, which in most cases are the big PC manufacturers together with numerous device manufacturers. My Synology NAS has an Intel CPU, for example.

So if you have a PC or server from Vendor A, then when Intel has new microcode it is available to Vendor A. How it gets to your PC or server which you bought from Vendor A is another matter.

There are several ways this can happen. One is that the manufacturer can issue a BIOS update. This is the normal approach, but it does mean that you have to wait for that update, find it and apply it. Unlike Windows patches, BIOS updates do not come down via Windows Update, but have to be applied via another route, normally a utility supplied by the manufacturer. There are thousands of different PC models; there is no guarantee that any specific model will receive an updated BIOS, and no guarantee that all users will find and apply it even if it does arrive. Your chances are better if your PC is from a big name rather than a brand nobody has heard of that you bought from a supermarket or on eBay.

Are there other ways to apply the microcode? Yes. If you are technical you might be able to hack the BIOS, but leaving that aside, some operating systems can apply new microcode on boot. Therefore VMware was able to state:

The ESXi patches for this mitigation will include all available microcode patches at the time of release and the appropriate one will be applied automatically if the system firmware has not already done so.

Linux can do this as well. Such updates are volatile; they have to be re-applied on every boot. But there is little harm in that.

What about Windows? Unfortunately there is no supported way to do this. However there is an experimental VMware utility that will do it:

This Fling is a Windows driver that can be used to update the microcode on a computer system’s central processor(s) (“CPU”). This type of update is most commonly performed by a system’s firmware (“BIOS”). However, if a newer BIOS cannot be obtained from a system vendor then this driver can be a potential substitute.

Check the comments – interest in this utility has jumped following the publicity around Spectre and Meltdown. If working exploits start circulating you can expect that interest to spike further.

This is a techie and unsupported solution though and comes with a health warning. Most users will never find it or use it.

That said, there is no inherent reason why Microsoft could not come up with a similar solution for PCs and servers for which no BIOS update is available, and even deliver it through Windows Update. If users do start to suffer widespread security problems which require Intel’s new microcode, it would not surprise me if something appears. If it does not, large numbers of PCs will remain unprotected.

Windows Mixed Reality: Acer headset review and Microsoft’s (lack of) content problem

Acer kindly loaned me a Windows Mixed Reality headset to review, which I have been trying over the holiday period.

First, an aside. I had a couple of sessions with Windows Mixed Reality before doing this review. One was at IFA in Berlin at the end of August 2017, where the hardware and especially the software was described as late preview. The second was at the Future Decoded event in London, early November. On both occasions, I was guided through a session either by the hardware vendor or by Microsoft. Those sessions were useful for getting a hands-on experience; but an extended review at home has given me a different understanding of the strengths and weaknesses of the product. Readers beware: those rushed “reviews” based on hands-on sessions at vendor events are poor guides to what a product is really like.

A second observation: I wandered into a few computer game shops before Christmas and Windows Mixed Reality hardware was nowhere to be seen. That is partly because PC gaming has hardly any bricks and mortar presence now. Retailers focus on console gaming, where there is still some money to be made before all the software becomes download-only. PC game sales are now mainly Steam-powered, with a little bit of competition from other download stores including GOG and Microsoft’s Windows Store. That Steam and download dominance has many implications, one of which is invisibility on the High Street.

What about those people (and there must be some) who did unwrap a Windows Mixed Reality headset on Christmas morning? Well, unless they knew exactly what they were getting and enjoy being on the bleeding edge I’m guessing they will have been a little perplexed and disappointed. The problem is not the hardware, nor even Microsoft’s implementation of virtual reality. The problem is the lack of great games (or other virtual reality experiences).

This may improve, provided Microsoft sustains enough momentum to make Windows Mixed Reality worth supporting. The key here is the relationship with Steam. Microsoft cheerfully told the press that Steam VR is supported. The reality is that Steam VR support comes via preview software which you get via Steam and which states that it “is not complete and may or may not change further.” It will probably all be fine eventually, but that is not reassuring for early adopters.

My experience so far is that native Windows MR apps (from the Microsoft Store) work more smoothly, but the best content is on Steam VR. The current Steam preview does work, though with a few limitations (no haptic feedback) and other issues depending on how much effort the game developers have put into supporting Windows MR.

I tried Windows MR on a well-specified gaming PC: Core i7 with NVIDIA’s superb GTX 1080 GPU. Games in general run super smoothly on this hardware.

Getting started

A Windows Mixed Reality headset has a wired connection to a PC, broken out into an HDMI and a USB 3.0 connection. You need Windows 10 Fall Creators Update installed, and setup should be a matter of plugging in your headset: the hardware is detected and a setup wizard starts up, downloading additional software as required.

In my case it did not go well. Setup started OK but went into a spin, giving me a corrupt screen and never completing. The problem, it turned out, was that my GPU has only one HDMI port, which I was already using for the main display. I had the headset plugged into a DisplayPort socket via an adapter. I switched this around, so that the headset uses the real HDMI port, and the display uses the adapter. Everything then worked perfectly.

The controllers use Bluetooth. I was wary, because in my previous demos the controllers had been problematic, dropping their connection from time to time, but these work fine.

They are perhaps a bit bulky, thanks to their illuminated rings which are presumably a key part of the tracking system. They also chew batteries.

The Acer headsets are slightly cheaper than average, but I’ve enjoyed my time with this one. I wear glasses but the headset fits comfortably over them.

A big selling point of the Windows system is that no external tracking sensors are required. This is called inside-out tracking. It is a great feature and makes it easier just to plug in and go. That said, you have to choose between a stationary position, or free movement; and if you choose free movement, you have to set up a virtual boundary so that you do not walk into physical objects while immersed in a VR experience.

The boundary is an important feature but also illustrates an inherent issue with full VR immersion: you really are isolated from the real world. Motion sickness and disorientation can also be a problem, the reason being that the images your brain perceives do not match the physical movement your body feels.

Once set up, you are in Microsoft’s virtual house, which serves as a kind of customizable Start menu for your VR experiences.

The house is OK though it seems to me over-elaborate for its function, which is to launch games and apps.

I must state at this point that yes, a virtual reality experience is amazing and a new kind of computing. The ability to look all around is extraordinary when you first encounter it, and adds a level of realism which you cannot otherwise achieve. That said, there is some frustration when you discover that the virtual world is not really as extensive as it first appears, just as you get in an adventure game when you find that not all doors open and there are invisible barriers everywhere. I am pretty sure though that a must-have VR game will come along at some point and drive many new sales – though not necessarily for Windows Mixed Reality of course.

I looked for content in the Windows Store. It is slim pickings. There’s Minecraft, which is stunning in VR, until you realise that the controls do not work quite so well as they do in the conventional version. There is Space Pirate, an old-school arcade game which is a lot of fun. There is Arizona Sunshine, which is fine if you like shooting zombies.

I headed over to Steam. The way this works is that you install the Steam app, then launch Windows Mixed Reality, then launch a VR game from your Steam library. You can access the Windows Desktop from within the Windows MR world, though it is not much fun. Although the VR headset offers two 1440 x 1440 displays I found it impossible to keep everything in sharp focus all the time. This does not matter all that much in the context of a VR game or experience, but makes the desktop and desktop applications difficult to use.

I did find lots of goodies in the Steam VR store though. There is Google Earth VR, which is not marked as supporting Windows MR but works. There is also The Lab, which is a Steam VR demo that does a great job of showing what the platform can do, with several mini-games and other experiences – including a fab archery game called Longbow where you defend your castle from approaching hordes. You can even fire flaming arrows.

Asteroids! VR, a short, wordless VR film which is nice to watch once. It’s free though!

Mainstream VR?

Irrespective of who provides the hardware, VR has some issues. Even with inside-out tracking, a Windows Mixed Reality setup is somewhat bulky and makes the wearer look silly. The kit will become lighter, as well as integrating audio. HTC’s Vive Pro, just announced at CES, offers built-in headphones and has a wireless option, using Intel’s WiGig technology.

Even so, there are inherent issues with a fully immersive environment. You are vulnerable in various ways. Having people around wearing earbuds and staring at a screen is bad enough, but VR takes anti-social to another level.

The added expense of creating the content is another issue, though the right tools can do an amazing job of simplifying and accelerating the process.

It is worth noting that VR has been around for a long time. Check out the history here. Virtual Reality arcade machines in 1991. Sega VR Glasses in 1993. Why has this stuff taken so long to take off, and why does it remain in its early stages? It is partly about technology catching up to the point of real usability and affordability, but it is also an open question how much VR we want and need.

Why patching to protect against Spectre and Meltdown is challenging

The tech world has been buzzing with news of bugs (or design flaws, take your pick) in mainly Intel CPUs, going way back, which enable malware to access memory in the computer that should be inaccessible.

How do you protect against this risk? The industry has done a poor job in communicating what users (or even system admins) should do.

A key reason why this problem is so serious is that it risks a nightmare scenario for public cloud vendors, or any hosting company. This is where software running in a virtual machine is able to access memory, and potentially introduce malware, in either the host server or other virtual machines running on the same server. The nature of public cloud is that anyone can run up a virtual machine and do what they will, so protecting against this issue is essential. The biggest providers, including AWS, Microsoft and Google, appear to have moved quickly to protect their public cloud platforms. For example:

The majority of Azure infrastructure has already been updated to address this vulnerability. Some aspects of Azure are still being updated and require a reboot of customer VMs for the security update to take effect. Many of you have received notification in recent weeks of a planned maintenance on Azure and have already rebooted your VMs to apply the fix, and no further action by you is required.

With the public disclosure of the security vulnerability today, we are accelerating the planned maintenance timing and will begin automatically rebooting the remaining impacted VMs starting at 3:30pm PST on January 3, 2018. The self-service maintenance window that was available for some customers has now ended, in order to begin this accelerated update.

Note that this fix is at the hypervisor, host level. It does not patch your VMs on Azure. So do you also need to patch your VM? Yes, you should; and your client PCs as well. For example, KB4056890 (for Windows Server 2016 and Windows 10 1607), KB4056891 (for Windows 10 1703), or KB4056892 (for Windows 10 1709). This is where it gets complex though, for two main reasons:

1. The update will not be applied unless your antivirus vendor has set a special registry key. The reason is that the update may crash your computer if the antivirus software accesses memory in a certain way, which it may do. So you have to wait for your antivirus vendor to do this, or remove your third-party anti-virus and use the built-in Windows Defender (there is a sketch after this list showing one way to check whether the flag has been set).

2. The software patch is not complete protection. You also need to update your BIOS, if an update is available; it may not even be clear whether one is. For example, I am pretty sure that I found the right update for my HP PC, based on the following clues:

– The update was released on December 20 2017

– The description of the update is “Provides improved security”
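
Going back to the first point, the antivirus compatibility flag is a registry value under HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\QualityCompat, as widely documented when these patches shipped. Here is a quick C# sketch to check whether it is set on a machine; treat the exact key and value name as ones to verify against Microsoft’s current guidance:

using System;
using Microsoft.Win32;

class QualityCompatCheck
{
    static void Main()
    {
        // Key and value name as reported for the January 2018 updates;
        // antivirus products set the value to signal compatibility
        const string keyPath = @"SOFTWARE\Microsoft\Windows\CurrentVersion\QualityCompat";
        const string valueName = "cadca5fe-87d3-4b96-b7fb-a231484277cc";

        using (var key = Registry.LocalMachine.OpenSubKey(keyPath))
        {
            object value = key != null ? key.GetValue(valueName) : null;
            Console.WriteLine(value != null
                ? "Compatibility flag is set: Windows Update will offer the patch."
                : "Flag not set: the update will be withheld until your antivirus vendor sets it.");
        }
    }
}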

So now is the time, if you have not done so already, to go to the support sites for your servers and PCs, or your motherboard vendor if you assembled your own, see if there is a BIOS update, try to figure out if it addresses Spectre and Meltdown, and apply it.

If you cannot find an update, you are not fully protected.

It is not an easy process and realistically many PCs will never be updated, especially older ones.

What is most disappointing is the lack of clarity or alerts from vendors about the problem. I visited the HPE support site yesterday in the hope of finding up-to-date information on HP’s server patches, only to find a maze of twisty little link passages, all alike, none of which led to the information I sought. The only thing you can do is to trace the driver downloads for your server in the hope of finding a BIOS update.

Common sense suggests that PCs and laptops will be a bigger risk than private servers, since unlike public cloud vendors you do not allow anyone out there to run up VMs.

At this point it is hard to tell how big a problem this will be. Best practice though suggests updating all your PCs and servers immediately, as well as checking that your hosting company has done the same. In this particular case, achieving this is challenging.

PS kudos to BleepingComputer for this nice article and links; the kind of practical help that hard-pressed users and admins need.

There is also a great list of fixes and mitigations for various platforms here:

https://github.com/hannob/meltdownspectre-patches

PPS see also Microsoft’s guidance on patching servers here:

https://support.microsoft.com/en-us/help/4072698/windows-server-guidance-to-protect-against-the-speculative-execution

and PCs here:

https://support.microsoft.com/en-us/help/4073119/protect-against-speculative-execution-side-channel-vulnerabilities-in

There is a handy PowerShell module called SpeculationControl which you can install and run to check status. I was able to confirm that the HP BIOS update mentioned above is the right one. Just run PowerShell with admin rights and type:

Install-Module SpeculationControl

then type

Get-SpeculationControlSettings

Thanks to @teroalhonen on Twitter for the tip.

Let’s Encrypt: a quiet revolution

Any website that supports SSL (an HTTPS connection) requires a  digital certificate. Until relatively recently, obtaining a certificate meant one of two things. You could either generate your own, which works fine in terms of encrypting the traffic, but results in web browser warnings for anyone outside your organisation, because the issuing authority is not trusted. Or you could buy one from a certificate provider such as Symantec (Verisign), Comodo, Geotrust, Digicert or GoDaddy. These certificates vary in price from fairly cheap to very expensive, with the differences being opaque to many users.

Let’s Encrypt is a project of the Internet Security Research Group, a non-profit organisation founded in 2013 and sponsored by firms including Mozilla, Cisco and Google Chrome. Obtaining certificates from Let’s Encrypt is free, and they are trusted by all major web browsers.

Last month Let’s Encrypt announced coming support for wildcard certificates as well as giving some stats: 46 million active certificates, and plans to double that in 2018. The post also notes that the latest figures from Firefox telemetry indicate that over 65% of the web is now served using HTTPS.

Source: https://letsencrypt.org/stats/

Let’s Encrypt only started issuing certificates in January 2016 so its growth is spectacular.

The reason is simple. Let’s Encrypt is saving the IT industry a huge amount in both money and time. Money, because its certificates are free. Time, because it is all about automation, and once you have the right automated process in place, renewal is automatic.

I have heard it said that Let’s Encrypt certificates are not proper certificates. This is not the case; they are just as trustworthy as those from the other SSL providers, with the caveat that everything is automated. Some types of certificate, such as those used for code-signing, have additional verification performed by a human to ensure that they really are being requested by the organisation claimed. No such thing happens with the majority of SSL certificates, for which the process is entirely automated by all the providers and typically requires only that the requester can receive email at the domain for which the certificate is issued. Let’s Encrypt uses other techniques, such as proof that you control the DNS for the domain, or that you can write a file to the domain’s website. Certificates that require human intervention will likely never be free.

A Let’s Encrypt certificate is only valid for three months, whereas those from commercial providers last at least a year. Despite appearances, this is not a disadvantage. If you automate the process, it is not inconvenient, and a certificate with a shorter life is more secure as it has less time to be compromised.

The ascendance of Let’s Encrypt is probably regretted both by the commercial certificate providers and by IT companies who make a bit of money from selling and administering certificates.

Let’s Encrypt certificates are issued in plain-text PEM (Privacy Enhanced Mail) format. Does that mean you cannot use them in Windows, which typically uses .cer or .pfx certificates?  No, because it is easy to convert between formats. For example, you can use the openssl utility. Here is what I use on Linux to get a .pfx:

openssl pkcs12 -inkey privkey.pem -in fullchain.pem -export -out yourcert.pfx
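
Once converted, the .pfx imports into the Windows certificate store as usual, or loads directly from code; a minimal C# sketch, where the file name and password are placeholders for whatever you gave openssl:

using System;
using System.Security.Cryptography.X509Certificates;

class LoadCert
{
    static void Main()
    {
        // The password is the export password supplied to openssl (it may be empty)
        var cert = new X509Certificate2("yourcert.pfx", "export-password");
        Console.WriteLine("Subject: " + cert.Subject);
        Console.WriteLine("Expires: " + cert.NotAfter.ToShortDateString());
    }
}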

If you have a website hosted for you by a third-party, can you use Let’s Encrypt? Maybe, but only if the hosting company offers this as a service. They may not be in a hurry to do so, since there is a bit of profit in selling SSL certificates, but on the other hand, a far-sighted ISP might win some business by offering free SSL as part of the service.

Implications of Let’s Encrypt

Let’s Encrypt removes the cost barrier for securing a web site, subject to the caveats mentioned above. At the same time, Google is gradually stepping up warnings in the Chrome browser when you visit unencrypted sites:

Eventually, we plan to show the “Not secure” warning for all HTTP pages, even outside Incognito mode.

Google search is also apparently weighted in favour of encrypted sites, so anyone who cares about their web presence or traffic is or will be using SSL.

Is this a good thing? Given the trivia (or worse) that constitutes most of the web, why bother encrypting it, which is slower and takes more processing power (bad for the planet)? Note also that encrypting the traffic does nothing to protect you from malware, nor does it insulate web developers from security bugs such as SQL injection attacks – which is why I prefer to call SSL sites encrypted rather than secure.

The big benefit though is that it makes it much harder to snoop on web traffic. This is good for privacy, especially if you are browsing the web over public Wi-Fi in cafes, hotels or airports. It would be a mistake though to imagine that if you are browsing the public web using HTTPS that you are really private: the sites you visit are still getting your data, including Facebook, Google and various other advertisers who track your browsing.

In the end it is worth it, if only to counter the number of times passwords are sent over the internet in plain text. Unfortunately people remain willing to send passwords by insecure email, so there is still work to do.

New year, new web site

I took the opportunity of the Christmas break to move itwriting.com to a new server.

The old server has worked wonderfully for many years, but in that time a lot of cruft accumulated.

The old WordPress template was also out of date. Today it is necessary not only to have a site that works well on mobile, but also one that is served over SSL. I am taking advantage of Let’s Encrypt to give itwriting.com trusted SSL support.

On the old site I ran three blogs: itwriting.com, aimed at professionals; gadgets.itwriting.com, aimed at consumers; and taggedtalk.com, for occasional posts about music. Running three blogs is a hassle and I decided to combine them into one despite the differences in content. The idea is to use WordPress categories to make sense of this, but that too is work in progress.

The price of this migration is broken links. Content migrated from the old itwriting.com blog is mostly fine, though there are some images linked over http that will need to be fixed. Content from the other two blogs is more problematic and I have some work to do tidying up the images. There is also the old pre-WordPress blog, which is now offline. This was active from 2003 to 2006 and I am undecided about whether to reinstate it.

Apologies then for the disruption but I hope it will be worth it.

Honor 7x: a great value mid-range smartphone spoilt by unexciting design

Honor is Huawei’s youth/consumer smartphone brand and deserves its reputation for putting out smartphones with compelling features for their price. Just released in the UK is the Honor 7x, a mid-range phone whose most striking features are a 5.93" 2160×1080 (18:9) display and dual-lens camera.

I came to respect the Honor brand when I tried the Honor 8, a gorgeous translucent blue device which at the time seemed to provide all the best features of Huawei’s premium phone at a lower price. The Honor 8 is 18 months old now, but still on sale for around £70 more than a 7x (there is also an Honor 9 which I have not tried). The 5.2" 1920 x 1080 screen happens to be the perfect size for my hands.

What about the new 7x though?

The 7x feels solid and well-made, though unexciting in appearance. The smooth rear of the matt metal case is broken only by the fingerprint reader, the dual camera lenses and the flash. On the front, the phone speaker and front-facing camera sit at the top of the screen, with the media speaker, microphone, headset socket and micro USB port along the bottom edge. The larger than average screen does make for a phone that is less comfortable to hold than a smaller device, but that is the trade-off you make.

The Kirin 659 processor has 8 ARM Cortex-A53 cores, comprising 4 high-speed 2.38 GHz cores and 4 power-saving 1.7 GHz cores. The SoC (System on a Chip) also includes an ARM Mali-T830 graphics processing unit. This is a mid-range processor which is fine for everyday use but not a powerhouse. Benchmark performance is around 15% better than Samsung’s Exynos 7 Octa 7870, found in the Galaxy A3, for example.

PC Mark came up with a score of 4930, a little behind the older Honor 8 at 5799.

The screen resolution at 2160 x 1080 is impressive, though I found it a little dull on the default automatic brightness settings.

Music audio quality is great on headphones or high quality earbuds, but poor using the built-in loudspeaker – usually not important, but I have heard much better.

Where this phone shines is in photography. The dual lens is now well proven technology from Huawei/Honor and does make a difference, enabling better focusing and sharper images. If you enable the wide aperture in the camera, you can refocus pictures after the event, a magical feature.

If you swipe from the left in the camera app, you can select between a dozen or so modes, including Photo, Pro Photo, video, panorama, time-lapse, effects and more. Selecting Pro Photo enables controls for metering (determines how the camera calculates the exposure), ISO, shutter speed, exposure compensation (affects brightness) and focus mode.

If you swipe from the right, you can access settings including photo resolution (default is 4608 x 3456), storage location, GPS tagging, object tracking and more. There is no option for RAW images though.

Along the top of the camera screen are settings for flash, wide aperture, portrait mode, moving picture (records a short video when you take a picture), and front/rear camera enable.

There are also a couple of features aimed at selfies or group photos, where you want to be in the picture. If you enable audio control, the camera will take a picture when you say “Cheese”. If you enable gesture control (only works with the front camera), you can take a picture by raising your hand, triggering a countdown. I tried both features and they work.

How are the actual results though? Here is a snap taken with default settings on the 7x, though I’ve resized the image for this blog:

and here it is again on the Honor 8:

Personally I think the colours are a bit more natural on the Honor 8 but there is not much between them. I was also impressed with the detail when zoomed in. In the hands of an expert you could take excellent pictures with this, and those of us taking quick snaps will be happy too.

Likes, dislikes and conclusion

For 25% of the price of Apple’s latest iPhone, you get a solid and capable device with above average photographic capability and a high resolution display. I also like the fact that the fingerprint reader is on the rear, even though this is against the recent trend. This makes it easy to pick up and unlock the phone with one hand, with no need for face recognition.

Still, while I would be happy to recommend the phone, I do not love it. The design is plain and functional, rather out of keeping with Honor’s “for the brave” slogan. The lack of NFC is a negative, and it is a shame Honor has provided the old micro USB port instead of USB-C as on its premium models.

These are minor nitpicks though and I cannot fault it for value or essential features.

Specification

OS Android 7
Chipset 8-core Kirin 659 (4 x 2.38GHz + 4 x 1.7 GHz)
Battery 3340 mAh
Screen 2160 x 1080
Rear camera Dual lens 16MP + 2MP, F/0.95 – F/16 aperture
Front Camera 8MP
Connectivity 802.11 b/g/n wifi, Bluetooth 4.1, USB 2.0
Dimensions 156.5mm x 75.3mm x 7.6mm
Weight 165g
Memory 4GB RAM, 64GB storage, microSD up to 256GB
SIM slots Dual TD-LTE/FDD LTE/WCDMA/GSM SIM or SIM + microSD
Fingerprint reader Rear
Sensors Proximity, ambient light, compass, gravity
Audio 3.5mm headset jack
Materials Metal unibody design
Price £269.99
