Technical Writing

Welcome to IT Writing. This site is edited by Tim Anderson, a technical journalist. Here you will find comment, articles and reviews on a variety of subjects. A few words of guidance on navigating the site:

  • On the home page you will find posts relating to professional technology.
  • Consumer technology is covered in the Gadget Writing section on the top menu.
  • Occasional posts about music are found in the Music Writing section on the top menu.

Amazon offering Linux desktops as a service in WorkSpaces

Amazon Web Services now offers Linux desktops as part of its WorkSpaces desktop-as-a-service offering.

The distribution is called Amazon Linux 2 and includes the MATE desktop environment.

image

Most virtual desktops run Windows, because most of the applications people want to run from virtual desktops are Windows applications. A virtual desktop plugs the gap between what you can do on the device in front of you (whether a laptop, Chromebook, iPad or whatever) and what you can do in an office with a desktop PC.

It seems that Amazon have developers in mind to some extent. Evangelist Jeff Barr (from whom I have borrowed the screenshot above) notes:

The combination of Amazon Linux WorkSpaces and Amazon Linux 2 makes for a great development environment. You get all of the AWS SDKs and tools, plus developer favorites such as gcc, Mono, and Java. You can build and test applications in your Amazon Linux WorkSpace and then deploy them to Amazon Linux 2 running on-premises or in the cloud.

Still, there is nothing to stop any user running productivity applications on it; and it works out a bit cheaper than Windows thanks to the absence of Microsoft licensing costs. Ideal for frustrated Google Chromebook users who want access to a less locked-down OS.

Notes from the field: Windows Time Service interrupts email delivery

A business with Exchange Server noticed that email was not flowing. The internet connection was fine, and all the servers were up and running, including Exchange 2016. Email had been fine just a few hours earlier. What was wrong?

The answer, or the beginning of the answer, was in the Event Viewer on the Exchange Server. Event ID 1035, only a warning:

Inbound authentication failed with error UnexpectedExchangeAuthBlobCheckForClockSkew for Receive connector Default Mailbox Delivery

Hmm. A clock problem, right? It turned out that the PDC for the domain was five minutes fast. Kerberos tolerates only five minutes of clock skew by default, so this was enough to trigger authentication failures. Result: no email. We fixed the time, restarted Exchange, and everything worked.
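Incidentally, a quick way to measure one machine’s clock offset against another is w32tm’s stripchart option. Type:

w32tm /stripchart /computer:<target> /samples:5 /dataonly

at a command prompt, where <target> is the machine to compare against; each sample shows the offset between the two clocks.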

Why was the PDC running fast? The PDC was, apparently, configured to get its time from an external source, and all other servers to get their time from the PDC. Foolproof?

Not so. If you typed:

w32tm /query /status

at a command prompt on the PDC (not the Exchange Server, note), it reported:

Source: Free-running System Clock

Oops. Despite efforts to do the right thing in the registry, setting the Type key in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Parameters to NTP and entering a suitable list of time servers in the NtpServer key, it was actually getting its time from the server clock. This being a Hyper-V VM, that meant the clock on the host server, which – no surprise – was five minutes fast.
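To see what the service is actually configured to do, type:

w32tm /query /configuration

which reports, among other things, the Type value and NtpServer list in effect; or inspect the registry directly with:

reg query HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters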

You can check for this error by typing:

w32tm /resync

at the command prompt. If it says:

The computer did not resync because no time data was available.

then something is wrong with the configuration. If it succeeds, check the status as above and verify that it is querying an internet time server. If it is not querying a time server, run a command like this:

w32tm /config /update /manualpeerlist:"0.pool.ntp.org,0x8 1.pool.ntp.org,0x8 2.pool.ntp.org,0x8 3.pool.ntp.org,0x8" /syncfromflags:MANUAL

until you have it right.

Note this is ONLY for the server with the PDC Emulator FSMO role. Other servers should be configured to get time from the PDC.
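On those other servers, a command like this tells the time service to sync from the domain hierarchy:

w32tm /config /syncfromflags:domhier /update

followed by a restart of the Windows Time service (net stop w32time, then net start w32time).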

Time server problems seem to be common on Windows networks, despite the existence of lots of documentation. There are also various opinions on the best way to configure Hyper-V, which has its own time synchronization service. There is a piece by Eric Siron here on the subject, and I reckon his approach is a safe one (Hyper-V Synchronization Service OFF for the PDC Emulator, ON for every other VM).
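If you want to script Siron’s recommendation, the Hyper-V PowerShell module can toggle the relevant integration service; a minimal sketch, assuming a hypothetical VM name of DC1 for the PDC emulator:

Disable-VMIntegrationService -VMName "DC1" -Name "Time Synchronization"

with Enable-VMIntegrationService doing the reverse for every other VM.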

I love his closing remarks:

The Windows Time service has a track record of occasionally displaying erratic behavior. It is possible that some of my findings are not entirely accurate. It is also possible that my findings are 100% accurate but that not everyone will be able to duplicate them with 100% precision. If working with any time sensitive servers or applications, always take the time to verify that everything is working as expected.

Inside Azure Cosmos DB: Microsoft’s preferred database manager for its own high-scale applications

At Microsoft’s Build event in May this year I interviewed Dharma Shukla, Technical Fellow for the Azure Data group, about Cosmos DB. I enjoyed the interview but have not made use of the material until now, so even though Build was some time back I wanted to share some of his remarks.

Cosmos DB is Microsoft’s cloud-hosted NoSQL database. It began life as DocumentDB, and was re-launched as Cosmos DB at Build 2017. There are several things I did not appreciate at the time. One was how much use Microsoft itself makes of Cosmos DB, including for Azure Active Directory, the identity provider behind Office 365. Another was how low Cosmos DB sits in the overall Azure cloud system. It is a foundational piece, as Shukla explains below.

image

There were several Cosmos DB announcements at Build. What’s new?

“Multi-master is one of the capabilities that we announced yesterday. It allows developers to scale writes all around the world. Until yesterday Cosmos DB allowed you to scale writes in a single region but reads all around the world. Now we allow developers to scale reads and writes homogeneously all round the world. This is a huge deal for apps like IoT, connected cars, sensors, wearables. The amount of writes are far more than the amount of reads.

“The second thing is that now you get single-digit millisecond write latencies at the 99 percentile not just in one region.

“And the third piece is what falls out of this: high availability. The window of failover – the time it takes to fail over from one region to the other when a disaster happens – has shrunk significantly.

“It’s the only system I know of that has married the high consistency models that we have exposed with multi-master capability as well. It had to reach a certain level of maturity, testing it with first-party Microsoft applications at scale and then with a select set of external customers. That’s why it took us a long time.

“We also announced the ability to have your Cosmos DB database in your own VNet (virtual network). It’s a huge deal for enterprises where they want to make sure that no data leaks out of that VNet. To do it for a globally distributed database is especially hard because you have to close all the transitive networking dependencies.”

image
Technical Fellow Dharma Shukla

Does Cosmos DB work on Azure Stack?

“We are in the process of going to Azure Stack. Azure Stack is one of the top customer asks. A lot of customers want a hybrid Cosmos DB on Azure Stack as well as in Azure and then have Active – Active. One of the design considerations for multi master is for edge devices. Right now Azure has about 50 regions. Azure’s going to expand to let’s say 200 regions. So a customer’s single Cosmos DB table spanning all these regions is one level of scalability. But the architecture is such that if you directly attach lots of Azure Stack devices, or you have sensors and edge devices, they can also pretend to be replicas. They can also pretend to be an Azure region. So you can attach billions of endpoints to your table. Some of those endpoints could be Azure regions, some of them could be instances of Azure Stack, or IoT hub, or edge devices. This kind of scalability is core to the system.”

Have customers asked for any additional APIs into Cosmos DB?

“There is a list of APIs – HBase, richer SQL – there are a number of such API requests. The good news is that the system has been built in a way that makes adding new APIs relatively easy. So depending on the demand we continue to add APIs.”

Can you tell me anything about how you’ve implemented Cosmos DB? I know you use Service Fabric. Do you use other Azure services?

“We have dedicated clusters of compute machines. Cosmos DB is a Ring 0 service, so any time Azure opens a new region, Cosmos DB clusters are provisioned by default. Just like compute and storage, Cosmos DB is one of the Ring 0 services, which are the bottommost. Azure Active Directory, for example, depends on Cosmos DB; so Cosmos DB cannot take a dependency on Active Directory.

“The dependency that we have is our own clusters and machines, on which we put Service Fabric. For deployment of Cosmos DB code itself, we use Service Fabric. For some of the load balancing aspects we use Service Fabric. The partition management, global distribution, replication, is our own. So Cosmos DB is layered on top of Service Fabric, it is a Service Fabric application. But then it takes over. Once the Cosmos DB bits are laid out on the machine then its replication and partition management and distribution pieces take over. So that is the layering.

“Other than that there is no dependency on Azure. And that is why one of the salient aspects of this is that you can take the system and host it easily in places like Azure Stack. The dependencies are very small.

“We don’t use Azure Storage because of that dependency. So we store the data locally and then replicate it. And all of that data is also encrypted at rest.”

So when you say it is not currently in Azure Stack, it’s there underneath, but you haven’t surfaced it?

“It is in a defunct mode. We have to do a lot of work to light it up. When we light it up on such on-prem or private cloud devices, we want to enable this active-active pathway. So you are replicating your data and that is getting synchronized with the cloud, and Azure Stack is one of the sockets.”

Microsoft itself is using Cosmos DB. How far back does this go? Azure AD is quite old now. Was it always on Cosmos DB / DocumentDB?

“Over the years Office 365, Xbox, Skype, Bing, and more and more of Azure services, have started moving. Now it has almost become ubiquitous. Because it’s at the bottom of the stack, taking a dependency on it is very easy.

“Azure Active Directory consists of a set of microservices. So they progressively have moved to Cosmos DB. Same situation with Dynamics, and our slew of such applications. Skype is by and large on Cosmos DB now. There are still some fragments of the past.  Xbox and the Microsoft Store and others are running on it.”

Do you think your customers are good at making the right choices over which database technology to use? I do pick up some uncertainty about this.

“We are working on making sure that we provide that clarity. Postgres and MySQL and MariaDB and SQL Server, Azure SQL and elastic pools, managed instances, there is a whole slew of relational offerings. Then we have Cosmos DB and then lots of analytical offerings as well.

“If you are a relational app, and if you are using a relational database, and you are migrating from on-prem to Azure, then we recommend the relational family. It comes with a fundamental scale caveat, which is that it goes up to 4TB. Most of those customers are settled because they have designed the app around those sorts of scalability limitations.

“A subset of those customers, and a whole bunch of brand new customers, are willing to re-write the app. They know that they want to come to cloud for scale. So then we pitch Cosmos DB.

“Then there are customers who want to do massive scale offline analytical processing. So there is Databricks, Spark, HDInsight, and that set of services.

“We realise there are grey lines between these offerings. We’re tightening up the guidance, it’s valid feedback.”

Any numbers to flesh out the idea that this is a fast-growing service for Microsoft?

“I can tell you that the number of new clusters we provision every week is far more than the total number of clusters we had in the first month. The growth is staggering.”

Manage your privacy online through cookie settings? You must be joking.

Since the deadline passed for the enforcement of the EU’s GDPR (General Data Protection Regulation), most major web sites have revamped their privacy settings with new privacy policies and more options for controlling how your personal data is used. Unfortunately, the options offered are in many cases too obscure, too complex and too time-consuming to be of any practical value.

Recital 32 of the GDPR says:

Consent should be given by a clear affirmative act establishing a freely given, specific, informed and unambiguous indication of the data subject’s agreement to the processing of personal data … this could include ticking a box when visiting an internet website … silence, pre-ticked boxes or inactivity should not indicate consent.

I am sure the controls on offer via major web properties are the outcome of legal advice; at the same time, as a non-legal person I struggle on occasion to see how they meet the requirements or the spirit of the legislation. For example, another part of Recital 32 says:

… the request must be clear, concise, and not unnecessarily disruptive to the use of the service for which it is provided.

This post describes what I get if I go to technology news site zdnet.com and it detects that I have not agreed to its cookie management.

Note: before I continue, let me emphasize that there is lots of great content on zdnet, some written by people I know; the site as far as I know is doing its best to make business sense of providing such content, in what has become a hostile environment for professional journalism. I would like to see fundamental change in this environment but that is just wishful thinking.

That said, this is one of the worst experiences I have found for privacy-seeking users. Here is the initial banner:

image

Naturally I click Manage Settings.

Now I get a scrolling dialog from CBS Interactive, with a scroll gadget that indicates that this is a loooong document:

image

There is also some puzzling news. There are a bunch of third-parties whose cookies are apparently necessary for “our sites, products and services to function correctly.” These include cookies for analytics and also for Google ad-serving. I am not clear why these third-parties perform functions which are necessary to read a technical news site, but there we are.

I scroll down and reach a button that lets me opt out of being tracked by the third party advertisers using zdnet.com, or so it seems:

image

I want to opt out, so I click. Some of the options below are unchecked, but not many. Most of the options say “Opt out through company”.

It also seems pretty technical to me. Am I meant to understand what a “Demand Side Platform” is?

image

I counted the number of links that say “opt out through company”. There are 63 of them.

I click the first one, called Adform. Naturally, the first thing I see is a request to agree (or at least click OK to) their Cookie Policy.

image

I click to read the policy (remember this is only the first of 63 sites I have to visit). I am not offered any sort of settings, but invited to visit youronlinechoices or aboutads.info.

image

Well, I don’t want anything to do with Adform and don’t intend to return to the site. Maybe I can ignore the Adform Cookie Policy and just focus on the opt-out button above it.

image

Currently I am “Opted-in”. This is a lie: I have never opted in. Rather, I have failed to opt out, until I click the button. Opting out will in fact set a cookie, so that Adform knows I have opted out. I am also reminded that this opt-out only applies to this particular browser on this particular device. On all other browsers and/or devices, I will still be “opted in”.

OK, one down, 62 to go. However scrolling further down the list I get some bad news:

image

In some cases, it seems, “this partner does not provide a cookie opt-out”. The best I can do is to “visit their privacy policy for more information”. This will require a search, since the link is not clickable.

How to control your privacy

What should you do if you do not want to be tracked? Attempting to follow the industry-provided opt-outs is hopeless. They are mostly PR and legal box-ticking.

If you do not want to be tracked, use a VPN, use ad blockers, and delete all cookies at the end of each browsing session. This will be tedious, since your browsing experience will be one of constant “I agree” dialogs, some of which you may be able to ignore, while for others you have to click I Agree or endure a myriad of semi-functional links and settings.

Maybe the EU GDPR legislation is unreasonable. Maybe we have been backed into this corner by allowing the internet to be dominated by a few giant companies. All we can state for sure is that the current situation is hopelessly broken, from a privacy and usability perspective.

Is Ron Jeffries right about the shortcomings of Agile?

A post from InfoQ alerted me to this post by Agile Manifesto signatory Ron Jeffries with the rather extreme title “Developers should abandon Agile”.

If you read the post, you discover that what Jeffries really objects to is the assimilation of Agile methodology into the old order of enterprise software development, complete with expensive consultancy, expensive software that claims to manage Agile for you, and the usual top-down management.

All this goes to show that it is possible to do Agile badly; or more precisely, to adopt something that you call Agile but in reality is not. Jeffries concludes:

Other than perhaps a self-chosen orientation to the ideas of Extreme Programming — as an idea space rather than a method — I really am coming to think that software developers of all stripes should have no adherence to any “Agile” method of any kind. As those methods manifest on the ground, they are far too commonly the enemy of good software development rather than its friend.

However, the values and principles of the Manifesto for Agile Software Development still offer the best way I know to build software, and based on my long and varied experience, I’d follow those values and principles no matter what method the larger organization used.

I enjoyed a discussion on the subject of Agile with some of the editors and writers at InfoQ during the last London QCon event. Why is it, I asked, that Agile is no longer at the forefront of QCon, when a few years back it was at the heart of these events?

The answer, broadly, was that the key concepts behind Agile are now taken for granted so that there are more interesting things to discuss.

While this makes sense, it is also true (as Jeffries observes) that large organizations will tend to absorb these ideas in name only, and continue with dark methods if that is in their culture.

The core ideas in Extreme Programming are (it seems to me) sound. Working in small chunks, forming a team that includes the customer, releasing frequently and delivering tangible benefits, automated tests and continuous refactoring, planning future releases as you go rather than in one all-encompassing plan at the beginning of a project: these are fantastic principles, and revolutionary when you first come across them. See here for Jeffries’ account of what Extreme Programming is.

These ideas have everything to do with how the team works and little to do with specific tools (though it is obvious that things like a test framework, DevOps strategy and so on are needed).

Equally, you can have all the best tools but if the team is not functioning as envisaged, the methodology will fail. This is why software development methodology and the psychology of human relationships are intimately linked.

Real change is hard, and it is easy to slip back into bad practices, which is why we need to rediscover Agile, or something like it, repeatedly. Maybe the Agile word itself is not so helpful now; but the ideas are as strong as ever.

Microsoft announces Visual Studio 2019, but pleasing developers is a tough challenge

Microsoft’s John Montgomery has announced Visual Studio 2019, in a post which is short on any details of what might be in the product, other than to continue evolving features that we already know about, such as Live Share, AI-powered IntelliCode, more refactorings and so on.

The acquisition of GitHub is bound to impact both Visual Studio and Visual Studio Team Services, but Montgomery does not talk about this.

Note there is already a Visual Studio roadmap which gives some clues about what is coming. A common theme is integration with Azure services such as Azure Key Vault (for app secrets), Azure Functions, and Azure Container Service (Kubernetes).

It is more illuminating to read the comments to Montgomery’s post. Montgomery says that Visual Studio 2017 is “our most popular Visual Studio release ever,” which I presume is a count of how many times it has been downloaded or installed. It is not the most reliable though; one comment says “2017 has been buggier than all of the bugs 2015 and 2013 had combined.” I imagine every Visual Studio developer, myself included, has to exit and reload the IDE from time to time to fix odd behaviour. Other comments include:

– Reporting components have to be added per project rather than being integrated into the toolbox

– SQL Server Data Tools (SSDT) lagged behind the 2017 release and still have issues

– The XAML designer has performance and behaviour issues, and the new XAML designer in preview is missing many features

In general, Microsoft struggles to keep Visual Studio up to date with its constantly-changing developer platform while also working well with the older technologies that are still widely used. The transition from .NET Framework to .NET Core is a tricky issue for the team to solve.

User Benjamin Callister says this:

I have been developing professionally with VS for 20 years now. honestly, the experience seems to get worse with each new release. the amount of time wasted in my day working with XAML alone makes me more than frustrated. The feeling is mutual among my peers as well – and it has been for years now. VS Code is such a fresh breath of air because of its speed. VS full has become so bloated, working with UWP/XAML so slow, and build times so slow. Also, imo profiling tools should be turned OFF by default, with a simple button to toggle them back on when needed. As a developer, I don’t want them on all the time – rather, just when I want to profile.

The mention of Visual Studio Code is an interesting one. Code is cross-platform, has an ever-growing number of extensions, and will be an increasingly popular choice for developers who can live without the vast range of features in the full Visual Studio.

Asus Project Precog dual-screen laptop: innovation in PC hardware, but missing the keyboard may be too high a price

Asus has announced Project Precog at Computex in Taiwan. This is a dual-screen laptop with a 360° hinge and no keyboard.

image

The name suggests a focus on AI, but how much AI is actually baked into this device? Not that much. It features “Intelligent Touch” which will change the virtual interface automatically, adjusting the keyboard location or switching to stylus mode. It includes Cortana and Amazon Alexa for voice control. And the press release remarks optimistically that “The dual-screen design of Project Precog lets users keep their main tasks in full view while virtual assistants process other tasks on the second screen,” whatever that means – not much is my guess, since it is the CPU that processes tasks, not the screen.

image

Even so, kudos to Asus for innovation. The company has a long history of bold product launches; some fail, while some, like the inexpensive 2007 Eee PC which ran Linux, have been significant. The Eee PC was both a lot of fun and helped to raise awareness of alternatives to Windows.

The notable feature of Project Precog of course is not so much the AI, but the fact that it has two screens and no keyboard. Instead, if you want to type, you get an on-screen keyboard. The trade-off is extra screen space at the cost of convenient typing.

I am not sure about this one. I like dual screens, and like many people much prefer using two screens for desktop work. That said, I am also a keyboard addict. After many experiments with on-screen keyboards on iPads, Windows and Android tablets, I am convinced that the lack of tactile feedback and give on a virtual keyboard makes them more tiring to work on and therefore less productive.

Still, not everyone works in the same way as I do; and until we get to try a Project Precog device (no date announced), we will not know how well it works or how useful the second screen turns out to be.

Microsoft and GitHub, and will GitHub get worse?

Microsoft has announced an agreement to acquire GitHub for $7.5 billion (in Microsoft stock). Nat Friedman, formerly CEO of Xamarin, will become GitHub’s CEO, and GitHub will continue to run somewhat independently. A few comments.

image

Background: GitHub is a cloud-based source code repository based on Git, a distributed version control system created by Linus Torvalds. It is free to use for public, open source projects but charges a fee (from $7 to $21 per user per month) for private repositories.

First, why? This one is easy. Microsoft is a big customer of GitHub. Microsoft used to have its own hosting service for open source software called CodePlex but abandoned it in favour of GitHub, formally closing CodePlex in March 2017:

Over the years, we’ve seen a lot of amazing options come and go but at this point, GitHub is the de facto place for open source sharing and most open source projects have migrated there. We migrated too.

said Brian Harry.

Microsoft also uses GitHub for its documentation, and this has turned out to be a big improvement on its old documentation sites.

Note also that Microsoft has many important open source projects of its own, including much of its developer platform (.NET Core, ASP.NET Core and Entity Framework Core). Many of its projects are overseen by the .NET Foundation. Other notable open source, GitHub-hosted projects include Visual Studio Code, a programmer’s editor that has won many friends, and TypeScript, a typed superset of JavaScript that compiles to standard JavaScript code.

When big companies become highly dependent on the services of another company they may become anxious about it. What if the other company were taken over by a competitor? What if it were to run into trouble, or to change in ways that cause problems? Acquisition is an easy solution.

In the case of GitHub, there was reason to be anxious since it appears not to be profitable – unsurprising given the large number of free accounts.

Second, Microsoft is always pitching to developers, trying to attract them to its platform and especially Azure services. It has a difficult task because it is the Windows company and the Windows platform overall is in decline, versus Linux on servers and Android/iOS on mobile. Therefore it is striving to become a cross-platform company, and with considerable success. I discuss this at some length in this piece. Note that there is a huge amount of Linux on Azure, including “more than 40%” of the virtual machines. More than 50%? Maybe.

If Microsoft can keep GitHub working as well as before, or even improve it, it will do a lot to win the confidence of developers who are currently outside the Microsoft platform ecosystem.

image

Will GitHub get worse?

The tricky question: under Microsoft, will GitHub get worse? The company’s track record with acquisitions is spotty, ranging from utter disasters (Nokia, Danger) to doubtful (Skype), to moderately successful so far (LinkedIn, Xamarin).

Under the current leadership, I doubt anything bad will happen to GitHub. I’d guess it will migrate some infrastructure to Azure (GitHub runs mainly from its own datacentres as I understand it) but there is no need to re-engineer the platform to run on Windows.

Some businesses will be uncomfortable hosting their valuable source code with Microsoft. That is understandable, in the same way that I hear of retailers reluctant to use Amazon Web Services (since it is a platform owned by a competitor), but it is a low risk. Others have long-standing mistrust of Microsoft and will want to migrate away from GitHub because of this.

Personally I think it is right to be wary of any giant global corporation, and dislike the huge and weakly regulated influence they have on our lives. I doubt that Microsoft is any worse than its peers in terms of trustworthiness but of course this is open to debate.

Another point: with this acquisition, free GitHub hosting for open source projects is likely to continue. The press release says:

GitHub will retain its developer-first ethos and will operate independently to provide an open platform for all developers in all industries. Developers will continue to be able to use the programming languages, tools and operating systems of their choice for their projects — and will still be able to deploy their code to any operating system, any cloud and any device.

It is of course in Microsoft’s interests to make this work and the success of Visual Studio Code and TypeScript (which also come from the developer side of the company) shows that it can make cross-platform projects work. So I am optimistic that GitHub will be OK.

Update: I’ve noticed Sam Newman and Martin Fowler taking this view, a good sign from people I respect who are by no means part of the usual Microsoft crowd.

image

Official announcements

Press release: https://news.microsoft.com/?p=406917

Chris Wanstrath’s Blog Post: https://blog.github.com/2018-06-04-github-microsoft/

Satya Nadella’s Blog Post: https://blogs.microsoft.com/?p=52553832

Case sensitive directories now possible in Windows Explorer as well as in the Windows Subsystem for Linux

Experienced Windows users will know that occasionally you hit a problem with case sensitivity in file names. On Linux, you can have files whose names differ only in case, such as MyFile.txt and myfile.txt. Windows, on the other hand, will not normally let you do this; the second will overwrite the first.

The latest build of Windows 10 (1803, or the April 2018 Update) has a fix for this. You can now set directories to be case-sensitive using the fsutil command line utility:

fsutil.exe file setCaseSensitiveInfo <path> enable
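You can check whether the flag is set on a directory by typing:

fsutil.exe file queryCaseSensitiveInfo <path>

and turn it off again by passing disable rather than enable.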

You can then enjoy case sensitivity even in Windows Explorer:

image

This is not particularly useful in Windows. In fact, it is probably a bad idea, since most Windows applications presume case-insensitivity. I found that using Notepad on my case-sensitive directory I soon hit bugs. I double-click a file, edit, save, and get this:

image

Press F5 and it sorts itself out.

Developers may have written applications where a file is specified with different case in different places. Everything is fine; it is the same file. Then you enable case-sensitivity and it breaks, possibly with unpredictable behaviour where the application does not actually crash, but gives wrong results (which is worse).

If you are using WSL though, you may well want case sensitivity. There are even applications which will not compile without it, because the source includes different files whose names differ only by case. Therefore, WSL has always supported case sensitivity by default. However, Windows did not recognize this, so you could only make use of the feature from within WSL.

In the new version this has changed, and when you create a directory in WSL it will be case-sensitive in both WSL and Windows.

There is a snag. The full explanation here describes how to adjust this behaviour using /etc/wsl.conf, and includes this warning:

Any directories you created with WSL before build 17093 will not be treated as case sensitive anymore. To fix this, use fsutil.exe to mark your existing directories as case sensitive.

Hmm. If you are wondering why that application will not compile any more, this could be the reason. You can set it back to the old behaviour if you want.
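As I understand it, the switch is a DrvFs mount option; a minimal /etc/wsl.conf sketch, based on the post linked above (treat the exact values as something to verify against that post):

[automount]
options = "case=dir"

where case=dir honours the per-directory flag, case=force treats every directory as case sensitive (the old behaviour), and case=off ignores the flag entirely.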

Should Microsoft have made the file system case-sensitive? Possibly, though it is one of those things where it is very difficult to change the existing behaviour, for the reasons stated above. Note that Windows NT has always supported case-sensitive file names, but the feature is in effect disabled for compatibility reasons. Having files whose names differ only in case is also poor for usability, since they are easily confused. So I am not sure. Being able to switch it on selectively is nice though.

Honor 10 AI smartphone launched in London, and here are my first impressions

The Honor 10 “AI” has been launched in London, and is on sale now either on contract with Three (exclusively), or unlocked from major retailers. Price is from £31 pay monthly (free handset), or SIM-free at £399.99.

image

Why would you buy an Honor 10? Mainly because it is a high-end phone at a competitive price, especially if photography is important to you. As far as I can tell, Honor (which is a brand of Huawei) offers the best value of any major smartphone brand.

How is the Honor brand differentiated from Huawei? When I first came across the brand, it was focused on a cost-conscious, fashion-conscious youth market, and on direct selling rather than a big high street presence. It is a consumer brand whereas Huawei is business and consumer. At the London launch the consumer focus was still evident, but I got the impression that the company is broadening its reach; the deal with Three and sale through other major retailers show that Honor does now want to be on the high street.

image

What follows is a quick first impression. At the launch, Honor made a big deal of the phone’s multi-layer glass body, which gives a 3D radiant effect as you view the rear of the phone. I quite like the design but in this respect it is not really all that different from the glass body of the (excellent) Honor 8, launched in 2016. I also wonder how often it will end up hidden by a case. The Honor 10 AI is supplied with a transparent gel case, and even this spoils the effect somewhat.

The display is great though, bright and high resolution. Reflectivity is a problem, but that is true of most phones. Notably, by default there is a notch at the top around the front camera, but you can disable this in settings. I think the notch (on this or any phone) is an ugly feature and was quick to disable it. Unfortunately screenshots do not show the notch, so you will have to make do with my snaps from another phone:

With notch:

image

Without notch:

image

The camera specs are outstanding, with dual rear lenses at 24MP + 16MP, and a 24MP front camera. At the launch, at least half the presentation was devoted to photography, and in particular the “AI” feature. The Honor 10 has an NPU (Neural Processing Unit), which is hardware acceleration for the processes involved in image recognition. All smartphone cameras do a ton of work in software to optimize images, but the Honor 10 should be faster and use less power than most rivals thanks to the NPU.

The AI works in several ways. If it recognises the photo as one of around 500 “scenarios”, it will optimize for that scenario. At a detail level, image recognition will segment a picture into objects it recognises, such as sky, buildings, people and so on, and optimize accordingly. For example, people get high priority, and especially the person who is the subject of a portrait. It will also segment the image of a person into hair, eyes, mouth and so on, for further optimisation.

What is optimisation? This is the key question. One of the AI effects is bokeh (blurring the background), which can be a nice way to make a portrait. On the other hand, if you take a picture of someone with Niagara Falls in the background, do you really want it blurred to streaks of grey, so that the picture might have been taken anywhere? It is a problem, and sometimes the AI will make your picture worse. I am reserving judgment on this, and will do another post on the subject after more hands-on time.

Of course you can disable the AI, and in the Pro camera mode you can capture RAW images, so this is a strong mobile for photography even if you do not like the AI aspect. I have taken a few snaps and been impressed with the clarity and detail.

24MP for the front camera is exceptional so if selfies are your thing this is a good choice.

You have various options for unlocking the device: PIN, password, pattern swipe, fingerprint, proximity of a Bluetooth device, or Face Unlock. The fingerprint reader is on the front, which is a negative for me as I prefer a rear fingerprint reader that lets you grab the device with one hand and instantly unlock it. But you can do this anyway with Face Unlock, though Honor warns that this is the least secure option, as it might work with a similar face (or possibly a picture). I found Face Unlock effective, with or without spectacles.

The fingerprint scanner is behind glass which Honor says helps if your finger is wet.

There are a few compromises. A single speaker means sound is OK but not great; it is fine through headphones or an external speaker though. No wireless charging.

Geekbench scores

image

image

PC Mark scores

image

So how much has performance improved since the Honor 8 in 2016? On PCMark, Work 2.0 performance was 5799 on the Honor 8 and 7069 on the 10 (+21.9%). Geekbench 4 multi-core CPU scores go from 5556 on the 8 to 6636 on the 10 (+19.4%). The GPU is more substantially improved: 4728 on the 8 and 8585 on the 10 (+81.6%). These figures take no account of the new NPU.

First impressions

I must confess to some disappointment that the only use Honor seems to have found for its NPU is photo enhancement, important though this is. It does not worry me much though. I will report back on the camera, but first impressions are good, and this strikes me as a strong contender as a high-end phone at a mid-range price. 128GB storage is generous.

Spec summary

OS: Android 8.1 “Oreo” with  EMUI (“Emotion UI”) 8.1 user interface

Screen: 5.84″ 19:9, 2280 x 1080, 432 PPI, removable notch

Chipset: Kirin 970 8-core, 4x A73 @ 2.36 GHz, 4x A53 @ 1.84 GHz

Integrated GPU: ARM Mali-G72MP12 746 MHz

Integrated NPU (Neural Processing Unit): Hardware acceleration for machine learning/AI

RAM: 4GB

Storage: 128GB ROM.

Dual SIM: Yes (nano SIM)

NFC: Yes

Sensors: Gravity Sensor, Ambient Light Sensor, Proximity Sensor, Gyroscope, Compass, Fingerprint sensor, infrared sensor, Hall sensor, GPS

WiFi: 802.11 a/b/g/n/ac, 2.4GHz/5GHz

Bluetooth: 4.2

Connections: USB 2.0 Type-C, 3.5mm headphone socket

Frequency bands: 4G LTE TDD: B38/B40/B41; 4G LTE FDD: B1/B3/B5/B7/B8/B19/B20; 3G WCDMA: B1/B2/B5/B8/B6/B19; 2G GSM: B2/B3/B5/B8

Size and weight: 149.6 mm x 71.2 mm  x 7.7 mm, 153g

Battery: 3,400 mAh,  50% charge in 25 minutes. No wireless charging.

Fingerprint sensor: Front, under glass

Face unlock: Yes

Rear camera: 24MP + 16MP dual lens camera, f/1.8 aperture

Front camera: 24MP