Progressive Delivery: the next step in DevOps?

I attended the always-excellent QCon developer conference in London earlier this week. James Governor from RedMonk was there, presenting what he calls Progressive Delivery, the idea being that rather than rolling out continuous and (mostly) small changes to everyone, you segment your deployments. Progressive deployment, see.

image

It is not really a new idea and might even be considered a rediscovery of what we already knew: that it makes sense to deploy new stuff to a small sample first. However, it is true that tools are constantly evolving, and Progressive Delivery is perhaps best seen as a necessary refinement of the Continuous Delivery concept. In particular, LaunchDarkly exhibited at QCon; the product is a feature management platform which lets you create groups of users and toggle features on or off for particular groups. Needless to say, the LaunchDarkly folk love the Progressive Delivery concept.

Why Progressive Delivery? My first reaction is that this is about caution: if stuff breaks, let us make sure it only breaks for a few users. Then I saw that it can be equally about bold experimentation, trying new ideas with small groups so you can observe what works and what does not.

Of course you can do this anyway and in the end there is no magic in LaunchDarkly; it is still down to the developer to write the code:

image
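
For illustration, a flag check with the LaunchDarkly server-side Java SDK looks roughly like the sketch below. This is a minimal sketch only: the SDK key, flag key, user key and group name are all made up, and package names vary between SDK versions.

import com.launchdarkly.sdk.LDUser;
import com.launchdarkly.sdk.server.LDClient;

public class FeatureCheck {
    public static void main(String[] args) throws Exception {
        // The SDK key identifies your LaunchDarkly environment (placeholder here)
        try (LDClient client = new LDClient("YOUR_SDK_KEY")) {
            // Describe the user; "beta-testers" is a hypothetical group name
            LDUser user = new LDUser.Builder("user-key-123")
                    .custom("group", "beta-testers")
                    .build();
            // Ask whether this user should see the feature; false is the fallback value
            boolean enabled = client.boolVariation("new-checkout-flow", user, false);
            if (enabled) {
                // new code path for the targeted group
            } else {
                // existing behaviour for everyone else
            }
        }
    }
}

The platform manages who is in which group and which flags are on, but the branching still lives in your code.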

This stuff can also easily become non-trivial; one attendee asked about managing database structure and it is obvious that not all features are equally amenable to being switched on or off for groups of users.

Still, I reckon “how do you manage features?” is a good question to add to the list when considering DevOps tools.

You can read most of what Governor talked about in his post from last year here.

Location Services: GPS-only no longer protects your privacy on Android 9 “Pie”, Huawei / Honor 10

I have an Honor 10 AI phone (among others) and this recently upgraded itself to Android 9 “Pie”. It is always good to be on the latest Android; but I noticed a change in something I care about (though acknowledging that for most people it is not top of mind).

Specifically, I am averse to sharing my location more than is necessary, especially with large organizations that want to track me for advertising and marketing purposes (hello Google!). Therefore I normally set Android Location Services to GPS-only. This means you do not have to agree to send your location data to Google in the dialog that appears when you turn on what Google calls “High Accuracy” location services. Here is what the setting looks like in Android 7:

image

I have found that Google Maps works badly on GPS-only, but other mapping apps like HERE WeGo work fine.

However, following the upgrade to Android 9 on the Honor 10 AI, my use of HERE WeGo was blocked.

image

This is coercive, in that mapping is a core function of a smartphone. And it is unnecessary, since I know for sure that this app works fine without the Wi-Fi scanning and Google data collection referenced in the dialog.

I agreed to the setting but noticed another curious thing. When you switch on location services, you also make a new agreement with Huawei:

image

This is confusing. Are location services provided by Google, or by Huawei?

Note also that I have little confidence in the promise that no “personal information” will be collected. The intention may be there, but history suggests that it is often pretty easy to identify the person from so-called non-personal information. It is better not to send the data at all if you care about privacy.

Huawei’s only suggestion if you do not agree is not to use location services. Or throw your device in the bin.

Having agreed to all this data collection, note that you can still turn off Wi-Fi scanning and Bluetooth scanning in the advanced settings of Google location services. Is this respected by Huawei, though? It is hard to tell.

Finally, note that Google now strongly encourages developers to use the Google Play location API rather than the Android location API.

image
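
For developers, the difference in practice looks something like the sketch below. It assumes an Android Context, that the ACCESS_FINE_LOCATION permission has already been granted, and that Google Play services is available; error handling is omitted.

import android.content.Context;
import android.location.Location;
import android.location.LocationManager;
import com.google.android.gms.location.FusedLocationProviderClient;
import com.google.android.gms.location.LocationServices;

public class LocationSketch {
    void readLocation(Context context) {
        // Framework API: reads the GPS provider directly, no Google Play services involved
        LocationManager lm = (LocationManager) context.getSystemService(Context.LOCATION_SERVICE);
        Location lastGpsFix = lm.getLastKnownLocation(LocationManager.GPS_PROVIDER);

        // Google Play services "fused" provider: location requests routed through Google's services
        FusedLocationProviderClient fused = LocationServices.getFusedLocationProviderClient(context);
        fused.getLastLocation().addOnSuccessListener(location -> {
            // location may be null if no fix is cached
        });
    }
}

The first route uses only the framework and the satellite fix; the second depends on Google Play services, and it is the one Google now promotes.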

This all seems like bad news if, like me, you want to minimize the location data that you share.

How Windows 10 Ransomware Protection interferes with application installs

Three times recently I have had install failures on Windows 10, and three times I have fixed them by disabling one of its security features.

image

“Ransomware protection” does not know anything about ransomware as such, but simply blocks access to files and folders where you are likely to store documents and data. Ransomware works by encrypting your documents and demanding money (usually in the form of bitcoin) to unlock them. Therefore blocking access prevents it from working, or at least that is the idea.

I am all in favour of blocking ransomware, but unfortunately this feature may also break application installs. Worse, they tend to break in ways that do not make the problem obvious to the user.

I installed Docker for Windows, for example (actually just accepted a prompt to upgrade) and it hung on the Installing… dialog.

Another application complained about insufficient disk space.

The problem seems to be related to application shortcuts, which are created in protected folders such as the desktop. Even though the installer is running with admin rights, it cannot write these files. What happens next depends on how well the installer handles the unexpected failure: it is not really a fatal error, but a poorly written installer can turn it into one.
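
To illustrate the failure mode (this is a minimal sketch, not any installer's actual code, and the shortcut path is hypothetical), here is what happens when a non-whitelisted program tries to write to a protected folder with controlled folder access enabled:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ShortcutWrite {
    public static void main(String[] args) {
        // The per-user desktop is one of the folders controlled folder access protects by default
        Path shortcut = Paths.get(System.getProperty("user.home"), "Desktop", "MyApp.lnk");
        try {
            Files.write(shortcut, new byte[0]);
            System.out.println("Shortcut written: " + shortcut);
        } catch (IOException e) {
            // This is the branch an installer needs to handle gracefully; if it does not,
            // the symptom is a hang or a misleading error rather than a clear explanation
            System.err.println("Write blocked: " + e.getMessage());
        }
    }
}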

What is the solution?

The obvious answer is to turn off controlled folder access before running installers for desktop applications. You can turn it back on afterwards.

Microsoft may say that if we all use Store apps or the Desktop Bridge packager the problem will not occur. The issue is related to the Windows legacy of free-for-all installations and the way Microsoft has been trying for many years (with partial success) to bring it under control. Mobile operating systems tend to be better behaved because they were designed to be locked down and to isolate applications from one another and from the operating system.

Adding a Visual Studio code workspace to a GitHub repository

Rather to my surprise, I am currently spending more of my development time in Visual Studio Code than in Visual Studio. There are a few reasons:

– I am working on a Java project and chose to use VS Code in part as a learning exercise

– I have a PHP website and have worked out a nice debugging environment using VS Code and WSL (Windows Subsystem for Linux)

– I am finding VS Code handy as a general-purpose editor

How about source control though? I guess as you would expect from Microsoft (which now owns GitHub) the git support is built in. So this is how I moved my PHP website, which was not under source control, to a private GitHub repository:

1. In VS Code, open the workspace and press Ctrl-Shift-G or click the Source control icon. Click the repository icon for Initialize Repository:

image

Then select your workspace from the dropdown and the local repository is created.

Initially all your files are in an unstaged state. Staging in git is where you define which changes will be included in your next commit. We want to commit everything to form the initial repo, so drop down the git menu (the three dots to the right of the Source Control pane), choose Commit All, and click Yes.

image

Type a commit message and go.

Now go to GitHub and create a new repository.

image

This is a private repository as nobody else needs to see the code for my website.

The repository is created, and right there on the default help page are the commands for pushing your existing repo to GitHub.

Just open a terminal and paste them:

git remote add origin https://github.com/[your username]/[your repo name].git

git push -u origin master

After the second command you will be prompted to log in to GitHub; this creates an access token.

Done! If you go back to the repo on GitHub you will find it populated with your files.

A similar workflow applies if you use Azure DevOps. The choice is yours; the features of the two services are different but if all you want is source code management GitHub seems the obvious choice.

Linksys LAPAC1750C, Cloud Manager, and the mystery of the wifi printer that would not print

I have been testing the Linksys LAPAC1750C wifi access point, a mid-price unit aimed at small businesses (or owners of large homes) who want an extensible wifi network with more features than home networking gear, but at a more affordable price than Cisco or other enterprise vendors.

image

This unit supports clustering so is really intended for multiple access points managed as one system, but I have only a single access point. Still, it is enough to get a feel for how it works. In particular, I was interested in the Linksys Cloud Manager, which simplifies management and configuration. A five-year license comes in the box, and setup is a snap. Just plug the access point into your switch, create an account on Linksys Cloud Manager, start a new network, enter the serial number and MAC address of your access point, and you are almost done. The only thing that remains is to create one or more SSIDs, which apply automatically to all the access points in your network. VLAN support means you can configure guest networks, and there are options for client isolation and splash screens so that you can display some information to guest users when they log on.

I was impressed with the ease of use, but noticed that the cloud manager has limited features compared to the local browser-based configuration screens. No RADIUS support, for example. You cannot use both, since the cloud manager takes over all the configuration. If you revert to local configuration, everything is reset.

All seemed well, except for a curious problem. I have a wifi connected printer and although it joined the network without any problem, I could not print to it. It was as if it was invisible on the network. Sounds like client isolation (where one wifi client is blocked from accessing other wifi clients), except that client isolation was off. The other odd thing was that rebooting the access point seemed to fix it, and I could print, but only for a short time before it reverted to invisibility.

I called support but no joy. You could try resetting the access point, said the tech person, once I had managed to explain the problem successfully. This wasn’t a problem I could live with, so I did the obvious thing, disabling the cloud manager and using the local configuration.

When I did, I soon spotted the issue. The cloud manager automatically applies the same SSID to both the 2.4GHz and 5GHz radios, which is nice for simplicity, but there is an unfortunate side-effect. Although they have the same name, these are really two separate SSIDs, and the LAPAC1750C applies SSID isolation by default.

image

The printer, being a few years old, does not support 5GHz wifi connections, so it connected to the 2.4GHz radio. It was then isolated from a PC, also connected by wifi, that was trying to print to it directly. You could overcome this by routing printing through a server on the wired network, but any direct client-to-client communication will not work.

The solution is to disable isolation between SSIDs but this option is not exposed in cloud manager. So in my case, cloud manager is not suitable. A shame, since within its limitations it seems nicely done.

Everything works now and printing is fine.

Office 365 vs Office 2019 vs LibreOffice: some thoughts

What has rescued Microsoft in the cloud era? It seems to me that Office 365, rather than Azure, is its most strategic product. Users do not like too much change; and back when Office 365 was introduced in 2011 it offered an easy way for businesses small and large to retire their Exchange servers while retaining Outlook with all its functionality (Outlook works with other mail servers but with limited features). You also got SharePoint online, cloud storage, and in-browser versions of Word, Excel and PowerPoint.

There was always another aspect to Office 365 though, which is that it allowed you to buy the Office desktop applications as a subscription. Unless you are the kind of person (or business) that happily runs old software, the subscription is better value than a permanent license, especially for small businesses. Currently Office 365 Business Premium gets you Outlook, Word, Excel, PowerPoint, OneNote and Access, as well as hosted Exchange and SharePoint etc, for £9.40 per month. Office Home and Business (which does not include Access) is £250, or about the same as two years' subscription, and can only be installed on one PC or Mac, versus 5 PCs or Macs, 5 tablets and 5 mobile devices for the subscription product.

The subscription product is called Office 365, and the latest version of the desktop suite is called Office 2019. Microsoft would much rather you bought the subscription, not only because it delivers recurring revenue, but also because Office 365 is a great upselling opportunity. Once you are on Office 365 and Azure Active Directory, products like Dynamics 365 are a natural fit.

Microsoft’s enthusiasm for the subscription product has resulted in a recent “Twins Challenge” campaign which features videos of identical twins trying the same task in both Office 365 and Office 2019. They are silly videos and do a poor job of selling the Office 365 features. For example, in one video the task is to “fill out a spreadsheet with data about all 50 states” (US centric or what?).

image

In the video, the Office 365 guy is done in seconds thanks to Excel Data Types, a new feature which uses online data from the Bing search engine to provide intelligent features like entering population, capital city and so on. It seems though that the twins were pre-provided with a spreadsheet that had a list of the 50 states, as Excel cannot enter these automatically. And when I tried my own exercise with a few capital cities I found it frustrating because not much data was available, and the data is inconsistent so that one city has fields not available for another city. So my results were not that great.

image

I’m also troubled to see data like population chucked into a spreadsheet with no information on its source or scope. Is that Greater London (technically a county) or something less than that? What year? Whose survey? These things matter.

Perhaps even more to the point, this is not what most users do with Office. It varies of course; but a lot of people type documents and do simple spreadsheets that do not stress the product. They care about things like whether it will print correctly and, if they email it, whether the recipient will be able to read it. Office, to be fair, is good in both respects, but Microsoft often struggles to bring new features to Office that matter to a large proportion of users (though every feature matters to someone).

It is interesting to browse through the new features in Office 2019, listed here. LaTeX equation support, nice. And a third time zone in Outlook, handy if you discover it in the convoluted Outlook UI (and yes, discoverability is a problem):

image
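
On the LaTeX point: as I understand it, the Office 2019 equation editor can be switched to LaTeX input, so you can type an equation in familiar syntax, for example:

x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}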

It is worth noting that for document editing the free LibreOffice is excellent and good enough for a lot of purposes. You do not get Outlook, though, and Calc is no Excel. If you mostly do word processing, do look at LibreOffice; it is better in some respects than Word (style support, for example).

I use Office constantly and, like all users, I do have a list of things I would like fixed or improved, which for the most part seem to be completely different from what the Office team focuses on. There are even longstanding bugs – see the recent comment. Ever had an email in Outlook, clicked Reply, and found that the formatting and background of the original message affect your reply text as well, and that the only way to fix it is to remove all formatting? Or been frustrated that Outlook makes it so hard to make interline comments in a reply with sensible formatting? Or been driven crazy by Word paragraph numbering and indentation when you want to have more than one paragraph within the same numbered point? Little things; but they could be better.

Then again there is AutoSave (note: quite different from AutoRecover), which is both recent and a fantastic feature. Unfortunately it only works with OneDrive. The value of this feature was brought home to me by an anecdote: a teenager who lost all the work in their Word document because they had not previously encountered a Save button (Google Docs saves automatically). Automatic saving becomes what you expect.

So yes, Office does improve, and for what you get it is great value. Will Office 2019 users miss lots of core features? No. In most cases though, the Office 365 subscription is much better value.

How Windows 10 Ransomware protection can cause install failures, LibreOffice for example

While researching a piece on Office applications I needed to install LibreOffice. The install failed with a message about an error creating a temporary file needed for installation.

image

Fortunately I knew where to look for the answer. Windows Ransomware Protection is a feature which whitelists the applications allowed to write data to the folders likely to contain the data you care about, such as documents and pictures. The idea is that malware which wants to encrypt these folders and then demand a ransom will find it harder to do so.

image

Ransomware protection can have side effects though. Operations like creating desktop shortcuts may fail because the desktop is one of the protected locations. That is just an annoyance; but in the case of LibreOffice, setup tried to write an essential file to a protected location and the install failed completely.

Solution: turn off Ransomware protection temporarily and re-run setup.

image

Which application platform for desktop Windows apps? Microsoft has stated its official line, but UWP is still not compelling

One year ago I wrote a post on Which .NET framework for Windows: UWP, WPF or Windows Forms? which is still the most popular post on this site, indicating perhaps that this is a tricky issue for many developers. That this is a live question is a symptom of Microsoft’s many changes of strategic direction over the last decade, making it hard for even the most loyal developers to read the signals.

I was intrigued therefore to note that Microsoft has an official Choose your platform post on this subject. There is something curious about this post. It covers three frameworks: Universal Windows Platform (UWP), Windows Presentation Foundation (WPF) and Windows Forms (WinForms). Microsoft states:

UWP is our newest, leading-edge application platform.

implying that if you have an unconstrained choice, this is the way to go. Yet if you look at the table of “Scenarios that have limited support”, UWP has the longest list. It is not only Windows 7 support that you will miss, but also something called Dense UI, along with other rather significant features like multiple windows and “full platform support”.

What is Dense UI? I presume this is a reference to the chunkiness of a typical UWP UI, caused by the fact that it was originally optimised for touch control. This matters if, for example, you are writing a business application and want to have a lot of information to hand in a single window. It may not be ideal for cosmetics, but it can be good for productivity.

With respect to all three of these limitations, Microsoft does note that “We have publicly announced features that will address this scenario in a future release of Windows 10.” I am not sure that they are in fact fully addressed; but it is clear that improvements are coming. In fact, the promise of further active development is perhaps the key reason why you might choose UWP for a new project, that is, if you do not learn from the past and believe that UWP will still be core to Microsoft’s strategy in say five years time.

Take a look at the strengths column for UWP though. Anything really compelling there? To my mind, just one. “Secure execution via application containers.” Yet the security of UWP was undermined by Microsoft’s decision to abandon its original goal of restricting the Windows Runtime API (used for UWP) to a safe subset of the full Windows API. You can also now wrap WPF and WinForm applications using Desktop Bridge, getting Store delivery and a certain amount of isolation.

At the time of writing, Microsoft is still displaying this diagram in its guide to UWP.

image

This is now somewhat misleading though. Windows Mobile is on death row:

Windows 10 Mobile, version 1709 (released October 2017) is the last release of Windows 10 Mobile and Microsoft will end support on December 10, 2019. The end of support date applies to all Windows 10 Mobile products, including Windows 10 Mobile and Windows 10 Mobile Enterprise.

Windows 10 Mobile users will no longer be eligible to receive new security updates, non-security hotfixes, free assisted support options or online technical content updates from Microsoft for free.

As a developer then, would you rather have PC, Xbox and HoloLens support? Or PC, Mac, iOS and Android support? If the latter, you would be better off investigating Microsoft’s Xamarin Forms framework than UWP as such.

The truth is, many developers who target Windows desktop applications do so because they want to run well on Windows and are not concerned about cross-platform. While that may seem odd from a consumer perspective, it is not so odd for corporate development with deskbound users performing specific business operations.

I was at one time enthusiastic about Windows Runtime/UWP because I liked the idea of “one Windows platform” as illustrated above, and I liked the idea of making Windows a platform for secure applications. Both these concepts have been thoroughly undermined, and I would suggest that the average developer is probably better off with WPF or WinForms (or other approaches to Win32 applications such as Delphi etc), than with UWP. Or with Xamarin for a cross-platform solution. That is unfortunate because it implies that the application platform Microsoft is investing in most is at odds with what developers need.

If UWP becomes a better platform than WPF or WinForms in all important respects, that advice will change; but right now it is not all that compelling.

Microsoft quarterly financials: strong figures, note LinkedIn and Dynamics numbers

Microsoft has released its financial statements for the quarter ending December 31 2018. Sometimes it seems that all the talk is of Google, Facebook, Apple and Amazon, but Microsoft continues to deliver strong results.

That said, it is an increasingly corporate story. The company still has a presence in gaming, both on Xbox and PC, and reports Xbox software and services growth of 31%. Consumers still buy Windows and Office; there are now 33.3 million Office 365 consumer customers.

There is no longer a PC in every home though. There might be an old one; but PCs now tend to be bought for specific purposes such as gaming or home working. There are plenty of other options for casual home computing. Windows OEM revenue is down 5%.

It is a different story in the business world. Office 365 is still motoring, with revenue growth of 34% year on year. A spin-off benefit is that Dynamics 365, once a poor cousin to Salesforce for cloud CRM, now reports revenue growth of 51% year on year, despite the product’s eccentricities and high price. The key is integration and upsell: get users hooked on Office 365 for email and documents, and compelling add-ons become an easy sell.

Rather to my surprise, Microsoft’s LinkedIn acquisition seems to be working. Revenue is up 29%, session numbers are up 30%. My anecdotal experience bears this out. People are actually acquiring and doing business via LinkedIn, even though it suffers from masses of bad data and the usual perils of social media (fake accounts, scammers, harassers and so on). For now, users seem to be able to manage these problems and interact with the right people.

Azure revenue is up 76%.

All well in Redmond then? The risk is that the company’s narrowing focus will leave it vulnerable to competitors who take advantage of their control of the end points (clients): smartphones, tablets, smart devices running Linux. Even the web browser is going that way, with the Edge team now adopting Google’s browser engine, Chromium, rather than building its own.

For now though, Microsoft powers on.

Here is the breakdown by segment, such as it is:   

Quarter ending December 31st 2018 vs quarter ending December 31st 2017, $millions

Segment | Revenue | Change | Operating income | Change
Productivity and Business Processes | 10100 | +1147 | 4015 | +678
Intelligent Cloud | 9378 | +1583 | 3279 | +447
More Personal Computing | 12993 | +823 | 2964 | +454

The segments break down as:

Productivity and Business Processes: Office, Office 365, Dynamics 365 and on-premises Dynamics, LinkedIn

Intelligent Cloud: Server products, Azure cloud services

More Personal Computing: Consumer including Windows, Xbox; Bing search; Surface hardware

Google’s search monopoly, the decline of organic search and its implications

A piece by Rand Fishkin tells me what I already knew: that Google has a de facto monopoly in search, and that organic search (meaning clicking on a result from a search engine that is not an ad) is in decline, especially on mobile.

According to Fishkin, using data from digital intelligence firm Jumpshot, Google properties deliver 96.1% of all searches in the EU and 93.4% of all searches in the US. “Google properties” include Google, Google Images, YouTube, and Google Maps.

To the extent that this shows high satisfaction with Google’s service, this is a credit to the company. We should also look carefully though at the outcome of those searches. In the latest figures available (Jan-Sept 2018) they break down as follows (EU figures):

  • Mobile: 36.7% organic, 8.8% paid, 54.4% no-click
  • Desktop: 63.6% organic, 6.4% paid, 30% no-click

On mobile, the proportion of paid clicks has more than doubled since 2016. On the desktop, it has gone up by over 40%.

A no-click search is one where the search engine delivers the result without any click-through to another site. Users like this in that it saves a tap and, more important, spares them the ads, log-in pleas, and navigation challenges that a third-party site may present.

There is a benefit to users therefore, but there are also costs. The user never leaves Google, so there is no opportunity for a third-party site to build a relationship or even sell a click on one of its own ads. It also puts Google in control of information, which has huge political and commercial implications, irrespective of whether it is AI or Google’s own policies that determine what users see.

My guess is that the commercial reality is that organic search has declined even more than the figures suggest. Not all searches signal a buying intent; these searches are less valuable to advertisers and therefore attract fewer paid ads. On the other hand, searches that do indicate a buying intent (“business insurance”, “IT support”, “flight to New York”) are highly valued and attract more paid-for advertising. So you can expect organic search to be more successful on searches that have less commercial value.

In the early days of the internet the idea that sites would have to pay to get visitors was not foreseen. Of course it is still possible to build traffic without paying a Google tax, via social media links or simply by hosting amazing content that users want to see in full detail, but it is increasingly challenging.

There must be some sort of economic law that says entities that can choose whether to give something away or to charge for it, will eventually charge for it. We all end up paying, since whoever actually provides the goods or services that we want has to recoup the cost of winning our business, including a share to Google.

Around six years ago I wrote a piece called Reflecting on Google’s power: a case for regulation? Since then, the case for regulation has grown, but the prospect of it has diminished, since the international influence and lobbying power of the company has also grown.
