Notes from the Field: dmesg error blocks MySQL install on Windows Subsystem for Linux

I enjoy Windows Subsystem for Linux (WSL) on Windows 10 and use it constantly. It does not patch itself, so from time to time I update it using apt-get. The latest update upgraded MySQL to version 5.7.22, but unfortunately the upgrade failed: dpkg could not configure the package. I saw messages like:

invoke-rc.d: could not determine current runlevel

2002: Can’t connect to local MySQL server through socket ‘/var/run/mysqld/mysqld.sock’

After multiple efforts uninstalling and reinstalling I narrowed the problem down to a dmesg error:

dmesg: read kernel buffer failed: Function not implemented

It is true that dmesg does not work on WSL. However, there is a workaround here which says that if you write something to /dev/kmsg, then at least calling dmesg does not return an error. So I did:

sudo echo foo > /dev/kmsg
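One caveat with this form of the command: the shell performs the > redirection before sudo runs, so if /dev/kmsg is not writable by your own user it fails with permission denied. A variant that writes as root, assuming the usual tee behaviour, is:

```shell
# tee runs under sudo and so opens /dev/kmsg as root;
# redirect to /dev/null to discard tee's copy of the input on stdout
echo foo | sudo tee /dev/kmsg > /dev/null
```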

Removed and reinstalled MySQL one more time and it worked:

image

Apparently partial dmesg support in WSL is on the way, previewed in Build 17655.

Note: be cautious about fully uninstalling MySQL if you have data you want to preserve. Export/backup the databases first.
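A minimal sketch of such a backup, assuming mysqld can still be started and you can log in as root (adjust credentials and options to suit):

```shell
# Dump every database to a timestamped SQL file before purging the packages;
# --single-transaction gives a consistent snapshot for InnoDB tables
mysqldump --all-databases --single-transaction -u root -p \
  > "mysql-backup-$(date +%F).sql"
```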

Instant applications considered harmful?

Adrian Colyer, formerly of SpringSource, VMware, and Pivotal, runs an excellent blog where he looks at recent technical papers. A few days ago he covered The Rise of the Citizen Developer – assessing the security impact of online app generators. This was about online app generators for Android, things like Andromo which let you create an app with a few clicks. Of course the scope of such apps is rather limited, but they have appeal as a quick way to get something into the Play Store that will promote your brand, broadcast your blog, convert your website into an app, or help customers find your office.

It turns out that there are a few problems with these app generators, and Andromo is one of the better ones. Some of them just download a big generic application with a configuration file that customises it to your requirements. Often this configuration is loaded from the internet, in some cases over HTTP with no encryption. API keys used for interaction with other services such as Twitter and Google can easily leak. Many of the generated apps do not conform to Android security best practices and request more permissions than are needed.

Low-code or no-code applications are not confined to Android. Appian promises “enterprise-grade” apps via its platform. Microsoft PowerApps claims to “solve business problems with intuitive visual tools that don’t require code.” It is an idea that will not go away: an easy-to-use visual environment that enables any business person to build productive applications.

Some are better than others; but there are inherent problems with all these kinds of tools. Three big issues come to mind:

  1. Bloat. You only require a subset of what the application generator can do, but because the generator tries to be universal, a mass of code comes along that you do not need but someone else may. This inevitably hurts performance.
  2. Brick walls. Everything is going well until you require some feature that the platform does not support. What now? Often the only solution is to trash it and start again with a more flexible tool.
  3. Black box. Your app mostly works, but for some reason in certain cases it gives the wrong result. Lack of visibility into what is happening behind the scenes makes problems like this hard to fix.

It is possible for an ideal tool to overcome these issues. Such a tool generates human-understandable code and lets you go beyond the limitations of the generator by exporting and editing the project in a full programming environment. Most of the tools I have seen do not allow this; and even if they do, it is still hard for the generator to avoid generating a ton of code that you do not really need.

The more I have seen of different kinds of custom applications, the more I appreciate projects with nicely commented textual code that you can trace through and understand.

The possibility of near-instant applications has huge appeal, but beware the hidden costs.

On Microsoft Teams in Office 365, and why we prefer walled gardens to the Internet jungle

Gartner has recently delivered a report called Why Microsoft Teams will soon be just as common as Outlook, which gave me pause for reflection.

The initial success of Office 365 was almost all to do with email. Hosted Exchange at a reasonable cost is an obvious win for businesses that were formerly on on-premises Exchange or Small Business Server. Microsoft worked to make the migration relatively seamless, and with strong Active Directory support it can be done with users hardly noticing. Exchange of course is more than just email, also handling calendars and tasks, and Outlook and Exchange are indispensable tools for many businesses.

The other pieces of Office 365, such as SharePoint, OneDrive and Skype for Business (formerly Lync), took longer to gain traction, in part because of flaws in the products. Exchange has always been an excellent email server, but in cloud document storage and collaboration Microsoft’s solution was less good than alternatives like Dropbox and Box, and ties to desktop Office are a mixed blessing: welcome because Office is familiar and capable, but also causing friction thanks to the need for old-style software installations.

Microsoft needed to up its game in areas beyond email, and to its credit it has done so. SharePoint and OneDrive are much improved. In addition, the company has introduced a range of additional applications, including StaffHub for managing staff schedules, Planner for project planning and task assignment, and PowerApps for creating custom applications without writing code.

We have also seen a boost to the cloud-based Dynamics suite thanks to synergy between this and Office 365.

Having lots of features is one thing, winning adoption is another. Microsoft lacked a unifying piece that would integrate these various elements into a form that users could easily embrace. Teams is that piece. Introduced in March 2017, I initially thought there was nothing much to it: just a new user interface for existing features like SharePoint sites and Office 365/Exchange groups, with yet another business messaging service alongside Skype for Business and Yammer.

Software is about usability as much as, or more than, features though, and Teams caught on. Users quickly demanded deeper integration between Teams and other parts of Office 365. It soon became obvious that from the user’s perspective there was too much overlap between Teams and Skype for Business, and in September 2017 Microsoft announced that Teams would replace Skype for Business, though this merging of two different tools is not yet complete.

image

To see why Teams has such potential you need only click Add a tab in the Windows client. Your screen fills with stuff you can add to a Team, from document links to Planner to third-party tools like Trello and Evernote.

image

This is only going to grow. Users will open Teams at the beginning of the day and live there, which is exactly the point Gartner is making in its attention-grabbing title.

A good thing? Well, collaboration is good, and so is making better use of what you are paying for with an Office 365 subscription, so it has merit.

The part that troubles me is that we are losing diversity as well as granting Microsoft a firmer hold on its customers.

It all started with email, remember. But email is a disaster, replete with unwanted marketing, malware links, and a mass of communications that might have some value but which life is too short to investigate. In the consumer world, people prefer the safer world of Facebook Messenger or WhatsApp, where messages are more likely to be wanted. Email is also ancient, hard to extend with new features, and generally insecure.

Business-oriented messaging software like Slack and now Teams has moved in, giving users a safer and more usable way of communicating with colleagues. Consumers prefer Facebook’s walled garden to the internet jungle, and business users are no different.

It is a trade-off though. Email, for all its faults, is open and has multiple providers. Teams is not.

This will not stop Teams from succeeding, even though there are plenty of outstanding user requests and considerable dissatisfaction with the current release. Performance can be poor, the clients for Mac and mobile are not as good as the one for Windows, and there is no Linux client at all.

Third parties with applications or services that make sense in the Teams environment should hasten to make their stuff available there.

Amazon offering Linux desktops as a service in WorkSpaces

Amazon Web Services now offers Linux desktops as part of its WorkSpaces desktop-as-a-service offering.

The distribution is called Amazon Linux 2 and includes the MATE desktop environment.

image

Most virtual desktops run Windows, because most of the applications people want to run from virtual desktops are Windows applications. A virtual desktop plugs the gap between what you can do on the device you have in front of you (whether a laptop, Chromebook, iPad or whatever) and what you can do in an office with a desktop PC.

It seems that Amazon have developers in mind to some extent. Evangelist Jeff Barr (from whom I have borrowed the screenshot above) notes:

The combination of Amazon Linux WorkSpaces and Amazon Linux 2 makes for a great development environment. You get all of the AWS SDKs and tools, plus developer favorites such as gcc, Mono, and Java. You can build and test applications in your Amazon Linux WorkSpace and then deploy them to Amazon Linux 2 running on-premises or in the cloud.

Still, there is no reason not to use it for general productivity applications; it works out a bit cheaper than Windows thanks to the removal of Microsoft licensing costs. It is ideal for frustrated Google Chromebook users who want access to a less locked-down OS.

Notes from the field: Windows Time Service interrupts email delivery

A business with Exchange Server noticed that email was not flowing. The internet connection was fine, and all the servers were up and running, including Exchange 2016. Email had been fine just a few hours earlier. What was wrong?

The answer, or the beginning of the answer, was in the Event Viewer on the Exchange Server. Event ID 1035, only a warning:

Inbound authentication failed with error UnexpectedExchangeAuthBlobCheckForClockSkew for Receive connector Default Mailbox Delivery

Hmm. A clock problem, right? It turned out that the PDC for the domain was five minutes fast. By default, Kerberos tolerates a maximum clock skew of five minutes, so this was enough to trigger authentication failures. Result: no email. We fixed the time, restarted Exchange, and everything worked.
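A quick way to quantify skew like this is w32tm’s stripchart mode, run from the affected server against the suspect machine (the hostname here is hypothetical):

```shell
:: Sample the offset between this machine and the PDC five times;
:: the reported value is the difference in seconds
w32tm /stripchart /computer:PDC01 /samples:5 /dataonly
```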

Why was the PDC running fast? The PDC was configured to get time from an external source, apparently, and all other servers to get their time from the PDC. Foolproof?

Not so. If you typed:

w32tm /query /status

at a command prompt on the PDC (not the Exchange Server, note), it reported:

Source: Free-running System Clock

Oops. Despite efforts to do the right thing in the registry, setting the Type key in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Parameters to NTP and entering a suitable list of time servers in the NtpServer key, it was actually getting its time from the server clock. This being a Hyper-V VM, that meant the clock on the host server, which – no surprise – was five minutes fast.
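You can inspect those registry values from an elevated command prompt. For the Type value, NTP means manual peers, NT5DS means sync from the domain hierarchy, and NoSync means the clock free-runs:

```shell
:: Show how the Windows Time service is configured
reg query HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters /v Type
reg query HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters /v NtpServer
```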

You can check for this error by typing:

w32tm /resync

at the command prompt. If it says:

The computer did not resync because no time data was available.

then something is wrong with the configuration. If it succeeds, check the status as above and verify that it is querying an internet time server. If it is not querying a time server, run a command like this:

w32tm /config /update /manualpeerlist:"0.pool.ntp.org,0x8 1.pool.ntp.org,0x8 2.pool.ntp.org,0x8 3.pool.ntp.org,0x8" /syncfromflags:MANUAL

until you have it right.

Note this is ONLY for the server with the PDC Emulator FSMO role. Other servers should be configured to get time from the PDC.
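Putting it together, here is a sketch of the two configurations, run from an elevated prompt (the pool servers are examples; substitute your own):

```shell
:: On the PDC Emulator only: sync from external NTP servers and advertise
:: this machine as a reliable time source
w32tm /config /manualpeerlist:"0.pool.ntp.org,0x8 1.pool.ntp.org,0x8" /syncfromflags:MANUAL /reliable:yes /update

:: On every other domain member: follow the domain hierarchy instead
w32tm /config /syncfromflags:domhier /update

:: Then restart the time service and force a resync
net stop w32time && net start w32time
w32tm /resync
```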

Time server problems seem to be common on Windows networks, despite the existence of lots of documentation. There are also various opinions on the best way to configure Hyper-V, which has its own time synchronization service. There is a piece by Eric Siron here on the subject, and I reckon his approach is a safe one (Hyper-V Synchronization Service OFF for the PDC Emulator, ON for every other VM).

I love his closing remarks:

The Windows Time service has a track record of occasionally displaying erratic behavior. It is possible that some of my findings are not entirely accurate. It is also possible that my findings are 100% accurate but that not everyone will be able to duplicate them with 100% precision. If working with any time sensitive servers or applications, always take the time to verify that everything is working as expected.

Inside Azure Cosmos DB: Microsoft’s preferred database manager for its own high-scale applications

At Microsoft’s Build event in May this year I interviewed Dharma Shukla, Technical Fellow for the Azure Data group, about Cosmos DB. I enjoyed the interview but have not made use of the material until now, so even though Build was some time back I wanted to share some of his remarks.

Cosmos DB is Microsoft’s cloud-hosted NoSQL database. It began life as DocumentDB, and was re-launched as Cosmos DB at Build 2017. There are several things I did not appreciate at the time. One was how much use Microsoft itself makes of Cosmos DB, including for Azure Active Directory, the identity provider behind Office 365. Another was how low Cosmos DB sits in the overall Azure cloud system. It is a foundational piece, as Shukla explains below.

image

There were several Cosmos DB announcements at Build. What’s new?

“Multi-master is one of the capabilities that we announced yesterday. It allows developers to scale writes all around the world. Until yesterday Cosmos DB allowed you to scale writes in a single region but reads all around the world. Now we allow developers to scale reads and writes homogeneously all round the world. This is a huge deal for apps like IoT, connected cars, sensors, wearables. The amount of writes are far more than the amount of reads.

“The second thing is that now you get single-digit millisecond write latencies at the 99 percentile not just in one region.

“And the third piece is what falls out of this: high availability. The window of failover, the time it takes to fail over from one region to the other when a disaster happens, has shrunk significantly.

“It’s the only system I know of that has married the high consistency models that we have exposed with multi-master capability as well. It had to reach a certain level of maturity, testing it with first-party Microsoft applications at scale and then with a select set of external customers. That’s why it took us a long time.

“We also announced the ability to have your Cosmos DB database in your own VNet (virtual network). It’s a huge deal for enterprises where they want to make sure that no data leaks out of that VNet. To do it for a globally distributed database is especially hard because you have to close all the transitive networking dependencies.”

image
Technical Fellow Dharma Shukla

Does Cosmos DB work on Azure Stack?

“We are in the process of going to Azure Stack. Azure Stack is one of the top customer asks. A lot of customers want a hybrid Cosmos DB on Azure Stack as well as in Azure and then have Active – Active. One of the design considerations for multi master is for edge devices. Right now Azure has about 50 regions. Azure’s going to expand to let’s say 200 regions. So a customer’s single Cosmos DB table spanning all these regions is one level of scalability. But the architecture is such that if you directly attach lots of Azure Stack devices, or you have sensors and edge devices, they can also pretend to be replicas. They can also pretend to be an Azure region. So you can attach billions of endpoints to your table. Some of those endpoints could be Azure regions, some of them could be instances of Azure Stack, or IoT hub, or edge devices. This kind of scalability is core to the system.”

Have customers asked for any additional APIs into Cosmos DB?

“There is a list of APIs: HBase, richer SQL, there are a number of such API requests. The good news is that the system has been built in a way that adding new APIs is relatively easy. So depending on the demand we continue to add APIs.”

Can you tell me anything about how you’ve implemented Cosmos DB? I know you use Service Fabric. Do you use other Azure services?

“We have dedicated clusters of compute machines. Cosmos DB is a Ring 0 service, so any time Azure opens a new region, Cosmos DB clusters are provisioned by default. Just like compute and storage, Cosmos DB is one of the Ring 0 services, which are the bottommost. Azure Active Directory for example depends on Cosmos DB, so Cosmos DB cannot take a dependency on Active Directory.

“The dependency that we have is our own clusters and machines, on which we put Service Fabric. For deployment of Cosmos DB code itself, we use Service Fabric. For some of the load balancing aspects we use Service Fabric. The partition management, global distribution, replication, is our own. So Cosmos DB is layered on top of Service Fabric, it is a Service Fabric application. But then it takes over. Once the Cosmos DB bits are laid out on the machine then its replication and partition management and distribution pieces take over. So that is the layering.

“Other than that there is no dependency on Azure. And that is why one of the salient aspects of this is that you can take the system and host it easily in places like Azure Stack. The dependencies are very small.

“We don’t use Azure Storage because of that dependency. So we store the data locally and then replicate it. And all of that data is also encrypted at rest.”

So when you say it is not currently in Azure Stack, it’s there underneath, but you haven’t surfaced it?

“It is in a defunct mode. We have to do a lot of work to light it up. When we light it up on on-prem or private cloud devices, we want to enable this active-active pathway. So you are replicating your data and it is getting synchronized with the cloud, and Azure Stack is one of the sockets.”

Microsoft itself is using Cosmos DB. How far back does this go? Azure AD is quite old now. Was it always on Cosmos DB / DocumentDB?

“Over the years Office 365, Xbox, Skype, Bing, and more and more of Azure services, have started moving. Now it has almost become ubiquitous. Because it’s at the bottom of the stack, taking a dependency on it is very easy.

“Azure Active Directory consists of a set of microservices. So they progressively have moved to Cosmos DB. Same situation with Dynamics, and our slew of such applications. Skype is by and large on Cosmos DB now. There are still some fragments of the past.  Xbox and the Microsoft Store and others are running on it.”

Do you think your customers are good at making the right choices over which database technology to use? I do pick up some uncertainty about this.

“We are working on making sure that we provide that clarity. Postgres and MySQL and MariaDB and SQL Server, Azure SQL and elastic pools, managed instances, there is a whole slew of relational offerings. Then we have Cosmos DB and then lots of analytical offerings as well.

“If you have a relational app, and you are using a relational database, and you are migrating from on-prem to Azure, then we recommend the relational family. It comes with a fundamental scale caveat, which is that it goes up to 4TB. Most of those customers are settled because they have designed the app around those sorts of scalability limitations.

“A subset of those customers, and a whole bunch of brand new customers, are willing to re-write the app. They know that they want to come to the cloud for scale. So then we pitch Cosmos DB.

“Then there are customers who want to do massive-scale offline analytical processing. So there is Databricks, Spark, HD Insight, and that set of services.

“We realise there are grey lines between these offerings. We’re tightening up the guidance, it’s valid feedback.”

Any numbers to flesh out the idea that this is a fast-growing service for Microsoft?

“I can tell you that the number of new clusters we provision every week is far more than the total number of clusters we had in the first month. The growth is staggering.”

TalkTalk’s new Sagemcom FAST 5364 Router and WiFi Hub

TalkTalk has a new router available to its 4 million broadband customers in the UK. The router is made by Sagemcom and called the FAST 5364. The company will sell you one for £120 here but it comes for free if you get the Faster Fibre Broadband package; or for £30 with the Fast Broadband package.

TalkTalk’s previous router was the Huawei HG633, or for some luckier customers the HG635, or perhaps a D-Link DSL-3782. The HG633 is a poor product with slow WiFi performance and 100 Mbps Ethernet ports. The FAST 5364 looks like an effort to put things right. It is not worth £120 (you can get a better third-party router for that money) but it is well worth £30 as an upgrade.

The router comes in a smart box with a big emphasis on the step-by-step guide to getting started.

image

The router itself has a perforated plastic case with a flip-out stand. On the back are four Gigabit Ethernet ports, a WAN port, a VDSL/ADSL Broadband port, a WPS button and an on-off switch. There is also a recessed Reset button.

image

A handy feature is that the WiFi details are on a removable panel. The router admin password is on the back label but not on the removable panel – better for security.

image

Getting started

Presuming you are a TalkTalk customer, it should just be a matter of connecting the cables and turning on. In my case it took a little longer as I am not a TalkTalk consumer customer. I connected up, then logged into the admin at http://192.168.1.1 to enter my username and password for the internet connection, following which I was online. An LED on the front turns from amber to white to confirm.

There is an oddity though. The FAST 5364 has a red Ethernet port marked WAN. This should be suitable for connecting to a cable modem or any internet connection via Ethernet. However, when I tried to use it, the router did not connect this way but kept trying to connect via ADSL/VDSL. Either the port is deliberately disabled, or this is a firmware bug.

Performance and specification

The good news is that performance on the FAST 5364 is good. Here is the spec:

Antennas: 4×4 5GHz and 3×3 2.4GHz

WiFi: 2.4GHz Wi-Fi (802.11 b/g/n) and MU-MIMO 5GHz Wi-Fi (802.11 a/n/ac)

Broadband: ADSL2+ & VDSL2

A point of interest here is that the WiFi supports a technology called Beamforming. This uses an array of antennas to optimise the signal; it is called Beamforming because it shapes the beam according to the location of the client.

In addition, MU-MIMO (Multi-User, Multi-input, Multi-output) means that multiple WiFi streams are available, so multiple users can each have a dedicated stream. This means better performance when you have many users. TalkTalk claims up to 50 devices can connect with high quality.

Features

The FAST 5364 is managed through a web browser. Like many devices, it has a simplified dashboard along with “Advanced settings”.

From the simple dashboard, you can view status, change WiFi network name and password, and not much else.

If you click Manage my devices and then Manage advanced settings, you get to another dashboard.

Then you can click Access Control, where you get to manage the firewall, and set the admin password for the router.

Or you can click TalkTalk WI-Fi Hub, where you get more detailed status information, and can manage DHCP, Light control (literally whether the LED lights up or not), DNS (this sets the DNS server which connected clients use), DynDNS (which supports several dynamic DNS providers, not just DynDNS), Route for adding static routes, and Maintenance for firmware updates, logs, and setting an NTP server (so your router knows the time and date).

image

Or you can click Internet Connectivity so you can set a DNS server to be used on the WAN side as well as username, password and other settings if you cannot connect automatically.

Firewall and port forwarding

The firewall in your router is critically important for security. Further, users often want to configure port forwarding to enable multi-user online gaming or other services to work.

Dealing with this can be fiddly so most modern routers support a feature called UPnP which lets devices on your network request port forwarding automatically.

Personally I dislike UPnP because it is a security risk if an insecure device is present on your network (cheap security cameras are a notorious example). I like to control which ports are forwarded manually. That said, UPnP is better in some ways since it allows the same port to be forwarded to different devices depending on what is in use. It is a trade-off. Ideally you should be able to specify which devices are allowed to use UPnP but that level of control is not available here. Instead, you can turn UPnP on or off.

image

On the Port Forwarding screen, you can add rules manually, or select Games and Applications, which automatically sets the rules for the selected item if you specify its IP address on the network.

image

You can get to this same screen via Connected Devices, in which case the IP address of the selected device is pre-populated.

The Firewall management gives you four levels:

Low: Allow all traffic both LAN->WAN and WAN->LAN. Not recommended, but not quite as bad as it sounds since NAT will give you some protection.

Medium: Allow all traffic LAN->WAN. Block NETBIOS traffic WAN->LAN. This is the default. More relaxed than I would like, presuming it means that all other traffic WAN->LAN is allowed, which is the obvious interpretation.

image

High: Allow common protocols LAN->WAN. Block all traffic WAN->LAN. A good secure setting but could be annoying since you will not be able to connect to non-standard ports and will probably find some web sites or applications not working as they should.

image

Custom: This seems to be the High setting but shown as custom rules, with the ability to add new rules. Thus with some effort you could set a rule to allow all traffic LAN->WAN, and block all traffic WAN->LAN except where you add a custom rule. To my mind this should be the default.

Most home users will never find this screen so it seems that TalkTalk is opening up its customers to a rather insecure setup by default, especially if there are bugs discovered in the router firmware.

I am asking TalkTalk about this and will let you know the response.

Missing features

The most obvious missing feature, compared to previous TalkTalk routers, is the lack of any USB port to attach local storage. This can be useful for media sharing. It is no great loss though, as you would be better off getting a proper NAS device and attaching it to one of the wired Ethernet ports.

Next, there is no provision for VPN connections. Of course you can set up a VPN endpoint on another device and configure the firewall to allow the traffic.

I cannot see a specific option to set a DHCP reservation, though I suspect this happens automatically. This is important when publishing services or even games, as the internal IP must not change.

There is no option to set a guest WiFi network, with access to the internet but not the local network.

Overall I would describe the router and firewall features as basic but adequate.

TalkTalk vs third party routers

Should you use a TalkTalk-supplied router, or get your own? There are really only a couple of reasons to use the TalkTalk one. First, it comes free or at a low price with your broadband bundle. Second, if you need support, the TalkTalk router is both understood and manageable by TalkTalk staff. Yes, TalkTalk can access your router, via the TR-069 protocol designed for this purpose (and which you cannot disable, as far as I can tell). If you want an easy life with as much as possible automatically configured, it makes sense to use a TalkTalk router.

That said, if you get a third-party router you can make sure it has all the features you need and configure it exactly as you want. These routers will not be accessible by TalkTalk staff. I would recommend this approach if you have anything beyond basic connectivity needs, and if you want the most secure setup. Keep a TalkTalk router handy in case you need to connect it for the sake of a support incident.

Final remarks

TalkTalk users are saying that the new router performs much better than the old ones (though this is not a high bar). For example:

“this is a very very good router with strong stable wifi. It is a massive upgrade to any of the routers supplied currently and its not just the wifi that is better. I get 16 meg upload now was 14 before”

That sounds good, and really this is a much better device than the previous TalkTalk offerings.

My main quibble is over the questionable default firewall settings. The browser UI is not great but may well improve over time. Inability to use the WAN port with a cable modem is annoying, and it would be good to see a more comprehensive range of features, though given that most users just want to plug in and go, a wide range of features is not the most important thing.

On Face Unlock

Face unlock is a common feature on premium (and even mid-range) devices today. Notable examples are Apple with the iPhone X, Microsoft with Windows Hello (when fully implemented with a depth-sensing camera like Intel RealSense), and Android phones including the Samsung Galaxy S9, OnePlus 6, Huawei P20, Honor View 10 and Honor 10 AI.

I’ve been trying the Honor 10 AI and naturally enabled Face Unlock, passing warnings that it is less secure than a PIN or password. Why less secure? It is not stated, but a typical issue is being able to log in with a picture of the normal user (this would not work with Windows Hello).

Security is an issue, but I was also interested in how desirable this is as a feature. So far I am not convinced. Technically it works reasonably well. It is not 100% effective, especially in bright sunlight or dim light, but most of the time it successfully unlocks the Honor phone. It is all the more impressive because I sometimes wear glasses, and it works whether or not I am wearing them.

image

I enjoyed face unlock at first, since it removes a bit of friction in day to day use. Then I came across annoyances. Sometimes the face recognition takes longer than a PIN, if the lighting conditions are not optimal, and occasionally it fails. It has introduced a touch of uncertainty to the unlock process, whereas the PIN is fully reliable and controllable. I tried the optional “wake on pick up” feature and again had a mixed experience; sometimes the phone would light up and unlock when I did not need it.

Conclusion? It is something I can easily live without so a low priority when choosing a new phone. Whereas fingerprint unlock, now that the technology has matured to the point of high reliability, is something I still enjoy.

Manage your privacy online through cookie settings? You must be joking.

Since the deadline passed for the enforcement of the EU’s GDPR (General Data Protection Regulation), most major web sites have revamped their privacy settings with new privacy policies and more options for controlling how your personal data is used. Unfortunately, the options offered are in many cases too obscure, too complex and too time-consuming to be of any practical value.

Recital 32 of the GDPR says:

Consent should be given by a clear affirmative act establishing a freely given, specific, informed and unambiguous indication of the data subject’s agreement to the processing of personal data … this could include ticking a box when visiting an internet website … silence, pre-ticked boxes or inactivity should not indicate consent.

I am sure the controls on offer via major web properties are the outcome of legal advice; at the same time, as a non-legal person I struggle on occasion to see how they meet the requirements or the spirit of the legislation. For example, another part of Recital 32 says:

… the request must be clear, concise, and not unnecessarily disruptive to the use of the service for which it is provided.

This post describes what I get if I go to technology news site zdnet.com and it detects that I have not agreed to its cookie management.

Note: before I continue, let me emphasize that there is lots of great content on zdnet, some written by people I know; the site as far as I know is doing its best to make business sense of providing such content, in what has become a hostile environment for professional journalism. I would like to see fundamental change in this environment but that is just wishful thinking.

That said, this is one of the worst experiences I have found for privacy-seeking users. Here is the initial banner:

image

Naturally I click Manage Settings.

Now I get a scrolling dialog from CBS Interactive, with a scroll gadget that indicates that this is a loooong document:

image

There is also some puzzling news. There are a bunch of third parties whose cookies are apparently necessary for “our sites, products and services to function correctly.” These include cookies for analytics and also for Google ad-serving. I am not clear why these third parties perform functions which are necessary to read a technical news site, but there we are.

I scroll down and reach a button that lets me opt out of being tracked by the third party advertisers using zdnet.com, or so it seems:

image

I want to opt out, so I click. Some of the options below are unchecked, but not many. Most of the options say “Opt out through company”.

It also seems pretty technical to me. Am I meant to understand what a “Demand Side Platform” is?

image

I counted the number of links that say “opt out through company”. There are 63 of them.

I click the first one, called Adform. Naturally, the first thing I see is a request to agree to (or at least click OK on) their Cookie Policy.

image

I click to read the policy (remember this is only the first of 63 sites I have to visit). I am not offered any sort of settings, but invited to visit youronlinechoices or aboutads.info.

image

Well, I don’t want anything to do with Adform and don’t intend to return to the site. Maybe I can ignore the Adform Cookie Policy and just focus on the opt-out button above it.

image

Currently I am “Opted-in”. This is a lie: I have never opted in. Rather, I have failed to opt out, until I click the button. Opting out will in fact set a cookie, so that Adform knows I have opted out. I am also reminded that this opt-out only applies to this particular browser on this particular device. On all other browsers and/or devices, I will still be “opted in”.
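The mechanics here are worth spelling out: an “opt-out” in this scheme is itself just another cookie stored in one browser profile, which is why clearing cookies or switching devices silently puts you back in the tracked state. A minimal Python sketch of the logic (the cookie name and return values are invented for illustration; this is not Adform’s actual implementation):

```python
# Sketch of how an opt-out cookie scheme behaves. The cookie name
# "opt_out" and the return strings are invented for this example.

def handle_request(cookies: dict) -> str:
    """Decide whether to track, given the cookies one browser sends."""
    if cookies.get("opt_out") == "1":
        return "no tracking"      # opted out, on this browser only
    return "track user"           # the default state is "opted in"

# A fresh browser profile sends no cookies, so tracking is on by default.
fresh_profile = {}
assert handle_request(fresh_profile) == "track user"

# Clicking the opt-out button sets the cookie in *this* profile only.
fresh_profile["opt_out"] = "1"
assert handle_request(fresh_profile) == "no tracking"

# A second browser or device has its own cookie jar and is still tracked.
other_device = {}
assert handle_request(other_device) == "track user"
```

Deleting cookies, the standard privacy advice, therefore also deletes the record of your opt-out.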

OK, one down, 62 to go. However scrolling further down the list I get some bad news:

image

In some cases, it seems, “this partner does not provide a cookie opt-out”. The best I can do is to “visit their privacy policy for more information”. This will require a search, since the link is not clickable.

How to control your privacy

What should you do if you do not want to be tracked? Attempting to follow the industry-provided opt-outs is hopeless. It is mostly PR and legal box-ticking.

If you do not want to be tracked, use a VPN, use ad blockers, and delete all cookies at the end of each browsing session. This will be tedious though: your browsing experience will be one of constant “I agree” dialogs, some of which you may be able to ignore, and others for which you must either click I Agree or endure a myriad of semi-functional links and settings.
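The cookie-deletion step can at least be scripted. As a sketch, here is a Python script that removes Firefox’s per-profile cookie database after the browser has closed; the ~/.mozilla/firefox location and *.default* profile naming are typical on Linux and will differ on other systems, so treat the paths as assumptions:

```python
# Sketch: clear Firefox's per-profile cookie store once the browser has
# closed. Firefox keeps cookies in cookies.sqlite inside each profile
# folder; the directory layout assumed below is the usual Linux one.
import glob
import os

def clear_firefox_cookies(firefox_dir: str) -> list:
    """Delete cookies.sqlite in every profile; return the paths removed."""
    removed = []
    pattern = os.path.join(firefox_dir, "*.default*", "cookies.sqlite")
    for db in glob.glob(pattern):
        os.remove(db)    # Firefox recreates an empty store on restart
        removed.append(db)
    return removed

if __name__ == "__main__":
    clear_firefox_cookies(os.path.expanduser("~/.mozilla/firefox"))
```

Run it from a logout script or cron job, only after Firefox has exited; deleting the database while the browser is open is unsafe.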

Maybe the EU GDPR legislation is unreasonable. Maybe we have been backed into this corner by allowing the internet to be dominated by a few giant companies. All we can state for sure is that the current situation is hopelessly broken, from a privacy and usability perspective.

Is Ron Jeffries right about the shortcomings of Agile?

A post from InfoQ alerted me to this post by Agile Manifesto signatory Ron Jeffries with the rather extreme title “Developers should abandon Agile”.

If you read the post, you discover that what Jeffries really objects to is the assimilation of Agile methodology into the old order of enterprise software development, complete with expensive consultancy, expensive software that claims to manage Agile for you, and the usual top-down management.

All this goes to show that it is possible to do Agile badly; or more precisely, to adopt something that you call Agile but in reality is not. Jeffries concludes:

Other than perhaps a self-chosen orientation to the ideas of Extreme Programming — as an idea space rather than a method — I really am coming to think that software developers of all stripes should have no adherence to any “Agile” method of any kind. As those methods manifest on the ground, they are far too commonly the enemy of good software development rather than its friend.

However, the values and principles of the Manifesto for Agile Software Development still offer the best way I know to build software, and based on my long and varied experience, I’d follow those values and principles no matter what method the larger organization used.

I enjoyed a discussion on the subject of Agile with some of the editors and writers at InfoQ during the last London QCon event. Why is it, I asked, that Agile is no longer at the forefront of QCon, when a few years back it was at the heart of these events?

The answer, broadly, was that the key concepts behind Agile are now taken for granted so that there are more interesting things to discuss.

While this makes sense, it is also true (as Jeffries observes) that large organizations will tend to absorb these ideas in name only, and continue with dark methods if that is in their culture.

The core ideas in Extreme Programming are, it seems to me, sound. Working in small chunks, forming a team that includes the customer, releasing frequently and delivering tangible benefits, automated tests and continuous refactoring, planning future releases as you go rather than in one all-encompassing plan at the beginning of a project: these are fantastic principles and revolutionary when you first come across them. See here for Jeffries’ account of what Extreme Programming is.
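Of those principles, automated tests are the easiest to show in code, and they are what make the continuous refactoring safe. A minimal illustration in Python (the word_count function and its tests are invented for this example):

```python
# A tiny example of the XP idea that tests guard refactoring. The
# function below is trivial and invented purely for illustration.

def word_count(text: str) -> int:
    """Count whitespace-separated words in a string."""
    return len(text.split())

# Tests written alongside the code make later refactoring safe:
# rewrite word_count however you like, and these must still pass.
assert word_count("") == 0
assert word_count("one two three") == 3
assert word_count("  padded   spacing ") == 2
```

The point is not the function but the loop: small change, run the tests, refactor with confidence, repeat.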

These ideas have everything to do with how the team works and little to do with specific tools (though it is obvious that things like a test framework, DevOps strategy and so on are needed).

Equally, you can have all the best tools but if the team is not functioning as envisaged, the methodology will fail. This is why software development methodology and the psychology of human relationships are intimately linked.

Real change is hard, and it is easy to slip back into bad practices, which is why we need to rediscover Agile, or something like it, repeatedly. Maybe the Agile word itself is not so helpful now; but the ideas are as strong as ever.
