Tag Archives: privacy

The price of free Wi-Fi, and is it a fair deal?

Here we are in a pub trying to get on the Wi-Fi. The good news: it is free.


But the provider wants my mobile number. I am a little wary. I hate being called on my mobile, other than by people I want to hear from. Let’s have a look at the T&C. Luckily, this really is free.


But everything has a cost, right? Let’s have a look at that “privacy” policy. I put privacy in quotes because in reality such policies are often bad news for your privacy.


Now we get to the heart of it. And I don’t like it. Here we go:

“You also agree to information about you and your use of the Service including, but not limited to, how you conduct your account being used, analysed and assessed by us and the other parties identified in the paragraph above and selected third parties for marketing purposes”

[You give permission to us and to everyone else in the world that we choose to use your data for marketing]

“…including, amongst other things, to identify and offer you by phone, post, our mobile network, your mobile phone, email, text (SMS), media messaging, automated dialling equipment or other means, any further products, services and offers which we think might interest you.”

[You give permission for us to spam you with phone calls, texts, emails, automated dialling and any other means we can think of]

“…If you do not wish your details to be used for marketing purposes, please write to The Data Controller, Telefónica UK Limited, 260 Bath Road, Slough, SL1 4DX stating your full name, address, account number and mobile phone number.”

[You can only escape by writing to us with old-fashioned pen and paper and a stamp, and note that you have to include the account number for an account you likely have no idea you even have; and even then, who is to say whether those selected third parties will treat your personal details with equal care and concern?]

A fair deal?

You get free Wi-Fi, O2 gets the right to spam you forever. A fair deal? It could be OK. Maybe there won’t in fact be much spam. And since you only give your mobile number, you probably won’t get email spam (unless some heartless organisation has a database linking the two, or you are persuaded to divulge it).

In the end it is not the deal itself I object to; that is my (and your) decision to make. What I dislike is that the terms are hidden. The clause you are most likely to care about is clause 26, and to find it you have to not only view the terms but also scroll right down.

And why is the opt-out available by post only? There is only one reason I can think of: to make it difficult.

Privacy and online data sharing is a journey into the unknown: report from QCon London

I’m at QCon London, an annual developer conference which is among my favourites thanks to its vendor-neutral content.

One session which stood out for me was from Robin Wilton, Director for Identity and Privacy at the Internet Society, who spoke on “Understanding and managing your Digital Footprint”. I should report dissatisfaction, in that we only skated over the surface of “understanding” and got nowhere close to “managing”. I will give him a pass though, for his eloquent refutation of the common assumption that privacy is unimportant if you are doing nothing wrong. If you have nothing to hide you are not a social being, countered Wilton, explaining that humans interact by choosing what to reveal about themselves. Loss of privacy leads to loss of other rights.


In what struck me as a bleak talk, Wilton described the bargain we make in using online services (our data in exchange for utility) and explained our difficulty in assessing the risks of what we share online and even offline (such as via cameras, loyalty cards and so on). Since the risks are remote in time and place, we cannot evaluate them. We have no control over what we share beyond “first disclosure”. The recipients of our data do not necessarily serve our interests, but rather their own. Paying for a service is no guarantee of data protection. We lack the means to separate work and personal data; you set up a LinkedIn account for business, but then your personal friends find it and ask to be contacts.

Lest we underestimate the amount of data held on us by entities such as Facebook and Google, Wilton reminded us of Max Schrems, who made a Subject Access Request to Facebook and received 1200 pages of data.

When it came to managing our digital footprint though, Wilton had little to offer beyond vague encouragement to increase awareness and take care out there.

Speaking to Wilton after the talk, I suggested an analogy with climate change or pollution, on the basis that we know we are not doing it right, but are incapable of correcting it and can only work towards mitigation of whatever known and unknown problems we are creating for ourselves.

Another issue is that our data is held by large commercial entities with strong lobbying teams and there is little chance of effective legislation to control them; instead we get futility like the EU cookie legislation.

There is another side to this, which Wilton did not bring out, concerning the benefit to us of sharing our data, both at a micro level (we get Google Now) and in aggregate (we may cure diseases). This is arguably the next revolution in personal computing; or put another way, maybe the bargain is to our advantage after all.

That said, I do not believe we have enough evidence to make this judgment and much depends on how trustworthy those big commercial entities prove to be in the long term.

Good to see this discussed at QCon, despite a relatively small attendance at Wilton’s talk.

Privacy, Google Now, Scroogled, and the connected world

2013 saw the launch of Google Now, a service which aspires to alert you to information you care about at just the right time. Rather than mechanical reminders of events 15 minutes before start time, Google Now promises to take into account location, when you are likely to have to leave to arrive where you want to be, and personal preferences. Much of its intelligence is inferred from what Google knows about you through your browsing patterns, searches, location, social media connections and interactions, and (following Google’s acquisition of Nest, which makes home monitoring kit) who knows what other data might be gathered.

It is obvious that users are being invited to make a deal. Broadly, the offer is that if you hand over as much of your personal data to Google as you can bear, then in return you will get services that will make your life easier. The price you pay, loss of privacy aside, is more targeted advertising.

There could be other hidden costs. Insurance is one that intrigues me. If insurance companies know everything about you, they may be able to predict more accurately what bad things are likely to happen to you and make insuring against them prohibitively expensive.

Another issue is that the more you use Google Now, the more benefit there is in using Google services versus their competitors. This is another example of the winner-takes-all effect which is commonplace in computing, though it is a different mechanism. It is similar to the competitive advantage Google has already won in search: it has more data, therefore it can more easily refine and personalise search results, therefore it gets more data. However this advantage is now extended to calendar, smartphone, social media, online shopping and other functions. I would expect more future debate on whether it is fair for one company to hold all these data. I have argued before about Google and the case for regulation.

This is all relatively new, and there may be – probably are – other downsides that we have not thought of.

Microsoft in 2013 chose to highlight the privacy risks (among other claimed deficiencies) of engaging with Google through its Scroogled campaign.


Some of the concerns raised are valid; but Microsoft is the wrong entity to do this, and the campaign betrays its concern over more mundane risks like losing business: Windows to Android or Chrome OS, Office to Google Docs, and so on. Negative advertising rarely impresses, and I doubt that Scroogled will do much either to promote Microsoft’s services or to disrupt Google. It is also rather an embarrassment.

The red box above suits my theme though. What comes to mind is what in hindsight is one of the most amusing examples of wrong-headed legislation in history. In 1865 the British Parliament passed the first of three Locomotive Acts regulating “road locomotives” or horseless carriages. It limited speed to 4 mph in the country and 2 mph in the town, and required a man carrying a red flag to walk in front of certain types of vehicles.


The reason this is so amusing is that having someone walk in front of a motorised vehicle limits the speed of the vehicle to that of the pedestrian, negating its chief benefit.

How could legislators be so stupid? The answer is that they were not stupid and they correctly identified real risks. Motor vehicles can and do cause death and mayhem. They have changed our landscape, in many ways for the worse, and caused untold pollution.

At the same time, the motor vehicle has been a huge advance in civilisation, enabling social interaction, trade and leisure opportunities that we could not now bear to lose. The legislators saw the risks, but had insufficient vision to see the benefits – except that over time, and inevitably, speed limits and other restrictions were relaxed so that motor vehicles were able to deliver the benefits of which they were capable.

My reflection is whether the fears into which the Scroogled campaign attempts to tap are similar to those of the Red Flag legislators. The debate around privacy and data sharing should not be driven by fear, but should rather be about how to enable the benefits while figuring out what regulation is necessary. And there is undoubtedly a need for some regulation, just as there is today for motor vehicles – speed limits, safety belts, parking restrictions and all the rest.

Returning for a moment to Microsoft: it seems to me that another risk of its Scroogling efforts is that it positions itself as the red flag rather than the horseless carriage. How is that going to look ten years from now?

Google’s privacy campaign, and three ways in which Google gets your data

Google is campaigning to reassure us that its Chrome browser is, well, no worse at recording your every move on the web than any other browser.

Using Chrome doesn’t mean sharing more information with Google than using any other browser

says a spokesman in this video, part of a series on Google Chrome & Privacy.


What then follows are links to four other videos describing the various ways in which Google Chrome records your web activity.

If you subtract the spin, the conclusion is that Google retrieves a large amount of data from you, especially if you stick with the default settings. Further, it is not possible, as far as I know, to use the browser without sending any data to your default search provider, most likely Google. The reason is the Omnibox, the combined address and search box. Here’s what Google’s Brian Rakowski says in the video on Google Chrome & Privacy – Browsers search and suggestions:

For combined search and web address to work, input in the Omnibox will need to be sent to your search provider to return suggestions. If you have chosen Google as your search provider, only around 2% of the search input is logged and used to improve Google’s suggestion service. Rest assured that this data is anonymised as soon as possible within 24 hours, and you always have the option of disabling the suggest feature at any time.

However, even if you disable suggestions, what you type in the box still gets sent to your search provider if it is not a valid web address, in other words anything that is not a complete URL (though Chrome will infer the http:// prefix).
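The URL-versus-search decision can be sketched as a simple heuristic. This is a hypothetical illustration of the principle, not Chrome’s actual logic (real browsers use far richer parsing rules):

```python
def omnibox_route(text: str) -> str:
    """Decide, roughly as an omnibox might, whether input is
    navigated to directly or sent to the search provider.
    Illustrative sketch only; not Chrome's real algorithm."""
    text = text.strip()
    # A scheme already present means it is treated as an address.
    if text.startswith(("http://", "https://")):
        return f"navigate: {text}"
    # No spaces plus a dot suggest a hostname; infer the http:// prefix.
    if " " not in text and "." in text:
        return f"navigate: http://{text}"
    # Everything else goes to the search provider.
    return f"search: {text}"

print(omnibox_route("example.com"))    # → navigate: http://example.com
print(omnibox_route("free wifi pub"))  # → search: free wifi pub
```

The point is the last branch: anything that does not look like an address is, by design, transmitted to the search provider, whatever your suggestion settings.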

It is also worth noting that Google does not only get your data via browser features. Most web pages today are not served from a single source. They include scripts that serve data from other locations, which means that your browser requests it, which means that these other locations know your IP number, browser version and so on. Two of the most common sources for such scripts are Google AdSense (for advertising) and Google Analytics (for analysing web traffic).

Even if you contrive not to tell Google in advance where you are going, it will probably find out when you get there.
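You can see this for yourself by listing the third-party script origins embedded in a page, each of which receives a request (carrying your IP number and browser details) when the page loads. A minimal sketch using Python’s standard-library HTML parser; the page markup and the example.com host are invented, though the two Google script hosts are real:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ScriptOriginFinder(HTMLParser):
    """Collect the hosts of external <script> tags; each such host
    gets a request from your browser when the page is rendered."""
    def __init__(self, page_host: str):
        super().__init__()
        self.page_host = page_host
        self.third_party = set()

    def handle_starttag(self, tag, attrs):
        if tag != "script":
            return
        src = dict(attrs).get("src")
        if src:
            host = urlparse(src).netloc
            # Relative URLs have no host; same-host scripts are first-party.
            if host and host != self.page_host:
                self.third_party.add(host)

# Invented page markup; the AdSense and Analytics script URLs are real.
html = """
<html><body>
<script src="https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js"></script>
<script src="https://www.google-analytics.com/analytics.js"></script>
<script src="/local.js"></script>
</body></html>
"""
finder = ScriptOriginFinder("example.com")
finder.feed(html)
print(sorted(finder.third_party))
# → ['pagead2.googlesyndication.com', 'www.google-analytics.com']
```

Run against a typical commercial web page, a scan like this usually turns up several Google-owned hosts, which is the mechanism behind the point above.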

It is important to distinguish what Google can do from what it does do. Note the language in Rakowski’s explanation above. When he says input is sent to your search provider, he is describing the technology. When he says that data is anonymised as soon as possible, he is asking us to trust Google.

Note also that if you ask to send in auditors to verify that Google is successfully anonymising your data, it is likely that your request will be refused.

There are ways round all these things, but most of us have to accept that Google is getting more than enough data from us to create a detailed profile. Therefore the secondary question, of how trustworthy the company is, matters more than the first one, about how it gets the data.