Tag Archives: security

Let’s Encrypt: a quiet revolution

Any website that supports SSL (an HTTPS connection) requires a digital certificate. Until relatively recently, obtaining a certificate meant one of two things. You could generate your own, which works fine in terms of encrypting the traffic, but results in web browser warnings for anyone outside your organisation, because the issuing authority is not trusted. Or you could buy one from a certificate provider such as Symantec (VeriSign), Comodo, GeoTrust, DigiCert or GoDaddy. These certificates vary in price from fairly cheap to very expensive, with the differences opaque to many users.

Let’s Encrypt is a project of the Internet Security Research Group, a non-profit organisation founded in 2013 and sponsored by firms including Mozilla, Cisco and Google Chrome. Obtaining certificates from Let’s Encrypt is free, and they are trusted by all major web browsers.

image

Last month Let’s Encrypt announced forthcoming support for wildcard certificates, along with some statistics: 46 million active certificates, and plans to double that in 2018. The post also notes that the latest figures from Firefox telemetry indicate that over 65% of web page loads now use HTTPS.

image
Source: https://letsencrypt.org/stats/

Let’s Encrypt only started issuing certificates in January 2016 so its growth is spectacular.

The reason is simple. Let’s Encrypt is saving the IT industry a huge amount in both money and time. Money, because its certificates are free. Time, because the whole system is designed for automation: once the right process is in place, certificates renew themselves without any human intervention.

I have heard it said that Let’s Encrypt certificates are not proper certificates. This is not the case; they are just as trustworthy as those from the other SSL providers, with the caveat that everything is automated. Some types of certificate, such as those for code-signing, have additional verification performed by a human to ensure that they really are being requested by the organisation claimed. No such thing happens with the majority of SSL certificates, for which the process is entirely automated by all the providers and typically requires only that the requester can receive email at the domain for which the certificate is issued. Let’s Encrypt uses other techniques, such as proof that you control the DNS for the domain, or that you can write a file to the website it serves. Certificates that require human intervention will likely never be free.

A Let’s Encrypt certificate is only valid for three months, whereas those from commercial providers last at least a year. Despite appearances, this is not a disadvantage. If you automate the process it is no inconvenience, and a certificate with a shorter life is more secure, since a compromised or mis-issued certificate is useful to an attacker for less time.
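As an illustration of how little work is involved, here is a minimal sketch using the Certbot client. This is only one of several ACME clients, and the domain and webroot path are placeholders; the first command proves control of the domain by writing a file to its website, and the cron entry then keeps the certificate renewed automatically.

# Obtain a certificate by writing a temporary challenge file to the site’s webroot
certbot certonly --webroot -w /var/www/example -d example.com -d www.example.com

# Example cron entry (twice daily); certbot renew only replaces certificates close to expiry
0 3,15 * * * certbot renew --quiet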

The ascendance of Let’s Encrypt is probably regretted both by the commercial certificate providers and by IT companies who make a bit of money from selling and administering certificates.

Let’s Encrypt certificates are issued in plain-text PEM (Privacy Enhanced Mail) format. Does that mean you cannot use them in Windows, which typically uses .cer or .pfx certificates? No, because it is easy to convert between formats. For example, you can use the openssl utility. Here is what I use on Linux to get a .pfx:

openssl pkcs12 -inkey privkey.pem -in fullchain.pem -export -out yourcert.pfx
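Once you have the .pfx, importing it into the Windows certificate store can also be scripted. A minimal PowerShell sketch, assuming the PKI module included in Windows 8/Server 2012 or later (the file name is a placeholder, and you are prompted for the password set during the export above):

# Import the converted certificate into the local machine’s Personal store
Import-PfxCertificate -FilePath .\yourcert.pfx -CertStoreLocation Cert:\LocalMachine\My -Password (Read-Host -AsSecureString "PFX password")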

If you have a website hosted for you by a third party, can you use Let’s Encrypt? Maybe, but only if the hosting company offers this as a service. They may not be in a hurry to do so, since there is a bit of profit in selling SSL certificates, but on the other hand, a far-sighted ISP might win some business by offering free SSL as part of the service.

Implications of Let’s Encrypt

Let’s Encrypt removes the cost barrier for securing a web site, subject to the caveats mentioned above. At the same time, Google is gradually stepping up warnings in the Chrome browser when you visit unencrypted sites:

Eventually, we plan to show the “Not secure” warning for all HTTP pages, even outside Incognito mode.

Google search is also apparently weighted in favour of encrypted sites, so anyone who cares about their web presence or traffic is or will be using SSL.

Is this a good thing? Given the trivia (or worse) that constitutes most of the web, why bother encrypting it, which is slower and takes more processing power (bad for the planet)? Note also that encrypting the traffic does nothing to protect you from malware, nor does it insulate web developers from security bugs such as SQL injection attacks – which is why I prefer to call SSL sites encrypted rather than secure.

The big benefit though is that it makes it much harder to snoop on web traffic. This is good for privacy, especially if you are browsing the web over public Wi-Fi in cafes, hotels or airports. However, it would be a mistake to imagine that browsing the public web over HTTPS makes you really private: the sites you visit are still getting your data, including Facebook, Google and various other advertisers who track your browsing.

In the end it is worth it, if only to reduce the number of times passwords are sent over the internet in plain text. Unfortunately people are still willing to send passwords by insecure email, so there is work yet to do.

Thoughts on Petya/NotPetya and two key questions. What should you do, and is it the fault of Microsoft Windows?

Every major IT security incident generates a ton of me-too articles, most of which lack meaningful content. Journalists receive a torrent of emails from companies or consultants hoping to be quoted, with insightful remarks like “companies should be more prepared” or “you should always keep your systems and security software patched and up to date.”

An interesting feature of NotPetya (which is also Not Ransomware, but rather a malware attack designed to destroy data and disrupt business) is that keeping your systems and security software patched and up to date in some cases did not help you. Note this comment from a user:

Updated Win10 CU with all new cumulative updates and Win10 Insider Fast latest were attacked and affected. Probably used “admin” shares but anyway – Defender from Enterprise just ignored virus shared through network.

Nevertheless, running a fully updated Windows 10 did mitigate the attack compared to running earlier versions, especially Windows 7.

Two posts about NotPetya which are worth reading are the technical analyses from Microsoft here and here. Reading these, it is hard not to conclude that the attack was an example of state-sponsored cyberwarfare primarily targeting Ukraine. The main factor behind this conclusion is the lack of financial incentive: there was no serious effort to collect payment, and payment could not in any case restore files. Note the following from Microsoft’s analysis:

The VictimID shown to the user is randomly generated using CryptGenRandom() and does not correspond to the MFT encryption, so the ID shown is of no value and is also independent from the per-drive file encryption ID written on README.TXT.

My observations are as follows.

1. You cannot rely on security software, nor on OS patching (though this is still critically important). Another example of this came in the course of reviewing the new SENSE consumer security appliance from F-Secure. As part of the test, I plucked out a recent email which asked me to download a virus (thinly disguised as an invoice) and tried to download it. I succeeded: it sailed past both Windows Defender and F-Secure. When I tested the file with VirusTotal, only 4 of 58 anti-virus engines detected it.

The problem is that competent new malware has a window of opportunity of at least several hours when it is likely not to be picked up. If during this time it can infect a significant number of systems and then spread by other means (as happened with both WannaCry and NotPetya) the result can be severe.

2. Check your backups. This is the most effective protection against malware. Further, backup is complicated. What happens if corrupted or encrypted files are backed up several times before the problem is spotted? This means you need a backup that can go back in time to several different dates. If your backup is always online, what happens if a network intruder is able to manage and delete your backups? This means you should have offline backups, or at least avoid having a single set of credentials which, if stolen, give an attacker full access to all your backups. What happens if you think you are backed up, but in fact critical files are not being backed up? This is common and means you must do a test restore from time to time, pretending that all your production systems have disappeared.
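As an aside, the “go back in time” requirement need not mean expensive software. Here is a minimal sketch of one way to do it on Linux, using rsync snapshots with hard links (the paths are placeholders; Windows admins have equivalents in incremental backup tools). An offline or off-site copy of the snapshots is still needed, for the reasons above.

# Create a dated snapshot; unchanged files are hard-linked to the previous snapshot,
# so each dated folder is a complete, browsable backup that takes little extra space
TODAY=$(date +%F)
rsync -a --link-dest=/backups/latest /srv/files/ "/backups/$TODAY/"
ln -sfn "/backups/$TODAY" /backups/latest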

3. If you are running Windows, run Windows 10. I am sorry to have to say this, since I recognise that in some respects Windows 7 has a more coherent design and user interface. But you cannot afford to miss out on the security work Microsoft has done in Windows 10, as the second Microsoft article referenced above spells out.

4. Is it the fault of Microsoft Windows? This is an interesting discussion point which deserves more attention. The simplistic argument against Windows is that most malware attacks exploit bugs in Windows, so it is partly Microsoft’s fault for creating the bugs, and partly your fault for running Windows. The more plausible argument is that the Windows monoculture in business gives criminals an easy target, with a huge array of tools and expertise on how to hack it easily available.

The issue is in reality a complex one, and we should credit Microsoft at least with huge efforts to make Windows more secure. Users, it must be noted, are in many cases resistant to these efforts, perceiving them as an unnecessary nuisance (User Account Control in Vista, for example); and historically third-party software vendors have often got in the way too, for instance by being slow or reluctant to apply digital signatures to drivers and applications.

Windows 8 was in part an effort to secure Windows by introducing a new, more secure model for applications. There are many reasons why it was unsuccessful, but the security side of the effort received too little recognition.

The answer then is also nuanced. If you run Windows you can do so with reasonable security, especially if you are serious about it and use features such as Device Guard, which whitelists trusted applications. If you switch your business to Mac or Linux, you might well escape the next big attack, not so much because the OS is inherently more secure, but because you become part of a smaller and less attractive target.

For a better answer, see the next observation.

5. Most users should run a locked-down operating system. This seems rather obvious. Users who are not developers, who use the same half a dozen applications day to day, are better and more safely served by running a computer in which applications are properly isolated from the operating system and on which arbitrary executables from unknown sources are not allowed to execute. Examples are iOS, Android, Chrome OS and Windows 10 S. Windows 10 Creators Update lets you move a little way in this direction by setting it to allow apps from the Store only:

image

There is a significant downside to running a locked-down operating system, especially as a consumer, in that you cede control of what you can and cannot install to the operating system vendor, as well as paying that vendor a fee for each paid-for installation. Android and iOS users live with this because it has always been that way, but in Windows the change of culture is difficult. Another issue is the limitations of the Windows Store app platform, though these matter less now thanks to the Desktop Bridge, which means almost any application can become a Store application. In gaming there is a problem with Steam, which is an entire third-party store system (apparently Steam bypasses the Windows 10 control panel restriction, though it does not run on Windows 10 S). Open source applications are another problem, since few are available in the Windows Store, though this could change.

If we really want Windows to become more secure, we should get behind Windows 10 S and demand better third-party support for the Windows Store.

F-Secure Sense: a success and a failure (and why you should not rely on your anti-virus software)

I am in the process of reviewing F-Secure Sense, a hardware firewall which works by inspecting internet traffic, rather than scanning files on your PC or mobile device. This way, it can protect all devices, not only the ones on which an anti-malware application is installed.

I get tons of spam and malware by email, so I plucked out a couple to test. The first was an email claiming to be an NPower invoice. I don’t have an account with NPower, so I was confident that it was malware. Even if I did have an account with NPower, I’d be sure it was malware since it arrived as a link to a website on my.sharepoint.com, where someone’s personal site has presumably been hacked.

I clicked the link hoping that Sense would intercept it. It did not. Here is what I saw in Safari on my iPad:

image

(Wi-Drive is a storage app that I have installed and forgotten about). I clicked More and saved the suspect file to Apple’s iCloud Drive.

Then I went to a Windows PC and, clicking very carefully, downloaded the file from iCloud Drive. The PC is also connected to the Sense network.

Finally, I uploaded the file for analysis by VirusTotal:

image

Well, it is certainly a virus, but only 4 of 58 scanning engines used by VirusTotal detect it. You will not be surprised to know that F-Secure was one of the engines which passed it as clean.

image

Note that I did not try to extract or otherwise open the files in the ZIP, so there is a possibility that it might have been picked up at that point. Still, disappointing, and an illustration of why you should NOT rely on your anti-virus software to catch all malware.
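Incidentally, you do not have to use the VirusTotal web page; a file can be checked by hash, without uploading it at all, via the public API. A minimal sketch, assuming the v2 file/report endpoint and a free API key (the key and file name are placeholders):

# Look up any existing VirusTotal report for the file, identified by its SHA-256 hash
HASH=$(sha256sum suspect-invoice.zip | cut -d' ' -f1)
curl -s "https://www.virustotal.com/vtapi/v2/file/report?apikey=$VT_API_KEY&resource=$HASH"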

Now the good news. I had another email which looked like a phishing attempt. I clicked the link on the iPad. It came up immediately with “Harmful web site blocked.”

image

While that is a good thing, one block out of two attempts is not a good rate – it only takes one successful infection to cause a world of pain.

My view so far is that while Sense is a useful addition to your security defence, it is not to be trusted on its own.

In this I am at odds with F-Secure, which says in its FAQ that “With F-Secure SENSE no traditional security software is needed,” though the advice adds that you should also install the SENSE security app.

image

F-Secure Sense Firewall first look: a matter of trust

Last week I journeyed to Helsinki, Finland, to learn about F-Secure’s new home security device (the first hardware product from a company best known for anti-virus software), called Sense.

I also interviewed F-Secure’s Chief Research Officer Mikko Hypponen and wrote it up for The Register here. Hypponen explained that a firewall is the only way to protect the “connected home”: smart devices such as alarms, cameras, switches, washing machines, or anything else that connects to the internet. In fact, he believes that every appliance we buy will be online in a few years’ time, because it costs little to add this feature and gives vendors great value in terms of analytics.

Sense is a well made, good looking firewall and wireless router. The idea is that you connect it to your existing router (usually supplied by your broadband provider), and then ensure that all other computers and devices on your networks connect to Sense, using either a wired or wireless connection. Sense has 3 LAN Ethernet ports as well as wireless capability.

This is not a full review, but a report on my first look.

image

Currently you can only set up Sense using a device running iOS or Android. You install the Sense app, then follow several steps to create the Sense network. You can rename the Sense wifi identifier and change the password. The device you use to set up Sense becomes the sole admin device, so choose carefully. If you lose it, you have to reset the Sense and start again.

My initial effort used the Android app. I ran into a problem though. The Sense setup said it required permission to use location:

image

I am not sure why this is necessary but I was happy to agree. I clicked continue and verified that Location was on:

image

Then I returned to the Sense app but it still did not think Location was available and I could not continue.

Next I tried the iOS Sense app on an iPad. This worked better, though I did hit a glitch where the setup did not think I had connected to the wifi point even though I had. Quitting and restarting the app fixed this. I am sure these glitches in the app will be fixed soon.

I was impressed by the 16-character password generated by default. Yes, I have changed it!

image

I was up and running, and started connecting devices to the Sense network. Each device you connect shows up as a protected device in the Sense app.

There are very limited settings available (and no, you cannot use a web browser instead, only the app). You can set a few network things: IP address, DHCP range. You can configure port forwarding. You can set the brightness of the display, which normally just shows the time of day. You can view an event log which shows things like devices added and threats detected; it is not a firewall log. You can block a device from the internet. You can send feedback to the Sense team. And that is about it, apart from the following protection settings:

image

The above is the default setting. What exactly do Tracking protection and Identify device type do? I cannot find this documented anywhere, but I recall that in our briefing there was discussion of blocking tracking by advertisers, and of identifying IoT devices so as to build up a knowledge base of their security flaws and apply protection automatically. But I may be wrong and do not have any detail on this. I enabled all the options on my Sense.

As it happens, I have a device which I know to be insecure, a China-made IP camera which I wrote about here. I plugged it into the Sense and waited to see what would happen.

Nothing happened. Sense said everything was fine.

image

Is everything OK? I confess that I did not attach Sense directly to my router. I attached it to my network which is behind another firewall. I used this second firewall to inspect the traffic to and from the Sense. I also disconnected all the devices other than the IP Camera.

I noticed a couple of things. One is that the Sense makes frequent connections to computers running on AWS (Amazon Web Services). No doubt this is where the F-Secure Security Cloud is hosted. The Security Cloud is the intelligence piece in the Sense setup. Not all traffic is sent to the Security Cloud for checking, but some is sent there. In fact, I was surprised at the frequency of calls to AWS, and hope that F-Secure has got its scaling right since clearly this could impact performance.

The other thing I noticed is that, as expected, the IP Camera was making outbound calls to a couple of servers, one in China and one in Singapore, according to the whois tools I used. Both seem to be related to Alibaba in China. Alibaba is not only a large retailer and wholesaler but also operates a cloud hosting service, so this does not tell me much about who is using these servers. However, my guess is that this is some kind of registration on a peer-to-peer network used for access to these cameras over the internet. I don’t like this, but there is no way I can see in the camera settings to disable it.
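For anyone who wants to run a similar check, no special equipment is needed; any Linux box that can see the traffic (a router, or a machine on a mirrored port) will do. A rough sketch, with placeholder addresses for the camera and the destination:

# Watch for the camera’s connections to anything outside the local network
tcpdump -n -i eth0 src host 192.168.1.50 and not dst net 192.168.1.0/24

# Then look up who owns a destination address it talks to
whois 203.0.113.10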

Should Sense have picked this up as a threat? Well, I would have liked it if it had, but appreciate that merely making outbound calls to servers in China is not necessarily a threat. Perhaps if someone tried to hack into my camera the intrusion attempt would be picked up as a threat; it is not easy to test.

On the plus side, Sense makes it very easy to block the camera from internet access, but to do that I have to be aware that it might be a threat, and to find other ways to access it remotely if that is something I need.

Sense did work perfectly when I tried to access a dummy threat site from a web browser.

image

If you disagree with Sense, there is no way to proceed to the dangerous site, other than disabling browser protection completely. Perhaps a good thing, perhaps not.

It all comes down to trust. If you trust F-Secure’s Security Cloud and technology to detect and prevent any dangerous traffic, Sense is a great device and well worth the cost – currently £169.00 and then a subscription of £8.50 per month after the first year. If you think it may make mistakes and cause you hassle, or fail to detect attacks or malware downloads, then it is not a good deal. At this point it is hard for me to tell how good a job the device is doing. Unfortunately I am not set up to click on lots of dangerous sites for a more extensive test.

I do think the product will improve substantially in the first few months, as it builds up data on security risks in common devices and on the web.

Unfortunately more technical users will find the limited options frustrating, though I understand that F-Secure wants to limit access to the device for security reasons as well as making it simpler to use. The documentation needs improving and no doubt that will come soon.

More information on Sense is here.


How to remove the WINS server feature from Windows Server

The WINS service is not needed in most Windows networks but may be running either for legacy reasons, or because someone enabled it in the hope that it might fix a network issue.

It is now apparently a security risk. See here and Reg article here.

Apparently Microsoft says “won’t fix” despite the service still being shipped in Server 2016, the latest version:

In December 2016, FortiGuard Labs discovered and reported a WINS Server remote memory corruption vulnerability in Microsoft Windows Server. In June of 2017, Microsoft replied to FortiGuard Labs, saying, "a fix would require a complete overhaul of the code to be considered comprehensive. The functionality provided by WINS was replaced by DNS and Microsoft has advised customers to migrate away from it." That is, Microsoft will not be patching this vulnerability due to the amount of work that would be required. Instead, Microsoft is recommending that users replace WINS with DNS.

It should be removed, then. I noticed it was running on a Server 2012 R2 machine on my network, and that although it was listed as a feature in Server Manager, the option to remove it was greyed out.

I removed it as follows:

1. Stop the WINS service and set it to manual or disabled.

2. Remove the WINS option in DHCP Scope Options if it is present.

3. Run PowerShell as an administrator and execute the following command:

uninstall-windowsfeature wins

This worked first time, though a restart is required.
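For reference, the whole sequence can be scripted from PowerShell. A sketch, assuming the service and feature are both named WINS (as they are here) and using a placeholder DHCP scope ID; option ID 44 is the standard WINS/NBNS Servers option:

# 1. Stop the WINS service and prevent it restarting
Stop-Service WINS
Set-Service WINS -StartupType Disabled

# 2. Remove the WINS/NBNS Servers option (ID 44) from the DHCP scope, if set
Remove-DhcpServerv4OptionValue -ScopeId 192.168.0.0 -OptionId 44

# 3. Uninstall the feature, then restart the server when convenient
Uninstall-WindowsFeature WINS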

Incidentally, if Microsoft ships a feature in a Server release, I think it should be kept patched. No doubt the company will change its mind if it proves to be an issue.

Note: you can also use remove-windowsfeature which is an alias for uninstall-windowsfeature. You do need Windows Server 2008 R2 or higher for this to work.

The threat from insecure “security” cameras and how it goes unnoticed by most users

Ars Technica published a piece today about insecure network cameras which reminded me of my intention to post about my own experience.

I wanted to experiment with IP cameras and Synology’s Surveillance Station so I bought a cheap one from Amazon to see if I could get it to work. The brand is Knewmart.

image

Most people buying this do not use it with a Synology. The idea is that you connect it to your home network (most will use wifi), install an app on your smartphone, and enjoy the ability to check on how well your child is sleeping, for example, without the trouble of going up to her room. It also works when you are out and about. Users are happy:

So far, so good for this cheap solution for a baby monitor. It was easy to set up, works with various apps (we generally use onvif for android) and means that both my wife and I can monitor our babies while they’re sleeping on our phones. Power lead could be longer but so far very impressed with everything. The quality of both the nightvision and the normal mode is excellent and clear. The audio isn’t great, especially from user to camera, but that’s not what we bought it for so can’t complain. I spent quite a long time looking for an IP cam as a baby monitor, and am glad we chose this route. I’d highly recommend.

My needs are a bit different, especially as it did not work out of the box with Surveillance Station and I had to poke around a bit. First I discovered that the Chinese-made camera was apparently identical to a model from a slightly better-known manufacturer called Wanscam, which enabled me to find a bit more documentation, but not much. I also played around with a handy utility called Onvif Device Manager (ONVIF being an XML standard for communicating with IP cameras), and used the device’s browser-based management utility.

This gave me access to various settings and the good news is that I did get the camera working to some extent with Surveillance Station. However I also discovered a number of security issues, starting of course with the use of default passwords (I forget what the admin password was but it was something like ‘password’).

The vendor wants to make it easy for users to view the camera’s video over the internet, for which it uses port forwarding. If you have UPnP enabled on your router, the camera will set this up automatically; this behaviour is on by default. In addition, something strange: there is a setting for UPnP, but you will not find it in the browser-based management, not even under Network Settings:

image

Yet, if you happen to navigate to [camera ip no]/web/upnp.html there it is:

image
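Incidentally, if you want to check whether the camera (or anything else) has quietly opened ports on your router via UPnP, the upnpc tool from the miniupnpc package will list the current mappings. A quick check from any Linux machine on the same network:

# List the UPnP port mappings currently configured on the router
upnpc -l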

Why is this setting hidden, even from those users dedicated enough to use the browser settings, which are not even mentioned in the skimpy leaflet that comes with the camera? I don’t like UPnP, and I do not recommend port forwarding to a device like this, which will never be patched and whose firmware has a thrown-together look. Perhaps the setting is hidden because even disabling UPnP port forwarding will not secure the device. Following a tip from another user (of a similar camera), I checked the activity of the device in my router logs. It makes regular outbound connections to a variety of servers, with the one I checked being in Beijing. See here for a piece on this with regard to Foscam cameras (also similar to mine).

I am not suggesting that there is anything sinister in this, and it is probably all about registering the device on a server in order to make the app work through a peer-to-peer network over the internet. But it is impolite to make these connections without informing the user and with no way that I have found to disable them.

Worse still, this peer-to-peer network is not secure. I found this analysis, which goes into detail; note this remark:

an attacker can reach a camera only by knowing a serial number. The UDP tunnel between the attacker and the camera is established even if the attacker doesn’t know the credentials. It’s useful to note the tunnel bypasses NAT and firewall, allowing the attacker to reach internal cameras (if they are connected to the Internet) and to bruteforce credentials. Then, the attacker can just try to bruteforce credentials of the camera

I am not sure that this is the exact system used by my camera, but I think it is. I have no intention of installing the P2PIPC Android app which I am meant to use with it.

The result of course is that your “security” camera makes you vulnerable in all sorts of ways, from having strangers peer into your bedroom, to having an intrusion into your home or even business network with unpredictable consequences.

The solution, if you want to use these cameras reasonably safely, is to block all outbound traffic from their IP address and use a different, trusted application to get access to the video feed – as well as, of course, avoiding port forwarding and not using an app like P2PIPC.
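If your router or firewall allows custom rules, this kind of block is a one-liner. A sketch using iptables on a Linux router or firewall (the addresses are placeholders for your own camera and LAN):

# Drop anything the camera sends that is not destined for the local network
iptables -I FORWARD -s 192.168.1.50 ! -d 192.168.1.0/24 -j DROP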

There is a coda to this story. I wrote a review on Amazon’s UK site; it wasn’t entirely negative, but included warnings about security and how to use the camera reasonably safely. The way these reviews work on Amazon is that those with the most “helpful votes” float to the top and are seen by more potential purchasers. Over the course of a month or so, my review received half a dozen such votes and was automatically highlighted on the page. Mysteriously, a batch of negative votes suddenly appeared, sinking the review out of sight to all but the most dedicated purchasers. I cannot know the source of these negative votes (now approximately equal to the positives) but observe that Amazon’s system makes it easy for a vendor to make undesirable reviews disappear.

What I find depressing is that despite considerable publicity these cameras remain not only on sale but highly popular, with most purchasers having no idea of the possible harm from installing and using what seems like a cool gadget.

We need, I guess, some kind of kitemark for security along with regulations similar to those for electrical safety. Mothers would not dream of installing an unsafe electrical device next to their sleeping child. Insecure IoT devices are also dangerous, and somehow that needs to be communicated beyond those with technical know-how.

Fake TalkTalk Frequently Asked Questions

I use TalkTalk for broadband and landline – though I never signed up with TalkTalk, I signed up with a smaller provider that was taken over – and recently I have been plagued with calls from people claiming to be from TalkTalk, but who in fact have malicious intent. If I am busy I just put the phone down, but sometimes I chat with them for a while, to discover more about what they are trying to do.

Rather than write a long general piece about this problem, I thought the best approach would be a Q&A with answers to the best of my knowledge.

Why so many fake TalkTalk calls?

I have two landline numbers, and until recently only the non-TalkTalk number ever got called by scammers. This makes me think that the flood of TalkTalk calls is related to data stolen from the company, perhaps in October 2015 or perhaps in subsequent attacks. Some victims report that scammers know their name and account number; in my case I don’t have any evidence for that. On a couple of occasions I have asked the caller to state my account number, but they have given me a random number. However, I do think that my telephone number is on a list of valid TalkTalk numbers that is circulating among these criminal companies.

How do I know if it is really TalkTalk?

My advice is to assume that it is not TalkTalk. If you think TalkTalk really wants to get in touch with you, put the phone down and call TalkTalk customer service, either from another number or after waiting 15 minutes to make sure that the person who called you has really terminated the call.

How does the caller know my Computer License ID?

A common part of these scripts is that the caller will show that he knows your “computer license ID” by guiding you to display it on your screen and then reading it out to you. They do this by getting you to open a command window and type assoc:

image

The way this works is simple. The value you see next to .ZFSendToTarget is not a license ID: it is a CLSID, or class identifier, part of the plumbing of Windows and the same on every Windows PC.
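You can see this for yourself: open a command prompt and type the command below (adding the extension goes straight to the entry the scammers point at). The value is a CLSID built into Windows (it identifies the Compressed Folder “Send To” handler) and should look the same on any PC, which is exactly why the script uses it:

assoc .zfsendtotarget
.ZFSendToTarget=CLSID\{888DCA60-FC0A-11CF-8F0F-00C04FD7D062}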

What about all the malware errors and warnings on my PC?

This is a core part of the fake TalkTalk (and fake Microsoft) script. Our server has picked up warning messages from your computer, they say, and they show you a list of them.

The way this works is that the scammer guides you to open a Windows utility called Event Viewer, usually via the Run dialog (type eventvwr). Then they get you to select the “Administrative events” view, which filters the logs to show only errors and warnings.

Now, you have to agree that the number of errors and warnings Windows manages to generate is remarkable. My PC has over 9,000:

image

However, these messages are not generated by malware, nor are they broadcast to the world (or to TalkTalk servers). They are simply log entries generated by the operating system. If you have time on your hands, you can look up the reason for each one and even fix many of them; but in most cases they are just noise. Real malware, needless to say, does not make helpful logs of its activity but keeps quiet about it.
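If you are curious how many of these “warnings” your own PC has accumulated, you do not even need Event Viewer. A quick PowerShell sketch that counts the error and warning entries (levels 2 and 3) in just the System log:

# Count error (level 2) and warning (level 3) entries in the System event log
(Get-WinEvent -FilterHashtable @{LogName='System'; Level=2,3}).Count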

What does Fake TalkTalk really want to do?

Once your fake TalkTalk caller has persuaded you that something is wrong with your PC, router or internet connection, the next step is invariably to get remote access to your PC. They do this by guiding you to a website such as Ammyy or LogMeIn Rescue and getting you to initiate a support session. These are legitimate services used by support engineers, but unfortunately if you allow someone untrustworthy to log onto your PC, bad things will happen. Despite what the caller may tell you, these sessions are not just for messaging but enable the scammer to see your computer screen and even take over mouse and keyboard input.

Windows will generally warn you before you allow a remote session to start. You have to pass a dialog that says something like “Do you want to allow this app to make changes to your PC?” This warning is there for a reason! Be sure to say No if fake TalkTalk is on the line.

Note though that this remote control software is not in itself malware. Therefore you will see that the software that is trying to run is from a legitimate company. Unfortunately that will not protect you when someone who means you harm is at the other end of the connection.

OK, so Fake TalkTalk has a remote connection. What next?

Despite my interest in what these scammers are after, I have never gone so far as to allow them to connect. There are ways to do this relatively safely, with an isolated virtual machine, but I have not taken the trouble. However, I have seen reports from victims.

There is no single fake TalkTalk, but many organisations out there doing this impersonation. So the goals of these various organisations (and they are generally organisations rather than individuals) will vary.

A known scam is that the scammer will tell you a refund is due because of your slow internet connection. They show you that the sum has been paid, via a fake site, but oh dear, it is more than is due! For example, you are due £200 but have been paid £1200. Oops. Would you mind repaying the £1000 or I will be fired? So you send off £1000 but it turns out you were not paid any money at all.

Other possibilities are that your PC becomes part of a bot network, to be rented out to criminals for various purposes; or that the “engineer” finds such severe “problems” with your PC that you have to purchase their expensive anti-malware software or service; or your PC may be used to send out spam; or a small piece of software is installed that captures your keystrokes so your passwords will be sent to the scammer; or the scammer will search your documents for information they can use for identity theft.

Many possibilities, so for sure it is better not to let these scammers, or anyone you do not trust, connect to your PC.

Who are the organisations behind Fake TalkTalk?

When I am called by TalkTalk impersonators, I notice several things. One is that the call quality is often poor, thanks to the use of a cheap voice-over-IP connection from a far-off country. Another is that I can hear many other calls taking place in the background, showing that these are not just individuals but organisations of some size. In fact, a common pattern is that three people are involved: one who initiates the call, a supervisor who makes the remote connection, and a third “engineer” who takes over once the connection is made.

One thing you can be sure of is that they are not in the UK. In fact, all the calls I have had seem to originate from outside Europe. This means of course that they are outside the scope of our regulators and difficult for police or fraud investigators to track down.

If you ask one of these callers where they are calling from, they often say they are in London. You can have some fun by asking questions like “what is the weather like in London?” or “what is the nearest tube station?”; they probably have no idea.

What is being done about this problem?

Good question. I have reported all my calls to TalkTalk, as well as using the “Report abuse” forms on LogMeIn with the PINs used by the criminals. On one occasion I was given a scammer’s Google email address; there is no way I can find to report this to Google, which perhaps shows the limits of how much the company cares about our security.

I am not optimistic, then, that much of substance is being done or can be done. Addressing the problem at source means visiting the country where the scam is based and working with local law enforcement; even if that worked, other organisations in other countries would soon pop up.

That means, for the moment, that education and warning are essential, imperfect though they are. TalkTalk, it seems to me, could do much better. Has it contacted all its customers with information and warnings? I don’t believe so. It is worried, perhaps, more about its reputation than about the security of its customers.

DatAshur encrypted drives: protect your data but be sure to back it up too

The iStorage DatAshur USB flash drive is a neat way to encrypt your data. Lost USB storage devices are a common cause of data theft anxiety: in most cases the finder won’t care about your data, but you can never be certain.

image

The DatAshur is simple to operate but highly secure, presuming it meets the advertised specification. All data written to the drive is automatically encrypted with 256-bit AES in CBC mode (Advanced Encryption Standard with Cipher Block Chaining), and the drive meets the US FIPS 140-2 standard. The encryption is transparent to the operating system, since decryption is built into the device and enabled by entering a PIN of 7 to 15 digits.

Note that a snag with this arrangement is that if your PC is compromised a hacker might be able to read the data while the drive is connected. If you are really anxious you could get round this by working offline, or perhaps using Microsoft’s clever Windows to Go (WTG) technology where you boot from a USB device and work in isolation from the host operating system. Unfortunately DatAshur does not support WTG (as far as I know) but there are alternatives which do, or you could boot into WTG and then insert your DatAshur device.

Normally you enter the PIN to unlock the drive before connecting it to a PC or Mac. This does mean that the DatAshur requires a battery, and a rechargeable battery is built in. However if the battery is exhausted you can still get your data back by recharging the device (it charges whenever it is plugged into a USB port).

OK, so what happens if a bad guy gets your device and enters PINs repeatedly until the right one is found? This will not work (unless you chose 1234567 or something like that) since after 10 failed tries the device resets, deleting all your data.

You should avoid, then, the following scenario. You give your DatAshur drive to your friend to show it off. “I’ve just updated all my expenses on this and there is no way you’ll be able to get at the data.” Friend fiddles for a bit. “Indeed, and neither can you.”

Here then is the security dilemma: the better the security, the more you risk losing access to your own data.

The DatAshur does have an additional feature which mitigates the risk of forgetting the PIN. You can actually set two PINs, a user PIN and an admin PIN. The admin PIN could be retained by a security department at work, or kept in some other safe place. This still will not rescue you though if more than 10 attempts are made.

What this means is that data you cannot afford to lose must be backed up as well as encrypted, with all the complexity that backup involves (must be off-site and secure).
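That backup copy should of course be encrypted as well, which need not mean buying anything. A minimal sketch using openssl on Linux or a Mac (the file names are placeholders; you are prompted for a passphrase, and losing the passphrase loses the backup, so the same dilemma applies):

# Bundle the files and encrypt the archive with AES-256 in CBC mode
tar czf backup.tar.gz /path/to/important/files
openssl enc -aes-256-cbc -salt -in backup.tar.gz -out backup.tar.gz.enc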

Still, if you understand the implications this is a neat solution, provided you do not need to use those pesky mobile devices that lack USB ports.

The product tested has a capacity from 4GB to 32GB and has a smart, strong metal case. The plastic personal edition runs from 8GB to 32GB and is less robust. An SSD model offers from 30GB to 240GB, and larger desktop units support SSD or hard drive storage from 64GB to 6TB, with USB 3.0 for fast data transfer.

Prices range from around £30 inc VAT for an 8GB Personal USB stick, to £39.50 for the 4GB professional device reviewed here, up to £470 for the monster 6TB drive or £691 for a USB 3.0 external SSD (prices taken from a popular online retailer). The cost strikes me as reasonable for well-made secure storage.

More information on DatAshur is here.

Privacy, Google Now, Scroogled, and the connected world

2013 saw the launch of Google Now, a service which aspires to alert you to information you care about at just the right time. Rather than mechanical reminders of events 15 minutes before start time, Google Now promises to take into account location, when you are likely to have to leave in order to arrive where you want to be, and personal preferences. Much of its intelligence is inferred from what Google knows about you through your browsing patterns, searches, location, social media connections and interactions, and (following Google’s acquisition of Nest, which makes home monitoring kit) who knows what other data might be gathered.

It is obvious that users are being invited to make a deal. Broadly, the offer is that if you hand over as much of your personal data to Google as you can bear, then in return you will get services that will make your life easier. The price you pay, loss of privacy aside, is more targeted advertising.

There could be other hidden costs. Insurance is one that intrigues me. If insurance companies know everything about you, they may be able to predict more accurately what bad things are likely to happen to you and make insuring against them prohibitively expensive.

Another issue is that the more you use Google Now, the more benefit there is in using Google services versus their competitors. This is another example of the winner-takes-all effect which is commonplace in computing, though it is a different mechanism. It is similar to the competitive advantage Google has already won in search: it has more data, therefore it can more easily refine and personalise search results, therefore it gets more data. However this advantage is now extended to calendar, smartphone, social media, online shopping and other functions. I would expect more future debate on whether it is fair for one company to hold all these data. I have argued before about Google and the case for regulation.

This is all relatively new, and there may be – probably are – other downsides that we have not thought of.

Microsoft in 2013 chose, through its Scroogled campaign, to highlight the privacy risks (among other claimed deficiencies) of engaging with Google.

image

Some of the concerns raised are valid; but Microsoft is the wrong entity to do this, and the campaign betrays its concern over more mundane risks like losing business: Windows to Android or Chrome OS, Office to Google Docs, and so on. Negative advertising rarely impresses, and I doubt that Scroogled will do much either to promote Microsoft’s services or to disrupt Google. It is also rather an embarrassment.

The red box above suits my theme though. What comes to mind is, in hindsight, one of the most amusing examples of wrong-headed legislation in history. In 1865 the British Parliament passed the first of three Locomotive Acts regulating “road locomotives”, or horseless carriages. It limited speed to 4 mph in the country and 2 mph in town, and required a man carrying a red flag to walk in front of certain types of vehicle.

red-flag

The reason this is so amusing is that having someone walk in front of a motorised vehicle limits the speed of the vehicle to that of the pedestrian, negating its chief benefit.

How could legislators be so stupid? The answer is that they were not stupid and they correctly identified real risks. Motor vehicles can and do cause death and mayhem. They have changed our landscape, in many ways for the worse, and caused untold pollution.

At the same time, the motor vehicle has been a huge advance in civilisation, enabling social interaction, trade and leisure opportunities that we could not now bear to lose. The legislators saw the risks, but had insufficient vision to see the benefits – except that over time, and inevitably, speed limits and other restrictions were relaxed so that motor vehicles were able to deliver the benefits of which they were capable.

My question is whether the fears into which the Scroogled campaign attempts to tap are similar to those of the Red Flag legislators. The debate around privacy and data sharing should not be driven by fear, but should rather be about how to enable the benefits while figuring out what is necessary in terms of regulation. And there is undoubtedly a need for some regulation, just as there is today for motor vehicles – speed limits, safety belts, parking restrictions and all the rest.

Returning for a moment to Microsoft: it seems to me that another risk of its Scroogling efforts is that it positions itself as the red flag rather than the horseless carriage. How is that going to look ten years from now?

Adobe’s security calamity: 2.9 million customer account details accessed

Adobe has reported a major security breach. According to the FAQ:

Our investigation currently indicates that the attackers accessed Adobe customer IDs and encrypted passwords on our systems. We also believe the attackers removed from our systems certain information relating to 2.9 million Adobe customers, including customer names, encrypted credit or debit card numbers, expiration dates, and other information relating to customer orders. At this time, we do not believe the attackers removed decrypted credit or debit card numbers from our systems.

We are also investigating the illegal access to source code of numerous Adobe products. Based on our findings to date, we are not aware of any specific increased risk to customers as a result of this incident.

A few observations.

  • If the criminals downloaded 2.9 million customer records including names, addresses and credit card details, the risk of fraud is substantial. Encryption is good of course, but if you have a large body of encrypted information which you can attack at your leisure then it may well be cracked. Adobe has not told us how strong the encryption is.
  • The FAQ is full of non-answers. For example – question: how did this happen? Answer: our investigation is still ongoing.
  • Apparently if Adobe thinks your credit card details were stolen you will get a letter. That seems odd to me, unless Adobe is also contacting affected customers by email or telephone. Letters are slow and not all that reliable since people move regularly (though I suppose if the address on file is wrong then the credit card information may well be of little use.)
  • Adobe says source code was stolen too. This intrigues me. What is the value of the source code? It might help a criminal crack the protection scheme, or find new ways to attack users with malicious PDF documents. A few people in the world might even be interested to see how certain features of, say, Photoshop are implemented in order to assist with coding a rival product, but finding that sort of buyer might be challenging.
  • Is the vulnerability which enabled the breach now fixed? Another question not answered in the FAQ. Making major changes quickly to such a large system would be difficult, but it all depends on what enabled the breach, which we do not know.
  • I’d like to see an option not to store credit card details, but to enter them afresh for each transaction. Hassle of course, and not so good for inertia marketing, but more secure.