
Two Factor Authentication is great, but what if you lose your phone or have your number hijacked?

Account hijack is a worry for anyone. What kind of chaos could someone cause simply by taking over your email or social media account? Or how about spending money on your behalf on Amazon, eBay or other online retailers?

The obvious fraud will not last long, but there is an aftermath too: changing passwords, and getting back into accounts that have been compromised and had their security information changed.

In the worst cases you might lose access to an account permanently. Organisations like Google, Microsoft, Facebook or eBay are not easy to deal with in cases where your account is thoroughly compromised. They may not be sure whether you are the victim attempting to recover an account, or the imposter attempting to compromise it. Even getting to speak to a human can be challenging, as they rely on automated systems, and when you do, you may not get the answer you want.

The solution is stronger security so that account hijack is less common, but security is never easy. It is a system, and like any system, any change you make can impact other parts of the system. In the old world, the most common approach had three key parts: username, password and email. The username was often the email address, so perhaps make that two key parts. Lose the password, and you can reset it by email.

There are two problems with this approach. First, the password might be stolen or guessed (rather easy considering the massive databases of username/password combinations readily available online). And second, the email is security-critical, yet email can be intercepted because it often travels the internet in plain text for at least part of its journey. If you use Office 365, for example, your connection to Office 365 is encrypted, but an email sent to you may still be plain text until it arrives on Microsoft’s servers.

There is therefore a big trend towards 2-factor authentication (2FA): something you have as well as something you know. This is not new, and many of us have used things like little devices that display one-time pass codes to use in addition to a password, such as the RSA SecurID key fob devices.

image

Another common approach is a card and a card reader. The card readers are all the same, and you use something like a bank card, put in your PIN, and it displays a code. An imposter would need to clone your card, or steal it and know the PIN.

In the EU, everyone is becoming familiar with 2FA thanks to the revised Payment Services Directive (PSD2), which comes into effect in September 2019 and requires Strong Customer Authentication (SCA). Details of what this means in the UK are here. See chapter 20:

Under the PSRs 2017, strong customer authentication means authentication based on the use of two or more independent elements (factors) from the following categories:

• something known only to the payment service user (knowledge)

• something held only by the payment service user (possession)

• something inherent to the payment service user (inherence)

So will we see a lot more card readers and token devices? Maybe not. They offer decent security, but they are expensive, and when users lose them or they wear out or the battery goes, they have to be replaced, which means more admin and expense. Giant companies like security, but they care almost as much about keeping costs down and automating password reset and account recovery.

Instead, the favoured approach is to use your mobile phone. There are several ways to do this, of which the simplest is where you are sent a one-time code by SMS. Another is where you install an app that generates codes, just like the key fob devices, but with support for multiple accounts and no need to clutter up your pocket or bag.

These are not bad solutions – some better than others – but this is a system, remember. It used to be your email address, but now it is your phone and/or your phone number that is critical to your security. All of us need to think carefully about a couple of things:

– if our phone is lost or broken, can we still get our work done?

– if a bad guy steals our phone or hijacks the number (not that difficult in many cases, via a little social engineering), what are the consequences?

Note that the SCA regulations insist that the factors are independent of each other, but that can be difficult to achieve. There you are with your authenticator app, your password manager, your web browser with saved usernames and passwords, your email account – all on your phone.

Personally I realised recently that I now have about a dozen authenticator accounts on a phone that is quite old and might break; I started going through them and evaluating what would happen if I lost access to the app. Unlike many apps, most authenticator apps (for example those from Google and Microsoft) do not automatically reinstall complete with account data when you get a new phone.

Here are a few observations.

First, SMS codes are relatively easy from a recovery perspective (you just need a new phone with the same number), but not good for security. Simon Thorpe at Authy has a good outline of the issues with it here and concludes:

Essentially SMS is great for finding out your Uber is arriving, or when your restaurant table is ready. But SMS was never designed to provide a secure way for you to login to your online banking account.

Yes, Authy is pitching its alternative solutions, but the issues are real. So avoid SMS codes where you can; though, as Thorpe notes, they are still much stronger security than a password alone.

Second, the authenticator app problem. Each of those accounts is actually based on a long secret code, so you can back an account up by storing that code. However it is not easy to extract the code from the app later unless you hack your phone, for example by getting root access to an Android device.

What you can do though is to use the “manually enter code” option when setting up an account, and copy the code somewhere safe. Yes you are undermining the security, but you can then easily recover the account. Up to you.
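To illustrate why storing that code is enough, here is a minimal sketch in Python (the base32 secret shown is hypothetical; use the one displayed during account setup). Standard authenticator accounts of this kind are TOTP (RFC 6238): a six-digit code derived from the secret and the current time, so anything that holds the secret can regenerate your codes.

import base64, hashlib, hmac, struct, time

def totp(secret_base32, digits=6, period=30):
    """Generate the current TOTP code from a base32 secret (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_base32.replace(" ", "").upper())
    counter = int(time.time()) // period               # 30-second time step
    msg = struct.pack(">Q", counter)                   # counter as big-endian 64-bit integer
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Hypothetical secret, for illustration only
print(totp("JBSWY3DPEHPK3PXP"))

Which is the point: whoever has that secret has the factor, so if you do write it down, treat it with the same care as a password.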

If you use the (free) Authy app, the accounts do roam between your various devices. This must mean Authy keeps a copy on its cloud services, hopefully suitably encrypted. So it must be a bit less secure, but it is another solution.

Third, check out the recovery process for those accounts where you rely on your authenticator app or smartphone number. In Google’s case, for example, you can access backup codes – they are in the same place where you set up the authenticator account. These will get you back into your account, once for each code. I highly recommend that you do something to cover yourself against the possibility of losing your authenticator code, as Google is not easy to deal with in account recovery cases.

A password manager or an encrypted device is a good place to store backup codes, or you may have better ideas.
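If you end up keeping backup codes in an ordinary file, one option is to encrypt the file yourself. A minimal sketch, assuming the third-party Python cryptography package; the file name and codes are illustrative, and the key must be kept somewhere separate (on paper, or in a password manager):

from cryptography.fernet import Fernet

key = Fernet.generate_key()                 # keep this key separate from the encrypted file
print("Store this key safely:", key.decode())

backup_codes = b"google: 1234-5678\nmicrosoft: 8765-4321\n"   # illustrative codes
with open("backup-codes.enc", "wb") as f:
    f.write(Fernet(key).encrypt(backup_codes))

# Later, to recover:
# print(Fernet(key).decrypt(open("backup-codes.enc", "rb").read()).decode())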

The important thing is this: a smartphone is an easy thing to lose, so it pays to plan ahead.

How Windows 10 Ransomware protection can cause install failures, LibreOffice for example

While researching a piece on Office applications I needed to install LibreOffice. The install failed with a message about an error creating a temporary file needed for installation.

image

Fortunately I knew where to look for the answer. Windows Ransomware Protection is a feature which whitelists the applications allowed to write data to the folders likely to contain the data you care about, such as documents and pictures. The idea is that malware which wants to encrypt these folders and then demand a ransom will find it harder to do so.

image

Ransomware protection can have side effects though. Operations like creating desktop shortcuts may fail because the desktop is one of the protected locations. That is just an annoyance; but in the case of LibreOffice, setup tried to write an essential file to a protected location and the install failed completely.

Solution: turn off Ransomware protection temporarily and re-run setup.

image

Microsoft Office 365 and Google G-Suite: why multi-factor authentication is now essential

Businesses using Office 365, Google G-Suite or other hosted environments (but especially Microsoft and Google) are vulnerable to phishing attacks that steal user credentials. Here is a recent example, which sailed through Microsoft’s spam and malware filters despite the AI and other techniques Microsoft uses to catch them.

image

If a user clicks the link and signs in, the bad guys have their credentials. What are the consequences?

– at best, a bunch of spam sent out from the user’s account, causing embarrassment and a quick password reset.

– at worst, something much more serious. Once an unauthorised party has user credentials, there are all sorts of social engineering possibilities to escalate the attack, obtain other credentials, or see what interesting data can be found in collaborative document stores and shared applications.

– another risk is to discover information about an organisation’s customers and contact them to advise of new bank details which of course direct payments to the attacker’s account.

The truth is there are many risks and it is worth every effort to prevent this happening in the first place.

However, it is hard to educate every user to the extent that you can be confident they will never click a link in an email such as the one above, or reveal their password in some other way, such as reusing one that has already been leaked (check here to find out, for example).

Multi-factor authentication (MFA), which is now easy to set up on both Office 365 and G-Suite, helps matters by requiring users to enter a one-time code from their mobile, either via an authenticator app or a text message, before they can log in. It does not cost any extra, and now is the time to set it up if you have not already.

It seems to me that in some ways the prevalence of a few big providers in hosted email and applications has made matters easier for the hackers. They know that a phishing attack simulating, say, Office 365 support will find many potential victims.

The more positive view is that even small businesses can now easily use Enterprise-grade security, if they choose to take advantage.

I do not think MFA is perfect. It usually depends on a mobile phone, and given that possession of a user’s phone also often enables you to reset the password, there is a risk that the mobile becomes the weak link. It is well known that social engineering against mobile providers can persuade them to cancel a SIM and issue a new one to an impostor.

That said, hijacking a phone is a lot more effort than sending out a million phishing emails, and on balance enabling MFA is well worth it.

Mozilla Firefox and a DNS security dilemma

Mozilla is proposing to make DNS over HTTPS default in Firefox. The feature is called Trusted Recursive Resolver, and currently it is available but off by default:

image

DNS is critical to security but not well understood by the general public. Put simply, it resolves web addresses to IP addresses, so if you type in the web address of your bank, a DNS server tells the browser where to go. DNS hijacking makes phishing attacks easier since users put the right address in their browser (or get it from a search engine) but may arrive at a site controlled by attackers. DNS is also a plain-text protocol, so DNS requests may be intercepted giving attackers a record of which sites you visit. The setting for which DNS server you use is usually automatically acquired from your current internet connection, so on a business network it is set by your network administrator, on broadband by your broadband provider, and on wifi by the wifi provider.

DNS is therefore quite vulnerable. Use wifi in a café, for example, and you are trusting the café wifi not to have allowed the DNS to be compromised. That said, there are further protections, such as SSL certificates (though you might not notice if you were redirected to a secure site that was a slightly misspelled version of your banking site, for example). There is also a standard called DNSSEC which authenticates the response from DNS servers.

Mozilla’s solution is to have the browser handle the DNS. Trusted Recursive Resolver not only uses a secure connection to the DNS server, but also provides a DNS server for you to use, operated by Cloudflare. You can replace this with other DNS servers though they need to support DNS over HTTPS. Google operates a popular DNS service on 8.8.8.8 which does support DNS over HTTPS as well as DNSSEC. 
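To give a feel for what DNS over HTTPS looks like, here is a minimal sketch of a query to Cloudflare’s resolver using its JSON interface (assuming the third-party Python requests package; Google’s dns.google service offers a similar endpoint):

import requests

# Ask Cloudflare's DNS-over-HTTPS resolver for the A records of a domain
resp = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "example.com", "type": "A"},
    headers={"accept": "application/dns-json"},
    timeout=10,
)
resp.raise_for_status()
for answer in resp.json().get("Answer", []):
    print(answer["name"], answer["data"])

Because the whole exchange travels inside ordinary HTTPS, a compromised café network sees only an encrypted connection to Cloudflare, not which names you are resolving.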

While using a secure connection to DNS is a good thing, using a DNS server set by your web browser has pros and cons. The advantage is that it is much less likely to be compromised than a random public wifi network. The disadvantage is that you are trusting that third party with a record of which sites you visit. It is personal data that could potentially be mined for marketing or other reasons.

On a business network, having the browser use a third-party DNS server could well cause problems. Some networks use split DNS, where an address resolves to an internal address when on the internal network, and an external address otherwise. Using a third-party DNS server would break such schemes.

Few will use this Firefox feature unless it is on by default – but that is the plan:

You can enable DNS over HTTPS in Firefox today, and we encourage you to.

We’d like to turn this on as the default for all of our users. We believe that every one of our users deserves this privacy and security, no matter if they understand DNS leaks or not.

But it’s a big change and we need to test it out first. That’s why we’re conducting a study. We’re asking half of our Firefox Nightly users to help us collect data on performance.

We’ll use the default resolver, as we do now, but we’ll also send the request to Cloudflare’s DoH resolver. Then we’ll compare the two to make sure that everything is working as we expect.

For participants in the study, the Cloudflare DNS response won’t be used yet. We’re simply checking that everything works, and then throwing away the Cloudflare response.

Personally I feel this should be opt-in rather than on by default, though it probably is a good thing for most users. The security risk from DNS hijacking is greater than the privacy risk of using Cloudflare or Google for DNS. It is worth noting too that Google DNS is already widely used so you may already be using a big US company for most of your DNS resolving, but probably without the benefit of a secure connection.

Account options when setting up Windows 10, and Microsoft’s enforced insecurity questions

How do you sign into Windows 10? There are now four options. I ran through a Windows 10 setup using build 1803 (which was released in April this year) and noted how this has evolved. Your first decision: is this a personal or organisational PC?

image

If you choose Setup for an organisation, you will be prompted to sign into Office 365, also known as Azure AD. The traditional Domain join, for on-premises Active Directory, has been shunted to a less visible option (the red encircling is mine). In larger organisations, this tends to be automated anyway.

image

But this one is personal. It is a similar story. You are prompted to sign in with a Microsoft account, but there is another option, called an Offline account (again, the red circle is mine).

image

In Windows 7 and earlier, this “Offline account” was the only option for personal accounts. I still recommend having an administrative “offline account” set up so you can always be sure of being able to log into your PC, even without internet. Think about some of the scenarios. Someone might hack your Microsoft account, change your password, and now you cannot even log onto your PC. Unless you have an offline account.

I’ve been awkward and selected Offline account. Windows, or rather Microsoft, does not like it. Note the mind games in the screenshot below. Although I’ve made a positive selection for Offline account, the default and highlighted option now is to change my mind. I do not like this.

image

Now I can set up my offline account. A screen prompts for a username, then for a password, all the time nagging that I should create an online account instead.

image

I type and confirm the password; but now I get this:

image

Yes, I have to create “security questions”, with no option to skip. If you try to skip, you get a “This field is required” message. Worse still, they are from a pre-selected list:

image

I really hate this. These are not security questions; they are insecurity questions. Their purpose is to let me (or someone else) reset the password, forming a kind of back door into the PC. The information in the questions is semi-secret; not impossible for someone determined to discover. So Microsoft is insisting that I make my account less secure.

Of course you do not have to give honest answers. You can call your first pet yasdfWsd9gAg!!hea. But most people will be honest.

Does it matter, given that a PC account offers rather illusory security anyway? Unless you encrypt the hard drive, someone who steals the PC can reset the password by booting into Linux, or take out the disk and read it from another PC. All true; but note that Microsoft makes it rather easy to encrypt your PC with Bitlocker, in which case the security is not so illusory.

Just for completeness, here is what comes next, an ad for Cortana:

image

Hey Cortana! How do I delete my security answers?

I do get why Microsoft is doing this. An online account is better in that settings can roam, you can use the Store, and you can reset the password from one PC to restore access to another. The insecurity questions could be a life-saver for someone who forgot their password and needs to get back into their PC.

But such things should be optional. There is nothing odd about wanting an offline account.

Manage your privacy online through cookie settings? You must be joking.

Since the deadline passed for the enforcement of the EU’s GDPR (General Data Protection Regulation), most major web sites have revamped their privacy settings with new privacy policies and more options for controlling how your personal data is used. Unfortunately, the options offered are in many cases too obscure, too complex and too time-consuming to be of any practical value.

Recital 32 of the GDPR says:

Consent should be given by a clear affirmative act establishing a freely given, specific, informed and unambiguous indication of the data subject’s agreement to the processing of personal data … this could include ticking a box when visiting an internet website … silence, pre-ticked boxes or inactivity should not indicate consent.

I am sure the controls on offer via major web properties are the outcome of legal advice; at the same time, as a non-legal person I struggle on occasion to see how they meet the requirements or the spirit of the legislation. For example, another part of Recital 32 says:

… the request must be clear, concise, and not unnecessarily disruptive to the use of the service for which it is provided.

This post describes what I get if I go to technology news site zdnet.com and it detects that I have not agreed to its cookie management.

Note: before I continue, let me emphasize that there is lots of great content on zdnet, some written by people I know; the site as far as I know is doing its best to make business sense of providing such content, in what has become a hostile environment for professional journalism. I would like to see fundamental change in this environment but that is just wishful thinking.

That said, this is one of the worst experiences I have found for privacy-seeking users. Here is the initial banner:

image

Naturally I click Manage Settings.

Now I get a scrolling dialog from CBS Interactive, with a scroll gadget that indicates that this is a loooong document:

image

There is also some puzzling news. There are a bunch of third parties whose cookies are apparently necessary for “our sites, products and services to function correctly.” These include cookies for analytics and also for Google ad-serving. I am not clear why these third parties perform functions which are necessary to read a technical news site, but there we are.

I scroll down and reach a button that lets me opt out of being tracked by the third party advertisers using zdnet.com, or so it seems:

image

I want to opt out, so I click. Some of the options below are unchecked, but not many. Most of the options say “Opt out through company”.

It also seems pretty technical to me. Am I meant to understand what a “Demand Side Platform” is?

image

I counted the number of links that say “opt out through company”. There are 63 of them.

I click the first one, called Adform. Naturally, the first thing I see is a request to agree (or at least click OK to) their Cookie Policy.

image

I click to read the policy (remember this is only the first of 63 sites I have to visit). I am not offered any sort of settings, but invited to visit youronlinechoices or aboutads.info.

image

Well, I don’t want anything to do with Adform and don’t intend to return to the site. Maybe I can ignore the Adform Cookie Policy and just focus on the opt-out button above it.

image

Currently I am “Opted-in”. This is a lie, I have never opted in. Rather, I have failed to opt out, until I click the button. Opting out will in fact set a cookie, so that Adform knows I have opted out. I am also reminded that this opt out only applies to this particular browser on this particular device. On all other browsers and/or devices, I will still be “opted in”.

OK, one down, 62 to go. However scrolling further down the list I get some bad news:

image

In some cases, it seems, “this partner does not provide a cookie opt-out”. The best I can do is to “visit their privacy policy for more information”. This will require a search, since the link is not clickable.

How to control your privacy

What should you do if you do not want to be tracked? Attempting to follow the industry-provided opt-outs is just hopeless. It is mostly PR and attempting to tick legal boxes.

If you do not want to be tracked, use a VPN, use ad blockers, and delete all cookies at the end of each browsing session. This will be tedious, though, since your browsing experience will be one of constant “I agree” dialogs, some of which you may be able to ignore, and others for which you have to click I Agree or endure a myriad of semi-functional links and settings.

Maybe the EU GDPR legislation is unreasonable. Maybe we have been backed into this corner by allowing the internet to be dominated by a few giant companies. All we can state for sure is that the current situation is hopelessly broken, from a privacy and usability perspective.

Spectre and Meltdown woes continue as Intel confesses to broken updates

Intel’s Navin Shenoy says the company has asked PC vendors to stop shipping its microcode updates that fix the speculative execution vulnerabilities identified by Google’s Project Zero team:

We recommend that OEMs, cloud service providers, system manufacturers, software vendors and end users stop deployment of current versions, as they may introduce higher than expected reboots and other unpredictable system behavior.

This is a blow to industry efforts to fix this vulnerability, a process involving BIOS updates (to install the microcode) as well as operating system patches.

Intel says it has an “early version of the updated solution”. Given the length of time it takes for PC manufacturers to package and distribute BIOS updates for the many thousands of models affected, it looks like the moment at which the majority of active systems will be patched is now far in the future.

Vendors have not yet completed the rollout of the initial patch, which they are now being asked to withdraw.

The detailed microcode guidance is here. Intel also has a workaround which gives some protection while also preserving system stability:

For those concerned about system stability while we finalize the updated solutions, we are also working with our OEM partners on the option to utilize a previous version of microcode that does not display these issues, but removes the Variant 2 (Spectre) mitigations. This would be delivered via a BIOS update, and would not impact mitigations for Variant 1 (Spectre) and Variant 3 (Meltdown).

I am not sure who out there is not concerned about system stability. That said, public cloud vendors would rather endure almost anything than the possibility of code running in one VM getting unauthorised access to the host or to other VMs.

Right now it feels as if most of the world’s computing devices, from server to smartphone, are simply insecure. Though it should be noted that the bad guys have to get their code to run: trivial if you just need to run up a VM on a public cloud, more challenging if it is a server behind a firewall.

Why patching to protect against Spectre and Meltdown is challenging

The tech world has been buzzing with news of bugs (or design flaws, take your pick), mainly in Intel CPUs and going way back, which enable malware to access memory in the computer that should be inaccessible.

How do you protect against this risk? The industry has done a poor job in communicating what users (or even system admins) should do.

A key reason why this problem is so serious is that it risks a nightmare scenario for public cloud vendors, or any hosting company. This is where software running in a virtual machine is able to access memory, and potentially introduce malware, in either the host server or other virtual machines running on the same server. The nature of public cloud is that anyone can run up a virtual machine and do what they will, so protecting against this issue is essential. The biggest providers, including AWS, Microsoft and Google, appear to have moved quickly to protect their public cloud platforms. For example:

The majority of Azure infrastructure has already been updated to address this vulnerability. Some aspects of Azure are still being updated and require a reboot of customer VMs for the security update to take effect. Many of you have received notification in recent weeks of a planned maintenance on Azure and have already rebooted your VMs to apply the fix, and no further action by you is required.

With the public disclosure of the security vulnerability today, we are accelerating the planned maintenance timing and will begin automatically rebooting the remaining impacted VMs starting at 3:30pm PST on January 3, 2018. The self-service maintenance window that was available for some customers has now ended, in order to begin this accelerated update.

Note that this fix is at the hypervisor, host level. It does not patch your VMs on Azure. So do you also need to patch your VM? Yes, you should; and your client PCs as well. For example, KB4056890 (for Windows Server 2016 and Windows 10 1607), or KB4056891 for Windows 10 1703, or KB4056892. This is where it gets complex though, for two main reasons:

1. The update will not be applied unless your antivirus vendor has set a special registry key. The reason is that the update may crash your computer if the antivirus software accesses memory in a certain way, which it may do. So you have to wait for your antivirus vendor to do this, or remove your third-party anti-virus and use the built-in Windows Defender. (A sketch for checking this key follows below.)

2. The software patch is not complete protection. You also need to update your BIOS, if an update is available. Whether or not it is available may be uncertain. For example, I am pretty sure that I found the right update for my HP PC, based on the following clues:

– The update was released on December 20 2017

– The description of the update is “Provides improved security”

image
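For reference, the registry key mentioned in point 1 is, according to Microsoft’s guidance at the time, a value named cadca5fe-87d3-4b96-b7fb-a231484277cc under HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\QualityCompat, set by the antivirus vendor. A minimal Python sketch to check whether it is present (run on the Windows machine in question, and verify the key name against current Microsoft documentation):

import winreg

KEY_PATH = r"SOFTWARE\Microsoft\Windows\CurrentVersion\QualityCompat"
VALUE_NAME = "cadca5fe-87d3-4b96-b7fb-a231484277cc"   # value reportedly set by AV vendors

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        value, _ = winreg.QueryValueEx(key, VALUE_NAME)
        print("Compatibility flag is set, value:", value)
except FileNotFoundError:
    print("Flag not set: Windows Update may not offer the patch yet.")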

So now is the time, if you have not done so already, to go to the support sites for your servers and PCs, or motherboard vendor if you assembled your own, see if there is a BIOS update, try to figure out if it addresses Spectre and Meltdown, and apply it.

If you cannot find an update, you are not fully protected.

It is not an easy process and realistically many PCs will never be updated, especially older ones.

What is most disappointing is the lack of clarity or alerts from vendors about the problem. I visited the HPE support site yesterday in the hope of finding up to date information on HP’s server patches, to find only a maze of twisty little link passages, all alike, none of which led to the information I sought. The only thing you can do is to trace the driver downloads for your server in the hope of finding a BIOS update.

Common sense suggests that PCs and laptops will be a bigger risk than private servers, since unlike public cloud vendors you do not allow anyone out there to run up VMs.

At this point it is hard to tell how big a problem this will be. Best practice though suggests updating all your PCs and servers immediately, as well as checking that your hosting company has done the same. In this particular case, achieving this is challenging.

PS kudos to BleepingComputer for this nice article and links; the kind of practical help that hard-pressed users and admins need.

There is also a great list of fixes and mitigations for various platforms here:

https://github.com/hannob/meltdownspectre-patches

PPS see also Microsoft’s guidance on patching servers here:

https://support.microsoft.com/en-us/help/4072698/windows-server-guidance-to-protect-against-the-speculative-execution

and PCs here:

https://support.microsoft.com/en-us/help/4073119/protect-against-speculative-execution-side-channel-vulnerabilities-in

There is a handy PowerShell module called SpeculationControl which you can install and run to check status. I was able to confirm that the HP BIOS update mentioned above is the right one. Just run PowerShell with admin rights and type:

Install-Module SpeculationControl

then type

Get-SpeculationControlSettings

image

Thanks to @teroalhonen on Twitter for the tip.

Let’s Encrypt: a quiet revolution

Any website that supports SSL (an HTTPS connection) requires a  digital certificate. Until relatively recently, obtaining a certificate meant one of two things. You could either generate your own, which works fine in terms of encrypting the traffic, but results in web browser warnings for anyone outside your organisation, because the issuing authority is not trusted. Or you could buy one from a certificate provider such as Symantec (Verisign), Comodo, Geotrust, Digicert or GoDaddy. These certificates vary in price from fairly cheap to very expensive, with the differences being opaque to many users.
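As an aside, “generate your own” usually means a self-signed certificate, where the subject and the issuer are the same. A minimal sketch using the third-party Python cryptography package (the common name and validity period are illustrative):

import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "intranet.example.internal")])

cert = (
    x509.CertificateBuilder()
    .subject_name(name)        # self-signed: subject and issuer are the same
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())
)

with open("selfsigned.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
with open("selfsigned.key", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))

Browsers will warn on this certificate because no trusted authority vouches for it, which is exactly the limitation described above.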

Let’s Encrypt is a project of the Internet Security Research Group, a non-profit organisation founded in 2013 and sponsored by firms including Mozilla, Cisco and Google Chrome. Obtaining certificates from Let’s Encrypt is free, and they are trusted by all major web browsers.

image

Last month Let’s Encrypt announced coming support for wildcard certificates as well as giving some stats: 46 million active certificates, and plans to double that in 2018. The post also notes that the latest figures from Firefox telemetry indicate that over 65% of the web is now served using HTTPS.

image
Source: https://letsencrypt.org/stats/

Let’s Encrypt only started issuing certificates in January 2016 so its growth is spectacular.

The reason is simple. Let’s Encrypt is saving the IT industry a huge amount in both money and time. Money, because its certificates are free. Time, because it is all about automation, and once you have the right automated process in place, renewal is automatic.

I have heard it said that Let’s Encrypt certificates are not proper certificates. This is not the case; they are just as trustworthy as those from the other SSL providers, with the caveat that everything is automated. Some types of certificate, such as those for code-signing, have additional verification performed by a human to ensure that they really are being requested by the organisation claimed. No such thing happens with the majority of SSL certificates, for which the process is entirely automated by all the providers and typically requires only that the requester can receive email at the domain for which the certificate is issued. Let’s Encrypt uses other techniques, such as proof that you control the DNS for the domain, or that you are able to write a file to the website itself. Certificates that require human intervention will likely never be free.

A Let’s Encrypt certificate is only valid for three months, whereas those from commercial providers last at least a year. Despite appearances, this is not a disadvantage. If you automate the process, it is not inconvenient, and a certificate with a shorter life is more secure as it has less time to be compromised.
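Because the certificates are short-lived, it is worth checking expiry from time to time so you notice if the automated renewal ever breaks. A minimal sketch using only the Python standard library (the host name is illustrative):

import datetime, socket, ssl

def days_until_expiry(host, port=443):
    """Connect to a host and return how many days remain before its certificate expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = datetime.datetime.utcfromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]))
    return (expires - datetime.datetime.utcnow()).days

print(days_until_expiry("example.com"))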

The ascendance of Let’s Encrypt is probably regretted both by the commercial certificate providers and by IT companies who make a bit of money from selling and administering certificates.

Let’s Encrypt certificates are issued in plain-text PEM (Privacy Enhanced Mail) format. Does that mean you cannot use them in Windows, which typically uses .cer or .pfx certificates?  No, because it is easy to convert between formats. For example, you can use the openssl utility. Here is what I use on Linux to get a .pfx:

openssl pkcs12 -inkey privkey.pem -in fullchain.pem -export -out yourcert.pfx

If you have a website hosted for you by a third party, can you use Let’s Encrypt? Maybe, but only if the hosting company offers this as a service. They may not be in a hurry to do so, since there is a bit of profit in selling SSL certificates, but on the other hand, a far-sighted ISP might win some business by offering free SSL as part of the service.

Implications of Let’s Encrypt

Let’s Encrypt removes the cost barrier for securing a web site, subject to the caveats mentioned above. At the same time, Google is gradually stepping up warnings in the Chrome browser when you visit unencrypted sites:

Eventually, we plan to show the “Not secure” warning for all HTTP pages, even outside Incognito mode.

Google search is also apparently weighted in favour of encrypted sites, so anyone who cares about their web presence or traffic is or will be using SSL.

Is this a good thing? Given the trivia (or worse) that constitutes most of the web, why bother encrypting it, which is slower and takes more processing power (bad for the planet)? Note also that encrypting the traffic does nothing to protect you from malware, nor does it insulate web developers from security bugs such as SQL injection attacks – which is why I prefer to call SSL sites encrypted rather than secure.

The big benefit though is that it makes it much harder to snoop on web traffic. This is good for privacy, especially if you are browsing the web over public Wi-Fi in cafes, hotels or airports. It would be a mistake though to imagine that if you are browsing the public web using HTTPS that you are really private: the sites you visit are still getting your data, including Facebook, Google and various other advertisers who track your browsing.

In the end it is worth it, if only to counter the number of times passwords are sent over the internet in plain text. Unfortunately people remain willing to send passwords by insecure email so there remains work to do.

Thoughts on Petya/NotPetya and two key questions. What should you do, and is it the fault of Microsoft Windows?

Every major IT security incident generates a ton of me-too articles most of which lack meaningful content. Journalists receive a torrent of emails from companies or consultants hoping to be quoted, with insightful remarks like “companies should be more prepared” or “you should always keep your systems and security software patched and up to date.”

An interesting feature of NotPetya (which is also Not Ransomware, but rather a malware attack designed to destroy data and disrupt business) is that keeping your systems and security software patched and up to date in some cases did not help you. Note this comment from a user:

Updated Win10 CU with all new cumulative updates and Win10 Insider Fast latest were attacked and affected. Probably used “admin” shares but anyway – Defender from Enterprise just ignored virus shared through network.

Nevertheless, running a fully updated Windows 10 did mitigate the attack compared to running earlier versions, especially Windows 7.

Two posts about NotPetya which are worth reading are the technical analyses from Microsoft here and here. Reading these it is hard not to conclude that the attack was an example of state-sponsored cyberwarfare primarily targeting Ukraine. The main factor behind this conclusion is the lack of financial incentive: there was no serious effort to collect payment, which in any case could not restore files. Note the following from Microsoft’s analysis:

The VictimID shown to the user is randomly generated using CryptGenRandom() and does not correspond to the MFT encryption, so the ID shown is of no value and is also independent from the per-drive file encryption ID written on README.TXT.

My observations are as follows.

1. You cannot rely on security software, nor on OS patching (though this is still critically important). Another example of this came in the course of reviewing the new SENSE consumer security appliance from F-Secure. As part of the test, I plucked out a recent email which asked me to download a virus (thinly disguised as an invoice) and tried to download it. I succeeded. It sailed past both Windows Defender and F-Secure. When I tested the viral file with VirusTotal only 4 of 58 anti-virus applications detected it.

The problem is that competent new malware has a window of opportunity of at least several hours when it is likely not to be picked up. If during this time it can infect a significant number of systems and then spread by other means (as happened with both WannaCry and NotPetya) the result can be severe.

2. Check your backups. This is the most effective protection against malware. Further, backup is complicated. What happens if corrupted or encrypted files are backed up several times before the problem is spotted? This means you need a backup that can go back in time to several different dates. If your backup is always online, what happens if a network intruder is able to manage and delete your backups? This means you should have offline backups, or at least avoid having a single set of credentials which, if stolen, give an attacker full access to all your backups. What happens if you think you are backed up, but in fact critical files are not being backed up? This is common and means you must do a test restore from time to time, pretending that all your production systems have disappeared.

3. If you are running Windows, run Windows 10. I am sorry to have to say this, in that I recognize that in some respects Windows 7 has a more coherent design and user interface. But you cannot afford to miss out on the security work Microsoft has done in Windows 10, as the second Microsoft article referenced above spells out. 

4. Is it the fault of Microsoft Windows? An interesting discussion point which deserves more attention. The simplistic argument against Windows is that most malware attacks exploit bugs in Windows, therefore it is partly Microsoft’s fault for making the bugs, and partly your fault for running Windows. The more plausible argument is that Windows monoculture in business gives criminals an easy target, with a huge array of tools and expertise on how to hack it easily available.

The issue is in reality a complex one and we should credit Microsoft at least with huge efforts to make Windows more secure. Users, it must be noted, are in many cases resistant to these efforts, perceiving them as an unnecessary nuisance (for example User Account Control in Vista); and historically third-party software vendors have also often got in the way, for example by being slow or reluctant to apply digital signatures to software drivers and applications.

Windows 8 was in part an effort to secure Windows by introducing a new and secure model for applications. There are many reasons why this was unsuccessful, but too little recognition of the security aspect of these efforts.

The answer then is also nuanced. If you run Windows you can do so with reasonable security, especially if you are serious about it and use features such as Device Guard, which whitelists trusted applications. If you switch your business to Mac or Linux, you might well escape the next big attack, not so much because the OS is inherently more secure, but because you become part of a smaller and less attractive target.

For a better answer, see the next observation.

5. Most users should run a locked-down operating system. This seems rather obvious. Users who are not developers, who use the same half a dozen applications day to day, are better and more safely served by running a computer in which applications are properly isolated from the operating system and on which arbitrary executables from unknown sources are not allowed to execute. Examples are iOS, Android, Chrome OS and Windows 10 S.  Windows 10 Creators Update lets you move a little way in this direction by setting it to allow apps from the Store only:

image

There is a significant downside to running a locked-down operating system, especially as a consumer, in that you cede control of what you can and cannot install to the operating system vendor, as well as paying a fee to that vendor for each paid-for installation. Android and iOS users live with this because it has always been that way, but in Windows the change of culture is difficult. Another issue is limitations in the Windows Store app platform, though this is becoming less of an issue thanks to the Desktop Bridge, which means almost any application can become a Store application. In gaming there is a problem with Steam which is an entire third-party Store system (apparently Steam bypasses the Windows 10 control panel restriction, though it does not run on Windows 10 S). Open source applications are another problem, since few are available in the Windows Store, though this could change.

If we really want Windows to become more secure, we should get behind Windows 10 S and demand better third-party support for the Windows Store.