Tag Archives: security

Fake TalkTalk Frequently Asked Questions

I use TalkTalk for broadband and landline – though I never signed up with TalkTalk, I signed up with a smaller provider that was taken over – and recently I have been plagued with calls from people claiming to be from TalkTalk, but who in fact have malicious intent. If I am busy I just put the phone down, but sometimes I chat with them for a while, to discover more about what they are trying to do.

Rather than write a long general piece about this problem, I thought the best approach would be a Q&A with answers to the best of my knowledge.

Why so many fake TalkTalk calls?

I have two landline numbers, and until recently only the non-TalkTalk number ever got called by scammers. This makes me think that the flood of TalkTalk calls is related to data stolen from the company, perhaps in October 2015 or perhaps in subsequent attacks. Some victims report that scammers know their name and account number; in my case I don’t have any evidence for that. On a couple of occasions I have asked the caller to state my account number but they have given me a random number. However I do think that my telephone number is on a list of valid TalkTalk numbers that is circulating among these criminal organisations.

How do I know if it is really TalkTalk?

My advice is to assume that it is not TalkTalk. If you think TalkTalk really wants to get in touch with you, put the phone down and call TalkTalk customer service, either from another number or after waiting 15 minutes to make sure that the person who called you has really terminated the call.

How does the caller know my Computer License ID?

A common part of these scripts is that the caller will show that they know your “computer license ID” by guiding you to display it on your screen and then reading it to you. They do this by getting you to open a command window and type assoc:

image

The way this works is simple. The value you see next to .ZFSendToTarget is not a license ID but a CLSID, short for Class ID. It is part of the plumbing of Windows, and the same on every Windows PC.

What about all the malware errors and warnings on my PC?

This is a core part of the fake TalkTalk (and fake Microsoft) script. Our server has picked up warning messages from your computer, they say, and they show you a list of them.

The way this works is that the scammer guides you to open a Windows utility called Event Viewer, usually via the Run dialog (type eventvwr). Then they get you to select the “Administrative events” view, which filters the log to show only errors and warnings.

Now, you have to agree that the number of errors and warnings Windows manages to generate is remarkable. My PC has over 9,000:

image

However, these messages are not generated by malware, nor are they broadcast to the world (or to TalkTalk servers). They are simply log entries generated by the operating system. If you have time on your hands, you can look up the reason for each one and even fix many of them; but in most cases they are just noise. Real malware, needless to say, does not make helpful logs of its activity but keeps quiet about it.

What does Fake TalkTalk really want to do?

Once your fake TalkTalk caller has persuaded you that something is wrong with your PC or router or internet connection, the next step is invariably to get remote access to your PC. They do this by guiding you to a website such as Ammyy or Logmein Rescue, and initiate a support session. These are legitimate services used by support engineers, but unfortunately if you allow someone untrustworthy to log onto your PC bad things will happen. Despite what the caller may tell you, these sessions are not just for messaging but enable the scammer to see your computer screen and even take over mouse and keyboard input.

Windows will generally warn you before you allow a remote session to start. You have to click through a dialog that says something like “Do you want to allow this app to make changes to your PC?”. This warning is there for a reason! Certainly say No if fake TalkTalk is on the line.

Note though that this remote control software is not in itself malware. Therefore you will see that the software that is trying to run is from a legitimate company. Unfortunately that will not protect you when someone who means you harm is at the other end of the connection.

OK, so Fake TalkTalk has a remote connection. What next?

Despite my interest in the goals of these scammers, I have never gone so far as to allow them to connect. There are ways to do this relatively safely, with an isolated virtual machine, but I have not gone that far. However I have seen reports from victims.

There is no single fake TalkTalk, but many organisations out there running this impersonation scam. So the goals of these various organisations (and they are generally organisations rather than individuals) will vary.

A known scam is that the scammer will tell you a refund is due because of your slow internet connection. They show you that the sum has been paid, via a fake site, but oh dear, it is more than is due! For example, you are due £200 but have been paid £1200. Oops. Would you mind repaying the £1000 or I will be fired? So you send off £1000 but it turns out you were not paid any money at all.

Other possibilities are that your PC becomes part of a bot network, to be rented out to criminals for various purposes; or that the “engineer” finds such severe “problems” with your PC that you have to purchase their expensive anti-malware software or service; or your PC may be used to send out spam; or a small piece of software is installed that captures your keystrokes so your passwords will be sent to the scammer; or the scammer will search your documents for information they can use for identity theft.

Many possibilities; so for sure it is better not to let these scammers, or anyone you do not trust, connect to your PC.

Who are the organisations behind Fake TalkTalk?

When I am called by TalkTalk impersonators, I notice several things. One is that the call quality is often poor, thanks to use of a cheap voice over IP connection from a far-off country. Second, I can hear many other calls taking place in the background, showing that these are not just individuals but organisations of some size. In fact, a common pattern is that three people are involved, one who initiates the call, a supervisor who makes the remote connection, and a third “engineer” who takes over once the connection is made.

One thing you can be sure of is that they are not in the UK. In fact, all the calls I have had seem to originate from outside Europe. This means of course that they are outside the scope of our regulators and difficult for police or fraud investigators to track down.

If you ask one of these callers where they are calling from, they often say they are in London. You can have some fun by asking questions like “what is the weather like in London?” or “what is the nearest tube station?”; they probably have no idea.

What is being done about this problem?

Good question. I have reported all my calls to TalkTalk, as well as using “Report abuse” forms on LogMeIn with the PINs used by the criminals. On one occasion I was given a scammer’s Google email address; there is no way I can find to report this to Google, which perhaps shows the limits of how much the company cares about our security.

I am not optimistic then that much of substance is being done or can be done. Addressing the problem at source means visiting the country where the scam is based and working with local law enforcement; even if that worked, other organisations in other countries soon pop up.

That means, for the moment, that education and warning is essential, imperfect though it is. TalkTalk, it seems to me, could do much better. Has it contacted all its customers with information and warnings? I don’t believe so. The company is worried, perhaps, more about its reputation than the security of its customers.

DatAshur encrypted drives: protect your data but be sure to back it up too

The iStorage DatAshur USB flash drive is a neat way to encrypt your data. Lost USB storage devices are a common cause of data theft anxiety: in most cases the finder won’t care about your data, but you can never be certain.

image

The DatAshur is simple to operate but highly secure, presuming it meets the advertised specification. All data written to the drive is automatically encrypted with 256-bit AES CBC (Advanced Encryption Standard with Cipher Block Chaining) and meets the US FIPS 140-2 standard. The encryption is transparent to the operating system, since decryption is built into the device and enabled by entering a PIN of 7 to 15 digits.
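The DatAshur does its key handling inside the device and iStorage does not publish the exact scheme, but as a rough illustration of the general technique, here is a minimal Python sketch of stretching a short numeric PIN into a 256-bit key with PBKDF2; the salt and iteration count here are my own assumptions, not the device’s:

```python
import hashlib

def derive_key_from_pin(pin: str, salt: bytes, iterations: int = 200_000) -> bytes:
    """Stretch a 7-15 digit PIN into a 256-bit key using PBKDF2-HMAC-SHA256."""
    if not (7 <= len(pin) <= 15 and pin.isdigit()):
        raise ValueError("PIN must be 7 to 15 digits")
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, iterations, dklen=32)

# The salt would be unique per device, so identical PINs yield different keys
key = derive_key_from_pin("4417369", b"device-unique-salt")
print(len(key))  # 32 bytes, i.e. a 256-bit AES key
```

The iteration count is what makes each guess slow, which matters when the secret is as short as a numeric PIN.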

Note that a snag with this arrangement is that if your PC is compromised a hacker might be able to read the data while the drive is connected. If you are really anxious you could get round this by working offline, or perhaps using Microsoft’s clever Windows to Go (WTG) technology where you boot from a USB device and work in isolation from the host operating system. Unfortunately DatAshur does not support WTG (as far as I know) but there are alternatives which do, or you could boot into WTG and then insert your DatAshur device.

Normally you enter the PIN to unlock the drive before connecting it to a PC or Mac. This does mean that the DatAshur requires a battery, and a rechargeable battery is built in. However if the battery is exhausted you can still get your data back by recharging the device (it charges whenever it is plugged into a USB port).

OK, so what happens if a bad guy gets your device and enters PINs repeatedly until the right one is found? This will not work (unless you chose 1234567 or something like that) since after 10 failed tries the device resets, deleting all your data.
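Some back-of-envelope arithmetic, sketched in Python, shows why the 10-try limit makes guessing hopeless:

```python
# Every possible PIN of 7 to 15 digits
keyspace = sum(10 ** n for n in range(7, 16))
print(f"{keyspace:,}")  # 1,111,111,110,000,000 possible PINs

# With the drive wiping itself after 10 failed attempts, the chance of
# guessing a randomly chosen PIN before the wipe is roughly 1 in 100 trillion
print(10 / keyspace)
```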

You should avoid, then, the following scenario. You give your DatAshur drive to your friend to show it off. “I’ve just updated all my expenses on this and there is no way you’ll be able to get at the data”. Friend fiddles for a bit. “Indeed, and neither can you”.

Here then is the security dilemma: the better the security, the more you risk losing access to your own data.

The DatAshur does have an additional feature which mitigates the risk of forgetting the PIN. You can actually set two PINs, a user PIN and an admin PIN. The admin PIN could be retained by a security department at work, or kept in some other safe place. This still will not rescue you though if more than 10 attempts are made.

What this means is that data you cannot afford to lose must be backed up as well as encrypted, with all the complexity that backup involves (must be off-site and secure).

Still, if you understand the implications this is a neat solution, provided you do not need to use those pesky mobile devices that lack USB ports.

The product tested has a capacity from 4GB to 32GB and has a smart, strong metal case. The plastic personal edition runs from 8GB to 32GB and is less robust. An SSD model offers from 30GB to 240GB, and larger desktop units support SSD or hard drive storage from 64GB to 6TB, with USB 3.0 for fast data transfer.

Prices range from around £30 inc VAT for an 8GB Personal USB stick, to £39.50 for the 4GB professional device reviewed here, up to £470 for the monster 6TB drive or £691 for a USB 3.0 external SSD (prices taken from a popular online retailer). The cost strikes me as reasonable for well-made secure storage.

More information on DatAshur is here.

Privacy, Google Now, Scroogled, and the connected world

2013 saw the launch of Google Now, a service which aspires to alert you to information you care about at just the right time. Rather than mechanical reminders of events 15 minutes before start time, Google Now promises to take into account location, when you are likely to have to leave to arrive where you want to be, and personal preferences. Much of its intelligence is inferred from what Google knows about you through your browsing patterns, searches, location, social media connections and interactions, and (following Google’s acquisition of Nest, which makes home monitoring kit) who knows what other data that might be gathered.

It is obvious that users are being invited to make a deal. Broadly, the offer is that if you hand over as much of your personal data to Google as you can bear, then in return you will get services that will make your life easier. The price you pay, loss of privacy aside, is more targeted advertising.

There could be other hidden costs. Insurance is one that intrigues me. If insurance companies know everything about you, they may be able to predict more accurately what bad things are likely to happen to you and make insuring against them prohibitively expensive.

Another issue is that the more you use Google Now, the more benefit there is in using Google services versus their competitors. This is another example of the winner-takes-all effect which is commonplace in computing, though it is a different mechanism. It is similar to the competitive advantage Google has already won in search: it has more data, therefore it can more easily refine and personalise search results, therefore it gets more data. However this advantage is now extended to calendar, smartphone, social media, online shopping and other functions. I would expect more future debate on whether it is fair for one company to hold all these data. I have argued before about Google and the case for regulation.

This is all relatively new, and there may be – probably are – other downsides that we have not thought of.

Microsoft in 2013 chose to highlight the privacy risks (among other claimed deficiencies) of engaging with Google through its Scroogled campaign.

image

Some of the concerns raised are valid; but Microsoft is the wrong entity to do this, and the campaign betrays its concern over more mundane risks like losing business: Windows to Android or Chrome OS, Office to Google Docs, and so on. Negative advertising rarely impresses, and I doubt that Scroogled will do much either to promote Microsoft’s services or to disrupt Google. It is also rather an embarrassment.

The red box above suits my theme though. What comes to mind is what in hindsight is one of the most amusing examples of wrong-headed legislation in history. In 1865 the British Parliament passed the first of three Locomotive Acts regulating “road locomotives” or horseless carriages. It limited speed to 4 mph in the country and 2 mph in the town, and required a man carrying a red flag to walk in front of certain types of vehicles.

red-flag

The reason this is so amusing is that having someone walk in front of a motorised vehicle limits the speed of the vehicle to that of the pedestrian, negating its chief benefit.

How could legislators be so stupid? The answer is that they were not stupid and they correctly identified real risks. Motor vehicles can and do cause death and mayhem. They have changed our landscape, in many ways for the worse, and caused untold pollution.

At the same time, the motor vehicle has been a huge advance in civilisation, enabling social interaction, trade and leisure opportunities that we could not now bear to lose. The legislators saw the risks, but had insufficient vision to see the benefits – except that over time, and inevitably, speed limits and other restrictions were relaxed so that motor vehicles were able to deliver the benefits of which they were capable.

My reflection is whether the fears into which the Scroogled campaign attempts to tap are similar to those of the Red Flag legislators. The debate around privacy and data sharing should not be driven by fear, but rather about how to enable the benefits while figuring out what is necessary in terms of regulation. And there is undoubtedly a need for some regulation, just as there is today for motor vehicles – speed limits, safety belts, parking restrictions and all the rest.

Returning for a moment to Microsoft: it seems to me that another risk of its Scroogling efforts is that it positions itself as the red flag rather than the horseless carriage. How is that going to look ten years from now?

Adobe’s security calamity: 2.9 million customer account details accessed

Adobe has reported a major security breach. According to the FAQ:

Our investigation currently indicates that the attackers accessed Adobe customer IDs and encrypted passwords on our systems. We also believe the attackers removed from our systems certain information relating to 2.9 million Adobe customers, including customer names, encrypted credit or debit card numbers, expiration dates, and other information relating to customer orders. At this time, we do not believe the attackers removed decrypted credit or debit card numbers from our systems.

We are also investigating the illegal access to source code of numerous Adobe products. Based on our findings to date, we are not aware of any specific increased risk to customers as a result of this incident.

A few observations.

  • If the criminals downloaded 2.9 million customer details with name, address and credit card details the risk of fraud is substantial. Encryption is good of course, but if you have a large body of encrypted information which you can attack at your leisure then it may well be cracked. Adobe has not told us how strong the encryption is.
  • The FAQ is full of non-answers. For example – question: how did this happen? Answer: “Our investigation is still ongoing.”
  • Apparently if Adobe thinks your credit card details were stolen you will get a letter. That seems odd to me, unless Adobe is also contacting affected customers by email or telephone. Letters are slow and not all that reliable since people move regularly (though I suppose if the address on file is wrong then the credit card information may well be of little use.)
  • Adobe says source code was stolen too. This intrigues me. What is the value of the source code? It might help a criminal crack the protection scheme, or find new ways to attack users with malicious PDF documents. A few people in the world might even be interested to see how certain features of say Photoshop are implemented in order to assist with coding a rival product, but finding that sort of buyer might be challenging.
  • Is the vulnerability which enabled the breach now fixed? Another question not answered in the FAQ. Making major changes quickly to such a large system would be difficult, but it all depends on what enabled the breach, which we do not know.
  • I’d like to see an option not to store credit card details, but to enter them afresh for each transaction. Hassle of course, and not so good for inertia marketing, but more secure.

Does anti-virus work? Does Android need it? Reflections on AVG’s security suite

I’m just back from AVG’s press event in New York, where new CEO Gary Kovacs (ex Mozilla) presented the latest product suite from the company.

image

Security is a huge topic but I confess to being something of a sceptic when it comes to PC security products. Problems include performance impact, unnecessary tinkering with the operating system (replacing the perfectly good Windows Firewall, for example), feature creep into non-security areas (AVG now does a performance tune-up product), and the fact that security software is imperfect. Put bluntly, it doesn’t always work; and ironically there was an example at a small business I work with while I was out there.

This business has AVG on its server and Microsoft Security Essentials on the clients, and somehow one of the clients got infected with a variant of a worm known as My little pronny which infects network shares. It may not be the exact one described in the link as these things mutate. Not too difficult to fix in this instance but a nuisance, and not picked up by the security software.

IT pros know that security software is imperfect, but users do not; the security vendors are happy to give the impression that their products offer complete protection.

Still, there is no doubt that anti-malware software prevents some infections and helps with fixing others, so I do not mean to suggest that it is no use.

AVG is also a likeable company, not least because it offers free versions of its products that are more than just trialware. The freemium model has worked for AVG, with users impressed by the free stuff and upgrading to a paid-for version, or ordering the commercial version for work after a good experience with the free one.

Another key topic though is how security companies like AVG will survive the declining PC market. Diversification into mobile is part of their answer; but as I put it to several executives this week, Windows is particularly vulnerable thanks to its history and design, whereas operating systems like Android, iOS and Windows RT are designed for the internet and locked down so that software is only installed from curated app stores. Do we still need security software on such devices?

My further observation is that I know lots of people who have experienced Windows malware, but none so far who have complained about a virus on their Android or iOS device.

What then did I learn? Here is a quick summary.

AVG is taking a broad view of security, and Kovacs talked to me more about privacy issues than about malware. Mozilla is a non-profit that fights for the open web, and the continuity for Kovacs now with AVG is that he is working to achieve greater transparency and control for users over how their data is collected and shared.

The most striking product we saw is a free browser add-in called PrivacyFix. This has an array of features, including analysis of social media settings, analysis and blocking of ad trackers, and reports on issues with sites you visit ranging from privacy policy analysis to relevant information such as whether the site has suffered a data breach. It even attempts to rate your value to the site with the current settings; information which is not directly useful to you but which does reinforce the point that vendors and advertisers collect our data for a reason.

image

I can imagine PrivacyFix being unpopular in the ad tracking industry, and upsetting sites like Facebook and Google which gather large amounts of personal data. Facebook gets 4 out of 6 for privacy, and the tool reports issues such as the June 2013 Facebook data breach when you visit the site and activate the tool. Its data is limited though. When I tried it on my own site, it reported “This site has not yet been rated”.

AVG’s other announcements include a secure file shredder and an encrypted virtual drive called Data Safe which looks similar to the open source TrueCrypt but a little more user-friendly, as you would expect from a commercial utility.

AVG PC TuneUp includes features to clean the Windows registry, full uninstall, duplicate file finder, and “Flight mode” to extend battery life by switching off unneeded services as well as wireless networking. While I am in favour of making Windows leaner and more efficient, I am wary of a tool that interferes so much with the operating system. However AVG make bold claims for the efficacy of Flight Mode in extending battery life and perhaps I am unduly hesitant.

On the small business side, I was impressed with CloudCare, which provides remote management tools for AVG resellers to support their customers, apparently at no extra cost.

All of the above is Windows-centric, a market which AVG says is still strong for them. The company points out that even if users are keeping PCs longer, preferring to buy new tablets and smartphones rather than upgrade their laptops, those older PCs still need tools such as AVG’s suite.

Nevertheless, AVG seems to be hedging its bets with a strong focus on mobile, especially Android. We were assured that Android is just as vulnerable as Windows when it comes to malware, and that even Apple’s iOS needs its security supplementing. Even if you do not accept that the malware risk is as great as AVG makes out, if you extend what you mean by security to include privacy then there is no doubting the significance of the issue on mobile.

Hands on with Microsoft’s Azure Cloud Rights Management: not ready yet

If you could describe the perfect document security system, it might go something like this. “I’d like to share this document with X, Y, and Z, but I’d like control over whether they can modify it, I’d like to forbid them to share it with anyone else, and I’d like to be able to destroy their copy at a time I specify”.

This is pretty much what Microsoft’s new Azure Rights Management system promises, kind-of:

ITPros have the flexibility in their choice of storage locale for their data and Security Officers have the flexibility of maintaining policies across these various storage classes. It can be kept on premise, placed in an business cloud data store such as SharePoint, or it can placed pretty much anywhere and remain safe (e.g. thumb drive, personal consumer-grade cloud drives).

says the blog post.

There is a crucial distinction to be made though. Does Rights Management truly enforce document security, so that it cannot be bypassed without deep hacking; or is it more of an aide-memoire, helping users to do the right thing but not really enforcing it?

I tried the preview of Azure Rights Management, available here. Currently it seems more the latter, rather than any sort of deep protection, but see what you think. It is in preview, and a number of features are missing, so expect improvements.

I signed up and installed the software into my Windows 8 PC.

image

The way this works is that “enlightened” applications (currently Microsoft Office and Foxit PDF, though even they are not fully enlightened as far as I can tell) get enhancements to their user interface so you can protect documents. You can also protect *any* document by right-clicking in Explorer:

image

I typed a document in Word and hit Share Protected in the ribbon. Unfortunately I immediately got an error, that the network location cannot be reached:

image

I contacted the team about this, who asked for the log file and then gave me a quick response. The reason for the error was that Rights Management was looking for a server on my network that I sent to the skip long ago.

Many years ago I must have tried Microsoft IRM (Information Rights Management) though I barely remember. The new software was finding the old information in my Active Directory, and not trying to contact Azure at all.

This is unlikely to be a common problem, but illustrates that Microsoft is extending its existing rights management system, not creating a new one.

With that fixed, I was able to protect and share a document. This is the dialog:

image

It is not a Word dialog, but rather part of the Rights Management application that you install. You get the same dialog if you right-click any file in Explorer and choose Share Protected.

I entered a Gmail email address and sent the protected document, which was now wrapped in a file with a .pfile (Protected File) extension.

Next, I got my Gmail on another machine.

First, I tried to open the file on Android. Unfortunately only x86 Windows is supported at the moment:

image

There is an SDK for Android, but that is all.

I tried again on a Windows machine. Here is the email:

image

There is also a note in the email:

[Note: This Preview build has some limitations at this time. For example, sharing protected files with users external to your organization will result in access control without additional usage restrictions. Learn More about the Preview]

I was about to discover some more of these limitations. I attempted to sign up using the Gmail address. Registration involves solving a vile CAPTCHA

image

but I got this message:

image

In other words, you cannot yet use the service with Gmail addresses. I tried it with a Hotmail address, but Microsoft is being even-handed: that did not work either.

Next, I tried another email address at a different, private email domain (yes, I have lots of email addresses). No go:

image

The message said that the address I used was from an organisation that has Office 365 (this is correct). It then remarked, bewilderingly:

If you have an account you can view protected files. If you don’t have an Office 365 account yet, we’ll soon add support…

This email address does have an Office 365 account. I am not sure what the message means; whether it means the Office 365 account needs to sign up for rights management at £2 per user per month, or what, but it was clearly not suitable for my test.

I tried yet another email address that is not in any way linked to Office 365 and I was up and running. Of course I had to resend the protected file, otherwise this message appears:

image

Incidentally, I think the UI for this dialog is wrong. It is not an error, it is working as designed, so it should not be titled “error”. I see little mistakes like this frequently and they do contribute to user frustration.

Finally, I received a document to an enabled email address and was able to open it:

image

For some reason, the packaging results in a document called “Azure IRM docx.docx” which is odd, but never mind.

My question though: to what extent is this document protected? I took the screen grab using the Snipping Tool and pasted it into my blog for all to read, for example. The clipboard also works:

image

That said, the plan is for tighter protection to be offered in due course, at least in “enlightened” applications. The problem with the preview is that if you share to someone in a different email domain, you are forced to give full access. Note the warning in the dialog:

image

Inherently though, the client application has to have decrypted access to the file in order to open it. All the rights management service does, really, is to decrypt the file for users logged into the Azure system and identified by their email address. What happens after that is a matter of implementation.
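As a toy illustration of that point, here is a Python sketch of the key-release model: the service stores a per-file key and hands it only to authorised identities, yet once the client has the decrypted bytes there is nothing technical to stop onward sharing. The email address is invented, and a SHA-256 counter-mode keystream stands in for real AES; this is a conceptual sketch, not production cryptography:

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR data with a SHA-256 counter-mode keystream."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

file_key = secrets.token_bytes(32)
authorised = {"alice@example.com": file_key}  # the service's key store

protected = keystream_xor(file_key, b"Confidential draft")  # the .pfile, in effect

def open_protected(email: str, blob: bytes) -> bytes:
    """The service releases the key only to an authorised identity."""
    key = authorised.get(email)
    if key is None:
        raise PermissionError("not authorised")
    return keystream_xor(key, blob)  # after this, the bytes are just bytes

print(open_protected("alice@example.com", protected))  # b'Confidential draft'
```

Once `open_protected` returns, the plaintext is in the recipient’s hands: screenshots, the clipboard, or a simple save-as are beyond the service’s reach.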

The consequences of documents getting into the wrong hands are a hot topic today, after Wikileaks et al. Is Microsoft’s IRM a solution?

Making this Azure-based and open to any recipient (once the limitation on “public” email addresses is lifted) makes sense to me. However I note the following:

  • As currently implemented, this provides limited security. It does encrypt the document, so an intercepted email cannot easily be read, but once opened by the recipient, anything could happen.
  • The usability of the preview is horrid. Do you really want your trusted recipient to struggle with a CAPTCHA?
  • Support beyond Windows is essential, and I am surprised that this even went into preview without it.

I should add that I am sceptical whether this can ever work. Would it not be easier, and just as effective (or ineffective), simply to have data on a web site with secure log-in? The idea of securely emailing documents to external recipients is great, but it seems to add immense complexity for little added value. I may be missing something here and would welcome comments.

Two footnotes from my testing: I had to sign in twice since I didn’t check “Remember password”; and if you try protecting a file that is already protected, the tool simply packages the already packaged file again.

Ubuntu forum hack sets same-password users at risk

Canonical has announced a comprehensive security breach of its forums.

  • Unfortunately the attackers have gotten every user’s local username, password, and email address from the Ubuntu Forums database.
  • The passwords are not stored in plain text, they are stored as salted hashes. However, if you were using the same password as your Ubuntu Forums one on another service (such as email), you are strongly encouraged to change the password on the other service ASAP.
  • Ubuntu One, Launchpad and other Ubuntu/Canonical services are NOT affected by the breach.

If someone impersonates you on the Ubuntu forums it might be embarrassing but probably not a calamity. The real risk is escalation. In other words, presuming the attacker is able to work out the passwords (they have all the time in the world to run password cracking algorithms and dictionary attacks against the stolen data), it could be used to compromise more valuable accounts that use the same password.
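To see why the stolen data remains dangerous even though it is hashed and salted, here is a minimal Python sketch of an offline dictionary attack; the password and wordlist are invented for illustration:

```python
import hashlib
import os

def hash_password(password: str, salt: bytes) -> bytes:
    """What a forum database stores: a salted hash, never the password itself."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

salt = os.urandom(16)
stored = hash_password("hunter2", salt)  # the user's (weak) password

# An attacker holding the stolen salt and hash can try candidates at leisure
wordlist = ["letmein", "password", "hunter2", "qwerty"]
cracked = next((w for w in wordlist if hash_password(w, salt) == stored), None)
print(cracked)  # prints: hunter2
```

The salt defeats precomputed tables, but it does not stop an attacker hashing a wordlist against each account individually; only a strong, unique password does that.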

Password recovery mechanisms can work against you. Businesses hate dealing with password reset requests so they automate them as much as they can. This is why Ubuntu’s warning about email accounts is critical: many web sites will simply email your password on request, so if your email is compromised many other accounts may be compromised too.

A better approach in a world of a million passwords is to use a random password generator alongside a password management database for your PC and smartphone. It is still a bit “all eggs in one basket” in that if someone cracks the password for your management database, and gets access, then they have everything.
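By way of illustration, a few lines of Python are enough to generate the kind of random password a manager stores for you (the character set and length here are my own arbitrary choices):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    # secrets draws from a cryptographically secure source,
    # unlike the general-purpose random module
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

The point is that each site gets a unique, unguessable password, so a breach at one service no longer endangers the others.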

It is a dreadful mess. Two-factor authentication, which involves a secondary mechanism such as a security token, card reader, or an SMS confirmation code, is more secure; but best reserved for a few critical accounts otherwise it becomes impractical. Two-factor authentication plus single sign-on is an even better approach.

What is mobile security? And do we need it?

I attended Mobile World Congress in Barcelona, where (among many other things) numerous security vendors were presenting their latest mobile products. I took the opportunity to quiz them. Why do smartphone users need to worry about security software, which so many were glad to leave behind with their PCs? I observed that whereas I have often heard of friends or contacts suffering from PC malware, I have yet to hear anyone complain about a virus on their mobile or tablet.

I got diverse answers. NQ Mobile, for example, told me that while mobile malware is relatively uncommon in the USA and Europe, it is different in China where the company has a strong base. In China and some other territories, there are many Android-based mobiles for which the main source of apps is not the official Google Play store, but downloads from elsewhere, and malware is common.

Do you have an Android phone? Have you checked that option to “allow installation of non-Market apps”? One mobile gaming controller I received for review recently came with a free game. Guess what – to install the game you have to check that option, as noted in the documentation.

image

When you allow non-Market apps, you are disabling a key Android security feature: that apps can only be installed from the official store, which, you hope, applies some level of quality checking from Google, and from which malware that does slip through is likely to be quickly removed. But what will users do: install the game, or refuse to disable the feature? I am reminded of those installation manuals for PC devices which include instructions to ignore the warnings about unsigned drivers. Most of us shrug and go ahead.

Nevertheless, for those of us not in China, mobile malware is either uncommon, or so stealthy that few of us notice it (an alarming thought). Most of the responses I received from the security vendors were more along the lines that PC-style malware is only one of many mobile security concerns. Privacy is another one high on the list. When you install an app, you see a list of the permissions it is demanding, and sometimes the extent of them is puzzling. How do we know whether an app is grabbing more data than it should, for unknown purposes (but probably to do with ad targeting)?

Some of the mobile security products attempt to address this problem. Bitdefender Mobile Security includes an application audit which keeps track of what apps are doing. Norton Mobile Security scans for apps with “unusual permissions”.

Web site checking is another common feature. Software will attempt to detect phishing sites or those compromised with malware.

Perhaps the biggest issue though is what happens to your lost or stolen device. Most of the mobile security products include device tracking, remote lock and remote wipe (of course, some smartphones come with some of this built-in, like iOS and Find My iPhone).

If you do lose your phone, an immediate worry is the security of the data on it, or even worse, on an SD card that can be removed and inspected. Your contacts? Compromising photos? Company data? Remote wipe is a great feature, but could a smart thief disable it before you are able to use it?

Some products offer additional protection. NQ Mobile offers a Mobile Vault for data security. It has a nice feature: it takes a photo of anyone who enters a wrong passcode. Again though, note that some smartphones have device encryption built in, and it is just a matter of enabling it.

Windows Phone 8 is an interesting case. It includes strong Bitlocker encryption, but end users cannot easily enable it. It is enabled via Exchange ActiveSync policies, set through the Exchange Management Console or via PowerShell:

image

Why not let users set encryption themselves, if required, as you can on some Android phones? On Apple iOS, data encryption is automatic and can be further protected by a passcode, with an option to wipe all data after 10 failed attempts.

Encryption will not save you of course if a rogue app is accessing your data and sending it off somewhere.

Mobile security can feel like a phoney war (ha!). We know the risks are real: smartphones are just small computers, as vulnerable to malware as larger ones, and their portability makes them more likely to go astray. Yet most of us do not experience malware and mainly worry about loss or theft.

Businesses are the opposite and may care more about protecting data than about losing a device, hence the popularity of mobile device management solutions. The fact is though: some of that data is on the device and being taken everywhere, and it is hard to eliminate the risk.

Is mobile security a real problem? I hardly need to say this: yes, it is huge. Do you need anti-virus software on your phone? That is harder to answer, but unless you are particularly experimental with the apps you install, I am not yet convinced.

The frustrating part is that modern smartphones come with integrated security features, many of which are ignored by most users, who find even a simple passcode lock too inconvenient to bother with (or perhaps nobody told them how to set it). It is hard to understand why more smartphones and tablets are not secure by default, at least for the easy things like passcodes and encryption.

App and privacy issues are harder to address, though maintaining properly curated app stores and only installing apps from there or from other trusted sources is a good start.

Another reason to use tablets: desktop anti-virus does not work

The New York Times has described in detail how it was hacked by a group looking for data on Chinese dissidents and Tibetan activists. The attack was investigated by security company Mandiant.

Note the following:

Over the course of three months, attackers installed 45 pieces of custom malware. The Times — which uses antivirus products made by Symantec — found only one instance in which Symantec identified an attacker’s software as malicious and quarantined it, according to Mandiant.

Apparently the initial attack method was simple: emails with malicious links or attachments.

Symantec made an unconvincing defence of its products in a statement quoted by The Register:

Advanced attacks like the ones the New York Times described … underscore how important it is for companies, countries and consumers to make sure they are using the full capability of security solutions. The advanced capabilities in our endpoint offerings, including our unique reputation-based technology and behaviour-based blocking, specifically target sophisticated attacks. Turning on only the signature-based anti-virus components of endpoint solutions alone are not enough in a world that is changing daily from attacks and threats. We encourage customers to be very aggressive in deploying solutions that offer a combined approach to security. Anti-virus software alone is not enough.

Could the New York Times hack have been prevented by switching on more Symantec features? Count me as sceptical; in fact, it would not surprise me if these additional features were on anyway.

Anti-malware solutions based on detecting suspicious behaviour do not work. The task is too difficult, balancing inconvenience, performance, and limited knowledge of what really is or is not suspicious. Further, dialogs presented to non-technical users are mystifying and whether or not the right response is made is a matter of chance.

This does not mean that secure computing, or at least more secure computing, is impossible. A Windows desktop can be locked down using whitelisting technology and limited user permissions, at the expense of inconvenience if you need to run something not on the whitelist. In addition, users can avoid most attacks without any anti-virus software, simply by steering clear of malicious links, attachments and untrustworthy websites.
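The whitelisting idea is simple enough to sketch. The Python below illustrates the principle only (real products work on digital signatures and publisher rules as well as file hashes, and the "approved" digest here is just the SHA-256 of the bytes b"test", standing in for a real binary):

```python
import hashlib

# Hypothetical whitelist maintained by IT: SHA-256 digests of approved binaries.
# This entry is the digest of b"test", used here purely for illustration.
APPROVED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def file_digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def may_execute(data: bytes) -> bool:
    # Default deny: anything not on the list is refused to run,
    # which is exactly why unlisted-but-legitimate software is an inconvenience
    return file_digest(data) in APPROVED_HASHES
```

Default deny is what makes this effective where signature-based anti-virus fails: the 45 pieces of custom malware in the New York Times attack were unknown to Symantec, but they would equally have been unknown to the whitelist.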

Aside: it is utterly stupid that Windows 8 ships with a new mail client which does not allow you to delete emails without previewing them, or to see the real destination of a URL in the body of an email.

This kind of locked-down client is available in another guise though. Tablets such as those running iOS, Android or Windows RT (mail client aside) are designed to be resistant to attack, since apps are sandboxed and normally can only be installed via a trusted app store. Although users can bypass this restriction, for example by enabling developer permissions, this is not such a problem in a corporate deployment. The users most at risk are probably those least likely to make the effort to bypass corporate policies.

Note that in this context a Windows 8 Professional tablet such as Surface Pro is just another desktop and no more secure.

Another approach is to stop believing that the endpoint – the user’s device – can ever be secured. Lock down the server side instead, and take steps to protect just that little piece of functionality the client needs to access the critical data and server applications.

The key message though is this. Anti-virus software is ineffective. It is not completely useless, but can be counter-productive if users believe that because they have security software installed, they are safe from malware. This has never been true, and despite the maturity of the security software industry, remains untrue.

New types of client devices hold more promise as a route to safer personal computing.

Got a Ruby on Rails application running? Patch it NOW

A security issue has been discovered in Ruby on Rails, a popular web application framework. It is a serious one:

There are multiple weaknesses in the parameter parsing code for Ruby on Rails which allows attackers to bypass authentication systems, inject arbitrary SQL, inject and execute arbitrary code, or perform a DoS attack on a Rails application. This vulnerability has been assigned the CVE identifier CVE-2013-0156.
Versions Affected:  ALL versions
Not affected:       NONE
Fixed Versions:     3.2.11, 3.1.10, 3.0.19, 2.3.15

and also worth noting:

An attacker can execute any ruby code he wants including system("unix command"). This effects any rails version for the last 6 years. I’ve written POCs for Rails 3.x and Rails 2.x on Ruby 1.9.3, Ruby 1.9.2 and Ruby 1.8.7 and there is no reason to believe this wouldn’t work on any Ruby/Rails combination since when the bug has been introduced. The exploit does not depend on code the user has written and will work with a new rails application without any controllers.

You can grab patched versions here.
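If you are not sure whether a given installation is safe, the advisory's fixed-version list makes the check mechanical. A Python sketch (the version parsing and branch logic are my own illustration, built from the fixed versions quoted above):

```python
# Fixed versions from the advisory: 3.2.11, 3.1.10, 3.0.19, 2.3.15
FIXED = {
    (3, 2): (3, 2, 11),
    (3, 1): (3, 1, 10),
    (3, 0): (3, 0, 19),
    (2, 3): (2, 3, 15),
}

def parse(version: str):
    return tuple(int(p) for p in version.split("."))

def is_patched(version: str) -> bool:
    v = parse(version)
    fixed = FIXED.get(v[:2])
    # Branches with no listed fix received none: treat them as vulnerable,
    # consistent with "Versions Affected: ALL versions"
    return fixed is not None and v >= fixed
```

Anything that reports False needs upgrading, or at minimum the workarounds in the advisory applied, straight away.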

How quickly can an organisation patch its applications? As Sourcefire security architect Adam J. O’Donnell observes, this is where strong DevOps pays dividends:

Modern web development practices have made major leaps when it comes to shortening the time from concept to deployment.  After a programmer makes a change, they run a bunch of automated tests, push the change to a code repository, where it is picked up by another framework that assures the changes play nice with every other part of the system, and is finally pushed out to the customer-facing servers.  The entire discipline of building out all of this infrastructure to support the automated testing and deployment of software is known as DevOps.

In a perfect world, everyone practices devops, and everyone’s devops workflow is working at all times.  We don’t live in a perfect world.

For many organizations changing a library or a programming framework is no small task from a testing and deployment perspective.  It needs to go through several steps between development and testing and finally deployment.  During this window the only thing that will stop an attacker is either some form of network-layer technology that understands how the vulnerability is exploited or, well, luck.

This site runs WordPress, and if I look at the logs I see constant attack attempts. In fact, I see the same attacks on sites which do not run WordPress. The bots that do this are not very smart; they try some exploit against every site they can crawl and do not care how many 404s (page not found errors) they get. Once in a while, they hit. Sometimes it is the little-used applications, the tests and prototypes, that are more of a concern than the busy sites, since they are less likely to be patched, and might provide a gateway to other sites or data that matter more, depending on how the web server is configured.
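Spotting this kind of bot activity in your own logs is straightforward. Here is a rough Python sketch that counts 404s per client IP in combined-format access logs; the log layout and the sample lines are assumptions, so adjust the regular expression for your own server:

```python
import re
from collections import Counter

# Assumed Apache/nginx "combined" layout: ip ident user [date] "request" status size ...
LINE = re.compile(r'^(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?P<req>[^"]*)" (?P<status>\d{3})')

def count_404s(lines):
    # An IP racking up many 404s on paths it has no business requesting
    # (wp-login.php on a non-WordPress site, say) is almost certainly a bot
    hits = Counter()
    for line in lines:
        m = LINE.match(line)
        if m and m.group("status") == "404":
            hits[m.group("ip")] += 1
    return hits

sample = [
    '203.0.113.9 - - [10/Jan/2013:10:00:00 +0000] "GET /wp-login.php HTTP/1.1" 404 162',
    '203.0.113.9 - - [10/Jan/2013:10:00:01 +0000] "GET /phpmyadmin/ HTTP/1.1" 404 162',
    '198.51.100.4 - - [10/Jan/2013:10:00:02 +0000] "GET / HTTP/1.1" 200 5120',
]
print(count_404s(sample))
```

It will not stop the one attempt that hits, but it does make the scale of the scanning visible, and feeds naturally into a blocklist.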