Category Archives: security

Hot news: the Internet is as insecure as ever

I’ve been writing about the Internet for years, and some of my earliest articles were about security problems. I’ve written about why anti-virus software is ineffective, how application insecurities leave web servers open to attack, and why we need authenticated email combined with collective whitelisting in order to solve the problem of spam and virus-laden emails.

What depresses me is that we have made little if any progress over the last decade. Email is broken, but I have to use it for my work. Recently I’ve been bombarded with PDF spam and ecard viruses, which for some reason seem to slip past my junk mail filter. Said filter does a reasonable job and I could not manage without it, but I still get false positives from time to time – genuine messages that get junked and might or might not be spotted when I glance through them. The continuing flow of garbage tells me that anti-virus software is still failing, because it comes from other machines that are already infected.

And what about comment spam? Akismet is fantastic; it claims to have caught 43,000 spam comments on this blog since I installed it in October last year. In the early days I used to glance through all of them, and occasionally I did find a comment that was incorrectly classified. Now the volume of spam comments makes that infeasible, so no doubt some genuine comments are being needlessly junked.

Security is a huge and costly problem. Even when everything is running sweetly, anti-virus and anti-spam software consumes a significant portion of computing resources. Recently I investigated why an older machine with Windows XP was running slowly. It did not take long: Norton anti-virus was grabbing up to 60% of the CPU time. Disabling NAV made the machine responsive again. Nevertheless, the user decided to keep it running. What is the cost to all of us of that accumulated wasted time?

We have become desensitized to security problems because they are so common. I come across people who know they have viruses on their PCs, but continue to run them, because they have stuff to do and would rather put up with a “slow” machine than try to fix it. Other machines are compromised without their owners’ knowledge. Those PCs are pumping out viruses and spam for the rest of us, or form part of the vast botnet army that is now an everyday part of the criminal tool chest.

I actually write less about security than I used to, not because the issue is any less important, but because it becomes boringly repetitive. Desensitized.

The frustration is that there are things we could do. Email, as I noted above, could be made much better, but it requires collective willpower that we seem to lack. A while back I started authenticating my emails, but ran into problems because some email clients did not like them. Recipients saw the signature as an unexplained attachment and suspected a virus, or found they could not reply to the message. I had to remember to remove the authentication for certain recipients, and it became too difficult to manage, so I abandoned the experiment. That’s a shame. Authentication in itself does not prevent spam, but it is an essential starting point.
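For the curious, this sort of signing can be scripted as well as configured in a mail client. Here is a minimal S/MIME signing sketch using PHP’s OpenSSL extension; the certificate, key and message file names are all hypothetical:

<?php
// Minimal S/MIME signing sketch using PHP's OpenSSL extension.
// sender.crt, sender.key and the message file names are hypothetical.
$ok = openssl_pkcs7_sign(
    'message.txt',                 // plain message to sign
    'message-signed.txt',          // signed output
    'file://sender.crt',           // signer's certificate
    array('file://sender.key', 'passphrase'),  // private key + passphrase
    array('From' => 'me@example.com', 'To' => 'you@example.com'),
    PKCS7_DETACHED                 // detached signature; body stays readable
);
echo $ok ? "Signed\n" : "Signing failed\n";
?>

The detached signature this produces travels as an attachment, typically named smime.p7s – exactly the thing my correspondents mistook for a virus.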

Do we have to live with this mess for ever? If not, how long will it take until we begin to see improvement?


Microsoft: .doc and .xls are dangerous

A common phenomenon in the tech world is when vendors trash their own past products in an effort to convince you of the value of shiny new ones.

Here is an example. Microsoft’s security advisory 937696 and the related KB article 935865 tell us of the dangers posed by Office binary formats including .doc, .xls and .ppt:

MOICE uses the 2007 Microsoft Office system converters to convert the Office binary format files into the Office Open XML format. This process helps remove the potential threat that may exist if the document is opened in the binary format. Additionally, MOICE converts incoming files in an isolated environment. This helps protect the computer from a potential threat.

What’s MOICE? It’s the Microsoft Office Isolated Conversion Environment, proving that even after Silverlight, the department of verbose and meaningless names is alive and well in Redmond. It is an add-on to Office 2003 or 2007 that automatically converts Office binary formats to Office Open XML (OOXML). Further, administrators can now choose to implement File Block, which prevents users from opening specified binary document types without first converting them.

The presumption here is that OOXML documents are safer. That is probably true, especially since documents containing macros now require a different extension (.docm, .xlsm) to flag the fact that they contain macros.

A side effect is that MOICE spreads the adoption of OOXML. Like Joe Wilcox, I can’t help wondering whether it was this, rather than security, which has prompted this release.

OOXML has real advantages, yet it can also be tiresome. Users install Office 2007, email a Word document to someone, then get a perplexed reply saying that the document won’t open. I’ve been known to show people how to set the default back to the old binary formats to avoid this problem – I would love to know how many Office 2007 rollouts do this as a matter of course.

After all, it is late in the day for Microsoft to consider blocking these formats. The Sophos web site has a Top Ten Viruses page with a neat feature: you can see stats for the last 10 years. These confirm my hunch. Back in 1999, there were nine Office macro viruses in the top ten (Sophos prefixes these with WM or XM). Today? None. Note too that the top ten, according to Sophos, account for 94.6% of all viruses in the wild.

The reason is that in the intervening years Microsoft has built reasonably good macro protection into Office. A factor here is that emailed documents rarely need to contain macros, so if you double-click an attachment and it wants to run a macro, that’s a big clue that something is awry.

That said, there is clearly still some risk from macro viruses, or from documents with crafted corruptions that can infect a PC. Recently, OpenOffice.org has also been shown to be vulnerable. So MOICE has value, but is it enough to compensate for the cost in inconvenience? After all, while Office binary formats are almost universally readable, that is not the case for OOXML. If you run Windows, and have Office 2000 or higher, and broadband Internet, and sufficient rights to install the converter, then the process is reasonably smooth; but that is a long way from universal.

MOICE strikes me as low priority in security terms, but nevertheless an intriguing development in the battle for XML office format adoption.

 

Why you should keep UAC enabled on Vista

Ian Griffiths has a nice post on why you should not disable UAC, even if you are a developer.

I’ve followed that advice and it works for me, though there are still one or two apps where I have to Run As Administrator.

That does not include Visual Studio 2005. Despite the warning it issues, I find it works for me without elevation (I realise there are scenarios where this won’t be the case).

The intriguing thing is that (as Griffiths notes) even Microsoft is not solidly behind UAC. I’ve commented on this before.

Since there is still a myth that running Vista with UAC enabled results in an avalanche of intrusive dialogs, it’s worth popping up from time to time to say that it is not so.

Windows security affects all of us, even if you do not like Windows or use it. UAC (and IE7’s protected mode, which depends on it) is a step forward and worth supporting.

 


Don’t just blame users for woeful security online

The BBC this morning reports that many net users are not safety aware. The piece is based on research by Get Safe Online, a UK Government-sponsored initiative to promote internet safety. More details of the survey are here. I’m intrigued by a couple of these figures. Apparently 45% of internet users only connect to “secure” wi-fi networks outside the home. That’s surprising since most public wi-fi is not secured; but why would you trust the security of someone else’s network anyway? I’m in the 55%.

There are also some figures on passwords, showing that nearly 25% of users have a single password they use everywhere. Even more surprising, another 25% claim to use a different password for every site. It’s a mess either way. We will never get even a moderately secure internet without better authentication.

The key question, as this Get Safe Online press release observes, is about who should take responsibility for online safety – meaning everything from viruses and fraud to predatory chatroom impostors. Here are some popular candidates:

  • The ISPs
  • The banks (presumably for financial safety)
  • The individual
  • The security companies – Symantec, Sophos etc.
  • The operating system vendor – Apple, Microsoft etc.
  • The Government – let’s regulate

I guess the answer is “all of the above”, though the role of security software is vastly exaggerated, especially that of anti-virus software which in reality does not work well – see Ed Bott’s recent piece The Sorry State of Security Software.

User education is welcome though anyone with technical knowledge will likely find the homely advice doled out by a site like Get Safe Online frustratingly inadequate. Online safety is difficult for all sorts of reasons. One problem is that users get confronted with decisions they are not equipped to make. Another issue is that even conscientious and informed users are forced to compromise in order to get their work done, like the occasion last week when Thawte advised me to turn off my firewall in order to buy its product.

The Internet will never be safe, but it can be made better. Strong authentication, no more passwords. Digitally signed emails. Networks of trust. Secure operating systems. It’s no good just blaming users, many of them are doing their best.

 

IE7 phishing site confusion

Preparing for a conference, I saved the agenda from a web page to a file, so that I could read it on the train. I used the IE “web archive” feature, which saves a page to a single file with the extension .mht. When I re-opened the page later, I was surprised to see the following warning:

Local file identified as phishing site

Something is wrong here, I reckon. Apparently my own hard drive is a phishing site.

I suppose IE7 has a point. After all, I’ve copied the page from one place to another, and although it looks like a page on the web, it isn’t. Then again, it isn’t criminal either. I’m using a feature of IE exactly as designed.

Amusing; but the difficulty I have with these kinds of false alarms is that they undermine the real ones. How is the non-technical user to know which warnings they can safely ignore? The danger is that they end up taking none of them seriously.

 


WordPress hacked: where do we go from here?

WordPress founder Matt Mullenweg reports the bad news:

Long story short: If you downloaded WordPress 2.1.1 within the past 3-4 days, your files may include a security exploit that was added by a cracker, and you should upgrade all of your files to 2.1.2 immediately.

This is truly painful and highlights the inherent risk of frequent patching. I haven’t seen any estimates of how many websites installed the hacked code, but I’d guess it is in the thousands; the number of WordPress blogs out there is in the hundreds of thousands. Ironically, it is the most conscientiously administered installations that have been at risk. Personally I’d glanced at the 2.1.1 release when it was announced, noted that it did not mention any critical security fixes, and decided to postpone the update for a few days. I’m glad I did.

Keeping up-to-date with the latest patches is risky because the patches themselves may be broken or, as in this case, tampered with. On the other hand, not patching means exposure to known security flaws. There’s no safe way here, other than perhaps multi-layered security. All the main operating systems – Windows, OS X, Linux distributions – have automatic or semi-automatic patching systems in place. Applications do this as well. We have to trust in the security of the source servers and the process by which they are updated.

Having said that, there are a few things which can be done to reduce the risk. One is code signing. Have a look at the Apache download site – note the PGP and MD5 links to the right of each download. These let you verify that the download has not been tampered with. Why doesn’t WordPress sign its downloads?*
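Checksum verification is at least trivially scriptable. A minimal sketch in PHP, with hypothetical file names:

<?php
// Verify a download against its published MD5 checksum.
// Both file names are hypothetical; .md5 files conventionally
// contain "checksum  filename" on one line.
$file     = 'wordpress-2.1.2.tar.gz';
$expected = strtok(trim(file_get_contents($file . '.md5')), " \t");

if (strtolower(md5_file($file)) === strtolower($expected)) {
    echo "Checksum matches\n";
} else {
    echo "WARNING: mismatch - file is corrupt or has been tampered with\n";
}
?>

As the update below explains, though, a checksum hosted alongside the download mainly guards against corruption; a PGP signature checked against the developers’ public key is the stronger test.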

Next question, of course, is how WordPress allowed its site to be hacked. Was it through one of the other known insecurities in the WordPress code, perhaps?

I’m also reminded of recent comments by Rasmus Lerdorf on how PHP does not spoonfeed security. There is a ton of insecure PHP code around; it’s an obvious target for hackers in search of web servers to host their content or send out spam.

*Update: See Mullenweg’s comment to this post. I looked at the download page which does not show the MD5 checksums. If you look at the release archive you can see MD5 links. Apologies. Having said that, why couldn’t the cracker just update the MD5 checksum as well? This is mainly a check for corrupt rather than hacked files. The PGP key used by Apache is better in that it links to the public key of the Apache developers. See here for an explanation.

Perhaps this is a good moment to add that the reaction of the WordPress folk has been impeccable in my view. They’ve acknowledged the problem, fixed it promptly, and are taking steps to prevent a repeat. Nobody should lose confidence in WordPress because of this.

 


How secure is OpenID?

Everybody is talking about OpenID. Big players are adopting it. But should you trust it for things that matter – financial transactions, for example?

Here’s an important post from Microsoft’s identity architect Kim Cameron:

So let’s think about this.  Where is the root of trust?  In conventional systems like PKI or SAML or Kerberos, the root of trust is the identity provider.  I trust the identity provider to say something about the subject.  How do I know I’m hearing from the legitimate identity provider?  I have some kind of cryptographic key.  The relevant key distribution has a cost – such as that involved in obtaining or issuing public key certificates, or registering with a Key Distribution Center.

But in OpenID, the root of trust is the OpenID URL itself.  What you see is what you get.  In the example above, I trust Francis’ web page since it represents his thinking and is under his control.  His web page delegates to his OpenID identity provider (OP) through the link mechanism in (5).  Because of that, I trust his identity provider to speak on behalf of his web page.  How do I know I am looking at his web page or talking to his identity provider?  By calling them up on DNS.

I’m delving into the details here because I think this is what gives OpenID its legs.  It is as strong, and as weak, as DNS.  In other words, it is great for transactions that won’t attract criminal attack, and terrible for those that will.

And here’s Cameron’s conclusion:

OpenID cannot replace crypto-based approaches in which there are trusted authorities rather than trusted web pages.  But it can add a whole new dimension, and bring the “long tail” of web sites into the identity fabric.

Note that Cameron is not opposed to OpenID. Apart from anything else, he recognizes that this may well be the beginning of an identity revolution – part of a process at the end of which we get a safer, less spam-laden, less criminal-infested internet.

At the same time, he’s right. The whole OpenID structure hinges on the URL routing to the correct machine on the Internet. In other words, DNS. Now do some research on DNS poisoning. Scary.

Now, it strikes me that you can largely fix this by requiring SSL connections. In other words, have the OpenID URL be an https:// URL, and have the relying party (the website where you want to log in) check for a valid SSL certificate. Note, though, that SSL must be used at every stage. OpenID lets you use your own URL as the identifier, but delegate to another OpenID identity provider; both URLs must use SSL to maintain integrity.
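To make that concrete, here is a rough sketch of the relying party’s side of the check in PHP, using cURL with certificate verification switched on. The identifier URL is hypothetical, and a real implementation would live inside a proper OpenID library:

<?php
// Sketch: relying party refuses non-SSL identifiers and verifies
// the certificate when fetching the page. URL is hypothetical.
$identifier = 'https://example.com/~francis/';

if (strpos($identifier, 'https://') !== 0) {
    die('Refusing non-SSL OpenID identifier');
}

$ch = curl_init($identifier);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);  // validate certificate chain
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);     // host name must match cert
$page = curl_exec($ch);

if ($page === false) {
    die('SSL verification failed: ' . curl_error($ch));
}
// ... proceed with OpenID discovery on $page, applying the same
// rule to any delegated identity provider URL ...
?>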

Another idea is to use OpenID only for non-critical logins, however you define those.

Note that this issue is different from the phishing risk, for which CardSpace strikes me as a good solution.

 

Rasmus Lerdorf on security, hormones and PHP

PHP inventor Rasmus Lerdorf spoke yesterday at the Future of Web Apps conference in London. It was the highlight of the conference: at once funny, insightful, techie and thought-provoking.

“I had no intention of writing a language”, he told us. “I hate programming with a passion. It’s boring. It’s tedious. It’s hard. I love solving problems. You endure the pain to get to the end destination.”

In case there are any non-geeks reading, I should explain that PHP is the most popular server-side programming language on the Web. This blog is driven by a PHP application called WordPress. PHP is also free, and one of the big successes of open source.

Lerdorf related the history of PHP, which originally stood for “Personal Home Page tools”. They were little scripts he wrote for his own home page, “my own little hack to reuse the C code I had written”. He then shared his work with friends. He showed us some code samples. Here is PHP in 1994:

<!--getenv HTTP_USER_AGENT--> 
<!--ifsubstr $exec_result Mozilla--> 
Hey, you are using Netscape!<p> 
<!--endif-->

By 1995 PHP looked more like what we would recognize as PHP today. By 2007 it has sprouted all sorts of modern object-oriented features, and Lerdorf noted that while he understood the importance of these, the language has somewhat moved away from its original intent as a quick and dirty tool.

Lerdorf made PHP a completely open source project in 1997. He was fed up with maintaining scripts for other people and realised that he could not do it alone. “No one person can possibly learn 20 different database APIs”. So he contacted all the people who had made suggestions to him, gave them access to PHP’s source on CVS (a source code management system), and relinquished control.

This was the lead-in to some reflections on why people bother to contribute to open source software. Lerdorf gives four reasons:

  1. Self-interest
  2. Self-expression
  3. Hormones
  4. Improve the world

The last of these is, in his view, the least important. But why hormones? His theory is that open source is one way geeks get human interaction, despite preferring keyboards and screens to going out and meeting people. It follows that factors like recognition (within their circle) and a sense of ownership are critical to successful open source projects, or even to any form of user-generated content. “You have to think about how people feel about themselves”, says Lerdorf. In fact, his comments chimed nicely with what Kevin Rose said about Digg.

Performance and security

Next, Lerdorf addressed the two major hurdles facing web applications. He is a strong believer in performance as a feature. “Unless you can make it work, there’s no point.” He dived into a couple of profiling tools to make his point, showing how to identify bottlenecks in PHP applications.
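I did not note every tool he used, but the principle is easy to demonstrate. Here is a crude sketch of mine that brackets a suspect section with microtime(); a real profiler such as Xdebug, which can emit cachegrind files, gives per-function detail:

<?php
// Crude bottleneck hunting: bracket a suspect section with microtime().
$start = microtime(true);

// Suspect section: a deliberately wasteful stand-in for, say,
// a template render or a query loop.
$s = '';
for ($i = 0; $i < 100000; $i++) {
    $s .= md5($i);
}

printf("Suspect section took %.4f seconds\n", microtime(true) - $start);
?>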

Security on the web is awful – I fully take the blame

Then security. “Security on the web today is awful. I know a lot of people blame PHP for that … I fully take the blame for some of it, but not all of it.”

What could he have done? Well, PHP does not spoonfeed security; Microsoft’s ASP.NET is actually better in that respect (my comment, not his). It could be more secure by design. On the other hand, as Lerdorf notes, “there was no such thing as cross-site scripting in 1995”. He gave us a great explanation of how cross-site scripting works; it is not the easiest thing to explain. PHP 5.2 has a new filter function for making user-input safe.
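To illustrate (my sketch, not Lerdorf’s): here is the classic mistake, plus two mitigations using output escaping and the PHP 5.2 filter extension:

<?php
// Classic reflected XSS: echoing raw user input straight into the page.
// echo 'Hello ' . $_GET['name'];   // vulnerable: input may contain <script>

// Mitigation 1: escape on output so injected markup is rendered inert.
$name = isset($_GET['name']) ? $_GET['name'] : '';
echo 'Hello ' . htmlspecialchars($name, ENT_QUOTES, 'UTF-8');

// Mitigation 2: validate input on the way in (PHP 5.2 filter extension).
$email = filter_input(INPUT_GET, 'email', FILTER_VALIDATE_EMAIL);
if (!$email) {
    die('Invalid email address');
}
?>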

How to be safe on the web? “You can never click on a link. Sorry. Unless you understand everything in that link, and some of them are huge. You can never be sure that it is safe….most people are really easy to trick.”

Finally, Lerdorf gave us a few general comments on future directions, the possibilities opened up by geocoding in Flickr, for example. He says don’t make new portals, “We have enough portals out there.” Use the APIs published by major sites, and finally – make it fast.


Digg will support OpenID

I’m at the Carson Future of Web Apps conference in London, where Kevin Rose is talking about Digg. My favourite comment:

You have to take it for what it is, it’s not a perfect system

Rose threw out a few comments about how he sees Digg evolving. One which interested me: it will support OpenID, which describes itself as:

an open, decentralized, free framework for user-centric digital identity.

I’m not sure that OpenID is going to solve many problems in itself – it is not necessarily a stronger form of authentication – but here as least is some progress in improving identity management.

AOL is also supporting OpenID, making all its accounts automatically OpenID accounts. I pointed out to Edwin Aoki, an AOL Chief Architect who is also here, that using a single identity for multiple sites could make the problem worse, since when it is compromised multiple sites are then at risk. He said that happens anyway, because users already use the same email address and password on multiple sites. A fair point.

I’m actually hoping to see Microsoft’s CardSpace getting wide adoption in tandem with OpenID, as it appears to be more resistant to phishing attacks.

Still, the story here is that OpenID is gaining momentum.

How secure is Windows Vista?

Tech journalists have a tough job. They are meant to take the vast complexity of things like computers and operating systems and translate them into terms that ordinary people can understand.

Of course there is never a one-to-one mapping between the complex and the simple. The simplified explanation is a compromise.

So let’s look at the question: how secure is Windows Vista? Unfortunately the question is not amenable to a simple answer. Perhaps the best you can do is to try and explain the issues, the ways in which it is more secure than earlier versions of Windows, the ways in which it remains insecure.

Now read this piece on weaknesses in Vista’s UAC (User Account Control). Looks bad, right? It tells of a researcher who “found out — from Microsoft officials — that the default no-admin setting isn’t even a security mechanism anymore.”

This is a misunderstanding of a typically balanced and well-reasoned piece by Microsoft’s Mark Russinovich on UAC in Vista. At least the link is there in the ZDNet article, so you can read it for yourself.

Apparently, “In an e-mail interview, the Polish malware researcher said she was “pissed off” by what she perceived as Russinovich’s flippant attitude to the potential risk.”

Frankly, I defy anyone to read and understand Russinovich’s article and call it “flippant”. He explains how the mechanism works, he explains why it works as it does, acknowledges areas of compromise, and shows how to achieve higher security if you want it:

Without the convenience of elevations most of us would continue to run the way we have on previous versions of Windows: with administrative rights all the time. Protected Mode IE and PsExec’s -l option simply take advantage of ILs to create a sandbox around malware that gets past other security defenses. The elevation and Protected Mode IE sandboxes might have potential avenues of attack, but they’re better than no sandbox at all. If you value security over any convenience you can, of course, leverage the security boundary of separate user accounts by running as standard user all the time and switching to dedicated accounts for unsafe browsing and administrative activities.

He’s right. And personally I think ZDNet is giving too much weight to the strident researcher who calls Vista security “a big joke”, while doing too little to examine the real issues which Russinovich explains.

Of course that doesn’t prevent Slashdot and others picking up the story and presuming, because that’s what they want to believe, that Vista security is shot to bits.

It’s not. It is a real advance on XP, not least because of the point Russinovich highlights:

Why did Windows Vista go to the trouble of introducing elevations and ILs? To get us to a world where everyone runs as standard user by default and all software is written with that assumption.

Update

This story gets more curious the more you investigate. The gist of this researcher’s original complaint was that Vista forced her to run setup and installer applications with local admin rights:

That means that if you downloaded some freeware Tetris game, you will have to run its installer as administrator, giving it not only full access to all your file system and registry, but also allowing e.g. to load kernel drivers!

It’s a fair point, though problematic on examination. Installing applications is an administrative task. Still, it’s correct that many installers do not need full admin rights, so the system could be more granular. Fortunately Vista covers this. You can disable the automatic elevation of setup applications in local security policy. In fact, enterprise rollouts have this disabled by default. The researcher is actually aware of this, but says:

Even though it’s possible to disable heuristics-based installer detection via local policy settings, that doesn’t seem to work for those installer executables which have embedded manifest saying that they should be run as administrator. I see the above limitation as a very severe hole in the design of UAC.

Now she’s lost me. The complaint has shifted – there is no problem running setup applications with less than full admin rights, but if the developer specifies with a manifest that full admin rights are required, then Vista automatically prompts for elevation. This of course is working as designed. If you downloaded a “freeware Tetris game” and discovered a manifest insisting on full admin rights, you would likely be wary in any case.
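For reference, the manifest she refers to is a short XML document embedded in the executable; the level attribute is what triggers the elevation prompt. A representative example (not from any particular installer) looks roughly like this:

<!-- Representative embedded application manifest (hypothetical example).
     level="requireAdministrator" forces the UAC elevation prompt
     regardless of installer-detection policy. -->
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
    <security>
      <requestedPrivileges>
        <requestedExecutionLevel level="requireAdministrator" uiAccess="false"/>
      </requestedPrivileges>
    </security>
  </trustInfo>
</assembly>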

So where is the “very severe hole in the design of UAC”? There is a “severe hole” here, but it is not in the design of UAC. The core problem is that users may try to install malware. They are browsing the web, and perhaps come across a flashing advertisement that says their PC has spyware, but this utility will fix it. They download it. They pass a dialog warning that the file is from the internet and might not be safe. They pass a dialog requesting elevation. At this point, only anti-virus software or something like Windows Defender might save them. How do you fix this, without taking away the user’s right to do what they want with the computer they own?

That said, there is a weakness in UAC in the potential for non-elevated processes to interfere with elevated processes. Mark Russinovich covers this well in his post referenced above. The bottom line is that it is still best not to run with full admin rights, even with UAC enabled. The long-term purpose of UAC is to get Windows over the hump of legacy applications, to a point where local admin rights for day-to-day use are unnecessary.
