Jeff Atwood has lost the content from two popular blogs that he runs:
100% data loss at our hosting provider, CrystalTech.
He gives a little more detail here. He is now trying to recover data from search engine caches such as Google’s – a painful business, apparently; Google banned his IP.
Backup is a complex problem. I’d been meaning to post on the subject following another recent incident. Here’s a quote from an email a friend received from his ISP after asking whether the SQL Server database was backed up:
Needless to say, we do back the databases up every 12 hours to a remote location automatically.
Just 11 days later “a crucial disk” failed on that SQL Server, after which the ISP discovered that its recent backups were also “corrupt” and data was lost. In the end a data recovery specialist was enlisted, and most, but not all, of the data was recovered.
No doubt the post-mortem will reveal multiple issues; but the incident shows that knowing backups are being made is not enough. You also have to do test restores, because a backup may not be working as well as you think.
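To make the point concrete, here is a minimal sketch of what a test restore looks like in principle: take the backup archive, restore it somewhere disposable, and compare checksums against the live data. All names here are illustrative, not anyone’s actual backup tooling.

```python
# Hypothetical sketch: a backup is only proven good by restoring it
# and comparing the result against the live data.
import hashlib
import tarfile
import tempfile
from pathlib import Path


def checksums(root: Path) -> dict:
    """Map each file's relative path to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }


def backup(src: Path, archive: Path) -> None:
    """Write src into a gzipped tar archive (the 'backup')."""
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(src, arcname=".")


def verify_restore(archive: Path, src: Path) -> bool:
    """Restore the archive to a scratch directory and compare checksums.

    This is the step the ISP in the story evidently skipped: the backup
    job ran on schedule, but nobody proved the backups could be restored.
    """
    with tempfile.TemporaryDirectory() as scratch:
        with tarfile.open(archive, "r:gz") as tar:
            tar.extractall(scratch)
        return checksums(Path(scratch)) == checksums(src)
```

A real database needs a proper restore (e.g. loading the dump into a second server and running queries against it), but the principle is the same: the restore, not the backup job’s exit code, is the evidence.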
In addition, as Atwood is now tweeting:
don’t trust the hosting provider, make your OWN offsite backups, too!
Good advice for those of us using commodity ISPs. But it also gives me pause for thought following the CloudForce event I attended earlier this week. A specialist like Salesforce.com has more resources to put into data resilience than any of its users. So if Salesforce.com (or Amazon, or Google, or Microsoft) is your ISP, is it then OK to leave backup to them?
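Whatever the answer, Atwood’s advice costs little to follow. As a sketch of the idea, and nothing more: pull a dated copy of your data onto storage the hosting provider does not control, and prune old copies so the job can run unattended. The paths and retention policy here are purely illustrative.

```python
# Hypothetical sketch of "make your OWN offsite backups": a dated
# archive copied to a location independent of the hosting provider.
import shutil
import time
from pathlib import Path


def offsite_backup(live_data: Path, offsite: Path, keep: int = 7) -> Path:
    """Archive live_data into offsite, keeping only the newest `keep` copies.

    `offsite` should be storage you control -- a disk at home, a second
    provider -- so the hosting provider cannot take it down with your site.
    """
    offsite.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = shutil.make_archive(
        str(offsite / f"site-{stamp}"), "gztar", root_dir=live_data
    )
    # Prune everything but the most recent `keep` archives.
    archives = sorted(offsite.glob("site-*.tar.gz"))
    for old in archives[:-keep]:
        old.unlink()
    return Path(archive)
```

Combined with the test-restore check above a script like this runs from cron in a few lines; the hard part, as these incidents show, is remembering that it is your problem at all.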