More proof that one basket for all your eggs is bad: codespaces.com is gone

Codespaces.com ceased to exist on June 17th, 2014, because they ignored the standard advice about not putting all your eggs in one basket.  There are a number of things they could have done to prevent this, but they apparently did none of them.

Before I continue, let me say this.  I know it’s been more than a year since I blogged.  I don’t know what to say other than that I’ve been very busy building my new company.  Truth in IT now has six full-time employees, several part-time employees, and several more contractors.  We do a little bit of everything, including backup seminars, storage seminars, webinars, viral video production, lead nurturing programs, and some other things we’re working on in stealth at the moment.  Hopefully I’ll get back to blogging more often.  OK.  Back to the business at hand.

Here’s the short story on codespaces.com.  Their websites, storage, and backups were all stored in the Amazon.com egg basket.  Then on June 17th, they were subjected to a DDoS attack by someone trying to extort money from them.  He gained access to their Amazon control panel, and when they took steps to try to fix the problem, he reacted by wiping out their entire company.  According to their site, he “removed all EBS snapshots, S3 buckets, all AMI’s, some EBS instances and several machine instances. In summary, most of our data, backups, machine configurations and offsite backups were either partially or completely deleted.”  I hate being a Monday morning quarterback, but this is what happens when you put all your eggs in one basket. 

I’m a fan of cloud services. (Truth in IT is run entirely in the cloud.)  I’m a fan of disk backups. (Truth in IT uses both a cloud-based sync and share service and a cloud-based backup service.)  But if it’s on disk and is accessible electronically, it is at risk.  Having your services, storage, and backups all accessible via the same system is just asking for it.  

I do not see this as a cloud failure.  I see this as a process and design failure.  They would have been just as likely to have this happen to them if they had done this in their data center. That is, if they used a single system to store their server images, applications and data, snapshots of that data, and extra copies of those snapshots.  Yes, using Amazon made it easier to do this by offering all of these services in one place. But the fact that it was in the cloud was not the issue — the fact that they stored everything in one place was the issue.

I love snapshot-based backups, which is what codespaces.com used. It should go without saying, though, that snapshots must be replicated to be any good in times like this.  However, as I have repeatedly told my friends at companies that push this model of data protection, even a replicated snapshot can be deleted by a malicious admin or a rolling bug in the code.  So as long as your backups are accessible electronically, I still like having some other kind of backup of those backups.  

Use a third-party replication/CDP system to copy them to a different vendor’s array that has a different password and control panel.  Back them up to tape once in a while.  Had they copied their backups to a system that was not immediately controllable via the Amazon control panel, their backups would have been safer.  (The hacker would have had to hack both systems.)  However, since all server data, application data, and backup data were accessible via a single Amazon.com console, the hacker was able to reach their data and their backups through the same console.

I love cloud-based computing services.  There’s nothing wrong with codespaces.com running their company on them.  But also storing their backups behind the same Amazon console as their servers?  Not so much.

I love cloud-based backups.  They are certainly the best way to protect cloud-based servers.  I’m also fine with such backups being stored on S3.  But if your S3 backups are in the same account as your AWS instances, you’re vulnerable to this kind of attack.
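One way to enforce that separation is a pre-flight check in whatever tooling runs your backup jobs: refuse to write backups into the same account that holds the production resources.  Here is a minimal sketch in Python; the function name and the account IDs are hypothetical, purely for illustration:

```python
def check_backup_separation(prod_account_id: str, backup_account_id: str) -> None:
    """Hypothetical pre-flight guard: abort a backup job before it writes
    anything if the backups would land in the same cloud account as the
    production resources they protect.  If both live behind one console,
    a single compromised credential can delete the servers and the
    backups together."""
    if prod_account_id == backup_account_id:
        raise ValueError(
            "Backups must live in a separate account with separate "
            "credentials; otherwise one attacker can wipe data and "
            "backups alike."
        )

# Illustrative usage with made-up account IDs:
check_backup_separation("111111111111", "222222222222")  # passes quietly
```

The point of the separate account is that the attacker now has to compromise two sets of credentials instead of one.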

I also want to say that this is one of the few advantages tape still has — the ability to create an “air gap.”  As a removable medium, tape can put physical distance between the data you’re protecting and the protection of that data.  Store those backups at an offsite storage company and make retrieval of the tapes difficult.  For example, require two-person authentication when picking up backup tapes outside of normal operations.

For those of you backing up things in a more traditional manner using servers in a non-cloud datacenter, this still applies to you.  The admin/root password to your production servers should not be the same password as your development servers — and it should not be the same one as your backup servers.  Your backup person should not have privileged access to your production servers (except via the backup software), and administrators of your production servers should not have privileged access to your backup system.  That way a single person cannot damage both your production systems and the backups of those systems.
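The “no single person can damage both” rule is easy to audit mechanically: compare the list of privileged users on your production systems against the list on your backup systems and flag anyone who appears in both.  A minimal sketch, with made-up account names for illustration:

```python
def overlapping_admins(production_admins, backup_admins):
    """Return accounts with privileged access to both production and
    backup systems.  Each name returned is a single point of failure:
    one compromised (or malicious) credential could destroy the data
    and its backups at the same time."""
    return sorted(set(production_admins) & set(backup_admins))

# Illustrative usage: 'bob' holds both roles and should lose one of them.
violations = overlapping_admins({"alice", "bob"}, {"bob", "carol"})
```

Running a check like this periodically catches the overlap before an attacker does.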

I would add that many backup software packages have the ability to run scripts before and after backups run, and these scripts usually run as a privileged user.  If a backup user can create such a script and then run it, he/she could issue an arbitrary command, such as deleting all data — and that script would run as a privileged user.  Look into that and lock that down as much as you can.  Otherwise, the backup system could be hacked and do just what this person did.
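One concrete way to lock that down is to verify, before the backup product runs a pre/post script, that the script file could not have been modified by an unprivileged user.  The sketch below shows the idea; the policy (root-owned, not group- or world-writable) is illustrative, and real backup products expose this differently:

```python
import os
import stat


def script_is_locked_down(path, allowed_uid=0):
    """Return True only if the pre/post-backup script at `path` is owned
    by the trusted user (root by default) and is not writable by group
    or other.  If a backup user can edit the script, they can run
    arbitrary commands as the privileged user the backup software uses."""
    st = os.stat(path)
    if st.st_uid != allowed_uid:
        return False  # someone other than the trusted owner controls it
    if st.st_mode & (stat.S_IWGRP | stat.S_IWOTH):
        return False  # group- or world-writable: anyone could edit it
    return True
```

A backup wrapper would call this and refuse to execute any script that fails the check.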

Don’t store all your eggs in one basket.  It’s always been a bad idea. 

 


Written by W. Curtis Preston (@wcpreston), four-time O'Reilly author, and host of The Backup Wrap-up podcast. I am now the Technology Evangelist at Sullivan Strickler, which helps companies manage their legacy data.