Why tape drives are bad for backups

Specifically, this article is about why modern tape drives are a really bad choice to store the initial copy of your backups. It’s been this way for a long time, and I’ve been saying so for at least 10 years, in case anyone thinks I’ve been swayed by my current employer.  Tape is good at some things, but receiving the first copy of your backups isn’t one of them.  There are also reasons why you don’t want to use them for your offsite copy, and I’ll look at those, too.

Tape drives are too fast for incremental backups

  • Tape drives are too fast
    • In case you didn’t know it, modern tape drives essentially have two speeds: stop and very fast. Yes, there are variable-speed tape drives, but even the slowest speed they run at is still very fast.  For example, the slowest an LTO-7 drive can go with LTO-7 media is 79.99 MB/s native.  Add compression, and you’re at a minimum speed of 100-200 MB/s!
  • Incremental backups are too slow
    • Most backups are incremental backups, and incremental backups are way too slow. A file-level incremental backup delivers a highly variable throughput, usually measured in single digits of megabytes per second, which is nowhere near 100-200 MB/s.
  • The speed mismatch is the problem
    • When incoming backups are really slow, and the tape drives want to go very fast, the drive has no choice but to stop, rewind, and start up again. It does this over and over, dragging the tape back and forth across the read/write head in multiple passes. This wears out both the tape and the drive, and it is the number one reason behind tape drive failures in most companies.  Tape drives are simply not the right tool for incoming backups.  Disk drives are much better suited to the task.
  • What about multiplexing?
    • Multiplexing interleaves multiple simultaneous backups into a single stream in order to create one fast enough to keep your tape drive happy. It’s better than nothing, but remember that it helps your backups and hurts your restores.  If you interleave ten backups together during backup, you have to read all ten streams during a restore, throwing away nine of them just to get the one stream you want. That makes your restore up to ten times longer, as the sketch after this list illustrates.  If you don’t care about restore speed, then it’s great!
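
To make that restore penalty concrete, here is a minimal Python sketch. It is purely illustrative, not any vendor’s actual tape format: it interleaves ten simulated backup streams onto one “tape” and counts how many blocks must be read back to restore just one of them.

```python
# Purely illustrative sketch of multiplexed tape backup and restore.
# Stream names and block counts are made up; real tape formats are more complex.

def multiplex(streams):
    """Round-robin interleave equal-length backup streams onto one 'tape'.
    Each tape block is tagged with the stream it came from."""
    tape = []
    for blocks in zip(*streams):          # one block from each stream per pass
        for stream_id, block in enumerate(blocks):
            tape.append((stream_id, block))
    return tape

def restore(tape, wanted_stream):
    """Tape is sequential, so restoring one stream still reads every block."""
    blocks_read = 0
    restored = []
    for stream_id, block in tape:
        blocks_read += 1                  # no skipping on a sequential medium
        if stream_id == wanted_stream:
            restored.append(block)
    return restored, blocks_read

# Ten clients, 1,000 blocks each, interleaved onto one tape.
streams = [[f"client{c}-blk{b}" for b in range(1000)] for c in range(10)]
tape = multiplex(streams)
restored, blocks_read = restore(tape, wanted_stream=3)
print(f"Restored {len(restored)} blocks but read {blocks_read} from tape")
# -> Restored 1000 blocks but read 10000 from tape: a 10x read penalty.
```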

What about offsite copies?

There have been many incidents involving tapes lost or exposed by offsite vaulting companies like Iron Mountain.  Even Iron Mountain’s CEO once admitted that it happens regularly enough that all tapes should be encrypted. I agree with this recommendation: any transported tape ought to be encrypted.

Tape is still the cheapest way to get data offsite if you are using a traditional backup and recovery system. With such a system, the main alternative is to buy an expensive deduplication appliance to make the daily backup small enough to replicate offsite. These appliances can be effective, but they are very costly, and there are a lot of limits to their deduplication abilities, many of which drive their purchase and operating costs even higher.  This is why most people are still using tape to get backups offsite.

If you have your nightly backups stored on disk, it should be possible to get those backups copied over to tape.  That is assuming that your disk target is able to supply a stream fast enough to keep your tape drives happy, and there aren’t any other bottlenecks in the way.  Unfortunately, one or more of those things is often not the case, and your offsite tape copy process becomes as mismatched as your initial backup process.

In other words, tape is often the cheapest way to get backups offsite, but it is also the riskiest, as tapes are often lost or exposed in transit. It can also be difficult to configure your backup system to create your offsite tape copy efficiently.

I thought you liked tape?

I do like tape.  In fact, I’m probably one of the biggest proponents of tape.  It has advantages in some areas.  You cannot beat the bandwidth of tape, for example.  There is no faster way to get petabytes of data from one side of the world to another.  Tape is also much better at holding onto data for multiple decades, with a much lower chance of bit rot.  But none of these advantages come into play when talking about day-to-day operational backups.

I know some of you might think that I’m saying this just because I now work at a cloud-based backup company. I will remind you that I’ve been saying these exact words above at my backup seminars for almost ten years.  Tape became a bad place to store your backups the day it started getting faster than the network connection backups were traveling over — and that was a long time ago.

What do you think?  Am I being too hard on tape?

----- Signature and Disclaimer -----

For those of you unfamiliar with my work, I've specialized in backup & recovery for 25 years. I've written the O'Reilly books on backup and have worked with a number of native and commercial tools. I am now Chief Technical Architect at Druva, the leading provider of cloud-based data protection and data management tools for endpoints, infrastructure, and cloud applications. These posts reflect my own opinion and are not necessarily the opinion of my employer.

Is AWS Ready for Production Workloads?

Yes, I know they’re already there.  The question is whether or not Amazon’s infrastructure is ready for them.  And by “ready for them,” I mean “ready for them to be backed up.”  Of course that’s what I meant.  This is backupcentral.com, right?

But as I prepare to go to Amazon re:Invent after Thanksgiving, I find myself asking this question. Before we look at the protections that are available for AWS data, let’s look at why we need them in the first place.

What are we afraid of?

There is no such thing as the cloud; there is only someone else’s datacenter.  The cloud is not magic; the things that can take out your datacenter can take out the cloud.  Yes, it’s super resilient and time-tested.  I would trust Amazon’s resources over any datacenter I’ve ever been in.  But it’s not magic and it’s not impenetrable, especially to stupidity.

  • Amazon zone/site failure
    • This is probably the thing Amazon customers are most prepared for.  Many AWS resources are automatically replicated across multiple geographically dispersed locations.  Something like 9/11, or even a massive hurricane or flood, should not affect the availability or integrity of data stored in AWS.  Caveat: replication is asynchronous, so you may lose some data.  But you should not lose your dataset.
  • Accidental deletion/corruption of a resource
    • People are, well, people. They do dumb things.  I’ve done dumb things. I can’t tell you the number of times I’ve accidentally deleted something I needed. And, no, I didn’t always have a backup.  Man, it sucks when that happens.  Admins can accidentally delete volumes, VMs, databases, and any other kind of resource you can think of.  In fact, one could argue that virtualization and the cloud make it easier to do more dumb things.  No one ever accidentally deleted a server when that meant pulling it out of the rack.  Backups protect against stupidity.
  • Malicious damage to a resource
    • Hackers suck. And they are out there. WordPress tells me how many people try to hack my server every day.  And they are absolutely targeting companies with malware, ransomware, and directed hacking attacks.  The problem I have with many of the methods people use to protect their Amazon resources is that they do not take this aspect into account, and I think this danger is the most likely one in a cloud datacenter.  EC2 snapshots and RDS snapshots (which are actually copies) are stored in the same account they are backing up.  It takes extra effort and extra cost to move those snapshots over to another account, and no one seems to be thinking about that.  People think about the resiliency and protection that Amazon offers (which it does), but they forget that if a hacker takes control of their account, they are in deep doodoo.  Just ask codespaces.com.  Oh wait, you can’t.  Because a hacker deleted them.
  • Catastrophic failure of Amazon itself
    • This is extremely unlikely to happen, but it could. What if there were some type of rolling bug (or malware) that somehow affected all instances of all AWS accounts?  Even cross-account copies of data would go bye-bye.  Like I said, this is extremely unlikely, but the risk is out there.

How do we protect against these things?

I’m going to write some other blog posts about how people protect their AWS data, but here’s a quick summary.

  • Automated Snapshots
    • As I said before, these aren’t snapshots in the traditional sense of the word.  These are actually backups.   You can use the AWS Ops Automator, for example, to regularly and automatically make a “snapshot” of your EC2 instance.  The first “snapshot” copies the entire EBS volume to S3.  Subsequent “snapshots” are incremental copies of blocks that have changed since the last snapshot.  I’m going to post more on these tools later.  Suffice it to say they’re better than nothing, but they leave Mr. Backup feeling a little queasy.
  • Manual copying of snapshots to another account
    • Amazon provides command-line and PowerShell tools that can be used to copy snapshots to another account; a sketch of this approach appears after this list.  If I were relying on snapshots for data protection, that’s exactly what I would do.  I would have a central account used to hold all my snapshots, and that account would be locked down tighter than any other account. The downside is that this process isn’t automated.  We’re now in scripting and manual scheduling land. For the Unix/Linux folks among us this might be no big deal, but it’s still a step backward for backup technology, to be sure.
  • Home-grown tools
    • You could use rsync or something like it to back up some of your Amazon resources to something outside of Amazon.  Besides relying on scripting and cron, these tools are often very bandwidth-heavy, and you’re likely going to pay hefty egress charges to pull that data down.
  • Third-party tools
    • For some Amazon resources, such as EC2, you could install a third-party backup tool and back up your VMs as if they were real servers.  This would be automated and reportable, and is probably the best option from a data protection perspective. The challenge is that this approach is currently only available for EC2 instances.  We’re starting to see some point tools that back up other things running in AWS, but I haven’t seen anything yet that tackles the whole thing.
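
As promised above, here is a minimal boto3 (Python) sketch of that manual cross-account snapshot copy. The snapshot ID, account ID, and “backup-account” credential profile are hypothetical placeholders, and scheduling, error handling, and encrypted-snapshot key sharing are left out.

```python
# A minimal boto3 sketch of the manual cross-account copy described above.
# The snapshot ID, account ID, and "backup-account" profile are hypothetical
# placeholders; scheduling, error handling, and encrypted-snapshot key
# sharing are left out.
import boto3

SNAPSHOT_ID = "snap-0123456789abcdef0"   # hypothetical source snapshot
BACKUP_ACCOUNT = "111122223333"          # hypothetical backup account ID
REGION = "us-east-1"

# Step 1 (run as the primary account): let the backup account read the snapshot.
src_ec2 = boto3.client("ec2", region_name=REGION)
src_ec2.modify_snapshot_attribute(
    SnapshotId=SNAPSHOT_ID,
    Attribute="createVolumePermission",
    OperationType="add",
    UserIds=[BACKUP_ACCOUNT],
)

# Step 2 (run as the backup account): copy the shared snapshot so the backup
# account owns its own copy, beyond the reach of a compromised primary account.
dst_ec2 = boto3.Session(profile_name="backup-account").client(
    "ec2", region_name=REGION
)
copy = dst_ec2.copy_snapshot(
    SourceRegion=REGION,
    SourceSnapshotId=SNAPSHOT_ID,
    Description="Cross-account copy for data protection",
)
print("New snapshot in backup account:", copy["SnapshotId"])
```

You would still have to run something like this on a schedule, which is exactly the scripting-and-cron land I’m talking about.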

So is it ready?

As I said earlier, an AWS datacenter is probably more resilient and secure than most datacenters.  AWS is ready for your data. But I do think there is work to be done on the data protection front.  Right now it feels a little like déjà vu.  When I start to think about shell scripts and cron, I start feeling like it’s the 90s.  It’s been 17 years since I last worked on hostdump.sh, the tool I wrote to automatically back up filesystems on a whole bunch of Unix systems.  I really don’t want to go back to those days.


Is a portable hard drive the best way to back up a laptop?

Short answer: no, it’s the worst way

Alright, the worst way would be to not back it up at all.  Sadly that’s the most common way. Other than that, the worst way would be to back it up to a portable hard drive.

Portable hard drives are unreliable

I have used portable hard drives for years, and I can’t tell you how many of them have failed in that time.  Let’s just say it’s in the dozens.  It could be the physics of putting a hard drive in such a small container.  That would explain why they fail much more often than the same drives in a laptop.  Maybe it gets too hot in those enclosures; maybe just being small like that allows them to get roughed up more than they would in a laptop.  All I know is they fail much more often than any hard drive I’ve ever had.  And when the hard drive itself doesn’t fail, the electronics around it do.

It’s with your laptop or PC

Using a portable hard drive as your backup means you’re probably storing it next to your PC or putting it into your laptop bag when you travel.  That means it’s right next to the thing it’s protecting.  So when the thing you’re protecting catches fire or gets stolen, your protection goes right along with it.  Remember, you’re just as likely (if not more likely) to have your laptop stolen as you are to have a hard drive failure.

What about DVD backup?

DVDs are more reliable than hard drives, but they have their own problems.  The biggest challenge is that their capacity and throughput are way off from what most people need. Hard drives can easily hold many hundreds of gigabytes of data, even terabytes.  Backing that up to even Blu-ray discs is going to take a lot of discs and a lot of time; the transfer rate of burning something in with a laser is pretty slow.
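
To put rough numbers on that, here is the back-of-the-envelope math, assuming a 1 TB drive, 25 GB single-layer Blu-ray discs, and a 4x burner at roughly 18 MB/s (all assumed, typical figures):

```python
# Back-of-the-envelope math with assumed, typical figures.
drive_gb = 1000          # 1 TB laptop drive
disc_gb = 25             # single-layer Blu-ray disc
burn_mb_per_sec = 18     # roughly a 4x Blu-ray burner

discs = drive_gb / disc_gb
hours = (drive_gb * 1000) / burn_mb_per_sec / 3600
print(f"{discs:.0f} discs, about {hours:.0f} hours of burning")
# -> 40 discs, about 15 hours of burning (before any disc swapping)
```

Forty disc swaps and the better part of a day of burning is not a backup strategy anyone will stick with.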

So what do you do, then?

I don’t see any other sensible method than to back it up automatically to a system designed to back up laptops and desktops over the Internet.  This could be a piece of software you purchase and install on systems in your datacenter.  If you go that route, however, you’re going to need to make sure the system works for people who aren’t on the corporate network.

What makes the most sense for this data is a cloud-based data protection system. It would support everyone no matter where they reside.  There are no hard drives to manage, no backup hardware to purchase and manage, and everyone everywhere can back up their computers and access their backups.

What do you think?  Is there a better way to back up laptops and desktops than the cloud?


Where does data come from: Laptops & desktops

The datacenter is no longer the center of data.  Data that needs to be protected comes from a variety of sources, most of which are not the datacenter. The first one I’m going to talk about is laptops and desktops.

There was a time when personal computers were used to access company data, rather than create it. In my first corporate job, I remember using a 3270 terminal to run Lotus 1-2-3 or WordPerfect.  Documents created on that terminal were not stored on the terminal; it had no hard drive or floppy drive!

(From IBM 3270 on Wikipedia)

Documents created on that computer were stored on the company’s servers in the datacenter. Then I was responsible for backing up those servers. I remember backing up hpfs01, or HP file server 01, where all that data was stored.

If you wanted to create data, you came to the office and you used the 3270 to do so.  No one took their data home.  No one created data at home.  Even once we added the ability to dial in from your home PC, you used a terminal emulator to telnet into the Lotus or WordPerfect server to do your actual work.

Enter Windows, stage left

I still remember the first time I saw Joe (his real name) using Windows in the office, and I remember he was using some new thing called Microsoft Word. I remember fighting the idea for so many reasons, the first of which was: how was I supposed to back up the data on that guy’s floppy drive?   We forced that user to store any data he created in his home directory on hpfs01.  Problem solved.

We weren’t in danger of having Joe take his work home.  His PC was strapped to his desk, as laptops just weren’t a thing yet. I mean, come on, who would want to bring one of these things home?  (From http://www.xs4all.nl/~fjkraan/comp/ibm5140/)

Enter the laptop

Once laptops became feasible in the mid to late 90s, things got more difficult. Many companies staved off this problem with corporate policies that forced employees to store data on the company server.

For a variety of reasons these approaches stopped working in the corporate world. People became more and more used to creating and storing data on their local PC or laptop.

A data protection nightmare

The proliferation of data outside the datacenter has been a problem since the invention of cheap hard drives.  But today it’s impossible to ignore that a significant amount of data resides on desktops and laptops, which is why that data needs to be protected.

It must be protected in a way that preserves it for when that hard drive goes bad, or is dropped in a bathtub, or blows up in a battery fire.  All sorts of things can result in you needing a restore when you have your own hard drive.

It also must be protected in a way that allows that data to be easily searched for electronic discovery (e-discovery) requests, because that is the other risk of having data everywhere. Satisfying an e-discovery request across hundreds of laptops can be quite difficult if you don’t have the ability to search for the needle in a haystack.

My next post will be about why portable hard drives are the worst way you can back up this important data.

Check out Druva, a great way to back up this data.


My head’s in the clouds: I just joined Druva

After almost 25 years of specializing in backup and data protection as an end user, consultant, and analyst, I’ve decided to work for my first vendor.  I started today at Druva.

Why a vendor?  Why now?

I figured that it was time to put up or shut up. Put my money where my mouth is.  To fully understand this industry I have to experience it from all sides, and that includes the side trying to make it all happen.  I’ve been an end user, a consultant, and an analyst.  Now it’s time to try making it happen.

Why Druva?

I’ve been a fan of cloud-based data protection for some time now, as anyone who ever attended one of my backup schools can attest.  It makes the most sense for the bulk of the market and offers a level of security and availability simply not available with traditional solutions.

Anyone who has heard me speak knows I’m not anti-tape.   In fact, I think tape is a great medium for some things. But it hasn’t been the right medium for operational backup for quite some time.  Obviously more to come on this and other subjects.

But if disk is the right medium for operational backup, how do you get that data offsite to protect against disasters?  There are many answers to this question, but I have felt for a long time the best answer is to back up to the cloud.  If your first backup is to the cloud, then it’s already offsite.

Of course, having your only copy of data in the cloud can be problematic for large restores with a short RTO. This is why Druva has the ability to have a local copy of your data to facilitate such restores.

Druva was founded in 2008 by Jaspreet Singh and Milind Borate, and it has over 4,000 happy customers running its products.  Druva’s first product was inSync, which focuses on protecting and sharing data from desktops, laptops, and cloud applications such as Office 365, G Suite, and Salesforce.com. Druva’s second product is Phoenix, which is designed to protect datacenters.  It protects VMware and Hyper-V workloads, as well as physical machines running Linux or Windows.  One of Druva’s differentiators is that all data, regardless of source or type, is stored in a central deduplicated repository to facilitate data governance, e-discovery, and data mining.  I’ll be talking more about those things as I learn more about the company and its products.

This post was going to be longer, but the first day at my new job turned out to be a lot of work.  So I’ll keep it short and sweet. Mr. Backup has joined Druva!

Keep it cloudy, my friends.
