Is dedupe more expensive than JBOD?

My good friend, Steve Duplessie, wrote a blog article that basically said the issue dedupe was designed to solve is “NO LONGER VALID” (caps his).  He didn’t say dedupe is bad, but he did say he can buy JBOD for 1/9th the cost of a deduped system; “therefore, using deduplication to solve an economic scarcity issue is no longer legitimate.”  He also said that “if I’m off by 100%, I’m still 1/4 the cost.”

With all due respect to my good friend, I don’t agree with him any more on this than I do on whether or not you should root for the Patriots or the Chargers.  His logic is sound, but his numbers are off — off by a whole lot more than 100%.  And there’s another “scarcity” factor that he’s not considering.

Steve’s post is a follow-on to another post of his that was inspired by a book called “Free” by Chris Anderson.  Among other things, this book apparently talks about the concept of “scarcity” and how things that solve scarcity problems make money.  He says, and I agree, that backing up to disk was meant to solve the scarcity of time when backing up to tape.  Then dedupe was created to solve the scarcity of money, since backing up to disk cost too much.  But then he says this is no longer an issue because cheap disk costs so much less now, and he makes his point by comparing the alleged cost of a 30 TB Data Domain system (which he says costs $90K) to a 30 TB JBOD system (which he says costs $10K).  His conclusion: the issue that dedupe was trying to solve is no longer valid.

First, let’s talk about the Data Domain pricing used in the blog post, as it definitely doesn’t match what I’m seeing.  I verified today that the list price of a Data Domain system capable of holding 30 TB of backups is $32K, not $90K.  That’s for a DD510 with one expansion tray, which gives you 2.7 TB of usable capacity.  With an 11:1 dedupe ratio, which is a realistic ratio, that will hold 30 TB of backups.  Just for comparison, Quantum also has a 1.9 TB system that lists for $12.5K.  Two of those with a 10:1 dedupe ratio and you’ve got yourself roughly 40 TB of backup capacity for $25K.  Backing out the extra capacity makes the effective list price for 30 TB of Quantum about $19K ($25K * 30 / 40).
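
If you want to check that math yourself, here’s a quick back-of-the-envelope sketch.  It uses only the list prices and dedupe ratios I just quoted, so treat it as an illustration, not a quote; your ratios will vary.

    # Effective price for 30 TB of protected backups, using the list prices
    # and dedupe ratios quoted above (illustration only; your ratios will vary).
    def effective_price(list_price, usable_tb, dedupe_ratio, needed_tb=30):
        logical_tb = usable_tb * dedupe_ratio        # backups the system can hold
        return list_price * needed_tb / logical_tb   # pro-rated cost for needed_tb

    print(effective_price(32000, 2.7, 11))   # DD510 + tray: ~$32K for ~30 TB of backups
    print(effective_price(25000, 3.8, 10))   # 2 x Quantum 1.9 TB: ~$19-20K pro-rated to 30 TB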

As to the JBOD system, I have no idea where I’m supposed to buy a 30 TB disk array for $10K.  Let’s look at a few disk systems.  A Dell MD3000 configured with 15 x 1 TB drives sells direct for $16.5K, so that’s $33K for 30 raw TB (pre-RAID 5).  But I need 30 TB usable, which means I’ll need to buy three of these, for a total cost of about $50K.

If you’re OK with not buying a big brand name, the least expensive arrays I’ve seen in the middle enterprise are NexSAN arrays, and they list for about $30,000 for 30 TB — raw.  A reseller that I know says that their rule of thumb (based on the way customers usually configure them) puts them about $40K-45K for 30 usable TB.

OK, forget any kind of brand name and let’s just go for cheap.  The least expensive arrays I’ve heard of (but have never seen at a customer) are Promise arrays.  A Promise VTE310F array sells (direct) for $6,789 with no hard drives.  Filling it with 12 x 1 TB disk drives from buy.com at $90 apiece adds $1,080.  So that’s $7,869 for 12 raw TB using the cheapest arrays and disks I can find.  I would need three of them to get to 30 usable TB, costing about $23K.
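
And here’s the same rough math for the JBOD side, using the street prices above and assuming roughly three shelves of 1 TB drives to net 30 usable TB after RAID (again, a sketch, not a quote):

    # Rough street-price math for the JBOD options above (prices as quoted in
    # this post; assumes ~3 shelves to reach roughly 30 usable TB after RAID).
    dell_md3000 = 3 * 16500                 # ~$50K for 30 usable TB
    nexsan_rule_of_thumb = (40000, 45000)   # reseller's ~$40-45K for 30 usable TB
    promise_diy = 3 * (6789 + 12 * 90)      # chassis + 12 x 1 TB drives each: ~$23K
    print(dell_md3000, nexsan_rule_of_thumb, promise_diy)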

It’s also important to point out that there are many classes of storage in the enterprise, and comparing a Promise array to a Data Domain or Quantum system is ignoring those classes completely.

So, that’s $19K-$32K list price for the dedupe option and $23K-$50K street price for the JBOD option. Without looking at anything else, deduped storage is cheaper than JBOD disk — even if you build it yourself.

And that’s just the acquisition price.  Remember that the deduped system will use a tenth (or less) of the power and cooling of the JBOD system, and that’s a huge deal.  Steve said that power isn’t scarce and hasn’t been for a long time. Are you kidding me?  I’ve been in datacenters that have been told by the power company that they can’t have any more power.  And even when it’s not scarce, it’s expensive, and anything we can do to reduce it is a good (and green) thing.

Secondly, deduping backups enables replicating those backups, which simply isn’t possible in most datacenters without dedupe.  Trust me — bandwidth is scarce.  Now you can have onsite and offsite disk-based backups without moving tapes around.  If you want to make tape copies, make them offsite — and they never get moved.

Finally, Steve’s post missed another important point.  Disk is not in competition with deduped disk; it’s a component of deduped disk. So as disk gets cheaper, so does deduped disk.  As disk prices have fallen over the last several years, so has the per-GB pricing of dedupe systems.

So even if it cost more (which it doesn’t), it would still be a Good Idea for those reasons.  And that’s all I have to say about that.

I like Steve a lot, but I think he needs to check his pricing numbers again.

Written by W. Curtis Preston (@wcpreston), four-time O'Reilly author, and host of The Backup Wrap-up podcast. I am now the Technology Evangelist at Sullivan Strickler, which helps companies manage their legacy data.

23 comments
  • Well, I just bought a Dell MD1000 with 15x1TB drives and prepaid 5 years of 4-hour service for $9600, but you’re right, that’s still not 30TB for $10K.

    We’re currently an all-tape NetBackup shop, weighing our options for the future. How does the math work out when you start including the “tax” the software vendors charge for using smart deduping systems? For instance, I know Symantec charges you more money if you store your backups on “smart” deduping disk than on “dumb” JBOD disk.

    Some of the capacity charges that the disk vendors use are really making the price tags difficult for those of us who work in academia to swallow. Cheap commodity disk has made our file servers grow at an alarming rate; we’re seeing 50% annual growth in the number of TB we’re protecting. Tacking tens or hundreds of thousands of dollars per year onto our licensing bills is tough to swallow, especially when we still have to buy the hardware after that…

  • I will concede that there are ways you can buy storage cheaper. What I don’t concede is that any of the ones that I’ve seen are valid comparisons to the systems that we’re talking about in terms of reliability, manageability and throughput.

    But even the pricing that you brought up still didn’t match Steve’s pricing. In order to get that cheap, you have to start buying products I’ve never heard of that aren’t scalable and have no management built into them.

  • Quantum tells me that they want to store an uncompressed full backup of each target I would throw at it, and then it’ll dedupe off the subsequent fulls and incrementals.

    Assuming this is true, and I have 30TB I back up in a week normally, I’d need to get a 35-40TB DXi. We looked at something around half that size and got a bad case of sticker shock. I could throw a lot of SATA disk at my backups for what they would want me to purchase.

    So, is one of your base assumptions that dedupe happens at the second block your dedupe box receives? And if so, is this how Data Domain and the other players (besides Quantum) do it?

    That seems (to me) to be the pivot point for JBODs vs DeDupes. If it’s like Quantum’s (and this all assumes that it does, in fact, store a raw non-deduped image of each box you throw at it), then I’m storing a good 50-80% of my data on disk that’s 2 to 3 times more expensive; I may as well buy cheap disk.

    I don’t mean to pick on Quantum (I like their stuff), but when it was sales-pitched to me this way, I had some reservations.

  • When will the truth be exposed? De-duplication is a disaster for recovery, DR in particular. You cannot design a backup infrastructure with recovery in mind, or at least you will have to make sure that your recovery window is very wide, when thinking of implementing de-dupe.
    Just an example: in a remote DR site, it would take an average of 40 hours to re-hydrate 4TB worth of deduplicated data so it would be ready for recovery, and this is on a high-end machine with state-of-the-art CPUs and a lot of memory.
    For all the companies that are marketing DR and selling de-duplication: change your slogan.
    Being objective, the only de-dup vendor that is really thinking about recovery is Sepaton, but they have other issues 🙂

    The Doctor

  • [quote name=Doctor Recovery]it would take an average of 40 hours to re-hydrate 4TB worth of de duplicated data so it would be ready for recovery, and this is on some high end machine with state of the art CPUs and a lot of memory
    [/quote]
    Hang on a sec- this doesn’t look right. Your numbers seem to imply that "rehydration" is a process that occurs BEFORE you can restore any data- but that doesn’t make any sense with any of the disk-based dedupe stuff I’m aware of. You don’t really "rehydrate" in the sense of assembling the data back into a whole unit in order to restore it. The system just grabs the appropriate blocks for that file from wherever the database tells it they were stored. More like a highly fragmented disk.

    See Curtis’ post on Rehydration (https://backupcentral.com/content/view/247/47/) from June 21 for what I’m talking about.

    I’m not saying your numbers are wrong- with no major dedupe system in place at my company I certainly don’t have any evidence to the contrary- I’m just not sure the logic you seem to be laying out in this post holds up. I’d be very interested in seeing more detail on this.

  • You’re not going to find a stronger advocate of recovery over backup than me. Having said that, I have to say I completely disagree with your assertion. I’m guessing you had some bad sea urchin and now you’re declaring that all sushi is bad.

    I did post that rehydration is a myth, and frankly I’m not sure where it came from. As I said in that post, there is no such thing as a rehydration process. You’re either reading fragmented or non-fragmented data. See the blog post: https://backupcentral.com/content/view/247/47/ And you do seem to refer to this rehydration process (which does not exist IMHO) as if it’s a separate process, and I also know of no product that has that concept.

    Are the restore speeds of some dedupe systems absolute crap? Absolutely! It sounds like you have tested one of them. But you can’t dismiss an entire industry just because a few of their products have a major flaw. The concept isn’t flawed; the vendor’s design is.

  • Not challenging the honorable Mr. Preston, but not all the facts are laid down as they should be. To illustrate what really happens, without getting into the number of reads or seeks a system makes to a disk, I would like to take a step back and use a relatively simple example. Assume you backed up a file or a directory with an imaginary recovery application that uses arj as its format. In this case the data goes directly to disk, and I am not taking into account any compression algorithms above and beyond the compression that arj provides (which, by the way, would be a bad implementation according to Shannon’s theory of compression). Now, the application took the native file, compressed it, and wrote it to the target media.
    At recovery time, the application calls the arj image it has created, uncompresses the data, and sends it back to the client; otherwise, the poor client will not get back the data he had sent. Zoom in on the operation and you will see that the recovery application does the compression and uncompression of the file, utilizing the available horsepower on the local system it is installed on.
    Now let’s move to the same scenario, but this time on tape with compression turned on. Who is compressing and uncompressing the data? That’s right, the tape!
    Now, let’s take target-based de-duplication as another example. Who will be responsible for reconstructing the data to the state in which it was received and sending it back to the requester? Correct, again: the de-dup device. And how much time does it take? On average 100GB/Hr. Is that good enough for disaster recovery? Not in my book!
    Quoting one vendor that I challenged on this: “this is the cost of doing de-duplication.” So yes, we save a lot of money on space (a myth?), but we will spend more on wasted hours getting the data back to its rightful owners in case of a disaster.

    This is why vendors, mainly the one I mentioned in my earlier post, are using a “disk cache” to leave the latest backup image intact for faster recovery; otherwise, we are all in for one big long haul to get our business up and running again.

    Here is a call to the recovery application vendors: why not be smart enough to create a full copy of the last backup, ready for recovery, in a “disk cache” right after the replication between the target de-dup devices is completed? Do it automatically and make your customers’ lives easier.

    The Doctor

  • I knew it. I knew it. I knew you had some bad sushi.

    You’re taking the limitation of ONE PRODUCT (e.g. 100 GB/hr) and assuming/claiming that it’s true of all dedupe vendors. It’s NOT. It’s only true of that vendor, which is why that vendor has a disk cache. The other vendors who don’t have this limitation don’t need the cache. And hopefully, when said vendor gets their crap together, neither will they! They’re working very hard to increase recovery speed.

    BTW, I’m not disputing that the dedupe system is the one responsible for turning the deduped bits into non-deduped bits; what I’m disputing is that rehydration is anything close to what we’re talking about here. Rehydration implies that something is missing and you’re putting it back in; that does not occur. I’d rather use the term reassembly, as that’s closer to what happens. They just need to grab all the bits and bytes from the various places they happen to reside and send them to the appropriate place. As I said in the other post, this is the same thing that any filesystem does; it’s just that a dedupe filesystem is more fragmented than others. And yes, some of them (including the vendor of which you speak) made some very bad decisions during design that resulted in very bad restore speeds. Please do not condemn a whole industry because of one vendor.

  • I guess I was misunderstood, so here are some more facts… We have tested a number of vendors, starting from the market leaders and down the food chain, with data sets that are not “off the charts” in any shape or form (some user data, some Oracle and SAP DBs), for 30 days, where the ultimate goal was to test disaster recovery… yes, when I build an infrastructure I think about recovery first. You would be surprised: the average speed to recover never passed the 100GB/Hr benchmark; some went better, some went worse, but that was the average. Now, from where I sit, backing up 2TB/Hr to get 100GB/Hr back on the same infrastructure is not acceptable; you know what, even 200GB/Hr will not cut it. It does not justify the cost, any cost, regardless of vendor, and yes, the industry as a whole is not there yet. So, you can use my real-world example or you can dispute it… it is what it is. The vendor I have mentioned may have made some poor decisions, but at least they had recovery in mind, and that is not to say my company purchased that solution.

    I would not argue about terms; after all, it is just semantics. What is really important is: how fast will I get my data back?

    The Doctor

  • I can’t argue with your experience, but I can say that it doesn’t match mine. I have also tested dedupe systems, and have also talked to many, many people who are using them. 100 GB/hr is 27 MB/s! I’ve seen restore speeds like that with source dedupe systems, but the only target dedupe system I’ve seen that is that bad is the one you are talking about with the cache.

    I’d really like to talk to you offline so we can speak more frankly. If you’re open to that, please send an email to curtis – at – backupcentral.com. Thanks.

  • Sorry The Doctor, that is not my experience at all.

    I have several inline de-dupe boxes that I am not allowed to replicate, so I have to create tape copies every day. In addition, the backup solution is TSM, so I am constantly reclaiming virtual volumes.

    Running dd from the VTL to tape, I can reliably move 150MB/s (500GB/hr) from backups of any age. Add a horribly tuned TSM and a poorly tuned server into the mix and I still move 50-100MB/s (175-350GB/hr). Over the last 6 months, tape copies (100% read) ran 250-300GB/hr and reclaims (50/50 R/W) ran 200-250GB/hr.

    The last item is that these are all 2 to 3 years old. We just started using a box we purchased 3 months ago and are seeing twice the performance of the older boxes.

    I heartily agree with you on the lack of focus on restore speeds however. I get into arguments with every vendor over sizing requirements, since I size for the restore and they size for the backup.

  • If you back up at 2TB/Hr and get only 500GB/Hr on recoveries, what did you gain?
    Even with a 1TB/Hr box you still lose a considerable amount of time on recoveries.
    The expectation is to get at least 3/4 of the backup performance on recovery.
    In my environment, I am moving about 10TB of data daily in 2 hours (~5TB/Hr). If I want to meet my recovery expectation I would need at least 7 x 1TB/Hr recovery boxes. Without going into the numbers, buying 70TB of disk to hold my last 7 days, or at least my last good full and some incrementals, will be much cheaper and much more cost-effective. Don’t you agree?

    I am glad that you size for recovery. Maybe we should start a recovery customer forum?

    The Doctor

  • I am not against investing top dollars in infrastructure; for those who know me, I am the complete opposite. I am investing top dollars in scalable and reliable infrastructure, always projecting my company’s data growth and business requirements for the next 2-3 years. I am investing top dollars in state-of-the-art network and fibre cables, fast backplane network and SAN switches, and state-of-the-art backup servers; my staff actually test and tag my backup servers as a 1TB/Hr server, a 2TB/Hr server, and so on. And most importantly, I am investing in talent. I am using Legato as my recovery application, achieving almost wire speed on my 1Gbit networks and on average 650MB/s on my 10Gbit networks. My average daily backup success ratio is 99.9%. My recovery success ratio is 100%, and we are doing DR exercises every month, sometimes with real-life twists.
    What I would not do is invest in technologies that look nice on marketing slides but are far from mature.

    The Doctor

  • I recommend that everyone test recovery speed before buying dedupe. I recommend they test single stream and aggregate speed. And I recommend that any vendor that says "that’s the cost of doing dedupe" should be immediately taken off the table as a viable vendor. Dedupe is possible without significant performance degradation during restores. I’ve seen it too many times to believe otherwise. But there is a lot of crap out there, and you have to test to see what is what. (I’m actually starting a company to do that, BTW.)

    I again ask that you contact me offline so I can understand more of what you’re trying to say. My email is curtis – at – backupcentral.com.

    You say that what he wrote proves your point. I don’t follow. You said:

    we have tested a number of vendors … the average speed to recover never passed the 100GB/Hr benchmark

    100 GB/Hr is 27 MB/s. He posted that he’s getting five times that, and you say it proves your point.

    You seem to think it proves your point because you’re comparing his 150 MB/s to your 2 TB/hr number. You can’t do that because you don’t know what his backup speed is. His boxes are a few years old. A few years ago there was no such thing as a target dedupe box that could go 2 TB/hr. In fact, Data Domain didn’t ship such a box until last month. And until they upgraded their code several months ago, their best number was about 1 TB/hr. You also don’t know (because he didn’t say in his post) whether his 150 MB/s is a single-stream or an aggregate number. So I’m asking him for clarification on that.

    But my summary is that you said that dedupe was "a disaster for recovery" because restore speeds were never faster than 27 MB/s, and that definitely is not the case. It’s not the case in my experience, or in the experience of this user. Again, I’m not saying that this didn’t happen in your testing. I’m saying it’s not happening in other people’s testing.
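
    Since we keep bouncing between GB/hr and MB/s in this thread, here’s the trivial conversion I’m using. It’s just a quick sketch; the only numbers plugged in are the ones already quoted above.

        # Convert the GB/hr figures in this thread to MB/s (using 1 GB = 1000 MB).
        def gb_per_hr_to_mb_per_s(gb_per_hr):
            return gb_per_hr * 1000.0 / 3600.0

        print(gb_per_hr_to_mb_per_s(100))   # ~27.8 MB/s (the 100 GB/hr benchmark)
        print(gb_per_hr_to_mb_per_s(500))   # ~139 MB/s (roughly his 150 MB/s dd number)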

  • My fault, I apologize.
    It was 100MB/Sec (360GB/Hr) on average.
    The vendor who told me that this is the cost of doing de-duplication is the same vendor that owns my recovery application, and that was before the latest acquisition.
    For customers who run backups at 150MB/s (~1 x LTO-4), it is still cheaper to buy 2 drives and a lot of tapes than to buy a single de-dup appliance. So what are you saying, Curtis? That de-duplication is the most expensive poor man’s solution for backup?

    The Doctor

  • That number makes more sense. And I would agree that when you were testing, that was probably the best single-stream number that was out there. Was that your single-stream number or an aggregate number (running multiple restores simultaneously)? FWIW, there are better ones now (as much as 300+ MB/s in a single stream), although some of them are still limited to just under 100 MB/s. In addition, their aggregate numbers are even higher.

    BTW, takes a big man to back down. Good on ya’. 😉

  • I assume that any DR scenario will include a single-stream recovery; am I way off base with that assumption? So I guess the answer is obvious.
    BTW, the appliance from the storage giant I was referring to in my last post did perform at 100GB/Hr (~30MB/s).
    But I believe, Curtis, the basic question still remains: is de-duplication really an adequate technology from a price/recovery-performance perspective? The answer, behind the picture you are painting, is still no, in my opinion.
    Any financial model you put on the table just doesn’t fit, and any “imaginary” savings/ROI that occur when deploying this technology cannot be compared to the extended downtime your business will suffer during a real disaster.
    I second you that everyone should test, retest, and then test again the technology before deploying it; that’s why a POC in my book is no less than 30 days of extensive performance/scalability testing. In most cases, I believe, you will find that the better investment is in the little things that make a big difference when the time comes.

    Looking ahead, I really like what Microsoft is doing with VSS, especially block-level incremental transfers. I heard that VMware is working on doing the same; if all recovery application vendors adopt this, it is going to be a wonderful world to live in.

    Have a great weekend !

    The Doctor

  • Single stream speed determines tape copy speed and it determines the speed of your high-priority restores. If you’ve got a multi-TB database to restore, you dang sure will care about single stream speed.

    If a dedupe product does this:
    1. has fast backups (all types)
    2. gets my stuff offsite quicker and safer than tape (no encryption required)
    3. can restore my stuff faster than tape (I know this is your sticking point)
    4. can restore my stuff at close to the speed of non-deduped disk
    5. has a similar cost to tape and disk (see the post this comment is in)

    Then why wouldn’t you want to buy it? You’re saying that it can’t meet those requirements, and particularly can’t meet #3 or #4. And I’m saying that I have seen systems that can. I’ve also seen systems that fail miserably at all of them. But the majority that I’ve seen are better at most of those things than tape is.

    You recommended just buying a tape drive or two. If people could actually keep an LTO-n tape drive happy, I’d be fine with that. But 90% of customers (that’s a hard number) that I have visited are getting less than half the rated throughput of their tape drives, and 50% (again, hard number) are getting less than 20% of the rated throughput. And that problem only gets worse as drives get faster. And the worse it gets the worse backups get. Disk is the perfect target for backups, and dedupe lets us keep it on disk longer — and a good dedupe system DOES NOT slow down restores.

    As to testing, I believe that if you’re going to store 90 days worth of backups in production, you should do that in test. But it doesn’t take 90 days to put 90 days of backups into a target dedupe system if you automate it using tape copies. You can usually do such a test in a fraction of real time.

    Glad to hear MS is getting on the bandwagon that NetApp & VMware started over 10 years ago. 😉

  • Have you heard of storagemonkeys.com? They do a regular podcast over there and I’m a frequent guest. I’d love to invite you to come on there and “duke this out” in an easier forum. You say your piece; I’ll say mine, etc. I promise to be civil. If you want to stay anonymous, I bet they’ll just introduce you as Dr. Recovery.

  • I do have an account with storagemonkeys.com, and no I didn’t vote for your blog :-).

    As for single stream, I couldn’t agree more; however, for the best possible recovery performance you must have a dedicated, noise-free channel from your recovery source to your target. The problem with a de-dupe appliance is that it has only one database that keeps track of all the mess on the disk, and I don’t think you would dedicate an entire appliance just to recover a single application. In disaster recovery, time is crucial and the pressure is high. I mentioned real-life twists; think about “hey Curtis, your wife just called, the storm that destroyed your data center is destroying your house and your cat is trapped in the basement.” In your example a possible response would be “wait honey, I have to wait for my SAP to be recovered, since single stream is king, so I can start my Exchange recovery later; that will take a few hours, so in the meantime I can come home and help evacuate you and Fluffy.”
    Tape or a traditional disk source will be much more efficient in that case. A good DR will take 25% longer than your last full backup window, and this is a hard, proven, repeatable fact in my organization. And for a DR to be successful, the secret is: do more, wait less.
    You say 90% can’t meet their tape’s MTS? I want to believe that is the real problem. I thought “Mr. Backup” was seasoned enough to help those people solve this one, and I really hope that your solution will not be: change your backup paradigm, suffer in recovery, and throw more money at licenses and maintenance.
    One more thing: when a de-dup appliance that answers all the items you mentioned is available and proven mature, I will be the first in line to buy it. Meanwhile, don’t waste our time. The last sales guy who tried to pitch me a de-dup product now has to sing his Ave Maria with a lot of faith and dedication if he ever wants to see me again.

    “Problem with a de-dupe appliance is it has only one database that keeps track of all the mess on the disk”

    You are generalizing again. Not all of them even HAVE a database or anything like it that is used during restores. In addition, having one database/hash table/or anything else is not bad as long as it can respond to all the requests.

    And I think you’re misunderstanding the single stream concept. It doesn’t mean that other restores have to wait. The question is what is the fastest a single restore/copy can go if it only has one stream. Copies to tape by definition only have one stream, and often large restores also only have one stream (e.g. one large backup of one large filesystem). This number is important because an individual restore/copy can never go any faster. Once you know the single stream restore speed, you then want to know how many copies/restores of that speed the appliance can support. For example, a DD 880 supports a single stream restore speed of just over 300 MB/s, and it supports an aggregate speed of 900 MB/s for non-NetBackup backups, so it should support three such restores before individual restore speed starts degrading.
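
    To make that concrete, here’s a back-of-the-envelope sketch using the round numbers I just quoted (treat them as figures from this thread, not a vendor spec sheet):

        # How many full-speed concurrent restores before individual streams degrade?
        # Numbers are the round figures quoted above, not a vendor spec sheet.
        single_stream_mb_s = 300    # max speed of one restore/copy stream
        aggregate_mb_s = 900        # total the appliance can sustain
        print(aggregate_mb_s // single_stream_mb_s)   # 3 concurrent full-speed restores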

    In real life, restores rarely approach the maximum single stream restore speed of most appliances, as restores are usually throttled elsewhere (as you mentioned). But if you do need a really fast restore and you can design your infrastructure to support it, it’s important to know the single stream performance of the appliance(s) you’re considering.

    And of course I can help them get better with their tape drives. Currently my best advice is to go to disk first. It’s sooooo much easier.

  • Because now you would have to answer the million-dollar question.
    To get 900MB/s of restore speed, I have to invest at least $400K to purchase a single 880.
    To get the same speed from LTO-4 I would need 6 drives, or maybe less. Keep in mind that my environment, which is probably not typical of what you see based on your description, is designed from the ground up for fast recoveries. Those drives, with a nice high-speed robot and 100 slots, will probably cost me around $200K. You know what, I’ll be generous with you and add a disk storage system, for those clients who can’t push data fast enough, for another $100K. Where are my savings with de-duplication? You are right: on paper.

    Disk is easier? OK. For de-duplication appliances in VTL mode you cannot use multiplexing, so you will have to create a lot of virtual tapes to satisfy your performance requirements, which is a management nightmare for any recovery application.
    In NAS mode, multiplexing is not a problem, for Legato at least, but then you are hitting a size limit that will eventually affect your cloning performance. You put it well: single stream. It takes 12 hours to move 6TB of data to a single LTO-4. You want to cut the time? Create and manage multiple NAS shares. Yes, disk is sooooo easy.

    I will finish this discussion with one of my favorite quotes: “I would rather die of thirst than drink from a cup of mediocrity” – Stella Artois

    The Doctor

  • First, no one — and I mean no one — gets 900 MB/s out of 6 LTO-4 tape drives. They don’t get it during backup — and they definitely don’t get it during recovery. As I’ve already said, 90% get 50% or less. AND, as long as you’re sizing your tape library, you need to take into account that most people only get about 50% media utilization at most. So I don’t think you’ve sized things properly for either backup or restore. Therefore I don’t think that’s a valid comparison.
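
    Here’s the kind of derating math I have in mind. The 50% figure is the hard number I cited above; the 120 MB/s native LTO-4 rate is my own assumption from memory, not something from this thread.

        # Derating the "6 x LTO-4 = 900 MB/s" assumption with real-world numbers.
        # 120 MB/s is the assumed LTO-4 native rate (no compression); 50% is the
        # derating cited above (what 90% of shops achieve, or less).
        drives = 6
        lto4_native_mb_s = 120
        rated = drives * lto4_native_mb_s      # 720 MB/s on paper, before compression
        typical = rated * 0.5                  # what most shops actually see, or less
        print(rated, typical)                  # 720 vs ~360 MB/s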

    Having said that, you never heard me say that dedupe was a way to save money when compared to backing up to tape. It’s simply BETTER than backing up to tape. Backups are easier, restores are easier (and yes, typically faster), and management is easier with disk than tape. I never said it was cheaper than tape.

    It IS cheaper than backing up to disk without it, which was the original point of this post.