Target deduplication appliance performance comparison

The world of target dedupe has changed significantly since I wrote my first performance comparison of target deduplication vendors 18 months ago.  It wasn't until I made the chart for this blog entry that I realized just how much things had changed.

Caveat Emptor

As I said in the last post, please remember that I am publishing only what the vendor advertises.  Some vendors that didn't like the numbers I published last time said that I'm validating the numbers by publishing them here, and that's nonsense; I am only collecting information from a variety of public sources and normalizing it so it can all be put into the same table.  If they say they support 4, 8, or 55 nodes in a global dedupe system, then that's what I put here.  If they say they do 1000 MB/s, then that's what I put here.  I have verified some of these numbers personally; however, since I haven't verified them all, I do not believe it would be fair to speak publicly about the ones I have verified.  Not to mention that this is actually how I make my living. 😉

Shameless Plug

Speaking of making a living… If you are an end user considering a purchase of one of these units, and you'd like to find out which of these claims are actually true, then a subscription to Truth in IT's Backup Concierge Service is what you're looking for.  For $999 a year you get to talk privately and anonymously to me, other backup experts, and other end users just like yourself who are using the product(s) you're interested in.  $999 a year pays for itself with one purchase of or upgrade to any of the products we cover. Just imagine having Mr. Backup in your corner every time you have to talk to a backup vendor!  If you're a vendor, sorry.  TruthinIT.com is for end users only.

Global Dedupe Matters

Before discussing the numbers, I once again feel it is important to explain why I allow some products to add the throughput of multiple nodes together, while I require other products to use only the numbers for a single node.  It's simple.  If a vendor's nodes behave like one node from a dedupe standpoint (i.e., they have global, multi-node dedupe), then they can put them together.  If they behave like multiple independent nodes (i.e., they don't have global dedupe), then why should they be allowed to put them together and call them one node?

I'll give you two examples from the target dedupe market leader, EMC/Data Domain, to illustrate my point: their new GDA, or Global Dedupe Appliance, and the DDX.  The GDA uses NetBackup's OST and Data Domain's Boost to load balance data across two DD880s and ensures that data is globally deduped across both nodes.  It is two nodes acting as one; therefore, it is perfectly valid to publish its throughput rate as one number.  The DDX, on the other hand, is a different story.  EMC continues to advertise the DDX as "a 16-controller DDX array [that] provides up to 86.4 TB per hour throughput."  The problem is that it's only an array in the general sense (e.g. an impressive array of flowers), not in the IT sense (e.g. a disk array).  The DDX array is really just 16 Data Domain boxes in a rack.  They know nothing of each other; if you send the same exact data to each of the 16 boxes, that data will be stored 16 times.  Therefore, I do not use the numbers for the DDX in this table.
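If the distinction still seems abstract, here is a toy sketch of the difference in Python. It is a made-up illustration, not any vendor's actual implementation: with one shared fingerprint index, nodes acting as one system store a duplicate chunk only once; with independent indexes, each box stores its own copy.

```python
# Toy illustration of global vs. independent-node dedupe -- not vendor code.
import hashlib

def store(chunks, indexes):
    """Spread chunks round-robin across nodes; each node consults the
    fingerprint index it was given.  Pass one shared index to model global
    dedupe, or one index per node to model independent appliances."""
    copies_written = 0
    for i, chunk in enumerate(chunks):
        index = indexes[i % len(indexes)]
        fp = hashlib.sha1(chunk).hexdigest()
        if fp not in index:          # this index has never seen the chunk
            index.add(fp)
            copies_written += 1      # a unique copy lands on disk
    return copies_written

chunks = [b"the same 8 KB chunk of backup data"] * 64

shared = set()
print(store(chunks, [shared, shared]))            # 2 nodes, global dedupe: 1 copy
print(store(chunks, [set() for _ in range(16)]))  # 16 independent boxes: 16 copies
```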

Vendors More Open

One of the things that has changed since I did this 18 months ago is that vendors now publish all their numbers.  Specifically, the post-process vendors put their dedupe rates on their websites, rather than publishing only their ingest rates.  I don't know whether it was this blog post that nudged them to do that, but I'd like to think I helped a little.  The result is that this table was built using only numbers that are publicly available on the vendors' websites, and in no case did I find a vendor that didn't have its numbers posted somewhere on its site.  Bravo, vendors.

Backup Only

I am publishing only backup numbers, for two reasons.  The first (and biggest) reason is that vendors tend not to publish their restore numbers, and I wanted this post to use only published numbers.  The second is that, with a few exceptions, the performance numbers for restore tend to be in line with the performance numbers for backup.

Having said that, backup is one thing; restore is everything.  Fast disk-based backup devices usually make fast disk-based restore devices, but do not assume this to be the case.  Test everything; believe nothing.

If I’ve told you once, I’ve told you 1024 times

I used 1000, not 1024, when dividing and multiplying these numbers. If that bothers you, go to imdb.com and find some movies to submit goofs on and stop picking on me.  I was consistent, and that is what matters in a table like this, IMHO.

The Comparison

The vendors are listed alphabetically, of course.  The product names are all links to the documents from which I derived the numbers.  If the product is an inline product, I put a number in the Inline Backup Speed column.  If it is a post-process product, I put numbers in the Ingest Speed and Dedupe Speed columns.

The Daily Backup Capacity column is my attempt to compare the two different types of products (inline and post-process) side by side.  Assuming you're going to dedupe everything, you can really only ingest as much data in a day as you can dedupe in a day.  So I took the value in the Inline Backup Speed column for inline vendors and the Dedupe Speed column for post-process vendors, multiplied it by 86,400 (the number of seconds in a day), then divided by 1,000,000 to get the number of terabytes each product could back up in a day.

The Usable Capacity column is the maximum amount of space you have to store deduped data on that particular appliance.  (This is after RAID overhead and does not include any deduplication.  The amount of backup data you could store on each appliance would be a function of your deduplication ratio multiplied by the usable capacity.)
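For what it's worth, here is the Daily Backup Capacity arithmetic as a small sketch (the numbers plugged in below are the GDA and Sepaton figures from the table):

```python
# Daily Backup Capacity arithmetic used in the table (decimal units: 1 TB = 1,000,000 MB).
SECONDS_PER_DAY = 86400

def daily_backup_capacity_tb(sustained_mb_per_s):
    """TB/day from a sustained rate in MB/s: inline backup speed for inline
    products, dedupe speed for post-process products."""
    return sustained_mb_per_s * SECONDS_PER_DAY / 1000000

print(round(daily_backup_capacity_tb(3555)))  # EMC GDA, inline speed  -> 307 TB
print(round(daily_backup_capacity_tb(2314)))  # Sepaton, dedupe speed  -> 200 TB
```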

Update: My first version of this table had NEC coming in at something like 16K MB/s, but it’s been updated with a much bigger number.  This is because I was using some older numbers from their website that they didn’t know were still there. I am now using the most up-to-date numbers.

| Vendor | Product | Inline Backup Speed (MB/s) | Post-process Ingest Speed (MB/s) | Dedupe Speed (MB/s) | Daily Backup Capacity | Usable Capacity | Notes |
|---|---|---|---|---|---|---|---|
| EMC | GDA | 3555 | N/A | N/A | 307 TB | 384 TB (raw) | 2 nodes, NBU/OST only |
| EMC | DD880 | 1500 | N/A | N/A | 129 TB | 192 TB (raw) | |
| EMC | DD880 w/Boost | 2444 | N/A | N/A | 211 TB | 192 TB (raw) | NBU, NW only |
| Exagrid | EX10000E | N/A | 5000 | 2000 | 172 TB | 200 TB | 10 nodes, NBU/OST only |
| Exagrid | EX10000E | N/A | 3500 | 2000 | 172 TB | 200 TB | 10 nodes |
| FalconStor | VTL | N/A | 12000 | 2000 | 172 TB | 268 TB | 8 VTL nodes, 4 SIR nodes |
| Greenbytes | GB 4000 | 950 | N/A | N/A | 82 TB | 108 TB | |
| HP | D2D4312 | 666 | N/A | N/A | 57 TB | 36 TB | |
| IBM | ProtecTier | 1000 | N/A | N/A | 86 TB | 1000 TB | 2 nodes |
| NEC | HydraStor HS8-2000 | 27500 | N/A | N/A | 2376 TB | 1320 TB | 55 accelerator nodes, 110 storage nodes |
| Quantum | DXi 8500 | N/A | 1777 | 1777 | 153 TB | 200 TB | |
| Sepaton | S2100-ES2 | N/A | 4440 | 2314 | 200 TB | 1600 TB | 8 nodes |
| Symantec | NetBackup 5000 | 7166 | N/A | N/A | 619 TB | 96 TB | 6 nodes; requires NBU Media Server dedupe to get this throughput |

Observations

The big winner here is NEC, coming in more than three times as fast as their closest competitor.  This is, of course, a function of the fact that they support global dedupe, and that they have the resources to certify a 55-node system.  (It helps to have an $86B company behind you.)  This is one of the reasons that I referred to them in a previous blog post as the best product you’ve never seen.  In addition to being fast, they also have a very interesting approach to availability and resiliency.  They actually got left out of the last comparison I did only due to an oversight on my part.

The big surprise to me personally is the NetBackup 5000, as it is the newest entry in this category.  It's only for NetBackup, but it's pretty impressive that they're coming in second when they just entered the race.  This is also a function of global dedupe and their support for six nodes in a grid.  I still don't think this is a good move for Symantec, as it puts them in direct competition with their hardware partners, but it is a respectable number.

Update (11/12): The NetBackup 5000 uses the NetBackup Media Server Deduplication option to get this performance number.  Like EMC’s Boost, the data is deduped before ever getting to the appliance.  They have not published what their dedupe throughput would be if you did not use this option.

Speaking of being a NetBackup customer: Data Domain is looking a lot better than it used to, thanks to the advent of Boost, which supports NetBackup and NetWorker customers.  Boost works by running a plug-in on the NetBackup media server or NetWorker storage node and doing some of the heavy lifting and deduping before the data is ever sent across the network.  This spreads the load out over more CPUs and gives a significant effective increase in throughput to those boxes that support it.  Notice that Boost increases the effective throughput of the single-node DD880 to faster than the 8-node Sepaton, 4-node FalconStor, 2-node ProtecTier, or 10-node Exagrid systems.  Having said that, I still think global dedupe is important; here and here are some old posts that explain why.  I've also got an article coming out next month on searchstorage.com about this as well.
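In broad strokes, source-side plug-ins of this kind fingerprint chunks on the media server and ship only the chunks the appliance hasn't already seen.  The sketch below is a deliberately simplified illustration of that general idea, not the actual Boost or OST protocol:

```python
# Simplified sketch of source-side dedupe in general -- not the Boost protocol.
import hashlib

TARGET_INDEX = set()        # fingerprints the appliance already has on disk

def appliance_has(fingerprints):
    """The target answers one question: which of these do I already store?"""
    return {fp for fp in fingerprints if fp in TARGET_INDEX}

def backup(data, chunk_size=8192):
    """Media-server side: chunk, fingerprint, and send only unknown chunks."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    fps = [hashlib.sha1(c).hexdigest() for c in chunks]
    known = appliance_has(set(fps))          # one round trip instead of the data
    sent = 0
    for chunk, fp in zip(chunks, fps):
        if fp not in known:
            TARGET_INDEX.add(fp)             # "ship" the chunk; target stores it
            known.add(fp)                    # never ship the same chunk twice
            sent += 1
    return sent, len(chunks)

data = b"x" * 8192 * 100                     # 100 identical chunks
print(backup(data))                          # first full: (1, 100) chunks sent
print(backup(data))                          # second full: (0, 100) chunks sent
```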

I was kind of surprised that FalconStor doesn’t support more than four nodes yet, and their numbers might look very strange if you don’t know the reasoning behind them.  They support an 8-node VTL cluster, but they only support 4 SIR (dedupe) nodes behind that cluster (for a total of 12 nodes).  This is why they can ingest 12000 MB/s, but they can only dedupe 2000 MB/s, which severely limits their daily backup capacity to only 172 TB.
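A quick back-of-the-envelope check of those FalconStor figures, using the same decimal-TB convention as the table:

```python
# FalconStor figures from the table: the slower stage sets the daily ceiling.
ingest_mb_s, dedupe_mb_s = 12000, 2000
print(ingest_mb_s * 86400 / 1000000)                     # 1036.8 TB/day if dedupe kept up
print(min(ingest_mb_s, dedupe_mb_s) * 86400 / 1000000)   # 172.8 TB/day actual ceiling
```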

Another surprise was that Quantum came in with a respectable daily backup capacity of 153 TB a day, even though they do not support global dedupe.  That’s right behind Sepaton, which uses 8 nodes to do the same job.

Three vendors told me they were about to do major refreshes by the end of this year, but I decided I’d waited long enough to publish this table.  When they do their refreshes, I’ll refresh the table with another post.

Happy hunting!


Written by W. Curtis Preston (@wcpreston), four-time O'Reilly author, and host of The Backup Wrap-up podcast. I am now the Technology Evangelist at Sullivan Strickler, which helps companies manage their legacy data.

15 comments
  • So glad that you took the time to put this together. AND that you pointed out the importance of restoration time. Getting things backed up is one thing, but as you wrote restoration is everything.

    Would have loved to see some sort of cost analysis woven into the post…just to show folks how this plays out in the budget. For example, does NEC’s speed come at a hefty cost, or not?

    Any plan to eventually extend this comparison beyond just appliance-based dedupe?

  • One day at a time…

    You have any idea how difficult it is to get level-field pricing out of these guys? They don't publish it the way they publish performance numbers. What they do publish is like comparing the price of a 2 1/2 ton truck to a 1 ton pickup and a Toyota 4Runner. All different sizes.

    Then there’s the issue that all you can get is list pricing, and you get complaints from certain vendors that claim that their typical discount is bigger than the other guy’s typical discount, so that’s not fair. (While the statement may be true, that’s their own fault.) But it still makes it hard.

  • Lol…unfortunately, I do know how difficult it can be. Most of the time you’d think we were asking to pull their teeth.

    If they refuse to honor your requests go with whatever you can get your hands on. And if they consider your numbers unfair…well then it’s up to them to provide you with something more accurate on the record.

    Think of my comment as a low priority request for a future enhancement. 😉

  • Consider this a non-binding commitment to consider your low-priority enhancement request at the appropriate time. 😉

  • Nice work on the chart! I do have a couple of questions and a long comment:

    1) For your daily capacity numbers, it appears that you are assuming concurrent dedupe for the post-processing vendors and therefore adding the ingest AND dedupe speeds together. Is that correct?

    2) What is the speed of the storage that is sitting behind each solution, particularly for the post-processing vendors?

    The answer to question 2 is important, particularly if my assumption for question 1 is correct. I say that because users should be aware that as you scale up or scale out appliances, the underlying storage, particularly the storage controllers, is a potential bottleneck that is often overlooked.

    Using FalconStor as an example (because that is what I am most familiar with), meeting the target number of 14000 MB/sec with concurrent processing would require a storage solution with storage controllers capable of that aggregate performance. That means multiple arrays as part of the solution, even with systems that are capable of scaling up to multiple controllers. It is even worse if your backend storage arrays are systems that are capable of using only 2 controllers; now you've got VTL sprawl and your CapEx may be prohibitively high.

    Just something else to think about.

  • @Kenneth

    I am assuming concurrent dedupe, and all of them have that capability. But the number that I multiplied by 86400 was the slower number, which is always the dedupe number with post-process vendors.

    There is a question as to whether or not all of them could dedupe for 24 hours straight WHILE ingesting that same amount of data. I know that Quantum, Sepaton, and FalconStor claim to be able to do that. Exagrid tends to configure most systems with a backup window and a dedupe window, so they might not hit that number in actual practice.

    As to the disk speed question, all I can say is “fast enough to do the job, or they wouldn’t be able to make the claims.” As to how much disk each one needs, that’s a whole different level of discussion, and it really goes back to the previous question about cost.

  • After looking at most of the solutions above, we are purchasing the NetBackup PureDisk solution for about 500 server backups. After testing the client-side dedupe, it quickly became clear that PureDisk is onto a game-changing backup strategy. When you see a 10-minute full backup go down to 30 seconds on the second full with almost a 99% dedupe rate, you start to think differently about backups.

    We also purchased a 3U 36 TB 12-core Intel Red Hat system from Pogo Linux, which is a perfect match as a PureDisk media server.

  • Curtis, the HP H2D4312 I believe is actually the D2D4312. Also, its usable capacity before dedupe and after RAID overhead is 36 TB; the 9 TB figure is the entry point for the system.

  • Curtis, nice table. I hope you can add restore speed one day. Even better would be a real life comparison with realistic change rates, but I doubt anyone would ever be able to pull all the equipment together.

    I do think your calculation for post-processing is somewhat optimistic. The whole point of post-processing is that you cannot start dedupe until you are done ingesting. So the time you spend ingesting gets subtracted from the dedupe time, and therefore from the daily backup capacity.

    Just two examples:
    FalconStor needs 6 hours to dedupe one hour of ingest, so there the daily volume would be 6/7 of 172 TB, or 148 TB.

    Quantum ingests 1777 MB/s and dedupes 1777 MB/s. That means it can dedupe for 12 hours instead of 24, and the daily backup capacity is 77 TB.

  • Rob,

    All the post-processing vendors also support something called concurrent processing, which means dedupe begins, not after the entire backup is done, but after a virtual tape is “full” or a specific backup job is complete.

    Curtis mentioned that concurrent processing is assumed in his calculations.

  • Kenneth is correct.

    All of the vendors support concurrent processing, and most of them state that the dedupe speeds I published are supported while you are ingesting. I believe that only Exagrid does not make that statement.

    Specifically, Quantum supports 1777 MB/s of what they call Adaptive Dedupe, which is their term for 100% concurrent dedupe processing. It behaves very much like inline dedupe in that way, but they do write the original data to disk, so they cannot be called inline.

  • Curtis

    I'm curious why NetApp is not on the list. Clearly they market aggressively in this space. Thoughts on their global dedupe ability and how they stack up when performance is a requirement?

  • I do NOT believe they are marketing the ASIS as a target for other people’s backups. For example, if you take a look at their page on Data Protection solutions http://www.netapp.com/us/solutions/infrastructure/data-protection/backup-recovery.html you don’t see them talking about anything as a target dedupe appliance. Their data protection solutions are focused on a netapp-centric view of things and they really aren’t trying to compete with the likes of Data Domain with ASIS and a filer. If they went down that route, they’d have to start talking about things like backup throughput, dedupe rate, dedupe speed, etc. They don’t really talk about any of those things.

    I’m not saying it couldn’t be used for it, mind you. It’s just that it’s not really what it’s made for. As a result, I don’t think of them when I’m thinking of target dedupe solutions.

  • Were you able to get any information out of CommVault?

    I have been told that they have deduplication engine improvements in the recently released version 9, and I am curious about the speed.

  • Curtis,

    I just found out that Symantec does not support Data Domain Boost, so I have to disable it in order to get support. Guess I need to install NetBackup dedupe (PureDisk). Yeah, right!
