Performance Comparison of Deduped Disk Vendors


[This article was slightly updated May 7, 2009.  New comments are in brackets.]

This blog entry is, to my knowledge, the first article or blog entry to compare the performance numbers of various [target] dedupe vendors side by side.  I decided to do this comparison while writing my response to Scott Waterhouse's post about how wonderful the 3DL 4000 is, but then I realized it was substantial enough to deserve a separate post.  Click Read More to see a table that compares backup and dedupe performance of the various dedupe products.

First, let's talk about the whole "global dedupe" thing, because it's really germane to the topic at hand.  Global dedupe only comes into play with multi-node systems.  A quick definition: a system has global dedupe when it dedupes everything against everything, regardless of which head/node the data arrives at.  So if you have a four-node appliance, and the same file gets backed up to node A and node B, the file will only get stored once.  Without global dedupe (otherwise known as local dedupe), the file would get stored twice.
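To make the distinction concrete, here's a toy sketch in Python (my own illustration, not any vendor's implementation): the only difference between global and local dedupe in this model is whether the nodes share one chunk index or each keep their own.

```python
import hashlib

class DedupeNode:
    """Toy dedupe engine: stores a chunk only if its hash is new to its index."""
    def __init__(self, index):
        self.index = index        # shared dict = global dedupe; private dict = local
        self.bytes_stored = 0

    def backup(self, data):
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.index:   # unseen chunk: store it and remember the hash
            self.index[digest] = True
            self.bytes_stored += len(data)

same_file = b"exchange-mailbox-chunk" * 64   # stand-in for a file backed up twice

# Global dedupe: four nodes share one index, so the file is stored only once,
# no matter which node each backup lands on.
shared_index = {}
global_nodes = [DedupeNode(shared_index) for _ in range(4)]
global_nodes[0].backup(same_file)   # node A stores the chunk
global_nodes[1].backup(same_file)   # node B sees the hash already; stores nothing

# Local dedupe: each node has its own index, so the same file is stored twice.
local_nodes = [DedupeNode({}) for _ in range(4)]
local_nodes[0].backup(same_file)
local_nodes[1].backup(same_file)
```

In the global case the cluster stores `len(same_file)` bytes total; in the local case it stores twice that, which is exactly the double-storage penalty described above.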

Let's talk about Data Domain, as they currently own the [target] dedupe market hands down.  But they have local dedupe, not global dedupe.  (This is not to be confused with Data Domain's term global compression, which is what they called dedupe before there was a term for it.)  When you hit the performance limit of a single Data Domain box, their answer is to buy another box, and two DD boxes sitting next to each other have no knowledge of each other; they do not dedupe data together; they don’t share storage; you cannot load balance backups across them, or you will store each backup twice.  You send Exchange backups to the first box and Oracle backups to the second box.  If your Oracle backups outgrow the second box, you’ll need to move some of them to a third box.  It is the Data Domain way.  They are telling me they'll have global dedupe in 2010, but they don't have it yet.

What Data Domain is doing, however, is shipping the DDX “array,” which is nothing but markitecture.  It is 16 DDX controllers in the same rack.  They refer to this as an “array” or “an appliance” which can do 42 TB/hr, but it is neither an array nor an appliance.  It is 16 separate appliances stacked on top of each other.  It’s only an array in the general sense, as in “look at this array of pretty flowers.”  I have harped on this “array” since the day it came out and will continue to do so until Data Domain comes out with a version of their OS that supports global deduplication.  Therefore, I do not include this "array's" performance in the table at the end of this blog article.

When talking about the DDX "array," a friend of mine likes to say, "Why stop at 16?  If you're going to stack a bunch of boxes together and call them an appliance, why not stack 100 of them?  Then you could say you have an appliance that does 50,000 MB/s!  It would be just as much of an appliance as the DDX is."  I have to agree.

In contrast, Diligent, Exagrid, Falconstor, and SEPATON all have multi-node/global deduplication.  Diligent supports two nodes, Falconstor four, SEPATON five, and Exagrid six.  So when Diligent says they have “a deduplication appliance” that dedupes 900 MB/s with two nodes, or SEPATON says their VTL can dedupe 1500 MB/s with five nodes, or Falconstor says they can dedupe 1600 MB/s with four nodes, or Exagrid says they can do 450 MB/s with six nodes, I agree with those statements – because all data is compared to all data regardless of which node/head it was sent to.  (I'm not saying I've verified their numbers; I'm just saying that I agree that they can add the performance of their boxes together like that if they have global dedupe.)

By the way, despite what you may have heard, I’m not pushing global dedupe because I want everything compared to everything, such as getting Oracle compared with Exchange.  I just want Exchange always compared to Exchange, and Oracle to Oracle – regardless of which head/node it went to.  I want you to be able to treat deduped storage the same way you treat non-deduped storage or tape; just send everything over there and let it figure it out.

NetApp, Quantum, EMC & Dell's [target dedupe products] have only local dedupe.  [Both EMC & Symantec have global dedupe in their source dedupe products.]  That is, each engine will only know about data sent to that engine; if you back up the same database or filesystem to two different engines, it will store the data twice.  (Systems with global dedupe would store the data only once.)  I therefore do not refer to two dedupe engines from any of these companies as “an appliance.”  I don’t care if they’re in the same rack or managed via a single interface; they’re two different boxes as far as dedupe is concerned.

Backup and Dedupe Speed

No attempt was made to verify any of these numbers.  If a vendor is flat out lying or if their product simply doesn't work, this post is not going to talk about that.  (If I believed the FUD I heard, I'd think that none of them worked.)  I just wanted to put into one place all the numbers from all the vendors of what they say they can do.

For the most part, I used numbers that were published on the company's website.  In the case of EMC, I used an employee's (albeit unofficial) blog.  Then I applied some math to standardize the numbers.  In a few cases, I have also used numbers supplied to me via an RFI that I sent to vendors.  If the vendor had global/multi-node/clustered dedupe, then I gave the throughput number for their maximum supported configuration.  But if they don’t have global dedupe, then I give the number for one head only, regardless of how many heads they may put in a box and call it “an appliance.”

For EMC, I used the comparison numbers found on this web page.  EMC declined to answer the performance questions of my RFI, and they haven't officially published dedupe speeds, so I had to use the performance numbers published in this blog entry on Scott Waterhouse's blog for dedupe speed.  He says that each dedupe engine can dedupe at 1.5 TB/hr.  The 4106 is one Falconstor-based engine on the front and one Quantum-based dedupe engine on the back.  The 4206 and the 4406 have two of each, but each Falconstor-based VTL engine and each Quantum-based dedupe engine is its own entity, and they do not share dedupe knowledge.  I therefore divided the numbers for the 4206 and the 4406 in half.  The 4406’s 2200 MB/s divided by two is the same as the 4106 at 1100 MB/s.  (The 4206, by that math, is slower.)  And 1.5 TB/hr of dedupe speed translates into 400 MB/s.
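For the curious, the "math to standardize the numbers" is nothing more than unit conversion, using decimal units since that's how these vendors quote throughput.  A quick sketch:

```python
def tb_per_hr_to_mb_per_s(tb_per_hr):
    """Convert a vendor's TB/hr figure to MB/s (decimal: 1 TB = 1,000,000 MB)."""
    return tb_per_hr * 1_000_000 / 3600

# EMC's 1.5 TB/hr per dedupe engine is ~417 MB/s (I round down to 400).
print(round(tb_per_hr_to_mb_per_s(1.5)))       # 417

# Data Domain's 2.7 TB/hr works out to exactly 750 MB/s.
print(round(tb_per_hr_to_mb_per_s(2.7)))       # 750

# NetApp's 4.3 TB/hr cut in half is 2.15 TB/hr: just under 600 MB/s.
print(round(tb_per_hr_to_mb_per_s(4.3 / 2)))   # 597
```

Using binary units (1 TiB = 1,048,576 MiB) instead would inflate every number by about 5%, but all the vendors appear to quote decimal, so I did too.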

Data Domain publishes their performance numbers in this table.  Being an inline appliance, their ingest rate is the same as their dedupe rate.  They publish 2.7 TB/hr, or 750 MB/s, for their DD690, but say that this requires OST.  It’s still the fastest number they publish, so that’s what I put here.  I would have preferred to use a non-OST number, but this is what I have.

Exagrid's numbers were taken from this web page, where they specify that their fastest box can ingest at 230 MB/s (830 GB/hr).  They have told me that their dedupe rate per box is 75 MB/s.  They support global dedupe for up to 6 nodes, so these numbers are multiplied by 6 in the table.

For Falconstor, I originally used this data sheet, where they state that each node can back up data at 1500 MB/s and that they support 8 nodes in a deduped cluster.  (However, I subsequently found out that, despite what that data sheet says, they do not yet fully support an 8-node cluster.  They have only certified a 4-node cluster, so I have updated the numbers accordingly.)  They have not published dedupe speed numbers, but they did respond to my RFI.  They said that each node could dedupe at 500 MB/s.

IBM/Diligent says here that they can do 450 MB/s per node, and they support a two-node cluster.  They are also an inline box, so their ingest and dedupe rates will be the same.  One important thing to note is that IBM/Diligent requires FC or XIV disks to get these numbers.  They do not publish SATA-based numbers.  That makes me wonder about all these XIV-based configs that people are looking at and what performance they're likely to get.

NetApp has this data sheet that says that they do 4.3 TB/hr with their 1400.  However, this is like the EMC 4406, where it's two nodes that don't talk to each other from a dedupe perspective, so I divide that number in half to make 2150 GB/hr, or just under 600 MB/s.  They do not publish their dedupe speeds, but I have asked for a meeting where we can talk about them.

Quantum publishes this data sheet that says they can do 3.2 TB/hr in fully deferred mode and 1.8 TB/hr in adaptive mode.  (Deferred mode is where you delay dedupe until all backups are done, and adaptive dedupe runs while backups are coming in.)  I used the 3.2 TB/hr for the ingest speed and the 1.8 TB/hr for the dedupe speed, which translates into 880 and 500 MB/s, respectively.

Finally, with SEPATON, I used this data sheet, where they say that each node has a minimum speed of 600 MB/s, and this data sheet, where they say that each dedupe node can do 25 TB/day, or 1.1 TB/hr, or 300 MB/s.  Since they support up to 5 nodes in the same dedupe domain, I multiplied that by 5 to get 3000 MB/s of ingest and 1500 MB/s of dedupe speed.

Backup & dedupe rates for an 8-hour backup window

Vendor         | Ingest Rate (MB/s) | Dedupe Rate (MB/s) | Caveats
EMC            | 1100               | 400                | 2-node data cut in half (no global dedupe)
Data Domain    | 750                | 750                | Max performance with OST only; NFS/CIFS/VTL performance approx. 25% less
Exagrid        | 1388               | 450                | 6-node cluster
Falconstor/Sun | 6000               | 2000               | 4-node cluster, requires FC disk
IBM/Diligent   | 900                | 900                | 2-node cluster, requires FC or XIV disk
NetApp         | 600                | Not avail.         | 2-node data cut in half (no global dedupe)
Quantum/Dell   | 880                | 500                | Ingest rate assumes fully deferred mode (would be 500 otherwise)
SEPATON/HP     | 3000               | 1500               | 5 nodes with global dedupe

However, many customers that I've worked with are backing up more than 8 hours a day; they are often backing up 12 hours a day.  If you're backing up 12 hours a day, and you plan to dedupe everything, then the numbers above change.  (This is because some vendors have a dedupe rate that is less than half their ingest rate, and they would need more than 24 hours to dedupe 12 hours of data.)  If that's the case, what's the maximum throughput each box could take for 12 hours and still finish its dedupe within 24 hours?  (I'm ignoring maintenance windows for now.)  This means that the ingest rate can't be any faster than twice the dedupe rate, if the dedupe is allowed to run while backups are coming in.
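The arithmetic behind that constraint is simple enough to sketch (this is my own back-of-the-envelope model, not any vendor's sizing tool):

```python
def max_ingest_mb_s(dedupe_mb_s, window_hr, day_hr=24):
    """Fastest sustainable ingest when dedupe runs concurrently with backups:
    everything ingested in the window must be deduped within the day,
    i.e. ingest * window <= dedupe * day."""
    return dedupe_mb_s * day_hr / window_hr

def deferred_fits(ingest_mb_s, dedupe_mb_s, window_hr, day_hr=24):
    """Deferred mode: dedupe only starts after the backup window closes,
    so backup time plus dedupe time must fit in the day."""
    dedupe_hr = ingest_mb_s * window_hr / dedupe_mb_s
    return window_hr + dedupe_hr <= day_hr

# With a 12-hour window, ingest can be at most 2x the dedupe rate.
print(max_ingest_mb_s(500, 12))      # 1000.0

# Quantum in fully deferred mode: ingesting 880 MB/s for 12 hours needs
# about 21 hours of dedupe at 500 MB/s, which no longer fits in the day.
print(deferred_fits(880, 500, 12))   # False
print(deferred_fits(500, 500, 12))   # True (adaptive-rate ingest fits)
```

This is exactly why the Quantum number changes in the next table.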

This meant I had to change the Quantum number, because the original number assumed that I was deferring dedupe until after the backup was done.  If I did that, I would only have 12 hours to dedupe 12 hours of backups.  Therefore, I switched to its adaptive mode, where the dedupe happens while the backup is coming in.

Backup & dedupe rates for a 12-hour backup window

Vendor         | Ingest Rate (MB/s) | Dedupe Rate (MB/s) | Caveats
EMC            | 800                | 400                | 2-node data cut in half (no global dedupe)
Data Domain    | 750                | 750                | Max performance with OST only; NFS/CIFS/VTL performance approx. 25% less
Exagrid        | 900                | 450                | 6-node cluster
Falconstor/Sun | 4000               | 2000               | 4-node cluster, requires FC disk
IBM/Diligent   | 900                | 900                | 2-node cluster, requires FC or XIV disk
NetApp         | 600                | Not avail.         | 2-node data cut in half (no global dedupe)
Quantum/Dell   | 500                | 500                | Had to switch to adaptive mode
SEPATON/HP     | 3000               | 1500               | 5 nodes with global dedupe

Dedupe everything?

Some vendors will probably want to point out that my numbers for the 12-hour window only apply if you are deduping everything, and not everybody wants to do that.  Not everything dedupes well enough to bother deduping it.  I agree, and so I like dedupe systems that support policy-based dedupe.  (So far, only post-process vendors allow this, BTW.)  Most of these systems support doing this only at the tape level.  For example, you can say to dedupe only the backups that go to these tapes, but not the backups that go to those tapes.  The best that I've seen in this regard is SEPATON, where they automatically detect the data type.  You can tell a SEPATON box to dedupe Exchange, but not Oracle.  But I don't want to do tables that say "what if you were only deduping 75%, or 50%," etc.  For comparison's sake, we'll just say we're deduping everything.  If you're deduping less than that, do your own table. ;)

Restore Speed

When data is restored, it must be re-hydrated, re-duped, or whatever you want to call it.  Most vendors claim that restore performance is roughly equivalent to backup performance, or maybe 10-20% less. 

One vendor that's different, if you press them on it, is Quantum, and by association, EMC and Dell.  They store data in its deduped format in what they call the block pool.  They also store the data in its original un-deduped, or native, format in what they call the cache.  If restores are coming from the cache, their speed is roughly equivalent to that of the backup.  However, if you are restoring from the block pool, things can change significantly.  I'm being told by multiple sources that performance can drop by as much as 75%.  They made this better in the 1.1 release of their code (improving it to 75%), will make it better again in a month, and supposedly much better in the summer.  We shall see what we shall see.  Right now, I see this as a major limitation of this product.

Their response is simply to keep things in the cache if you care about restore speed, and that you tend to restore more recent data anyway.  Yes, but just because I'm restoring the filesystem or application to the way it looked yesterday doesn't mean I'm only restoring from backups I made yesterday.  I'm restoring from the full backup from a week ago, and the incrementals since then.  If I only have room for one day of cache, only the incremental would be in there.  Therefore, if you don't want to experience this problem, I would say that you need at least a week of cache if you're using weekly full backups.  But having a week of cache costs a lot of money, so I'm back to it being a major limitation.


Well, there you go!  The first table that I've seen that summarizes the performance of all of these products side-by-side.  I know I left off a few, and I'll add them as I get numbers, but I wanted to get this out as soon as I could.


0 #35 Matthew O'Keefe 2009-09-23 13:20
Often dedupe performance on the first write of new data is lower than later re-writes of (substantially the same) data, so are the performance numbers you've discussed mostly for first writes or later writes? It would seem that the most common case is re-writing mostly the same data, so perhaps there is a reason to focus on quoting that number. I'd appreciate your viewpoint on this issue.
0 #34 W. Curtis Preston 2009-07-13 21:51
That's pretty much it!
0 #33 jonronix 2009-07-13 16:43
I noticed that you mentioned in your comments that NEC has global dedup, but they are not in your list. Is there a reason for this? Or do you just not have this information for them?
0 #32 W. Curtis Preston 2009-05-07 21:39
Thanks for joining the discussion and for your polite demeanor, even though it's obvious you really didn't like one part of the post. ;-)

I realize I didn't specify that I'm specifically talking about target dedupe, but I am. Perhaps I'll update it just to say that, and insert the word target in a number of places.

I am not an analyst. I am not a paid blogger. Data Domain has not paid me a dime to do anything. None of the vendors mentioned above are my clients. In fact, I'm as much of an annoyance to Data Domain as I am to EMC and others. (They'd really rather I stop pointing out that they don't have global dedupe.) If I say something it's because I believe it to be fact or at a minimum I believe it as my own opinion.

In fact, if you had continued reading the paragraph where the sentence to which you objected was found, you'd see that I gave Data Domain more crap than praise. I basically said, "Yeah, they own the market, BUT they still don't have global dedupe." (I point this out in advance because SOME would argue that I must be wrong on global dedupe because the market leader doesn't have it. I want you to see that I know who they are in the marketplace, but I also want you to see that they don't have global dedupe.)

Now, as to the "owning the market" comment, I should have put the word "target" in there, so I will (and have edited the original comment to reflect that):

"Let's talk about Data Domain, as they currently own the target dedupe market hands down."

They've got around 3000 customers and many more shipped systems than any vendor of which I'm aware, and that number goes up every day. The mindshare they have with end users is also unparalleled. When I talk to customers and I'm talking about target dedupe, they automatically start talking about Data Domain, as if the two are synonymous. If EMC works hard enough and long enough, and continues the practices to which I alluded in my other post (http://www.backupcentral.com/content/view/234/47/), they might indeed change this, but I certainly feel that the statement holds true today.

If they don't own the target dedupe market, I don't know who does.
0 #31 Mike Dutch 2009-05-07 17:37
Hi Preston,

I'm the Co-Chair of the SNIA DMF Data Deduplication and Space Savings Special Interest Group (DDSR SIG). I currently work at EMC and have previously worked at IBM, HDS, VERITAS, and Troika Networks (acquired by QLogic).

Some comments on a few of your statements:

"Global dedupe only comes into play with multi-node systems"

After a year of vigorous debate by DDSR SIG members, the industry consensus on what global data deduplication means is captured by this definition:

Data deduplication which stores only unique data across multiple deduplication systems. For example, global data deduplication stores only unique data across multiple target appliances or sends and stores only unique data from multiple source clients.

At first glance this agrees with your initial comment but it does not coincide with your later comment: "NetApp, Quantum, EMC & Dell, have only local dedupe."

Are you restricting your comments to target data deduplication only? EMC has both source and target implementations. Your statement is at odds with the facts regarding EMC Avamar (just one example) which supports global data deduplication.

"Let's talk about Data Domain, as they currently own the dedupe market hands down."

I've been in the storage business over 30 years in management, engineering, product management, marketing, field support, and consulting roles. I get that you need to evangelize the desires of your clients to make money. However, statements like the above do a disservice to all of our customers. Try to stick with the facts.
0 #30 W. Curtis Preston 2009-04-24 01:03
I specifically said I didn't verify any of the numbers, that I was just compiling all of the numbers that each company published.

I actually spoke directly to Falconstor regarding the difference between the PDF file you referenced and this page www.falconstor.com/en/pages/?pn=VTLFeatures, and they said that the latter was more up to date -- that they had just qualified 8 nodes in their cluster, and had not updated the PDF version yet. (Hey, Falconstor! Update your stinking PDF already!)

As to SEPATON's "exaggerated" claims of 500:1 dedupe ratio, consider this. When they were using those numbers, they were talking about "backup-over-backup" dedupe, meaning last night's backup got reduced by 500:1. While the numbers they were giving were valid (when looking at them that way), I and others counseled them that it made them look silly, as no one cared about how last night's backup got deduped. What we care about is how much ALL my backups were getting reduced. The result is that they changed their messaging about that a while ago; they don't claim those numbers any more. Look all over their site, and the most you'll find is 50:1, and it will have caveats that say that this is most likely to happen in an Exchange-centric environment. (Try a google of "site:www.sepaton.com 50:1" or "site:www.sepaton.com 40:1" and you will find hits. What you won't find is "site:www.sepaton.com 500:1.") So I really wouldn't say that they are more likely to exaggerate than anyone else.

I actually think all of you are exaggerating. But since I can't verify (without independent testing) how MUCH each of you are exaggerating, I'm just publishing advertised numbers.

I completely agree with you on the need for an independent test. It will be the subject of a later blog.
0 #29 Alex Sons 2009-04-22 06:06
Hi Curtis,

Thanks for collecting this info. Valuable and arguable, the best combo for any blog posting.

Below I'll delve into IBM's TSM storagepool features and how I would see a best fit between backup, restore, storage capacity and performance.

IMO, you should dedup what is best suited for dedup.

When backing up fileserver data, TSM only backs up new and changed files, which they call forever incremental. Chances are you'd see low dedup ratios. So, instead of using expensive VTL capacity, expensive both in cost and in performance, you could best store the fileserver data in a file-based storagepool. You could call this a software VTL without any dedup or compression. Creating such a pool of cheap 1/1.5 TB SATA drives may give you both the backup and restore performance you would need for fileserver data.

Database backups typically are full backups each night, so dedup could work out very nicely, both in terms of storage capacity and restore performance. Backup performance is somewhat trickier. Using multiple streams, each to a separate LTO4 drive, would easily outperform any dedup solution. As that is only best practice for really big databases, I won't take it into consideration for now.

The one really nice feature Diligent offers (oops, nowadays IBM!) is that it can take a LUN from almost any storage system. I myself would be very curious how Compellent would work out as disk storage for a Diligent VTL. Compellent is rather cheap, writes incoming datablocks to Tier 0 (SSD) or Tier 1 (FC disk) and is able to migrate all new datablocks overnight to SATA disks. In short, it uses its SSD/FC storage as a cache for lower cost RAID5 SATA storage layers and this could be very effective in both backup and restore performance.

If this Diligent/Compellent combo really sings it could be a nice solution for any Backup Server (NetBackup, CommVault, TSM, etc.).

Sadly, till date I have not had the possibility to test such a solution :sad:
0 #28 udubplate 2009-04-04 23:28

As mentioned above, each solution has a varying degree of effect based on how the solution is designed as well as how it's used. By reversing the curve, it may be good enough for some, but that depends on what your requirements are, where 99% of your restores are coming from (i.e. the last backup or not), how much data is being stored on the device (there's a big difference between 1 week and 1 year of retention, for example), and various other factors. As should always be the case, everyone should test the solutions, and make sure they're testing restore performance based on the desired retention period (i.e. don't simply test restore speeds for a week's worth of backups if you're going to retain a year's worth on the device, as the effect you mention may vary widely based on time parameters and the solution).
0 #27 Jeremy 2009-04-02 17:26

Giving preference to the most recent copy, simply reverses the performance / age curve. A number of systems use a portion of the disk space as a first level cache to keep the most recent copy fast. In those cases the performance curve is U shaped which can completely hide any advantages to "forward referencing" the data for many workloads.

Fundamentally, "forward referencing" doesn't solve the problem of having to seek all over the disk to build a volume stream. In fact, the problem of defragmenting reclaimed space becomes harder and more important. For large systems it's possible the defragmenting/reclamation process becomes the system bottleneck.

If you fail to adequately defragment the space reclaimed from existing volumes, then the space to store the new volumes ends up being scattered in non optimal ways across the array as time progresses. If you cannot find large contiguous regions to store the new data, then you end up seeking.

If the user is backing up extremely well-behaved data, where references between data streams are close in time, and those references are fairly large, the problem won't initially be as noticeable. In that circumstance it's probably possible to even have datasets which don't fragment. The space is reused before the system reaches a capacity where another stream is interleaving into the space of a stream still stored on the machine. As the machine fills up, this behavior is going to be minimized. It's also going to be minimized if the volumes are expiring and being reused at different rates.

The problem will probably be pushed off to the point where the sales guys are long gone. I'm not sure I would want to be the guy left standing there waiting for the system to rebuild an "archive" tape, or wondering why the dedupe process no longer completes in its window.
0 #26 udubplate 2009-04-01 01:01

As Curtis identified, what you mention has a varying degree of effect based on how the solution is designed. One factor is the method of referencing used in the vendor's deduplication algorithms. For those that use Forward Referencing (the minority, it seems; SEPATON is one of the ones that does, and I believe there are others), the most recent data is kept in its undeduplicated format, as opposed to Reverse Referencing, where the reverse is often true and the most recent backup is the "most deduplicated," for lack of a better term. Forward Referencing creates some unique challenges from a design perspective (especially when you begin to talk about deduplicated replication), but the idea is that most restores are done from the most recent backup, so that's the one you want to be in an undeduplicated format, where the effect you mention does not exist.
