The Rehydration Myth

I think the notion that deduplication systems “rehydrate” data is simply wrong and needs to be abandoned.  This “rehydration” or “reduping” of data gets blamed for the penalty that some dedupe vendors have on reads, but the concept itself is incorrect.

This is not to say that some deduplication systems don’t restore and read much slower than they write; some certainly do.  (Not all, mind you, but some.)  Still others have some lesser dedupe penalty on reads, and some have no such penalty at all.  What I’m saying is that the concept known as rehydration doesn’t really exist.

First let’s talk about filesystems.  When a filesystem attempts to store a file on disk, it tries to keep it as contiguous as possible (i.e. keeps it together on disk).  However, rarely can it do that completely.  It therefore breaks the file up into segments and stores them, still trying to keep the segments as large as possible.  The reason for attempting to store it contiguously is that it’s much faster to do disk reads than disk seeks.  A file that is in one contiguous spot on disk needs one disk seek and one disk read.  A disk reading a file that was broken up into ten segments must do ten disk seeks and ten reads; this takes longer.
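To make the cost concrete, here’s a rough back-of-the-envelope sketch; the seek time and transfer rate are my own assumed numbers for a typical spinning disk, not figures from any vendor:

```python
# Rough model of why contiguous reads are faster: each segment costs a seek,
# and the whole file costs the same sequential transfer time either way.
# Both constants below are assumptions for illustration only.

SEEK_TIME_S = 0.008        # ~8 ms average seek (assumed)
TRANSFER_MB_PER_S = 150.0  # sequential transfer rate in MB/s (assumed)

def read_time_s(file_mb: float, segments: int) -> float:
    """One seek per segment, plus the sequential transfer time for the whole file."""
    return segments * SEEK_TIME_S + file_mb / TRANSFER_MB_PER_S

print(f"100 MB, contiguous (1 seek): {read_time_s(100, 1):.2f} s")
print(f"100 MB, 10 segments:         {read_time_s(100, 10):.2f} s")
print(f"100 MB, 10,000 segments:     {read_time_s(100, 10_000):.1f} s")
```

With these assumed numbers, the contiguous read takes well under a second, while the heavily fragmented version of the same file takes over a minute, and nearly all of that time is seeking, not reading.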

When a filesystem reads a file from disk, it has to know where the different segments of data are, and it requests those segments.  The closer together the segments are, the fewer seeks the disk has to do, and the faster the file is delivered to the requesting application.  If the ratio of disk seeks to disk reads is very high, the filesystem is said to be highly fragmented.  Windows includes a defragmentation application that does its best to reassemble all the filesystem segments as contiguously as possible, and there is a significant improvement in performance when a filesystem is fully defragmented.

Now let’s look at deduplication systems.  Dedupe vendors (NAS & VTL alike) basically have a fancy filesystem that writes segments of data and keeps track of where it wrote them.  Frankly, this isn’t that much different from any other filesystem.  The only real difference is that there is another application on top of the filesystem that makes sure the filesystem stores only unique segments.  The challenge is that, by definition, this means a given set of data (e.g. a given full backup) is stored much less contiguously than it would have been if you weren’t deduplicating.
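To make that concrete, here is a minimal sketch of such a layer, assuming fixed-size segments, SHA-256 fingerprints, and in-memory dictionaries standing in for the real on-disk structures; all of these are illustrative choices on my part, not any vendor’s actual design:

```python
import hashlib

# The "application on top of the filesystem": it splits incoming data into
# segments, fingerprints each one, and hands only the unique segments to the
# underlying storage, while recording a per-backup recipe of every segment.

SEGMENT_SIZE = 8 * 1024  # 8 KB fixed-size segments (assumed)

segment_store = {}   # fingerprint -> segment bytes (stand-in for on-disk blocks)
recipes = {}         # backup name -> ordered list of fingerprints

def dedupe_write(name, data):
    recipe = []
    for i in range(0, len(data), SEGMENT_SIZE):
        segment = data[i:i + SEGMENT_SIZE]
        fp = hashlib.sha256(segment).hexdigest()
        if fp not in segment_store:      # only unique segments get written...
            segment_store[fp] = segment
        recipe.append(fp)                # ...but every segment is recorded in the recipe
    recipes[name] = recipe
```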

When the dedupe system must read the data back, the application keeping track of the segments passes the list of segments to the “filesystem” to be read, and the “filesystem” then requests those blocks from disk.  If the vendor does nothing to optimize for reads, the ratio of disk seeks to disk reads is going to be very high, resulting in the “rehydration penalty” we’re all familiar with.  But please note: no actual rehydration is occurring here.  It’s just a series of disk seeks and disk reads, like any other filesystem; it’s simply a very fragmented one.  In addition, more recent backups will tend to be more fragmented than older backups, making restores of older data faster than restores of newer data.  Oops.  (If anything, newer restores should be faster, since most of your big restores are from yesterday’s version.)
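Continuing the sketch above (this reuses segment_store, recipes, and dedupe_write from that block), the read path is just the recipe walked in order; nothing gets “rehydrated,” the segments are simply fetched from wherever they happen to live:

```python
def dedupe_read(name):
    # Walk the recipe in order and fetch each segment.  On a real disk, segments
    # shared with older backups sit wherever they were first written, so each
    # fetch can mean a seek -- the "rehydration penalty" is just fragmentation.
    return b"".join(segment_store[fp] for fp in recipes[name])

dedupe_write("monday_full", b"some backup stream " * 10_000)
print(len(dedupe_read("monday_full")))   # the same bytes come back; no extra "rehydration" step
```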

In addition to the inherent fragmentation issue, dedupe systems may also be slowed down by the method they use to keep track of the unique segments.  Some systems use a direct reference method that requires no “lookup” to find all the segments, which is analogous to how a typical filesystem works.  Others require some sort of lookup to identify all the segments, which is, of course, slower than no lookup.  Still others have additional inefficiencies, such as having to read a large segment just to extract a smaller piece of data from it.
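The difference between the two bookkeeping styles looks roughly like this; “direct” and “indexed” are my own labels for illustration, not any vendor’s terminology:

```python
# Direct reference: the recipe stores on-disk locations, so no lookup is needed
# on a read.  (Illustrative (device, offset) tuples.)
direct_recipe = [("disk0", 4096), ("disk0", 81920), ("disk1", 12288)]

# Fingerprint reference: the recipe stores fingerprints, and every read must
# first consult an index (which may itself live on disk) to find each location.
fingerprint_index = {"fp1": ("disk0", 4096), "fp2": ("disk0", 81920), "fp3": ("disk1", 12288)}
indexed_recipe = ["fp1", "fp2", "fp3"]

def locations_direct(recipe):
    return list(recipe)                                  # no lookup at all

def locations_indexed(recipe):
    return [fingerprint_index[fp] for fp in recipe]      # one lookup per segment

assert locations_direct(direct_recipe) == locations_indexed(indexed_recipe)
```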

But all of them have the fragmentation issue, so what do dedupe vendors do to overcome this inherent problem with dedupe?  Let’s take a look at a few approaches:

  • Disk cache
    • One design, only possible with post-processing dedupe systems, is to store last night’s backups in their native format.  That allows for very fast restores and copies, since the data is stored contiguously.
  • Forward referencing
    • This is possible with vendors that keep last night’s backups in their native format (again, post-processing vendors only).  When they find two redundant segments of data, they delete the older version of the segment instead of the newer one.  By default, this keeps more recent data more contiguous than older data (see the sketch after this list).
  • Built-in defrag
    • Many systems watch data over time and defrag/reorganize data in order to overcome the fragmentation problem.  This is the method deployed by most inline vendors.
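
Here is a hedged sketch of the forward-referencing idea, assuming the system keeps a per-backup recipe and a fingerprint index; the structures and names are mine, purely for illustration:

```python
# Forward referencing: when a new segment duplicates one from an older backup,
# the OLDER on-disk copy is deleted and existing recipes are re-pointed at the
# new copy, so the newest backup stays the most contiguous on disk.

segments_on_disk = {}   # segment id -> data (the newest copy of each segment survives)
recipes = {}            # backup name -> ordered list of segment ids
by_fingerprint = {}     # content fingerprint -> segment id currently holding that content

def write_segment(backup, seg_id, fingerprint, data):
    segments_on_disk[seg_id] = data               # always keep the newest copy
    old_id = by_fingerprint.get(fingerprint)
    if old_id is not None and old_id != seg_id:
        del segments_on_disk[old_id]              # delete the older duplicate...
        for recipe in recipes.values():           # ...and forward existing references to the new copy
            recipe[:] = [seg_id if s == old_id else s for s in recipe]
    by_fingerprint[fingerprint] = seg_id
    recipes.setdefault(backup, []).append(seg_id)
```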

In the end, the only thing that matters is read performance; how you get there is irrelevant.  Make sure you test the restore and copy performance of any vendor you are considering.  I’d also test the restore performance of both newer and older data.  Newer data should restore at the same speed as (or faster than) older data; if it doesn’t, things are only going to get worse over time.

Written by W. Curtis Preston (@wcpreston), four-time O'Reilly author, and host of The Backup Wrap-up podcast. I am now the Technology Evangelist at Sullivan Strickler, which helps companies manage their legacy data.
