Does Bit Error Rate Matter?

Magnetic devices make mistakes; the only question is how many mistakes they will make. I was presenting a slide a while ago that listed the Bit Error Rates (BERs) for various magnetic media, and someone decided to take issue with it. He basically claimed that it was all bunk and that modern-day magnetic devices don't make mistakes like I was describing. He said he had worked several years at businesses that make disk drives, and that there was essentially no such thing as a bit error. I thought I'd throw out a few other thoughts on this issue and see what others think.

Note: The first version of this post referred to Undetectable Bit Error Rate vs. just Bit Error Rate. The more common term is just BER. But when it's UBER, the U apparently refers to unrecoverable, not undetectable.

He said that all modern devices (disk and tape) do read-after-write checks, and therefore they catch such errors. But my (albeit somewhat cursory) knowledge of ECC technology is that the read-after-write is not a block-for-block comparison. My basic understanding is that a CRC is calculated on the block before the write, the write is made, the block is read back, the CRC is calculated on what was read back, and if the two match, all is good. HOWEVER, since the CRC is so small (12-16 bits), there is a possibility that the block doesn't match but the CRC does. The result is an undetected bit error. (This is my best attempt at understanding how ECC and UBERs work. If someone who has a deep understanding of how it really works can explain it in plain English, I'm all eyes.)
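To make that collision argument concrete, here is a minimal Python sketch. It is purely my own illustration of the principle (not how any drive's firmware actually works) of a read-after-write check built on a 16-bit check value:

```python
import zlib

def crc16(data: bytes) -> int:
    # Illustrative 16-bit check value (low 16 bits of CRC-32).
    # Real drives use far stronger ECC; this only shows the principle.
    return zlib.crc32(data) & 0xFFFF

def read_after_write_ok(written: bytes, read_back: bytes) -> bool:
    # Passes if the check values match -- even if the data itself does not.
    return crc16(written) == crc16(read_back)

original = b"a block of data written to the medium" * 100
corrupted = bytearray(original)
corrupted[40:44] = b"\x00\x00\x00\x00"       # silent corruption on readback

print(read_after_write_ok(original, bytes(corrupted)))   # almost certainly False

# With only 16 bits of check value, though, roughly 1 in 65,536 corrupted
# blocks will happen to produce the same value and pass as "good" --
# an undetected bit error.
```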

There was a representative in the room from a target dedupe vendor, and he previously worked at another target dedupe vendor.  He mentioned that both vendors do high-level checking that looks for bit errors that disk drives make, and that they had found such errors many times — which is why they do it.

I once heard Stephen Foskett (@sfoskett) say that he thinks that any modern disk array does such checking, and so the fact that some disk drives have higher UBERs than others (and all are higher than most tape drives) is irrelevant.  Any such errors would be caught by the higher level checks performed by the array or filesystem.

For example, an object storage system (e.g., S3) can perform a high-level check on all objects to make sure that the various copies of an object do not change. If any of them show a change, it is flagged by that check, and the corrupted copy is replaced. It's a check on top of a check on top of a check. ZFS has similar checking.
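As a rough sketch of what such a higher-level check could look like (my own illustration of the general idea, not S3's or ZFS's actual implementation), imagine a scrubber that hashes every copy of an object and flags any copy that disagrees with the majority:

```python
import hashlib
from collections import Counter

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def scrub_object(replicas: dict[str, bytes]) -> list[str]:
    """Hash every copy of an object and return the copies that disagree
    with the majority (i.e., the ones that need to be repaired)."""
    digests = {name: digest(data) for name, data in replicas.items()}
    good, _ = Counter(digests.values()).most_common(1)[0]
    return [name for name, d in digests.items() if d != good]

# Three copies of one object; one has a silent bit error from the drive.
copies = {
    "replica-a": b"original object contents",
    "replica-b": b"original object contents",
    "replica-c": b"original object c0ntents",
}
print(scrub_object(copies))   # ['replica-c'] -> rewrite it from a good copy
```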

But if all modern arrays do such checks, why do some vendors make sure they mention that THEY do such checking, suggesting that other vendors don’t do such checks? 

Unless someone can explain to me why I should, I definitely don’t agree with the idea that UBERs don’t matter.  If drives didn’t make these errors, they wouldn’t need to publish a UBER in the first place.  I somewhat agree with Stephen — if we’re talking about arrays or storage systems that do higher-level checks.  But I don’t think all arrays do such checks.  So I think that UBER still matters. 
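For a sense of scale, here is some back-of-the-envelope arithmetic. The UBER figures below are commonly quoted spec-sheet numbers used only for illustration, so treat the exact values as assumptions:

```python
# Roughly how much data can you read, on average, before hitting
# one unrecoverable bit error at a given quoted UBER?
specs = {
    "consumer disk (1 error per 1e14 bits)": 1e14,
    "enterprise disk (1 error per 1e15 bits)": 1e15,
    "LTO tape (1 error per 1e19 bits)": 1e19,
}
for name, bits in specs.items():
    terabytes = bits / 8 / 1e12
    print(f"{name}: ~{terabytes:,.1f} TB read per unrecoverable error")
# consumer disk:   ~12.5 TB
# enterprise disk: ~125 TB
# LTO tape:        ~1,250,000 TB
```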

What do you think?


----- Signature and Disclaimer -----

Written by W. Curtis Preston (@wcpreston). For those of you unfamiliar with my work, I've specialized in backup & recovery since 1993. I've written the O'Reilly books on backup and have worked with a number of native and commercial tools. I am now Chief Technical Architect at Druva, the leading provider of cloud-based data protection and data management tools for endpoints, infrastructure, and cloud applications. These posts reflect my own opinion and are not necessarily the opinion of my employer.

Why doesn’t everyone do source dedupe?

It seems to me that source dedupe is the most efficient way to back up data, so why is it that very few products do it? This is what I found myself thinking about today.

Source dedupe is the way to go

This has been my opinion ever since I first learned about Avamar in 1998 (when it was called Undoo). If you can eliminate duplicate data across your enterprise – even before it's sent – why wouldn't you want to do that? It saves bandwidth and storage. Properly done, it makes backups faster and does not slow down restores. It's even possible to use dedupe in reverse to speed up restores.

If properly done, it also reduces the CPU load on the client. A typical incremental backup (without dedupe), and certainly a full backup, uses way more compute cycles than it takes to generate whatever hashes the dedupe process needs.
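Here is a minimal sketch of the basic idea (generic content-hash dedupe, not any particular product's algorithm): the client splits its data into chunks, hashes each one, and only sends the chunks whose hashes the backup server has never seen.

```python
import hashlib

known_hashes: set[str] = set()   # the dedupe index the backup server keeps

def backup(chunks: list[bytes]) -> list[bytes]:
    """Source-side dedupe: hash each chunk and only ship the new ones."""
    to_send = []
    for chunk in chunks:
        h = hashlib.sha256(chunk).hexdigest()
        if h not in known_hashes:
            known_hashes.add(h)
            to_send.append(chunk)
    return to_send

data = [b"OS files", b"app binaries", b"unique database pages", b"OS files"]
print(len(backup(data)))   # 3 chunks cross the wire (one duplicate skipped)
print(len(backup(data)))   # 0 chunks cross the wire on the next backup
```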

You save bandwidth, storage, and CPU cycles.  So why don’t all products do this?

Inertia

Products that have been around a while have a significant code base to maintain. Changing to source dedupe requires massive architectural changes that can't easily be retrofitted onto an existing product for existing customers. It might require a "rip and replace" from the old to the new, which isn't something you want to put a customer through.

Update: An earlier version of the post said some things about specific products that turned out to be out of date. I’ve removed those references. My question still remains, though.

No mandate

None of the source dedupe products has torn up the market. For example, if Avamar had become so popular that it was displacing the vast majority of backup installations, competitors would have been forced to come up with an answer. (The same could be said of CDP products, which could also be described as a much better way to do backups and restores. Very few true CDP products have had significant success.) But the market did not create a mandate for source dedupe, and I've often wondered why.

Limitations

Many of the source dedupe implementations had limitations that made some think it wasn't the way to go. The biggest one I know of is that restore speeds for larger datasets were often slower than what you would get from traditional disk or a target dedupe system. It seemed that developers of source dedupe solutions had committed the cardinal sin of making the backup faster and better at the expense of restore speed.

Another limitation of both source and target dedupe – though arguably more important in source dedupe implementations – is that the typical architectures used to hold the hash index topped out at some point. The hash index could only handle datasets of a certain size before it could no longer reliably keep up with the backup speed customers needed.

The only solution to this problem was to create another hash index, which creates a dedupe island. This reduces the effectiveness of dedupe, because apps backed up to one dedupe island will not dedupe against another dedupe island. That increases bandwidth usage and overall cost, since more data must be transferred and stored.
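A minimal sketch of why islands hurt, using hypothetical in-memory indexes standing in for two separate dedupe pools:

```python
# Two dedupe "islands", each with its own private hash index.
island_a: set[str] = set()
island_b: set[str] = set()

def store(index: set[str], chunk_hash: str) -> bool:
    """Return True if the chunk actually had to be stored (hash was new)."""
    if chunk_hash in index:
        return False
    index.add(chunk_hash)
    return True

# The same chunk backed up to both islands is shipped and stored twice,
# because island B has no idea island A already holds it.
print(store(island_a, "abc123"))   # True  -> stored on island A
print(store(island_b, "abc123"))   # True  -> stored again on island B
```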

This is one limitation my current employer worked around by using a massively scalable NoSQL database – DynamoDB – that is available to us in AWS. Where typical dedupe products top out at 100 TB or so, we have customers with over 10 PB of data in a single environment, all being deduped against each other. And this implementation doesn't slow down backups or restores.
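For what it's worth, here is a minimal sketch of the general pattern of using a single key-value store as one global hash index. The table name and key schema are hypothetical; this is my own illustration of the concept, not a description of how any vendor actually implements it.

```python
import boto3
from botocore.exceptions import ClientError

# Hypothetical table name and key schema, for illustration only.
table = boto3.resource("dynamodb").Table("dedupe-index")

def register_chunk(chunk_hash: str) -> bool:
    """Atomically record a chunk hash in the shared index.
    Returns True if the hash was new (chunk must be uploaded),
    False if some client somewhere already stored it."""
    try:
        table.put_item(
            Item={"chunk_hash": chunk_hash},
            ConditionExpression="attribute_not_exists(chunk_hash)",
        )
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False
        raise
```

Because every client checks the same index, a chunk already stored by any machine anywhere never needs to cross the wire again, which is exactly what the dedupe-island architecture cannot do.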

What do you think?

Did I hit the nail on the head, or is there something else I’m missing?  Why didn’t the whole world go to source dedupe?

----- Signature and Disclaimer -----

Written by W. Curtis Preston (@wcpreston). For those of you unfamiliar with my work, I've specialized in backup & recovery since 1993. I've written the O'Reilly books on backup and have worked with a number of native and commercial tools. I am now Chief Technical Architect at Druva, the leading provider of cloud-based data protection and data management tools for endpoints, infrastructure, and cloud applications. These posts reflect my own opinion and are not necessarily the opinion of my employer.