A couple of weeks ago I got into a twight (twitter fight) with Storagezilla over whether or not the term near-CDP should be allowed to exist. He opened the dialogue with this parting shot mini-blog post, then went on to refer to near-CDP as non-CDP in this longer blog post. My side is that, while the term isn’t perfect, it’s as good as we’ve got.
First off, let me say that I’ve got nothing against Mark, and he’s not the only person in this industry who feels the way he does. But it’s easier to argue with a person than with a nebulous mob. So here I go.
I’ll sum up Mark’s arguments with quotes from our twight. Someone asked how one would back up 2 PB, and I responded with near-CDP. He said, “Near CDP? It’s either CDP or it’s a snapshot. My Toyota is nearly a Rolls Royce Phantom because they both have four wheels.” He feels that (what I would call) true CDP “offers an RPO at an individual block level close to zero. Not 20 minutes or an hour ago,” and that when “they use the term near CDP they mean near zero. Well how near zero? … 3600 seconds it’s [sic, I believe he meant isn’t] near to anything when 600 seconds of data loss can cost some transactional environments a ton of money.” Finally he said that “it’s nonsense a warping of the language. Nearly breathing. You are or you’re not. English isn’t that hard a language…Continuous means uninterrupted in time. Near continuous, interrupted in time. NCDP. Interrupted Data Protection. Call it IDP.”
Once upon a time a few companies (e.g. Revivio, LiveVault, Kashya) invented something called continuous data protection. It was essentially real-time replication that kept a log of the replicated bits so that it could present the data being replicated at any point in time in the continuum. If you dropped a table 3 seconds ago, it could “undrop” it for you — no problem. These vendors submitted the term to SNIA for inclusion in the SNIA lexicon. There was some excitement around CDP from those who truly felt they had come up with something unique, and from some pundits in the industry who saw it as the Next Great Thing in backups. (For the record, I was a non-believer at the time. I felt it was too much, too soon, and too expensive.)
Due to the excitement, there were other companies (e.g. NetApp, Microsoft, Symantec) who had products that they felt also could be called “continuous data protection.” They wanted what they did to be included in the definition. The CDP vendors fought back and won. Everyone in the industry seems to know that to qualify for the term CDP, you have to be able to recover to any point in time. The official SNIA definition of CDP is, “A class of mechanisms that continuously capture or track data modifications enabling recovery to previous points in time.” In order to qualify for that, you must continuously capture/track/store all changes; only true CDP products do that.
But the other guys weren’t going away. They felt that what they did was almost as good as what the CDP vendors did, and they wanted to differentiate what they did from “regular” backup. The term “near-CDP” was coined. The sources I have actually suggest that it came from the true CDP side of the camp as a compromise, because the snapshot/replication folks wanted nothing short of the term CDP. So somebody said “what if we call it near-CDP? That way you can get what you want, and we can continue to differentiate that what we do is better.” And the term near-CDP was born, although it didn’t make it into the SNIA dictionary. (Right now, I’m sure, those fighting against the term are saying “Ah hah!”)
I went along with the term for very similar reasons. I didn’t want snapshots & replication being called CDP, but I thought that snapshots & replication were cool enough that they should get a term that was better than “snapshots and replication.” My thought was that it is block-level incremental-based (like CDP), and it does offer significantly better RPOs than traditional backups (like CDP), but it’s just not quite continuous, so I was OK with calling it near-continuous. The full truth is that I actually wrestled quite a bit with the term (for similar reasons that Mark is arguing), and tried really hard to coin my own term. But every term I came up with was more derogatory than definitive. (I tried periodic data protection, hourly data protection, but my favorite was occasional data protection. LOL) So I gradually agreed to and started using the term.
BTW, this is the same problem that I have with some of the modern definitions thrown out by Mark and others. Non-CDP and Interrupted Data Protection are both pejorative terms and only serve the “this isn’t real CDP” part of the discussion. No “snapshot and replication” vendor would agree to use those terms. As a vendor of what I would call a near-CDP product (snapshots & replication on CX), I would think Mark would be concerned about that.
And so I find myself several years later arguing the validity of the term that I myself had issues with.
Let’s first get the grammar issues out of the way. I definitely do not agree with those who feel that near-continuous is an oxymoron. An oxymoron would be periodically continuous, as those are antonyms. I also don’t have an issue with the term nearly continuous outside of the CDP context, and neither do others. A Google search for the term nearly continuous turned up several uses where people wanted to describe something that was close enough to being continuous to call it nearly continuous. Mark argued that continuous is a binary state and you can’t be near-continuous any more than you can be near-breathing. Obviously you can’t be nearly breathing, but you can be nearly dead. I hear all the time that the flight I’m on is nearly full. (I do object when they say it’s really full, though, as once you’re full, you can’t get more full. It’s a binary condition.) People often also say that we’re “nearly there” or “almost there.” I think if I searched harder, I would find thousands of examples of the use of near or nearly to modify binary states like continuous. I might even argue that this is the primary purpose of the word! I therefore reject any claims that it is simply an invalid term in English.
Even Mark suggested that his initial problem with the term was not grammar, but that the typical snapshot period was not close enough to continuous to qualify for the term. (He did this when he asked “how near zero” and said that 3600 seconds was not near enough to zero when 600 seconds can mean a lot of money.)
My argument is that the next period of time that backups are typically done in is once a day, which is 86400 seconds. I would suggest that 3600 seconds is a lot nearer to 0 than it is to 86400 seconds. This suggests that near is a relative term. If you’re driving from Boston to Los Angeles, it’s OK to say that you’re nearly there when you’re in Palm Springs stopping for a byte to eat. (You’ve driven 44 hours of a 46-hour trip that probably included a couple of overnight stays.) But if your whole trip is only a six-hour drive to LA, it doesn’t seem as right to say that you’re nearly there when you’re in Palm Springs. (You’ve driven 4 of 6 hours.)
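To put a number on that relative-nearness claim, here’s a back-of-the-envelope sketch in Python, using the worst-case RPO figures from the paragraph above:

```python
# Is an hourly snapshot "near" CDP? Compare its worst-case RPO
# (3600 s) to true CDP (0 s) and to a nightly backup (86400 s).
CDP_RPO = 0            # true CDP: recover to any point in time
NEAR_CDP_RPO = 3600    # hourly snapshots, worst case (seconds)
NIGHTLY_RPO = 86400    # once-a-day backup, worst case (seconds)

gap_to_cdp = NEAR_CDP_RPO - CDP_RPO          # 3600 s from CDP
gap_to_nightly = NIGHTLY_RPO - NEAR_CDP_RPO  # 82800 s from nightly

# Hourly snapshots sit 23x closer to CDP than to nightly backups.
print(f"{gap_to_nightly / gap_to_cdp:.0f}x nearer to CDP")  # → 23x nearer to CDP
```

On that arithmetic alone, hourly snapshots land much closer to the CDP end of the spectrum than to the nightly-backup end.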
My core point is that near-CDP has more in common with CDP than it does with traditional backups. Backups are done in batches that are typically run only once a day, and each batch transfers a whole lot more than the bytes that have changed. They usually transfer full files when only a few bytes have changed, and often transfer repeated full backups. They’re the reason dedupe exists.
CDP and near-CDP, on the other hand, work throughout the day transferring blocks that have changed from the source server to the target server. A true CDP product is continuously replicating as changes happen. Near-CDP products accomplish this in two different ways. Some (like NetApp) take a snapshot, then replicate the blocks necessary to create that snapshot. Others (like Symantec & Microsoft) continuously replicate — just like CDP products — but occasionally create a snapshot of what they’re replicating.
They send and store data more efficiently than backups (you don’t need dedupe if you’re using either CDP or near-CDP). They offer much tighter RPOs than backup (from 0 to 3600 seconds). And most importantly, they both offer the application the ability to mount a read-write copy of the backed-up data to be used immediately if the primary is damaged. That’s the bee’s knees, the mutt’s nuts, whatever you want to call it. Backup can’t do any of that, but CDP and near-CDP can.
Yes, 3600 seconds is a long time. What is an OK period of time? 600 seconds? Then take your snapshots every 10 minutes; that’s totally possible with a good near-CDP product. Some of them can even take snapshots every few seconds (e.g. FalconStor). It’s up to the customer to look at what is possible with both products, weigh the risks and benefits against each other, and make their own decision. (I think I’ll do another post about those risks and benefits. The purpose of this post is simply to defend the term near-CDP.)
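To make the snapshot-interval tradeoff concrete, here’s a toy Python sketch (an illustration, not any vendor’s implementation) counting the discrete recovery points you get in one day at two intervals; a true CDP product, by contrast, can recover to any instant in that same window:

```python
from datetime import datetime, timedelta

def recovery_points(start, end, interval_seconds):
    """Return the discrete recovery points a snapshot-based
    near-CDP product offers between start and end (inclusive)."""
    points, t = [], start
    while t <= end:
        points.append(t)
        t += timedelta(seconds=interval_seconds)
    return points

day_start = datetime(2009, 9, 1)
day_end = day_start + timedelta(days=1)

hourly = recovery_points(day_start, day_end, 3600)  # 25 points in a day
ten_min = recovery_points(day_start, day_end, 600)  # 145 points in a day
print(len(hourly), len(ten_min))  # → 25 145
```

Tightening the interval multiplies your recovery points (and shrinks your worst-case RPO) without changing the underlying mechanism — which is exactly the knob a customer should be evaluating.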
I believe in the end that this is the real problem. CDP vendors don’t want you comparing CDP products to near-CDP products. (Which is what’s bound to happen if you call them “near-CDP.”) Why? Because my research shows that most companies’ requirements can be met just fine by near-CDP products. Put another way, while many companies would like a zero RPO, once they see what they have to do to get one, and weigh its cost and risks against their business requirements, they say, “You know, a one-hour RPO is good enough.” Or they combine near-CDP with log shipping so the RPO is one hour minus the logs that had been successfully replicated. A few companies must have a zero-length RPO and will go to any length or cost to get it.
A problem I have
There are a whole lot of people selling, buying and implementing CDP systems that use asynchronous replication to get the data to the target system. Well, that isn’t any more continuous than near-CDP is, because the real RPO will be anywhere from seconds to hours depending on your system’s ability to replicate the data. In fact, sometimes the CDP system can get so far behind that it enters a special mode where less information is stored. Again, this is far from continuous. As long as everyone buying the system realizes this, and they have a way to monitor how far behind their target system is getting, then I’m perfectly fine with that. But in a world where SEs and consultants of large backup companies can’t even configure a backup system to make a tape drive happy, I’m quite skeptical that this happens as much as it should. (I’m absolutely amazed at what customers tell me their vendors tell them is “best practice,” or how vendors say things must be configured.)
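The monitoring piece is the part I’d insist on. As a sketch of the idea in Python (the function name and threshold are mine, not any product’s API), the check is just: how stale is the newest write the target has applied?

```python
import time

LAG_WARN_SECONDS = 600  # illustrative threshold: 10 minutes behind

def replication_lag(last_applied_ts, now=None):
    """Seconds the async target is behind the source -- i.e. the
    *real* RPO right now, whatever the brochure says."""
    if now is None:
        now = time.time()
    return max(0.0, now - last_applied_ts)

# Example: the newest replicated write is 15 minutes old.
lag = replication_lag(time.time() - 900)
if lag > LAG_WARN_SECONDS:
    print(f"WARNING: effective RPO is currently {lag:.0f}s")
```

If nobody is watching that number, an “asynchronous CDP” deployment can quietly carry an RPO of hours while everyone believes it’s zero.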
Anyway, that’s my story and I’m sticking to it.
----- Signature and Disclaimer -----
Written by W. Curtis Preston (@wcpreston). For those of you unfamiliar with my work, I've specialized in backup & recovery since 1993. I've written the O'Reilly books on backup and have worked with a number of native and commercial tools. I am now Chief Technical Architect at Druva, the leading provider of cloud-based data protection and data management tools for endpoints, infrastructure, and cloud applications. These posts reflect my own opinion and are not necessarily the opinion of my employer.