Some interesting options have recently presented themselves for using deduplicated, replicated backups on a regular basis. If you’re not familiar with the challenges of using these backups without some help, I’ll explain those first, and then go into the solutions to these problems that are showing up.
Update: I’m talking about backups from a regular backup software product that are going to a VTL or IDT (intelligent disk target). I know there are all kinds of other backups out there that use replication. In this blog post, I’m talking about regular ol’ backups.
Those of you who have followed the blog know that I’m a fan of using deduplication and replication to get backups offsite without shipping tapes. However, there are challenges with using those replicated backups to do things like make copies to tape, or (Heaven forbid) perform restores.
If you want to use them only in the case of disaster, what most people do is use a standby main backup server and then figure out some way to regularly get the backup catalog/database over there. This can be done via replication of the catalog/database itself, replication of catalog/database backups to the disk device, or even shipping catalog/database backup tapes to the remote site. The downside to the latter two approaches is the downtime required to restore the backup catalog/database before restores can begin. The first approach works well, but many people do not use it for some reason.
Those that are more focused on day-to-day copies to tape at the replication destination have thought of using a media server/storage node/device server in the remote site that can access the replicated backups, but there are problems with this approach. If we’re talking a NAS system, the replicated copy is served from a different hostname, so the backup software will not look for it there. (I have thought about hacking hosts files so that the remote backup server thinks it’s mounting the same NAS system that’s in the source site, but I’m not sure if that will work.) If we’re talking a VTL, the matching bar codes will cause the backup software to think that one tape is in two places, so that definitely doesn’t work.
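For what it’s worth, the hosts-file hack I’m imagining would look something like this (the hostname and IP here are invented for illustration, and again, I haven’t tested whether the backup software would actually accept the replica this way):

```
# Hypothetical /etc/hosts entry on the remote media server.
# Maps the source site's NAS hostname to the replicated array's IP,
# so the backup software mounts the replica under the name it expects.
10.20.30.40   nas01.prod.example.com
```
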
The first solution to this problem was Symantec’s OST, which I’ve already blogged about here and here. The second solution was from CommVault. Although I have not tested it, they tell me that they can have a media agent watch a replication target directory and look for backups that show up there after replication. Since the media agent has access to the database that is tracking what’s in those backups, they’re saying this would allow you to use the media agent at the remote site to copy backups to tape on a regular basis. It appears that this will work only with NAS products, as VTLs would still have the matching bar code problem.
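The directory-watching piece of that is easy enough to picture. Here’s a minimal sketch in plain Python of what “look for backups that show up there after replication” could mean; the function name and the mtime-based approach are my assumptions for illustration, not anything CommVault has shown me:

```python
import os

def new_backup_files(target_dir, last_sweep_time):
    """List files in a replication target directory whose modification
    time is newer than the previous sweep -- i.e., backups that have
    landed there since we last looked."""
    fresh = []
    for entry in os.scandir(target_dir):
        if entry.is_file() and entry.stat().st_mtime > last_sweep_time:
            fresh.append(entry.name)
    return sorted(fresh)
```

Each scheduled pass would record its own timestamp and hand any fresh files to the media agent for the copy-to-tape job. The real media agent logic is internal to CommVault, of course; this just shows the general idea.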
I also talked with the HP people yesterday, and they updated me on what they’re doing with Data Protector. They also have a unique solution to this problem if you use Data Protector and the HP VLS (a VTL based on SEPATON).
Their solution is enabled by a new feature in DP that allows one server to import backup catalog entries from another DP server. This allows you to move a tape (or replicate a virtual tape) to another DP server and have the other DP server ask the first DP server for the catalog/database entries of what’s on that tape.
So their solution starts with a separate DP server in the replicated site (not a media server). It then uses an automated script that is run occasionally via a scheduler. The script looks for new virtual tapes that weren’t there the last time the script ran, and then it asks the source backup server for the backup catalog information for those tapes. Once that data is imported from the source server to the destination DP server, the tapes can be used for whatever you want — copies to tape or DR restores. It’s essentially like the best option described above (mirroring of the catalog) without having to use mirroring software.
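To make the “looks for new virtual tapes that weren’t there the last time the script ran” step concrete, here’s a minimal sketch, assuming the VTL’s contents can be listed as a set of barcodes and that a small state file persists between runs. Everything here — the state file, the function name — is my assumption about how such a script would be structured, not HP’s actual implementation:

```python
import json
import os

def find_new_tapes(current_barcodes, state_file):
    """Return barcodes that weren't present on any previous run.

    state_file persists the set of barcodes already seen, so each
    scheduled sweep reports only the virtual tapes that replicated
    in since the last one.
    """
    seen = set()
    if os.path.exists(state_file):
        with open(state_file) as f:
            seen = set(json.load(f))
    new = sorted(set(current_barcodes) - seen)
    with open(state_file, "w") as f:
        json.dump(sorted(seen | set(current_barcodes)), f)
    return new

# For each new tape, the real script would then run the DP catalog
# import against the source server -- a vendor command I'm
# deliberately not guessing at here.
```

Once the import step runs for each tape this function reports, the destination DP server knows what’s on those tapes and can use them for copies or restores.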
I look forward to other solutions to this problem.
----- Signature and Disclaimer -----
Written by W. Curtis Preston (@wcpreston). For those of you unfamiliar with my work, I've specialized in backup & recovery since 1993. I've written the O'Reilly books on backup and have worked with a number of native and commercial tools. I am now Chief Technical Architect at Druva, the leading provider of cloud-based data protection and data management tools for endpoints, infrastructure, and cloud applications. These posts reflect my own opinion and are not necessarily the opinion of my employer.