A few weeks ago, I posted a blog entry about building my own backup system using rsync, instead of using MacOS’s built-in Time Machine. I did this for a number of reasons. Well, it’s been a few weeks and I’ve learned some lessons. So I thought I’d share them!
The first thing I learned is that ssh will sometimes time out when rsyncing via ssh between two Macs. (This probably happens between other systems as well.) My guess is that it happens while rsync is building the file list from a very big drive. To fix the problem, I edited /etc/sshd_config on each Mac. In this scenario, the “backup server” is the ssh client and the “backup client” is the sshd server, since we are sshing from the backup server to the backup client. Therefore, on the backup client, I edited the sshd_config file and added ClientAliveInterval 60. This tells sshd that if 60 seconds go by without hearing from the client (which is exactly what happens during a large file transfer, or while the file list is being built), it should send a message through the encrypted channel and request a response. If it gets a response (which it should, since the client is alive and well), it keeps the connection open. This repeats until data starts flowing again.
But since this setting is really designed to detect a dead client, I also added ClientAliveCountMax 4 to tell sshd to give up after a few tries. (In our setup this limit will never be reached, since it counts consecutive keepalive messages the client DIDN’T respond to, and our client will always respond.)
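Put together, the two settings look like this in /etc/sshd_config on the backup client (the values are the ones I used; tune them to taste):

```
# Send a keepalive through the encrypted channel after 60 seconds
# of silence from the client.
ClientAliveInterval 60

# Drop the connection only after 4 consecutive unanswered keepalives.
ClientAliveCountMax 4
```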
Sometimes files change or disappear while a sync is running. When that happens, rsync exits with a 24 instead of a 0. In my experience it rarely happens (since I’m usually backing up at night), but it does happen. So I changed my script to treat an exit code of 24 or 0 as success.
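Here’s a minimal sketch of that check. The function name and the commented-out rsync invocation are just illustrations, not the exact script I use:

```shell
# Treat rsync exit codes 0 (success) and 24 ("some source files
# vanished during the transfer") as a successful backup.
rsync_ok() {
  case "$1" in
    0|24) return 0 ;;
    *)    return 1 ;;
  esac
}

# Hypothetical usage in a backup script:
# rsync -a --delete /Volumes/Source/ /Volumes/Backup/source/
# rsync_ok $? || echo "backup failed" >&2
```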
Although MacOS is based on Unix, it sometimes ignores Unix conventions like case sensitivity or file ownership. By default, when you set up a new drive in MacOS (except the drive you install the OS on), ownership is not enabled on that drive. That means no matter who writes a file to the drive, the file ends up being owned by unknown/unknown.
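If you want real ownership preserved on a backup drive, you can enable it. On MacOS versions of this vintage that’s done with vsdbutil (the volume name below is just an example); you can also do it from the Finder by unchecking “Ignore ownership on this volume” in the drive’s Get Info window:

```
# Check whether ownership is enabled on the volume:
vsdbutil -c /Volumes/Backup

# Enable it (requires an admin password):
sudo vsdbutil -a /Volumes/Backup
```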
I use a separate backup drive for backing up my OS drives; it’s just a SATA drive in a third-party enclosure. The 250 GB drive I had in there filled up right around the time the 750 GB drive in my Drobo needed to be replaced with a 2 TB drive. So I swapped the 750 into the enclosure in place of the 250, and now I had tons of space! Except that within a few days it filled up too! What in the world?
After investigating, I saw that each subdirectory under ./backups/ was a full copy of the data rather than mostly hard links to the previous night’s backup. With ownership disabled on the swapped-in drive, rsync saw every file’s ownership as changed and re-copied everything on every run. Enabling ownership on the drive fixed it.
When the iMac that serves as the backup server wakes up via cron to back everybody up, it sometimes finds that all the other Macs are asleep. Although they supposedly support the WAKEONLAN feature, and I’ve read a few articles on how to wake them up via the LAN, I have not been able to do it. My only solution was to create a cron job on each client that wakes it just before backup time and (ironically) uses the Unix sleep command to keep it awake until its appointed time. I sure would like a more elegant solution to this. I may actually move toward having them all back themselves up to the server using their own cron jobs, because this is downright silly. But I’m open to suggestions.
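As a sketch, the client-side workaround looks something like this. The 2:00 AM backup window is a made-up example, and note that cron alone can’t wake a sleeping Mac, so a scheduled wake (via pmset or the Energy Saver preference pane) has to bring the machine up first:

```
# Run once, as root: schedule the Mac to wake at 1:55 AM every day.
# sudo pmset repeat wakeorpoweron MTWRFSU 01:55:00

# Client crontab: at 1:56 AM, hold the machine busy for 30 minutes
# so it doesn't go back to sleep before the backup finishes.
56 1 * * * /bin/sleep 1800
```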
Once the things above were worked out, it became a very fast backup method: I back up over 4 TB of data in under 10 minutes. I also provide an offsite backup of backupcentral.com & truthinit.com by rsyncing their backup files (which cpanel creates automatically every night) to the same drive.
Backup of BackupCentral
That takes about seven more minutes, because I’m copying down hundreds of MB of files every night. Not only does this give me an offsite backup, it gives me more granularity than cpanel offers. (While they do keep a monthly and a weekly backup, the daily backup is overwritten every day. So I can either restore to last night or last week. Yuck.) So I’ve always copied it offsite every day. This way is much more elegant and treats my backupcentral data the same way I treat my other data. One system to monitor.
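A sketch of that kind of nightly pull, keeping each day’s copy in its own dated directory so cpanel’s overwritten daily backup is preserved. The user, remote path, and destination below are placeholders, not my actual setup:

```shell
# Build a per-day destination directory name (e.g. .../2010-03-15),
# so each night's pull is kept instead of overwritten.
dated_dest() {
  echo "$1/$(date +%Y-%m-%d)"
}

# Hypothetical usage:
# rsync -av user@backupcentral.com:backups/ \
#   "$(dated_dest /Volumes/Backup/backupcentral)/"
```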
Monitoring and Reporting
Now that I’ve got things completely working, I need to build some reporting into this thing. It would be nice if I didn’t have to log in to the one iMac every day to check my status. Of course, that means I’m going to have to figure out how to get the iMac to email me. Excuse me while I go google some stuff.
I hope that some of you find this useful.
----- Signature and Disclaimer -----
Written by W. Curtis Preston (@wcpreston). For those of you unfamiliar with my work, I've specialized in backup & recovery since 1993. I've written the O'Reilly books on backup and have worked with a number of native and commercial tools. I am now Chief Technical Architect at Druva, the leading provider of cloud-based data protection and data management tools for endpoints, infrastructure, and cloud applications. These posts reflect my own opinion and are not necessarily the opinion of my employer.