Backing up a Linux (or Other *nix) System
bigsmoke writes "My buddy Halfgaar finally got sick of all the helpful users on forums and mailing lists who keep suggesting backup methods and strategies to others which simply don't, won't and can't work. According to him, this indicates that most of the backups made by *nix users simply won't help you recover, while you'd think that disaster recovery is the whole point of doing backups. So, now he explains to the world once and for all what's involved in backing up *nix systems."
Dump (Score:4, Informative)
http://www.freebsd.org/cgi/man.cgi?query=dump&apr
I still use tar, but ideally I'd like to use dump. As it is now, each server makes its own backups, copies them to a central server, which then dumps them all to tape. The backup server also holds one previous copy in addition to what got dumped to tape. It has come in handy on many occasions.
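A minimal sketch of the per-server step described above, using made-up local paths in place of real servers (the copy to the central host is shown but commented out, since the hostname is illustrative):

```shell
set -e
SRC=/tmp/demo_src; STAGING=/tmp/demo_staging
rm -rf "$SRC" "$STAGING"; mkdir -p "$SRC" "$STAGING"
echo "config data" > "$SRC/app.conf"

# Each server makes its own backup...
tar czf "$STAGING/$(hostname)-$(date +%F).tar.gz" -C "$SRC" .

# ...then copies it to the central server, which later dumps it to tape, e.g.:
# scp "$STAGING"/*.tar.gz backup@central:/backups/incoming/
ls "$STAGING"
```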
It does take some planning, though.
Backups (Score:4, Informative)
One thing not mentioned is encryption. Backups should be stored on media or a machine separate from the source. In the case of a backup machine, you will likely be backing up more than one system. If it is a centralized backup server, then all someone has to do is break into that one system and they have access to the data from all the systems. Hence encrypted backups are a must in my book. The servers should also push their data to the backup server, as a normal user on the backup server, instead of the backup server pulling it from the servers.
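A hedged sketch of the push-with-encryption idea: the archive is encrypted before it ever leaves the source host, so a compromised backup server yields only ciphertext. Paths and the passphrase are invented for illustration; a real setup would typically use gpg with a public key so the backup server never holds the decryption secret (openssl with -pbkdf2 is used here for portability, which needs OpenSSL 1.1.1 or later):

```shell
set -e
SRC=/tmp/push_src; OUT=/tmp/push_out
rm -rf "$SRC" "$OUT"; mkdir -p "$SRC" "$OUT/restore"
echo "secret payload" > "$SRC/data.txt"

# Encrypt the tarball on the source host before it goes anywhere
tar cz -C "$SRC" . \
  | openssl enc -aes-256-cbc -pbkdf2 -pass pass:demo-passphrase \
  > "$OUT/backup.tar.gz.enc"

# A real deployment would now push it as an unprivileged user, e.g.:
# scp "$OUT/backup.tar.gz.enc" backupuser@backuphost:/incoming/

# Restore path: decrypt and unpack
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:demo-passphrase \
  < "$OUT/backup.tar.gz.enc" | tar xz -C "$OUT/restore"
```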
I used to use hdup2, but the developer abandoned it for rdup. The problem with rdup is that it writes straight to the filesystem, which brings up all kinds of problems, like the ones mentioned in the article. Lately I have been using duplicity. It does everything I want it to. I ran into a few bugs, but once I worked around them it has worked very well for me. I have been able to do restores on multiple occasions.
Amanda (Score:5, Informative)
Does the trick for my organization.
Mondoarchive (Score:4, Informative)
Alternative to backup (Score:3, Informative)
I use a wonderful little tool/script called rsnapshot to back up our servers to a remote location. It's fast, since it uses rsync and only transmits the portions of files that have changed. It's effortless to restore from, as the entire directory tree appears in each backup folder (unchanged files are shared between snapshots as hard links), and it's rock solid.
Essentially the best part of this solution is its low maintenance and the fact that restorations require absolutely no manual work. I even have an intermediate backup server that holds snapshots of our users' home directories... my users can connect to the server via a network share and restore any file that has existed in their home directory in the last week by simply copying and pasting it... changed files are backed up every hour.
Sure, the data is not as compressed as it could be in some backup solutions, and it's residing on a running server so it's subject to corruption or hack attempts. But my users absolutely love it. And it really doesn't waste much space unless a large percentage of your data changes frequently, which would consume a lot of tape space as well.
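For reference, a hypothetical rsnapshot.conf fragment matching the setup described above. Hostnames, paths, and retention counts are purely illustrative; in the real file the fields must be TAB-separated, and older rsnapshot versions spell `retain` as `interval`:

```
snapshot_root   /backups/snapshots/
retain  hourly  24
retain  daily   7
# pull users' home directories from the file server
backup  backup@fileserver:/home/        fileserver/
```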
Re:Backups (Score:5, Informative)
For bare metal restore, there's not much that beats a compressed dd copy of the boot sector, the boot partition and the root partition. Assuming that you have a logical partition scheme for the base OS, a bootable CD of some sort and a place to pull the compressed dd images from, you can get a server back up and running in a basic state pretty quickly. You can also get fancier by using a tar snapshot of the root partition instead of a low-level dd image.
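The compressed-dd round trip can be demonstrated against a scratch file standing in for a real block device (on a real system you would point dd at something like /dev/sda1 from a rescue CD; the file names here are made up):

```shell
set -e
DISK=/tmp/fake_partition.img
# fabricate a small "partition" with some recognizable bytes at the front
dd if=/dev/zero of="$DISK" bs=1024 count=64 2>/dev/null
echo "bootloader-ish bytes" | dd of="$DISK" conv=notrunc 2>/dev/null

# take the compressed image...
dd if="$DISK" bs=64k 2>/dev/null | gzip -c > /tmp/partition.img.gz

# ...and restoring is just the reverse
gzip -dc /tmp/partition.img.gz | dd of=/tmp/restored.img bs=64k 2>/dev/null
cmp "$DISK" /tmp/restored.img
```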
Then there are the fancier methods of bare metal restore that use programs like Bacula, Amanda, and tar together with tape drives.
After that, you get into preservation of OS configuration, for which I prefer things like version control systems, incremental hard-link snapshots to another partition, and incremental snapshots to a central backup server. I typically snapshot the entire OS, not just configuration files, and the hard-linked backups over ssh/rsync keep things manageable.
Finally we get into data, and there are two goals here: disaster recovery and archival. Archive backups can be less frequent than disaster recovery backups, since the goal is to be able to pull a file from 2 years ago. Disaster recovery backup frequency depends more on your tolerance for risk: how many days / hours are you willing to lose if the building burns down (or if someone deletes a file)?
You can even mitigate some data loss scenarios by putting versioning and snapshots into place to handle day-to-day accidental mistakes.
Or there are simpler ideas, like having a backup operating system installed on a spare partition (a bootable root with an old, clean copy) that can be booted in an emergency, runs no services other than SSH, but has the tools to let you repair the primary OS volumes. Or going virtual with Xen, where your servers are just files on the hard drive of the hypervisor domain and you can dump them to tape.
Consistent backups (Score:3, Informative)
So you need a carefully-written, carefully-reviewed, carefully-tested procedure, and you need lockfiles to guarantee that it's not being run twice at once, that nothing else starts the server you shut down while the backup is going, etc. A lot of sysadmins screw this up - they'll do things like saying "okay, I'll run the snapshot at 02:00 and the backup at 03:00. The snapshot will have finished in an hour." And then something bogs down the system and it takes two, and the backup is totally worthless, but they won't know until they need to restore from it.
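One way to get the "not run twice at once" guarantee is flock(1) from util-linux (an assumption; adjust to your platform's locking tool). The subshell holds file descriptor 9 on the lock file for the duration, and the snapshot and backup steps are sequenced inside it rather than scheduled an hour apart on blind faith:

```shell
LOCK=/tmp/backup.lock
(
  # refuse to start if another run still holds the lock
  flock -n 9 || { echo "another backup run is still active; aborting"; exit 1; }

  # snapshot, wait for it to actually finish, then back it up --
  # real steps would go here, in order
  echo "backup ran" > /tmp/backup_ran
) 9>"$LOCK"
```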
These systems put a lot of effort into durability by fsync()ing at the proper time, etc. If you just copy all the files in no particular order with no locking, you don't get any of those benefits. Your blind copy operation doesn't pay any attention to that sort of write barrier or see an atomic view of multiple files, so it's quite possible that (to pick a simple example) it copied the destination of a move before the move was complete and the source of the move after it was complete. Oops, that file's gone.
Re:Dump (Score:5, Informative)
I find dump to be the best backup tool for unix systems. One disadvantage is that it deals with whole file systems, which means things have to be partitioned intelligently beforehand. I think that's actually a Good Thing (TM).
Re:A quick reply from the author of the article (Score:2, Informative)
Please don't take this the wrong way, but how in the world could you do any sort of proper research for a technical article on backing up Unix systems without having run across the dump command (and its OS-specific variants: ufsdump, xfsdump, efsdump, and AIX backup)? It's not a FreeBSD-specific command. It or a similarly-named variant exists just about everywhere except on Linux. Linux used to have a proper ext2dump, but Linus decided that dump was deprecated because it was too difficult to make it work in the grand new VM/disk-cache subsystems of recent Linux kernels.
It works nothing like MS-DOS backup programs that used the FAT archive bit. It uses date comparisons and dumps low-level filesystem structures to a storage medium. That means:
To operate dump, you have dump "levels". Level 0 is a full filesystem dump. Level 1 contains files that changed since the last level 0 dump. Level 2 contains files that changed since the last level 1 dump, and so on. A file /etc/dumpdates contains a log of backup activity and is used for date comparisons when doing dumps at levels other than 0. In a classic tape rotation, you'd do a level 0 dump once a week, a level 1 the rest of the week (to separate tapes), a level 0 dump to a different tape the next week, and rotate through the level 1 backup tapes again.
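That classic rotation could be written as crontab entries along these lines (device names, the tape device, and timings are made up for illustration; the -u flag is what records each run in /etc/dumpdates):

```
# Sunday 02:00: full (level 0) dump of /dev/sda1 to tape
0 2 * * 0   dump -0u -f /dev/nst0 /dev/sda1
# Monday-Saturday 02:00: level 1 dumps (everything changed since the
# last level 0), each to its own tape in the rotation
0 2 * * 1-6 dump -1u -f /dev/nst0 /dev/sda1
```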
Dump and restore are particularly useful for doing system images on systems like Solaris, where the native tar command doesn't always know about extended filesystem attributes.
Re:Dump (Score:3, Informative)
Yes, different file systems need their own versions of "dump" and "restore", because the operations happen at file system level, and need to be able to back up and recover any special features of the file system.
As for producing a viable backup of a heavily used file system, dump is certainly superior to tar or anything else trying to work at the file level. With dump, you will be able to get a snapshot copy of files that are locked. But true, a consistent backup of an active file system can only be done by either re-mounting the volumes read-only, using a shadow copy, or techniques like breaking a mirror.
As for ACLs, alternate streams and other fs-specific features, native dumps are one of the very few ways you can back up files and retain this data.
I back up three machines here faithfully every night with xfsdump, and yes, I've had to restore due to hardware failures and upgrades, so I know the backups are viable. Since xfsdump supports differential backups (not to be confused with incremental backups), I use a staggered Tower of Hanoi approach. From the crontab of one of the machines: ... where xfsbackup is a script that dumps all file systems in fstab with the dump flag set, at the specified level, mounting/unmounting if necessary, and, only after completing without errors, removing older backups of the same or lower level in the set. (Directly overwriting one backup with its replacement is a typical newbie mistake -- if the machine crashes during the backup, you then have no backup at all.)
The use of differential backups instead of incremental allows for a much smaller number of required volumes, and diminishes the risk of a deleted file being restored -- in my case, I need at most 5 volumes per set, and usually 3 or fewer, with each set holding up to 16 days. This makes restores much quicker too.
The down side is that you will back up the same data more than once; whenever you stay at the same level or go up in backup levels, the same files will be backed up again, even if there are no new changes. In practice, this is a minor problem with a predictable pattern, so resources can be allocated accordingly.
Regards,
--
*Art
Oh, so many problems... (Score:3, Informative)
There is also a separate utility which can split any file into multiple pieces. It's called "split". They can be joined together with cat.
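The round trip is exactly this simple (file names and sizes here are arbitrary):

```shell
set -e
cd /tmp; rm -f big.bin big.part.* rejoined.bin
# make a ~1.2 MB file, split it into 512 KiB pieces...
dd if=/dev/urandom of=big.bin bs=1024 count=1200 2>/dev/null
split -b 512k big.bin big.part.
# ...and rejoin them with cat (the .aa/.ab/... suffixes sort correctly)
cat big.part.* > rejoined.bin
cmp big.bin rejoined.bin
```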
As for mtimes, I ran his test. touch a; touch b; mv b a... Unless the mtimes are identical, backup software will notice that a has changed. This is actually pretty damned reliable, although I'd recommend doing a full backup every now and then just in case. Of course, we could also check inode (or the equivalent), but the real solution would be a hash check. Reiser4 could provide something like this -- a hash that is kept current on each file, without much of a performance hit. But this is only to prevent the case where one file is moved on top of another, and each has the exact same size and mtime -- how often is that going to happen in practice?
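That test can be made deterministic by forcing distinct mtimes instead of relying on timer resolution (GNU touch/stat syntax assumed; on BSD, stat -f %m would replace stat -c %Y):

```shell
set -e
cd /tmp; rm -f a b
touch -d '2020-01-01 00:00:00' a
old=$(stat -c %Y a)
touch -d '2021-01-01 00:00:00' b
mv b a                          # a now carries b's metadata
new=$(stat -c %Y a)
echo "$old" > /tmp/mtime_old; echo "$new" > /tmp/mtime_new
# an mtime-comparing backup tool will see that a changed
[ "$old" != "$new" ] && echo "mtime differs: the move is detected"
```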
Backing up to a filesystem: Duh, so don't keep that filesystem mounted. You might just as easily touch the file metadata by messing with your local system anyway. Sorry, but I'm not buying this -- it's for people who 'alias rm="rm -i"' to make sure they don't accidentally delete something. Except in this case, it's much less likely that you'll accidentally do something, and his proposed solutions are worse -- a tar archive is much harder to access if you just need a single file, which happens more than you'd expect. We used BackupPC at my last job, but even that has a 1:1 relationship between files being backed up and files in the store, except for the few files it keeps to handle metadata.
No need to split up files. If you have to burn them to CD or DVD, you can split them up while you burn. But otherwise, just use a modern filesystem -- God help you if you're forced onto FAT, but other than that, you'll be fine. Yes, it's perfectly possible to put files larger than 2 gigs onto a DVD, and all three modern OSes will read them.
Syncing: I thought filesystems generally serialized this sort of thing? At least, some do. But by all means, sync between backup and clean, and after clean. But his syncs are overkill, and there's no need to sleep -- sync will block until it's done. No need to sync before umount -- umount will sync before detaching. And "sync as much as possible", taken to a literal extreme, would kill performance.
File system replication: You just described dump, in every way except that I don't know if dump can restrict to specific directories. But this doesn't really belong in the filesystem itself. The right way to do this is use dm-snapshot. Take a copy-on-write snapshot of the filesystem -- safest because additional changes go straight to the master disk, not to the snapshot device. Mount the snapshot somewhere else, read-only. Then do a filesystem backup.
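A sketch of that sequence through LVM's interface to dm-snapshot (requires root; the volume group, sizes, and paths are invented for illustration, and the snapshot must be big enough to absorb writes that happen during the backup):

```
# create a copy-on-write snapshot of the root LV
lvcreate --snapshot --size 1G --name rootsnap /dev/vg0/root
# mount the frozen view read-only and back it up
mount -o ro /dev/vg0/rootsnap /mnt/snap
tar czf /backups/root-$(date +%F).tar.gz -C /mnt/snap .
# tear down
umount /mnt/snap
lvremove -f /dev/vg0/rootsnap
```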
"But the metadata!" I hear him scream. This is 2006. We know how to read metadata through the filesystem. If you know enough to implement ACLs, you know enough to back them up.
As for ReiserFS vs ext3, there actually is a solid reason to prefer ext3, but it's not the journalling. Journalling data is absolutely, completely, totally, utterly meaningless when you don't have a concept of a transaction. I believe Reiser4 attempts to use the write() call for that purpose, but there's no guarantee until they finish the transaction API. This is why databases call fsync on their own -- they cannot trust any journalling, whatsoever. In fact, they'd almost be better off without a filesystem in the first place.
The solid reason to prefer ext3 is that ReiserFS can run out of potential keys. This takes a lot longer than it takes ext3 to run out of inodes, but at least you can check how many inodes you have left. Still, I prefer XFS or Reiser4, depending on how solid I need the system to be. To think that it comes down to "ext3 vs reiserfs" means this person has obviously never looked at the sheer number of options available.
As for network backups, we used both BackupPC and DRBD. BackupPC to keep things sane -- only one backup per day. DRBD to replicate the backup server over the network to a remote copy.