Backing up a Linux (or Other *nix) System
bigsmoke writes "My buddy Halfgaar finally got sick of all the helpful users on forums and mailing lists who keep suggesting backup methods and strategies to others which simply don't, won't and can't work. According to him, this indicates that most of the backups made by *nix users simply won't help you recover, while you'd think that disaster recovery is the whole point of doing backups. So, now he explains to the world once and for all what's involved in backing up *nix systems."
Dump (Score:4, Informative)
http://www.freebsd.org/cgi/man.cgi?query=dump&apr
I still use tar, but ideally I'd like to use dump. As it is now, each server makes its own backups and copies them to a central server, which then dumps them all to tape. The backup server also holds one previous copy in addition to what got dumped to tape. It has come in handy on many occasions.
It does take some planning, though.
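A rough sketch of that kind of setup (hostnames, paths and the tape device here are just placeholders, not the poster's actual scripts):
# on each server, from cron: archive locally, then copy to the central box
tar -czf /var/backups/$(hostname -s).tar.gz --one-file-system /etc /home /var/www
scp /var/backups/$(hostname -s).tar.gz backup@central:/srv/backups/
# on the central server: level-0 dump of the filesystem holding the copies, straight to tape
dump -0u -f /dev/nst0 /srv/backups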
Re:Dump (Score:5, Informative)
I find dump to be the best backup tool for unix systems. One disadvantage is that it deals with whole file systems, which means things have to be partitioned intelligently beforehand. I think that's actually a Good Thing (TM).
Re: (Score:2)
One disadvantage is that it deals with whole file systems
NetBSD's dump [gw.com] supports files too, not just filesystems.
Re:Dump (Score:5, Insightful)
First, looking at this statement it seems that you have never had to run backups in a sufficiently diverse environment. Dump "proper" has a well known problem - it supports only a limited list of filesystems. It originally supported UFS and was ported to support EXT?FS. It does not support JFS, XFS, ReiserFS, UDF and so on (last time I looked, each used to have its own different dump-like utility). In the past I have also run into some entertaining problems with it when dealing with POSIX ACLs (and other bells-n-whistles) on ext3fs. IMHO, it is also not very good at producing a viable backup of heavily used filesystems.
Second, planning dumps is not rocket science any more. Nowadays, dumps can be planned in advance in an intelligent manner without user intervention. This is trivial. Dump is one of the supported backup mechanisms in Amanda and it works reasonably well for cases where it fits the bill. Amanda will schedule dumps at the correct levels without user attendance (once configured). If you are backing up to disk or a tape library you can leave it completely unattended. If you are backing up to other media you will only need to change cartridges once it is set up. Personally, I prefer to use the tar mechanism in Amanda. While less effective, it supports more filesystems and is better behaved in a large environment (my backup runs at work are in the many-TB range and they have been working fine for 5+ years now).
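For reference, using tar from Amanda is mostly a matter of the dumptype; a minimal sketch (names are invented, check the syntax against your own amanda.conf and disklist):
# amanda.conf: a dumptype that uses GNU tar instead of dump
define dumptype comp-user-tar {
    program "GNUTAR"
    compress client fast
    index yes
}
# disklist: host, directory, dumptype
fileserver.example.com  /home  comp-user-tar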
Now back to the overall topic: the original Ask Slashdot is a classic example of an "Ask Backup Question" on Slashdot. A vague question with loads of answers which I would rather not qualify. As usual, what is missing is what you are protecting against. When planning a backup strategy it is important to decide what you are protecting against: cockup, minor disaster, major disaster or compliance.
Re: (Score:3, Informative)
Yes,
Re: (Score:2)
This is not a downside, this is an advantage. One of the ways to increase the probability of recovery is to do this. Unfortunately the human brain (without probability theory training) is not very well suited to it. It is even less suited to following the changes in the filesystems over time and revising these estimates on every backup run, so the best thing is for the backup system to do this for you. This is possibly the best feature in amanda -
Re: (Score:2)
Have you tried bacula? I've heard stories of people migrating from amanda to it, although probably less so these days now that amanda supports spanning many tapes.
And my pet peeve: neither amanda, bacula nor any commercial program I know of supports extended attributes (ACLs, SELinux labels). #"@%&
Re: (Score:2)
What about backup and recovery ?
When backing up and recovering files on an SELinux system, care must be taken to preserve SELinux context information. Use the star command to back up SELinux contexts on Fedora, Red Hat Enterprise Linux (and probably most systems with a recent version of star).
For example,
star -xattr -H=exustar -c -f=output.tar [files]
Also the dump and restore utilities for Ext2/3 have been updated to work with XATTRs (and therefore SE Linux contexts). They should w
Re: (Score:2)
If you're going to quote something, please make sure that it is still relevant. I'm not entirely sure that more current versions, say 15 years younger, don't still have the same problems, but I think a re-match is in order to get some real information here.
The problem with dump. (Score:2)
Now if you use a volume manager you can create snapshots and back those up instead. Unfortunately most filesystems don't have a way of being told that a snapshot is being taken, and to checkpoint themselves. With the exception of XFS. I think there's a patch for ext3 to do this as well, but I don't know which distros include it by default.
I am of the opinion that the safest route is to do a backup at the mounted level of the filesystem from a snapshot from use
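For XFS on LVM, the sequence looks roughly like this (volume group, sizes and mount points are placeholders; on newer kernels lvcreate freezes the filesystem for you):
xfs_freeze -f /srv/data                              # checkpoint the fs and block writes
lvcreate --snapshot --size 2G --name data-snap /dev/vg0/data
xfs_freeze -u /srv/data                              # thaw; normal writes continue
mount -o ro,nouuid /dev/vg0/data-snap /mnt/snap      # nouuid is needed for an XFS snapshot
tar -czf /backups/data.tar.gz -C /mnt/snap .
umount /mnt/snap
lvremove -f /dev/vg0/data-snap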
Why not FFS? (Score:2)
You can use it if you want. UFS1 is supported R/W. UFS2 exists as read-only.
However, ext3 is essentially identical to FFS in capability, with additional journaling options. In the beginning Linux used minix/xiafs. ext was introduced to help the transition from that while bringing modern features to the table. Each evolution of the FS has been forward compatible to ease the transition.
extX, reiser, jfs and xfs.
Each of them has a purpose:
extX: simple, low-overhead, modest size limits, online re
Backups (Score:4, Informative)
One thing not mentioned is encryption. The backups should be stored on media or a machine separate from the source, and in the machine case you will likely be backing up more than one system. If it is a centralized backup server, then all someone has to do is break into that system and they have access to the data from all the systems. Hence encrypted backups are a must in my book. The servers should also push their data to the backup server, as a normal user on the backup server, instead of the backup server pulling it from the servers.
I used to use hdup2, but the developer abandoned it for rdup. The problem with rdup is that it writes straight to the filesystem, which brings up all kinds of problems, like the ones mentioned in the article. Lately I have been using duplicity. It does everything I want it to. I ran into a few bugs with it, but once I worked around them it has worked very well for me. I have been able to do restores on multiple occasions.
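In case anyone wants to try it, an encrypted, pushed-over-ssh duplicity run looks something like this (the GPG key ID, host and paths are placeholders):
# incremental, GPG-encrypted backup pushed to the backup host as an unprivileged user
duplicity --encrypt-key 1A2B3C4D /home/alice scp://backup@backuphost//srv/backups/alice
# pull a single file back out of the most recent backup
duplicity restore --file-to-restore .bashrc scp://backup@backuphost//srv/backups/alice /tmp/bashrc.restored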
Re:Backups (Score:5, Informative)
For bare metal restore, there's not much that beats a compressed dd copy of the boot sector, the boot partition and the root partition. Assuming that you have a logical partition scheme for the base OS, a bootable CD of some sort and a place to pull the compressed dd images from, you can get a server back up and running in a basic state pretty quickly. You can also get fancier by using a tar snapshot of the root partition instead of a low-level dd image.
Then there are the fancier methods of bare metal restore that use programs like Bacula, Amanda, tar, tape drives.
After that, you get into preservation of OS configuration, for which I prefer to use things like version control systems, incremental hard-link snapshots to another partition and incremental snapshots to a central backup server. I typically snapshot the entire OS, not just configuration files, and the hardlinked backups using ssh/rsync keep things manageable.
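The hardlinked snapshots are basically rsync with --link-dest; a stripped-down example (dates, host and paths are placeholders, not my actual setup):
# today's snapshot hardlinks unchanged files against yesterday's, so only changes cost space
rsync -ax --delete --link-dest=/backups/host1/2006-11-07 root@host1:/ /backups/host1/2006-11-08/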
Finally we get into data, and there are two goals here: disaster recovery and archival. Archive backups can be less frequent than disaster recovery backups, since the goal is to be able to pull a file from 2 years ago. Disaster recovery backup frequency depends more on your tolerance for risk. How many days / hours are you willing to lose if the building burns down (or if someone deletes a file)?
You can even mitigate some data loss scenarios by putting versioning and snapshots into place to handle day-to-day accidental mistakes.
Or there are simpler ideas, like having a backup operating system installed on a partition (a bootable root with an old, clean copy) that can be booted in an emergency, runs no services other than SSH, but has the tools to let you repair the primary OS volumes. Or going virtual with Xen, where your servers are just files on the hard drive of the hypervisor domain and you can dump them to tape.
Re: (Score:2)
I think his complaints are no longer relevant. rdiff-backup has a --compare-hash option, though I haven't checked the details. Maybe the author should give it another look...
Besides, if you
Re:Backups (with right formatting) (Score:1)
"I think his complaints are no longer relevant. rdiff-backup has a --compare-hash option, though I haven't checked the details. Maybe the author should give it another look.. "
The hash is stored in the meta information, and the compare option does only that, comparing the live system to your archive. It does not say anything about the change-detection behaviour used during a backup.
"Besides, if you have an accurate timeserver (you should! time is
Re: (Score:2)
True, but my assumption (which again, I haven't checked) is that they wouldn't have stored this hash if they weren't doing something with it. I don't think the sanity check uses any information that's not gathered for normal operation.
True. Your ba
Re: (Score:1)
The hash information feature was included after I suggested a feature for hash-change-checking. The hash is already stored, because that was easy to do, but the change checking never got implemented.
Re: (Score:2)
Why on earth would you look at the mtime? That's what ctime is for!
Re: (Score:2)
Anyhow, good luck with your article, and give dump and cpio a spin :-)
Regards,
--
*Art
Amanda (Score:5, Informative)
Does the trick for my organization.
Re: (Score:2)
Actually, I remember reading about it on the Amanda page. How long has it had this capability?
Mondoarchive (Score:4, Informative)
Re: (Score:2)
Mondo is absolutely vital in this regard - it allows you to restore from bare metal, and backs up and restores systems flawlessly. I've had to use it
Re: (Score:2)
Is it any faster that way? My only real complaint about Mondo is that it takes several hours to back up my 26 GB system to DVD+R, even with compression turned off... and for most of that time, I'm watching a progress bar stuck at 100% ("Now backing up large files") even as it burns disc after disc after disc.
/. is slipping (Score:2)
Re:/. is slipping (Score:4, Funny)
dd if=/dev/sda | rsh user@dest "gzip -9 >yizzow.gz"
And then just restore with
rsh user@dest "cat yizzow.gz | gunzip" | dd of=/dev/sda
Jeez. Was that so tough?
Re: (Score:2)
cat /dev/zero into a file on each filesystem and delete it first, so the unused fs blocks compress well.
Well, I'd really use a filesystem backup tool (that way you can restore to an upgraded filesystem / partitioning scheme, as well as not bothering to back up unused inodes). The only thing I ever use dd for is backing up the partition table & MBR:
dd if=/dev/sda of=/mnt/net/backup/asdf.img bs=512k count=1
Just remember, after you restore, re-run fdisk to adjust your partit
Lone-Tar. (Score:3, Insightful)
Add a scsi controller, and Drive Of Your Choice, and sleep well.
Simple (Score:1, Redundant)
Alternative to backup (Score:3, Informative)
I use a wonderful little tool/script called rsnapshot to backup our servers to a remote location. It's fast as it uses rsync and only transmits the portions of files that have changed. It's effortless to restore as the entire directory tree appears in each backup folder using symlinks, and it's rock solid.
Essentially the best part of this solution is its low maintenance and the fact that restorations require absolutely no manual work. I even have an intermediate backup server that holds a snapshot of our users' home directories... my users can connect to the server via a network share and restore any file that has existed in their home directory in the last week by simply copying and pasting it... changed files are backed up every hour.
Sure, the data is not as compressed as it could be in some backup solutions, and it's residing on a running server so it's subject to corruption or hack attempts. But my users absolutely love it. And it really doesn't waste much space unless a large percentage of your data changes frequently, which would consume a lot of tape space as well.
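The relevant part of rsnapshot.conf for that kind of setup is only a few lines (fields are tab-separated; host and paths are placeholders, and older versions spell "retain" as "interval"):
snapshot_root   /srv/snapshots/
retain  hourly  24
retain  daily   7
# pulled over ssh from an hourly cron job
backup  backupuser@fileserver:/home/    fileserver/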
Re: (Score:1)
1. Hardware failure: Oops, I just spilled juice all over the motherboard, and shorted the HDD.
2. Accidents: Oops, I just deleted a file.
3. Accident discovery: Oops, I deleted a file a week ago, that I didn't mean to.
4. Accident discovery2: Damn, I need the file I deleted 6 months ago.
5. Once restored, the file should have all the exact time stamps it did when it was backed up.
A REAL backup should let you r
Re: (Score:1)
And grandparent wasn't quite right... the backup uses HARDlinks, not SYMlinks, so restoration is truly effortless (and yes, time/date/gid/uid/mode are all preserved).
Re: (Score:1)
Then please link to the article. I would like to know about it.
Re: (Score:2)
Especially the inability to properly handle files that are in use makes it a poor choice for backing up a running system. Unless you can kick off all users and stop all services and scheduled jobs on a machine while the rsync runs, I wouldn't recommend it at all. You may
Sparse files (Score:2)
99% of the time there is only one sparse file of any significance on your machine: /var/log/lastlog.
Unless you really care about the timestamp of each user's prior login, you can safely exclude this file from the backup. Following a restore, "touch /var/log/lastlog" will recreate it.
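If you do want it in the backup anyway, GNU tar can at least store sparse files efficiently; an illustration (path is arbitrary):
# -S / --sparse makes tar detect holes instead of archiving gigabytes of zeros
tar -cSzf /backups/var-log.tar.gz /var/log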
Re: (Score:2)
Obviously, you don't have database files, nor use p2p, then. (Start the download of a few ISO's with a p2p program, and you'll have gigabytes of sparse file non-data.)
Re: (Score:2)
As for p2p, no. Then again, I'm not clear why you would even -try- to back up p2p downloads in progress. Seems like prime candidates for exclusion from the backup process.
Re: (Score:2)
You exclude files from a backup on a system level, not a user level. You can't go into each and every user's home directory and scan for what can be backed up and what can be excluded. You back that all up. Period.
If a user can cripple or trash your backup by creating a 2 TB sparse file, then you don't have a viable backup system.
Regards,
--
*Art
A quick reply from the author of the article (Score:2, Interesting)
A quick reply from the author of the article before I go to sleep:
About dump. So, that's a freebsd command? I've always suspected it existed, doing the very thing the man page described, because of the dump field in
The suggestions (for soft
Re: (Score:2, Informative)
Please don't take this the wrong way, but how in the world could you do any sort of proper resea
Re: (Score:2)
That's
One more thing (Score:2, Interesting)
Consistent backups (Score:3, Informative)
So you need a carefully-written, carefully-reviewed, carefully-tested procedure, and you need lockfiles to guarantee that it's not being run twice at once, that nothing else starts the server you shut down while the backup is going, etc. A lot of sysadmins screw this up - they'll do things like saying "okay, I'll run the snapshot at 02:00 and the backup at 03:00. The snapshot will have finished in an hour." And then something bogs down the system and it takes two, and the backup is totally worthless, but they won't know until they need to restore from it.
These systems put a lot of effort into durability by fsync()ing at the proper time, etc. If you just copy all the files in no particular order with no locking, you don't get any of those benefits. Your blind copy operation doesn't pay any attention to that sort of write barrier or see an atomic view of multiple files, so it's quite possible that (to pick a simple example) it copied the destination of a move before the move was complete and the source of the move after it was complete. Oops, that file's gone.
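The lockfile part, at least, is cheap to get right; on Linux, flock(1) does it in one line (the script and lock path are placeholders):
# refuses to start if another run still holds the lock, instead of running twice at once
flock -n /var/lock/nightly-backup.lock /usr/local/sbin/nightly-backup.sh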
Re: (Score:1)
Oops, I meant "consistent" here. "Atomic view" is nonsense.
It's mentioned, just buried (Score:2)
Section 7 recommends syncing and sleeping and warns "consider a tar backup routine which first makes the backup and then removes the old one. If the cache isn't synced and the power fails during removing of the old backup, you may end up with both the new and the old backup corrupted".
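In other words, the ordering matters; something like this (paths are placeholders):
tar -czf /backups/home-new.tar.gz /home     # write the new backup first
sync                                        # make sure it has really hit the disk
rm /backups/home-old.tar.gz                 # only then remove the old one
sync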
Re:Consistent backups (Score:4, Interesting)
Re: (Score:2)
I was under the impression that even with FSFS you still needed to use the hotcopy.py script in order to get a guaranteed consistent backup.
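For what it's worth, recent Subversion ships that functionality as a subcommand (repository paths here are placeholders):
# produces a complete, consistent copy of the repository, hooks and config included
svnadmin hotcopy /srv/svn/myrepo /srv/backups/myrepo-copy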
Re: (Score:2)
I originally thought so, too, but check out this thread [svn.haxx.se]. Old revision files are never modified, old revprop files are modified only when you do "svn propset --revision", and new files are created with a unique tempfile name then svn_fs_fs__move_into_place [collab.net]. My backup script does some additional sanity checking (ensures the dir is an fsfs repository of version 1 or 2, e
It costs a little money (Score:2)
Backup Edge (Score:2)
http://www.microlite.com/ [microlite.com]
bash, tar and netcat (Score:1)
backing up your system with bash, tar and netcat [blogspot.com]
Re: (Score:1)
It's good to see Anonymous Cowards bitching about other people's work. Maybe you could post some of yours so we can pick it apart.
Re: (Score:1)
I haven't read your blog, but I'm assuming it is as described. Both of those sentences are almost certainly true and aren't things you can argue about (fact and opinion presented as such.)
Re: (Score:1)
As for my audience, I write for those that are interested in information and not bitching about grammar. It's fun
Random thoughts (Score:2)
I think the article does a good job of explaining how to back up, but maybe just as important is "why?". There are some posts that say put everything on a RAID or use mirroring or dd. What they fail to address is one important reason to back up: human error. You may wipe a file and then a week later need to recover it. If all you're
Nonsense (Score:2)
Re: (Score:2)
The article also has outright falsehoods in it: For instance, ReiserFS can be configured to do data journaling (it just doesn't call it that), and has had this ability for quite some time now. And IIRC, ReiserFS4 can't be configured to disable data journaling.
It's odd how
www.bacula.org (Score:3, Insightful)
Works fine with my autoloaders, and it's open source.
Re: (Score:2)
(OTOH, I prefer it that way in the long run, because it forces me to learn the ins and outs of the system. Which is better than click-click-click-done and then not knowing how to fix it when things go pear-shaped.)
Re: (Score:2)
The docs onsite are pretty valuable, and they walk you through setup nicely. Installation isn't too bad; even a default MySQL or PostgreSQL installation on the box can be prepped and ready to go with the prov
Re: (Score:2)
All this just
Informative, well written (Score:2)
Re: (Score:1)
This Article explains many things which I hope will be very useful to many of us.
Good Job Dude.
Little to say... (Score:2)
In order to ensure I'm never in a tough spot, I made a custom bootable image using my distro's kernel and utilities. Then I made a bzip2 -9 compressed tar backup of my notebook hard drive, which is just small enough to fit on a single CD... (With DVD-Rs these days, the situation is even better).
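For anyone who wants to reproduce that, the backup itself is essentially a one-liner run from the rescue environment (devices and mount points are placeholders):
# notebook root mounted read-only on /mnt/root, USB disk or spare space on /mnt/usb
tar -c --one-file-system -C /mnt/root . | bzip2 -9 > /mnt/usb/notebook-root.tar.bz2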
Another very key point which was missed (Score:2)
Here's a real life case in point that I came across with a Fortune 500 company. This company had recently acquired a small startup, whose system administration skills were lacking. Before movi
Re: (Score:2)
And time was of the essence, because having a bunch of engineers sitting around waiting for their files adds up to a significant amount of money.
IT departments in large companies are a little funny, in
Re: (Score:2)
It was clearly a failure of process here. Having built my own home-grown RAID systems from scratch, I find them quite useful. Like any system, incorrect usage will lead to problems. Such was the case here.
Indeed, one of the options was to build one simply for the storage here. Had I been guaranteed reimbursement for this, one could've been put together in a day or so.
Unfortunately, the reimbursement was an issue.
Hey thanks... (Score:3, Funny)
Signed
The Helpful People on forums and mailing lists
Arguably worthless (Score:4, Insightful)
tar, gtar, dd, cp, etc. are not backup programs. These are file or filesystem copy programs. Backups are a different kettle of fish entirely.
Amanda is a pretty good option. There are many others. The tool really isn't that important other than that (a) it maintains a catalog, and (b) it provides comprehensive enough scheduling for your needs.
The schedule is key. Deciding what needs to get backed up, when it needs to get backed up, how big of a failure window you can tolerate, and such is the real trick. It can be insanely difficult when you have a hundred machines with different needs, but fundamentally, a few rules apply to backups:
For backups:
1) Back up the OS routinely.
2) Back up the data obsessively.
3) Document your systems carefully.
4) TEST your backups!!!
For restores:
1) Don't restore machines--rebuild.
2) Restore necessary config files.
3) Restore data.
4) TEST your restoration.
All machines should have their basic network and system config documented. If a machine is a web server, that fact should be added to the documentation but the actual web configuration should be restored from OS backups. Build the machine, create the basic configuration, restore the specific configuration, recover the data, verify everything. It's not backups, it's not a tool, it's not just spinning tape; it's the process and the documentation and the testing.
And THAT'S how you save 63 billion dollar companies.
Righteous Backup (Score:2)
It's new, but it shows a lot of promise. It uses a kernel module to take consistent backups of partitions at the file system block level and store them on a remote server. The cool part is, it tracks changes. If you haven't rebooted your machine since the last backup, it takes a few seconds to send the changed blocks and uses almost no CPU. It can also interpret the file system in any incremental backup to restore individual files. Not to mention b
dirvish (Score:1)
mdadm (Score:2)
every week. Simple, predictable, and works fine in the face of lots of small files.
# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdf1[6] sdb1[4] sdd1[3] sdc1[2] sde1[1]
488383936 blocks [6/4] [_UUUU_]
[============>........] recovery = 61.6% (301244544/488383936) finish=231.7min speed=13455K/sec
# mount | grep backup
Oh, so many problems... (Score:3, Informative)
There is also a separate utility which can split any file into multiple pieces. It's called "split". They can be joined together with cat.
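For example (sizes and names are arbitrary):
split -b 1024m big-backup.tar big-backup.tar.part-    # cut into 1 GB pieces
cat big-backup.tar.part-* > big-backup.tar             # glue them back together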
As for mtimes, I ran his test. touch a; touch b; mv b a... Unless the mtimes are identical, backup software will notice that a has changed. This is actually pretty damned reliable, although I'd recommend doing a full backup every now and then just in case. Of course, we could also check inode (or the equivalent), but the real solution would be a hash check. Reiser4 could provide something like this -- a hash that is kept current on each file, without much of a performance hit. But this is only to prevent the case where one file is moved on top of another, and each has the exact same size and mtime -- how often is that going to happen in practice?
Backing up to a filesystem: Duh, so don't keep that filesystem mounted. You might just as easily touch the file metadata by messing with your local system anyway. Sorry, but I'm not buying this -- it's for people who 'alias rm="rm -i"' to make sure they don't accidentally delete something. Except in this case, it's much less likely that you'll accidentally do something, and his proposed solutions are worse -- a tar archive is much harder to access if you just need a single file, which happens more than you'd expect. We used BackupPC at my last job, but even that has a 1:1 relationship between files being backed up and files in the store, except for the few files it keeps to handle metadata.
No need to split up files. If you have to burn them to CD or DVD, you can split them up while you burn. But otherwise, just use a modern filesystem -- God help you if you're forced onto FAT, but other than that, you'll be fine. Yes, it's perfectly possible to put files larger than 2 gigs onto a DVD, and all three modern OSes will read them.
Syncing: I thought filesystems generally serialized this sort of thing? At least, some do. But by all means, sync between backup and clean, and after clean. But his syncs are overkill, and there's no need to sleep -- sync will block until it's done. No need to sync before umount -- umount will sync before detaching. And "sync as much as possible", taken to a literal extreme, would kill performance.
File system replication: You just described dump, in every way except that I don't know if dump can restrict to specific directories. But this doesn't really belong in the filesystem itself. The right way to do this is to use dm-snapshot. Take a copy-on-write snapshot of the filesystem -- safest because additional changes go straight to the master disk, not to the snapshot device. Mount the snapshot somewhere else, read-only. Then do a filesystem backup.
"But the metadata!" I hear him scream. This is 2006. We know how to read metadata through the filesystem. If you know enough to implement ACLs, you know enough to back them up.
As for ReiserFS vs ext3, there actually is a solid reason to prefer ext3, but it's not the journalling. Journalling data is absolutely, completely, totally, utterly meaningless when you don't have a concept of a transaction. I believe Reiser4 attempts to use the write() call for that purpose, but there's no guarantee until they finish the transaction API. This is why databases call fsync on their own -- they cannot trust any journalling, whatsoever. In fact, they'd almost be better off without a filesystem in the first place.
The solid reason to prefer ext3 is that ReiserFS can run out of potential keys. This takes a lot longer than it takes ext3 to run out of inodes, but at least you can check how many inodes you have left. Still, I prefer XFS or Reiser4, depending on how solid I need the system to be. To think that it comes down to "ext3 vs reiserfs" means this person has obviously never looked at the sheer number of options available.
As for network backups, we used both BackupPC and DRBD. BackupPC to keep things sane -- only one backup per day. DRBD to replicate the backup server over the network to a remote copy.
Just The Files (For Linux) (Score:1)
deep sigh (Score:2)
oh the joy of having archival snapshots of each day, instantly available.
most of all i miss singing along to yesterday [bell-labs.com].
My personal backup solution... (Score:2)
Basically, each workstation runs a cron job (or under Windows, task manager
NetBackup for us... (Score:2)