Backing up a Linux (or Other *nix) System 134

bigsmoke writes "My buddy Halfgaar finally got sick of all the helpful users on forums and mailing lists who keep suggesting backup methods and strategies to others which simply don't, won't and can't work. According to him, this indicates that most of the backups made by *nix users simply won't help you recover, while you'd think that disaster recovery is the whole point of doing backups. So, now he explains to the world once and for all what's involved in backing up *nix systems."
This discussion has been archived. No new comments can be posted.

  • Dump (Score:4, Informative)

    by Fez ( 468752 ) * on Thursday October 12, 2006 @07:46PM (#16415981)
    I'd say he hasn't seen the "dump" command on FreeBSD:
    http://www.freebsd.org/cgi/man.cgi?query=dump&apropos=0&sektion=0&manpath=FreeBSD+6.1-RELEASE&format=html [freebsd.org]

    I still use tar, but ideally I'd like to use dump. As it is now, each server makes its own backups and copies them to a central server, which then dumps them all to tape. The backup server also holds one previous copy in addition to what got dumped to tape. It has come in handy on many occasions.

    It does take some planning, though.
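
    Roughly, the kind of thing that setup boils down to (hostnames, paths and the Linux-style tape device are assumptions, not the actual commands used):

    tar -czpf /backups/$(hostname)-$(date +%F).tar.gz /etc /home /var   # on each server
    scp /backups/$(hostname)-$(date +%F).tar.gz backup@central:/spool/  # copy to the central box
    tar -cvf /dev/nst0 /spool                                           # on the central box: write the spool to tape
    mt -f /dev/nst0 rewoffl                                             # rewind and eject the tape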
    • Re:Dump (Score:5, Informative)

      by Retardican ( 1006101 ) on Thursday October 12, 2006 @08:56PM (#16416951) Homepage
      If you are going to talk about dump, you can't leave out why dump is the best. From the FreeBSD Handbook:

      17.12.7 Which Backup Program Is Best?

      dump(8) Period. Elizabeth D. Zwicky torture tested all the backup programs discussed here. The clear choice for preserving all your data and all the peculiarities of UNIX file systems is dump. Elizabeth created file systems containing a large variety of unusual conditions (and some not so unusual ones) and tested each program by doing a backup and restore of those file systems. The peculiarities included: files with holes, files with holes and a block of nulls, files with funny characters in their names, unreadable and unwritable files, devices, files that change size during the backup, files that are created/deleted during the backup and more. She presented the results at LISA V in Oct. 1991. See torture-testing Backup and Archive Programs. [dyndns.org]

      I find dump to be the best backup tool for unix systems. One disadvantage is that it deals with whole file systems, which means things have to be partitioned intelligently beforehand. I think that's actually a Good Thing (TM).
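
      For anyone who hasn't used it, a minimal level-0 dump and restore looks something like this (the tape device and mount point are assumptions):

      dump -0ua -f /dev/nsa0 /usr           # full (level 0) dump of /usr to tape, recorded in /etc/dumpdates
      # and to get it back onto a freshly newfs'ed, mounted partition:
      cd /mnt/usr && restore -rf /dev/nsa0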
      • by kv9 ( 697238 )

        One disadvantage is that it deals with whole file systems

        NetBSD's dump [gw.com] supports files too, not just filesystems.

      • Re:Dump (Score:5, Insightful)

        by arivanov ( 12034 ) on Friday October 13, 2006 @02:36AM (#16419743) Homepage
        I find dump to be the best backup tool for unix systems.

        First, looking at this statement it seems that you have never had to run backups in a sufficiently diverse environment. Dump "proper" has a well known problem - it supports only a limited list of filesystems. It originally supported UFS and was ported to support EXT?FS. It does not support JFS, XFS, ReiserFS, UDF and so on (last time I looked each used to have its own different dump-like utility). In the past I have also run into some entertaining problems with it when dealing with posix ACLs (and other bells-n-whistles) on ext3fs. IMHO, it is also not very good at producing a viable backup of heavily used filesystems.

        Second, planning dumps is not rocket science any more. Nowadays, dumps can be planned in advance in an intelligent manner without user intervention. This is trivial. Dump is one of the supported backup mechanisms in Amanda and it works reasonably well for cases where it fits the bill. Amanda will schedule dumps at the correct levels without user attendance (once configured). If you are backing up to disk or a tape library you can leave it completely unattended. If you are backing up to other media you will only need to change cartridges once it is set up. Personally, I prefer to use the tar mechanism in Amanda. While less effective, it supports more filesystems and is better behaved in a large environment (my backup runs at work are in the many-TB range and they have been working fine for 5+ years now).

        Now back to the overall topic: the original Ask Slashdot is a classic example of an "Ask Backup Question" on slashdot, a vague question with loads of answers which I would rather not qualify. As usual, what is missing is what you are protecting against. When planning a backup strategy it is important to decide what you are protecting against: cockup, minor disaster, major disaster or compliance.

        • Cockup - user deleted a file. It must be retrieved fast and there is no real problem if the backups go south once in a while. Backup to disk is possibly the best solution here. Backup to tape does not do the job. It may take up to 6 hours to get a set of files off a large tape. By the end you will have users taking matters into their own hands.
        • Minor disaster - server has died taking fs-es with it. Taking a few hours to recover it will not get you killed in most SMBs and home offices. Backup to disk on another machine is possibly the best solution here. In most cases this can be combined with the "cockup" recovery backup.
        • Major disaster - flood, fire, four horsemen and the like. For this you need offsite backup or a highly rated fire safe and backup to suitable removable media. Tape and high speed disk-like cartridges (Iomega REV) are possibly the best solution for putting in a safe. This cannot be combined with the "cockup/minor disaster" backups because the requirements contradict. You cannot optimise for speed and reliability/security of storage at the same time. Tapes are slow, network backup to remote sites is even slower.
        • Compliance - that is definitely not an Ask Slashdot topic.
        As far as what to use to back up on unix, IMO the answer is amanda, amanda or amanda (a minimal config sketch follows this list):
        • It plugs into supported and well known OS utilities, so if worst comes to worst you can extract the dump/tar from tape and use dump or tar to process it by hand. Also, if you change something on the underlying OS, the backups don't just stop working. For example, a while ago I had that problem with Veritas, which kept going south on anything but old stock RedHat kernels (without updates). So at one point I said enough is enough, moved all of the Unix systems to amanda and never looked back since (that was 5+ years ago)
        • It is fairly reliable and network backup is well supported (including firewall support on linux).
        • It is not easy to tune (unix is user-friendly...), but it can be tuned to do backup jobs where many high end commercial backup programs fail.
        • It supports tape backup (including libraries), disk backup and various weird media (like REV)
        • It works (TM).
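
        A minimal sketch of the two files that drive an amanda setup (hostnames and dumptype details are illustrative only, not a recommended config):

        # disklist -- one line per filesystem to back up: host, disk, dumptype
        fileserver   /home   comp-user-tar
        dbserver     /var    comp-user-tar

        # amanda.conf excerpt -- the dumptype referenced above
        define dumptype comp-user-tar {
            comment "user data via GNU tar, compressed on the client"
            program "GNUTAR"
            compress client fast
            index yes
        }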
        • Re: (Score:3, Informative)

          by arth1 ( 260657 )

          Dump "proper" has a well known problem - it supports only a limited list of filesystems. It originally supported UFS and was ported to support EXT?FS. It does not support JFS, XFS, ReiserFS, UDF and so on (last time I looked each used to have its own different dump-like utility). In the past I have also ran into some entertaining problems with it when dealing with posix ACLs (and other bells-n-whistles) on ext3fs. IMHO, it is also not very good at producing a viable back up of heavily used filesystems.

          Yes,

          • by arivanov ( 12034 )
            The down side is that you will back up the same data more than once

            This is not a downside, this is an advantage. One of the ways to increase the probability of recovery is to do this. Unfortunately the human brain (without probability theory training) is not very well suited to this. It is even less suited to following the changes in the filesystems over time and changing these estimates on every backup run, so the best is for the backup system to do this for you. This is possibly the best feature in amanda -

        • by joib ( 70841 )
          Good post.

          Have you tried bacula? I've heard stories of people migrating from amanda to it, although probably less so these days now that
          amanda supports spanning many tapes.

          And my pet peeve: neither amanda, bacula nor any commercial program I know of supports extended attributes (ACLs, SELinux labels). #"@%&
          • by ryanov ( 193048 )
            Bacula definitely supports ACLs. I'm not sure about SELinux labels, but it seems to me there are ways to back them up independently and then back up the file that is exported. I could be wrong, but I bet you that it's doable.
          • by ryanov ( 193048 )
            FYI, from an SELinux FAQ:

            What about backup and recovery ?

            When backing up and recovering files with a SELinux system, care must be taken to preserve SELinux context information. Use the star command to backup SE Linux contexts on Fedora, Red Hat Enterprise Linux (and probably most systems with a recent version of star).

            For example,
            star -xattr -H=exustar -c -f output.tar [files]

            Also the dump and restore utilities for Ext2/3 have been updated to work with XATTRs (and therefore SE Linux contexts). They should w
          • by arivanov ( 12034 )
            I have heard of bacula and I have looked at the list of supported features on a few occasions, but I have never seen any need to migrate. I also know a few admins who have migrated to it from amanda. It has always been for one of the following reasons:
            • Multitape support - most people simply do not know that amanda can support multiple tapes and tape libraries. Many of the ones who know do not know how to circumvent the file-does-not-span-a-tape limitation. For this I use automounter+nis to move things aro
      • by geschild ( 43455 )

        [...]She presented the results at LISA V in Oct. 1991. See torture-testing Backup and Archive Programs.
        (emphasis mine)

        If you're going to go and quote something, please make sure that it is still relevant. I'm not entirely sure that more current versions, say 15 years younger, don't still have the same problems, but I think a rematch is in order to get some real information here.
      • Can't use it on a live filesystem. No guarantee.
        Now if you use a volume manager you can create snapshots and back those up instead. Unfortunately most filesystems don't have a way of being told that a snapshot is being taken, and to checkpoint themselves. With the exception of XFS. I think there's a patch for ext3 to do this as well, but I don't know which distros include it by default.

        I am of the opinion that the safest route is to do a backup at the mounted level of the filesystem from a snapshot from use
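
        A minimal sketch of that snapshot-then-backup route with LVM (volume group, sizes and mount points are assumptions; the freeze step applies to XFS):

        xfs_freeze -f /srv                           # quiesce the filesystem (XFS only, optional)
        lvcreate -L 2G -s -n srv_snap /dev/vg0/srv   # copy-on-write snapshot of the logical volume
        xfs_freeze -u /srv                           # thaw the live filesystem
        mount -o ro /dev/vg0/srv_snap /mnt/snap      # XFS snapshots may also need -o nouuid
        tar -cpzf /backup/srv-$(date +%F).tar.gz -C /mnt/snap .
        umount /mnt/snap && lvremove -f /dev/vg0/srv_snap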
  • Backups (Score:4, Informative)

    by StarHeart ( 27290 ) * on Thursday October 12, 2006 @07:51PM (#16416045)
    The article seems like a good one, though I think it may be a little too cautious. I would need to hear some real world examples before I would give up on incremental backups. Being able to store months worth of data seems so much better than being only able to store weeks because you aren't doing incremental backups.

        One thing not mentioned is encryption. The backups should be stored on media or a machine separate from the source. In the case of a machine, you will likely be backing up more than one system. If it is a centralized backup server then all someone has to do is break into that system and they have access to the data from all the systems. Hence encryption is a must in my book. The servers should also push their data to the backup server, as a normal user on the backup server, instead of the backup server pulling it from the servers.

        I used to use hdup2, but the developer abandoned it for rdup. The problem with rdup is it writes straight to the filesystem. Which brings up all kinds of problems, like the ones mentioned in the article. Lately I have been using duplicity. It does everything I want it to. I ran into a few bugs with it, but once I worked around them it has worked very well for me. I have been able to do restores on multiple occasions.
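
        For reference, the sort of duplicity invocation this amounts to, encrypted with GPG and pushed to the backup host as a normal user (host, paths and the key ID are placeholders):

        duplicity --encrypt-key DEADBEEF /home scp://backupuser@backuphost//srv/backups/home
        # and to pull files back:
        duplicity restore scp://backupuser@backuphost//srv/backups/home /tmp/restored-home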
    • Re:Backups (Score:5, Informative)

      by WuphonsReach ( 684551 ) on Thursday October 12, 2006 @08:25PM (#16416533)
      The problem with suggesting backup solutions is that everyone's tolerance of risk differs. Plus, different backup solutions solve different problems.

      For bare metal restore, there's not much that beats a compressed dd copy of the boot sector, the boot partition and the root partition. Assuming that you have a logical partition scheme for the base OS, a bootable CD of some sort and a place to pull the compressed dd images from, you can get a server back up and running in a basic state pretty quickly. You can also get fancier by using a tar snapshot of the root partition instead of a low-level dd image.
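
      A minimal sketch of the dd variant (device names and destination are assumptions):

      dd if=/dev/sda of=/backup/sda-mbr.img bs=512 count=1         # boot sector + partition table
      dd if=/dev/sda1 bs=1M | gzip -c > /backup/sda1-root.img.gz   # compressed image of the root partition
      # restore from a live CD with:
      #   dd if=/backup/sda-mbr.img of=/dev/sda bs=512 count=1
      #   gunzip -c /backup/sda1-root.img.gz | dd of=/dev/sda1 bs=1M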

      Then there are the fancier methods of bare metal restore that use programs like Bacula, Amanda, tar, tape drives.

      After that, you get into preservation of OS configuration. For which I prefer to use things like version control systems, incremental hard-link snapshots to another partition and incremental snapshots to a central backup server. I typically snapshot the entire OS, not just configuration files, and the hardlinked backups using ssh/rsync keep things manageable.
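
      A minimal sketch of that hard-link snapshot idea with rsync's --link-dest (host and paths are made up):

      # unchanged files become hard links into the previous snapshot, so every
      # snapshot looks like a full copy but only changed files take new space
      rsync -aH --delete --link-dest=/backup/web01/2006-10-12 \
            root@web01:/etc/ /backup/web01/2006-10-13/etc/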

      Finally we get into data, and there are two goals here: disaster recovery and archival. Archive backups can be less frequent than disaster recovery backups, since the goal is to be able to pull a file from 2 years ago. Disaster recovery backup frequency depends more on your tolerance for risk. How many days / hours are you willing to lose if the building burns down (or if someone deletes a file)?

      You can even mitigate some data loss scenarios by putting versioning and snapshots into place to handle day-to-day accidental mistakes.

      Or there are simpler ideas, like having a backup operating system installed on a partition (a bootable root with an old, clean copy) that can be booted in an emergency, runs no services other than SSH, but has the tools to let you repair the primary OS volumes. Or going virtual with Xen, where your servers are just files on the hard drive of the hypervisor domain and you can dump them to tape.
    • by slamb ( 119285 ) *

      The article seems like a good one, though I think it may be a little too cautious. I would need to hear some real world examples before I would give up on incremental backups. Being able to store months worth of data seems so much better than being only able to store weeks because you aren't doing incremental backups.

      I think his complaints are no longer relevant. rdiff-backup has a --compare-hash option, though I haven't checked the details. Maybe the author should give it another look...

      Besides, if you

      • [quote]I think his complaints are no longer relevant. rdiff-backup has a --compare-hash option, though I haven't checked the details. Maybe the author should give it another look...[/quote] The hash is stored in the meta information, and the compare option does only that, comparing the live system to your archive. It does not say anything about the change-detection behaviour used during a backup. [quote]Besides, if you have an accurate timeserver (you should! time is unbelievably important to software in
        • OK, my slashdot noobness is revealed. Here's the post again...

          "I think his complaints are no longer relevant. rdiff-backup has a --compare-hash option, though I haven't checked the details. Maybe the author should give it another look.. "

          The hash is stored in the meta information, and the compare option does only that, comparing the live system to your archive. It does not say anything about the change-detection behaviour used during a backup.

          "Besides, if you have an accurate timeserver (you should! time is
          • by slamb ( 119285 ) *

            The hash is stored in the meta information, and the compare option does only that, comparing the live system to your archive. It does not say anything about the change-detection behaviour used during a backup.

            True, but my assumption (which again, I haven't checked) is that they wouldn't have stored this hash if they weren't doing something with it. I don't think the sanity check uses any information that's not gathered for normal operation.

            [Time-based checking is not safe...touch example]

            True. Your ba

            • True, but my assumption (which again, I haven't checked) is that they wouldn't have stored this hash if they weren't doing something with it. I don't think the sanity check uses any information that's not gathered for normal operation.

              The hash information feature was included after I suggested a feature for hash-change-checking. The hash is already stored, because that was easy to do, but the change checking never got implemented.

              True. Your backup from before the move will be correct, so if you were to

          • by arth1 ( 260657 )

            No, it's not (safe, I mean). Do this:

            touch a b
            edit a and b to be the same length but different content
            stat a b
            mv b a
            stat a
            a will now have the mtime b had first. mtime+size is not changed, file is not backed up.

            This is a danger in my opinion.

            Why on earth would you look at the mtime? That's what ctime is for!

            % echo foo >a
            % echo bar >b
            % stat a b | grep -v Uid
            File: `a'
            Size: 4 Blocks: 8 IO Block: 4096 regular file
            Device: 343h/835d Inode: 18437034 Links: 1
            Access: 2006-10-13 13

  • Amanda (Score:5, Informative)

    by Neil Blender ( 555885 ) <neilblender@gmail.com> on Thursday October 12, 2006 @07:52PM (#16416061)
    http://www.amanda.org/ [amanda.org]

    Does the trick for my organization.
    • by fjf33 ( 890896 )
      Does the trick for me at home. :)
    • by Dadoo ( 899435 )
      Yeah, Amanda has all the capabilities you need to do enterprise backups, except possibly the most important one: the ability to span tapes.
      • by Noksagt ( 69097 )
        Except that AMANDA now has tape spanning [zmanda.com].
        Yes, sorry, this article is clearly intended to teach one how to back up in a large scale environment. I reread the article. It's funny, the first time around I missed the part about the author's preferred backup file size being 650MB (he likes to burn them to CDs). I italicized the part about CDs because I didn't want anyone to get scared. It's a very enterprisey technology.
        • Actually, I burn them to DVDs. But, I don't really have one specific target audience in mind. Large enterprise setups require more work and more specific apps, of course. I didn't know of Amanda, but I have it on my TODO.
    • No recommendations for bacula? Or are they not even comparable?
  • Mondoarchive (Score:4, Informative)

    by Mr2001 ( 90979 ) on Thursday October 12, 2006 @07:52PM (#16416063) Homepage Journal
    Mondoarchive [csiro.au] works pretty well for backing up a Linux system. It uses your existing kernel and other various OS parts to make a bootable set of backup disks (via Mindi Linux), which you can use to restore your partitions and files in the event of a crash.
    • Yes, I couldn't believe someone had written an article about backing up a linux system and didn't refer even once to Mondo! (Or to any other backup software, either! I mean, OK, it's cool to know how to back things up yourself, but data recovery isn't a game ... I'd stick with something straightforward and reliable, personally, rather than rolling your own!)

      Mondo is absolutely vital in this regard - it allows you to restore from bare metal, and backs up and restores systems flawlessly. I've had to use it
      • by Mr2001 ( 90979 )

        My only piece of advice, if creating optical backups, is to back up to your hard disk, then burn the images and verify the burns against the images, rather than burning the discs on the fly.

        Is it any faster that way? My only real complaint about Mondo is that it takes several hours to back up my 26 GB system to DVD+R, even with compression turned off... and for most of that time, I'm watching a progress bar stuck at 100% ("Now backing up large files") even as it burns disc after disc after disc.

      • I put it on my TODO list to check out, Mondoarchive I mean.
  • The article has been up for over 20 minutes and still no RTFM followed by a cryptic dd command? For shame.
    • by LearnToSpell ( 694184 ) on Thursday October 12, 2006 @09:19PM (#16417221) Homepage
      RTFM n00bz!!

      dd if=/dev/sda | rsh user@dest "gzip -9 >yizzow.gz"

      And then just restore with
      rsh user@dest "cat yizzow.gz | gunzip" | dd of=/dev/sda

      Jeez. Was that so tough?
      • by rwa2 ( 4391 ) *
        .... you forgot to

        cat /dev/zero > /frickenlargefillerfile; rm /frickenlargefillerfile

        so the unused fs blocks compress well. /noob/ ;-]

        Well, I'd really use a filesystem backup tool (that way you can restore to an upgraded filesystem / partitioning scheme, as well as not bothering to backup unused inodes). The only thing I ever use dd for is backing up the partition table & MBR:

        dd if=/dev/sda of=/mnt/net/backup/asdf.img bs=512k count=1

        Just remember, after you restore, re-run fdisk to adjust you partit
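
        A hedged alternative for the partition-table part of that is sfdisk's text dump (paths are assumptions):

        sfdisk -d /dev/sda > /mnt/net/backup/sda-partitions.txt    # partition table as editable text
        # restore with: sfdisk /dev/sda < /mnt/net/backup/sda-partitions.txt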
  • Lone-Tar. (Score:3, Insightful)

    by mikelieman ( 35628 ) on Thursday October 12, 2006 @08:04PM (#16416253) Homepage
    Cron based backup with compression/encryption, rewind, bitlevel verify, send email re: success/failure.

    Add a scsi controller, and Drive Of Your Choice, and sleep well.

  • Simple (Score:1, Redundant)

    Amanda [amanda.org]

  • by jhfry ( 829244 ) on Thursday October 12, 2006 @08:16PM (#16416425)
    I have come to the conclusion that unless a tape backup solution is necessary, it is often easier to back up to a remote machine. Sure, archive to tape once in a while, but for the primary requirement of a backup... rsync your data to a separate machine with a large and cheap raid array.

    I use a wonderful little tool/script called rsnapshot to backup our servers to a remote location. It's fast as it uses rsync and only transmits the portions of files that have changed. It's effortless to restore as the entire directory tree appears in each backup folder using symlinks, and it's rock solid.

    Essentially the best part of this solution is its low maintenance and the fact that restorations require absolutely no manual work. I even have an intermediate backup server that holds a snapshot of our users' home directories... my users can connect to the server via a network share and restore any file that has existed in their home directory in the last week by simply copying and pasting it... changed files are backed up every hour.

    Sure, the data is not as compressed as it could be in some backup solutions, and it's residing on a running server so it's subject to corruption or hack attempts. But my users absolutely love it. And it really doesn't waste much space unless a large percentage of your data changes frequently, which would consume a lot of tape space as well.
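
    For reference, a minimal rsnapshot.conf along those lines (paths, host and retention counts are illustrative):

    # /etc/rsnapshot.conf excerpt -- fields must be separated by TABs
    snapshot_root   /backup/snapshots/
    interval        hourly  24
    interval        daily   7
    interval        weekly  4
    interval        monthly 6
    backup          backupuser@fileserver:/home/    fileserver/
    # cron then runs "rsnapshot hourly", "rsnapshot daily", and so on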
    • I think your solution is pretty good, but Backup should protect you from all of the following:

      1. Hardware failure: Oops, I just spilled juice all over the motherboard, and shorted the HDD.
      2. Accidents: Oops, I just deleted a file.
      3. Accident discovery: Oops, I deleted a file a week ago, that I didn't mean to.
      4. Accident discovery2: Damn, I need the file I deleted 6 months ago.
      5. Once restored, the file should have all the exact time stamps it did when it was backed up.

      A REAL backup should let you r
      • My rsnapshot scheme fulfills all those requirements... hourly backups (24 archived), daily (seven archived), weekly (four archived), monthly (six archived). Since the backups are on a remote machine in a different facility it deals with hardware failures very well.

        And grandparent wasn't quite right... the backup uses HARDlinks, not SYMlinks, so restoration is truly effortless (and yes, time/date/gid/uid/mode are all preserved).
    • by arth1 ( 260657 )
      rsync can't handle files that are locked or modified during sync, nor can it handle alternate streams and security labels. ACLs and extended attributes only work if the remote system has the exact same users/groups as the source machine.

      Especially the inability to properly handle files that are in use makes it a poor choice for backing up a running system. Unless you can kick off all users and stop all services and scheduled jobs on a machine while the rsync runs, I wouldn't recommend it at all. You may
  • A comment about sparse files:

    99% of the time there is only one sparse file of any significance on your machine: /var/log/lastlog

    Unless you really care about the timestamp of each users' prior login, you can safely exclude this file from the backup. Following a restore, "touch /var/log/lastlog" and the system will work as normal.
    • by arth1 ( 260657 )
      99% of the time there is only one sparse file of any significance on your machine: /var/log/lastlog

      Obviously, you don't have database files, nor use p2p, then. (Start the download of a few ISO's with a p2p program, and you'll have gigabytes of sparse file non-data.)
        Lots of database files. Which database are you using that has huge amounts of empty space in the files?

        As for p2p, no. Then again, I'm not clear why you would even -try- to back up p2p downloads in progress. Seems like prime candidates for exclusion from the backup process.
        • by arth1 ( 260657 )
          Then again, I'm not clear why you would even -try- to back up p2p downloads in progress. Seems like prime candidates for exclusion from the backup process.
          You exclude files from a backup on a system level, not a user level. You can't go into each and every user's home directory and scan for what can be backed up and what can be excluded. You back that all up. Period.
          If a user can cripple or trash your backup by creating a 2 TB sparse file, then you don't have a viable backup system.

          Regards,
          --
          *Art
          • You must be working in a very different environment than I am. My users have yet to create a large sparse file.
  • Hi there,

    A quick reply from the author of the article before I go to sleep:

    About dump. So, that's a freebsd command? I've always suspected it existed, doing the very thing the man page described, because of the dump field in /etc/fstab. But I have never actually seen a machine which had the dump command... It's possibly not very safe BTW. If it works like DOS's Archive bit, then it can't be trusted: it can be set manually. Some DOS apps even used it as a copy protection mechanism...

    The suggestions (for soft
    • by adam872 ( 652411 )
      DUMP has existed in various incarnations on various O/S's for eons. I used ufsdump/ufsrestore on Solaris just the other day to recover from a failed root disk on one of our old Sun servers. Worked an absolute treat. Boot from the CD or network (if you have Jumpstart): format, newfs, ufsrestore, installboot, reboot, done....
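
      Roughly the recovery sequence being described, for a SPARC root disk (device names are assumptions):

      newfs /dev/rdsk/c0t0d0s0                          # recreate the root filesystem
      mount /dev/dsk/c0t0d0s0 /a
      cd /a && ufsrestore rf /dev/rmt/0                 # pull the level-0 dump back from tape
      installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0
      umount /a && init 6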
    • Re: (Score:2, Informative)

      About dump. So, that's a freebsd command? I've always suspected it existed, doing the very thing the man page described, because of the dump field in /etc/fstab. But I have never actually seen a machine which had the dump command... It's possibly not very safe BTW. If it works like DOS's Archive bit, then it can't be trusted: it can be set manually. Some DOS apps even used it as a copy protection mechanism...

      Please don't take this the wrong way, but how in the world could you do any sort of proper resea

      As I stated in the intro, my experience is with Linux (perhaps I should remove the word "mostly" ...). The lack of dump on Linux can be blamed for my ignorance. But, I will investigate it, of course.
    • by arth1 ( 260657 )
      The author of the article wrote:
      But I have never actually seen a machine which had the dump command...

      That's ... interesting. I've never seen a system without a dump command. All commercial Unix varieties I've used (SunOS,IRIX,HPUX,AIX,DecOS) have them, and so do GNU/Linux distributions like SuSE and Redhat. The above tidbit of information makes me wonder about the credentials of the author.
  • One more thing (Score:2, Interesting)

    by halfgaar ( 1012893 )
    Oh, one more thing, encryption. I was in doubt whether to include it or not. I use different encryption schemes for my backups (LUKS for external HD and GPG for DVD burning), but I decided this can be left to the reader. I may include a chapter on it, after all.
  • Consistent backups (Score:3, Informative)

    by slamb ( 119285 ) * on Thursday October 12, 2006 @08:56PM (#16416949) Homepage
    This article totally neglects consistency. Recently I've put a lot of effort into getting consistent backups of things:
    • PostgreSQL by doing pg_dump to a file (easiest, diffs well if you turn off compression), pg_dump over a socket (better if disk space is tight, but you send the whole thing every time), or an elaborate procedure based on archive logs. (It's in the manual, but essentially you ensure logfiles aren't overwritten during the backup and that you copy files in the proper order.) A minimal pg_dump sketch follows the list.
    • Other ACID databases with a write-ahead log in a similar way.
    • Subversion fsfs is really easy - it only changes files through atomic rename(), so you copy all the files away
    • Subversion bdb is a write-ahead log-based system, easiest way is "svnadmin hotcopy".
    • Perforce by a simple checkpoint (which unfortunately locks the database for an hour if it's big enough) or a fancy procedure involving replaying journals on a second metadata directory...and a restore procedure that involves carefully throwing away anything newer than your checkpoint.
    • Cyrus imapd...I still haven't figured out how to do this. The best I've got is to use LVM to get a snapshot of the entire filesystem, but I don't really trust LVM.
    • ...
    • If you're really desperate, anything can be safely backed up by shutting it down. A lot of people aren't willing to accept the downtime, though.
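
    As a concrete example of the PostgreSQL bullet above, a pg_dump pushed straight to another machine (database, user and host are made up):

    pg_dump -U postgres mydb | gzip -c | ssh backupuser@backuphost "cat > /backups/mydb-$(date +%F).sql.gz"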

    So you need a carefully-written, carefully-reviewed, carefully-tested procedure, and you need lockfiles to guarantee that it's not being run twice at once, that nothing else starts the server you shut down while the backup is going, etc. A lot of sysadmins screw this up - they'll do things like saying "okay, I'll run the snapshot at 02:00 and the backup at 03:00. The snapshot will have finished in an hour." And then something bogs down the system and it takes two, and the backup is totally worthless, but they won't know until they need to restore from it.

    These systems put a lot of effort into durability by fsync()ing at the proper time, etc. If you just copy all the files in no particular order with no locking, you don't get any of those benefits. Your blind copy operation doesn't pay any attention to that sort of write barrier or see an atomic view of multiple files, so it's quite possible that (to pick a simple example) it copied the destination of a move before the move was complete and the source of the move after it was complete. Oops, that file's gone.

    • by slamb ( 119285 )
      or see an atomic view of multiple files

      Oops, I meant "consistent" here. "Atomic view" is nonsense.

    • In fairness to the author, while he does not go into the details, TFA does stress the importance of alternative methods for transactional systems such as the ones you are referring to.
    • Section 4 brings up the issue of data files from running applications and agrees with your recommendation of pg_dump or shutting down to do the backup.

      Section 7 recommends syncing and sleeping and warns "consider a tar backup routine which first makes the backup and then removes the old one. If the cache isn't synced and the power fails during removing of the old backup, you may end up with both the new and the old backup corrupted".
    • by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Thursday October 12, 2006 @09:58PM (#16417653) Homepage Journal
      The '-L' option to FreeBSD's dump command makes an atomic snapshot of the filesystem to be dumped, then runs against that snapshot instead of the filesystem itself. While that might not be good enough for your purposes, it's nice to know that the backup of database backend file foo was made at the same instant as file bar; that is, they're internally consistent with one another.
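
      For reference, that looks something like this (device and filesystem are assumptions):

      dump -0uaL -f /dev/nsa0 /var    # -L snapshots the mounted filesystem first, then dumps the snapshot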
    • Subversion fsfs is really easy - it only changes files through atomic rename(), so you copy all the files away

      I was under the impression that even with FSFS you still needed to use the hotcopy.py script in order to get a guaranteed consistent backup.
      • by slamb ( 119285 ) *

        I was under the impression that even with FSFS you still needed to use the hotcopy.py script in order to get a guaranteed consistent backup.

        I originally thought so, too, but check out this thread [svn.haxx.se]. Old revision files are never modified, old revprop files are modified only when you do "svn propset --revision", and new files are created with a unique tempfile name then svn_fs_fs__move_into_place [collab.net]. My backup script does some additional sanity checking (ensures the dir is an fsfs repository of version 1 or 2, e

  • We've been using Amazon's S3. [amazon.com] It has a great API, pretty easy to use. I was concerned about storing sensitive data there, but we worked out a good encryption scheme (that I won't detail) and now I'm able to really restore everything from anywhere with no notice. My city could sink into the ocean and I could be in Topeka, and I could bring things back up as long as I had a credit card.
  • Encryption, Compression, Bit-Level verification, Bootable disaster recovery, Commercial support...

    http://www.microlite.com/ [microlite.com]

  • I wrote an article a while back about how to do backups over the network using command line tools. I did it to bounce my system to a bigger hard drive, but I'm sure it could be automated and put to some good use if you wanted. Disaster recovery is as easy as booting with a livecd and untarring.

    backing up your system with bash, tar and netcat [blogspot.com]

  • I deal with some aggregate 2 terabytes of storage on my home file servers. What works for me won't work for an enterprise corporate data center, but maybe some things are useful...

    I think the article does a good job of explaining how to backup, but maybe just as important is "why?". There are some posts that say put everything on a RAID or use mirror or dd. What they fail to address is one important reason to backup: human error. You may wipe a file and then a week later need to recover it. If all you're
  • What can I say? I just did two successful system restores today from my "tar cjlf /" created system backups. I did several more in the last few years. Never had problems. I think this guy is just trying to sound mysterious and knowledgeable....
    • by gweihir ( 88907 )
      ... of course that would be "tar cjlf target_file.tar.bz2 /". ...
    • by sl3xd ( 111641 ) *
      I agree completely. I work for a company whose method of Linux installation is frequently... boot CD, unpack tarball. It takes a bit of care to make sure you don't mangle the permissions & other metadata, but it's not that mystical.

      The article also has outright falsehoods in it: For instance, ReiserFS can be configured to do data journaling (it just doesn't call it that), and has had this ability for quite some time now. And IIRC, ReiserFS4 can't be configured to disable data journaling.

      It's odd how
  • www.bacula.org (Score:3, Insightful)

    by Penguinisto ( 415985 ) on Thursday October 12, 2006 @10:26PM (#16417939) Journal
    Bacula, baby!

    Works fine with my autoloaders, and it's open source.

    /P

    • And moderately difficult to install... Don't get me wrong, it's our platform of choice and I'm working on setting up a central backup server using it. But I reckon that I still have a few hours of reading before I'll have it up and running and making backups.

      (OTOH, I prefer it that way in the long run, because it forces me to learn the ins/outs of the system. Which is better than click-click-click-done and then not knowing how to fix it when things go pear-shaped.)
        Actually, it's not as bad as it first appears. When I first eyeballed it (and was looking at alternatives that weren't so OSS), I realized even then that it was worth the time I spent learning it - the tech support call savings alone would be well beyond valuable, let alone the price tag (free!). :)

        The docs onsite are pretty valuable, and they walk you through setup nicely. Installation isn't too bad; even a default MySQL or PostgreSQL installation on the box can be prepped and ready to go with the prov

        • We used to use NovaStor... but that has never worked well on the Windows boxes. So now I'm setting up a 1.3TB 4-disk RAID10 server (expandable to 2.6TB) and we're going to use Bacula for the Unix/Linux boxes and to backup the data on the Windows servers as well. There's also a set of 500GB IDE drives that we take offsite weekly that are on a WinXP box that I have to work into the equation. The amount of data that we have to backup daily is about 200GB but only a few percent changes daily.

          All this just
  • The article addressed a question that has been nagging at the back of my mind but I haven't gotten around to figuring out the answer to. I like the way that the article is to the point, and very in depth. The author does a good job of explaining the various aspects of the files and the importance of preserving them, and then goes on to detail the steps necessary to preserve them.
      Disaster recovery plans differ from user to user, depending on what all you want to back up. For example, I won't want to back up permissions on some files (maybe MP3s) but may need them for others.

      This Article explains many things which I hope will be very useful to many of us.

      Good Job Dude.
    He is complaining about people who suggest backing up with "tar cvz /" but really, the only thing missing is the "p". I use it extensively and it just works (not for databases, but that should go without saying).

    In order to ensure I'm never in a tough spot, I made a custom bootable image using my distro's kernel and utilities. Then I made a bzip2 -9 compressed tar backup of my notebook hard drive, which is just small enough to fit on a single CD... (With DVD-Rs these days, the situation is even better).
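
    A minimal sketch of that kind of invocation (the excludes and target path are illustrative, not part of the original suggestion):

    tar cvjpf /mnt/backup/root-$(date +%F).tar.bz2 \
        --exclude=/proc --exclude=/sys --exclude=/mnt --exclude=/tmp /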
    That was a good article. However, there was one very key point which was missed: the importance of using Open Source tools. While this might be implied by the references, the author (like most people) has never faced a disaster situation where Open Source was the only way to do the backup and recovery.

    Here's a real life case in point that I came across with a Fortune 500 company. This company had recently acquired a small startup, whose system administration skills were lacking. Before movi

    • by Barnoid ( 263111 )

      Anyway, it was decided to backup the filesystem before attempting to recover the files. Absolutely everything broke when trying to do this, as Linux doesn't handle petabyte (or even terabyte) files properly. There are subtle problems with all of the utilities (find, ls, cp, cpio, and tar, to name just a few). While this isn't surprising, when you're trying to make a backup, it presents a serious problem.

      In the end, I actually had to modify GNU tar to handle these problems. This was particularly amusing, as

      • by btarval ( 874919 )
        dd wasn't an option as we didn't have enough free disk space for making such an image. We'd have had to have either set up another large RAID array, or have bought a new NAS server. Both would take time just to get the approval; and in the case of the NAS server, it would be a significant amount of time.

        And time was of the essence, because having a bunch of engineers sitting around waiting for their files adds up to a significant amount of money.

        IT departments in large companies are a little funny, in

        • by cr0sh ( 43134 )
          I won't pretend to know the situation, because you were there and I wasn't, but from your description it doesn't sound like the problem was a "home-grown RAID array" which triggered the mess. What triggered the mess was a failure to follow a good process for the move. The fact that they didn't allow the copying of the home folders to each user's desktop was the first mistake, the second mistake was just shutting off the power instead of performing a proper shutdown. While a real RAID array sub-system would
          • by btarval ( 874919 )
            I agree completely; you are quite correct. Thank you for pointing that out.

            It was clearly a failure of process here. Having built my own home-grown RAID systems from scratch, I find them quite useful. Like any system, incorrect usage will lead to problems. Such was the case here.

            Indeed, one of the options was to build one simply for the storage here. Had I been guaranteed reimbursement for this, one could've been put together in a day or so.

            Unfortunately, the reimbursement was an issue.

  • by Seraphim_72 ( 622457 ) on Friday October 13, 2006 @12:22AM (#16418937)
    My buddy Halfgaar finally got sick of all the helpful users on forums and mailing lists...
    Hey thanks, Fuck You too.

    Signed
    The Helpful People on forums and mailing lists
  • Arguably worthless (Score:4, Insightful)

    by swordgeek ( 112599 ) on Friday October 13, 2006 @12:22AM (#16418939) Journal
    When you work in a large environment, you start to develop a different idea about backups. Strangely enough, most of these ideas work remarkably well on a small scale as well.

    tar, gtar, dd, cp, etc. are not backup programs. These are file or filesystem copy programs. Backups are a different kettle of fish entirely.

    Amanda is a pretty good option. There are many others. The tool really isn't that important other than that (a) it maintains a catalog, and (b) it provides comprehensive enough scheduling for your needs.

    The schedule is key. Deciding what needs to get backed up, when it needs to get backed up, how big of a failure window you can tolerate, and such is the real trick. It can be insanely difficult when you have a hundred machines with different needs, but fundamentally, a few rules apply to backups:

    For backups:
    1) Back up the OS routinely.
    2) Back up the data obsessively.
    3) Document your systems carefully.
    4) TEST your backups!!!

    For restores:
    1) Don't restore machines--rebuild.
    2) Restore necessary config files.
    3) Restore data.
    4) TEST your restoration.

    All machines should have their basic network and system config documented. If a machine is a web server, that fact should be added to the documentation but the actual web configuration should be restored from OS backups. Build the machine, create the basic configuration, restore the specific configuration, recover the data, verify everything. It's not backups, it's not a tool, it's not just spinning tape; it's the process and the documentation and the testing.

    And THAT'S how you save 63 billion dollar companies.
  • I like to use Righteous Backup: http://www.r1soft.com/ [r1soft.com]

    It's new, but it shows a lot of promise. It uses a kernel module to take consistent backups of partitions at the file system block level and store them on a remote server. The cool part is, it tracks changes. If you haven't rebooted your machine since the last backup, it takes a few seconds to send the changed blocks and almost no CPU usage. It can also interpret the file system in any incremental backup to restore individual files. Not to mention b
  • Dirvish [dirvish.org], written in Perl and using rsync, is a fast disk-to-disk backup. Enjoy.
  • by jab ( 9153 )
    My personal favorite is to swap a pair of disks in and out of a super-redundant RAID
    every week. Simple, predictable, and works fine in the face of lots of small files.


    # cat /proc/mdstat
    Personalities : [raid1]
    md1 : active raid1 sdf1[6] sdb1[4] sdd1[3] sdc1[2] sde1[1]
    488383936 blocks [6/4] [_UUUU_]
    [============>........] recovery = 61.6% (301244544/488383936) finish=231.7min speed=13455K/sec

    # mount | grep backup
    /dev/sdg
  • by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Friday October 13, 2006 @06:04AM (#16420863) Journal
    So his complaint about GNU Tar is that it requires you to remember options... Just look at his Dar command! Seriously, I just do "tar -cjpSf foo.tar.bz2 bar/ baz/" and it just works. And since you should be automating this anyway, it doesn't matter at all.

    There is also a separate utility which can split any file into multiple pieces. It's called "split". They can be joined back together with cat.
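
    For example, to get CD-sized pieces (names and size are illustrative):

    split -b 650m backup.tar.bz2 backup.tar.bz2.part-
    cat backup.tar.bz2.part-* > backup.tar.bz2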

    As for mtimes, I ran his test. touch a; touch b; mv b a... Unless the mtimes are identical, backup software will notice that a has changed. This is actually pretty damned reliable, although I'd recommend doing a full backup every now and then just in case. Of course, we could also check inode (or the equivalent), but the real solution would be a hash check. Reiser4 could provide something like this -- a hash that is kept current on each file, without much of a performance hit. But this is only to prevent the case where one file is moved on top of another, and each has the exact same size and mtime -- how often is that going to happen in practice?

    Backing up to a filesystem: Duh, so don't keep that filesystem mounted. You might just as easily touch the file metadata by messing with your local system anyway. Sorry, but I'm not buying this -- it's for people who 'alias rm="rm -i"' to make sure they don't accidentally delete something. Except in this case, it's much less likely that you'll accidentally do something, and his proposed solutions are worse -- a tar archive is much harder to access if you just need a single file, which happens more than you'd expect. We used BackupPC at my last job, but even that has a 1:1 relationship between files being backed up and files in the store, except for the few files it keeps to handle metadata.

    No need to split up files. If you have to burn them to CD or DVD, you can split them up while you burn. But otherwise, just use a modern filesystem -- God help you if you're forced onto FAT, but other than that, you'll be fine. Yes, it's perfectly possible to put files larger than 2 gigs onto a DVD, and all three modern OSes will read them.

    Syncing: I thought filesystems generally serialized this sort of thing? At least, some do. But by all means, sync between backup and clean, and after clean. But his syncs are overkill, and there's no need to sleep -- sync will block until it's done. No need to sync before umount -- umount will sync before detaching. And "sync as much as possible", taken to a literal extreme, would kill performance.

    File system replication: You just described dump, in every way except that I don't know if dump can restrict to specific directories. But this doesn't really belong in the filesystem itself. The right way to do this is use dm-snapshot. Take a copy-on-write snapshot of the filesystem -- safest because additional changes go straight to the master disk, not to the snapshot device. Mount the snapshot somewhere else, read-only. Then do a filesystem backup.

    "But the metadata!" I hear him scream. This is 2006. We know how to read metadata through the filesystem. If you know enough to implement ACLs, you know enough to back them up.

    As for ReiserFS vs ext3, there actually is a solid reason to prefer ext3, but it's not the journalling. Journalling data is absolutely, completely, totally, utterly meaningless when you don't have a concept of a transaction. I believe Reiser4 attempts to use the write() call for that purpose, but there's no guarantee until they finish the transaction API. This is why databases call fsync on their own -- they cannot trust any journalling, whatsoever. In fact, they'd almost be better off without a filesystem in the first place.

    The solid reason to prefer ext3 is that ReiserFS can run out of potential keys. This takes a lot longer than it takes ext3 to run out of inodes, but at least you can check how many inodes you have left. Still, I prefer XFS or Reiser4, depending on how solid I need the system to be. To think that it comes down to "ext3 vs reiserfs" means this person has obviously never looked at the sheer number of options available.

    As for network backups, we used both BackupPC and DRBD. BackupPC to keep things sane -- only one backup per day. DRBD to replicate the backup server over the network to a remote copy.
  • On my personal Arch Linux system at home, I prefer to simply backup my home directory and the xorg.conf configuration file. Linux is fast and easy to reinstall (at least Arch and Slackware is), so I don't really worry about bare metal recovery. Windows, which I also like to run, takes forever to install and is far more likely to have problems. That's where I am interested in bare metal recovery.
  • i'm now using linux after having been on a plan 9 [bell-labs.com] system for years, and i really, really, really miss venti [bell-labs.com] and fossil [bell-labs.com].

    oh the joy of having archival snapshots of each day, instantly available.

    most of all i miss singing along to yesterday [bell-labs.com].

  • I have a backup solution I use at home which saved my butt once after I fubar'ed my server with a bad Debian update (was trying to do an update to Woody, but they had already switched things over to Sarge, and things got really messed up). While it isn't something that would be scalable for business (ah, who am I kidding - do not use this in a real IT department, please!), it has worked pretty well for me at home on my small network.

    Basically, each workstation runs a cron job (or under Windows, task manager

  • We use NetBackup from Symantec (formerly Veritas). Supports all our distros and even FreeBSD & Mac. Works like a charm.
