What is Your Backup Policy? 124

higuita asks: "A few days ago, I was asked to review our backup policy, how it is being applied, and to try to make it safer and more useful. Being new to the company, I started by checking what is being done right now and found several problems. Since I don't have much experience with enterprise backups, what are the most commonly used backup policies, software, and general approaches to this issue? We have fewer than 1000 workstations (Windows and Macs) and about 20 Oracle and Exchange servers (split between Windows, Solaris, and Linux), and it all needs to be backed up. Right now we use HP Data Protector with several tapes; most things get a weekly full backup and daily incremental backups, and most full backups are archived permanently in a safe we keep for this purpose. We also have off-site storage for backups. What practices and policies do Slashdot users implement for the backups they perform at the office? (I'm not interested in home backup practices.)"
"I've investigated Veritas NetBackup and other solutions, and I'm also curious if Amanda could be better or at approximate the features offered by HP Data Protector. What backup software have you used that you found enjoyable with the least bit of hassle?

I've thought about using Dirvish to back up users' home directories to a cheap server with several hard drives, and only backing up to tape once every 15 days or even once a month. They would lose their Windows permissions, but I don't think that matters much, since this is just for safekeeping the users' work. I've also thought about making full backups of the servers every 15 days with daily incremental backups. That way I would free up tape drive time and gain more flexibility in the backup schedule.

I would love it if users worked off of file servers, but right now this just isn't possible. It's a planned addition that we still don't have the time to make."
This discussion has been archived. No new comments can be posted.

  • by georgewilliamherbert ( 211790 ) on Wednesday May 31, 2006 @09:45PM (#15441085)
    For that many systems, use a professional, enterprise-grade, commercial solution. The open source stuff doesn't offer the same manageability.

    AND FOR GOD'S SAKE, REGULARLY VERIFY THAT YOU CAN READ THE TAPES BACK... More sites have been screwed by backup tapes that weren't readable than by any other failure mode. Verifying every tape is best. Second best is verifying every weekly backup. Random samples, covering every single drive's tape output at least once a month, are a poor third.
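    As an illustration only (not any particular product's verify feature), a minimal read-back check on a Unix backup host might look like the following; the tape device and the use of tar are assumptions.

        # rewind and try to read the whole archive back off the tape
        mt -f /dev/nst0 rewind
        tar -tvf /dev/nst0 > /dev/null && echo "tape readable" || echo "TAPE READ FAILED"

    A stronger check is to restore a sample of files and compare them against the originals, since listing the archive alone won't catch every media problem.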

    The two obvious software suggestions are Veritas/Symantec NetBackup and Legato Networker.

    Weekly fulls and daily incrementals are good. Your offsite schedule should be checked to ensure that you have a relatively recent restore point both onsite (in case of data loss) and offsite (in case of building loss).

    In terms of offsites, having a prepared plan for where and how to restore (Disaster Recovery and Business Continuity) is also important. But those all start with "Go get the tapes...".
    • by Anonymous Coward
      I'm decidedly less than impressed with 'enterprise backup solutions', regardless of the endless touting they receive in the vendor-supported trade rags and from vendor-supported 'industry analysts'. The 'enterprise backup solutions' I've seen have been so clunky and hard to use that setting up and maintaining backups is a full-time job. When you think about ACTUALLY DOING A DR with these 'products', they often come up short.

      Also, sorry, but reinstalling the OS and then restoring files from tape is NOT acceptable D
      • BMR has been standard for years.

        I've seen attempts to build large enterprise backup environments with "simple open" software. They melt down somewhat short of the size that the original questioner is asking about, typically.

        I've built environments with NBU and used Legato, at large sites (much larger than the original questioner). They just work. Configuring them initially can be non-trivial if you have no prior experience with them, but once set up right they just work.

        Throwing a bunch of open source te
        • BMR has been standard for years.

          AMANDA and others have been deployed in large institutions for years too.

          I've seen attempts to build large enterprise backup environments with "simple open" software. They melt down somewhat short of the size that the original questioner is asking about, typically.

          I've certainly seen a lot of "home-rolled" scripts with tar & what not abused this way. I haven't seen an AMANDA installation that failed to scale. Have you seen problems with "not-so-simple" open source softw

          • by georgewilliamherbert ( 211790 ) on Thursday June 01, 2006 @03:45AM (#15442786)
            I use plenty of stuff for which I have the source code. Going back to the 4.2mumble BSDs, through SunOS, Linux, Solaris, the various x86 BSDs, and plenty of applications (this is Mozilla I'm /.ing with, and before that a long line of other open source browsers). I have no problem with installing large Linux farms, using Apache for an enterprise web deployment, using MySQL for moderate sized databases (or PostgreSQL, though I haven't deployed it personally).

            Tape backup... NBU wins. Legato's a close second. Sorry, charlie. Open source as a category does not suck. The open source backup stuff doesn't suck, for small to medium sized sites. It's not enterprise class, though, and most of the trick to succeeding in IT is knowing when the tools you use aren't applicable anymore and how to figure out what are.

            NBU can't RAIT, but it can stream across multiple tapes, and can write duplicate tapes if you want redundancy. And you can extract the files off tape with tar if you have to.

            Amanda certainly doesn't suck, but it's not NBU.
            • most of the trick to succeeding in IT is knowing when the tools you use aren't applicable anymore and how to figure out what are.

              I agree entirely with this statement.

              The open source backup stuff doesn't suck, for small to medium sized sites. It's not enterprise class....Amanda certainly doesn't suck, but it's not NBU.

              In what way have you found that Amanda does not scale? How have you found the proprietary software to be better?

              NBU can't RAIT, but it can stream across multiple tapes, and can write duplicate

              • NBU advantages:
                • Master server / slave (media) server
                  • Central management point for the whole enterprise's backups (master server)
                • User friendly restore management for end users
                • Application-aware hot / warm backup plugins for enterprise apps like Oracle, SAP, Peoplesoft, Siebel, Informix, Sybase, Exchange ....
                • Optional global management of multiple sites from a master master server
                • Native clients for all OSes including Windows
                • Tape vaulting management software addon
                • Support for arbitrarily large tape libra
      • Try TSM. DR is one of its strongest suits!

        It's really pretty darned incredible. One command, and your TSM environment is rebuilt. We use the DR capabilities multiple times per year. Works great.
    • I've used Amanda, Bacula, NetBackup, Networker, and by far the best of the bunch for enterprise-size networks is TSM. Easily. NetBackup is something I still have cold sweats and nightmares about; OK, not quite nightmares, just the occasional cold sweat. It's really a small-network system that has been kludged up to "enterprise" class. TSM was designed for managing large network backups from the start.

       
      • I would agree with this, having used various of the products mentioned, with the following comments ...

        1. Be aware that TSM is quite expensive!
        2. If you go with TSM, get decent training for it. I have worked with several systems that were set up incorrectly because the person(s) setting up the TSM system had not had sufficient training to configure things properly.
        3. (Related to 2), make sure you know how to recover your TSM system in the event of a full DR, (not difficult if you know what you
    • I am surprised you did not mention Tivoli Storage Manager. TSM just about rocks for these requirements and is ultra scalable. You can also set it up in such a way that the backups will simultaneously go to multiple tapes for the most critical data and that data can be periodically audited for readability etc.

      http://www-306.ibm.com/software/tivoli/sw-atoz/indexS.html [ibm.com]

      • Hear hear!!

        TSM is by far the best backup product I've ever used .... ever.

        I just don't worry about getting my data back -- I know it's safe. It's NEVER a concern.

        And even if the onsite tape(s) are damaged, TSM is smart enough to call out for the offsite copy so it can rebuild a new onsite copy. Slick. Really, really slick. :)

        I wouldn't even -look- at other products if you're a large enterprise.
    • Another product that is big in Europe and gaining acceptance in the US is Atempo Time Navigator [atempo.com]. It has broad cross-platform support on both the server and client sides, and if you have a substantial OS X install base that you need to back up, it's one of the few products we've identified that can scale to handle hundreds or thousands of Macs. It can also provide "opportunistic" backups for laptops, which, unlike desktops, don't fit well into a predefined backup schedule.

      - Gregg

  • don't make the mistake that one guy did
    the office was in the North Tower --- The "offsite backup" was in the South Tower

    oops
    i would suggest different zip codes at minimum; different time zones would be best
    other than that: Grandfather > Father > Son, with the Grandfather set sent offsite (see the sketch below)
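    Just to make the rotation concrete, here is a minimal sketch of picking tonight's tape set; the labels, the "first Friday" schedule and the shell are assumptions, not any particular product's scheme.

        #!/bin/sh
        # Grandfather/Father/Son label picker (illustrative only)
        dow=$(date +%u)   # day of week, 1=Mon .. 7=Sun
        dom=$(date +%d)   # day of month
        if [ "$dow" -eq 5 ] && [ "$dom" -le 7 ]; then
            tapeset="GF-$(date +%Y%m)"      # first Friday: monthly full, sent offsite and kept
        elif [ "$dow" -eq 5 ]; then
            tapeset="FATHER-W$(date +%U)"   # other Fridays: weekly full, rotated onsite
        else
            tapeset="SON-$dow"              # weekdays: daily incremental, reused each week
        fi
        echo "load tape set: $tapeset"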
    • If you live in Southern California, there are four seasons:

      Fire, Flood, Mud, and Earthquake

      In which case, the best bet for off-site backup is out of state, like Las Vegas or something. This also gives you an excellent excuse for monthly road trips to "check on the quality of the backups".

      That said, for simple off-site backups, services like MOZY.com do just fine for a small business. Otherwise, something like LiveVault.com is recommended. There are plenty of vendors out there.

      Another thing is the

      • Fire, Flood, Mud, and Earthquake

        Close, but no cigar. The four seasons in Southern California are Fire, Flood, Earthquake and Riot. I should know; I'm the one who posted that to rec.humor.funny about fourteen years ago. Besides, Mud is just a subsidiary of Flood.

        • Well, I thought that the mud would act as a lubricant to the earthquake faults, setting up Earthquake season. Earthquakes then cause more fires, fires burn off the ground cover enabling floods when the rain comes, which creates the mud that enables the earthquakes.

          So it becomes a nice natural cycle for California.

          Riots work well as part of a slightly different cycle.

          So you are the guy with the sideburns? excellent.

          Although there seem to be earlier mentions of that phrase in various versions in other groups [google.com] prio

          • Well I thought that the mud would act as a lubricant to the earthquake faults, setting up Earthquake season.

            Not unless the mud can seep 5 miles underground or more. And, yes, I'm The Guy With The Sideburns. It's a long story, and doesn't belong here. Glad to see I'm recognized. I'd have used Sideburns as my handle here but it was taken.

    • by a9db0 ( 31053 ) on Thursday June 01, 2006 @11:56AM (#15445654)
      i would suggest different zip codes at minimum; different time zones would be best

      Sounds funny but very true. Backups across town aren't terribly useful if across town is flat too. Sound far-fetched? Ask a sysadmin in Miami how far away he ships his backups. If he was there when Andrew visited, I'll bet they're in New Mexico.

      This may seem a tad offtopic, but it is relevant:

      You have to think through both distance from and access to your backups as part of disaster recovery planning. Backup isn't just recovering the CEO's email, though that is (hopefully) a far more frequent occurrence than recovering from a hurricane/fire/mudslide/blizzard. Easy access to the backup media is important for daily operations. Recovery from disaster is quite a bit more complex. Your backup solution needs to cover the full spectrum, from yesterday's lost spreadsheet to an area flattened by Mother Nature.

      Personally, I keep two backups - one here locally, one 1000 miles away in another state. Backup to CD here, online rsync in NC.

      "Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway." - Variously attributed, frequently to Andrew Tanenbaum
      • "Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway." - Variously attributed, frequently to Andrew Tanenbaum

        I was in a meeting with the late Dr. John Hendrickson in the 1990s or '80s, and file transfer options for moving large files between the Academy of Natural Sciences in Philadelphia and the Benedict Estuarine Research Labs near Washington, D.C. were being discussed. Today, we'd transfer the data constantly over a broadband connection while making local backup ar

    • I'm not in IT for my company so I only know part of it, from observation mainly.

      First: All important files are to be kept on network fileservers - big RAID boxes which keep backups automagically, as configured, as part of their normal operation. All workstations automount them, all home directories are on them, laptops sync to them when on LAN, etc. (There are also "scratch" filesystems for temporary files - build intermediates, chip simulations and their results, etc. These are cheap, fast, and non-red
  • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Wednesday May 31, 2006 @09:51PM (#15441113)
    This will take a LOT of research on your part.

    You'll need to identify each application that is being used, where its data is being stored and what type of "backup" is needed for it.

    Don't forget to include "backups" of the system software. There's nothing more annoying than having to rebuild a system when you have a backup of the data but can't find the install CD.

    Older *nix systems were far easier than the "modern" PC-based servers. I could back up my old Sequent box to a bootable tape; if anything went wrong, I could boot the tape and rewrite the system. This is somewhat supported now on some of the PC-based servers.

    Anyway, back to the "backups". Once you have the systems identified, then you'll need to look at what scenarios you'll need to plan for.

    #1. Server crash.
    The data on the disk is destroyed. The OS is destroyed. But the hardware is okay.

    #2. The building burns down.
    All of your servers are now smoking heaps of plastic. So is your desk. And all the CDs you had.

    #3. 5 years from now someone wants a critical policy that was deleted 3 years ago.

    I spend most of my time kicking co-workers to get them NOT to just dump data anywhere that has free space and NOT to throw up a new web server without telling me.
    • "You'll need to identify each application that is being used, where its data is being stored and what type of "backup" is needed for it."

      I second this. Nothing's worse than someone telling you "back up this system, full once a week, incrementals every other day, all local drives, blah blah" and then not telling you they've got some database on it (you can't back up a live database by just copying the files; see the sketch below). Of course, when failure hits, guess what needs to be restored and isn't usable?
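      A minimal sketch of the dump-first approach for that database case, assuming MySQL with credentials in ~/.my.cnf and an InnoDB-style engine; other databases have their own equivalents (exports, hot backup modes, and so on).

          # take a consistent logical dump, then let the file backup pick up the dump,
          # instead of copying the live table files out from under the server
          mysqldump --single-transaction --all-databases | gzip > /backup/mysql-$(date +%F).sql.gz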
      • Or how about the database the backup software itself uses? I have seen other people's solutions go down, and after rebuilding the backup machine there was no record of what was on which tape to restore. I had to re-index each tape and restore from there, then try to check the files for the newest ones (incrementals spanning different tapes, as well as recycled tapes, so at best you have the changes to a few files but not the original to restore onto). It took weeks instead of hours or even days to get it cloe enoug
    • Option #3 brings in a whole new discussion on Retention Policy.

      When you're backing up a few TB of data, as we do at the company I work for, with many cost constraints imposed, you have to look at what is most likely to be requested from the backups.

      Options 1 & 2 are both Disaster Recovery scenarios. The only difference being the scale of the disaster.

      Option 3 is "an" end-user stupidity scenario, which goes along with the "oh crap, I accidentally hit shift+delete instead of shift+end to highlight files" one.

      we h
    • You missed a few:

      #4: User deletes a file deemed by somebody important to be critical and you have to get it back.

      It's amazing how much money is spent planning for the once-in-a-lifetime Twin Towers disaster event, and how little is spent on the daily occurrence of user error. Unfortunately, "the user is an idiot" doesn't wash when it's the company's financial records or the birthday party shots of the CEO's kid.

      - Don't permit users to save things to their local disks. Ensure all files go onto a share that can be ce
  • by MBCook ( 132727 ) <foobarsoft@foobarsoft.com> on Wednesday May 31, 2006 @09:57PM (#15441140) Homepage
    This may just be a wording issue, but it looks like you want to back up the desktops. Is that true?

    I can't think of any good reason to do that. All the important data should be on the server. If the user wants to save a picture on the local disk to use as a background or something that's one thing (although I wouldn't allow that myself) but nothing important should be on those disks.

    Past that, I don't have the experience to help you. All I can do is reiterate what another poster has already put up. Check the backups. I can't tell you how many stories I've heard about backups that "went fine" until someone needed data. Stories where the tapes were so old they almost shredded themselves in the drives. Stories of "backing up" for at least 6 months onto a cleaning tape (I bet the drive was in good condition though!). Stories of the backup data being garbage because of a faulty cable or something. The backup is worthless if you can't get the data back off it successfully.

    • I can't think of any good reason to do that. All the important data should be on the server. If the user wants to save a picture on the local disk to use as a background or something that's one thing (although I wouldn't allow that myself) but nothing important should be on those disks.

      Parent is correct - to an extent. There is still probably a requirement to bring a failed desktop back up and running quickly if there is a problem that requires a desktop restoration.

      If centrally storing data is the way to take c

      • I agree. I assumed that the image of the computer(s) would be included in the backup. Having those images will save you a ton of time, even if each image is only for 50 computers.

        That said, there is a big difference between backing up the images and backing up each individual desktop in the company.

  • I dump stuff on undergrads. They've got to be good for something.

    /heh, just Kidding. I just mirror my scsi disks with a big ultra-ATA device weekly and daily.
    • > I just mirror my scsi disks with a big ultra-ATA device weekly and daily.

      You might like my backup software, Chroniton [cpan.org]. It will happily run from cron and make incremental backups (and allow you to easily restore from one). It also stores everything on the filesystem, so even if my software crashes and burns (which it won't; it's heavily tested in practice and with unit tests :), your data will still be just fine. All of your files' metadata is safely versioned and archived as well. Take a look, it's
      • Thanks, but I have a couple of cron jobs running with my own bourne shell scripts for backup. Restore is easy since I just rsync when backing up.
      • Version .03? Is the developer still only on the first line of code?
        • > Version .03? Is the developer still only on the first line of code?

          Perl programs traditionally start at 0.01 and move up by "hundredths" from there. Development releases contain an underscore, to prevent confusion. For example, the first test release of Chroniton was 0.01_1.

          If it makes you feel better, just mentally multiply the version number by one million... then my software is at version 30000!
  • by Millenniumman ( 924859 ) on Wednesday May 31, 2006 @09:58PM (#15441148)
    My backup strategy consists of hoping that my hard drive doesn't fail before I get a new computer/hard drive. It's worked so far, even with a laptop.
  • The best way I've found yet is to back everything up to /dev/null. It's incredibly fast, and it saves you storage space for backup tapes.
  • I pray.
  • I use and recommend rsnapshot [rsnapshot.org] for taking disk-to-disk backups of Unix-based servers and PCs. It has a *really* slick directory structure where each daily/weekly/monthly backup directory is a *full* snapshot - but because it uses hard links, only changed files take up space more than once. Also, because it uses rsync, it only copies changed files across the network, and it can use ssh with no problem.

    Its downsides: it's basically just a wrapper for rsync. It requires a lot of babysitting (if your backups fail for some r
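    For anyone curious what the hard-link trick looks like underneath, here is a minimal sketch of the idea rsnapshot automates; the host name, paths and symlink convention are assumptions.

        #!/bin/sh
        # one dated directory per night; unchanged files are hard-linked to the
        # previous snapshot, so every directory looks like a full backup but only
        # changed files consume new space
        today=/backups/host1/$(date +%F)
        latest=/backups/host1/latest
        rsync -a --delete --link-dest="$latest" root@host1:/home/ "$today/"
        rm -f "$latest"
        ln -s "$today" "$latest"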

  • 1 click down, yell "Clear" and hit the gas.
  • Paper (Score:5, Informative)

    by NetDanzr ( 619387 ) on Wednesday May 31, 2006 @10:28PM (#15441319)
    My backup copy is paper. Granted, it gets a little awkward when I move, as I currently have six large file boxes of that stuff, but I know that as long as I keep it reasonably safe from humidity/mice it'll outlive all my computer media and file format changes.

    At work we do the same, only to a larger extent. We've got an on-site and off-site storage, and each piece of information is printed in two copies to be stored at each. All that in addition to your usual Veritas tape and CD-RW backups, which we do for convenience of restoring lost data, but which we don't trust enough to eliminate paper copies.

    • If you are making backups every week, you only need them to last one week. Paper makes sense for an archive if you expect to need the data long after you have stopped creating new data, but while you are working, a short-term, cheap, space-efficient and environmentally friendly solution is better.
    • When has the paper backup proved useful? I can't imagine running even a small/medium business and being able to retrieve useful information after a disaster. And then what do you do? Rekey it into a brand new system?

      I think paper-based backups would be fine if you had a paper-based business, but if you use databases to make it easier to get at stuff, a paper-based recovery seems crazy.

      You'd be far better off IMHO to get your tape backups to a state where they are reliable. Even if that means running a f
      • When has the paper backup proved useful? I can't imagine running even a small/medium business and being able to retrieve useful information after a disaster. And then what do you do? Rekey it into a brand new system?

        That's why I mentioned that we also keep electronic copies, for convenience, but ultimately the paper copies are the primary backup. It works very well even in a database-driven environment, as long as you don't update fields in the database but add new rows instead. And that's exactly what we're doing at

      • It can't be *that* hard to restore backups from paper... right?

        http://ars.userfriendly.org/cartoons/?id=19971127 [userfriendly.org]
  • by John the Kiwi ( 653757 ) <kiwi.johnthekiwi@com> on Wednesday May 31, 2006 @10:41PM (#15441390) Homepage
    I think you're jumping the gun a little here.

    The first question you need to ask is:

    What is the time frame for your servers to be restored in should servers and such completely fail?

    If you don't know the answer to that question, then how does your company know how much money to budget? Are you bound by HIPAA or Sarbanes-Oxley? You should know how much your company's data is worth before assigning a budget.

    Are some of your database servers supposed to be up 24x7? Maybe you should look at distributed transactions across databases located at different sites, so that if one server fails you still have everything live. Have you timed how long it takes to rebuild your servers to confirm the time allotted in your disaster recovery plan? Has your company considered imaging servers? Is it possible to?

    Have you consulted your disaster recovery plan? Have you checked with suppliers to see how long replacement parts will take to order? I can't tell you how many administrators get caught out by buying an expensive tape drive only to have it fail along with the server, so that nothing can be restored until a new one can be sourced.

    Without requirements and a disaster recovery time frame, you will never be in control in the event of a disaster.

    Your company's board of directors/owners will need this information. It's called operating under conditions of "due care and diligence".

    If something goes wrong and you can't tell your boss exactly what is required and how long it will take to recover then you're working in the wrong job - a big part of being a network administrator is planning for ANY event.

    Oh, most of the time my customers are happy with Robocopy. I hate paying for expensive hardware and backup software solutions when I can write something much simpler and document it properly rather than depending on someone else's buggy software. Of course this depends on the industry and their requirements.

    Make sure that your boss completely understands these questions and issues. Ask him to see the current Business Continuity plan and Disaster Recovery documentation before you touch anything on those servers - can't stress that enough.

    Hope that helps, sorry it's brief but if you're in charge of backups it's your job to be ANAL and PEDANTIC.
    • Mod parent up (Score:2, Interesting)

      by tengu1sd ( 797240 )
      Before you start spending money you need to know what the company requirements are. There are excellent tools and options, including real-time RAID-1 over multiple sites, but the business case will drive your requirements.

      Servers - how long can they be down? Do you have replacement plans in case your data center gets hit by the next earthquake/hurricane/fill_in_the_disaster. Having tapes off site means nothing if you don't have hardware for restore. Can you get Hardware X if everyone else is looking f

  • Real men don't use backups, they post their stuff on a public ftp server and let the rest of the world make copies.
  • What's a good inexpensive backup package for Windows that saves data encrypted to tape?

    The Help in Backup Exec mentions that the password (if specified) will be required when accessing the files from within any Backup Exec program. I assume that means the data on the tape is not encrypted? I searched Symantec's Backup Exec 10d's online PDF manual but "encrypt" appears to be available only for DLO (Desktop/Laptop Option).

    Maybe NovaBACKUP http://www.novastor.com/pcbackup/backup/n_backup.html [novastor.com] ?
    • It's not something I've ever looked at, but I kind of doubt that encrypted backups are likely to be popular enough at the "serious backup" level for the simple reason that tape manufacturers advertise the (average) compressed capacity of the tape, with compression being done in hardware by the drive. This has generally been about twice the actual raw, uncompressed capacity of the tape. (It may be higher now; it's been a couple years since I went to virtual tapes.) Well-encrypted data is uncompressible, o
      • In an era of larger/cheaper tapes/drives and news stories about the tapes being stolen (or unauthorized reading/copying), I should hope encryption is at least an option in the (several-hundred dollar) tape backup packages. Besides, there are plenty of encryption programs for Windows and USB sticks. I want this available in the scheduled backup tape job definitions : )
    • AMANDA does encryption. It even does RAIT, tape changes, tape spanning, client compression, and so on. I've used it for 8 years and have yet to be disappointed.
      • While it is true that AMANDA can do client- or server-side data encryption and/or transport encryption, I'd not suggest using it for Win32 servers (as the grandparent asks). If you're able to put in a cheap *nix backup server, running AMANDA under Cygwin certainly works. But I don't know if client-side encryption works (client-side compression hasn't worked for Cygwin clients), and I don't know how well using AMANDA with kerb/ssh on Cygwin works.
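        For illustration only (not any of the products above, and not the Windows package the grandparent asked for): on a *nix backup host you can encrypt the stream before it ever touches the tape. The device path, key file and cipher choice here are assumptions.

            # write: archive, encrypt, then stream to tape
            tar -cf - /data | openssl enc -aes-256-cbc -salt -pass file:/root/.backup.key | dd of=/dev/nst0 bs=64k
            # read back and verify: reverse the pipeline and list the archive
            dd if=/dev/nst0 bs=64k | openssl enc -d -aes-256-cbc -pass file:/root/.backup.key | tar -tvf -

        Note the caveat raised above about hardware compression: an encrypted stream won't compress in the drive, so plan capacity around the raw tape size.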
  • by OiBoy ( 22100 )
    We moved all of our servers to VMware virtual machines. Now we back them all up every night, some of them we even back up multiple times a day. We tried esxRanger first, but it took too long (back up of all of the VMs took 4 days) and used too much space. Then we moved to esXpress, which does differential backups of VMs, so it is MUCH faster and uses MUCH less space. We keep 30 days worth of backups online, but once a week we cut tapes of the monthly full and that week's differentials and ship it off-si
  • ...that says "Severe Tire Damage!"

  • Who bothers with backups? I've personally never wasted any time backing

    A fatal exception 0E has occurred at 0137:BFFA21C9. The current application will be terminated.

      * Press any key to terminate the current application
      * Press CTRL+ALT+DEL again to restart your computer. You will lose any unsaved information in all applications.

                      Press any key to continue _
  • by SlappyBastard ( 961143 ) on Wednesday May 31, 2006 @11:16PM (#15441569) Homepage
    Please God... please say someone took the project home on CD, or we're fucked!
  • Get a real file server, a small tape robot and Veritas.
  • I don't give two hoots for a backup policy. What you need is a data recovery policy: when will I need to recover data, and how will that be done?

    I've been working with Symantec (formerly Veritas) NetBackup in my workplace for the past 6 years. About 6 months ago I became one of the backup admins, and the biggest barrier I have to break with our clients is the backup mentality - "I must back up everything all the time..."

    Generally your data recovery will happen from two triggers:

    1. A user broke his ow
  • by tverbeek ( 457094 ) on Wednesday May 31, 2006 @11:25PM (#15441608) Homepage
    ...actually turn your upper body around, so you can look in the direction you're driving.

    Think of the children!

  • Rsync is very good at keeping two servers in sync with minimal bandwidth and disk activity, and can be configured so that you never lose a past revision. I have it set up so we have the latest copy, two weeks of revisions, and one previous revision for each file on every file share.

    Some special consideration is needed for Windows servers. Some files get locked so they can't be read by rsync. We're not backing up anything that we'd run into that problem with, and we back up during a period of inactivity, but
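    To keep past revisions as well as the current mirror, one approach (a sketch only; the paths, host name and schedule are assumptions) is rsync's --backup-dir option:

        #!/bin/sh
        # mirror the share to the backup host; anything changed or deleted since the
        # last run is moved into a dated revisions directory on the receiving side
        # instead of being overwritten or removed
        rsync -a --delete \
              --backup --backup-dir=/srv/revisions/$(date +%F) \
              /exports/share/ backuphost:/srv/mirror/share/

    Pruning old revisions (say, keeping two weeks' worth) is then just a cron job that deletes dated directories past the retention window.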
  • You write that you're archiving your old backups. This is good, of course, for several reasons: you need multiple copies in case the newest one isn't usable, and you may need to access old data. However, how far back do you plan to go in saving old data? If you just keep all backups from now on, you'll have an endlessly rising storage cost, because they'll take up more and more room while the chances you'll need the older data get smaller and smaller. Part of creating a good backup policy is deci
  • Remember to change the tapes!

    the cron scripts don't work otherwise!

    #!/bin/sh

    # Daily backup script

    rm -rf /var/db/mysql_tmp
    mkdir /var/db/mysql_tmp
    /usr/local/etc/rc.d/000.mysql-server.sh stop
    cp -R /var/db/mysql/./ /var/db/mysql_tmp/
    /usr/local/etc/rc.d/000.mysql-server.sh start
    find /home/*/public_html /home/*/Mail /var/mail /usr/local/www/ /etc -newer /root/backup/last_backup -and \( -type f -or -type l \) > /root/backup/daily_increment
    find /var/db/mysql_tmp \( -type f -or -type l \) >> /root/backup
  • Paraphrasing a certain Mr. Torvalds:
  • Comment removed based on user account deletion
  • I never NEVER backup. It is futile, a huge waste of time, and a monumental risk. The only time I have ever lost data was while performing backups. Let me give you an example.

    Way back around 1979, it was my first serious development job, and as the junior programmer in the shop I had the onerous duty of performing the weekly backups of our production drive, containing all the code for our accounting software development. We had a big 10Gb Corvus hard drive (the original Winchester) networked to our Apple IIs
    • That's just ducky when the building burns down, the office is vandalized, the hardware is stolen, someone deletes the files, the fire system malfunctions and triggers the automatic sprinkler system, you hit 'delete' when you meant to hit 'enter', it turns out that your source control didn't quite control your source as much as you thought it had, you fire the wrong person, you hire the wrong person, someone does something they shouldn't have been doing and the equipment gets impounded, the bills weren't pai
      • You weren't paying attention. I specifically said that due to my diligent maintenance, I have a 0% hard drive failure rate over the last 20 years. I just retired a server with an Atlas 10K SCSI drive that ran 24/7/365 for over 5 years without a single problem, not even a soft error. That's what happens when you buy quality products, like high-end SCSI drives instead of cheapshit IDE drives.

        Yes, I am invulnerable. My OS and apps are backed up on their original distribution discs. My handmade data is archived
        • I guess I'm glad you feel invulnerable with the backup scheme you've crafted (and make no mistake, it is a backup - despite that you choose to call it archiving), but really you're just playing roulette with your data.

          Now, having offended you, let me agree with some of the things you say. =)

          Your assertion about maintaining a complete system backup is pretty spot on. The data is what you want to keep safe and the applications are perfectly able to be reloaded from the source media (provided, of course, tha
          • Please do not project your hardware neuroses onto me, I'm not using cheapshit white box PeeCees loaded with Windoze malware, I use pro Macs.

            Your backup system isn't a procedure, it is a fetish. You claim your "rework window" is only 24 hours. How many 24-hour time slots have you wasted over the past few years, doing unnecessary backups?
            • Hardware neuroses, Windoze malware, PeeCees and "pro Macs". THAT explains it. You're one of those old-school Macintosh persecution-complex cultists. All becomes clear now. I do happen to have a pair of white-box Windows machines for gaming, though the rest of my gear is low- to mid-grade server-class x86 hardware running Linux or FreeBSD (Tyan and Supermicro stuff). I also happen to have some Mac hardware (although you'd probably sneer at my Powerbook for not being "pro" enough).

              I spend approximately 10 t
        • FUD? Ignoring for the moment that you've proven yourself to be little more than an elitist ass, one who doesn't seem to have much experience in a larger organization, these are not scare tactics. They are simple facts. Just because they run counter to your ignorance does not mean that they don't have applicability in the real world. And, as I said before, the risk isn't the loss of the data as much as the business interruption. All the diligent maintenance in the world isn't going to make a bit of differenc
    • We had a big 10Gb Corvus hard drive (the original Winchester)...

      That didn't sound right, so I did a little checking. FOLDOC [foldoc.org] tells me that the drives got their name because they had two 30meg volumes, rather like the Winchester 30-30. If you really were working with a 10Gig drive, it wasn't a Winchester, and it wasn't in 1979, either, because they didn't have drives that big back then.

      • Non-removable disk drives were commonly referred to as Winchesters back then, even if they were not made by IBM. IBM pioneered the technology that later became universal in the disk drive industry. The original poster probably typed GB when he meant MB.
      • I might be wrong about the size, but these were the original "Winchester" models, now that you mention it I don't recall if they were 10, 20 or 30Mb, all I remember is that they were so huge (to us then) and that they took goddam forever to back up to floppies. The drives were big grey boxes with a translucent plastic lid, you could see the heads move across the platter. Then those were placed inside a Corvus white box. My understanding is that these Winchester drives were originally produced for IBM mainfr
    • No offense, but you've shown your ignorance here. First of all, the original poster is discussing corporate backups, while you're talking about backups for your own personal stuff. What works for your one or two computers at home is not likely to work for a company with hundreds of workstations and dozens of servers.

      Many of your statements just flat out don't make sense when you consider larger scale or corporate computing environments.

      For example: System and app backups are totally useless. Sys configs a
  • Two thoughts related to storage:

    - Consider carefully whether you trust your tape safe. I've seen tapes damaged at temperatures lower than some tape safes are rated for.

    - If you have offsite backups, you should also have offsite tape drives. If your main site is destroyed in some catastrophic disaster, it's not too hard to get emergency replacements for server hardware, especially x86. But urgently sourcing the right model of tape drive (in many cases a model that is a few years old) can be a nightmare. Whil
  • is http://www.avamar.com/ [avamar.com]

    The backup server (or cluster of servers) stores 20KB blocks keyed by each block's SHA-1 hash.

    Smart agents on each backup client chunk each new file to be backed up into 20KB blocks and calculate SHA-1 hashes, which they compare against the backup server.

    If the block is new (not yet on the backup server), the block itself is transferred.
    If the block is old, the backup server stores an extra reference to the block for the client/file.

    The end result is..
    a) a 1000 windows backup clients will res
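    A minimal sketch of that content-addressed idea, written against a local block store rather than Avamar's client/server protocol; the 20KB chunk size follows the description above, and the paths and naming are assumptions.

        #!/bin/sh
        # usage: dedup-store.sh <file-to-back-up>
        store=/backup/blocks                              # one copy of every unique 20KB block
        index=/backup/index/$(hostname)$(echo "$1" | tr / _).idx
        tmp=$(mktemp -d)
        split -a 6 -b 20480 "$1" "$tmp/chunk."            # cut the file into 20KB blocks
        : > "$index"
        for c in "$tmp"/chunk.*; do
            h=$(sha1sum "$c" | awk '{print $1}')
            [ -e "$store/$h" ] || cp "$c" "$store/$h"     # only never-seen blocks are stored
            echo "$h" >> "$index"                         # the file becomes an ordered list of hashes
        done
        rm -rf "$tmp"

    Restoring is just concatenating the blocks named in the index back together, which is why identical OS and application files across a thousand clients cost almost nothing extra to keep.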
  • I don't bother with backups. I've got a airtight policy in case of a HD crash or any other form of data loss:
    1)Look shocked and terrified.
    2)Yell.
    3)Scream.
    4)Pull hair.
    5)Bang head against wall.
    6)Sit quietly sobbing in a corner.
    7)Kick the cat.
    8)Replace HD. (if necessary).
    9)Reinstall software.
    10)Kick cat again.
    11)redownload mp3s, movies, games and pron.
    12)Feed cat.
    13)Mail goatse.cx pictures to random innocent people as an act of pointless revenge.
    14)Make futile threats to a deity that if it happens again
  • AMANDA is really great software. In my past job, we used Retrospect (then from Dantz). That was a nightmare--it used some proprietary archiving format & we weren't able to retrieve some things. AMANDA uses standard dump or tar files (well, as standard as 'dump' is, I guess), so I'm confident that that'll never happen. It also has a first-class scheduling system. Every night, we fill almost exactly one full tape. There are very few disks which don't get a nightly incremental & we have it config
  • So far the best "backup" software I've used is rsync.

    I used to work at one of the world's most well-known web hosting companies, where among other things I ran their backup system. It started out with Arkeia and a 120-tape library with 6 AIT3 drives. Arkeia was crap, though (this was 3 years ago); it was such a pain to set up, and trying to restore ANY amount of data would literally take days just to scan its local database. Trying to restore just one file would take 6 hours just for it to scan its local database..
  • I run the same version of my OS on QEMU and have it rsync the data.

  • I have a rosary backup policy. My preferred saints to pray to are Mary, Don Bosco, St. Ignatius of Loyola and St. IGNUcius.
  • My company does a couple of things that I thought I'd share with you. First, we run a multi-terabyte SAMBA fileserver, on which we have both departmental/project shares and every user's 'My Documents' folder. Second, we have a group policy that maps everyone's 'My Documents' folder to the appropriate SAMBA fileserver directory. Finally, my IT policy is communicated to all new and existing company computer users explaining this setup, and the fact that - aside from mail - the only user data backed up is l
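    As a purely illustrative sketch of that kind of setup, the share side might be nothing more than an entry like this in smb.conf; the share name, path and group are assumptions, and the 'My Documents' mapping itself is done with a group policy folder-redirection setting on the Windows side.

        [userdata]
            path = /srv/userdata
            valid users = @staff
            read only = no
            ; every user's 'My Documents' is redirected here, so this one tree
            ; (plus mail) is all that has to go to tape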
    • For a small additional fee, you can ensure your tape data is encrypted.

      The problem with encrypting backups is that if - on the backup - one bit of data becomes corrupted, the entire backup is likely to be worthless. Since most times when doing a restoration of data, this corruption happens when you need the data most (Murphy's Law), you will come to regret the decision. At least on an unencrypted tape, you can sometimes (with a lot of work) start in the middle of the tape (or other backup medium) and work

  • First, determine your needs. Are you backing data up for disaster recovery purposes, data protection purposes, or archiving purposes to meet regulatory requirements? Or maybe some combination of the three? How long does this data need to be stored?

    The most common technique is a weekly full backup with daily incremental backups. Depending upon your file retention requirements, you may be able to re-use the incremental tapes or you may have to append to them and then cycle them out when they are full.
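    As an illustration of that common scheme on a single host (not a product recommendation), a crontab sketch using GNU tar's listed-incremental mode might look like this; the paths, times and retention are assumptions.

        # Sunday 02:00 - reset the snapshot file and take the weekly full
        0 2 * * 0   rm -f /backup/home.snar; tar -czf /backup/home-full.tgz --listed-incremental=/backup/home.snar /home
        # Monday-Saturday 02:00 - archive only what changed since the previous run
        0 2 * * 1-6 tar -czf /backup/home-inc-$(date +\%u).tgz --listed-incremental=/backup/home.snar /home

    Whether you can reuse the incremental archives or must retain them then follows directly from the retention requirements mentioned above.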
  • We use what we call a "finger drive" (not to be confused with thumb drive). After a catastrophic failure, we are all driven to finger pointing.
  • But anyway, I'd actually try to apply my 1-person-shop strategy even if I were maintaining that much.
    It may sound crazy to most people, but it goes like this:

    1) All critical data on central servers. No critical data on workstations, ever.

    2) Critical stuff for MS machines stored on Unix via Samba (assuming you're using Ethernet and not some turbo protocol I don't know of)

    3) A guy responsible for backups, including taking this week's backup home, plus a stand-in for him. Both have the necessary root access and have specific paid tim
  • Here's what I do when I need to back up:

    1. Depress the clutch pedal.
    2. Put the gearshift into "Reverse"
    3. Slowly let out the clutch pedal while pressing lightly on the accelerator pedal

    It works really well, and I can almost always recover from those backups too.

  • I zip all my files and name it "Naked pictures of (insert star name here)". Then I publish the torrent. Cheap distributed offsite backup.

    SD
  • I use a table-driven script calling rsync --link-dest onto Coraid AoE racks, then archive offsite to LTO3. I back up everything nightly.

    But the guys here who wanted to buy a product rather than build a solution spent months researching all the alternatives; they even got demo hardware and software and trialed the major players on site. Their finding was that CommVault knocks the doors off everything out there for really large volumes of data on multiple operating systems. Veritas and Legato were among the o

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...