Linux Software

Disk Repair Tools for Linux?

Sujes asks: "Despite all the hype about how Linux is more stable or better than Windows, it seems ironic that nobody talks about any kind of disk/file repairing tools on Linux. In fact, I can't even restore a file that is accidentally or otherwise lost by a simple rm command. Does anyone know where should I look for these valuable recovery tools, if such things even exist on Linux?"
This discussion has been archived. No new comments can be posted.

  • shame on you for suggesting Win2k to "fix" a problem with Linux. We are preparing for a Win2k migration and everybody is banging their head against the wall because Windows 2000 Server is so unbelievably obtuse and difficult to work with. It's to the point where our dept (an NT shop!) is poring over Samba books to figure out how we can avoid Win2k as much as possible. Windows 2000 is cool and unbelievably versatile, and A.D. has some incredible potential in the enterprise. But Linux makes it easier, more stable, and more fun.
  • by Anonymous Coward
    alias rm='rm -i'

    tada.

    More usefully, there is a utility somewhere with a libc wrapper that turns unlink() into mv(file, ~/.trash/file) and has a reaper daemon.
    Don't remember its name...
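    The idea can also be sketched in plain sh (a hypothetical stand-in for that utility, not the thing itself; the 'trash' name and ~/.trash location are made up):

```shell
# trash: a minimal stand-in for the unlink()-wrapper idea --
# moves files into ~/.trash instead of deleting them
trash() {
    mkdir -p "$HOME/.trash"
    mv -- "$@" "$HOME/.trash/"
}
```

    A cron job playing the "reaper daemon" role, emptying ~/.trash of old files, would complete the picture.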
  • ... http://www.praeclarus.demon.co.uk/tech/e2-undel/ This is a collection of programs and procedures that (as previously mentioned) offer no guarantees, but should certainly be a part of any admin's toolkit. Not just for problems, but for the educational value as well. Good stuff!
  • I personally don't want an rm command to be undoable. Ever. If I'm using what I consider a secure OS, I need to consider deletes also secure. That means remove, and wipe over with \0's. Think of the security consequences of your sysadmin being able to access those "for your eyes only" documents that you decided to delete for safety, only to be undeleted...

    IIRC NT actually takes this approach too.

    As far as disk recovery tools go - that's what fsck does. It just comes with the OS - we don't consider this a tool that you should have to purchase separately.
  • I know all too well that when you delete stuff it should be gone, and I agree -- despite the fact that I once killed two hours of work by careless use of 'rm -r'... What concerns me is massive fs crashes. I had one once, rendering an ext2 filesystem useless, and it had important data on it.

    I know that if I knew the specs for ext2 the way I know some other things, I'd write up a quick program to recover the odd file, but I don't. I grabbed the most recent 'rescue disk' I could find and ran e2fsck, with no luck.

    I feel that fs recovery should be paid more attention. I know those files are still recoverable, but the fs itself is unmountable and (according to e2fsck) irreparable. Sucks to be me.

    Now, I don't have the right to bitch. My philosophy with linux revolves around why I started using it. The reason is twofold: Power, my god, does it ever have power, especially when combined with my second reason; learning, you must learn -- and you can. Honestly, I should be learning e2fs like I learned anything else I had to do for myself, and write said program, and release it to all of you folk.

    Perhaps augment e2fsck? Whatever. But I don't have the time. Alas, full time jobs -- boon to money, bane to free time.

    ---
    It's not smut it's data
  • That means remove, and wipe over with \0's.

    I don't think rm does this by default. There is an ext2 attribute to cause deleted files to be securely deleted, but I am not sure if it's actually implemented. The chattr man page says only c and u are unimplemented (compress and undeletable), so maybe it is...

    from man chattr:
    When a file with the `s' attribute set is deleted, its blocks are zeroed and written back to the disk.

    This is not exactly the most convenient way to go if you want every file you delete to be securely deleted, however; in that case, exchanging rm for wipe is probably what you want.
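    For reference, setting that attribute looks like this (a sketch; whether 's' is honoured on delete depends on the kernel's ext2 code, and some filesystems reject ext2 attributes outright):

```shell
# mark a scratch file for ext2 "secure deletion": its blocks are
# supposed to be zeroed on removal, *if* the kernel implements 's'
touch /tmp/eyes-only.txt
if chattr +s /tmp/eyes-only.txt 2>/dev/null; then
    lsattr /tmp/eyes-only.txt   # attribute list should now show 's'
else
    echo "this filesystem does not support ext2 attributes"
fi
rm -f /tmp/eyes-only.txt
```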

    --
    blah blah blah
    • Think of the security consequences of your sysadmin being able to access those "for your eyes only" documents that you decided to delete for safety, only to be undeleted...

    If you have plaintext documents you want to keep secret on a system whose sysadmin you don't trust, you have much bigger things to worry about than undeletion. If the ability of the sysadmin to undelete files is a security issue, it means you should be using an encrypted fs.

  • by Anonymous Coward
    'Think of the security consequences of your sysadmin being able to access those "for your eyes only" documents '

    Here's security -- If you don't own the machine, don't put any personal data on it. I know that your work has a really nice T3, but download your porn at home.
  • by retep ( 108840 ) on Tuesday January 25, 2000 @05:41AM (#1339428)

    Due to the design of ext2fs, and *many* other filesystems, there is no reliable way to undelete a file. In DOS the directory entry would be left behind and you *might* be able to undelete your file, *if* the file wasn't fragmented. Just about any other good filesystem, such as ext2fs or NTFS, quickly reclaims the space taken up by deleted files for performance and doesn't save the directory entry.

    If you do delete a file you might be able to recover it, but there aren't many ways to automate ext2fs undeletion. For help see the Ext2fs-Undeletion mini-HOWTO.

  • Unfortunately, IIRC, ext2 as it stands is not immune to all crashes. That is, there exist certain situations in which the structure of your file system cannot be restored after a crash. This most often manifests as files turning up in /lost+found.

    Unmountable FSes, however, shouldn't happen unless you SERIOUSLY mangle things. In your case, is it a superblock problem, and is it possible to manually request the use of a backup superblock? Alternately, you might be able to mount the FS without the preceding FS check and get some of the data off it.
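    Manually pointing e2fsck at a backup superblock looks roughly like this (a sketch: the real target would be the damaged partition, e.g. /dev/hda2, but here a throwaway image file stands in so nothing is at risk; block 8193 is the usual first backup for 1k-block ext2, and mke2fs prints the real list at creation time):

```shell
# build a 16MB scratch ext2 image to stand in for the damaged partition
dd if=/dev/zero of=/tmp/scratch.ext2 bs=1024 count=16384 2>/dev/null
mke2fs -F -q -b 1024 /tmp/scratch.ext2

# check it via the first backup superblock instead of the primary one
# (exit status 1 here just means e2fsck wrote repairs back)
e2fsck -f -y -b 8193 /tmp/scratch.ext2
echo "e2fsck exit status: $?"
```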

    As far as making a more fundamentally robust FS for Linux goes, there are a number of projects out there working on log-structured filesystems, some of which will be usable in the near future.
  • I've done the following to solve this problem in the past (not actually with ext2, but the approach works for any fs):

    If you know "keywords" in the data (i.e., you have some means of recognizing your data from context), then you can pretty easily do the following (in a shell script -- you will want to tinker a bit):

    • use 'dd' to read the data off of the raw partition (i.e., don't mount it, just use /dev/hd<A><N> or /dev/sd<A><N> -- wherever your disk is). If the partition table is also hosed, you can just omit the partition number and you will get the whole disk
    • set the blocksize (e.g. bs=4k) to be the blocksize of the fs you created (if you remember it)
    • pipe into "split" to divvy the result into blocksize (or other handy size) chunks
    • if you are looking only for text (by far the easiest to recover) you can get rid of the extra info using strings -a. You can do this either before or after the split, but be aware that after using "strings", the blocksize will have no relation to the blocks you are getting from "split"
    • use "grep" to determine if any of your keywords are in a block. I usually build a list of occurrences, so that after this step I can rank the blocks as most likely to be part of the file(s) I want.
    • This should give you a set of blocks which you can stick together in files. I have used this method to recover a couple of things; most notably a novel my sister was in the process of writing (without a backup -- she knows better now).
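    Strung together, the steps look roughly like this (a sketch: the keyword, block size, and /tmp paths are placeholders, and a small image file stands in for the raw partition so it is safe to try; for real recovery DISK would be the unmounted /dev/hd<A><N>):

```shell
#!/bin/sh
DISK=/tmp/fake-disk.img   # stand-in for the raw, unmounted partition
BS=4096                   # the filesystem block size, if you remember it

# fabricate a "disk" with some lost text buried in it, for the demo
dd if=/dev/zero of="$DISK" bs=$BS count=8 2>/dev/null
printf 'chapter one of the lost novel' |
    dd of="$DISK" bs=1 seek=9000 conv=notrunc 2>/dev/null

# read the raw blocks and divvy them into block-sized chunks
rm -rf /tmp/chunks && mkdir -p /tmp/chunks
dd if="$DISK" bs=$BS 2>/dev/null | split -b $BS - /tmp/chunks/blk.

# rank chunks by keyword hits, most likely blocks first
for f in /tmp/chunks/blk.*; do
    n=$(strings -a "$f" | grep -c 'lost novel')
    [ "$n" -gt 0 ] && echo "$n $f"
done | sort -rn
```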

  • There are many ways to accidentally destroy files. Overwriting a file with the wrong stuff is just as bad as removing the file.

    It isn't a backup system, it is a restore system. All people care about is if the restore works. Backups are important, because restore can't work without them.

    I use different mechanisms for recovering old files. I use RCS for some things, because that gives me a changelog back to the birth of the file. I only use it on files that I knew were important when I created or changed them. I use rsync to keep a current copy of the whole system. This backs up things that I didn't know were important until I destroyed them.

  • Unmountable FSes, however, shouldn't happen unless you SERIOUSLY mangle things.

    Well, it is mountable, but unusable. I can't get *any* data off of it. I don't even need full fs repair, I just want to recover a few inodes -- I'd welcome them in /lost+found. I just wish that I could recover the intelligible data that I know is there.

    I doubt it's just a superblock problem, but I'll go ahead and try anyhow (I still have the partition in its crashed state); I've got nothing to lose if that goes wrong anyhow.

    At the very least, thanks for your advice.

    ---
    It's not smut it's data
  • use 'dd' to read the data off of the raw partition

    That is exactly my backup plan. I hadn't thought of doing the blocksize thing, though; it's a great idea (thanks). The only trouble is that much of the data is binary -- worse still, much of that is compressed, so it has no 'keywords' save for the header.
    ---
    It's not smut it's data
  • If you are frightened of unsecured deleting, why don't you use a simple dd command, a la dd if=/dev/null of=file? 100% should be deleted
  • #!/bin/sh

    if [ "$1" = "-rf" ]; then
        /bin/rm "$@"
    else
        if [ "$HOME/.trashcan" = "$PWD" ]; then
            /bin/rm "$@"
        else
            stamp=`date '+%j.%H%M%S'`
            echo "moving files to ~/.trashcan/"
            for arg in "$@"; do
                filenm=`echo "$arg" | sed 's/\//V/g'`
                mv "$arg" "$HOME/.trashcan/$stamp.$filenm"
            done
        fi
    fi


    Then just name this something like 'remove', put it on your $PATH, and make an alias/script called 'dump' that runs 'rm -i $HOME/.trashcan/*'; then alias rm remove (assuming tcsh alias syntax).

    This has the advantage of being able to see where the file came from if you interpret the 'V' for '/'. It's a little slow, but if you're concerned about accidentally deleting a file, it helps a lot.
  • I think you meant to say dd if=/dev/zero of=file count=
    you could probably replace with $(POSIXLY_CORRECT=1 du file | cut -f1), but that only works if the file is not sparse.
    anyway..., you get the point.
    #define X(x,y) x##y
  • That was supposed to be
    dd ... count=<blocks in file>
    and you can replace <blocks in file> with $(...)

    The "post as plain text" option drops angle brackets. Grrrrrr. Bug report send to cmdrtaco.
    #define X(x,y) x##y
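    Put together, the corrected overwrite looks like this (a sketch with a throwaway file; POSIXLY_CORRECT makes du report 512-byte units, so bs=512 matches, and conv=notrunc overwrites in place rather than truncating first -- truncation would free the old blocks without zeroing them). Note the in-place trick only means much on a filesystem like ext2 that rewrites the same blocks:

```shell
printf 'for your eyes only' > /tmp/secret.txt
sync    # make sure blocks are allocated before asking du about them

# number of 512-byte blocks the file occupies
blocks=$(POSIXLY_CORRECT=1 du /tmp/secret.txt | cut -f1)

# overwrite those blocks with zeros, in place
dd if=/dev/zero of=/tmp/secret.txt bs=512 count="$blocks" conv=notrunc 2>/dev/null
```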
  • http://users.linuxbox.com/~recover/

    This is a program that is almost ready for normal use that tries to make undeletion easier.
    It's not a perfect solution, but should work if the file has been deleted recently.
    It asks you for whatever information you know about the file, then searches for the correct inode.
  • by Cris ( 7932 )
    I'm sad to see no mention of this yet -- the Reiser filesystem offers dramatically better performance (I've done some unofficial benchmarking to back up their claims) and, more importantly, offers journaling. As far as disk integrity goes, a journaling file system (unless it REALLY sucks) is largely unaffected by crashes because every change is journaled, so the need for disk repair tools is negligible (though there are a few maintenance tools). In high-performance situations, Reiser is almost the no-brainer choice as far as filesystems on Linux go (though at the moment you can't directly boot from Reiser, you can make everything but your little /boot partition Reiser by hand; people are working on a more elegant solution, I'm told). But aside from that, it's also a pretty sound decision for the typical server to use. Stable, lightning fast and reliable...
  • Disk repair is not recovering a deleted file. Disk repair is recovering files/directories after filesystem damage.

    If you want to recover deleted files, you should, ahead of time, be using one of the methods which others have mentioned... or be using a filesystem/database which allows recovery of past contents (e.g., store everything in something like CVS and decide how long to retain until permanent deletion).

  • strings /dev/hda | less


    where hda is the device name.


    this works, but it's not too pretty. i've recovered about 50 megs of e-mail from a fat32 drive this way.
  • [NOTE: This is not a flame.]

    The above script would probably be useful for most uses, but it still doesn't help when you issue the command from a different (/usr/local) dir than you *think* (/usr/local/prod-install-dir) you are in. Yes, this does happen on rare occasions (every 1.5 - 2 years) to very competent *nix people (as in last Friday night, to me).

    What would be nice would be for something to hold 'deleted' files from rm -rf on disk (unless the partition is totally full) for a minute (to allow me to come to my senses) before actually removing them from the file system.
  • The problem is, you get used to rm asking you for confirmation, so when you hop over to a system that doesn't have this in place, you could be toast.

"A car is just a big purse on wheels." -- Johanna Reynolds

Working...