I prefer hellanzb.
It is written in Python. All you do is put an .nzb file in the directory it is watching, wait a few hours, and you have all your data, par-checked, decoded, patched together, and unrared, sitting in the output folder.
If you have a network storage server, you can conveniently share the queue directory with samba or NFS, and centralize all your downloading.
It's also open source and you can use it over SSH, so you can have downloads ready for you when you get home from work!
I tried to do the same thing on an Ubuntu drive that had bad sectors, but the security on the files prevented them from being accessed.
Security like... file permissions? Did you try backing up the files as root?
Posting to remove 'Informative' moderation. I will think before moderating...
Consider a page that is full of URLs. Think about how many URLs were transmitted to your computer just to load this page. I count 1295 right now, just in <a> tags.
Personally, I'm not concerned, but if you want to see how many <a> tags are on any page, paste this into your address bar and press Enter.
I've only tried this in FF3, and of course URLs can appear in more places than the href="" attribute of an <a> tag...
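The address-bar snippet itself didn't survive in this thread, but a bookmarklet that counts <a> tags, as the post describes, typically looks something like this (my reconstruction, not the original poster's exact code):

```javascript
// In a browser address bar, a counting bookmarklet would be roughly:
//
//   javascript:alert(document.getElementsByTagName('a').length)
//
// The same idea as a plain function over an HTML string, so it can
// run outside a browser too. This is a rough sketch: the regex only
// looks for opening <a> tags and does no real HTML parsing.
function countAnchorTags(html) {
  // Match "<a" followed by whitespace or ">", case-insensitively,
  // so <abbr> and friends are not counted.
  const matches = html.match(/<a[\s>]/gi);
  return matches ? matches.length : 0;
}
```

For example, `countAnchorTags('<a href="x">one</a> <a href="y">two</a>')` counts two links.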
Do you think the IWF site could be added? BBC? National Geographic? Youtube? Only one way to find out.
There's no sense in being precise when you don't even know what you're talking about. -- John von Neumann