Comment Re:This attitude makes me sick and I'm tired of it (Score 1) 1016

Your colleague did not get back 100% of the files that were on the system before the overwrite. They only got back whatever was still resident on the drive and hadn't been overwritten by the Ghost image restore. The Ghost image only contained the used portions of the drive; it was not a full image of the drive.

Comment Re:Just zero it (Score 5, Informative) 1016

With specialized equipment drives can be easily recovered when wiped by zeros. With even more sophisticated methods drives that have been written over several times can be recovered layer by layer.

Easily?!?! Layer by layer? Do you have personal experience recovering overwritten data in this fashion? If so, please let me know; my company would hire you in a heartbeat and you could name your price. This concept has been trashed over and over again on Slashdot, and nothing has changed since the last time.

Think of the signal for a single bit on a platter like a digital Jackson Pollock painting using only two colors. One color represents a 1, the other represents a 0. Each write is a new splatter that gives the picture a new dominant color, but there are still some pixels left over from the previous splatters around the edges. Once the color of a pixel is set, it stays set until you change it, and you have no idea whether the same-colored pixel next to it was written at the same time or not. All you can see is that the overall color of the entire image is one or the other.

Now let's say, just for the sake of this discussion, that you were actually able to read these mythical "layers", or more appropriately, all the drops of the various splatters. Because this is a magnetic signal and not a physical layer of paint, you have no idea when any given pixel was written compared to the one next to it. There is no concept of a layer because the signals aren't stacked on top of each other, they are all next to each other. Now let's say that you have somehow managed to design and build the most sensitive magnetic read head ever conceived so that it is able to read the signal of every single molecule in the space that this bit occupies. That's great. Now you've determined that there were a bunch of 1's and 0's. Which order were they written in? Did that 1 from over to the left come before or after that 0 that you read from the lower right?

Assuming you got that figured out, now you need to get the next 7 bits just to make a byte. Did you get all 8 bits from the same write put together? Or did you screw one up because you got the ordering of your "layers" a little mixed up?

Now that you've got an entire byte reconstructed, you need to do the same with the other 511 bytes for the sector. Did you get all 4,096 bits for the sector correct for your "layer" of data? I'm a little skeptical...

Now go get the rest of your file, because it probably isn't all contained within a single 512-byte sector, and it may very well be written to different regions of the drive if it wasn't all written as one contiguous allocation. Depending on the file system and the size of the file, it may even be guaranteed not to be contiguous; ext3, for example, will be non-contiguous if the file is larger than 6K.
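
To put some rough numbers on this, here is a purely illustrative Python sketch; the per-bit accuracy figures and the independence assumption are mine, not measurements of any real recovery technique:

```python
# Back-of-the-envelope illustration, not a model of any real technique:
# assume a hypothetical "layer" reader that guesses each overwritten bit
# correctly with independent probability p_bit, and watch the odds of a
# perfect reconstruction collapse as the amount of data grows.

def p_all_correct(p_bit: float, n_bits: int) -> float:
    """Probability that every one of n_bits independent guesses is correct."""
    return p_bit ** n_bits

for p_bit in (0.99, 0.999):
    sector = p_all_correct(p_bit, 512 * 8)             # one 512-byte sector
    small_file = p_all_correct(p_bit, 100 * 1024 * 8)  # a 100 KiB file
    print(f"p_bit={p_bit}: sector={sector:.3e}, 100 KiB file={small_file:.3e}")

# Even at 99.9% per-bit accuracy, a single sector comes back perfect less than
# 2% of the time, and a 100 KiB file essentially never does.
```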

Now that you have every bit recovered for a single file, did you get every bit correct? You're most likely in trouble if you screw up even a single bit and then try to open the file with its native application. LZ-based compression used for the file? It's almost certain to be busted as soon as you hit that bad bit, and you won't be able to decompress anything beyond it. Different formats have different tolerances, but unless you plan to look at everything with a hex editor, you're probably going to have a lot of trouble. Even something like a Word document (*.doc, not *.docx) isn't going to be as easy as you think, because the file does its own internal allocation in 64-byte chunks. If you did any edits, or have anything other than plain text in a single font style, your text is no longer contiguous within the file. If that Word document is using the new format (*.docx), then you're out of luck, because it uses a variant of LZ compression.
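
Here is a quick sketch of the "one bad bit kills the stream" problem, using Python's zlib as a stand-in for whatever LZ-family compression the real file format uses (the exact failure you see depends on where the flipped bit lands):

```python
# Flip a single bit in an LZ-compressed stream and try to get the data back.
# zlib is just a convenient stand-in here, not the format any particular
# document actually uses.
import zlib

original = b"The quick brown fox jumps over the lazy dog. " * 200
compressed = bytearray(zlib.compress(original))

# Corrupt one bit somewhere in the middle of the compressed stream.
compressed[len(compressed) // 2] ^= 0x01

try:
    result = zlib.decompress(bytes(compressed))
except zlib.error as exc:
    print("decompression failed outright:", exc)
else:
    print("stream decoded, but matches the original:", result == original)
```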

Oh, the file was a picture? No, that still isn't always going to help you. Certain graphics file formats, like JPEG, do tolerate some corruption of the data (depending on where the corruption shows up), but others are just as fragile as a compressed data file.

Now repeat this for every file until you find files that are actually valuable to you. The amount of effort needed to reconstruct anything that has been overwritten far exceeds the value of whatever data it was.

Submission + - 10 Excellent Examples of Fake Photography (funspedia.com)

cellncell writes: Photo manipulation is a truly wonderful art. Its uses, cultural impact, and ethical concerns have made it a subject of interest beyond the technical process and skills involved. Photo manipulation can give a completely unreal picture a realistic look.

Comment Re:Clean Power (Score 1) 1049

I moved into a nearly new home (a 3-year-old house in a neighborhood of the same age) almost 4 years ago as well, with about 30 recessed flood lights. I replaced 5 of the 6 in the kitchen with CFLs and have already had 2 of them burn out, while the one remaining incandescent is still going, as are all 24 other incandescent flood lights in the house.

I am quite unimpressed with the CFLs so far. I'm planning to stock up on the old technology over the next couple years.

Comment Re:They Why ZFS? (Score 1) 235

I personally find it interesting that even though file system compression has been around for a long time, not many people actually use it.

ZFS is one of the first file systems, if not the first, that I've noticed enabling it by default. It's interesting that Microsoft doesn't enable it by default.

Comment Re:They Why ZFS? (Score 1) 235

You are correct; that was part of my reasoning, though I generally view compression support in a file system as an unfavorable feature for several reasons.

One of them is performance in a workstation scenario. If I have all cores running at high utilization, I'd rather have them working on whatever processes I've requested instead of also trying to compress data on its way to disk. Space is cheap at that scale.
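
As a rough, hypothetical illustration of that cost (zlib level 6 standing in for whatever compressor the file system actually uses, and timings obviously varying by machine):

```python
# Rough illustration of the CPU cost of compressing data on its way to disk.
# zlib level 6 is a stand-in compressor; this is not a file system benchmark.
import os
import time
import zlib

SIZE = 4 * 1024 * 1024  # 4 MiB per sample
samples = {
    "incompressible (random)": os.urandom(SIZE),
    "repetitive text": (b"log line: user=alice action=login ok\n" * 120000)[:SIZE],
}

for name, data in samples.items():
    start = time.perf_counter()
    out = zlib.compress(data, 6)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{name}: {elapsed_ms:.1f} ms to compress 4 MiB, "
          f"ratio {len(out) / len(data):.2f}")
```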

Admittedly, that is for my own personal usage. In a data center with large and rapidly growing amounts of rarely touched data, I would strongly consider compression.

My other objection to file system compression is that it adds another layer of complexity to the overall storage stack. If something goes wrong, compression does not make recovery any easier; at times it makes recovery impossible.

Comment Re:They Why ZFS? (Score 1) 235

From my observations, it appears that ZFS tests the benefit of compression before actually writing the data. Each block written for a file may or may not be compressed. The compression type is stored in each block pointer.
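
A toy sketch of that per-block "compress only if it pays off" idea is below; zlib stands in for the real compressor, and the minimum-savings threshold is an assumption for illustration, not something taken from the ZFS source:

```python
# Toy per-block "compress only if it pays off" logic, in the spirit of what
# ZFS appears to do.  The compressor (zlib) and the 12.5% minimum-savings
# threshold are assumptions for illustration only.
import os
import zlib

BLOCK_SIZE = 64 * 1024   # example block size
MIN_SAVINGS = 0.125      # require at least 12.5% reduction to keep the result

def store_block(block: bytes) -> tuple[str, bytes]:
    """Return ("lz", packed) if compression saves enough, else ("off", raw)."""
    packed = zlib.compress(block)
    if len(packed) <= len(block) * (1 - MIN_SAVINGS):
        return "lz", packed
    return "off", block  # not worth it: store the block uncompressed

# Text-like data gets stored packed; random data (a stand-in for MP3/JPEG
# content) gets stored as-is.
print(store_block(b"x" * BLOCK_SIZE)[0])       # -> "lz"
print(store_block(os.urandom(BLOCK_SIZE))[0])  # -> "off"
```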

I agree with your choice to turn compression off for an MP3 collection. It saves the effort of attempting to compress every block before writing it.

Comment Re:They Why ZFS? (Score 5, Informative) 235

Which of the ZFS features most impact its performance?

Compression enabled by default can't help performance (compression is also available in btrfs).

Checksumming of all blocks probably doesn't help performance, but it definitely helps detect data corruption (also available in btrfs).

Forcing any file that needs more than a single block to use a tree of block pointers probably doesn't help either. The dnode has only one block pointer, and a block pointer can only point to a single block (no extents). On the plus side, the block size can vary between 512 bytes and 64 KiB per object, so slack space is kept low. If more than a single block is necessary, ZFS creates a tree of block pointers. Each block pointer is 128 bytes in size, so the tree can get deep fairly quickly (a rough depth calculation is sketched below).

Keeping three copies of almost all file system structures (the equivalent of inodes, called dnodes in ZFS) by default can't help either (and those copies are compressed, of course).
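
Here is a rough sketch of how quickly that block-pointer tree deepens, using the 128-byte block pointer size mentioned above; the data block size and the indirect block size are assumed parameters for illustration, not authoritative ZFS values:

```python
# Rough depth estimate for a block-pointer tree hanging off a single root
# block pointer.  bp_size (128 bytes) comes from the comment above; the data
# block size and indirect block size are assumptions for illustration.
import math

def tree_depth(file_size: int, data_block: int = 64 * 1024,
               indirect_block: int = 16 * 1024, bp_size: int = 128) -> int:
    """Levels of indirect blocks needed below the single root block pointer."""
    data_blocks = max(1, math.ceil(file_size / data_block))
    fanout = indirect_block // bp_size  # block pointers per indirect block
    depth, reachable = 0, 1             # the root pointer reaches one block
    while reachable < data_blocks:
        depth += 1
        reachable *= fanout
    return depth

for size in (64 * 1024, 10 * 1024 ** 2, 1024 ** 3, 1024 ** 4):
    print(f"{size:>15,d} bytes -> {tree_depth(size)} level(s) of indirection")
```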

Real Time Strategy (Games)

StarCraft II To Be Released On July 27 220

Blizzard announced today that StarCraft II: Wings of Liberty, the first game in a series of three, will be released on July 27. The game will contain the Terran campaign (29 missions), the full multiplayer experience, and "several challenge-mode mini-games," with "focused goals designed to ease players into the basics of multiplayer strategies." It will launch alongside the revamped Battle.net, which we've previously discussed. Blizzard CEO Mike Morhaime said, "We've been looking forward to revisiting the StarCraft universe for many years, and we're excited that the time for that is almost here. Thanks to our beta testers, we're making great progress on the final stages of development, and we'll be ready to welcome players all over the world to StarCraft II and the new Battle.net in just a few months."
