That is why one uses RAID 6 with lower tier drives and hot spares.
Works great until 3 drives in the RAID fail.
Better make it a RAID-60 just to be safe. And maybe mirror that too.
I recently witnessed two different RAID-6 setups at two different shops where each had a single drive start to fail, so the hot spare kicked in and started rebuilding. While it was rebuilding, a second drive started to fail. No problem! It's RAID-6! While both drives were being rebuilt, a third drive failed.
Drives of a feather fail together. Thank you, I'll be here all week. Try the veal.
I have CenturyLink 10 Mb/s DSL in the Minneapolis suburban area and get 27-32 ms pings to 8.8.8.8.
Your colleague did not get back 100% of the files that were on the system before the overwrite. They only got back the files that were still resident and hadn't been overwritten by the ghost image restore. The ghost image only had the used portions of the drive copied; it did not have a full image of the drive.
With specialized equipment, drives can be easily recovered even when wiped with zeros. With even more sophisticated methods, drives that have been overwritten several times can be recovered layer by layer.
Easily?!?! Layer by layer? Do you have personal experience recovering overwritten data in this fashion? If so, please let me know; my company would hire you in a heartbeat and you could name your price. This concept has been trashed over and over again on Slashdot, and nothing has changed since the last time.
Think of the signal for a single bit on a platter like a digital Jackson Pollock painting using only two colors. One color represents a 1, the other color is a 0. Each write is a new splatter that gives the picture a new dominant color, but there are still some pixels left over from the previous splatters around the edges. Once the color of a pixel is set, it is set until you change it and you have no idea if the same color pixel next to it was written at the same time or not. You can just see that the overall color of the entire image is one or the other.
Now let's say, just for the sake of this discussion, that you were actually able to read these mythical "layers", or more appropriately, all the drops of the various splatters. Because this is a magnetic signal and not a physical layer of paint, you have no idea when any given pixel was written compared to the one next to it. There is no concept of a layer because the signals aren't stacked on top of each other, they are all next to each other. Now let's say that you have somehow managed to design and build the most sensitive magnetic read head ever conceived so that it is able to read the signal of every single molecule in the space that this bit occupies. That's great. Now you've determined that there were a bunch of 1's and 0's. Which order were they written in? Did that 1 from over to the left come before or after that 0 that you read from the lower right?
Assuming you got that figured out, now you need to get the next 7 bits just to make a byte. Did you get all 8 bits from the same write put together? Or did you screw one up because you got the ordering of your "layers" a little mixed up?
Now that you've got an entire byte reconstructed, you need to do the same with the other 511 bytes for the sector. Did you get all 4,096 bits for the sector correct for your "layer" of data? I'm a little skeptical...
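To put a rough number on that skepticism: suppose, purely for the sake of argument, that some magic read head recovered each overwritten bit correctly 99% of the time (a made-up, generous figure). The odds of getting a whole sector right collapse almost immediately:

    # Illustrative arithmetic only: the 99% per-bit accuracy is an invented,
    # generous assumption, not a measured figure for any real technique.
    per_bit_accuracy = 0.99
    bits_per_sector = 512 * 8                  # 4,096 bits in a 512-byte sector

    # Probability of recovering every bit in the sector, assuming each bit
    # is recovered independently.
    p_sector = per_bit_accuracy ** bits_per_sector
    print(f"P(perfect sector) = {p_sector:.3e}")        # ~1.3e-18

    # A modest 1 MiB file spans 2,048 such sectors.
    p_file = p_sector ** ((1024 * 1024) // 512)
    print(f"P(perfect 1 MiB file) = {p_file:.3e}")      # underflows to zero

Even at 99.99% per-bit accuracy, the chance of a perfect sector only climbs to about 66%, and stringing whole files together is still hopeless.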
Now go get the rest of your file, because it probably isn't all contained within a single 512-byte sector, and it may very well be written to different regions of the drive if it wasn't all written as a contiguous allocation. Depending on the file system and the size of the file, it can even be guaranteed not to be contiguous - ext3 will be non-contiguous if the file is larger than 6K.
Now that you have every bit recovered for a single file, did you get every bit correct? You're most likely in trouble if you screw up even a single bit and try to open the file with its native application. LZ-based compression used for the file? It's almost sure to be busted as soon as you hit that bad bit, and you won't be able to decompress anything beyond it. Different files have different tolerances, but unless you plan to look at everything with a hex editor, you're probably going to have a lot of trouble. Even something like a Word document (*.doc, not *.docx) isn't going to be as easy as you think, because the file does its own allocations of 64 bytes at a time internally. If you did any edits, or have anything other than plain text in a single font style, your text is no longer contiguous. If the Word document uses the new format (*.docx), then you're out of luck, because it uses a variant of LZ compression.
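If you want to see just how fragile LZ-style compression is, here's a quick sketch using Python's zlib (DEFLATE - not the exact codec inside a .docx, but the failure mode is the same). Flip a single bit in the compressed stream and decompression typically dies outright:

    import zlib

    original = b"The quick brown fox jumps over the lazy dog. " * 100
    compressed = bytearray(zlib.compress(original))

    # Flip one bit in the middle of the compressed stream.
    compressed[len(compressed) // 2] ^= 0x01

    try:
        zlib.decompress(bytes(compressed))
    except zlib.error as e:
        # Usually "invalid ..." or "incorrect data check": one bad bit
        # corrupts the stream from that point forward.
        print(f"decompression failed: {e}")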
Oh, the file was a picture? No, that still isn't always going to help you. Certain graphics file formats, like JPEG, do tolerate some corruption of the data (depending on where the corruption shows up), but others are just as fragile as a compressed data file.
Now repeat this for every file until you find files that are actually valuable to you. The amount of effort needed to reconstruct anything that has been overwritten far exceeds the value of whatever data it was.
Drill Baby Drill
Simple, yet effective.
I moved into a ~new home (a 3-year-old house in a similarly aged neighborhood) almost 4 years ago as well, with about 30 recessed flood lights. I replaced 5 out of 6 in the kitchen with CFLs and have already had 2 of them burn out, while the one incandescent is still going - as are all 24 other incandescent flood lights in the house.
I am quite unimpressed with the CFLs so far. I'm planning to stock up on the old technology over the next couple of years.
I personally find it interesting that even though file system compression has been around for a long time, not many people actually use it.
ZFS is one of the first file systems, if not the first, that I've noticed enabling it by default. It's interesting that MS doesn't enable it by default.
You are correct, that was part of my reasoning, though I generally view compression support in a file system as an unfavorable feature for various reasons.
One of them is performance in a workstation scenario. If I have all cores running at high utilization, I'd rather they be working on whatever processes I've requested instead of also trying to compress data for writing. Space is cheap at that scale.
Admittedly, that is for my own personal usage. In a data center with large, rapidly growing amounts of rarely touched data, I would strongly consider compression.
My other distaste for file system compression is that it adds another layer of complexity to overall storage. If something goes wrong, compression does not make things easier in terms of recovery. At times it completely kills it.
From my observations, it appears that ZFS tests the benefit of compression before actually writing the data. Each block written for a file may or may not be compressed. The compression type is stored in each block pointer.
I agree with your choice to turn compression off for an MP3 collection. It saves the effort of attempting to compress every block before writing it.
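For what it's worth, here's a rough sketch of the per-block decision you're describing. The 12.5% minimum-savings threshold is what I believe ZFS uses, and zlib stands in for the actual codec, so treat both as assumptions rather than ZFS's real code:

    import os
    import zlib

    # Store a block compressed only if compression saves at least
    # min_savings of its size; otherwise store it as-is. This mirrors
    # (loosely) the per-block test described above - the threshold and
    # the codec are my assumptions, not ZFS's actual implementation.
    def maybe_compress(block: bytes, min_savings: float = 0.125):
        candidate = zlib.compress(block)
        if len(candidate) <= len(block) * (1 - min_savings):
            return "compressed", candidate
        return "uncompressed", block

    print(maybe_compress(b"A" * 4096)[0])         # compressed
    print(maybe_compress(os.urandom(4096))[0])    # uncompressed, like MP3 data

An MP3 collection fails the test on nearly every block, so leaving compression on just burns CPU on every write for no savings; turning it off skips that work entirely.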
Which of the ZFS features most impact its performance?
Compression enabled by default can't help (available in btrfs).
Checksumming of all blocks probably doesn't help, but it definitely helps detect corruption (available in btrfs).
Forcing any file that requires more than a single block to use a tree of block pointers probably doesn't help. The dnode has only one block pointer, and a block pointer can only point to a single block (no extents). On the plus side, the block size can vary between 512 bytes and 128 KiB per object, so slack space is kept low. If more than a single block is necessary, ZFS creates a tree of block pointers. Each block pointer is 128 bytes in size, so the tree can get deep fairly quickly - see the sketch after this list.
Keeping three copies by default of almost all file system structures (such as inodes, called dnodes in ZFS - which are compressed, of course) can't help.
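To make the tree-depth point concrete, here's some back-of-the-envelope arithmetic. The figures assume 128 KiB data and indirect blocks (so 1,024 of those 128-byte pointers per indirect block) - my reading of the layout, so take the exact numbers as an assumption:

    # Rough sketch of how deep a ZFS-style block pointer tree gets.
    # Assumed (not verified against the on-disk spec): 128 KiB data blocks
    # and 128 KiB indirect blocks holding 1,024 pointers of 128 bytes each.
    BLOCK_SIZE = 128 * 1024
    PTRS_PER_INDIRECT = BLOCK_SIZE // 128      # 1,024

    def tree_levels(file_size: int) -> int:
        # Levels of indirection needed above the data blocks.
        blocks = max(1, -(-file_size // BLOCK_SIZE))   # ceiling division
        levels = 0
        while blocks > 1:     # one block fits behind the dnode's single pointer
            blocks = -(-blocks // PTRS_PER_INDIRECT)
            levels += 1
        return levels

    for size in (64 * 1024, 10 * 2**20, 10 * 2**30, 10 * 2**40):
        print(f"{size:>16,} bytes -> {tree_levels(size)} level(s)")

So a 10 MiB file already needs one level of indirection, 10 GiB needs two, and 10 TiB needs three - each extra level is another pointer block to read on a cold lookup.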
If only there were some unique invention they could license that was capable of such a process as rotating a piece of paper or an electronic image... Excuse me, I feel an urgent need to contact a patent attorney.
I couldn't agree more. Any mention of the word "mule" and I start hearing that music. It's a shame that it isn't in this version.
Excuse me: 4 KiB sector size.
Marriage is the sole cause of divorce.