Comment Re:legal papers *are* sent ("served") by email (Score 1) 594

I seriously, seriously hope my sarcasm meter's broken, because you can't possibly be serious when saying something as reprehensibly stupid as "So, yes, there is a legal obligation to check your inbox on a regular basis". That's the reason you posted as an AC, right? Because you didn't want this coming back to your actual /. account?

Comment Nuts... I was hoping for Webkit... (Score 1) 556

At a glance, I thought the article title meant that Firefox 4.0 was going to be based on the Chrome browser, and therefore WebKit... no such luck, I guess. A browser that kept full compatibility with Firefox's legacy of plug-ins but ran on the WebKit rendering engine would almost certainly replace Safari as my default browser on both my Macintosh and my PC -- and I'd hazard a guess that I'm not the only one who could say that. What's more, the "browser wars" would effectively be whittled (back) down to a boxing match between Internet Explorer and WebKit, instead of the wild-and-crazy free-for-all that's been going on ever since Netscape gave up the fight and sold out to AOL. Maybe then the collective market share of all these WebKit-based browsers would drive web development more strongly toward a "standards-centered" philosophy of design and away from the "IE workaround" philosophy.

Ah, well. A guy can dream, can't he?

Comment Re:Ripoff (Score 1) 487

It depends - again, it's a question of the numbers. There is good evidence that the power supply is the most likely part in a rig to fail, so you should know what your plan is when one dies. A large storage setup could provide similar mitigation with multiple units, assuming your storage pool is big enough, and a less critical storage pool could be covered with a cold spare.

Comment Re:Not ZFS? (Score 1) 487

http://my.safaribooksonline.com/9780596521974/ch04

4.1.1. Data Integrity in HDFS

HDFS transparently checksums all data written to it and by default verifies checksums when reading data. A separate checksum is created for every io.bytes.per.checksum bytes of data. The default is 512 bytes, and since a CRC-32 checksum is 4 bytes long, the storage overhead is less than 1%.
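(This bit is me, not the book.) The overhead math is just 4 bytes of CRC per 512 bytes of data, i.e. about 0.78%. If you want to tune the chunk size yourself, a rough sketch along these lines should work -- it assumes the plain old Configuration/FileSystem API from that Hadoop era, and the class name and path are placeholders I made up:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ChecksumChunkDemo {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Checksum every 1024 bytes instead of the default 512:
            // overhead drops from 4/512 (~0.78%) to 4/1024 (~0.39%).
            conf.setInt("io.bytes.per.checksum", 1024);

            FileSystem fs = FileSystem.get(conf);
            FSDataOutputStream out = fs.create(new Path("/tmp/checksum-demo.txt"));
            out.writeUTF("every 1024 bytes of this gets its own CRC-32");
            out.close();
        }
    }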

Datanodes are responsible for verifying the data they receive before storing the data and its checksum. This applies to data that they receive from clients and from other datanodes during replication. A client writing data sends it to a pipeline of datanodes (as explained in Chapter 3), and the last datanode in the pipeline verifies the checksum. If it detects an error, the client receives a ChecksumException, a subclass of IOException.
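In practice that just means a write can throw ChecksumException like any other IOException, so you can catch it explicitly if you want to log corrupt-pipeline failures separately. A minimal sketch (same caveats as above; the class name and path are made up):

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.ChecksumException;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ChecksumWriteDemo {
        public static void main(String[] args) throws IOException {
            FileSystem fs = FileSystem.get(new Configuration());
            try {
                FSDataOutputStream out = fs.create(new Path("/tmp/pipeline-demo.dat"));
                out.write(new byte[64 * 1024]); // flows through the datanode pipeline
                out.close();                    // last datanode has verified the checksum by now
            } catch (ChecksumException ce) {
                // Verification failed somewhere along the pipeline.
                System.err.println("Write failed checksum verification: " + ce.getMessage());
            }
        }
    }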

When clients read data from datanodes, they verify checksums as well, comparing them with the ones stored at the datanode. Each datanode keeps a persistent log of checksum verifications, so it knows the last time each of its blocks was verified. When a client successfully verifies a block, it tells the datanode, which updates its log. Keeping statistics such as these is valuable in detecting bad disks.
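The flip side is that you can tell the client to skip verification, e.g. to salvage whatever is still readable from a file you already know is corrupt. A sketch, assuming FileSystem.setVerifyChecksum() works the way I remember (the path is a placeholder); IIRC the -ignoreCrc option to hadoop fs -get does the same thing from the shell:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class IgnoreChecksumReadDemo {
        public static void main(String[] args) throws IOException {
            FileSystem fs = FileSystem.get(new Configuration());
            // Skip client-side checksum verification for this FileSystem instance.
            fs.setVerifyChecksum(false);

            byte[] buf = new byte[8192];
            FSDataInputStream in = fs.open(new Path("/tmp/suspect-file.dat"));
            int n;
            while ((n = in.read(buf)) > 0) {
                // ...do something with the n bytes we got back, corrupt or not
            }
            in.close();
        }
    }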

Aside from block verification on client reads, each datanode runs a DataBlockScanner in a background thread that periodically verifies all the blocks stored on the datanode. This is to guard against corruption due to "bit rot" in the physical storage media. See Section 10.1.4.3 for details on how to access the scanner reports.
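(From memory, so double-check against your version: on Hadoop of that vintage you could pull the scanner's report straight off each datanode's embedded web server at http://datanode:50075/blockScannerReport, with ?listblocks for the per-block detail.)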
