
Comment: Parity declustering (Score: 4, Interesting)

Actually I like the parity declustering idea that was linked to in that article; it seems to me that, implemented correctly, it could mitigate a large part of the issue. I have personally encountered the hard-error-during-RAID5-rebuild problem twice, so there is definitely a problem to be addressed... and yes, as a result I now only implement RAID6.

For those who haven't RTFATFALT (RTFA the f*** article links to), parity declustering, as I understand it, is where you have, say, an 8-drive array, but each block is written to only a subset of those drives, say 4. Now, obviously you lose 25% of your storage capacity to parity (1/4), but consider a rebuild after a failed disk. In this instance only 50% of your blocks are likely to involve the failed drive, so immediately you cut your rebuild time in half, halving your data reads and therefore your chance of encountering a hard error. Larger numbers of disks in the array, or spanning your data over fewer drives, cuts this further.
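The arithmetic above can be sketched in a few lines of Python. This is a back-of-envelope model, not an implementation; the function name and the one-parity-block-per-stripe assumption are mine:

```python
def declustered_stats(n_drives: int, span: int) -> dict:
    """Back-of-envelope numbers for a declustered layout where each
    stripe spans `span` of `n_drives` disks, one block of which is parity.

    parity_overhead:  share of raw capacity lost to parity.
    rebuild_fraction: share of stripes that include any one failed disk,
                      and hence the share of the array read on rebuild.
    """
    return {
        "parity_overhead": 1 / span,
        "rebuild_fraction": span / n_drives,
    }

stats = declustered_stats(8, 4)   # the 8-drive, span-4 example above
print(stats["parity_overhead"])   # 0.25 -> the 25% capacity cost
print(stats["rebuild_fraction"])  # 0.5  -> half the reads on rebuild
```

Shrinking the span or growing the array drives `rebuild_fraction` down further, exactly as the comment argues.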

Now, consider the flexibility you could build into an implementation of this scheme. Simply by making the number of drives a block spans configurable on a per-block basis, you could allow any filesystem on that array to specify, per file, how many disks to span. Apps and sysadmins could then declare that a given file needs maximum write performance, so diskSpan=2, which gives you effectively RAID10 for that file (each block is written to 2 drives, with successive blocks in the file likely written to different pairs of drives; not quite RAID10, but close). Where you didn't want a file to consume 2x its size on the storage system, you could allow a higher diskSpan number. You could also make parity configurable per block, so particularly important files can survive multiple disk failures while temp files have no parity at all. There would need to be a rule, however, that parity + diskSpan is less than or equal to the number of devices in the array.
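A minimal sketch of how per-block policies might be validated and placed. All the names here are hypothetical, and it reads the rule as diskSpan data drives plus parity extra drives, which is only one interpretation of the scheme described above:

```python
import random
from dataclasses import dataclass

@dataclass
class BlockPolicy:
    disk_span: int  # drives carrying the block's data
    parity: int     # extra drives carrying redundancy (0 = none)

def place_block(policy: BlockPolicy, n_drives: int) -> list:
    """Pick the drives one block lands on. Placement here is just a
    random sample; a real implementation would balance load."""
    width = policy.disk_span + policy.parity
    if width > n_drives:
        raise ValueError("parity + diskSpan must not exceed the drive count")
    return random.sample(range(n_drives), width)

fast_file = BlockPolicy(disk_span=2, parity=0)  # write-performance priority
safe_file = BlockPolicy(disk_span=4, parity=2)  # survives two disk failures
print(place_block(safe_file, 8))                # six distinct drive indices
```

The `ValueError` enforces the parity + diskSpan rule from the comment; everything else is illustrative.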

Obviously there is an issue here in that the total capacity of the array is not knowable: files with diskSpan numbers lower than the array default will reduce the capacity, and higher numbers will increase it. This alone might require new filesystems, but you could implement today's filesystems on such an array as long as you disallowed the per-block diskSpan feature.
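The unknowable-capacity point comes down to simple arithmetic: the raw space a file consumes depends on its own policy. A toy calculation, assuming (my reading, not the comment's exact words) that each stripe holds diskSpan data blocks plus parity redundancy blocks:

```python
def raw_usage_gb(logical_gb: float, disk_span: int, parity: int) -> float:
    """Raw space consumed by a file whose stripes hold `disk_span`
    data blocks plus `parity` redundancy blocks."""
    return logical_gb * (disk_span + parity) / disk_span

# The same 100 GB file costs very different raw space per policy,
# so total usable capacity depends on the mix of files stored:
print(raw_usage_gb(100, 2, 2))  # 200.0   (heavily redundant)
print(raw_usage_gb(100, 6, 1))  # ~116.7  (cheap wide stripe)
print(raw_usage_gb(100, 4, 0))  # 100.0   (temp file, no parity)
```

Until the files are written, the array can only report raw capacity plus a usable-capacity estimate under the default policy.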

This even helps when expanding the array, as there is no longer any need to re-read all of the data in it (with the resulting chance of encountering a hard error, or the huge added load causing another drive to fail, etc.); the extra capacity is simply available. Over time you would probably want a redistribution routine to move data from the existing array members to the new ones, spreading the load and capacity.

How about implementing a performance optimiser too, one that looks for the most frequently accessed blocks and ensures they are spread evenly over the disks? If you also take into account the performance of the individual disks, you could effectively build a hierarchical storage system, where one array contains, say, SSD, SAS and SATA drives, and the optimiser allocates data to individual drives based on how frequently that data is accessed and how fast the drive is. Obviously the applications or sysadmin could indicate to the array which files are more performance-sensitive, influencing the eventual location of the data as it is written.
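The optimiser idea can be sketched as a greedy pairing of hot blocks with fast drives. All names, speeds and access counts below are made up, and a real optimiser would also have to respect the per-block placement constraints discussed above:

```python
def assign_tiers(drives, block_heat):
    """Greedy sketch of a tiering optimiser: sort drives by relative
    speed and blocks by access count, then pair them off rank-for-rank.

    drives:     list of (name, relative_speed) tuples
    block_heat: dict mapping block name -> access count
    """
    fast_first = sorted(drives, key=lambda d: d[1], reverse=True)
    hot_first = sorted(block_heat, key=block_heat.get, reverse=True)
    return {blk: fast_first[i % len(fast_first)][0]
            for i, blk in enumerate(hot_first)}

drives = [("sata0", 1), ("ssd0", 10), ("sas0", 3)]  # name, relative speed
heat = {"blkA": 900, "blkB": 40, "blkC": 5}         # access counts
print(assign_tiers(drives, heat))
# {'blkA': 'ssd0', 'blkB': 'sas0', 'blkC': 'sata0'}
```

The hottest block lands on the SSD and the coldest on SATA; in practice you would weight by capacity and rebalance periodically rather than assign once.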

The Courts

Are DMCA Abuses a Temporary or Permanent Problem?

Regular Slashdot contributor Bennett Haselton wrote in with a story about the DMCA. He starts: "On January 16, a man named Guntram Graef, who invoked the Digital Millennium Copyright Act to ask YouTube to remove a video of giant penises attacking his wife's avatar/character in the virtual community Second Life, retracted the claim and stated that he now believes the video was not a copyright violation. (He had sent similar notices to BoingBoing and the Sydney Morning Herald just for posting screenshots of the video.) His statements in a CNET interview suggest that he didn't mean to alienate the anti-censorship community and was probably angry over what he saw as a sexually explicit attack on his wife. But the event sparked renewed debate over the DMCA and what constitutes abuse of it. I sympathize with Graef and I admire him for admitting an error, but I still think the incident shows why the DMCA is a bad law." Hit the link below to read the rest of his story.

A Competition To Replace SHA-1

SHA who? writes "In light of recent attacks on SHA-1, NIST is preparing a competition to augment and revise the current Secure Hash Standard. The public competition will be run much like the development process for the Advanced Encryption Standard, and is expected to take 3 years. As a first step, NIST is publishing draft minimum acceptability requirements, submission requirements, and evaluation criteria for candidate algorithms, and requests public comment by April 27, 2007. NIST has ordered Federal agencies to stop using SHA-1 and instead to use the SHA-2 family of hash functions."

Submission + - Swedish bill to sniff internet traffic presented

swehack writes: "The Swedish defense minister today presented a bill (in Swedish) to let the FRA (National Defence Radio Establishment) listen to all internet and radio traffic sent over Swedish borders. Internet service providers will be forced to allow access to their border points, where the FRA will be allowed to filter for certain search patterns. Which search patterns the FRA filters traffic for will be decided by the FRA itself. The Swedish minister of defense, Mikael Odenberg, noted that the bill applies only to outside threats; information transferred between two Swedes will not be used."

Submission + - Microsoft breaks South African Patent Law

An anonymous reader writes: Microsoft is ignoring a South African law that disallows software patents. South Africa does not examine the validity of registered patents, and Microsoft has used this loophole to register illegal patents. Once patents slip through, it can cost up to R1,000,000 (roughly $142,000) to invalidate them. Microsoft's national technical officer suggested that it was the government's fault for not enforcing the law.
