
Comment Re:Not very useful. (Score 1) 145

I'm running the Toshiba DT01ACA300 drives mentioned in the report and haven't had a single one fail over several years of use. Compare that to the Seagate ST3000DM001, also in that report: I had 10 of them at one point, and over 4 years 90% of them failed (not counting those replaced in the first year under warranty!). They report a nearly 30% failure rate, which is comparable to my experience. Only one Seagate is left, and I expect that will be gone within the year (it's got a hot spare waiting to take over when it does).
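
As a rough sanity check on how a ~30% figure squares with 90% of a small sample dying in four years, here's a minimal sketch; it assumes the report's number is an annualized failure rate and that failures are independent year to year, both of which are my assumptions:

# Rough sketch: convert an assumed annualized failure rate (AFR) into a
# cumulative failure probability over several years. Treating each year as
# an independent trial is a simplifying assumption.
afr = 0.30    # assumed AFR for the ST3000DM001 (~30% per the report)
years = 4
cumulative_failure = 1 - (1 - afr) ** years
print(f"Expected cumulative failure after {years} years: {cumulative_failure:.0%}")
# -> ~76%, the same ballpark as ~90% observed in a sample of only 10 drives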

My Toshiba drives (and a couple of HGST HDS5C3030ALA630s, which became the Toshiba DT01ACA300 after the plant transfer to Toshiba) were installed as the Seagates died, have up to 27,700 power-on hours (3+ years), and so far flawless reliability. They don't look so reliable in their report, but the failure rate they report is low enough that it's not unexpected that I wouldn't have seen one die yet.

I had sworn off Seagates, but it looks like it may have just been one bad model. It's useful to see these sorts of numbers released, as they help remind me not to write off an entire company's drives so easily. Having said that, that specific drive is still available in the retail channel, and I wouldn't touch it with a bargepole.

Comment Re:Sigh (Score 1) 418

Exactly right. I'm a Scot who voted No at the last referendum, my decision was never in doubt, and I'm fed up with all the calls to repeat the referendum. That said, the UK exiting the EU would make me strongly reconsider my No vote, and I'd probably support holding a new referendum whatever my eventual vote turned out to be.

Comment Parity declustering (Score 4, Interesting) 444

Actually, I like the parity declustering idea that was linked to in that article; it seems to me that, implemented correctly, it could mitigate a large part of the issue. I have personally encountered the hard-error-on-RAID5-rebuild problem twice, so there definitely is a problem to be addressed... and yes, I do now only implement RAID6 as a result.

For those who haven't RTFATFALT (RTFA the f*** article links to), parity declustering, as I understand it, is where you have, say, an 8-drive array, but each block is written to only a subset of those drives, say 4. Now, obviously you lose 25% of your storage capacity (1/4 of each stripe), but consider a rebuild for a failed disk. In this instance only 50% of your stripes are likely to involve the failed drive, so immediately you cut your rebuild time in half, halving your data reads and therefore your chance of encountering a hard error. Larger numbers of disks in the array, or spanning your data over fewer drives, cuts this further.
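
Here's a minimal sketch of that rebuild arithmetic, assuming stripes are placed uniformly at random across the array (my assumption, not something the article specifies):

# Fraction of stripes that include any one failed drive -- and therefore
# must be read to rebuild it -- when each stripe spans `span` of `n_drives`
# disks, with placement assumed uniform at random.
def rebuild_read_fraction(n_drives: int, span: int) -> float:
    return span / n_drives

print(rebuild_read_fraction(8, 4))   # 0.5  -> only half the data is read
print(rebuild_read_fraction(8, 2))   # 0.25 -> narrower stripes cut it further
print(rebuild_read_fraction(16, 4))  # 0.25 -> so do more drives in the array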

Now consider the flexibility you could build into an implementation of this scheme. Simply by allowing the number of drives a block spans to be configurable on a per-block basis, you could let any filesystem on that array say, on a per-file basis, how many disks to span over. You could then allow apps and sysadmins to say that a given file needs maximum write performance, so diskSpan=2, which gives you effectively RAID10 for that file (each block is written to 2 drives, but different blocks in the file are likely to be written to different pairs of drives; not quite RAID10, but close). Where you didn't want a file to consume 2x its size on the storage system, you could allow a higher diskSpan number. You could also allow configurable parity on a per-block basis, so particularly important files can survive multiple disk failures while temp files have no parity at all. There would need to be a rule, however, that parity + diskSpan is less than or equal to the number of devices in the array.
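
To make that concrete, here's a hypothetical per-file policy object; the names (StripePolicy, disk_span, parity) are mine and purely illustrative, not from any real implementation:

from dataclasses import dataclass

@dataclass
class StripePolicy:
    disk_span: int   # how many drives each block of the file is spread over
    parity: int      # how many drives' worth of parity each stripe carries

    def validate(self, n_drives: int) -> None:
        # Rule from the comment above: parity + diskSpan must not exceed
        # the number of devices in the array.
        if self.disk_span + self.parity > n_drives:
            raise ValueError("disk_span + parity exceeds array size")

# Examples on a hypothetical 8-drive array:
fast = StripePolicy(disk_span=2, parity=0)       # RAID10-ish mirrored pairs
important = StripePolicy(disk_span=6, parity=2)  # survives two disk failures
temp = StripePolicy(disk_span=1, parity=0)       # temp files: no redundancy
for policy in (fast, important, temp):
    policy.validate(8)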

Obviously there is an issue here in that the total capacity of the array is not knowable in advance: files with diskSpan numbers lower than the default for the array will reduce the usable capacity, and higher numbers will increase it. This alone might require new filesystems, but you could implement today's filesystems on this array as long as you disallowed the per-block diskSpan feature.
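
A rough worked example of why the usable capacity moves around, assuming diskSpan counts every drive a block touches, including its mirror/parity copies (that's my reading of the scheme above):

# Raw space consumed by a file of a given logical size when each stripe
# spans `span` drives, `redundancy` of which hold copies or parity.
def raw_space(logical_size: float, span: int, redundancy: int) -> float:
    return logical_size * span / (span - redundancy)

print(raw_space(100, 2, 1))  # RAID10-like file: 200 units of raw space
print(raw_space(100, 4, 1))  # 4-wide, single parity: ~133 units
print(raw_space(100, 8, 2))  # wide stripe, double parity: ~133 units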

This even helps when expanding the array, as there is now no need to re-read all of the data in the array (with the resulting chance of encountering a hard error, the huge load added to the system causing a drive to fail, etc.). The extra capacity is simply available. Over time you'd probably want a redistribution routine to move data from the existing array members to the new ones to spread the load and capacity.

How about implementing a performance optimiser too, one that looks for the most frequently accessed blocks and ensures they are evenly spread over the disks. If you take into account the performance of the individual disks themselves, you effectively get tiered storage: one array could contain, say, SSD, SAS and SATA drives, with the optimiser placing data on individual drives based on how frequently it is accessed and how fast the drive is. Obviously applications or the sysadmin could indicate to the array which files are more performance-sensitive, influencing the eventual location of the data as it is written.
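
A toy sketch of that placement heuristic, hottest blocks to the fastest tier; the block names, access counts and tier split are all made up for illustration:

# Place the most frequently accessed blocks on the fastest devices.
access_counts = {"blk1": 900, "blk2": 15, "blk3": 400, "blk4": 3}
tiers = ["SSD", "SAS", "SATA"]  # fastest first

placement = {}
ranked = sorted(access_counts, key=access_counts.get, reverse=True)
for rank, blk in enumerate(ranked):
    # crude bucketing: spread the ranked blocks evenly across the tiers
    placement[blk] = tiers[rank * len(tiers) // len(ranked)]
print(placement)  # hottest blocks land on SSD, coldest on SATA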

The Courts

Are DMCA Abuses a Temporary or Permanent Problem? 163

Regular Slashdot contributor Bennett Haselton wrote in with a story about the DMCA. He starts: "On January 16, a man named Guntram Graef, who invoked the Digital Millennium Copyright Act to ask YouTube to remove a video of giant penises attacking his wife's avatar/character in the virtual community Second Life, retracted the claim and stated that he now believes the video was not a copyright violation. (He had sent similar notices to BoingBoing and the Sydney Morning Herald just for posting screenshots of the video.) His statements in a CNET interview suggest that he didn't mean to alienate the anti-censorship community and was probably angry over what he saw as a sexually explicit attack on his wife. But the event sparked renewed debate over the DMCA and what constitutes abuse of it. I sympathize with Graef and I admire him for admitting an error, but I still think the incident shows why the DMCA is a bad law." Hit the link below to read the rest of his story.
Encryption

A Competition To Replace SHA-1 159

SHA who? writes "In light of recent attacks on SHA-1, NIST is preparing for a competition to augment and revise the current Secure Hash Standard. The public competition will be run much like the development process for the Advanced Encryption Standard and is expected to take three years. As a first step, NIST is publishing draft minimum acceptability requirements, submission requirements, and evaluation criteria for candidate algorithms, and requests public comment by April 27, 2007. NIST has ordered Federal agencies to stop using SHA-1 and to use the SHA-2 family of hash functions instead."
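
For anyone just looking at the practical side of that order, the switch is usually a one-line change wherever code hashes data; a minimal sketch using Python's hashlib (my example, not from the NIST announcement):

import hashlib

data = b"example message"
legacy = hashlib.sha1(data).hexdigest()     # SHA-1: what agencies must stop using
current = hashlib.sha256(data).hexdigest()  # SHA-256: a SHA-2 family replacement
print("SHA-1:  ", legacy)
print("SHA-256:", current)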
