Ask Slashdot: Looking for RAID Calculator
Bomarc writes: I'm looking for a "RAID calculator": a tool that recommends optimal settings based on the hardware information you enter, and that can calculate, or at least warn about, the controller and/or OS parameters needed to keep the drives from "thrashing". Here I define "thrashing" as the need to read and re-write a sector (or sectors) that has just been written. Most of what I've found so far are simple size calculators, and if you need one of those, I believe you are in the wrong business.
Example: one drive I'm currently using is a WD Red 2 TB NAS drive (WD20EFRX). It has a 64 MB buffer; a sustained read/write speed of 147 MB/s; 512-byte logical / 4,096-byte physical sectors; 3,907,029,168 sectors; and 2,000,398 MB of space. It is connected (in this instance) to a Dell PERC 5 with 256 MB of RAM, which can be configured with stripe sizes of 8, 16, 32, 64, or 128 KB. Under the OS, the available allocation-unit sizes are 512, 1024, 2048, 4096 and 8192 bytes, plus 16K, 32K and 64K. The systems vary from 2 to 10 drives per array (2 drives as RAID 1; 4 as RAID 5, 6 or 10; 6, 8 or 10 drives as RAID 5 or 6).
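As a quick sanity check of the figures above, the quoted capacity follows directly from the logical sector count; a one-liner is enough to confirm the spec-sheet numbers agree:

```python
# Verify the WD20EFRX figures quoted above: logical sectors x 512 bytes.
sectors = 3_907_029_168
logical_sector_bytes = 512
capacity_bytes = sectors * logical_sector_bytes
# Decimal megabytes, as drive vendors quote them.
print(capacity_bytes // 10**6, "MB")  # prints: 2000398 MB
```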
In this example, the hard drive uses 4 KB (physical) sectors, so a 4-bay system (RAID 5 with three data drives and one parity drive) writes a single stripe of 12 KB of data (16 KB of physical data including parity) in one pass. Note, however, that 12 KB does not divide evenly into any of the stripe sizes, nor into the OS allocation sizes. The result is "thrashing": the user sees a performance degradation (depending on where it occurs) as the controller reads a sector from the drive, merges it with the outgoing RAID data, and re-writes the physical data for the outbound sector(s). If you are lucky enough to be writing large files, the logic in the controller will hopefully keep this "thrashing" to a minimum. In an extreme example, you could have a stripe size of 8 KB and an OS allocation size of 128 KB; with that configuration it could take 16 writes to get the data out, and we haven't even dealt with hard-drive sector-size issues, which could bump the number up to 128 writes for a medium-sized RAID array!
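The kind of check being asked for can be sketched in a few lines. This is a minimal, hypothetical calculator, not an existing tool: the function name and parameters are my own, and it only models the two relationships discussed above (how many per-drive segment writes one OS allocation unit needs, and whether that unit is a whole multiple of the full data stripe, which is when a controller can avoid read-modify-write passes).

```python
import math

def stripe_report(data_drives: int, segment_kib: int, os_alloc_kib: int) -> dict:
    """Report how one OS allocation unit maps onto a RAID stripe.

    data_drives  -- drives holding data (e.g. 3 in a 4-drive RAID 5)
    segment_kib  -- controller per-drive stripe segment size, in KiB
    os_alloc_kib -- OS allocation-unit (cluster) size, in KiB
    """
    # Data capacity of one full stripe across all data drives.
    full_stripe_kib = data_drives * segment_kib
    # Per-drive segment writes needed to flush one allocation unit.
    segment_writes = math.ceil(os_alloc_kib / segment_kib)
    # If the allocation unit is not a whole multiple of the full data
    # stripe, partial-stripe updates force read-modify-write cycles
    # (the "thrashing" described above).
    misaligned = os_alloc_kib % full_stripe_kib != 0
    return {"full_stripe_kib": full_stripe_kib,
            "segment_writes": segment_writes,
            "misaligned": misaligned}

# The extreme example above: 8 KiB segments, 128 KiB OS allocation unit,
# on a 4-bay RAID 5 (3 data drives).
print(stripe_report(3, 8, 128))
# -> {'full_stripe_kib': 24, 'segment_writes': 16, 'misaligned': True}
```

A real calculator would also fold in the 512e physical-sector geometry and partition offset, but even this toy version reproduces the 16-writes figure and flags the misalignment.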
So, back to the question: has someone made a "RAID calculator" available that takes these considerations into account, and shows or warns the user that there might be a problem, and/or suggests the best configuration for a given hardware setup?