JWSmythe's Journal: Linux RAID performance benchmarks
I am setting up a new server, which has to be as fast as I can make it. Quantifiable results are king here. Hopefully this will help others out, but I strongly recommend doing your own testing on your own configuration.
I wrote a couple of scripts. One formats the array with a specific filesystem. The second reads and writes. Basically (in shell pseudocode):
echo 0 > a
i=0
while [ "$i" -lt 31 ]; do
    cp a b        # read a, write a copy to b
    cat b >> a    # presumably append, so the file doubles each pass toward the 1 GB test size
    i=$((i + 1))
done
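The format script itself isn't reproduced here. A minimal sketch of what it could look like, assuming the generic `mkfs -t` wrapper and `/dev/md0` as the target (both assumptions), with a dry-run guard since formatting is destructive:

```shell
#!/bin/sh
# Hypothetical reconstruction of the format script; the original is not
# shown. Device name, filesystem list, and DRYRUN guard are assumptions.
DRYRUN=1    # set to 0 to actually format (destructive!)

format_array() {
    fs="$1"; dev="$2"
    cmd="mkfs -t $fs $dev"
    if [ "$DRYRUN" = "1" ]; then
        echo "$cmd"        # dry run: print what would run
    else
        time $cmd          # the "format (sec)" column comes from timing this step
    fi
}

for fs in ext2 ext3 ext4 xfs jfs; do
    format_array "$fs" /dev/md0
done
```

In dry-run mode this only prints the commands; the real run would wrap each `mkfs` in `time` to produce the format column below.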
Here are the results, sorted by write speed and then RAID level. My apologies for the layout here; I copied and pasted it from an OpenOffice spreadsheet.
fs        RAID level   format (sec)   write 1 GB (sec)
xfs       0            2              20
jfs       0            n/a            20
ext2      0            60             20
ext4dev   0            48             22
ext3      0            62             22
reiser    0            n/a            25
ext4      5            77             32
ext4      0            49             32
ext4      1            61             33
xfs       5            9              48
jfs       5            n/a            50
ext4dev   5            74             50
ext2      5            93             55
reiser    5            n/a            58
ext3      5            94             61
jfs       1            n/a            66
xfs       1            2              68
reiser    1            n/a            69
ext4dev   1            59             70
ext2      1            63             70
ext3      1            68             72
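For a sense of scale, the write column converts directly to throughput: 1 GB is 1024 MB, so the 20-second RAID 0 runs work out to roughly 51 MB/s, while the 70-plus-second RAID 1 runs are around 14 MB/s. A quick helper (integer math, so values are approximate):

```shell
# Approximate write throughput in MB/s from the "write 1 GB (sec)" column.
mb_per_sec() {
    echo $(( 1024 / $1 ))
}

mb_per_sec 20    # fastest RAID 0 results: about 51 MB/s
mb_per_sec 72    # slowest RAID 1 result: about 14 MB/s
```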
The same list, ordered by filesystem and then RAID level.
fs        RAID level   format (sec)   write 1 GB (sec)
ext2      0            60             20
ext2      1            63             70
ext2      5            93             55
ext3      0            62             22
ext3      1            68             72
ext3      5            94             61
ext4      0            49             32
ext4      1            61             33
ext4      5            77             32
ext4dev   0            48             22
ext4dev   1            59             70
ext4dev   5            74             50
jfs       0            n/a            20
jfs       1            n/a            66
jfs       5            n/a            50
reiser    0            n/a            25
reiser    1            n/a            69
reiser    5            n/a            58
xfs       0            2              20
xfs       1            2              68
xfs       5            9              48
The machine for this test is a dual quad-core Opteron 2350 (8 cores total) with 64 GB of RAM, 3 integrated nVidia MCP55 SATA controllers, and 4 Western Digital WD5001ABYS-0 500 GB SATA drives. The OS is a plain installation of Slamd64 12.2 (Slackware for AMD64). uname reports:
root @ vsql2 (/proc) uname -a
Linux vsql2 2.6.27.7 #1 SMP Sun Dec 7 22:31:27 GMT 2008 x86_64 Quad-Core AMD Opteron(tm) Processor 2350 AuthenticAMD GNU/Linux
I have not customized the kernel at all, so there may be performance gains available beyond what is shown here. This wasn't an absolute performance test; it was a filesystem and RAID comparison. For example, better SATA drivers should improve performance, but that improvement should scale equally across all of the configurations.
The RAID configuration is as follows. Each partition is 100 GB, so every array is working with the same amount of space per drive.
root @ vsql2 (/proc) cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid1 sdd2[2] sdc2[1] sdb2[0]
104864192 blocks [3/3] [UUU]
md2 : active raid5 sdd3[2] sdc3[1] sdb3[0]
209728384 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
md0 : active raid0 sdd1[2] sdc1[1] sdb1[0]
314592576 blocks 64k chunks
unused devices: <none>
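Arrays matching the mdstat output above could be created with mdadm commands along these lines. This is a reconstruction from the output, not the commands actually used; note that only sdb, sdc, and sdd appear in the arrays.

```shell
# Reconstruction from /proc/mdstat; device names are taken from that output.
# Three ~100 GB partitions per drive, three drives per array, 64k chunks
# where mdstat reports them. Do not run against disks holding data.
mdadm --create /dev/md0 --level=0 --chunk=64 --raid-devices=3 \
      /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm --create /dev/md1 --level=1 --raid-devices=3 \
      /dev/sdb2 /dev/sdc2 /dev/sdd2
mdadm --create /dev/md2 --level=5 --chunk=64 --raid-devices=3 \
      /dev/sdb3 /dev/sdc3 /dev/sdd3
```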