RAID Controller Shoot-Out 88
mikemuch writes "ExtremeTech has a comparison with benchmarks of three RAID controllers from Adaptec, LSI Logic, and Promise, and along the way gives you a little refresher course on RAID in general and why you want to use it: faster throughput, longer uptime, and improved data security. Motherboard RAID controllers do well when there's 'very little or no load on the CPU, I/O bus, and memory bandwidth. But with heavy traffic and processor loads, the limitations of the shared bus and the benefits of intelligent RAID's integrated IOP and memory cache have a more significant impact.'"
Moral of the story... (Score:3, Insightful)
Re:Moral of the story... (Score:3, Insightful)
Re:Moral of the story... (Score:4, Informative)
Friday, 4PM (Score:2)
If you can get onsite service contracts in your area this is a very good selling point for them. If you're > 100 miles outside of a "major" metro area, good luck.
I've never had an SMP server slow down with software RAID-1 mirrors that I could notice.
Re:Moral of the story... (Score:1)
Re:Moral of the story... (Score:2)
Re:Moral of the story... (Score:1)
Re:Moral of the story... (Score:3, Informative)
I don't know why the meme that software (or pseudo-hardware) RAID5 "sucks up" CPU cycles continues to propagate.
A 300 MHz Pentium II has a RAID5 checksumming speed (in Linux) of about 800 MB/s. At the more realistic 50-75 MB/s your average PC's RAID5 array can sustain before the bus and physical drives hit their limits, the processing overhead of checksumming on any remotely modern CPU is insignificant.
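The parity math really is that cheap: RAID5 parity is just a byte-wise XOR across the data stripes. A rough Python sketch of the idea (purely illustrative — the kernel uses hand-optimized SIMD code, not anything like this):

```python
# Illustrative only: RAID5 parity is a byte-wise XOR across the data stripes.
# A lost stripe is reconstructed by XORing all the surviving stripes together.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Three data stripes plus one parity stripe (a 4-drive RAID5 stripe set).
stripes = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(stripes)

# If any one stripe is lost, XOR of the rest (including parity) recovers it.
recovered = xor_blocks([stripes[0], stripes[2], parity])
assert recovered == stripes[1]
```

The inner loop is one XOR per byte, which is why even a Pentium II could push hundreds of megabytes per second through it.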
Re:Moral of the story... (Score:2)
It's because anandtech and tomshardware "experts" (like the grandparent) believe everything they read in a forum post. This is also the reason everyone believes Via chipsets suck even though they haven't in 5 years.
Re:Moral of the story... (Score:1)
Re:Moral of the story... (Score:2)
Re:Moral of the story... (Score:1)
Re:RAID0 is evil and must die. (Score:4, Insightful)
Re:RAID0 is evil and must die. (Score:3, Insightful)
In case you didn't realize it, the purpose of RAID0 is to increase performance. People running it are not concerned about data loss on those drives; they are trading off reliability for increased performance. And by the way, it's DUMAS!
Re:RAID0 is evil and must die. (Score:1)
The father or the son?
Re:RAID0 is evil and must die. (Score:2)
Re:RAID0 is evil and must die. (Score:1)
Re:RAID0 is evil and must die. (Score:2)
I have two systems, both fairly nice. My video editing rig boots from solid-state media (a 4 GB CF card) and keeps the video on a RAID0. The moment the transcoding/editing/whatever is done, it pipes over the network to the other machine, which is running RAID5.
-nB
Re:RAID0 is evil and must die. (Score:2)
I have a 250GB system drive and 2x250GB in RAID0 as a media dump. I can install games and toss huge RARs on the RAID, and if a drive bites the dust so be it.
Re:RAID0 is evil and must die. (Score:2, Insightful)
Re:RAID0 is evil and must die. (Score:2, Insightful)
Sure, if you use RAID0 to store your important documents and don't back up, you're either a masochist or your teacher should reconsider the decision to mainstream you. However, if speed is more important than data safety RAID0 has its place.
One example is gamers. The kind of gamers who sometimes have a computer only for gaming. Other than their saved games, the data integrity isn't all that important as a reinstall could take place in an afternoon.
There are also many fields which require fast read/write speeds.
Re:RAID0 is evil and must die. (Score:2, Interesting)
Re:RAID0 is evil and must die. (Score:4, Insightful)
Re:RAID0 is evil and must die. (Score:1)
Re:RAID0 is evil and must die. (Score:2)
And personally, I wouldn't capture 640x400 at 24 bits without using a lossless codec like HuffYUV. That cuts the size down a good bit without sacrificing quality.
Re:RAID0 is evil and must die. (Score:2)
Your numbers sound very low. A newish (160 GB+, 8 MB+ cache) 7200 RPM SATA drive should be able to sustain 40-70 MB/s (depending on the part of the disk) for streaming reads.
Re:RAID0 is evil and must die. (Score:2)
From what I've seen, copying from 250GB to 250GB (both SATA drives, or one SATA and one PATA) gets me 25-40 MB/s read and 25-40 MB/s write. Whether that's running into some other sort of bottleneck or is simply a limitation of the OS (WinXP) and file system (NTFS) is beyond what I care about. Copying between the
Re:RAID0 is evil and must die. (Score:5, Funny)
"Hey, let's double our chances of data loss by distributing data over TWO drives instead of one.
Dumbass."
Thank you for your comment, Dumbass, but around here you don't need to sign your name to each comment. The system does that for you automatically.
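Jokes aside, the quip is roughly right on the math: with independent drive failures, a stripe is lost if any member fails. A quick back-of-the-envelope sketch (the 5% annual failure rate below is just an assumed number for illustration):

```python
# Assume each drive independently has probability p of failing in a year.
# A RAID0 stripe is lost if ANY member drive fails; a single drive is lost
# only if that one drive fails. The p = 0.05 figure is purely illustrative.

def raid0_loss_probability(p, n_drives):
    """Probability the array is lost = 1 - P(all drives survive)."""
    return 1 - (1 - p) ** n_drives

p = 0.05
single = p                                   # 0.05
striped = raid0_loss_probability(p, 2)       # 1 - 0.95**2 = 0.0975
# For small p, 1 - (1-p)^2 = 2p - p^2: just under double the risk.
```

So "double our chances of data loss" overstates it only by the tiny p² term.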
Re:RAID0 is, essentially, evil (Score:1)
I manage a few small data centers and we depend on RAID 1 and RAID 5 (and redundant servers) to keep our business running. Downtime is expensive, but rebuilding a machine from scratch is expensive too. So we don't use any software RAID or software-assisted on-board "controllers" - it's hard to call them controllers. RAID filters? RAID bridges? RAID-like adapters? I love real RAID controllers. Everybody I interview has to explain RAID and something about how it works.
Print version of article (Score:2, Informative)
Annoying "next page" articles...
Re:Print version of article (Score:2)
Re:Print version of article (Score:2)
eSATA Hardware RAID with port multiplier (Score:2, Interesting)
Re:Stupid cheap fakeraids (Score:2)
Vendors supply drivers for most OSes, while Linux has its own drivers built into the kernel for some of these cards. Also, I believe the Adaptec and the LSI Logic are both true RAID cards (XOR is handled on the card for RAID5), while the other two cards are fakeraids.
Rebuilding? (Score:3, Interesting)
There's a stigma associated with host-based controllers that trying to rebuild an array with them is tantamount to masochism. I think it comes from the fact that an intelligent controller can rebuild an array through BIOS-only intervention.
Anyone care to shed some light on how rebuilding arrays compares when using intelligent vs host-based controllers?
Re:Rebuilding? (Score:3, Informative)
Sure.
All the BIOS RAID interfaces I've seen (mostly MegaRaid and Adaptec) suck hard. About as friendly as a cobra, and slightly more dangerous if you do the wrong thing.
Software RAID interfaces can do better - But few actually do.
However, I wouldn't suggest choosing one or the other based on the friendliness of rebuilding - whichever you choose, when you eventually need to replace a drive
Re:Rebuilding? (Score:3, Informative)
Re:Rebuilding? (Score:1)
That's just more fodder toward buying a used filer (NetApp) running RAID4 and WAFL...
(Warning: Shameless work plug and great stock tip above.)
How did they work under? (Score:5, Interesting)
How about calling it the Windows RAID controller shoot out?
ExtremeTech should just change its name to MainstreamTech and get it over with.
Yup (Score:3, Insightful)
The ExtremeTech article was a complete waste of time.
Re:Yup (Score:2)
Do they support Hot Swap enclosures?
How long to rebuild after a drive replacement?
Yeah, pretty useless.
Re:Yup (Score:3, Informative)
Re:Yup (Score:1)
*nix RAID Support (Score:5, Informative)
However, Adaptec has refused to provide documentation so that the OpenBSD project may improve the drivers.
"Note: In the past year Adaptec has lied to us repeatedly about forthcoming documentation so that RAID support for these (rather buggy) raid controllers could be stabilized, improved, and managed. As a result, we do not recommend the Adaptec cards for use."
Other *nix variants might support the Adaptec and Promise cards a little more, but the hardware fully supported by OpenBSD is generally well-supported across all *nix variants.
Out of the cards reviewed, only the one from LSI is worth buying. Adaptec may have a little support, but it's not a good idea to purchase any RAID cards from them until they start providing better documentation.
Re:*nix RAID Support (Score:1)
Re:*nix RAID Support stay away from Adaptec (Score:1, Informative)
Very disappointing article (Score:1, Informative)
Where's Areca? They're the performance leader in this market, and their pricing is now in line with competition.
Where's 3Ware?
What about other host-based RAID solutions? Broadcom? Marvell?
Don't even get me started about what they tested. This is just not a serious RAID review. I strongly urge folks who are interested in this subject to Google for
The article may be an advertisement... (Score:2)
The article may be an advertisement disguised as an article. Possibly they don't want to benchmark 3Ware because it would win. Judging from the article, possibly this is an Adaptec ad.
--
U.S. Taxpayer Karma: If you contribute money to kill people, expect your own quality of life to diminish.
Most ATA RAID controllers are unreliable (Score:4, Insightful)
Of ATA controllers, our experience shows that 3ware controllers are the least unreliable. That is, they generally suck, because they have demonstrated performance problems and other weird failures that 3ware couldn't help us resolve, but they suffer from the least data corruption.
For whatever reason, the on-board controllers are the worst. They seem nice and perform well enough, but they have the highest rate of data corruption.
It may or may not surprise you that software RAID is relatively reliable. With a RAID1, you'd think you're twice as likely to corrupt data on writes, because you have to send the same data twice to two different drives. Sure, having them both bad is unlikely, but at a later time, how do you know which copy of a given sector is correct? But we think that removing an unreliable hardware RAID controller from the data path and just having the relatively simple ATA controller in the way reduces chances of a problem. Just a guess.
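The "which copy of a sector is correct?" problem is real: a plain mirror can detect that the two copies diverge, but it can't arbitrate between them without an independent checksum. A toy sketch of that idea (hypothetical in-memory "sectors", not how any real RAID implementation stores its metadata):

```python
import hashlib

# Toy illustration: two mirror copies of a "sector" plus an independently
# stored checksum. Without the checksum a mismatch is detectable but
# unresolvable; with it, the good copy can be identified.

def pick_good_copy(copy_a, copy_b, stored_digest):
    """Return whichever copy matches the stored checksum, or None."""
    for copy in (copy_a, copy_b):
        if hashlib.sha256(copy).hexdigest() == stored_digest:
            return copy
    return None  # both copies corrupt (or the checksum itself is bad)

good = b"important data"
digest = hashlib.sha256(good).hexdigest()
bad = b"important dat\x00"  # simulated bit rot on one mirror

assert pick_good_copy(good, bad, digest) == good
assert pick_good_copy(bad, good, digest) == good
```

This is essentially why checksumming storage layers can self-heal a mirror while a bare RAID1 can only flag the disagreement.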
If you want truly reliable hardware RAID, you need to spend your life savings on an industrial-strength SCSI RAID controller.
The moral of the story is that there's really no such thing as 100% reliable data storage. If you want speed and don't care about reliability, RAID0 is for you. Other RAID levels add redundancy, which is nice in theory, but add hardware complexity that offsets some of the advantage. For my critical data, I store to CD and DVD ROM. And I make multiple copies of those, because those aren't all that reliable either.
Re:Most ATA RAID controllers are unreliable (Score:3, Informative)
Re:Most ATA RAID controllers are unreliable (Score:4, Informative)
Re:Most ATA RAID controllers are unreliable (Score:2)
I recently installed Ubuntu Dapper Drake onto it with no problems. The supplied kernel had the driver already, so no need to fiddle about patching kernel source and building the driver.
Re:Most ATA RAID controllers are unreliable (Score:2)
Re:Most ATA RAID controllers are unreliable (Score:3, Insightful)
I've dealt with all of them (hardware RAID, hardware RAID that is really software RAID, and Linux software RAID), and for a smaller company, software RAID wins out. I don't have to worry about driver issues, I don't have to worry about keeping 2 extra RAID controllers on-hand in case the first one fries, and I can move the disks to another machine with different hardware and still get the RAID back up and running.
Software RAID seems very flexi
Re:Most ATA RAID controllers are unreliable BUNK (Score:1)
I built a sizeable (by 2003 standards) 4-drive RAID5 array using a cheap P3 motherboard and an Adaptec 2400A RAID controller on Feb. 16, 2003. It's still running swimmingly, serving up my media and user directories, and has never had any glitches -- not even a failed disk. Data corruption? None. If you have data corruption on a RAID5 controller, you need to seriously reconsider your parts list. (Not saying the ATA-2400A
This isn't a shootout (Score:3, Interesting)
If they had tested multiple series of the Adaptec, LSI Logic, and some 3ware cards, I would be more impressed, but this just seems like an all-out advertisement to me.
AOE is better than any of that crap (Score:2)
These guys are behind the technology: http://www.coraid.com/ [coraid.com]
If you don't like Open Source, you won't like it yet. Wait a few years and there will be a version you'll like; the economics of it are compelling. But right now you need to be able to write your own init scripts
Re:AOE is better than any of that crap (Score:1)
From http://linuxdevices.com/news/NS3189760067.html [linuxdevices.com]
You can get thi
Re:AOE is better than any of that crap (Score:2, Informative)
I'm using it to back up a couple of terabytes nightly with rsync --link-dest. (See Mike Rubel's site [mikerubel.org] if you're not familiar with that trick).
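For anyone unfamiliar with the --link-dest trick: unchanged files in each nightly snapshot become hard links into the previous snapshot, so every snapshot looks like a full copy but only changed files consume new space. A rough Python sketch of that mechanism (rsync does far more — this flat-directory toy just shows the hard-link idea):

```python
import os, filecmp, shutil, tempfile
from pathlib import Path

def snapshot(src, prev_snap, new_snap):
    """Copy src into new_snap, hard-linking files unchanged since prev_snap.
    Flat directories only, for illustration."""
    os.makedirs(new_snap, exist_ok=True)
    for name in os.listdir(src):
        src_file = os.path.join(src, name)
        prev_file = os.path.join(prev_snap, name) if prev_snap else None
        dst_file = os.path.join(new_snap, name)
        if prev_file and os.path.exists(prev_file) and \
                filecmp.cmp(src_file, prev_file, shallow=False):
            os.link(prev_file, dst_file)      # unchanged: hard link, no new space
        else:
            shutil.copy2(src_file, dst_file)  # changed or new: real copy

# Demo in a temp dir: the second snapshot hard-links the unchanged file.
base = tempfile.mkdtemp()
src = os.path.join(base, "src"); os.makedirs(src)
Path(src, "a.txt").write_text("hello")
snap1 = os.path.join(base, "snap1")
snapshot(src, None, snap1)
snap2 = os.path.join(base, "snap2")
snapshot(src, snap1, snap2)
# Same inode => the file's data is physically stored only once.
assert os.path.samefile(os.path.join(snap1, "a.txt"),
                        os.path.join(snap2, "a.txt"))
```

rsync's --link-dest does the same comparison against the previous snapshot tree, which is why months of nightlies of a mostly-static dataset fit in little more than one full copy.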
Performance feels about the same as the $200,000 (US) Fibre Channel SAN array sitting next to it, but I
Re:AOE is better than any of that crap (Score:1)
Re:AOE is better than any of that crap (Score:1)
There's some nice stats located here [ieiworld.com]
Now, I don't know about you, but a single Gb/s Ethernet port is going to have a hard time filling up any of those buses except the oldest one.
Screw theory. (Score:2)
I recently replaced a ~2 TB duplexed U320 SCSI RAID array with a ~8 TB AoE array. Same hosts, same OS, no changes except from SCSI RAID to AoE RAID. My bus is PCI-133 and I have two gigabit Ethernet ports on the motherboard, four more on two Intel PCI cards (not all of tho
Support for deleting a logical drive? (Score:2)
Also, have they gotten any better about continuing to run during single-disk failures? With the MegaRAID cards I've used, it's about 50/50 whether the RAID subsystem crashes during a
Re:Support for deleting a logical drive? (Score:1)
The partitioning should be done by whatever operating system you are running. A true hardware RAID will convince the OS that you have a single hard drive (depending on how you configure the controller, I guess). As far as upgrading to larger drives, I would imagine that with my 3ware card, I could add a single drive and rebuild the array (via the
Re:Support for deleting a logical drive? (Score:2)
Many naive users throw all drives into a single massive RAID 5 logical drive and then partition that drive at the OS level. I call that naive because when you set up the array that way you get very poor performance: I/O from all running programs has
Re:Support for deleting a logical drive? (Score:1)
Re:Support for deleting a logical drive? (Score:1)
RAID5 doesn't gain any performance until you hit 4 or more drives. RAID5 on three drives is usually a performance penalty. Gen
Re:Support for deleting a logical drive? (Score:2)
Lets say I have 8 20-gig hard disks. I want to build a web server with a MySQL backend. The mysqld process will be heavily disk bound on files that reside in
Option 1:
8-way RAID 5, 140 gigs usable.
Option 2:
8-way RAID 5, 140 gigs usable.
Re:Support for deleting a logical drive? (Score:2)
Multiple logical drives over the same physical disks - especially at different RAID levels - usually hurt performance because of the different access patterns to the drives and the way OSes try to optimise the order of (physical) disk operations.
This i
Re:Support for deleting a logical drive? (Score:1)
Re:Support for deleting a logical drive? (Score:2)
Most Adaptec controllers I've used can do it.
Re:Support for deleting a logical drive? (Score:1)
I should also note that I concluded that all the low-end IDE controllers are a waste of money compared to software RAID available with any decent OS. Once you get into the realm of hardware IDE RAID controllers that star
Re:Support for deleting a logical drive? (Score:2)
Actually I can recall seeing this option on way more SCSI than IDE/SATA RAID controllers. Most IDE/SATA controllers I've used (eg: 3ware) only let you specify drives to create and array from and don't then allow you to
Re:Support for deleting a logical drive? (Score:1)
My experience is a little dated: LSI/Mylex ExtremeRAID and 250 series, AMI MegaRAID (several, all older), and the Dell PERC controller (circa 2002 or so). On the IDE front it's all IDE, no SATA: Pro
Re:Support for deleting a logical drive? (Score:2)
He means if you define multiple RAID arrays on the card ("logical drives") can you delete them individually.
Don't trust them farther than you can throw them (Score:4, Informative)
So apparently the SI3114R doesn't monitor S.M.A.R.T. data, and its error-handling capabilities fall somewhere between "shitty" and "non-existent". No big deal for me; I was only inconvenienced by having to re-install operating systems and applications.
The moral of this long-winded story is that you generally get what you pay for. This isn't the first bad experience I've had with on-board RAID controllers. If your data is important, then spend the appropriate money (think in terms of data replacement cost), do the appropriate research, and invest in a RAID setup that's right for your situation. If your protected data consists of anything more important than your Oblivion saved games, your mobo's RAID controller (or the $39 Fry's special) is probably the wrong choice.
And if anyone cares to know, I'm now using the NVRAID on the mobo (we'll eventually see if it handles failures more gracefully), and I use an Areca ARC-1110 [areca.us] on my server. I can attest that the Areca card does handle failures extremely well, albeit noisily.
Re:Don't trust them farther than you can throw the (Score:4, Insightful)
Re:Don't trust them farther than you can throw the (Score:2)
Re:Don't trust them farther than you can throw the (Score:1)
Disappointing (Score:2)
why no scsi raid controllers? (Score:2)
If you really do *need* a RAID setup, it seems stupid to ignore the SCSI angle on things, because SCSI RAID controllers are much more mature in features/performance/reliability and obviously aimed at a market which is less tolerant of cheap'n'nasty.
I know that SCSI costs a lot more and, compared to SATA, is not all that muc