


Experiences w/ Software RAID 5 Under Linux?
MagnusDredd asks: "I am trying to build a large home drive array on the cheap. I have 8 Maxtor 250GB hard drives that I got at Fry's Electronics for $120 apiece. I have an old 500MHz machine that I can re-purpose to sit in the corner and serve files. I plan on running Slackware on the machine; there will be no X11, or much of anything other than SMB, NFS, etc. I have worked with hardware arrays, but have no experience with software RAID. Since I am about to trust a bunch of files to this array (not only mine but I'm storing files for friends as well), I am concerned with reliability. How stable is the current RAID 5 support in Linux? How hard is it to rebuild an array? How well does the hot spare work? Will it rebuild using the spare automatically if it detects a drive has failed?"
Works great (Score:5, Informative)
Re:Works great (Score:2)
I love the idea of FireWire, too; it makes perfect sense, 'cause if you are gonna have RAID reliability, then you might as well have hot-swap. (Note to self: save up for FireWire enclosures.)
I would have 2 years of uptime, but NOOOOO 1 national power outage and 1 drive crash (perfect recovery). Uptime 71 days =( [I WAS at 260 at one point]
just make sure that the drives are up to that much spin-time, a
Re:Works great (Score:5, Interesting)
That tool (smartctl) checks the SMART info on the disk for possible failures.
I do a lot of software RAIDs, and with smartctl no drive crash has ever surprised me. I always had time to get a spare disk and replace it in the array before something unfunny happened.
Do a smartctl -t short
Read its home page:
http://smartmontools.sourceforge.net/
An example of a failing disk:
http://smartmontools.sourceforge.net/examp
An example of the same type of disk, but with no errors:
http://smartmontools.sourceforge.net/exa
Software RAID works perfectly on Linux... and combined with LVM things get even better.
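For reference, the basic smartctl workflow mentioned above looks roughly like this (a sketch only; /dev/hda is a placeholder, and older setups may need -d ata to talk to IDE drives):
smartctl -H /dev/hda           # overall health self-assessment
smartctl -t short /dev/hda     # kick off a short self-test
smartctl -l selftest /dev/hda  # read back the self-test log afterwards
smartctl -a /dev/hda           # full dump of SMART attributes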
For those looking (Score:3, Informative)
Even better smartd options (Score:4, Informative)
You can get smartd to execute tests automatically, using the -s option.
In my smartd.conf file, I have :
-s (L/../../7/03|S/../.././05)
on the device lines, which means: run an online long self-test at 3 am every Sunday, and an online short self-test at 5 am every day.
mdadm running as a daemon, and watching the md arrays is also a good idea.
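To make that concrete, a smartd.conf device line and a monitoring daemon invocation might look something like this (a sketch; device names and the mail address are placeholders):
# in /etc/smartd.conf:
/dev/hda -a -d ata -s (L/../../7/03|S/../.././05) -m root
# and, from an init script or rc.local, have mdadm watch the arrays and mail on failure:
mdadm --monitor --scan --mail root --delay 1800 &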
Re:Works great (Score:2)
Re:Works great (Score:3, Insightful)
The OP said he switched to FireWire for hot-swapping reasons alone; that is why I mentioned SATA as an alternative.
If you're bent on having an external RAID 5, you're probably safest going with a DIY gigabit ethernet NAS.
Stick with hardware RAID (Score:2, Informative)
Though from what I hear, software RAID on Linux works decently.
Re:Stick with hardware RAID (Score:5, Insightful)
Consider--your ATA RAID controller dies three years down the road. What if the manufacturer no longer makes it?
Suddenly, you've got nearly 2 TB of data that is completely unreadable by normal controllers, and you can't replace the broken one! Oops!
Software RAID under Linux provides a distinct advantage, because it will always work with regular off-the-shelf hardware. A dead ATA controller can be replaced with any other ATA controller, or the drives can be taken out entirely and put in ANY other computer.
Re:Stick with hardware RAID, mod this up! (Score:2, Interesting)
Moderators, mod this up!
Or, have two backup RAID controllers. (Score:5, Informative)
This is a VERY big issue. We've found that Promise Technology RAID controllers have problems, and in our experience the company doesn't provide tech support when the problems are difficult.
Promise is SHRAID, not RAID (Score:4, Informative)
Good, relatively inexpensive IDE and SATA RAID can be had with 3Ware Controllers [3ware.com]. 2-drive models start around $140, and they support up to 12 drives on their more expensive controllers. The drives appear as a single physical device to the O/S, whether it's Windoze, Linux, BSD, DOS 3.1, etc.
Another source of true hardware RAID (Score:5, Informative)
Re:Stick with hardware RAID (Score:5, Interesting)
This happened to me. The card was sort of still working... it could read, with lots of errors that were usually recoverable, but writing was flaky.
Luckily, even after about 3 years, 3ware (now AMCC) [3ware.com] was willing to send me a free replacement card. They answered the phone quickly (no long wait on hold), the guy I talked with knew the products well, and he had me email some log files. He looked at them for about a minute, asked some questions about the cables I was using, and then gave me an RMA number.
The new card came, and my heart sank when I saw it was a newer model. But I plugged the old drives in, and it automatically recognized their format and everything worked as it should.
This might not work on those cheapo cards like Promise that really are just multiple IDE controllers and a BIOS that does all the RAID in software. Yeah, I know they're cheaper, but the 3ware cards really are very good and worth the money if you can afford them.
RaidWeb.com has nice hardware too. (Score:3, Informative)
The only
Re:Stick with hardware RAID (Score:3, Informative)
This is also a good reason to use mirroring rather than fancier schemes like striping or RAID-5, if you can afford the capacity hit. You can always mount the drive individually.
Re:Stick with hardware RAID (Score:2)
From what I read, software RAID is just as good as hardware RAID these days, and sometimes better. But it's only what I read; I don't have first-hand info.
Re:Stick with hardware RAID (Score:3)
There's a reason HW RAID is used in all of the top-end setups: it's faster. The CPU might not be a 3GHz Extreme Edition or some other recognizable market-friendly junk, but make no mistake, the performance is there. Most of the time the chips used on cards (not only RAID cards) are designed for that specific purpose and
Re:Stick with hardware RAID (Score:3, Informative)
Stick with Linux RAID. It knows how to do it better.
Re:Stick with hardware RAID (Score:3, Interesting)
Back when I was using a PII-450 as a file server, I tried out software RAID on 3 x 80 GB IDE disks. It mostly worked fine - except when it didn't. Generally problems happened when the box was under heavy load - one of the disks would be marked bad, and a painful rebuild would ensue. Once two disks were marked bad - I followed the terrifying instructions in the "RAID How-To", and got all my data back. That was the last straw for me... I decided that I didn't have time to wat
Re:Stick with hardware RAID (Score:5, Informative)
I disagree with this. Here's why: the most important thing is your data. Hardware RAID works fine until the controller dies. Once that happens, you must replace it with the same type of controller, or your data is basically gone, because each manufacturer uses its own proprietary way of storing the RAID metadata.
Software RAID doesn't have that problem. If a controller dies, you can buy a completely different one and it just won't matter: the data on your disk is at this point just blocks that are addressable with a new controller in the same way that they were before.
Another advantage is that software RAID allows you to use any kind of disk as a RAID element. If you can put a partition on it, you can use it (as long as the partition meets the size constraints). So you can build a RAID set out of, e.g., a standard IDE drive and a serial ATA drive. The kernel doesn't care -- it's just a block device as far as it's concerned. The end result is that you can spread the risk of failure not just across drives but across controllers as well.
That kind of flexibility simply doesn't exist in hardware RAID. In my opinion, it's worth a lot.
That said, hardware RAID does have its advantages -- good implementations offload some of the computing burden from the CPU, and really good ones will deal with hotswapping disks automatically. But keep in mind that dynamic configuration of the hardware RAID device (operations such as telling it what to do with the disk you just swapped into it) is something that has to be supported by the operating system driver itself and a set of utilities designed to work specifically with that driver. Otherwise you have to take the entire system down in order to do such reconfiguration (most hardware RAID cards have a BIOS utility for such things).
Oh, one other advantage in favor of software RAID: it allows you to take advantage of Moore's Law much more easily. Replace the motherboard/CPU in your system and suddenly your RAID can be faster. Whether it is or not depends on whether or not your previous rig was capable of saturating the disks. With hardware RAID, if the controller isn't capable of saturating the disks out of the box, then you'll never get the maximum performance possible out of the disks you connect to it, even if you have the fastest motherboard/CPU combination on the planet.
Also, SW RAID is partition based (Score:3, Insightful)
Also, it's partition based, not disk based (under Linux, at least). This means that with just two drives you can create one two-disk RAID-1 array (for safety) and one two-disk RAID-0 array (for performance). Just create two partitions on each drive, pair the first partition on each drive in a RAID-0 config and the second partitions as RAID-1.
You can't do a single RAID-1/0 array with only two disks though. Yo
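A sketch of that two-drive layout with mdadm (the /dev/hda and /dev/hdc partitions are placeholders; adjust to your own drives):
# first partitions striped for speed, second partitions mirrored for safety
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/hda1 /dev/hdc1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hda2 /dev/hdc2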
Re:Stick with hardware RAID (Score:3, Interesting)
stick with hardware (Score:2, Insightful)
my 2 cents
Here is a better question (Score:4, Insightful)
Re:Here is a better question (Score:4, Interesting)
Other than that, 3ware has been decent for us. We are about to put into service a new 9500 series 12 port SATA card.
I wish I could say our ACNC SATA-to-SCSI RAIDs have been as reliable. We have three ACNC units; two of them went weird after we did a firmware upgrade that tech support told us to do, and we lost the arrays.
We called tech support and they said, "Oh, we forgot to mention that when you upgrade from the version you are on, you will lose your arrays."
Bottleneck (Score:2)
Bottleneck is not CPU (Score:3, Insightful)
gigabit fiber (Score:3, Informative)
Performance Tips (Score:5, Informative)
If you've got a lot of data that is read/re-read or written/re-read by clients, then RAM really helps; for streaming stuff which doesn't get many repeat accesses (eg running a movie editing suite) it might not help at all.
For performance it's often worth sacrificing a bit of space and going RAID 1. Again, it depends on whether you need the space first or the performance first.
Obviously don't put two drives of a RAID set on the same IDE controller as master/slave or it'll suck. Also, if you can find a mainboard with multiple PCI busses, that helps.
Finally, be aware that if you put more than a couple of add-on IDE controllers on the same PCI bus it'll suck - that's one of the big problems with software RAID 5 versus hardware, and it is less of a problem with RAID 1 - you are doing a lot of repeated PCI bus copies, and that hurts the speed of today's drives.
I use RAID 1 everywhere; disks may be cheap, but you have to treat them as unreliable nowadays.
Where I used to work. (Score:4, Informative)
Re:Where I used to work. (Score:2)
Vinum with FreeBSD (Score:3, Informative)
BTW, I switched from Linux to FreeBSD for the server years ago for the stability.
Mod UP please! (Re:Vinum with FreeBSD) (Score:4, Interesting)
Anybody here remember Walnut Creek's huge FTP archive at "cdrom.com", which back in its heyday in the late 1990s used to be the biggest, highest-traffic FTP download site on the planet? They used a combination of Vinum software RAID and Mylex hardware RAID to handle the load. I remember reading a discussion article from them once saying that until you get a totally ridiculous volume of FTP sessions hammering away at their arrays, Vinum was actually slightly faster than the hardware array controller.
Don't screw around - hardware is better. (Score:5, Informative)
Re:Don't screw around - hardware is better. (Score:4, Interesting)
Hardware RAID5 is fine if your sole goal is reliability. If you need even an iota of performance, then go with software RAID5. The 3wares have especially abysmal RAID5 performance, especially the older series like the 75xx and 85xx cards. 3ware has admitted it, and it's something targeted for fixing in the 95xx series (I haven't gotten my hands on those yet, so I don't know).
As for software RAID reliability, I find that Linux's software RAID is much more forgiving than even the most resilient of hardware RAIDs. I've lost 4 drives out of a 12-drive system at the same time, and Linux let me piece the RAID back together and I lost nothing. Was the machine down? Yes. Did I lose data? No. Compare that with a 3ware hardware RAID system where I lost 2 drives. Even though I probably could have salvaged 99% of the data off that array, the 3ware just would not let me work with that failed array.
Also, on any reasonably modern system, the software RAID will be faster. You just have a much faster processor to do the RAID processing for you. The added overhead of the RAID5 processing is nothing compared to a 1-2GHz processor.
This is a very flawed logic (Score:4, Informative)
This logic doesn't hold. Let's first talk about the performance.
Also, on any reasonably modern system, the software RAID will be faster. You just have a much faster processor to do the RAID processing for you. The added overhead of the RAID5 processing is nothing compared to a 1-2GHz processor.
The actual RAID processing is relatively easy, and any RAID solution, be it hardware or software, that is worth anything will not have any trouble doing the logic (perhaps the cards mentioned are indeed not worth anything). The processing isn't your limiting factor; it is data throughput. This is where hardware shines. A lot of extra data has to be shipped in and out to maintain and validate the RAID. This can easily saturate busses. A hardware solution allows the computer to communicate only the "real" data between itself and the hardware device, and then allows that device to take on the burden of communicating with the individual drives on their own dedicated busses. Sure, that device can become overwhelmed, but I submit to you that if it does, it was poorly designed.
I am not saying that one shouldn't consider software RAID solutions. Just don't consider them because you think the performance will be better.
Now lets talk about data recovery.
I've lost 4 drives out of a 12 drive system at the same time, and Linux has let me piece the RAID back together and I've lost nothing. Was the machine down? Yes. Did I lose data? No. Compare that with a 3ware hardware RAID system where I lost 2 drives. Even thought I probably could have salvaged 99% of the data off that array, the 3ware just would not let me work with that failed array.
Let us be clear: we are talking about RAID5. In RAID5, you simply cannot lose more than one drive without losing data integrity. And it isn't like you can get back some of your files; the destruction will be evenly distributed over your entire logical volume(s) as a function of the striping methodology. So it is quite impractical to recover from this scenario. I don't know what kind of system was being employed with this 12-drive array that can withstand a 1/3 array loss, but it certainly wasn't a straight RAID5. I can come up with some solutions that would allow such massive failure, but then we aren't comparing apples to apples. I'd be very interested in knowing what the solution was in this example case. It should also be noted that we don't know how many drives were in the system that lost 2 drives, much less what kind of RAID configuration was being used. No conclusion can be derived from the information provided.
As an aside, more often than not, when we as individuals want a large cheap array, we are less concerned about performance than reliability. We put what we can into the drives, and we hope to maximize our data/$ investment while minimizing our chances for disaster. A software RAID5 is a good solution. Some posts have said that if you can spend so much on the drives, what's stopping you from spending on a nice hardware controller? I submit that perhaps he's broke now! And besides, a controller that can RAID5 8 drives is quite the expensive controller indeed. This has software RAID written all over it.
Re:Don't screw around - hardware is better. (Score:4, Informative)
Don't forget that hardware RAID is a single point of failure. The best solution for the absolute best redundancy and performance is software RAID set up to be fault tolerant of controller failures. For example, put two separate SCSI cards in the box, software-mirror your data between them, and then stripe on top of that for added performance if you have the drives. When using striping and mirroring together, always mirror at the lowest level, then stripe on top of that.
The basic idea is:
C == controller
D == disk
R == virtual raid disk
C1 --> D1,D2,D3
C2 --> D4,D5,D6
R1 = mirror(D1,D4)
R2 = mirror(D2,D5)
R3 = mirror(D3,D6)
R4 = stripe(R1,R2,R3)
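With Linux md that layout could be built roughly like this (a sketch only; in this example sda-sdc hang off one controller and sdd-sdf off the other):
# mirror pairs first, one disk from each controller...
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdd1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sde1
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdf1
# ...then stripe across the mirrors
mdadm --create /dev/md3 --level=0 --raid-devices=3 /dev/md0 /dev/md1 /dev/md2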
One Drive per controller (Score:4, Insightful)
Also, give 'mdadm' a whirl - a little nicer to use than the legacy raidtools-1.x (Neil's stuff really rocks!)
Software RAID5 has been working extremely well for us, but it is NOT a replacement for a real backup strategy.
CONFIGURE IT RIGHT!! small parts... (Score:5, Informative)
First, ensure that all of the drives are IDE masters. Don't double up slaves and masters.
Secondly, DON'T create gigantic partitions on each of the 250s and then RAID them together; you will get bitten, and bitten hard.
Here's the skinny...
1) Ensure that your motherboard/IDE controllers will return SMART status information. Make sure you install the smartmon tools, configure them to run weekly self tests, and ensure you have smartd running so that you get alerted to potentially failing drives ahead of time.
2) Partition your 250GB drives into 40 GB partitions. Then use RAID5 to pull together the partitions across the drives. If you want a giant volume, create a Linear RAID group of all of the RAID5 groups you created and create the filesystem on top of that.
Here's why, this is the juice.
To keep it simple, let's say there are 20 sectors per drive. When a drive gets an uncorrectable error on a sector, it will be kicked out of the array. By partitioning the drive into 5 or 6 partitions, let's say hd(a,c,e,g,i,k,l)1 are in one of the RAID5 groups, which contains sectors 1-4 (out of the fake 20 we made up earlier).
If sector 2 goes bad on
By partitioning the disks you localize the failures a little, thus creating a more likely recovery scenario.
You wind up with a few RAID5 sets that are more resilient to multiple drive failures.
If you are using a hot spare, your rebuild time will also be less, at least for the RAID5 set that failed.
I hope this makes sense.
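As a rough sketch of that scheme with mdadm (assuming the eight drives are all IDE masters, hda/hdc/hde/hdg/hdi/hdk/hdm/hdo, each cut into identical 40GB partitions; illustrative only, not a recipe):
# one RAID5 per partition "slice" across all eight drives
mdadm --create /dev/md0 --level=5 --raid-devices=8 /dev/hd[aceg]1 /dev/hd[ikmo]1
mdadm --create /dev/md1 --level=5 --raid-devices=8 /dev/hd[aceg]2 /dev/hd[ikmo]2
# ...repeat for the remaining slices...
# then glue the RAID5 groups into one big linear device and put the filesystem on that
mdadm --create /dev/md6 --level=linear --raid-devices=6 /dev/md[0-5]
mke2fs -j /dev/md6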
My advice to you is to bite the bullet and simply mirror the disks. That way, no matter how badly they fail you'll have some chance of getting some of the data off.
Comment removed (Score:5, Informative)
Re:CONFIGURE IT RIGHT!! small parts... (Score:3, Informative)
If you have one large partition and an impending drive failure wipes out any cylinder on that drive, all the data on it is shot. That drive won't be used at all during the rebuild... a rebuild of 250GB. You are at risk if, during any point of the long rebuild, a 2nd drive fails completely or even coughs up a bad cylinda
Re:CONFIGURE IT RIGHT!! small parts... (Score:3, Informative)
Not quite. In my experience, bad sectors are only remapped by the drive firmware on write. Attempts to read bad sectors will return errors. This makes sense if you think about it; you might be trying to recover data, and the sector might be readable once in a hundred tries, but if you're writing to the secto
Software RAID on Linux (Score:5, Informative)
mdadm will allow a "spare pool" shared between multiple RAID devices and smartd will check the state of the disk controllers at regular intervals. You should put the system _and_ the disks on UPS to avoid losing data in the event of a power failure (the disks need to write their cache to the physical media before it evaporates). Set up something (mdadm or smartd) to email you in the event of a disk failure, or you may be running in degraded mode for quite a while before you discover it (unless you look at
All in all it seems to work fairly well if you spread the disks across multiple channels, if you have enough RAM for page (buffer) cache, and if you get reliable disks. I have a 4-disk SCSI storage box that I have in RAID 5 mode. It has been running for over two years. The server failed and I had to move it, that is when I discovered mdadm -- A LIFE (DATA) SAVER!
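The shared spare pool is configured in mdadm.conf; something along these lines (UUIDs omitted and array sizes made up, so treat it as a sketch):
# /etc/mdadm.conf
DEVICE /dev/hd*
ARRAY /dev/md0 level=raid5 num-devices=4 spare-group=pool1
ARRAY /dev/md1 level=raid5 num-devices=4 spare-group=pool1
MAILADDR root
# mdadm --monitor will then move a spare from one array in pool1 to another
# array in the same spare-group when a disk fails:
mdadm --monitor --scan --delay 300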
500 MHz? (Score:3, Informative)
If you're spending $960 for the disks at Fry's, why not spend another $80 to $250 at that same Fry's and get a current generation motherboard and CPU (they have package deals that are dirt cheap).
For $80, you can get a 5x faster processor, and a much newer chipset with ATA133 and Serial ATA.
For $250, you can get a board with multiple PCI busses, PCI-X and a chipset capable of handling much more throughput than a cheap PC motherboard.
The I/O bandwidth will be your bottleneck with an 8 drive RAID array. The standard 32bit / 33MHz PCI bus only does about 1Gbps. Serving a gigabit ethernet connection will use all your bandwidth by itself.. when you have 8 ATA drives fighting the NIC for bandwidth, you can see a clear problem.
If you're spending that much for the drives, don't hamstring it by skimping on the motherboard. And, in any case, once you have a Linux box installed, you inevitably start using it for many tasks (caching proxy, mail server, ftp server, dns server, www server, etc). So, a beefier system will stand up better.
Software RAID is probably ok for you (Score:5, Insightful)
The scenario you've mentioned is probably OK for a software RAID. I use it in a production environment without problems, under higher stress than your setup will probably have.
I'd suggest you consider the following items:
a) cooling system - those HDs can generate a lot of heat. Buy a full tower case and add HD coolers to make sure your drives stay cool.
b) Buy the HDs from different brands and stores - RAID5 (either hardware or software) can only recover from one failed drive. If you buy them all from the same brand/store, chances are you end up with 2+ drives with the same defective hardware.
c) CPU - if you are going to use this number of drives, the processor will be a major bottleneck. Do not forget that RAID5 XORs your data to calculate the parity.
d) partition scheme - use smaller partitions and group them together using LVM (see the sketch after this list). This will help you recover from a smaller problem without taking a lot of time to rebuild the array.
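A minimal sketch of item d), with hypothetical partition and volume names (two RAID5 groups pulled into one LVM volume group):
mdadm --create /dev/md0 --level=5 --raid-devices=8 /dev/hd[aceg]1 /dev/hd[ikmo]1
mdadm --create /dev/md1 --level=5 --raid-devices=8 /dev/hd[aceg]2 /dev/hd[ikmo]2
pvcreate /dev/md0 /dev/md1
vgcreate bigvg /dev/md0 /dev/md1
lvcreate -L 400G -n share bigvg
mke2fs -j /dev/bigvg/share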
Re: (Score:3, Insightful)
Re:Software RAID is probably ok for you (Score:3, Insightful)
Be aware that 2.6 LVM still seems to have some 2TB limits.
Worked for me in production (Score:4, Informative)
Software RAID Experinces (Score:5, Informative)
I manage a lot of servers remotely. I started out using the hardware RAID support on my server's mobos. But there were issues with that.
First, it was hard getting Linux driver support (I think drivers were available, but it was a matter of downloading them, and I don't believe they worked on the 2.6 kernels I used).
Then the RAID setup required BIOS settings. When you only have remote access to a server (and no KVM-over-IP), that means you need to work through a tech at the DC. Not, umm, ideal.
And finally, there was the issue of 'what if I need to move these disks to a different server' - one that doesn't have the same RAID controller. Well, it wouldn't work.
Anyway, I ended up using software RAID. I've used it now on a few dozen servers, and I'm really happy with it. Performance seems fine, albeit I'm not using it in really IO-critical environments like a dedicated database server. In 99% of cases I'd now use software RAID in preference to hardware RAID.
What follows are a few tips I'd like to pass along that may be a help with getting a software raid setup...
If you get the chance setup RAID on / and /boot via your OS installer (on a new system). Doing it afterwards is a real pain [tldp.org].
Build RAID support and RAID1 and RAID5 into the kernel (not as modules). You'll need that if you boot from a RAID1 boot partition. Note: if you are using RAID5 you'll need RAID1 built in as well (since I believe in the event of a failed disk the RAID personality swaps from RAID5 to RAID1).
With a 2.6 kernel build I've been getting "no raid1 module" errors at the make install phase when building with a RAID-ed / or /boot. The 'fix' is to compile the RAID support you need into the kernel (not as modules) then run: /sbin/mkinitrd -f /boot/initrd-2.6.8.1.img 2.6.8.1 --omit-raid-modules (substituting your kernel image name/version).
Every now and then I've had the kernel spit a drive out of a RAID array. I've found that sometimes the kernel is being overly cautious. You can often raidhotremove and then raidhotadd it back again, and you may never see a problem again. If you do, it probably really is time to replace the disk.
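The mdadm equivalent of that remove-and-re-add dance is roughly (device names are placeholders):
mdadm /dev/md0 --fail /dev/hdg1 --remove /dev/hdg1
mdadm /dev/md0 --add /dev/hdg1
cat /proc/mdstat   # watch the resync run in the background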
Rebuilding a RAID array goes smoothly. It happens in the background when the Linux machine is in multi user mode. The md code rebuild guarantees a minimum rebuild rate. From memory it takes about an hour or two to do a 200GB RAID1 array.
You can see the RAID rebuild status in /proc/mdstat. I run a very simple script [rimuhosting.com] to check the RAID status each day and send out an email if it is broken.
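A minimal cron-able check along those lines could be as simple as this (a sketch; the mail recipient is a placeholder):
#!/bin/sh
# a '_' inside the [UU...] status string in /proc/mdstat means a failed or missing member
if grep -q '\[.*_.*\]' /proc/mdstat; then
    mail -s "RAID degraded on $(hostname)" root < /proc/mdstat
fi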
If you are using a RAID-ed /boot, grab the latest lilo [rr.com] since IIRC it has better RAID support than what is in the distros I use.
Hard drive-wise I've been happy with Seagate Barracudas. I've had to replace a few failed Western Digital drives. (Just my recommendation from experience, it could just have been good/bad luck on my part).
One neat trick with software RAID is that your drives don't have to be the same size. You do RAID on partitions, and your RAID array sizes itself according to the smallest partition in the array.
Tip: always create a bit of spare space on any device you are RAID-ing. e.g. a 4GB swap partition. Then if you have a drive fail and it needs to be replaced, and your replacement varies in size slightly you'll still be able to use it. Not all 40/120/200GB drives are created with equal sizes :).
In summary: Software RAID=good. Decent performance. I've had no real kernel bugs with it. No need for BIOS access. Easy to move drives between servers. Easy to monitor failures. Non-intrusive/minimal downtime when recovering a failed devi
A few other hints (Score:3, Informative)
If you run smartmontools, you can configure smartd to not only monitor the SMART status of the disks, but also execute online tests - have a look at the "-s" option of smartd. For my RAID1 array, for each device, I have -s (L/../../7/03|S/../.././05) entries.
mdadm also has a daemon mode which can monitor the arrays, and if there are any failures, send an email to a designated email address.
Lots of experience...all good (Score:4, Informative)
[root@media root]# more bonnie20.log
Bonnie 1.2: File '/raid/Bonnie.27772', size: 2097152000, volumes: 10
Writing with putc()... done: 14517 kB/s 83.2 %CPU
Rewriting... done: 25060 kB/s 17.1 %CPU
Writing intelligently... done: 41987 kB/s 29.5 %CPU
Reading with getc()... done: 18830 kB/s 96.1 %CPU
Reading intelligently... done: 82754 kB/s 62.2 %CPU
Using an older processor/motherboard is probably not a huge concern. I've used 300 MHz Celerons before. Of course, your performance might not be as high as this, but if you are using this as network attached storage (NFS or SMB), you will likely be limited to 12 MB/sec due to fast ethernet. If you have (and need) gigabit transfer speeds, you should probably use a better motherboard/CPU.
Lastly, remember that you shouldn't skimp on power supplies and a UPS that automatically shuts the system down. The *only* data loss I have ever had on RAID5 arrays came because of power-related issues. Heed my warning! 8)
Re:Lots of experience...all good (Score:3, Informative)
That information may be (and probably is) outdated with reg
Use NFS...duh. (Score:5, Funny)
Here's what you do with those 8 fine drives of yours.
You'll need 9 486's. Get some sort of *nix on each one, preferably several different Linux variants and at least 2 BSD machines (I'd say more, but you know, netcraft confirms and all....) and get them all networked together. Put one drive each in 8 of the machines, format with the filesystem that's most convenient for the system on each box, and get an NFS server going serving that partition.
Then, on the ninth box, mount all the NFS shares and software RAID them.
Trust me. This is exactly what you want to do, and anybody who says different is a dumbass. People who point out what they will invariably say are "obvious shortcomings" of this setup are merely trolls, and not worth your time reading.
Buy yourself a good hardware raid card (Score:4, Informative)
Do yourself a favor and get a good hardware RAID controller and make sure it has good Linux support. Promise sucks. They advertise Linux support on the box - they lie; it only works with specific 2.4 kernels. 3Ware has good driver support for Linux included with the Linux kernel source code.
-Aaron
PCI bottleneck (Score:3, Interesting)
I don't know if anyone makes PCI-X ATA-133 controllers (non-RAID), so in the final analysis it might be best to get a 3ware card with a 64bit connector and plop it in a long slot. Of course, you need a pretty nice motherboard for that. I guess I haven't gone shopping recently, but they weren't that common the last I checked (and everyone is going to head for PCI-Express shortly anyway).
Of course, it all depends on what you'll use the machine for. If it's just file serving over a 100Mbit network, there's no need to worry that much about speed. It's only a big deal if you're concerned about doing things really fast. I believe good 3ware RAID cards can read data off a big array at 150-200 MB/s (maybe better). My local LUG put a ~1TB array together for an FTP mirror with 12 disks (using 120GB and 160GB drives, if I remember right) about 2 years ago, and testing produced read rates of about 120 MB/s on a regular PCI box (I think.. my memory is a bit flaky on that). Of course, I don't think anything was being done with the data (wasn't going out over the network interface, to my knowledge, just being read in by bonnie++ I suspect).
raid5 software is great (Score:3, Informative)
Also, software RAIDs are hardware independent. They can be modified easily while booted and without rebooting. If a hot-swappable drive is used, downtime can be eliminated by a hot-swap and a rebuild of the failed drive.
Also, I have been in a discussion about the new cachefs patch in recent -mm kernel patches (or maybe nitro?), allowing you to use a cache in RAM with any filesystem, so you could mount your RAID array through cachefs with a given amount of RAM for write cache.
AND, Linux software RAID works on a per-partition basis, so you can mix and match drive sizes without wasting space. 8 250GB drives can mate up with 4 300GB drives, and then the leftover 200GB can be made into another array.
You can easily add IDE cards and increase the size of your array.
You can spread your array over a large number of IDE cards for better redundancy; no single card will cripple your array, and IDE cards are much cheaper than hardware RAID cards.
Linux can be booted from a software RAID, while it has trouble on some hardware RAIDs (driver issues)!
I run a software RAID5 over 12 Seagate 120GB drives with no problems. I get great transfer speeds across the (gigabit) network, and it's easy to manage drive spindown because the system sees each individual drive, while hardware RAID solutions typically only allow the system to see the array as a single device.
Most hardware arrays are mainly configured at boot time. To build or repair an array, your system will not be working. If you run a Linux fileserver/firewall, your firewall doesn't function during a hardware RAID rebuild, while it does with software RAID.
--
Though I would go with a faster processor, you should have very good luck, reliability, and performance from an 8-device software RAID5 - and have a nice 1.7TB array.
Spend the extra $200 and do it right... (Score:3, Informative)
Spend the extra $200 on a 4 port card... put a *big* fan on the drives because that's the #1 killer and you'll be happy.
Pat
my experiences with software raid (Score:3, Informative)
Anyhow, I bought a 3ware 7450 RAID controller and haven't looked back - it's brutally fast (over 20-30 megs a second in a sequential write), fully supported in Linux, and a piece of cake to set up.
It's not bad at recovering either - I had a power failure and the UPS failed later on - the machine restarted of course when the power came back, and the 3ware controller automatically rewrote all the parity on the disks - everything was fine. While it wrote the parity the system was up and running immediately (the RAID was in a fail state, of course).
Fine (Score:3, Informative)
No problems at all. I once had an IDE controller fail - I replaced it (had to reboot of course), and Linux rebuilt the array automagically.
I have not tried using a hot spare.
Warning: a lot of the documentation out there on the web about Linux software RAID is very out of date. If you go this route, DEFINITELY buy the book "Managing RAID on Linux" (O'Reilly). Also be prepared to compile the "raidtools" package, which you need to set up arrays.
I have since added an 8-disk system based on 3Ware's 9000 series SATA RAID controller. I recommend 3Ware for higher-performance systems. (I have 8 250GB disks in a single 1.6TB RAID-5, I get about 180MB/sec read, 90MB/sec write.)
raid5 + debian (Score:3, Interesting)
When I started out, Firefox was loading in 2 seconds, and it now appears to be taking around 4 seconds to load. At least I think those measurements are OK. If you want real speed, I'd think about using RAID 0+1, as it seems 4 discs in a RAID0 array would be faster than 8 in a RAID5? I'm not too sure about that, but RAID5 is significantly slower than RAID0 apparently. Also, using those other 4 discs to mirror the RAID0 array could be more useful than RAID5's parity/CRC redundancy.
Heat will be a problem (Score:3, Informative)
8 of those suckers are going to get toasty without plenty of auxiliary cooling.
Re:Heat will be a problem (Score:3, Interesting)
After I set it up for the first time, I had a drive die on me really quickly and noticed when I replaced it that it was murderously hot. As in "burning my fingers" hot. So I went and bought these little hd cooling fans that fit in front of a 5 1/4" drive bay (and come with 3.5" drive mounting adapters) and have 3 little fans on them. They cost about $7 each. I put 4 of them in my mac
My experience (Score:3, Informative)
These drives are all crammed into an old Dell that was my Wintendo a couple of years ago. A few months back, the grilles on the drive-bay coolers I installed got clogged up and I lost one of the drives to overheating. Upon replacing the drive, the rebuild took the better part of an evening (but didn't need to be attended). No lost or corrupt data.
The only major problem I had was that the RAID was dirty in addition to being degraded (insert "your mom" joke here), because I brought my machine down hard before realizing what was going on. In theory, I could have done a raidhotremove on the bogus drive and brought things down normally
I ended up having to do some twiddling to get it to rebuild the dirty+degraded array. I don't remember what that was, but as long as you don't do something boneheaded like ignore kern.log messages about write errors to a specific drive, get annoyed that it's taking so long to cleanly unmount the filesystem, and hard-reset the box, that shouldn't be an issue
Re:Ok. (Score:2)
Re:Ok. (Score:3, Insightful)
I've used them all, Seagate primarily though (SCSI servers), and have noticed a trend. They all suck the same!
The sooner we can move to cheap solid state storage the better.
Re:Don't go with 3ware (Score:2)
And they make quite good cards too, which are well supported in Linux, FreeBSD, etc. (I have an 8-port SATA RAID card in use atm.)
Re:Don't go with 3ware (Score:2, Interesting)
We've run several 7810s and 7850s in the past, totalling quite a few terabytes. All in all it's not too awfully bad, but the cards do seem to have trouble with dropping drives that don't appear to have any real problems (they recertify with the manufacturer's utility, often with no errors).
If you go 3ware though, get the hot swap drive cages from 3ware. They are expensive, but it makes it much nicer.
Re:Don't go with 3ware (Score:4, Insightful)
For all I know, you could have a very good reason. But if you tell someone to make sure to stay away from something, you should provide a reason. Especially if it's something that seems to have a really good reputation.
*DO* go with 3ware (Score:3, Informative)
Re:hmmm (Score:2)
Re:Did you read the RAID-Howto (Score:3, Insightful)
Re:don't use ext2 (Score:4, Funny)
Re:Advice: Get lots of RAM (Score:5, Interesting)
Your logic eludes me. The blocks do not need to be read, as we are in the process of writing. We already have the data, because we are writing, so why would we re-read the data?
Furthermore, block sizes default to 4k, though you could go to 8k or 32k block size. At any rate, you don't need a gig of RAM to handle this.
Finally, XOR is not that expensive of an operation, and a 500MHz CPU is going to be able to handle it faster than any but the most expensive controller cards.
So unless you are actually a RAID kernel developer, I don't buy your story.
Re:Advice: Get lots of RAM (Score:3, Interesting)
Your logic eludes me. The blocks do not need to be read, as we are in the process of writing. We already have the data, because we are writing, so why would we re-read the data?
That would depend on the nature of the write. If you're writing the initial data, it's unlikely that you'll require reading. However, when you go to update the data you may have to perform reads in order to calculate the parity required for the update.
Software RAID 5 is very reliable but does suffer a performance hit. Not because o
Re:Advice: Get lots of RAM (Score:5, Informative)
Your logic eludes me. The blocks do not need to be read, as we are in the process of writing. We already have the data, because we are writing, so why would we re-read the data?
Unless you write across a whole row in the array, how are you going to compute the new parity without reading something in? This is the "small write problem", and it is why expensive RAID controllers have a non-volatile writeback cache.
The current kernel reads in the whole row to recompute the parity, for simplicity. Technically, though, you just need to read in the block you are modifying and the parity block, making writes take 4 operations under RAID 5, but unless something has recently changed, Linux doesn't do that. A gig of RAM, however, will allow a degree of volatile write-back cache, to help offset what will otherwise be poor write performance.
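For the read-modify-write case the arithmetic is just XOR; a single-block update works out to:
new_parity = old_data XOR new_data XOR old_parity
i.e. read the old data block and the old parity block (two reads), then write the new data block and the new parity block (two writes) - the four operations mentioned above.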
Re:Advice: Get lots of RAM (Score:3, Informative)
Re:Advice: Get lots of RAM (Score:5, Informative)
When one of the drives fails--and one of the drives will fail--this will allow you to swap in the replacement drive immediately, before another drive fails. (Remember, if two drives fail in a RAID-5 array, you lose data.) You can then return the defective drive, get a replacement from Maxtor, and when that one arrives FedEx in a few days, that one will be your new "spare."
You can either keep your spare drive unused, outside the computer, or keep this spare "hot"--in the computer, connected and ready to go, but unused by the array or anything else, and have the array fall over to it automatically when a drive fails.
Both ways offer advantages. If you keep the drive out of the computer, then since you need to shut down to remove the bad drive anyway, you can install the spare drive at that time. If you were to keep the drive "hot" in the meantime, your extra "new" drive would have been spinning for months or years and exposed needlessly to heat, which increases its probability of failure, making it essentially as likely to fail as all your other drives that have been running the whole time.
However, keeping the spare "hot" means that the array can be rebuilt sooner, in some cases automatically, before you even know there is a problem. This can reduce the possibility of data loss. You will have to reboot twice - once to remove the defective drive to return to Maxtor, and once when the replacement arrives to install it as the new hot spare.
Which of those two choices you make is a judgement call, but it's absolutely critical to have a spare drive on hand.
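For reference, with Linux md adding the hot spare is a one-liner (device names here are placeholders); a disk added to an array that already has all of its active members comes in as a spare:
mdadm /dev/md0 --add /dev/hdk1
mdadm --detail /dev/md0    # the new disk should be listed as a spare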
Re:Advice: Get lots of RAM (Score:4, Informative)
Also, any half-decent RAID implementation will have that hotspare in the machine with its spindle off until it is needed. So it won't have been spinning for months/years at all. Not quite as good as having it in a box as far as wear and tear, but very close.
Re:Advice: Get lots of RAM (Score:4, Informative)
Probably the best move is to have a cron job examine /proc/mdstat and e-mail you if it's troubled.
Re:Advice: Get lots of RAM (Score:4, Informative)
Or you can just have mdadmd (part of the mdadm [freshmeat.net] suite; it comes with my distro, SuSE 9.1) running, and it'll monitor your RAID arrays and email you when there's a problem.
Re:Advice: Get lots of RAM (Score:4, Informative)
IOW: Two reads, and two writes. Not six reads and two writes. But yes, large amounts of RAM are a good idea. Of course, if a drive goes south, everything goes out the window and your performance will be shot until you replace the dud drive and everything resyncs.
Re:Advice: Get lots of RAM (Score:3, Informative)
Re:Advice: Get lots of RAM (Score:3, Interesting)
Re:Advice: Get lots of RAM (Score:3, Informative)
a PIII with 128 megs of RAM, LVM on top of the array, and I have never run into a problem with serving files via NFS or SMB. More RAM is always nice, of course, but again, I have not run into any problems.
SealBeater
Avoid cheap raid controllers (Score:4, Insightful)
Re:Please! (Score:5, Informative)
There is no such thing as a "cheap" hardware RAID 5 controller. Well there is, but they'll still set you back at least $120 and are crap.
There are RAID controllers from Highpoint and Promise, et al., that are card-based, but they are still CPU-bound (that is where the XOR really takes place). So they're really nothing more than a controller with a driver that does the calculations on the CPU. These cards are good for booting Windows from a software RAID (since that is essentially what they are), but not good for anything else.
Most motherboards, especially those with only 2 RAID ports (whether IDE or SATA), are software-based as well. The NVIDIA nForce3 250 is one of the few notable exceptions.
But the bottom line here is: Linux Software RAID 5 is a logical approach if simple redundant mass storage is your main concern, and will save you at least $120. Also note that for RAID 0/1 it doesn't really matter if you go hardware or software since they aren't very processor intensive anyway. Pure software RAID 0/1 seems to be easier to set up in Linux (less mucking around with drivers) so it often makes sense to go with it for that reason alone.
Re:Please! (Score:4, Interesting)
Not compared to $0.
You see, the typical budget RAID 5 builder just wants to store his collection of MPEG4s, MP3s, and other downloads or perhaps uncompressed hobbyist video. It's not a database, it's not a 150+ employee corporate file server, it's just personal. Performance is not a concern.
And if performance is a concern (say he wants / on these disks) then the cheap way to go is software RAID 0, 1 or 1+0 (aka 10) *COMBINED* with a RAID5.
For instance, I just built myself a new system with four 300gb drives and partitioned each one like so:
50mb -
1gb - swap
20gb
5gb -
For the 50mb, I made a bootable RAID 1 of four drives (grub can boot this, dunno about lilo)
For the 1gb swap, I made a RAID 1 with two drives and a RAID 1 with the other 2. Thus I have a net of two 1gb swap partitions, with redundancy so my system will never crash due to drive-induced paging errors. This is essentially a RAID 0+1, though I let the kernel's swap system handle the RAID 0 aspect by giving them equal priorities.
For the 20gb
For the 5gb
With the four equal-sized partitions that were left, I made the RAID 5 for
Don't you see what a great cost-effective approach this is?!?
Maybe you work for some company with plenty of money lying around for $160 RAID controllers. But I'm in business for myself, and I don't see the sense in spending money where it isn't needed. Besides, the flexibility of software RAIDs (per-partition, not per-drive) would be well worth it to me even if something like the SX4 were cheaper.
Re:Please! (Score:3, Informative)
frickin expensive, though... if you need that kind of performance it'd probably be speedier and more cost effective to do a software RAID 0+1
Re:Devil in the... (Score:3, Informative)
Which is absolutely horrible. This violates protocol - MTAs demand that data is written to disk before they acknowledge delivery. They get this from the confirmation from the kernel, but if the disk array lies about it, a power failure could lose data even though the kernel assumed it had been synced properly.
What is the definitive article? (Score:3, Informative)
Is this the definitive article about software RAID under Linux?
Software-RAID HOWTO [unthought.net]. In English and HTML: Software-RAID HOWTO [unthought.net].
Re:Don't run software raid... (Score:4, Informative)
Um, that's bogus. If your OS goes (probably due to hardware?) then you can simply put the drive in a new computer (same basic master/slave setup) and away it goes. Linux knows how to detect its own RAID arrays!
OTOH, if you have a hardware RAID, good luck getting tech support, especially if they no longer carry that board, or have gone out of business altogether.
At least with Software RAID, your data is not stuck in a proprietary format.
Re:Don't run software raid... (Score:3, Interesting)
First of all, Linux software RAID has excellent autodetection. You need to set the partition type to 0xFD so that the autodetector can identify it. As many have mentioned, software RAID has a huge advantage over hardware RAID for recovery - you can disconnect the drives from one computer, hook them to another, and the autodetect code will figure it out. I know this works because I've done it.
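Setting that partition type is a quick job in fdisk (an sfdisk route exists too, but its syntax varies by version, so take this as a sketch with a placeholder device):
fdisk /dev/hda      # then: 't', pick the partition, enter 'fd' (Linux raid autodetect), 'w'
fdisk -l /dev/hda   # verify the partition now shows type fd / "Linux raid autodetect"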
Second, for 8 drives and 2 co
Re:8 drives? Maybe I am missing something? (Score:3, Informative)
I bought one of these from Newegg. (Score:3, Informative)
I bought one of these from Newegg. I had a lot of problems with it. I called Silicon Image technical support. They told me that particular chipset did not work correctly, and they would not release working firmware for it.
I told Newegg about this, but they continue to sell them.
Fry's sells them also. I told a Fry's manager that Silicon Image told me they know they don't work correctly. Fry's still sells them.
I would love to find a technically knowledgeable and honest distributor.
Maxtor 60GB drives for $18 after rebate (Score:3, Interesting)
Office Depot had an 18th anniversary sale, and was selling Maxtor 60GB drives for $18 after rebate. Bought three for my personal test machines, and used my friend's addresses for the rebates.
I often hear bad things about Maxtor drives, but after a whole 40 hours use, they haven't failed once.
Re:RAID isn't totally reliable (Score:3, Informative)
I take it you didn't have a drive sitting waiting as a hot-spare?
I got bit by this once. Never again.... now I always have a hotspare waiting to jump into place for an instant rebuild.
Re:RAID5 is for High Availability, not Storage! (Score:3, Insightful)
While I have to agree that data can be lost because of user error, I built a 2TB RAID 5 out of Maxtor 300GB SATA drives and have thus far had one in five of the drives fail. And, of course, two drives failed within a day of each other, so I lost the whole shebang. RAID 5 is fine for stuff like movies and music, but I'm sticking to RAID 0+1 for the really important stuff (along with good rsync backups, of course).
So, "RAID5 is for High Availability but not
Re:Experience (Score:5, Informative)
It scares me that they let people like you play with the sort of computing resources that have 50TB of disk space.
RAID-5 data recovery after losing 2 drives (Score:5, Informative)
No, you're not "done period". You'll lose a lot of data, but may still be able to recover some. Likewise when losing one disk in a RAID-0 setup.
Any file that resides entirely outside of the gap in the array can be recovered. How likely that is depends on the details of the filesystem, the striping, and the size of the file (the larger the file, the more likely that a part of it fell into the bit bucket).
Also, not all drive failures are total. You may have a RAID-5 array with one drive that completely failed, and another drive that just has some bad sectors. In that case you should be able to recover most of your data. Or you may have two disks with just a few bad sectors, which is even less bad.
This all depends on being able to force the array to allow access to the device, so that you can mount the filesystem (in read-only mode) and sift through the remains. Some (many? most?) RAID implementations may just give up if two disks in a RAID-5 array (or one disk in a RAID-0 array) are flagged as bad, in which case you really are screwed, even though your data is still there. From what people have been posting here I would guess that Linux SW RAID will let you force it, though I've never needed to try it myself.