IDE RAID Examined

Bender writes "The Tech Report has an interesting article comparing IDE RAID controllers from four of the top manufacturers. The article serves as more than just a straight product comparison, because the author has included tests for different RAID levels and different numbers of drives, plus a comprehensive series of benchmarks intended to isolate the performance quirks of each RAID controller card at each RAID level. The results raise questions about whether IDE RAID can really take the place of a more expensive SCSI storage subsystem in workstation or small-scale server environments. Worthwhile reading for the curious sysadmin." I personally would love to hear any ide-raid stories that slashdotters might have.
  • by Anonymous Coward on Wednesday December 04, 2002 @09:22PM (#4815386)
    IDE can only handle one or two hard drives per channel, which makes the cabling a real nasty hassle as opposed to SCSI-based RAID.

    Even those so-called rounded cables can clutter the hell out of a tower case if you have a 4-channel RAID controller.

    In my case it's the Adaptec 2400A four-channel, with four 120GB Western Digital hard drives, RAID 1+0.
    • by mccormick ( 40772 ) on Wednesday December 04, 2002 @09:34PM (#4815468)
      For performance reasons, I haven't seen a single vendor that actually expects you to put two drives on a single interface, and in fact, I've found that the 3ware Escalade controllers just won't let you. When they advertise support for two drives, it usually means the card has two dedicated interfaces, so each drive has the potential to saturate a port all by itself (which is hard to do with ATA/100 and ATA/133 drives that can't even burst that high -- get good drives! big caches too!)
      • by Anonymous Coward
        That's because one of the major limitations of current-generation IDE is that only one device on a channel can "talk" at a time. So even if you're using a RAID card with two devices on a channel, it will be no faster than a standard IDE connection, since only one drive read/write can be done at a time. With SCSI, all of the drives on a channel can talk at the same time until the 160 MB/s that SCSI can handle is saturated.
        • That's because one of the major limitations of current-generation IDE is that only one device on a channel can "talk" at a time. So even if you're using a RAID card with two devices on a channel, it will be no faster than a standard IDE connection, since only one drive read/write can be done at a time.

          Not at all true. There are a good many IDE (ATA, actually) RAID controllers out there that use one drive per IDE channel and connect to the host via SCSI or Fibre Channel. (Of course, in this case there is *never* channel contention, and the weak spot is the SCSI or FC connection, both of which use the SCSI protocol.) This approach is FAR faster than almost all SCSI-based RAID systems out there, and much cheaper to boot. One of the advantages of using serious IDE RAID subsystems (not the cheesy desktop variety) is that the cost savings can allow you to replace RAID 5 with RAID 0+1 (sometimes called RAID 10) and still save money.

          I know because I've engineered and built multi-terabyte storage servers on this technology that are 2-3x faster and an order of magnitude less expensive than high-end storage servers like the IBM Shark or EMC Symmetrix. IDE *will* squash SCSI; it's not a matter of if, but when, mostly because SCSI will never be able to compete with the volume economics that produce IDE's 5-6x cost advantage. The performance advantage of individual SCSI drives is already becoming marginal, and the speed of individual drives is nearly irrelevant anyway in a RAID environment, where most of the performance comes from spanning multiple spindles, not the speed of the individual disks. (This is why a properly configured RAID array of disks with average access time N can deliver average access times significantly less than N.)

          With SCSI, all of the drives on a channel can talk at the same time until the 160 MB/s that SCSI can handle is saturated.

          Not even close. SCSI is a one-talker-at-a-time bus architecture. This is one reason a good IDE RAID controller can so easily kick SCSI butt. The largest clusters and multiprocessor computers are all going to high-performance IDE RAID arrays because of their superior cost, performance, and yes, reliability, since electrical problems in physical SCSI are one of the most common causes of data corruption in high-performance environments, which is one of the chief reasons Fibre Channel has been so widely adopted. Fibre Channel too uses the SCSI protocol, and so has real weaknesses, but at least it avoids the hideous flakiness of SCSI's connector and termination scheme.
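          For what it's worth, here is a back-of-envelope model of the shared-channel effect. The per-drive and per-channel rates are illustrative assumptions, not figures from the article: two devices on one ATA/100 cable take turns, so the pair can never exceed the channel ceiling, while one drive per channel scales until the controller or PCI bus becomes the limit.

          # Rough model of aggregate streaming throughput (MB/s).
          # Assumed numbers for illustration only; real drives and channels vary.
          DRIVE_MBPS = 60       # sustained rate of one drive
          CHANNEL_MBPS = 100    # ATA/100 ceiling; only one device talks at a time

          def aggregate_mbps(drives, channels):
              # Spread drives evenly over channels; each channel is capped at
              # CHANNEL_MBPS because its devices must take turns on the cable.
              base, extra = divmod(drives, channels)
              total = 0
              for ch in range(channels):
                  n = base + (1 if ch < extra else 0)
                  total += min(n * DRIVE_MBPS, CHANNEL_MBPS)
              return total

          print(aggregate_mbps(drives=4, channels=2))   # 2 per channel -> 200 MB/s
          print(aggregate_mbps(drives=4, channels=4))   # 1 per channel -> 240 MB/s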
    • by Anonymous Coward
      Oh yeah, I used to have this configured in RAID-5 mode, and had a drive fail. It took just about 24 hours for 120GB to reconstruct onto a spare disk.

      I was fortunate enough to have also purchased a Fry's Electronics Instant Exchange guarantee for all the hard drives. So I popped in to Fry's to exchange it, and got a replacement after waiting for two fricking hours. I swear the poor guy had to run around for 20 different manager signatures.

      Fry's Instant Exchange is not so instant.

      Adaptec 2400A - $350
      Two 3ware Hotswap drive bays - $340
      Four 120GB Western Digitals (7200 RPM) - $920

      With Linux 2.4.9-SGI-XFS, filesystem writes were pretty damn slow -- maybe 12MB/sec on RAID-5.
    • The next generation of IDE will be Serial ATA (SATA). These drives will have a small cable going from the controller to the drive, getting rid of all the cable clutter. Also, these controllers will let you use more than four drives; the more ports, the more drives. Finally, these controllers will have improved electronics that let the card do more of the work, making them less of a CPU resource hog. Continuing to use SCSI will get you higher speeds and greater drive MTBFs, but with IDE RAID you might not have to worry about the drive MTBFs (I can buy several larger IDE drives at the same cost as a smaller SCSI drive).
    • by PetiePooo ( 606423 ) on Wednesday December 04, 2002 @10:44PM (#4815842)
      I've got a friend that has a FileZerver [filezerver.com] NAS device. Does RAID0/1/5/JBOD on up to 12 IDE devices. As easy to use as a toaster.

      He initially bought it with six 100GB drives, giving him a formatted capacity of 477GB using RAID5. Ripped his CD collection, restored all his scanned images and textbooks and filled the sucker up to about 75% capacity.

      The only problem is that he used only 3 of the 6 channels to connect his 6 drives; 3 as master, 3 as slave. One controller had a momentary glitch and 2 of the 6 drives dropped out of the RAID. Can anyone tell me what happened next? Anyone? Anyone?

      After a bit of investigation, we found out the Zerver sled runs a version of Linux and uses the same md drivers modern Linux distros use. We pulled the drives out, and one by one slapped them into a spare Linux PC to update the superblocks. Brought it back up, and after a 24-hour fsck, the system was back up and stable. And each drive had its own IDE channel!!!
  • by autopr0n ( 534291 ) on Wednesday December 04, 2002 @09:23PM (#4815389) Homepage Journal
    What's the point in having SCSI RAID in most workstations these days? I mean, RAM is so cheap now you can throw in a couple of gigs for much less than the price difference between SCSI RAID and IDE RAID.

    I mean, I know the best drives are SCSI flavor, but it seems like there are so many other things you could spend money on first that would get you way better performance, like a dual Athlon setup or something.
    • by redfiche ( 621966 ) on Wednesday December 04, 2002 @09:35PM (#4815482) Journal
      Performance isn't the only issue. We build custom PC-like devices from parts for use in health care, and we are constantly struggling to get a steady supply of parts that will be the same for more than a few months. Hard drives are about the worst, and IDE hard drives have a market lifespan of a few months. It can be a paperwork and testing nightmare to change the hard drive you use frequently. SCSI has a much longer lifespan in the market.

      There is also the reliability factor. SCSI drives tend to be more robust.

      • by Anonymous Coward on Wednesday December 04, 2002 @10:10PM (#4815697)
        The "Enterprise Server Group" at my Fortune 500 employer keeps telling me I should be purchasing $1,200 "SunFire V100" servers with IDE instead of wasting $2K+ on the V120 with hot-swap SCSI.

        I keep telling them to wait a couple of years, and we'll see who is wasting money.

        There is also the reliability factor. SCSI drives tend to be more robust.

        Agreed. This is not always easy to back up with facts (by quoting mfgr specs, etc), but in both recent and long-term (10+ years) experience, my systems with SCSI drives have tended to fail less often, and usually less suddenly, than IDE.

        Generally, in 24x7 server usage, a SCSI disk will run for years, then either slowly develop bad blocks, or you start getting loud bearing noise, and after powering down, the drive fails to spin back up. In the old days we'd blame that failure mode on stiction, and could usually get the drive to come back one last time (long enough to make a backup) by giving the server a good solid thump in just the right spot.

        Background:
        My first SCSI-based PC was a 286 with an 8-bit Seagate controller and a 54 MEG Quantum drive recovered from my old Atari 500 "sidecar".

        • > In the old days we'd blame that failure mode on stiction, and could usually get the drive to come back one last time (long enough to make a backup) by giving the server a good solid thump in just the right spot.

          Heh, funny you mention that. At one of my former jobs, we had a very old machine running OS/2 with SCSI drives. This machine was the database bridge between the mainframe and many PC based applications. Anyway, when the machine had to be rebooted/powered down (once in a blue speckled moon) they'd have to pick the machine up and drop it just to get the drives spinning. I kid you not! But it ran forever.
    • I like my data. I like it to be there when I get home from work. That's why I've got a three-drive RAID-5 on my main workstation. That way if a drive dies my data is still there.
    • by aussersterne ( 212916 ) on Wednesday December 04, 2002 @09:56PM (#4815610) Homepage
      Ummm, no.

      Try getting sustained data transfer rates out of an IDE RAID under load. It won't happen. You'll stutter. *boom* goes your realtime process.

      SCSI RAID, on the other hand, streams happily along with very little CPU load.
      • by Anonymous Coward on Wednesday December 04, 2002 @11:16PM (#4815966)
        I have a dual Xeon 2.4GHz 4U with dual 8 channel IDE controllers connected to 16 160GB IDE drives under Windows 2000 arranged as two separate logical drives.

        I'm able to read sequentially from very large files (20GByte+ files) at a continuous rate of over 180Mbytes/sec.

        The controllers are 64-bit, 33MHz PCI cards and the high speed sequential reads are exactly what my application demands. SCSI would have added nothing to the performance of the system except an additional 60% to the cost.

        Find me a 2.5TByte dual Xeon 4GByte RAM 4U box with SCSI drives for well under $10K and I'll give SCSI another look.

        Once serial ATA comes out I think you'll see even more IDE based RAID being used.
        • What does your CPU utilization look like when you're doing that 180 MBytes/sec? You're doing software-raid, yes? (You didn't mention a RAID controller) -- are you doing RAID-5?

          Do you think you could pump the 20-gig file over gigabit ethernet at a saturated 125 MB/sec?

          That is to say, for sequential read, would this sub-$10k solution be a media server limited only by gigabit ethernet bandwidth? Holy cow!

          How about sequential write? Can you copy a 20-gig file from the network at the same speed? (i.e. sequential write.)

          What's the highest your CPU utilization gets to? Are both processors used?

          Very interesting...
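          For rough answers to the bandwidth part, here is the arithmetic using only the numbers already quoted (180 MB/s reads, 64-bit/33 MHz PCI) plus the nominal gigabit line rate; a sketch, not a measurement:

          # Back-of-envelope bandwidth arithmetic for the setup described above.
          gige_payload_MBps = 1_000_000_000 / 8 / 1_000_000   # ~125 MB/s before protocol overhead
          pci_peak_MBps = 64 / 8 * 33_000_000 / 1_000_000     # ~264 MB/s theoretical, 64-bit/33MHz PCI
          disk_read_MBps = 180                                 # figure reported in the parent post

          print(f"GigE payload ceiling : {gige_payload_MBps:.0f} MB/s")
          print(f"64-bit/33MHz PCI peak: {pci_peak_MBps:.0f} MB/s")
          print(f"Reported disk reads  : {disk_read_MBps} MB/s")
          # 180 MB/s of sequential reads would already saturate a single gigabit
          # link (~125 MB/s) while still fitting, in theory, under one 64/33 PCI bus.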
        • SCSI would have added nothing to the performance of the system except an additional 60% to the cost.

          Consider that the seek time on those 160GB IDE drives is around 9-12ms, compared to an IBM 146GB SCSI drive with a seek time of 4.7ms; 133MB/s burst vs 320MB/s burst; 7200 vs 10,000 RPM. And the thing most businesses love: a 5-year warranty for SCSI vs 1 year for the IDEs.

          Once serial ATA comes out I think you'll see even more IDE based RAID being used

          In workstations, yes; in high-usage servers, no. Even in the small department I work in, we'd rather pay 60% more for SCSI and get a 5-year warranty and proven long-term reliability.
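          Using just the figures quoted above (roughly 4.7ms seek at 10,000 RPM versus 9-12ms at 7,200 RPM), a crude single-drive random-I/O estimate looks like this; a sketch with idealized assumptions (no queuing, no transfer time, no cache hits):

          # Crude random-access estimate: one I/O costs an average seek plus half a
          # rotation, and IOPS is the reciprocal. Ignores transfer time and queuing.
          def random_iops(seek_ms, rpm):
              half_rotation_ms = 0.5 * 60_000 / rpm
              return 1000 / (seek_ms + half_rotation_ms)

          print(f"10k RPM SCSI, 4.7 ms seek: {random_iops(4.7, 10_000):.0f} IOPS")   # ~130
          print(f"7200 RPM IDE, 9 ms seek  : {random_iops(9.0, 7_200):.0f} IOPS")    # ~76
          print(f"7200 RPM IDE, 12 ms seek : {random_iops(12.0, 7_200):.0f} IOPS")   # ~62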

      • by prisoner-of-enigma ( 535770 ) on Wednesday December 04, 2002 @11:17PM (#4815970) Homepage
        You apparently didn't read the article, and have no current experience with IDE RAID systems. Take a look at the sustained transfer rates of the 3ware system. They match just about any SCSI controller you're likely to find when paired with good 7200RPM drives. The myth that SCSI is the only way to get reliable sustained transfers is just that -- a myth. SCSI's only advantages now are reduced cable clutter and having up to 15 drives on one controller, but who needs that many drives these days when 120GB drives are available for next to nothing?
        • by kscguru ( 551278 ) on Thursday December 05, 2002 @01:10AM (#4816542)
          And those same ultra-high-capacity 120GB hard drives have horrible seek times. SCSI is so much better there... look at a modern OS, and seek times for disk access will make MORE of a difference than just about anything else (given sufficient RAM, CPU cycles, etc... - but if you're spending on RAID, you'll have those anyway). Heck, if this poster's parent wants to just suck data out of a linear file, any drive'll work - you're really just pulling out of the drive's cache. Idiot-proof.

          Try random access. Then you'll see the difference. Sequential is optimized by just about every cache out there - you're NOT benchmarking the drives with sustained transfer! You're benchmarking the caches!

          • How about this (Score:3, Interesting)

            by TheLink ( 130905 )
            18GB SCSI 10K rpm drive vs 120GB ATA 7200 rpm drive.

            Partition 120GB drive so that you only use the fastest 18GB of it.

            Now compare random access seek times. Only seeking 15% of 120GB drive ;).

            If 120GB ATA drive is too expensive. Test with an 80GB drive.

            Not sure what the results will be, but it's worth trying don't you think?

            Some drives would probably be better at short seeks than others (settling time etc). Don't see much info on this tho.
    • by GT_Alias ( 551463 ) on Wednesday December 04, 2002 @10:57PM (#4815888)
      Ehhhh...RAID vs. RAM/Dual CPU's? I was under the impression people used RAID for data integrity (at least, that's what I use it for). Unless you're striping, I suppose.

      So yeah, you could probably spend your money on other things to get better performance, but that's entirely beside the point. What could you spend that money on to get better data reliability?

    • I can easily tell that 90% of the people spouting off here have never used both modern SCSI and modern IDE. Well, I have actually used both. So take it from someone who knows.

      There are valid performance and reliability reasons for using SCSI drives instead of IDE drives; the question is whether these gains are worth the cost, not whether they are there at all.

      Reasons why SCSI might be worth it:

      1. Spin rate. Until IDE drives gain 10k and 15k spin rates, SCSI drives will always be king in multitasking and random-access situations. 3ms seek time is so much better than 10ms that you have to use it to believe the difference.
      2. Reliability. IDE drives have one year or at best three year warranties. SCSI drives have five year warranties. You can run modern 15k scsi drives stacked next to each other with zero additional case fans and expect to outlast your warranty. Try that with IDE.
      3. Hot swap. Does anyone here know of a hot-swap IDE raid solution? I think not.
      4. Tagged command queuing. A SCSI drive can collect multiple drive requests and reorder them to optimize the actual physical retrieval of the bits in question. IDE drives, even if the box lists this feature, have never done TCQ particularly well. This kind of thing is impossible to benchmark because its benefits only show up under heavy multitasking, not single-tasking benchmarks (a toy illustration of the reordering follows below).
      For most people, I would agree that you would be better off buying 2GB ram or two CPUs before spending money on SCSI. However, if you already have 2GB ram and two CPUs, and you still need more, then that's when you should look into high end SCSI.
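      As a toy illustration of the command-queuing point in item 4, here is what reordering a batch of requests by position can do to total head travel; the request positions are made up, and the model ignores rotation entirely:

      # Toy model of tagged command queuing: service a batch of requests in
      # arrival order vs. sorted by position (an elevator-style sweep).
      # Head movement is measured in abstract "tracks".
      def total_travel(start, requests):
          pos, travel = start, 0
          for r in requests:
              travel += abs(r - pos)
              pos = r
          return travel

      queue = [98, 183, 37, 122, 14, 124, 65, 67]   # made-up request positions
      head = 53

      print(total_travel(head, queue))           # FIFO order: 640 tracks of movement
      print(total_travel(head, sorted(queue)))   # one sorted sweep: 208 tracks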
  • by npietraniec ( 519210 ) <npietranNO@SPAMresistive.net> on Wednesday December 04, 2002 @09:24PM (#4815400) Homepage
    At the company I work for, IDE RAID has become somewhat standard because we're basically cheap... At least it's standard on the servers that are fast enough to support it. The rest use dd to copy partitions between backup drives. My boss calls it "RAID point five" We lovingly refer to it as the ghetto network.
  • experience (Score:5, Informative)

    by Jahf ( 21968 ) on Wednesday December 04, 2002 @09:24PM (#4815404) Journal
    I ran an IDE RAID, one of the first, a few years ago. It was a 3ware RAID-1 controller. I thought it would be useful because I had gotten sick of losing data on a drive failure. I didn't have the money (or patience :) for a good backup solution and Linux RAID hadn't matured.

    Everything was fine for awhile. After a few months I lost a drive, replaced a drive and it remirrored fine. Same thing happened a year or so later.

    Then one day my controller fried. Nothing else in the system went down, but some kind of surge hit the 2 drives from the RAID controller. The controller still worked but neither drive was accessible, either as RAID drives or as single drives. Tried numerous tricks, eventually gave up.

    I've run SCSI RAID in boxes I admin at work ... never have I seen 2 drives go down simultaneously. Nor have I seen a controller malfunction in a way that damaged the drives (though I've heard of it from other people).

    All in all, I decided it wasn't worth it. I am currently doing Linux mirroring in combination with journaling filesystems on one box, and Windows mirroring on another.
    • Re:experience (Score:5, Informative)

      by puto ( 533470 ) on Wednesday December 04, 2002 @10:45PM (#4815847) Homepage
      Hmmm, you suggested RAID 5. That would not have been the best for video editing. RAID 3 would have been better because of the next-to-none performance loss when a drive is out.

      Well, let me break down where you went wrong.

      But in any case you should have at least left a little manual with them explaining, very non-technically, what you had done, so that if they had a problem they could look in the manual -- everything you had done on the system would have been laid out there for them to research. I always tape a note on the side of the server, and sometimes inside, saying WARNING READ VENDOR SUPPLIED INFO. I make them very aware of what I have done.

      It is also hard for me to believe that the guy looked at the server and thought it had one 500 gig hard drive, instead of thinking it was a volume. Any idiot would go, "500 gig drive? Huh?" Then again, there are some real bozos in the world, and I still shake my head on a weekly basis sometimes.

      I always also get them to sign a CYA (Cover Your Ass) statement saying I explained backups and what they should do, and that should a problem crop up it ain't my fault. Usually scares 'em into buying a tape drive. Or at least meeting me in the middle on the RAID end.

      RAID Level 3 - RAID Level 3 provides redundancy by writing all data to three or more drives. Just awesome storage for video imaging, streaming, publishing applications or any system that requires large file block transfers.

      The only real disadvantage here is in small file transfers.

      Advantages -
      Single dedicated parity disk
      High read data rate
      High write data rate
      4 drives minimum
      No performance degradation if drive fails
      Best and worst case performance similar

      Typical applications -
      Video Streaming
      Video Publishing
      Video Editing
      Pre Press
      Image editing
      Any application that needs heavy updating and large file usage

      RAID Level 5
      Advantages -
      Most flexible of all disk arrays
      Best balance cost / performance / protection of any RAID system
      Allows multiple simultaneous writes
      High read data rate
      Medium write data rate
      3 drives minimum
      Ideal for small write applications
      Highly efficient

      Typical applications -
      Transaction processing
      Relational Databases
      File & Print Servers
      WWW, E-mail, and News servers
      Intranet Servers

      You lose a drive in a RAID 5 situation and performance takes a huge hit.
      This has been my experience.

      Puto
      • Re:experience (Score:5, Informative)

        by CerebusUS ( 21051 ) on Wednesday December 04, 2002 @11:02PM (#4815914)
        You're close, but you've got RAID 3 a bit wrong. RAID 3 still requires a parity drive, so you lose disk space again.

        The major difference between RAID 3 and RAID 5 is where the parity info is stored: in RAID 3 all the parity info is stored on one drive, while in RAID 5 it's mixed in with the stripes and spread out over all the drives.

        However, you are correct that RAID 3 is recommended for video editing, as it has lower latency on disk writes... in RAID 5 the checksum has to be done before the writing can commence; in RAID 3 it only slows down the actual parity write.

        Source:
        raid 5 [acnc.com]
        raid 3 [acnc.com]
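        For anyone fuzzy on what that parity actually buys you, here is a minimal sketch of XOR parity, the mechanism both RAID 3 and RAID 5 rely on (only the placement differs): the parity block is the XOR of the data blocks, so any single missing block can be rebuilt from the survivors.

        # Minimal XOR-parity demo: compute a parity block over three data blocks,
        # "lose" one, and rebuild it from the surviving blocks plus parity.
        from functools import reduce

        def xor_blocks(blocks):
            return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

        data = [b"AAAA", b"BBBB", b"CCCC"]       # stripes on three data disks
        parity = xor_blocks(data)                # what the parity disk/stripe holds

        lost = 1                                 # pretend disk 1 died
        survivors = [d for i, d in enumerate(data) if i != lost]
        rebuilt = xor_blocks(survivors + [parity])

        assert rebuilt == data[lost]
        print("rebuilt block:", rebuilt)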
  • by bravehamster ( 44836 ) on Wednesday December 04, 2002 @09:27PM (#4815419) Homepage Journal
    I work for a small custom computer shop. We built a system a few months back for a video editing company here in town. Obviously they needed a lot of storage, so we suggested a RAID-5 system using 6 100GB drives, giving them roughly half a terabyte of storage. They liked the idea, but insisted we use RAID-0 (the Purchasing Officer had read his PC Gamer and thought it sounded cool). We advised against it, but they insisted.

    2 months down the line, a hard drive on one of their other computers breaks down. Their newly hired technician (the office manager's son) saw that their big old file server had 5 hard drives in it, but was only using 1 in Windows! Being the smart boy that he is, he dutifully shuts down the machine, removes one of the drives, puts it on the broken machine, formats and loads Windows on it. He seemed awfully surprised when the file server wouldn't boot, and tried to blame it on us for losing a month of work. Despite our other recommendations, they had no backups. They went out of business last month.

    • Re:A little story (Score:5, Insightful)

      by tmark ( 230091 ) on Wednesday December 04, 2002 @09:34PM (#4815470)
      their big old file server had 5 hard drives in it, but was only using 1 in windows! Being the smart boy that he is, he dutifully shuts down the machine, removes one of the drives, puts it on the broken machine, formats and loads windows on it.

      So how did he decide which of the 5 drives he was going to pull ?
    • Re:A little story (Score:4, Insightful)

      by alexburke ( 119254 ) <alex+slashdot@al ... a ['urk' in gap]> on Wednesday December 04, 2002 @09:50PM (#4815573)
      Oh. My. God.

      I let out a yelp when I got to
      puts it on the broken machine, formats and loads windows on it *

      One of the things that really chaps my ass, more than anything else, is people asking my advice (and they do so specifically because of my experience in whichever field they're inquiring about), patiently listening to what I have to say, asking intelligent questions... then doing something completely or mostly against my recommendations.

      More often than not, something ends up going wrong that would/could not have occurred had they followed my advice in the first place, and then I hear about it.

      It sucks the last drop of willpower from my soul to hold myself back from saying "I told you so!" and charging them a stupidity fee. It's tempting to do so even to friends, if/when I get sucked into the resulting mess. [Hear that, Jared? :P]

      * Linux zealots: For a more warm-and-cozy feeling, disregard the first eight words of this quote.
      • by Afrosheen ( 42464 ) on Thursday December 05, 2002 @12:51AM (#4816467)
        This is where a little sound clip from the Simpsons cartoon comes in handy.

        Find any two-second clip of Nelson saying "Ha Ha!" and email it to fools that destroy things after neglecting your advice. It'd be even better to find a little flash clip of Nelson pointing and laughing, it'd add insult to injury.
    • by Anonymous Coward
      <tears>
      You had me at "office manager's son" :)
      </tears>
    • by archen ( 447353 ) on Wednesday December 04, 2002 @11:27PM (#4816021)
      Sort of reminds me of the place I work.

      A week after I was hired the computer with the sales database died. I'm the computer guy, so I'm supposed to fix it. I was a bit surprised at what I found (keep in mind this information is supposed to be fairly important information to the company).

      The computer had around 256 megs of RAM. It was a database server (for sales info) that around 3-4 people were connected to at any given time. It was running WINDOWS 98 using striped IDE hard drives. Among the other things this machine was used for at any given time were graphic editing in Corel Draw (wonderfully stable too, I might add) and crash-prone MS Office... as well as every God-awful freeware screen saver ever found, and lots of other useless stuff that most people didn't even know the purpose of. Apparently the machine crashed at least 3 times a day, and no one thought there was anything wrong with this.

      So one drive dies, and surprise: the backup was done on a Jaz drive that never worked right. Apparently the girl who used the computer never really read the error message regarding the Jaz drive every morning when she came in. So we had a wonderfully redundant backup with a different Jaz disk for each day of the week, with nothing but garbage on all of them.

      When I actually put all the pieces of the puzzle together, I just started laughing at how ridiculous the setup was.
  • I've got a RAID-5 machine made with 5400 RPM WD 120 GB drives. Works great. It's a light server, and I built the thing for under 700 bucks, dual procs and all.

    I didn't use a RAID card, just a couple of IDE cards. And it was amazingly simple to set up.
    • RAID 5 can be a pretty poor performer, even with a dedicated RAID card with processor and cache memory.
      (writes in particular)

      I can't imagine doing software RAID 5, as the overhead is quite high.
  • by tmark ( 230091 ) on Wednesday December 04, 2002 @09:27PM (#4815426)
    I personally would love to hear any ide-raid stories that slashdotters might have.

    Once upon a time, in an array far, far away, there lived a young princess who was worried about the integrity of her data...
    • by Mitchell Mebane ( 594797 ) on Wednesday December 04, 2002 @10:50PM (#4815867) Homepage Journal
      Once upon a time, in an array far, far away, there lived a young princess who was worried about the integrity of her data...

      She knew that elephants never forget, but they do tend to die after a while, so she hired a consultant to investigate multi-elephant solutions. He came up with RASP - Redundant Array of Short-lived Pachyderms. While SCSI (Smart Chimps Storing Information) is more reliable, elephants were a good solution for more people, because they could also be used for plowing fields.
  • by lakeland ( 218447 ) <lakeland@acm.org> on Wednesday December 04, 2002 @09:28PM (#4815434) Homepage
    This article is much more an introduction to RAID than a point-by-point comparison of the various cards. Certainly, I wouldn't want to use it for choosing between them when I couldn't afford a mistake. But if you're used to using one or two disks and want increased performance or reliability (and let's face it, who doesn't?), then this article is well worth a read.


    My favourite quote from the article: As an added bonus, the lights sometimes flash in a side-to-side pattern reminiscent of Knight Rider's KITT.

    • My favorite:

      The Escalade must be hooked on phonics, because it loves to read.

      The rest of the article was cool too. :-)

  • I'd be happy if I could find a decent external IDE RAID enclosure at a good price. So far, the only ones I've seen cost waaaaaaaaay too much money. Is there anything similar to a Sun 711 Multipack for IDE? (Hopefully something I can buy on the cheap through Ebay?)
  • Annoying (Score:5, Funny)

    by cheezedawg ( 413482 ) on Wednesday December 04, 2002 @09:34PM (#4815469) Journal
    You would think that after 130 graphs comparing the controllers he could come up with a stronger conclusion than "I can't really decide which one is the best."
  • by snowtigger ( 204757 ) on Wednesday December 04, 2002 @09:38PM (#4815498) Homepage
    A friend of mine set up a RAID 0 (striped array) using the built-in RAID controller on his motherboard. Later, this motherboard had to be replaced. To our great surprise, the RAID information was stored only on the motherboard and thus permanently lost. This could be a good thing to know... make sure the data is not lost if the controller fails.

    Personally, I run several software RAID arrays under Linux and it works very well. It's easy to manage and gives me decent performance on my rather old machine.

    I feel very confident in mirroring system/boot partitions on my linux machines =)
    • Most onboard RAID solutions and add-in cards under ~$140 are like this. You have to replace failed ones with cards using the same chipset in order to recover the data or use the array again. Onboard Promise and Highpoint RAID controllers have add-in counterpart cards that use the same chipset, and thus can be used to recover data if the on-board chip decides to die.
    • Same here. I built a fileserver in my brother's house for all his DJing media. Since I didn't have a RAID controller, I used the software RAID 0 option when installing Red Hat 7.3 (2x120GB IBM Deskstar drives).

      This was built at the end of June this year. The system has had plenty of usage over the last couple of months and has been fine (especially with extended power cuts that went beyond the ability of the UPS). Due to dwindling space I want to slap a RAID controller in there and put 2 extra drives on (the same model of drive). For the controller I've got the 4 channel Promise Rocket on my NewEgg wishlist ($100 isn't bad at all).

      I'm not looking forward to how I'm going to juggle backing up 230GB of media... I suppose I'll format the new drives, copy the data over, take out the software RAID'ed drives and slap them into the controller (format and prep), and copy the data to the new RAID and add the new drives (i.e. just to make sure I don't have any major screw ups!). If anyone has any tips on this I would be grateful! I'm sure my brother will be more inclined for me to get a DVD+RW drive and plenty of disks... CD's don't seem to cut it anymore when dealing with 250GB+ of data (hey ho *sigh).

      Oh, and the machine I'm using as the file server is one of those Walmart deals (it was the Duron 1GHz $399 one), and it has performed brilliantly (touch wood!)... although the first thing to die on me was the Intel EtherExpress Pro NIC I put into it.

  • IDE RAID (Score:5, Interesting)

    by 13Echo ( 209846 ) on Wednesday December 04, 2002 @09:45PM (#4815546) Homepage Journal
    My experiences with IDE RAID have been pretty darn good. Benchmarking my Deskstar 60GXP drives in Windows 2000 last year showed that I was getting read speeds in striping mode (between two drives) at faster rates than the fastest Seagate Cheetah SCSI drives. Times have probably changed now though.

    I started with a KT7A-RAID mobo. The important thing is that you get the cluster sizes just right for your particular partition. I used Norton Ghost to image my drive and try all sorts of different variables. In the end I had very satisfying results. Since I switched to Linux, I stopped using RAID-0 (yes, it is supported with this device!). I found that ReiserFS and the multi-drive Linux filesystem on these drives seemed to be just about as fast without having to hassle with soft-RAID controllers. It is probably due to my system RAM though. I couldn't seem to get Windows 2000 to make the most of 1024 MB without using that swapfile. Linux seems to avoid the swap altogether and uses static RAM instead. It is very nice having the extra IDE channels though. Without them, I probably wouldn't have 4 HDs hooked up right now.
  • ... and I absolutely love it ...

    I can't remember how I got by without IDE RAID ...

    In fact I love IDE RAID so much I recommend it to everyone I see on the streets ...

    I even bought one for everyone in my family, just in time for the holidays ...

    Thank you IDE RAID, THANK YOU!

  • by tcc ( 140386 ) on Wednesday December 04, 2002 @09:52PM (#4815587) Homepage Journal
    I bought that a while ago, when the Maxtor 160GB 5400RPM drives started to ship.

    I had to build a datacenter, and storage price was the main issue. I had to have something cheap, yet able to hold a LOAD of data. The problem is that personally I hate Maxtor drives; I've always found them more or less reliable (but drive experience varies from person to person, so..). Anyway, at that time Maxtor was the only one offering 160GB drives at a decent price per meg, and although 5400RPM is quite slow for access time, the main issue was cost, so I could take a hit on access speed as long as "streaming" speed was fast enough.

    The Adaptec 2400A card was the best at the time -- simple, cheap, efficient -- but it had 3 downsides for my application: no 48-bit LBA support (130GB+), no 64-bit PCI version (I was using a K7 Thunder, and that chipset will slow down the PCI bus to the slowest card connected to the bus, and since I wanted all available bandwidth thrown at the 64-bit gigabit card, I couldn't accept using 32 bits), and finally, no more than 4 drives. I wanted to break the terabyte limit, and if I had used 2 of those cards it wouldn't have made sense price-performance-wise, since the 2 would have shared the bus and I would have lost 2 drives to RAID-5 instead of one with an 8-drive setup. But the performance of the Adaptec 2400A was the best. It still looks like the best overall today, though I don't know whether they support 48-bit LBA yet.

    Anyway, the 3ware 7850 was an excellent choice. Although their tech support is only more or less good (like most tech support), especially for real bugs rather than standard driver-reinstallation issues, the response time and the sales people were very nice and professional. I got surprising results from the array: where I thought it would run like molasses, I was getting over 50MB/sec sustained non-sequential reads if I recall correctly. And the tools are very good; rebuild time is about 3-4 hours with 8x160GB and 400GB filled on the drives, and there are email alert tools and a web interface on the host machine for checking diagnostics. Overall it's a nice system and I'm sure the 7500 series is even better.

    Oh, and on a "funny" note, Windows shows 1.1TB available in the Explorer window, not 1134GB :) Reminds me of when I plugged my first gigabyte drive into my Amiga and saw big numbers :)

    As for the Maxtor drives, I didn't take any chances: I ordered 10 to get 2 spares. 2 blew up in less than a month, but I haven't had any problems since then. I guess if you can afford the time, doing a 1-month burn-in test with non-critical data isn't overkill. Usually they SHOULD blow up one by one so you can rebuild the array :).

  • by snowtigger ( 204757 ) on Wednesday December 04, 2002 @09:54PM (#4815597) Homepage
    HP has developed a pretty cool type of RAID: an automatic RAID level that organizes your disks for the best performance while maintaining security.

    When a friend explained it to me, it sounded like a mixture of raid 5 and 0+1. For example, if you replace a disk with a larger one, the extra capacity will be used to duplicate some other part of the array.

    White papers here [hp.com]
  • by trandles ( 135223 ) on Wednesday December 04, 2002 @09:56PM (#4815612) Journal
    We've run several big RAID-5 setups on 3ware cards. When I say big I mean 1TB+ on each card. To do this we've used the 100GB+ drives available (120GB - 160GB). The biggest problem has been drive failures. Out of the 40 drives I think we've lost 6 in less than 1 year. Only once have 2 drives gone bad at the same time (RAID-5, so we're covered if 1 drive fails), but we lost around 1TB of data. Luckily the data could be reproduced, but it took two weeks to regenerate.

    It's WAY too easy to build massive arrays using these devices. How the hell are you supposed to back them up? You almost have to have 2, one live array and 1 hot spare array. If you think you're going to put 1TB on tape, forget about it. If you have the cash to buy tape technology with that capacity and the speed to be worthwhile, you should be buying SCSI disks and a SCSI RAID controller.
    • I guess the good news is that building your own 1TB array via the 3ware route (I have had too many bad experiences with Promise cards to recommend them, having not used their current rev, which does look nice...) would cost you probably 1/5th or less of what that much storage would cost any other way. Maybe you buy the 300+ GB drives and do RAID 50 on a controller.

      I also wonder if the 4-8 drive configs are just overwhelming server cases, and heat is an issue.

      ostiguy
  • by Suppafly ( 179830 ) <slashdot@sup p a f l y .net> on Wednesday December 04, 2002 @09:56PM (#4815613)
    Using IDE Raid is like using a winmodem. Unlike with modems, where everyone has one, RAID has a basic educational entry point. I seriously doubt IDE Raid will ever overtake SCSI in any area where knowledgeable people are doing the administration.
    • by Nintendork ( 411169 ) on Wednesday December 04, 2002 @11:46PM (#4816107) Homepage
      Whatever dude.

      Winmodems do their calculations in software because they lack the chips on the card. That's a horrible comparison. These ATA RAID cards have everything built onto the card. The Promise SX6000 even has an on-board Intel i960RM RISC processor for XOR calculations.

      CPU utilization of these ATA RAID cards is negligible, so if you really need that extra 2 or 3 percent, just get a faster CPU.

      The main advantages SCSI has for performance are individual drive performance (15,000 RPM and 4.5ms access time as opposed to 8.5ms) and command queueing. The transfer rate isn't a big issue if you're transferring over the network: you're still limited by your PCI bus speed and the network speed. Even on a gigabit backbone, that's roughly 65MB per second of throughput in real-world performance. The performance is only a factor for local reads/writes and access time.

      The cost of a 1TB RAID 5 IDE setup (6 200GB drives, Promise SX6000 card, removable enclosures for the drives, and 128MB cache) = $2,450

      The cost for a 1TB RAID 5 SCSI setup (8 10,000 RPM 146GB Cheetahs and an Adaptec 2200S dual-channel card, plus the hot-swappable enclosures -- add at least $700 here) = At least $9,350

      If price is no object, go with SCSI. If you're running an enterprise SQL or WWW server with thousands of users, the access time of the drives is a huge benefit, so go SCSI. If each server must have more than 1TB of fault tolerant storage space, go SCSI because it can house enough drives per card to accomplish this. For everything else, go IDE.
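      For scale, the cost per usable gigabyte implied by those two quotes works out roughly like this (a quick sketch; RAID 5 usable capacity is N-1 drives' worth, and the dollar figures are the ones quoted above, not list prices):

      # Rough cost-per-usable-GB comparison using the figures quoted above.
      def raid5_usable_gb(drives, size_gb):
          return (drives - 1) * size_gb          # one drive's worth goes to parity

      ide_cost,  ide_usable  = 2450, raid5_usable_gb(6, 200)   # 1000 GB usable
      scsi_cost, scsi_usable = 9350, raid5_usable_gb(8, 146)   # 1022 GB usable

      print(f"IDE : ${ide_cost} -> ${ide_cost / ide_usable:.2f} per usable GB")
      print(f"SCSI: ${scsi_cost} -> ${scsi_cost / scsi_usable:.2f} per usable GB")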

      As an FYI, I'm running the described ATA RAID 5 setup with 120GB WD Caviars with 8MB buffer, a dual port 3com teaming NIC, 512MB RAM, and an Athlon XP processor as a highly utilized file server. Runs like a champ. No issues and the boss is incredibly happy with the price tag. $2,800 to build the whole server. It's rackmounted under our incredibly expensive Compaq Proliant ML530 which is just doing SQL. If a drive goes out, I'll get an email notification. I simply remove the dead drive, replace it, and rebuild. No rebooting needed.

      -Lucas

    • by jonbrewer ( 11894 ) on Thursday December 05, 2002 @12:07AM (#4816237) Homepage
      Using IDE Raid is like using a winmodem. Unlike with modems, where everyone has one, RAID has a basic educational entry point. I seriously doubt IDE Raid will ever overtake SCSI in any area where knowledgeable people are doing the administration.

      To You, Unbeliever: In 1999 I set up a file server in a factory in Connecticut. I used a four-channel Adaptec card and four 76 GB IBM DeskStar disks to create a RAID 0+1. (they were the biggest IDE drives on the market at the time) The array lost one drive after a few months, which was replaced without incident. It has faithfully served a 50+ node network for almost four years now. And at the time, it cost that factory $2500 in hardware and 7 hours of labor, for a 150GB volume. This was less than 25% of the cost of the cheapest SCSI RAID.

      SCSI raid is for those who don't keep up with the times, and find it easier to throw money at a problem than to actually find a good solution.

      Maybe you're one of these people?
  • I have about 5 TB of RAID 5 storage online at various customer sites. They are all using Linux software RAID and Promise ATA66/100/133 controllers. Even when using two drives per IDE channel, we still see very good performance. A RAID 5 system with eight 120-GB 5400-RPM Maxtor drives gives about 55 MB/sec write and 80 MB/sec read performance under Bonnie. Those eight drives were on two Promise ATA100 controllers. Cabling is fairly easy if you use 24" UltraATA cables. And it will get much easier with Serial ATA.

    One customer ordered a system from a vendor who insisted on installing an ATA RAID card, and it was a remarkable disappointment. Linux was able to identify the array as a SCSI device and mount it. Then, for some reason, the customer rebooted his system. During BIOS detection, the RAID card started doing parity reconstruction and ran for over 24 hours before finally allowing the system to boot! For comparison, the same-sized array would resync in the background under Linux in about 3 hours.

    Also, the reconstruction tools built into the RAID cards are pretty limited. If you have a problem with a Linux software RAID array, at least you can use the normal low-level tools to access the drives and try to diagnose the problem. Just my opinion.
    • Have you had any drives go south yet? My experience with Promise 33/66 cards a generation or two ago was that with 2 drives on a cable, one bad drive meant both drives' data got corrupted. So two cards, 4 drives (1a, 1b, 2a, 2b) in RAID 10 meant one drive dies, all is lost -- so much for RAID.

      ostiguy
  • Here's a mod I posted before that converts a cheap Promise ATA-100/133 or ATA-66 controller into a RAID unit. http://www.tweakhardware.com/guide/raid100/ The last time I checked, Maxtor was selling the Promise unit as their own brand as well. This means that it's in wide distribution.

  • Promise controllers have a quirky setup display. About two years ago they said they would fix it, but haven't done that.

    Anyone have comments about the others?
  • From what I've seen, hardware IDE RAID requires matched drives (at least, the capacity is based on the smaller drive). If I'm wrong, I'll be happy to hear about it.

    In any case, we use software RAID-1 so that the system can survive a drive crash. We started using RAID-1 on SCSI with the AIX Logical Volume Manager, and began using Linux RAID-1 on IDE when the Promise PCI controllers were supported in RH72.

    We have lots of AIX and Linux systems, and have had a dozen drive crashes over the years.

    • The first problem is not noticing that the drive is dead until two weeks later. :-) Preventing this requires good monitoring software.
    • The next problem is that when your 1G drive fails, the smallest drives available are 4G. When your 4G drive fails, the smallest available are 18G. With hardware RAID requiring equal-size drives, you have to replace the whole array. With software RAID, you just pop in the new drive, remove the old drive, remove the old mirrors, and add new mirrors. On AIX, volumes are divided into equal-sized "logical partitions" which are mirrored independently, so this is super easy. With the Linux "md" driver, partitions are mirrored independently, which is more cumbersome, but still lets you do something with the extra space on the new drive.
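    For the "capacity is based on the smaller drive" point, the arithmetic is simply that each member contributes only as much as the smallest member; a quick sketch (and, as noted above, software RAID on partitions lets you carve the leftover space into separate mirrors):

    # Usable capacity with mismatched drives: hardware RAID typically treats
    # every member as being the size of the smallest one.
    def mirror_usable(sizes_gb):
        return min(sizes_gb)                        # RAID 1: size of the smallest member

    def raid5_usable(sizes_gb):
        return (len(sizes_gb) - 1) * min(sizes_gb)  # RAID 5: (N - 1) x smallest

    print(mirror_usable([4, 18]))         # 4 -> 14 GB of the new drive sits idle
    print(raid5_usable([120, 120, 160]))  # 240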
  • While I find IDE RAID's attempt to change the "I" in RAID back to "inexpensive" interesting, I just can't get excited about it. It smacks of being a stopgap on the way to Serial ATA drive arrays, in the same way that EISA and MCA were stopgaps on the way to PCI. The fundamental limitations of two drives per channel and bundles of 40/80-pin cables just don't warm me up at all. I'm not even worried about the mess in the case, because you could probably tape the ribbon cables together into a chunky bundle and run it vertically up the back of your drive array.

    Having made the investment, I'll be wringing every last drop of sweat out of my homebuilt Linux/SCSI-160 network attached storage array, thank you very much! I'm hoping that by the time it's on its last legs I'll be able to drop in a Serial ATA RAID controller and a whole bunch of cheap drives to build the multi-terabyte storage array everyone will inevitably want by then.

  • by lanner ( 107308 ) on Wednesday December 04, 2002 @10:22PM (#4815751)

    Holy cow. Sistina LVM (Logical Volume Manager) rocks. It is a partition system/file system of the future that really makes RAID sort of unnecessary. It is true that it is done by the host OS, but when integrated right it does not matter.

    Documentation for LVM is great. It is stable and works without quirks. It does all of the things that I would typically desire from a RAID 0,1,5 setup. Administration tools are awesome and give output just as I hoped. Expand partition sizes LIVE (ext2resize needs to unmount though, that is not LVM's problem), move a file system to another physical drive, mirror partitions, spread partitions over various devices. LVM is NUTSO!

    It is built into the Linux kernel past 2.4.7 (or somewhere around there), though I have heard that it was inspired by the LVM in HP-UX. I can't say much about this.

    Understanding the concept of how LVM works can be a little hard at first, but once you get past that and then actually use it on a system, you will be totally blown away by what it does and the performance.

    Here is the website for LVM
    http://www.sistina.com/products_lvm.htm

    I personally use Sistina LVM on a Debian GNU/Linux system that has two 60GB IDE hard disks. I can change the sizes of partitions, move data around, move to a new hard drive on the fly, and do tons of things that I don't think I could do even with the highest-end RAID controllers. As for performance, it is software RAID, but it does not have any of the typical software RAID slowness or cruft factor. I initially chose LVM as a cheap alternative to buying an IDE RAID card. Now I don't even want an IDE RAID controller.
  • by dougie404 ( 576798 ) on Wednesday December 04, 2002 @10:28PM (#4815770)
    ...so be alert.

    Each IDE controller can support up to two drives, a master and a slave. What happens if you hang two drives off one controller, and the "master" drive dies?

    If it dies badly enough, the "slave" drive can go offline. Now you've got TWO drives in your array that aren't talking. There goes your redundancy.

    If your purpose in using RAID is to have a system that can continue operating after a single drive failure, then you better think again before you hang two drives off any one controller.

    As it points out in the Linux software RAID docs, you should only have one drive per IDE controller if you're really concerned about uptime. That would imply that "4 channel" RAID cards should only be used with a maximum of two drives, both set to "master", and no "slaves".

    Note that this does not apply to SATA drives, as there isn't really a master-slave relationship with SATA -- all drives have separate cables and controller circuits. SATA drives are enumerated the same way as older drives for backwards compatibility with drivers and other software, but they are otherwise independent. (At least that's what I hear, I haven't actually seen one of these beasts yet...)

    And of course none of this touches on controller failures, which is another issue. But if you are worried about losing drives and still staying up, then better take this into consideration when you design your dream storage system.

    (I don't know about you guys, but I have lost several drives over the years, and not one controller...)
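    To put the same point numerically, here is a tiny enumeration (a sketch, assuming the worst case described above, where a dead drive takes everything on its channel offline) of how many single-drive failures a 4-drive RAID 5 survives with shared versus dedicated channels:

    # Enumerate single-drive failures in a 4-drive RAID 5, assuming the
    # pessimistic case where a failure takes its whole channel offline.
    def single_failure_survival(channel_of):
        # RAID 5 tolerates losing exactly one member, so the array dies if a
        # single failure takes more than one drive with it.
        drives = list(channel_of)
        survived = 0
        for failed in drives:
            offline = [d for d in drives if channel_of[d] == channel_of[failed]]
            if len(offline) <= 1:
                survived += 1
        return survived, len(drives)

    shared    = {0: "ch0", 1: "ch0", 2: "ch1", 3: "ch1"}   # two drives per channel
    dedicated = {0: "ch0", 1: "ch1", 2: "ch2", 3: "ch3"}   # one drive per channel

    print(single_failure_survival(shared))     # (0, 4): any failure kills the array
    print(single_failure_survival(dedicated))  # (4, 4): every single failure is survivable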
    • by Enigma2175 ( 179646 ) on Thursday December 05, 2002 @01:36AM (#4816642) Homepage Journal
      Each IDE controller can support up to two drives, a master and a slave. What happens if you hang two drives off one controller, and the "master" drive dies?

      Actually, any modern standard IDE controller supports 2 channels, or four devices. You are right in saying you shouldn't have more than 1 device per channel, or 2 devices on a standard controller. Most of the dedicated IDE RAID controllers like the ones reviewed in the article have 4 or more channels. That lets you build a pretty big RAID before you would consider putting a disk on as a slave.

      Standard controllers are cheap, I just added a controller and 2 drives to my linux software RAID and it cost me less than $200 for the controller and the drives (80 GB and 30 GB). IIRC, the controller was ~$40. With prices like that, there is no need to run more than 1 drive per channel (unless you run out of PCI slots).
  • by Erpo ( 237853 ) on Wednesday December 04, 2002 @10:33PM (#4815788)
    I'm using IDE RAID on my home desktop right now, but I'm using software RAID as opposed to a hardware controller. I have two Seagate Barracuda ATA IV 40GB hard drives hooked up as masters on my primary and secondary motherboard IDE ports. I also have a DVD-ROM hooked up as secondary slave, and a Promise Ultra133TX2 controller with a CD-RW hooked up to its first port. Both hard drives are sectioned into a 3GB 1st primary partition and a 34GB (yes, the drives are only 40GB when you're in marketing land) 2nd primary partition. Windows 2000 is installed on the first drive's 3GB partition, and Red Hat Linux 7.3 is installed in the same place on the second drive. Both OSs share a combined 68GB RAID 0 set, made from the two second partitions and formatted with NTFS. The only problem is that Linux can't write to the array, because NTFS write support under Linux is currently "DANGEROUS" according to the driver's author and I keep important data on there. (Yes, I know about the dangers of using RAID 0 and I back up regularly.) It'd sure work a whole lot better if that driver were finished, though. (hint hint, Legato Systems, Inc.) ;)

    Getting the two OSs' software RAID drivers to play nicely together was an "adventure", mostly due to Win2K's insistence on turning the disks into "dynamic disks" before letting me use its built-in RAID functionality, meaning it wanted to wipe out my old partition table, replace it with a single partition taking up the entire disk, and create a new system of partition organization inside the dummy standard partition. After a lot of reading, I found out that Windows NT 4.0 supported "stripe sets" using standard partitions, and that Windows 2000, when installed over an old copy of NT4, would support the "legacy" software RAID drive. Windows 2000 would not, however, allow me to create new legacy stripe sets for compatibility with other OSs. Stupid Micro$oft. So all I had to do was fake Win2K into thinking it had been installed over an old copy of NT4 which had been using its stripe set functionality.

    The first thing I had to do was create partitions. I opened up Linux fdisk, allocated 3GB on each disk to my OSs (one for Linux and one for Windows), created two more partitions, each one taking up the rest of the space on its disk, and set their types to 87h (NT stripe set [thanks to whoever put the L command in Linux fdisk!]). After installing Windows 2000 on the first disk's first partition, I needed to get my hands on a couple of tools that didn't come with Windows 2000: the Windows NT 4 Disk Administrator and MS's fault-tolerant disk set disaster recovery tool, FTEDIT. After spending about 6 hours searching online, I finally found a download site for FTEDIT - MS's web site says you can get it free from them, but it provides no download link. NTDA was a bit easier. Since MS service packs replace OS files, and somewhere in NT4's history a bug or problem had been found in NTDA, that file was in Service Pack 6a for NT4. Service packs check whether you're using the correct OS _after_ they decompress themselves, and they're nice enough to display an error message telling you this ("Whoops. You just wasted a whole bunch of time downloading a huge file you didn't need. Sorry!") before they delete the decompression directory. Figuring that out took a while, but snagging the executable during decompression was easy.

    I ran NTDA, which populated the "missing" DISKS key in the Windows registry (Win2K stores disk information in a different place from NT4), and told FTEDIT that, yes, I really did already have a software RAID 0 set on those drives, and that Windows NT had died on me and I had to restore it. After a reboot, "Drive D" appeared in My Computer. 68GB and unformatted. YAY! :D After a quick format with NTFS (the partition was too big to format with FAT32), I was in business.

    Getting Linux to see the array was much easier. I added

    # /etc/raidtab entry for the striped set: RAID 0 across the two second
    # partitions, with 64K chunks to match what NT 4.0's driver used
    raiddev /dev/md0
    raid-level 0
    nr-raid-disks 2
    persistent-superblock 0
    chunk-size 64

    # the two member partitions, one per drive/channel
    device /dev/hda2
    raid-disk 0
    device /dev/hdc2
    raid-disk 1

    to /etc/raidtab, ran raid0run /dev/md0, and added a line to /etc/fstab. (I read online that WinNT 4.0's software RAID driver uses 64K chunks.)

    Btw, yes, I know Linux has support for MS's dynamic disk scheme. I enjoy tweaking and doing new things, even if it means days spent reading about Windows. ;) As a bonus, I also get to keep my standard partition table, as well as compatibility with non-M$ disk editing/management/recovery tools.

    "So," you're probably wondering, "why did Erpo spend all that time setting up a RAID0 set (presumably for extra performance) and then go and do a stupid thing like put a DVD-ROM drive on the same ata cable as one of the disks when he has an extra ata port on his add-in controller that he's not using?" Thanks for asking. It's because Promise's bios on the Ultra133TX2 card was broken. The company "Promised" me it would allow me to boot from CD, but in reality it only will let me do so when I want to boot from a windows installation CD. Not just any windows installation CD, either. It had to be Windows 2000 Professional or XP, which I refuse to use.

    It wouldn't recognize my Windows 98 SE CD, or any of my Linux distros. I didn't have a choice about the DVD drive if I wanted to install Linux. Just now, months after I got the card and sent Promise an email, they released a BIOS update that claims to fix the issue. If it works I'll be moving my optical drives around. Even with the DVD drive, the performance isn't too bad - about 80MB/sec at the beginning of the disk, slowly dropping to 50MB/sec at the end.
  • by cluge ( 114877 ) on Wednesday December 04, 2002 @10:47PM (#4815857) Homepage
    Our tests of the Promise RAID under Red Hat Linux with the "open source" drivers (2.4.19 vanilla), compared with the 3ware product, came out VERY different.

    I don't have the exact numbers on hand, but the 3ware product was roughly 3 times faster at reading (RAID 0+1 and RAID 1). The 3ware was also faster at writing, albeit by a much smaller margin. The number that DOES stick in my head is from the postmark [netapp.com] benchmark from NetApp that we ran: with 2,500 files from 2 to 200KB and 500 operations, the Promise took about 35 seconds. The 3ware product did the same in 12.
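
    For anyone who wants to repeat that test, a PostMark session along those lines looks roughly like this (the parameters are my reading of the numbers above):

    pm> set number 2500
    pm> set size 2048 204800
    pm> set transactions 500
    pm> run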

    The moral of the story is TEST, TEST, TEST - these types of articles only give you an idea. Promise worked great for me personally in several applications. After testing it for a production machine at work, we went with the 3ware because the Promise did not perform well for our application. Test for yourself, or forever be disappointed.

    Cluge
  • by photon317 ( 208409 ) on Wednesday December 04, 2002 @11:14PM (#4815961)

    First off, they've failed to note that some of their contestants are in fact just IDE controllers with the RAID functionality implemented in the software driver (WinRAID, like WinModems), whereas others are true hardware RAID. I don't know all four products well, so for at least one of them I'm not sure which camp it falls into.

    They tested CPU utilization and, separately, various speed tests, but never a comprehensive "loaded system" test. As expected, they ranked the Adaptec (a true hardware RAID) lowest, while ranking the WinRAIDs higher. This couldn't be further from the real truth. Sure, an idle P4 CPU does a great job of fast software RAID compared to the embedded RAID ASIC on Adaptec's card. However, on a heavily loaded server, where the processors are busy doing other things (say, SSL encryption for a secure web server), the machine with the Adaptec would trounce the others: its RAID processing speed will not decrease while your applications are using most of the CPU. (Or, depending on the device driver's preemptability, it could go the other way, with the CPU simply not being as available to your CPU-hungry SSL server because it's busy servicing the RAID.)
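
    A rough way to approximate that kind of loaded-system test - the commands and paths here are illustrative, not something from the review:

    # keep the CPUs busy with an SSL-ish workload in the background
    openssl speed rsa &
    # then time a large sequential write to the array while the CPU is loaded
    time dd if=/dev/zero of=/raid/testfile bs=1024k count=2048
    sync
    # compare against the same dd run on an otherwise idle machine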

  • by Futurepower(R) ( 558542 ) on Wednesday December 04, 2002 @11:33PM (#4816047) Homepage

    From the Slashdot story: "I personally would love to hear any ide-raid stories that slashdotters might have." I also would like to hear about this.

    Here's my story: I have extensive experience with Promise controllers. An IDE mirror makes data reads faster. If you are about to do a possibly damaging operation, it is good to break the mirror, pull out one of the hard drives, and do the operation on the remaining drive only. Then, if craziness happens, the drive you pulled is a complete backup.

    A mirroring controller is also a convenient way to clone a Windows XP operating system hard drive, something Windows XP otherwise prevents; normally, third-party software that runs under DOS is needed to make a usable full hard drive backup. See the section "Backup Problems: Windows XP cannot copy some of its own files" in the article Windows XP Shows the Direction Microsoft is Going [hevanet.com]. (The article was updated today. To all those who have read it, sorry for the previously poor wording of the section "Hidden Connections". Expect further improvements to that section later.)

    But Promise controllers are quirky. Sometimes things go wrong, and there is no explanation available from Promise. Promise tech support is surprisingly ignorant of the issues. The setup is quirky; it is difficult to train a non-technical person to deal with the controller's interface.

    Mirrors are a GREAT idea, but Promise is un-promising. That's my opinion. I'm looking for another supplier, so I want to hear others' stories.
  • by jason andrade ( 17150 ) on Wednesday December 04, 2002 @11:46PM (#4816109) Homepage
    I've passed this feedback on to the author of the IDE RAID roundup - I figured I might as well post it here too.

    I just thought I'd share some of my experiences with Promise support.

    Frankly, they have been terrible. Based on my experience with them, I would not voluntarily buy another Promise product at this stage.

    I have been attempting to get support under Linux for the Promise FastTrak, which is a popular embedded RAID controller option.

    Promise indeed "support" RedHat but do so with a binary only, closed source module that in the end turns out to be useless.

    Promise hard-code a supported kernel version into this driver, so you can run it under, say, Red Hat 7.3 - but only with the initial 2.4.18-3 kernel, which has a number of critical bugs that have been addressed in later (errata) kernel updates.

    Needless to say, Promise's driver will not run on any later kernel - or at least they are unwilling to answer questions on how to make it do so.

    An analogy would be if they released Windows XP drivers and your hard drive then stopped working whenever you installed a hotfix or a service pack, because the driver was keyed only to the specific initial release of XP. Promise don't treat Windows users this way, so why do they do it to Linux users?

    I've managed to get two responses out of their support, neither of which addresses my problem: support the hardware under Linux by releasing the source, or provide updated drivers that actually work with the released errata kernels.

    In terms of driver support for Linux/FreeBSD, 3ware wins hands down in this group.

    regards,

    -jason
  • by ChrisCampbell47 ( 181542 ) on Thursday December 05, 2002 @12:14AM (#4816270)
    Every time there's a discussion or article about RAID, especially IDE RAID, I am astounded by all the discussion about drivers, OS support, integration problems, yadda yadda yadda.

    Why hasn't the ArcoIDE solution [arcoide.com] caught on like wildfire? It provides mirrored-disk capability that is completely invisible even to the motherboard, much less the OS. I've been running it for years and it's great. Mine is the PCI slot model [arcoide.com], which simply uses the slot to get power to the card: one IDE cable from the motherboard to the card, two cables to the two hard drives.

    And there are all sorts of alarm options -- LEDs on the card, LEDs on a front-panel bezel, an audible screech, Form C contacts for you industry types ...

    I don't get it.

  • by baptiste ( 256004 ) <{su.etsitpab} {ta} {ekim}> on Thursday December 05, 2002 @12:42AM (#4816410) Homepage Journal
    I am one of the few who think IDE RAID is a useful tool and the 3Ware cards are the best out there.

    So I was surprised, reading the review, to see the Adaptec and 3Ware neck and neck in the RAID 5 tests. 3Wares usually have no competition in RAID 5, since their firmware and hardware rock.

    Then I found out WHY they were so close:

    "I don't currently have any boards that support 66MHz PCI slots, so all testing was done with
    32-bit/33MHz PCI."

    The 3Ware cards are 64-bit cards, while the Adaptecs are only 32-bit. 3Ware cards can hit 70MB/sec writing and over 150MB/sec reading with 8 HDs! If they ever get tested at 66MHz, I expect their performance to go even higher.
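
    The bus arithmetic backs this up. Theoretical PCI peaks, before any protocol overhead:

    32 bits x 33 MHz = ~133 MB/sec
    64 bits x 33 MHz = ~266 MB/sec
    64 bits x 66 MHz = ~533 MB/sec

    A sustained 150MB/sec read simply cannot fit through a 32-bit/33MHz slot.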

    If you want to see better benchmarks that fit with reality, check out the XBit Labs Review

    • by EvilNight ( 11001 ) on Thursday December 05, 2002 @09:30AM (#4817727)
      I'll second this. I've got a 3Ware card running a 4-disk RAID5 (100GB WDCs) under SuSE Linux 7.3 on a dual Athlon 1800XP Tyan board with a 64-bit/66MHz bus, and it owns every RAID system I've benchmarked here in the office.

      I even ran it up against a real SCSI RAID5 array running on 10,000RPM Seagate Cheetah drives (again 4 disks), and it decimated the SCSI setup for write speed - the 3Ware card was easily 5x faster. It tied for read speed, but the SCSI still won on access time (5ms vs 16ms). The SCSI RAID card was one of Adaptec's best - $800, though I forget the name now. Still, that's damn good performance for something 1/4 the cost. I've even got the benchmarks around here somewhere...

      If you are going to build a raid for a server, and you decide not to use 66MHz/64bit cards for your array controllers (scsi OR ide), kindly take this ball peen hammer and go stand in the corner whacking yourself in the head with it for several hours.
  • Fibre Channel RAID (Score:4, Interesting)

    by nuxx ( 10153 ) on Thursday December 05, 2002 @12:52AM (#4816469) Homepage
    Utilizing eBay and a few vendors that I dug around for, I was able to assemble a blazingly fast fibre channel RAID system for home for around $500. If you take a look at http://www.nuxx.net/gallery/fibrechannel [nuxx.net] you can see the assembly of the box. There are also benchmarks detailing the RAID 5 array bursting to >160MB/sec (image at http://www.nuxx.net/gallery/fc_benchmarks/aad [nuxx.net]).

    The box is set up as follows:

    o Mylex eXtremeRAID 3000 ($200 via eBay)
    o Crucial 256MB DIMM for Cache (~$50 from Crucial)
    o 4 x Seagate ST39102FC 9GB 10,000 RPM drives ($9/ea on eBay)
    o Venus-brand 4-disk external enclosure (~$35 on eBay)
    o Custom made FC-AL backplane for disks (~$200 from a site I can't remember at this time)
    o 35m FC-AL cable (HSSDCDB9) (~$40 for two on eBay)

    The best part? The box is located in my basement, so I have this incredibly fast disk access with no noise and no extra heat inside my case. That also allows me to cool the case more efficiently. Sure, IDE RAID may be cheaper, but the per-disk performance, coupled with the reduced noise in my office and the reduced heat in the case, is a big plus. Also, I might eventually pick up a second backplane for another four disks and do RAID 0+1. Since each channel is capable of 100MB/sec (without caching), a set striped across two channels would be amazing.
  • My experience. (Score:3, Interesting)

    by WeThree ( 2688 ) on Thursday December 05, 2002 @01:22AM (#4816588) Homepage
    I've got 12 WD 120GB 7200rpm Special Edition drives (8MB cache on each).

    They're all hooked up to a 3ware Escalade 7500-12 card in RAID5 with a hot spare. The application is storage of large numbers of raw digital images, 7-8MB each.

    It's been going for a few weeks now with no problems; the 2.4.19 kernel's built-in driver lights the array right up as sda1.

    bfair@deathstar:~$ df -h /dev/sda1
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda1             1.1T  543G  574G  49% /storage1

    SCSI subsystem driver Revision: 1.00
    3ware Storage Controller device driver for Linux v1.02.00.025.
    scsi0 : Found a 3ware Storage Controller at 0x10d0, IRQ: 5, P-chip: 1.3
    scsi0 : 3ware Storage Controller
    Vendor: 3ware Model: 3w-xxxx Rev: 1.0
    Type: Direct-Access ANSI SCSI revision: 00
    Attached scsi disk sda at scsi0, channel 0, id 0, lun 0
    SCSI device sda: -1951238656 512-byte hdwr sectors (100477 MB)
    sda: sda1

    reiserfs: checking transaction log (device 08:01) ...
    Using r5 hash to sort names
    ReiserFS version 3.6.25
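
    Getting from the raw sda1 in that log to the mounted ReiserFS volume above is just the usual two steps - a sketch, since the comment doesn't show them:

    mkreiserfs /dev/sda1
    # matching /etc/fstab entry
    /dev/sda1    /storage1    reiserfs    defaults    0 0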


    I would show you more, but I'm ssh'd in and the power just went out. The 300VA UPS running this box while I'm testing it probably just let its smoke out. Doh.

    Anyway, I like it. If it's not fried. :\
  • by haggar ( 72771 ) on Thursday December 05, 2002 @04:40AM (#4817161) Homepage Journal
    I like reading the comments here; I am humble enough to know I can always learn something. But there's something I didn't see mentioned in all these IDE RAID setups that people describe: can you have a hot spare disk? A hot spare is critical for data reliability. If you have a large RAID 5 or RAID 0+1 array (not advised - always do 1+0 whenever possible), you can do the math and see how darn important it is to have the hot spare.

    What good is it to have a RAID 5 without a hot spare, when you can only guard against a single drive failure? So I really hope IDE RAID supports hot spares; otherwise I question the sanity of the admins who implement such solutions.
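
    For the raidtab-style Linux software RAID shown earlier in the thread, a hot spare is just an extra stanza - a sketch with hypothetical device names:

    raiddev /dev/md0
    raid-level 5
    nr-raid-disks 3
    nr-spare-disks 1
    persistent-superblock 1
    chunk-size 64

    device /dev/hde1
    raid-disk 0
    device /dev/hdg1
    raid-disk 1
    device /dev/hdi1
    raid-disk 2
    device /dev/hdk1
    spare-disk 0

    And as the 3ware Escalade setup described a few comments up shows, the better hardware IDE RAID cards can run RAID5 with a hot spare too.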

    As for IDE vs SCSI drives, I have to say that I will always go with SCSI as long as I am in a multiuser environment where seek times are critical. Experience shows that if you put your database space on a RAID, seek times are critical to the performance of your application. In this context, I think this review/comparison would have benefited from benchmarking a real-life application, with a database hosted on the RAID.
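
    One low-effort way to run that kind of test - assuming a PostgreSQL install with the contrib pgbench tool; the scale and client counts are arbitrary:

    # initialize a test database on a filesystem that lives on the array
    createdb testdb
    pgbench -i -s 50 testdb
    # then hammer it with concurrent clients to generate seek-heavy random I/O
    pgbench -c 16 -t 1000 testdb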
  • by bdowne01 ( 30824 ) on Thursday December 05, 2002 @11:50AM (#4818504) Homepage Journal
    I've been working on x86-based servers for a long, long time.

    There are many reasons one should choose SCSI over IDE, but I want to counter a few of the arguments I've read through the many messages here:

    Argument #1:
    SCSI can have 15 devices per bus, but why buy more, smaller, more expensive SCSI drives instead of fewer large IDE drives?

    Answer: Bigger isn't always better. On large RAID systems (real servers here, people... not MP3 servers), one of the ideas behind RAID5 is to spread the data across as many drive spindles as possible. This keeps each drive's load under control and eliminates hot spots on individual disks. If you sit down with any SAN vendor, like EMC, they will tell you the same thing.

    Argument #2
    Sustained IDE RAID performance can equal SCSI
    Answer: Incorrect in practice. It may hold on a server with no CPU load, but try it again on a server running SQL and averaging 85% CPU load. You will NOT see the same performance out of an IDE disk subsystem; there is simply too much CPU overhead in an IDE-based RAID setup for heavily loaded systems. The point of a SCSI controller is that it keeps the system's CPU from becoming the bottleneck. The money saved on non-SCSI hardware will instead need to be spent on faster CPUs.

    Argument #3
    IDE Disks are just as reliable as SCSI
    Answer: Again, false. You get what you pay for. SCSI disks have logic on each disk to control that disk's operations; in a RAID array, you want each disk to be as independent of the others as possible. IDE RAID requires the controller to do all the monitoring of each disk (if any is done at all), which cuts into its primary function - controlling disk I/O. Anyone who has worked on a Compaq server and used Insight Manager will see the advantages of SCSI disks directly. SCSI disks are more reliable because they are built to be more reliable; IDE disks are meant for cheap deployment on cheap systems.

    Thank you, have a nice day :-)
