Hardware

Promote Your ATA66 Controller To A RAID Controller 311

SPI3LB3RG writes "Evidently the only differences between the Promise ATA66 controller and the Promise FastTrack66 RAID controller (besides cosmetics) are a five-cent resistor and the BIOS. The page tells how to change the ATA66 into a RAID controller. (A simple BIOS flash and some soldering.) In the end, you have a $65 RAID controller for about $20."

Current price at buy.com on the Promise ATA66 Controller is USD 34.94, and the FastTrack66 RAID Controller is USD 123.95; at pricewatch lowest prices shown are USD 27.00 and USD 113.00 respectively.

This discussion has been archived. No new comments can be posted.


  • by Anonymous Coward
    I'll probably get moderated down, but I have to step on the soap box some time. The only people who have ever told me Jews are cheap are my jewish buddies. So why is it wrong when a non-jew says it? How do we even know he is a non-jew? It's like blacks calling themselves niggers, so why can't we? Nothing to do with racism, just realize the poster is a twit, and move on. And realize it's not a racist comment, because enough Jews are actually cheap to make the generalization semi-valid. (Though I believe the crime is generalizing something that should not be generalized.) The PC attitude is the same as racism, because it's dumb generalization of being offended by anything controversial and race-related. Your whining is as bad as the original statement.
  • by Anonymous Coward
    Here [storagereview.com] is a site that describes a much simpler mod which accomplishes the same thing without needing to desolder the 32-pin chip.
  • by Anonymous Coward
    ..I just bought a new Gateway computer last week and I still have not gotten Linux to work on it yet!

    It also doesn't support NT natively without unplugging both hard drives from the card, doing an install by plugging the hard drives into the IDE slot on your motherboard, installing the driver, and then opening the case up again and THEN plugging your hard drives back into the card. Ridiculous!

    There is also no native Linux support; there is only a beta Linux driver that is not RAID compatible. To install the driver you must follow the same steps as the NT installation. If you try to install Linux you will receive a "Can not find hard disk" error.

    If your motherboard comes with an IDE ATA-66 port soldered onto it, then you are screwed! You will have to either a) throw out your new computer or b) put up with Windows and hope Linux will support your new ATA-66 EIDE ports someday right out of the box.

    ..oh. DO NOT EVER BUY A GATEWAY!

    ahhh, that feels a lot better. The card doesn't make that much of a noticeable difference in performance. The card's I/O is waaay behind SCSI. For a server with a lot of users, SCSI is still the way to go. For workstations ATA-66 might be a good deal. I would still stick with EIDE PIO mode 4 drives and ports for non-servers.
  • by Anonymous Coward
    It is still software RAID. You will have much better luck using the device as a normal ATA 66 controller and using Linux, BSD or even WinNT's software striping, mirroring or RAID 5 (striping with distributed parity).

    Nifty, yes.
    Useful, no.

    As for running the suboptimal Promise drivers on Win95/98/NT: I'd treat Promise drivers the way I'd treat Creative Labs sound drivers on my NT box. Please stick me in the eye with a hot poker.
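For what it's worth, the parity math software RAID does is simple enough to sketch in a few lines. A toy illustration of RAID-5-style XOR parity (just the principle, not any kernel's actual code):

```python
from functools import reduce

def parity(blocks):
    """XOR parity across equal-length data blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def rebuild(surviving, parity_block):
    """A lost block is the XOR of the parity block with the survivors."""
    return parity(list(surviving) + [parity_block])

# Three data blocks and their parity; losing any one is recoverable.
d0, d1, d2 = b"\x01\x02", b"\xf0\x0f", b"\xaa\x55"
p = parity([d0, d1, d2])
assert rebuild([d0, d2], p) == d1
```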
  • by Anonymous Coward
    Yes, "hardware hacking" is fun. But I wouldn't call poking around a little bit with commodity PC components "hardware hacking."

    Hardware hacking is done with oscilloscopes, microcontrollers, EPROM burners, wire wrap sockets, and suchlike. Magazines like "Circuit Cellar INK" cover it, and it involves more than PeeCee software.
  • by Anonymous Coward

    Your setup is still cheaper, but Ultra160 SCSI is available now and it HAULS! The controller is about $350, and an 18 gig drive costs about the same. Yes, now I have $700 invested and only 18 gigs to show for it, while your setup can be had for about $650US (2x$200US for the drives and $150US for the controller), and you've got 40 gigs.

    I plan on getting one of these Ultra66 cards to try this out, but not for speed. I want to set up a striped mirror set of four 20 gig IBM drives (40 gigs total) for fault tolerance.

  • AOL and Mathematica CD-ROM disks are physically identical products too, except for slight differences.

    Catch a clue. The product vendor is selling their IP. You buy the cheap product, and you're buying the lower-cost IP. You buy the more expensive product, and you're buying a much more significant chunk of IP.

    If you don't understand the concept of Intellectual Property, maybe you don't belong in the computer industry.
  • You know that someone's going to find out how to fix it sometime or other..

    Kinda reminds me of security by obscurity . . .
    When really huge amounts of data are at stake, I certainly have heard RAID suggested for this reason. Indeed, I recall that my database administration textbooks introduced it as an inexpensive alternative. With prices on large drives going down, it's starting to be used more strictly for redundancy in many places -- but that doesn't make the original use invalid.
  • You want to void your warranty and render tech support useless so you can save $40-100? How valuable is your data? Hopefully more than that.

    You're in the wrong place. The workstation users are over there ---------->. These are peecee lusers. They've never heard of the service contract. Probably because any vendor who offered them on peecees would go under in five seconds.

  • Right. I know they're not full-blown RAID cards. But that's the exact same thing for the Promise cards described.

    Thanks for the tip... I'll look into getting one over the summer...
  • Hmm... I know that some of the older Athlon mobos couldn't supply enough power to both the processor and the card, dunno about the Super7 mainboards.
  • A translation of the German text (which is not much better than the English one, BTW):
    "A friend of mine brought me to the idea to publish this on the Internet so that other POWER USERS also have the possibility to feast their eyes on how fast you can STILL get your PC".

    Not that this was very important ...

    Sebastian
  • If the difference between $30 and $113 is "...a pretty chunk of change...", then you must work for a really small company. Personally, while it might be a neat hack for home use, if RAID is that important to you, I'd really recommend something that can be supported.

    Beware false economy.


    ...phil

  • If they weren't making a profit *and* working towards breakeven at the $20 pricepoint, they wouldn't be selling at that price at all.

    I'm curious to understand the bizarre logic that makes you think this.

    The 'crippled' $20 product is a means to sell only a portion of their intellectual property. For $20 you get the benefit of their ATA66 R&D effort. For $65 you get the benefit of that plus the RAID R&D effort. This ability to charge incrementally for development effort is probably key to the profitability of the product line as a whole.

    How they package that intellectual property is irrelevant. It simply turns out that it's cheaper to disable the RAID stuff than it is to fab two entirely different products.

    >If you've ever found yourself wanting a third hand while soldering
    >then you're probably not a hacker.

    If you've never soldered anything where a third hand would have been *really*
    useful, then you're just not ambitious enough.

    Soldering wimp! :)

    With a soldering iron, yellow stickies, duct tape, and a couple of
    bungies, you can fix *anything*

    Hmm, I've never considered myself a hacker, but I guess that last line
    kind of defines me as one, doesn't it? :)

    hawk
  • until it's drawn blood . . . And my latest (hmm, two years old???) managed to do it before I got the motherboard . . . My uncle & cousin wanted to see it, so I brought it in from the trunk.

    OK, it's also that my uncle reads & writes ancient Egyptian, and I wanted the label that read "Hawkins" :)
  • Ok. There's a bunch of questions here.
    • Re: your performance problems. Are you using the 0.9 RAID patch (and appropriate RAID-utils)? It's much better.
    • Software RAID for SCSI IMHO is better than Software RAID for IDE. It works great on my system. Perhaps the problem is with your controller?
    • 2.3.3x's standard RAID is different from 2.2.14, I think... Somewhere in the recent 2.3.x tree the 0.9 RAID patches were applied.
    I use Software RAID over my UW-SCSI controller, and I'm telling you, it ROCKS!
  • If you've ever found yourself wanting a third hand while soldering then you're probably not a hacker.

    No matter how many times I read this, I can't make it make sense. Did you mean "If you've never..." or what?

    Of course, maybe my inexperience with soldering is showing...
    -----
    The real meaning of the GNU GPL:

  • If you've ever found yourself wanting a third hand while soldering then you're probably not a hacker.

    You know, that is a cool saying. Mainly because it's got SO many different interpretations, but also because almost all of them are SO false.

    • Here are some interpretations:
    • And why not? I've always wanted a third hand -- wouldn't that be COOL?
    • What, you mean soldering doesn't excite you?
    • Yeah, a real hacker would figure out a way to hold the extra items without having to use his hands.

    I'm sure you meant the last one I listed. I sorta like the other two.

    -Billy

    Hmm, 600MHz Pentium III, 128MB of RAM and cheap 18GB 5400rpm IDE drives.

    Anyone else see the problem with this picture?

    Especially when you're going to be wasting 6 million cycles waiting for I/O.

    10/15K rpm SCSI drives. Can't use ATA interface drives.
    Think about it for a second: the overhead imposed on your CPU when doing software RAID is something like going thru one extra layer in the FS-> VFS-> block device-> (RAID)-> buffer/cache-> device chain. It's something like 30 extra lines of C code (give or take). This is just _not_ going to make your CPUs peg to 100%. Something else must have been doing this to you, I don't know what, but I've _never_ heard anyone report this. For the record, my own box with 4 4.5G IBM SCSI disks does 30+ MB/s sustained transfers on RAID-0. For me the speed limit is there because of crappy memory bandwidth on the aging PPro system.

    SW RAID should work with IDE, SCSI, and any combination you please. Your experience sounds pretty strange.

    If you didn't patch the 2.2.14 kernel and didn't use an -ac patch, then you've been running the old RAID code. This is inferior to the 0.90 code available as patches, but RAID-0 performance should be comparable. My guess is something else in your setup caused this CPU hogging. If it's just misconfiguration of the RAID, it's pretty interesting, as I've seen no one else hitting the problem you describe.
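To put the "few extra lines of C" overhead claim in perspective, here is roughly all the arithmetic a RAID-0 remap involves, sketched in Python (the chunk size and disk count are made up for the example):

```python
CHUNK = 64 * 1024   # stripe chunk size in bytes (assumed for the example)
NDISKS = 4          # number of member disks (ditto)

def raid0_map(offset):
    """Map a logical byte offset to (disk index, byte offset on that disk)."""
    chunk_no, within = divmod(offset, CHUNK)
    disk = chunk_no % NDISKS                        # chunks rotate across disks
    disk_offset = (chunk_no // NDISKS) * CHUNK + within
    return disk, disk_offset

assert raid0_map(0) == (0, 0)
assert raid0_map(CHUNK) == (1, 0)                   # second chunk, second disk
assert raid0_map(4 * CHUNK + 5) == (0, CHUNK + 5)   # wraps back to disk 0
```

A divmod, a modulo and an add per request is not the kind of work that pegs a CPU.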

  • 1) Software on a PII is slower than software on an i960 or i486 or M68-something? That's a new one...
    2) The Linux kernel has excellent caching techniques. Besides, it uses your main memory for caching, so you get the benefit of having a dynamically sized cache, adapting to your needs at any given time.
    3) A HW controller doesn't magically make all your disks hot-swappable. Often people are happy just that their system keeps running until some time when they can conveniently take it down and replace failed disks. I'm happy with that :)
    4) No serious sysadmin is going to use anything but csh and a Sparc5. But for the rest of us, the better performing, more flexible, and cheaper solution is definitely something that shouldn't just be ignored.
    5) You can build nice RAID-5 sets with IDE; just buy a few extra IDE controllers, use only one disk per bus (to keep good performance), and you're off.

    The point of RAID without hotswap is saving you a night of reinstalling and restoring backups. You will have to take the system down to replace the disk, but you can do this at a convenient time. That's worth a lot to many of us. And remember this is often at the price of one extra disk.
    You know you're a hacker when...

    you don't mind soldering over your lap when wearing only shorts.
  • Once again, I am not a RAID guru, but even I can see that you are obviously wrong here.

    1. IDE controllers are brain-dead. They do absolutely nothing and rely on the CPU to do everything. That was the intent of the design -- to make it as cheap as possible. They sure have achieved that, and it works rather well for desktops I must say. In and of itself it is not necessarily bad, but see #3.

    2. DMA33 is actually 33 MB/s (hence the name ...) The 25 MB/s transfer rate you are seeing is actually the HD limit. Even though the interface can sustain 33 MB/s, the HD is not as fast. 12.5 MB/s is actually pretty good for a HD.

    3. (most important) Only *one* IDE HD per channel can work at a time. This means that if you have 2 HDs attached to the same IDE channel (i.e. master & slave), only *one* of them at a time will actually be able to transfer data. The second one will have to wait. Therefore, this defeats the whole point of RAID -- concurrency. The only way you can have concurrent reads/writes is if you connect 2 HDs to 2 different IDE channels. And since there are normally only 2 IDE channels in a box, your RAID would be kinda limited...

    On the other hand, with SCSI you can connect up to 15 devices to one SCSI controller and have them *all* operate concurrently.

    So, my conclusion is that, while software RAID may actually be a viable alternative to hardware RAID (as some people claim), IDE RAID is simply not suited for the job.
    ___
  • I bet you can keep on stacking the cards up too, until you run out of IRQs...

    Exactly. Until you run out of IRQs. I have never heard of more than 4 IDE controllers in a box, and even that is kind of a stretch.

    Well, this is good. If what I think above is true (i.e. the PIO modes are the only processor-intensive ones) then this means that a UDMA hard drive is going to run at peak performance.

    Well, DMA does offload CPU somewhat, but it's still not SCSI. SCSI offloads *all* the I/O work from the CPU. The UDMA HDs sure will work at peak performance (since the 33MB/s the UDMA33 can provide is usually at least twice as fast as HDs can sustain), provided that they have CPU's attention.

    ___

    I have heard lots of times (even here on /.) phrases like "avoid software RAID like the plague". Is the overhead really that small? Also, I understand that Linux does support some hardware RAID controllers (I heard Mylex something or other mentioned).

    Could somebody else please comment on the issue? As you can see, my knowledge of RAID is quite limited.

    ___
    Umm.. ahem. It's pretty funny to see all these people arguing about why you should use this or software RAID from Linux when they are the same thing. Think about it - if you can flash the controller's BIOS into suddenly being a RAID controller, then something must be happening in software. I've never heard of a BIOS flash that actually creates a whole new processor right onboard. Hardware RAID always, always costs more than a hundred bucks. If you don't believe me, may I suggest Pricewatch [pricewatch.com]. The extra money you pay is for a dedicated on-board processor. The FastTrack don't got one; it relies on the host processor just as much as Linux kernel RAID does. I'm not sure which is faster, but I guess they'd be comparable. So just pick one - if you're like me you'll pick the free solution, but whatever.

    --
    How do you explain RAID 1 then? Both redundancy and striping, and a lot more cost effective: combining two 4.5GB 10,000 RPM drives (assuming such a beast existed) into a logical drive with half the seek time and twice the throughput would be a lot, lot cheaper than buying a 20,000 RPM drive with no failover. I realize they don't even exist, and I think (?) that's the whole point of RAID: obtaining a logical drive that works a lot, lot faster than anything on the market today, and doing it at a cost that would be much cheaper than obtaining some sort of prototype or whatever. If a RAID guru would like to jump in here, feel free.

    --
    Hotswap really isn't the commodity people make it out to be. I find that those who decry software RAID because you can't hotswap often are the same people who bought EDO DRAM over FPM, etc. - they don't really see a difference or know why they are paying extra money, except that it's the "right thing to do." The reality is that very, very few people can actually justify spending several hundred dollars on a hardware hotswap RAID controller because they can't tolerate more than a few minutes downtime. Furthermore, a lot of people equate hotswap with the ability to rebuild a crashed drive. This is a fallacy - one is inherent to RAID 5, the other isn't. You can rebuild a crashed drive using software RAID 5, but you have to open the case and replace a drive. BFD.. most people don't need hotswappability. Software RAID is fine.

    --
    It's all a matter of perspective. I can go out and buy a one terabyte RAID array right now. I cannot go out and buy a one terabyte hard drive. In fact, a one terabyte hard drive would cost so much to produce in terms of physical medium (platters and heads) that it would be cost-prohibitive versus stringing a bunch of "cheap" Cheetah 18LP hard drives together. That's the whole idea behind RAID - string a bunch of cheaper disks together to make one really, really big disk, which, in some cases, even if it could actually be made (like the 1TB), would cost exponentially more.

    --
    One to hold the iron, one to hold the solder, and one to position the work. Sometimes I'll clamp the iron between some books and hold the work and solder, and sometimes bend a piece of solder so it's hanging in the air and hold the iron and the work. The real solution is to get a 'PanaVise' to position the work. A not recommended solution is to hold the solder in your mouth. If you're working on the floor sometimes you can hold the iron between your toes. There were a few irons available that had the roll of solder right on them - just press a button and it feeds some to the tip!
  • Companies just want to have fun - no, to have thick profit margins - they do this by reducing their production costs and NOT passing the savings on to you, the consumer, unless a competitor threatens to.

    One big F500 company I know of had an 8-bit device with 16K of memory. Customers could buy memory upgrades for it by having a tech come out and plug in another row of mem-chips, customer paid for parts and labor. Well time went on and they found it cheaper to just make all the boards with 64K chips but they only enabled 16K. Now the customer had to pay the same damn fee to 'add more memory', but now all the tech did was come out and pull a jumper.
  • "Should you ever need to get support for your card from the manufacturer, you will be able to..."

    Think of it this way: It's true that you will probably only be able to get support for the actual raid card, should it ever fail. But, if the cheap, "overclocked" card fails, you can simply buy a new one (or three) and you'd still be saving money.

    --

  • Your issue there is that FlashPoint RAID cards aint RAID cards.

    AMI Megaraid is a RAID card. Mylex DAC960 is a RAID card. FlashPoint is a SCSI card with some firmware bits and pieces to help along a software RAID system.

    FWIW, there are any number of two channel UW-SCSI AMI Megaraid 428 cards on the surplus market at about $150. They rock, and have full linux support.

  • i have had bad, Bad, BAD experiences with software raid in production environments!

    I had 1 of 4 drives in a raid 5 array using the 0.90 drivers go bad. When it went bad, it locked up the scsi bus long enough that the raid driver decided that another drive must also be bad.

    So I had one drive down and another drive tagged as desynched. This resulted in Total Loss of Data. There was no way to make it rebuild. I tried everything the people on the mailing list could think of.

    My fault for not having backups. Shame on me.

    BUT. The speed allegations are pretty false. A *good* hardware RAID card with on-board hardware caching beats the tar out of software RAID. Up to 4x the sustained throughput.

  • RAID 1 is the most reliable configuration you can have, in that fully 50% of your storage space is devoted to redundancy. However, if you're needing speed above all else, I'd say go with either RAID 3 or 5...

    With just 4 drives, RAID 3 is the least expensive in terms of CPU usage. It's also better if you're most concerned with speed, in that 3 of your drives will be focused solely on read/write operations and the 4th will dedicate itself to redundancy, rather than splitting the data and redundancy across drives, which adds many more seeks...

    I'm not sure what your end goal is... I'm thinking you're looking at something like video editing where throughput means everything compared to anything else. If you're already doing a lot of seeks - database operations - go with RAID 5, as it's more easily expandable.

    -----

    As a final note, if you're going to invest in new drives, you may want to check the newest 10,000 RPM drives... The added RPMs plus the increased density of the tracks mean that they're easily more than 150% faster...
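One way to see the RAID 3 vs. RAID 5 difference described above is where the parity block lands for each stripe. A toy sketch (the disk count is assumed for the example, and this mirrors no particular driver's layout):

```python
NDISKS = 4  # assumed array width for the example

def parity_disk_raid3(stripe):
    """RAID 3/4: parity always lives on one dedicated disk."""
    return NDISKS - 1

def parity_disk_raid5(stripe):
    """RAID 5: parity rotates per stripe, spreading the parity-write load."""
    return (NDISKS - 1 - stripe) % NDISKS

# Every stripe hammers disk 3 under RAID 3...
assert [parity_disk_raid3(s) for s in range(4)] == [3, 3, 3, 3]
# ...while RAID 5 spreads parity across all four disks.
assert [parity_disk_raid5(s) for s in range(4)] == [3, 2, 1, 0]
```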
  • By a friend mailto:(bp@nrw-online.de) I was brought on the idea
    it by Internet admits to give thereby also different POWER to USERS those
    Likewise by it like fast one has itself to amuse possibility its computer STILL
    wars can.


    Honestly, what in God's name is he trying to tell us here?

    Other than that though, groovy discovery.
  • Do you remember the original description of RAID?

    Redundant Array of Inexpensive Disks ;-)

    I don't see SCSI, Fibre Channel, or 3+ disks requirement there anywhere.

    With SCSI it started to mean more like Independent Disks, and you got the features like the so-called "hot swap".

    IDE drives are certainly inexpensive, and there should be no reason not to set them up in a RAID configuration.

    Why not set up 2 IDE drives in RAID level 1 (Mirror)? No one is required to set up RAID level 5 on their box.

    If you ask me, that is quite a usable configuration, the odds of both IDE drives dying at the same time are slim.

    Let RAID stand for what it once stood!

    P.S. If you are wondering, I use DataDirect Enterprise 8 and Mylex RAID hardware in my datacenter.
    --
    Leonid S. Knyshov
    Network Administrator
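The "odds of both drives dying" point above can be put in rough numbers, assuming independent failures and a made-up annual failure probability (an assumption for illustration, not a vendor figure):

```python
# Hypothetical annual failure probability per drive (an assumption,
# not a manufacturer spec).
p = 0.03

# With independent failures, losing BOTH mirror members in the same
# year is p squared. The real risk is lower still, since the second
# drive has to die before you replace the first.
both_fail = p * p

assert both_fail == 0.03 * 0.03
assert both_fail < p / 30   # over 30x less likely than a single-drive loss
```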
  • Before I will try messing with the card, I would first like to see it work correctly in my system. I got one about 4 months ago, but I have not gotten it to recognize the drive. The system sees the card and I can use the drive with the motherboard's regular ide, but the promise card is being a pain.

    Second point, I am not too hot on the idea of applying heat to a card. Even if it works, I could see Promise coming out with later driver updates which are tweaked to work right on their raid cards and deep-6 a tweaked card (referring to purchased OS's, not open OS's and home grown drivers) . This is too new a card. There will be a year more of bug fixes before they work everything out.
  • Heh. You haven't really hacked until you've used low-temperature solder and a screwdriver heated over a candle to fix a circuit... :-)

    Then there was the time I assembled an Apple-II-clone motherboard and wondered why the heck it didn't work, until I noticed (the scope helped) that all the discrete transistors had been placed according to the silk-screen shapes, but the silk screen was wrong, so every one of them was in backwards (emitter and collector reversed). Amazingly, it actually worked after I desoldered them and turned them around.
  • Try swallowing an anvil. Quite harmful, whatever it's made of.
  • Well, I believe the point is that they are the SAME BOARD with the SAME COMPONENTS.
    The only difference is the firmware, and perhaps a transistor that allows you to flash in new firmware.

    Just like USR's old sportster.

  • 1. So? We have plenty of CPU available. Usually we are waiting on I/O. If you are worried, go SMP. You are still getting away cheap.

    2. UDMA33 bursts to 33.3, but sustains 16.6. UDMA66 bursts to 66.6, but sustains 33.3.

    3. So add more channels. You have 2 on board, add 2 more for under $50. Now you have a 4-way RAID-5. If you want a hot spare, make it a slave. When you replace the dead drive, move the spare to be a master. No one in their right mind puts 15 hard drives on a SCSI channel. Drives can do an easy 15 MB/s, usually 20 MB/s+. U2W SCSI is stuck at 80 MB/s. Also, for redundancy, you always use multiple channels, so you can't be taken out by a cable.

    You are touting conventional wisdom. I'm giving you an interesting alternative for the dollar-impaired. I will not argue that SCSI is a better way to go, given a sufficient budget. I'm not used to such circumstances.

  • IRQs are not a problem. Each PCI card you add takes one IRQ, a "limitation" of PCI. You can find cards with as many as 8 IDE channels. I think you start worrying about PCI bus bandwidth at that point, especially if you are competing with 100baseT net cards, etc.

    Linux supports 8 IDE channels. I'm not sure where the limitation is at this point, never needed more than 8.

  • How about an email message? /proc/mdstat contains the status of the array, write a script and cron it. If you want an LED, attach an LED to the serial port and have the script write there.
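A minimal sketch of the kind of script suggested above: scan /proc/mdstat and flag arrays whose member map (e.g. [U_]) shows a missing disk. The parsing here is a loose guess at the mdstat layout, not a reference implementation; cron it and pipe any output to mail.

```python
import re

def degraded_arrays(mdstat_text):
    """Return md devices whose member status string (e.g. [U_]) shows a dead disk."""
    bad, current = [], None
    for line in mdstat_text.splitlines():
        m = re.match(r"(md\d+)\s*:", line)
        if m:
            current = m.group(1)          # remember which array we're inside
        m = re.search(r"\[[U_]+\]", line)  # member map like [UU] or [U_]
        if m and "_" in m.group(0) and current:
            bad.append(current)
    return bad

# In a cron job, feed it open("/proc/mdstat").read() and mail the result.
```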

  • Can a real RAID guru post them?

    I'm not a real RAID guru, but I play one at work.

    If you are putting together highly available servers, perhaps for heavy DB serving, and have money, I might agree with you.

    But if you are putting together load balanced servers, such as mail, web, authentication, etc. then software RAID, even with cheapo IDE drives, kicks serious ass. Normally, you can't justify serious RAID for these boxes, but cheap software RAID/IDE means that you can add some redundancy and have an easier day. Also, if you are load balancing, you can afford the reboot to replace a drive.

    At an ISP, this is the world I live in. I don't have big DB servers, but an army of smallish servers. I also have no money and no time to rebuild a server every time a drive dies. This is the best of both worlds to me...

    I've seen this in action. We installed hardware hot-swappable RAID-5 enclosures in businesses where there is no resident tech support. The bad drive LED goes on, an e-mail notification is sent, and the person in charge of maintaining the enclosure (could be a janitor) hot swaps out the drive tray with the error LED on and sends the bad one for advance replacement.

    It really couldn't be easier, and it's well worth a few K$, if you have better things to do than futz with a balky RAID.
  • This is a really good point, and bears reinforcement.

    People forget what RAID stands for: Redundant Array of Inexpensive Disks. IDE RAID is sneered at by the general hardware geek because it's crappy compared to SCSI. Well, of course it is, but you don't care, because it's RAID.

    Yes, yes, you can't hot-swap most IDE, and you're limited to the number of hosts per controller, but you don't much care 'bout that either because it's just so damn cheap. If you use multiple controllers you get around most of the limitations of IDE anyhow, so adding an extra controller for the additional disks is a no-brainer. As for hot-swap, you can just keep a hot-spare in the case. That way you can bring it on line in case of a failure and you don't go down unless two disks fail before you can schedule down-time to replace one. Linux RAID doesn't do this sort of hot-spare thing automatically that I know of, but it certainly is easy enough to hack together.

    As for refuting software RAID because hardware is faster... try using a NetApp, and then tell me software is slow. Yep NetApp (the high-high-end of network-attached disk arrays) is software raid.

    Speaking of which, when is someone going to write a one-step online backup system for ext2fs (or ext3fs) like the one on the NetApp? Basically it's a copy-on-write setup where the root inode is copied, and every write to a block causes it to be copied and every inode above it to be copied. Thus an online backup can be done in seconds, allowing applications like databases to be brought down for practically no time (0 time, if you're using Oracle).

    There are almost certainly patent problems here, but someone should at least research how close to this one could get before stepping on patented toes....
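The copy-on-write scheme described above can be sketched as a toy block device: a snapshot copies only the block map, and a write to a block that a snapshot still references allocates a fresh copy first. Names and structure here are illustrative, not NetApp's actual design:

```python
class CowVolume:
    """Toy copy-on-write volume: snapshot() copies only the block map;
    data blocks are copied lazily, on first write after a snapshot."""

    def __init__(self, nblocks):
        self.blocks = [b"\x00" * 4 for _ in range(nblocks)]  # tiny 4-byte blocks
        self.map = list(range(nblocks))   # logical block -> physical block
        self.snaps = []

    def snapshot(self):
        self.snaps.append(list(self.map))  # O(n) pointer copy, no data moved

    def write(self, logical, data):
        phys = self.map[logical]
        if any(snap[logical] == phys for snap in self.snaps):
            self.blocks.append(data)               # copy-on-write: new block
            self.map[logical] = len(self.blocks) - 1
        else:
            self.blocks[phys] = data               # unshared: write in place

    def read(self, logical, snap=None):
        table = self.map if snap is None else self.snaps[snap]
        return self.blocks[table[logical]]

v = CowVolume(2)
v.write(0, b"old!")
v.snapshot()                         # the "backup" completes instantly
v.write(0, b"new!")
assert v.read(0) == b"new!"          # live volume sees the new data
assert v.read(0, snap=0) == b"old!"  # snapshot still sees the old data
```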

  • Does anyone have any experience with netapps in real production environments? Do they generate a ton of network traffic? Mount options that make them perform better? Comments?
    I know that one of the largest sites on the net (in terms of hits) runs off of NetApp. I'm sure others do, but this one is run by a friend. NetApp is often faster than locally attached storage. Things to keep in mind:
    1. No one ever runs a NetApp off of a wire shared with other services
    2. You can go to gigabit speeds over fiber, though switched 100baseT is usually fine for file access
    3. Databases tend to run faster on a NetApp than local disks
    4. EMC is good, but if you've got, say, a farm of a half-dozen or more file or Web servers that all need to see the same filesystem, NetApp is the cleanest way to go
    I don't have any stake in NetApp (though I kick myself for not buying stock), I just love their product.
  • As for refuting software RAID because hardware is faster... try using a NetApp, and then tell me software is slow. Yep NetApp (the high-high-end of network-attached disk arrays) is software raid.
    Actually, the other shocking thing is that NetApps aren't RAID5, they're RAID4, which usually has a performance penalty, but they've gotten around that by using a sort of log-based RAID, which is a cool idea all on its own.
  • That seems unlikely to me. The loss of the tail happened around 30 million years ago, well before humans split off from the other primates around 5 million years ago.

    Our closest relatives, the apes like chimpanzees and gorillas are also tailless, but their brains are much smaller than ours, with chimps between 300 and 500 cc (average 400), and gorillas between 400 and 700 cc (average 500), while modern humans have a range of 1000 to 2000 (average 1400).

    I think a simpler explanation is that in moving away from a quadruped arrangement the apes did not find a use for their tails, unlike the monkeys, and like all unused organs, they became vestigial. If we'd kept our tails for balance, like the old world monkeys, or if they had become prehensile, like the new world monkeys, then we would still have tails.

  • Whiye knot holed the solder in yer mouthe? I no it's made uf lead butt eye doo it all het time und eye hav had know problums yet.

    Hehehehe...

    Seriously though, I can't even begin to imagine the long-term effects of half of the toxic substances I used to have to deal with at my job. Between the pot of boiling lead for tinning wires and the carcinogenic chemicals we used to wash the resin off the boards, I don't believe I want to know what my anticipated life expectancy is.

    Lucky for me, I am now dealing with components only after they've been assembled, the toxic residue of the computer industry being confined to poor people in countries far away from us.

    Note: The above is sarcasm.

  • Here are some interpretations:
    • And why not? I've always wanted a third hand -- wouldn't that be COOL?
    • What, you mean soldering doesn't excite you?
    • Yeah, a real hacker would figure out a way to hold the extra items without having to use his hands.

    Well, don't know if I'm a "real hacker"...I suspect I solder far too well to be a hacker. Most of the electronics hackers I've known had the soldering skills of a chimp on acid. I can say this because I used to build all the one-offs and production prototypes for a company. The hacked piles of wires and chips that I got to work from were rife with cold joints, globbed solder, etc.

    But you do eventually learn to hold more than two things with two hands... holding a coil of solder in the fingers of the same hand that holds the soldering iron, and holding the board and a pair of pliers with the other hand.

  • Actually, the article says the only difference is an SMD resistor and the BIOS. Sucks that they hid it UNDER the BIOS chip though.
  • I spent the last 6 months working on a FibreChannel RAID adapter. It could also be run in direct-connect mode. I can tell you that with any OS, the RAID on the controller could always beat the host-based software RAID. External RAID (RAID boxes of storage with no controller) is around the same speed, perhaps better under big loads. Software RAID really doesn't make sense. All the extra time spent in RAID calculations wastes host CPU which could be better spent running the database, web server, etc. Under heavy loads, software RAID starts to either affect the rest of the system or vice-versa. Offloading RAID processing onto a controller or external storage system that has dedicated processors is the best solution. Think about video cards... you could do the work on the host processor, but why? Host processors are for general, non-specific tasks. For specific tasks (only RAID, only video, only TCP/IP) that do not change much over time, offloading is a good idea.
  • Hey folks, the FastTrack66 is not a raid at all. It is a software raid card, but implemented in the ON BOARD BIOS.
    For the uninitiated, just because software is stored on a chip (in this case the card bios) rather than a disk, does not make it "hardware". This is commonly referred to as "firmware" but in reality is software that runs on the host CPU just like any piece of software.

    I'd be interested to see where you get this information from - as far as I can tell, the flashable bios on the card is run ONLY by the onboard controller; it is not downloaded to the CPU at all......
    --
  • Umm... it's an expansion PC ROM like any other.
    It doesn't need to be "downloaded" to the CPU. It simply exists in memory space.
    Yes, I know that - but as far as I know, the part of the flash upgrade that gives you Raid rather than standard SCSI is running on the onboard processor, not the host machine - after all, this is the whole point of having SCSI in the first place - to offload disk I/O from the host CPU to the Card.

    However, I was more interested in where you got your analysis from - do you have the source to the thing someplace on the web? I don't have one of these cards, so can't really check locally....
    --

  • No reason! Sheesh. How about the R&D effort expended on the RAID capability?
    Is expended already - yes, it is reasonable to price your product to cover research costs in x years, and still make a profit, but not quite so reasonable to try and make you pay a premium for features that should be standard. If they weren't making a profit *and* working towards breakeven at the $20 pricepoint, they wouldn't be selling at that price at all. So the price difference of $45 is pure profit - due to what amounts to a lie about how much improvement you are getting over the $20 card.

    Should Promise be punished for engineering the card so that it could be modified to be a simple ATA controller? Is their $65 price point exorbitant?
    No, it's a marketing tactic - the RAID controller market is profitable at $65, the standard ATA controller market wouldn't be, so they sell at $20 - this doesn't mean the $65 is an unreasonable price for that market, it means the market AS A WHOLE is artificially high-priced.
    --

  • To me, the more notable thing is that you can buy a RAID controller for $65... the ability to get it in a clandestine manner for $20 instead is not as interesting, IMHO.
    Not really - the point is that the Cheaper card IS the same as the more expensive one, with features disabled so they can charge a higher price for the one with raid. It's not the first time this has happened, and it won't be the last, but they have no reason not to be selling the raid-enabled card for the $20 and forgetting the poor-brother mode one......
    --
  • Was this page posted by a friend of Mahir, the "I kiss you" [xoom.com] guy? The English sounds familiar.
  • No, I've seen the same problem with SCSI and Linux's software RAID. It seems to be an intermittent conspiracy on behalf of all three layers: RAID, fs, and hardware. I used to use a striped RAID to spool high-bitrate video to. Some days, it would take it like a champ; others, under identical conditions, it would beat the hell out of the processor. I finally dumped the FS layer and wrote the stream raw. I could only keep one stream manageable without rechecking for end-of-stream, but I never had any load spikes. I finally switched from software/SCSI to one of the Promise IDE RAID controllers, and while the upper end I could push was lower, it was much more consistent.

    On a side note, I was contemplating purchasing a pair of this very RAID controller in the near future. The older Promise has served me well..

    I will be trying this. Thanks /. ! You saved me a couple of hundred.. Pick a bar and a time guys, the first few rounds of Bass are on me!
  • What, is today's generation wussified? When I was a kid we used to play with screwdrivers and B&W TV flyback transformers.. I vaguely remember poking an Ann Arbor terminal with a pen-knife once.. After waking up, I didn't have much of a Wenger or much desire to do it again..

    Wait.. I'm only 22, and I was a psychotic. The 112vac squirt gun proves that..Hmm.. The voltage must have fried a couple of braincells.
  • It all depends on the hardware/software in question. Having not played with software RAID recently, I defer judgement on speed. But hardware RAID has a few advantages over software.

    Most notably, you're offloading a processing chore from the CPU, which will make the main CPU more available. It also means that the RAID controller can do 'background' chores, like verifying the volumes when no accesses are pending, rebuilding a volume after a disk failure (assuming the RAID set has redundancy).

    But I wouldn't knock IDE RAID for hobbyists: IDE disks are cheaper, and there is a performance boost. And most implementations (I believe this is true for FasTrack66) fool the OS into thinking the RAID set is just a plain ol' IDE hard disk!

    And the main point: will software RAID be better than no RAID at all (for the very cost-conscious)? In a reasonable implementation, the answer should be yes.
  • Perhaps it warrants another look then. I didn't make note of what version of the raid patches I was using, I didn't have anything to compare it to as it was my first time. ;-) My controller is a Symbios Logic 53c875.
  • Not that this is germane to the ATA66 controller in question, but in my experience software RAID ranges from unusable to suicide-inducing. I laid out substantial funds to get myself twin 4.5gb Seagate Cheetahs, planning to stripe them. Well, it was so enormously slow that I couldn't stand to use it. When opening the gimp, for instance, both processors pegged to 100%. Is there some reason that software RAID shouldn't be used for SCSI? For the record, this was in kernel 2.2.14, and when I tried it in 2.3.3x (whatever was current at the time) I couldn't even get the drives mountable.
  • Can the raid controller also give the same functionality as the ATA66 because the components are so similar?
  • Is the BIOS limitation why the newer geForce cards won't work on older mobos (well, older than the Athlon boards out now) with the VIA chipset?

    Esperandi
    Just bought an Athlon, new mobo, and geForce card... made sure I got one with the AMD bridges and NO VIA!
  • Read upwards... one of the posts moderated up above this tells of a much better way of doing this that does not involve removing the BIOS *at*all* and only involves flashing the BIOS and adding a 10ohm resistor...

    Esperandi
  • Here's what I got when I did a Babelfish Eng->Ger, Ger->Eng to it:

    Just as by it you fast have may maintain possibility which its wars of the computer can do STILL.

    Not much better =)

    Pablo Nevares, "the freshmaker".
  • So you're going to spend 2+ hours trying to rig up this *IDE* RAID controller to save $40? Sounds like a story for www.overclockerlamerz.com, not slashdot.
  • ...from the article:

    "Likewise by it like fast one has itself to amuse possibility its computer STILL wars can."

    I think that's impressive!
  • I plan on getting one of these ultra66 cards to try this out, but not for speed. I want to setup a striped mirror set of 4 20 gig IBM drives (40 gigs total) for fault tolerance.

    Go for a RAID 5 config-- you get 60 gigs practical storage, and get the same fault tolerance -- can rebuild a dead disk on the fly.
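    The "rebuild a dead disk on the fly" bit works because RAID 5 stores parity that is just the byte-wise XOR of the data blocks in each stripe, so any one missing block can be recomputed from the survivors. A toy sketch in Python (purely illustrative; nothing like the card's actual firmware):

    ```python
    # Toy RAID-5 parity: the parity block is the byte-wise XOR of the
    # data blocks, so any single lost block is recoverable from the rest.
    def xor_blocks(blocks):
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                out[i] ^= byte
        return bytes(out)

    d0, d1, d2 = b"disk0data", b"disk1data", b"disk2data"
    parity = xor_blocks([d0, d1, d2])

    # "Disk 1" dies: rebuild its block from the remaining disks plus parity.
    rebuilt = xor_blocks([d0, d2, parity])
    assert rebuilt == d1
    ```

    The same XOR trick reconstructs any one of the four disks in the setup above, at the cost of one disk's worth of capacity for parity.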

  • So, theoretically: I could buy the FastTrack 66 for $65, flash the BIOS with the Ultra66 Bios, resolder R9 onto R10 and have a $20 Ultra66 controller. :)

    But seriously, I think it sucks that a company can sell two identical products, with the only difference being 1 resistor is moved and a different BIOS, and more than triple the price for one of them.

    kwsNI

  • and IBM invented it. IBM had a printer device in the 1950's that was rated for a certain number of lines per minute and leased (no, you're not allowed to buy computers) for a certain amount per month. This model was "field upgradable" to another model which printed twice as fast, but leased for twice as much. To do a "field upgrade" a tech would go to the site and replace two parts. One was the label with the model number and the other was a pulley half the diameter of the one it replaced. (My source? Some huge book about IBM and the Watsons.)

    Lexmark (the descendant of the IBM printer division) pulled the same stunt later on when it had two models of laser printer that differed only in the value in a certain location in the ROM.

    Anomalous: inconsistent with or deviating from what is usual, normal, or expected
  • by Anonymous Coward on Thursday March 02, 2000 @03:28PM (#1229672)
    Doing things for absolutely no useful reason is what these little machines we all love are all about. Never forget that.

    If you've ever found yourself wanting a third hand while soldering then you're probably not a hacker.
  • by CaseyB ( 1105 ) on Thursday March 02, 2000 @08:19PM (#1229673)
    It's not the first time this has happened, and it won't be the last, but they have no reason not to be selling the raid-enabled card for the $20 and forgetting the poor-brother mode one......

    No reason! Sheesh. How about the R&D effort expended on the RAID capability? Should Promise be punished for engineering the card so that it could be modified to be a simple ATA controller? Is their $65 price point exhorbitant?

    The cost of the parts for ANY piece of computer hardware is next to nil. The only thing that makes any hardware expensive is the R&D effort expended designing it. It doesn't strike me as unreasonable for them to charge extra for the ability to take advantage of the RAID capability.

  • by Oestergaard ( 3005 ) on Thursday March 02, 2000 @09:53PM (#1229674) Homepage
    Old habits die hard I guess. I stopped counting the number of times people in disbelief have seen Software RAID wipe the floor with their HW solution, performance wise. Drop by the linux-raid list (archives eventually) and see for yourself some day.

    Even if we agree that it's not for production (I don't agree, but let's assume so for a second) you still didn't want to use your hand-patched ATA/66 card for production either now, did you ?

    If you want to swap systems, SW RAID is just as fine. Swap the disks and the other system will boot on them as well. Whether they're attached to a SCSI controller with RAID capability or not makes no difference. The other system will see the volumes too; the configuration doesn't change magically when moved from one system to the other...

    IDE lacks hotswap capability, that's why it's often considered a bad idea. But compare it to a production server _without_ RAID, and suddenly it's one hell of a lot better. You can take the machine down some time convenient, and you won't be reinstalling and restoring backups all night.
  • by Otto ( 17870 ) on Friday March 03, 2000 @06:35AM (#1229675) Homepage Journal
    That link is bad too.. Doesn't work for me anyway.

    After a little searching, here's the post that you mentioned, and YES this way is a lot easier.

    ---Begin Crosspost---
    Ok, I know this sounds crazy, it is.
    This is how you do it...(see link)
    http://www.geocities.com/promise_raid/

    I know this is in Danish and most of you won't understand any of it.

    Look at the pictures.

    I'll translate for you guys, because I like you (LOL!)

    Goals:

    1: Update the card's BIOS
    2: Solder a 100 ohm resistor from pin 23 to ground, OR from pin 23 to 16.
    3: Enjoy your new el-cheapo RAID system

    IT IS VERY IMPORTANT THAT YOU UPDATE YOUR BIOS BEFORE YOU SOLDER!!!

    Things required:

    A Promise UDMA66 controller

    A 100 ohm resistor

    A soldering iron, soldering tin, and a screwdriver.

    Detailed walk-through:

    Buy a Promise UDMA66 controller

    Buy a 100 ohm resistor (color code: brown-black-brown)

    Check if the card works as an UDMA66 controller

    Format a 3 1/2 inch 1.44MB diskette, make it bootable (copy system files when formatting under Windows)

    On this diskette you place the BIOS update program and the new BIOS, find the bios on:
    www.promise.com
    (it is the one for the RAID device you have to download)

    Boot on this diskette

    Start the BIOS program:
    A:\ptiflash.exe

    FIRST TAKE A BACKUP OF YOUR CURRENT BIOS!!!

    Choose option no.1 and choose where you want to save your current BIOS.

    Flash your BIOS with the one you just downloaded, do this by selecting option no.2 and write the name of your new downloaded BIOS (normally A:\ft66b108.bin)

    Restart your computer

    When you restart, you will get an error when your computer begins to initialize the IDE-66 controller's BIOS.

    Shut your computer down

    Pull out the Controller card

    Unscrew the metal plate from the controller. (this makes it easier to handle)

    Solder the resistor on pin no. 23 (see the picture on the website I linked to, you will see it clearly)

    BE VERY CAREFUL WHEN YOU SOLDER!! The BIOS is very sensitive to heat, so if your card has an IC socket they recommend removing the chip first.

    Now you can put back the metal plate, put the card back and power your computer on.

    Hopefully it will work, and by pressing Ctrl-F you can go into a program where you can easily select which RAID mode you want to run.

    Link to bios flash program and BIOS update
    http://www.geocities.com/promise_raid/FT66b108.ZIP

    NOTE!!! I cannot be held responsible for any damage to or failure of your system, the card itself, or any living person walking around you; you are on your own!

    ------------------
    Uffe Merrild
    ------------------------
    editor at Hiphardware.com

    ---End Crosspost---
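    Condensed, the flash half of that walkthrough is just a short session from the boot floppy (ft66b108.bin is the filename from the crosspost; the backup filename is whatever you choose):

    ```
    A:\> ptiflash.exe
         Option 1 - save current BIOS  (to a backup file of your choosing - do this first!)
         Option 2 - flash new BIOS     (A:\ft66b108.bin)
    ```

    Then reboot; an error while the card's BIOS initializes is expected until the resistor is soldered on.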

    ---
  • by Haven ( 34895 ) on Thursday March 02, 2000 @03:40PM (#1229676) Homepage Journal
    It matters which RAID level you want to run. I run a RAID 5 system on my server at work, and I view it as only serving a purpose (fault tolerance), not gaining speed. Now if you had multiple hard drives that were exactly the same size, and a RAID controller with 32MB of cache, you would get a performance increase. If you are serious about performance, get a real nice controller. If you want to hack some hardware, try this out.
  • by British ( 51765 ) <british1500@gmail.com> on Thursday March 02, 2000 @03:36PM (#1229677) Homepage Journal
    Ahh, love these little hacks.

    Friend of mine showed me the difference between some low-end radar detector and the higher-up model by adding in a 10 cent light.

    I remember a friend of mine modified my caller ID unit to hold twice as many numbers, with a method similar (no BIOS flashing, though) to the RAID controller hack.
  • by kevinsl ( 64538 ) on Thursday March 02, 2000 @03:44PM (#1229678) Journal
    I got a FastTrak66 for $59, thanks to Fry's for putting the wrong price sticker on the box!

    Unfortunately there is no Linux driver for the FT66 (and Promise will not give a delivery date), so I downgraded the card to an Ultra 66. Just move a resistor to a different jumper and flash the BIOS. Now I'm doing poor man's disk mirroring with rdist. Wish I could have hardware RAID though...
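    The poor-man's mirroring he describes might look something like this rdist Distfile, pushed from cron each night (the hostname and paths here are made up for illustration):

    ```
    # Distfile - nightly push of /home and /etc to a second machine.
    # Run as: rdist -f Distfile  ("backupbox" is a hypothetical host)
    MIRROR = ( backupbox )
    FILES = ( /home /etc )

    ${FILES} -> ${MIRROR}
            install -R ;
    ```

    The -R option removes files on the mirror that no longer exist on the source, keeping the copies in sync. Not RAID, but it survives a single-disk failure with at most a day of lost changes.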
  • by wowbagger ( 69688 ) on Friday March 03, 2000 @04:19AM (#1229679) Homepage Journal
    It seems to me that if all it takes to make this card a RAID controller is changing the BIOS and moving a pullup resistor, then it is very likely the card is doing software RAID anyway. To properly do hardware RAID, the card must have its own processor, IDE interfaces, and an interface to the main CPU. While it is not impossible that Promise has condensed all this into one chip, I find it unlikely they have.

    Rather, I suspect that they are just replacing the DOS INT 13H (hdd control) interrupt handler with their own at system startup, and are having the main CPU do the RAID work in software. I would further guess that the Windows drivers for the card then do the same work in protected mode in the driver. If I am correct, then there is no advantage to using this card in its own native RAID mode vs. using software RAID.

    The only advantage HW RAID gives you is that the main processor is freed up to do other tasks. In the case of a file server, there are no other tasks, and therefore hardware RAID buys you very little. As others have said, SW RAID allows you to:

    • use system memory for buffers. Your system almost certainly has more RAM than the card;

    • better detect, log, and correct disk errors. HW RAID tends to hide this sort of thing from the system;

    • use drives on different controllers. This allows you to spread your RAID array across controllers, so that a controller failure will not take the array out.

    Now, I could be dead wrong on the Promise not having its own CPU. If anybody out there can correct me by telling me the specs of the CPU (RAM size, type, operating speed, etc.) then I will be grateful. However, this smells to me like a WinModem, WinPrinter, software "wavetable" sound card, etc.: "We just won't tell the user his CPU is being used to do the work, and he'll be fat, dumb and happy...."

  • No formal IDE hotswap outside of 'IDE' flashcards.. I've tried it (cloning solution), the only safe way involved a pair of fast-latch 20 pole switches and a powersupply modified for variable voltage..

    I burned out eighteen 120meg Connor's that weekend..
  • by apocalypse_now ( 82372 ) on Thursday March 02, 2000 @03:02PM (#1229681) Homepage
    I am going to go out and buy all the parts and give it a whirl. At the worst, I'm out a few bucks. At the best, I've found a way to save a pretty chunk of change for my boss...

    So how long do you think it will be until they change the manufacturing process to break this?
    --
    Matt Singerman
  • by criticalrealist ( 111008 ) on Thursday March 02, 2000 @05:20PM (#1229682) Homepage
    You want to void your warranty and render tech support useless so you can save $40-100? How valuable is your data? Hopefully more than that.
  • Okay, a few reasons software RAID and IDE RAID are a bad idea:
    1) Software is slower and not as reliable.
    2) Hardware RAID usually has excellent caching techniques.
    3) Ever seen a hot-swap IDE disk? I haven't.
    4) No serious sysadmin is going to use a software solution. It would almost certainly be of lesser quality than hardware.
    5) If you're going to RAID 5, you need at least three disks. Bit of a worry when you have only two disks per IDE chain.
    And just for emphasis - what is the point of RAID if you can't hotswap? If there is a hotswap IDE, then I apologise in advance for my knowledge gap, but SCSI RAID more often than not allows hotswap.
  • by Ryan C. ( 159039 ) on Thursday March 02, 2000 @05:45PM (#1229684)

    It loads the program out of a Flash ROM instead of a hard disk, and the routine is called from an interrupt hook instead of an OS kernel function, but those are the only fundamental differences.

    A "Hardware" RAID would use its own processor. There is none on the FastTrak.

    What FastTrak gives you is software RAID 0/1 for OS's that don't offer it. If you run Linux or NT, you're just as well off with the Ultra 66 controller and OS RAID functions.

    Any differences in performance or reliability would be from the merits of the respective programs, not a hardware/software difference.

    I have a Promise FastTrak myself, and I use it for my gaming system (Win 98), but in Win2K/Linux I get the same CPU utilization and transfer rates using two single channels and the OS raid 0.

    -Ryan
  • by Anonymous Coward on Thursday March 02, 2000 @03:31PM (#1229685)
    Mate, you screwed up again.

    This is the URL...

    http://www.storagereview.com/welcome.pl/http://www.storagereview.com/ubb/Forum1/HTML/002964.html [storagereview.com]

    Odd URL huh?

  • The RAID techniques used by these cards are slightly different than those used by Linux. Unfortunately, this means that the RAID cards won't work with Linux unless it has specific driver support for the RAID features.

    I have a Mylex FlashPoint SCSI card that had similar software RAID features. I wound up flashing the BIOS DOWN to a non-RAID BIOS because there was no support for the RAID features of the card under Linux and my mobo doesn't get along very well with cards that have 64K of onboard BIOS. (Apparently one of the worst bugs in VIA chipsets...)

    So if you're a Linux user, don't get your hopes up as to being successful with this.
  • by PD ( 9577 ) <slashdotlinux@pdrap.org> on Thursday March 02, 2000 @05:38PM (#1229688) Homepage Journal
    But you do eventually learn to hold more than two things with two hands...holding a coil of solder in the fingers of the same hand that holds the soldering iron, holding the board and a pair of plyers with the other hand.

    Whiye knot holed the solder in yer mouthe? I no it's made uf lead butt eye doo it all het time und eye hav had know problums yet.

  • by garver ( 30881 ) on Thursday March 02, 2000 @05:42PM (#1229689)

    I have gone to the dark side and started running the 0.90 software RAID on *gasp* IDE drives in *double gasp* production servers. I don't see myself going back soon.

    If you have an unlimited budget, then hardware RAID with SCSI disks is great. I might still argue with you about whether hardware or software RAID is faster. But if you live in the rest of the world, where money matters, you can't beat IDE drives for price/performance, especially with the 7200s with 2MB caches available now. Going IDE means I can have a spare in the box and possibly one on the shelf. In short, my boxes are more reliable and just as fast for the same money.

    The only downside I can note with IDE is that I have to turn the box off to replace a drive. Get some $15 shuttles and the box is down for all of 3 minutes. These Promise controllers allow Hot-Swap IDE RAID-1, I believe.

    The overhead is pretty minimal. I do RAID-5, and even with the extra CPU needed for IDE controllers, I still don't see much CPU usage (sorry, I don't have hard numbers... can't find my Bonnies). Actually, on the ATA33 controllers that I'm using, it seemed the bottleneck was the controller bandwidth. On a 3-way RAID-5, I always pulled roughly 25MB/sec, regardless of CPU, block sizes, etc. After thinking about it, it made sense; with RAID-5 reads, I'm reading from 2 drives at a time, and ATA33 can sustain only 16.6/bus. After OS overhead, seeks, etc., 12.5/bus ain't bad.

    Linux does have pretty good HW RAID support. Mylex, DPT, and ICP-Vortex come to mind. All well supported. And you can always go with an external RAID chassis solution, where the external box does the RAID and just connect a SCSI channel to it. Since it looks like any other SCSI disk, it is OS independent. This is perhaps the simplest approach, but can also be expensive.

    Enough rambling... off to some starcraft.
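    The bandwidth reasoning above can be sanity-checked with simple arithmetic (the figures are the poster's own nominal ATA/33 numbers, not fresh measurements):

    ```python
    # Back-of-envelope check of the ATA/33 bottleneck argument above.
    # Assumption (from the comment): a shared ATA/33 channel sustains
    # roughly half its nominal 33 MB/s per drive in practice.
    nominal_channel = 33.3                  # MB/s, nominal ATA/33 rate
    usable_per_bus = nominal_channel / 2    # ~16.6 MB/s sustained

    # A 3-way RAID-5 read streams from two drives concurrently:
    ceiling = 2 * usable_per_bus            # ~33 MB/s theoretical ceiling
    observed = 25.0                         # MB/s the poster measured
    overhead = 1 - observed / ceiling       # lost to OS overhead, seeks, etc.
    print(round(ceiling, 1), f"{overhead:.0%}")
    ```

    So the observed 25MB/sec is about 75% of the two-drive ceiling, which is consistent with the controller, not the CPU, being the limit.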

  • by Delboy ( 96501 ) on Thursday March 02, 2000 @04:39PM (#1229690) Homepage
    Heres a snippet from the news section of http://www.hardocp.com [hardocp.com]

    Just got ours, Ultra66. Flash the bios. 100 Ohm resistor from pin 16 - 23 (Don't pull out the bios - just solder. over the top or underneath.) Reboot and sweet.

    Jim.

    Don't know if it'll work, but sure as anything it'll be a lot easier for newbies:

    Delboy

  • by dogma256 ( 150382 ) on Thursday March 02, 2000 @05:42PM (#1229691)
    When you remove the BIOS from the card, go down to RadioShack and buy a socket to put it in. Then solder that to the board and put the chip in the socket. It will save you a great deal of trouble if you mess up or need to go back to Ultra66 mode.
  • by Oestergaard ( 3005 ) on Thursday March 02, 2000 @03:53PM (#1229692) Homepage
    Go with Linux Software RAID instead, and save even more money. The 0.90 code (which works very well) is available as patches to the 2.2 series, and is currently being integrated into the 2.3 series.

    This will support RAID-linear, -0, -1, -4 and -5. It will work with your ATA cards as well as with your SCSI ones. The IDE layer in Linux is stable enough to survive any disk failure I've ever seen, so stability is as good as it gets.

    Besides, Software RAID solutions are usually somewhere between faster and _much_ faster than HW ones. Back in the old days it was a gain to do RAID management in software on an auxiliary processor (``hardware'' RAID), but these days your average 400MHz PII won't even notice the extra workload (it's negligible compared to running ``top'' etc.).

    Check out the HOWTO at http://ostenfeld.dk/~jakob/Software-RAID.HOWTO. It's also in the process of getting into the LDP, so we'll be nicely set up for when 2.4 hits the street.
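    For the curious, setting this up with the 0.90 raidtools is mostly a matter of writing an /etc/raidtab. A three-disk RAID-5 sketch (the device names and chunk size here are just examples; spread the disks across controllers to taste):

    ```
    raiddev /dev/md0
            raid-level              5
            nr-raid-disks           3
            persistent-superblock   1
            chunk-size              32

            device                  /dev/hda1
            raid-disk               0
            device                  /dev/hdc1
            raid-disk               1
            device                  /dev/hde1
            raid-disk               2
    ```

    Then `mkraid /dev/md0`, make a filesystem on it, and mount it like any other disk.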
  • by k8to ( 9046 ) on Thursday March 02, 2000 @08:41PM (#1229693) Homepage

    Hey folks, the FastTrack66 is not a raid at all. It is a software raid card, but implemented in the ON BOARD BIOS.

    For the uninitiated, just because software is stored on a chip (in this case the card bios) rather than a disk, does not make it "hardware". This is commonly referred to as "firmware" but in reality is software that runs on the host CPU just like any piece of software.

    The only difference is of course the BIOS calls you use to access the disk are able to understand the striping used on your disk. There are basically two advantages to this.

    1. On crappy operating systems like DOS, where the BIOS is used to access the disk, you get a free software raid without having to load wacky TSRs. (Remember DOS has no such thing as a reasonable driver).
    2. _IF_ you can get this properly supported under linux, you can have one set of disks going in software raid across multiple operating systems.

    Thus, as I said previously, it's not a RAID card at all. It's got pretty much no functionality for doing RAID at all. Given the fact that it's advertised as a hardware RAID, I'd just as soon not purchase any products from Promise at all, until they learn to quit with the false advertising.

  • by KyleCordes ( 10679 ) on Thursday March 02, 2000 @03:13PM (#1229694) Homepage
    To me, the more notable thing is that you can buy a RAID controller for $65... the ability to get it in a clandestine manner for $20 instead is not as interesting, IMHO.

  • by poopie ( 35416 ) on Thursday March 02, 2000 @05:10PM (#1229695) Journal
    You're totally wrong... and the moderators who moderated you up are wrong too. Please moderate this down!

    I would have, but then I couldn't post.

    Hardware RAID is always going to be better than host-based (software) RAID.

    software raid may be neat to play with on your PC, but if you were planning a PRODUCTION server to run your business off of, you'd want a real hardware RAID box.

    Also, you can dual attach a hardware RAID box, you can swap the server out from under your hardware raid box and still see the volumes.

    IDE RAID is a bad idea for a number of reasons that I'm not qualified to go into, but I've heard the arguments. Can a real RAID guru post them?
  • by DingALing ( 42557 ) on Thursday March 02, 2000 @03:25PM (#1229696)
    And this [storagereview.com] is it. You don't have to mess around with SMD components or remove the BIOS chip.


    Sorry for the dbl post, but I fscked up the last URL.
