Hardware

Western Digital Pulling Out Of SCSI HD Business

leiz writes "This article on Yahoo says Western Digital is pulling out of the enterprise hard drive business. This means they will no longer produce SCSI hard drives and Western Digital will be instead concentrating on the IDE and software business. What does this mean for the SCSI market? With 7200 rpm UltraATA/66 hard drives catching up in performance to SCSI HDs, products such as the FastTrak RAID 0, 1, 0+1 card, and the sheer cost-effectiveness of IDE/ATA, is SCSI no longer necessary for desktops / workstations / small servers?"
This discussion has been archived. No new comments can be posted.

  • will be instead concentrating on the IDE and software business

    I didn't realize they had any software.
  • They made rather failure prone drives IMHO. So does Maxtor. On the other hand, nothing beats an IBM drive.
  • by jawad ( 15611 )
    I've been considering SCSI hard drives (mostly because SCSI controller cards can support up to 16 devices) but have been put off by the prices. The increased performance is a plus, but at these prices? Where does that leave me, with 4 IDE devices and no room for expandability (well, none that I know of)?

    Re: WD leaving ... I was looking at IBM anyway.
  • SCSI is better/faster than IDE, but it's *way* too expensive. If there were a 10-15% price difference, it would be worth the extra performance. But unfortunately, the price difference is much bigger (plus you have to spring for a SCSI card), and it isn't worth it.

    For a server or high-performance workstation where you need to get every ounce of performance out of it, go SCSI. Anything else, go IDE.
  • being the purist that i am, scsi still holds more appeal for me.

    putting scanners on a parallel port? bah!
    putting CDRs on parallel? bah!

    i like having one damn bus for everything. a nice clean interface. no stupid driver problems.

    scsi makes sense.

    and i'm pronouncing that "sexy."
  • You can add an additional IDE controller card if you want more than 4 IDE devices. You're not limited to the two IDE controllers (with 2 devices each) that come on most motherboards.
  • of course, WD's IDE drives use IBM's technology...
  • by Anonymous Coward on Wednesday January 19, 2000 @07:58PM (#1356082)
    The current implementation of SCSI is 160 MB/sec. SCSI is a multitasking I/O subsystem (simultaneous read/write); IDE is not. SCSI typically impedes CPU performance by 3%, while IDE typically impedes CPU performance by ~25%. For starters.
  • For small office servers, IDE is just fine. In fact, I tried SCSI, and the box just crashed all the time (ok, like once every 20 days or so, but that's still bad). But for a serious server, like a web/mail/news/MAJOR file server, I would prefer to use SCSI, mainly because of all the neat things I could do with it, like 7 drives on one controller, or if it's sub-addressed, some huge number (too tired, and too happy to see more snow, to think of it) of drives.
    -LW
  • On a Celery 400, copying to or from an OLD SCSI drive used less than 1/4 of the CPU that copying to or from a modern 8.4G (Quantum Fireball) IDE drive did.

    Now, if only they didn't cost nearly double!
  • by Anonymous Coward on Wednesday January 19, 2000 @08:00PM (#1356086)
    Come see Mr T teach CmdrTaco a lesson!

    I pity the fool who don't like mr T!

    Mr T vs Slashdot [tripod.com]

    Go Read it suckas!
  • by jkujawa ( 56195 ) on Wednesday January 19, 2000 @08:00PM (#1356087) Homepage
    Christ, I'm getting so incredibly sick of EIDE. First: A fast 7200 RPM drive will deliver no more than about 8-10 MB/s. Because of the brain-damaged nature of EIDE allowing only one device to talk at a time, anything beyond EIDE/16 has been useless dickwaving. Second: It's a creeping evil. Plextor has recently released an EIDE CD-R. My local Microcenter has completely stopped selling SCSI drives. They only stock Maxtor drives, as well. SCSI is no more expensive to produce than EIDE. IBM is at least good enough not to shaft people too much for buying SCSI, and their UltraStar drives are the finest hard drives that can be had for love or money. But people will continue to buy crap, driving quality out of the market. Before too long, you won't be able to buy quality at all, or at least not at anything approaching a reasonable price.
  • by Anonymous Coward
    I don't know about insightful -- but it's definitely the truth. For pure performance under any OS, SCSI is the way to go. IDE is great for the cheap PC crowd or the casual Word Processor user, but for the rest of us that are into performance and top-of-the-line gear, SCSI is always two generations ahead of IDE.

    And FYI, UDMA/66 doesn't mean those drives are 66MB per second. It means that two UDMA/66 drives share 66MB/sec of throughput. That's 33MB/sec each. You still have to deal with IDE's command queue and latency problems. You just have a wider channel. UDMA/66 was created because IDE drives are getting close to maxing out the UDMA/33 interface, speed-wise. Two 7200rpm IDE drives could theoretically hit a bottleneck when used simultaneously with only 33MB/sec of channel throughput.

    Ultra/160 SCSI will fix all you IDE kiddies. :)
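The shared-channel arithmetic above, as a trivial sketch (66 and 33 MB/sec are the interface's burst ratings, not sustained drive throughput):

```python
# UDMA/66 rates the channel, not each drive: devices on one cable share it.
def per_drive_bandwidth(channel_mb_s: float, drives: int) -> float:
    """Burst bandwidth available to each drive when all transfer at once."""
    return channel_mb_s / drives

print(per_drive_bandwidth(66, 2))  # 33.0 MB/sec per drive on a two-drive channel
```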
  • You can get a 36 GB IDE drive from most drive companies now and rumor has it IBM has 50 and 70 GB IDE drives now (I don't know if that's really true, I'll have to check it out.) How much hard disk storage do you actually need? Seems to me you could slap 2 or 3 50+ GB drives in your system and be set for the foreseeable future (The next 5 to 10 years) unless you're storing a hell of a lot of live video goat porn or something.
  • Think on this:

    1) Anything better than ultrawide SCSI is pointless for average users. Yes, you can crank out more performance with seven striped drives, but who really has the money.

    2) Decent ultrawide controllers are relatively cheap (look at NCR/Symbios based cards.)

    3) Smaller SCSI disks are getting very affordable; a small 10GB drive is down around $200.

    4) SCSI has lower CPU overhead and doesn't make your system slug along as the kernel babysits the disk transfers.

    5) SCSI disks are usually made with higher MTBFs in mind.

    6) Where do you need the performance? Swap, binaries, config files, libraries. Mostly swap.

    7) Do you really need massive performance on the disks that hold your MP3 collection? Didn't think so.

    8) IDE disks are shit cheap.

    The moral? SCSI is slick. Don't spend too much on a controller or disks. Put your root on that disk. Then use IDE for all non-critical data.
  • For what I can get 18 GB of SCSI and a decent SCSI card for, I can buy _4_ 36 GB IDE drives AND a system to dedicate as a file server. Even if setting my system up as SCSI were to double my performance, I'd have a hard time justifying the extra expense.
  • by Leomania ( 137289 ) on Wednesday January 19, 2000 @08:13PM (#1356103) Homepage
    I spent seven years at Western Digital, and I watched a chip company with an amazing amount of IP (basically everything except processor and memory) scuttle the chip business on the altar of the almighty hard drive. WD has seeded many successful companies in SoCal (Broadcom, QLogic, Emulex, Silicon Systems, Adaptec, JNI, more...) by letting go its talented engineers when management had failed yet again to follow in a market where they should have been leading. I'm sorry to have seen it happen yet again. Sounds like sour grapes, but it's really not; I'm just not surprised at this latest news. It's just another round of layoffs at WD, which isn't really anything new there. To any of the old guard still there -- you're a hardened lot, and I wish you luck in yet ANOTHER new direction set by the company. As for me, let's just say that in retrospect, WD was a great place to be from.
  • First off, I'd suggest some research on your part. SCSI has many advantages over IDE, EIDE and any implementation of UDMA/** with the IDE form factor, depending on what you're trying to use it for. Will you see a massive advantage on a small home user system? Not really, unless maybe you're a Quake3, Unreal, or other massive game player.

    However, try running a server of any kind with roughly 200 users pulling about 5 apps apiece using IDE/UDMA drives. What you will get is really bad load times. SCSI has the ability to read/write at the same time, not to mention it can read/write to all the drives on the chain at the same time, vs. IDE's ability to read or write on one drive per bus. That's why you get better performance from having your HD and CD-ROM on separate IDE buses.

    However, I think that WD is possibly pulling out of the SCSI HD market for reasons like high pricing causing low volume sales, the explosion of home networking, or because they are just tired of making SCSI drives for other vendors. It's not often you find a WD-labeled HD. But the recent explosion in home networks is what I'd bet some of my money on. Now with an average of 2-3 PCs per house, that's 2-3 times the number of HDs being used, and as we all know, most home systems contain IDE HDs.

    Each of the two formats, IDE and SCSI, has its advantages and disadvantages. It just all depends on what you want to use it for, and how much you have in your wallet. Still, though, it's sad to see a company give up on a product after so long.

  • by pete-classic ( 75983 ) <hutnick@gmail.com> on Wednesday January 19, 2000 @08:20PM (#1356115) Homepage Journal
    High end systems will be using SCSI for a bit.

    1. ATA-66 can't touch Ultra160 SCSI, even single drive to single drive, with sequential access.

    2. ATA performs very poorly when multiple reads and writes queue up.

    3. SCSI handles large numbers (>4) of drives much better; ATA sees problems with just 2 drives per channel.

    Maybe ATA-262 will have a shot.


    -Peter
  • by BJH ( 11355 ) on Wednesday January 19, 2000 @08:21PM (#1356117)

    Well, I'm a SCSI-only kinda guy myself, but there's a couple of points you glossed over...

    4) SCSI has lower CPU overhead and doesn't make your system slug along as the kernel babysits the disk transfers.

    Almost every bit of ATAPI kit out there these days uses DMA, which has made a big difference. It's not like the old days when you could see your CPU usage peak during a long copy operation. That said, SCSI still handles multiple requests better.


    5) SCSI disks are usually made with higher MTBFs in mind.

    True at one time, but many of the SCSI drives out there now are almost identical to their ATAPI counterparts, except for the interface.

  • How much load does USB put on the CPU compared to SCSI? Will it bog down when you start burning CDs, making coasters? Has anyone actually succeeded in chaining 127 USB devices? I think the record is 112, and that took a lot of voodoo dancing. Where are you going to put these 127 devices?

    I'm still gonna go SCSI on my next box. Configuration? You do it once, what's the big deal? 127 devices? I won't have more than 15, you can hang 15 off just about any SCSI card these days. And it'll leave my CPU to do more important things.

  • by Anonymous Coward on Wednesday January 19, 2000 @08:21PM (#1356120)
    WD always made bad SCSI drives, anyway. Their old ones were slow, noisy, and had heat problems. Their new ones were always considerably slower than the competition's. WD was known for their IDE drives, but that reputation isn't too strong anymore.

    I've got 4 SCSI drives: a SCSI-2 Seagate Barracuda, a UW SCSI IBM (9zx), and a Yamaha CDRW. My brother has two 4GB Seagates and two Yamahas, and my mother has a SCSI caddy CD-ROM. And of course there's lots of IDE lying around (HDDs, CD-ROMs, a CDR, a DVD), plus a few MFMs and a proprietary 1x Sony CD-ROM.

    Problems with SCSI:
    1. Barracuda came defective. Same with Yamaha. Both from a really bad reseller who gave me a bad controller (defective), and claimed Seagate's tech. was lying and the drives really were out of production. Took 3 months to clear up.
    2. The IBM overheated, and eventually died months afterwards. IBM replaced it within a week, and the data was recoverable. The drive was a 1st generation 10k rpm, and a pre-release w/ updated ROM.
    3. For some reason my brother's Yamaha CDR102 won't write onto newer CD media. Seems to be a CD-design change, as old media works fine.
    4. Always requiring innovative tweaks to keep cool. SCSI caddy cdrom (6Plex) overheated and was replaced for free, years ago.
    5. Pain to get UNIXes to install with the Yamaha. They'll boot off cd, and then say there's no CDROM to install from.

    On non-SCSI problems:
    1. Conner 1.4GB drive had bad blocks; years later it stopped functioning.
    2. 8x cdrom 'kinda' works.
    3. 10/12x cdrom sticks.

    So, there are more difficulties on SCSI, but cabling and installation are easier. Cooling is just a pain. Performance, though, is high. My Barracuda 2LP still outperforms IDEs in CPU/speed. The IBM is so fast I feel bad having it... Still, it's amazing seeing 10% CPU used, max, when 100% is used on IDE, like the few DMA/33s I have.

    SCSI for home = waste
    SCSI for workstations = ok-good (mostly useful if the CAD is CPU oriented)
    SCSI for servers = good-great (depends on what the server is doing, load, etc).

    Just an anon that's really bored....
  • by Anonymous Coward on Wednesday January 19, 2000 @08:23PM (#1356122)

    I've found the IDE performance problems go away entirely with the correct bus-mastering settings. You still can't get decent performance with two drives on a single channel, but my new Abit motherboard came with four IDE channels. So you can have four IDE hard drives without losing performance.

    Here are some real benchmarks to back this up:

    [root@olympus /root]# /sbin/hdparm -c0 -d0 -k1 -m0 -W0 /dev/hda
    <snip>
    [root@olympus /root]# /sbin/hdparm -t /dev/hda

    /dev/hda:
    Timing buffered disk reads: 64 MB in 28.40 seconds = 2.25 MB/sec
    [root@olympus /root]# /sbin/hdparm -c1 -d1 -k1 -m16 -W1 /dev/hda
    <snip>
    [root@olympus /root]# /sbin/hdparm -t /dev/hda

    /dev/hda:
    Timing buffered disk reads: 64 MB in 4.86 seconds = 13.17 MB/sec

    That's with a 7200 RPM Seagate drive. The first benchmark, giving a whopping 2.25MB/sec, was with all the IDE options in sucky mode. This is the way older IDE controllers work, and in large part responsible for IDE's bad name. The second benchmark shows that it can have good performance. Its CPU usage wasn't as good as SCSI's (17% out of 200%; dual-processor box) but wasn't as bad as many have said.
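For anyone checking the arithmetic in the benchmark above: hdparm's figure is just data moved divided by elapsed time (a trivial sketch; the 64 MB and timings are the numbers quoted in the post):

```python
def throughput_mb_per_s(megabytes: float, seconds: float) -> float:
    """MB/sec as hdparm -t reports it: data read divided by elapsed time."""
    return megabytes / seconds

# The two runs quoted above: tuning off, then tuning on.
print(round(throughput_mb_per_s(64, 28.40), 2))  # 2.25
print(round(throughput_mb_per_s(64, 4.86), 2))   # 13.17
```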

  • by Ledge Kindred ( 82988 ) on Wednesday January 19, 2000 @08:25PM (#1356126)
    $#!+ yes!

    The big trouble with IDE is still that the drives are "dumb" devices that require CPU resources to manage. On workstations doing lots of disk access I can see NOTICEABLE performance degradation between similar hardware, one of which is IDE, the other of which is SCSI. The nicest thing about SCSI is the fact that the controller offloads all disk management from the system's CPU. If you're doing power computing, this makes a big difference. Also, as someone else mentioned, IDE has real problems allowing the system to manipulate multiple drives simultaneously, a problem SCSI does not have. For some schmuck just dicking around with Netscape so they can browse the web, who cares, but for hardcore users with big machines trying to get real work done, it can make a legitimate difference.

    From a server perspective, there's no question that SCSI is the best. Just TRY putting more than four IDE drives into a Linux box without tearing your hair out and threatening to take a shotgun to the thing. The only way to do it is to get some sort of additional IDE controller like the Promise controllers, which are unmitigated junk. I don't even want to mention the hoops I've gone through to get a Promise Ultra33 stable in my Linux server. What makes it worse is that I could buy the four IDE drives I put in there for about the same price as I would have been able to pick up two SCSI drives of about the same size. (It's not that SCSI is so tremendously expensive as much as it is that IDE is just dirt cheap.) More unfortunately, I needed the space and I didn't have the extra money, or I *would* have just gone with SCSI. (As it turns out, I spent so much time trying to get the IDE drives working, I probably *should* have just gone SCSI from the get-go and saved myself money in the long run on doctor's bills from the high blood pressure and ulcers that trying to build an IDE-based server will give me.)

    I see the whole "IDE vs. SCSI" thing as yet another case of mediocrity winning the battle. It doesn't have to be great as long as it's cheap and good enough to get the public to buy it. For those of us who like quality, we just have to pay so much more. Unfortunately, unlike the software industry, there's no way to start an "Open Source/Free Hardware" movement to force the other manufacturers to start focusing higher on quality.

    -=-=-=-=-

    1. UDMA CPU usage is comparable to SCSI's.
    2. The multitasking restriction is that IDE cannot issue more than one I/O request at a time, but this doesn't matter for single-disk systems.
    3. The only essential difference between a SCSI HD and an IDE one is the drive electronics, they are physically identical.
    This last point brings up an interesting facet of this announcement: the only reason SCSI disks are more expensive is because all the manufacturers make both IDE and SCSI disks, so they market the SCSI as "power-user" and "enterprise". The only reason we don't have 10,000 RPM IDE drives today is marketing.
    --
    My opinions may have changed, but not the fact that I am right.
  • ...it's not. As far as I know, ATAPI drives still don't implement key SCSI features such as tagged command queueing. In fact, most of the time there is no way to tell for sure when a write has been committed from the drive's cache to the media.

    With ATAPI and LBA, what we essentially could have is the equivalent of SCSI but over a cheap cable with standard TTL drivers and no fancy termination (the things that make SCSI expensive). We could get nearly the same speeds -- or at least, we could if we were willing to put a single drive on each IDE interface (which certainly isn't out of the question).

    Maybe, if it's dropping out of the SCSI business, WD can be persuaded to flesh out its ATAPI command sets so as to keep customers. Wouldn't hurt to ask.

    --Brett Glass

  • by anewsome ( 58 ) on Wednesday January 19, 2000 @08:40PM (#1356144)
    Like the guy who posted up above about being at WD for 7 years, I too was at WD for a few years. 3, to be exact. It was a shame to see the chip/controller business go by the wayside. It was a shame to see all of the engineering talent vaporize, too. When the chief scientist (Carl) left, I knew that things would go downhill from there. I didn't work in the drive engineering group, but the way I heard it was that Carl was basically responsible for every hard drive design and worthwhile innovation out of WD in the last 15 years.

    Now that he's gone and the SCSI business is a memory, you can all expect nothing but crap to come out of this company for years to come.

    All of these posters talking about EIDE (or whatever this month's incarnation of the ATA spec is) being better than SCSI have no clue what they are talking about. I use my computers a lot. Anytime I sit down at a system with any type of IDE drive, I can immediately feel the sluggishness set in, all while the CPU wastes cycles babysitting the rather braindead disk channel. Server or not, SCSI systems are *always* better and I will *always* continue to pay the extra quid to be at the keyboard of a system that doesn't slow me down. For me, that's not EIDE - ever.

    Case in point: my shiny new Dell 600MHz system with the best Dell has to offer in EIDE technology. Many fingertip tappings waiting for the fluttering of the hard drive to settle down whilst I work. To me, that's not good technology or a good use of my time. At my earliest convenience, I'll be swapping out the disk subsystem in favor of something with 80 pins and real bandwidth capability.

  • by Vladinator ( 29743 ) on Wednesday January 19, 2000 @08:45PM (#1356156) Homepage Journal
    I certainly wouldn't use WD's for mission critical production servers.

    I used to work at a computer company in the St. Louis, MO area. We used WD drives exclusively. We got word that they had "oops"'d and that we had 30 to 50 IDE drives that we had to ship back to WD - AT OUR COST - even though it was their defect. Needless to say, we switched to Fujitsu and never looked back, simply returning the drives and demanding a refund.


    Hey Rob, Thanks for that tarball!
  • by mosch ( 204 ) on Wednesday January 19, 2000 @08:49PM (#1356162) Homepage

    A lot of people use this EIDE crap, thinking it's great for a server and what not, after all it works for their desktop. It's not.

    While I'll admit, there are some SCSI disks which are differentiated from the IDE drives solely by their interface, the higher end SCSI disks usually do have some serious advantages.

    Some of the bigger SCSI advantages are

    • low CPU load
    • the ability to queue multiple requests asynchronously.
    • higher quality components. yes, i know this is a manufacturer's choice, but true server-class hard drives use far more reliable actuators than your little desktop drive. And for those who point out the MTBF, remember that's the MTBF when used as a desktop drive, not as a news spool
    • 15 devices off a single SCSI controller is standard. 4 devices off a single EIDE controller is standard.

    The fact of the matter is that if you want a cheap drive, you can buy a cheap EIDE drive or a cheap SCSI drive. If you want a *good* drive, ultra-high quality EIDE drives are virtually non-existent, leaving you with good ol' SCSI.

    A lot of people have this odd notion that when two computers are PIII 600s with 256 megs RAM and 18 gigs of hard drive, but one costs $500 more, the more expensive one is automatically a ripoff. People seem to forget that sometimes the more expensive one has better components and is less likely to die and wipe out the past two weeks of work. (All you non-student types: how much did you make in the past two weeks? I'd bet a *lot* more than $500.) We need fewer ads that say the price, and more like the great VA Linux ad with the steak dinner on one page and the TV dinner on the other.

  • It's a matter of cost/performance ratio. Yes, SCSI is light on taking up CPU resources; no, SCSI isn't much faster than a good 7200 RPM ATA66 disk, and IDE is reaching into 10K RPM these days.

    So, in terms of performance, is it worth the several hundred $ extra per hard disk? No. Purchasing a 25% faster CPU for several hundred extra dollars is the better long-term solution, because each time you need to add a new HD or replace an aging one you save the $$$ overhead you'd be paying for the additional SCSI drives.

    No one ever said the most technologically sophisticated solution is the best solution--it's not. A 70s muscle car will generally kick the ass of a 90s sports car. The same is true with computer hardware, which is why commodity IBM clones have consistently kicked the ass of more elegant PPC boxes and even SPARCs and Alphas--sure, Alpha is the fastest thing this side of whatever the NSA's private little fab is putting out, but for the price of a smooth-as-silk Alpha or SPARC server, you could have built the ass-kickingest SMP x86 box--or two. Sure, SCSI will give you a performance boost--but you could get more processing power and more disk space with IDE 66; the processor power advantage would be nullified by the IDE pull on the CPU, but that still leaves you with more disk space.

    More is better, right? So unless you want to serve pages from that pathetic 2GB SCSI disk all your life, because you can't afford to add more SCSI drives, just ride the ATA66 revolution. If you can afford mondo SCSI disk space anyway--go with IDE and get yourself some more bandwidth or throw in more RAM.

    Thing is, with a computer there are always trade-offs, always several things you could get to improve performance. Few if any of us here have unlimited wallets--SCSI is dead. SCSI is the past. In the future, for the highest-end most expensive hard disks, we'll have FireWire or some other high speed standard--not SCSI. And for all other applications, IDE 66 and successors will be the way to go. The revolution's on, folks--all that stuff we've been using for 20 years is going by the wayside: x86 architecture (at least as we know it--maybe Sledgehammer and Crusoe will make it more serviceable), Microsoft operating systems, ISA slots, SCSI, and I wish my old college would hurry up and get rid of that ancient VAX, too.
  • by Deathlizard ( 115856 ) on Wednesday January 19, 2000 @08:51PM (#1356166) Homepage Journal
    I've got to say, this doesn't suprise me one bit.

    I've worked at a computer repair shop for about 5 months now. Here's the breakdown of the brand names of the crashed drives that have come in so far.

    90% Western Digital - at least 20 that I can think of offhand
    8% JTS (which are out of business) - 2 of these
    2% Seagate - I know of 1 that had bad sectors

    As of yet I've seen no Maxtor, Fujitsu, IBM, Quantum, Samsung or any other manufacturer's hard drive crash. Although I've heard a lot of bad things about the Maxtor drives and the Quantum Bigfoots crashing, and I know from personal experience that some old IBM drives (and I'm talking 10-15 year old PS/2 hard drives) were crap.

    We would sometimes get WD drives fresh out of the box, stick them in a machine, and they would be damaged. My boss had to deal with a company for a week because they bought a WD Enterprise drive for their mission-critical server and it crashed. It wasn't even a year old!

    If I had a choice of any drive today, hands down I would have to go with the IBM drive. If I had a second choice, I would probably go with a Samsung or a Fujitsu.
  • And you waste an IRQ and PCI slot for each one of them. And you still get two devices per channel, instead of 15. There's a big difference between 2 and 15.

    --
  • by soldack ( 48581 ) <soldacker@yahoo . c om> on Wednesday January 19, 2000 @08:56PM (#1356171) Homepage
    EIDE will eventually hit limits as even desktop computers become more demanding. As that time arrives SCSI will take over. On the server front, fibre channel is looking like the future over SCSI. It may be even more expensive, but it is faster, has a crazy 10 km or so distance limit, supports more devices on one loop, and supports multiple HBAs connected to one set of devices. This allows multiple systems to talk directly to the storage rather than through a network to a computer that talks to the storage. SANs are going to really need fibre channel.
    SCSI may seem to be "too much" for the average user, but as the old MTV logo used to say... "Too much is never enough!" This held true for music television and it holds true for computers. I remember getting time on a 386 SX 25 MHz with 4MB RAM and an 80 MB hard disk. This was a $10,000+ system at the time. Now it's a paperweight for all but a few geeks (like me) who love to find uses for old hardware. I have a few IDE paperweights... I will have a few more before they are done.
  • by Shadowell ( 108926 ) on Wednesday January 19, 2000 @08:56PM (#1356172)
    Every engineer I've ever talked to at a HD company has outright said that the IDE drives are not built to anywhere near the quality levels of the same company's SCSI drives. This for me is enough of a factor to go SCSI on anything that I want some reliability out of. IDE may be half price, but how much does it cost after a premature failure?
  • It's flamebait because he posted first. Many moderators would just like to get rid of their points ASAP, so they see the first post being only one line long, posted by an AC, and they think "Troll, Offtopic, Redundant, Overrated. Who cares, I can blow a point."

    Almost guaranteed the mod didn't read the context of that post :(

    -- posted by a responsible moderator, which seems to be why I get mod access more than I'd like to =)
  • If you're referring to your personal machine, perhaps you're right, assuming you love to do system maintenance. In a corporation, the cost factor definitely swings towards SCSI and RAID. While they may seem quite expensive, the fact of the matter is that downtime costs are usually in the thousands/hour for small companies and go up from there. (Put 200 people with an average salary of $70k out of work for an hour, and it just cost you $7000.) Cheap hardware starts looking really expensive when you realize that.

    You can argue that your personal system downtime costs nothing, but that assumes that a) you don't value your time or you like spending it rebuilding systems or b) that you don't do anything worthwhile on your computer anyway. If these are true, then buy the cheapest thing you can find. Otherwise, I suggest looking at the MTBFs and realizing that about half the drives fail before that time.
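The downtime figure in the comment above works out as the poster says; a back-of-envelope sketch (the 2,000 work-hours/year divisor is my assumption, roughly 50 weeks of 40 hours):

```python
def downtime_cost_per_hour(headcount: int, avg_salary: float,
                           work_hours_per_year: int = 2000) -> float:
    """Salary cost of idling `headcount` people for one hour."""
    return headcount * avg_salary / work_hours_per_year

print(downtime_cost_per_hour(200, 70_000))  # 7000.0 dollars per hour
```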
  • So responsible you seemed to have forgotten the big sign on the Moderator Guidelines page... I dunno, something about anonymity?
  • by roystgnr ( 4015 ) <royNO@SPAMstogners.org> on Wednesday January 19, 2000 @09:09PM (#1356184) Homepage
    Most Linux setups I've seen default to using at least a couple of the "sucky" IDE controller settings. The "-d1" setting with hdparm is crucial, in particular, as it turns on DMA, which hacks a huge chunk out of your CPU usage, makes things nice for the scheduler, and increases transfer rates dramatically. With DMA off on my system MP3s skip whenever the hard drive thrashes too much; with DMA on I can't make an MP3 skip with any level of hard drive activity (and believe me I tried).

    One last flag you might want to try: -X34 will make sure the drive is set to DMA mode 2 transfers, and on new drives -X66 will select Ultra DMA transfers. DMA->UDMA isn't nearly as big a leap as PIO->DMA, but it's sizable.

    I wish more people knew about hdparm - it's a single command you can run as root that can double the performance of your system under some circumstances. I think new kernels are getting more aggressive about enabling good IDE settings themselves, but there are still too many systems out there where the default settings needlessly give both Linux and IDE a bad name.
  • by Shanep ( 68243 ) on Wednesday January 19, 2000 @09:11PM (#1356185) Homepage
    IDE can't switch between master and slave fast enough to allow the greatest performance increases when striping with RAID and swap. It has silly limits like 2 drives per channel. It doesn't support command re-ordering in hardware to allow the heads to move less during many accesses. It is not multi-threaded... blah blah blah. If this is true, I have lost respect for WD. My Caviar 340MB is still going strong and I loved their build quality. I know IDE is getting really fast now, and it's cheap, but for a server with really disk-heavy applications, transfer rate is not the be-all and end-all. Neither is how fast the heads can move. SCSI is far better for server stuff. Damnit!
  • Personally, I don't view this as that much of a loss to the SCSI world. Western Digital has been on my bad list for quite some time. The past several hard drives that have failed on me have all been Western Digital, and usually just past the warranty (dohh!). Lately I've been buying mostly Maxtor and have had pretty good luck with them. I've also had good luck with Quantum (but I've heard lots of bad things about their BigFoot line -- never bought one myself though). I've been mostly buying IDE drives lately, but when I buy SCSI drives I generally get either Quantum or Seagate (although I am not a big fan of Seagate's IDE drives, their SCSI drives seem pretty good). I've heard good things about IBM's SCSI drives, but haven't tried any of their newer ones.

    All in all, there are enough other choices that I don't think Western Digital will be missed in the SCSI world.

  • LOL. You were using as your examples for having "one damn bus for everything", Printer, CDR and Scanner on Parallel. Which is a laughable thought :)

    I love SCSI. Especially nice LVD IBM drives. :)

  • SCSI is expensive, and a card is still $200 or so, but you're exaggerating.
  • Yeah? Cached, or raw read? I know which I'd place my money on.
  • please, do not immediately say I don't know what I'm talking about. I do (somewhat; no real-life SCSI experience, but I do have a 3rd-generation Seagate Barracuda in the garage)

    ATA hard drives have historically been slower than SCSI hard drives, but recently they have been catching up. ATA/IDE drives now use bus mastering and DMA, lowering CPU utilization. The speed of the interface has also been increasing, up to 66 MB/s, beating Ultra Wide SCSI-2, which was state of the art about two years ago before LVD SCSI arrived. Of course, this is just the interface itself; the actual speed still depends on the hard drive. A few years back, IDE hard drives ran at 5200/5400 rpm and had 128 KB/256 KB buffers. Currently, IDE hard drives run at up to 7200 rpm with 2 MB of buffer and a 10 GB/in^2 areal density, whereas SCSI hard drives go up to 10,000 rpm (giving them lower latency and such) with a 4/8 MB buffer (mostly on AV-optimized models) but only a 3.3 GB/in^2 areal density. Anyway, the point of all that was to say IDE hard drives are catching up.
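    To put a number on the latency point: average rotational latency is just half a revolution, so spindle speed converts straight into milliseconds. A quick sketch (the function is my own illustration, not from any spec):

```python
def avg_rotational_latency_ms(rpm: float) -> float:
    # One revolution takes 60/rpm seconds; on average the target
    # sector is half a revolution away when the seek completes.
    return (60.0 / rpm) / 2.0 * 1000.0

for rpm in (5400, 7200, 10000):
    print(f"{rpm:>5} rpm -> {avg_rotational_latency_ms(rpm):.2f} ms")
# 5400 rpm is ~5.56 ms, 7200 is ~4.17 ms, 10000 is ~3.00 ms
```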

    Back to the interface:
    SCSI can go up to 45 devices per card (3-channel UW card, 15 devices per channel) or, more realistically, 15 devices per card on a single-channel UW SCSI card, which totally beats IDE's 4-devices-per-card/motherboard limitation. But what about those ABIT motherboards with 4 IDE channels, or the ability to add on a PCI IDE card? You can argue this will use up all the IRQs, but PCI IRQs can be shared. Then there's USB/FireWire, which allow up to 127 and 63 devices respectively. USB ports have been standard on all PCs since around '97 and can be used for low-speed peripherals such as scanners or Zip drives. FireWire is available on various Compaq/other name-brand PCs as well as some Macintoshes, can be added to any PC through a FireWire card (Adaptec makes them AFAIK, and they are commercially available), and has the bandwidth to support high-speed hard drives and other peripherals. With all this, shouldn't SCSI be obsolete by now? (Yes, I know SCSI hard drives are still the fastest, and I understand SCSI is necessary for hardware RAID 5, and SCSI hard drives have a higher MTBF; all of this is critical to mid/high-end servers.) But what about the low-end server / workstation market, where price may be an issue? (BTW, I'm ruling out the home PC market because the _ordinary_ user doesn't have 5 hard drives, a scanner, a CD burner, etc., etc.)

    which brings me to my next point:
    SCSI is incredibly expensive. Yes, there are cheap (7200 rpm) SCSI hard drives out there (still $200+ though), and there are cheap SCSI controllers (Tekram at $170 ain't bad), but 7200 rpm SCSI hard drives don't give enough of a performance advantage to justify the additional price (10,000 rpm hard drives start at $350), and the Tekram SCSI cards (they provide Linux drivers, cool eh) only go up to 80 MB/s, not 160 MB/s (the new Adaptec cards are REALLY expensive; the SCSI card alone could buy 20-30 GB worth of IDE hard drives). So looking at the cost-effectiveness of SCSI hard drives, they are not really worth it for low-end servers / workstations, now are they? (The cost is worth it for high-end servers, especially if they run anything critical where a business is depending on the speed and reliability.)
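    To make the cost-effectiveness argument concrete, here's a rough dollars-per-gigabyte comparison. The capacities and prices are hypothetical examples in the ballpark of the figures above, not actual quotes:

```python
def dollars_per_gb(drive_price: float, capacity_gb: float,
                   controller_price: float = 0.0) -> float:
    # Amortize the host adapter into the cost of the first drive.
    return (drive_price + controller_price) / capacity_gb

ide = dollars_per_gb(180.0, 20.0)  # hypothetical 20 GB ATA/66 drive
scsi = dollars_per_gb(350.0, 9.0, controller_price=170.0)  # hypothetical 9 GB 10k rpm drive + card
print(f"IDE: ${ide:.2f}/GB  SCSI: ${scsi:.2f}/GB")
```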

    just a little note: by "workstations" i'm also including people's high end machines and those ultimate gaming rigs...

    now, back to slacking off, for I've got senioritis maximus =)


    _______________________________________________
    There is no statute of limitation on stupidity.
  • by |TheMAN ( 100428 ) on Wednesday January 19, 2000 @09:30PM (#1356206)

    Face it, the IDE design is ancient and inadequate for today's uses. It's fine when you are plain ol' Joe Bob who just checks his email and does word processing. But when you want to *add* something to the computer and do some serious stuff, like a geek will, you run into problems.

    Not only does IDE have bad command queuing, it doesn't even do sync transfers. The most debated issues are CPU usage and transfer rate: IDE relies on the CPU more than it should because the controller is too simple and therefore braindead. You can overclock the IDE controller, but what usually happens is that the drive is too crappy to handle the higher speed, so you get the same read performance anyway. IDE always sends data from devices back to the host controller at the original spec speed, whereas write-to-device speed can vary when overclocking the controller.

    Ok, now to the point:
    The engineers (I'm sure they've been TOLD to do this) keep trying to make IDE "better" by keeping this backwards-compatibility junk while trying to squeeze wider data bandwidth out of the devices. Think about ATA/66: you need those special 80-wire/40-pin cables because if you used a regular 40-pin/40-wire cable the signal-to-noise ratio would be so bad that you'd get tons of CRC errors. The additional wires are extra shielding to keep the SNR good enough to avoid CRC problems. What about adding more devices for storage space and removables, like most of us geeks do? Okay, they draw up these brilliant schemes of secondary, tertiary, and quaternary controllers, which are essentially the same controller design as the "primary" except on a different IRQ and port. Wow, cool, now I can hook up 8 IDE devices!
    Ok, but I want to add some stuff: a PCI sound card (2 IRQs, 1 for ISA/DOS emulation and 1 for actual PCI), a NIC (there goes another IRQ), a DVD decoder card (1 IRQ). Hmmm... wait a minute, isn't IDE 0-3 using IRQs 10, 11, 14, and 15 already? So doesn't that leave me with IRQ 9 for video? Ok, suppose I _DON'T_ even have an NVidia-based video card (which has problems sharing IRQs) and try to share IRQ 9 through "PCI steering" with the USB; that only gets 2 of those devices working. I still have to disable the serial port(s) and the parallel port to get more of this working. It is possible to have one device's IRQ shared with another, but this is all determined by the BIOS's DMI these days (in a modern PCI BIOS at least). I'm only talking about PCI here; ISA is already a forgotten issue since I'm talking about the latest and "greatest" motherboard.
    Aren't they trying to keep an ancient, inferior, simple interface up to date and competitive just because it's "cheaper"? AFAIK, it should cost no more to make a SCSI device/drive with the *same* MTBF rating as an IDE device. IDE works only when you keep things *simple*, but things aren't so simple these days. The more expensive, branded, prebuilt *gasp* systems already come with a decent-sized HD, a DVD drive, usually a burner, and sometimes a Zip or LS-120. This means both IDE channels may already be taken up. IDE seems cost-effective, but it doesn't look that way to me in the long term. It's more trouble than it's worth once you start adding cards to your slots. Doesn't this remind you of the saying "beating a dead horse"?

    It all comes down to this: we are all in a serious IRQ resource crunch already, and on top of that we get "newer and better" IDE "standards" which contribute to the problem even more. What I think should be done is either to ditch IDE (it worked great as a cheap solution but is no longer really viable) or to take care of the IRQ problem. However, one thing seems to be preventing this: the industry thinks it needs to maintain backwards compatibility. I feel there will eventually come a day when someone at some company will crack, officially acknowledge this problem, and actually be willing to deal with it.

    My strongly suggested action is to actually make SCSI cheaper (man, they make tons of money selling those things, when manufacturing costs are no more than IDE's), thus allowing IDE to be retired, and in turn allowing us to connect at least 15 devices (Wide SCSI) using only 1 controller, 1 IRQ, and 1 port, and lowering CPU usage tremendously.

    I still have to admit that IDE is ideal for most people, and for some of the geeks out there who are poor and can't afford good stuff like SCSI. But the minute you can afford it and want to do serious (workstation/server) stuff, there is no doubt about it: SCSI is the way to go.

    TheMAN

  • I thought this guy had gone away. Shame.

    He used to do long rants (if it is him) about "Sun and HP should just acknowledge that Linux/Intel has won, and absolutely smokes them everywhere, everyhow".

    "Linux, Intel and EIDE are what power the Enterprise Storage/RAID Market"???

  • Yes. When you're a shady dealer at one of the computer swap meets here in Melbourne (Australia), and have someone trying to convince you, and I quote:

    "We have an 8 speed IDE CD drive here, and we have a 2 speed SCSI CD drive too. If you're looking for the top speed, SCSI is much more powerful than IDE, but you do need to buy this controller card... Oh, good, I have a few in stock"

    That was the one time I stepped in and told the customer that the dealer was talking crap. The dealer wasn't amused, but like I gave a shit; that is just pure scammery.

  • 3. For some reason my brother's Yamaha CDR102 won't write onto newer CD media. Seems to be a CD design change, as old media works fine.

    I had this problem. Hunt down a firmware upgrade and I almost guarantee your troubles will be over.

  • "Cheap" is a matter of opinion. Is the IDE drive a cheaper solution when you have to replace it three times over the life of the machine, where a SCSI drive would move on to the next machine? And FYI, you can "yank" a SCSI drive and put it in any SCSI-equipped machine. In the PC world, neither IDE nor SCSI is telling the truth about the geometry of the drive. (Coming from a non-PC world, I really hate that.)

    If you want the most GB/$, then yes, IDE is the choice. However, "you get what you pay for." IDE _is_ slower than SCSI and much more likely to fail. I still have a Maxtor LXT-213S, a 213 MB SCSI-1 (CCS) drive. That drive is over 10 years old and still purring along; it's living up to its MTBF.
  • Hear Hear. 2 good comments in a row.

    The same goes for WinNT as for Linux: DMA makes the difference. Look at www.arstechnica.com to see how to turn it on; it's not as easy as on Linux, but it can be done.


    ----------------------------------------------
  • SCSI still supports multi-tasking OS's better. When I'm compiling and using netscape, that means something (although only because of netscape sigh).

    Also, if you stick with SCSI drives from good companies (apparently just IBM now :), you'll get reliability improvements over their IDE drives, which are some damned reliable IDE drives.

    So, for systems that multitask with disk activity (lots of OSs now), and with drives that typify SCSI, rather than random "I can sell SCSI drives now" vendors, SCSI is still better. Expensive, but worth it.

    (Deleted rant on SCSI vs. IDE installation since it was too angry :)

    Bah
  • Funny... I've done a lot of HD swapping around in my company. I saw no appreciable difference between SCSI and IDE. Of course, that's most likely because the actual drive technology is the same: the disk platters and the heads are identical. Now the interface, on the other hand...
    That's where SCSI shines. Not on reliability, not one bit. But it does offer performance advantages. And, not that most techheads care, but IDE is easier to plug in. Of course, there is only 1 more step in SCSI than in IDE, and it consists of selecting a unique SCSI ID... a no-brainer anyway...
  • by The Man ( 684 ) on Wednesday January 19, 2000 @09:57PM (#1356227) Homepage
    2.The multitasking restriction is that IDE cannot issue more than 1 io request at a time, but this doesn't matter for single disk systems.

    And in general, single disk systems are peecees, not workstations or servers. So, thanks for playing.

    Yes, in many cases the drives are physically identical. So why don't we have 10k rpm ide drives? It might be marketing - or it might be that the vendors aren't going to waste the cost and effort to build those fast drives on ide. After all, systems with only ide are unlikely to get any increased benefit from additional media speed, and people who buy them aren't likely to be willing to pay the difference in disk cost.

    You're forgetting the fundamental basis of peecee buyers: the only thing that matters is the ratio of $IMPORTANT_NUMBER to price. In this case, disk size. Nobody quotes MB/s or seek times or the crucial "platter to ethernet" time. Why? Because people buying biddy boxes don't give a fsck.

  • What about technologies like SSA, that are used inside of those massive storage arrays?

    I've seen storage arrays that used SSA internally and Fibre Channel externally: big things the size of an outhouse, near-terabyte to multi-terabyte stuff.

  • That's not entirely true.

    Yes, SCSI is expensive compared to IDE, from an end-user point of view.

    But the host isn't anywhere near that pricy. Well, it doesn't have to be.

    Symbios (NCR) scsi host adapter ICs are actually pretty good stuff. HP, IBM, Compaq, AMI, all use them on their RAID controllers.

    Sure, you could spend $200 on an Adaptec ultra-wide setup, but you could also spend $150 on an Adaptec ultra-wide setup, and only be almost as foolish.

    Adaptec has impeccable marketing, as scsi vendors go. Their products, however, are middle of the road.

    SIIG, Initio, the performance isn't horrible but the quality is questionable. The pricing is competitive, but not fantastic.

    BusLogic/Mylex may be a notch above Initio/SIIG, due merely to the fact that wars rage over whether or not BusLogic/Mylex cards are a lot better.

    but Symbios, man, pound for pound, if budget is a concern, is the only way to go.

    I'm typing this now on a machine sporting a Symbios SYM83c875 ultra-wide scsi host. Cost: $47.

    And it even feels faster than the same drive did when it was on an Adaptec AIC7880 (aka 2940UW)

    The drives, yes, they are more expensive, but keep an eye out for SCA drives and 80-64 pin adapters. Ultra/Wide scsi is quickly falling by the wayside in RAID arrays due to the lowering cost of LVD, and SCA interfaced UW drives are selling quite cheap at the surplus joints. I'm talking $345 for 18 gig 7200rpm IBM UltraStar.

    Yeah, it's more expensive than IDE, but it's always been worth it in my experience.

    Let's fire up your IDE system and watch you burn a CD on your IDE CD-R while ripping audio tracks off your IDE CD-ROM while encoding MP3s off your IDE hard drive while playing Quake II. I've done this on my dual Celeron UW SCSI system more than once; the system didn't even break a sweat.

  • by spinkham ( 56603 ) on Wednesday January 19, 2000 @10:23PM (#1356248)
    Actually, current IDE drives are about the same speed as current SCSI drives.
    The SCSI drives have higher rotational rates (measured in RPM) and thus lower latency, while the IDE drives have much higher areal density (loosely measured in GB per platter).

    This is lifted from a page at www.storagereview.com:
    "The primary way that hard disks have been increased in capacity and speed over the years is by storing more and more information into the same physical space. This is done by increasing how tightly packed together the bits on the disk are, which is the areal density or bit density of the platters."
    The differences between the two types of drives even out in situations where there is one drive per controller (and CPU usage is almost identical).
    (Also note that for the price of one SCSI controller you can buy quite a few IDE controllers, most of which have 2 channels per card, so 4 disks would only take up 2 PCI slots, or just one if you also use the onboard controllers that come on most motherboards.)

    For more info, check out this section of www.storagereview.com:
    http://www.storagereview.com/guide/guide_int_perf_fact.html
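    A crude model shows how density can offset spindle speed: if bits and tracks shrink together, linear bit density grows roughly as the square root of areal density, and the media transfer rate is linear density times rotational speed. This is my own back-of-the-envelope sketch using the figures quoted upthread, not storagereview's math:

```python
import math

def relative_media_rate(areal_density_gb_in2: float, rpm: float) -> float:
    # Rough model: linear bit density ~ sqrt(areal density) when bits
    # and tracks scale equally; media rate ~ linear density * rpm.
    # Units are arbitrary; only the ratio between drives matters.
    return math.sqrt(areal_density_gb_in2) * rpm

ide = relative_media_rate(10.0, 7200)    # IDE figures quoted upthread
scsi = relative_media_rate(3.3, 10000)   # SCSI figures quoted upthread
print(ide / scsi)  # > 1: denser platters can beat a faster spindle
```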
  • pfft, IDE ports are a silicon by-product of having a PCI bridge controller. There are no currently produced PCI bridges that don't have a few thousand transistors dedicated to an IDE interface. The early PCI-based Macs were the only ones I know of without any IDE hardware.

    Hell, even Sun is using IDE hardware now -- even in things called "server". It's saving them, what, $12 per $3000 machine by not putting a SCSI controller on there? I gave up a 366MHz Ultra 10 in favor of a 167MHz Ultra 1 at work to get back to SCSI; the U10 paid too much of a penalty for being IDE-based (and yes, it was _very_ noticeable.)
  • Check out www.storagereview.com for good information on how fast hard drives REALLY are..
    (and the fastest IDE and SCSI drives are currently almost the same speed...)
  • USB is not in the same class as SCSI. Heck, in terms of i/o throughput, USB is not in the same class as IDE.

    Don't get me wrong, USB is great for a lot of things. I wish every digital camera had a USB port, etc. For moving relatively small chunks of data (50 megs or less), USB is a great way to do things.

    But sheesh, that's 12 mega *BITS*, and it's a *SERIAL* interface. Divide by eight, then account for bus latency. Then compare it to the ATA-1 speed limitations. Yeah, a hard drive on USB would be soooo cool.
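    Doing that division explicitly, with the nominal numbers (the ATA-1 figure is the approximate PIO mode 2 ceiling, and neither side accounts for protocol overhead):

```python
USB_FULL_SPEED_MBIT = 12.0  # USB 1.x full speed, in megabits per second
ATA1_PIO2_MBYTE = 8.3       # approximate ATA-1 PIO mode 2 ceiling, megabytes/s

usb_mbyte = USB_FULL_SPEED_MBIT / 8.0  # serial bits -> bytes: 1.5 MB/s
print(f"USB: {usb_mbyte} MB/s vs ATA-1: {ATA1_PIO2_MBYTE} MB/s")
```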

    Firewire is interesting, arguably in the same class as some forms of scsi, especially in regards to price.

  • by imagi ( 27636 ) on Wednesday January 19, 2000 @10:34PM (#1356255) Homepage
    Here's a tip sent around our company concerning tweaking IDE performance. Thanks to Andrew Tridgell for the info.

    This tip is useful for just about any Linux box, and is probably the
    simplest way to significantly speed up your IDE based Linux box
    without changing the hardware.

    If you are impatient then just add the following near the top of your
    /etc/rc.d/rc.sysinit (or equivalent startup script):

    /sbin/hdparm -u 1 -d 1 /dev/hda
    /sbin/hdparm -u 1 -d 1 /dev/hdc

    (and so on for any IDE devices in your system)

    Now for a more complete explanation.

    By default Linux uses extremely conservative settings for IDE. In
    particular the default settings do two things that make IDE perform
    really badly:

    1) DMA is not used. That means all data coming to/from the hard disk
    or cdrom is processed a byte at a time by the CPU. That is not very
    efficient. With a fast processor that isn't doing anything else at
    the time this can appear fast in simple minded benchmarks but it is
    a big drain on CPU resources when you are actively using the
    machine.

    2) hardware interrupts are masked during IDE transfers. That means
    that while a lump of data is being transferred to/from a IDE device
    no other interrupts are processed. This includes interrupts from
    other IDE devices, from network devices, from serial ports and from
    mice. Your whole machine is effectively clagged up doing nothing
    but waiting for a horrendously slow device to say "I'm done". Not
    good.

    If you want to see just how slow this is on your system then do the
    following:

    1) put a CDROM in the drive.

    2) run the following commands:

    hdparm -d 0 -u 0 /dev/hda
    hdparm -d 0 -u 0 /dev/hdc
    cat /dev/hdc > /dev/null &
    hdparm -t /dev/hda
    hdparm -d 1 -u 1 /dev/hda
    hdparm -d 1 -u 1 /dev/hdc
    hdparm -t /dev/hda

    that shows you the hard disk speed while accessing the CDROM with the
    default settings and with the improved settings. On my system the hard
    disk speed goes from 3.8 MB/sec to 12.9 MB/sec. I've seen much bigger
    changes on some other systems.

    Even more importantly than the speedups is the fact that you will stop
    dropping your PPP connection while doing cdrom transfers, and you will
    be able to use your system while burning a cdrom without creating a
    coaster.

    You may wonder why the default settings are so poor. The reason is
    that there is some rare hardware out there that corrupts data during
    IDE transfers when you either use DMA or receive an interrupt during a
    transfer. If that happens then the kernel should detect the failure
    (in nearly every case) and fall back to the default
    settings. Unfortunately, after the auto-fallback you are still left
    with corrupt data in your cache. Luckily, systems that don't handle
    DMA and unmasked interrupts are really quite rare these days, so it
    is a pretty safe bet to turn on the options I suggested above,
    especially if your system isn't from the stone age.

    For more info and piles of options for fine tuning your IDE system try
    "man hdparm".
  • Are there any IEEE 1394 drives? I was looking forward to IEEE 1394 drives mounted in Device Bay racks [devicebay.org]. But that whole concept seems to have disappeared, even though it's in the PC 98 and PC 99 specs.
  • by Laven ( 102436 ) on Wednesday January 19, 2000 @11:22PM (#1356272)
    For small servers, workstations and desktops, I myself believe in the new IDE standard. For systems with small numbers of hard disks, U/ATA 66 is great for the cost/effectiveness ratio.

    7200rpm + U/ATA66 can sustain some wickedly fast speeds. For this reason I chose an Abit BE6 motherboard and cheap 7200rpm IDE drives for the budget server at my cash-strapped school.

    I was astounded when I ran hdparm -t (an uncached disk read test) and it reported 21MB/sec. This went well beyond my expectations for a cheap little IDE drive.

    In situations where you only have one disk per controller (the Abit BE6 has two U/ATA 66 controllers), 7200rpm IDE can actually outperform SCSI based systems. (According to an article on Thresh's Firing Squad)

    HOWEVER, SCSI still beats the heck out of IDE in reliability, speed, and scalability for large and important jobs (enterprise solutions). The redundancy and failover protection of SCSI + RAID controllers is not as reliable with IDE (it's possible, with stupid human tricks). Don't even talk to me about software RAID; it's too CPU-intensive. SCSI RAID controllers can do all the failover, drive rebuilding, and cool stuff without the CPU knowing anything about it.

    Also, the extra bandwidth of SCSI shines when many hard disks are added to the fray. IDE has nowhere near the level of scalability of SCSI.

    So basically, I highly suggest U/ATA 66 IDE for desktops, workstations and low budget servers. But for large and important jobs use SCSI.

  • The big trouble with IDE is still that they are "dumb" devices that require CPU resources to manage.
    SCSI also requires "CPU" resources, the difference being that SCSI controllers take care of the details themselves instead of nagging the main CPU for attention. Bus-mastering DMA controllers have gone some way towards removing this overhead.
    I don't see why you couldn't build a "smart" IDE disk controller that places no more load on the system than a SCSI controller, or even one that looks like a SCSI controller to the computer but uses IDE disks (so long as you have room on the card, give them each a dedicated IDE channel, too). The price difference between the drives should pay for the controller!
  • 2.The multitasking restriction is that IDE cannot issue more than 1 io request at a time, but this doesn't matter for single disk systems.

    Yes, this matters for single-disk systems. You send several requests to the disk. The disk will then process request 2 while transferring the data read in request 1 (or while receiving the data you are writing for request 1).

    This improves your read bandwidth a lot, particularly when using good read-ahead algorithms. It makes a difference. It will also improve write bandwidth in high-load situations, i.e. when you're writing at the platter bandwidth for long enough to fill the on-disk RAM cache and more.
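    A toy model makes the overlap concrete: treat each request as a seek stage plus a transfer stage, and let queuing pipeline them so the drive seeks for request i+1 while finishing request i. The timings here are made-up numbers purely for illustration:

```python
def serial_time_ms(n: int, seek_ms: float, xfer_ms: float) -> float:
    # No queuing: every request pays seek + transfer back to back.
    return n * (seek_ms + xfer_ms)

def pipelined_time_ms(n: int, seek_ms: float, xfer_ms: float) -> float:
    # Queuing: after the first seek, each request costs only the
    # longer of the two stages; the final transfer drains the pipe.
    return seek_ms + (n - 1) * max(seek_ms, xfer_ms) + xfer_ms

print(serial_time_ms(100, 8.0, 4.0))     # 1200.0 ms
print(pipelined_time_ms(100, 8.0, 4.0))  # 804.0 ms
```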

    3.The only essential difference between a SCSI HD and an IDE one is the drive electronics, they are physically identical.

    Sure. The SCSI advantage is in the interface. Now, if at least one manufacturer would see the light and sell them at the same price...
    IDE would be gone in a few years, and they would save money by not developing IDE anymore.
  • SCSI is *SO* much better with IRQs. Look at it this way.

    IDE: 2 IRQs, 4 devices. That's 0.5 IRQs per device.

    SCSI: 1 IRQ, 30 devices. 0.03 IRQs per device.

    From /proc/interrupts
    15: 263999 XT-PIC aic7xxx, sym53c8xx
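    The arithmetic, spelled out (the 30-device count assumes two wide buses, like the aic7xxx + sym53c8xx pair above, sharing one interrupt line):

```python
ide_irqs, ide_devices = 2, 4     # two legacy IDE channels, two drives each
scsi_irqs, scsi_devices = 1, 30  # two wide SCSI HBAs sharing one IRQ

print(ide_irqs / ide_devices)    # 0.5 IRQs per device
print(scsi_irqs / scsi_devices)  # ~0.033 IRQs per device
```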

    PLEASE DONT LET THEM KILL SCSI!
  • You don't have to give IDE any IRQs, though.
  • by Yarn ( 75 ) on Thursday January 20, 2000 @12:00AM (#1356293) Homepage
    I was reading the comments, getting angrier and angrier about the price difference between IDE & SCSI, when I thought: 'I wonder if it would be possible to rip the IDE controller board off a hard disk and replace it with a SCSI one.'

    Any thoughts?
  • Ars Technica has an _excellent_ review [arstechnica.com] of SCSI drive technologies. I found it extremely useful.
  • Ten years plus ago, I had a lot of machines out with customers with Western Digital drives in. I think every last one of those drives ate itself for breakfast within a year, and I had some very unhappy customers. I haven't used Western Digital since; they may have got better.

    Recently a friend's company had a spate of Samsung drives go down, most less than a month old.

    These days I consciously choose better quality disks because the extra cost is considerably less than the cost of a lost customer, or even having to go out and swap a disk in a hurry. Disks are about the only critical mechanical parts left in a modern computer (except bl**dy chip-fans), and are among the things most likely to die. Buying cheap ones is (in my opinion) a very poor economy.


  • The current implementation of SCSI is 160 MB/sec. SCSI is a multitasking I/O subsystem (simultaneous read/write); IDE is not. SCSI typically impedes CPU performance by 3%, while IDE typically impedes CPU performance by ~25%. For starters.

    This is correct, but it doesn't capture the whole picture. Only fools and zealots can argue that IDE tech is as good as SCSI tech. The advantages of SCSI are obvious: SCSI allows more disks, more cabling distance, much more bandwidth, less CPU usage, and the list goes on. The real question has always been: who cares?

    Technological superiority is a terrible reason to buy something. (Unless you're a nerd buying a gadget just to have it, that is.) When you make real purchases you buy whatever fits your needs and your budget best. Any tech superiority you pay for but don't use is nothing more than gold plating. A gold-plated computer might look great, but unless that gold plating is used for something, it's pretty dumb.

    When people say that IDE tech is catching up to SCSI tech, what they really mean is that IDE's ability to satisfy their needs is catching up to SCSI's. Personally, I often wish I had bought SCSI so I could put more devices on a single controller. Then I look at the prices for certain SCSI HDs, remember why I chose IDE in the first place, and don't feel so bad.
  • While you are right that narrow/wide has 8/16 target IDs, that does not necessarily mean you can plug 7/15 devices into a SCSI bus.

    E.g., Ultra will not really tolerate more than 4 devices on a bus. It's possible to have 5 or 6 Ultra devices on the same bus, but you are likely to have problems.

  • Western Digital's Caviar models from 1.0 GB up to 4.0 GB are notorious for failing. At my company we use Gateway computers, which have WD, Quantum, and Maxtor drives. The WDs have a high rate of failure, such as developing bad sectors, and of course the Clunk of Death, where one day you turn the machine on and the drive just sits there going "clank clank clank clank clank..."
    Now, the new Expert line is a different story. It's licensed from IBM technology and should be just as reliable.
  • OK, double-checked. IDE takes no IRQs on my computer.

    I have nothing connected to it, so it's not activated.

    Maybe I was a little unclear then.
  • *nod* I use Adaptec and Initio. Adaptec 2940UWs, yes, quite expensive. And the Initio A100U2W, which I haven't had problems with yet.

    Drives: I haven't touched an EIDE in many years, I've used Seagate, Quantum and IBM drives. Definitely love the IBM, dead silent. (Anything to reduce the noise from the PCs in my bedroom is A Good Thing(tm)).

  • Be careful with enabling DMA under NT; there are problems. My CD burning software would blue-screen the machine on startup after I enabled DMA (which incidentally is per cable, not per device). Their web site was aware of the problem, and suggested disabling DMA! Microsoft is also aware of the problem, with articles in MSDN dating back several years (why the hell haven't they fixed it?). If you get a blue screen in ATAPI.SYS, it's probably related to this. In the end I installed the Intel Bus Master driver (which only works on Intel chipsets, of course): same performance as the DMA trick, but no blue screens (although I've heard there can be problems rebooting, as sometimes the OS can't tell if the drive has been written, so I also shut down before I reboot).
  • I personally have had five Western Digital IDE drives go bad, especially that one series (1.2 GB or 1.6 GB?) that Western Digital couldn't make reliable to save their life.

    To their credit, they replaced all of the drives I've had go bad (one even went bad in the first six months). But I don't even use the replacements (one of them is still in shrink wrap, even) because I don't trust my data on a Western Digital drive.
  • by RayChuang ( 10181 ) on Thursday January 20, 2000 @05:34AM (#1356371)
    Folks,

    I think many of you are missing the point.

    The big advantage of IDE is simple: low cost. Remember, in the old days you had to buy a separate hard disk controller, and that hogged valuable expansion slot space (not to mention the time wasted in doing a low-level format of a hard drive.)

    Since IDE drives don't need a separate controller card (and don't need low-level formats), all you need to do in 1999 is connect the drive to the motherboard (heck, even the system BIOS will automatically set up the drive type), and you can right there install the operating system of your choice.

    Also, in the past people have rightly criticized IDE drives' low performance compared to SCSI drives. However, with Intel shipping the 82371 series of I/O controller chips, software drivers can be written that dramatically reduce the CPU utilization needed to access an IDE drive. Also, the development of Programmed I/O Mode 4 in the early 1990s, ATA-33 in 1996, and ATA-66 in 1999 has dramatically increased throughput on IDE hard drives, to the point that for most desktop operating systems there is almost nothing to be gained by going to SCSI hard drives.

    The only place where SCSI hard drives are still useful is in environments where hard disk access is very heavy, such as servers. This is where the RAID 5 capability of modern SCSI host adapters and the throughput of SCSI Ultra-Wide and Ultra2-Wide become useful.

    It's small wonder why Western Digital is no longer interested in SCSI hard drives. That's because IDE hard drive technology has advanced to the point that SCSI hard drives are only useful for server environments.
  • by Anonymous Coward on Thursday January 20, 2000 @05:39AM (#1356376)
    SCSI Fanatic: I can burn 2 CDs while I play quake 3, do 24 bit colour scanning, leech mp3s and watch the latest pamela anderson vid.

    IDE Fanatic: IDE is cheaper.

    SCSI Fanatic: Oh yeah????? Well my drive rotates at 20,000 gigaschmirkels per second and I can chain ***37*** DRIVES TOGETHER!

    IDE Fanatic: IDE is cheaper.

    SCSI Fanatic: So, my mega-ultra-fat-wide-giga-fast-scsi-4 drive can do simultaneous reads and writes and can reorder requests fast enough to pilot the space shuttle - LETS SEE YOUR IDE DRIVE DO THAT!!!! AHHAAHAAHAAHHAHAHHHHAHAHAHAHHAHAHAHAHAHAHA AHAHAHAHAHAHAHHAHA AHAHAHAHAHAH AHA!

    IDE Fanatic: IDE is cheaper.


    Do you see what I am getting at? Nobody is saying IDE is technically better, yet all these wonderful SCSI fans are screaming until they turn blue in the face and the veins in their forehead start bulging out. Frankly it's disgusting.

    Stop it.









    I mean it.







    I can still see your veins.















    I wasn't joking.
  • Ahem.. check your specs, buddy.

    UDMA/66 drives CAN transfer up to 66MB/sec. Current-generation IDE drives' sustained transfer rates are just above 34MB/sec, but they can burst data at 66MB/sec.

    Additionally, there are only 2 IDE drives per channel. That is, if both drives are working at the same time, the max bandwidth on the wire is 66MB/sec.

    ULTRA/160.. um, those drives don't transfer at 160MB/sec. 160MB/sec is just the max bandwidth of the bus. SCSI drives are still limited by the same read/write channel as IDE drives, which means they too can only stream around 34-35MB/sec right now.

    NOW, ULTRA/160 can have up to 16 devices (OK, 15 without counting the controller), so 160/16 = 10MB/sec. So if you have 15 drives on a single SCSI channel, none of the drives can transfer faster than 10MB/sec (if they are all talking at the same time).

    IDE is perfect if you don't need more than 160Gigs worth of storage (40Gig x 4 drives). And BTW, you can't tell the difference in speed between a 7200RPM IDE drive in UDMA mode 4 and a 7200RPM SCSI drive.

    Ex-Nt-User
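    A quick sketch of the bus-sharing arithmetic in the comment above. This is a rough model that assumes every attached device streams at once and the bus splits evenly; the figures are the poster's, not manufacturer specs.

    ```python
    # Rough per-device ceiling on a shared parallel bus when every
    # attached device transfers at the same time.
    def per_device_bw(bus_mb_s: float, devices: int) -> float:
        """Evenly split the bus bandwidth across active devices."""
        return bus_mb_s / devices

    # Two IDE drives sharing one UDMA/66 channel:
    print(per_device_bw(66, 2))               # 33.0 MB/s each
    # Fifteen drives on an Ultra/160 chain (controller excluded):
    print(round(per_device_bw(160, 15), 1))   # 10.7 MB/s each
    ```

    In practice drives rarely all stream simultaneously, so this is a worst-case floor rather than a typical figure.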
  • ...is that even though they keep updating the technology, they don't keep changing the damned connectors.

    How many different types of SCSI cable are there, between original SCSI, Wide SCSI, Ultra Wide SCSI, etc.?

    I can still use the same 1GB drive that I had on my 486/33, on my much newer PII/450. And I can use the same cable. This makes me happy. The fact that IDE is also cheaper than SCSI makes me happy.

    Now, if the SCSI advocates are only talking about practicality for high-load file servers, and server candy like hot-swappability, they have a point. But if they want to treat workstation practicality as being equivalent to server practicality, that's an all too common fallacy.
  • by stevew ( 4845 ) on Thursday January 20, 2000 @06:03AM (#1356387) Journal
    Let's see - PCI 1.0 can do 132MBytes/sec. That does limit a system that can produce 160MBytes/sec - but not to 66, as with UDMA.

    Further, it isn't SCSI that limits the speed of the drives, but rather the speed off the platter. The drives will BURST at 160 for blocks of data at some fraction of the size of the drive's buffer.

    So - then let's put 5 drives on the channel and stripe the data (can you say RAID) and you have a high-performance channel that will saturate PCI.

    UDMA can't keep up with that.

    Oh - I'm not really an expert in the stuff. I've just designed disk controller chips and an Ultra 160 host adapter.

    Summary - Horse Hockey! ;-)
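    The saturation argument above can be put in back-of-envelope form. The per-drive sustained rate here is an illustrative assumption, not a measurement; the 132MB/sec PCI 1.0 figure is from the comment.

    ```python
    # Aggregate throughput of N striped drives, capped by the narrowest
    # link in the path (the drive channel or the PCI 1.0 bus).
    def array_throughput(n_drives: int, drive_mb_s: float,
                         channel_mb_s: float, pci_mb_s: float = 132) -> float:
        raw = n_drives * drive_mb_s
        return min(raw, channel_mb_s, pci_mb_s)

    # Five ~25 MB/s drives striped on an Ultra/160 channel come close
    # to saturating PCI 1.0's 132 MB/s:
    print(array_throughput(5, 25, 160))   # 125
    # A 66 MB/s channel would cap the same array at 66 MB/s:
    print(array_throughput(5, 25, 66))    # 66
    ```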
  • The future is in serial buses, not in SCSI, both for the big server market and the luser market. Firewire is already there, though it might not become the standard, given the lack of Intel chipset support. Why are serial buses superior? After all, the more wires you add, the more data you pull, right? Not quite: synchronizing those 2^n lines is a bitch. Also, as prices go down, it will make the low-tech parts (copper, plugs, PCBs) less expensive.
  • I just got a box with an Abit BE6 board; it indeed has 4 IDE channels, but two of them are UDMA/66. They seem to be a bitch to use on Linux. How do you use them?
  • >If you are doing video editing or other disk-intensive tasks, you should probably use SCSI, otherwise...

    Thanks for saying this, now I don't feel as much like an idiot after reading through these posts.

    I bought a SCSI drive and adapter after I ran into trouble with my PC being able to capture video over IEEE 1394 from my new digital camcorder. My PII-333 and WD udma-66/7200 drive dropped about 20% of the video frames. When I checked into this, the best advice I found was to go SCSI.

    Adaptec makes a combo adapter that has Firewire as well as UW-SCSI interface (was coming up short on PCI slots) and I went with a Seagate Barracuda drive. I'm still using the IDE drive for running the machine since I've never had any trouble with it being fast enough for anything else, but now I'm using the SCSI drive to do the video capture and editing. This was an expensive way to go, but it sounds like the alternative might have been to upgrade the CPU to compensate for the IDE.
  • But if they want to treat workstation practicality as being equivalent to server practicality, that's an all too common fallacy.

    SCSI subsystems usually feature more mature and faster chipsets and better busmastering, and SCSI controllers tend to load the CPU less than any IDE/EIDE controller I've ever used.

    If your workstation is really a workstation, where you're doing local rendering or driving large compiles or anything which requires lots of CPU, and you're willing to pay for performance, SCSI still holds court on the top end. The difference now is that the top end is getting thinner and thinner as ATA continues to raise the bar.

    Frinstance, when I finally buy my 2 HDDs for video editing (to mirror together), 99% sure they'll be ATA66 Quanta. If I was a video pro, though, and had the $$$ and needed to get 24bit uncompressed RGB video at 29.97fps, striping across 6-7 HDDs and having lots of controller cache would still mandate SCSI.

    Still, it's not a religion, just stick with what's fast enough for your needs..

    Your Working Boy,
  • by Anonymous Coward
    Seeing as we now have over 300 responses to this story, I doubt my rebuttal will even be read...

    I think many of you totally misunderstood my response and jumped on the flame wagon too quickly.

    UDMA/66 means that a single IDE interface has 66MB/sec of possible bandwidth to use. That means with two drives on the interface you'll max out if each drive is sustaining 33MB/sec.

    It was created because UDMA/33 is slowly becoming a bottleneck for today's faster IDE drives. If you have two drives capable of 16-17MB/sec sustained throughput, you'll max out the interface's possible throughput when both drives are in full utilization.

    As for current IDE drives doing 34MB/sec sustained -- you're on absolute crack. There are currently no drives (IDE or SCSI, solid state drives aside) capable of average sustained transfer rates of 34MB/sec. Go read benchmarks. Burst mode is entirely different, as are sequential data access rates. Those are not real-world benchmarks.

    Currently the fastest drives on earth are the Quantum Atlas 10K drives: 24MB/sec sustained throughput. If I do a burst transfer or sequential access, the drive easily sucks down the full 80MB/sec of the entire U2W SCSI bus. That's why Ultra/160 is needed.

    If you put 15 of these Atlas 10K drives on a single U2W SCSI chain, stripe the drives together, and start copying large files, you'll hit the roof at about 5MB/sec per drive -- you'll flood the SCSI bus with no bandwidth left. Ultra/160 takes this limit up to about 10MB/sec per drive. That's a pretty big difference.

    UDMA/66 gives you 66MB/sec shared between two individual drives. So if you have two IDE drives, someday in the future when drives can actually go beyond 33MB/sec, running at max throughput, you'll hit your limit.

    And you're absolutely wrong when you talk about not being able to tell a difference. I went SCSI last year and I've never looked back. The speed and performance difference is absolutely incredible.
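    Under the rebuttal's own numbers (24MB/sec sustained for an Atlas 10K -- the poster's figure, not a datasheet value), it takes only a handful of drives to fill either bus:

    ```python
    # How many drives streaming at a given sustained rate it takes to
    # saturate a SCSI bus; 24 MB/s is the sustained figure quoted above.
    def drives_to_saturate(bus_mb_s: float, sustained_mb_s: float = 24) -> float:
        return bus_mb_s / sustained_mb_s

    print(round(drives_to_saturate(80), 1))    # ~3.3 drives fill U2W's 80 MB/s
    print(round(drives_to_saturate(160), 1))   # ~6.7 drives fill Ultra/160
    ```

    Which is the rebuttal's point: burst and sequential access can flood the bus long before you reach 15 drives.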
  • by The Breeze ( 140484 ) on Thursday January 20, 2000 @07:29AM (#1356422) Homepage
    (original post was in the wrong place)
    I've read a ton of stuff on the debate about SCSI vs IDE... and I've seen some people comment on how "SCSI seems to last longer" and others on how "SCSI can handle multiple requests better"... but I must confess, it took an electrical engineer to explain to me the reason that SCSI blows IDE away in servers, and always will.

    Let's start with the fact that most SCSI & IDE drives are identical in hardware; it's the logic board that's usually different. Both the SCSI drive and the IDE drive have the same MTBF. Which drive is going to fail first in a server? The IDE will, every time -- BECAUSE IT WORKS HARDER, and RUNS MORE. SCSI's ability to handle multiple requests means the moving parts of the drive don't have to work as hard as in the IDE drive, which is sending the head flying over the platter for every little bit. Result? Two servers, same workload, one with an IDE drive, one with a SCSI drive, both with the same MTBF... but the IDE drive is chugging away to exhaustion while the SCSI drive caches some of the data it needs and is not working nearly as hard.

    This is why the speed debate is useless as applied to servers. In a desktop? Sure, IDE has its advantages, and big speed is always nice. But in a server in a business environment with a heavy workload, time is the value, and downtime costs -- and that IDE drive is GOING to fail because it's working 10 times as hard as the SCSI drive to get the same data.

    Now, if someone can just explain why it costs so much more. I am inclined to agree with the previous poster who said that the hard drive companies just milk the "business market", but I have no real facts to base that on.

  • No, in my experience Maxtor and WD have always been the lowball-priced HDs at CompUSA, and the most prone to failure (I work summers at a computer reseller and fix broken computers, so yes, I have actual experience with this). My personal pick is Seagate.

  • Whatever the licensing issues are, it doesn't give anyone much incentive to include the technology if there is nothing to use with it.

    Apple should take a page from their own book and drop the royalty -- the iMac's USB support, followed by the Blue and White G3s, created an instant explosion of USB devices which at least in theory should work for Macs and PCs.

    In case you didn't know, the only difference between Mac modems and hardware PC modems was the serial cable connecting them. With USB, it's standardized -- one size fits all, without hardware/interface modification.

    --

  • by Wanker ( 17907 ) on Thursday January 20, 2000 @09:24AM (#1356455)

    In the days of 200MB hard drives, Western Digital was king. They made solid, inexpensive, high-performance drives.

    About the time of the 500MB hard drive, they started cheapening things up. Cache sizes were reduced, and while everyone else was looking towards a screaming 5400RPM, Western Digital stuck at 3600.

    This seemed to peak about the time of the 1.2/1.6GB drives. These had a tiny, tiny cache and performed abysmally, despite the WD propaganda about how their 128K cache was somehow better than everyone else's 512K cache. The post-install failure rate from my experience was on the order of 20-30%, with an early-life failure rate of about 30-40%, based on about 200 sold.

    About this time, Seagate was making a 1.0GB low-profile drive that was rock-solid. Of about 500 sold, I saw two go bad. I haven't gone back to Western Digital since.

    When talking about drive reliability for a particular manufacturer, it's important to give a timeframe. Different manufacturers have been good at different times, and who is great one year might suck the next.

  • > I wonder if it would be possible to rip of the IDE controller board from a hard disk and replace it with a SCSI one

    I've been dreaming of this for years. The problem is the different natures of IDE and SCSI tech. I'm no expert, but IDE seems to depend on constant attention from the CPU. That means the converter would have to include a stand-alone microprocessor and memory to handle the IDE controller, plus the SCSI controller to talk to the bus. A single-chip 486 with IDE and SCSI running some ultra-hacked software would do it, but there's no way it'd be cost-effective.

    On the bright side, you could map multiple IDE drives into a single SCSI ID (2 per IDE channel). A standard dual controller could handle 4 drives, but the SCSI bus would only see one ID. Might make for an amusing RAID/mirroring solution.
  • As well you should.. But that drive is much slower than the current crop of drives; its areal density is about 1/3 of theirs.
    Go to www.storagereview.com, click on "database", and compare the two drives (or 3 if you want to include your old IDE) for yourself. The tests are in NT and Windows, but the performance is comparable in Linux (it mostly follows the NT trends).
  • I mean geez. Talk about ease of use! Sign me up!
  • I started CD burning on my Mac with Toast. It FORCED you to do nothing else (gotta love that cooperative multitasking). Literally. Win & Linux users won't know what I'm talking about, but a Mac app can completely take over. Anyway, I got used to the same thing too.

    When I got my IDE burner at home, I don't do anything while it's burning. sometimes I will read email, but that's only, like, once or twice. Coincidentally, I've never burnt a coaster with it.
  • One of the direct impacts of this announcement is being felt here in Rochester, MN, where WD Enterprise Storage Systems had just built a huge new R&D lab and was preparing to plunder the staff of IBM's DASD group here in town... now they're going from moving into their new building and staffing up something like 600 jobs... to selling the building to Mayo and laying off the 400-some-odd people who work at their current location. Bet those folks who left IBM are thinking about just how well they'd be welcomed back.....
  • I have used both SCSI- and IDE-based systems, and I have to say SCSI fares much better in the performance area, as anyone here could tell you. The big deal with IDE on desktop systems is that back when a 1.2GB drive was enormous, the drives on your PC were IDE. For hard drive manufacturers to keep selling their products, they need to make damn sure their products work with legacy systems, so many years of IDE dominance have left IDE the dominant standard today. SCSI is still too expensive for most people, not to mention the need for a SCSI adapter. Motherboards with SCSI adapters built in cost way more than boards with just IDE, which is something I can't understand. The controller chips can't be very expensive anymore (if you apply Moore's law), but I figure the companies keep the prices high because they expect to sell to corporate customers with oodles of cash.

    I personally think the standard should be changed from IDE and SCSI to IEEE 1394. My reasoning is this: since all 1394 devices can work independently from the CPU and from each other, it would make the system all around more efficient if your DVD, HDD, scanner, and video camera were all connected to the same bus. The speed isn't bad either -- 400 or 800Mbps, which is comparable to ATA-66 and faster SCSI speeds. I also rather like the idea of power being supplied by the bus itself rather than by a separate connector; that leaves me with a lot fewer wires hanging around in my system.
  • ;-)
    Depends on what controllers you use...
    You will spend $250 or so on SCSI controllers. If you get 2 IDE controllers and have one drive on each channel, it will cost about the same.
    But I agree that any more than 4 IDE devices starts to get silly, and if you need more than 4 or maybe 6 hard drives, or hot-swap capability, go with SCSI; that's not IDE's market.
    However, current IDE fits 99% of end users' needs, and is as fast without using up many IRQs for up to 4 devices. High-end servers that need more than this should pay out the nose for SCSI, or get redundant external Fibre Channel-linked storage such as Sun provides (which uses SCSI in the enclosure) and get REAL reliability and failover...
  • I still contend that SCSI hard drives are useful primarily in environments where constantly heavy disk access is necessary, primarily in servers. When you have many people trying to access a database file that is several hundred megabytes in size, you want disk throughput that is very high, and this is where Wide SCSI in its various forms (40, 80 and now 160 MB/sec. transfer rates) becomes useful.

    But for single-user desktop operating systems, today's ATA-66 IDE hard drives are more than enough. Remember, since Intel-chipset motherboards usually sport a variant of the Intel 82371 I/O controller, you can write bus-mastering software drivers (regardless of operating system) that will dramatically reduce CPU utilization during disk access, speeding up the computer.

    It's only the high-end desktop computer user (e.g., people who work with big CAD/CAM or illustration files) where SCSI Ultra-Wide might become useful.
  • We did verify with that utility. We did call as an OEM. All of that was made clear. They STILL expected us to pay shipping.
