PCI 3.0 Coming; Intel gets the Green Light.

pjbass writes "This story on ZDNet discusses the next I/O subsystem planned for PCs. It will be called PCI 3.0 once it makes it to consumers, but for now it is known as Arapahoe, or 3GIO. Intel Corp. is responsible for the technology, and boasts that its performance will be about 6 times that of PCI 2.x, initially reaching speeds of 6.6 gigabytes per second of bandwidth, with promises to scale further once the technology is mainstream."
This discussion has been archived. No new comments can be posted.

  • AMD (Score:2, Interesting)

    by arielb ( 5604 )
    AMD also voted for this, so we can expect support in future Athlons (or, more likely, the Hammer). In the meantime, HyperTransport is here for us to enjoy
  • by Shivetya ( 243324 ) on Wednesday August 08, 2001 @07:19AM (#2110313) Homepage Journal
    Don't confuse HyperTransport functionality with PCI 3.0; an EETimes article explains AMD's logic for voting to support the new Intel standard.

    Reading it closely makes me feel as if AMD is trying to curry favor with Intel for some odd reason while at the same time promoting their own technology.... They do overlap in a few areas, but I am curious if their support for the new PCI 3.0 standard will make it harder for them to sell HT as they will have to work to differentiate it.

    • Reading it closely makes me feel as if AMD is trying to curry favor with Intel

      or maybe AMD realizes that Intel owns the market and to not support PCI 3.0 would mean PITA for hardware vendors and suicide for AMD.

    • > Reading it closely makes me feel as if AMD is trying to curry favor with Intel for some odd reason while at the same time promoting their own technology.... They do overlap in a few areas, but I am curious if their support for the new PCI 3.0 standard will make it harder for them to sell HT as they will have to work to differentiate it.

      Maybe they're hoping to saddle Intel with a standard that can't compete with their own proprietary solution?
  • The last update I heard was that AMD already had a new PCI bus (I thought it was PCI 2.0??), and the FCC was waiting on Intel, because Intel was getting all upset that AMD had already made the standard and they weren't going to get their $$$. This was about 3 months ago, and I don't recall where I read it or I'd post it. I know the numbers are still the same (speed-wise, anyway), but what happened to AMD's new PCI? Did they even have one in the first place?
  • by Dr_Cheeks ( 110261 ) on Wednesday August 08, 2001 @06:17AM (#2118899) Homepage Journal
    I already have to explain to family and friends why a Pentium 75 is worse than a Pentium 4 far more regularly than I'd like to. I can just picture the pathetic puppy-dog looks on their faces when I tell them that their 5-year-old box won't take their new PCI 3 piece of kit ("See this number 3 here..."). And then they try to guilt trip me into taking it back and asking for a refund too....

    Call it "New PCI" or "Super-Duper PCI" or "Extra Whizzy PCI (not compatible with any computers made before 2001)". Please!

    And don't even get me started on the trouble I've had explaining why people's "innovative" cheap storage solutions are flawed (Zip disks don't work in regular floppy drives, you can't overwrite normal music CDs no matter how good your burner is, etc.).

  • Yeah? (Score:2, Interesting)

    by The_Weevil ( 448754 )
    Oh great, a new architecture. How long will it take before we get PCI 3.0 2x, 4x and 8x?
    Still waiting for that fibre-optic bus. Still waiting.

  • Where did my trusty PCI-X go? It doesn't require all-new cards and was finalized over a year ago.
  • Dude, PCI 3.0 is not yet available. Take a look at AMD's new alternative to current bus technologies. All you bus are belong to hypertransport. 12.8 GB/s. Intel has been beaten on this.
  • Lawyers?! (Score:3, Funny)

    by agdv ( 457752 ) on Wednesday August 08, 2001 @06:05AM (#2123778)
    The nine committee members [...] had voted July 27 to take another week for company lawyers to review the standard.

    WTF? Since when are lawyers qualified to decide on technology issues? I'd understand if they were to review the legalities of the standard (patents and all that crap), but the standard itself?

    Next time I need to design a computer bus I'll ask my mother (a law professor). But first I'll teach her how to use scrollbars...
    • Re:Lawyers?! (Score:5, Insightful)

      by geomcbay ( 263540 ) on Wednesday August 08, 2001 @06:57AM (#2128562)
      WTF? Since when are lawyers qualified to decide on technology issues? I'd understand if they were to review the legalities of the standard (patents and all that crap), but the standard itself?

      Obviously it IS the legality of the standard they are interested in. They will all want to go over the spec with a fine-tooth comb to make sure they don't wind up with another Rambus fiasco.

      Yes, I realize Rambus's patents weren't actually published at the time of the memory standards meeting (they were still pending), but that whole incident has definitely raised the amount of due diligence companies are putting into the legal end of standards committees. It makes no sense for AMD to endorse the standard going forward if, for example, it wound up that they would have to pay Intel a bunch of royalties on every chip they sold because they needed to use some patented method for the CPU to talk to add-in cards over this bus.

    • Funny, the headline in the Monday, August 6th, 2001 Electronic News [] was 'Intel Strongarms the PCI SIG' [].

      This [] might be why the lawyers are involved...

      And you thought this was a good thing.
    • Re:Lawyers?! (Score:2, Insightful)

      by bartyboy ( 99076 )
      Patent lawyers often have two degrees - one in law, and one in the field that they're working in.

      A friend of mine finished his biochemistry degree and is now studying law. This will not only open up more doors for him, but make him a slightly more competent lawyer if he chooses that career path.
  • What about AGP (Score:2, Interesting)

    by uhmmmm ( 512629 )
    If PCI 3.0 is going to be so much faster, what effect will that have on AGP? Will I have to go buy a PCI video card when I upgrade, or can I keep my AGP one?
    • Re:What about AGP (Score:2, Interesting)

      by CajunArson ( 465943 )

      You need to remember that AGP is PCI!

      The AGP standard was derived from the PCI bus, but AGP is a port, meaning only one device is hooked to the controller.

      There may be a new AGP spec based on PCI 3.0, or, due to its point-to-point nature, it may not even be necessary to have a special device interface just for graphics.

      In response to other posts, AGP 4X maxes out at 1.1GB/s while PCI 3.0 is initially proposed to go to 6.6GB/s and will go higher than that once the technology matures.

      All in all this new spec is a Good Thing (tm)

    • Re:What about AGP (Score:3, Interesting)

      by Syllepsis ( 196919 )
      Sure, if you get a board with backwards compatibility. It took forever for ISA slots to disappear. I remember there were boards with 3 PCI, 3 ISA, and 1 VLB slot. The AGP-Pro will perhaps take the place of the VLB as the outdated, quirky standard still supported.

      I bet you will not want to keep it though. PCI3 would offer a shared 6.6 GB/s peak versus an AGP 4x peak of 1 GB/s. At that point, a GeForce 3 MX PCI3 with 128 MB DDR-333 will most likely run for under $40 online, if they are still bothering to sell them. Drool...

  • Is this Hemos finally responding to this []? Geez, took him long enough. :)
  • Lowered MB Costs (Score:5, Interesting)

    by nate1138 ( 325593 ) on Wednesday August 08, 2001 @05:54AM (#2140163)
    One Good Thing that the article failed to mention is that fewer wires also mean it is easier to design a motherboard and expansion cards, thus lowering the overall prices of both (once the required chipsets get into mass production, of course). You should also be able to get more spacing between the circuit paths, which should lead to a lower possibility of cross-talk, and better reliability.
    • Re:Lowered MB Costs (Score:3, Interesting)

      by Milican ( 58140 )
      I agree that motherboards will have fewer lines and thus be simpler because of the serialization of parallel lines. However, the serialization means that higher frequencies will be required for one wire to do what many parallel wires had done before. The result when moving to higher and higher frequencies is more cross-talk on the lines that are left. A good example is Rambus. From what I hear, there are lots of difficult issues with the cross-talk on the narrow bus [].
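To put a rough number on that trade-off (my own back-of-the-envelope sketch, not from the article or the spec): a single serial lane has to clock enormously faster than the wide bus it replaces to move the same number of bits, which is exactly where the cross-talk concern comes from.

```python
# Back-of-the-envelope only: the clock a single serial lane would need
# to match the raw throughput of a wide parallel bus.

def serial_clock_ghz(width_bits, clock_mhz, lanes=1):
    """GHz each serial lane needs to carry the same raw bit rate."""
    total_mbit_per_s = width_bits * clock_mhz   # parallel bus raw bit rate
    return total_mbit_per_s / lanes / 1000      # spread over N lanes, in GHz

# Classic 32-bit/33MHz PCI, replayed over a single serial lane:
print(serial_clock_ghz(32, 33))   # ~1.056 GHz -- a ~32x higher line clock
```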

  • It would be nice to have a full 64-bit computer that doesn't have any of the 8-bit/16-bit legacy left in it, so we can get away from the limiting backwards compatibility.

    Doing this a bit and a piece at a time just drags out the process and is going to get people more confused than if they just switched to a non-self-bottlenecking set of standards.

    Or am I just dreaming?

    • You try something like that, and what you end up with may be a nice, squeaky-clean architecture whose implementations have no customers, because every single piece of their old software and every old card won't work on the new computer. That's why it's been so tough to break the Intel/AMD cycle.

      If Intel tried something like that, then Intel would lose tons of market share and then AMD would capitalize on it. And it would totally segment the marketplace, which is desirable to nobody.

      If AMD tried something like that, almost nobody would buy it and AMD would go bankrupt trying it.

      It has to be done in incremental fashion. It's safer for the consumer that way, and that is what the typical consumer has historically wanted.

  • Will you be able to sneak your old PCI cards into this newfangled technology or no? I don't remember seeing anything in the article about that... If not, I think that it will hamper the transition to this new standard...
    • Software yes, Hardware no. At least that's what I get from this paragraph:

      "The key message is that PCI software and device drivers do not have to change to be supported in the base level of Arapahoe," Tipley said. "As far as the actual link level, how electrons get across the wires, that's quite different, and obviously won't be the same PCI pins. It will be very similar to what a link would look like for 10 Gigabit Ethernet or InfiniBand, that kind of signaling."

    • Very probably the computers using this bus will also have a PCI2 bus for older cards, just as today's computers have a few ISA slots.
      • Not trying to be an ass...

        Are you saying that you know that it isn't compatible? Because if it isn't, I'd expect them to put the old PCI2 slots in mobos. Sorry to be so exacting, but can anyone confirm or disprove with confidence whether it is backwards compatible?

        I.E. Will you be able to throw your current PCI devices into it, like the way different AGP speeds work currently?
        • 3GIO is a serial bus while PCI is not... That alone is enough to keep that foul crap now known as 3GIO from being compatible with existing cards.
          • Thank you very much for that horribly uninsightful comment. There is of course no reason why it could not be backwards compatible just because it's serial.

            It probably will not be, simply because it's such a change from previous PCI that I seriously doubt Intel wants people confused about what cards will work in what (as someone else mentioned, there are a lot of people out there who would try to shoe-horn a new card into a Pentium 75...)

            Also, because it's intended to be more of a port, not a bus, one goal is to try to prevent conflicts, sharing, noise, and other things that severely limit current PCI technology.

            There is no sound reason to assume that PCI 3.0 is "Foul Crap" except that you probably don't like Intel.

            Get over it.
            • No I don't like Intel, not even a little... But that has nothing to do with why I know that it would have to use new cards...

              If you've looked at the spec, it calls for an entirely serial approach to everything... & you see, modern PCI cards are parallel, with the ability to send 32 or 64 bits every cycle... This is where we get a problem... You either have to use a buffer to handle parallel requests in a serial fashion (adds $ to the cost) or you have logic added to each card so that it can communicate in either serial or parallel mode (which doesn't work for existing cards). Think about it for a while & you should realize that I'm right about it...

              & frankly Intel has never cared if what they do forces a user to upgrade; heck, Intel is the king of forcing upgrades on people just to make more money... They've used that tactic since the 386 days...
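The buffering argument above can be sketched in a few lines (a toy illustration of the general idea, not anything from the actual 3GIO spec): something on the board has to shift each parallel word out bit by bit and reassemble it at the far end, and legacy cards simply have no such logic.

```python
# Toy serializer/deserializer (SerDes) sketch -- illustrative only, not the
# real 3GIO encoding. A parallel card presents a whole word per cycle; a
# serial link carries one bit at a time, so a buffer must bridge the two.

def serialize(word, width=32):
    """Shift a parallel word out as a serial bit stream, LSB first."""
    return [(word >> i) & 1 for i in range(width)]

def deserialize(bits):
    """Reassemble the parallel word from the serial bit stream."""
    word = 0
    for i, bit in enumerate(bits):
        word |= bit << i
    return word

word = 0xDEADBEEF
assert deserialize(serialize(word)) == word   # round-trips losslessly
```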

  • I remember when I was a kid, seeing some article on Usenet circa 1990 about how it was impossible for any computer to do 30 FPS in 24-bit... The original PCI spec had come out, if I remember correctly, spec'ed at 133MB/sec. Ah, the world was going to open up... So many things you could do with that much bandwidth. The difference being that nowadays, I can't think of a single application that could need 6.6 GB/sec of bus bandwidth, other than really, really intensive data collection. But then again, it may not mean much now... It's built for the future, after all.

    • I'm confident some 3D chip manufacturer will come up with some uses for it, like single-pass texturing using 8 textures ...

      Besides, as long as the 3D card can render it, you can send many more polygons/second if you have 6.6 GBps of bandwidth

      Another thing that comes to mind is Video Capture and processing of HDTV signals.

    • > I remember when I was a kid, seeing some article on Usenet circa 1990 about how it was impossible for any computer to do 30 FPS in 24-bit

      If you drop it from high enough, any computer will do 30 fps before it hits the ground, without regard to its bitness.
      The difference being that nowadays, I can't think of a single application that could need 6.6 GB/sec of bus bandwidth, other than really, really intensive data collection.

      How about 10 Gbit ethernet? A few such interfaces should put some load on the bus, so maybe a router working with 10 Gbit could need the bandwidth.

    • I remember when 6 GB hard drives first hit the market. I thought to myself *no one* needs this much space. How I have been shown up.
    • I remember when I was a kid, seeing some article on Usenet circa 1990.....

      It makes me chuckle to hear these young'uns talk about 1990 being a long time ago.

      Back in 1980, I remember when a 40 MB drive was so big (and expensive) it was only used in a multi-user system. Now I have individual files that easily exceed that. Sometimes by several times.

      So I hope nobody makes any statements to the effect "640K, that ought to be enough for anybody."

      On my bookcase, I've got the drive mechanism from an old 5MB drive. It is about 40% larger than your typical 5-inch drive mechanism today. It's 5 MB. It sounded like a jet engine starting up. And it cost $3000 when new. And that was the "new", "low-cost" technology.

      I hope the lessons here are obvious and don't need explaining. The time will come when 6 GB/sec will seem limiting. After all, a holographic projection needs way more bandwidth than this. I look at the progress of the last 20 years, and I am hopeful to see where computers will be in 2010.
    • One word: graphics.

      I mean, this kind of bandwidth is at least in the same league as what today's graphic chips have to their local (on-board) memory. If a board could have >6GB/s bandwidth to system RAM, that might make it feasible to do unified memory systems again. I'm not saying that would necessarily be better, but it's at least possible and might even be cheaper in some cases. Also, I for one would enjoy a world where PCs don't have a single one-of-a-kind AGP connector for the graphics board, but where I could plop in as many boards as desired with at least reasonable bandwidth.
  • Oh baby baby BABY!!!

    Who's your super computer, who's your super!!!

    Can't wait for these to hit the market and build a network of 3.0 spec motherboards!


    PS: Gonna have to sit down now....I feel dizzy.
  • Intel=Evil Corp.


    PCI 3.0 = Bad
    "...representatives of Advanced Micro Devices [ok], Broadcom's ServerWorks division [ok], Compaq Computer [ok], Hewlett-Packard [ok], IBM, Intel [ok], Microsoft [what the hell are they doing there ?!??!?!], Phoenix Technologies [ok] and Texas Instruments [ok]."

    Some big players are missing, but what is Microsoft doing there???
  • by AFCArchvile ( 221494 ) on Wednesday August 08, 2001 @10:33AM (#2149505)
    So will the connector be backwards-compatible? Or will we return to the days of three different bus connectors? (I'm not counting AGP, since there's always just one of those).
    • Which 3?

      ISA is effectively dead (I know some people still use it, but more and more motherboards simply don't have a slot).

      PCI 2.x is the current "legacy"

      Between PCI 3.0 and HyperTransport... if PCI 3.0 is not backwards compatible, then I would expect a motherboard to probably support PCI 2.x and ONE of the other standards (most likely dependent on whose CPU the MoBo supports).
  • I haven't read the spec: Are there any provisions for hardware copy protection systems in this thing?

    Intel's been working on hardware copy protection for IEEE 1394, so it wouldn't surprise me if they managed to sneak that garbage into PCI 3.0.


  • by Drakino ( 10965 ) <> on Wednesday August 08, 2001 @10:46AM (#2150202) Journal
    Why does a consumer machine need this when 64-bit or 66MHz PCI hasn't gotten into the market? The only 3 types of machines I ever see these slots on are servers, very high-end workstations, and Apple systems.

    Also, where does PCI-X fit into all this?
    • All machines need cheap speed.

      A 64-bit bus is expensive because it doesn't go as fast as a serial bus (you have to slow it down deliberately to avoid timing problems in making all 64 lines sync up), and it eats a lot of board space and chip pins.

      Though it seems counterintuitive, a serial solution is currently more consumer-friendly than a wider bus is.
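One way to see why wide parallel buses get expensive (my own illustrative numbers, not from the post): every one of the 64 lines must settle within the same clock window, and that window shrinks as the clock rises.

```python
# Illustrative arithmetic only: how much trace-to-trace skew a parallel bus
# can tolerate if, say, a quarter of the clock period is budgeted for skew.
# The 0.25 fraction is an assumption made up for this example.

def skew_budget_ps(clock_mhz, fraction_of_period=0.25):
    """Allowable line-to-line skew in picoseconds."""
    period_ps = 1e6 / clock_mhz   # one clock period, in picoseconds
    return period_ps * fraction_of_period

print(skew_budget_ps(33))    # ~7576 ps: 33MHz PCI is forgiving
print(skew_budget_ps(533))   # ~469 ps: a faster bus leaves far less slack
```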

      Of course, in 5 years, when PCI is becoming the bottleneck again, and even cranking it up to 24 or 48 GHz isn't enough, someone will put several of these in parallel and tout it as a great advancement.

      I take it back. I predict we'll see someone doing that and marketing it as vapor before we even buy the first one of these.

      Or maybe I just did.

  • by sunking7 ( 112069 ) on Wednesday August 08, 2001 @08:30AM (#2150919)
    Well, when you have that kind of bandwidth on the PCI bus, doesn't it seem a little redundant to have the AGP port expense on the bridge chips?

    Will everyone who bought AGP 4X graphic cards have to abandon them again like they left the PCI platform before? Anyway I'm still plugging along with an old PCI card and maybe I'll be glad I stayed there.

  • Does this mean no more AGP?
  • Yeah! (Score:1, Funny)

    by quigonn ( 80360 )
    Together with IA-64, this will finally make the PC platform a "good" computer. ;)
    • You won't have a *good* computer until you can insert 6 PCI Cards *and* an AGP card without having an IRQ conflict...
  • Should be noted (Score:4, Informative)

    by Anonymous Coward on Wednesday August 08, 2001 @06:21AM (#2152820)
    One thing the ZDnet story doesn't mention is that unlike PCI 2.x, 3GIO will use point to point connections instead of a shared bus.
  • this is just getting the spec out the door

    no silicon yet, so many companies do not even have access to what it is

    there are no third parties supplying interfaces or anything, BUT lots of archs with bus problems, i.e. bandwidth problems

    EV6 aka hypertransport is here right now, and there is third-party silicon; SUN and Apple will use it to link AGP, memory, and CPU because it is just faster!

    nice, but intel still has bandwidth problems now and looks to drop their prices by up to 50% today on the P4

    the BUS is the problem for them and that's where AMD rules

    SUN has also had faster machines simply because the BUS was faster

    oh well


    john jones

    • SUN has also had faster machines simply because the BUS was faster

      Not to quibble, but while this might have been true a long time ago, it's certainly not true today. In a Sun Fire 6800 you can't write from memory to PCI space at more than 150 MB/sec, which is really terrible for a 64/66 PCI bus. (The PCI to memory speed in that same machine is about 370 MB/sec.)

      Supposedly their next PCI controller chip will fix this problem, but that's what they said about the last one...

      • That might still be a lot faster than what's seen in the PC space. I remember a few years ago having a devil of a time getting some of the popular Intel chipsets to sustain more than about 20MB/s without locking up. Sun's new-at-the-time "Psycho"[1] chipset was a breath of fresh air by comparison. You might think that 150MB/s sucks, but it would not be at all surprising if it's still better than what you'll find in the Intel/AMD camp.

        • Oops, forgot the note. [1] I'm not sure about the spelling because I only ever heard it talked about, never saw anything on paper. Then I left that job and stopped keeping track of such things.

          Another thought: the reason Intel, AMD et al keep pushing faster pipes when they only get 20-50% of nominal on the existing pipes is simple. They'll always use only a fraction of whatever pipe you hand them. It's way easier to design a faster pipe and get 20-50% of that than to get 70% or more out of the existing pipe.

    • Hypertransport has nothing to do with the EV6 bus that is used by the Athlon and Duron. Hypertransport is an interconnect technology for on-board components. PCI (2|3) can do this, but also has a physical interface definition, the "PCI slot". Hypertransport is better than PCI2.x by a mile and more...
    • EV6 aka hypertransport
      EV6 and HyperTransport are different things. EV6 is the system bus used by the current generation of Alpha and Athlon CPUs; AMD licensed it from Digital. HyperTransport (codenamed Lightning Data Transport) is AMD's next-generation system bus.
      • Ok, there seems to be some confusion here as to what EV6 actually is.

        EV6 is the code name for the 21264 Alpha. (Yes, EV67 is the code name for 21264A and EV68 is the code name for 21264B. And of course, EV7 is the code name for the upcoming 21364.)

        AMD licensed the EV6 Bus from DEC for use with K7.

        I hope this clears everything up.

      • ok

        I am preaching to people who should understand this

        hypertransport is the same thing, bought ready to go, for AMD

        arrrgh, read the spec; if they tell me it's different, why does the same patent end up in both?


        john jones
        • by hattig ( 47930 ) on Wednesday August 08, 2001 @08:48AM (#2148160) Journal
          EV6 is a 64-bit wide point-to-point processor bus used to connect Athlons and Durons to compatible Northbridges. It was developed by Alpha, and it can scale up to 200MHz DDR (400MHz effective). It can currently transfer either 1600MB/s or 2100MB/s.

          Hypertransport is a variable-width, bi-directional bus. It can transfer up to 12GB/s. It can be used for many things: CPU - Northbridge (as it will be used for the upcoming Hammer CPUs), Northbridge - Southbridge, Northbridge - RAM, GPU - RAM, Southbridge - RAID controller, etc.

          Hypertransport is packeted. EV6 isn't. AMD licenses EV6 from Alpha; AMD designed Hypertransport.

          Is this enough to convince you that EV6 and Hypertransport are different?
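The EV6 numbers quoted above fall straight out of the bus width and clock; the arithmetic is worth showing once:

```python
# EV6 peak-bandwidth arithmetic: a 64-bit bus moves 8 bytes per transfer,
# and DDR signaling transfers twice per clock.

def ddr_bus_mb_per_s(width_bits, clock_mhz):
    """Peak MB/s of a double-data-rate bus."""
    return width_bits // 8 * clock_mhz * 2

print(ddr_bus_mb_per_s(64, 100))   # 1600 MB/s
print(ddr_bus_mb_per_s(64, 133))   # 2128 MB/s, rounded down to "2100MB/s"
```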

            EV67 is a variable-width point-to-point bus

            EV67 is packeted, in effect

            how do you want to spin it? EV67 and hypertransport are the same thing; AMD research got the guys from DEC, OK

            please don't delude yourself; it's like saying electrons are nice little things that fly around atoms

            or electrons flow from positive to negative

            it's just nice lies that work


            john jones
            • John, you're totally wrong. The EV6 bus and HyperTransport are NOT the same thing.

              Read the specs.
              • right


                ok, some of what I say is wrong, i.e. they are not compatible and their switching is done differently, but the ideas and the way they are implemented are the same

                so I am not totally wrong, but then I am not completely right, if you want to be pedantic

                one question: how many pages is the EV67 reqspec?

                or the tech manual for hypertransport?

                don't know

                that's because you don't have them!

                they are subject to NDA, so don't tell me to read the spec! (I have them)

                what I am annoyed at is the fact that so many people seem to have jumped on the marketing DROIDS' words

                I admit I am wrong, but hey!

                you're so far off base it's incredible


                john jones
                • You Sir, are a Troll.

                  I am not surprised that patents for one bus technology are reused in another bus. But that does not make the second bus a variant of the first bus. It makes sense to reuse good ideas!

                  EV6 IS NOT the same bus as HyperTransport. They are not even similar, except maybe for some low-level things.

                  EV6 does not use LVDS.
                  EV6 is not a bidirectional (full duplex) bus (X data lines one way and Y data lines the other way); instead, all of the data lines are used for communication in both directions (half-duplex).
                  EV6 is a processor (Alpha or Athlon/Duron) to northbridge bus. Hypertransport is a chip interconnection technology for the future.
                  EV6 is not packet driven, unlike HyperTransport.
                  EV6 is a point-to-point bus. Hypertransport can have 32 devices on a single bus, via a hub architecture (i.e., you could say it is a lot of point-to-point busses connected together, but the addressing allows for 32 devices)

                  and there are such a lot of other things that are different.

                  You're so far off base it's incredible. And you have the specifications? Have you thought of reading them? If your job requires you to work with these busses, and you do not even know the difference between them, then I feel sorry for your employers.

    • I could be wrong, but...

      I've read quite a number of technical reviews of both the athlon and p4 and for some reason I remember reading more than once that the bus the p4 uses has 3.2GB/sec of bandwidth whereas the EV6 bus tops out (at least right now) at 2.1GB/sec.

      Doesn't this mean that the bus is more a problem now for AMD than it is for Intel? Or am I totally wrong? :)
        The EV6 bus tops out at 3.2GB/s in the specification. AMD has decided to implement only the 1.6GB/s and 2.1GB/s versions of it (100MHz and 133MHz DDR). However, people have already overclocked the bus significantly, to 2.4GB/s or even 2.7GB/s. I expect that Barton might work with a 166MHz DDR FSB anyway, as the next AMD chipset will support PC2700 DDR memory, and AMD's chipsets are always synchronous with the FSB speed and memory bus speed.

        The P4's bus is quite bandwidth hungry if I remember correctly. It isn't as efficient as the EV6. Anyway, the P4 is slowed down badly by high RDRAM latency.

        Of course, the P4's FSB will be updated to 533MHz (133x4) next year, thus getting over 4GB/s of bandwidth, with faster RDRAM with more bandwidth. However, DDR RAM by then will be faster and have even lower latencies. Imagine a dual-channel DDR chipset for the Athlon that supports PC2700, aka nForce 2, coming next year: a total of 5.4GB/s of bandwidth between memory and the system.
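Those FSB figures check out arithmetically (a quick sanity check of the numbers in the post, nothing more):

```python
# Quad-pumped FSB arithmetic: 64 bits wide, four transfers per clock.

def quad_pumped_fsb_mb_per_s(width_bits, clock_mhz):
    """Peak MB/s of a quad-pumped front-side bus."""
    return width_bits // 8 * clock_mhz * 4

print(quad_pumped_fsb_mb_per_s(64, 133))   # 4256 MB/s, i.e. "over 4GB/s"
print(2 * 2700)                            # 5400 MB/s from dual-channel PC2700
```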

    • From the AMD HyperTransport FAQ:
      PCI-X is a motherboard to expansion card interconnect, while Infiniband is a networking protocol and interconnect for linking systems. Both are medium performance expansion-bus specifications. The I/O Hub is a very slow speed interconnect between two proprietary chips. These standards are optimized toward the constraints of connecting external components and computer systems, namely with low cost connectors, long cable runs, device sharing, etc. HyperTransport technology is optimized to chip interconnect and is designed to operate at much higher bandwidths by eliminating many of the constraints necessary in expansion bus designs.

      HyperTransport = chip to chip
      PCI/3GIO = motherboard to expansion cards
      InfiniBand = box to box

      Each of these technologies is designed with different applications in mind and therefore different constraints and complexity. There is no such thing as a "one size fits all" bus.
      • Each of these technologies is designed with different applications in mind and therefore different constraints and complexity.
        True, but a lot of trade rags make it sound like if PCI wins, HyperTransport loses. Idiots.
