Quad PCIe Motherboard

SlipKid writes "PCI Express graphics cards have allowed for some new and innovative ways to increase rendering horsepower in desktops and workstations. Recent introductions of NVIDIA's SLI and ATI's CrossFire technology have enabled dual PCIe graphics cards in a load-sharing architecture. Motherboard manufacturers are now jumping into the fray, and Gigabyte has released a quad PCI Express graphics-enabled motherboard capable of running four cards at once. The board is not capable of running Quad SLI, mostly due to a current lack of NVIDIA driver support, but it does offer support for eight simultaneous display outputs on four Graphics cards."
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Tuesday March 14, 2006 @05:32AM (#14914357)
    Well, $topic pretty much says it all. More PCIe slots, great, but it'd be nice if there were something besides graphics adapters to plug in.
  • by Zakabog ( 603757 ) <john.jmaug@com> on Tuesday March 14, 2006 @05:35AM (#14914364)
    Even if you could do Quad SLI, would it make that much of a difference in performance? At what point would splitting the rendering task be more work than it's worth?
    • by jeroenb ( 125404 ) on Tuesday March 14, 2006 @06:05AM (#14914440) Homepage
      Back in December, Tom's Hardware managed to get two dual-GPU GeForce 7800 cards working on a regular SLI board. In their benchmarks [tomshardware.com] the performance increase was quite good. Not worth the money, of course, but none of the high-end gaming cards are.
    • I was under the impression that the way these SLI things worked was that each card renders a different scanline; so the graphics performance would continue to increase as you added more cards (up to the total number of visible scanlines, obviously). Each new card just makes proportionally less difference.

      So if 4 cards take 1 ms each to render a scanline, and you have 1000 scanlines (and it was perfectly parallel) it would take 250ms to render the screen. 2 cards of the same performance would take 500ms. So
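
      Taking the parent's numbers at face value (1 ms per scanline, 1000 scanlines, perfectly parallel split), here is a tiny sketch of how ideal scanline-interleaved frame times would scale. The figures are the post's own assumptions, not real-world ones:

      # Idealized scanline-interleaved scaling: frame time shrinks linearly with card count.
      SCANLINES = 1000
      MS_PER_SCANLINE = 1.0   # from the parent post; real hardware is orders of magnitude faster

      for cards in (1, 2, 4, 8):
          frame_ms = SCANLINES * MS_PER_SCANLINE / cards
          print(f"{cards} card(s): {frame_ms:6.1f} ms/frame ({1000 / frame_ms:.0f} fps)")
      # 1000 ms -> 500 ms -> 250 ms -> 125 ms: each added card buys proportionally less,
      # and any per-frame work that can't be split caps the speedup in practice.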
        This is a correct assumption as far as I can tell. However, bottlenecking is still an issue. If the CPU and other associated hardware cannot handle the throughput straight from all four cards, you will see degradation in performance. It's much like the difference between a common garden hose and a pressure washer. Putting Quad SLI cards in with a normal CPU is like putting 3000 psi through a Wal-Mart garden hose. Having all the power in the world at the source won't do you a bit of extra good if it can
        • Surely a better analogy would be that of attaching a water cannon to a household tap. The adapter is capable of outputting a fast stream of water / rendered graphics, but if the tap / cpu can't supply water / rendering data at a fast enough rate then that capacity is wasted.

          What you really need for a quad-SLI setup is a super high-res monitor to give the cards some more work to do per frame. The CPU bottleneck will be less of an issue if the time to render each frame increases.
      • SLI used to be like that; now, each card renders the top half/bottom half iirc.
      • by Anonymous Coward
        1ms per scanline? Far faster, since that would give you 4 frames/second with one card. We're talking microseconds in real life ;-)
      • Unfortunately it's not quite so easy, as there is more that the video card needs to do than rasterisation (i.e., drawing scanlines). There are many other tasks, including edge clipping, face culling, etc., for which it is harder to split the load between graphics cards.

        I'm not sure exactly how well this is achieved in SLI but I'm sure there isn't a perfect 50/50 split, and there won't be a doubling of performance. I'm sure it's close though
      • Right, so Moore's law now means that today we buy 2 video cards, next year we buy 4, the year after that 8, and 20 years from now, we buy 2097152 video cards?
  • Octohead (Score:3, Interesting)

    by realnowhereman ( 263389 ) <andyparkins@nOsPam.gmail.com> on Tuesday March 14, 2006 @05:39AM (#14914378)
    I like the idea of an eight-head computer. I wonder what the price difference would be to equip a computer lab with octoheads instead of singles.

    In fact, if I could get some long enough wires, every television in my house could be just another head of one master computer. Master Control! Huzzah!
    • Why? (Score:3, Interesting)

      by WIAKywbfatw ( 307557 )
      I like the idea of an eight-head computer. I wonder what the price difference would be to equip a computer lab with octoheads instead of singles.

      Why would you do this? You risk losing eight desktops instead of just one to a single component failure (eg, a faulty motherboard).

      Standard entry-level desktops and workstations are commodity items now: their prices are so low, and they are so easy to acquire that I doubt that there would be much in the way of cost savings to be had when comparing eight single-CPU
      • You forgot to mention that in Windows, there normally are no functions for supporting eight keyboards/mice, locking them to a specific screen, and logging on as a different user per screen on a multi-head setup.
        Windows simply isn't a multi-user OS.
        I haven't tried it, but I'd wager you'd have to go through hell and back to get this working in linux too.

        But my guess is that he didn't want to use the one eight-headed box as a multi-user computer, but instead have eight monitors per workstation at the lab.

        At the
        • http://www.active-hardware.com/english/reviews/div/b210.htm [active-hardware.com] PC Buddy allowed for 5 KVM hookups to a Win 98 PC..

          (one original, 4 buddies) I installed one once- it worked well.... but it was mostly used as a second browsing terminal....

        • "I'd wager you'd have to go through hell and back to get this working in linux too"

          You would lose. It's trivial.

          • Link? My HTPC has two video cards, one driving the TV, and the other for the kids' computer setup, and I never figured out how to logically separate them into two different X consoles. Instead I'm using one extended display, which isn't what I really want... if the kids move the mouse cursor too far it goes onto the TV.
            • Make sure that you have two separate configuration files, and that you start X by specifying the specific one for each screen. You need at least one USB keyboard and mouse. You need to specify precisely which video card you are using, and which USB keyboard/mouse devices (versus the PS2 keyboard and mouse). You can't just use /dev/input/mice or /dev/input/keyboard, because they multiplex PS2 and USB devices (usually used in laptops).

              I believe GDM can be set up to do this (one login screen per monitor/keyboa
              • I'll try it. When I looked about 18 months ago it seemed you couldn't associate specific USB devices with specific displays, but perhaps using a PS/2 device for one and a USB for the other is a clever trick that will work. (Though I actually don't use a keyboard or mouse for the TV display at all... just a remote control and lirc.)


                • For naming USB mice, use /dev/input/mouse0, mouse1, mouse2, etc.
                  For keyboards, use the /dev/input/eventXX or the "physical" devices using the "evdev" keyboard driver.
                  See this document:
                  http://www.c3sl.ufpr.br/multiterminal/howtos/howto-evdev-en.htm [c3sl.ufpr.br]

                  You can use up to 4 SiS video cards, or a combination of Matrox cards, or even 4 NVidia GeForce2 MX cards if you use the NVidia-supplied driver.

                  Cool, eh?
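
                   A rough sketch of what the second seat's config file could look like in that era of X.Org. The BusID, the /dev/input paths and the nvidia driver name are assumptions to illustrate the idea; you'd replace them with what lspci and /proc/bus/input/devices report on the actual box:

                   Section "InputDevice"
                       Identifier "Keyboard1"
                       Driver     "evdev"
                       Option     "Device" "/dev/input/event4"   # second (USB) keyboard -- assumption
                   EndSection

                   Section "InputDevice"
                       Identifier "Mouse1"
                       Driver     "mouse"
                       Option     "Device" "/dev/input/mouse1"   # second (USB) mouse -- assumption
                   EndSection

                   Section "Device"
                       Identifier "Card1"
                       Driver     "nvidia"                       # whatever drives the second head
                       BusID      "PCI:2:0:0"                    # second graphics card -- check lspci
                   EndSection

                   Section "Screen"
                       Identifier "Screen1"
                       Device     "Card1"
                   EndSection

                   Section "ServerLayout"
                       Identifier  "seat1"
                       Screen      "Screen1"
                       InputDevice "Keyboard1" "CoreKeyboard"
                       InputDevice "Mouse1"    "CorePointer"
                   EndSection

                   The first seat keeps the usual xorg.conf with the PS/2 devices, and the second server gets started on its own display and VT (for example X :1 vt8 -config /etc/X11/xorg-seat1.conf), or handed to GDM as a second local server, as the GP suggests.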
                  • Looks like just what I need.

                    Now all I need is a multicore CPU. My kids just *hate* it when I fire up a movie and their videogame comes to a crawl :)

                  • Cool. I didn't bother to research the subject before my comment. I simply remembered how hard it was to bind separate shells to a specific gfx-card/keyboard when I last tried it in 1998 and assumed that it would be even harder to bind something to a specific monitor on a specific card.
                    Never tried it in X though...

                    Does this work on one gfx-card with several monitors attached too?
                    Just wondering since the "it's trivial" comment of jericho4.0 was about a four 2 x monitor card setup resulting in eight virtual wo
      • Re:Why? (Score:5, Interesting)

        by MikShapi ( 681808 ) on Tuesday March 14, 2006 @06:37AM (#14914522) Journal
        I'm not sure you're right.
        How much power do you need for your 8 PC's at home?
        Let's assume no more than one of them is actually a gaming rig.
        Let's assume 2 more are MPEG-4 decoding boxes.
        Let's assume another two run office apps. All concurrently.

        Prioritize your processes properly and a dual-core dual-processor rig will do this with modern mid-range processors.

        You can even do this with Windows using Jetway's Magic-Twin (I do this with 2 separate consoles and WinXP in my car).

        Further, since all your hard drives are piled in one place serving everyone, you can have a RAID-5 volume, a secondary backup volume to back up your entire RAID 5, and, if you really want to, RAID 6 as well. And you get to pool everyone's unused space together, greatly optimizing disk usage.

        RAM will be in demand, but not really a problem you can't solve with four 1-gig DIMMs stuck in (and a Gigabyte i-RAM with another 4 GB for swap if you really want to go overboard... though I'd wait for the SATA II version).

        Another great benefit is QUIET. The machine will be stuck away somewhere and make a lot of noise: fans, drives, the works. Your 8 workstations, though, will be silent as the grave.

        There's several other quirks you'd have to work out such as external peripherals (USB2 hubs wherever applicable), packet shaping for that 15-year-old daughter who wants to run P2P apps, and you'd have to keep the system clean of adware or else.

        All in all, for the amount of money 8 new entry-level home PCs would cost, I could build a hydra that would knock the socks off your home box in data reliability, speed, storage space, noiselessness and bragging rights, while availability stretches both ways (lose the mobo and you're fucked all the way, but lose a DIMM or a CPU and your box is just slightly slower till you get it replaced; lose a drive and you don't even feel it). Performance-wise it'd rock too, as most of the users are not using disk I/O most of the time, and a simple software SATA RAID 5 (or even a H/W one) with new drives would easily get into the 200-400 MB/sec ballpark, and when only one of the users is doing something that needs disk I/O it'd fly.

        Build it around a 3U or 4U chassis with server H/W and you're set :-)

        • Let's assume no more than one of them is actually a gaming rig. Let's assume 2 more are MPEG-4 decoding boxes. Let's assume another two run office apps. All concurrently.

          Just one question - how are you going to handle controlling those ("virtual") machines? That motherboard might have 8 outputs, but it only has one keyboard/mouse.

          Even worse - so far, the OS (and 3rd-party software) support for multi-head displays is... far from complete. I've had difficulties persuading a movie to show on the correct di

          • Just one question - how are you going to handle controlling those ("virtual") machines? That motherboard might have 8 outputs, but it only has one keyboard/mouse.
            He mentioned using USB hubs wherever applicable, so I'm guessing he's using USB hub(s) for I/O, and that's enough for 8 keyboards + 8 mice. I think a better idea would be to use some kind of wireless input devices (perhaps using Bluetooth). More pricey, but it beats routing USB cables all over your house (not to mention that you may need more USB hubs for
            • He mentioned using USB hubs wherever applicable, so I'm guessing he's using USB hub(s) for I/O, and that's enough for 8 keyboards + 8 mice.

              True. But that still leaves the matter of OS (and 3rd-party) support. With today's software, can you imagine 8 mice (in separate rooms) competing over the same cursor?

              • >> True. But that still leaves the matter of OS (and 3rd-party) support. With today's software, can you imagine 8 mice (in separate rooms) competing over the same cursor?

                There *IS* software that does this.

                I even said I use it in my car.

                You might need to pay Jetway a bit more to use it on an 8-console rig, and you'd definitely need to be using a magic-twin enabled mobo, but it works. It splits your windows into several desktops, each with its own kb, mouse and sound card. Connecting the actual keyboard
          • The mouse/keyboard problem is solved with USB, or a hardware solution (PCI card).

            The OS is any *NIX, of course, and your scenario will not happen. This is not a multi-headed display, it's many separate displays. X has been doing this for a long time.

        • You left out the proper OS. You seem to be suggesting doing this on Windows!? Yikes!
          A better way would be to run Windows in VM's only when you need Windows sessions. Host them on Linux. You'll get better support for multiple monitors that way. Spyware could still be a problem, but a contained one, since the firewall on the host could detect outgoing spyware/trojan connection attempts.

        • The main problem you'd face is how to pipe your video signals throughout the house. Even at a lowly 1024x768@60Hz, DVI is limited to 9 meters [scala.com].
          • They also claim that 1600x1200 at 60Hz is limited to 1 meter. I have piped 1600x1200 at 60hz through the 6' cable that came with my monitor with a 10' extension cable attached, with no problems. That's roughly 5 meters. As always, YMMV.
        • Ask yourself this: do you need eight PCs in eight locations in the house? A gaming rig and two PCs to run office apps on? No? Well neither do most people.

          In my ideal home network (based upon my home right now), I'd say the most I'd need are a gaming/office PC, a dedicated home theatre PC (that would service two locations around the house), and a RAID-5 server. Add to that a notebook or two, perhaps.

          Sure, if I had a bigger family or lived in a mansion that had TVs all over the place then I might think differ
          • By now, there are three computers used in this house, plus a fourth when I'm home for vacation, because there are four people who live here. There's also two TVs, one much bigger than the other, because my mother likes to have the TV on sometimes in the kitchen while she cooks/cleans/whatever, and everyone else likes to watch the much larger TV in the other room.

            Even if you count my laptop/desktop as one, since if (in theory) the laptop was powerful enough, I wouldn't need the desktop, that's still six hea
            • Well, once you introduce games to the mix, then you're truly screwed, because, let's face it, for games you want a recent Windows OS (2000, XP) and as much performance as a desktop can give you.

              You certainly don't want to be trying to play games on a machine that's giving you a Windows experience via Wine while at the same time it's playing back video to two other displays, recording a TV show or two, downloading something and processing a filter in Photoshop. It's just not practical.

              If there are a couple o
      • You risk losing eight desktops instead of just one to a single component failure (eg, a faulty motherboard).

        With resource sharing like this, it becomes far, far more cost-effective to use more reliable components. You can buy a high-end hot-swap power supply, instead of 8 individual ones, thereby making it more reliable, not less. Same thing goes for hard drives (RAID), RAM (ECC), etc.

        It's comparable to the current situation with printers... You could have a high-end document center shared by several hun

        • Well, in most computer lab environments the eight PCs will often be in use simultaneously, especially so in a teaching environment, such as a school, college or university.

          In such a setting, eight individual desktops are probably far more suitable for the job than a single eight-headed hydra and I challenge anyone to show me any significant savings that can be made here.

          Patching, maintenance and upgrades can easily be done simultaneously, and in the case of software, automated. If eight PCs are needed,
          • Well, in most computer lab environments the eight PCs will often be in use simultaneously, especially so in a teaching environment, such as a school, college or university.

            Yes, USED simultaneously, but everyone isn't going to be running their most CPU-intensive app at the same time.

            Even if they ARE, it's still price/performance-competitive with 8 individual lower-end PCs.

            If eight PCs are needed, then a ninth identical system can be bought too, as a spare if one goes down, giving a level of redundancy th

            • Thank you for explaining concepts like hot-swapping to me. It wasn't necessary, but thank you anyway.

              If I'm in charge of a computer lab, then I'd rather have fully-functional PCs that would allow me to keep 7/8ths running in case of a single component failure, and which could be easily administered by even the least experienced of sysadmins (because I won't always be there) than one that's reliant on a hydra configuration that could take down every machine if one key component fails and which might be a nig
              • Thank you for explaining concepts like hot-swapping to me. It wasn't necessary, but thank you anyway.

                Yes, well, it certainly does seem necessary to explain basic concepts, which you have been completely ignoring, or dismissing off-hand.

                You're not even challenging what I've said, you're just ignoring it. If you have a lab where you need complete idiots to be able to administer the machines, fine. You're completely dismissing this idea, not because it doesn't have numerous advantages, but because your p

                • OK, well it seems to me that you're looking at a computer lab that:

                  1. runs Unix/Linux clients;
                  2. runs apps that don't make huge demands on the processor or memory, and which never require sound;
                  3. doesn't require simultaneous use of resources;
                  4. has a server very close to all the displays.

                  And those are only some of the limitations and issues that you're ignoring.

                  Look at the big picture here. I'm not saying that dumb terminals can't work, I'm simply pointing out that the average computer lab is not a suitabl
                  • 1. runs Unix/Linux clients;

                    Yes, that much is a requirement.

                    and which never require sound;

                    Sound is pretty easy. You can get good 8-channel PCI sound cards for $20. Put two of those in the system, do some ALSA configuration tricks to make each pair of outputs act like a separate audio device (see the sketch below), and you've got 8 stereo channels... one for each user. No big deal.

                    3. doesn't require simultaneous use of resources;

                    For the price of your 9 desktops, I can put together a multi-core system that will be as fast as
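
                    For the sound trick mentioned above, here's a sketch of the kind of ~/.asoundrc entry that could carve one stereo pair out of an 8-channel card using ALSA's dshare plugin. The card name, ipc_key and channel numbers are assumptions for illustration; the ALSA documentation has the authoritative syntax:

                    # ~/.asoundrc sketch: expose channels 0/1 of an 8-channel card as one stereo device.
                    pcm.seat0 {
                        type plug
                        slave.pcm {
                            type dshare
                            ipc_key 1024          # any key shared by all seats using this card
                            slave {
                                pcm "hw:0"        # first 8-channel sound card -- assumption
                                channels 8
                            }
                            bindings.0 0          # this seat's left channel  -> card channel 0
                            bindings.1 1          # this seat's right channel -> card channel 1
                        }
                    }
                    # Repeat with bindings 2/3, 4/5, 6/7 (and hw:1 for the second card) for the other seats.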

    • Graaaarrgh! I am OCTOHEAD, the man with the broadest shoulders in the universe!!!
  • Could you use Quad Display PCI Express like ATI's FireMV(TM) 2400 [ati.com]?

    ( 4 Displays * 4 PCI Express X16 slots = 16 Screens ) +
    ( 4 Displays * 2 PCI Express x1 slots = 8 Screens ) +
    ( 2 Displays * 1 PCI slot = 2 Screens )
    = a total of 26 displays.

    It's a pity it is not a multiprocessor Opteron system...
    See April 2005's How Many Desktop PCs Can One Server Replace? [slashdot.org]

  • by Ka D'Argo ( 857749 ) on Tuesday March 14, 2006 @06:04AM (#14914437) Homepage
    The sad part is, I'd venture to guess in the next couple years, more games or even applications are going to require dual, triple or even quad video cards to reach a running state.

    Note I didn't say optimal performance or peak efficiency or any other term to make it seem like more cards would just equal "OMFG MORE FPS, YESSZZZ!". No. With games like BF2 that are starting to require specific hardware components, coupled with how much things like DirectX are a huge factor in games, you are going to need massive amounts of GPU power to get a lot of stuff to run.

    I mean (not to plug them or anything) but look at games like Project Offset, which plans for real-time rendering of everything, no cutscenes, nothing. The processing power that game is going to need is astronomical. I bet it will hit at least a 2 PCIe-card requirement with at least 1.5 or 2 gigs of RAM and a 3.0+ GHz processor, probably 3.4+. And we all remember how system-intensive past games like Far Cry were; imagine cranking out a game that's five times as demanding as Far Cry or even P:O, and you're going to require so much raw processing power it's insane.

    Which itself is within the true nature of computing: technology evolves, advances, grows faster or more powerful or more advanced. I still think it's sad though. I mean, you look at some of the top-of-the-line cards required for games these days, and they are insanely priced ($200, $300, even $400-500 or more). And yes, while you can go with something slightly slower and save a lot of money, as I originally said I think it will hit the point where games simply will not run without X amount of cards or equipment. Just like I can't run modern games like BF2 or HL2 on my current setup, same thing in a few years for people wanting that hot new title that needs quad cards. The price will be fucking outrageous too. You thought $400 for an Xbox 2 was bad; wait until you need to drop $300 per graphics card, two, three or four times over, plus all the other components, just to play games.

    Nvidia and ATI are wetting themselves awaiting that day. Why sell them one GPU when the game they want needs quad cards just to execute?

    • by ickeicke ( 927264 ) on Tuesday March 14, 2006 @06:36AM (#14914518)
      This has been an issue for years. There are always upcoming games that seem to require insane systems (the "recommended 1GHz CPU for Max Payne when the rest of the world thought 500 MHz was decent" era comes to mind), but it's just a matter of time before those systems are the new norm.

      And you fear that within a few years there will be games that require 2 or 4 $300 GPUs just to get the game running. How many game developers would make games that only run on a small fraction of PCs? They want to get a decent audience, and to realise such an audience, the technology has to be available at reasonable prices.

      The whole hardware/software market is both self-regulating (releasing games with insane requirements does not work) and self-stimulating (higher software requirements boost hardware technology and sales, and better hardware results in software with better graphics).

      BTW; Happy Pi Day [wikipedia.org]!
      • Nicely put. But also, we're moving toward a world of cheaper laptops, wireless, media centres and consoles that can look almost as pretty as a mainstream PC from the comfort of your sofa. A dedicated PC for gaming will always appeal to a few people but for a lot of people their laptop graphics chip is good enough, or the console is good enough, or the budget games they can buy for their old PC are good enough. Only a few people are motivated enough to buy a new PC for games (err, I am one of those peopl
      • Most of these resource-hog games are FPSs anyway. Not only will they price themselves out of the market with their hardware demands, but people won't keep spending big $ on new hardware just to play the same old shoot-em-ups. I wouldn't be surprised if there is a backlash and people go for retro games that were played on 8-bit systems. I'm seeing some of that now, to be honest. Bigger demands don't make a game better; look at chess.
      • Yes, but with faster hardware always on the horizon, is there a real push for highly optimized software for 3D games?

        Or would they prefer to be first to market at the expense of a slightly slower (fps) game? After all, if they don't push too hard to make the software run fast/efficiently, it will push the hardware companies to advertise/sell faster cards.... And when one gets a faster video card, they tend to want to try it on as many games as possible....

    • You won't need four cards to run things; remember that the individual cards themselves are improving too. I can't think of a game that REQUIRES more than a 7800 or X1800.
    • You have to remember that game developers don't want to exclude most of their audience. Yes, new games require cards with full DX9 support, but even the cheapest $50 GeForce FX5200 fully supports DX9.

      As for CPUs, how much does an AthlonXP2000+ cost on ebay? $50, $60?

      I have an AthlonXP 1500 that only runs at 1100MHz due to my crap motherboard, and an FX5200, and I played HL2 all the way through on it.

      By the time games require 2 GPUs, dual core GPU cards will be present throughout the market price range. T

    • "The sad part is, I'd venture to guess in the next couple years, more games or even applications are going to require dual, triple or even quad video cards to reach a running state."

      That'd be really sad if it were based on some facts, but you're just being fatalist and speculating about stuff that won't happen. Half-Life 2 and Doom 3 run on a GeForce4 MX, a GeForce4 MX!! That's a crappy DX7 card, no pixel shaders or anything, but the games still run smooth and are playable even if some special effects are missin
    • The sad part is, I'd venture to guess in the next couple years, more games or even applications are going to require dual, triple or even quad video cards to reach a running state.
      I'd venture to guess fewer and fewer games will be designed to exceed the capacity of a $300 game console.
  • by HalfFlat ( 121672 ) on Tuesday March 14, 2006 @06:16AM (#14914468)

    There did not appear to be much written in the review on the way the PCIe lanes could be configured. Apparently the default is that the four physical 16-lane slots are electrically 1-lane, 16-lane, 16-lane and 1-lane respectively.

    What excites me about such a board is the possibility of having simultaneously a fast SLI rendering set-up, together with fast I/O with 10Gbit ethernet and SAS. Having everything on PCIe rather than a mix of PCIe for graphics and PCI-X for I/O cards would allow more flexibility (at least, once there is a bit more range available in PCIe non-graphics cards!). Yet, if the configuration of channels only allows 1-lane on all but two of the slots, then it's not going to work out.

    • Unfortunately you couldn't run a Dual 16x SLI setup and a 10Gb Ethernet and a SAS controller on this board.

      A PCIe x4 link will only shift 8 Gbit/s of data ((2.5 Gbit/s * 4 = 10 Gbit/s) * 8/10 (for 8b/10b encoding) = 8 Gbit/s).

      4 SAS lanes are in theory about 10 Gbit/s of data, and obviously 10Gb Ethernet is also 10 Gbit/s.

      So to have an all-singing, all-dancing SLI + 10GbE + SAS setup you would need (2 x 16 lanes = 32) + (2 x 8 lanes = 16) = 48 lanes total (a quick check of the lane numbers is sketched below).

      Still not a bad improvement.

      I would be looking more towards future PCIe 2 spec based system for putting a system lik
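
      A minimal sketch double-checking those lane numbers, assuming PCIe 1.x signalling (2.5 GT/s per lane) and the 8b/10b encoding overhead mentioned above; nothing here is specific to this board:

      # Usable PCIe 1.x bandwidth per link width, after 8b/10b encoding (8 data bits per 10 line bits).
      LINE_RATE_GBPS = 2.5      # GT/s per lane
      ENCODING = 8 / 10

      def usable_gbit_per_s(lanes):
          return lanes * LINE_RATE_GBPS * ENCODING

      for lanes in (1, 4, 8, 16):
          g = usable_gbit_per_s(lanes)
          print(f"x{lanes:<2}: {g:5.1f} Gbit/s (~{g / 8:.1f} GB/s) per direction")
      # An x4 link tops out around 8 Gbit/s, which is why a 10GbE card or a 4-lane SAS
      # HBA really wants an x8 slot to run flat out.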

  • Recent introductions of NVIDIA's SLI [...] technology


    Um, right. If by 'NVIDIA' you mean '3DFX' and by 'recent' you mean 'ten years ago'.

    Sheesh. Kids these days, they got no respect.
    • Re:recent? nvidia? (Score:5, Informative)

      by MrTufty ( 838030 ) on Tuesday March 14, 2006 @06:56AM (#14914571)
      3DFX SLI = Scan Line Interleaving
      Nvidia SLI = Scalable Link Interface

      Yes, Nvidia based their version on the ideas they acquired from 3DFX when they bought them out, but the actual techniques they use now are much more advanced. IIRC, the driver does automatic load-balancing, in the sense that if there are more polygons on one section of the screen than another, the rendering will be split so that each card still renders approximately half of them - even if that means one card is doing 75% of the actual screen resolution.
    • Re:recent? nvidia? (Score:3, Informative)

      by cyranose ( 522976 )
      Try 12-14 years ago and SGI instead of 3dfx. SLI is pretty close to the multi-pipe configurations SGI had on their Onyx systems -- generally up to 3 parallel RealityEngine pipes in a single machine.

      Of course, that machine cost upwards of $700k. But multiple CPUs (2,3,4) were pretty typical.
  • by this great guy ( 922511 ) on Tuesday March 14, 2006 @06:27AM (#14914496)

    I would love to see a quad-Opteron mobo with four x16 PCIe slots, but arranged in a way that traffic is spread across all HT links. So that I could use it to put 4 PCIe SATA cards, and have the highest possible read/write I/O throughput for a Linux software RAID array. Hardware RAID is out of the question, since no vendor offers a way to create arrays of disks across 3 or more cards. An Opteron has 3 HT links: 2 of them could be used as coherent links to other CPUs, and 1 of them could be used as a link to an external PCIe bridge chipset. The solution I would like to see implemented is one where 4 PCIe bridge chipsets would each be connected to their own Opteron, via their own HT link, and each PCIe bridge chipset could provide at least one x16 slot.

    Some numbers: each of the four x16 PCIe buses would allow for 2500 MT/s * 16 bits / 8 = 5000 MB/s of traffic in each direction. And each of the 4 HT links: 1600 MT/s * 16 bits / 8 = 3200 MB/s. The global amount of I/O would be 3200 MB/s * 4 = 12.8 GB/s in each direction! (The HT links are the bottleneck.) To resolve this bottleneck AMD would either need to increase the link width from 16x16 to 32x32 bits or increase the signal frequency from 800 MHz to 1.25 GHz (the current limit is 1 GHz for coherent links and 800 MHz for the ones facing the outside world -- chipsets seem to lag a little bit regarding HT frequency). A quick re-check of these numbers is sketched at the end of this comment.

    But for some reason no manufacturer has ever designed such a board (Tyan only did it with 2 PCIe chipsets on their S2895 mobo). Why oh why is that the case?! It seems like nobody understands the true potential of HT. This could provide a low-cost solution to so many perf issues I have seen in the various companies I have worked for... Argh!
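
    A quick sketch re-deriving the figures above (raw signalling rates; PCIe 8b/10b and HyperTransport packet overhead are deliberately ignored here, as they are in the post):

    # Raw per-direction bandwidth of the links described above.
    PCIE_MTS_PER_LANE = 2500     # PCIe 1.x: 2500 MT/s per lane, 1 bit wide
    PCIE_LANES = 16
    HT_MTS = 1600                # 800 MHz DDR HyperTransport
    HT_WIDTH_BITS = 16
    NUM_HT_LINKS = 4

    pcie_x16_mb_s = PCIE_MTS_PER_LANE * PCIE_LANES / 8      # -> 5000 MB/s per x16 slot
    ht_link_mb_s = HT_MTS * HT_WIDTH_BITS / 8               # -> 3200 MB/s per HT link
    aggregate_gb_s = NUM_HT_LINKS * ht_link_mb_s / 1000     # -> 12.8 GB/s across 4 links

    print(f"x16 PCIe slot: {pcie_x16_mb_s:.0f} MB/s each way")
    print(f"HT link:       {ht_link_mb_s:.0f} MB/s each way (the per-slot bottleneck)")
    print(f"4 HT links:    {aggregate_gb_s:.1f} GB/s aggregate, each way")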

      Some numbers: each of the four x16 PCIe buses would allow for 2500 MT/s * 16 bits / 8 = 5000 MB/s of traffic in each direction. And each of the 4 HT links: 1600 MT/s * 16 bits / 8 = 3200 MB/s. The global amount of I/O would be 3200 MB/s * 4 = 12.8 GB/s in each direction! (The HT links are the bottleneck.) To resolve this bottleneck AMD would either need to increase the link width from 16x16 to 32x32 bits or increase the signal frequency from 800 MHz to 1.25 GHz (the current limit is 1 GHz for coherent links and 800

        • Of course I know that 12.8 GB/s is a theoretical value. But even reaching a third of that value is totally impossible with current mobos :( Yet I could build a box with 4.2 GB/s of potential I/O: four 16-port PCIe SATA cards with 16*4 = 64 disks. A modern SATA disk can sustain about 65 MB/s of read/write operations, and 64 * 65 MB/s = 4.2 GB/s. Such boxes do exist today, but cannot realize their full potential because of slow PCI-X busses (PCI-X 2 can alleviate the situation, but PCI-X 2 mobos are VERY RA

    • I would love to see a quad-Opteron mobo with four x16 PCIe slots, but arranged in a way that traffic is spread across all HT links. So that I could use it to put 4 PCIe SATA cards, and have the highest possible read/write I/O throughput for a Linux software RAID array. Hardware RAID is out of the question, since no vendor offers a way to create arrays of disks across 3 or more cards. An Opteron has 3 HT links: 2 of them could be used as coherent links to other CPUs, and 1 of them could be used as a lin

      • See my comment about 4 PCIe cards here [slashdot.org]. The whole point of what I am proposing is precisely to be able to use regular commodity hardware to do tasks that, nowadays, can only be accomplished using high-end expensive gear. This is critical for some businesses. Look at Google: their whole architecture is built with commodity hardware.

    • So that I could use it to put 4 PCIe SATA cards, and have the highest possible read/write I/O throughput for a Linux software RAID array.

      I expect the storm of interrupts would bring this system of yours to a crawl long before you got even a fraction of that throughput.

  • Finally, a board that can run a 2-card nVidia SLI configuration alongside a 2-card ATI CrossFire config.

    No more bickering which one runs this or that game better - just use the right tool for the job, no swapping of cards required.

    Granted, you'd have to move to Antarctica to cool this sucker, but that should be no problem as long as the pizza delivery guy can get there, too. And just think of all the heating equipment you can replace with this rig...
  • This motherboard will be a better option for those who want to run 8 monitors at a time... but I don't think it has much importance in day-to-day computing, as you won't have any PCI slots left, so fitting a sound card or TV tuner card will be a problem. People doing video editing or photo editing could really benefit from it, though. And about quad SLI, I'd say dual GPUs on a single card are the better option, but you'd still be wasting all the PCI slots. I don't think we need 4x 7900GTX even we
  • VMware? (Score:3, Informative)

    by Mostly a lurker ( 634878 ) on Tuesday March 14, 2006 @06:42AM (#14914537)
    This could be nice for a big VMware setup but, if my memory serves me right, VMware has problems with multi head setups. Assuming it works, I may need to look for a larger desk!
    • Yeah, imagine the memory and the CPU needed to run all those systems ;) And the resource conflicts they would have :o It's better if you use all those heads for watching the same porn
    • I run VMware with a triple-monitor setup and have had no real problems with 5.5. The only minor annoyance is that when the VMware session is maximized in one monitor it will not release the mouse automatically. You have to escape out of full screen to get the mouse out of the session.
  • yay porn! (Score:4, Funny)

    by Errtu76 ( 776778 ) on Tuesday March 14, 2006 @06:55AM (#14914568) Journal
    I don't know why, but since I'm on /. and it includes support for more displays, I automatically think of porn. Maybe it's the crowd ....
  • nVidia Quad SLI is based around two dual GPU video cards running as a pair on a motherboard with two PCIe slots.
  • by JollyFinn ( 267972 ) on Tuesday March 14, 2006 @07:17AM (#14914616)
    Some people say it's a really bad thing, as it would raise the performance requirements of games.
    BULL**. It will make some EXTRA high-end stuff possible. Games are designed for something that's more mainstream, and very high-end systems are for extra eye candy. Game companies probably cannot convince the majority of their target market to upgrade to SLI, so the majority of the money comes from those who don't have SLI systems.
    But what does quad SLI give us?
    Well, the first use that comes to my mind is 30" displays, you know, the things that have slightly over double the pixels of 20" displays. You need twice the power for equal 3D performance on a 30" compared to a 20", or four times that of low-end displays.
    Another point is that antialiasing takes some performance out of the card.
    QUAD SLI isn't cheap so it won't be mainstream, so it won't be the MAIN target for game developers. However, it's supported.

    One use for QUAD SLI: when making a game you need to design for the performance of a typical system in 4-5 years, which is probably twice or quadruple the performance of the current high end. Think about it: the memory bandwidth of a SINGLE card has quadrupled in that kind of time frame, and computational performance has increased more than 12 times.

    As for free slots: there are 7 places on the case to put a card. For a game system a sound card is a must; that leaves 2 free slots with quad SLI. For gaming, those two slots *COULD* get a NIC or some extra RAID card, but neither of those is a top-of-the-list item since the onboard ones could be considered good enough.

    The only problem is that with 4 gfx cards, dual-slot cooling isn't reasonable. Some high-end cards already don't have dual-slot cooling, and water cooling IS an option for these kinds of pricey high-end systems.
  • but it does offer support for eight simultaneous display outputs on four Graphics cards.

    Cool! Now I can fight myself in a lightsaber battle on 8 different screens!
  • Remember that those "graphics cards" are high-performance processors that can perform more "general purpose" tasks: GPGPU [gpgpu.org]. I'd love to see a Linux kernel that is basically just a task scheduler for distributing computing among a network of GPGPU cards on these multiple PCI buses. Scalable desktop supercomputers running Linux apps.
    • General purpose as in "other massively parallel computations, sometimes with somewhat sloppy precision requirements", not as in "massive databases". Scientific computing, yes, sometimes with precision problems. Helping your server to survive /.ing, no.
      • More research will make GPGPU ever more "general". As will transforming computational patterns into dataflow processing, especially linear equations (vector processing).

        Meanwhile, I just want to stuff a PC with cheap videocards and make an MP3 compression supercomputer running Linux.
  • My first thought about this was that it would make a great HPC box. I think boards like this might move the I/O folks to x16 PCIe. As far as I know nothing besides graphics has gone past x8. InfiniBand SDR 4X and some 10GbE cards could go to x16 and have dual ports. Finally a port that could run that dual-port DDR 4X IB card... or that SDR 12X IB card. Bandwidth for I/O is getting better all the time... at some point it will get close enough to memory speeds that a whole new set of distributed applications

"If it ain't broke, don't fix it." - Bert Lantz

Working...