Quad PCIe Motherboard 147
SlipKid writes "PCI Express graphics cards have allowed for some new and innovative ways to increase rendering horsepower in desktops and workstations. The recent introductions of NVIDIA's SLI and ATI's CrossFire technology have enabled dual PCIe graphics cards in a
load-sharing architecture. Motherboard manufacturers are now jumping into the fray, and Gigabyte has released a quad PCI Express graphics-enabled motherboard capable of running four cards at once. The board cannot run Quad SLI, mostly due to a current lack of NVIDIA driver support, but it does offer support for eight simultaneous display outputs on four graphics cards."
I'd rather have some NICs, soundcards, etc. (Score:5, Insightful)
Re:I'd rather have some NICs, soundcards, etc. (Score:4, Insightful)
And yes, there are plenty of 8 lane PCI-e cards which aren't graphics. There are NICs as well as hardware RAID controllers which can push that much data.
Re:I'd rather have some NICs, soundcards, etc. (Score:1)
Re:I'd rather have some NICs, soundcards, etc. (Score:2)
Nope, nothing wrong with your arithmetic (Score:2)
Re:I'd rather have some NICs, soundcards, etc. (Score:2)
16, at 96 or even 192 kHz. People who do serious audio work do it with cards requiring a lot more than consumer soundcards like the latest offering from Creative.
Re:I'd rather have some NICs, soundcards, etc. (Score:2)
Right then, let's assume 32-bit sampling at 192kHz. That is approx 768 kilobytes per second per channel. A plain old PCI bus can do 133 megabytes per second - that's well over 100 simultaneous channels even allowing for overhead.
Would you like to explain to us in a bit more detail why you think someone needs a 16 lane PCIe soundcard? I breathlessly await your b
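A quick back-of-the-envelope check of the figures above (a sketch assuming 4-byte samples and the classic 133 MB/s 32-bit/33 MHz PCI bus):

```python
bytes_per_channel = 4 * 192_000      # 32-bit samples at 192 kHz: 768 KB/s per channel
pci_bus = 133 * 1_000_000            # plain 32-bit/33 MHz PCI: ~133 MB/s

print(bytes_per_channel)             # 768000
print(pci_bus // bytes_per_channel)  # ~173 channels before any overhead
```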
Re:I'd rather have some NICs, soundcards, etc. (Score:2)
A 16x PCI Express slot should be able to sustain quite a bit of bandwidth.
Re:Again with the fucking strawmen (Score:2)
Re:I'd rather have some NICs, soundcards, etc. (Score:2)
How about a twin tuner DVB TV card?
http://www.reghardware.co.uk/2006/02/27/review_te
Re:I'd rather have some NICs, soundcards, etc. (Score:2)
Re:I'd rather have some NICs, soundcards, etc. (Score:2)
Re:I'd rather have some NICs, soundcards, etc. (Score:2)
Re:I'd rather have some NICs, soundcards, etc. (Score:1)
Re:I'd rather have some NICs, soundcards, etc. (Score:2)
Even if you could do Quad SLI... (Score:5, Interesting)
Re:Even if you could do Quad SLI... (Score:4, Informative)
Re:Even if you could do Quad SLI... (Score:2, Interesting)
So if 4 cards take 1 ms each to render a scanline, and you have 1000 scanlines (and it was perfectly parallel) it would take 250ms to render the screen. 2 cards of the same performance would take 500ms. So
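The scanline-splitting arithmetic above can be checked directly (perfect parallelism assumed, as the comment notes):

```python
scanlines = 1000
ms_per_line = 1  # time for one card to render one scanline

for cards in (1, 2, 4):
    frame_ms = scanlines * ms_per_line / cards
    print(cards, "cards:", frame_ms, "ms per frame")
# 4 cards: 250.0 ms; 2 cards: 500.0 ms
```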
Re:Even if you could do Quad SLI... (Score:1)
Re:Even if you could do Quad SLI... (Score:1)
What you really need for a quad-SLI setup is a super-high-res monitor to give the cards some more work to do per frame. The CPU bottleneck will be less of an issue if the time to render each frame increases.
Re:Even if you could do Quad SLI... (Score:2)
Re:Even if you could do Quad SLI... (Score:1)
oops (Score:1)
Re:Even if you could do Quad SLI... (Score:1, Insightful)
Re:Even if you could do Quad SLI... (Score:1)
I'm not sure exactly how well this is achieved in SLI, but I'm sure there isn't a perfect 50/50 split, and there won't be a doubling of performance. I'm sure it's close, though.
Re:Even if you could do Quad SLI... (Score:2)
Re:Even if you could do Quad SLI... (Score:2)
Since a top of the line card costs about $500, then in today's dollars that would mean you'd have to have $1,048,576,000 lying around.
I'm sure there is someone at NVidia making that business case.
Re:Even if you could do Quad SLI... (Score:1)
Octohead (Score:3, Interesting)
In fact, if I could get some long enough wires, every television in my house could be just another head of one master computer. Master Control! Huzzah!
Why? (Score:3, Interesting)
Why would you do this? You risk losing eight desktops instead of just one to a single component failure (eg, a faulty motherboard).
Standard entry-level desktops and workstations are commodity items now: their prices are so low, and they are so easy to acquire that I doubt that there would be much in the way of cost savings to be had when comparing eight single-CPU
Re:Why? (Score:2)
Windows simply isn't a multi user OS.
I haven't tried it, but I'd wager you'd have to go through hell and back to get this working in Linux too.
But my guess is that he didn't want to use the eight-headed machine as a multi-user computer but instead have eight monitors per workstation at the lab.
At the
Ummmm. (Score:2)
(one original, 4 buddies) I installed one once- it worked well.... but it was mostly used as a second browsing terminal....
Re:Why? (Score:2)
You would lose. It's trivial.
Re:Why? (Score:2)
You need to run two instances of X. (Score:3, Informative)
I believe GDM can be set up to do this (one login screen per monitor/keyboa
Re:You need to run two instances of X. (Score:2)
Re:You need to run two instances of X. (Score:2)
For naming USB mice, use
For keyboards, use the
See this document:
http://www.c3sl.ufpr.br/multiterminal/howtos/howto-evdev-en.htm [c3sl.ufpr.br]
You can use up to 4 SiS video cards, or a combination of Matrox cards, or even 4 NVidia GeForce2 MXs if you use the NVidia-supplied driver.
Cool, eh?
Re:You need to run two instances of X. (Score:2)
Now all I need is a multicore CPU. My kids just *hate* it when I fire up a movie and their videogame comes to a crawl :)
Re:You need to run two instances of X. (Score:2)
Never tried it in X though...
Does this work on one gfx-card with several monitors attached too?
Just wondering since the "it's trivial" comment of jericho4.0 was about a four 2 x monitor card setup resulting in eight virtual wo
Re:Why? (Score:5, Interesting)
How much power do you need for your 8 PC's at home?
Let's assume no more than one of them is actually a gaming rig.
Let's assume 2 more are MPEG-4 decoding boxes.
Let's assume another two run office apps. All concurrently.
Prioritize your processes properly and a dual-core dual-processor rig will do this with modern mid-range processors.
You can even do this with Windows using Jetway's Magic-Twin (I do this with 2 separate consoles and WinXP in my car).
Further, with all your hard drives piled in one place serving everyone, you get a RAID-5 volume, a secondary backup volume to back up your entire RAID-5, and if you really want to you can go RAID-6 as well. And you get to pool everyone's unused space together, greatly optimizing disk usage.
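The pooled-storage benefit can be sketched with a little arithmetic (disk count and sizes below are made-up illustrative numbers, not from the comment):

```python
def raid5_usable_gb(disks, size_gb):
    # RAID-5 sacrifices one disk's worth of capacity to parity
    return (disks - 1) * size_gb

def raid6_usable_gb(disks, size_gb):
    # RAID-6 sacrifices two disks' worth, surviving any two failures
    return (disks - 2) * size_gb

# Hypothetical: pool eight 250 GB drives instead of one drive per workstation
print(raid5_usable_gb(8, 250))  # 1750 GB usable, tolerates one drive failure
print(raid6_usable_gb(8, 250))  # 1500 GB usable, tolerates two drive failures
```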
RAM will be in demand, but not really a problem you can't solve with four 1-gig DIMMs stuck in (and a Gigabyte i-RAM with another 4G for swap if you really want to go overboard... though I'd wait for the SATA2 version).
Another great benefit is QUIET. The machine will be stuck away somewhere and make a lot of noise - fans, drives, the works. Your 8 workstations, though, will be silent as a grave.
There are several other quirks you'd have to work out, such as external peripherals (USB2 hubs wherever applicable), packet shaping for that 15-year-old daughter who wants to run P2P apps, and you'd have to keep the system clean of adware or else.
All in all, for the amount of money 8 new entry-level home PCs would cost, I could build a hydra that would knock the socks off your home box in data reliability, speed, storage space, noiselessness and bragging rights, whereas availability stretches both ways (lose the mobo and you're fucked all the way, but lose a DIMM or a CPU and your box is slightly slower till you get it replaced; lose a drive and you don't even feel it). Performance-wise it'd rock too, as most of the users are not using disk I/O most of the time, and a simple software SATA RAID-5 (or even a H/W one) with new drives would easily go into the 200-400MB/sec ballpark, and when only one of the users is doing something that needs disk I/O it'd fly.
Build it around a 3U or 4U chassis with server H/W and you're set.
Re:Why? (Score:1)
Just one question - how are you going to handle controlling those ("virtual") machines? That motherboard might have 8 outputs, but it only has one keyboard/mouse.
Even worse - so far, the OS (and 3rd-party software) support for multi-head displays is... far from complete. I've had difficulties persuading a movie to show on the correct di
Re:Why? (Score:1)
He mentioned using USB hubs whenever possible, so I'm guessing he is using USB hubs for I/O, and that's enough for 8 keyboards + 8 mice. I think a better idea would be to use some kind of wireless input devices (perhaps using Bluetooth). More pricey, but beats routing USB cables all over your house (not to mention that you may need more USB hubs for
Re:Why? (Score:1)
True. But that still leaves the matter of OS (and 3rd-party) support. With today's software, can you imagine 8 mice (in separate rooms) competing over the same cursor?
Re:Why? (Score:2)
There *IS* software that does this.
I even said I use it in my car.
You might need to pay Jetway a bit more to use it on an 8-console rig, and you'd definitely need to be using a magic-twin enabled mobo, but it works. It splits your windows into several desktops, each with its own kb, mouse and sound card. Connecting the actual keyboard
Re:Why? (Score:2)
The OS is any *NIX, of course, and your scenario will not happen. This is not a multi-headed display, it's many separate displays. X has been doing this for a long time.
you left out something important! (Score:2)
A better way would be to run Windows in VM's only when you need Windows sessions. Host them on Linux. You'll get better support for multiple monitors that way. Spyware could still be a problem, but a contained one, since the firewall on the host could detect outgoing spyware/trojan connection attempts.
Re:Why? (Score:2)
Re:Why? (Score:2)
Re:Why? (Score:2)
In my ideal home network (based upon my home right now), I'd say the most I'd need are a gaming/office PC, a dedicated home theatre PC (that would service two locations around the house), and a RAID-5 server. Add to that a notebook or two, perhaps.
Sure, if I had a bigger family or lived in a mansion that had TVs all over the place then I might think differ
My house, then. (Score:2)
Even if you count my laptop/desktop as one, since if (in theory) the laptop was powerful enough, I wouldn't need the desktop, that's still six hea
Re:My house, then. (Score:2)
You certainly don't want to be trying to play games on a machine that's giving you a Windows experience via Wine while at the same time it's playing back video to two other displays, recording a TV show or two, downloading something and processing a filter in Photoshop. It's just not practical.
If there are a couple o
Re:Why? (Score:2)
With resource sharing like this, it becomes far, far more cost-effective to use more reliable components. You can buy a high-end hot-swap power supply, instead of 8 individual ones, thereby making it more reliable, not less. Same thing goes for hard drives (RAID), RAM (ECC), etc.
It's comparable to the current situation with printers... You could have a high-end document center shared by several hun
Re:Why? (Score:2)
In such a setting, eight individual desktops are probably far more suitable for the job than a single eight-headed hydra and I challenge anyone to show me any significant savings that can be made here.
Patching, maintenance and upgrades can easily be done simultaneously, and in the case of software, automated. If eight PCs are needed,
Re:Why? (Score:2)
Yes, USED simultaneously, but everyone isn't going to be running their most CPU-intensive app at the same time.
Even if they ARE, it's still price/performance-competitive with 8 individual lower-end PCs.
Re:Why? (Score:2)
If I'm in charge of a computer lab, then I'd rather have fully-functional PCs that would allow me to keep 7/8ths running in case of a single component failure, and which could be easily administered by even the least experienced of sysadmins (because I won't always be there) than one that's reliant on a hydra configuration that could take down every machine if one key component fails and which might be a nig
Re:Why? (Score:2)
Yes, well, it certainly does seem necessary to explain basic concepts, which you have been completely ignoring, or dismissing off-hand.
You're not even challenging what I've said, you're just ignoring it. If you have a lab where you need complete idiots to be able to administer the machines, fine. You're completely dismissing this idea, not because it doesn't have numerous advantages, but because your p
Re:Why? (Score:2)
1. runs Unix/Linux clients;
2. runs apps that don't make huge demands on the processor or memory, and which never require sound;
3. doesn't require simultaneous use of resources;
4. has a server very close to all the displays.
And those are only some of the limitations and issues that you're ignoring.
Look at the big picture here. I'm not saying that dumb terminals can't work, I'm simply pointing out that the average computer lab is not a suitabl
Re:Why? (Score:2)
Yes, that much is a requirement.
Sound is pretty easy. You can get good 8-channel PCI sound cards for $20. Put two of those in the system, and do some ALSA configuration tricks to make each pair of outputs act like a separate audio device, and you've got 8 stereo channels... One for each user. No big deal.
For the price of your 9 desktops, I can put together a multi-core system that will be as fast as
Re:Octohead (Score:1)
How Many Desktop PCs Can One Server Replace? (Score:2)
( 4 Displays * 4 PCI Express X16 slots = 16 Screens ) +
( 4 Displays * 2 PCI Express x1 slots = 8 Screens ) +
( 2 Displays * 1 PCI slot = 2 Screens )
= a total of 26 displays.
It's a pity it is not a multiprocessor Opteron system...
See 2005 April's How Many Desktop PCs Can One Server Replace? [slashdot.org]
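The display count above checks out (slot mix and displays-per-card as quoted in the comment):

```python
# x16 slots + x1 slots + legacy PCI slot, displays per card as listed
screens = (4 * 4) + (4 * 2) + (2 * 1)
print(screens)  # 26
```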
Re: How Many Desktop PCs Can One Server Replace? (Score:1)
But Linux has no such limits (Score:2)
Within the 7.5 meter cable distance imposed by DVI (Score:2)
4 + 8 * 3 = 28. Easy fit.
Managing Multiple USB with HAL (Score:2)
By allocating one external hub to each "station" you can use a HAL config script to "allocate" each hub to its station. All it takes then is to customise the GNOME/X Display Manager to grant read/write access to devices plugged into that hub for the user logging in to the X session.
A little cement on the hub could lock in the keyboard and mouse.
Load bearing the nice Unix way. (Score:2)
The above setup may not be really suitable for full-screen 30fps multimedia movies (I have experimented with VLC
Just wait a few more years (Score:4, Interesting)
Note I didn't say optimal performance or peak efficiency or any other term to make it seem like more cards would just equal "OMFG MORE FPS, YESSZZZ!". No. With games like BF2 that are starting to require specific hardware components, coupled with how much things like DirectX are a huge factor in games, you are going to need massive amounts of GPU power to get a lot of stuff to run.
I mean (not to plug them or anything) but look at games like Project Offset, which plans for real-time rendering of everything - no cutscenes, nothing. The processing power needed for that game is going to be astronomical. I bet it will hit at the least a 2 PCI-E card requirement with at least 1.5 or 2 gigs of RAM and a 3.0+ GHz processor, probably 3.4+. And we all remember how demanding past games like Far Cry were; imagine cranking out a game that's five times as demanding as Far Cry or even P:O - you're going to require so much raw processing power it's insane.
Which itself is within the true nature of computing: technology evolves, advances, grows faster or more powerful or more advanced. I still think it's sad though. I mean, you look at some of the top-of-the-line cards these days required for games - they are insanely priced ($200, $300, even $400-500 or more). And yes, while you can go with something slightly slower and save a lot of money, as I originally said I think it will hit the point where games simply will not run without X amount of cards or equipment. Just like I can't run modern games like BF2 or HL2 on my current setup, same thing in a few years for people wanting that hot new title that needs quad cards. The price will be fucking outrageous too. You thought $400 for an Xbox 2 was bad; wait until you need to drop $300 per graphics card, two, three or four times over, plus all the other components, just to play games.
Nvidia and ATI are wetting themselves awaiting that day. Why sell them one GPU when the game they want needs quad cards just to run?
Re:Just wait a few more years (Score:5, Insightful)
And you fear that within a few years there will be games that require 2 or 4 $300 GPUs just to get the game running. How many game developers would make games that only run on a small fraction of PCs? They want to reach a decent audience, and to realise such an audience, the technology has to be available at reasonable prices.
The whole hardware/software market is both self-regulating (releasing games with insane requirements does not work) and self-stimulating (higher software requirements boost hardware technology and sales, and better hardware results in software with better graphics).
BTW; Happy Pi Day [wikipedia.org]!
Re:Just wait a few more years (Score:2, Interesting)
The games themselves (Score:2)
first-to-market or collusion? (Score:2)
Or would they prefer to be first to market at the expense of a slightly slower (fps) game? After all, if they don't push too hard to make the software run fast/efficiently, it will push the hardware companies to advertise/sell faster cards... And when one gets a faster video card, they tend to want to try it on as many games as possible...
Re:Just wait a few more years (Score:1)
Re:Just wait a few more years (Score:1)
As for CPUs, how much does an AthlonXP2000+ cost on ebay? $50, $60?
I have an AthlonXP1500 that only runs at 1100MHz due to my crap motherboard, and an FX5200, and I played HL2 all the way through on it.
By the time games require 2 GPUs, dual core GPU cards will be present throughout the market price range. T
Re:Just wait a few more years (Score:1)
That'd be really sad if it were based on some facts, but you're just being a fatalist and speculating about stuff that won't happen. Half-Life 2 and Doom 3 run on a GeForce4 MX - a GeForce4 MX!! That's a crappy DX7 card, no pixel shaders or anything, but anyway the games run smoothly and are playable even if some special effects are missin
Re:Just wait a few more years (Score:2)
PCIe lane configurations (Score:3, Interesting)
There did not appear to be much written in the review about the way the PCIe lanes can be configured. The default apparently leaves the four physical 16-lane slots electrically 1-lane, 16-lane, 16-lane and 1-lane respectively.
What excites me about such a board is the possibility of having simultaneously a fast SLI rendering set-up, together with fast I/O with 10Gbit ethernet and SAS. Having everything on PCIe rather than a mix of PCIe for graphics and PCI-X for I/O cards would allow more flexibility (at least, once there is a bit more range available in PCIe non-graphics cards!). Yet, if the configuration of channels only allows 1-lane on all but two of the slots, then it's not going to work out.
Re:PCIe lane configurations (Score:2)
A PCIe x4 link will only shift 8Gbit/s of data ((2.5Gbit/s * 4 lanes = 10Gbit/s) * 8/10 (for 8b/10b encoding) = 8Gbit/s)
4 SAS lanes are in theory 10Gbit/s of data, and obviously 10Gb Ethernet is also 10Gbit/s
So to have an all-singing, all-dancing SLI + 10GbE + SAS setup you would need (2 * 16 lanes = 32) + (2 * 8 lanes = 16) = 48 lanes total.
Still not a bad improvement.
I would be looking more towards future PCIe 2 spec based system for putting a system lik
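The lane math above can be sketched as follows (PCIe 1.x figures: 2.5 GT/s per lane with 8b/10b encoding):

```python
def pcie1_goodput_gbit(lanes):
    # 2.5 Gbit/s raw per lane; 8b/10b encoding leaves 80% as payload
    return 2.5 * lanes * 8 / 10

print(pcie1_goodput_gbit(4))  # 8.0 Gbit/s on an x4 link
print(pcie1_goodput_gbit(8))  # 16.0 Gbit/s on an x8 link, enough for 10GbE or 4-lane SAS
print(2 * 16 + 2 * 8)         # 48 lanes for an SLI pair + 10GbE + SAS
```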
Re:PCIe lane configurations (Score:2)
recent? nvidia? (Score:2, Funny)
Um, right. If by 'NVIDIA' you mean '3DFX' and by 'recent' you mean 'ten years ago'.
Sheesh. Kids these days, they got no respect.
Re:recent? nvidia? (Score:5, Informative)
Nvidia SLI = Scalable Link Interface
Yes, Nvidia based their version on the ideas they acquired from 3DFX when they bought them out, but the actual techniques they use now are much more advanced. IIRC, the driver does automatic load-balancing, in the sense that if there are more polygons on one section of the screen than another, the rendering will be split so that each card still renders approximately half of them - even if that means one card is doing 75% of the actual screen resolution.
Re:recent? nvidia? (Score:3, Informative)
Of course, that machine cost upwards of $700k. But multiple CPUs (2,3,4) were pretty typical.
Quad-Opteron quad-PCIe mobo (Score:5, Informative)
I would love to see a quad-Opteron mobo with four x16 PCIe slots, but arranged in a way that traffic is spread across all HT links, so that I could put in 4 PCIe SATA cards and have the highest possible read/write I/O throughput for a Linux software RAID array. Hardware RAID is out of the question, since no vendor offers a way to create arrays of disks across 3 or more cards. An Opteron has 3 HT links: 2 of them could be used as coherent links to other CPUs, and 1 of them could be used as a link to an external PCIe bridge chipset. The solution I would like to see implemented is one where 4 PCIe bridge chipsets would each be connected to their own Opteron via their own HT link, and each PCIe bridge chipset could provide at least one x16 slot.
Some numbers: each of the four x16 PCIe buses would allow for 2500 MT/s * 16 bits / 8 = 5000 MB/s of traffic in each direction. And each of the 4 HT links: 1600 MT/s * 16 bits / 8 = 3200 MB/s. The global amount of I/O would be 3200 MB/s * 4 = 12.8 GB/s in each direction! (The HT links are the bottleneck.) To resolve this bottleneck AMD would either need to increase the link width from 16x16 to 32x32 bits, or increase the signal frequency from 800 MHz to 1.25 GHz (the current limit is 1 GHz for coherent links and 800 MHz for the ones facing the outside world -- chipsets seem to lag a little bit regarding HT frequency).
But for some reason no manufacturer has ever designed such a board (Tyan only did it with 2 PCIe chipsets on their S2895 mobo). Why oh why is that the case?! It seems like nobody understands the true potential of HT. This could provide a low-cost solution to so many perf issues I have seen in the various companies I have worked for... Argh!
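The bandwidth figures in the comment can be reproduced with a couple of lines (MT/s times link width in bits, divided by 8 bits per byte, gives MB/s):

```python
def link_mb_s(mt_per_s, width_bits):
    # transfers per second times bits per transfer, divided by 8 bits/byte
    return mt_per_s * width_bits / 8

pcie_x16 = link_mb_s(2500, 16)  # 5000 MB/s per x16 slot, each direction
ht = link_mb_s(1600, 16)        # 3200 MB/s per 800 MHz DDR HT link
print(pcie_x16, ht, 4 * ht)     # 5000.0 3200.0 12800.0 -> HT is the bottleneck
```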
Re:Quad-Opteron quad-PCIe mobo (Score:1)
Re:Quad-Opteron quad-PCIe mobo (Score:2)
Of course I know that 12.8 GB/s is a theoretical value. But even reaching a third of that value is totally impossible with current mobos :( Yet I could build a box with 4.2 GB/s of potential I/O: four 16-port PCIe SATA cards with 16 * 4 = 64 disks. A modern SATA disk can sustain about 65 MB/s of read/write operations, and 64 * 65 MB/s = 4.2 GB/s. Such boxes do exist today, but cannot realize their full potential because of slow PCI-X busses (PCI-X 2 can alleviate the situation, but PCI-X 2 mobos are VERY RA
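The aggregate disk throughput works out as claimed (per-disk figure taken from the comment):

```python
disks = 4 * 16   # four 16-port PCIe SATA cards
per_disk_mb = 65 # sustained MB/s for a then-modern SATA disk

print(disks, disks * per_disk_mb)  # 64 disks, 4160 MB/s, i.e. ~4.2 GB/s
```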
Re:Quad-Opteron quad-PCIe mobo (Score:2)
Re:Quad-Opteron quad-PCIe mobo (Score:2)
See my comment about 4 PCIe cards here [slashdot.org]. The whole point of what I am proposing is precisely to be able to use regular commodity hardware to do tasks that, nowadays, can only be accomplished using high-end expensive gear. This is critical for some businesses. See Google ? Their whole architecture is built with commodity hardware.
Re:Quad-Opteron quad-PCIe mobo (Score:2)
I expect the storm of interrupts would bring this system of yours to a crawl long before you got even a fraction of that throughput.
Yay! (Score:1)
No more bickering which one runs this or that game better - just use the right tool for the job, no swapping of cards required.
Granted, you'd have to move to Antarctica to cool this sucker, but that should be no problem as long as the pizza delivery guy can get there, too. And just think of all the heating equipment you can replace with this rig...
Only for professionals not for gamers (Score:2, Insightful)
Re:Only for professionals not for gamers (Score:1)
VMware? (Score:3, Informative)
Re:VMware? (Score:1)
Re:VMware? (Score:2)
yay porn! (Score:4, Funny)
You don't need 4 slots to do nVidia Quad SLI (Score:2)
Quad sli. (Score:3)
BULL** - it will make some EXTRA high-end stuff possible. Games are designed for something that's more mainstream, and very high-end systems are for extra eye candy. The game companies probably cannot convince the majority of their target market to upgrade to SLI, so the majority of their money comes from those who don't have SLI systems.
But what does Quad SLI give us?
Well, the first use that comes to my mind is 30" displays - you know, the ones with slightly over double the pixels of 20" displays. You need twice the power for equal 3D performance on a 30" compared to a 20". Or over 4 times the pixels of low-end displays.
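The pixel-count claim roughly holds if one assumes typical panels of the era - 2560x1600 for a 30" and 1600x1200 for a 20" (those resolutions are my assumption, not stated in the comment):

```python
px_30 = 2560 * 1600  # assumed 30" native resolution
px_20 = 1600 * 1200  # assumed 20" native resolution

print(round(px_30 / px_20, 2))  # ~2.13x the pixels
```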
Another point is that antialiasing requires some extra performance out of the card.
Quad SLI isn't cheap so it won't be mainstream, so it won't be the MAIN target for game developers. However, it's supported.
One use for Quad SLI: when making a game, you need to design it for the performance of a typical system in 4-5 years, which is probably twice or quadruple the performance of current high-end hardware - think of how the memory bandwidth of a SINGLE card has quadrupled in that time frame. Also, there has been more than a 12x increase in computational performance.
As for free slots: there are 7 places on the case to put a card. For a game system a soundcard is a must; that leaves 2 free slots with Quad SLI. For gaming those two slots *COULD* get a NIC or some extra RAID card, but neither of those are top-of-list items since the onboard ones could be considered good enough.
The only problem is that with 4 gfx cards, dual-slot cooling isn't feasible - and some high-end cards already need dual-slot coolers - so water cooling IS an option for these kinds of pricey high-end systems.
Single player MMORPGs! (Score:1)
Cool! Now I can fight myself in a lightsaber battle on 8 different screens!
GPGPU (Score:2)
Re:GPGPU (Score:2)
Re:GPGPU (Score:2)
Meanwhile, I just want to stuff a PC with cheap videocards and make an MP3 compression supercomputer running Linux.
screw the video, give me I/O (Score:2)