Hardware

Nvidia Talks About Next-Gen Geforce, Plus Pics

Per Hansson writes "Techspot was at Comdex in Sweden a few days ago; we have now posted a small interview with Nvidia along with some high-res pictures of the Geforce FX on this page in our new comments system." This is one of the strangest-looking video cards I've ever seen (and it isn't cheap), though it may look different by the time you can buy it in a box. Which is not yet, despite all the hype.
  • For who? (Score:4, Interesting)

    by jhoegl ( 638955 ) on Sunday January 19, 2003 @06:24PM (#5115037)
    This is "market specific?" What market? I'll tell you this: they'd best not think people will go for a two-slot card for "heat management." I do agree with the passive heat sinks on the reverse, though; very good idea!
    • Re:For who? (Score:2, Interesting)

      by Sepherus ( 620707 )
      Run a search for the Abit Siluro OTES. It has an immense fan very similar to the new Geforce's and takes up two slots too. It's proven very popular with gamers and overclockers, who are the people who'd spend the money to get a card on release day anyway.
    • Re:For who? (Score:5, Insightful)

      by Bios_Hakr ( 68586 ) <xptical@gmEEEail.com minus threevowels> on Monday January 20, 2003 @12:01AM (#5116623)
      Ever since AGP rolled out, most system builders have written off the first PCI slot.

      The first reason is that the first PCI slot tends to conflict with the AGP slot in terms of resource management. This may no longer be a problem, but old habits die hard.

      The second reason is that the damn heatsink and fan are on the bottom of the card. I'll never figure this one out: why did the hardware engineers do this? The heat from the heatsink rises back into the card and makes the ambient temperature even hotter. Most people leave PCI 1 open to help dissipate this heat.

      A third reason is that most people are not going to fill their slots anyway. Good mobos today have good sound, a 10/100 NIC, and USB 2.0 onboard. Add a good video card, and the rest of your slots are pretty much empty. Even if you add another card, just follow the urinal code: never place two cards too close for comfort.

      In short, the two-card rule has been the de facto standard for years now, so why shouldn't Nvidia embrace it for their own purposes?
      • Re:For who? (Score:3, Interesting)

        by Keeper ( 56691 )
        The second reason is that the damn heatsink and fan are on the bottom of the card. I'll never figure this one out: why did the hardware engineers do this? The heat from the heatsink rises back into the card and makes the ambient temperature even hotter. Most people leave PCI 1 open to help dissipate this heat.

        Hot air rises. Heat radiates outward.

        I.e., the efficiency of a heatsink is not altered by its orientation.

        "But the hot air gets stuck under the card!"

        Unless the temperature of the air contained within your case varies significantly (which it doesn't with a normal case with a couple of fans sucking air through it), orientation of the heatsink/fan does not matter. Your case doesn't have a mini atmosphere inside of it with updrafts and downdrafts.
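        As a rough illustration of the parent's point, here is a small Python sketch comparing natural and forced convection for the same heatsink. The fin area, temperature delta, and convection coefficients are assumed textbook-range values, not measurements of any real card: with a fan forcing air across the fins, the convective term dominates and buoyancy (i.e., which way the heatsink faces) becomes a second-order effect.

            # Newton's law of cooling: Q = h * A * dT. All numbers below are assumptions.
            heatsink_area_m2 = 0.02   # ~200 cm^2 of fin area (assumed)
            delta_t_c = 30.0          # fins 30 C hotter than the surrounding case air (assumed)

            h_natural = 7.0   # W/(m^2*K), still air -- orientation matters in this regime
            h_forced = 60.0   # W/(m^2*K), fan-driven airflow -- orientation largely doesn't

            for label, h in (("natural convection", h_natural), ("forced convection", h_forced)):
                watts = h * heatsink_area_m2 * delta_t_c
                print(f"{label:>18}: ~{watts:.0f} W dissipated")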
  • by abbamouse ( 469716 ) on Sunday January 19, 2003 @06:25PM (#5115041) Homepage
    Obviously this is a first crack at the FX. I'd bet serious money that within six months of its release, a version will be ready that requires only one slot. Consumers hate inconveniences like this -- what if a cap on the motherboard gets in the way of one slot? Moreover, those who wait six months are more likely to be price-conscious consumers -- which means their systems are less likely to have gobs of space open (cheaper mobos = fewer slots).

    Still, I want one. Now.
    • by TheOverlord ( 513150 ) on Sunday January 19, 2003 @07:07PM (#5115256)
      When I talked with the guys from BFG (who are already taking preorders [bfgtech.com]) at the HardOCP Workshop, they had an FX card on hand that you could look at up close. I asked one of their guys about the huge-ass coolers, and they said that the manufacturers have the choice to put their own type of cooling on it if they want. So I'm sure there will be some one-slot options out there if the customers demand it...
    • Please. The market that this card is aimed at couldn't care less that they lose a slot. Reason 1 being that the AGP-adjacent slot normally shares an IRQ with the video card. Reason 2 is that the target market is also obsessed with cooling, and having a card in that slot reduces circulation. Look at the Abit GF4 Ti 4200 with the OTES; that's a production card. And the last reason being that the trend towards onboard peripherals has increased. Onboard audio has gotten better, and LAN/USB 2.0/1394/RAID are onboard now on many high-end boards. Oh, and the people who buy those high-end boards will be the ones buying the GF FX. Hell, with all of that onboard and there being 5 or 6 PCI slots, do you really think that burning one slot is gonna keep someone from buying the GF FX? I don't have a PCI card in the AGP-adjacent slot, don't use the onboard sound, and have a PCI NIC. I still have 2 PCI slots left.
    • by Mac Degger ( 576336 ) on Sunday January 19, 2003 @08:19PM (#5115674) Journal
      What I wonder is why not just put all the transistors, the chip, and the RAM... on the other side of the circuit board! No one loses a PCI slot that way.
      • What I wonder is why not just put all the transistors, the chip, and the RAM... on the other side of the circuit board! No one loses a PCI slot that way.

        That would certainly look funky. Then again, I still think of PCI/AGP cards as "upside-down"...

        I believe that extending the size to that side of the card would be considered "out of spec," and some motherboards would have a problem with that. My Aptiva board, for example, has the CPU clip/thing (Slot 1) very close to the AGP card, so in that box at least this wouldn't work.
        • OTOH, a number of ATX motherboards I've worked on recently don't use the far-right (towards the CPU) slot, putting the AGP in the 2nd slot. In cases like this, one could not only have a 2-slot cooling system, but have a convenient exhaust vent attached to the card-cage, if they were to use the extra space behind the card.

          Considering that even good motherboards barely break the $150 mark, while high-end GPUs can be $400+, it doesn't make much sense to make the GPU fit the mobo, when you can find a mobo to work with your GPU of choice.
      • This wouldn't solve a thing. In fact, it'd cause huge numbers of problems.

        First off, the reason it eats two slots is that the second slot is used for the blower. If you invert everything, exactly where are you going to vent the blower? There's no standardized hole available for this kind of thing.

        Second, it would render it incompatible with most motherboards. You'd hit either an I/O header, the CPU slot, or (most likely) support electronics like capacitors and the like. There is generally not a great deal of space between the AGP slot and anything above it, because there are minimal (if any) specs requiring distance. A small number of motherboards have problems with high-end graphics cards right now because of heatsinks on the back of the cards -- they usually end up hitting caps, which is the last thing you want to do (ever shorted a cap? Not good).
  • by hikousen ( 636819 ) <info@@@heavycatweb...com> on Sunday January 19, 2003 @06:25PM (#5115045) Homepage
    What are we up to now? Three months to obsolescence? It was just last Fall that we heard about the ultra-mega-super Radeon 9700 that could render 47umptyzillion somethingorothers every picosecond (only $400 while supplies last)?

    I wonder if we're ever going to get to a point where "this is the hardware. You have 10 years to do something cool with it" instead of "oh, look, your program is obsolete again! Your graphics are dated! Another 10 man-years down the drain! Place your bets... (spin)"

    sigh...
    • by Saeger ( 456549 ) <farrellj@nosPAM.gmail.com> on Sunday January 19, 2003 @06:43PM (#5115120) Homepage
      Place your bets... (spin)

      I'll put $40 trillion on "The Law of Accelerating Returns" [kurzweilai.net], and laugh at you for putting your money on "Moores Law Has To Hit A Wall Dammit!!!!1!!!1" :-)


    • No (Score:4, Insightful)

      by Kjella ( 173770 ) on Sunday January 19, 2003 @06:46PM (#5115135) Homepage
      I wonder if we're ever going to get to a point where "this is the hardware. You have 10 years to do something cool with it"

      Only when, if ever, we can render something like the Final Fantasy movie in real-time. Something tells me Moore's "law" will have broken down before that though.

      Kjella
      • Only when we can render the Final Fantasy movie in real time; then they can work on making things easier to write and less design-intensive.
      • Re:No (Score:3, Interesting)

        by jericho4.0 ( 565125 )
        Final Fantasy in realtime? No problem. I think we'll see that level of rendering sooner than you think. All it takes is more textures per pass, and that number is going up quickly.

        What I want is for the hardware to support a realistic and comprehensive physics model in said Final Fantasy universe.

      • Re:No (Score:3, Insightful)

        by erpbridge ( 64037 )
        Well, let's put it this way: you'll be able to buy that card, and the machine to do it with, in about 5-10 years. It'll probably be the card that comes out at the same time as the Pentium 5-5000 or 6000 (7500 at latest), which isn't as far away as you might think. The average machine will have about 1 to 1.5 gigs of RAM then, and 400 GB hard drives will be available (200 GB will be the norm for the likes of Dell and Compaq). I think (without knowing what Square's processing requirements were at the time of making FF the movie) that this system will be able to render something like FF in real time, but that type of rendering will pale next to another breakthrough movie of the time.

        Moore's law will not have hit a wall by then, and I think you will be able to do your Final Fantasy and Shrek rendering by then... but there will be another couple of all-CGI movies a year or so before that which will elicit the same post you made, and it will be answered the same way: wait 5-10 years, it'll happen.
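        For what it's worth, here is a quick Python check of that 5-10 year guess, assuming the transistor budget doubles every ~18 months (one common reading of Moore's law) and starting from the GeForce FX's reported ~125 million transistors; the doubling period is an assumption, not a prediction.

            start_transistors = 125e6        # GeForce FX, as reported elsewhere in this thread
            doubling_period_years = 1.5      # assumed Moore's-law cadence

            for years in (5, 7.5, 10):
                projected = start_transistors * 2 ** (years / doubling_period_years)
                print(f"+{years:>4} years: ~{projected / 1e9:.1f} billion transistors")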
    • by goatasaur ( 604450 ) on Sunday January 19, 2003 @06:46PM (#5115137) Journal
      The fact that something may not be as fast (or as expensive) as the newest computer hardware has nothing to do with obsolescence. It is only obsolete because they want you to believe it is obsolete.

      There's very little reason someone with a video card made a year or two ago would need one of these. My Radeon 8000 works fine, thanks. $400 for a 10-frames-per-second improvement isn't what I call revolutionary progress.
      • by b0r1s ( 170449 )
        Exactly. Everyone who complains of immediate obsolescence is either a naive fool or has too much money and nothing to spend it on.

        A common-sense view of the situation would be: yes, you have a Radeon 8000, so you shouldn't even consider a GFFX. The GFFX SHOULD be marketed at people who have Nvidia TNT2s and 3DFX boards: people who are getting to the point where they want to upgrade get an extra option, and people who don't need to upgrade shouldn't.

        Common sense. It's pretty easy.
      • There's very little reason someone with a video card made a year or two ago would need one of these.

        Two words: Quake III.

        You are right that 98% of games will run on hardware two years old. However, there is a subset of games that demands the latest and greatest hardware to experience the game. There's no "conspiracy" here, just that certain developers aim at the leading edge. If you don't want to play those games, there's no reason to upgrade.

        Personally, the day Quake III comes out is the day I upgrade my video card. :)

    • by Anonymous Coward
      At the moment it's easy for gamers and developers of software to imagine uses for next-generation video cards. Hardware will continue to become obsolete as long as humans want to do more with it. Something simple like a graphics card can be imagined to have limits, though... for example: imagine I have a video interface directly to my brain from my video card. Once that video card can render, in real time, as much information as my brain can distinguish, then there's no need for more powerful hardware... right?
    • by Alan Partridge ( 516639 ) on Sunday January 19, 2003 @07:05PM (#5115247) Journal
      Why does the announcement of a $600 board from nVidia render a $400 board from ATi obsolete? When Mercedes announces a new S-Class, does it render the BMW 3 Series obsolete? WTF are you on about?
      • You seem not to understand the difference between computer hardware and cars. Cars have improved only in tiny amounts in the last 40 years, usually in matters of efficiency, safety, and convenience, not performance. Even then, those changes are small. If the new S-Class went twice as fast as the BMW 3 Series, and there were no practical speed limits on the roads, then YES, it would render it obsolete.
    • Check out some of the equipment from Sun Microsystems [sun.com], SGI [sgi.com], IBM [ibm.com], and Stereographics [stereographics.com].

      A bunch of their equipment is designed for a 10-year obsolescence cycle. Costs a hefty penny, though. Designed for business and major research universities.

      At the University, we were using Creator3D graphics cards from Sun Microsystems. That was in 1999, and the general consumer market still hasn't caught up with that tech. Me, I'm still looking around for auto-stereoscopic monitors. Sharp is coming out with a consumer model next year, I hear.
    • I wonder if we're ever going to get to a point where "this is the hardware. You have 10 years to do something cool with it"
      We're [gamecube.com] already [playstation.com] there. [xbox.com]
  • by fidget42 ( 538823 ) on Sunday January 19, 2003 @06:25PM (#5115046)
    The Inquirer has an article [theinquirer.net] that takes a look at the GeForceFX. Hopefully things won't turn out as they did for 3DFX.
    • by Kjella ( 173770 ) on Sunday January 19, 2003 @07:09PM (#5115266) Homepage
      The Inquirer has an article [theinquirer.net] that takes a look at the GeForceFX. Hopefully things won't turn out as they did for 3DFX.

      Disclaimer: I have no idea about the economic status of Nvidia. But I do see them in pretty much every computer advertised, and they've generally delivered very successful products since the first Geforce chip, so I assume they've got a strong financial position. And if you can't solve it even when you've got more money to throw at it than everyone else, well, maybe you deserve to be dethroned. That's what competition is all about, isn't it?

      Kjella
  • Ugly little bugger (Score:2, Interesting)

    by Siriaan ( 615378 )
    Honestly, there's gotta be a smarter way to cool that thing than a huge, ugly, entire-slot-using heatpipe. Either that, or develop a new way to crunch graphics numbers other than using a single chip... SLI on one card using two slightly slower chips? Power consumption would go up, but you could use floppy power connectors in lieu of a new bus solution that provides more voltage grunt, and it'd be easier to cool.
    • ...crunch graphics numbers other than using a single chip..... SLI on one card using two slightly slower chips...

      it's called silicon real-estate.

      it's also called packaging cost.

      it's called data routing on the board (FR4 is very, very slow unless you use a LOT of traces, which is very, very difficult).

      I think it may also be called lower MTBF.

      and how about "debugging is a pain?"

      either way, though - don't expect "multi-processing" on anything but the most high-end incarnations, and only once they have squeezed every last bit of per-chip performance out.

  • Still no dual-DVI! (Score:5, Interesting)

    by altek ( 119814 ) on Sunday January 19, 2003 @06:28PM (#5115055) Homepage
    Why don't manufacturers start doing dual-DVI outputs? Granted, most LCDs have a second analog input, but then what's the point of having just one DVI output?

    I wish they'd start putting dual DVI outputs on them. Maybe one of the other companies that makes them (MSI, PNY, Leadtek, etc.) will finally offer one. AFAIK they don't even offer a hydra-head adapter to split the one DVI port into two (I doubt it's possible without a proprietary output like the Radeon VE's).

  • wow! (Score:2, Funny)

    I thought the days of full-length and double-width expansion cards were over.

  • Is it Just Me? (Score:4, Insightful)

    by fidget42 ( 538823 ) on Sunday January 19, 2003 @06:37PM (#5115097)
    For some reason, having a video card with a more aggressive cooling solution than my main CPU bothers me.
    • Nothing new here (Score:4, Informative)

      by nothings ( 597917 ) on Sunday January 19, 2003 @07:55PM (#5115525) Homepage
      486 : 1.2 million transistors
      Pentium : 3 million transistors
      Pentium Pro : 5.5 million transistors
      Pentium 2 : 7.5 million transistors
      Nvidia TNT2 : 9 million transistors
      Alpha 21164 : 9.3 million (1994)
      Alpha 21264 : 15.2 million (1998)
      Geforce 256 : 23 million transistors
      Pentium 3 : 28 million transistors
      Pentium 4 : 42 million transistors
      P4 Northwood : 55 million transistors
      GeForce 3 : 57 million transistors
      GeForce 4 : 63 million transistors
      Radeon 9700 : 110 million transistors
      GeForce FX : 125 million transistors
  • by handsomepete ( 561396 ) on Sunday January 19, 2003 @06:39PM (#5115105) Journal
    "[Nv]: The manufacturers are, just as before, completely free to configure input and output connectors according to their preferences. What is new, though, is that Nvidia has chosen to build its own TV-out chip into these cards, which guarantees better image quality and more configuration possibilities..."

    Historically, haven't onboard TV-outs/ins on video cards been kinda crummy? With the exception of the All-in-Wonders, I thought they were scoffed at by the hardcore PC-on-TV users. Does anyone have any more specs on the TV-out chip? Seeing as I'll swim in rusty nails before I spend $650 on a video card, I'm hoping that a watered-down version will be available with the same TV-out... anyone?
  • Speeds... (Score:2, Interesting)

    by caino59 ( 313096 )
    Really, with the speeds video cards are pushing nowadays, should we really be surprised to see cooling solutions that take up space?

    I mean, c'mon... look at even the smallest heatsinks that go into rackmount systems; they're still going to take up some space relative to a video card.

    Also... I wonder what ATI has up its sleeve. They have had a lot of time to get stuff together for what could be their next-gen chip...

    These prices are just getting absurd, though, and as long as people pay them, they are going to continue the upward trend...

  • by kruetz ( 642175 ) on Sunday January 19, 2003 @06:46PM (#5115138) Journal
    In my day, video cards didn't even use these new-fangled slot whatsits. We didn't even have monitors back then - the video card had to do all the drawing and then DISPLAY it to us as well. And we didn't have RAM or ROM either, so we had to remember each byte ourselves and give it to the video card when necessary. Not that it ever TOLD you when it needed a byte, OR which byte it needed. You had to memorise the order in which bytes were required - the list was provided in invisible ink on the back of the installation manual (which we DIDN'T have) and it was written in reverse-polish ascii pseudo-hexadecimal with a Russian accent. AND it could do everything we needed! And it didn't even need a heatsink (but the horses that powered it did need a break every now and then, and you had to train them not to go potty on the computer ... that was a real CORE DUMP)
    • Horses!?! And what are these vidio carts you're talking about?! In my day we had to push a ball of copper up and down a hill all day, using the prince albert in our knobs to generate current, and let me tell you, it was uphill BOTH ways! And you had something that DREW your pictures FOR you?!? We got a rusty nail with which we had to draw the pictures in our retina! The refresh rate was murderous, but we liked it anyway!
  • by Anonymous Coward on Sunday January 19, 2003 @06:47PM (#5115141)
    I've just put together a new computer with an Nvidia nForce2 ASUS A7N8X motherboard, and you know how many PCI slots I use? None. My video uses the AGP, the sound is onboard (and it's good), USB/FireWire/Serial ATA RAID/regular ATA, etc. are all onboard, PLUS two NICs. Sure, I could add SCSI (but how many home users do?), or a TV tuner (already built in to my video card), or a variety of other things, but I really have no need for these PCI slots. I'm surviving quite well without them.
  • by Fulg0re- ( 119573 ) on Sunday January 19, 2003 @07:00PM (#5115216)
    The GeForce FX is, in my opinion, not going to be what the world has expected from nVidia. It is simply too little, too late - 6 months too late. It may have the performance crown for a month, but it will be short-lived.

    ATI will simply respond with the R350, which is likely going to be an improved R300 core, with DDR2 and manufactured on the .13u process. In case some people haven't noticed, the leaked benchmarks of the GeForce FX show it to be only marginally faster than the Radeon 9700 Pro - and that's at 500MHz vs. 325MHz. It seems that ATI is faster in terms of IPC.

    It would be unfeasible for nVidia to respond before the summer with the NV31/34, at which time ATI will announce the R400.

    I will have to give nVidia one thing though, their drivers are excellent. This is perhaps the only thing they have going for them at the moment. However, ATI is pumping out a new driver set almost every month, and at this rate, they will soon reach parity with nVidia.
  • This is what happens when M$ gets the liberty to set hardware standards... like DirectX 9. After years of software bloat, here comes a meaty example of hardware bloat!

    Yuhoo...
  • by Gyorg_Lavode ( 520114 ) on Sunday January 19, 2003 @07:14PM (#5115284)
    One of the nice things about liquid cooling is that it's expandable. If I were to get one of these cards, I'd wait for a water block to become available for it and just add it to my liquid cooling system.

    People call liquid cooling dangerous, unnecessary, and extravagant, and then buy video cards with cooling like this one, enormous CPU coolers, and half a dozen case fans to try to keep the temperature down.

  • This sounds just like what sounded the death knell of 3DFX: the company bets the bank to make a monster video card that blows everything out of the water, holds off a whole scheduled version release (once every 18 months) to make this monster card... and blows it big time.

    3DFX used to compete with NVIDIA. When NVIDIA released a new line of cards, so did 3DFX, or when 3DFX released a new line of cards first, so did NVIDIA.

    When the GeForce2 cards came out, everyone waited for 3DFX to release their competing line. About 4 months later, 3DFX released a couple of Voodoo4 cards, but not much in the way of competition, and nothing spectacularly advanced over the Voodoo3s. However, they also let out news of plans to make a market-breaker card, the Voodoo5 6000, which would take up the full case length (and bump hard drives), have 5 fans on it, and require an external wall-wart-style DC adapter for power. It was a $600 card meant for the mega-gamers and graphic designers out there. This was a huge card... and their biggest flop, for once it came out, NVIDIA was already releasing the GeForce3s, which had better specs and lower prices overall.

    Now, Nvidia does something just like that. This card is double-width (the second slot's worth is ducting for external air intake and exhaust) and is full case length. It's got monster specs, and it has thrown off their regular 18-month cycle of new cards. This new one is $600 as well.

    Sounds to me like some of the execs of 3DFX have gotten onto the board of NVIDIA via the buyout and are trying to make another Voodoo5 6000. I hope it doesn't end the same way, with this company going down the tubes as well.
    • Heh. Wouldn't that be poetic justice: ol' 3DFX moles infiltrate nVidia and lead it over the cliff...

      This quote is revealing: "[Nv]: Well, now that TSMC has their production running at .13 micron we have of course helped them make some mistakes and learn by them, so ATI will probably have an easier switch than us. Though we believe it will take a significant amount of time for them to make the switch."

      How nice of nVidia to pave the way for their competition! ATI's gonna save millions.

      Significant amount of time to switch. Er, yah. Right.
    • by Anonymous Coward on Sunday January 19, 2003 @08:09PM (#5115596)
      No, your entire rant is uninformed. The reason for this delay was that they were moving their chip fabrication process to 130nm. That investment means they can now resume their 6-month cycle for the next couple of years. ATI is probably going to have to do the same soon, causing them a delay.
      • bump parent post [slashdot.org] (#5115596) up. Has a good point.
      • There's a bit more to this as well:

        You can't just take a current chip design, shrink it from the 180nm to the 130nm process, and expect it to run. If it does, it's a miracle of a cosmic sort. As far as changing processes goes, it's somewhat like taking an SUV, pulling out the engine, and putting in an electric motor -- and expecting everything to work fine, except 'faster' or 'better'. Ain't gonna happen.

        Most chips are written in an HDL (hardware description language); ATI and nVIDIA use, among others, Verilog and VHDL. Both of these languages have their behavioral-level code, which is somewhat reminiscent of a traditional C program. (Make no mistake, HDLs are a totally different ballgame from a programming language.) Then, after you have the behavioral code working (meets timings, etc.), you synthesize (compile) it.

        Here's where it gets tricky:

        Synthesis involves taking your process (fab size, power, material, and other characteristics) and creating an optimized layout of gates to perform the tasks described by the behavioral code. The synthesized code almost definitely does not behave exactly like the behavioral code -- but the synthesized code is close enough -- just barely -- to meet the critical timings, and the whole thing works.

        Quite often, the synthesized code will utterly fail, and the offending part will have to be identified, diagnosed, and fixed. But the fix will probably break something else. It's like putting carpet in your bedroom, and suddenly the ceiling caves in. Fix the ceiling, and the walls turn pink. Repaint the walls, and the bed becomes sentient.

        The thing to remember is you get used to the 'personality' of a given fab process, and begin to pre-emptively put in fixes to avoid seeing them at all. But the instant you change fab processes, the entire 'personality' of the synthesis changes, and all bets are off. The entire design will have to be re-synthesized, re-simulated, and re-debugged. And that's before it hits silicon.
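        Here is a toy Python sketch of the timing-closure problem described above (real flows use Verilog/VHDL and EDA tools, not Python, and every gate delay and the path itself are invented): the same critical path that comfortably meets a 500MHz budget on one process can fail on a shrunk process whose gates aren't uniformly faster, so the design has to be re-synthesized and re-verified.

            TARGET_CLOCK_MHZ = 500
            clock_period_ns = 1000.0 / TARGET_CLOCK_MHZ   # 2.0 ns budget per pipeline stage

            # Hypothetical worst-case path: gate types and how many of each it crosses.
            critical_path = {"nand2": 14, "mux2": 6, "flop_setup": 1}

            # Made-up per-gate delays (ns) for two processes; note the shrink is not
            # uniformly faster (e.g. the wire-dominated muxes get relatively worse).
            delays_180nm = {"nand2": 0.080, "mux2": 0.100, "flop_setup": 0.120}
            delays_130nm = {"nand2": 0.055, "mux2": 0.200, "flop_setup": 0.100}

            def path_delay(path, delays):
                return sum(count * delays[gate] for gate, count in path.items())

            for name, delays in (("180nm", delays_180nm), ("130nm", delays_130nm)):
                d = path_delay(critical_path, delays)
                verdict = "meets timing" if d <= clock_period_ns else "FAILS timing"
                print(f"{name}: critical path {d:.2f} ns vs {clock_period_ns:.2f} ns budget -> {verdict}")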
    • by RyuuzakiTetsuya ( 195424 ) <taiki.cox@net> on Sunday January 19, 2003 @09:13PM (#5115926)
      Problem.

      Volume.

      ATi doesn't ship lots of chips to be sold to OEMs on the cheap. nVidia does, and will keep doing so. This was 3dfx's problem, and this will be what keeps nVidia alive. Whether it'll keep them competitive or have them go the way of Trident is another story.
  • I'm of two minds about this card.

    On one hand, it's a powerful piece of hardware if any of the hype we're getting fed is remotely accurate.

    On the other hand, is it really a good idea to completely reinvent the wheel? Have we really exhausted the computing power available to us with the old methods of rendering things in three dimensions?
  • Sneaky... (Score:4, Interesting)

    by shepd ( 155729 ) <slashdot.org@gmai l . c om> on Sunday January 19, 2003 @07:23PM (#5115318) Homepage Journal
    Building in a cooling solution like that, which is totally unrepairable by the end user, is a great way to build in forced obsolescence.

    I think I'll stick with my radeon. If the fan quits, I'll just replenish the oil.

    Kudos to Nvidia, though, for finding a way to force their users to buy new cards in the future! This'll certainly be the wave of the future, like fibreglass bodies on cars!
  • Hmmm... (Score:3, Funny)

    by mschoolbus ( 627182 ) <{travisriley} {at} {gmail.com}> on Sunday January 19, 2003 @07:25PM (#5115328)
    I wonder if they are coming out with a laptop model also... =P
  • that had a 486 fan attached to that huge aluminum heatsink. It took two slots. I ended up without slots because of it (two NICs and a SCSI card), so no more big cooling in video cards for me, thank you.

    Maybe when they move to a smaller interconnect size (what are they using in this card? 0.13 micron?) it will run cooler. Then I'll buy.
  • Two slots wide, with a ventilation shell, requiring an external 12V power connection? Doesn't this seem a bit extreme? In the past, this sort of thing was only seen on overclockers' web sites.

    Factor in the power requirements of the Athlon/P4 processor, and this is getting ridiculous.

    I would love to see some of the laptop power/speed control features in desktop systems. For example, run the CPU at 800MHz when I'm browsing the web, and go to 2.4GHz when I need the power (with the cooling fans adjusting accordingly). Of course, a video card with passive cooling is also a requirement for me.
  • Faster is slower (Score:4, Interesting)

    by Veteran ( 203989 ) on Sunday January 19, 2003 @07:43PM (#5115440)
    While all of the modern 3D chipsets have impressive frame rates for running 3D games, they tend to suck badly at much of anything else.

    The chips are very slow to switch from text to graphics and vice versa.

    I had a board with a slightly older Nvidia chipset. I wasn't very satisfied with the stability of the XFree86 drivers for it, so I tried Nvidia's Linux drivers. Their driver took five minutes to switch between text and graphics modes.

    Older chipsets were much more practical for day-to-day use; the super-speed models remind me of trying to drive an AA fuel dragster to the office every day.
    • That's a Linux driver problem; it has nothing to do with the card itself.
      • No, it does have to do with the chipset; nobody had any trouble with mode switching on the slower chipsets.

        Yes, I am aware that the Nvidia written driver for Linux was the cause of the ridiculously long switch time.

        Early in the history of accelerated video cards, it was pointed out that the faster they got at graphics, the slower they were in text mode. The very fast processors we have today mask that particular problem.
  • WHY WHY WHY WHY?? (Score:5, Interesting)

    by t0qer ( 230538 ) on Sunday January 19, 2003 @07:46PM (#5115454) Homepage Journal
    Actually, this goes out to all the video card manufacturers..

    Why hasn't anyone put the GPU on the OPPOSITE side of the card yet? On every AGP card I see, the GPU is ALWAYS facing towards the PCI slots in the system, where it:

    A. Blocks other PCI cards
    B. Causes noise and instability if the fan is running too close to another card
    C. Exhausts its heat onto those other cards

    Instead of trying to put the cart before the horse, why not just mount the GPU on the opposite side? There are no PCI slots to get in the way, and you could fit a HUGE cooling solution there.

    Hey Nvidia, if you want to hire someone with more common-sense design tips like this, I'm available. I'll slap your engineers with a cluestick for ya.

    • Re:WHY WHY WHY WHY?? (Score:4, Informative)

      by RollingThunder ( 88952 ) on Sunday January 19, 2003 @07:55PM (#5115529)
      Presumably, the spec for the motherboard doesn't guarantee that the area on that side of the AGP slot will be free and open - CPUs may be allowed to be there, and thus either their ring of capacitors or their heatsink would get in the way.
      • by t0qer ( 230538 ) on Sunday January 19, 2003 @08:20PM (#5115680) Homepage Journal

        I thought I'd take a picture [zeromag.com] and make a rebuttal to your statement. Gotta love digital.

        In this pic there are 5 mobos:

        Intel 850GB
        Some Asus Socket 370 thing
        Some Soyo Socket 370 thing
        Iwill BD100 Slot 1
        Some Intel Socket 370 thing

        You will notice that on the Asus board I put a tape measure across as a reference. Now, out of the 5 boards sampled, only 1 has no space for heatsinks on the right side. Also of note, that board is a Slot 1, which is no longer in production.

        On the other hand, every single semi-modern board in this picture has more than adequate room for heatsinks on the right side.

        So unless these newer cards are going into an outdated system, putting the fans/heatsinks on the right side shouldn't be a problem, right? Simple enough solution, without having to resort to heat pipes, water cooling, or piezoelectric cooling.
          • I thought I would point one more thing out [zeromag.com]. There is an extra slot to the right of the AGP slot; I have the area circled in white. Quick question for the /. crowd: how many other people out there have a case with an unusable slot on the right like me? Seems to make perfect sense to put the fan there, doesn't it?
            • That slot is for motherboard expansion (if there's not enough space on the main motherboard output area for things like extra USB ports, etc.).

            > Why hasn't anyone put the GPU on the OPPOSITE side of the card yet?
            Very easy: heat flows upwards, and the card itself would block the heat stream.
            And of course the electrons would fall off ;)
    • While you couldn't do that (the spec for the AGP slot doesn't give you enough room above the card) for your standard product, it seems to me like you could also create a reversed product with a disclaimer that the user should verify that it fits before buying the card. Most mobos won't have a problem, and it will have the added advantage of giving your video card a clearer air intake (and will reduce the amount of heat trapped on the bottom of the card).

      On the other hand, I have seen some motherboards that stick big capacitors right above the AGP slot, which would cause problems for your "HUGE" cooling solution.
    • by atam ( 115117 ) on Sunday January 19, 2003 @08:44PM (#5115781)
      C. Exhausts its heat onto those other cards

      I would rather it blow the hot air at the other PCI cards than at the CPU. Most modern CPUs are already hot enough by themselves. Putting the GPU on the other side would essentially blow the hot air towards the CPU, making it hotter still.
  • by Julius X ( 14690 ) on Sunday January 19, 2003 @07:56PM (#5115530) Homepage
    Once again, we're seeing a very elaborate (and ungainly) design for the next ultra-high-power graphics chip. As technology progresses, we're accustomed (as we should be) to new products being the same physical size as (or smaller than) the older products. This is how we got such concepts as the PDA and the microATX form factor, just to name a few.

    The graphics card arena has been a major exception to this for the last few years. It's one of the few industries I can think of where the product is actually GROWING in size and becoming more cumbersome as the technology becomes increasingly faster and more complex. I believe this is a sign, not unlike what we discovered in the Pentium II/III era, that card-based processor packages are poor product design: they are a) larger than necessary, b) gum up the works, and c) only worsen the problem of cooling, thus needing ever more complex cooling systems.

    The current AGP (or PCI or whatever) expansion-card methodology for video cards can be seen as going through the same problem, especially in the case of the GeForce FX. We saw these problems previously in the designs for the GeForce 3 and 4, made much fun of them in the case of the 3dfx Voodoo5 6000 cards, and even the latest ATI cards require more power than the AGP bus can provide. Doesn't this show that there is an inherent flaw in the packaging design for this technology?

    GPUs need to take the same road that CPUs have taken (and have now returned to, since we use socket-based motherboard solutions again) and be sold solely as the graphics processor, with the memory substructures and so forth built onto the motherboard. This increases the efficiency and ease with which the GPU can communicate with the central bus and the rest of the system. In addition, you would no longer need to build an elaborate cooling structure to make up for the lack of ventilation provided by the typical AGP/PCI card-slot design.

    Nvidia is part way there with the nForce already, building the graphics subsystem as a central part of the motherboard chipset and PC bus, but the flaw that remains (as in most integrated motherboard systems) is that you are stuck with the technology. Of course, you can upgrade an nForce system with a full GeForce FX or Radeon if that is your choosing, but that just brings back the card problem. What needs to be done is to create an nForce-type chipset with an FC-PGA-type socket for the GPU as well as the CPU; that way both systems are eminently upgradable (not to mention the potential benefits in creating a more efficient in-line cooling solution for the interior of the system), and thus our size problems begin to be alleviated.
    • Of course, with this idea, you start running into other problems, namely reworking the bus to accommodate high-speed transfers from memory to the graphics processor. The current AGP spec is woefully underpowered compared to the throughput the cards themselves manage. Actual numbers are in the range of 1GB/sec or so on AGP 4x, versus ~8.2GB/sec off local RAM on the Radeon 8500LE card and ~19.2GB/sec on the Radeon 9700 Pro (slightly less on the new GeForce FX card) - clearly not in the same ballpark. Conventional DDR memory cannot compete with the timings that video card DDR pushes, so buying RAM sticks for the video subsystem will be rather expensive, unless you're willing to settle for vastly decreased performance. Consider: the average PC2100 DDR stick runs at 133MHz, doubled to 266. A PC2700 stick, so far as I know, runs at 166MHz. PC3200 is just beginning to flirt with graphics card speeds, at 200MHz, or 400MHz DDR. nVIDIA is mounting 500MHz chips on their GeForce FX card, and ATI packs ~350MHz chips on their 9700 part (actual clock speeds, not doubled DDR speeds). A drastic reworking of the motherboard layout, and a considerable increase in complexity, would be required to properly support this.
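      Those figures are easy to sanity-check: peak bandwidth is just bus width times clock times transfers per clock. The bus widths and memory clocks below are approximate recollections of that era's specs, so treat the results as ballpark; a minimal Python sketch:

          def bandwidth_gb_s(bus_bits, clock_mhz, transfers_per_clock):
              # bytes/s = (bits / 8) * clock in Hz * transfers per clock
              return bus_bits / 8 * clock_mhz * 1e6 * transfers_per_clock / 1e9

          print(f"AGP 4x         : {bandwidth_gb_s(32, 66.6, 4):.2f} GB/s")   # ~1.1
          print(f"Radeon 8500LE  : {bandwidth_gb_s(128, 250, 2):.2f} GB/s")   # ~8
          print(f"Radeon 9700 Pro: {bandwidth_gb_s(256, 310, 2):.2f} GB/s")   # ~19.8
          print(f"PC2100 DDR     : {bandwidth_gb_s(64, 133, 2):.2f} GB/s")    # ~2.1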

      Then you get issues with the socketing standard: how long will ATI, nVIDIA, and everyone else keep playing ball with each other? How long before nVIDIA leans on a motherboard manufacturer using their nForce chipset and creates a non-standard socket? Power requirements as well: will the motherboard be able to power the chip, or will we have to plug in a lead from the power supply akin to these new powerhouse cards?

      Interesting upsides to the situation would include the potential to use G4 Mac-style dual- or perhaps quad-processor modules for increased processing power - but that could easily saturate the bus, bringing us back to the original concept of having everything mounted on an independent board module.
    • Whether it is slot or socket is just a matter of how the pins are arranged and designed. It doesn't mean anything fundamental. The PII/III design proves this. It included no more functionality than modern socket processors (some extra wafer, but nothing fundamental). It just gave more room to put the L2 cache in and provided more area for heat dissipation. The way it is packaged means nothing.

      In this case, a socket format would only make matters worse. One advantage of being on a card is that both sides of the card have airflow, dissipating heat. For the most part, a video card is a GPU and memory. Other stuff figures in, but the problematic part with the FX is the GPU's cooling requirements. The presence of extra memory isn't the problem. The best solution would be a spec that *requires* more space between the AGP slot and the nearest PCI slot. The rule of thumb for a long time has been that a PCI card next to the AGP slot is bad; this design simply changes that rule of thumb into a hard requirement. It seems sloppy to need external power and to waste a PCI slot, but a socket format won't fix anything.
  • Perhaps it's time for a case/tower that has openings in the back à la the normal ones we are used to, but with the top two slots meant to be used with a card like this.

    Anticipating rear exhaust/cooling, the second-to-top opening lines up with a slot, and the top one just has the cut in the back, in case you have a card that needs it.

    Chances are you'll only have one card that uses a dual slot, and that leaves the rest to be used normally.

    The people who always put the big-ass power supply at the very end of the power strip so the rest of the plugs can be used will understand what I'm talking about :)

    • Problem is that the orientation of the design forces it to be towards PCI slots on every motherboard out there. It's more like having a thick power brick with an orientation requirement that defies the design of the power strip (which I have seen a few times).
  • by stevarooski ( 121971 ) on Sunday January 19, 2003 @08:36PM (#5115754) Homepage
    "How cool, a video card with what looks like a trojan stretched over it for safe gaming."

    How apt!
  • by Twillerror ( 536681 ) on Sunday January 19, 2003 @08:49PM (#5115800) Homepage Journal
    I remember reading that John C is going to cap the frame rate in Doom 3 at something like 30 or 40 FPS. I hope so; I bet he's tired of people grading video cards by how many FPS they can get out of Quake 3.

    The best thing about the FX isn't the overall frame rate; it's the pixel shaders and such. The number of instructions it can execute per shader, and the rate at which it processes them, is the real evolution of this card. The more complex the shaders and the faster they run, the more lifelike graphics will look. (Some rough arithmetic on that follows below.)

    We have been stuck in the same basic Quake engine for a while now. Unreal II and Doom 3 (Doom 3 more so) will be the first real change in graphics we've had. Now the GPUs can handle movie-style rendering without a ton of little tricks.

    We really do need the horsepower. The FX could probably render Toy Story in real time; that is pretty amazing. I can't wait till I can watch a movie, pause it, and change the angle. The ability to have true 3D movie projection is becoming more realistic with this type of hardware (of course, we need the 3D projector).

    $400 for this is nothing. You don't seem to realize that just 10 years ago a 486 DX system could cost over $4,000, with 16 megs of RAM and half a gig of hard drive. The price is rather low considering what it takes to create such wonders; stop bitchin'.

    Open source will help out in this arena as well. You've got to think that the pros who did the work on Gollum for LOTR are fans of open source; it won't be long until those kinds of shaders and techniques are available to game programmers.

    To me, saying "why do we need all this power" is kind of sacrilegious. Remember that increasing speed and creating a market for new hardware is what keeps most of us employed. Never say more speed is a bad thing. And don't blame sluggish performance on the developers; as software becomes more complex, you have to give up some performance for stability and expandability.
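    To put a rough, entirely hypothetical number on the shader-throughput point above: shading every pixel of a 1600x1200 frame with a 100-instruction pixel shader, with some overdraw, at a capped 30 FPS already demands on the order of ten billion shader instructions per second. A minimal Python sketch, all inputs assumed:

        width, height = 1600, 1200
        fps = 30                       # e.g. a capped frame rate
        instructions_per_pixel = 100   # a "complex" pixel shader of the era (assumed)
        overdraw = 2.0                 # each screen pixel shaded ~2x on average (assumed)

        shader_ops_per_second = width * height * overdraw * instructions_per_pixel * fps
        print(f"~{shader_ops_per_second / 1e9:.1f} billion shader instructions/second")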

  • Oh, for Christ's sake, the damn thing is priced at ATI 9700 Pro prices. I have no idea why the prices are so high in Europe (sorry); maybe the original post is way out of date.

    Best Buy preorder [bestbuy.com]
  • by Dunkalis ( 566394 )
    nvidia has lost this and probably the next generation of 3D to ATI. ATI's Radeon 9{5|7}00 is a very good card, superior to the GeForce4. By the time the GeForce FX is released, ATI will have their next-generation chipset prepared, and nvidia will be a generation behind. ATI cards are already close in price to their nvidia brethren. nvidia needs a new product to get the performance crown back, or ATI will dominate.
    • ATI's "next-generation" chip is still built on the previous-generation process - 0.15 micron.

      It was nVidia's move to 0.13 micron that delayed the GeForceFX, and allowed ATI their moment in the sun. ATI have yet to climb that particular hill, and nVidia are already rolling down the far side.
