NVIDIA Unveils (And Tom's Reviews) The GeForce4

EconolineCrush writes: "NVIDIA has finally revealed its GeForce4 Titanium and MX graphics processors. Tom's Hardware has a some benchmarks comparing the new offerings to current products, and the results are pretty interesting. Meanwhile, The Tech Report does an excellent job cutting through the hype with an examination of each new chip's features. Both articles are well worth reading to get the full story on the latest from NVIDIA."
Comments Filter:
  • GeForce3 (Score:2, Interesting)

    by dev!null!4d ( 414252 )
    So the advanced features of the GeForce3 aren't being utilised yet, and the GeForce4 is out now... ouch what to do... do I get a GeForce3 now or wait...?
  • Can't stand it (Score:3, Insightful)

    by darketernal ( 196596 ) on Wednesday February 06, 2002 @10:08AM (#2961360) Homepage
    I almost can't stand it when I buy a flashy new graphics card that is praised by every magazine, and then a NEWER card comes out that supports DX8 pixel shaders, etc., etc. (i.e., I bought a Radeon 64MB DDR card... two weeks later, hello GeForce3.)

    I hope if I buy a GeForce4, it'll last, in both speed and 3D technology.
    • That's why I like nvidia's MX line of Geforces. I mean, these things are to a point where the $70 card is good enough for average gaming needs.

      Now that the Geforce4 is out, I guess I can start looking at the cheaper Geforce3's ...
      • Re:Can't stand it (Score:3, Interesting)

        by Zathrus ( 232140 )
        Which merely proves that you haven't read the article, or pretty much ANY article on nVidia cards.

        The MX isn't a stripped down GeForce3/4 - it's a totally different chip without nearly any of the features that make the GF3/4 powerful and a good match for today's and tomorrow's games.

        The MX chips lack real vertex or pixel shaders. Yes, the GF4 MX has limited vertex shader support, but it's more akin to the GF2's than anything else.

        Go look at the benchmarks. There's a reason that the MX line scores so far below the regular ones. And a reason why they perform abysmally in DX8 games - they aren't DX8 compliant. It's about like getting a 2D card and trying to run Quake with it - it simply doesn't have the guts needed to do it.

        If you want to go on the cheap, pick up a full fledged GF3, GF3 Ti200, or the as-yet-unreleased GF4 4200 (I think that's the designation). All have the hardware needed for DX8 games (and contrary to the articles and to what some would have you believe, there are games out right now that make use of DX8 and these cards - one of them is Everquest), and they're cheap - under $200. I suspect the GF3 Ti200 will be heading toward $100 very soon now.

        Personally I bought a GF2 the 2nd day it was out. I paid $350 for it. I would've liked to wait for a bit of a price drop, but my new computer wouldn't work with my old cards (dual Voodoo2 at the time). That was two years ago, and my GF2 is still perfectly acceptable for playing games. It's a bit slow in EQ, but I'll live. It won't handle the upcoming games though.
        • Which merely proves you didn't read my post:

          these things are to a point where the $70 card is good enough for average gaming needs.

          The MXs may be a lot slower at the benchmarks, but so what? For the average gamer they'll still be plenty fast. That's like saying "Look at the numbers, you'd be stupid to get a Corvette, the Porsche 911 is way faster", when all I need to do is get to work in the morning.

          People can keep shelling out $300 for the latest Geforce, good for them, but that's 6 PS2 games for me. :)
          • The MX line isn't just "slower at benchmarks." It's actually severely castrated, and when the GF3 Ti200 drops in price with the release of the GF4, it will be a WAY better buy than a GF4 MX. Any modern game will be unacceptable on a 4MX, but very nice on a Ti200. As for your car analogy, it's more like saying "Sure an RSX is pretty zippy, but if you want to run quarter miles a Camaro Z28 doesn't cost too much more and is MUCH better at it."
          • Go look at the benchmarks. In particular, Anandtech's Unreal2 benchmark. This is the shape of things to come - and the MX cards can't handle it, even at low resolutions.

            MX cards aren't DX8 compliant, and so while they'll work fine with most games out even now, they're not going to work worth a damn in a year or two.
        • It's about like getting a 2D card and trying to run Quake with it - it simply doesn't have the guts needed to do it.

          Um, IIRC, Quake was software only. Having a 3D card didn't help you any until GLQuake was released. In any case, Quake has always run fine on 2D-only cards - that was the target market, after all.

          Don't get me wrong - the point you're making is partially correct (although I don't agree that pixel/vertex shaders are in such widespread use that not having hw support means all your games look bad) but at least get your back-up facts correct :-)


          • Um, IIRC, Quake was software only. Having a 3D card didn't help you any until GLQuake was released.

            Almost correct. There was a custom version of Quake that was made to support the Rendition Verite chipset, one of the first mass-market 3d cards. It used their own proprietary API and was DOS based (like the original Quake).
    • I'm still rather enjoying my ATI Radeon 32mb. I mean, sure, I'd love to have the biggest/best/fastest thing around, but nVidia keeps making these damn things faster than the old ones become obsolete (or even standard equipment for most). I don't know anyone who bothered upgrading to a GeForce 3 from a GeForce 2, so who the hell would go to a GeForce 4 now? I mean, the GeForce 3 came out less than a year ago. I personally am not willing to spend $600 per year just on video cards, when the performance gain is barely marginal, and nobody's writing games to take advantage of the new bells and whistles.
    • When has computer hardware ever *not* been like this? There's always "newer, bigger, better" in the world of hardware (which explains why hardware is so much more advanced than software), and as long as people like you and me keep buying merely to keep up with the current benchmarks, it will continue to be so. When new video cards come out we aren't forced to upgrade right away; most of the features boasted by said cards aren't usually implemented in software for a few months at the very least. But all this marketing exists to get people fired up about what the new hardware can do. Is it good/bad? Geez, I don't know; the only time I ever upgrade is when I run into some piece of software that is unable to run on my box.

      Recently I bought a GeForce 2 card to replace my dead GeForce 256. I bought it because it was cheap ($110 CDN) and it could run reasonably well the games I play (Baldur's Gate II, Tribes 2, UT, Serious Sam, etc.).
    • Re:Can't stand it (Score:5, Insightful)

      by Geek In Training ( 12075 ) <> on Wednesday February 06, 2002 @10:37AM (#2961491) Homepage
      "I hope if I buy a GeForce4, it'll last, in both speed and 3D technology. "

      Hey buddy, cope. If you haven't realized by now that you're not going to be able to run the latest and greatest games with all the eye candy without shelling out $200-$400 every 12 months, that's YOUR problem, not the industry's.

      This is like my neighbors who were mad at *ME* for telling them they could not load Windows XP on their 486 DX2/66. But we paid $4000 for this machine ten years ago! That was almost half the cost of a car! And it still works! They ended up going to WalMart and buying an HP, with monitor and CD burner, for $699. Now they've quit whining... until 5 years from now, when it won't run Microsoft AOL version 15.2.

      As for me, I have too many other interests to shell out $400 for a video card. I buy games 18-24 months after they come out, at the $19.95 (or lower) price. I *NEVER* pay more than $130 for a video card, and I'm extremely pleased with my price/performance return. Go look at for the GeForce2 GTS-V for $49 and you'll see what card I'm running; it gives me 70 frames per second at high quality in Quake3.

      If that isn't enough for you, well, I'm sorry, you're just going to have to pay more for the Cadillac.
      • Re:Can't stand it (Score:2, Insightful)

        by Kushana ( 206115 )
        The thing is that you don't need the latest and greatest video card in order to play the current greatest games.

        How many games last year required Direct3D 8.x support? One, I believe (the latest EverQuest add-on). nVidia introduced us to vertex and pixel shaders almost a year ago, and we're only now seeing games that can use them. The vast majority of 3D games today run perfectly well on a Kyro II. For over a year after the Voodoo I came out, 3D games were still being shipped with a software rendering option.

        This is always the state of affairs in games. The hardware manufacturers want game developers to make games that make the public buy their cards, but the game publishers want the developers to spend time (and money) on features that will sell more games. And more games will not be sold if the hardware's install base is 50,000.

        So the greatest games will always lag behind hardware. So buy older hardware and save yourself a bundle. The GeForce 3 Ti200 still plays *everything* really fast.
        • Remember when graphics cards included games that were optimized for that card's 3D acceleration? I'm thinking of first-gen cards like the Virge and Voodoo. Even if the Descent 2 included with Virge cards was actually SLOWER than the normal version, it did look better! Maybe after Doom 3 (and games that use its engine) comes out, we will finally get some crappy games included with graphics cards that actually USE the features of the card. That will be cool.
    • If you want to always have the latest chip, then get used to shelling out $400 every other week. You wouldn't want to have an Inferior Card! Someone might find out, and you would have to hang your head in shame!

      But if you couldn't care less what other people think about your hardware, then simply ignore them. It's your life, your computer, your software, so don't let anyone tell you what damn hardware you need in order to be cool.

      I'm running a Matrox G450+ and I couldn't be happier.
  • apple (Score:3, Insightful)

    by SlamMan ( 221834 ) on Wednesday February 06, 2002 @10:08AM (#2961364)
    And in an almost surprising move, Apple is offering the GeForce4 Titanium as a build-to-order option in their towers (announced yesterday). For a company that almost always has hideously slow graphics cards, it's kind of a nice change to see them ahead of the game for once in this department.
    • Re:apple (Score:3, Informative)

      by Anixamander ( 448308 )
      Actually, when they announced the speed bumped towers a few weeks ago, they noted that the higher end ones included the GeForce4. Of course, nVidia had not announced the existence of such a product yet, leading to some speculation here on slashdot.

      As far as Apple having a history of slow graphics cards, they have done pretty well in the towers for the last year or two. They were the first (by a couple of days) to have the GeForce 3 even.
      • I think what the poster was referring to was yesterday's announcement that Apple will begin shipping the GeForce4 Titanium cards in the top of the line towers as a BTO option... Apple announced two weeks ago that the towers would ship with GeForce4 MX cards..

    • Re:apple (Score:4, Informative)

      by gamgee5273 ( 410326 ) on Wednesday February 06, 2002 @10:18AM (#2961402) Homepage Journal
      I'm curious as to what you mean here. Apple has had the GeForce3 in the Power Macs for the past nine months (roughly), and since the blue & white G3 they've had the top of the line (or close to) ATI cards in the Power Macs (at least as an option). And, while Apple had little to do with it, 3dfx was supporting Apple with Voodoo cards from Voodoo 3 on up (I'm using a Voodoo 3 in my PM 6500).

      You can't go off of the chipsets in the iMacs - the iMac was essentially a laptop with a CRT on it (now it's a laptop in a bigger package). But the G3s (after the beige boxes) and G4s (G4s especially) have always had strong card options, both at Apple and outside of it.

  • I'm sick of these reviews with a line like "The results are *interesting*". Let's just agree that if the results weren't interesting, it shouldn't have been posted in the first place. By posting the article on slashdot, the "interesting" part is implied.

    Please, go out on a limb, put on some body armor, and have the guts to say ONE MEANINGFUL SENTENCE about the results other than that they were "interesting". It's not that hard.
    • They get slammed for voicing their opinions by other readers and 'biasing' the article. What do you expect them to do? I found the fact that they found it interesting, well, interesting. I guess you didn't find the fact that they thought it was interesting interesting.
    • Well, there is the normal average-joe meaning of interesting and there is the understated-all-to-hell meaning of interesting.

      An example of the latter: at the University of Texas at Austin, Hans Mark - former Director of NASA Ames, Deputy Administrator of NASA, etc., etc. - used to teach a class in which the Airborne Laser system used to become a topic of conversation. When asked about its range (since he'd seen the classified testing documents), all he'd say was that it was effective at a 'militarily interesting distance'.

      Now, that's a far cry from Tom's Hardware and the GeForce4, but maybe they're trying to get a little reflected glory rather than simply grossly underusing the language.

      We can hope, right?
    • I've noticed that /. uses the word 'interesting' when an article/review/benchmark doesn't show the community's favoured product (linux/AMD/ATI) as a superior one.

      Most slashdotters see nVidia as an evil corporation because they don't open source their drivers for linux. This leaves ATI as the favourite. The benchmarking shows that in almost every test (except aniso) the GF4 smokes the 8500, therefore the results are summarized as 'interesting'.

      If the ATI card actually did outperform the nVidia one, then the post would contain something like "ATI crushes the evil nVidia, we are 1337".

      I'm not the one to look up previous articles, but I do recall some benchmarks (biased or not) where NT/2000 did something better than linux. The poster stated that the results were "interesting".

      I think this is slashdot's attempt to hide the truth that it is possible for the 'evil' corporation to do something good.

      On another note, who else thinks it is pointless to use Q3 as a benchmark? Start using RTCW or another game that actually makes modern cards break a sweat.
      • I think the results are "interesting" because they're detailed and show what things the cards are better at.

        As for bias, well, I don't see any, at least not pro-ATI. ATI beat the GF3 in a few things and I don't recall the editors being happy or anything. Maybe ATI fanboys were, but those are just user opinions like yours or mine.

        And as for my own views of ATI. Ugh. Total crap. Or rather, nice hardware, too bad it's saddled with a company that can't make a driver to save its life.

        (Somewhat like Creative Labs, supposedly the Live and Audigy cards are good, but their drivers still blow up on dual-CPU systems and often on single-CPU ones.)

        For the driver reason alone, I'll go with nVidia. One driver pack, works on anything from a TNT to a GeForce 4. And I've never had it screw anything up.
  • Another article (Score:5, Interesting)

    by SILIZIUMM ( 241333 ) on Wednesday February 06, 2002 @10:09AM (#2961369) Homepage
    There is another article at Anandtech too, it's quite a good read. Contains pictures, benchmarks, etc. []

  • ... (Score:4, Funny)

    by lexcyber ( 133454 ) on Wednesday February 06, 2002 @10:10AM (#2961370) Homepage
    And to everyone's surprise, the GeForce4 is faster than the previous chipsets, has more pipelines, and has bigger memory bandwidth. When will someone try the new and fresh marketing trick of announcing hardware that is slower than the old hardware? (I hope MS didn't hear this and start making hardware.)

  • Here's the....... (Score:2, Redundant)

    by qurob ( 543434 )
    Tech Report [] article

    Just a MacGamer short blurb []
  • cheap geforce3 (Score:3, Interesting)

    by belterone ( 176605 ) on Wednesday February 06, 2002 @10:21AM (#2961420) Homepage
    LeadTek has a GeForce3 Ti200 with 128M of memory for under $200. I just got one of these a couple of days ago. Heaviest video card I've ever owned. Looks great in Windows. (I did Windows first because I knew it would take longer.) If anybody's curious, mail me; I should have it working under Linux tonight if nothing comes up after work.

    Funny story: I upgraded my mobo as well, to a Soyo Dragon+... That thing does NOT turn off power to the keyboard or PS/2 mouse port when it powers down. I finally had to unsolder that idiot taillight on my MS optical mouse so I could get some sleep.
    • Heaviest video card I've ever owned.

      You should see the full-length ISA monsters from the days of the 286. No wonder they were called "Hercules."

  • by joshsisk ( 161347 ) on Wednesday February 06, 2002 @10:22AM (#2961423)
    After this article and yesterday's overly-glowing review of the Xbox, it seems to me that Tom's has fallen on hard times. Consider the following sentence:

    "The test guys who aught [sic] to have caught this driver bug seem to be busy selling their stock our [sic] counting their money instead."

    All their articles now seem to have been written in five minutes and sent through the door without the slightest bit of editing - or even spell checking!

    I don't mean to nitpick, but Tom's used to be a very reliable source- and a great read. Not so much anymore.
  • So GeForce3s should now get a bit cheaper, which is great news. I'm quite happy with mine, and it sounds like I won't be missing out on much compared with the new cards; it's just an incremental step this time, which is fine with me, as I won't be missing out on major features when new games come out.

    GF3, 512MB Ram (PC133 even), 2X 20GB HDD, 1Ghz Athlon and I can run Medal of Honour just fine in 1024X768 - a GF4 would be wasted on my system anyways I think.
  • Wow, the article is "interesting". Come on, at least *some* content in the Slashdot piece, like "overall about N% faster".
  • The GeForce 4 Ti 4600, which is the highest end of those listed, is only listed as costing $299. I remember that a GeForce 2 Ultra with 64 Megs of Ram was something around $550 in the store, even months after it came out.

    Of course, I don't know if it is worth it to buy one of these things. I'm playing Return to Castle Wolfenstein on my GeForce 2 Pro at full detail, and I'm still getting good performance.

    • Just to clarify, the Ti4600 is $399, while the Ti4400 is $299. The 4600's DDR memory runs at 650MHz, while the 4400's runs at 550MHz.

    • Of course, I don't know if it is worth it to buy one of these things. I'm playing Return to Castle Wolfenstein on my GeForce 2 Pro at full detail, and I'm still getting good performance.

      I hear ya brother, that's my game... and that's my card. :) I look at these fancy new cards and just wonder how fast my Athlon 1800+ would run with one. I have a GeForce2 MX and it seems to work great. In RTCW the only time it really seems to slow down is when someone starts blasting with their flamethrower, and then it's all over for me. :)

      I suppose it's going to take a 'killer' game for me to really want to switch. I haven't yet played the retail version of Medal of Honor AA, but maybe that will push me... if not, probably Halo.
  • What's the point? (Score:4, Insightful)

    by filtersweep ( 415712 ) on Wednesday February 06, 2002 @10:28AM (#2961455) Homepage Journal
    Does anyone use these cards for anything other than games?

    These cards cost as much as a decent CPU, or a console game system, yet are a fraction of the cost of a CAD card. Their shelf life seems pretty limited as well. In a year or two they will all have a half gig of Rambus or DDR and we'll have 16X AGP. Then we'll all need high-definition monitors because today's pixels will look "blocky" by comparison. Then we'll be right back to unusable framerates at higher resolutions... it all goes full circle.

    I've never been able to justify the cost, but then again I don't game. The ironic thing is that "fun and games" arguably stress the hardware more than any other apps for most general home users.
  • by yoink! ( 196362 ) on Wednesday February 06, 2002 @10:36AM (#2961485) Homepage Journal
    The THG article indicates that, for all intents and purposes, the average home-computer user still has enough power in his 700-1000MHz machine that upgrading to the ridiculously overpowered 2GHz P4s and Athlon XP 2000+s just isn't worth it (unless of course his livelihood is dependent upon computing time). I believe the same is starting to happen in the GPU field as well. A brother of mine recently bought a GeForce 3 card, just after the introduction of the Ti 500/200 updates. To this day it's still more power than he needs, and it should easily outlast the TNT2 Ultra card it replaced.

    The main point being that except for those people who crave "the fastest" - and there's nothing wrong with that ;-) - these incremental increases in performance are going to mean less and less to the consumer, most of whom go to the biggest electronics store around and say "my kid needs a special 3d thingy to play this new game." Although I honestly believe people would be happier if they informed themselves a little, it's impossible to think that they will, and in the end it doesn't matter. We've been years away from any new device that shows real promise; instead, the best some people can come up with is an integrated cell-phone / PDA. Hmmm... who would have thought... Until something does show up, I'll be playing Quake on an 8MB single-head graphics card. Humiliation!
    • Graphics cards today are far, far, far away from what they could be. Games still do not look like Real Life(TM), so there is still a long, long way to go.

      Graphics cards will continue to get incrementally better until they can push enough polygons(or whatever) to create fully realistic life-like real-time fully 3D images at a constant framerate high enough so slowdown is completely imperceptible to the human eye.

      That is still very, very far away. Your post is AT LEAST a decade too early.

      • Um, until people start writing the SOFTWARE to utilise the hardware, you're still talking about current-level equipment being 'good enough'.

        I got an Xbox because it's got GF3-level support and software to USE it. I can't say the same thing for the GF2 MX400 on my desktop computer. I _still_ don't see a bunch of games that utilise even its capabilities, much less the GF3 that's been the hot card to get OR the GF4 which just came out.

        Further, programmers won't spend the time to fully utilise the capabilities of the GF4 for quite some time, as it represents a tiny fraction of the installed base. On the _XBOX_, on the other hand, they CAN devote the time, as every single unit can support the code.

        (And swap utilize for utilise if it's misspelled.)
        • Does the XBOX have a VGA out? I saw it in the store and wasn't impressed. But also if you look at the output of a PS2 without S-Video it looks like crap. And if you use the VGA box for the Dreamcast, Soul Calibur looks awesome. Better then some PS2 games with S-Video.
          • It's got a video port that you use to attach the Xbox to whatever you've got (from channel 3 on crappy TVs to 1920x-mumble-whatever component out on high-end HDTVs). I'm using the S-Video home theatre connection kit and it looks as good as anything can on a 6-year-old big-screen TV.
        • I'll respond here, but as a number of folks have said, the software needed just ain't out there. It's coming, though. Check out the Anandtech article on the new Unreal2 engine. It beats up current graphics cards, and I think we can expect the same from other game companies.
          • Dude, I've been waiting _2_Years_ for something that taxes anything more than a GeForce256. You're talking about _a_ game that _may_ be out soon. Right now the only things that task your Nvidia card are the demos released by Nvidia! My point was: the console games are taking advantage of the capabilities _now_.
            • Console games (for the XBox) are essentially the same as PC games. If they're taking advantage of the card in a way PC games aren't it's simply because they know the hardware exactly and instead of providing detail sliders, it's tuned directly.

              However, I think current PC games are using the cards we have. Tribes 2 and Giants are both slow on a GeForce 2. Wolfenstein (and supposedly MOH:AA) are very chunky with less than a GeForce 3, unless you turn the texture quality down.

              Besides, console games are too handicapped by being designed for consoles. Honestly the XBox seems best because of the HD, it lets games actually save state info. But gamepads are lousy for most types of gaming except platform games, writing a game to use them cripples the interface. (As compared to a primary keyboard/mouse or keyboard/gamepad where they don't try to cram everything onto the mouse/gamepad and cut the features that they can't fit onto it.)
    • You must be one of them weird Mac folk that seem to think the latest and greatest and fastest isn't necessarily useful for day-to-day life.

      Didn't you know that playing Quake at anything less than 120 FPS is dog slow? You can't twitch fast enough if it's less than that. GEEZE, I bet you don't even go out and splurge $200 every six months for a new processor and MB.

    • Doom 3 (or whatever it'll be called) is only meant to be able to run at 30fps on the latest GeForce 3 stuff. GeForce 4 and onwards are really the only cards that will be able to run it at a decent speed 1024x768 and up. You think Doom 3 won't be a popular game? Hahaha.

      These cards are not only necessary - they're going to be standard within the year.
  • Anandtech's review (Score:3, Informative)

    by GweeDo ( 127172 ) on Wednesday February 06, 2002 @10:42AM (#2961507) Homepage
    Anandtech has quite a good review here []. They also have benchmarks from the lastest build of the unreal engine here []. Enjoy :)
  • I guess it's time for me to go buy that Radeon card I've been planning to get for quite some time now. And I mean _the original_ Radeon, not the (7|8)500. Hey, I hardly ever play any games and I'm still using ATI's VideoXpression with 2MB of memory from 1997!
  • by DG ( 989 ) on Wednesday February 06, 2002 @10:47AM (#2961524) Homepage Journal
    I know you're out there John. :)

    Lemme ask you this: it seems that with the previous generation of 3D cards, the technology had reached the point where any game with a reasonable game engine could be run at 1024X768x32bit with all the detail goodies turned on at framerates that were completely playable.

    (Perhaps this is a mistaken assumption?)

    If so, then what does this card bring to the table from a game designer/coder's perspective?

    If there's no point in driving a Quake3 style engine any faster (because it's already fast enough) then what will you be able to do with this new hardware that you couldn't do with older stuff?

    Or to rephrase, what hardware feature do you most wish was available on the current generation of 3D cards, and does this new card have that feature?

    • If there's no point in driving a Quake3 style engine any faster (because it's already fast enough) then what will you be able to do with this new hardware that you couldn't do with older stuff?

      IANJC, but I think I can try an answer.

      I play Quake3 online about 2 hours a night. At 1024x768x32, no less, on a TFT (which effectively limits framerates, since the refresh is so much slower than modern tubes).

      70 fps is plenty good for me in Quake3, and I don't really have much desire to go higher. But my GF2 won't do 2xAA over about 9 frames per second. This helps smooth out the picture considerably. So the newer cards will support the same old engines running with full-scene anti-aliasing at 4x at a "usable" framerate. No big change for coding there.

      Another thing John talked about in his last remarks, though, was poly count. Your models and scenes can have beaucoup more polys when you juice up the core speed on a new processor, making the whole gaming experience a lot more realistic-looking.

      John also talked in previous .plan files about vertex and pixel-shading, and how applying multiple lighting effects on single pixels can make things a lot cooler in actual gameplay. The eyecandy factor for this is hella-big.

      As a side note, the one disappointing thing is that while the GF4 Ti cards (NV25 chipset) include a second vertex shading unit in the chip, there is *NO* dedicated pixel shading unit at all, as there was in the GF3. Why is this??

      They go into this in the Tom's article, and it sounds as though the NV25 still supports pixel shading versions 1.1 and 1.3 (whatever that means), but won't support 1.4 until the next chipset. And *THEN* they should be fully DirectX 8.1-compliant.

      On the other hand, why should that matter, as John Carmack uses OpenGL, not Direct3D. ;D
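      The AA cost mentioned above can be sketched with a back-of-the-envelope model. The assumption here is mine, not the poster's: the card is purely fill-rate bound, so N-sample supersampling divides the framerate by N.

```python
# Naive sketch (an assumption, not a benchmark): supersampled AA renders
# N samples per output pixel, so a purely fill-rate-bound card's framerate
# drops by roughly a factor of N. The 70 fps base figure is the poster's.
def estimated_fps(base_fps: float, aa_samples: int) -> float:
    """Naive framerate estimate under N-sample supersampling AA."""
    return base_fps / aa_samples

print(estimated_fps(70, 2))  # naive 2xAA estimate: 35.0 fps
print(estimated_fps(70, 4))  # naive 4xAA estimate: 17.5 fps
```

      In practice the poster's GF2 drops to about 9 fps at 2xAA, far below this naive scaling, because memory bandwidth saturates long before the simple fill-rate model predicts.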
      • As a side note, the one disappointing thing is that while the GF4 Ti cards (NV25 chipset) include a second vertex shading unit in the chip, there is *NO* dedicated pixel shading unit at all, as there was in the GF3. Why is this??

        Um, yes it does have a dedicated pixel shader. It still only has one, like the GF3, though it's faster. Where did you get the idea it doesn't?

    • it seems that with the previous generation of 3D cards, the technology had reached the point where any game with a reasonable game engine could be run at 1024X768x32bit with all the detail goodies turned on at framerates that were completely playable.

      IANJC, but...

      If your GPU can run 1024x768 with all the goodies on, then there clearly aren't enough goodies to select from. Have you seen Final Fantasy/Shrek/Monsters Inc? Every one of those looks a lot better than any computer game, even on a basic NTSC set. And that's much worse than 640x480! Clearly we need many more polygons, better lighting calculations, and whatnot. The current trend is to run the same old games at ever higher resolutions - what good is it to be able to see individual polygons more clearly? Is it somehow cooler to view borders of polygons instead of pixels, and to spend hefty amounts of processing power on it? Vertex and pixel shaders sound like exactly what we need.

      Currently we have something like 20K polygons/frame. At 800x600 that makes an average of 24 pixels/polygon. When we're down to more like 4 pixels/polygon, then it's time to increase resolution. I'd say 200K polygons/frame should look pretty good at 1024x768. Make that times 10 for even remotely realistic-looking results. The problem I'm seeing is unevenly distributed polygons: 10K polygons go to a figure in the distance taking up 20x20 pixels of screen, while only 2400 polygons are used to render the rest of the world. When the largest polygon on the screen has less than 10 pixels, I'm happy.

      And when we have GPUs and game engines that are able to display Final Fantasy-looking graphics, I will compare them to the real movies.

  • What else is there? (Score:3, Interesting)

    by Galvatron ( 115029 ) on Wednesday February 06, 2002 @10:49AM (#2961538)
    Seriously, are there any competitive alternatives to NVidia these days? Personally, I'm starting to think about replacing my TNT2, but I'd kind of like to get something with open source linux drivers. At the same time, I don't want to have to go back to a Voodoo 5 or some shit like that just because it is open.

    So, does any company make good graphics cards with open specs?

    • by Odinson ( 4523 ) on Wednesday February 06, 2002 @12:23PM (#2962047) Homepage Journal
      Seriously, are there any competitive alternatives to NVidia these days?

      Strangely, few Slashdotters want to talk about this.

      Personally, I'm starting to think about replacing my TNT2, but I'd kind of like to get something with open source linux drivers. At the same time, I don't want to have to go back to a Voodoo 5 or some shit like that just because it is open.

      I totally agree. Not only would I buy such a card myself, but I would advertise it to everybody I know as the best (most flexible) solution.

      So, does any company make good graphics cards with open specs?

      The Radeon 7500 (AIW as well?) is the best non-NVIDIA card in XFree86 (4.2) right now.

      The XFree86 guys are working on the 8500, but who knows.

      The problem is a one-two punch:

      Nobody bothers to try with Linux, since good free closed-source drivers are made available.

      NVIDIA bought one of the players and shrunk it to a two-way race.

      I would mind less if NVIDIA had just bought 3dfx, or just released their own closed drivers, but both.....

  • Best Buy (Score:3, Redundant)

    by antisocial77 ( 74255 ) <bjwtf13@gmai l . com> on Wednesday February 06, 2002 @10:51AM (#2961542) Homepage Journal
    Um... this has to be a mistake, but apparently Best Buy is letting you pre-order these little slices of heaven for $129.00.
    Check it out.
    • Re:Best Buy (Score:2, Informative)

      by MJArrison ( 154721 )
      The same thing happened last year with a $900 19" Toshiba monitor at another online retailer. It was listed for $100-something. Thousands of people ordered it. There was a class-action lawsuit, and each person who ordered it (myself included) ended up getting a check for $45.

      Bottom line: Go order it, even if you don't get it, you might get some cash out of the settlement.
    • NNNOOOOOOOO... BestBuy doesn't allow Canadian accounts!!!!!!!
    • Re:Best Buy (Score:3, Informative)

      by Julius X ( 14690 )
      Prices and availability are subject to change without notice. Errors will be corrected where discovered, and Best Buy reserves the right to revoke any stated offer and to correct any errors, inaccuracies or omissions (including after an order has been submitted). Best Buy may, at its own discretion, limit or cancel quantities purchased per person, per household or per order. These restrictions may include orders placed by the same account, credit card, and also orders which use the same billing and/or shipping address. Notification will be sent to the e-mail and/or billing address provided should such change occur.
      -From the BestBuy website.

      So this means that this probably won't be honored. Bummer.
      • Best Buy reserves the
        right to revoke any stated offer and to correct any errors, inaccuracies or omissions (including after an order has been

        IANAL, but no entity can "reserve a right" it never had to begin with. It's thoroughly possible that you could make Best Buy honor their offer in a court, depending on laws wherever you are.

        For example, I could post a sign on my front door that says I "reserve the right" to search you if you come into my house. Of course, it's all BS because I never had the right to search you to begin with, and thus cannot reserve such a right. In this sense, it's just a meaningless turn of phrase thought up by company lawyers to let you think you've divested yourself of rights you never gave up. The sales tactic of bait & switch, even on the internet, is still bait & switch, and isn't looked upon kindly by judges no matter where you live. I'd say give it a shot.

    • Re:Best Buy (Score:2, Informative)

      by ph0rk ( 118461 )
      The price has been fixed. Now, will they cancel my order? ;)
    • Not anymore! (Score:2, Informative)

      by travdaddy ( 527149 )
      Yep, it was a mistake... they just jacked the price up to $399.99.
  • by fluor2 ( 242824 ) on Wednesday February 06, 2002 @10:59AM (#2961583)
    Here are the links for the GF4 in action. I think the resolution is pretty high. I can't wait for Doom3 on this card.

    Squid []
    Wolfman (i guess this is the best) []
    Tidepool []

    Looks like they had some spelling errors in some of the videos (they spelled "content" as "contnent").

  • Geforce4... Wowee... (Score:2, Interesting)

    by Talez ( 468021 )
    More shaders, More pixel pipelines, More memory bandwidth... whoopee...

    When the hell are they going to ditch the antiquated scanline rendering method and go work on some tile-based rendering methods?

    Hell, the reason the GeForce line has to keep doubling its fill rates every generation is that its architecture is so goddamn inefficient. Look at the memory bandwidth requirements for the cards! Instead of using the relatively limited bandwidth of AGP for streaming textures from main memory (where they goddamn should be) to the texture cache, the card is busy wasting bandwidth on the damn Z-buffer (which would be eliminated if they implemented hidden surface removal like the PowerVR chipsets do).

    Also, tile-based renderers scale better. Stick another graphics chip in and you instantly double the performance of the card, because you can process two tiles at once.

    How about seeing some real innovation in the field, rather than just adding a few new pixel pipelines and a shader that nobody has any freaking idea how to use!
    • Ah, but then there'd be no reason for everyone to go out and buy $400 video cards every few years! It's the same reason Microsoft keeps adding "features" to Windows instead of making it more efficient. If adding more performance to your video card was as easy as dropping in a $50 processor, Nvidia executives wouldn't be able to afford NEARLY as many fancy cars!

      Progress and business are often mutually exclusive.
    • First of all, nobody uses scanline rendering. Maybe NEC PowerVR, if they're still around. "Scanline," as most graphics guys use the term, means you do hidden surface removal with something like Bresenham's algorithm rather than a Z-buffer. But everybody uses Z-buffers and, as far as I can tell, a 'sort-middle' approach.

      Second, tile-based rendering has been tried many many times, both by high-end graphics companies (HP's PixelFlow effort a few years back) and by low-end companies (PowerVR's scanline approach, Dynamic Pictures did tiles under the covers IIRC, MS Talisman, PixelFusion, Gigapixel, and others I'm no doubt forgetting of the 40+ PC 3D companies that were around 5 years ago...). Basically it's a loser. It doesn't fit well with DirectX and OpenGL APIs, it creates almost as many problems as it solves (e.g. load-balancing among tiles, bandwidth-sucking data overlap/duplication among tiles), and the marginal improvements it might generate in theory in speed are outweighed by the retraining time required for graphics developers worldwide to learn programming techniques oriented around tile-based hardware. I could describe these problems in more detail if you indicate interest in a follow-up posting, but I don't have the time now in the middle of the day.

      Pixel and vertex shaders are at least relatively innovative. If they can figure out how to tie together not just 2 or 4, but 8 or 32 together in a simple, yet flexible and comprehensible way (I saw Pat Hanrahan give a proposal on how to do this at Eurographics a couple years ago) that makes it easier for developers to use them, that'd be an innovation in parallelism that really pays off IMHO.


      Disclaimer: Any 3D expertise I have is a bit rusty. Feel free to correct any technical misstatements.
    • Nonsense, who modded this to 5?? This guy doesn't have a clue. This card is the fastest; the policy of "whatever works" should apply, and will ultimately win in the market. People have tried deferred shading and tiled approaches, and while the NVIDIA system is not a scanline approach, it is not the scheme you probably envision; that's WHY it's the fastest. The other approaches failed, and many of the people who worked on them now work for NVIDIA.

      There are hundreds of engineers at NVIDIA who make these design decisions based on what will work in terms of power requirements, implementation, programmability, speed and a host of other factors. NVIDIA leads in performance because they get this right. Programmers DO know how to use the programmable shaders, but there are other, more traditional ways to use this hardware, and the extra pixel pipeline will help even simple multitexture applications too. Even scanline systems can scale very nicely (you seem to have forgotten Voodoo SLI), so the claimed scalability advantage of the tiled approach just isn't true, and there are other ways to scale graphics systems too.

      Your post is a plea for your pet favourite graphics scheme, but there are detailed technical issues to be considered beyond the glib appeal to emotion. The facts and NVIDIA's performance speak for themselves, and your post is the graphics equivalent of complaining that Ford doesn't make water-powered cars.
    • by ToLu the Happy Furby ( 63586 ) on Wednesday February 06, 2002 @06:19PM (#2964508)
      More shaders, More pixel pipelines, More memory bandwidth... whoopee...

      When the hell are they going to ditch the antiquated scanline rendering method and go work on some tile based rendering methods?

      Probably never, and for very good reason. Tile-based rendering is a very efficient architecture whose time has already come and gone.

      For those who don't know, tile-based rendering divides an image up into a number of smaller squares ("tiles") and renders them independently, as opposed to the traditional method ("immediate-mode rendering") of rendering an image one polygon at a time. The major benefits claimed for tile-based renderers are that the process is more parallelizable (no risk of two chips rendering to the same area if they are working on different tiles) and that it is an easy modification to check each polygon's z-buffer (its distance from the camera) as you add it to the poly-list for its tile, and then to only texturize those polygons which are not occluded (i.e. actually visible). This is in contrast to the traditional immediate-mode rendering algorithm, where polygons are textured more or less in random order, leading to situations where a polygon will go through the entire process of being textured and rendered, only to later be completely covered up by a later poly--a situation which wastes a lot of (especially) memory bandwidth, fetching all those useless textures and such.
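The deferred-texturing idea described above can be sketched in a few lines (a hypothetical, heavily simplified toy: one depth value per polygon fragment per tile, no real rasterization or edge math):

```python
def render_tile(polygons):
    """polygons: list of (name, depth, pixel_set) fragments covering one tile.
    Returns the set of polygons that survive the depth test and get textured."""
    # Pass 1: resolve visibility per pixel BEFORE any texture fetches.
    nearest = {}
    for name, depth, pixels in polygons:
        for px in pixels:
            if px not in nearest or depth < nearest[px][0]:
                nearest[px] = (depth, name)
    # Pass 2: texture only the polygons that won at least one pixel; a fully
    # occluded polygon never costs any texture bandwidth.
    return {name for depth, name in nearest.values()}

tile = [
    ("wall",   5.0, {(0, 0), (0, 1), (1, 0), (1, 1)}),
    ("poster", 4.0, {(0, 0), (0, 1)}),   # in front of part of the wall
    ("hidden", 9.0, {(0, 0)}),           # entirely behind everything
]
print(sorted(render_tile(tile)))  # ['poster', 'wall'] -- 'hidden' is skipped
```

This is the contrast with immediate mode: there, "hidden" might be fully textured first and then overwritten by "wall."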

      Cool! Sounds great! Let's hear it for tile-based rendering! Too bad ATI and NVIDIA have clearly never ever heard of this miracle technique! After all, it's not like they would ever make (gasp!) an informed choice not to use it!

      Well...not so fast. Basically what we've seen is that tile-based rendering offers two potential benefits: it eliminates *some* of the complexity of enabling multi-GPU implementations, and it uses quite a bit less memory bandwidth in the base case. The problem is that both of these supposed benefits really buy you very little when designing a consumer-level graphics card today.

      First, the problem of "dividing up the work" isn't really what's preventing multi-chip graphics cards these days. Indeed, it's really a rather easy problem. Here's a clue: have alternate chips render alternate frames. Gee...that wasn't so tough, now was it? Well, no. But the other problems of implementing a multi-chip card for the consumer market sure are. For example, we have our choice of implementing an (expensive, performance gating) point-to-point bus to handle memory traffic (and have memory bandwidth/chip cut in half anyways), or of completely mirroring the memory, using twice as much for the same capacity (expensive). Then there's the cost of a second chip (expensive), the cost of packaging the second chip and connecting it to memory (expensive), and the cost of the extra power and cooling, the cost of trying to squeeze it all onto one card (results in a bigger, more expensive card; may gate clockability). And this is without mentioning the extra development and debugging time that goes into getting a multi-chip solution to work correctly. (In general this is one of the most difficult issues design engineers face.) Golly, it's almost enough to make you remember how when 3dfx tried to make a multi-chip product it was 6 months late, the single-chip card was far too slow, the double-chip (and cancelled quad-chip) card too expensive, and, due to the release delay, no longer competitive. (OTOH John C has hinted that a scalable multi-chip architecture might be on the way from one of the major players. Tie that in with the fact that Anand reports the GF4 will be the last to use the GF name, and that NVIDIA owns the remnants of 3dfx, and I start scratching my head...)

      Second, the problem of memory bandwidth. Or rather, the former problem of memory bandwidth. Yes, the traditional rendering pipeline is very inefficient with memory bandwidth. Thing is, the prices on high-speed DDR have been coming down so fast that it hardly matters. You can find a Radeon 7500 with 64MB of 128-bit-wide DDR running at 2x230 MHz (i.e. 7.4GB/s bandwidth) for as low as $85 online. (Actually there's one for $79, but it may be mislabeled.) The memory is probably less than $30 of the cost. Or maybe even less--the 64MB and 32MB GF2Pros (6.4GB/s bandwidth) only differ by $6. And the new GF4 MX460 hits the street with 64MB of 2x275 MHz DDR (8.8GB/s) for $179, list, on a brand new card.

      As for the price premium of using relatively high-speed DDR instead of the same amount of SDRAM, it's pretty negligible. Even for the highest-speed DDR it's not such a big deal. Sure, NVIDIA charges an extra $100 for another 25MHz on the GPU and an extra 1.6GB/s from the memory (GF4 Ti4600 vs. Ti4400), but that doesn't mean it costs them anywhere near that much (depending on GPU yields). It just means they like to bilk the $400-for-a-video-card crowd for the full $400. So how much does the stuff cost? Well...Hynix recently announced samples and volume production of 2x375 MHz x32 DDR selling at $10 for 128Mbit chips. That means $40 for 64MB of 128-bit-wide DDR with 12GB/s of bandwidth. Not too shabby.
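All the bandwidth figures quoted in this post follow from one formula (peak numbers only; DDR transfers twice per clock):

```python
def ddr_bandwidth_gbs(bus_width_bits, clock_mhz):
    """Peak DDR bandwidth in GB/s: bytes per transfer * 2 transfers per clock."""
    return bus_width_bits / 8 * 2 * clock_mhz * 1e6 / 1e9

print(ddr_bandwidth_gbs(128, 230))  # 7.36 -> the Radeon 7500's ~7.4 GB/s
print(ddr_bandwidth_gbs(128, 275))  # 8.8  -> the GF4 MX460
print(ddr_bandwidth_gbs(128, 375))  # 12.0 -> the Hynix parts mentioned above
```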

      Ok, maybe the benefits of tile-based rendering don't really mean all that much in today's consumer GPU market. But better is better: why wouldn't ATI and NVIDIA use tile-based architectures for the benefits they do provide? After all, it's not like there might be some (gasp!) downsides to tile-based rendering!

      Well, actually, there are. For one thing, it's more difficult to design a tile-based GPU and get it running at high speeds. For another both NVIDIA and ATI have years and years of research and experience with implementation techniques and algorithms for immediate-mode renderers, much of which wouldn't apply to tile-based designs.

      For another, neither ATI nor NVIDIA really uses traditional immediate-mode rendering anymore. Instead they use modified immediate-mode rendering, with lots of algorithmic tricks and tweaks to lessen the memory bandwidth inefficiencies of the traditional approach. Things like lossless Z-buffer compression and various early polygon-culling algorithms. No, they aren't quite as effective at reducing overdraw as tile-based rendering, but they provide quite a significant benefit. Indeed, the GF4 Ti4600 has more or less caught up with the (tile-based) KyroII in Kyro's own VillageMark benchmark, which is contrived entirely to test massive overdraw of the sort never encountered in a game. The KyroII is only 8 months old. Sure, it's much, much cheaper than a Ti4600, but if Kyro can barely keep the lead in the one benchmark specially designed to make the case for tile-based rendering, then something is wrong here.

      Meanwhile there are very serious issues with the ability of tile-based rendering to scale to meet future challenges. In particular, the tile-based rendering algorithm works very naturally so long as there are no polygons which find themselves spread into more than one tile, and so long as you don't use transparent or translucent textures. Of course it's not that tile-based chips can't handle these situations--the KyroII is here and works just fine, after all--but just that they require complicated workarounds which are more inefficient than for immediate-mode rendering, which handles these cases naturally.

      The problem is that both cases are going to be more and more likely as graphics continue to improve. As tile-based rendering tries to scale with increasing scene polygon counts and resolutions, you get more tiles per scene and many more polygons crossing tile boundaries. And as graphical effects get more realistic, the alpha channel (i.e. transparency) starts coming into play more and more. Indeed, much of the recent research in non-real-time computer graphics has focused on adding translucent "subsurface" reflections to the ray-tracing algorithm. This (and approximations of it) is the sort of thing that future pixel shaders are going to be called on to do, and tile-based rendering is a bad match for it.

      Indeed, most of the recent advances in graphics are pointing towards a world in which the assumptions which tile-based rendering is based on no longer hold. How, for example, does tile-based rendering handle cubic environment mapping across tile boundaries, or cast dynamic shadows across tile boundaries? What happens if a dot3 bump map extends a texture from one tile into another? I'm sure clever solutions can be found to these and all the other dozens and dozens of issues that will arise when you try to mix DX8-style effects and tile boundaries, but the main point is that tile-based rendering was an algorithm developed under two assumptions which increasingly do not hold:

      1) If one polygon occludes another, the other's texture will never be visible to the camera;

      2) Objects in one section in the screen can be rendered without reference to any other parts of the screen.

      Of course, we may never know the difficulties of trying to make a DX8-compliant tile-based renderer; after all, the KyroII hasn't even made it to DX7, since it is still missing integrated T&L. I have no idea whether this is because of any difficulties integrating T&L with a tile-based rendering pipeline (can't think of why it would be a problem, but it may be), or just because the Kyro doesn't have the money or manpower behind it to keep up with 3 year old technology, but this lack is already preventing the KyroII from competing effectively with the cheaper GF2MX on modern high-poly games. I am pretty sure that integrating a programmable pixel shader into a tile-based architecture would be pretty tough, if not pretty impossible.

      Which brings me to the main point: you started out writing "More shaders, More pixel pipelines, More memory bandwidth... whoopee..." and in a sense, this is the right attitude. To which we should very quickly add "tile-based screen division...deferred rendering algorithm...whoopee..." All these technical details only mean something insofar as they give us the capability for more realistic graphics--this means high FPS, high color depth, higher resolutions, lack of aliasing problems, high-quality mip-mapping/anisotropic filtering, realistic--or even dynamic--lighting and shadows, realistic and/or impressive pixel effects, high polygon counts, useful and realistic vertex effects, etc.--for a reasonable price. It is pretty damn hard to argue that the last few years, under NVIDIA's leadership (and ATI's pursuit) have not resulted in huge improvements on these measures. Again, the new GF4 Ti4600 may be ridiculously expensive and may not change your experience with today's games very much (besides enabling 1600x1200x32 with 4xAA at playable framerates), but when the new Doom game comes out, a card with similar specs and selling for ~$100 will bring you decent performance on an engine which offers a totally new level of graphical realism. Same thing when Unreal Warfare, Unreal 2, Deus Ex 2, and all the other Unreal 2-engine games start coming out. Believe me, a GF4 caliber card will improve the experience of playing those and later games significantly over a GF3 and especially a non-DX8 compliant card like a GF2 (and, sadly, a GF4MX). And, believe me, those games are going to provide significantly more realistic graphical experiences than those of today.

      Immediate-mode rendering is doing just fine, and the GF4 marks an evolutionary but very significant improvement to the state-of-the-art. A switch to tile-based would require significant retreading to reach the same level, and might form a poorer basis for future improvements. But, if I'm wrong, then ATI and NVIDIA will make the switch. Believe me, they know all about tile-based rendering, and NVIDIA even owns Gigapixel (via 3dfx) and their tile-based rendering engine. I think they'll stick to modifications of immediate-based rendering, but no matter what they do it will be whatever they think offers the best graphics performance at the lowest cost to them.

      And now to correct some minor misconceptions in your post:

      Hell, the reason the GeForce line has to keep doubling its fill rates every generation is that its architecture is so goddamn inefficient. Look at the memory bandwidth requirements for the cards!

      The reason the GeForce line increases its texel fill rates continually is because consumers want to run new games which have higher multi-texturing requirements (Carmack has said Doom3 will have something like ~8 textures/pixel), and to run existing games in higher resolutions and at higher FPS.
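To put a rough number on that (a back-of-the-envelope sketch; the 8 textures/pixel figure is the Doom3 estimate quoted above, and the overdraw factor is an assumed multiplier, not a measured one):

```python
def required_texel_rate(width, height, fps, textures_per_pixel, overdraw=1.0):
    """Texels per second needed, ignoring texture caching and early culling."""
    return width * height * fps * textures_per_pixel * overdraw

# 1024x768 at 60 fps, ~8 textures/pixel, with an assumed 3x overdraw:
rate = required_texel_rate(1024, 768, 60, 8, overdraw=3.0)
print(rate / 1e9)  # ~1.13 billion texels/s
```

Push the resolution and framerate up and multi-gigatexel fill rates stop looking like pure marketing.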

      The memory bandwidth "requirements" for the cards don't matter, only the prices. If a recent card with 7.4GB/s only costs $85 (Radeon 7500) and a brand new card with 8.8GB/s lists for $179, then the costs of increasing memory bandwidth are obviously not so terrible. Today's $400 card is next year's $80 card. Similarly, immediate-mode rendering's inefficiencies need to be measured according to their dollar costs, not their bandwidth costs.

      Instead of using the relatively limited bandwidth of AGP for streaming textures from main memory (where it should god damn be) to the texture cache, the card is busy wasting bandwidth on the damn Z-buffer (which would be eliminated if they implemented hidden surface removal like the PowerVR chipsets).


      First off, textures most certainly should not "goddamn be" in main memory! The AGP bus is there to stream vertex data from the CPU (pre- or post-transformation, it's the same amount of data). That's all it's there to do, and a good thing, too, because today's high-poly games can already generate enough vertex data to make AGP 2x a bottleneck, and those of a couple of years from now will do the same to AGP 4x. (Which is why AGP 8x is on the horizon.) Increasing the bandwidth of a bus from the northbridge across the motherboard through a slot to an add-on card is a whole lot harder than increasing the bandwidth from soldered DDR to a soldered GPU a few centimeters away. AGP should only carry the data which it absolutely is forced to--namely, initial vertex data from the game's engine running on the CPU.
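A quick plausibility check on that AGP claim (assumed figures: 32 bytes per vertex and no vertex reuse, i.e. 3 vertices per triangle; real engines using indexed strips would need considerably less):

```python
def vertex_stream_gbs(tris_per_frame, fps, bytes_per_vertex=32, verts_per_tri=3):
    """GB/s of vertex data streamed from the CPU to the card, worst case."""
    return tris_per_frame * verts_per_tri * fps * bytes_per_vertex / 1e9

AGP_2X = 0.533  # GB/s, peak
AGP_4X = 1.066  # GB/s, peak

demand = vertex_stream_gbs(200_000, 60)
print(demand)           # 1.152 GB/s
print(demand > AGP_2X)  # True -- saturates AGP 2x
print(demand > AGP_4X)  # True -- and even the nominal AGP 4x peak
```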

      Z-buffer lookups only waste bandwidth between the GPU and the on-card memory. Technically, you don't eliminate z-buffer lookups with a tile-based architecture; you eliminate texture lookups (and texture application) on occluded polygons. However, by dealing with a small tile at a time, you can read all the z-buffer data for the tile in from memory all at once, and store it in an on-chip cache until you're done with that tile. (This is essentially why higher poly-count games mean smaller and smaller tiles.)

      And last, they do implement hidden surface removal techniques, like I pointed out before, even though they are less effective than with a tile-based architecture.
      • Good God someone please mod this UP! I actually read the whole thing and it appears ToLu actually wrote this entire thing himself. This really is informative and even revealing of what might lie ahead in the video card market. Help ToLu out please!

  • by HomerJ ( 11142 ) on Wednesday February 06, 2002 @11:39AM (#2961759)
    If you're sick of all these senseless video card upgrades, just follow the $150 video card rule: no game is really going to need more than a $150 card. If you're paying more than that, you're wasting money.

    Your money would be better spent putting the extra toward a better monitor, for instance. You'd be surprised at the number of people who spend $400 on a video card to play on a $150 monitor, then wonder why things are still jumpy. A nice subwoofer and new speakers would also enhance your gaming experience.
    • Nice theory.

      I like to play my games with Anisotropic filtering on, with antialiasing on, and with higher than 1024x768 resolution. And I don't like slideshows. 60FPS is bare minimum, constant 80+ is good.

      A GF3 can barely fit the bill. I'll probably still replace it with a GF4 Ti4600 soon.

      If you're happy playing at 1024x768 with no AA, only trilinear filtering, and none of the fancy new pixel shader tricks, you're probably happy with a GF2. I dumped mine over a year ago and never regretted it, even though the GF3 cost an arm and a leg.

      Some of us just care more about the rendering quality. And yes, I have quite fine monitor.

      Then again, most people don't even know what anisotropic filtering is or how much better a rendering result it gives, let alone how to turn it on in the driver options. And those who go and try it on their GF2s will watch the slideshow for 5 minutes and turn it off again...
  • This is your

    Standard /. post

    complaining about

    Tom's Hardware

    not putting enough

    information on one page.

    We need a "slashbot" that will automatically post all the normal postings we have come to know and love.

  • by epepke ( 462220 ) on Wednesday February 06, 2002 @12:31PM (#2962081)

    The exciting thing about the GeForce 4 is not that it's faster or cheaper, it's that finally the programmability is at an appropriate level.

    Uh-huh. 15%. Yawn. Don' need that. I can play Deus Ex just fine. Well, guess what. Even if you think that games are the entire universe, some day you might just need an MRI and need someone to be able to look at it and find something that will keep you from dying. Medical imaging is one of the things that the GeForce 4 will be good enough to do. Scientific visualization, volumetric rendering, that sort of stuff.

    Why is this? About a decade ago, everything was basically SGI. These were big, expensive machines, suitable for vertical markets. It was possible to get the engineers to work with the microcode for the sales of a small number of units.

    Then various card companies came along (NVIDIA has a lot of ex-SGI engineers) and started making cards for the horizontal gaming market. They concentrated, of course, on satisfying the needs of their biggest customers/promoters, the gaming people. Many of these cards were customizable, but at a level of abstruseness such that maybe three people in the world could really hack them up the wazoo.

    In the meantime, SGI suffered, because even people who should know better make decisions on the basis of "gee whiz." No magazine is going to benchmark a card on how accurately it shows a tumor from real data. A perception arose that the graphics problem had been solved for cheap, when it really hadn't been.

    The GeForce 4 finally brings little-card graphics up to the point where mere mortals can actually do customization for vertical markets.

  • The "new" GeForce 4 MX is apparently a warmed-over GeForce 2 architecture, without the programmable ertex and pixel shaders. The GeForce 4 Titanium is an improved GeForce 3.

    I'm surprised, as is Tom's Hardware. If NVIDIA wants developers to use their underutilized vertex shader hardware, which takes considerable programming effort, they need to put it in the whole product line. Right now, the GeForce 3 vertex shader hardware is in all the GeForce 3 parts, the Xbox, and the Mac boards, but not in the nForce or the GeForce 4 MX. Those last two are GeForce 2 architecture.

    This sounds like marketing insisting that the new low-end product be called a "GeForce 4," when it really should have been called the "GeForce 2 MX".

    The transistor count on a GeForce 3 architecture part is about 3x that of a GeForce 2 architecture part. This isn't a trivial difference.

  • But I don't think this is necessarily true. I haven't yet, but I'm anxious to get my hands on a game that supports geomod. When you start adding technologies like this, and depending on how detailed it is, I see this as something that could place a HUGE burden on the GPU. Everyone says current boards are OK for CURRENT games, but I'd like to think that game development will eventually grow into the extra bandwidth with all kinds of cool stuff.

I've got a bad feeling about this.