Hardware

Nvidia's NV20

Bilz writes "ZD Net UK has posted an article on Nvidia's upcoming NV20 video chip. According to the article, during complex 3D scenes the chip performs up to 7 times faster than a GeForce 2 Ultra."
This discussion has been archived. No new comments can be posted.

  • well you are the most stupidest!!
  • My first nVidia card was a 4MB PCI Riva 128. As far as I know, their first chipset. It was half price for $125 and worth every penny. I was there when they were developing beta OpenGL drivers; I remember trying to get the nVidia card working with SuSE Linux 5.3, and all through it I was a solid believer in nVidia's superior technology.

    Now I'm running a 3DFX Voodoo3 2000. It's fine. It's fast enough, and I have full HW accelerated OpenGL under Linux, FreeBSD, BeOS (well, 4.5.2 in theory) and QNX. My next card HAS to be able to do all of that. As with 3DFX, I want full open source drivers or I will NOT buy the card.

    Does Matrox's new card fit my criteria? Does ATI's? I know nVidia's doesn't. So my next card won't be an nVidia. Plain and simple.

    So, can any of you tell me what the status of the other card makers is?

  • Yeah, but hopefully what will happen is that a market in rendered artifacts will emerge. Game developers will be able to go to a website, access a library of pre-built stuff, including textures, and just include it in their game.

    I guess this would end up working in the same way as photo agencies - you'd get people doing nothing but contributing to these libraries, surviving on the royalties from the use of their objects, and other people who dabble but occasionally create something worth re-using and adding to the library.

    So when you make your game, you'd hit the web, saying 'I want a lamp, has to have on/off modes, must fit into Victorian era game' and up will come a list - sure, it'll take a while before a library has enough in it to make this possible, but once it does, you just populate your VR room much like heading down to IKEA to populate your RL room.

    I like this idea.
    ~Cederic

  • I can remember going custom PC shopping with a friend in 1995, down Tottenham Court Road in London.

    The guys in the electronics shops literally laughed at him when he asked for a 4MB graphics card - taunts of "You'll never need that, unless you're doing graphics for the movies" followed us down the street.

    Of course, they also laughed at him for wanting as much as (gasp) 32MB RAM in the PC.

    ~Cederic
  • The article claims that, in simple 3D graphics, the chip will be just two times faster than the GeForce. And it claims it will get faster as graphics get more complex...

    Frankly, I can understand that this may happen, as the GeForce does not have enough processing power to keep up with more complex models. But how does the new chip get to 7 times faster? If it is just 2 times faster on simple models, how does it get several times faster still on complex models? The basics of the model don't change, whether it is less or more complex in its details. The basics will be processed by the same channels and by the same math in the chip. Or am I missing something?
  • Comment removed based on user account deletion
  • IIRC, this is the chip they're developing with ELSA's help. It's being designed to replace their Quadro2 chip for high end workstation graphics cards. These won't end up in consumer or gaming cards, but some of the technology will probably find its way into future GeForce chips.
  • yes we can. My reasoning? I don't know... but faster is better and big numbers make me want to BUY it. That's why... it just makes it seem a lot faster.
  • However, they do crash for me when running and writing OpenGL apps. Also, restarting X is just hiding the problem, not fixing it. I don't think it is okay for software to fail, ever, with the exception of hardware error. There is lots of software I am upset with. Thankfully open source seems to be very into fixing these problems.
  • 500 MHz, probably will come with 64/128MB RAM, so why not just add a mouse/keyboard/floppy and call a spade a spade? This thing will be as fast, if not faster, than my PC, dammit! Now just imagine if other technologies, like cars, airplanes, space vehicles, moved at relatively the same pace as computer technology... we'd have colonized half the solar system by now! It's time for all you computer engineers to look beyond computers and start helping out in all other areas!

    just my $0.02
  • Yep - I'm confirming it - it's true. I read about twenty of his postings and they received on average 5 to 10 replies, usually for some kind of hyperbolic remark, or poor comparison. Unfortunately the above poster chose to rename the above guy kiss-the-d**k instead of his real name and got modded down.

    Just an off-topic thought. Are there any provisions for limiting the number of posts for consistent troll or flamebait posters, so as to decrease the noise? Maybe limit them to 1-2 posts per week until they consistently post a few comments above zero? Just a thought.
  • (The Radeon drivers are not completely OSS, just the rasterization parts are)

    Sorry, but this is incorrect. The specs have been released to VA Research, who has not yet finished the DRI driver. The specs were also released to Xig, who released a proprietary X driver. When the DRI driver is released, it will become part of XFree 4. The rasterization parts are already in there. If you want 3D support now, you have to get the Xig drivers.

    Now the Radeon 64Mb Xig drivers (Alpha 2.0) are actually FASTER than some of Nvidia's drivers with some of their faster cards. So the statement that NVidia currently has the fastest GL drivers is also incorrect. And I suspect that the next release of the V5 drivers (which BTW do support SLI and FSAA) will also be comparable to NVidia's drivers. The DRI developers are doing one hell of a job, and they all deserve our respect. And the fact is that open source developers DON'T NEED nvidia's pipeline; the current DRI/GLX stuff does it just fine.

  • I would imagine that they have made it possible to run more things in parallel. That way it scales better.

    It's just a theory though. And it assumes that the figure is even correct.
  • You know, "hidden surface removal" really doesn't mean that they have any special magic tricks up their sleeve. You can do pretty much correct HSR with a 16-bpp z-buffer, as long as you are careful with your near and far clipping planes. 32 bits per depth pixel is better, of course.

    What I'd like to see is a hardware implementation of hierarchical z-buffering for occlusion culling. That'd be neat.
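    A minimal Python sketch of that hierarchical z-buffer idea (purely illustrative; the pyramid layout, the depth convention and the rejection test are textbook occlusion culling, not anything known about the NV20):

        # Hierarchical z-buffer occlusion culling, toy version.
        # Convention: larger depth = farther away; depth_buffer[y][x] holds the
        # nearest depth already rendered at that pixel.

        def build_z_pyramid(depth_buffer):
            """Each coarser level stores the FARTHEST depth of its 2x2 children."""
            pyramid = [depth_buffer]
            level = depth_buffer
            while len(level) > 1:
                size = len(level) // 2
                coarser = [[max(level[2*y][2*x],   level[2*y][2*x+1],
                                level[2*y+1][2*x], level[2*y+1][2*x+1])
                            for x in range(size)] for y in range(size)]
                pyramid.append(coarser)
                level = coarser
            return pyramid

        def box_occluded(pyramid, x0, y0, x1, y1, nearest_depth):
            """True if a screen rect whose nearest point is nearest_depth is hidden."""
            level = 0
            # Walk up until the rect covers at most a 2x2 block of texels.
            while max(x1 - x0, y1 - y0) > 1 and level + 1 < len(pyramid):
                level += 1
                x0, y0, x1, y1 = x0 // 2, y0 // 2, x1 // 2, y1 // 2
            farthest_drawn = max(pyramid[level][y][x]
                                 for y in range(y0, y1 + 1)
                                 for x in range(x0, x1 + 1))
            return nearest_depth > farthest_drawn  # entirely behind what's drawn

        # Example: a 4x4 depth buffer with a wall at depth 5 filling the screen.
        zbuf = [[5.0] * 4 for _ in range(4)]
        pyr = build_z_pyramid(zbuf)
        print(box_occluded(pyr, 0, 0, 3, 3, nearest_depth=8.0))  # True: behind wall
        print(box_occluded(pyr, 0, 0, 3, 3, nearest_depth=2.0))  # False: in front

    The point of the pyramid is that a whole object can be rejected with a handful of coarse-level reads instead of a per-pixel depth test.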

  • Psst, you're still looking at the damn computer screen while playing. Psst, the game doesn't hurt like discovering the foot of a stool in the dark with one of those small toes on your foot does. Psst, you don't play "real life" with a mouse and the WSAD keys.

    Token-symbol-ish enough for you?

    The next thing you're going to do is start claiming that each time you die "in the game", you die a little inside. Furrfu!

  • The word you're looking for is "hierarchical Z-buffers". "Hidden surface removal" is a blanket phrase that covers such things as z-sorting (used in early software 3d engines), BSP rendering without a depth buffer, S-buffers, the "active edge list" algorithm used in the software renderers in Quake 1 (and prolly 2 too) and the good old z-buffer.

    Please, don't let some corporation smudge the terminology.

  • This may also be the real explanation for the drivers' closed-sourceness - a part of their business model is to have manufacturers ship boards that facilitate crippling of the chip's capabilities by the driver. Imagine what would happen if some open-source hacker could modify the driver to ignore the model ID and enable the Quadro-specific features anyway?

  • Not to mention that some companies (SGI comes to mind, also the company that did the Permedia {1,2,3} product line) have shipped products that have implemented most of the OpenGL pipeline (1.0 or 1.1) in silicon. Now, if that doesn't count as a GPU then I'm sure that "GPU" must be a registered trademark, like "Twinview" is.

    Ugh. Not to mention the fact that the G200 and G400 chips from Matrox also have a kind of geometry processing unit, called a "warp engine" that's programmable using some sort of proprietary microcode (the utah-glx project used pieces of binary-only microcode received from Matrox, and I'm pretty sure the XF86 4.0 DRIver does too). As far as I can tell from lurking on the utah-glx list back when John C. was working on the driver, the g200 has one warp pipe while the g400 has two. It looks like the current drivers for the g200 and g400 use the warp pipes for triangle setup acceleration (they only seem to use one microcode routine for triangle setup, which I think is a shame...).
    Based on my interpretation, that also counts as a GPU, programmable no less!

  • "By introducing better hidden surface removal"

    Oh please, you've just given your own lack of knowledge away right here. Do you even know what a z-buffer is? Or are you suggesting nVidia have some *revolutionary* branch-off from z/w-buffers? Or, wait, don't tell me, they've invented (drum roll ..) *back face culling*, right!? Perhaps you actually meant to say something like "they've optimized the amount of geometry information that needs to be sent to that card by creating higher-level primitives such as curved surfaces, meaning less data to go over the bus (as a simple example, the new sprite primitives in directx8)" .. but I don't think you meant to say that, because it doesn't sound like you know very much about this yourself.

    "If you don't know what it is, maybe you should not voice an opinion in the first place"

    "Complex" does not imply "lots of triangles" in my book. If they meant "lots of triangles" they shouldn't have said "complex". Anyway, any moron knows that fill rate has become a far bigger bottleneck than number of triangles since the introduction of the first GeForce. Your poly count has absolutely nothing to do with "complexity" (go look up the word in a dictionary if you want to confirm that).

    A q3a scene might be defined as complex: multi-texturing, lots of renderstate/texture stage state manipulation, multi-pass rendering etc. Making q3a-style curved surfaces hardware primitives might speed up games like quake, and perhaps this is the direction they're trying to go. "Complex geometry" isn't some specific 3d graphics terminology, it's some vague, undefined marketing BS, and that was my point.

  • "better than yours (seen your homepage)."

    hehe .. yup, Dave Gnukem is pretty much stagnant, I haven't actually worked on it in literally a year, so I can't argue with you there (actually my entire web page is essentially stagnant, it's not a high enough priority in my life right now - my point is, my web page isn't exactly an accurate reflection of what I'm doing.) It's not mentioned on my web page, but I'm currently working on a 3d game with a friend of mine, a networked FPS (OpenGL for gfx, sockets for network, DS for sound etc). It's coming along quite well at the moment, if it gets anywhere close to a finished game we'll be putting up a web-page for it and I'll link to it. Also most of my time goes to my work, which as it happens is 3d graphics simulations, incl. networked (mostly, military and industrial training simulators ..) so I'm not completely clueless ..

  • This "secret document" sounds more to me like a press release crafted by their marketing department. Actually it smells extremely badly of something designed to manipulate stock prices, or at the very least to calm nervous shareholders.

    "In environments where there are low detail scenes (large triangles, simple geometry, hardly any depth)) the NV20 is only twice as fast as the Geforce 2 Ultra"

    What the hell does "hardly any depth" mean? What they are trying to say here, without it sounding too bad, is that although T&L ops are quite a bit quicker, fill rate is and will still remain your 3D app bottleneck.

    "The performance of the chip doubles when handling geometrical data"

    Uh, what the heck is "geometrical data"? 3D polygons as opposed to 2D polygons? All your 3D geometry data is "geometrical data", whether the scene is simple or complex. Also, I don't know where they get the number "7" if they say here that the performance only doubles.

    So the new chip sounds good, yes, but you can forget about it being 7 times faster, that is 100% pure marketing BS. Sounds like they've upped the clock and optimized the T&L engine and antialiasing. I might believe double the speed, but "7 times faster" goes way beyond lies.

    What is "complex geometry" anyway? A polygon is a polygon .. multi-textured maybe? Sheez, I dunno, this whole article appears to have been written by a 1st year marketing student with zero technical knowledge.

  • erh...

    Quake III, realistic?? Even with this super-new nVidia-chip I have a really hard time believing that FPS games like quake will ever be realistic. Nicer graphics doesn't equal more realistic graphics. But even if we managed one day to create technology that made games look exactly like reality (which would need 3D-monitors of course), I still doubt teenagers would have a hard time telling the difference.

    I would like to see a token symbol placed on the screen that would constantly remind the player that he is in a game universe.

    Aren't the icons representing your ammo, your armor, your weapons, and the frag counter enough? I would think so.

  • Those 200dpi monitors have a digital connection: no need for a Digital to Analog Converter!

    A digital connection with the same bandwidth would be a lot cheaper (Fast and precise DACs are relatively expensive)!

  • of course, by the time he could take advantage of the 32MB RAM or the 4MB video card, the computer was so outdated, he had to just go out and buy a whole new box as his non-EDO RAM and VLB video card were no longer supported ;)
  • This is something that I wrote to the debian-user mailing list a while ago:

    On Nov 03 2000, wulfie wrote:
    > I'll second the anti-Nvidia driver lobby.

    I'm planning on buying a new computer soon (a Duron) and one
    of the things that was hardest to understand was which video
    card to get.

    It seems that there is a gap between el-cheapo, older PCI
    cards and the super-hyper-duper-hi-end cards with 3D
    acceleration with all the bells and whistles. There's nothing in
    between for someone like me who wants to buy a cheaper one
    and only cares about 2D performance (I don't play games and
    I don't use 3D applications).

    Since there were no options (or since the manufacturers don't
    want to see that part of the market), I started looking for
    cards that would provide decent performance without
    hogging my future system's performance, while having a
    reasonable price.

    In all the reviews that I've studied, the NVIDIA cards seem to
    be the performance winners, but the fact that they don't have
    a receptive attitude towards the community means that they
    don't want people like me as their customers.

    This is what made me choose a Matrox G400 for my new system
    (together with the recommendation of a close friend that said
    the G400 was running quite fast in his system).

  • 2560x2048 is the wrong aspect ratio. Maybe they fucked up and actually meant 2560x1920.

    1280x1024 is wrong too, for that matter. That's why I use 1152x864 in Windows (which doesn't support non-square pixels like X does).
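    For reference, the ratios being argued about, worked out in a couple of lines of Python (nothing here beyond the arithmetic):

        from fractions import Fraction

        for w, h in [(2560, 2048), (2560, 1920), (1280, 1024), (1152, 864), (1600, 1200)]:
            print(f"{w}x{h} -> {Fraction(w, h)}")  # 5/4, 4/3, 5/4, 4/3, 4/3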
  • So, we can like use the technology for pr0n too then? Cool.

    Seriously though, I think it has more to do with what sells than what is possible. Game makers could make less violent games using current technology.

    I personally can't stand FPS because they give me motion sickness. How's that for real?

    I'd rather put my 3d card to some good use, but I really can't think of anything that I'd be interested in. Anyone with ideas?
    -- Jacob.
  • Final Fantasy isn't exactly a HW-pushing program. Almost everything is pre-rendered, with a few low-poly 3D models. It's Myst plus a few polys.

    If by 'A lot of PC game makers just aren't that skilled' you mean 'A lot of PC games don't have a place for pre-rendered graphics' then you'd be correct.
    FunOne
  • But it's pronounced X Windows ;)
  • Any single activity done for 14 hours is bad for you.

    I disagree. I feel that prolonged involvement goes along with a certain intensity which I feel should be sought out over mediocrity any day.
  • Okay. How about if we reword it.

    The GeForce 2 gets slower as the models get more complex. As you increase the complexity of the scenes, the NV20 gets slower more slowly.

    At a scene of complexity N the NV20 is (say) twice as fast as the GeForce 2. At a scene of complexity 20*N the NV20 is 7 times as fast as the GeForce 2. The NV20 on a simple scene is still probably faster than the NV20 on a complex one, but we're talking about relative speed.

    Rendering 60M fully quad-textured polygons 50 times a second is faster than rendering an untextured cube 60 times a second.
  • Hey, i think you could use some cooling yourself. Chill out man, i was just speculating, mmkay?

  • Ok, they say it's going to be 7 times faster than a GeForce 2. Now, what I wonder is, will this mean it will also be 7 times as heat-generating and power hungry? No, 7 times is not quite likely, but it makes me wonder just how much cooling will be needed? Muscle or finesse?

  • IMHO the main point is that the transform engine is seven times faster, while pixelfill is only two times faster.

    Gfx HW is a pipeline, and a pipeline is only as fast as the slowest stage. The two main stages nowadays are transform and pixelfill. If the transform is busy because it has to transform bazillions of tiny triangles the pixelfill will idle. Same vice versa, for screen-filled, multi-textured, bump-mapped polygons the geometry part will sit there twiddling thumbs most of the time.

    That's how a card can at the same time be 2 and 7 times faster. It all depends on the problem you throw at it.

    The interesting side effect is that for a fill-limited scene you can increase the detail (i.e. use more polygons) without any effect on the framerate, the same goes for transform-limited scenes. The holy grail of graphics programming is to find the sweet spot so that all stages are busy all the time. But as that depends on the graphics card and the screen resolution and bits per pixel and other factors usually only the demo writers for the chip companies bother to do that. Thus most current games are written for the lowest level of customer hw and don't really use all the fancy features. Which is just fine for people like me who write their own software... ;)
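    A toy Python model of that argument (the rates are made-up numbers chosen only to mirror the quoted 7x transform / 2x fill figures, not actual chip specs):

        def frame_time(triangles, pixels, tri_rate, fill_rate):
            # A pipeline runs at the speed of its slowest stage.
            return max(triangles / tri_rate, pixels / fill_rate)

        old = dict(tri_rate=25e6, fill_rate=1e9)            # hypothetical old chip
        new = dict(tri_rate=7 * 25e6, fill_rate=2 * 1e9)    # 7x transform, 2x fill

        fill_limited = dict(triangles=10_000, pixels=8_000_000)          # big polygons
        transform_limited = dict(triangles=5_000_000, pixels=2_000_000)  # tiny ones

        for name, scene in [("fill-limited", fill_limited),
                            ("transform-limited", transform_limited)]:
            speedup = frame_time(**scene, **old) / frame_time(**scene, **new)
            print(f"{name} scene: {speedup:.1f}x faster")   # 2.0x and 7.0x

    Which is exactly the "2x and 7x at the same time" effect: the answer depends on which stage the scene keeps busy.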

  • but what are they going to do with a 500Mhz ramdac

    I think they may be confusing it with the DDR-memory clockspeed which will be 500 MHz. A RAMDAC of that speed would be overkill, there aren't any monitors big enough for those resolutions, and the upcoming LCD and digital monitors don't even need a RAMDAC.

    300% increase in FSAA speed

    Why not? You have to remember that the GeForces did FSAA in software. They probably added a hardware-based implementation, like 3dfx did with the VSA-100.

    how crap the V5 are compared to GF2 if you talk about speed.

    The V5 wasn't that bad when it came to speed per se (especially with FSAA), it's just that their top card with 4 VSA-100s never panned out, thus giving the GeForce 2 and up the edge. nVidia is still far ahead in terms of quality though.
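    For anyone wondering what FSAA actually costs, here is the brute-force supersampling idea in a few lines of Python (illustration only; how the VSA-100 or the NV20 implement it internally isn't described in the article):

        def downsample(hi_res, k):
            """Average each k x k block of a supersampled image down to one pixel."""
            h, w = len(hi_res) // k, len(hi_res[0]) // k
            out = [[0.0] * w for _ in range(h)]
            for y in range(h):
                for x in range(w):
                    block = [hi_res[y * k + j][x * k + i]
                             for j in range(k) for i in range(k)]
                    out[y][x] = sum(block) / (k * k)
            return out

        # A 4x4 render with a hard diagonal edge, filtered down to 2x2:
        hi = [[1.0 if xx > yy else 0.0 for xx in range(4)] for yy in range(4)]
        print(downsample(hi, 2))   # edge pixels come out as intermediate greys

    The price is rendering k*k times the pixels, which is why a dedicated fill-rate boost matters so much for FSAA.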

  • Exactly. I think people will adapt to any new development. Just like young people don't have any problem watching MTV-style fast cutting television. You'll still realise you're sitting behind a monitor, not a window.

    Once we get to the level of holodecks, it's time to be worried ;-)

  • "7 times faster" usually means "up to 7 times faster", which in turns, would turns out that "GeForce 2 Ultra outputs 2 fps at 1600x1200x32 with 1M polygons/frame, now we can do 14 fps!!".

    I doubt we are going to see even a 1.5x boost of fps at 640x480x16x20K polygons/frame.

    A new generation of chips has always been able to outperform an older generation by about 2x ON THE RESOLUTION THAT MATTERS AT THE TIME OF RELEASE, because, my friend, 21" monitors are not exactly cheap.

    Therefore, we'll most likely see about 2x performance increase on 1024x768x32 with 30-50K polygons/frame.
  • But technology is being used for both research and education. Of course, if you'd like to pay for me to go back to school, I won't object. ... but seriously, keep in mind that the 3D graphics engines that were developed for video games can be very useful in both research and education. I'm willing to bet that money from the video game industry will also give a very real push to the development of artificial intelligence. And the novelty-seeking aspect of the entertainment industry means that it will continue to provide much-needed venture capital to help give these new technologies the initial boost that they need.

  • As I understand it, they have had some problems working page flipping into the XFree86 architecture, but the next driver version is supposed to support it. I don't know about the graphics overlay, but it sounds like the kind of thing that they'd be working on supporting soon.

    ------

  • Yeah, that does suck... NVidia wanted to release specs (they even made a good start [nvidia.com]), but NDA's prevented them from doing so.

    ------
  • The problem lies between the keyboard and the chair. I'm running a GeForce 2 MX right now at 1280x1024, and it's perfectly fine. No artifacts whatsoever.

    And I've been using it like this for some months now.

  • The GeForce 2 Ultra is already at 250MHz (GTS is at 200MHz, MX is at 175MHz). The NV20 will most likely debut at 300MHz (note: The X-Box specs state a 300MHz NVIDIA processor, based off of NV20).
  • This issue has been bothering me since Matrox's G200 and Rendition's V2000 (or something like that) series. A lot of benchmarks put these cards near the bottom of the heap for various reasons, but one thing that was attractive about these cards was the fact that their visual quality was second to none.

    As time went on, we saw real powerhouses from NVidia which put the competition to shame, performance-wise. Now we are being flooded by enormous framerates (who remembers BitBoys' claims of 200fps at 1600x1200 in Q3?), GPUs, quad texel pipelines, DDR RAM, and so on and so forth.

    However, has anyone considered visual quality? Having millions upon millions of polygons drawn per second may seem a real treat, but if they look ugly, then what's the point (remember the Riva 128)? Not many games are taking true advantage of all the power available, and there's always going to be a bottleneck somewhere, so I think it's time to relax, acknowledge that we don't need 200fps, and hope to see some beautiful images explode onto our monitors sometime soon.

  • I agree, but for slightly different reasons.

    I see computer games as an escape from reality. Surreal images, strange creatures, and worlds we'll never see in our lifetime make for a perfect outlet. Trouble is, when we get to photorealism, the fantasy vanishes, and the magic is gone.

    It's like going to an art museum and seeing a portrait painted some 500 years ago, and then comparing it to some whiz kid's photorealistic portrait. It's obvious that the "cruder" image has more feeling inside.

  • Yes, but can you run SETI on it?
  • Does a consumer graphics card really need more than that?

    Yes. I _need_ a HiRes head mounted display, i.e. two screens, i.e. double pixel freq.

  • 7 TIMES FASTER THAN THE GEFORCE 2 ULTRA?!?!?! Man, that's one serious vid card. It's good to see that NVIDIA is keeping up the good work. As soon as companies get into the lead they tend to get sloppy and try to push their products for as long as possible (Intel with its Pentium core, ATI with the Rage 128). Of course these cards are gonna be expensive, right? ;-)
  • by Anonymous Coward
    Is NVIDIA going to take display clarity and color accuracy seriously, especially at high resolutions? If they could put together a card which would rival Matrox output quality with incredible 3D performance, I would be the first in line! I returned a GeForce2MX not long ago because anything above 1024x768 started showing annoying artifacts. My Voodoo3, while not perfect, blows the GeForce2MX out of the water in terms of image quality at high resolutions. There's a site on the net dedicated to helping people fix the image quality problems of some of the NVIDIA cards .. it seems the RAMDAC speed is really a bogus figure if the manufacturer puts cheap (or just badly designed) capacitors on the video output. In the GeForce2MX example, it looks like the reference board even NVIDIA used fell victim to this design problem .. Mark
  • Not all the features:

    1. Graphics overlay (for playing DVDs etc.) - the driver still does not support this feature

    2. Page flipping - which gives the NVidia card a real boost under Windows - is not in the driver yet.

    As a person who works extensively with lots of graphics cards, I can testify that their drivers are damn fast compared to any driver in XFree 4.0.x - but they're not as stable as the Open Source Matrox G200/G400 driver which is found in XFree 4.0.x
  • Another poster in another forum explained that this apparently huge virtual size was due to virtual-memory-mapping of various bit planes and color depths in the VRAM into virtual memory.

    Then this other poster was simply wrong. The large virtual size is due to memory mapping of the framebuffer (32/64megabytes on modern cards) and mapping of the AGP space (128megabytes or more).

    The various "bit planes and color depths" are called visuals and they'll occupy at most a few hundred bytes each as structures within the X11 server.

  • NVidia wanted to release specs (they even made a good start)...

    I've seen you repeat this a number of times, but I'm afraid it's completely misleading. The information on the nvidia site is not specs at the register level, and it's not even useful information for writing an open source driver. As proof of this claim, try using that information to write a driver for FreeBSD.

    This URL has been floating about for months now and every now and then someone repeats on the utah-glx mailing list "hey look nvidia has full register level specs on their website". Each and every time the person is corrected immediately. So please stop spreading this misinformation.

  • HDTV has a resolution of 1920*1080 or 1280*720. A 350 MHz RAMDAC will drive a display at 2048*1536 at 75 Hz.

    Supposedly, the correct formula is: required RAMDAC speed = x * y * refresh rate * 1.32 (that gives Hz; divide by a million for MHz).

    So, a 500 MHz RAMDAC would be able to drive a 2048*1536 display at 120 Hz. I'm sure the calculations are slightly different for widescreen displays.
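    Plugging numbers into that rule of thumb in Python (the 1.32 factor accounts for blanking overhead):

        def ramdac_mhz(width, height, refresh_hz, overhead=1.32):
            return width * height * refresh_hz * overhead / 1e6

        print(ramdac_mhz(2048, 1536, 75))    # ~311 MHz, fits under a 350 MHz RAMDAC
        print(ramdac_mhz(2048, 1536, 120))   # ~498 MHz, just inside 500 MHz
        print(ramdac_mhz(1920, 1080, 85))    # ~233 MHz, HDTV-class is comparatively easy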
  • NVidia, on the other hand, uses the same codebase for both their Windows and Linux drivers.

    Even for the enormous Linux kernel module that's required to use their drivers? Really?

    Does their Windows driver, after less than a week of use, bloat to consume over 200MB of virtual memory? That's what their closed source XFree86 driver did with my GeForce DDR, on XFree86 4.0.1 and kernel 2.4.0-test9, even without using the 3D features at all. The open source nv.o driver that came with XFree86 isn't exactly a spartan RAM user either, but at least after it's sucked up a big chunk it stops asking for more.

    Granted, they don't seem to care about keeping up with development kernels (their kernel module didn't even compile against 2.4-test for a while); I haven't exactly put much work into fixing the problem (but how can I, when I can't even recompile with debugging symbols?); and their drivers did seem to work OK with kernel 2.2.16.

    Nevertheless, I don't intend to buy another NVidia card until I have an open source 3D driver to run it with. By contrast, my previous 3D acceleration in Linux came from Mesa on top of Voodoo2 Glide; the frame rate may not have been as fast, but the rate of driver improvement certainly was faster.
  • The reason I noticed at all is that my machine was swapping like mad, even with 128MB of physical RAM, and even after turning the usual memory hog culprit (Netscape) off.
  • It can be done, but not nearly as easily as you seem to think, by several orders of magnitude.

    I never claimed it was easy! ;) It's kinda like being stuck in the middle of the ocean in a rowboat. With closed drivers, you have no paddles, and the rowboat is covered with a sealed, opaque top so you don't even know when you're near land. Open drivers is like having the top open, and a large soupspoon. Rowing yourself to shore with oars would be hard enough, and harder with a spoon. But at least it's possible.

    Other benefits come for other OSes (NetBSD, FreeBSD, etc.) for which nVidia will never write drivers. Also, companies don't in general last forever. What happens if nVidia goes belly up? All the people who bought their cards and are using their drivers are up shit creek without a paddle (to extend a metaphor too far). Having the code allows you to generate an extremely specific bug report, which can then be passed on to someone more knowledgeable. It's very hard for core developers to fix bugs like "It crashes when I click on the menu in Starcraft", which could be a hardware problem...

    --Bob

  • I agree totally. The premise of this argument, translated to sound hardware, goes like:
    It's harder to write game music for a modern soundcard than for the Commodore 64, because with the Commodore 64 you only had to worry about doing 3 channels of sound.

    Try that one on a musician friend one day and see how far you get :)

  • Is that it apparently goes without saying that any new polygon pumpin' uberchip is only going to be used to give us Yet Another First Person Shooter.

    When will this technology break out of this ghetto? Aren't there more interesting things to do?

    Personally, I think 3D technology has been stuck in the "keystone cops" era long enough. In early film, the only thing people could think to show was chase scenes and other stunts. A lot of that had to do with the immaturity of the medium (no sound, poor picture quality). Eventually, I think "3D entertainment" won't be synonymous with "graphic violence."
  • Other benefits come for other OSes (NetBSD, FreeBSD, etc.) for which nVidia will never write drivers.

    Actually, I gather from other posts in this thread that the abstraction layer between the driver core and the OS's driver interface is open (or at least published). This should make porting fairly straightforward, even with most of the driver being a black box.

    There's also the option of wrapping Linux drivers in their entirety to run under *BSD, though I don't know if *BSD's Linux support has been extended *that* far.
  • Stick with the more open 3dfx, or Matrox. With them, if it crashes, you can track the bug down and fix it!

    You'd have a lot of trouble doing that, unless it was a silly problem like a memory leak (admittedly worth fixing).

    I've worked for a couple of years with a well-known software company that does third-party driver development (well-known cards, well-known platforms). Debugging a driver even *with* the standard reference texts for the card is a royal pain. Doing it blind - say, for hardware bugs or restrictions that aren't documented - is so much trouble it's not funny. This eats a vast amount of time even for us. Trying to debug a driver while having to guess at restrictions/errata in a register spec without support documentation - or worse, having to reverse-engineer the spec from code - would be at best a vast undertaking and at worst impractical.

    It can be done, but not nearly as easily as you seem to think, by several orders of magnitude.
  • I doubt it. The tools they use to create the scenes will get more and more sophisticated. They might use fractal algorithms to generate super-fine detail in trees, landscapes, water, fog, etc. The point is, they won't be hand-placing every polygon--the same way you don't hand-color every pixel in any other form of computer art.

    There will continue to be applications that push the limits of this and many subsequent 3D accelerators. Trust me.
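    As a tiny illustration of the "fractal detail" point, here is 1-D midpoint displacement in Python: a handful of hand-placed control points grow into arbitrarily fine detail with no artist touching the new vertices (the parameters are arbitrary):

        import random

        def midpoint_displace(heights, iterations, roughness=0.5, amplitude=1.0):
            for _ in range(iterations):
                refined = []
                for a, b in zip(heights, heights[1:]):
                    mid = (a + b) / 2 + random.uniform(-amplitude, amplitude)
                    refined.extend([a, mid])
                refined.append(heights[-1])
                heights = refined
                amplitude *= roughness   # finer detail gets smaller displacements
            return heights

        coarse = [0.0, 3.0, 1.0]                  # what the artist actually places
        print(len(midpoint_displace(coarse, 5)))  # 65 generated vertices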
  • I too have an Nvidia TNT2 card and I'm using
    Nvidia's driver with XFree 4.

    Here's what top says:
    Size RSS Share
    252M 252M 2024 S 0 1.7 100.4 5:18 X

    Seems excessive, doesn't it? Well, I've only
    got 256M on my machine, and guess what?
    NO SWAP SPACE IS USED.

    PS tells a different story:
    VSZ RSS
    276408 12704 ? S 10:00 5:21 X :0 -bpp 32

    VSZ is the VIRTUAL size of the process, 276M;
    12.7M is what it actually uses.

    Another poster in another forum explained that
    this apparently huge virtual size was due to
    virtual-memory-mapping of various bit planes and color depths in the VRAM into virtual memory.

    12.7M is still pretty high, but hardly burdensome on my 256M machine.

    PeterM
  • This new card may be nice, but I still will not support nvidia till they open their drivers. Their drivers are among the most unstable drivers around for Linux. Yeah, it is nice that they are fast, but not at the cost of stability. I also don't like binary-only kernel mods. I have no idea what might be in that driver and what it might do in the kernel. Also, when updating kernels, that driver breaks a lot since it is binary only. If it were open it would probably work a lot better.

    Look at the sblive for an example of this. In the beginning it was closed and a pain in the ass to get working under Linux. There were kernel version mismatches etc. When they opened the driver it progressed much faster and it got incorporated into the kernel. Now the sblive is one of the best cards to get for Linux since it is supported by every major dist out of the box. In some dists they even use the ALSA driver instead of the OSS one, which is even more capable.

    I am not going to get locked into nvidia's way of doing things again. When I bought the card they had announcements about how they were going to open their drivers. This did not happen. My next card is going to be an ATI, Matrox, or 3DFX. I am waiting a bit on the Radeon till I see the open drivers for them. However, the Matrox cards and 3dfx cards do have open drivers. I do like 3D but I like stability more, and the box with the G200 here has never crashed in X. The nvidia GeForce box crashes a lot more often than that.

    So please, even if you like their hardware, don't support them till they open the drivers. In the long run it will help us a lot more, teaching companies that drivers alone are not enough.
  • Actually, right now, modellers first create the model they want, and then do their best to reduce the polygon count.

    And you can kill a lot of polygons just modelling a realistic telephone. Which you can then reuse everywhere you need a telephone.
  • To clarify exactly what a RAMDAC is for the few posters that do not seem to understand:

    A RAMDAC has nothing to do in reality with 3D acceleration. Instead, the RAMDAC converts the graphics card's display memory to analog signals for the monitor. Hence where RAMDAC comes from: Random Access Memory Digital to Analog Converter. A fast RAMDAC can support very high refresh rates. Now a 500MHz RAMDAC will probably become necessary with high definition TVs, which have a resolution a bit higher than 1600x1200, at a decent refresh rate. But as an above poster pointed out, it is likely a mistake in the article.

  • NVidia might have good video cards, but their way of handling themselves really sucks. Check here [gamershardware.com].
  • by Manaz ( 46799 )
    I quote from the article...

    "Pioneer of the first ever GPU (Graphics Processing Unit), Nvidia is now introducing a programmable GPU, seven times faster than the previous Geforce 2 Ultra, the NV20."

    Now, there are two possibilities here - either the article's author has a shocking grasp of the English language (wouldn't THAT be bad, considering he writes for ZDNet UK, the home of "The Queen's English"), or he hasn't done his research properly, and thinks that NV20 refers to the GeForce2 Ultra.

    The GeForce2 Ultra is the NV15 if I'm not mistaken - simply a GeForce2 GTS with faster RAM. GeForce2 MX is the NV10.

    The NV20 is the new GPU ZDNet's supposed "leaked documents" claim will be 7 times faster in complex scenes (ie TreeMark). You've gotta love it when the speed at which a product performs is judged by how it performs in a program designed to make it shine.

    Bad journalism all round I think - not that we should be surprised....
  • Most consumers and compulsive upgraders don't realize how underutilized video cards are. We go through two or three generations of cards during the development of a single game, and more than anything we're just trying to keep up with it all. The bottom line, and I mean this sincerely, is that the kind of performance people are seeing from cards based on chips like the GeForce 2 could be coaxed out of cards from two years ago. It's not that the card is crap, but that the bottleneck is almost always on the code side of things. People don't want to hear that, though. Or maybe they do, because it validates their reasons for upgrading to new CPUs and video cards.

    If you look at the PlayStation 1 hardware from five years ago, it doesn't even have bilinear filtering or zbuffering. It's also a total dog. And yet there are PS1 games that look as good or better than many current PC titles that require a TNT2 or better (maybe 15x faster than the PS1 hardware). So theoretically an "old" card like the Voodoo2, which is still 10x faster than a PS1, could do amazing, amazing things--much better than what people expect to see from a GeForce. But we don't bother, because things keep changing at a crazy rate and we're simply trying to get things out the door.

    In a way, I'm starting to see new video cards as a way of getting suckers to part with their money.
  • I second that 'Moron'.

    Son, I was wondering, you're playing that new hyper realistic game. How can you tell the difference between that and reality?

    Er, we bought it at the store Dad. You were there, remember?

    Yes, but when you've been playing it all day, don't the lines blur between games and reality?

    Er, no? I load up the game, sit motionless for 10 hours. People shoot me and I feel no pain. I can carry a bazooka and 20 rockets without getting winded. My game guy picks up stuff with his hands, not mine. IT'S a GAME Dad.

    I think you need protection from games. I'm going to start a group against realistic games. Uh, can you show me how to use this new 'HyperNet' to make a web page?

    Dad, you don't make web pages anymore. You have to make fully interactive 3-D environments. Since everything went analog they haven't used IPv6 in YEARS.

    Don't take that tone with me! I was a Unix guru back in the day!

    Later
    ErikZ
  • Damn, I thought I'd had it ;)
  • No, I mean that a lot of PC game makers ignore gameplay in favor of graphics. An example of those that don't: Epic with UT. An example of those who do: ID with Quake3.
  • The point is that Ring0 gives access to hardware that you'd otherwise not have. And yes, ASM will ALWAYS be needed. Think about it: the hardware platform won't change for several years. (Upgrades have proved to be complete and utter failures on consoles. Nintendo couldn't sell the $40 RAM upgrade, and MS sure as hell won't be able to sell a new GPU.) In order to keep each generation of game looking better, you have to bypass standard APIs and write to the metal. The first games will use OS features, but I can bet you that by the second or third generation of games, people will have built up custom ASM libraries and will use as little of the OS as possible. Look at the Saturn, for example. The first few games used DirectX, then all games afterwards used custom routines and the to-the-metal Sega OS. Windows 2K doesn't work EXACTLY like the underlying hardware. Given that the hardware is constant, why bother to write to the OS when you can get a nice 10-15% speedup by writing to the hardware? DirectX doesn't work like the NVIDIA chip does, so why bother with an API? It's not that much harder to code, so why not do it? PSX developers still use a large amount of ASM, and only recently has there been a trend towards C-only games. However, even those access hardware directly. Unless DirectX8 is a hell of a lot thinner than it was when I was using it (yesterday), I can guarantee you that most of the good games will be XBox-only. Also remember that console and PC gamers are totally different demographics, and there is little incentive for many console manufacturers to jump the fence.
  • Not necessarily. When you don't have an OS in the way and you get to run in ring0, you do everything you can to show up your competitors and tweak the system. The reason ASM and direct access have little use these days is because of all the diversity in PC hardware. Given a stable platform, game developers ALWAYS find a way to tweak games beyond what would be possible by going through the standard APIs. Saying that people will use DirectX8 all the time is like saying PS developers will use OpenGL all the time. It just won't happen.
  • Graphics are graphics, physics are something else. Physics engines are incredibly diverse in what they do. A one-size-fits-all API handicaps physics designers MUCH more than it does graphics designers. Gameplay and story don't really take any processing power. If you can offload graphics and audio processing to dedicated processors (a much more practical design) then the remaining 1000MHz of your CPU can be dedicated to physics and AI. Gameplay and story are totally unlimited by hardware, and just dependent on what the designers do. The Final Fantasy series has great graphics, sound and gameplay. A lot of PC game makers just aren't that skilled.
  • Umm, with 3DFx down until Rampage comes out god knows when (Q2 next year), and Matrox sitting this round out, GPL fanatics have no choice but to sit on their G400MAXes and hope that ATI takes pity on them and decides to GPL the T&L part of their drivers. (The Radeon drivers are not completely OSS, just the rasterization parts are.) Lastly, GPL fanatics will most likely ONLY get low quality drivers. Besides the legal reasons why NVIDIA cannot OSS their drivers, there is the fact that the drivers kick serious ass. They are the fastest, most stable implementation of OpenGL (what did you think an OpenGL driver was? This isn't an ethernet card!) on the face of consumer hardware. Now why in god's name would NVIDIA just give away an entire OpenGL pipeline to competitors who could take advantage of it (e.g., ATI) to make their cards competitive with NVIDIA's?
  • They won't be compatible. Games written for XBox will be assuming Ring0 operation on a stripped down Win2K kernel. It's going to take some work to port, though probably not as much as it normally would. Of course, if the developers take advantage of the console nature, they'll start using custom ASM routines and bypassing the OS, in which case a port might be much harder.
  • Umm, complex scenes can be done just by setting MAX to not reduce the polygons so much. Trust me, people are FAR, FAR away from the day when the scene becomes too simple for the hardware.
  • I don't know, I think it is just a PC-gamer demographic. If you look at console games, usually new technology is designed to create breathtaking works of art like Final Fantasy, Shenmue, or Mario 64 ;)
  • Another implication of 3D games is the annihilation of imagination. Why try to think up your own universes when they are provided on a plate?
    >>>>>>>>>>>>>>>>>>>>>>>
    Actually, the columnist from MaximumPC (can't remember his name, forgive me, he's the one with the beard) pointed out that games are much closer to books than movies, in that movies give you a prepackaged world on a plate, while games give your own mind a tool to imagine things more vividly. I am inclined to agree. Well-done games take a lot of imagination to play, and can often stimulate the mind like a book does. (I'm not talking Quake, I'm talking Final Fantasy or Zelda.)
  • The point is that very few people run at 16x12 now, and for NOW they will perceive no benefit from a faster RAMDAC. In time, perhaps with a higher dpi, the faster RAMDAC will of course be justified.

    -----------------------

  • Sure - several good 21 inch screens support 2048x1536 @ 75 Hz. At about $1000, they aren't too expensive - I'd rather spend more on the monitor and less on the system.
  • In complex scenes, it could be up to 7x faster because of Hidden Surface Removal (HSR). Many modern video cards still render surfaces that can't be seen by the user because they're blocked by other objects. In the NV20, a new Z-Buffer tech would remove those surfaces from the rendering pipeline, which can dramatically improve performance.

    The Radeon has a version of this implemented, but (to be honest), the Radeon isn't really too powerful. Imagine a powerful NVIDIA chip loaded up with HSR, and you'd get up to 7x faster in complex scenes, while simple scenes would only be a bit faster (less hidden surfaces to begin with).
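    A stripped-down illustration of why HSR pays off more in complex scenes, in Python (the depth test is the standard one; the fragment counts and the idea of shading cost are invented for the example, not NV20 or Radeon internals):

        def render(fragments, width, height, far=1e9):
            """Count how many fragments actually get shaded after the depth test."""
            zbuffer = [[far] * width for _ in range(height)]
            shaded = 0
            for x, y, depth, colour in fragments:
                if depth < zbuffer[y][x]:   # only shade what is visible so far
                    zbuffer[y][x] = depth
                    shaded += 1             # stand-in for expensive texturing work
            return shaded

        # One pixel covered by 5 surfaces, drawn back-to-front vs front-to-back:
        back_to_front = [(0, 0, d, "c") for d in (9, 7, 5, 3, 1)]
        front_to_back = list(reversed(back_to_front))
        print(render(back_to_front, 1, 1))   # 5 fragments shaded (lots of overdraw)
        print(render(front_to_back, 1, 1))   # 1 fragment shaded

    The more overdraw a scene has, the more work early rejection saves; a simple scene with little occlusion gains almost nothing.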

  • nVidia makes a "high-end/low-end" distinction that's something of a joke in the industry. There are "high end" boards for professional 3D work (the Quadro line), and "low end" boards for gamers (the GeForce line). The "high end" products offer antialiased lines, work better with the window system, come with real warranties, and cost much more.

    They're the same boards, with the same chips. The only difference is the position of two chip resistors which identify the product type. [sitegadgets.com] In some models, the "high end" board was a part selection; the faster chips went to the high end. But with the latest round, the GeForce 2 Ultra, the low end is faster. So the reason for the distinction has vanished.

    nVidia finally bought ELSA [elsa.com], the last maker of high-end boards that used nVidia chips. At this point, ELSA basically is a sales and tech support operation. It's not clear yet whether nVidia is going to bother with the high end/low end distinction much longer. I hope they get rid of it; its time has passed.

  • Defining a "unified physics API" isn't that hard. Havok [havok.com] and Mathengine [mathengine.com] each have one. It's making the engine behind it fast and reliable that's hard. There are still tough theoretical problems in this area. But we know how to do it right [animats.com]; efforts now focus on getting more speed out of the algorithms. The ever-faster hardware helps.

    Did you see the claimed numerical performance for the new NVidia chip? 100 gigaflops. I can hardly wait until we have that kind of performance in the main CPU(s).

  • Don't believe everything you see, especially on ZDnet. They're not exactly the most reliable source around. Plus, different places have all claimed to have had leaked specs, and they all conflict with each other. I've yet to find two sites that have "leaked specs" that agree. Check The Register [theregister.co.uk] for instance for a different set of specs.

    This isn't to say that these aren't right, but be sure to take them with a grain of salt.
  • You think your problem with the NVidia driver would be fixed if it were open source?

    No, I think a driver would exist for my OSes of choice if NVidia opened the driver sources. It's not all about Linux. On FreeBSD, OpenBSD, and NetBSD, recent NVidia hardware is as useless as an HP or Lexmark WinPrinter. Feh.

  • The maximum complexity for worlds will be a limit that won't be hit for a very long time. Look at the difference in world complexity between what is done for the highest quality 3D movies (think Final Fantasy), and compare with the most complex real-time worlds.

    Obviously the developers of 3d worlds in film have not yet maxed out their imaginations in terms of what to build, and how detailed to make it - and the gap between their work and the realtime 3d scene is a very big gulf indeed.

    So I really don't think we'll be coming up against any significant blockages in terms of human imagination anytime soon. I suppose one might argue that as soon as we hit the point when 3D world complexity is visually indistinguishable from reality we may have hit the maximum needed realism. But then of course there's always the visual effect of the cosmic zoom, where you might want to soar through the microscopic cracks in someone's skin, etc. So there's plenty of room to keep plugging away.
  • That's one helluva secret document Nvidia surely wouldn't mind revealing...
  • by Anonymous Coward on Sunday November 26, 2000 @06:01AM (#601635)
    What value!!! Not only can you peddle your corporate propaganda on zdnet, but you get it linked on slashdot. New card 7 times faster. Pfft. Remember the PR stunt when Nvidia announced their new video card would change the face of gaming, blah blah blah. Well, it didn't (big surprise). Of course that tree benchmark written by Nvidia sure made it look good.

    The fact is, these "secret" documents are released as a form of cheap marketing. In fact, a large portion of today's "journalism" is written directly from the company spin doctors. PR twats vastly outnumber journalists, and the trends are extending this.

    Always look for hyperbole (like the zdnet headline) and emotive adjectives in phrases like "screaming chipset design". They mention that it will only double performance when large polygons are used. Well, I haven't seen many games that are written only for the GeForce. Given the price of development these days, games companies are reluctant to alienate potential customers by asking for huge specs (unless your name is Geoff Crammond). I think it is safe to assume that the new cards will follow the trend of Nvidia's chipsets since the riva128, at least until benchmarks are out. The next generation is about twice as quick as the plain vanilla flavour of their current best chipset, although they can probably manufacture benchmarks to make it look better.

    Corporate hype is not newsworthy (as much as they like you to believe it is), but I will be interested when someone reputable publishes benchmarks (and no, not Toms).

    Me, I'm happy to run quake3 on my riva128/amd233. Sure it looks like crap and is choppy as hell but ..... man, I really gotta upgrade. Where was that link again ..

  • by crisco ( 4669 ) on Sunday November 26, 2000 @12:32PM (#601636) Homepage
    The content developers are the ones that will hit the wall. It's one thing to blast out a few rectangular rooms and a few textures for a low-poly, limited engine like the ones we've been playing with for the last few years. But when you've got the ability to decorate the room, trick it out with high-poly telephones, furniture and animated objects, it suddenly takes longer to get the game out the door.

    We've already seen that in the game industry. Teams of 20 to 100 people cranking out stuff for 2 or 3 or 4 (Daik... nevermind) years.

    Sure, movies do it; we get some beautiful movies that kill anything gaming hardware will be able to do. But movies are, what, 2 hours long? A proper game has to have 30 hours of gameplay at the very least (I'm thinking Diablo II at about 25-30 hours to take one character through); I'd rather have 75-100 hours. And with a movie, you might visit a model/texture once, where with a game it might be something that you can look at from all angles, for as long as you like.

    So we'll have a couple of guys working on an engine to pass geometry and texture, another 5 or 10 working out AI and extensibility and 200 artists and modelers creating the world.

  • by mcelrath ( 8027 ) on Sunday November 26, 2000 @07:13PM (#601637) Homepage
    Unfortunately, you are incorrect. Compare NVidia's drivers to 3dfx's Voodoo 5 drivers. It seems as if 3dfx was simply expecting a few hundred developers to show up as soon as they made the drivers open source. As it turns out, only a couple of people outside 3dfx have made contributions, and one of them was paid to do it. It's sad, but it's true.

    My argument is simply that there's nothing more frustrating than having a bug that you can't fix. I'm the type that at least gives a hack at it if I find a bug. Open source is not about open source developers fixing bugs for you. It's about coherent, concise bug reports that come from an examination of the code. It's also about (as you mention) fixing simple things that I could find and fix like memory leaks. Clearly I could not write or reverse engineer a driver in any reasonable period of time. Clearly nVidia are the best people to write the driver. Open source is most useful in the last 10% of the development process, fixing bugs and refining the code. If a company expects a magic cavalry of developers to appear to write their driver for them, they are sadly mistaken. But they can expect people to do a little hacking to get an existing driver to work with their hardware combination.

    Open Source is not the panacea of magic software creation that some people (3dfx, apparently) think it is. But when I buy a product with closed source drivers and those drivers suck, I'm fucked. If those drivers are open, at least there is hope. I find that in general, if I depend on other people to fix my problems, they will never be fixed. I hack "open source" to fix my problems. nVidia can't own every possible combination of motherboard/processor/OS, and therefore can't fix every possible problem. Open source is simply the only way to go, and I won't ever again bother with companies that aren't open with their drivers.

    --Bob

    Egad, who modded my original post down as "Troll"? Do your worst, metamoderators.

  • by Guppy ( 12314 ) on Sunday November 26, 2000 @09:47AM (#601638)
    "The problem lies between the keyboard and the chair."

    No, the problem lies between the chip and the mini-D connector. NVidia only sells chips to boardmakers, who make the actual card. While almost all of them make similar variations of the reference design, it is the boardmakers who choose where they get the rest of their PCBs, filtering components, etc.

    The same thing happened with nVidia's TNT and TNT2 (And with 3dfx's chips before they stopped selling to other companies). The end result is that some are quite good, and others cut corners (And a brand name is little guarantee of quality these days).
  • by be-fan ( 61476 ) on Sunday November 26, 2000 @07:25AM (#601639)
    Wow, a real non-gamer crowd! Doesn't anybody remember the PowerVR Series chips? The reason NV20 will be 7x faster for complex models and only 2x faster for simple ones is that it will use tiling (my hypothesis only.) In a tiling-based architecture, the 3D renderer first sorts all the geometry, and then splits it up into small tiles. The tiles are then loaded into small on chip buffers and rendered. This has several advantages:

    A) It doesn't over-render. If geometry isn't going to be seen, it doesn't get rendered. Normally, cards have to render the pixel, and then discard it if the Zbuffer test fails. With tiling, there is no Zbuffer and pixels get discarded before they're rendered.

    B) The sorting allows transparency to be handled very easily since geometry doesn't have to be presorted by the game engine.

    C) It allows a hideous number of texture layers. The Kyro (PowerVR Series 3 chip-based) can apply up to 8 without taking a noticeable speed hit. Also, it lowers the bandwidth requirement significantly, since the card doesn't have to access the framebuffer repeatedly.

    D) It allows incredibly complex geometry. Even though the Kyro is a 120MHz chip, it can beat a GF2Ultra by nearly double the fps in games that have high overdraw (such as Deus Ex.)

    The main problem with tiling is that standard APIs like OpenGL and D3D are designed for standard triangle accelerators. As such, the internal jiggering tiling cards have to do often outweighs their performance benefits. Also, up until now, only two-bit companies have made tiling accelerators, so they haven't caught on.

    If you want to read the Kyro preview, head over to Sharky Extreme. [sharkyextreme.com]
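
    To make the over-rendering point concrete, here is a toy sketch in C of the two-pass idea. It's purely my own illustration of the general technique, not PowerVR's (or NVidia's) actual pipeline; the fragment struct and shade_and_write() are made up for the example, and it assumes fragments have already been binned into this tile with tile-local coordinates:

    #include <float.h>

    #define TILE_W 32
    #define TILE_H 32

    struct fragment { int x, y; float z; int material; };

    /* Hypothetical stand-in for the texturing/lighting stage. */
    void shade_and_write(int x, int y, int material);

    void render_tile(const struct fragment *frags, int nfrags)
    {
        float z[TILE_H][TILE_W];      /* on-chip depth, never touches DRAM */
        int   winner[TILE_H][TILE_W]; /* front-most fragment per pixel     */

        for (int y = 0; y < TILE_H; y++)
            for (int x = 0; x < TILE_W; x++) {
                z[y][x] = FLT_MAX;
                winner[y][x] = -1;
            }

        /* Pass 1: visibility only -- cheap depth compares, no shading. */
        for (int i = 0; i < nfrags; i++)
            if (frags[i].z < z[frags[i].y][frags[i].x]) {
                z[frags[i].y][frags[i].x] = frags[i].z;
                winner[frags[i].y][frags[i].x] = i;
            }

        /* Pass 2: shade each pixel at most once, no matter how deep the
         * overdraw, then write the finished tile out in one burst. */
        for (int y = 0; y < TILE_H; y++)
            for (int x = 0; x < TILE_W; x++)
                if (winner[y][x] >= 0)
                    shade_and_write(x, y, frags[winner[y][x]].material);
    }

    An immediate-mode card effectively runs the shading stage for every fragment and only then throws the losers away, which is exactly the wasted fill and framebuffer bandwidth the tiler avoids.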
  • by Temporal ( 96070 ) on Sunday November 26, 2000 @11:48AM (#601640) Journal

    Even for the enormous Linux kernel module that's required to use their drivers? Really?

    Yes. The only part that is Linux-dependent is the abstraction layer, for which the source code is provided. The same kernel module with a different abstraction layer is used on Windows. (If you don't believe me, head on over to that Linux dev page at nvidia -- the one with the register-level specs [nvidia.com] and such. Too bad the specs are incomplete due to NDA's...)

    Does their Windows driver, after less than a week of use, bloat to consume over 200MB of virtual memory?

    No one knows -- Windows itself bloats faster. :) OK, that memory leak is obviously something they are working on. It is beta software. Does it hurt so much to restart X once every few days?

    they don't seem to care about keeping up with development kernels

    Do you honestly expect them to?

    I haven't exactly put much work into fixing the problem (but how can I, when I can't even recompile with debugging symbols?)

    You have the source code for the abstraction layer in the NVidia kernel module. Any changes necessary can be made there.
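
    For the curious, the glue amounts to something like this. The nv_os_* names are invented here for illustration (the real interface is NVidia's and partly covered by NDA), but the principle is the same: the closed binary core only ever calls a small shim that ships as source, so the shim can be rebuilt or patched against whatever kernel you run:

    #include <linux/slab.h>
    #include <linux/io.h>

    /* Memory allocation as seen by the binary core. */
    void *nv_os_malloc(unsigned long size)
    {
        return kmalloc(size, GFP_KERNEL);
    }

    void nv_os_free(void *ptr)
    {
        kfree(ptr);
    }

    /* Mapping the card's register aperture into kernel space. */
    void __iomem *nv_os_map_registers(unsigned long phys, unsigned long size)
    {
        return ioremap(phys, size);
    }

    void nv_os_unmap_registers(void __iomem *regs)
    {
        iounmap(regs);
    }

    If a kernel release changes the allocator or the remapping calls, only these few source files need to change; the binary part on top of them stays the same.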

    Voodoo2 glide; the frame rate may not have been as fast, but the rate of driver improvement certainly was faster.

    That's because NVidia's Linux driver does not require much in the way of improvements. It is pretty much complete, except for some minor bug fixes. Compare this to the Voodoo 5 driver, which was supposed to be ready a month after the release of the hardware. It is still in very poor shape (only supports one processor, no FSAA) despite having open source code.

    ------

  • by Temporal ( 96070 ) on Sunday November 26, 2000 @07:00AM (#601641) Journal

    The increase in speed doesn't just go to framerate. Newer game engines will have their framerate locked (at a user-specified value) and will vary visual quality based on how fast the hardware is.

    How can that extra speed be used? More polygons, gloss maps, dot product bump mapping, elevation maps, detail maps, better transparency/opacity, motion blur, cartoon rendering, shadow maps/volumes, dynamic lighting, environment mapping, reflections, full screen anti-aliasing, etc. I could go on forever. All this will be in my game engine, of course. :)
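
    A rough sketch of the idea (nothing from any real engine, just my own illustration of trading headroom for detail instead of frames):

    /* Called once per frame with the measured frame time. */
    static double detail = 0.5;           /* 0 = bare minimum, 1 = everything on */

    void tune_detail(double frame_ms, double target_ms)
    {
        if (frame_ms > target_ms * 1.05)        /* missed the budget: shed detail */
            detail *= 0.9;
        else if (frame_ms < target_ms * 0.80)   /* plenty of headroom: add detail */
            detail *= 1.1;

        if (detail < 0.1) detail = 0.1;
        if (detail > 1.0) detail = 1.0;

        /* The engine then maps 'detail' onto polygon LOD, shadow resolution,
         * number of texture layers, FSAA level, and so on. */
    }

    On a faster chip the same engine simply settles at a higher detail value instead of a higher frame rate.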

    ------

  • by mr3038 ( 121693 ) on Sunday November 26, 2000 @10:24AM (#601642)
    The problem isn't creating complex scenes or worlds but rendering them. It has been estimated that you need about 80M polygons in a scene to render a realistic-looking image. Say you want 25 fps for your game -- that works out to 2,000M polygons/s. Even if the NV20 is 7 times faster than current chips, it's still far too slow to render images like this in real time. And you need those 80M polygons for just one frame: it's like sitting in a virtual room that takes 80M polygons to present to the computer. Now imagine a whole house with similar accuracy: with 10 such rooms you would need roughly 800M polygons. Now imagine a city! The real problem is deciding, 25 times per second, which 80M polygons to render, because you cannot even walk the whole geometry fast enough.
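
    A quick back-of-the-envelope check of those numbers (the 80M-polygon figure is the estimate above; the 25M polygons/s is roughly the GeForce2's advertised peak T&L rate, so take it as a ballpark only):

    #include <stdio.h>

    int main(void)
    {
        double polys_per_frame = 80e6;            /* "realistic" scene estimate */
        double fps             = 25.0;
        double needed          = polys_per_frame * fps;    /* 2,000M polys/s    */

        double geforce2_rate   = 25e6;            /* rough peak T&L, polys/s    */
        double nv20_rate       = 7.0 * geforce2_rate;      /* best case claimed */

        printf("needed: %.0fM polys/s, NV20 at 7x: ~%.0fM polys/s\n",
               needed / 1e6, nv20_rate / 1e6);
        return 0;
    }

    Even granting the 7x claim, that's still more than an order of magnitude short, which is why engines spend so much effort on culling and level-of-detail tricks to decide which small fraction of the world to submit each frame.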
    _________________________
  • by mcelrath ( 8027 ) on Sunday November 26, 2000 @07:17AM (#601643) Homepage
    Let us remember that nVidia does not have open source drivers. I have an nVidia card and their drivers manage to hang the linux kernel. Egad, there's nothing worse than to have a known bug, and not be able to fix it, or be able to do anything about it at all! Of course, nVidia said "they'd look into it". It's been several months...haven't heard anything, and no new driver versions.

    I know they have carefully thought out arguments as to why their non-open-source, crappy drivers are better than open source ones. But folks, it just ain't worth it. I don't care how fast their cards are, I'll never make the mistake of buying nVidia again. Stick with the more open 3dfx, or Matrox. With them, if it crashes, you can track the bug down and fix it! Or someone else can. The number of open source hackers who might fix a bug is much, much larger than the number of employees at nVidia working on drivers.

    --Bob

  • by Xevion ( 157875 ) on Sunday November 26, 2000 @09:48AM (#601644)
    No, the NV20 does not use tiling. It uses something called hidden surface removal, which keeps the standard rendering approach most cards use today but does, to a small extent, what tiling does. It is not nearly as efficient, but it will probably increase performance 20-30% realistically. The 7x performance gain number comes from environments specifically built to favor the NV20 over the GeForce2 Ultra (a lot of overdraw, high-res textures, lots of polygons all at once), much as the inclusion of T&L let the GeForce walk all over the TNT2 Ultra in TreeMark.

    One other big feature of the NV20 is the programmable T&L unit, which lets you add small custom effects to what the video card processes instead of relying on the CPU.

    Another performance advantage people will see comes from the increased theoretical maximum fillrate. The GeForce 2 runs at 200MHz and has 4 pipelines, each capable of processing 2 textures per clock, which gives a fillrate of 800 Megapixels/second or 1.6 Gigatexels/second. The NV20 will likely run around 250MHz with 4 pipelines that can handle 3 textures per clock, giving fillrates of 1 Gigapixel/second and 3 Gigatexels/second. That would allow a theoretical performance increase of about 30% in single- and dual-textured games, and on the order of 100-130% in games that use 3 or more textures per pixel -- assuming, of course, that there is enough memory bandwidth to push all of those pixels.
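
    Plugging those numbers in (the NV20 clock and texture-unit counts are my speculation above, not confirmed specs):

    #include <stdio.h>

    int main(void)
    {
        int gf2_clock = 200, gf2_pipes = 4, gf2_tex = 2;    /* figures above   */
        int nv20_clock = 250, nv20_pipes = 4, nv20_tex = 3; /* speculated NV20 */

        printf("GeForce 2: %d Mpixels/s, %d Mtexels/s\n",
               gf2_clock * gf2_pipes, gf2_clock * gf2_pipes * gf2_tex);
        printf("NV20 (?):  %d Mpixels/s, %d Mtexels/s\n",
               nv20_clock * nv20_pipes, nv20_clock * nv20_pipes * nv20_tex);
        return 0;
    }

    That prints 800/1600 for the GeForce 2 and 1000/3000 for the speculated NV20, which is where the rough single/dual-texture and triple-texture gains quoted above come from.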

    Price-wise, I would expect a 32MB version with ~200MHz DDR memory for $300-$350 when it comes out, and a 64MB version with perhaps 233MHz DDR memory for around $600.

  • by Temporal ( 96070 ) on Sunday November 26, 2000 @08:25AM (#601645) Journal

    Their drivers are among the most unstable drivers around for Linux.

    Odd... they have not crashed on me since... umm... two driver versions ago... and even then the only crashes I ever had were when switching VCs. The only problem is the memory leak when OpenGL programs crash. My OpenGL programs crash a lot when I'm writing them. :) But restarting X once every few days isn't much trouble.

    Also, when updating kernels that driver breaks a lot since it is binary only.

    The NVidia kernel module is different from the old SBLive binary module in that the NVidia module has a source code layer between it and the kernel. To make the driver work with a new kernel version, you just have to update the source code layer, and in most cases you don't have to make any changes anyway. The binary part of the distribution is in no way dependent on your kernel version.

    The SBLive was also different in that Creative didn't really give a rat's ass about the Linux support, whereas NVidia has basically made Linux an official supported platform and is keeping the Linux drivers exactly up-to-date with the Windows drivers.

    So please, even if you like their hardware, don't support them till they open the drivers. In the long run it will help us a lot more, by teaching companies that drivers alone are not enough.

    Don't forget that NVidia's OpenGL driver is the best in consumer 3D graphics. A significant portion of this driver could easily be used to enhance any other company's drivers. The software T&L engine, for example, which contains optimizations for all those instruction sets -- I'm sure 3dfx would love to get its hands on that! Graphics hardware manufacturers typically don't even support OpenGL well, since writing D3D drivers takes far less work, but NVidia has gone so far as to have better OpenGL support than D3D support. They would lose a significant edge if they opened their drivers.

    Let's not forget why we use open source software. I don't know about you, but I use whatever software is of the highest quality. I don't care if it is open or not. In many cases, open source produces better quality software than closed source, which is why I use it. In some cases, though, closed source is better. NVidia's closed Linux drivers are far and away the highest quality 3D graphics drivers available on Linux, and the GeForce 2 has been fully supported since before the card was even announced. The open source Voodoo 5 drivers, on the other hand, are crap to this day. I'm sure you won't have much trouble finding a Linux user who will trade you a Voodoo 5 for whatever NVidia card you have, if that's really what you want.

    ------

  • by Temporal ( 96070 ) on Sunday November 26, 2000 @10:26AM (#601646) Journal

    The number of open source hackers who might fix a bug is much, much larger than the number of employees at nVidia working on drivers.

    Unfortunately, you are incorrect. Compare NVidia's drivers to 3dfx's Voodoo 5 drivers. It seems as if 3dfx was simply expecting a few hundred developers to show up as soon as they made the drivers open source. As it turns out, only a couple of people outside 3dfx have made contributions, and one of them was paid to do it. It's sad, but it's true.

    NVidia, on the other hand, uses the same codebase for both their Windows and Linux drivers. As a result, one could pretty much say that most of NVidia's in-house developers (over one hundred of them) are actively working on the Linux drivers. That's far more people than are working on the Linux Voodoo 5 driver, and because they are all in-house, they are much better prepared to write the drivers. After all, if one of them has a question about the hardware, they can walk down the hall and ask the lead designer.

    I get this funny feeling that someone is going to say, "Well, they only have a few people working on the Linux-specific stuff." This is true, but the Linux-specific code is a very small part of the driver (less than 5%). In contrast, the far fewer 3dfx people have to implement the whole Voodoo 5 driver, including all the non-system-specific stuff, on their own. DRI helps, but it doesn't do everything.

    You think your problem with the NVidia driver would be fixed if it were open source? Well, maybe, but open source really isn't the software development Utopia that you think it is. At least the NVidia driver supports all of the features of the hardware (all of them), and at (almost) full speed, as opposed to the Voodoo 5 driver which still does not support the V5's trademark parallel SLI processing or FSAA.

    Disclaimer: I am by no means against open source software. Hell, I write open source software.

    ------

  • by electricmonk ( 169355 ) on Sunday November 26, 2000 @05:45AM (#601647) Homepage

    I think that with all the new 3D hardware that has come out in the last 6 months, plus the rumor of this chip, developers are going to have a hard time creating worlds complex enough for gamers to actually tell the difference in which card they are using.

    For example, this chipset is 7 times faster in rendering complex scenes, but only 2 times faster for rendering simple 3D scenes. I know that things like shadowing and lighting effects can be built into the gaming engine, but, still, isn't there a lot left to the developer's imagination (such as actually modeling and skinning characters and the objects in the world)? I can see this bumping up the development time for games slightly more every 6 months...
