Graphics Software

ATI Announces Next Generation 3D Technology (122 comments)

Jonas writes, "I spotted that ATI has announced its next-generation plans for the 3D industry, built around its 'Charisma Engine' and 'Pixel Tapestry' technologies, at this year's GDC. There's also an interesting article discussing the technology behind their next-gen 3D part."
This discussion has been archived. No new comments can be posted.

  • There is already hardware you can get for a PC that does hardware-accelerated volumetric rendering - check out 3DLabs' [3dlabs.com] often sadly-overlooked hardware. All the cards based around the GLINT R3 have this capability. Although they have lost out in the megapixel wars, for serious OpenGL work their hardware is probably the best you can buy for the PC, the quality of their Windows drivers is second to none, and they are ardent supporters of OpenGL and a member of the OpenGL Architecture Review Board. Their own support for Open Source drivers is unfortunately rather poor - however, Open Source developers have made up for this deficiency - the 3DLabs GLINT driver is one of the most stable and fastest 2D drivers in XFree86, and the 3DLabs GMX2000 card (their previous flagship) was the testbed for XFree86 4.0's new 3D Direct Rendering Infrastructure. No, I don't work for 3DLabs - it just pains me to see such nice hardware ignored by the mainstream.
  • by Anonymous Coward
    I've seen comparative sales figures for all the 3D cards. ATI, nVidia, and Intel are the top three. ATI is winning by a large margin, and 3dfx is at about 5% of total market and falling. Even Matrox has more sales than 3Dfx. I'll post the actual numbers when I get in to work.

    I believe their glory days are numbered. Voodoo is just too overpriced and has no stunning performance advantage anymore to justify the cost.
  • by Anonymous Coward
    3D Market Share

    ATI 29%--
    Intel 16%++
    NVidia 15%++
    S3 13%
    SiS 11%
    Matrox 9%
    3Dfx 4%--
    Trident 2%
    Other 2%

    ++ indicates increasing market share, -- indicates decreasing.
  • Hehe, you render bones, I render prostates for radiotherapy treatment planning. Still I guess that like me you're only rendering isosurfaces, not CT or MRI voxel data.

    For this purpose, one would need proper raytracing acceleration and I don't think 3D accelerators do raytracing stuff...

    Anyway, that would be a nice feature on a videocard... hardware povray :-) (like povray compiled on a playstation 2 which came pretty high in the povray benchmark)

    Oh well, that's just me mumbling about how great it would be to get cheapo 3D hardware into hospitals to do therapy planning...

    ---

  • Well... with time, it seems that 3D cards are now so powerful and have so much memory that you could just about port Linux to run on the card alone.

    This made me think that maybe the x86 arch needs to be made to play catch-up ASAP. What use is a 64MB, 200MHz 3D card that can pump a billion pixels etc. if the core architecture it runs on is still based on 70s/80s tech?

    bain
  • I'd take those results with a grain of salt: on the Direct3D side,
    there's a 486 SX 25 (8MB) cranking out a score of 281.81 at 8-bit 640x480 (built-in 2MB S3).
    A Gateway Pentium III "other" (128MB) with an nVidia TNT (16MB) gets 260.4 at 16-bit 640x480. Yet a Gateway Pentium MMX 233 (not II, MMX) (128MB) with a 4MB ATI Rage gets 266.4 at the same resolution and bit depth.
    The same chart shows an 8MB ATI card running at 1600x1200 in 32-bit, which is just silly... (264 on a Celeron 333)
  • "Do you have any figures for actual fill rates and triangle drawing rates"

    All those figures are irrelevant anyhow :) the only thing that matters is real-world fps on real-world games. And you can't tell how fast they are on real games till you see them ... (haven't even gotten a GeForce yet, I'll have to work on that).

    "with release dates for the hardware"

    Anything I could say (besides the fact it would be breaking people's confidences in me) might very well be wrong. For example, a lot of what 3dfx said (under NDA) to developers at GDC '99 about the geometry acceleration abilities of the Voodoo4/5 turned out to be wrong, because of last-minute design changes.
  • Cycle time is about 9 months (instead of Intel's 18).

    This should be no surprise: Intel can adjust their R&D budget to hit Moore right on the nose. They haven't had much competition, so there's no reason for them to spend money going any faster. But the 3D hardware market is cutthroat (and no there hasn't been any reduction in competition -- all the key chipset manufacturers are still out there, with the exception of Rendition, who disappeared a long time ago). These guys are pushing their cycle time about as fast as possible -- they're basically looking at a few months of design, then fab on each generation...

    Their stuff is easily as complex (element-count wise; I can't say if it's easier or harder to design) as the big CPU makers'...
  • Apple's been renaming just about everything they throw in their computers nowadays. First, the PowerPC 750 became the PowerPC G3 processor, then the 7400 became the G4. And the 7400 has the AltiVec unit on it, which Apple renamed... Velocity Engine! Woohoo!

    I will say that G4 sounds a heck of a lot cooler than 7400. 7400 sounds like the name of a Volvo. ;-)
  • Volume rendering can be done on specialised hardware (many medical imaging systems) or approximated with 3D texture mapping. There is a small technical difference between the two, but it's close enough for the average person not to notice.

    The reason that you've seen it run better on better-equipped systems is that they probably were not MS-oriented. Win32 OpenGL does not have 3D texture mapping (it's still only OpenGL 1.1), so you have probably been seeing these on Suns or SGIs, where there is already hardware support. The reason your system ran slowly was that it was an all-software implementation.

  • Your post is a bit misleading. Glide is 3dfx's baby through and through. Only 3dfx-based cards will ever support the glide API directly.(*)

    Quake II/III can draw using either Glide or OpenGL. If you select OpenGL, Glide isn't involved at all. So you're not quite right when you say you need a card with Glide support to play Quake III on Linux. You have a choice between Glide (3dfx only) and OpenGL (all others).

    That's not to say that any card with Linux OpenGL drivers will perform as well under Linux as it does under Windows. This is where that direct rendering thing comes in. My TNT2 card only manages about 20 FPS in Q3A on my machine because the OpenGL calls need to pass through the X server. When XFree 4.0 comes out (and somebody said "early March") and drivers using its Direct Rendering Infrastructure (DRI) are released, this bottleneck will be removed and owners of TNT2 and other non-3dfx cards can enjoy the full power offered by their hardware.

    (*) I believe there's a library that translates Glide calls to OpenGL calls, enabling software that was written only for the Glide API to run on any OpenGL implementation. 3dfx's lawyers were hassling the authors and I don't know how that turned out.
    --

  • by lambda ( 4236 )
    This is a little bit suspicious given the recent buyout of ArtX. Are they incorporating some stuff that ArtX was working on for Nintendo's Dolphin?

    (Don't flame me about how ArtX's PC stuff suck[s|ed]!)
  • I'm sure ATI produces good, reliable products. In the end, though, quantity doesn't always win. ATI seems happy to stand pat rather than inject excitement into the market. Moreover, long-term success in the PC industry is often dictated by a company's ability to react to consumer demands and innovate. Case in point: ATI's Rage Fury Maxx [tomshardware.com]. Some are thrilled by dropping a Corvette motor into a Chevette, but I prefer to see creative engineering from the ground up.

    ATI or NVidia, either way I like to see competition. When companies don't give competitors a chance to rest on their laurels, the consumer benefits. As long as we have choice. With the recent trend toward integrating video support into mainboard chipsets, and the threat of integrated solutions like the PSX2 and X-Box driving choice out of the market, there are bigger worries.

    Having bought a Matrox Millennium years back to play Duke Nukem II, hoping to get a glimpse of a virtual world, I'd surely hate to see the road to that dream take a sharp turn. Not when recent video hardware advancements make the goal seem so near.

  • Look back in John Carmack's .plan files, where he talks about it... basically, you make a 3D texture that maps to the light intensity from a light and then transform the cube into your world space. Then it's easy to just transform geometry vertices into the space of the lighting texture cube, and you can use it as a second pass for lighting. Nice ;).
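
    A minimal sketch of that idea (not Carmack's actual code): transform each vertex into the light's cube space and use the result as 3D texture coordinates for the second lighting pass. The column-major 4x4 matrix convention and the light_from_world name are just assumptions for illustration.

      typedef struct { float x, y, z; } vec3;

      /* map a world-space vertex into the light's [0,1]^3 texture cube */
      static vec3 light_texcoord(const float light_from_world[16], vec3 v)
      {
          vec3 t;  /* column-major 4x4 times (v, 1) */
          t.x = light_from_world[0]*v.x + light_from_world[4]*v.y + light_from_world[8]*v.z  + light_from_world[12];
          t.y = light_from_world[1]*v.x + light_from_world[5]*v.y + light_from_world[9]*v.z  + light_from_world[13];
          t.z = light_from_world[2]*v.x + light_from_world[6]*v.y + light_from_world[10]*v.z + light_from_world[14];
          return t;  /* feed to glTexCoord3f() during the lighting pass */
      }
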
  • They've got a lot of nice stuff - it's good to see 3D texture maps - they should be very useful for lighting, and it's good to see bump mapping in there. I'm quite interested in their shadow mapping stuff, but I'm not sure how fast it will be, and I'm not sure how well it will work with multiple lights...

    However, most of this is beside the point. For any of these features to be useful, they really need to be implemented on the majority of 3D cards out there; otherwise it's too much effort for too little return to make use of the extra features. Just look at most 3D games out there - they're still mainly just using one-pass rendering. (Well, apart from first-person shooters, but they're mainly only using two passes for their static shadow maps...)
  • Professional graphics people already use this stuff.
  • "Where is Voodoo4?"

    Both the V4 and the V5 use the same chip, 3dfx's VSA-100. The V4 uses one chip, while the V5 uses 2 or 4 chips, depending on the model. The two cards should come out about the same time.
  • That SGI stuff is expensive. This stuff will ship on a (pulling number out of my ass) $250 graphics card for PeeCees. Then it'll be on a $150 graphics card the year after that.

    The pro equipment will always be ahead. When ATI/3DFX/etc catch up to where SGI is, SGI will be at the next level by then. But it doesn't matter. This is about affordable stuff that you'll put into a $2000 box that you'll play Carmack's games on.

    "Yawn, this Learjet is lame. Me-262 did this 55 years ago."


    ---
  • The 20th - this week they are supposed to ship!
  • And people actually complain that the bandwidth of most types of video memory is too slow... Imagine trying to transfer the rendered scenes back to the client system to display over the internet! 60+ fps at 1600x1200 with 32 bpp?? Forget that; maybe 320x200 at 8 bpp with ~5 fps if you have a relatively decent connection :) Even if you have outrageous connections, nothing can even begin to handle the needed load. Of course, there is also the network latency introduced too :)
  • Congratulations, Bain! Your self-scored 2 reply to a First Post has been selected for a Monkey Moderation!

    Due to this post, a monkey was allowed to live. Yes! That's right! Instead of dismembering or poisoning another of our hairy friends, we figured Bain would make a proper substitute.

    We'll be coming, Bain. Be ready for us! We've selected yet another interesting and creative method of termination just for you! :)

    (Note: This message was meant in jest. No matter how much we want to do violent things to Bain, a proper monkey will be selected, dressed up as Bain, and killed in Bain's stead in a Monkey Moderation to follow.)

    LouZiffer

  • While not strictly related to this chipset (which seems rather impressive, and if the support is there under OpenGL and/or DirectX, could make a large impact on game realism), I have a thought on something that I have wanted for a long time to be included as standard on any graphics card:

    A high-quality TV output

    Why, you might ask? Well, my use of it wouldn't be typical (I want an easy way to plug in a homebrew, or el cheapo HMD, that uses a composite video signal), but I can see others using it for more normal usage - to play games on the TV, to watch their DVD's on the TV (I know many DVD decoder cards have the output to TV on them, but this should be a function of the video card), or to set up a TV-based internet connection (yeah, I know - yech! - but it should be an option!). One other use I could think of would be to be able to record a 3D movie to a VCR (of course, you and I would just keep it digital, render it to an MPEG or AVI file).

    I don't want the cheesy TV-out system either (where you have to set the system to 640x480x30Hz or something, then the scan-rate conversion is done, but you can't see it on the monitor) - I want to be able to view the image on the monitor and TV at the same time. For my purpose, this would allow me to preview a world rendered for an HMD on my monitor as I work, but then put on the HMD to do actual testing. I also want something that can handle the fast motion that can accompany FPS games and VR sims.

    Now, for my purpose, all of this would be moot if el cheapo HMDs had SVGA quality LCDs and interfaces - but they don't. My application is a niche anyhow - I am sure many have wanted to play their FPS game on the TV from their computer (esp. when the TV is generally much larger than the monitor on the PC).

    I know that there exists external hardware to do this - but much of it isn't cheap, and the cheap stuff isn't great. I think, though, that an integrated, low-cost solution could be done, if some company would do it.
  • Now if only I could get the drivers for the Rage 128 to work right.

    Ever seen the "Flying Windows" screen saver crash? Always fun!

  • Hmm.. Delta Force I and II use volumetric pixels, and both look good, but use way too much CPU.. I tried the demo of DF II, but it was too much for my 400 MHz AMD.

    Voxels might be the next feature to be added to 3D accelerators. I think SGI already does voxels in hardware...

  • All that is nice. Cool new 3D hardware to use in my PC. The problem is, these pieces of hardware never get proper drivers written for them. They never maximize the full capability of the specific hardware. ATI specifically has never had proper support for their drivers, so why would I buy another card from them? The point is, if you're going to release hardware, release proper drivers for it. What's the point in having a nice card that just sits there?

    Shouts to 3dfx for staying on top.. I junked my rage fury and bought a voodoo.
  • Hell, I've had that happen on a regular basis with everything from an S3 ViRGE to a Matrox... I don't think it's necessarily the driver. :>
  • I went through the whole article on Sharky's and there was only one picture of a human model. (The part where they showed image morphing interpolation - very cool.) But - I want to see better hair. Anybody watching that new show on the WB, Max Steel? The hair on those characters is awesome! Granted, they still have a few bothersome jerky movements, and their joints are a bit unnatural looking... I want a big hair-flying frag fest! ;-)
    The Divine Creatrix in a Mortal Shell that stays Crunchy in Milk
  • Is it really that bad...
    I was looking at the All-in-Wonder 128 Pro
    or the Voodoo3 3500. Is there anything I should know about these or similar cards?
    ...I run RH 6.1, BeOS, and Windows 98.
  • Thank you!...
    Is the AiW 128 a true MPEG-2 (DVD) decoder?
    ...The only thing that kept me looking at the Voodoo3 3500 was the radio tuner, and the fact that BeOS supports Glide pretty well.

    Thanks again ... great link, I always forget to look there.
  • What planet are you on?

    Practically every 3D game out there makes extensive, if not sole, use of hand-drawn texture maps. There's no way you can reasonably procedurally texture a low-poly model and make it look like, say, a face.

    Practically every broadcast or film 3D project uses hand-drawn or photographically based texture maps.

    I agree, these are often combined with procedural maps and volume shaders, but a chrome sphere on a checkerboard with a marble column in the background rendered in POV-Ray doesn't exactly cut it when it comes to games and other professionally produced 3D content.

    Whoever moderated this as 'informative' needs to get their head checked.
  • I'm not sure where everyone's getting their data for nVidia's supremacy, but 3dfx's release of PC Data's latest report is here [yahoo.com] . This probably includes OEM boards shipped with machines and the like, but all opinions aside, I'm pretty sure 3dfx has almost always (during the life of the Voodoos) held the top spot in multiple categories.
  • "But, I don't think nVidia or ATI ever had a proprietary 3D API (please correct me if I'm wrong)"

    Back in 1995 the NV1 multimedia accelerator performed curved surface rendering. It could perform forward texturing of a quadratic surface defined by 9 control points. Unlike any current card, it could do a realistic curved tire with a dozen patches, instead of needing hundreds of triangles to avoid that caveman/combine harvester appearance. It could do an unrealistic Lara Croft breast in a single patch.

    Absolutely amazing stuff, and since it was (and still is) totally unrelated to any standard 3-D API, there was an SDK which exposed the interesting programming model of the chip. It was more than the hardware registers but less than a software API.

    At the time, no developer wanted to write for non-standard hardware unless the hardware vendor shelled out the money. Plus, quadratic patches (parabolas) aren't supported by low-end 3-D authoring tools, and high-end tools use bicubic (NURBS) splines, which don't always degrade to parabolas well. So apart from Martin Hash's Animation Master, no 3-D authoring tool ever supported quadratic patches, which made development tricky. Plus it's pretty hard to do collision detection on curved surfaces.

    I saw the future back in 1995 but the game industry wasn't interested.
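
    For the curious, a rough, hedged illustration (plain C, not NV1 driver code) of what "a quadratic surface defined by 9 control points" means: evaluating a biquadratic Bezier patch at parameters (u, v) in [0, 1].

      typedef struct { float x, y, z; } vec3;

      /* quadratic Bernstein basis: (1-t)^2, 2t(1-t), t^2 */
      static float b2(int i, float t)
      {
          float s = 1.0f - t;
          return (i == 0) ? s * s : (i == 1) ? 2.0f * s * t : t * t;
      }

      /* evaluate the patch defined by a 3x3 grid of control points */
      static vec3 eval_patch(const vec3 cp[3][3], float u, float v)
      {
          vec3 p = { 0.0f, 0.0f, 0.0f };
          int i, j;
          for (i = 0; i < 3; i++)
              for (j = 0; j < 3; j++) {
                  float w = b2(i, u) * b2(j, v);
                  p.x += w * cp[i][j].x;
                  p.y += w * cp[i][j].y;
                  p.z += w * cp[i][j].z;
              }
          return p;
      }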


  • I myself will probably never take ATI seriously when considering a high-end gaming card. Why? Nearly all of their products in the past have been quite substandard in the quality department. If the hardware was good, then the drivers and software suffered badly and vice versa.

    I know of no serious gamer that owns an ATI card. My friend bought one once (Rage 128, I think) based on a good review in PC Gamer, and has regretted it ever since. (He doesn't read PC Gamer anymore, either.)

    A few people mentioned that ATI has the majority market share on the 3D card racket. This is true, but misleading when you look at the big picture. The reason for this is that ATI ships a lot of cards and graphic chipsets to OEMs like Dell and Gateway for use in lower budget machines ($500-$1500). In the last couple of years, this price range has dominated any others due to the huge amount of people buying a PC for the first time. These people do not even know what a video card is, therefore Gateway & Co can get away with putting anything in their machines that is barely capable of generating graphics.

    Need I go on?

    Of course, it is completely feasible that ATI could turn themselves around. AMD rose up from the ashes to compete directly against Intel. (Yeah, apples and oranges, but I think the principle is the same.) And 3dfx, once the 3D king, has fallen to nearly last place: they were awfully good at getting chips out the door, but simply sat around while companies like nVidia started to innovate and add features to their products.

    Could write more, but I'm about to go home right now and play Q3 on my GeForce. :)
  • Very good statements. By allowing other companies to make great cards from their chips, NVidia has also created an extremely successful economic situation around its own chips! While NVidia is competing with companies like ATI to make great chips, it also lets card manufacturers take those chips and push their limits while competing on price.

    This is great because there is a HUGE fight for quality and price-control, and it gets done at TWO levels! This means that NVidia chips from a good card-company with great options and speed will be top-notch.

    Once NVidia gets out their XFree4 drivers (IF XFree4.....), they will destroy everything in sight. Supposedly, these drivers will "kick the snot out of anything else out there", says their Senior Engineer, Nick.

    Because of this, I am in full support of NVidia.

    Mike Roberto
    - roberto@apk.net
    -- AOL IM: MicroBerto
  • The Voodoo5 has a feature 3dfx calls the "T-Buffer", basically an accumulation buffer which can also be used for Full Screen Anti-Aliasing, Motion Blur, Soft Shadows, and Soft Reflections. All effects require the software to support the feature, except for anti-aliasing, which can be done automatically.

    This is all fine and dandy, but this is hardly hardware support for things like depth of field. It requires that you render the scene n times just to achieve a single frame. Real hardware support (for depth of field) would let you specify the focal length and the hardware would automatically blur the out-of-focus areas.
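
    For reference, that n-pass approach looks roughly like this with a plain OpenGL accumulation buffer (a hedged sketch; draw_scene_with_eye_offset is an assumed application callback, not a 3dfx or OpenGL call):

      #include <math.h>
      #include <GL/gl.h>

      /* average N renders taken from slightly shifted eye positions that
         all converge on the focal plane */
      void accum_depth_of_field(int n, float aperture,
                                void (*draw_scene_with_eye_offset)(float dx, float dy))
      {
          int i;
          glClear(GL_ACCUM_BUFFER_BIT);
          for (i = 0; i < n; i++) {
              float angle = 6.2831853f * (float)i / (float)n;   /* point on the lens */
              glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
              draw_scene_with_eye_offset(aperture * (float)cos(angle),
                                         aperture * (float)sin(angle));
              glAccum(GL_ACCUM, 1.0f / (float)n);               /* add this pass */
          }
          glAccum(GL_RETURN, 1.0f);                             /* write the average back */
      }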

    While the 3dfx "T-Buffer" effects are nice, I can't imagine developers using them, since it would lock the developers into certain hardware (again!). This is something that I thought was insane when Glide first came around. These types of proprietary extensions are exactly why I have never and will never buy 3dfx-only games or 3dfx video cards. You said it above, "basically an accumulation buffer"; instead of using this type of thing, why not make an open extension to OpenGL/Direct3D to support these types of effects?

    Full scene anti-aliasing is one of the few "new" features that 3dfx cards sport that I actually like, and that I think will be useful in future games.

    Just my 2c
    The "Top 10" Reasons to procrastinate:
  • Thanks for the reply,
    I believe you are correct about the T-Buffer hardware "jitter" thing. I'll have to check 3dfx's documentation. My main point was that the card has to render the same scene multiple times (into each of its n T-buffers), then merge them together to produce the effect. This is how it was shown in the documentation I read, and in the demos that 3dfx gave (of course, these were emulated).

    I agree with you that Glide was needed at the time, but I would have rather seen them work with others, or with the OpenGL standards committee, to get the job done. I realize (and don't think I ever said otherwise) that 3dfx was never a monopoly (although they acted like one, at least in the 3D consumer market). And I agree that they did hold onto Glide longer than was good for the gaming/development community.

    But, I don't think nVidia or ATI ever had a proprietary 3D API (please correct me if I'm wrong), I know of S3's (wasn't it much older than Glide??) and PowerVRs (I used to own one... yuck!). I didn't agree with PowerVR's work with PowerSGL either, it was just much cheaper than buying a voodoo1 when I got it :).

    I don't think 3dfx's Glide was wrong for the time (and you're right, the hardware was quite amazing for the time period [jeesh! not too long ago]), but the fact that it even began shows how short-sighted the entire 3D industry was at the time. Don't be mistaken, 3dfx (3Dfx at the time) wanted to become the M$ of 3D, lucky for us that they failed.


    The "Top 10" Reasons to procrastinate:
  • >The fact that nVidia is worth approximately half
    >as much as ATI doesn't tell me anything about how
    >many people actually use ATI or nVidia cards

    You are completely correct - the number of computers with an ATI Rage Pro chip *FAR* exceeds that of all nVidia cards combined. Now if you count all other ATI cards in...

    I'd say nVidia's worth is largely due to stock fluctuations, which in turn are thanks to their announcements (often hype).

  • I agree that a lot of these advances are being driven more by the marketing dept than any real requirement from developers or users. But on your point of having to code for multiple cards - surely this is the point of things like (gasp) DirectX? My (limited) understanding of it is that basically the architecture gives a whole load of API calls for texturing, rendering, bump mapping, whatever - and software implementations of them all. Then along comes your lovely new EmotionalPixelCarpet Engine(tm) and says - "hang on - I can do X, Y and Z in hardware". The idea is that to the application it's transparent - it says - "here's a cube, please bump & tex map it, then phong shade it and put it at this position". Whether that's slow or fast depends on the underlying drivers & hardware.

    The remaining problem is keeping the API up to date with the new features that are not merely "good implementations" but rather completely new stuff. One for the architects, I think - but it can't be that hard.
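
    The analogous mechanism on the OpenGL side is the extension string; a hedged sketch (the two use_* calls in the usage comment are hypothetical placeholders for the hardware and software paths):

      #include <string.h>
      #include <GL/gl.h>

      /* crude check - a robust version would match whole space-separated tokens */
      int has_gl_extension(const char *name)
      {
          const char *ext = (const char *)glGetString(GL_EXTENSIONS);
          return ext != NULL && strstr(ext, name) != NULL;
      }

      /* usage (requires a current GL context):
             if (has_gl_extension("GL_EXT_texture3D"))
                 use_hardware_3d_textures();
             else
                 use_software_fallback();                        */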

  • If ATI follows through on their promise and loads the hardware with things like vertex skinning and all that, and the 3D programmer comes up with a more advanced version of the feature in software, what then?

    Does the game stick with the eventually outdated hardware version of the feature, or do they implement a newer optimized one via software, which'll sort of defeat one or more purposes of the card?

    (Am I making any sense here?)

    ========================
    63,000 bugs in the code, 63,000 bugs,
    ya get 1 whacked with a service pack,
  • I have found the All in Wonder 128 to be an excellent card, and the drivers under XFree86 and Windows 98 (though not 95) top-notch (but, then again, my last card was a 2 meg Diamond Trio).

    I don't know about BeOS, though.

    You might want to know that the AiW128 can do real-time MPEG-2 compression, though not in hardware, and now has 3D support in Linux. Also, the V3500 cannot do 640x480 capture.

    Ah, why have me tell you when Tom can tell you? There is a comparison of these cards and others over on Tom's Hardware...

    Multitalented All in One Graphic Boards [tomshardware.com] on tomshardware.com

    Shop wisely, and don't forget that ATI has amazing DVD playback :)

  • Surely that was flame.

    While ATI had some problems getting its products to market in enough time to compete with the likes of NVIDIA and 3DFX, you cannot judge them based on the fact that their cards are not as fast as the cards in a completely different market segment. I have never seen an ATI card that failed to excel at the job it was intended to do.

    Aside from a few past driver issues, ATI has, in my opinion, done a fine job (I am biased, though, as I own and work with ATI cards these days).

    On another note, if I hear one more person talk about the rage 128 being slow I'm going to scream. Why don't we all just throw those TNT2 cards out, too?

    J. T. MacLeod

  • If you take a good look into this, it becomes clear that graphics cards have actually exceeded Moore's Law within the past three years. Current graphics chips have nearly as many transistors as the original Pentiums, and when you compare that with their predecessors... there is no comparison.
    The interesting thing about graphics chips is that they are still not on the bleeding edge as far as total number of transistors or speed and throughput. Eventually they may catch up to the CPU simply because the road is already paved for them. I won't be surprised to see a 500MHz graphics engine within the year and possibly a 1GHz one sometime next year. The technology is already there and "proven" by the CPUs; it's just a matter of pushing the graphics chips into that ballpark.
    What does this all mean? Well, I think our Star Trek Holodeck idea isn't too far-fetched. Movies will involve more digital actors, who may even become the majority in the acting world. Games of course will continue to boom and will only get better, more lifelike and realistic. I think simulators are the next big step, ones that really fool your brain into thinking that you're actually within your virtual environment. What I don't understand is why the pipeline between the CPU and the graphics card isn't opened up more, or at least made more direct. If this were done, truly astounding products would result.


    Nathaniel P. Wilkerson
    NPS Internet Solutions, LLC
    www.npsis.com [npsis.com]
  • but the price is more than the discussion here would bear. I built an animation workstation for my brother-in-law. We went with the Oxygen RPM card and it is a great card, but it cost $1250.00 at the time. Alias/Wavefront's Maya works like a dream on this card and so does Lightwave, but you can't go below 24-bit color. The majority of people here are more interested in the ultimate gaming card and would be better off with a Voodoo5 6000 (very soon) for $600.00 if they want to kick some butt.

    A side note: if you saw the simulations for Robbie Knievel's train and Grand Canyon jumps, then you saw my brother-in-law's work using this system.

  • There's a speed comparison site here. [3dbenchmark.com]

    The best result for a Banshee - which I believe was a really fast card 18 months ago - was 168.5.

    The fastest result for a modern card was 462.1, but that could have been overclocked. There are several results that are over 300. This isn't a totally accurate way of comparing, given the effect of the processor, but it does suggest that graphics cards are matching Moore's law pretty closely.
  • Well, this would require very fast networking, so that you could send all the texture information and vectors to the card, then get a high-resolution rendered scene back, but it could be done. I don't think that the original poster was being totally serious, although similar systems have been used for incredibly processor-intensive ray tracing work where scenes take more than a few hours to render.
  • You're probably right. My half-hearted research didn't support this, but as I mentioned at the time, it isn't very accurate, and I didn't take account of the increased resolution and colour depth.

    Do you have any figures for actual fill rates and triangle drawing rates with release dates for the hardware?

    "and no there hasn't been any reduction in competition"

    There were a lot of graphics chip makers about 4 years ago. Most of them died very quickly. I'm not suggesting the reduction in competitors has reduced the strength of competition - in fact quite the opposite. Natural selection means that only strong, aggressive companies have survived.
  • There is a feedback method under OpenGL... For picking objects, you render your scene (passing in named objects) and, as they intersect your mouse position, you get back a list (depth sorted) of things your mouse "hit". That being said, it's quite slow (fine for an editor, bad for a game).
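
    A rough sketch of that selection-mode mechanism (standard OpenGL, nothing card-specific; draw_named_objects is an assumed callback that calls glLoadName() before drawing each pickable object):

      #include <GL/gl.h>

      #define PICK_BUFFER_SIZE 512

      /* returns the name of the first object hit, or 0 if nothing was hit */
      GLuint pick(void (*draw_named_objects)(void))
      {
          GLuint buffer[PICK_BUFFER_SIZE];
          GLint hits;

          glSelectBuffer(PICK_BUFFER_SIZE, buffer);
          glRenderMode(GL_SELECT);            /* nothing is drawn to the screen */
          glInitNames();
          glPushName(0);

          /* a real picker would also narrow the projection to a few pixels
             around the mouse with gluPickMatrix() here */
          draw_named_objects();

          hits = glRenderMode(GL_RENDER);     /* back to normal; returns hit count */
          /* each hit record: name count, min depth, max depth, then the names;
             records are not pre-sorted, so sort by min depth in practice */
          return (hits > 0) ? buffer[3] : 0;
      }
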
  • Whatever happened to pixel volume rendering? I remember there was a game that came out a few years ago, Voxel or something, that used it. On my PIII 500/Voodoo3 3k it didn't run too fast, but I have since seen it run on faster systems and it's pretty sweet.

    Looking forward, when can we expect to see mainstream games that use geometry acceleration?

    thanx

  • Whenever I see these articles on the net about how ATI will rule the world with its cutting-edge, next-generation 3D technology, I never see any mention of the drivers. When I bought the ATI Rage 128 when it came out, it sounded like a good product (especially hardware-wise). But the drivers had significant problems, and ATI went at least 6 months without updating them, even though they acknowledged problems on their "known bugs" page. I can only think their future 3D cards will be the same. I finally gave up, threw the ATI card away and bought an NVidia TNT2. So what I am saying is that a company's programming talent is just as important as its hardware engineering. Without good drivers, that new 3D card is useless.
  • Many game developers, including Dave Taylor (id, crack.com, Transmeta), have, to some extent, decried the lack of flexibility that comes about when more and more of the 3D pipeline is implemented in hardware...

    Unfortunately for them, and for us in general, we are stuck with this situation while using current-generation PC technology. Many components of the PC architecture, including even the AGP bus, are just too slow to currently allow for this flexibility, even though processor speeds are relatively blazingly fast these days.

    As we move up to the next generation (64bit PCs, fast buses), I'd expect a lot of the 3D stuff that is hardware accelerated now to move back into software on general purpose CPUs, until the ceiling starts being hit there too.

  • Sorry about the bold part, I added b tags when I should have added p tags ;) But I think my comment was a good one nonetheless ;)
  • I agree. You now have this nifty geometry processor sitting on your card. There has to be more you can do with it.

    It would be cool to see some OEM's releasing some extension to allow bezier patch rendering.

    As far as depth of field goes the new 3dfx part is supposed to be able to do it with their T-Buffers. Also full scene AA (blurring the edges).

    I doubt a physics engine or hit detection is a possibility though, as these are pretty implementation-dependent for each game. With bezier patches, I could send the control points to the card and it could render them without further intervention. Things like physics and hit detection require return values (and a stall in the video card).

  • I think it is going to be interesting to see what kinds of interesting things these OEMs are going to be doing with those nifty geometry accelerators. I mean, that is a lot of power to just be hardware-accelerating the matrix transformations. Things like the keyframe interpolations sound very promising. It would be really cool to be able to submit the start mesh, then the target, then specify a value for a linear interpolation. Also, let's not forget NVIDIA's extension to do primitive skinning via 2 HW matrices (ATI says they are going to do 4). Just try and imagine what else those geometry processors can be made to do. And from a developer's standpoint I am putting my money on OGL as the API that is best positioned to expose these new HW features.
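
    A minimal sketch (plain C, purely illustrative) of the keyframe interpolation being described: blend a start mesh toward a target mesh by a factor t in [0, 1], which is the same math a card exposing this would run per vertex.

      typedef struct { float x, y, z; } vec3;

      void lerp_mesh(const vec3 *start, const vec3 *target, vec3 *out,
                     int vertex_count, float t)
      {
          int i;
          for (i = 0; i < vertex_count; i++) {
              out[i].x = start[i].x + t * (target[i].x - start[i].x);
              out[i].y = start[i].y + t * (target[i].y - start[i].y);
              out[i].z = start[i].z + t * (target[i].z - start[i].z);
          }
      }
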
  • I understand where you are coming from. But there are certain features that I think can be supported across the board, and emulated where they don't exist. For instance, our product uses bones-based animation. If your card supports HW multiple matrices we use them, else we use the software implementation (which was written before HW multiple matrices existed in consumer space). I think that OEMs in most cases are doing a good job of selecting technologies that can be adapted to very quickly. For instance, we all interpolate between key-frames at some point, right? So now we get that HW-accelerated by ATI cards, probably through an OGL extension. I think that OEMs are taking your concerns (or rather our concerns) into careful consideration when designing these features. Except Matrox - I can't see a real good reason to learn all that bump mapping stuff, especially if I can't have a software fallback that runs decently. But HW geometry parts and higher fillrate are always good things.
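
    A hedged sketch of what the software fallback for two-matrix skinning looks like (matrices are column-major 4x4 as in OpenGL; the names are illustrative, not anyone's actual engine code):

      typedef struct { float x, y, z; } vec3;

      static vec3 xform(const float m[16], vec3 v)       /* m * (v, 1) */
      {
          vec3 r;
          r.x = m[0]*v.x + m[4]*v.y + m[8]*v.z  + m[12];
          r.y = m[1]*v.x + m[5]*v.y + m[9]*v.z  + m[13];
          r.z = m[2]*v.x + m[6]*v.y + m[10]*v.z + m[14];
          return r;
      }

      /* blend the vertex transformed by two bone matrices by weight0 */
      vec3 skin_vertex(const float bone0[16], const float bone1[16],
                       vec3 v, float weight0)
      {
          vec3 a = xform(bone0, v), b = xform(bone1, v), out;
          float weight1 = 1.0f - weight0;
          out.x = weight0 * a.x + weight1 * b.x;
          out.y = weight0 * a.y + weight1 * b.y;
          out.z = weight0 * a.z + weight1 * b.z;
          return out;
      }
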
  • And don't forget that Apple helped ship about 4 million ATI chipsets in the past 18 months.
  • Mac OS Rumors is typically a troll center rather than a source of substantive rumors. They start with half-*ssed misinformation, and go from there.

    Three thoughts/points to ponder:

    1. Apple has way too much invested in ATI right now to jump ship.

    2. It's true that Apple's ATI-sourced video lags behind PCs, and that the drivers are a little under-featured - but usually because Apple hasn't provided enough memory for the ATI hardware to do its job properly, or because of other hardware constraints.

    3. The latest Apple/ATI efforts, coupled with the vector-based Quartz graphics in Mac OS X, make for screen drawing so slick, you'll wonder why everyone's spending so much money on nVidia cards.

  • by Anonymous Coward
    Do graphics cards follow Moore's Law? Or is their evolution faster, or slower?

    I sure need more power!
  • by Anonymous Coward
    I'll say the same thing I've said since the ATi RagePro came out: cool, but will it take them less than a year to refine the drivers such that they'll actually take advantage of this?
  • If you're suggesting that the comparative market caps have anything to do with the "size" of the companies as you refer to them in the first paragraph, you're mistaken. As a games player/developer, you don't especially care how much a company is worth on paper. The fact that nVidia is worth approximately half as much as ATI doesn't tell me anything about how many people actually use ATI or nVidia cards (I have no statistics about these amounts).
    Here's to hoping that 3dfx dies a miserable death.
  • I, like most gamers, don't care how many companies are making what they call "3D graphics cards". I think it was the S3 Virge chip that the company called a "3D graphics card" but the industry called a "3D decelerator", since actually using the card's hardware was slower than software rendering on a PII. I care how many companies are making good 3D graphics cards.

    And that number's only going up. First there was the Voodoo, the Voodoo, and only the Voodoo. If you wanted a choice, you shopped between different cards with the same Voodoo chipset. If you wanted a high end card, it had TV out.

    There was much rejoicing when the TNT came out and started heating up the graphics card competition... and even Matrox seemed to want to do 3D, although it took them until the G200 before they even had an OpenGL driver using their card... but now they have the G400, and their card's actually good at it. For people buying a new system, it's generally worthwhile to look at the Voodoo 3, TNT 2, G400, GeForce(of course), and ATI cards... and that's a good thing.
  • ATI sure is talking a good game, but do they have what it takes to back it up? NVidia did three things that changed the video card industry:

    • #1 Produced the fastest, highest-quality, consumer-oriented video chipsets
    • #2 Consistently met demand in the OEM and retail market
    • #3 Produced better reference drivers than chipset implementers could produce, and updated these drivers for advancements like DirectX very quickly.

    ATI has shown it can produce a good video solution, but lacks in meeting retail market demand and driver support. Matrox builds awesome chipsets and cards (with excellent driver support) but doesn't give a damn about meeting demand. 3dfx kept their 3D spec closed, thereby limiting potential developer support, lost momentum, and didn't provide good reference drivers.

    NVidia has proven they can do all three consistently. And let's not forget #4 - support from software developers. Meeting the first three criteria directly impacts the fourth. Developers don't waste their time developing for hardware no one owns.

  • And how does it compare to the Voodoo5? It's announced on 3dfx's website, but I don't remember seeing any review yet. Anyone know the status of this baby?
  • 3D rendering is an easily parallelisable process. I remember early Voodoo cards could use two Voodoo chips: one would render the even scan lines, the other the odd ones. There are a few bottlenecks to parallelisation, and mostly it's the texture access, but with cheap memory you should be able to cache that.
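
    A toy sketch of that scan-line interleaving (illustrative only; in real hardware the chips run in parallel, and render_line stands in for a per-scanline rasterizer):

      void render_interleaved(int height, int chip_count,
                              void (*render_line)(int chip, int y))
      {
          int chip, y;
          for (chip = 0; chip < chip_count; chip++)   /* each "chip" takes every Nth line */
              for (y = chip; y < height; y += chip_count)
                  render_line(chip, y);
      }
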
  • Depth of Field effects [3dfx.com] will be supported in hardware by 3dfx's Voodoo5 series, scheduled to be released sometime this spring (most likely in late April/early May). The Voodoo5 has a feature 3dfx calls the "T-Buffer" [3dfx.com], basically an accumulation buffer which can also be used for Full Screen Anti-Aliasing [3dfx.com], Motion Blur [3dfx.com], Soft Shadows [3dfx.com], and Soft Reflections [3dfx.com]. All effects require the software to support the feature, except for anti-aliasing, which can be done automatically.

  • This is all fine and dandy, but this is hardly hardware support for things like depth of field. It requires that you render the scene n times just to achieve a single frame. Real hardware support (for depth of field) would let you specify the focal length and the hardware would automatically blur the out-of-focus areas.

    I'm not too familiar with the details of 3dfx's approach (not being a 3D programmer myself), but my understanding is that what the VSA-100 does is use a hardware method for producing "jittered" samples slightly off from the target pixel, which are then blended together in the accumulation buffer.

    The "T-Buffer" effects can then be specified to be applied to certain objects, but not others. If a sub-pixel jitter is specified, then you get anti-aliasing. A larger jitter gives softening, and a very large jitter gives blurring. So far as I can tell, a program does not need to perform "n renderings". The program still needs to specify the object (or it can specify a mask and just do an area, I think) and the degree and quality of jittered samples, but from there I believe the chip does the rest automatically.

    While the 3dfx "T-Buffer" effects are nice, I can't imagine developers using them, since it would lock the developers into certain hardware (again!). This is something that I thought was insane when Glide first came around. These types of proprietary extensions are exactly why I have never and will never buy 3dfx-only games or 3dfx video cards. You said it above, "basically an accumulation buffer"; instead of using this type of thing, why not make an open extension to OpenGL/Direct3D to support these types of effects?

    When the Voodoo 1 was first introduced, competing accelerators included the S3 ViRGE (the most numerous by a wide margin), the Rendition Verite, the nVidia NV1, etc... Good, fast OpenGL support was still the province of ultra-high-end professional accelerators, while Direct3D (still below version 3 at that time, I think) was slow and glitchy. As a result, in addition to OpenGL and D3D, *everyone* at the time had their own proprietary API which matched their own hardware closely, thus giving a sometimes substantial performance boost. nVidia had one for their NV1, PowerVR was pushing their PowerSGL, and I sort of remember S3 and ATI had their own as well. You still see PowerSGL support in Unreal, and I think S3 still pushes programmers to use Metal for the Savage2000, but of all the cards from that era only the Voodoo1 is still somewhat fast enough to be usable today, and only Glide is still programmed for.

    Glide survived because it appeared at a time when consumer-level 3D was first starting to appear, plus the hardware was good, plus it was fast, and doubleplus because it was easier to learn and program for than D3D at the time. I think 3dfx held onto Glide far too tightly for too long, but its existence is due more to history than any monopoly power on 3dfx's part. (In fact, ATI and S3, both then and now, are each several times larger than 3dfx, both in terms of market cap and number of cards shipped. nVidia has something like 5x the marketshare that 3dfx has.)

  • "If you're suggesting that the comparitive market caps have anything to do with the "size" of the companies as you refer to them in the first paragraph, you're mistaken."

    I'm well aware of that, but I didn't have any hard statistics on hand for the more interesting data, like marketshare. I was researching this a few days ago, but I can't seem to find my source again.

    Anyway, skipping the hard statistics, in terms of 3D accelerator market share (IE, who sells the most $$ worth of cards/chipsets), nVidia is number one. I forget who comes next, but I believe it's ATI, then S3, then Intel (Big with OEMs). Then way in the back comes everyone else, including 3dfx and Matrox. 3dfx has a pretty strong retail presence, but that's a small slice compared with OEM pie.

    Here's to hoping that 3dfx dies a miserable death.

    I have a hard time understanding the anger directed towards 3dfx. They don't have any monopoly power over the market (Never did), have released just about all of their source code, and their cards offer a pretty good bang for the buck.

    At any rate, you could very well get your wish. 3dfx has been experiencing severe and accelerating losses for the last few quarters. They just had a layoff a few weeks ago. At the current rate, they only have enough cash to last for a year or two.

    And one less 3D company means less competition in the marketplace. In the past few years we've seen a huge number of companies leave the field - Tseng Labs (out of business), Cirrus (now doing audio/modem chips instead), Trident (still around but minuscule), Real3D (remains bought by Intel), Rendition (remains bought by Micron), Hercules (remains bought by Guillemot), Number Nine (still around, but just a brand that sells S3/nVidia chips), and Chromatic (bought by ATI). I think Permedia might be out too.

    That's a pretty big number of companies that used to design chips, but no longer do. Now everybody else, like Diamond and Creative, just slaps a label and an S3/nVidia chip onto a board. A lot of industry analysts think the consolidation is going to continue.

    Think nVidia wouldn't try to establish a lock on the market if they get big enough? Intel, ATI, and nVidia have all been looking at integrated chipsets--in the short term as a low-cost part, but in the long term as a possible way to get that lock. And their investors seem to like the idea.
  • Speaking of ArtX (Which ATI recently bought), there's an interesting article up at Ars Technica, "ArtX: Half-truths and Misrepresentation?" [arstechnica.com].

    The article details what happened when Jon "Hannibal" Stokes, a writer for Ars Technica, posted a negative article on an ArtX trade show appearance. Afterwards, a number of anonymous posts appeared on the Ars Technica forum which appeared to support ArtX, but which turned out to be from ArtX's Director of Marketing.

    This incident appeared on Slashdot as ArtX, Hannibal and Consumer Fraud [slashdot.org].
  • There are persistent rumors on investing boards that several companies are working with Voxel acceleration. One particularly interesting rumor concerns 3dfx's Rampage chip, scheduled for the end of this year. In one interview with 3dfx's European Product Manager, Luciano Alibrandi, the interviewer asked if 3dfx was working on Voxel technology. Mr. Alibrandi replied "No"--but several days later the interview was updated at 3dfx's request, with the "No" struck out and replaced with a "Can't Comment".

    Anyway, we may find out if any of the rumors are true at the Game Developer's Conference that is taking place March 8-12.

  • Here's a quote from 3dfx's PR Manager:

    "We have placed orders for production silicon already. Our software development is right on track. We are on the same release schedule as when the VSA-100 product was introduced at Comdex, which we stated would be in the Spring. That product will include all the features that have been promised. It will deliver real time, full scene anti aliasing. It will support dazzling cinematic effects via our t-buffer. It will feature 32-bit color depth, SLI implementations and astronomical fill rates. Despite the outstanding state of this first silicon, the boards used in the Cebit demonstrations do not represent production silicon. Shortly after GDC, we expect to be demonstrating Voodoo4 and Voodoo5 boards that are much closer to production quality."

    The GDC is being held right now, March 8-12, so we should be getting some reports soon. Right now it looks like 3dfx is shooting for late April or May.
  • Is this even possible or feasible?
    What with cards having 64MB+ of memory, a 'GPU', etc.

    I.e., framebuffer and display are no problem. Data would be loaded through a simple, stupid microprocessor across the AGP bus; that's all you'd need. I'm sure there are Linux distros that could fit themselves comfortably within 64MB ^^

    Anyone?

    -AS
  • I agree with the curved surface rendering, and collision detection, but not the rest.

    Depth of field is not appropriate for interactive games. In RL you refocus your eyes to look at different things, if you can't refocus just by controlling your eyes, you'd be half blind. It'd drive people crazy.

    Integrated physics would lock the programmer into a certain physics model. Physics is not terribly CPU intense, and the demands vary a lot from game to game. Having specialized physics hardware on the video card is about as appropriate as having specialized AI hardware (IOW, it's not).

    Voxels are either huge memory pigs or butt ugly. They might make nice 3d texture maps (if you're okay with fuzzy interpolation), but I wouldn't want to bother with them for whole 3d models.

    Chromatics are a waste. They are so rarely useful that it would be better to special case the lighting effects when needed.

    Radiosity would be nice, but it's not something you can just pipeline in (ditto for casting rays). However, there might be cheaper ways to get the same effect.
  • because I don't understand how putting a video card on a remote server could possibly speed up the rendering on a local machine?

    -----------

    "You can't shake the Devil's hand and say you're only kidding."

  • I remember reading recently (on MacOSRumors; make of that what you will) that Apple was so pissed off at ATi for delays in the mobile Rage128 in the new PowerBooks that they're taking another look at companies like nVidia for OEM support.

    IIRC Apple did rewrite parts of the RagePro driver library for the Mac, although I don't know if they're working on Linux versions as well. I'm guessing they're working very hard on BSD versions, though.

    The G4 and even the newer iMacs make a quite decent Q3A platform thanks to the Rage128, but they still lag behind the latest PC video cards. Hopefully Apple will persuade developers to write the appropriate drivers for OS X so you can stick any AGP card in a G4 and have it work out of the box.

  • Actually, the reason that graphics cards don't run at a GHz is not because the technology isn't there (I think S3 had .18u fabs before AMD and Intel), but that the CPU is a lot more complex in terms of pipeline. It's easy to pipeline a CPU and make each step go very fast; thus you can complete that primitive operation in x nanoseconds and drive the clock rate up to 1 GHz. For graphics cards, however, these primitive ops take a lot more time, so you can only clock them at 100 or 200 MHz. Even though the GeForce uses something like a .2u process, it still can only clear 120 MHz, when a Pentium 120 did that with a .6u process.
  • Collision detection is a no-go too. It would require a feedback mechanism (the card says: OK, there was a collision, now what?) which would cause the card to stall. 3D hardware is good as an output device only - you get everything set up and then send it off to the card to render. Collision detection sounds nice because it is geometry-based and 3D cards seem so good at doing things with geometry, but the app is also doing a lot with geometry, so it makes sense to do it in the app as well. Plus, collision detection can mean different things in different situations. Safer to leave it on the app side. Let the card worry about getting things out to the display as quickly and as nicely as possible.

    To look into the future of consumer 3D, one might want to look at high-end companies like SGI. Their machines can do cool stuff with the various buffers (i.e. render into texture memory, multiple independently controllable paths, etc.).

    Finally, in reference to nVidia vs. ATI: it seems that ATI has always scrambled to get competitive products to market (good marketing and channels, though), whereas nVidia has been following a well-controlled technology curve and introducing innovative products (for the consumer market) that are well-rounded and work. Following this trend, I'll bet that in 6 months nVidia will have a good solid product with useful features, but ATI and 3dfx will have products with quirky features and will be of questionable quality (how 3dfx could get away with saying that 16bpp is "good enough" and all we really need or want for so long is beyond me). Example: hardware T&L at the consumer level is truly useful (placing a major part of the rendering pipeline onto the card!!), whereas FSAA, which is very cool, is nothing more than oversampling. Technically, it can be done on any general-purpose 3D hardware which supports an accumulation buffer. I hope that 3dfx can do something useful besides pushing fill rate, and I hope that ATI can come up with a truly powerful and timely product, but history doesn't bode well for these two. I'd love to be proven wrong by either company.
  • "There must be others"

    How about Phong instead of Gouraud shading? Fast Phong algorithms for hardware implementation have been around for years. They're still more computationally intensive than Gouraud, but they remove the need for specular texture maps and reduce Mach-banding artifacts.
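
    For concreteness, per-pixel Phong-style lighting boils down to evaluating something like the function below at every pixel instead of only at the vertices (this is the standard Blinn-Phong form with a half-vector, not any particular card's implementation; all vectors are assumed normalized):

      #include <math.h>

      typedef struct { float x, y, z; } vec3;

      static float dot3(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

      /* n = surface normal, l = direction to light, h = half-vector */
      float phong_intensity(vec3 n, vec3 l, vec3 h,
                            float kd, float ks, float shininess)
      {
          float diff = dot3(n, l);
          float spec = dot3(n, h);
          if (diff < 0.0f) diff = 0.0f;
          if (spec < 0.0f) spec = 0.0f;
          return kd * diff + ks * (float)pow(spec, shininess);
      }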

    Real-time radiosity? Not for a very long time, methinks. Radiosity is usually pre-computed. I remember reading one of John Carmack's .plans where he said that ID had bought a supercomputer (a Sequent?) to perform the Quake radiosity calculations.

    HH


    Yellow tigers crouched in jungles in her dark eyes.
  • "generate the appropriate texture volume and then put that in the image cache with the standard 2D versions"

    As I mentioned elsewhere in this discussion, precomputed 3D texture maps would take up vast amounts of memory on your video card. IMHO it would probably be better to let the CPU compute the procedural textures and transfer them to the card using AGP.

    Better still, provide a texture compiler that produces bytecode that can be executed directly by the card. Now that would be cool. Procedural displacement mapping (like RenderMan uses) would be ultra-cool.

    So the first 3D card that can execute RenderMan shader bytecode will get my money :-)

    HH


    Yellow tigers crouched in jungles in her dark eyes.
  • Nice to see support for 3D textures. These are very cool. The article says:

    Traditionally, polygons are used to represent 3D objects. However, with 3D textures, volumes of texels (textured pixels) may also be used. In a 2D texture map (the kind that we see "glued" to a wall for instance) indexing occurs via two texture coordinates, whereas in a 3D texture, there are three coordinates.

    One good example of 3D texture use would be that of a marble cube. If the corner of the cube were to be chipped off, any veins running through the marble would already be defined and visible without any additional textures being generated.


    This means that you could chop a block of wood up and have the wood grain on the cut surfaces rendered correctly. However, the article then goes on to say:

    Unfortunately, we feel 3D textures will have to be used incredibly sparingly because in order to implement the marble cube example explained above, an artist would have to draw the entire 3D surface (including the veins inside the cube which may never be seen!).

    This is incorrect. How can an artist draw the inside of a solid cube of marble or wood? I've never heard of a 3D texture being created in this way. They are normally generated procedurally, where you have a function that mathematically calculates the texture colour given the x, y, z coordinates within the texture. This does mean that you can't store 3D textures on the card unless you pre-calculate an array of texels, but that would require vast amounts of texture memory on the card.
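
    By way of illustration, a procedural 3D texture is just a function of position; a hedged marble-like example (the noise stand-in here is a cheap fake, where a real one would use Perlin-style noise):

      #include <math.h>

      /* cheap stand-in for a real 3D noise function, range roughly [-1, 1] */
      static float fake_noise3(float x, float y, float z)
      {
          return sinf(x * 12.9898f + y * 78.233f + z * 37.719f);
      }

      /* marble-ish intensity in [0, 1] for any point inside the block */
      float marble_texel(float x, float y, float z)
      {
          float turbulence = 2.0f * fake_noise3(x, y, z);
          return 0.5f + 0.5f * sinf((x + turbulence) * 10.0f);   /* veins along x */
      }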

    HH


    Yellow tigers crouched in jungles in her dark eyes.
  • Is this hardware, or an election? Honestly, this is just ridiculous. Somebody tell ATI that hardware can't have charisma. Somebody tell Sony that hardware can't have emotions.

    Yeah, that's me, trendspotter extraordinaire. Takes a genius these days, eh?

  • While all your suggestions are good, I think a real photo-realistic engine will be quite different. It will basically be POV-Ray on a chip, or perhaps RenderMan, since I think the most appropriate thing would be to forget bump mapping and go with a shading language that actually deforms the geometry of the object and then does true ray tracing.

    What comes after that? Well, I have to fight hard to keep my erection down when thinking of this... hardware-based real-time radiosity. *uNF*

    BTW, the idea of hardware-based collision detection... I hope to GAWD that the hardware manufacturers out there, nVidia in particular (because I haven't read what they're doing after the GeForce 256; everyone else has something in the public works), read your post. It would be most gorgeous, and possible, to have such a thing...

    Esperandi
  • If collision detection were done on a separate chip on the card rather than on the main CPU, it could be very helpful. You send the object you want to test and its potential position relative to a "reference point" in the object, and the card replies with a 1-bit yes or no (maybe more info, but I think that would be enough). This way the card would never stall, and the collision detection wouldn't be one frame behind. The separate chip would do a lookup into the same memory the main CPU is using, but it wouldn't actually test against that state directly, only against the coordinates you send. Matrices are already accelerated in hardware (ATI is doing it in their new card with their bump mapping), and all this would be is a transformation matrix being applied followed by a collision test. I think it could work...
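
    Something like the following completely made-up interface is the shape of what's being suggested; the class, the names and the stub bodies are imaginary, and no card exposes anything like this today.

        #include <cstdint>

        // Imaginary collision-test unit on the video card.  The card already holds
        // the geometry it is rendering, so the host only ships an object id plus a
        // candidate transform and later reads back a single bit.
        struct Matrix4 { float m[16]; };

        struct CollisionQuery {
            std::uint32_t objectId;   // object already resident in card memory
            Matrix4       candidate;  // proposed transform relative to the reference point
        };

        class CollisionUnit {
        public:
            // Fire-and-forget: the query joins the card's queue, the CPU keeps working.
            // These stubs only model the handshake; real hardware would do the test.
            void submit(const CollisionQuery& q) { last_ = q; ready_ = true; hit_ = false; }
            bool resultReady() const { return ready_; }
            bool collided()    const { return hit_; }   // the 1-bit answer

        private:
            CollisionQuery last_{};
            bool ready_ = false;
            bool hit_   = false;
        };

    You'd submit the test for next frame's position, go do AI or sound for a while, then read the bit back before committing the move, with no CPU-side mesh tests at all.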

    Esperandi
  • This was quietly done recently: in the most recent GeForce 256 drivers, full-screen AA can be enabled with a little registry twiddling. From what I read there is a "performance hit", but I'm not sure how significant it is...

    Esperandi
  • Another problem I had with this article, along the same lines, is the author's view (as is POV-Ray's current view, though this is fixed in MegaPOV and will hopefully make it into the next release) that bump mapping is the same as surface-deforming textures. Sure, it might look the same in a screenshot of a bumpy pear sitting on some blistered wood (the screenshot on Sharky Extreme), but when you get into a game you have to keep the bumps really small.

    Imagine you've got the wall of a dungeon. You think it would be easier to use a plasma-fractal-generated bump map instead of breaking up the triangles on every single section of the wall (which I'm planning on doing in an upcoming game ;)... well, when your users get into the game and start walking down this really kickass-looking craggy hallway, they're going to notice something: the crags aren't there. They can walk right through them. That would be laughed at and viewed as very annoying by gameplayers (it obscures your view and it's ethereal?!).

    If real geometry-deforming maps could be applied, collision detection could work against them, the polygon count would stay down and the game would be wickedly fast. But POV-Ray held off for such a long time that I fear if every card supports this fake bump mapping (it takes the light and just moves the point, texture-mapping it slightly differently based on the new virtual position), game developers will get happy with it and never realize how versatile real surface-deforming textures can be... Combine it with a separate collision-detection engine and we could have some amazingly realistic games...
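
    To spell out the difference (toy code of mine, with a cheap stand-in height function rather than a real plasma fractal): bump mapping only perturbs the shading normal, while displacement actually moves the vertices, which is what collision detection can work against.

        #include <cmath>

        struct Vec3 { float x, y, z; };

        // Cheap stand-in height function; a real game would use a plasma fractal etc.
        static float heightField(float u, float v)
        {
            return 0.5f + 0.5f * std::sin(20.0f * u) * std::sin(20.0f * v);
        }

        // Bump mapping: the geometry is untouched, only the shading normal is nudged
        // by the height function's slope, so collision still sees a flat wall.
        Vec3 bumpMappedNormal(Vec3 n, Vec3 tangent, Vec3 bitangent, float u, float v)
        {
            const float eps = 0.01f;
            float dhdu = (heightField(u + eps, v) - heightField(u - eps, v)) / (2.0f * eps);
            float dhdv = (heightField(u, v + eps) - heightField(u, v - eps)) / (2.0f * eps);
            return { n.x - dhdu * tangent.x - dhdv * bitangent.x,   // renormalise before use
                     n.y - dhdu * tangent.y - dhdv * bitangent.y,
                     n.z - dhdu * tangent.z - dhdv * bitangent.z };
        }

        // Displacement mapping: the vertex itself moves along its normal, so the crags
        // are real geometry that a collision test (and the player) can actually hit.
        Vec3 displaceVertex(Vec3 p, Vec3 n, float u, float v, float scale)
        {
            float h = scale * heightField(u, v);
            return { p.x + h * n.x, p.y + h * n.y, p.z + h * n.z };
        }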

    Esperandi
  • This card that ATI is making is wondrous. I mean, stunning. However, until a few days ago I owned an ATI All-In-Wonder Pro. I didn't get it for its 3D, because I knew that sucked; I got it for the video capture capabilities, which it had in abundance. Only a dedicated Videum card I used to have performed better. I soon found out that ATI has absolutely horrid drivers. I mean, this card never worked right, even when I got super bleeding-edge beta drivers (they were better than the released ones and more stable; I don't know why they weren't public. They gave me a secret URL, forbade me from dispensing it, etc.). I would capture a video using their proprietary VCR2 codec (a more efficient codec than I had ever seen, or have yet seen!) and then go to edit it. Playing the video would work, but seeking in any sense would crash the drivers. To this day (and the All-In-Wonder Pro is 2 or 3 years old) they have not fixed the problem.

    I honestly hope that they support this card well and do a good job with the drivers because even if you've got the best card, its only as good as its drivers.

    Esperandi
  • Why not? The hardware has to have emotions and charisma, cos most game players and geeks don't.

    I'm kidding. It was a joke.
  • Although a lot of people I talk to think Matrox or 3dfx will be nVidia's biggest competitors, I think ATI is a more worthy opponent: they're huge, selling enormous volumes of chipsets to large OEMs (Gateway, Dell), so they've got the financial strength to take on nVidia's roadmap. And besides, their previous efforts (like the dual Rage Pro Maxx) were already quite good.

    I hope they'll use this chipset to target the hardcore gamers and start a good battle against nVidia's supremacy. Like the AMD vs. Intel thing, us consumers will only benefit :)

    (Another cool article on the charisma is here [gamersdepot.com].)

  • by Mithrandir ( 3459 ) on Wednesday March 08, 2000 @05:07AM (#1218136) Homepage
    Unfortunately, this article highlights the author's shortcomings in understanding what a lot of high-end 3D graphics is about and how it is implemented.

    The one major thing the author misses about 3D texture maps is that rarely are they hand drawn by an illustrator. A typical map is a procedural texture (think of rendering a marble texture using POVRay) so generating a lump of marble is not that difficult a thing to do.

    For games, the programmer just needs to fire up Povray, 3DS etc and get it to generate the appropriate texture volume and then put that in the image cache with the standard 2D versions. I'm sure a lot of game engines will handle this pretty quickly.

  • by doomy ( 7461 ) on Wednesday March 08, 2000 @06:49AM (#1218137) Homepage Journal
    OTOH,

    I feel that those two names come really close to describing these two very excellent engines as well as they can be described on Earth :) (this was before the destruction of Earth by the Vogons).

    Let's see...

    On charisma, David Jenson wrote:


    When scientists and technicians hear the word charisma, they may first think of sales reps or politicians. But you'd be hard pressed to find a person in any influential biotechnology position who doesn't have some measure of charisma. Those on the scientific track are not exempt from this need.

    Charisma is derived from the ancient Greek word kharis, meaning "to cause to strive or desire." The ability to motivate others to strive and succeed is a major building block of successful management, whether in a QC lab or in a corner office. Charismatic people describe goals by painting word pictures, thereby motivating others to a particular end. They have an exceptional ability to win the devotion and support of others. They have no fear of presenting their ideas to anyone who may be able to help them. And they have excellent persuasion and negotiation skills.


    But more to the point, I see charisma here as derived from the Indian (as in South Asia) word. It too has a meaning similar to the usual charismatic sense; in this usage, the word comes closer to a powerful healing force dispersed around its subject than anything else. It is a very visual word, a very charismatic word. It conjures a halo around its subject and renders it in a light that leaves a very strong impression on anyone who hears it uttered. Thus it is a fitting name for this chip (which I believe will live up to it). As is the Emotion Engine in the PlayStation 2, which also conjures strong vibes and powerfully drawn meanings around the word and what the chip can do. Human emotions are powerful; machines, from the start of time (except for Marvin), are known to lack them. The very thought of a machine having these emotions strikes hard at anyone who looks at the PS2. The engine was made for its artistic quality, its ability to render something beautiful, so beautiful that it is almost real. That is the emotion, the charisma.
    --
  • by Guppy ( 12314 ) on Wednesday March 08, 2000 @05:26AM (#1218138)
    HotHardware [hothardware.com] has another article on the R6 "Charisma" [hothardware.com], as well as a copy of ATI's White Paper [hothardware.com].
  • by Guppy ( 12314 ) on Wednesday March 08, 2000 @05:46AM (#1218139)
    Everybody likes to compare nVidia and 3dfx as the two top companies, but in reality 3dfx is a small fraction of nVidia's size. I don't have exact numbers offhand, but nVidia currently has about 45-50% of the graphics market while 3dfx has something like 10-15%, and I believe Matrox is even smaller than 3dfx.

    Here's a comparison of some market caps (data from The Motley Fool):

    ATI: 4,141.51 million
    S3: 1,607.80 million
    nVidia: 1,808.46 million
    TDFX: 218.09 million

  • by Junks Jerzey ( 54586 ) on Wednesday March 08, 2000 @05:12AM (#1218140)
    Any game that uses OpenGL for more than a rasterization-only API automatically uses geometry acceleration on cards/drivers that support it.

    Realistically, the boost is less than you may think. An average game doesn't spend more than 15% of a frame doing transformation, so the Ultra-Fast-Geometry-Accelerator-of-the-Future is going to buy you at most about a 15% speedup in that case. The other issue is that geometry acceleration is only useful when you pump the data straight to the card and don't want or need intermediate results. For example, you'll have to transform points (one way or another) to do collision detection against instanced objects, but you can't use the geometry acceleration in that case, because the CPU needs the results.
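
    To put rough numbers on that (a back-of-the-envelope best case, not a benchmark): at 30 fps a frame is about 33 ms, and 15% of that is about 5 ms. A perfect, free T&L unit therefore saves at most 5 ms, leaving roughly 28 ms per frame, or about 35 fps. That's an 18% frame-rate gain in the ideal case, and less in practice.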

    Geometry acceleration is good, but it's not the panacea that many people are expecting it to be.
  • by Microlith ( 54737 ) on Wednesday March 08, 2000 @05:01AM (#1218141)
    That card's core is a highly specialized chip, focusing on the math for the transformation of points, lights, textures, blurs, and several things that don't involve most of the core of an x86 chip. Linux will never run on it unless they make one with a big flash ROM and a general-purpose controller chip (read: i960 or other).

    What kind of use is the card? Lots! Go check out the Intergraph Intense3D Wildcat 4110. It runs in most prebuilt P2/P3 graphics workstations (huge card) and takes so much of the processing off the host that the CPUs are only needed for getting the software started, other extended math calculations (we love those f-curves!), and rendering of the final image. By the way, this card does everything the GeForce AND V5 do... I'm not sure of its fill rate, but 21 fps in a scene with 80,000 polys is impressive. And game cards are catching up quite quickly to the power of the "industrial card".

    Acceleration... What needs to be done is to accelerate the front-side bus. It's just poking along at roughly 133 MHz now, maybe 200 on Athlon systems (but that's only RAM to CPU). It'd be better if that were 1:1 with the CPU speed, or faster, leaving the CPU with a wide-open pipe to the RAM and the other system peripherals.
  • by luckykaa ( 134517 ) on Wednesday March 08, 2000 @04:58AM (#1218142)
    The 3D card market seems to have an alternative version of Moore's law:

    Every 18 months, the number of companies making 3D graphics cards halves. There are only about six companies making 3D chips now.
  • by Junks Jerzey ( 54586 ) on Wednesday March 08, 2000 @05:46AM (#1218143)
    Speaking as a game programmer, these advances are coming so fast that there's no time to concentrate on (1) pushing the limits of what a current generation of cards can do, and (2) dealing with card-specific features.

    On the first point, there's not enough time to sit down and focus on where all the rendering time is going in a complex game. More precisely, there are so many card and driver combos out there that the best we can do is write generic code and have it work across the board. If we could focus on one card, say a Voodoo2, we could push the limits of that chipset out beyond what people expect only from a GeForce. But there's no time for that, so we plow ahead using about 50% of each card's capabilities for the three-month window until the next card comes along.

    On the second point, 3dfx, Nvidia, Matrox and ATI (and S3, and...) are all branching out into odd and card specific feature sets. 3dfx has their T-buffer. Nvidia has "8 lights per triangle hardware lighting." Matrox has a certain kind of hardware bump-mapping. ATI has all sorts of wacky stuff. The bottom line becomes "Do we want to just focus on writing a great game, or do we want to spend an extra six months of development so we can support special features of all these cards that were considered hot eight months ago when we were still pre-beta?" And tacking in special Matrox-only support, for example, is hell on QA. It makes a lot of sense to ignore such features, unless we're getting a bundle from the card company to cover us for the trouble.
  • by tjwhaynes ( 114792 ) on Wednesday March 08, 2000 @05:21AM (#1218144)

    It seems that while the push for ever increasing image quality is going on, we are getting much closer to realistic, real time rendering of scenes. I wonder just what else is needed to really be able to push the envelope of visualization and realism further. Here's my current wish list.

    • Proper curved-surface rendering - not just pushing the polygon count ever higher but actually rendering, for example, Bézier patches with multi-pass textures (see the sketch after this list).
    • Depth of field - most graphics cards today blur the insides of polygons when they are close (texture filtering) but do nothing to blur the edges of the polygon, breaking the realism. And everything in the far field is in crisp focus. Having real depth of field, so that there is some defined focal distance, would help.
    • Integrated collision detection - we pass the card all these vertex coordinates, fans and strips; it must be possible to move some of the collision detection from the CPU to the graphics card. Using something like oriented bounding boxes at various detail levels and then passing the final collision detection to the card for some arbitration at the polygon level might help.
    • Integrated physics engine - gravity, flexion, distortion both plastic and elastic, hinges, rotation and friction. And anything else :-)
    • Volume rendering - either voxels per se or some iso-surface rendering based on potential fields.
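
    On the first item, here's a host-side sketch of what evaluating a bicubic Bézier patch involves (plain de Casteljau, my own toy code); the wish is for the card to run this tessellation itself at whatever density the viewpoint demands.

        #include <array>

        struct Vec3 { float x, y, z; };

        static Vec3 lerp(const Vec3& a, const Vec3& b, float t)
        {
            return { a.x + t*(b.x - a.x), a.y + t*(b.y - a.y), a.z + t*(b.z - a.z) };
        }

        // De Casteljau evaluation of a cubic Bezier curve from four control points.
        static Vec3 bezier3(const std::array<Vec3, 4>& p, float t)
        {
            Vec3 a = lerp(p[0], p[1], t), b = lerp(p[1], p[2], t), c = lerp(p[2], p[3], t);
            Vec3 d = lerp(a, b, t),       e = lerp(b, c, t);
            return lerp(d, e, t);
        }

        // A bicubic Bezier patch is 16 control points: evaluate each row in u,
        // then the resulting four points in v.  Sampling (u, v) over a grid gives
        // the triangles you would hand to the card today.
        Vec3 bezierPatch(const std::array<std::array<Vec3, 4>, 4>& ctrl, float u, float v)
        {
            std::array<Vec3, 4> column;
            for (int i = 0; i < 4; ++i)
                column[i] = bezier3(ctrl[i], u);
            return bezier3(column, v);
        }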

    There must be others - it looks like ATI is going to finally give us proper bump mapping and range-based fogging. Do we also need a proper chromatic model so that we can get rainbows through glass objects? Should there be real-time ray-casting or radiosity support so that real lighting effects (say carrying a flaming torch down a corridor and having proper soft-edged shadows) can be achieved?

    Cheers,

    Toby Haynes

"May your future be limited only by your dreams." -- Christa McAuliffe

Working...