Ageia PhysX Tested

Posted by ScuttleMonkey
from the useless-in-the-short-term dept.
MojoKid writes "When Mountain View, California start-up Ageia announced a new co-processor architecture for desktop 3D graphics that off-loaded the heavy burden physics places on the CPU-GPU rendering pipeline, the industry applauded what looked like the enabling of a new era of PC gaming realism. Of course, on paper and in PowerPoint, things always look impressive, so many waited with bated breath for hardware to ship. That day has come, and HotHardware has fully tested a new card shipped from BFG Tech, built on Ageia's new PPU. But is this technology evolutionary or revolutionary?"
  • Skeptical (Score:5, Interesting)

    by HunterZ (20035) on Tuesday May 09, 2006 @06:38PM (#15297296) Journal
    From what I was able to read of the article before it got slashdotted, it sounds like games that can take advantage of it require installation of the Ageia drivers whether you have the card or not. This leads me to believe that without the card installed, those games will use a software physics engine written by Ageia, which is likely to be unoptimized in an attempt to encourage users to buy the accelerator card.

    Also, it's likely to use a proprietary API (remember Glide? EAX?) that will make it difficult for competitors to create a wider market for this type of product. I really can't see myself investing in something that has limited support and is likely to be replaced by something designed around a non-proprietary API in the case that it does catch on.
  • by AuMatar (183847) on Tuesday May 09, 2006 @06:43PM (#15297342)
    The purpose of a clock is ease of development. With a clock, you can advance new input into the next pipe stage at known intervals, allowing each stage to finish completely. Without a clock, you need to make sure that no circuit feeds its data into the next part too soon; doing so would cause glitches. For example, if the wire that says to write RAM comes in at time t=0, but the new address doesn't arrive until t=1, you could corrupt whatever address was on the line previously. With a clock, all signals update at the same time (a toy sketch of that hazard appears below this comment).

    It's possible to make simple circuits go the clockless route; complex circuits are nearly impossible. There's no way a P4 could be made clockless: the complexity of such an undertaking is mind-boggling. Even testing it would be nearly impossible.

    The problem with data-ready flags is the same as with the rest of the circuit: how do you prevent glitches without a latching mechanism?

    And this isn't about modularizing hardware. It's about adding extra processing power with specific hardware optimizations for physics computation. Whether it's a good idea or not depends on how much we need the extra power. I'm not about to run out and buy one, though.

    Actually, in desktops today the trend is to remove modularization. AMD got a nice speed boost by moving the memory controller into the Athlon (at the cost of requiring a new chip design for new memory types). I'd expect to see more of that in the future: speed boosts are drying up, and moving things like memory and bus controllers on-die is low-hanging fruit.
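
    A minimal C++ sketch of the timing hazard described above (the ClockedRam class and its signal names are invented purely for illustration): the write-enable and address "wires" may settle at different times during a cycle, but memory is only touched when both are sampled together on the clock edge, so a half-updated address is never written.

        // Toy model: a clock edge acts as a latch that samples all inputs at once.
        #include <array>
        #include <cstdint>
        #include <iostream>

        struct Wires {                   // combinational inputs; may settle at different times
            bool     write_enable = false;
            uint32_t address      = 0;
            uint32_t data         = 0;
        };

        class ClockedRam {
        public:
            // Called once per rising clock edge: every input is sampled at the same
            // instant, after it has had a full cycle to settle.
            void clock_edge(const Wires& w) {
                if (w.write_enable) {
                    mem_.at(w.address) = w.data;
                }
            }
            uint32_t read(uint32_t addr) const { return mem_.at(addr); }
        private:
            std::array<uint32_t, 16> mem_{};
        };

        int main() {
            ClockedRam ram;
            Wires w;

            // During the cycle the signals arrive out of order: write_enable goes high
            // at t=0 while the address wire still holds its old value, and the new
            // address only settles at t=1. Without a latch the old address could be
            // clobbered; with a clock, nothing happens until the edge.
            w.write_enable = true;       // t=0: enable asserted early
            w.address      = 7;          // t=1: address finally settles
            w.data         = 42;

            ram.clock_edge(w);           // rising edge: one consistent snapshot is written
            std::cout << "mem[7] = " << ram.read(7) << "\n";   // prints mem[7] = 42
        }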
  • by asliarun (636603) on Tuesday May 09, 2006 @06:44PM (#15297347)
    I'm not so sure about that. Over the years, the trend has been that companies release specialized chipsets or mini-CPUs that can take over some part of the CPU's workload. While this has worked in the short run (think math coprocessor), the CPU has become sufficiently powerful over time to negate the advantage. Look at it this way: if Intel/AMD releases a quad-core or octa-core CPU, in which each core is more powerful than the fastest single core today, any of those cores could take up the physics processing workload. Best of all, this can happen without sacrificing performance on the other threads that are running. Furthermore, if Intel/AMD realizes that physics processing is becoming increasingly important, they will add special processing units for it in future CPUs and come out with an additional instruction set, just as they've done with MMX/SSE (a rough illustration follows this comment). That would almost totally negate the value of having these specialized co-processors, albeit only in the long run. They will work as a quick fix for an immediate problem, though.

    To cut a long story short, I think these specialized chips solve today's problem, not tomorrow's. I predict this company will get bought out by either nVidia/ATI or Intel/AMD.
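
    As a rough illustration of what physics on the CPU's own SIMD units already looks like (a hypothetical C++ fragment, not Ageia's API or any shipping engine): a single SSE instruction integrates four particle positions at once, pos += vel * dt, on any x86 processor.

        // Hypothetical fragment (x86/SSE only): four particles advanced per instruction.
        #include <xmmintrin.h>   // SSE intrinsics
        #include <cstdio>

        int main() {
            alignas(16) float pos[4] = {0.0f, 1.0f, 2.0f, 3.0f};
            alignas(16) float vel[4] = {1.0f, 2.0f, 3.0f, 4.0f};
            const float dt = 0.016f;                  // ~60 Hz timestep

            __m128 p = _mm_load_ps(pos);
            __m128 v = _mm_load_ps(vel);
            __m128 d = _mm_set1_ps(dt);
            p = _mm_add_ps(p, _mm_mul_ps(v, d));      // pos += vel * dt, four lanes at once
            _mm_store_ps(pos, p);

            for (float x : pos) std::printf("%f\n", x);
            return 0;
        }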
  • no titles yet (Score:2, Interesting)

    by jigjigga (903943) on Tuesday May 09, 2006 @06:58PM (#15297418)
    I've been following them for a long time; their software demos blew my mind a few years ago (the one with the towers made of bricks that you could destroy; oh so fun). We should wait for real games to make use of the physics. Ghost Recon uses it as a gimmick. The tech demo game listed in one of the articles is a real showing of what the card is capable of. When game engines catch up and use it as an integral part rather than a gimmick, it will usher in a new era of gaming. It really will; look at what happened with hardware 3D.
  • Evolutionary (Score:3, Interesting)

    by phorm (591458) on Tuesday May 09, 2006 @07:08PM (#15297466) Journal
    Not necessarily true. While dedicated cards for physics haven't existed, dedicated cards for other operations have, and much of the physics calculation is already being done in games, just as an extra load on the CPU in software rather than on a dedicated unit. As physics becomes a bigger part of the realism of 3D games, perhaps it is in fact a foreseeable evolutionary step that specific devices would exist to process it.
  • by throx (42621) on Tuesday May 09, 2006 @07:10PM (#15297482) Homepage
    I really don't see a custom "Physics Processor" being a long-lived add-on for the PC platform. It's essentially just another floating-point SIMD processor with specialized drivers for game-engine physics. With multicore and hyperthreaded CPUs coming out very soon, the physics engine can be offloaded to extra processing units already in your system (a bare-bones sketch follows this comment) rather than your having to fork out money for a card that can only be used for one special purpose.

    In addition, there's already a hideously powerful SIMD engine in most gaming systems, loosely called "the video card." With the advent of DirectX 10 hardware, which lets the GPU write its intermediate calculations back to main memory rather than forcing everything out to the frame buffer, a whole bunch of physics processing can suddenly be done on the GPU.

    Lastly, the API to talk to these cards is single-vendor and proprietary. That has never been a recipe for longevity (unless you're Microsoft), so it won't really take off until DirectX 11 or later integrates a DirectPhysics layer that lets multiple hardware vendors compete without game devs having to write radically different code.

    So, between multicore/hyperthreaded CPUs, DirectX 10 or better GPUs, and a proprietary API to the card... cute hardware, but not a long-term solution.
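
    A bare-bones sketch of that first point (the Body struct and step function here are trivial stand-ins, not any real engine): once a spare core is available, the physics step can simply run on its own thread while the main thread prepares the rest of the frame.

        // Stand-in example: offload the physics step to a second core with std::thread.
        #include <cstdio>
        #include <thread>
        #include <vector>

        struct Body { float pos = 0.0f; float vel = 1.0f; };

        void physics_step(std::vector<Body>& bodies, float dt) {
            for (auto& b : bodies) b.pos += b.vel * dt;   // trivial stand-in for real physics
        }

        int main() {
            std::vector<Body> world(1000);

            for (int frame = 0; frame < 3; ++frame) {
                // Kick off physics on another core for this frame...
                std::thread physics(physics_step, std::ref(world), 0.016f);

                // ...while the main thread would normally build rendering work here.

                physics.join();                           // sync before using the results
                std::printf("frame %d: body[0].pos = %f\n", frame, world[0].pos);
            }
            return 0;
        }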
  • glide history (Score:1, Interesting)

    by Anonymous Coward on Tuesday May 09, 2006 @08:26PM (#15297816)
    Your history of Glide is a little backwards. OpenGL predates Glide and was a clean rewrite of SGI's original graphics API with input from the other graphics vendors of the time. The current graphics pipeline was a solved problem by the early '90s. The primary problem Glide solved was how to turn a $30,000 workstation into a $300 graphics card. The answer was to throw out most of the pipeline and make a pass-through card that didn't even do video. Glide itself wasn't anything particularly fancy and mostly consisted of functions to send untransformed polygons to the hardware. It took only a few weeks for a Glide-to-OpenGL compatibility layer to appear.

    3DFX simply failed to keep up with NVIDIA. NVIDIA did an incredible job integrating the video card and the graphics engine together in the RIVA 128 chipset, as well as adding a basic lighting and transform pipeline to the hardware. They also did a much better job supporting the standard software APIs, OpenGL and DirectX, and they still do a much better job with drivers than ATI.

        Michael
