Ageia PhysX Tested
MojoKid writes "When Mountain View, California start-up Ageia announced a new co-processor architecture for desktop 3D graphics that off-loaded the heavy burden physics places on the CPU-GPU rendering pipeline, the industry applauded what looked like the enabling of a new era of PC gaming realism. Of course, on paper and in PowerPoint things always look impressive, so many waited with baited breath for hardware to ship. That day has come, and HotHardware has fully tested a new card shipped from BFG Tech, built on Ageia's new PPU. But is this technology evolutionary or revolutionary?"
Revolutionary (Score:1)
Evolutionary (Score:3, Interesting)
Re:Evolutionary (Score:2)
Dedicated cards? Probably. Dedicated computers? Definitely, especially if you consider that the very first computer [wikipedia.org] was built to essentially perform physics calculations (artillery trajectories).
Hardly revolutionary.
So? (Score:2)
Either say "Dedicated cards? Probably not. Dedicated computers? Definitely" or "Dedicated cards? Probably. Dedicated computers? Definitely not".
Re:Revolutionary (Score:2)
Actually I think that definition is a little closer to Evolutionary than revolutionary.
To be revolutionary it has to cause change in the industry: something makes it so that from now on all cards/computers will ship with one and all new games will require them.
Revolutionary products aren't usually judged as such on release, only upon reflection. 3dfx is a good example of a revolutionary change in the graphics industry.
Looks like... (Score:5, Funny)
Re:Looks like... (Score:3, Informative)
Re:Looks like... (Score:3, Funny)
Re:Looks like... (Score:2, Funny)
Re:Looks like... (Score:3, Funny)
Anandtech already did a review - a while back (Score:3, Informative)
"The added realism and immersion of playing Ghost Recon Advanced Warfighter with hardware physics is a huge success in this gamer's opinion. Granted, the improved visuals aren't the holy grail of game physics, but this is an excellent first step. In a fast fire fight with bullets streaming by, helicopters raining destruction from the heavens, and grenades tearing up the streets, the experience is just that much more hair raising with a PPU plugged in."
Re:Anandtech already did a review - a while back (Score:2)
http://enthusiast.hardocp.com/article.html?art=MTA1MiwxLCxoZW50aHVzaWFzdA [hardocp.com]
Summary: At the moment, it's not worth it.
That's pretty much what your Anandtech article says
From Anandtech:
Re:Looks like... (Score:2)
Re:Looks like... (Score:3, Funny)
Wave of the future... (Score:4, Insightful)
While studying for my EE, I often wondered what the purpose of having a clock was, since so many of the individual chips had often finished their calculations before the next clock cycle came around.
I think we are going to see the clock go away, replaced with "Data Ready" lines, which will also help heavily in determining the bottlenecks in a given system (hint: it's the part that takes the longest to raise its "Data Ready" flag).
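A toy model of the "Data Ready" handshake described above, with a blocking queue standing in for the ready line (purely illustrative; the names and the doubling "computation" are made up):

```python
# Toy model of a "Data Ready" handshake: the consumer waits on a flag
# (here, a blocking queue) instead of a fixed clock edge. Illustrative
# only -- no real hardware signaling is modeled.
import threading
import queue

def producer(q, values):
    for v in values:
        q.put(v)      # putting an item plays the role of raising "Data Ready"
    q.put(None)       # sentinel: no more data

def consumer(q, out):
    while True:
        v = q.get()   # blocks until the producer signals data is ready
        if v is None:
            break
        out.append(v * 2)  # stand-in for the next stage's computation

q, out = queue.Queue(), []
t1 = threading.Thread(target=producer, args=(q, [1, 2, 3]))
t2 = threading.Thread(target=consumer, args=(q, out))
t1.start(); t2.start(); t1.join(); t2.join()
print(out)  # [2, 4, 6]
```

Each stage runs at its own pace; nothing advances until the upstream stage says its data is ready, which is the clockless idea in miniature.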
I also think that optics will be the way of the future. Quantum will be like Mechanical Television: cute idea, but impractical for mass production.
Optics. Think of it this way: Imagine a bus that can address individual I/O cards with full duplex, simply by using different colors for the lasers. Motherboards are going to get a lot smaller.
That's my opinion, anyway.
Joe
---
Q:Why couldn't Helen Keller drive?
A:Because she was a woman.
Re:Wave of the future... (Score:4, Insightful)
A clock is a synchronization scheme, and it solves a very low-level issue: how do I synchronize my reads and writes on a physical level?
Many people have tried to create systems that don't have clocks. Without exception, they have all failed or have been unscalable.
Re:Wave of the future... (Score:2)
Re:Wave of the future... (Score:2)
-matthew
Re:Wave of the future... (Score:2)
Re:Wave of the future... (Score:2, Informative)
Re:Wave of the future... (Score:5, Interesting)
It's possible to make simple circuits go the clockless route. Complex circuits are nearly impossible. There's no way a P4 could be made clockless; the complexity of such an undertaking is mind-boggling. Even testing it would be nearly impossible.
The problem with data ready flags is the same as with the rest of the circuit: how do you prevent glitches without a latching mechanism?
And this isn't about modularizing hardware. It's about adding extra processing power with specific hardware optimizations for physics computation. Whether it's a good idea or not depends on how much we need the extra power. I'm not about to run out and buy one, though.
Actually, in desktops today the trend is to remove modularization. AMD got a nice speed boost by moving the memory controller into the Athlon (at the cost of requiring a new chip design for new memory types). I'd expect to see more of that in the future: speed boosts are drying up, and moving things like memory and bus controllers on-die is low-hanging fruit.
Re:Wave of the future... (Score:2, Informative)
But then, I'm only working on a Bachelor's in Computer Information Systems...what would I know about signalling in a complex silicon device?
Re:Wave of the future... (Score:2)
And I have a bachelor's in computer engineering, which means I have designed both synchronous and asynchronous circuits.
Re:Wave of the future... (Score:2)
Which typically means that they (synchronous designs) win. I am a Physics major, so I know almost nothing about the topic, though.
Re:Wave of the future... (Score:2)
Re:Wave of the future... (Score:2)
I don't know where you got your degree, but a design that is easier to understand and debug is good.
Re:Wave of the future... (Score:2)
Where are good references about asynch design?
I have an MSEE and design logic for FPGAs for a living. I would like to learn more about async design, but the best I have been able to find is either a bunch of useless marketing junk or info at such a high level as to be useless.
Any links?
Re:Wave of the future... (Score:2)
Re:Wave of the future... (Score:3, Interesting)
Re:Wave of the future... (Score:5, Insightful)
With graphics, small visual differences between hardware implementations are not a big problem. Physics processing needs a standard interface, and precise specs on what the output should be. If there is only going to be one vendor, and one proprietary interface, this market will fail.
Re:Wave of the future... (Score:2)
Not really. Quake 3 and Tux Racer are a couple of examples where the software-based approach isn't cutting it.
And that's with an Athlon 64 3000+ with integrated graphics. You get either very bad graphics quality compared to a dedicated card, or a slideshow.
It's a type of problem that generic CPUs can't keep up with. Physics may be a similar type of problem, one
Re:Wave of the future... (Score:2)
I/O is one of the areas that could really use some help. I envision a contactless bus where expansion devices are powered by induction; high-power devices could have good ol' electrical contacts. Just as PCI Express features 1-n lanes support, my fantasy bus uses multiple fiberoptic connections, with some slots supporting more than others for additional bandwidth.
The only thing on the motherboard would be the bus arbitrator. Everything else would go into a module. Modules would also be able to not only
Re:Wave of the future... (Score:2)
No clock, no data. Did you pass?
Re:Quantum Computers (Score:2)
There are a fair few useful algorithms that aren't in P but are in NP.
Anandtech too ... (Score:3, Informative)
Short summary: Great for synthetic benchmarks, probably not real-world ready.
Slashdotted, but I got the first page (Score:5, Informative)
For the foreseeable future, the only vendors which will be manufacturing and selling physics processors based on the Ageia PhysX PPU are ASUS and BFG. With ASUS primarily focusing on the OEM market, BFG will enjoy a monopoly of sorts within the retail channel, as they will comprise the vast majority of all available cards on store shelves. Today, we will be running a retail sample of BFG's first ever Physics processor through its paces. Judging from the packaging alone, you can tell that this box contains something out of the ordinary. Housed in an unusual triangular box with a flip-down front panel, consumers can glimpse the card's heatsink assembly through a clear plastic window.
BFG Tech PhysX
Card And Bundle
Flipping the box, consumers are presented with a quick listing of features complete with summaries and a small screen-shot. Most importantly, the package also lists the small handful of games which actually support the PPU hardware. This short list consists of City of Villains, Ghost Recon Advanced Warfighter, and Bet on Soldier: Blood Sport.
Upon opening the packaging, we are presented with a standard fare of accessories. Beyond the card itself, we find a power cable splitter, a driver CD, a demo CD, and a quick install guide. Somewhat surprisingly, we also find a neon flyer warning of a driver issue with Ghost Recon Advanced Warfighter that instructs users to download the latest driver from Ageia to avoid the problem. This is a bit disheartening, as there are only three games which currently support this hardware. With this in mind, it is hard not to feel as though the hardware is being rushed to market a bit sooner than it should have been.
Directing our attention to the card itself, we find a rather unassuming blue PCB with a fairly standard aluminum active heatsink assembly. Amidst the collection of power circuitry, we also find a 4-pin Molex power connector to feed the card, as a standard PCI slot does not provide an adequate power source for the processor. At first glance, the card looks remarkably similar to a mainstream graphics card. It's not until you see the bare back-plate with no connectivity options that you realize this is not a GeForce 6600 or similar product.
Thankfully, the BFG PhysX card does not incorporate yet another massive dual-slot heatsink assembly, as so many new pieces of high-end hardware do these days. Rather, we find a small single-slot active heatsink that manages to effectively cool the PPU while keeping noise at a minimum. Removing the heatsink, we were pleased to find that BFG has done an excellent job of applying the proper amount of thermal paste and that the base of the heatsink was flat with no dead spots. After powering the system, we see that BFG has dressed the card up with three blue LEDs to appease those with case windows.
With the heatsink removed, we have our first opportunity to glimpse the Ageia PhysX PPU in all its glory. Manufactured on a 0.13u process at TSMC, the die is comprised of 125 million transistors. Overall, the size of the die is slightly larger than the memory modules which surround it. Looking closely at the board, we see that the 128MB of memory consists of Samsung K4J55323QF-GC20 GDDR3 SDRAM chips, which are rated for a maximum frequency of 500MHz. Unfortunately, neither BFG nor Ageia has disclosed what frequency the PPU core and memory operate at, so we are unsure
I'll wait for 64-bit TYVM... (Score:3, Informative)
Nice comparison concerning current 32-bit applications/limitations versus 64-bit. If this video is TRUE, then I won't bother with a PPU - my Athlon 64 3000+ may already be able to handle those extra physics calculations, while any WELL-PROGRAMMED game will use any extra resources I have available for extra object/texture/physics rendering.
Sorry, IMHO, PPU is at a loss. Mod down at will.
Re:Slashdotted, but I got the first page (Score:2)
Skeptical (Score:5, Interesting)
Also, it's likely to use a proprietary API (remember Glide? EAX?) that will make it difficult for competitors to create a wider market for this type of product. I really can't see myself investing in something that has limited support and is likely to be replaced by something designed around a non-proprietary API in the case that it does catch on.
Re: (Score:2)
Re:Skeptical (Score:2)
Yes, except that OpenGL was and is an open standard. It's not controlled by one company who is trying to push a product that accelerates software which uses their API.
Re:Skeptical (Score:2)
I also remember that in its day Glide was faster and resulted in higher quality 3d than OpenGL or DirectX.
LK
Re:Skeptical (Score:5, Insightful)
It was, for a while, since 3dfx was the only one innovating. Once they got hold of the market, nobody else could break in, because the games only supported Glide, and nobody else was able to make Glide-supporting hardware due to it being a proprietary API.
Then nVidia came along with superior cards that only supported Direct3D and OpenGL because Glide was 3dfx proprietary. Game developers were forced to switch to D3D/OpenGL to support the new wider array of hardware. Since 3dfx cards were overly-optimized for Glide, this resulted in games that ran crappy on 3dfx hardware but great on nVidia. The rest is history.
EAX is a similar story. Creative owns it, but what has happened is that many game developers don't bother to take advantage of it, instead relying on DirectSound3D or OpenAL as the lowest common denominator. The widespread use of SDKs such as the Miles Sound System also helps to allow transparent use of various sound API features, though, so mileage varies. Personally, I've been without Creative products for years now and haven't missed them one bit. I'm currently waiting for the next generation of DDL/DTS Connect sound cards to come out, and then I'll give those a shot.
The same thing is likely to happen here; competitors will make their own products, but because they won't be able to use the PhysX engine, they will make their own. It will be an open API, because they'll have to band together to get game developers to support their cards. Ageia will be forced to add driver support for the standard API, but it won't perform as well on their cards. If they're smart, they'll either open the API early on or release new hardware built around the open API. This is all assuming the PPUs even catch on, of course.
The problem with the PC gaming hardware market is that when there's only one company making a certain type of product, they tend to stop innovating. Then, when someone else develops a competing product they try to use marketing to stay ahead instead of coming up with more competitive products. Sometimes gamers see through the marketing (3dfx) and sometimes they have a harder time doing so (EAX). It will be interesting to see how it turns out this time.
Re:Skeptical (Score:2)
Re:Skeptical (Score:2)
I find myself a bit puzzled by what this thing's actually supposed to do for me. Given that there are currently no applications that require it (because since it's not actually shipping yet, it would be the kiss of death), then supporting the PhysX can make no difference to the actual gamepl
Re:Skeptical (Score:2)
It looks like the way they're setting it up is that they're building a physics engine that can offload some of its processing to this card. Apparently this is reflected in these initial games in the form of additional dynamic objects in the game environments.
Re:Skeptical (Score:2)
Re:Skeptical (Score:2)
-matthew
Re:Skeptical (Score:3, Insightful)
I think most people don't realize it's a great physics engine by itself that has the added bonus of supporting dedicated hardware. Plus, a lot of the larger developers presumably have source access, so if it doesn't look optimized or if there are big
Re:Skeptical (Score:2)
Maybe the world isn't ready (Score:3, Insightful)
Re:Maybe the world isn't ready (Score:2)
Actually, I think this problem could be solved with a little bit of creative coding. You see, you don't really need to send the complete position of every object during the movement. You could just send the starting point of each object, and the amount of force applied to it, and let the PPU on each client computer work out t
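The idea sketched above, starting state plus applied force rather than a stream of positions, only works if every client integrates identically. A minimal illustration (hypothetical names; assumes a fixed timestep and identical floating-point behavior on every machine, which is the real-world catch):

```python
# Sketch of deterministic client-side physics: ship only the initial
# state and the applied force; every client integrates the same way
# and arrives at the same trajectory. Illustrative only.

def simulate(pos, vel, force, mass, dt, steps):
    """Fixed-step Euler integration; deterministic given identical inputs."""
    acc = force / mass
    trajectory = []
    for _ in range(steps):
        vel += acc * dt
        pos += vel * dt
        trajectory.append(pos)
    return trajectory

# Server and client each run the same integration from the same packet...
packet = dict(pos=0.0, vel=0.0, force=10.0, mass=2.0, dt=0.01, steps=5)
server_view = simulate(**packet)
client_view = simulate(**packet)
assert server_view == client_view  # ...and agree bit-for-bit
```

Any divergence (different timestep, different FPU rounding) compounds every frame, which is why networked games usually still send periodic position corrections.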
Re:Maybe the world isn't ready (Score:2)
more articles with videos (Score:1)
http://www.pcper.com/article.php?aid=244 [pcper.com]
Coral Cache link (Score:2, Informative)
http://www.hothardware.com.nyud.net:8080/viewarti
Ghost Recon video (Score:5, Informative)
The Anandtech article [anandtech.com] states that the physics hardware slows down the framerates, which Ageia can't possibly be happy about.
it's BATED breath, dammit (Score:5, Insightful)
Re:it's BATED breath, dammit (Score:2)
Wait for the response with a wikipedia link.....
Re:it's BATED breath, dammit (Score:5, Funny)
Re:it's BATED breath, dammit (Score:2)
Re:it's BATED breath, dammit (Score:2)
http://www.worldwidewords.org/qa/qa-bai1.htm [worldwidewords.org]
I wish I could mod this up 100 points. (Score:5, Insightful)
Re:I wish I could mod this up 100 points. (Score:2)
Re:it's BATED breath, dammit (Score:2)
no titles yet (Score:2, Interesting)
Where's the competition? (Score:3, Insightful)
I wonder how long it will be before there is mainstream demand for a separate physics unit (probably as soon as games require them). It sounds like a great idea to take some of the load off the CPU. Does this mean that game performance will now be more directly linked to the speed and power of the GPU and PPU, and that the CPU will be more of an I/O director and less of a number cruncher?
I've seen numerous posts of people saying that they do not have any available PCI slots. Will the introduction of a new type of card lead to larger motherboards with more slots or might it lead to a small graphics card that does not monopolize the PCI space? Also, there is the concern of adding another heat source to the mix.
"Get your facts first - then you can distort them as you please." -Mark Twain
Re:Where's the competition? (Score:2)
I don't see this as long lived (Score:5, Interesting)
In addition, there's already a hideously powerful SIMD engine in most gaming systems, loosely called "the video card". With the advent of DirectX 10 hardware, which lets the GPU write its intermediate calculations back to main memory rather than forcing everything out to the frame buffer, a whole bunch of physics processing can suddenly be done on the GPU.
Lastly, the API to talk to these cards is single-vendor and proprietary. That's never been a recipe for longevity (unless you're Microsoft), so it won't really take off until DirectX 11 or later integrates a DirectPhysics layer that lets multiple hardware vendors compete without game devs having to write radically different code.
So, between multicore/hyperthreaded CPUs, DirectX 10-or-better GPUs, and a proprietary API to the card... cute hardware, but not a long-term solution.
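The SIMD-engine point is easy to see in miniature, with NumPy standing in for a data-parallel processor (purely illustrative; a GPU implementation would look different, but the shape of the computation is the same):

```python
# Data-parallel physics sketch: updating thousands of particles is one
# vectorized operation over whole arrays, not a per-object loop -- the
# kind of workload a SIMD engine (or GPU) eats for breakfast.
# Illustrative only; NumPy is a stand-in for dedicated hardware.
import numpy as np

n = 10000
pos = np.zeros((n, 3))             # particle positions
vel = np.random.randn(n, 3)        # random initial velocities
gravity = np.array([0.0, -9.81, 0.0])
dt = 1.0 / 60.0                    # one 60 Hz frame

# one simulation step over ALL particles at once
vel += gravity * dt
pos += vel * dt
```

The per-particle math is trivial; the win comes from doing it for every particle in lockstep, which is exactly why both a PPU and a GPU are built as wide parallel machines.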
Re:I don't see this as long lived (Score:2)
I think the same argument used to be made for 3D accelerator cards many years ago. Still hasn't come to pass. The basic problem is that dedicated hardware will always be more powerful than a generic processor. The real question is whether or not the physics problems that are offlo
Re:I don't see this as long lived (Score:2)
I think that it's something like:
I think that interrupt 21h is the DOS interrupt for things like printing text to the screen and other miscellaneous stuff. If I had to guess, I'd say function 4Ch is "exit program." Of course, I could be completely wrong.
I was a BIOS programmer for several years... I did a lot of work on the BIOSes for the NForce-1, NForce-2, and NForce-4 chips. So with a sig like that, I'm guessing you w
Re:I don't see this as long lived (Score:2)
Re:I don't see this as long lived (Score:2)
Eh? I already have a multicore CPU, and don't consider myself to be an early adopter by any means. If you mean specifically "CPUs with multiple cores and hyperthreading support", then
a) I believe they're already available (although I'm not in the market at the moment so am not really keeping up with it)
b) HT was never that big a deal, performance-wise
In addition, there's already a hideously powerful SIMD engine in most gaming systems loosely called "the
Re:I don't see this as long lived (Score:2)
So what's the point of putting a totally incompatible SIMD engine in there then? Put in something that can be used to improv
Sorely Lacking in Pizazz (Score:2)
And that's not even mentioning a lack of DRM. Doesn't Hollywood own gravity these days? I'm sure a patent was filed somewhere - or was it a copyright?
Re:Sorely Lacking in Pizazz (Score:2)
many waited with baited breath... (Score:2)
short peek (Score:2)
Doesn't look like a very good performance improvement for the money. In fact, the CPU makers' new "dual-core" marketing push may just eat up the dollars for something like this. If you simply move your physics engine to hardware, it only solves one part of a larger and very delicate puzzle.
PPU a repeat of FPU (Score:2)
Re:PPU a repeat of FPU (Score:2)
Unlike the GPU, which is quite happy sitting out on its own doing what it's told, to my mind a physics engine should be more interactive.
Graphics processor:
Transform polygons into pi
FPU making a comeback (Score:2)
Also, DRC has come out with a prog
evolutionary (Score:2)
X11 Support (Score:2)
I want to see how they will implement this in X11 or Xgl-type desktops. When my icons collide into each other, I want it done realistically! When I kill Firefox because it's frozen, I want to see it shatter into a million pieces! And then have those pieces push around the rest of my desktop.
This isn't serious, of course, but the reason I say this is I wonder if there are applications for things other than video games.
Re:X11 Support (Score:2)
Should do hair and cloth in the 2.4 release (Score:3, Informative)
Still, I would have expected a bigger improvement in performance on existing stuff. There may be too much of a bottleneck getting in and out of the physics processor, which is the usual problem with coprocessors. I'd expect more improvement in fluids, particles, hair and cloth physics, which usually don't feed back into the gameplay engine and thus can be done concurrently with the main engine work. If you're banging boxes around, the main game engine probably has to wait for the physics engine to get the new box positions, so there's no big win there. Even if you have feedback to the game engine from cloth, you can probably delay it a cycle, so that when the cape gets caught in the door, it doesn't yank on the character until one cycle later.
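The "delay it a cycle" scheme above can be sketched in a few lines (hypothetical names; no real engine or PhysX API is modeled, this is just the scheduling pattern):

```python
# One-cycle-delayed physics feedback: the game engine renders each
# frame with the PREVIOUS frame's physics results, so a (conceptual)
# physics coprocessor can compute the current frame concurrently
# instead of stalling the engine. Illustrative only.

def physics_step(frame):
    # stand-in for expensive cloth/particle work done on the coprocessor
    return f"physics({frame})"

prev_result = physics_step(0)   # frame 0's physics, computed ahead of time
log = []
for frame in range(1, 4):
    # render with last frame's physics while this frame's physics
    # is (conceptually) being computed in parallel
    log.append((frame, prev_result))
    prev_result = physics_step(frame)

# the cape "yanks" one frame late: frame 1 sees physics(0), and so on
```

The cost is exactly what the comment describes: effects feed back one cycle late, which is invisible for cloth and particles but unacceptable for physics that gates gameplay.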
Tearable cloth in Illusion games (Score:2)
Another specialized processor? (Score:2)
Re:Another specialized processor? (Score:2)
daughtercard physics processing? (Score:2)
No, I don't see anyone trying to sell us "x86 math coprocessors" any more either.
Re:Spelling fix. (Score:1)
but whatever turns you on...
Re:Spelling fix. (Score:1)
Re:Spelling fix. (Score:2)
Re:Spelling fix. (Score:2)
Bated as in masturbated."
We value your expertise on masterbation breath. Thank you.
Re:Spelling fix. (Score:2)
Re:Spelling fix. (Score:2)
Re:Physics Engine !!! (Score:2)
After all, there are only so many P4s you can throw at a problem in a public computer lab before people start whining about "wanting to do their homework!". Undergrads . . . . . .
Re:Physics Engine !!! (Score:2)
Re:Physics Engine !!! (Score:2)
http://www.clearspeed.com/ [clearspeed.com]
They are making an accelerator card for numerical work. They claim they can get a sustained 50 GFlops in a BLAS matrix multiply. They hope to put several cards into a PC, make a farm of them, and sell the thing as a supercomputer.
Their card is much, much more expensive than PhysX, but they still can't get (in my opinion) the kind of performance advantage that you'd need to really make a compelling case for the
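A claim like "50 GFLOPS sustained in a BLAS matrix multiply" is easy to sanity-check against your own CPU, since a dense n-by-n multiply costs about 2n^3 floating-point operations (the matrix size and timing approach here are illustrative, not ClearSpeed's benchmark):

```python
# Rough GFLOPS estimate for a dense matrix multiply. NumPy dispatches
# the @ operator to whatever BLAS library it was built against, so this
# measures the host CPU's BLAS throughput. Illustrative sketch only.
import time
import numpy as np

n = 512
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b                      # the BLAS matmul being timed
elapsed = time.perf_counter() - start

# a dense n x n matmul performs roughly 2*n^3 flops
gflops = 2 * n**3 / elapsed / 1e9
print(f"{gflops:.1f} GFLOPS for a {n}x{n} multiply")
```

A single timed run like this is noisy; a real benchmark would warm up the caches and average several iterations, but it gives the right order of magnitude for comparison.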
Re:Physics Engine !!! (Score:2)
FPGA for Socket 940 [drccomputer.com]
Plugging an FPGA directly into the HyperTransport bus on a multi-socket mobo sounds like a fantastic idea. It's not quite as easy to scale as coprocessors that live on PCI cards, but the bandwidth benefit should be huge. The downer is that these chips cost $4500 now, so the performance improvement would have to be pretty tremendous to be cost-effective.
Re:PCI Express (Score:2)
PCI-E x1 is still somewhat of a niche feature (I don't know of many x1 cards at all), but no doubt card makers like BFG will produce a PCI-E version in the near future if demand is there - which will probably come when CellFactor or Unreal Engine 3 games come out.
Re:Data parallel? (Score:2)
That said, I think specialized hardware would be extremely useful for the most common numerical tasks, such as solving a large linear system or integrating ODEs.
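Both workloads named above can be shown in a minimal form (illustrative values; the tiny 2x2 system and fixed-step Euler integrator are chosen for clarity, not performance):

```python
# The two workloads in miniature: a dense linear solve and a simple ODE
# integration -- exactly the regular, numeric-heavy computations that
# map well onto specialized hardware. Illustrative sketch only.
import numpy as np

# Linear system: solve A x = b
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = np.linalg.solve(A, b)      # x = [2, 3]

# ODE: dy/dt = -y with y(0) = 1, fixed-step Euler out to t = 1
y, dt = 1.0, 0.001
for _ in range(1000):
    y += dt * (-y)
# y now approximates e^-1 (about 0.368)
```

Scaled up (large n, many coupled equations), both reduce to the same dense linear algebra that BLAS accelerators and, in their own domain, physics processors are built around.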