ATI Introduces Physics Solution
An anonymous reader writes "HardOCP has a scoop posted on ATI and their new physics solution. ATI has teamed up with Havok as expected and shows off their 'Boundless Gaming' on an Intel Conroe box with three X1900s inside. [H] also has the full ATI PDF on the technology posted. Don't throw away those old ATI video cards just yet."
Sensory sacrifice... (Score:1)
Re:Finally.. (Score:2)
Re:Finally.. (Score:2)
The quality of a chess game is almost never measured by its graphics.
Re:Finally.. (Score:2)
rhY
Re:Finally.. (Score:2)
So from my experience there is a difference.
Re:Finally.. (Score:1)
Re:Finally.. (Score:2)
Re:Finally.. (Score:2)
Re:Finally.. (Score:2)
Re:Finally.. (Score:1)
I just looked at the new page [ageia.com], and it seems to have disappeared.
new definition of "old" (Score:4, Insightful)
From TFA: The third element that finishes the Boundless Gaming triangle is actually another Radeon X1600 or better being utilized as a physics processor or PPU.
Apparently someone's definition of "old" is drastically different from mine...
Re:new definition of "old" (Score:2, Insightful)
Re:new definition of "old" (Score:2)
Heck, I just went from AGP4x to 8x about 3 months ago. My PC motherboard doesn't support a single PCI-E card, much less three.
Re:new definition of "old" (Score:2)
Re:new definition of "old" (Score:2)
http://mikebabcock.ca/mypc [mikebabcock.ca]
Boundless Gaming? (Score:1)
Re:Boundless Gaming? (Score:2, Interesting)
There are also the owls in Super Mario Land 2, but since they work as floating platforms, jumping over them (instead of onto them) means certain death.
Odd (Score:4, Interesting)
I'm an NVidia guy myself, but I figure if AMD buys out ATI, at least we'd see Linux support from them.
More to the point, this has been rumored for a while. I feel sorry for those who invested heavily in a PPU. In the end, I'm pretty sure most end users are content to allow physics to be split between GPU and CPU, especially when CPU and GPU makers are happy to try and upsell you on their products to carry the load.
What's the difference between spending a little extra on the CPU/GPU side versus spending extra for the PPU? The CPU/GPU money will benefit you more often. With hardware accelerated desktops in the near future, your GPU will be even more valuable. The PPU will only be useful some of the time. Furthermore, venturing into the unknown territory of the PPU means that you can get dropped like any other fad, just as fast as UMD.
Remember when UMD was the hot thing for all of two months?
Re:Odd (Score:1)
I really hope they do get dropped honestly. Everything a PPU can do a 2nd GPU can do, and hell, a CPU can probably do it (program games to utilize both cores instead of a PPU). It's a giant get-rich-quick scheme probably, funded by who knows what in the beginning, but b/c it has UT2007 support, it will last at least until then. However, ATI has announced prior to this that its x1800 and x1900 cards (and the x1600?) will be able to be stuck in crossfire an
Re:Odd (Score:2)
Re:Odd (Score:3, Informative)
http://www.anandtech.com/video/showdoc.aspx?i=2759
Re:Odd (Score:2)
Re:Odd (Score:1)
I know you're just flaming a troll, but the very next page of the article you linked explains that the benchmarks are questionable.
http://www.anandtech.com/video/showdoc.aspx?i=2759&p=3 [anandtech.com]
I think the only trustworthy benchmark is going to be one where the same calculations are performed using the
Re:Odd (Score:2)
Re:Odd (Score:2)
A suitably programmable GPU (like ones capable of supporting DirectX 10) would be great at it, of course
Re:Odd (Score:2)
Re:Odd (Score:1)
Bulls*it (Score:1)
If Nvidia likewise offers physics only with a dual-card configuration (SLI), then PhysX has a better chance in this battle.
Of course, if it works with only a $50 Nvidia/ATI card for physics, things might be different.
Although PhysX is technically still superior to this HavokFX model, they will have to solve the performance problems they currently have.
Re:Bulls*it (Score:1)
That said, this is just as silly. I'm just going to wait until physics coprocessors are built straight onto video cards, or until developers realize that relatively idle processors in multi-core CPUs are just dandy for performing this kind of number crunching. I imagine that all of the technologies will mature at
Re:Bulls*it (Score:2)
A nice thing is that ATI's solution could do a lot of stuff with existing chips and PCBs, but with fewer chips on them. You don't need DVI transmitters, RAMDACs, or display connectors. All you need is a really stripped-down card, which gets you a physics accelerator with a somewhat "open" interface. That would also mean a lot of aftermarket cooling solutions, which is obviously a plus for a lot of "enthusiasts" (well, I'd say "lunatics in a good way").
Re:Bulls*it (Score:2)
The PhysX card, however, can actually lower framerates in games. So far the PPU hasn't shown much benefit, if any. And the money you drop on it does nothing when it isn't computing physics. The GPU will benefit you in any game, as well as with the hardware accelerated desktop.
Of course they yield slower framerates (Score:2)
Makes a helluva difference in most games, usually 5-20% of your performance, even with a hardware-assisted sound card like an X-Fi.
The PhysX is like a soundcard, in that it is yet another device saturating the already heavily used PCI bus, and like the soundcard, it has a driver sapping your CPU...yes, there is some overhead for a HARDWARE solution. Hell, I remember seeing benchmarks of Quake on a Pentium
Re:Of course they yield slower framerates (Score:2)
Again, my point is simple. Do I spend extra money on a Crossfire setup, knowing that the extra GPU will be useful all the time, or do I spend it on a PhysX card that sits unused most of the time and will actually drop my machine's performance?
I don't see how this is a difficult decision to make.
Re:Of course they yield slower framerates (Score:2)
The only way to solve this issue is to:
A: do it the PhysX way, and put another triangle processing layer in between the CPU and the video card, thus adding latency
Re:Bulls*it (Score:2)
Re:Bulls*it (Score:1)
Heh. This is just the marketing spin machine taking aim at Ageia. A buddy of mine works there. He helped write the software version of the PhysX PPU. Right now, he reports to me that Ageia is hiring temps to sit around and play games all day. (He asked if I could help out. >:D ) The PCs they use are hooked up to a bunch of monitoring hardware that records various data points
Re:x86 GPU Makers Bring Toy Physics To The Masses (Score:2)
It's really too bad this was posted AC, because it's right on the money, but it'll probably be modded "troll" instead of the "insightful" it deserves.
Er ok (Score:2)
"What a dumb idea, you don't need a dedicated physx card to large scale physics in future gaming, our video cards can handle it, you just sorta need.....3 of them.
Cool in concept, but for some odd reason I doubt they really want you to hang onto your older cards, more like a forced upgrade path to the newer ones.
The plus side to all of this is that if they make a successful push to move PhysX to a spare video card, maybe we'll see a more rapid transition to 8x and 16x slots on motherboards, instead of fart
Nah (Score:2)
There are lots of uses where anything over 1x is overkill. PCIe is a point-to-point design, so all that extra capacity would go completely unused if we start sticking NICs and SATA cards in 16x slots (PCIe has 250 MB/s per lane!).
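To put rough numbers on that (my own figures, not from the parent post): a gigabit NIC tops out around 125 MB/s and first-generation SATA at about 150 MB/s, so either one fits in a single lane with room to spare.
#include <stdio.h>

int main(void) {
    const double lane_mb_s = 250.0;          /* PCIe 1.x bandwidth per lane, per direction */
    const double gige_mb_s = 1000.0 / 8.0;   /* gigabit ethernet line rate: ~125 MB/s */
    const double sata_mb_s = 150.0;          /* first-generation SATA: ~150 MB/s */
    printf("gigabit NIC:  %.0f%% of one lane\n", 100.0 * gige_mb_s / lane_mb_s);
    printf("SATA 1.5Gb/s: %.0f%% of one lane\n", 100.0 * sata_mb_s / lane_mb_s);
    return 0;
}
Stick one of those in a 16x slot and the other 15 lanes just sit there.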
Re:Nah (Score:2)
Re:Nah (Score:2)
As for PhysX, it doesn't really need a lot of bandwidth. All it does is send updated positions for all the objects, which can be done with one 3x4 matrix per object (4 * 12 bytes). Even on a scene with 5000 objects running at 100 updates/s it would only need about 24 MB/s.
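Back-of-the-envelope, taking those figures at face value (one 48-byte matrix per object, 5000 objects, 100 updates/s; my arithmetic, not the parent's):
#include <stdio.h>

int main(void) {
    const int objects       = 5000;
    const int bytes_per_obj = 3 * 4 * 4;   /* 3x4 matrix of 4-byte floats = 48 bytes */
    const int updates_per_s = 100;
    double mb_per_s = (double)objects * bytes_per_obj * updates_per_s / 1e6;
    printf("%.0f MB/s\n", mb_per_s);       /* ~24 MB/s, roughly a tenth of one lane's 250 MB/s */
    return 0;
}
So even a 1x link is nowhere near saturated by position updates alone.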
Re:Nah (Score:2)
While you could get 2 gigabit ethernet controllers onto a single PCI-e lane, that's probably just another bonus for anyone doing it, since not only do you have the lower latency PCI-e bus, you also have the ability to play all the kinds of fancy games that need at least 2 controllers, such as utilising the second for priority/QoS without interfering with
Re:Nah (Score:2)
"Build it and they will come."
Make them available and people will come up with a use for them.
You are very correct that more than 1x is overkill, but to go along with what another poster said, it would be nice to be able to plunk down any card in any slot in order to get around issues of space, whether it be for cable reach, slot blockage, or cooling arrangements.
PCI_pick-your-standard_ slots (Score:2, Insightful)
Is it me, or is the demo system fast running out of expansion slots? If these large physics/graphics card setups become a mainstay of the future gamer's system, the motherboard manufacturers are going to have to add more slots, or at least space them out to allow for the slot-hugging cooling systems that these powerful and hot CPUs/GPUs/PPUs need.
It might even be worth revising the ATX standard to allow for a more independent graphics card section (of the board/case) that could allow for a specialised cooling co
Re:PCI_pick-your-standard_ slots (Score:1)
Re:PCI_pick-your-standard_ slots (Score:2)
Re:PCI_pick-your-standard_ slots (Score:1)
NVIDIA already supports HAVOK (Score:2)
Re:NVIDIA already supports HAVOK (Score:2)
Re:NVIDIA already supports HAVOK (Score:1)
The Ageia PPU card isn't the bottleneck. The shitty game code is. Ageia has the unfortunate and ugly task of teaching game devs how to dev games. They're doing it right now. There have been no hangups with the PhysX API so far, only poor implementations in games programmed by people who are used to doing things the old way (that is, without a PPU... or multithreading for that matter).
The code looks something like this:
while(1)
{
physics.start
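The loop gets cut off above, but the gist is a serial "simulate, then draw" frame. For contrast, here's a rough sketch of the overlapped version (entirely my own illustration with made-up function names, not anything from the PhysX SDK or a real engine):
#include <pthread.h>
#include <stdio.h>

/* stand-ins for the real engine calls */
static void step_physics(int frame) { printf("physics for frame %d\n", frame); }
static void render(int frame)       { printf("render frame %d\n", frame); }

static void *physics_worker(void *arg) {
    step_physics(*(int *)arg);
    return NULL;
}

int main(void) {
    for (int frame = 0; frame < 5; frame++) {
        pthread_t t;
        pthread_create(&t, NULL, physics_worker, &frame);   /* simulate frame N... */
        if (frame > 0)
            render(frame - 1);                               /* ...while drawing frame N-1 */
        pthread_join(&t, NULL);                              /* sync before the next frame */
    }
    render(4);   /* draw the last simulated frame */
    return 0;
}
Same amount of work, but the PPU (or second GPU, or second core) is never sitting idle while the renderer runs.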
Re:NVIDIA already supports HAVOK (Score:1)
Take AnandTech's review. They turn the object count up to about 4x the amount in the software-only test; the result is of course obvious, because the GPU has to draw ~4x the amount of objects.
I have yet to find a single review that does not do what AnandTech does, which is stack the deck against Ageia.
btw in a proper revi
Re:NVIDIA already supports HAVOK (Score:1)
They have a solution? Great! (Score:3, Interesting)
I'd be more interested in an AI accelerator. I want my games to be interesting to play; having the most realistically-modelled explosions EVAR is fun for a few hours, but having enemies that behave like humans could be fun for months.
(No, online play is not a solution. Human enemies don't behave like this [hlcomic.com].)
Re:They have a solution? Great! (Score:2)
Re:They have a solution? Great! (Score:2)
Re:They have a solution? Great! (Score:1)
Hmmm (Score:3)
Re:Hmmm (Score:1)
Good thing (Score:2, Interesting)
Why not just add extensions to the GPU? (Score:1, Interesting)
Re:Finally! (Score:2)
weee!
And just wait to see how minesweeper will be!
If you still need to install an extra card (Score:1)
Flawed (Score:1)