ATI Introduces Physics Solution

An anonymous reader writes "HardOCP has a scoop posted on ATI and their new physics solutions. ATI has teamed up with Havok as expected and shows off their 'Boundless Gaming' on an Intel Conroe box with three X1900s inside. [H] also has a full ATI PDF posted on the technology as well. Don't throw away those old ATI video cards just yet."
This discussion has been archived. No new comments can be posted.

  • by gEvil (beta) ( 945888 ) on Tuesday June 06, 2006 @07:25AM (#15478613)
    From TFS: Don't throw away those old ATI video cards just yet.

    From TFA: The third element that finishes the Boundless Gaming triangle is actually another Radeon X1600 or better being utilized as a physics processor or PPU.

    Apparently someone's definition of "old" is drastically different from mine...
    • Believe me, that same someone is positively orgasmic at the thought of people buying three high-end graphics cards to install in one system.
    • Apparently someone's definition of "old" is drastically different from mine...


      Heck, I just went from AGP4x to 8x about 3 months ago. My PC motherboard doesn't support a single PCI-E card, much less three.
      • Yep. Same here. I upgraded to AGP 8x when my friend gave me his "old" Radeon 9800. I don't particularly feel that the 9800 is all that old, but I can certainly understand it being considered old to some--it's two generations behind. My problem comes when TFA refers to midrange cards from the current generation as being "old."
  • Who wants to play Mario Brothers and not be able to jump over owls, turtles, and spiny creatures?
  • Odd (Score:4, Interesting)

    by Enderandrew ( 866215 ) <enderandrew@NOsPAM.gmail.com> on Tuesday June 06, 2006 @07:35AM (#15478649) Homepage Journal
    AMD is supposedly in talks to buy out ATI, and yet ATI is optimizing its line of GPUs specifically for Intel. Is ATI fighting to remain independent?

    I'm an NVidia guy myself, but I figure if AMD buys out ATI at least we'd see Linux support from them.

    More to the point, this has been rumored for a while. I feel sorry for those who invested heavily in a PPU. In the end, I'm pretty sure most end users are content to allow physics to be split between GPU and CPU, especially when CPU and GPU makers are happy to try and upsell you on their products to carry the load.

    What's the difference between spending a little extra on the CPU/GPU side versus spending extra for the PPU? The CPU/GPU money will benefit you more often. With hardware accelerated desktops in the near future, your GPU will be even more valuable. The PPU will only be useful some of the time. Furthermore, venturing into the unknown territory of the PPU means that you can get dropped like any other fad, just as fast as UMD.

    Remember when UMD was the hot thing for all of two months?
    • means that you can get dropped like any other fad

      I really hope they do get dropped, honestly. Everything a PPU can do a 2nd GPU can do, and hell, a CPU can probably do it (program games to utilize both cores instead of a PPU). It's a giant get-rich-quick scheme probably, funded by who knows what in the beginning, but b/c it has UT2007 support, it will last at least until then. However, ATI has announced prior to this that its x1800 and x1900 cards (and the x1600?) will be able to be stuck in crossfire an

      • Seems to make sense to me. The PPU sounded really good to me until I thought about it.
      • Everything a PPU can do a 2nd GPU can do, and hell, a CPU can probably do it (program games to utilize both cores instead of a PPU).
        Yeah, but a CPU is poorly suited for it -- that's why GPUs exist in the first place. Just try to run Quake 4 or something on a dual-processor system with software rendering, and see how far you get.

        A suitably programmable GPU (like those capable of supporting DirectX 10) would be great at it, of course.
        • Physics currently runs on the CPU primarily, and does so relatively well. That is what this article is about, however: the major physics middleware provider is working with ATI so that the GPU is designed to take on this load, and do so well.
        • I did not say the CPU would render VIDEO; what I was saying is that it would do the physics calculations. My point was: why add ANOTHER CARD when the CPU, a second core, or an additional core on the GPU could do the calculations instead? I never mentioned the CPU rendering video; I know how bad that would be. Just doing the calculations is all I meant. Sorry for the confusion.
  • It is still cheaper to buy a PhysX card than an additional X1900 Radeon for Crossfire.
    If NVIDIA likewise offers physics only with a dual-card configuration (SLI), then PhysX has a better chance in this battle.
    Of course, if it works with only a $50 NVIDIA/ATI card for physics, things might be different.

    Although PhysX is technically still superior to this HavokFX model, they will have to solve the performance problems they currently have.
    • They'll also have to get developers to adopt the PhysX API. Given that Havok already has a foothold and isn't tied to a piece of hardware with dubious benefits, Ageia has an uphill battle to fight.

      That said, this is just as silly. I'm just going to wait until physics coprocessors are built straight onto video cards, or until developers realize that relatively idle processors in multi-core CPUs are just dandy for performing this kind of number crunching. I imagine that all of the technologies will mature at

    • A nice thing is that ATI's solution could do a lot of stuff with existing chips and PCBs, but with less chips on them. You don't need DVI transmitters, no RAMDACs, no display connectors. All you need is a really stripped-down card, which gets you a physics accelerator with a somewhat "open" interface. That would also mean a lot of aftermarket cooling solutions, which is obviously a plus for a lot of "enthusiasts" (well, I'd say "lunatics in a good way").

    • The dual x1900 setup will increase performance in gaming, help with hardware-accelerated desktops, and tackle physics.

      The PhysX card, however, can actually lower framerates in games. So far the PPU hasn't shown much benefit, if any. And the money you drop on it does nothing when it isn't computing physics. The GPU will benefit you in any game, as well as on the hardware-accelerated desktop.
      • Tell me, have you ever tried benchmarking with the sound turned on, and then with the sound turned off?

        Makes a helluva difference in most games, usually 5-20% of your performance, even with a hardware-assisted sound card like an X-Fi.

        The PhysX card is like a sound card, in that it is yet another device saturating the already heavily used PCI bus, and like the sound card, it has a driver sapping your CPU... yes, there is some overhead even for a HARDWARE solution. Hell, I remember seeing benchmarks of Quake on a Pentium
          However, you insist the PhysX model is better, despite lower performance.

          Again, my point is simple. Do I spend extra money on a Crossfire setup, knowing that extra GPU will be useful all the time, or do I spend the extra money on a PhysX card that sits unused most of the time and will actually drop the performance of my machine?

          I don't see how this is a difficult decision to make.
          • I insist that the PhysX model is MORE CAPABLE. That cannot be denied, regardless of the piss-poor performance. It is impossible for the current crop of Havok acceleration via NVIDIA and ATI cards to accelerate interactive objects. This means you can have flowing hair and clothes, things you don't interact with, but everything else will be "normal."

            The only way to solve this issue is to:

            A: do it the PhysX way, and put another triangle processing layer in-between the CPU and the video card, thus adding late
    • You only need an X1600 or higher for the physics card. $126.99 CAD on TD; that's a little cheaper than the PhysX card.
    • by Anonymous Coward
      Although PhysX is technically still superior to this HavokFX model, they will have to solve the performance problems they currently have.

      Heh. This is just the marketing spin machine taking aim at Ageia. A buddy of mine works there. He helped write the software version of the PhysX PPU. Right now, he reports to me that Ageia is hiring temps to sit around and play games all day. (He asked if I could help out. >:D ) The PCs they use are hooked up to a bunch of monitoring hardware that records various data poin
  • ATI

    "What a dumb idea, you don't need a dedicated physx card to large scale physics in future gaming, our video cards can handle it, you just sorta need.....3 of them.

    Cool in concept, but for some odd reason I doubt they really want you to hang onto your older cards; it's more like a forced upgrade path to the newer ones.

    The plus side to all of this is that if they make a successful push to move PhysX to a spare video card, maybe we can see a more rapid transition to 8x and 16x slots on motherboards, instead of farting around with a bunch of PCI and 1x 2x 4x slots.

      • The plus side to all of this is that if they make a successful push to move PhysX to a spare video card, maybe we can see a more rapid transition to 8x and 16x slots on motherboards, instead of farting around with a bunch of PCI and 1x 2x 4x slots.

      There are lots of uses where anything over 1x is overkill. PCIe is a point-to-point design, so all that extra capacity would go completely unused if we start sticking NICs and SATA cards in 16x slots (PCIe has 250 MB/s per lane!).
      • I want a board with a full complement of 7 (ATX form factor) full-size PCI-e slots; they don't ALL have to be able to do 16x... it would be nice to hope for, but I'd be happy with Mac-style "any PCI-e card in this slot" universal slots for the damn things, so I can stop having to screw around with "oh damn, that fantastic board has its 4x PCI-e wedged under the heatsink of the SLI card in the 16x slot... no 8-drive SATA RAID controller, damn, there goes the 2TB RAID"... stuff like this I actually think about when
        • The SATA specification is able to reach 300 MB/s, but single hard drives [storagereview.com] can't even saturate UDMA. Gigabit NICs only reach 125 MB/s, so one could fit almost two of them in a 1x PCIe slot. Good point with the RAID cards, but isn't that what 4x PCIe is for? :)

          As for PhysX, it doesn't really need a lot of bandwidth. All it does is send updated positions for all the objects, which can be done with one 3x4 matrix per object (4 * 12 bytes). Even in a scene with 5000 objects running at 100 updates/s it would only need around 24 MB/s; a quick check of that arithmetic is sketched below.
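          A back-of-the-envelope check of that estimate (a minimal sketch only; the 4-byte floats, object count, and update rate are just the numbers used in the example above):

          #include <cstdio>

          int main()
          {
              // One 3x4 transform per object: 12 floats * 4 bytes = 48 bytes.
              const double bytes_per_object   = 12 * 4;
              const double objects            = 5000;
              const double updates_per_second = 100;
              const double mb_per_second = bytes_per_object * objects * updates_per_second / 1e6;
              std::printf("%.0f MB/s\n", mb_per_second);   // prints 24 MB/s, a fraction of the ~250 MB/s one PCIe lane provides
              return 0;
          }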
            • SATA 2 with all the fancy NCQ extras can well and truly saturate the bus when you use a controller with RAID (and that's why the really good cards are 4x PCI-e).

            While you could get 2 gigabit ethernet controllers onto a single PCI-e lane, that's probably just another bonus for anyone doing it, since not only do you have the lower-latency PCI-e bus, you also have the ability to play all the kinds of fancy games that need at least 2 controllers, such as utilising the second for priority/QoS without interfering with
      • To quote Field of Dreams

        "Build it and they will come."

        Make them available and people will come up with a use for them.

        You are very correct that more than 1x is overkill, but to go along with what another poster said, it would be nice to be able to plunk down any card in any slot to get around issues of space, whether it be cable reach, slot blockage, or cooling arrangements.

  • Is it just me, or is the demo system fast running out of expansion slots? If these large physics/graphics card setups become a mainstay of future gamers' systems, the motherboard manufacturers are going to have to add more slots, or at least space them out to allow for the slot-hugging cooling systems that these powerful and hot CPUs/GPUs/PPUs need.

    It might even be worth revising the ATX standard to allow for a more independent graphics card section (of the board/case) that could allow for a specialised cooling co

    • Sshhh, dude, I just got a new case.
    • What are you talking about? Most mobos gamers buy these days come with everything onboard except graphics (and even when that's included, it gets replaced), and then they have about five PCI slots in addition to whatever AGP/PCIe ports they might have. Using the high-speed ports for graphics, and maybe two of the PCI slots for a sound card and a RAID controller if you don't like or don't have the on-board stuff, you've still got half of your expansion slots left.
      • Well, most people have one desktop computer that they tend to do most things on. In my computer I have a 5.1-compatible sound card, an ultra-wide SCSI card for some old but expensive drives, a TV card, an extra ethernet card, and a PCI Radeon for the TV out/other display, in addition to the AGP graphics card. Admittedly most people will forgo the SCSI and the extra ethernet, but that's still a lot of cards once you add the physics card and take into account the extra slot occupied by the GPU fan.
  • So, ATI is behind the curve again. NVIDIA announced Havok support months ago. And to you PhysX fanboys: you must have missed the Maximum PC review, which trounced Ageia's new offering.
    • Y'know, I don't remember seeing any reviews of NVIDIA's or ATI's kit at all so far... so let's compare when they're out and being used in games?
    • by Anonymous Coward
      Pffphf. I bet the Havok guys advertise in Maximum PC.

      The Ageia PPU card isn't the bottleneck. The shitty game code is. Ageia has the unfortunate and ugly task of teaching game devs how to dev games. They're doing it right now. There have been no hangups with the PhysX API so far, only poor implementations in games programmed by people who are used to doing things the old way (that is, without a PPU... or multithreading for that matter).

      The code looks something like this:
      while (1)
      {
          physics.start();   /* kick off the whole physics step for this frame... */
          physics.wait();    /* ...then block until it finishes (illustrative completion of the truncated snippet) */
          render();          /* only now does anything else get done */
      }
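      Roughly what "doing it the new way" would look like -- a minimal C++ sketch with stand-in function names (not from any real engine), where the physics step overlaps the rendering of the previous frame instead of blocking it:

      #include <thread>

      // Stand-ins for a game's real physics and rendering calls (illustration only).
      void simulate_physics_step() { /* ... compute the next frame's physics ... */ }
      void render_frame()          { /* ... draw the frame simulated last time ... */ }

      int main()
      {
          for (;;)
          {
              std::thread physics(simulate_physics_step);  // overlap simulation with rendering
              render_frame();                               // CPU/GPU stay busy in the meantime
              physics.join();                               // sync before using the new state
          }
      }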
    • I don't count any of those reviews as fair, because they fail to say that the reason the performance drops is that the system has to draw MORE objects with the card in it than without it.
      Take AnandTech's review: they turn the object count up to about 4x the amount compared to the software-only test, so the result is of course obvious, because the GPU has to draw ~4x the number of objects.
      I have yet to find a single review that does not do what AnandTech does, which is stack the deck against Ageia.
      BTW, in a proper revi
    • Some magazine gave a bad review to something? OMG! It must suck!
  • by Haeleth ( 414428 ) on Tuesday June 06, 2006 @09:20AM (#15479150) Journal
    So, have they found the problem yet?

    I'd be more interested in an AI accelerator. I want my games to be interesting to play; having the most realistically-modelled explosions EVAR is fun for a few hours, but having enemies that behave like humans could be fun for months.

    (No, online play is not a solution. Human enemies don't behave like this [hlcomic.com].)
  • by MrSquirrel ( 976630 ) on Tuesday June 06, 2006 @09:41AM (#15479321)
    Instead of spending countless R&D dollars on stuff the average computer gaming enthusiast will never use (let alone the average person), can't they just improve their single cards? I mean, come on -- two X1900s and then an "old" X1600. ...OLD?! I'm using an X700 -- not three, not two... just one. ATI, NVIDIA -- stop screwing around with stuff that's not worth it (2 X1900s [$500 apiece] + an X1600 [$150] = $1,150... cost of games that take advantage of this = $0, because there aren't any, and I doubt there will be in the near future). Nothing needs this; not even on the highest of high settings would a game need to take advantage of this. I would expect the earliest release that could really use this would be in maybe the next 6-12 months... even then, it won't have been developed to use this, it'll be just a tacked-on "feature".
    • Expensive "gaming PC" components are almost always launched before there are any games to take advantage of them. They sell to the people willing to pay 2 * $500 for the latest e-penis enhancement, not the people who just want to play some games. Like when programmable shaders appeared in videocards (five years ago), it took a long time until the majority of games used them at all.
  • Good thing (Score:2, Interesting)

    by AndrewNeo ( 979708 )
    I think this is a good thing, to start out with. Video game companies are already writing games that likely depend on either an NVIDIA or ATI card, at least for high-end games. The PhysX card won't be supported by all games, because fifty other physics card companies could come up, all with different APIs and such. With the card companies doing it, they could all get together and physics could become a part of DirectX (I know, I know, but don't forget what platform we play games on) and then develop
  • by Anonymous Coward
    I wonder why nVidia and ATi don't just add extensions to the GPU for physics, a la SSE or MMX on CPUs. You wouldn't even have to have a full feature set, just slowly work your way up to one over a few generations. That would give developers time to implement the added instructions and keep costs low (none of this SLI/Crossfire tech needed). You might also be able to do a dual-core GPU, one core that handles graphics display and another for physics, all on one piece of silicon.
  • Why not just purchase the PhysX solution? I thought that the ATI/NVIDIA solution would somehow just take advantage of your existing hardware. I didn't realise that if you had an SLI solution you would still need to install an extra PCI card.
  • There's a fundamental flaw in using a GPU as a PPU... most of the technology in a GPU that you've PAID FOR (fast AA, texture filtering schemes, mipmapping, z-buffer technology, optimisation schemes, etc.) will be totally irrelevant for physics work.
