PhysX Dedicated Physics Processor Explored 142

Ned_Network writes "Yahoo! News & Reuters has a story about a start-up that has created a dedicated physics processor for gamers' PCs. The processor undertakes physics calculations for the CPU and is said to make gaming more realistic - examples such as falling rocks, exploding debris and the way that opponents collapse when you shoot them are cited as advantages of the chip. Only 6 current titles take advantage of the chip but the FAQ claims that another 100 are in production."
  • Is that a chess game in that list? Why would a chess game need a physics engine? Perhaps the programmers would like to use an engine for animations (the king falling down, perhaps?) instead of frame-by-frame and filler animation.
    • by TubeSteak ( 669689 ) on Sunday April 30, 2006 @04:53PM (#15233408) Journal
      Chess games rely on brute computation to up the difficulty level.

      Anything the programmers can do to examine more moves into the future is a good thing for them. Even Deep Blue couldn't look more than 30 moves into the future. Dunno about the 'son of' Deep Blue.

      Animations, etc. consume trivial amounts of CPU/graphics power compared to examining the next XY possible moves in a chess game.
      • Yeah, no kidding! Even Deep Blue couldn't defeat more than the world's best chess player [wikipedia.org]!
      • Chess games rely on brute computation to up the difficulty level.

        Yeah, but as the OP asked -- what in the world would a physics coprocessor have to do with a chess game?

        Purpose specific devices, such as sound processing DSPs, video card GPUs, or in this case a physics processor, beat out general purpose chips (like the AMDs and Intels that we know and love) because they've been designed for a very specific task. Where a general purpose device might require 1000 operations for an FFT, a DSP might require thre
        • by Anonymous Coward
          It's not simply a DSP. It's a fully programmable physics chip - which PROBABLY means it's a single-instruction, multiple-data type chip (much like the programmable pixel shader logic in a GPU).

          This type of chip would be vastly superior to a standard CPU for calculating possible moves.

          Though, while it helps with chess move logic, it wouldn't help with Go logic.

          Computer Go is still vastly inferior and the problem is more difficult. Why I brought up Go, I have no idea.
          • by kitsunewarlock ( 971818 ) on Sunday April 30, 2006 @07:54PM (#15234118) Journal
            Actually, I was thinking of Go when I read your post...then I saw the word and was like "wow".
            You are probably thinking of it since Go is pseudo-famous (among engineers who have attempted it, and in Japan) as a game that cannot easily be simulated properly by a computer. While chess has 20 legal opening moves, Go has...well, 12 decent ones, but statistically 361. Finding the variations in a game of Go would just...be impossible currently. It is commonly said that no game has ever been played twice. This may be true: on a 19×19 board, there are about 3^361 × 0.012 ≈ 2.1×10^170 possible legal positions, most of which are the end result of about (120!)^2 ≈ 4.5×10^397 different (no-capture) games, for a total of about 9.3×10^567 games. Allowing captures gives as many as 10^(7.49×10^48).

            There are more Go games than there are estimated protons in the visible universe!
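
            A quick sanity check of the figures above, in Python (a sketch only: the 1.2% legal-position fraction and the (120!)^2 no-capture-game estimate are simply taken at face value from the parent comment):

              from math import factorial

              positions = 3**361                      # each of the 361 points is empty, black, or white
              legal = int(positions * 0.012)          # roughly 1.2% of those are legal positions
              no_capture_games = factorial(120)**2    # the parent's (120!)^2 estimate

              print(f"raw board states:  ~10^{len(str(positions)) - 1}")        # ~10^172
              print(f"legal positions:   ~10^{len(str(legal)) - 1}")            # ~10^170
              print(f"no-capture games:  ~10^{len(str(no_capture_games)) - 1}") # ~10^397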
        • Chess is all about strategy, and strategy benefits tremendously from foresight and planning ahead.

          >> what in the world would a physics coprocessor have to do with a chess game?

          Think in terms of metaphors. Real-world physics is mainly a matter of particles, gravity and kinetics, which are functions of space, time and mass.

          Now think Chess.
          It's also a function of particles (pieces), space (position), time (playing turn) and you could encode its rules as physics too (the way pieces move). If you
          • Now think Chess.
            It's also a function of particles (pieces), space (position), time (playing turn) and you could encode its rules as physics too (the way pieces move). If you can get the engine to calculate where the pieces will be in later turns following chess rules (and not real-world physics rules) you will gain tremendous foresight.

            Unfortunately, no. The difference is that in the Real World, physics will run just fine without anyone intervening in the system. There's a single state that results

      • That makes me wonder: is the chess algorithm suitable for running on a GPU, or even possibly this physics chip (i.e., this [gpgpu.org] kind of thing)?
      • Actually, Deep Blue was "Fast and Dumb" - it could indeed search fast and thus foresee many moves ahead ("into the future"), but it didn't have a good sense of which moves are worth checking out. If there are 10 moves available in the position, DB would generally check all of them out. Which meant that:

        1. It wasted a lot of power calculating hopeless and downright stupid moves. That's especially evident when you consider the huge branching factor of exploring all moves in each position.
        2. It would make mistake
        • Sorry, I mis-clicked Submit instead of the Preview button. Here are some format corrections and clarifications of the parent post:

          Actually, Deep Blue was "Fast and Dumb" - it could indeed search fast and thus foresee many moves ahead ("into the future"), but it didn't have a good sense of which moves are worth checking out. If there were 10 moves available in the position, DB would generally check all of them out. Which meant that:

          1. It wasted a lot of power calculating hopeless and downright stupid moves. Tha
  • by Cy Sperling ( 960158 ) on Sunday April 30, 2006 @04:44PM (#15233357)
    I like the idea of offloading physics processing to a specialized card. Seems like it should up the ante for games to move beyond just ragdoll physics for characters and into more environmental sims as well. I would love to see volumetric dynamics like fog that swirls in reaction to masses moving through it. A deeper physics simulation hopefully means more to do rather than just more to look at. Playing with gameworld physics from an emergent gameplay standpoint has real play value versus larger, prettier textures.
    • by Anonymous Coward
      The problem is either it's just eyecandy or it isn't. If it's not just eyecandy, and actually affects how you play the game, then you can't sell the same game to people without the card. This is a problem.

      At this point, there's only one game that takes any advantage of dual-core CPUs. Most games are still targeted towards low-end 2GHz/GeForce MX systems. Seems kind of ridiculous to run headlong into specialized PHYZICKS processors when high-end games already fail to take better advantage of existing hardware.
    • I like the idea, too, though in practice I've got two big questions:
      1) Is it going to come down in price? Considering that "mid-range" GPUs are going for around $300, this card at $300 (okay, $299) represents a doubling of the cost to bring a gaming system "up to speed." Right now, with only one option, it's a one-time thing but we all know that if it's successful there will be upgrades.
      2) Is this really going to make a huge difference in a world where dual-core CPUs are becoming mainstream, and more core
      • Is the performance advantage of their specially designed physics processor so important that, say, an eight-core CPU in 2008 couldn't perform similarly?

        Yes. In general, purpose built hardware can do its job orders of magnitude faster than a general purpose CPU. For example, the 3D performance of an old low end video card will still smoke the software renderer on a high end CPU.

        The traditional PC players seem to be set on multiple copies of the same core. CPUs like the Cell, or KiloCore, are taking a midd

        • Your point about purpose-built versus general-purpose processors is well taken, and it's a big part of Ageia's marketing. As others have noted, though, right now a developer has to cater to that particular hardware when designing the game. This is something that has been done before (I remember having to choose my audio card from a list in the DOS days) but it requires an installed base to really take off.

          I think you hit on something potentially big, though, in your second paragraph. Many have talked
        • For example, the 3D performance of an old low end video card will still smoke the software renderer on a high end CPU.

          That is because of the I/O bottleneck more so than because the purpose-built processor is more powerful for the particular task. That's why 2D acceleration is still important, even though modern CPUs can render 2D scenes significantly faster than real time. I/O-intensive tasks, of which graphics display is one, are well suited to specialized hardware.

          Physics is not an IO intensive process, an
            • That is because of the I/O bottleneck more so than because the purpose-built processor is more powerful for the particular task. That's why 2D acceleration is still important, even though modern CPUs can render 2D scenes significantly faster than real time. I/O-intensive tasks, of which graphics display is one, are well suited to specialized hardware.

              Even though you didn't really address the matter you quoted, I'll point out that even though texturing is an I/O-intensive task, the bottleneck in graphics har
            • Now either the engineers at Havok are crappy at optimizing, or Intel/AMD needs to make their CPUs ~500 times faster before physics will run acceptably on a CPU.

              Neither... The CPUs have to be more parallelized, and until they are the developers have to be more conservative with the number of discrete objects they track... With quad core CPUs on the horizon, we're on our way there. In the long term, the only way we have a 'physics processing unit' in every machine is if some IO component, probably the video c
      • I think many games are going to find it's not really usable without mandating it. Let's say I make a multi-player game and I want players to be able to do things like throw objects at each other, bash down doors, and so on. The PhysX proves to be ideal, allowing me to do all the calculations I need for my realistic environment. However, now I have a problem: There's no way to simplify things for non-PhysX computers that still makes the game act the same. The actual gameplay is influenced by having this phy
        • I'm sure people said the same thing about 3D video cards when they first came out. Really, at that point, it's all just eyecandy, and while you can do some stuff in software at reasonable speeds, no software renderer can compete with a 3D card that does everything in hardware. They don't make games anymore that run on software emulation. Most games require a $150 video card to run. I think the physics cards will come down to this price range.

          I've often wondered why they didn't make a game like a F
          • People DID say this with the first generation of dedicated 3d hardware chips, which is effectively why 3dfx went out of business. There wasn't enough installed base to make the developer cost worthwhile balanced with the benefit of reaching such a small installed base.

            There are several things wrong with the Ageia business model:

            1) they mandate that you use THEIR physics engine in order to access the physics hardware - there is no low-level hardware API that any engine can access - so by supporting their ha
            • Until they sign on a vendor like Dell or HP to actually build machines with these chips, then it's a no-go for developers.

              It should be noted that Dell/Alienware is (and has been for at least a month or more) offering the PhysX card as a build-to-order option. :)
            • People DID say this with the first generation of dedicated 3d hardware chips, which is effectively why 3dfx went out of business. There wasn't enough installed base to make the developer cost worthwhile balanced with the benefit of reaching such a small installed base.

              No. 3dfx went out of business because the Voodoo4 shipped way behind schedule and wasn't optimized for 32-bit rendering. It did 16-bit rendering fast, but only about on par with nVidia's chip, which could do 32-bit and make everything look m
          • But graphics can be made to scale without changing gameplay. Quake 1 played fine in software, just didn't look as good. Physics is a more integral part of gameplay. Used just as eye candy, I'm not sure it'll be effective enough to sell people on a $300 part. Especially because it needs to be a lot better than what software offers. I remember getting my first 3D card; it was night and day the difference. Well worth the money to me. How well will the PhysX do?

            Goes double when games start using dual core proces
            • But graphics can be made to scale without changing gameplay. Quake 1 played fine in software, just didn't look as good. Physics is a more integral part of gameplay. Used just as eye candy, I'm not sure it'll be effective enough to sell people on a $300 part. Especially because it needs to be a lot better than what software offers. I remember getting my first 3D card; it was night and day the difference. Well worth the money to me. How well will the PhysX do?

              Make some multiplayer maps require a PPU by havin

      • Not to mention, you point out that a good graphics card will cost you $300... and for another $300, I'd rather have another identical card and rock some SLI.

        ~Will
    • yeah. that's why I bought an athlon dual core chip damnit!
  • by 9mm Censor ( 705379 ) * on Sunday April 30, 2006 @04:46PM (#15233373) Homepage
    I want a GPPU. A card to enhance the game play of vids. Screw graphics and physics. I want a card that makes games more fun.
  • i mean, already the only reason people buy a mid- to upper-range card is to play games. it makes a lot of sense to put it on the graphics card.

    admittedly, i'm not addressing whether this chip is useful.
    • Supposedly PPUs are going to have much longer refresh cycles than GPUs, so you'd end up buying the same physics chip if you upgrade your graphics card yearly. There aren't any games that require a PPU yet, so a separate card makes a lot more sense. Now audio on the other hand would be a great addition to graphics cards.
  • Would they put an extra port on a motherboard to give it its own bandwidth, or would they be forced to use the existing ports? Which, I admit, haven't even begun to get fully utilized. But the only place I can see this having any use is possibly in renderfarms. Otherwise, I'm buying the cheapest card for the best value, regardless of name brand, reviews, etc.
    • I think that's the plan with PCI express. Put multiple high speed ports on the same Motherboard, for Video cards, and anything else that requires high speed access.
  • by vertinox ( 846076 ) on Sunday April 30, 2006 @04:54PM (#15233415)
    Check out this link: http://physx.ageia.com/footage.html [ageia.com]

    Go to the section that says "I'm old enough" with the Cellfactor video and take a look at the flash movie. Cellfactor almost could be the poster child game for the mother of all physics engines. It looks like it puts Half Life 2 to shame. (Although I wonder, if your character has that much physics power to fly through the air and throw jeeps at people, then why bother with having a gun?)

    I really dig the blood particle effects as someone is gibbed while standing on the ledge and the blood just splashes down the side of the platform.

    And you can really tell the difference in particle debris in the comparison videos at the top. However, I wonder if the same effect can be achieved with cranking up your settings on a high end gaming rig without the card. I'd wait 'til some 3rd party hardware review site gives the final verdict.
    • However, I wonder if the same effect can be achieved with cranking up your settings on a high end gaming rig without the card.

      TFA points out that even a high end gaming rig can't handle all the objects the chip allows the game to generate:

      But before starting the demonstration, Hegde had to lower the resolution of the game.

      The reason? The chip can generate so many objects that even the twin graphics processors in Hegde's top-end PC have trouble tracking them at the highest image quality.

      Basically, the tech i

      • If this card is generating more objects than are usable, it's overpowered. Create a cheaper, lower-powered version that people are actually willing to pay for and they may have a winner.

        Right now, NVIDIA's plan to partner with Havok is looking far more appealing because I can buy a GeForce 6600 for $100 and dedicate it to physics.
    • On a side note, notice how all of those games are FPS?

      Cellfactor seems to really take advantage of the idea of using physics as

      The Ghost Recon videos could easily be replicated by using non-colliding particle systems which simply pass through geometry before fading out. Heck, add a dirt-cheap ground-level collision plane, and you're all set. In the heat of an explosion, it would look just about as realistic, and without the additional hundred dollars in hardware to upgrade every year. As is they di
      • "On a side note, notice how all of those games are FPS?"

        Rise of Legends (RTS) is supported, as is City of Villains (MMO). I guess they figure the FPS visuals are probably a better showcase for the demos. Blammo! Crap flies everywhere.
        There was one tech demo that showed a game where throwing objects around was an inherent part of the gameplay. Kind of like Half Life 2 deathmatch on steroids. Interesting.

    • Watching that video, it does look cool. But the first thing that comes to mind is "tech demo". That's what that game looks like. I can't think of any reason that's cool other than showing off a bunch of physics; and I also can't imagine that the commercially standard hardware that will be available at the release of that game won't be sufficient to run it just fine.

      It does look cool. But, c'mon. Essentially, they're trying to sell you a $350 game. And that blood? I haven't seen blood that fake since
  • Ok, as far as I can tell, the PhysX will be PCI, at least at first. I am upgrading my computer soon and I'm trying to leave plenty of room for the future. To that end, I'm looking to get a mobo with 2 PCIe x16 slots (which I am guesstimating would be the slot type the PhysX would use in a future variant; I'll have two other sizes as well, but that was unintentional.) But to get a mobo with 2 PCIe x16 slots it comes in the form of an Nvidia SLI mobo. Does anyone know if these SLI capable boards will accept someth
    • most of the x16 SLI boards have an x4 slot as well. How much bandwidth does this card need?
    • On most boards, you can put whatever you like in the slots. I have a GeForce 7800 in one and an Areca RAID card in the other. Just note that in SLI mode, both cards usually run at 8x, not 16x. The Areca is max 8x anyway, and when benchmarking I found absolutely no difference between 8x and 16x on the GeForce.
  • by ignatz72 ( 891623 ) on Sunday April 30, 2006 @05:07PM (#15233476)
    From the article: "The consumers will see how the games behave better," Hegde said.

    But in the same article, they mention that the extra particles the processor generates swamp the DUAL GPU setup he's got in a demo system. How many of you want to wager the demo system is a hoss in its own right?

    Apparently this card isn't going to help those of us holding out with our Athlon XP AGP systems that perform fine on current gen games, if a current bleeding edge rig can't cut it. :(

    So now I have to plan for a quad AM2 CPU, quad dual-SLI chip GPU w/ 32 gigs of memory? Damnit all to hell...

    */me researches mortgage rates to subsidize next box-build*
  • by Ned_Network ( 952200 ) on Sunday April 30, 2006 @05:08PM (#15233486) Homepage
    Bah! They cut some of the best bits of my submission!

    The price of $300 seems a bit steep right now to a casual player like me, but this bit from the site's FAQ I find very appealing:

    Buildings and landscapes are now massively destructible with extreme explosions of thousands of shards of glass and shrapnel that cause collateral damage
    The PPU seems to be available as a PCI card [ageia.com] but is also available in off-the-shelf machines [ageia.com] from Dell & Alienware.

    There's a comparison video [ageia.com] showing the difference between Tom Clancy's Ghost Recon Advanced Warfighter with & without the PhysX installed and a couple of hi-res [ageia.com] videos [ageia.com] that are available by FTP, so can't be cached by Coral, I don't think.

    What I really have to wonder, if this thing is as good as they reckon, is why I haven't heard of it before?

  • by SlayerDave ( 555409 ) <elddm1@g m a i l .com> on Sunday April 30, 2006 @05:10PM (#15233492) Homepage
    There is no common, open API for physics. Rather, there are several proprietary, closed APIs which offer similar functionality, but have no common specification. For instance, there are Havok [havok.com], Ageia [ageia.com], Open Dynamics [ode.org], and Newton [newtondynamics.com], just to name a few. The PhysX chip from Ageia only accelerates games written with their proprietary library in the game engine. Other games written with Havok, for instance, should receive no benefit at all from the installed PPU. On the other hand, Havok and NVIDIA have a GPU-accelerated physics library [havok.com], but games without Havok (or users without NVIDIA SLI systems) won't get the benefit.

    On the other hand, graphics cards make sense for consumers because there are only two graphics APIs, OpenGL and DirectX, and they offer very similar functionality under the hood (but significantly different high-level APIs). So a graphics card can accelerate games written with either OpenGL or DirectX, but that's not the case with the emerging PPU field. In graphics, the APIs developed and converged on common functionality long before hardware acceleration was available at the consumer level, but I don't think the physics API situation is stable or mature enough to warrant dedicated hardware add-in cards at this time.

    However, I think there are two possible scenarios that could change this.

    1) Havok and Ageia could create open or closed physics API specifications and make them available to chip manufacturers, e.g. ATI and NVIDIA, which have the market penetration and manufacturing capability to make PPUs widely available. I could imagine a high-end PCIe card that had both a GPU and a PPU on-board.

    2) Microsoft. Think what you will about them, but DirectX has greatly influenced the game industry and is the de-facto standard low-level API (although there are notable exceptions, such as id [idsoftware.com]). Microsoft could introduce a new component of DirectX which specifies a physics API that could then be implemented in hardware.

    But unless one of those things happens, I don't think proprietary PPUs are going to make a lot of sense for consumers.

    • Nitpick (Score:5, Informative)

      by loqi ( 754476 ) on Sunday April 30, 2006 @05:25PM (#15233553)
      ODE isn't closed and proprietary.
    • All you are saying is that physics engines are now in the same state as GPUs were when they first emerged. Hell, at the time game mags even had little icons to show which games supported which cards.

      No common interface and the game makers just had to make sure to include code for the cards they thought of as important enough.

      This lasted quite a long time until things settled down. Oh but wait NO!

      Check Tomb Raider Legends. It has a special option, "Next gen content", which is claimed to be optimized for the N

      • I agree with most of your points, but the game market of 2006 is very different from that of 1995. One significant difference is that the GPU market has stabilized. There are OpenGL and DirectX, and all modern cards support both. I imagine a similar thing has happened in the soundcard market, but I don't know for sure. I think it would be difficult to introduce a new expensive piece of hardware that only improved certain games in today's market. Consumers today expect that if they shell out $300 for a card, it
        • The point about how the 3D video card market has stabilized and how the sound card market has done so in a similar manner is a little off.

          With 3D video cards, there are two major players - ATI and nVidia. Both support Direct3D and OpenGL. The only real differences are what extras are thrown in the mix for each new revision of their respective chips.

          With sound cards (for gaming purposes, anyway), it's pretty much all Creative Labs. There's really only multiple variations of EAX for accelerated sound in games.
    • We need an OpenPL to sort this out. Something very much like OpenGL, but for physics.

      Imagine defining (for example) a feather. You create a simple model and a nice texture with an alpha map. You define a few OpenGL parameters and that'll render nicely on any GPU.

      Then you assign it some physics parameters - mass, air resistance, shape, density - and that feather can then be instantiated with all the parameters needed to control it.

      Now think of a chicken panicking and running away. A bunch of feathers fall ou
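
      A minimal, self-contained sketch of that idea (the names and the "step" function are hypothetical, not any real OpenPL API): each object carries its physics parameters alongside its render data, and a generic integrator advances it.

        from dataclasses import dataclass

        GRAVITY = -9.81  # m/s^2

        @dataclass
        class PhysicsParams:
            mass: float       # kg
            drag: float       # linear air-resistance coefficient
            position: tuple   # (x, y, z) in metres
            velocity: tuple   # (vx, vy, vz) in m/s

        def step(p: PhysicsParams, dt: float) -> PhysicsParams:
            """Advance one object by dt seconds: gravity plus simple linear drag."""
            vx, vy, vz = p.velocity
            ax = -p.drag * vx / p.mass
            ay = GRAVITY - p.drag * vy / p.mass
            az = -p.drag * vz / p.mass
            v = (vx + ax * dt, vy + ay * dt, vz + az * dt)
            pos = tuple(x + vi * dt for x, vi in zip(p.position, v))
            return PhysicsParams(p.mass, p.drag, pos, v)

        # A feather: tiny mass, high drag, so it drifts down slowly instead of plummeting.
        feather = PhysicsParams(mass=0.002, drag=0.05, position=(0.0, 2.0, 0.0), velocity=(0.0, 0.0, 0.0))
        for _ in range(100):                 # ~1.6 seconds at a 60 fps timestep
            feather = step(feather, 1 / 60)
        print(feather.position)

      The render data would go to the GPU untouched; only the parameter block and the step semantics would need to be standardized across vendors for the "OpenPL" idea to work.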
    • As Havok was threatened with being pushed out of business by the NovodeX API and PhysX hardware, they seem to have signed an agreement with NVIDIA. Though I don't believe this solution will be as good as Ageia's cards (even in SLI), because the GPU must do other shader processing, and is generally not designed for such tasks (although they have the advantage that all processing is done on one card and geometry data doesn't have to take round trips).

      If I were a game developer, I'd be confused which API to pick. I'm sure Novod
      • yes, and with newer GPUs, the program size has increased dramatically, which makes them much more versatile. 3 years ago I was cramming a vertex program into 256 lines - now I've got 65535. Fragment programs increased similarly (though I just finally got a card that supports them in the last 3 months to play Oblivion, so I'm still learning the ropes).

        Take a look at the GPU based samples [nvidia.com] (unfortunately, most require Windows) - many are incorporating physics (water, cloth, etc). Another good source is http [gpgpu.org]

    • And this is exactly what's wrong.

      If you go to their site, and you watch the video clips, you think "Wow, what have I been missing". But, what's happened in reality is this:

      if (physx card)
      then
          explosion(do_extra_shit)
      else
          explosion(normal_boring)
      end

      That's all. Proprietary API and exclusive deals with game manufacturers mean that people who have the card see extra shit, even if their normal graphics card setup could have handled it without. I'd like to see the exact same code run on a $300 graphics car

  • Because I could have sworn the article was about a "Dedicated Physics Professor", not a peripheral processor. For a moment, I had visions of a computer program that teaches advanced physics to its users. Silly /me
  • by drwiii ( 434 )
    With dual-core CPUs taking hold, and quad-core CPUs on the way, is the addition of a fixed-purpose processor really a viable long-term solution?

    They seem to think so [ageia.com], but then again they have an interest in selling fixed-purpose processors.

  • Multiplayer (Score:5, Insightful)

    by lord_sarpedon ( 917201 ) on Sunday April 30, 2006 @06:23PM (#15233772)
    There's a major flaw. Multiplayer gameplay requires certain clientside behaviors to be deterministic, otherwise clients will fall out of sync. Physics is one of those. If Bob uses a PhysX card and an explosion lands a box in position X, but Alice, without a PhysX card, has the same box in position Y, then there is a problem. Both can't be right. The server would have to correct for discrepancies such as that, because the position of a box affects gameplay; bullets and players can impact it. Perhaps more position updates would have to be sent to make sure Alice's box ends up in the same spot as Bob's. But what about midflight? I suppose this doesn't matter for blood smears and purely aesthetic effects, but as the videos show, that's not where PhysX really shines. This puts a physics accelerator in an entirely different class than a graphics card. You can adjust your graphics settings, but the quality of your physics simulation in multiplayer can only be as good as the least common denominator without killing gameplay for some of the parties involved. Sure, AGEIA could provide non-accelerated versions of everything in its library that produce the same result when acceleration isn't available, but then you are offloading the entire functionality of an add-on card onto the CPU...imagine running Doom at full settings using software rendering. Extreme example. But that defeats the very purpose of the card, if developers are limited because most of their customers might not have it.
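
    One way to picture the "server corrects the discrepancies" part (a rough sketch; the names and the blend factor are illustrative, not from any real engine): the client keeps its own simulated positions, and each authoritative snapshot from the server pulls gameplay-relevant objects back toward the server's answer.

      def reconcile(local_objects, server_snapshot, blend=0.5):
          """local_objects / server_snapshot: dicts mapping object id -> (x, y, z)."""
          for obj_id, (sx, sy, sz) in server_snapshot.items():
              if obj_id not in local_objects:
                  continue
              lx, ly, lz = local_objects[obj_id]
              # Blend toward the server instead of snapping, so small PhysX-vs-software
              # differences are hidden; large ones converge over a few snapshots.
              local_objects[obj_id] = (lx + blend * (sx - lx),
                                       ly + blend * (sy - ly),
                                       lz + blend * (sz - lz))
          return local_objects

      # Bob's PhysX card landed the box in one spot; the server says another:
      client_state = {"box_17": (10.0, 0.0, 3.0)}
      print(reconcile(client_state, {"box_17": (9.4, 0.0, 3.2)}))

    The catch, as noted above, is that every correction costs bandwidth, and the shared simulation can still only be as rich as whatever the server and the least-equipped client can agree on.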
    • I suppose that such games would have to share and synchronize their physics data with each other. Every machine with a physics co-processor would improve the quality of the physics for all machines playing the same game.

      Eventually, when physics co-processors are commonplace, they might have to act like a distributed parallel computer for multiplayer games. Instead of each machine individually simulating the same world redundantly, the networked machines would co-operatively simulate the game world together.
    • For many things that must be synced (projectile velocity, etc) it wouldn't be of much help. For those that are into such things, the non-interactive elements such as flying gibs or dust effects... things that 99% of the time don't affect gameplay but do affect eyecandy... would benefit.

      Of course, the other side is that if the game is merged with an API, then you would have the same result using either the hardware or a software emulation, but the processing would be less CPU intensive or generally
  • Does anyone know where their engine came from?

    Has anybody seen this card in person?

    This is something that open source could be doing - is http://www.ode.org/ [ode.org] responding to this?

    My guess is that this engine is open source and running on some sort of FPGA. It would help if a standard such as OpenGL could be drafted.

    Forget games, there's a large market for physics models in design houses.
  • I can see paying $300 for a 3D card - I've done it plenty of times - but $300 more to tweak out some physics effects? Not a chance for a gaming machine. They should get support for these things written into popular particle effects systems for video editors - $300 for real-time high-quality particles would sell like a charm in the visual effects world.

    I'm guessing that Ageia is hoping for a buyout by Nvidia or ATI. Getting this technology built into GPUs would be a great selling point, and be a great way to
    • "I can see paying $300 for a 3D card - I've done it plenty of times - but $300 more to tweak out some physics effects? Not a chance for a gaming machine."

      Depends on what they do with it. The biggest drawback to modern FPSes is that the environment isn't destructible. Supposedly, this type of card will help deal with that. Honestly, I was hoping that the PS3 and the 360 would usher in this new era, but so far it's looking like we're still another generation away from that.

      In any event, if a few games capit
  • So I understand this is for games but, could this help scientific research such as molecular dynamics or other physics simulations? What is the accuracy? What type of calculations can it speed up?
  • this seems like a wonderful tech but i don't get why they are aiming so low
    physics is a vital part of games, yes it is ... but
    this makes me think they are only aiming for the easy money
    i do think that if the specs were open - hence, call it gpl'ed if you want -
    not only would the game market benefit from this tech but also
    research centres, universities, ...
  • by ShyGuy91284 ( 701108 ) on Sunday April 30, 2006 @08:39PM (#15234274)
    The physics simulation needed for a variety of scientific problems has always needed incredible processing power (such as the Earth Simulator). I'm wondering how accurate they can make this physics simulation, and if it would work better at physics simulation than traditional CPU-powered methods. It makes me want to compare it to the GRAPE clusters used for some highly-specialized force-related research (I know the University of Tokyo and Rochester Institute of Technology have them).
  • by Anonymous Coward
    Everyone knows that computer technology just gets better and better as time goes on, but your ISP is still stuck in the past as the execs go out and play a few rounds of golf. How do they expect to run these huge physics calculations over the internet in a massive game like, say, Battlefield 2? I honestly don't know the first thing about physics or how this stuff gets across a network, but Counter-Strike: Source doesn't even let you take advantage of the 5-6 physics barrels in a map and even the
    • by MobileTatsu-NJG ( 946591 ) on Monday May 01, 2006 @12:06AM (#15234864)
      "How do they expect to run these huge physics calculations over the internet in a massive game like say for instance Battlefield 2?"

      I can offer an uninformed theory. If an event such as "barrel at <position> explodes" is passed to the other players, then the processing is done at the client end for all of the players. If the event is handled properly, they should all reach the same conclusion.

      Unfortunately, as I'm writing this, I can start to see the problem. Okay, I apologize, but I'm going to do a 180 here. Imagine a car crashes through a brick wall and a hundred bricks go flying away. That alone should work fine. But if another player runs into the path of one of the bricks and it bounces off of him, suddenly it's no longer as predictable. His latency along with everybody else's latency means ONE of the computers has to make the decision of where everything goes. That, in and of itself, is probably okay, but then you have to pass a great deal more data along to let the other clients know what's happening.

      So... yeah, I see your point.
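
      To go with the parent's theory, here is roughly what the "same event, same result" approach looks like (hypothetical event format; real engines would also need a fixed timestep and bit-identical math for this to be truly deterministic across machines): each client seeds its cosmetic debris from the event data, so nothing per-brick crosses the wire until a player actually interacts with one.

        import random

        def spawn_debris(event_id, origin, count=100):
            rng = random.Random(event_id)   # same event id on every client -> same "random" debris
            ox, oy, oz = origin
            return [(ox + rng.uniform(-1, 1),
                     oy + rng.uniform(0, 2),
                     oz + rng.uniform(-1, 1)) for _ in range(count)]

        # Bob's and Alice's machines both receive event 4217, "wall breaks at (12, 0, 5)",
        # and compute identical brick positions locally, with no further network traffic.
        print(spawn_debris(4217, (12.0, 0.0, 5.0))[0])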
      • I don't know about that particular game, but I have a feeling that's why multiplayer games have centralized servers.... Things are probably simpler when only one node decides who-shot-who.

        Even worse, consensus in a distributed system with any packet loss is not guaranteed (famous FLP paper in the 80s). The only guarantees are probabilistic... (And the world seems to run okay on that.) Which means no matter what algorithm, if x players start shooting at each other, their computers will not always a
  • this should definitely be a feature attached to the video card. Either that, or they should bundle physics accelerators with graphics accelerators. Also, like others have mentioned, it's important that we get a standard API for this for it to catch on...

    Really, it would have been a lot better to introduce this technology on a console than on the PC. If the PS3, for instance, were to come with this, developers would get a chance to play around with it in earnest and prove its usefulness, if any, to the consum
  • by rasmusneckelmann ( 840111 ) on Monday May 01, 2006 @04:08AM (#15235503)
    Back in 1995 game developers made 3D games using software rendering; then suddenly a company called 3Dfx introduced a dedicated 3D chip called Voodoo Graphics. Hardware acceleration of 3D was no new thing at that time, but 3Dfx was the first to sell it to normal consumers. In the beginning, everyone thought it was insane to offer that kind of dedicated chip. Everyone was wrong; 3Dfx with their Voodoo Graphics was a massive success, and soon all game developers supported 3Dfx's proprietary 3D API "Glide". Then came all the other "conventional" big players of graphics hardware, like ATI, nVidia, and Matrox, and started implementing similar features into their video cards. Microsoft introduced Direct3D to offer a uniform interface to consumer 3D rendering, and video card manufacturers even started to support OpenGL. 3Dfx and their proprietary API slowly faded away.

    My best guess is that this is going to repeat. AGEIA have now done what 3Dfx did, introducing a dedicated hardware chip for something that until now has been done in software. They even have their own proprietary physics API. Soon ATI and nVidia will incorporate similar features into their GPUs, and Microsoft will create a brand new DirectX subsystem called DirectPhysics. And AGEIA will slowly fade away (if they don't learn from 3Dfx's mistakes).
    • Whether or not it's workable in the GPU depends on the bandwidth available. I'll admit I don't know what kind of utilization PCI Express busses are seeing with graphics accelerators these days, but for "interactive" physics calculations, the data will need a ton o' bandwidth to feed back into the game engine.

      "Incidental" physics, like dust spray or blood spatter that don't affect the game at all except as eyecandy, can be done as a last step by the GPU with no feedback to the game whatsoever. Obstructions
    • The biggest problem I see here is that physics is gameplay-relevant, while graphics are not. In the early days of 3D acceleration most games provided both hardware and software rendering; since graphics didn't add anything to the gameplay, that wasn't a big issue. With physics, on the other hand, you have a problem, since you can't just fall back to software rendering without changing the actual game. So it might be a bit more difficult for PPUs to break into the gaming world. On the other side it might of course
  • Maybe it's time to overhaul our "general purpose" CPUs. While dedicating a processor to physics and graphics is sensible, there is little reason that processor should have an architecture different from the CPU that handles everything else--many of the features in a physics and graphics chip are useful for lots of other applications as well.
  • is it just me or does the explosion seem to be somewhat in slow motion? i think blasting something (as shown on tv) is so fast that you do not see the shrapnel flying slowly (videos of tornadoes seem to show even faster movement.) maybe they just enhanced their effect to make it wow instead of actually patterning it on actual stuff (like, did they really study ballistics or explosions?) another money-milking machine.
  • weird explosions (Score:3, Interesting)

    by john_uy ( 187459 ) on Monday May 01, 2006 @01:47PM (#15238661)
    i find the explosions to be weird. it seems they are in slow motion. looking at actual explosions on tv, you hardly see debris flying unless the footage is played frame by frame. it seems they are doing this for the wow factor instead. another way to milk money. i feel that the supposedly "real" things the game does do not match real-world explosions, blood spatter, and other violent and gory stuff. (of course i cannot bash them to that extent because i have to consider the computing power required, etc.)
  • Sorry if this sounds ignorant... but I have a fairly good grasp of the kind of physics that would be required in a game... for example in Quake or something like that. That said, here's my question:

    I am a computational physicist ... I write code (usually parallel, using MPI, for speed) that is obviously very physics intense. What the code amounts to is calculating forces between fluid elements and integrating Newton's 2nd law for them. Do these new Physics co-processors offer anything such that I could impr
