PhysX Dedicated Physics Processor Explored

Ned_Network writes "Yahoo! News & Reuters has a story about a start-up that has created a dedicated physics processor for gamers' PCs. The processor takes over physics calculations from the CPU and is said to make gaming more realistic; examples such as falling rocks, exploding debris, and the way opponents collapse when you shoot them are cited as advantages of the chip. Only six current titles take advantage of the chip, but the FAQ claims that another 100 are in production."
This discussion has been archived. No new comments can be posted.
  • by Cy Sperling ( 960158 ) on Sunday April 30, 2006 @04:44PM (#15233357)
    I like the idea of offloading physics processing to a specialized card. It should up the ante for games to move beyond just ragdoll physics for characters and into more environmental sims as well. I would love to see volumetric dynamics, like fog that swirls in reaction to masses moving through it. A deeper physics simulation hopefully means more to do, not just more to look at. Playing with gameworld physics from an emergent-gameplay standpoint has real play value, versus just larger, prettier textures.
  • by TubeSteak ( 669689 ) on Sunday April 30, 2006 @04:53PM (#15233408) Journal
    Chess games rely on brute computation to up the difficulty level.

    Anything the programmers can do to examine more moves into the future is a good thing for them. Even Deep Blue couldn't look more than 30 moves into the future. Dunno about the 'son of' Deep Blue.

    Animations, etc. consume trivial amounts of CPU/graphics power compared to examining the next N possible moves in a chess game. (A toy search sketch follows below.)
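The brute-force look-ahead described above is easy to see in miniature. Below is a toy C++ sketch of exhaustive game-tree search, using a take-1-to-3-stones game rather than chess (chess engines use the same idea, plus heavy pruning, over a vastly larger tree, which is where the CPU time goes):

```cpp
#include <algorithm>
#include <iostream>

// Toy game: players alternately take 1-3 stones; whoever takes the last
// stone wins. negamax() examines every line of play to the end, so its
// cost grows exponentially with depth -- the comment's point about why
// "more moves into the future" is a brute-force CPU problem.
// Returns +1 if the side to move can force a win, -1 otherwise.
int negamax(int stones) {
    if (stones == 0) return -1;  // previous player took the last stone
    int best = -1;
    for (int take = 1; take <= std::min(3, stones); ++take)
        best = std::max(best, -negamax(stones - take));
    return best;
}

int main() {
    for (int pile = 1; pile <= 10; ++pile)
        std::cout << "pile of " << pile << ": "
                  << (negamax(pile) > 0 ? "win" : "loss")
                  << " for the side to move\n";
}
```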
  • by Anonymous Coward on Sunday April 30, 2006 @05:07PM (#15233479)
    The problem is that either it's just eye candy or it isn't. If it's not just eye candy, and actually affects how you play the game, then you can't sell the same game to people without the card. This is a problem.

    At this point, there's only one game that takes any advantage of dual-core CPUs. Most games are still targeted towards low-end 2GHz/GeForce MX systems. Seems kind of ridiculous to run headlong into specialized PHYZICKS processors when high-end games already fail to take better advantage of existing hardware.

  • by TubeSteak ( 669689 ) on Sunday April 30, 2006 @05:08PM (#15233487) Journal
    However, I wonder if the same effect can be achieved by cranking up your settings on a high-end gaming rig without the card.
    TFA points out that even a high-end gaming rig can't handle all the objects the chip allows the game to generate:
    But before starting the demonstration, Hegde had to lower the resolution of the game.

    The reason? The chip can generate so many objects that even the twin graphics processors in Hegde's top-end PC have trouble tracking them at the highest image quality.
    Basically, the tech in this chip is ahead of its time. It would seem like a wise idea to have PhysX-enabled games (optionally) benchmark your rig & automagically limit the # of objects generated so that the gaming experience doesn't drop below a certain FPS at your chosen graphics settings. (A sketch of such a limiter follows below.)
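The limiter proposed in the comment above is simple to sketch. Nothing here comes from Ageia's SDK; every name is hypothetical. The idea is a feedback loop: measure the frame time each frame, cut the debris budget quickly when FPS falls below target, and grow it back slowly when there is headroom:

```cpp
#include <algorithm>
#include <chrono>

// Hypothetical adaptive debris limiter: spawners ask allowance() before
// emitting objects, so the object count degrades before the frame rate does.
class DebrisBudget {
public:
    DebrisBudget(double target_fps, int max_objects)
        : target_fps_(target_fps), budget_(max_objects), max_(max_objects) {}

    // Call once per frame with the measured frame time.
    void update(std::chrono::duration<double> frame_time) {
        const double fps = 1.0 / frame_time.count();
        if (fps < target_fps_ * 0.95)
            budget_ = std::max(kMin, budget_ * 9 / 10);  // shed load quickly
        else if (fps > target_fps_ * 1.10)
            budget_ = std::min(max_, budget_ + 8);       // recover slowly
    }

    // How many debris chunks the next explosion may emit.
    int allowance() const { return budget_; }

private:
    static constexpr int kMin = 32;  // never strip the effect entirely
    double target_fps_;
    int budget_;
    int max_;
};
```

An explosion effect would call allowance() before deciding how many chunks to spawn, so a slower rig sees fewer flying bricks instead of a slideshow.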
  • by SlayerDave ( 555409 ) <elddm1@gmaiMOSCOWl.com minus city> on Sunday April 30, 2006 @05:10PM (#15233492) Homepage
    There is no common, open API for physics. Rather, there are several competing APIs which offer similar functionality but share no common specification. For instance, there are Havok [havok.com], Ageia [ageia.com], Open Dynamics [ode.org], and Newton [newtondynamics.com], just to name a few. The PhysX chip from Ageia only accelerates games written with their proprietary library in the game engine. Other games, written with Havok for instance, should receive no benefit at all from the installed PPU. On the other hand, Havok and NVIDIA have a GPU-accelerated physics library [havok.com], but games without Havok (or users without NVIDIA SLI systems) won't get the benefit.

    Graphics cards, by contrast, make sense for consumers because there are only two graphics APIs, OpenGL and DirectX, and they offer very similar functionality under the hood (though significantly different high-level APIs). So a graphics card can accelerate games written with either OpenGL or DirectX, but that's not the case in the emerging PPU field. In graphics, the APIs developed and converged on common functionality long before hardware acceleration was available at the consumer level; I don't think the physics API situation is stable or mature enough to warrant dedicated hardware add-in cards at this time.

    However, I think there are two possible scenarios that could change this.

    1) Havok and Ageia could create open or closed physics API specifications and make them available to chip manufacturers, e.g. ATI and NVIDIA, which have the market penetration and manufacturing capability to make PPUs widely available. I could imagine a high-end PCIe card that had both a GPU and a PPU on-board.

    2) Microsoft. Think what you will about them, but DirectX has greatly influenced the game industry and is the de facto standard low-level API (although there are notable exceptions, such as id [idsoftware.com]). Microsoft could introduce a new component of DirectX that specifies a physics API, which could then be implemented in hardware.

    But unless one of those things happens, I don't think proprietary PPUs are going to make a lot of sense for consumers. (A sketch of how an engine can paper over the API fragmentation follows below.)
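A common way engines cope with this fragmentation is an internal abstraction layer, so the vendor library becomes a swappable backend. The sketch below is hypothetical; none of these names come from the real Havok, Ageia, ODE, or Newton APIs:

```cpp
#include <memory>

struct Vec3 { float x, y, z; };

// Engine-side physics interface: game code talks only to this, and one
// adapter per vendor SDK translates the calls into that library's types.
// Whether an adapter runs on a PPU, a GPU, or the CPU is invisible here.
class PhysicsBackend {
public:
    virtual ~PhysicsBackend() = default;
    virtual int  addRigidBody(const Vec3& pos, float mass) = 0;
    virtual void applyImpulse(int body, const Vec3& impulse) = 0;
    virtual void step(float dt) = 0;            // advance the simulation
    virtual Vec3 position(int body) const = 0;
};

// Factory that picks whichever backend is installed and supported.
std::unique_ptr<PhysicsBackend> makeBackend();
```

This is roughly what a DirectPhysics-style standard would make unnecessary; until then, each engine writes and maintains its own adapters.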

  • Re:no way in hell (Score:1, Insightful)

    by WML MUNSON ( 895262 ) on Sunday April 30, 2006 @05:17PM (#15233525)
    I'm going to assume you weren't around when 3D accelerators first came into existence and everyone was saying the same thing you just said.

    Improved physics matter only to "hardcore techies"? I challenge you to explain Half-Life 2's success without including the use of physics in your answer.

    Physics is an emerging area in gaming, and huge quantities of resources are being poured into its improvement. A card that not only offloads physics calculations to a separate chip but, as a result, enables more and better in-game physics is absolutely a great idea. Puzzles can become more interesting; visuals can become more immersive thanks to improved particle physics, just for starters; you'll have creative ways to destroy your enemies without shooting them directly; destructible environments... and the list keeps going.

    It's only a matter of time until these take off. Some folks might have a tough time finding an empty slot for one of these on their motherboard (with all the QUINTUPLE-SLI configs people have nowadays), but they'll just upgrade to a bigger case and a board with more slots, especially if developers keep coming on board.

    Games probably won't REQUIRE one for quite some time, but I would expect these to be about as widespread as 5.1+ sound cards in just a few years.
  • by Opportunist ( 166417 ) on Sunday April 30, 2006 @05:19PM (#15233529)
    It's called "creativity" and was normally used in the development of games. Actually, it had been for ages, until studios found it too expensive and realized it's cheaper to develop games without it.
  • by Sycraft-fu ( 314770 ) on Sunday April 30, 2006 @06:12PM (#15233743)
    I think many games are going to find it's not really usable without mandating it. Let's say I make a multiplayer game and I want players to be able to do things like throw objects at each other, bash down doors, and so on. The PhysX proves to be ideal, allowing me to do all the calculations I need for my realistic environment. However, now I have a problem: there's no way to simplify things for non-PhysX computers that still makes the game act the same. The actual gameplay is influenced by having this physics engine, and there's no going both ways.

    Well, that clearly isn't going to work; not enough people will own it to mandate it.

    OK, that means you are stuck using it for eye candy: physics effects that make things look cooler but don't really change gameplay. Hmm, well, at $300 just for eye candy, you face some stiff competition. I bet $300 spent on a PhysX doesn't make games as pretty as $300 spent on a GeForce 7900 does.

    We'll see, but I think your processor argument has a lot of merit. Is this thing going to be far enough ahead to outpace processors for some time to come? Because I don't think it's the kind of thing people will upgrade every year, and I think there's going to be a lot of inertia to overcome. I mean, I'm intrigued, and $300 is not out of the range I'd consider spending for an add-in card if I like what it does. However, I've got to wait and see if it's got any legs and if the difference is big enough for me to care. During that time, I have to guess, people will improve physics in software and start using dual cores for that. Right now I have a processor core that sits almost idle during games, just tending to system tasks. I have to ask how much more you could get out of it when it's actually used: how close to the PhysX accelerator can you come? The answer may be close enough that I don't care to purchase one. (A sketch of the dual-core approach follows below.)
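The "idle second core" idea above amounts to running the physics integrator on its own thread at a fixed timestep while the render thread keeps drawing. A minimal sketch, with World and integrate() as hypothetical stand-ins for a real engine's state and solver:

```cpp
#include <atomic>
#include <chrono>
#include <thread>

struct World { /* rigid bodies, constraints, ... */ };

// Hypothetical physics step; a real solver would integrate velocities,
// detect collisions, and resolve contacts here.
void integrate(World&, float /*dt*/) {}

std::atomic<bool> running{true};

// Runs on the otherwise-idle core: advance the simulation at ~60 Hz,
// independent of the render frame rate.
void physicsThread(World& world) {
    const auto period = std::chrono::microseconds(16667);
    auto next = std::chrono::steady_clock::now() + period;
    while (running.load()) {
        integrate(world, 1.0f / 60.0f);
        std::this_thread::sleep_until(next);
        next += period;
    }
}

int main() {
    World world;
    std::thread physics([&] { physicsThread(world); });
    std::this_thread::sleep_for(std::chrono::seconds(1));  // render loop stand-in
    running = false;
    physics.join();
}
```

A real engine would double-buffer the world state so the renderer never reads a half-updated frame; this only shows the division of labor.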
  • Multiplayer (Score:5, Insightful)

    by lord_sarpedon ( 917201 ) on Sunday April 30, 2006 @06:23PM (#15233772)
    There's a major flaw. Multiplayer gameplay requires certain client-side behaviors to be deterministic, otherwise clients will fall out of sync. Physics is one of those. If Bob uses a PhysX card and an explosion lands a box in position X, but Alice, without a PhysX card, has the same box in position Y, then there is a problem. Both can't be right. The server would have to correct for discrepancies such as that, because the position of a box affects gameplay; bullets and players can impact it. Perhaps more position updates would have to be sent to make sure Alice's box ends up in the same spot as Bob's. But what about mid-flight?

    I suppose this doesn't matter for blood smears and purely aesthetic effects, but as the videos show, that's not where PhysX really shines. This puts a physics accelerator in an entirely different class than a graphics card. You can adjust your graphics settings, but the quality of your physics simulation in multiplayer can only be as good as the least common denominator, without killing gameplay for some of the parties involved.

    Sure, AGEIA could provide non-accelerated versions of everything in its library, producing the same result when acceleration isn't available, but then you are offloading the entire functionality of an add-on card onto the CPU... imagine running Doom at full settings using software rendering. Extreme example. But that defeats the very purpose of the card, if developers are limited because most of their customers might not have it. (A sketch of the server-correction loop follows below.)
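The correction described above is usually server-authoritative reconciliation: the client simulates the box locally (PhysX-accelerated or not), and when a server snapshot disagrees, it blends or snaps to the server's answer. A minimal sketch under those assumptions; real games also reconcile velocities and timestamp their snapshots:

```cpp
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3  operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3  operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3  operator*(float s) const { return {x * s, y * s, z * s}; }
    float length() const { return std::sqrt(x * x + y * y + z * z); }
};

// Called when an authoritative server snapshot arrives for an object the
// client has been simulating locally.
Vec3 reconcile(const Vec3& local, const Vec3& server) {
    const float error = (server - local).length();
    if (error < 0.01f) return local;             // agree: keep local result
    if (error > 2.0f)  return server;            // way off: hard snap
    return local + (server - local) * 0.1f;      // small drift: blend out
}
```

The blend hides small PhysX-versus-software discrepancies, but every correction costs extra traffic and visual popping, which is the comment's point about the least common denominator.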
  • by mrchaotica ( 681592 ) * on Sunday April 30, 2006 @07:18PM (#15233987)
    Well, that's just too bad for them, now isn't it? 'Cause the important people -- namely us, the readers of the site -- care about usability, not Flash!

    If they can't do animated vector graphics the right way, they shouldn't do them at all!
  • by MobileTatsu-NJG ( 946591 ) on Monday May 01, 2006 @12:06AM (#15234864)
    "How do they expect to run these huge physics calculations over the internet in a massive game like say for instance Battlefield 2?"

    I can offer an uninformed theory. If an event such as "barrel at [location] explodes" is passed to the other players, then the processing is done at the client end for all of the players. If the event is handled identically everywhere, they should all reach the same conclusion.

    Unfortunately, as I'm writing this, I can start to see the problem. Okay, I apologize, but I'm going to do a 180 here. Imagine a car crashes through a brick wall and a hundred bricks go flying away. That alone should work fine. But if another player runs into the path of one of the bricks and it bounces off of him, suddenly it's no longer as predictable. His latency, along with everybody else's, means ONE of the computers has to make the decision of where everything goes. That, in and of itself, is probably okay, but then you have to pass a great deal more data along to let the other clients know what's happening.

    So... yeah, I see your point. (A sketch of the seeded-event scheme, and where it breaks, follows below.)
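The "pass the event, compute locally" theory from the first paragraph can be made deterministic by shipping a seed with the event, so every client scatters the same hundred bricks. This is a hypothetical sketch, and it breaks exactly where the comment says: the moment a lagged player's body perturbs a brick, the clients' simulations diverge and someone has to arbitrate.

```cpp
#include <cstdint>
#include <random>
#include <vector>

struct Vec3 { float x, y, z; };

// One small message instead of a hundred brick trajectories: the server
// picks the seed, and every client expands it into identical debris.
struct DebrisEvent {
    Vec3          origin;
    std::uint32_t seed;
    int           count;
};

std::vector<Vec3> spawnDebris(const DebrisEvent& ev) {
    std::mt19937 rng(ev.seed);  // same seed on every client, same bricks
    std::uniform_real_distribution<float> dir(-1.0f, 1.0f);
    std::vector<Vec3> velocities;
    velocities.reserve(ev.count);
    for (int i = 0; i < ev.count; ++i)
        velocities.push_back({dir(rng), dir(rng) * 0.5f + 0.5f, dir(rng)});
    return velocities;
}
```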
  • by rasmusneckelmann ( 840111 ) on Monday May 01, 2006 @04:08AM (#15235503)
    Back in 1995, game developers made 3D games using software rendering; then suddenly a company called 3Dfx introduced a dedicated 3D chip called Voodoo Graphics. Hardware acceleration of 3D was no new thing at that time, but 3Dfx was the first to sell it to normal consumers. In the beginning, everyone thought it was insane to offer that kind of dedicated chip. Everyone was wrong: 3Dfx with their Voodoo Graphics was a massive success, and soon all game developers supported 3Dfx's proprietary 3D API, "Glide". Then all the other "conventional" big players in graphics hardware, like ATI, nVidia, and Matrox, started implementing similar features in their video cards. Microsoft introduced Direct3D to offer a uniform interface to consumer 3D rendering, and video card manufacturers even started to support OpenGL. 3Dfx and their proprietary API slowly faded away.

    My best guess is that this is going to repeat. AGEIA has now done what 3Dfx did: introduced a dedicated hardware chip for something that until now has been done in software. They even have their own proprietary physics API. Soon ATI and nVidia will incorporate similar features into their GPUs, and Microsoft will create a brand-new DirectX subsystem called DirectPhysics. And AGEIA will slowly fade away (if they don't learn from 3Dfx's mistakes).

"Gravitation cannot be held responsible for people falling in love." -- Albert Einstein

Working...