PhysX Dedicated Physics Processor Explored

Ned_Network writes "Yahoo! News & Reuters has a story about a start-up that has created a dedicated physics processor for gamers' PCs. The processor takes over physics calculations from the CPU and is said to make gaming more realistic; examples such as falling rocks, exploding debris, and the way opponents collapse when you shoot them are cited as advantages of the chip. Only 6 current titles take advantage of the chip, but the FAQ claims that another 100 are in production."
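To make the offloading concrete, here is a minimal sketch (Python; illustrative names only, not Ageia's actual API) of the per-object integration work behind "falling rocks and exploding debris" - the kind of data-parallel busywork a dedicated chip would take off the CPU every frame.

```python
# Minimal sketch of the per-particle work a dedicated physics chip would
# offload: integrating positions and velocities for thousands of debris
# pieces every frame. Names are illustrative, not Ageia's actual API.
import numpy as np

GRAVITY = np.array([0.0, -9.81, 0.0])  # m/s^2
DT = 1.0 / 60.0                        # one 60 Hz frame

def step_debris(positions, velocities):
    """Advance every debris particle by one frame (semi-implicit Euler)."""
    velocities += GRAVITY * DT
    positions += velocities * DT
    # Crude ground-plane collision: clamp to y=0 and damp the bounce.
    below = positions[:, 1] < 0.0
    positions[below, 1] = 0.0
    velocities[below, 1] *= -0.4
    return positions, velocities

# 10,000 rocks thrown upward -- trivial for dedicated hardware,
# a noticeable per-frame cost for a 2006-era CPU.
pos = np.zeros((10_000, 3))
vel = np.random.uniform(-5.0, 5.0, (10_000, 3))
pos, vel = step_debris(pos, vel)
```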
  • by ignatz72 ( 891623 ) on Sunday April 30, 2006 @05:07PM (#15233476)
    From the article: "The consumers will see how the games behave better," Hegde said.

    But in the same article, they mention that the extra particles the processor generates swamp the DUAL-GPU setup he's got in a demo system. How many of you want to wager that the demo system is a hoss in its own right?

    Apparently this card isn't going to help those of us holding out with our Athlon XP AGP systems that perform fine on current gen games, if a current bleeding edge rig can't cut it. :(

    So now I have to plan for a quad AM2 CPU and a quad dual-SLI-chip GPU with 32 gigs of memory? Damnit all to hell...

    */me researches mortgage rates to subsidize next box-build*
  • by Babbster ( 107076 ) <aaronbabb&gmail,com> on Sunday April 30, 2006 @05:14PM (#15233515) Homepage
    I like the idea, too, though in practice I've got two big questions:
    1) Is it going to come down in price? Considering that "mid-range" GPUs are going for around $300, this card at $300 (okay, $299) represents a doubling of the cost to bring a gaming system "up to speed." Right now, with only one option, it's a one-time thing but we all know that if it's successful there will be upgrades.
    2) Is this really going to make a huge difference in a world where dual-core CPUs are becoming mainstream, and more cores are coming in the future? Is the performance advantage of their specially designed physics processor so important that, say, an eight-core CPU in 2008 couldn't perform similarly (given enough memory for the software engine), making the existing PhysX cards obsolete? (A sketch of that multi-core software path follows this comment.)

    Considering that one of the titles they tout - Ghost Recon for the Xbox 360 - already implements their technology in software (and they brag about how great it is there), I just don't think that this add-in card has any staying power.
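For contrast, here is a minimal sketch (Python; illustrative, not any shipping engine) of the "just use more cores" alternative the parent asks about: the same per-object integration work split across CPU worker threads.

```python
# Sketch of software physics spread across CPU cores: chunk the object
# arrays so each worker integrates its own slice. A real engine would do
# this in native code; names here are illustrative.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

DT = 1.0 / 60.0
GRAVITY = np.array([0.0, -9.81, 0.0])

def integrate_chunk(pos, vel):
    """Advance one slice of the particle arrays in place."""
    vel += GRAVITY * DT
    pos += vel * DT

def step_on_cores(positions, velocities, cores=8):
    # array_split returns views, so in-place updates hit the originals.
    p_chunks = np.array_split(positions, cores)
    v_chunks = np.array_split(velocities, cores)
    with ThreadPoolExecutor(max_workers=cores) as pool:
        list(pool.map(integrate_chunk, p_chunks, v_chunks))

pos = np.zeros((100_000, 3))
vel = np.random.uniform(-5.0, 5.0, (100_000, 3))
step_on_cores(pos, vel)
```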
  • by Anonymous Coward on Sunday April 30, 2006 @06:24PM (#15233778)
    "You might as well ditch the worthless x86 chip and link the PPU to the GPU..."

    Funny you should say that: one of my friends, a very senior engineer at NVidia, has been talking about the same thing for the past year or so. He says NVidia views the x86 chips that drive PC gaming systems as a worthless relic that it would like to make irrelevant, and wants PC game developers to essentially start writing their entire game engines on the GPU.

    He seems to be just gushing with excitement over what they are doing in partnership with Sony - it sounds like the PS3 is just the beginning.

    No wonder Microsoft went through all the trouble to switch to the more powerful PPC chips and ditched x86.

  • Scientific Research (Score:2, Interesting)

    by Bipedismaximus ( 713734 ) on Sunday April 30, 2006 @07:06PM (#15233954)
    So I understand this is for games, but could this help scientific research such as molecular dynamics or other physics simulations? What is the accuracy? What types of calculations can it speed up?
  • by Anonymous Coward on Sunday April 30, 2006 @07:37PM (#15234041)
    It's not simply a DSP. It's a fully programmable physics chip - which PROBABLY means it's a single-instruction, multiple-data type of chip (much like the programmable pixel shader logic in a GPU). (The sketch after this comment illustrates the SIMD idea.)

    This type of chip would be vastly superior to a standard CPU for calculating possible moves.

    Though, while it would help with chess move logic, it wouldn't help with Go logic.

    Computer Go is still vastly weaker and a much harder problem. Why I brought up Go, I have no idea.
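A minimal illustration of the SIMD guess above (Python, with NumPy standing in for hardware SIMD lanes; a real PPU or GPU would expose vendor-specific instructions): a single operation is issued once over many data elements instead of once per element.

```python
# "Single instruction, multiple data": one multiply-add applied across
# many lanes at once. NumPy stands in for hardware SIMD lanes here.
import numpy as np

# Scalar style: one multiply-add per loop iteration.
def scale_add_scalar(xs, a, b):
    return [a * x + b for x in xs]

# SIMD style: the same multiply-add issued once over the whole array.
def scale_add_simd(xs, a, b):
    return a * xs + b

xs = np.arange(1_000_000, dtype=np.float32)
assert np.allclose(scale_add_scalar(xs[:4].tolist(), 2.0, 1.0),
                   scale_add_simd(xs[:4], 2.0, 1.0))
```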
  • by kitsunewarlock ( 971818 ) on Sunday April 30, 2006 @07:54PM (#15234118) Journal
    Actually, I was thinking of Go when I read your post... then I saw the word and was like "wow".
    You are probably thinking of it since Go is pseudo-famous (among engineers who have attempted it, and in Japan) as a game that cannot easily be turned into a proper computer simulation. While chess has 20 opening moves, Go has... well, 12 decent ones, but statistically 361. Finding the variations in a game of Go would just... be impossible currently. It is commonly said that no game has ever been played twice. This may be true: on a 19×19 board there are about 3^361 × 0.012 ≈ 2.1×10^170 possible positions, most of which are the end result of about (120!)^2 ≈ 4.5×10^397 different (no-capture) games, for a total of about 9.3×10^567 games. Allowing captures gives as many as 10^(7.49×10^48).

    There are more Go games than theorized protons in the visible universe! (The arithmetic is checked in the sketch below.)
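The parent's figures check out with arbitrary-precision arithmetic (Python; the 1.2% legal-position fraction is taken from the parent, not derived here):

```python
# Checking the parent's Go numbers with Python's arbitrary-precision ints.
import math

positions = 3 ** 361                     # each point: empty, black, or white
legal = 0.012 * positions                # ~1.2% of raw positions are legal
print(f"legal positions ~ {legal:.1e}")                   # ~2.1e170

# (120!)^2 overflows floats, so work in log10.
log10_games_per_pos = 2 * sum(math.log10(k) for k in range(2, 121))
print(f"(120!)^2        ~ 10^{log10_games_per_pos:.1f}")  # ~10^397.7

total = math.log10(legal) + log10_games_per_pos
print(f"total games     ~ 10^{total:.1f}")                # ~10^568, i.e. ~9.3e567

# The with-captures bound, 10**(7.49e48), is far beyond direct evaluation.
```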
  • People DID say this with the first generation of dedicated 3D hardware chips, which is effectively why 3dfx went out of business: the installed base was too small for the benefit of reaching it to justify the developer cost.

    There are several things wrong with the Ageia business model:

    1) They mandate that you use THEIR physics engine in order to access the physics hardware - there is no low-level hardware API that any engine can access - so by supporting their hardware you give up well-known and stable physics engines such as Havok, ODE (for the open-source crowd), etc. for your games. This is a major issue from a development standpoint.

    2) The cost issue (which others have brought up). The added cost vs. the benefit of actually having these chips installed is simply too high for hardware vendors to see this as a worthwhile thing to add to machines. Currently Ageia is relying on the hardcore gamer crowd seeing this as something that MUST be supported by games, which is a bad way to go about things. Until they sign on a vendor like Dell or HP to actually build machines with these chips, it's a no-go for developers.

    ---------

    Re: 1) I've heard that Havok & Nvidia are partnering to create a video card with an extra dedicated physics processor on board - so instead of having a single GPU, you will be able to have a GPU and a PPU on a single card in your machine. This will bring the cost down significantly and actually make it worth supporting, both for the hardware vendors looking to build machines at the lowest cost and for developers - Nvidia's marketing muscle and existing OEM chain will guarantee that vendors actually build machines using their cards.

    As well, from their experience in the video world, I'm guessing that Nvidia's version will provide a low-level API for accessing the hardware, which any physics vendor can then support, instead of forcing developers to use one particular physics engine (whether it's Havok or otherwise). (A sketch of that layering follows this comment.)

    Until this happens, the concept of a dedicated processor for physics is inevitably going to go the way of 3dfx. Perhaps Ageia will be bought by ATI looking to create their own dedicated GPU / PPU combination, but otherwise I don't see it catching on.

    With dedicated 3D graphics, at least there are OTHER applications/reasons that a general mass-market consumer might want such a card - e.g. the Aero-style 3D-ish interfaces. With a physics processor, unless you are playing games that require it, it's an unnecessary add-on.
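A hypothetical sketch (Python; none of these names are real Ageia, Havok, or ODE interfaces) of the layering the parent describes: a thin low-level device API that any physics engine could target, instead of one vendor's engine being the only door to the hardware.

```python
# Hypothetical illustration of a low-level device API that any physics
# engine (Havok, ODE, ...) could target. Not a real vendor interface.
from abc import ABC, abstractmethod

class PhysicsDevice(ABC):
    """Low-level contract: upload bodies, step the sim, read results."""
    @abstractmethod
    def upload(self, bodies): ...
    @abstractmethod
    def step(self, dt): ...
    @abstractmethod
    def read_back(self): ...

class SoftwareDevice(PhysicsDevice):
    """Reference CPU backend; a PPU backend would implement the same API."""
    def upload(self, bodies):
        self.bodies = [dict(b) for b in bodies]
    def step(self, dt):
        for b in self.bodies:
            b["vy"] -= 9.81 * dt
            b["y"] += b["vy"] * dt
    def read_back(self):
        return self.bodies

# Any engine written against PhysicsDevice runs on whichever backend exists.
dev = SoftwareDevice()
dev.upload([{"y": 10.0, "vy": 0.0}])
dev.step(1.0 / 60.0)
print(dev.read_back())
```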
  • by ShyGuy91284 ( 701108 ) on Sunday April 30, 2006 @08:39PM (#15234274)
    The physics simulation needed for a variety of scientific problems has always demanded incredible processing power (such as the Earth Simulator). I'm wondering how accurate they can make this physics simulation, and whether it would work better than traditional CPU-powered methods. It makes me want to compare it to the GRAPE clusters used for some highly specialized force-calculation research (I know the University of Tokyo and the Rochester Institute of Technology have them).
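For reference, the kind of kernel GRAPE-style hardware accelerates is a brute-force O(N²) pairwise force sum, sketched here in Python (illustrative; whether a game PPU helps science hinges largely on numerical precision - gameplay physics tolerates single precision, scientific simulation often does not).

```python
# Brute-force O(N^2) pairwise gravitational forces, the kernel that
# GRAPE-style special-purpose hardware accelerates. Illustrative only.
import numpy as np

def gravity_forces(pos, mass, eps=1e-3):
    """Force on each body from all others, with softening length eps."""
    diff = pos[None, :, :] - pos[:, None, :]      # (N, N, 3) separations
    dist2 = (diff ** 2).sum(-1) + eps ** 2
    np.fill_diagonal(dist2, np.inf)               # no self-interaction
    inv_r3 = dist2 ** -1.5
    return (diff * inv_r3[..., None] * mass[None, :, None]).sum(axis=1)

pos = np.random.rand(256, 3)
forces = gravity_forces(pos, np.ones(256))
```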
  • by Anonymous Coward on Sunday April 30, 2006 @10:04PM (#15234528)
    Everyone knows that computer technology just gets better and better as time goes on, but your ISP is still stuck in the past while the execs go out and play a few rounds of golf. How do they expect to run these huge physics calculations over the internet in a massive game like, say, Battlefield 2? I honestly don't know the first thing about physics or how this stuff gets across a network, but Counter-Strike: Source doesn't even let you take advantage of the 5-6 physics barrels in a map, and even those barrels are rumoured to cause lag! What kind of effect would a realistically modeled house demolition have on network performance? Is our shitty bandwidth going to force us back to the gaming stone age on 8-player servers, with the only tradeoff being pretty physics to make up for the other 24 players?
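Some back-of-envelope arithmetic for that bandwidth worry (Python; all figures illustrative, not measured from any real game):

```python
# Naively streaming the state of every physics object to every player.
objects = 2_000            # debris pieces in a collapsing house
bytes_per_obj = 24         # position + orientation, quantized
tick_rate = 20             # server updates per second
players = 32

per_client = objects * bytes_per_obj * tick_rate * 8    # bits/s
print(f"per client: {per_client / 1e6:.1f} Mbit/s")     # ~7.7 Mbit/s
print(f"server out: {per_client * players / 1e6:.0f} Mbit/s")  # ~246 Mbit/s

# Which is why engines tend to keep such debris client-side (cosmetic,
# non-authoritative), or sync only the trigger/seed and let every client
# run the same deterministic simulation locally.
```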
  • by Qa1 ( 592969 ) on Monday May 01, 2006 @02:35AM (#15235272)

    Actually, Deep Blue was "Fast and Dumb": it could indeed search fast and thus foresee many moves ahead ("into the future"), but it didn't have a good sense of which moves were worth checking out. If there were 10 moves available in a position, DB would generally check all of them out. Which meant that:

    1. It wasted a lot of power calculating hopeless and downright stupid moves. That's especially evident when you consider the huge branching factor of exploring all moves in each position.
    2. It would make mistakes if the position required calculating beyond 20-30 moves ahead - i.e. making a strategic move, as opposed to tactical (short-range, immediate apparent profit) moves.
    3. Contrary to popular opinion, DB wasn't the best chess computer that could have been built at the time. It was the strongest chess-playing hardware ever created (at that point), but the software received very little attention; if you swapped the generic DB engine for a decent program on the same hardware, the result would be much stronger. In fact, you could substantially reduce the hardware and still get a stronger chess game with a better program. DB was dumber than even the dumbest professional-level "fast searcher" software.

    It's pretty evident that fast searching has reached its limits. The branching factor makes "more muscle" (as per the famous "brute force" method) pretty useless - each extra ply multiplies the work, as the sketch after this comment shows. The current top programs are the "smart searchers": Hiarcs especially (the epitome of a very wise, very "slow" program), and also Shredder [telia.com]. In fact, even the formerly "fast and dumb" programs need to be smarter than they used to be to remain competitive at the top of the computer chess league. But, as mentioned above, none of them was ever as dumb as the fastest, dumbest program ever: Deep Blue.
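To see why "more muscle" loses to the branching factor, here is a toy depth-limited negamax (Python; illustrative, not Deep Blue's actual algorithm). It visits on the order of b^d nodes, so every extra ply of lookahead multiplies the work by b.

```python
# Toy depth-limited negamax: visits ~b**d nodes, so each extra ply
# multiplies the work by the branching factor b (~35 in chess, ~250 in Go).
def negamax(state, depth, moves, apply_move, evaluate):
    if depth == 0:
        return evaluate(state)
    return max(-negamax(apply_move(state, m), depth - 1,
                        moves, apply_move, evaluate)
               for m in moves(state))

# Trivial "game": the state is a number, a move adds or subtracts 1.
best = negamax(0, 6,
               moves=lambda s: (-1, 1),
               apply_move=lambda s, m: s + m,
               evaluate=lambda s: s)
print("toy search result:", best)

# The real problem is the growth, not the search itself:
for b, d in [(35, 6), (35, 8), (250, 6)]:
    print(f"b={b:3d}, depth={d}: ~{b ** d:.1e} nodes")
```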

  • weird explosions (Score:3, Interesting)

    by john_uy ( 187459 ) on Monday May 01, 2006 @01:47PM (#15238661)
    I find the explosions weird: they seem to be in slow motion. Watching actual explosions on TV, you almost never see debris flying unless the footage is played frame by frame. It seems they are doing this for the wow factor instead - another way to milk money. I feel that the supposedly "real" explosions, blood spatter, and other violent and gory stuff don't match their real-world counterparts. (Of course I can't bash them too hard, because I have to consider the computing power required, etc.)
