NVIDIA Launches New SLI Physics Technology

Posted by ScuttleMonkey
from the volunteer-rendering dept.
Thomas Hines writes "NVIDIA just launched a new SLI Physics technology. It offloads the physics processing from the CPU to the graphics card. According to the benchmark, it improves the frame rate by more than 10x. Certainly worth investing in SLI if it works."

Comments Filter:
  • by Anonymous Coward on Monday March 20, 2006 @04:11PM (#14959335)
    This physics system is used for visual physics (i.e., realistic graphical effects), not gameplay physics, which are still done on the CPU.

    Therefore you get a 10x framerate increase over running massively intensive effects on the CPU.

    This is good, because games will look nicer. But if you don't have the GPU grunt, you can simply disable the effects (or cut them down) in game - it won't affect the gameplay.
  • SLI? (Score:5, Insightful)

    by temojen (678985) on Monday March 20, 2006 @04:11PM (#14959339) Journal
    Why does this require SLI? You can do stream processing on most relatively-modern accelerated 3d video cards.
  • by Hortensia Patel (101296) on Monday March 20, 2006 @04:14PM (#14959374)
    I don't think this is a general physics processor. It seems to be aimed at "eyecandy" physics calculations - mostly particle systems - whose results don't need to feed back into application logic. Which makes sense, given that GPU->CPU readbacks are a notorious performance killer.

    Potentially shiny, but not really revolutionary or new. People have been doing particle system updates with shaders for a while now.
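
    To make the "eyecandy physics" point concrete, here is a minimal sketch in plain C of the per-particle update such a shader performs. The gravity and damping constants are made up; on a GPU the same few lines would run once per particle in parallel, with positions and velocities kept in textures so nothing needs to be read back:

    /* Illustrative only: one "eye candy" particle, no feedback into game logic. */
    typedef struct { float px, py, pz; float vx, vy, vz; } Particle;

    /* The step a shader would run for every particle in parallel. */
    static void update_particle(Particle *p, float dt)
    {
        const float gravity = -9.8f, damping = 0.995f;   /* invented values */
        p->vy += gravity * dt;                           /* accelerate downward */
        p->vx *= damping; p->vy *= damping; p->vz *= damping;
        p->px += p->vx * dt;                             /* integrate position */
        p->py += p->vy * dt;
        p->pz += p->vz * dt;
        if (p->py < 0.0f) {                              /* cheap bounce off the ground */
            p->py = 0.0f;
            p->vy *= -0.5f;
        }
    }

    /* On the CPU this is a serial loop; on the GPU each particle is a fragment. */
    static void update_system(Particle *ps, int count, float dt)
    {
        for (int i = 0; i < count; ++i)
            update_particle(&ps[i], dt);
    }

    int main(void)
    {
        Particle burst[256] = { { 0 } };                 /* 256 particles at the origin */
        for (int frame = 0; frame < 120; ++frame)        /* two seconds at 60 Hz */
            update_system(burst, 256, 1.0f / 60.0f);
        return 0;
    }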
  • by GundamFan (848341) on Monday March 20, 2006 @04:14PM (#14959376)
    Competition is good!

    If ATI was out of business, do you think nVidia would ever innovate again?

    A monopoly is always bad for the consumer... this is one of the reasons socialism doesn't work.
  • by TheSkepticalOptimist (898384) on Monday March 20, 2006 @04:18PM (#14959410)
    By offloading physics from the CPU to the graphics card, this improves frame rates?

    Why would I waste precious GPU processing to process Physics? I mean, all the CPU does these days is handle AI, physics, and texture loading. If you offload physics to the GPU, then the CPU is doing less and you're swamping the GPU with more work.

    If it does increase frame rates, then I would suggest why not improve graphics rendering rather than physics processing. I find that for all the advances nVidia and ATI have made over the years, 3D gaming visual quality is still inferior to cinematic quality 3D rendering. I mean, playing F.E.A.R., a relatively new game on the market, with ALL the settings at maximum, I get 12 FPS and the image quality still isn't that great on a current-generation card.

    I would prefer if nVidia and ATI actually focused on bringing cinematic quality 3D rendering to gaming, instead of just claiming they do. I want smooth high-poly models with realistic lighting and 60fps. I couldn't care less about a game running at 120fps that looks bad. All 3D games suffer from a kind of mundane pseudo style of 3D modeling that leaves relatively well designed models playing in big rectangles with high-res texture cheats. Give me more lush, organic environments. Bring NURBS into the mix by putting actual curved surfaces into real-time 3D rendering instead of just lots of triangles mimicking a curved surface.

    So, while nVidia may have its heart in the right place, the last thing people need is their GPU being taxed with physics processing. Isn't there supposed to be a physics add-on card entering the market soon anyway? Won't multi-core CPUs offer better physics performance than a single GPU? Instead of trying to compete against add-in cards and multi-core CPUs, nVidia should just focus on improving 3D rendering quality and actually start delivering on their promises of cinematic 3D rendering with each new generation of video card they hype.
  • PCI Express (Score:3, Insightful)

    by CastrTroy (595695) on Monday March 20, 2006 @04:19PM (#14959416) Homepage
    Why not have a complete physics card? It would be a nice use for that PCI Express bus, which only has video cards as an option right now. That way you could just buy the physics card without having to upgrade the video card. Although this is all kind of weird. If you start offloading everything to specialized cards, you pretty much have a multi-CPU machine where each CPU is specially tuned to do a specific type of processing. Might be the leap necessary to maintain Moore's law.
  • by Jerry Coffin (824726) on Monday March 20, 2006 @04:21PM (#14959434)
    A few years ago, they were being slammed for doing load balancing where they offloaded graphics processing onto the CPU when/if the CPU was less busy than the GPU. Now the GPUs are enough faster that they can frequently expect to be "ahead" of the CPU -- so now they're starting to work on doing the opposite, offloading work from the CPU to the GPU instead.

    Of course, the basic idea isn't exactly brand new -- some of us have been writing shaders to handle heavy duty math for a while. The difference is that up until now, most real support for this has been more or less experimental (e.g. the Brook system [stanford.edu] for doing numeric processing on GPUs). Brook is also sufficiently different from an average programming language that it's probably fairly difficult to put to use in quite a few situations.

    Having a physics-oriented framework will probably make this capability quite a bit easier to apply in quite a few more situations, which is clearly a good thing (especially for nVidia and perhaps ATI, of course).

    The part I find interesting is that Intel has taken a big part of the low-end graphics market. Now nVidia is working at taking more of the high-end computing market. I can see it now: a game that does all the computing on a couple of big nVidia chips, and then displays the results via Intel integrated graphics...

  • by 9mm Censor (705379) * on Monday March 20, 2006 @04:24PM (#14959457) Homepage
    Applications should be built to be more efficient, to handle modern hardware, instead of simply relying on consumers purchasing faster hardware.
  • Re:"Physics" (Score:5, Insightful)

    by Quaoar (614366) on Monday March 20, 2006 @04:26PM (#14959481)
    I dunno what company would release a game that needs to SOLVE ODEs on the fly... I imagine you'd solve the equations beforehand and put them in a nice form where all you need to do is multiply/add terms. If a company wants a cloak to behave realistically in their game, I'm sure they just find the proper coefficients in development, and all the game has to do is crunch the numbers on the fly.
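
    As a rough illustration of the "find the coefficients in development" approach (every number below is invented, not from any real game): a cloth-like flutter can be baked offline into a few sine modes, so the runtime cost per vertex is a handful of multiply/adds:

    #include <math.h>
    #include <stdio.h>

    /* Coefficients a real game would fit offline from a full simulation. */
    #define MODES 3
    static const float amp[MODES]   = { 0.30f, 0.12f, 0.05f };
    static const float freq[MODES]  = { 1.7f, 3.1f, 5.3f };   /* rad/s */
    static const float phase[MODES] = { 0.0f, 0.8f, 2.1f };

    /* Runtime evaluation: multiply/adds plus sin calls (which could be table lookups). */
    static float cloth_offset(float t)
    {
        float y = 0.0f;
        for (int i = 0; i < MODES; ++i)
            y += amp[i] * sinf(freq[i] * t + phase[i]);
        return y;
    }

    int main(void)
    {
        for (float t = 0.0f; t < 1.0f; t += 0.25f)
            printf("t=%.2f  offset=%.3f\n", t, cloth_offset(t));
        return 0;
    }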
  • by geekoid (135745) <dadinportland@ya[ ].com ['hoo' in gap]> on Monday March 20, 2006 @04:27PM (#14959486) Homepage Journal
    "A monopoly is always bad for the consumer... this is one of the reasons socalism doesn't work."

    You can have a socialist government, and market competition.
    The USSR's "implementation" of socialism was flawed. Don't get that confused with actual socialism.
  • by lbrandy (923907) on Monday March 20, 2006 @04:30PM (#14959514)
    By offloading physics from the CPU to the graphics card, this improves frame rates?

    Yes. Why does that surprise you? When you do incredibly complicated physics simulation, things can be very parallel and consequently GPUs outperform CPUs.

    Why would I waste precious GPU processing to process Physics? I mean, all the CPU does these days is handle AI, physics, and texture loading. If you offload physics to the GPU, then the CPU is doing less and you're swamping the GPU with more work.

    You seem to be under the impression that your GPU cycles are more important than your CPU cycles. This is done with SLI for a reason.

    If it does increase frame rates, then I would suggest why not improve graphics rendering rather than physics processing.

    Because the quality of the render is controlled in software? Because hardware is currently limited by, ya know, physics and technology?

    I find that for all the advances nVidia and ATI have made over the years, 3D gaming visual quality is still inferior to cinematic quality 3D rendering.

    And in other news, offline processing is still more powerful than online processing. There's a shocker.

    I would prefer if nVidia and ATI actually focused on bringing cinematic quality 3D rendering to gaming, instead of just claiming they do.

    First of all, 99.9% of what nVidia and ATI do is exactly that. They are also starting to realize that the GPU paradigm, with minor modification, can be turned into a very powerful co-processor... and they are the experts at creating those types of chips. The market for them is growing... and they don't want to miss the boat.

    I want smooth high-poly models with realistic lighting and 60fps.

    And I want peace in the middle east. Give it 10 years, one of us may get our wish.
  • Re:Competition (Score:3, Insightful)

    by LLuthor (909583) <lexington.luthor@gmail.com> on Monday March 20, 2006 @04:38PM (#14959577)
    Many, many people already have a capable GPU and would only need a driver/software update.

    The PhysX card is considerably more cumbersome to use for the average gamer, and is consequently less likely to be supported by game developers. Not to mention the fact that the cards are likely to be quite expensive.
  • by l33t-gu3lph1t3 (567059) <arch_angel16@nosPam.hotmail.com> on Monday March 20, 2006 @04:39PM (#14959584) Homepage
    Real-time cinematic quality graphics rendering = HARD.
    Physics acceleration that allows for rather impressive collisions and water: MUCH EASIER.

    Maximum output for minimum input. Having physics acceleration in the GPU makes sense as you don't have to buy an extra accelerator card.
  • by Hektor_Troy (262592) on Monday March 20, 2006 @04:41PM (#14959600)
    I want smooth high-poly models with realistic lighting and 60fps.

    And I want peace in the middle east. Give it 10 years, one of us may get our wish.
    Well, compared to 10 years ago, we probably HAVE cinematic quality rendering in games, and we definitely have smooth high-poly models with realistic lighting and 60 fps. Trouble is that apart from 60 fps, everything else in that statement is a constantly moving target.
  • by TomorrowPlusX (571956) on Monday March 20, 2006 @04:43PM (#14959608)
    I can't read the article since it's slashdotted, but here's what I want to know:

    First, what physics API are they using? This is, after all, a little like OpenGL vs DirectX. You need a physics API to do this stuff, and there are a *lot* of portable, high-quality APIs out there. Havok, Newton, Aegeia (spelling?), and the open source ODE ( which I use ). The APIs aren't interchangeable, and aren't necessarily free.

    Second, at least when I'm doing this work, there's a *lot* of back and forth between the physics and my game engine. Maybe not a whole lot of data, but a lot of callbacks -- a lot of situations where the collision system determines A & B are about to touch and has to ask my code what to do about it. And my code has to do some hairy stuff to forward these events to A & B ( since physics engines have their own idea of what a physical object instance is, and it's orthogonal to my game objects, so I have to have some container and void ptr mojo -- see the sketch after this comment ) and so on and so forth. If all this is running on the GPU, sure the math may be fast, but I worry about all the stalls resulting from the back and forth. Sure, that can be parallelized and the callbacks can be queued, but still.

    Anyway, I want info, not marketing.

    Oh christ, and finally, I work on a Mac. When will I see support? ( lol. this is me falling off my chair, crying and laughing, crying... sobbing. Because I know the answer ). Can we at least assume/hope that they'll provide a software fallback API, and that that API will be available for Linux and Mac? After all, NVIDIA has Linux and Mac ports of Cg, so why not this? I'm keeping my fingers crossed.
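
    For reference, here is a minimal sketch of the "container and void ptr mojo" pattern described above, using the open-source ODE API. The d*() calls are real ODE functions; GameObject and onTouch are hypothetical stand-ins for engine-side code:

    #include <ode/ode.h>

    /* Hypothetical engine object, stashed on each geom via ODE's user-data slot. */
    typedef struct GameObject {
        const char *name;
        void (*onTouch)(struct GameObject *self, struct GameObject *other);
    } GameObject;

    /* Called by dSpaceCollide() for each pair of geoms that may be touching. */
    static void nearCallback(void *data, dGeomID o1, dGeomID o2)
    {
        (void)data;
        GameObject *a = (GameObject *)dGeomGetData(o1);   /* recover the game objects */
        GameObject *b = (GameObject *)dGeomGetData(o2);

        dContact contact;                                 /* one contact is enough here */
        if (dCollide(o1, o2, 1, &contact.geom, sizeof(dContact)) > 0) {
            if (a && a->onTouch) a->onTouch(a, b);        /* forward the event */
            if (b && b->onTouch) b->onTouch(b, a);
            /* a real engine would also create contact joints here */
        }
    }

    /* At setup:   dGeomSetData(geom, myGameObject);
       Per frame:  dSpaceCollide(space, NULL, &nearCallback);  */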
  • Re:10x faster? (Score:3, Insightful)

    by richdun (672214) on Monday March 20, 2006 @04:47PM (#14959634)
    The GPU may be 10x faster at physics calculations, but the summary claims framerate improvements of 10x - so how realistic is something like 600 fps? Ridiculous, even if you had a monitor/graphics system capable of 600 refreshes per second.
  • by temojen (678985) on Monday March 20, 2006 @04:47PM (#14959637) Journal
    Non-root, user-level access to IO ports (by authorized programs) is not evil; it's what allows non-kernel level display servers. It keeps some really complicated stuff out of the kernel, thus improving system stability.
  • Liars (Score:2, Insightful)

    by nnnneedles (216864) on Monday March 20, 2006 @04:48PM (#14959641)
    "..improves the frame rate by more than 10x"

    Liars end up in Hell.
  • by Anonymous Coward on Monday March 20, 2006 @04:57PM (#14959707)
    This has little to do with gaming, where the fastest known algorithm to perform any particular task is usually chosen. If you want really pretty graphics, you're going to need fast hardware.
  • by zippthorne (748122) on Monday March 20, 2006 @05:12PM (#14959828) Journal
    For a game, the best way to solve ODEs is numerically. Since you don't need the precision of the exact solution, the solutions are considerably simpler computationally once you've linearized them. Doing RK4 on the fly is precisely the best solution to the problem. Well, depending on the stiffness... but you can always fall back on the plain ol' trapezoid rule if you just wanna know "what does the thing do until it hits the ground" to enough precision to be pretty.

    Solving a linearized ODE is just plain ol' ordinary matrix math, very parallelizable and a lot less computationally expensive than breaking up a transcendental function into piecewise continuous steps and calculating the result every time.
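
    A minimal sketch of what "RK4 on the fly" looks like for the "until it hits the ground" case, in plain C; the drag constant and starting state are made up for illustration:

    #include <math.h>
    #include <stdio.h>

    typedef struct { double y, v; } State;          /* height and vertical velocity */

    /* dy/dt = v,  dv/dt = -g - k*v*|v|  (gravity plus quadratic drag; k is invented). */
    static State deriv(State s)
    {
        const double g = 9.81, k = 0.05;
        State d = { s.v, -g - k * s.v * fabs(s.v) };
        return d;
    }

    /* One classic fourth-order Runge-Kutta step of size h. */
    static State rk4_step(State s, double h)
    {
        State k1 = deriv(s);
        State s2 = { s.y + 0.5 * h * k1.y, s.v + 0.5 * h * k1.v };
        State k2 = deriv(s2);
        State s3 = { s.y + 0.5 * h * k2.y, s.v + 0.5 * h * k2.v };
        State k3 = deriv(s3);
        State s4 = { s.y + h * k3.y, s.v + h * k3.v };
        State k4 = deriv(s4);
        State out = { s.y + h / 6.0 * (k1.y + 2 * k2.y + 2 * k3.y + k4.y),
                      s.v + h / 6.0 * (k1.v + 2 * k2.v + 2 * k3.v + k4.v) };
        return out;
    }

    int main(void)
    {
        State s = { 10.0, 5.0 };                     /* thrown upward from 10 m */
        double t = 0.0, h = 1.0 / 60.0;              /* one step per 60 Hz frame */
        while (s.y > 0.0) { s = rk4_step(s, h); t += h; }
        printf("hits the ground after about %.2f s at %.1f m/s\n", t, s.v);
        return 0;
    }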
  • by Jherek Carnelian (831679) on Monday March 20, 2006 @05:14PM (#14959838)
    The guys over at http://www.gpgpu.org/ [gpgpu.org] have been doing various math calculations, including 'physics' on GPUs for a while now. One big problem is that the only real API is OpenGL. So not only do you have to be a smart math programmer (which is pretty rare to begin with) but you also have to understand graphics programming too and then figure out how to map traditional math operations onto the graphics operations that OpenGL makes available. It isn't that hard to do simple things like matrix math, but trying to really optimize it for really good performance requires almost wizard-level understanding of OpenGL and the underlying hardware implementation.

    The cards' math capabilities would be so much more accessible (and thus used by so many more programmers) if Nvidia (and ATI) would come out with standard math-library interfaces to their cards. Give us something that looks like FFTW and has been tweaked by the card engineers for maximum performance, and then we will see everybody and his brother using these video cards for math co-processing.
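
    Something in the spirit of what's being asked for -- purely hypothetical: the function name and signature below are made up, and the body is a trivial CPU stand-in for whatever the driver would actually run on the card:

    #include <stdio.h>

    /* Hypothetical vendor call: y = alpha*x + y, tuned for the GPU under the hood. */
    static void gpu_saxpy(float alpha, const float *x, float *y, int n)
    {
        for (int i = 0; i < n; ++i)
            y[i] = alpha * x[i] + y[i];              /* CPU stand-in for the real thing */
    }

    int main(void)
    {
        float x[4] = { 1, 2, 3, 4 }, y[4] = { 10, 20, 30, 40 };
        gpu_saxpy(2.0f, x, y, 4);                    /* what callers would actually write */
        for (int i = 0; i < 4; ++i)
            printf("%g ", y[i]);                     /* prints 12 24 36 48 */
        printf("\n");
        return 0;
    }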
  • by Anonymous Coward on Monday March 20, 2006 @05:52PM (#14960169)
    The only socialistic countries that seem to be surviving show very little in the way of technical progression.


    Finland is:

    1) a socialist country.
    2) the home of Linus Torvalds.
    3) the home of Nokia.

    Do stop spouting your ridiculous American propaganda. Socialism works, and most of the world uses it.
  • by fallen1 (230220) on Monday March 20, 2006 @05:53PM (#14960174) Homepage
    While I hate to ride a horse into the ground and then feed off its bones, every time I hear something like this happening I immediately think "Amiga". Why? I would guess that it is because the Amiga had a CPU and then it had dedicated chips to handle other functions - math, graphics, sounds, etc. This arrangement created a computer system that did not get surpassed for MANY years after its demise and, some would say, it still hasn't been bested in many areas (multi-tasking is one of those).

    Each time I hear that an "advance" has been made and I read that it is basically re-integrating various components back into the primary system, or tying those components tighter to the CPU, I can't help but scream "AMIGA!" Of course, this leads to co-workers walking wider paths around me while avoiding eye contact ;-).

    Still, all of these advances lead me to believe that we might be going back to a dedicated-chip style of computing, BUT what I am also hoping for is a completely upgradeable system where I can pull the, say, physics processor out and plug a newer or better chip in without having to replace the entire motherboard or daughterboard. Which, of course, leads me right back to that whole screaming scenario :) The Amiga style of computing may yet live again.

  • by pclminion (145572) on Monday March 20, 2006 @07:04PM (#14960669)
    BF2 for example. This game is ALL physics... I love it personally, and one of the coolest (and crappy) things is when you get shelled by artillery 500 yards into the air. Your limbs are flying everywhere and the FPS decrease is noticeable when there are 10 or so soldiers all ragdolled in the air (i'm sure particles have a lot to do with this also, but IAN a FPS programmer).

    Uh, that's highly unlikely. The physics of a flying body is no more difficult to compute than the physics of a running body. "Particle systems" are not the reason for the slowdown; more likely, it has to do with the fact that a player at high altitude can see a LOT of the game world, and therefore more packets have to be sent in order to maintain a consistent view as the player flies through the air.

  • by BewireNomali (618969) on Monday March 20, 2006 @07:49PM (#14960935)
    I agree with you. But the same goes for the US. During the Cold War we went from scratch to repeated Moon landings in ten years. Not so now. My point is that democracy doesn't have an inherent monopoly on innovation.
  • by Armagguedes (873270) on Monday March 20, 2006 @09:10PM (#14961243) Homepage Journal
    Didn't a company called Ageia (?) design a PCI-express addon (or PCIx or wtv) that was basically a separate chip completely dedicated to physics calculations (ragdoll thingies and that sort of stuff)?

    In fact, wasn't the PS3 supposed to have said chip from Ageia (or wtv)?

    This would be cool, but I wonder how many would actually flock to it. If it's cheap enough (~40), it would probably lead developers to assume its existence; if not, they'll default to using good old ix86.
  • by zippthorne (748122) on Tuesday March 21, 2006 @12:28AM (#14961957) Journal
    um... the 3+ body problem for an "asteroid" game comes to mind, realistic damping due to wind resistance, the same but for weather, realistic-looking vehicle suspension, stinger-missile simulation, aerodynamic simulation, realistic-looking water/weather...

    Pretty much anywhere that the underlying equation is more complicated than a simple spherical potential.

    Sure, you *could* hard code it in, but if the analytical result is a function like exp(-x^2)cos(x), you're going to have to include quite a few higher order terms to evaluate it to any precision, and you lose any advantage you had if the ODE could've been solved with a few simple multiplications/additions.

    If all you care about is, "given state f({x},t=n) what is state f({x},t=n+1)" a numerical ODE solver is pretty much ideal for that.
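
    A small worked example of that trade-off, using an ODE whose exact solution is exp(-t^2)cos(t) (namely x'' + 4t*x' + (4t^2 + 3)*x = 0 with x(0)=1, x'(0)=0): the closed form costs an exp() and a cos() per evaluation, while stepping the ODE each frame is only multiply/adds:

    #include <math.h>
    #include <stdio.h>

    /* Acceleration for x'' = -(4t^2 + 3)x - 4t x'. */
    static double acc(double t, double x, double v)
    {
        return -(4.0 * t * t + 3.0) * x - 4.0 * t * v;
    }

    int main(void)
    {
        const double h = 1.0 / 60.0;                 /* one 60 Hz frame per step */
        double x = 1.0, v = 0.0, t = 0.0;

        /* Explicit trapezoid (Heun) stepping: multiply/adds only, each frame. */
        while (t < 1.0) {
            double dx1 = v,                  dv1 = acc(t, x, v);
            double xp  = x + h * dx1,        vp  = v + h * dv1;
            double dx2 = vp,                 dv2 = acc(t + h, xp, vp);
            x += 0.5 * h * (dx1 + dx2);
            v += 0.5 * h * (dv1 + dv2);
            t += h;
        }

        /* Closed form at the same time, for comparison: exp() and cos() per call. */
        printf("stepped: %.4f   closed form: %.4f\n", x, exp(-t * t) * cos(t));
        return 0;
    }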
  • by xactoguy (555443) on Tuesday March 21, 2006 @03:58AM (#14962416)
    ... Most of you didn't get the point. It's not that you can access the GPU from userland (it depends on that access, but that's not the point). The main point is that the current gen of programmable GPUs allow you to (theoretically) directly access kernel memory, as pointed out later in the thread by Theo:


    > Are these new programable cards capable of reading main memory, which
    > OpenBSD would not be able to prevent if machdep.allowaperture were
    > set to something other than 0?

    Yes, they have DMA engines. If the privilege seperate X server has a
    bug, it can still wiggle the IO registers of the card to do DMA to
    physical addresses, entirely bypassing system security.


    Thus, a resourceful attacker could theoretically get access to kernel memory through anything which allows access to the video card. An unusual and probably difficult-to-exploit hole, but a possible hole nonetheless.
