NVIDIA Launches New SLI Physics Technology
Thomas Hines writes "NVIDIA just launched a new SLI Physics technology. It offloads the physics processing from the CPU to the graphics card. According to the benchmark, it improves the frame rate by more than 10x. Certainly worth investing in SLI if it works."
Improves framerate by 10x (Score:5, Insightful)
Therefore you get a 10x framerate increase over running massively intensive effects on the CPU.
This is good, because games will look nicer. But if you don't have the GPU grunt, you can simply disable the effects (or scale them down) in game - it won't affect the gameplay.
SLI? (Score:5, Insightful)
Before people get too excited... (Score:3, Insightful)
Potentially shiny, but not really revolutionary or new. People have been doing particle system updates with shaders for a while now.
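For anyone who hasn't seen it done, here's a minimal sketch of the idea -- written as a CUDA-style kernel rather than a pixel shader, with names of my own invention, so treat it as illustration rather than anything NVIDIA ships:

    // One thread per particle; integrate velocity then position.
    // Purely illustrative -- not from any shipping API.
    __global__ void updateParticles(float3 *pos, float3 *vel, int n, float dt)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        // Constant gravity; a real effect would add wind, drag, etc.
        vel[i].y += -9.8f * dt;

        // Semi-implicit Euler: the updated velocity moves the particle.
        pos[i].x += vel[i].x * dt;
        pos[i].y += vel[i].y * dt;
        pos[i].z += vel[i].z * dt;
    }

The particle data never leaves the card, which is exactly why it's fast -- and exactly why it's been limited to eye-candy effects that the gameplay never needs to read back.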
Re:You know what... (Score:2, Insightful)
If ATI were out of business, do you think nVidia would ever innovate again?
A monopoly is always bad for the consumer... this is one of the reasons socialism doesn't work.
I don't understand? (Score:1, Insightful)
Why would I waste precious GPU processing to process physics? I mean, all the CPU does these days is handle AI, physics, and texture loading. If you offload physics to the GPU, then the CPU is doing less and you're swamping the GPU with more work.
If it does increase frame rates, then I would suggest improving graphics rendering rather than physics processing. I find that for all the advances nVidia and ATI have made over the years, 3D gaming visual quality is still inferior to cinematic quality 3D rendering. I mean, playing F.E.A.R., a relatively new game on the market, with ALL the settings at maximum, I get 12 FPS and the image quality just isn't that great on a current generation card.
I would prefer if nVidia and ATI actually focused on bringing cinematic quality 3D rendering to gaming, instead of just claiming they do. I want smooth high-poly models with realistic lighting and 60fps. I couldn't care less about a game running at 120fps that looks bad. All 3D games suffer from a kind of mundane pseudo-style of 3D modeling that leaves relatively well-designed models playing in big rectangles with high-res texture cheats. Give me more lush, organic environments. Bring NURBS into the mix by putting actual curved surfaces into real-time 3D rendering instead of just lots of triangles mimicking a curved surface.
So, while nVidia may have its heart in the right place, the last thing people need is their GPU being taxed with physics processing. Isn't there supposed to be a physics add-on card entering the market soon anyway? Won't multi-core CPUs offer better physics performance than a single GPU? Instead of trying to compete against add-in cards and multi-core CPUs, nVidia should just focus on improving 3D rendering quality and actually start delivering on the promises of cinematic 3D rendering it makes with each new generation of video card it hypes.
PCI Express (Score:3, Insightful)
Just more load balancing (Score:5, Insightful)
Of course, the basic idea isn't exactly brand new -- some of us have been writing shaders to handle heavy duty math for a while. The difference is that up until now, most real support for this has been more or less experimental (e.g. the Brook system [stanford.edu] for doing numeric processing on GPUs). Brook is also sufficiently different from an average programming language that it's probably fairly difficult to put to use in quite a few situations.
Having a physics-oriented framework will probably make this capability quite a bit easier to apply in quite a few more situations, which is clearly a good thing (especially for nVidia and perhaps ATI, of course).
The part I find interesting is that Intel has taken a big part of the low-end graphics market. Now nVidia is working on taking more of the high-end computing market. I can see it now: a game that does all the computing on a couple of big nVidia chips, and then displays the results via Intel integrated graphics...
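For reference, the sort of "heavy duty math" I mean is anything that looks like a stream operation. Here's the canonical example (SAXPY), written as a CUDA-style kernel with my own names, since Brook's actual stream syntax is a bit much for a comment:

    // y = a*x + y over n elements: one thread per element,
    // no communication between threads -- ideal GPU work.
    __global__ void saxpy(int n, float a, const float *x, float *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

Anything you can phrase that way (BLAS operations, FFTs, particle updates) is a candidate for the card.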
Hardware is not the only performance answer (Score:2, Insightful)
Re:"Physics" (Score:5, Insightful)
Re:You know what... (Score:5, Insightful)
You can have a socialist government, and market competition.
The USSR's "implementation" of socialism was flawed. Don't get that confused with actual socialism.
Re:I don't understand? (Score:5, Insightful)
Yes. Why does that surprise you? When you do incredibly complicated physics simulations, the work can be very parallel, and consequently GPUs outperform CPUs.
Why would I waste precious GPU processing to process physics? I mean, all the CPU does these days is handle AI, physics, and texture loading. If you offload physics to the GPU, then the CPU is doing less and you're swamping the GPU with more work.
You seem to be under the impression that your GPU cycles are more important than your CPU cycles. This is done with SLI for a reason.
If it does increase frame rates, then I would suggest improving graphics rendering rather than physics processing.
Because the quality of the render is controlled in software? Because hardware is currently limited by, ya know, physics and technology?
I find that for all the advances nVidia and ATI have made over the years, 3D gaming visual quality is still inferior to cinematic quality 3D rendering.
And in other news, offline processing is still more powerful than online processing. There's a shocker.
I would prefer if nVidia and ATI actually focused on bringing cinematic quality 3D rendering to gaming, instead of just claiming they do.
First of all, 99.9% of what nVidia and ATI do is exactly that. They are also starting to realize that the GPU paradigm, with minor modification, can be turned into a very powerful co-processor... and they are the experts at creating those types of chips. The market for them is growing... and they don't want to miss the boat.
I want smooth high-poly models with realistic lighting and 60fps.
And I want peace in the Middle East. Give it 10 years; one of us may get our wish.
Re:Competition (Score:3, Insightful)
The physX card is considerably more cumbersome to use for the average gamer, and is consequently less likely to be supported by game developers. Not to mention the fact that the cards are likely to be quite expensive.
well, they COULD but... (Score:5, Insightful)
Physics acceleration that allows for rather impressive collisions and water: MUCH EASIER.
Maximum output for minimum input. Having physics acceleration in the GPU makes sense as you don't have to buy an extra accelerator card.
Re:I don't understand? (Score:4, Insightful)
Can't read the article... (Score:3, Insightful)
First, what physics API are they using? This is, after all, a little like OpenGL vs DirectX. You need a physics API to do this stuff, and there are a *lot* of portable, high quality APIs out there: Havok, Newton, Ageia, and the open source ODE (which I use). The APIs aren't interchangeable, and aren't necessarily free.
Second, at least when I'm doing this work, there's a *lot* of back and forth between the physics and my game engine. Maybe not a whole lot of data, but a lot of callbacks -- a lot of situations where the collision system determines A & B are about to touch and has to ask my code what to do about it. And my code has to do some hairy stuff to forward these events to A & B (since physics engines have their own idea of what a physical object instance is, and it's orthogonal to my game objects, so I have to have some container and void ptr mojo) and so on and so forth. If all this is running on the GPU, sure, the math may be fast, but I worry about all the stalls resulting from the back and forth. Sure, that can be parallelized and the callbacks can be queued, but still.
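To make the "void ptr mojo" concrete, here's roughly the shape of my ODE glue (heavily simplified from real code; GameObject and onTouch are my own stand-ins, and a real handler would also create contact joints and recurse into sub-spaces):

    #include <ode/ode.h>

    struct GameObject {
        virtual void onTouch(GameObject *other) = 0;
    };

    // ODE hands back opaque geoms; we stash a pointer to our own
    // game object in each geom's user-data slot (via dGeomSetData)
    // so the callback can find its way home.
    static void nearCallback(void *data, dGeomID o1, dGeomID o2)
    {
        dContactGeom contacts[8];
        int n = dCollide(o1, o2, 8, contacts, sizeof(dContactGeom));
        if (n == 0) return;

        GameObject *a = static_cast<GameObject *>(dGeomGetData(o1));
        GameObject *b = static_cast<GameObject *>(dGeomGetData(o2));

        // Forward the physics event back into game code.
        if (a && b) {
            a->onTouch(b);
            b->onTouch(a);
        }
    }

    // Per frame: the broadphase fires nearCallback for each
    // candidate pair, then the world takes a step.
    void stepPhysics(dSpaceID space, dWorldID world, float dt)
    {
        dSpaceCollide(space, 0, &nearCallback);
        dWorldQuickStep(world, dt);
    }

Every one of those callbacks is a potential GPU round trip if the solver lives on the card. That's the part I want answered.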
Anyway, I want info, not marketing.
Oh christ, and finally, I work on a Mac. When will I see support? (lol. this is me falling off my chair, crying and laughing, crying... sobbing. Because I know the answer.) Can we at least assume/hope that they'll provide a software fallback API, and that that API will be available for Linux and Mac? After all, NVIDIA has Linux and Mac ports of Cg, so why not this? I'm keeping my fingers crossed.
Re:10x faster? (Score:3, Insightful)
Re:The PURE EVIL contained in modern graphics card (Score:3, Insightful)
Liars (Score:2, Insightful)
Liars end up in Hell.
Re:Hardware is not the only performance answer (Score:1, Insightful)
That's exactly what you wanna do (Score:3, Insightful)
Solving a linearized ODE is just plain ol' ordinary matrix math: very parallelizable, and a lot less computationally expensive than breaking a transcendental function up into piecewise continuous steps and calculating the result every time.
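In symbols (my notation, nothing from TFA): for a linearized system $\dot{x} = Ax$, a forward Euler step of size $h$ is just

    $x_{n+1} = (I + hA)\,x_n$

i.e. one matrix-vector multiply per step, versus summing something like $e^{hA} = I + hA + (hA)^2/2! + \cdots$ out to enough terms every single time you want the "exact" transcendental answer.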
Forget 'physics' - give me a good math API (Score:5, Insightful)
The cards' math capabilities would be so much more accessible (and thus used by so many more programmers) if Nvidia (and ATI) would come out with standard math-library interfaces to their cards. Give us something that looks like FFTW and has been tweaked by the card engineers for maximum performance, and then we will see everybody and his brother using these video cards for math co-processing.
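Something with the shape of FFTW's plan/execute interface would be plenty. To be clear, every name below is hypothetical -- this is what I'm asking for, not anything that exists:

    /* Hypothetical vendor math library. Plan once so the driver can
     * pick the fastest kernels for this particular card... */
    typedef struct gpumathPlan gpumathPlan;

    gpumathPlan *gpumath_plan_fft_1d(int n, int direction);
    gpumathPlan *gpumath_plan_sgemm(int m, int n, int k);

    /* ...then execute the plan many times against user buffers. */
    int  gpumath_execute(gpumathPlan *plan, const float *in, float *out);
    void gpumath_destroy(gpumathPlan *plan);

Hide the shaders and the textures behind that, and ordinary numerics code could link against the card the way it links against FFTW or a BLAS today.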
Re:You know what... (Score:2, Insightful)
Finland is:
1) a socialist country.
2) the home of Linus Torvalds.
3) the home of Nokia.
Do stop spouting your ridiculous American propaganda. Socialism works, and most of the world uses it.
Old School is new again? (Score:5, Insightful)
Each time I hear that an "advance" has been made, and I read that it is basically re-integrating various components back into the primary system or tying those components more tightly to the CPU, I can't help but scream "AMIGA!" Of course, this leads to co-workers walking wider paths around me while avoiding eye contact ;-).
Still, all of these advances lead me to believe that we might be going back to a dedicated-chip style of computing, BUT what I am also hoping for is a completely upgradeable system where I can pull the, say, physics processor out and plug a newer version or better chip in without having to replace the entire motherboard or daughterboard. Which, of course, leads me right back to that whole screaming scenario :) The Amiga style of computing may yet live again.
Re:FPS are a perfect example of where this will se (Score:3, Insightful)
Uh, that's highly unlikely. The physics of a flying body is no more difficult to compute than the physics of a running body. "Particle systems" are not the reason for the slowdown; more likely, it has to do with the fact that a player at high altitude can see a LOT of the game world, and therefore more packets have to be sent in order to maintain a consistent view as the player flies through the air.
Re:You know what... (Score:4, Insightful)
A previously-announced physics processor unit? (Score:2, Insightful)
In fact, wasn't the PS3 supposed to have said chip from Ageia (or whatever)?
This would be cool, but I wonder how many would actually flock to it (if cheap enough (~40), then it would probably lead developers to assume its existence, and if not, to default to using good old ix86).
Re:That's exactly what you wanna do (Score:3, Insightful)
Pretty much anywhere that the underlying equation is more complicated than a simple spherical potential.
Sure, you *could* hard-code it in, but if the analytical result is a function like exp(-x^2)cos(x), you're going to have to include quite a few higher-order terms to evaluate it to any precision, and you lose any advantage you had if the ODE could've been solved with a few simple multiplications/additions.
If all you care about is, "given state f({x},t=n) what is state f({x},t=n+1)" a numerical ODE solver is pretty much ideal for that.
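i.e. per state variable, something like this -- a semi-implicit Euler step for a bank of independent oscillators, x'' = -k*x (my example, nothing vendor-specific):

    // Advance every oscillator from t=n to t=n+1 in parallel:
    // one thread per state, a couple of multiply-adds per step,
    // no cos/exp evaluation anywhere.
    __global__ void eulerStep(float *x, float *v, int n, float k, float dt)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        float a = -k * x[i];  // linear(ized) force
        v[i] += a * dt;
        x[i] += v[i] * dt;
    }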
While there are lots of funnies off of this... (Score:4, Insightful)
> Are these new programable cards capable of reading main memory, which
> OpenBSD would not be able to prevent if machdep.allowaperture were
> set to something other than 0?
Yes, they have DMA engines. If the privilege-separated X server has a bug, it can still wiggle the I/O registers of the card to do DMA to physical addresses, entirely bypassing system security.
Thus, a resourceful attacker theoretically could get access to kernel memory through anything which allows access to the video card. An unusual and probably difficult-to-exploit hole, but a possible hole nonetheless.
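For reference, the knob in question lives in /etc/sysctl.conf on OpenBSD; the cautious setting is to deny the X server aperture access entirely (at the cost of accelerated X):

    # /etc/sysctl.conf
    machdep.allowaperture=0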