NVIDIA Launches New SLI Physics Technology
Thomas Hines writes "NVIDIA just launched a new SLI Physics technology. It offloads the physics processing from the CPU to the graphics card. According to the benchmark, it improves the frame rate by more than 10x. Certainly worth investing in SLI if it works."
You know what... (Score:2, Interesting)
Re:You know what... (Score:2, Insightful)
If ATI were out of business, do you think nVidia would ever innovate again?
A monopoly is always bad for the consumer... this is one of the reasons socialism doesn't work.
Re:You know what... (Score:5, Insightful)
You can have a socialist government, and market competition.
The USSR's "implementation" of socialism was flawed. Don't get that confused with actual socialism.
Re:You know what... (Score:5, Funny)
"And lastly, for reasons unknown, the AICS decided that half of the advisory board would consist of Communists and half of Libertarians. Since Communists believe that practically no one is a Communist including each other; and Libertarians believe that just about everything is indicative of Communism including most extant forms of Capitalism, the board reached an impasse in about half a second. "
Re:You know what... (Score:4, Informative)
Re:You know what... (Score:2, Interesting)
Re:You know what... (Score:4, Insightful)
Re:You know what... (Score:2)
Of course there are many compounding factors like participation in globalization, but there are socialistic states with few trade barriers that still fail to push out parti
Re:You know what... (Score:3, Informative)
Don't worry about it though, I'm just spouting off American propaganda.
Who needs Ford, GM, IBM, Apple, HP, Microsoft, Intel, etc? Finland has Nokia!
Samsung, LG, and Hyundai? They are no Nokia!
Sony, Panasonic, Toyota, and Honda do
Re:You know what... (Score:3, Interesting)
Finland is a democratic country with heavy socialist leanings. It used to have even stronger socialist tendencies, but has suffered from incompetent leadership for the past two decades (ever since Kekkonen became too old and sick to rule, IMHO), and that has led to a tighter integration with the globalized ultra-capitalist economy, much to the detriment of both econo
Re:You know what... (Score:2)
1. "Game" Physics tend to be more fun than real-world physics. (Who really wants to compute orbits for their starfighter? We want to bank on afterburners!)
2. Game programmers haven't yet managed to create engines complex enough to demand dedicated physics engines. See point 1.
3. Game production is hideously expensive these days, and game programmers are already stretched to the limit. If you try to get realistic physics like rigid body dynamics [wikipedia.org] into the gam
Someone hasn't seen the spore video (Score:2, Interesting)
Besides, who doesn't like rag dolling? I played through HL2 just so I could toss bodies around with the gravity gun.
Re:You know what... (Score:2)
It's hard to decipher what it is you're trying to say here, but if you're claiming that game physics hasn't yet grown sophisticated enough for developers to give up scratching together simple calculations and turn to third-party, dedicated, professional physics engines, you're dead wrong [wikipedia.org].
Slashdotted - How wide are the floats? (Score:3, Interesting)
What ever happened to the hype about dedicated physics chips?
The original article appears to be slashdotted.
So could somebody tell me how wide the floats are in this "SLI" engine? [I don't even know what "SLI" stands for.]
AFAIK, nVidia [like IBM/Sony "cell"] uses only 32-bit single-precision floats [and, as bad as that is, ATi uses only 24-bit "three-quarters"-precision floats].
What math/physics/chemistry/engineering types need is as much precision as possible - preferably 128 bits.
Why? Because t
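To give a concrete feel for the precision issue (my own sketch, not from the article or the post above): accumulate ten million small timesteps in 32-bit versus 64-bit floats, and the 32-bit sum visibly drifts once the increments fall below one ulp of the running total.

```cpp
// Minimal sketch: summing 10M small increments in float vs. double.
// The exact answer is 1000; the 32-bit result drifts noticeably.
#include <cstdio>

int main() {
    float  sum32 = 0.0f;
    double sum64 = 0.0;
    for (int i = 0; i < 10000000; ++i) {
        sum32 += 1e-4f;   // e.g. one tiny physics timestep
        sum64 += 1e-4;
    }
    std::printf("float:  %f\ndouble: %f\n", sum32, sum64);
}
```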
Re:Slashdotted - How wide are the floats? (Score:3, Informative)
"Physics" (Score:5, Funny)
"Technology" (Score:3, Interesting)
Besides, I rather think this is what nVidia had in mind when they first started making SLI boards. It was always obvious that the rendering benefit from SLI wasn't going to be cost-effective. Turning their boards into general purpose game accelerators has probably been in their thoughts for a while.
Re:"Physics" (Score:5, Insightful)
Re:"Physics" (Score:2)
2. ???
3. Win Nobel Prize.
Seriously, let me, uh, see your hardware/code before you go patent it. Just curious, you know.
Re:"Physics" (Score:2)
Re:"Physics" (Score:2)
*runs off to patent on-the-fly ODE solving*
We already have that patent (Score:4, Informative)
Our approach produces better-looking movement than the low-end physics packages. We don't have the "boink problem", where everything bounces as if it were very light. Heavy objects look heavy. Our physics has "ease in" and "ease out" in collisions, as animators put it, derived directly from the real physics. When we first did this, back in the 200MHz era, it was slow for real time (a two-player fighter was barely possible) but now, game physics can get better.
Take a look at our videos. [animats.com] Few if any other physics systems can even do the spinning top correctly, let alone the hard cases shown.
That's exactly what you wanna do (Score:3, Insightful)
solving a linearized ODE is just plain ol' or
Re:That's exactly what you wanna do (Score:3, Insightful)
Pretty much anywhere that the underlying equation is more complicated than a simple spherical potential.
Sure you *could* hard-code it in, but if the analytical result is a function like exp(-x^2)cos(x), you're going to have to include quite a few higher order terms to
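For readers following along: "on-the-fly ODE solving" in a game context usually means stepping the equations numerically each frame rather than expanding an analytical solution. A minimal sketch (mine, with an illustrative damped oscillator, not any engine's actual code) using classic fixed-step RK4:

```cpp
// Fixed-step RK4 on a damped oscillator x'' = -x - 0.1*x'.
// (State doubles as a derivative holder: .x = dx/dt, .v = dv/dt.)
#include <cstdio>

struct State { double x, v; };

static State deriv(const State& s) {
    return { s.v, -s.x - 0.1 * s.v };   // dx/dt = v, dv/dt = -x - 0.1v
}

static State rk4_step(State s, double h) {
    State k1 = deriv(s);
    State k2 = deriv({ s.x + 0.5 * h * k1.x, s.v + 0.5 * h * k1.v });
    State k3 = deriv({ s.x + 0.5 * h * k2.x, s.v + 0.5 * h * k2.v });
    State k4 = deriv({ s.x + h * k3.x, s.v + h * k3.v });
    s.x += h / 6.0 * (k1.x + 2 * k2.x + 2 * k3.x + k4.x);
    s.v += h / 6.0 * (k1.v + 2 * k2.v + 2 * k3.v + k4.v);
    return s;
}

int main() {
    State s { 1.0, 0.0 };              // released from x = 1, at rest
    for (int i = 0; i < 600; ++i)      // 10 seconds at 60 Hz
        s = rk4_step(s, 1.0 / 60.0);
    std::printf("x(10s) = %f\n", s.x);
}
```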
Improves framerate by 10x (Score:5, Insightful)
Therefore you get a 10x framerate increase over running massively intensive effects on the CPU.
This is good, because games will look nicer. But if you don't have the GPU grunt, you can simply disable the effects (or cut them down) in-game - it won't affect the gameplay.
SLI? (Score:5, Insightful)
Re:SLI? (Score:4, Informative)
Re:SLI? (Score:2)
This also means that the bottleneck will more often be the graphics card, which
Re:SLI? (Score:2)
Basically, all you need is a video card that supports shader model 3. I believe this is all 6000 series GeForces (nVidia) and all X1000 series Radeons (ATI).
It also appears that they are working hard to parallelize their physics engine, so the bit about SLI is just icing on the cake - it can support multiple cards on one machine.
Nice (Score:5, Interesting)
Re:Nice (Score:2, Informative)
Re:Nice (Score:2)
Re:Nice (Score:3, Funny)
It's because we're getting closer to Advanced Technology #1.
Like in Civilization, the way olden times rush by quickly, but once you start getting closer and closer to modern times, it starts taking longer and longer and then it's 5:30 in the morning and you can only sleep a half hour before school?
Yeah, that's what technology's doing to all of us. ;-)
co-processor (Score:5, Interesting)
Re:co-processor (Score:2)
Maybe that will change, but if the GPU can do the work, why invest in a separate piece of hardware?
Re:co-processor (Score:2)
General purpose GPUs (Score:5, Interesting)
Re:General purpose GPUs (Score:2, Interesting)
Hence the Cell processor.
Press release. (Score:3, Interesting)
1) What limitations are there on the calculations? A GPU is not as general as a CPU, and it would probably suck when dealing with branches, especially when they aren't independent (see the sketch after this list).
2) How much faster could this actually be? Is it simply a matter of looking to the future? (i.e., we can already run with aniso and AA at high resolutions, so 5 years from now they'll be "overpowered".) IMO the next logical step is full-fledged HDR and then more polygons.
3) What exactly is expected of these? General physics shouldn't be offloaded, but I can understand if they do small effects here or there.
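On point 1, the usual answer on shader hardware of this era is to avoid data-dependent branches entirely: compute both sides and blend with a mask (the shader mix()/lerp idiom). A tiny CPU-side sketch of the transformation; the function and constants are made up for illustration:

```cpp
// Branchy vs. branchless damping. On divergent shader hardware the
// select-and-blend form is the usual rewrite.
#include <cstdio>

float damp_branchy(float v) {
    if (v > 1.0f) return v * 0.5f;                // data-dependent branch
    return v;
}

float damp_branchless(float v) {
    float mask = v > 1.0f ? 1.0f : 0.0f;          // predicate as 0/1
    return mask * (v * 0.5f) + (1.0f - mask) * v; // blend both results
}

int main() {
    std::printf("%f %f\n", damp_branchy(4.0f), damp_branchless(4.0f));
}
```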
All the answers to your questions... (Score:5, Informative)
Before people get too excited... (Score:3, Insightful)
Potentially shiny, but not really revolutionary or new. People have been doing particle system updates with shaders for a while now.
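For the curious: particle updates map onto shaders precisely because each particle depends only on its own state plus a few shared constants. A CPU-side sketch of the data flow (gravity and bounce factor are made-up values; on the GPU the loop body would be the shader and the particles would live in a texture):

```cpp
#include <cstdio>
#include <vector>

struct Particle { float px, py, pz, vx, vy, vz; };

// One "shader invocation" per particle: no particle reads another's state.
void update(std::vector<Particle>& ps, float dt) {
    const float g = -9.81f;                // assumed gravity constant
    for (Particle& p : ps) {
        p.vz += g * dt;
        p.px += p.vx * dt;
        p.py += p.vy * dt;
        p.pz += p.vz * dt;
        if (p.pz < 0.0f) {                 // bounce off the ground plane
            p.pz = -p.pz;
            p.vz *= -0.6f;                 // assumed energy loss
        }
    }
}

int main() {
    std::vector<Particle> ps{ {0, 0, 1, 1, 0, 0}, {0, 0, 2, 0, 1, 0} };
    for (int i = 0; i < 60; ++i) update(ps, 1.0f / 60.0f);
    std::printf("p0 ends at z = %f\n", ps[0].pz);
}
```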
Re:Before people get too excited... (Score:5, Informative)
That has not been true for a long, long time. Since PCIe became a standard, bidirectional communication between CPUs and GPUs has been as easy as unidirectional communication.
Re:Before people get too excited... (Score:2)
Even if communication from the GPU to the CPU were instantaneous, this would still be a performance bottleneck. GPUs are typically 1-2 frames behind the CPU. If you want CPU readback of GPU results, the CPU has to stall until the GPU finishes its task. It's not the bandwidth (which was limited on AGP) that is the bottleneck, it'
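The standard mitigation for the stall described above, for what it's worth, is to consume GPU results a frame late: kick off this frame's work, read back last frame's. A toy sketch of that double-buffering pattern, with a plain array standing in for the GPU:

```cpp
// Two-slot result buffer: the CPU consumes the previous frame's result
// instead of stalling on the current one. Only the pattern matters here.
#include <cstdio>

int main() {
    int results[2] = {0, 0};
    int write = 0;
    for (int frame = 1; frame <= 5; ++frame) {
        results[write] = frame * frame;   // pretend the GPU wrote this
        int read = 1 - write;             // last frame's (finished) slot
        std::printf("frame %d consumes %d\n", frame, results[read]);
        write = 1 - write;
    }
}
```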
Re:Before people get too excited... (Score:2)
Welcome to the future
not limited to NVIDIA chips (Score:3, Informative)
Re:not limited to NVIDIA chips (Score:2)
I really don't mean to be nasty, but it seems like an awful lot of what ATI has announced recently has taken an _awfully_ long time to really become available.
The usual explanation for this is that the effort that goes into designing the chip for the XBox distracts the company involved (
Re:not limited to NVIDIA chips (Score:2)
10x faster? (Score:5, Funny)
Re:10x faster? (Score:4, Interesting)
I used Brook to compute some SVM calculations, and my 7800GT was about 40x faster than my Athlon64 3000+ (even after I hand-optimized some loops using SSE instructions). So it's perfectly understandable for physics to be 10x faster on the GPU.
Re:10x faster? (Score:3, Insightful)
Re:10x faster? (Score:2)
Emphasis mine. You already know what I'm going to say, based on my emphasis, right?
Regardless, the summary doesn't even say that. It says that, according to the benchmark, it got a 10x framerate improvement. The benchmark happened to be a very intensive physics simulation.
Re:10x faster? (Score:2)
Hehe, no whatever could you mean?
I just love what makes it through as summaries on here. I think the grandparent of my original post was right - you have to immediately call BS on anything that has "benchmark" and "10x" in it. Very few posters got what you pointed out: that it was a physics-intensive benchmark, and that its results can't be reasonably extrapolated to all cases. We almost need a Slashdot for Dummies that is just links
Re:10x faster? (Score:3, Informative)
Re:10x faster? (Score:2)
Everything I've ever read (and it's been a lot) about moving suitable algorithms from CPUs to GPUs reports routine 10x speedups. If you don't believe it... try to play an FPS game with software emulation and no graphics hardware... I can promise you the speedup from the hardware is well above 10x.
Competition (Score:2, Informative)
Re:Competition (Score:3, Insightful)
The PhysX card is considerably more cumbersome to use for the average gamer, and is consequently less likely to be supported by game developers. Not to mention the fact that the cards are likely to be quite expensive.
Re:Competition (Score:2)
1. Absolutely everything is dealt with by general purpose chips which can handle anything you throw at them, or:
2. Everything can be dealt with by its own dedicated unit. Physics, graphics, AI, audio, everything.
Either way, it works best if you can properly thread your engines so that things can be properly parallel processed. Fix that first, then we can start worrying about if our games need PPUs, GPUs, CPUs or any other PU we can come up with.
Personally I'm in favour of separate
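To make the threading point above concrete, here's a trivial sketch (mine, not from the post) of the overlap a properly threaded engine buys you: the next frame's physics runs while the current frame renders. std::thread stands in for a real job system.

```cpp
// Overlap: simulate frame N+1 while frame N renders. A real engine
// would double-buffer the shared state; this only shows the shape.
#include <cstdio>
#include <thread>

void physics_step(int frame) { std::printf("physics for frame %d\n", frame); }
void render_frame(int frame) { std::printf("render of frame %d\n", frame); }

int main() {
    for (int frame = 0; frame < 3; ++frame) {
        std::thread physics(physics_step, frame + 1);  // next frame's sim
        render_frame(frame);                           // this frame's draw
        physics.join();
    }
}
```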
Re:Competition (Score:2)
"The physX card is considerably more cumbersome to use for the average gamer..."
It would just be another driver for gamer. The onus would be on the developers to support it.
"Not to mention the fact that the cards
PCI Express (Score:3, Insightful)
Moore's law? (Score:2)
Re:PCI Express (Score:2)
Re:PCI Express (Score:3, Informative)
Single and Dual Port 4X SDR and DDR InfiniBand over PCI-Express x8
Dual port 2Gb and 4Gb FibreChannel over PCI-Express x4
Ethernet (multiport 1 gigabit and 10 gigabit), over PCI-Express x4
Multi port FireWire 800 over PCI-Express x1
DualChannel UltraSCSI320 over PCI-Express x1
There are probably more... PCI-Express grew out of InfiniBand. They cut out the networking to make it cheaper for just inside a single system. Ironically, they put a lot of the networking b
Articles with more bite (Score:2, Informative)
this one is better:
http://www.tgdaily.com/2006/03/20/nvidia_sli_forphysics/ [tgdaily.com]
Or choose your own adventure via Google News:
http://news.google.ca/news?client=firefox-a&rls=org.mozilla%3Aen-US%3Aofficial_s&hl=en&percentage_served=100&tab=wn&ie=ISO-8859-1&q=NVIDIA+SLI+Physics&btnG=Search+News [google.ca]
Just more load balancing (Score:5, Insightful)
Of course, the basic idea isn't exactly brand new -- some of us have been writing shaders to handle heavy-duty math for a while. The difference is that up until now, most real support for this has been more or less experimental (e.g. the Brook system [stanford.edu] for doing numeric processing on GPUs). Brook is also sufficiently different from an average programming language that it's probably fairly difficult to put to use in quite a few situations.
Having a physics-oriented framework will probably make this capability quite a bit easier to apply in quite a few more situations, which is clearly a good thing (especially for nVidia and perhaps ATI, of course).
The part I find interesting is that Intel has taken a big part of the low-end graphics market. Now nVidia is working at taking more of the high-end computing market. I can see it now: a game that does all the computing on a couple of big nVidia chips, and then displays the results via Intel integrated graphics...
Math coprocessor? (Score:2)
Re:Math coprocessor? (Score:2)
Re:Math coprocessor? (Score:2)
-- ST
Hardware is not the only performance answer (Score:2, Insightful)
Re:Hardware is not the only performance answer (Score:2, Funny)
Re:Hardware is not the only performance answer (Score:2)
Look a few years back and see what the cutting edge of graphics was, and look at the kind of things we get today. Go forward a few years and we will have realistic computer-generated images.
Now, if only they could make websites more scalable under a slashdotting, I could read the article!
Re:Hardware is not the only performance answer (Score:2)
It's far cheaper to write applications in high level languages, and run them on today's hardware, than it is to hyper-optimize for yesterday's hardware.
The PURE EVIL contained in modern graphics cards.. (Score:5, Interesting)
Re:The PURE EVIL contained in modern graphics card (Score:2)
I know the hacker-jargon use of "evil", but this is really overused.
OTOH, it's Theo, and he likes to hear himself talk.
Re:The PURE EVIL contained in modern graphics card (Score:3, Insightful)
Re:The PURE EVIL contained in modern graphics card (Score:3, Interesting)
While there are lots of funnies off of this... (Score:4, Insightful)
> Are these new programable cards capable of reading main memory, which
> OpenBSD would not be able to prevent if machdep.allowaperture were
> set to something other than 0?
Yes, they have DMA engines. If the privilege-separated X server has a bug, it can still wiggle the IO registers of the card to do DMA to physical addresses, entirely bypassing system security.
Thus, a resourceful attacker could theoretically get access to kernel memory through anything which allows access to the video card. An unusual and probably difficult-to-exploit hole, but a possible hole nonetheless.
Ageia making physics card (Score:2)
well, they COULD but... (Score:5, Insightful)
Physics acceleration that allows for rather impressive collisions and water: MUCH EASIER.
Maximum output for minimum input. Having physics acceleration in the GPU makes sense as you don't have to buy an extra accelerator card.
Can't read the article... (Score:3, Insightful)
First, what physics API are they using? This is, after all, a little like OpenGL vs DirectX. You need a physics API to do this stuff, and there are a *lot* of portable, high-quality APIs out there: Havok, Newton, Aegeia (spelling?), and the open-source ODE (which I use). The APIs aren't interchangeable, and aren't necessarily free.
Second, at least when I'm doing this work, there's a *lot* of back and forth between the physics and my game engine. Maybe not a whole lot of data, but a lot of callbacks -- a lot of situations where the collision system determines A & B are about to touch and has to ask my code what to do about it. And my code has to do some hairy stuff to forward these events to A & B ( since physics engines have their own idea of what a physical object instance is, and it's orthogonal to my game objects, so I have to have some container and void ptr mojo ) and so on and so forth. If all this is running on the GPU, sure the math may be fast but I worry about all the stalls resulting from the back and forth. Sure, that can be parallelized and the callbacks can be queued, but still.
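For anyone who hasn't wired this up, the "container and void ptr mojo" typically looks like the sketch below. The type names here are illustrative rather than any particular engine's API (ODE's equivalents are dGeomSetData/dGeomGetData): the game stashes a pointer to its own object in the body's opaque user-data slot, then recovers it in the collision callback.

```cpp
// Linking engine-side physics bodies back to game objects via user data.
#include <cstdio>

struct PhysicsBody {
    void* user_data = nullptr;          // engine-side opaque slot
    void  set_user_data(void* p) { user_data = p; }
    void* get_user_data() const { return user_data; }
};

struct GameObject {
    const char* name;
    void on_touch(GameObject& other) {
        std::printf("%s touched %s\n", name, other.name);
    }
};

// Called by the physics engine when bodies a and b are about to touch.
void near_callback(PhysicsBody& a, PhysicsBody& b) {
    auto* ga = static_cast<GameObject*>(a.get_user_data());
    auto* gb = static_cast<GameObject*>(b.get_user_data());
    if (ga && gb) ga->on_touch(*gb);    // forward the event to game code
}

int main() {
    GameObject crate{"crate"}, barrel{"barrel"};
    PhysicsBody pa, pb;
    pa.set_user_data(&crate);           // physics body -> game object
    pb.set_user_data(&barrel);
    near_callback(pa, pb);              // simulate a collision event
}
```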
Anyway, I want info, not marketing.
Oh christ, and finally, I work on a Mac. When will I see support? (lol. this is me falling off my chair, crying and laughing, crying... sobbing. Because I know the answer). Can we at least assume/hope that they'll provide a software fallback API, and that that API will be available for Linux and Mac? After all, NVIDIA has Linux and Mac ports of Cg, so why not this? I'm keeping my fingers crossed.
Re:Can't read the article... (Score:3, Informative)
That said, it would presumably be possible to implement other APIs (if there is sufficient demand), given that the GPU hardware is now general enough to handle that level of computation.
New server tech! (Score:2)
Liars (Score:2, Insightful)
Liars end up in Hell.
How about a video card.... (Score:2)
FPS are a perfect example of where this will sell (Score:2)
Re:FPS are a perfect example of where this will se (Score:3, Insightful)
Uh, that's highly unlikely. The physics of a flying body is no more difficult to compute than the physics of a running body. "Particl
Forget 'physics' - give me a good math API (Score:5, Insightful)
The cards' math capabilities would be so much more accessible (and thus used by so many more programmers) if Nvidia (and ATI) would come out with standard math-library interfaces to their cards. Give us something that looks like FFTW and has been tweaked by the card engineers for maximum performance, and then we will see everybody and his brother using these video cards for math co-processing.
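To make the request concrete, here's a toy sketch of what such an FFTW-style interface might look like: plan once, execute many times, with the vendor free to back it with the GPU. Every name here is hypothetical, and the naive O(n^2) DFT is just a CPU stand-in for whatever the card engineers would actually provide.

```cpp
// Hypothetical plan/execute math API in the shape of FFTW.
#include <complex>
#include <cstddef>
#include <cstdio>
#include <vector>

struct gpu_fft_plan { std::size_t n; };          // opaque in a real API

gpu_fft_plan make_plan(std::size_t n) { return {n}; }

void execute(const gpu_fft_plan& p,
             const std::vector<std::complex<double>>& in,
             std::vector<std::complex<double>>& out) {
    const double tau = 6.283185307179586;
    out.assign(p.n, {});
    for (std::size_t k = 0; k < p.n; ++k)        // naive DFT stand-in
        for (std::size_t j = 0; j < p.n; ++j)
            out[k] += in[j] * std::polar(1.0, -tau * k * j / p.n);
}

int main() {
    gpu_fft_plan plan = make_plan(8);
    std::vector<std::complex<double>> in(8, {1.0, 0.0}), out;
    execute(plan, in, out);
    std::printf("bin0 = %f\n", out[0].real());   // expect 8 for DC input
}
```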
My prediction (Score:2)
Er, Not Exactly (Score:2)
How about: "Certainly worth investing in SLI if it works on the specific game(s) you most want to play at higher speeds and/or resolutions."
Otherwise, not worth investing in at all.
Unintended Gameplay Effects (Score:2)
The problem with the internal logic of games like Quake and Half-Life is that, if the environment reacted correctly with respect to the physics of the weapons, any game environment would quite quickly be reduced to piles of rubble and little more. Think of the pictures of Europe in WWII,
Old School is new again? (Score:5, Insightful)
Each time I hear that an "advance" has been made, and I read that it is basically re-integrating various components back into the primary system or tying those components more tightly to the CPU, I can't help but scream "AMIGA!" Of course, this leads to co-workers walking wider paths around me while avoiding eye contact '-).
Still, all of these advances lead me to believe that we might be going back to a dedicated-chip style of computing, BUT what I am also hoping for is a completely upgradeable system where I can pull, say, the physics processor out and plug in a newer version or better chip without having to replace the entire motherboard or daughterboard. Which, of course, leads me right back to that whole screaming scenario :) The Amiga style of computing may yet live again.
Re:I don't understand? (Score:2)
Because there is more to computer graphics than simply playing games. Lots of people use computers to model graphics for everything from building airplanes to modeling combustion and nuclear weapons research. GPUs actually are fairly sophisticated computing platforms and can assist tremendously in helpi
Re:I don't understand? (Score:4, Informative)
Re:I don't understand? (Score:5, Insightful)
Yes. Why does that surprise you? When you do incredibly complicated physics simulation, things can be very parallel and consequently GPUs outperform CPUs.
Why would I waste precious GPU processing on physics? I mean, all the CPU does these days is handle AI, physics, and texture loading. If you offload physics to the GPU, then the CPU is doing less and you're swamping the GPU with more work.
You seem to be under the impression that your GPU cycles are more important than your CPU cycles. This is done with SLI for a reason...
If it does increase frame rates, then I would ask why not improve graphics rendering rather than physics processing.
Because the quality of the render is controlled in software? Because hardware is currently limited by, ya know, physics and technology?
I find that for all the advances nVidia and ATI have made over the years, 3D gaming visual quality is still inferior to cinematic quality 3D rendering.
And in other news, offline processing is still more powerful than online processing. There's a shocker.
I would prefer if nVidia and ATI actually focused on bringing cinematic quality 3D rendering to gaming, instead of just claiming they do.
First of all, 99.9% of what nVidia and ATI do is exactly that. They are also starting to realize that the GPU paradigm, with minor modification, can be turned into a very powerful co-processor... and they are the experts at creating those types of chips. The market for them is growing... and they don't want to miss the boat.
I want smooth high-poly models with realistic lighting and 60fps.
And I want peace in the middle east. Give it 10 years, one of us may get our wish.
Re:I don't understand? (Score:4, Insightful)
Correct, you don't understand (Score:2)
As I understand it, the "physics" being modeled here is not the trajectory of the incoming rocket to determine if it hits you. It is the trajectories of the flames leaping forth as the rocket explodes; it's still just graphics processing. It is better for the CPU to describe things at an abstract level, then forget about it and let a dedicated processor project that into space and figure out how light bounces off it, etc. Currently, these abstract descriptions are mostly in the form of sets of polygons.
Re:I don't understand? (Score:3, Informative)
Clearly, you misunderstand how cinematic 3D is rendered
Desktop GPUs will always be inferior to cinematic 3D, simply because cinematic 3D is rendered at a rate of several frames per day by a multi-million-dollar farm
Re:Why use a GPU, use a PPU (Score:2)
Re:Why use a GPU, use a PPU (Score:5, Interesting)
Re:10x? (Score:2)
Hmmm, so if I'm getting 10FPS in some game, then it'll boost it to around 100FPS? That can't be right...
Why did you remove the "According to the benchmark..." part before quoting? You can't honestly believe that a benchmark specifically tailored to test a particular functionality can actually represent all cases all the time, can you? When Intel tells you the SSE instruction set can increase speed by up to 4x, you don't actually think "H
Re:This is a bad idea (Score:2, Interesting)