
NVIDIA Launches New SLI Physics Technology

Thomas Hines writes "NVIDIA just launched a new SLI Physics technology. It offloads the physics processing from the CPU to the graphics card. According to the benchmark, it improves the frame rate by more than 10x. Certainly worth investing in SLI if it works."
This discussion has been archived. No new comments can be posted.

  • You know what... (Score:2, Interesting)

    by fatduck ( 961824 ) on Monday March 20, 2006 @04:10PM (#14959324)
    Sounds like an ATI-killer to me! What ever happened to the hype about dedicated physics chips?
  • Nice (Score:5, Interesting)

    by BWJones ( 18351 ) * on Monday March 20, 2006 @04:12PM (#14959348) Homepage Journal
    This will be critically important as programs start to push particle and geometry modeling. I remember back when I had my Quadra 840av in 1993: I popped a couple of Wizard 3dfx Voodoo cards into it when they first started supporting SLI, and the performance benefits were noticeable. Of course we were all hoping for the performance to keep scaling, but 3dfx got interested in other markets, including defense, and was then bought by Nvidia, which made me wonder if SLI would ever really take off. It's nice to see that the technology is still around and flourishing.

  • co-processor (Score:5, Interesting)

    by ZachPruckowski ( 918562 ) <zachary.pruckowski@gmail.com> on Monday March 20, 2006 @04:13PM (#14959356)
    How does this work in relation to AMD's consideration of a physics coprocessor or another specialized processor? It seems like that solution is superior.
  • General purpose GPUs (Score:5, Interesting)

    by Mr. Vandemar ( 797798 ) on Monday March 20, 2006 @04:13PM (#14959360) Homepage
    I've been waiting for this for a while. It's the obvious next step in GPU design. I have a feeling GPUs are going to become more and more general, and eventually accelerate the majority of inherently parallel processes, while the CPU executes everything else. We don't even have to change the acronym. Just call it a "Generic Processing Unit"...
  • Press release. (Score:3, Interesting)

    by Goalie_Ca ( 584234 ) on Monday March 20, 2006 @04:13PM (#14959363)
    Of course it's nothing more than a press release, but it raises several questions:

    1) What limitations are there on the calculations? A GPU is not as general as a CPU, and it would probably struggle with branches, especially when they aren't independent (see the sketch after this list).

    2) How much faster could this actually be? Is it simply a matter of looking to the future? (i.e., we can already run with aniso and AA at high resolutions, so five years from now the cards will be "overpowered"). IMO the next logical step is full-fledged HDR and then more polygons.

    3) What exactly is expected of these? General physics shouldn't be expected, but I can understand if they handle small effects here and there.
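
    On point 1), a minimal C sketch of the kind of data-dependent branching that tends to map poorly onto a GPU's wide SIMD pipelines; the function and its two update rules are hypothetical, only the branching pattern matters:

    /* branchy.c: a data-dependent branch inside a tight per-element loop.
       A CPU's branch predictor copes with this; on a wide SIMD/GPU-style
       machine, lanes that take different paths force both paths to be
       evaluated (or serialized), wasting throughput. */
    void step(float *pos, const float *vel, const float *mass, int n, float dt) {
        for (int i = 0; i < n; i++) {
            if (mass[i] > 1.0f) {
                pos[i] += vel[i] * dt;           /* heavy objects: one rule     */
            } else {
                pos[i] += vel[i] * dt * 0.5f;    /* light objects: another rule */
            }
        }
    }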
  • "Technology" (Score:3, Interesting)

    by Anonymous Coward on Monday March 20, 2006 @04:14PM (#14959377)
    The "technology" is specifically designed for physics. The hardware is not, but the driver, API, and havok engine enhancements are. This is therefore "physics technology".

    Besides, I rather think this is what nVidia had in mind when they first started making SLI boards. It was always obvious that the rendering benefit from SLI wasn't going to be cost-effective. Turning their boards into general purpose game accelerators has probably been in their thoughts for a while.
  • by Homology ( 639438 ) on Monday March 20, 2006 @04:27PM (#14959484)
    Modern graphics cards can be used to bypass security measures as an unprivileged user (reading kernel memory, say). Theo de Raadt of OpenBSD reminded [theaimsgroup.com] users how modern X works:

    I would like to educate people of something which many are not aware of -- how X works on a modern machine.

    Some of our architectures use a tricky and horrid thing to allow X to run. This is due to modern PC video card architecture containing a large quantity of PURE EVIL. To get around this evil the X developers have done some rather expedient things, such as directly accessing the cards via IO registers, directly from userland. It is hard to see how they could have done other -- that is how much evil the cards contain. Most operating systems make accessing these cards trivially easy for X to do this, but OpenBSD creates a small security barrier through the use of an "aperture driver", called xf86(4) (...)
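
    To make "directly accessing the cards via IO registers, directly from userland" concrete, here is a minimal C sketch of the general technique on a system that exposes physical memory to root; the device path and the register address are hypothetical stand-ins, not taken from X or from any real driver:

    /* mmio_peek.c: map a (made-up) video-card register window from userland
       and read one register, roughly what old X servers did to the hardware.
       Requires root; the physical address below is purely illustrative. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        const off_t  bar_base = 0xfd000000;  /* hypothetical PCI BAR address */
        const size_t map_len  = 4096;

        int fd = open("/dev/mem", O_RDWR);   /* raw physical memory device   */
        if (fd < 0) { perror("open"); return 1; }

        volatile uint32_t *regs =
            mmap(NULL, map_len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, bar_base);
        if (regs == MAP_FAILED) { perror("mmap"); return 1; }

        printf("register 0 = 0x%08x\n", (unsigned)regs[0]);  /* read a register */

        munmap((void *)regs, map_len);
        close(fd);
        return 0;
    }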

  • by supra ( 888583 ) on Monday March 20, 2006 @04:31PM (#14959519)
    And if you continue down this line of thinking, you realize that the GPU and CPU are asymptotically approaching each other.
    Hence the Cell processor.
  • Re:Nice (Score:1, Interesting)

    by Anonymous Coward on Monday March 20, 2006 @04:33PM (#14959535)
    Same idea (two cards sharing the work), but completely different technology this time around.
  • Re:10x faster? (Score:4, Interesting)

    by LLuthor ( 909583 ) <lexington.luthor@gmail.com> on Monday March 20, 2006 @04:34PM (#14959545)
    10 times faster is not all that unreasonable.

    I used Brook to compute some SVM calculations, and my 7800GT was about 40x faster than my Athlon64 3000+ (even after I hand-optimized some loops using SSE instructions). So it's perfectly understandable for physics to be 10x faster on the GPU.
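
    For reference, the kind of SSE hand-optimization mentioned above looks roughly like this; a minimal sketch of a dot-product inner loop with hypothetical array names, not the poster's actual code:

    /* dot_sse.c: 4-wide SSE dot product, the sort of loop you hand-tune
       before concluding the GPU is still far ahead.  Assumes n % 4 == 0. */
    #include <xmmintrin.h>

    float dot_sse(const float *a, const float *b, int n) {
        __m128 acc = _mm_setzero_ps();
        for (int i = 0; i < n; i += 4) {
            __m128 va = _mm_loadu_ps(a + i);            /* load 4 floats    */
            __m128 vb = _mm_loadu_ps(b + i);
            acc = _mm_add_ps(acc, _mm_mul_ps(va, vb));  /* accumulate a*b   */
        }
        float tmp[4];
        _mm_storeu_ps(tmp, acc);                        /* horizontal sum   */
        return tmp[0] + tmp[1] + tmp[2] + tmp[3];
    }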
  • by Vlad2.0 ( 956796 ) on Monday March 20, 2006 @04:42PM (#14959606)
    All your points are certainly valid, but I'd say the next era of physics in games is just around the corner. Go watch the Spore video [google.com] to see an example of what's coming.

    Besides, who doesn't like ragdolls? I played through HL2 just so I could toss bodies around with the gravity gun. :)
  • Re:You know what... (Score:2, Interesting)

    by datawhore ( 161997 ) on Monday March 20, 2006 @04:47PM (#14959639)
    I think there's a confound in your argument: war is good for innovation, regardless of social system. Let me pose it another way - in peacetime, do you think the Soviets would have been much interested in innovation? Without a market, or a way for individuals to benefit from their hard work, there is less purpose or drive toward innovation.
  • by jandrese ( 485 ) * <kensama@vt.edu> on Monday March 20, 2006 @04:48PM (#14959643) Homepage Journal
    I think the point is that this is for games where the bottleneck is the CPU and the graphics card is sitting idle half of the time. By pulling 10% of the graphics card's resources into physics calculations, you could offload enough work from the CPU that it could keep the rest of the card completely fed, and see a frame rate improvement with no additional hardware and no loss in video quality.
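
    A back-of-the-envelope version of that argument, with entirely made-up numbers, just to show why shifting work can pay off even though the GPU gives up some capacity:

    /* offload_math.c: hypothetical numbers illustrating the trade-off.
       The frame rate is limited by whichever unit is slower. */
    #include <stdio.h>

    int main(void) {
        double cpu_fps_before = 30.0;   /* CPU doing game logic + physics  */
        double gpu_fps_before = 100.0;  /* GPU could render this fast      */

        double cpu_fps_after = 80.0;    /* CPU freed of physics (assumed)  */
        double gpu_fps_after = 90.0;    /* GPU gives up ~10% to physics    */

        double before = cpu_fps_before < gpu_fps_before ? cpu_fps_before : gpu_fps_before;
        double after  = cpu_fps_after  < gpu_fps_after  ? cpu_fps_after  : gpu_fps_after;

        printf("before: %.0f fps, after: %.0f fps\n", before, after);  /* 30 -> 80 */
        return 0;
    }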
  • by mosel-saar-ruwer ( 732341 ) on Monday March 20, 2006 @04:54PM (#14959680)

    What ever happened to the hype about dedicated physics chips?

    The original article appears to be slashdotted.

    So could somebody tell me how wide the floats are in this "SLI" engine? [I don't even know what "SLI" stands for.]

    AFAIK, nVidia [like IBM/Sony "cell"] uses only 32-bit single-precision floats [and, as bad as that is, ATi uses only 24-bit "three-quarters"-precision floats].

    What math/physics/chemistry/engineering types need is as much precision as possible - preferably 128 bits.

    Why? Because the stuff they are modelling tends to be highly non-linear and the calculations tend to be highly unstable.

    32 bits isn't even enough to keep integer granularity once you pass 16 million:

    16777216 + 0 = 16777216
    16777216 + 1 = 16777216
    16777216 + 2 = 16777218
    16777216 + 3 = 16777220
    16777216 + 4 = 16777220
    16777216 + 5 = 16777220
    16777216 + 6 = 16777222
    16777216 + 7 = 16777224
    16777216 + 8 = 16777224
    16777216 + 9 = 16777224
    16777216 + 10 = 16777226
    16777216 + 11 = 16777228
    16777216 + 12 = 16777228
    16777216 + 13 = 16777228
    16777216 + 14 = 16777230
    16777216 + 15 = 16777232
    16777216 + 16 = 16777232
    16777216 + 17 = 16777232
    16777216 + 18 = 16777234
    16777216 + 19 = 16777236
    16777216 + 20 = 16777236
    16777216 + 21 = 16777236
    16777216 + 22 = 16777238
    16777216 + 23 = 16777240
    etc
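
    Anyone can reproduce that table with single-precision floats; a minimal C sketch (the exact results assume IEEE-754 round-to-nearest-even, which is what the table above shows):

    /* float_granularity.c: above 2^24 = 16777216, adjacent 32-bit floats are
       2 apart, so odd sums get rounded to the nearest even representable value. */
    #include <stdio.h>

    int main(void) {
        float base = 16777216.0f;                /* 2^24 */
        for (int i = 0; i <= 23; i++) {
            float sum = base + (float)i;         /* rounded to nearest float */
            printf("16777216 + %2d = %.0f\n", i, sum);
        }
        return 0;
    }

    Compiling and running this prints the same sequence as the list above.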
  • Re:You know what... (Score:1, Interesting)

    by Anonymous Coward on Monday March 20, 2006 @05:04PM (#14959777)
    "A monopoly is always bad for the consumer... this is one of the reasons socalism doesn't work."

    Socialism does not imply monopolies any more than capitalism does (though both can lead to them). Scandanavian countries all have quasi-socialist governments and they are prosperous and competative.

    One more point, every American family is communist at heart (shared resources, centralized planning, etc).
  • by Warlokk ( 812154 ) on Monday March 20, 2006 @05:12PM (#14959827) Homepage
    I have two 6600GTs SLI'd... the first cost me about $175, the second about $130. You don't necessarily have to buy the super-expensive cards to do SLI. Even today, you could buy a pair of 7600s for about $400, and those are brand new.
  • Re:Nice (Score:1, Interesting)

    by Anonymous Coward on Monday March 20, 2006 @05:30PM (#14959971)
    This isn't the same SLI that 3dfx had. This is straight from the SLI FAQ on slizone.com [slizone.com]:

    "How does this technology differ from 3dfx's SLI? NVIDIA SLI differs in many ways. First, 3dfx SLI was implemented on a shared bus using PCI. The PCI bus delivered ~100MB/sec. of bus throughput, while PCI Express is a point-to-point interface that can deliver ~60x the total bandwidth of PCI. Second, 3dfx SLI performed interleaving of scan lines, and combined in the analog domain, which could result in image quality issues due to DAC differences and other factors. 3dfx Voodoo technology also only performed triangle setup, leaving the geometry workload for the CPU, hence 3dfx SLI only scaled simple texture fill rate, and then used inter-frame scalability. NVIDIA SLI technology is PCI Express based, uses a completely digital frame combining method that has no impact on image quality, can scale geometry performance, and supports a variety of scalability algorithms to best match the scalability method with application demands."
  • by Anonymous Coward on Monday March 20, 2006 @05:52PM (#14960163)
    Nope. Nvidia cripples its consumer-level cards to an 800 MB/s readback rate. You need to upgrade to a Quadro-level card to get full-duplex PCIe speeds.

  • Re:You know what... (Score:3, Interesting)

    by ultranova ( 717540 ) on Monday March 20, 2006 @08:13PM (#14961033)

    Wrong, Finland is democratic republic, and has always been one.
    There is a socialist party but that doesn't make the country socialistic.

    Finland is a democratic country with heavy socialist leanings. It used to have even stronger socialist tendencies, but it has suffered from incompetent leadership for the past two decades (ever since Kekkonen became too old and sick to rule, IMHO), and that has led to tighter integration with the globalized ultra-capitalist economy, much to the detriment of both the economy and the citizens.

    In any case, democratic countries tend to lean towards socialism, simply because socialism means public healthcare and the other safety nets of a welfare state, and who wouldn't want assurances of safety?

  • by Anonymous Coward on Monday March 20, 2006 @10:20PM (#14961535)
    Well, you're not wrong per se...

    There has been a constant battle over the last 20 years over who gets to do the processing: the CPU or dedicated chips. Although right now it may seem that multiple special-purpose chips have decisively won, these things go in cycles. The major, largely forgotten contribution of the Macintosh was to replace the modest CPU plus multiple support chips that were common in computers of that day (C64, 16-bit Atari, Lisa, Xerox, Apple IIgs) with a blazingly fast 8 MHz CPU and a bare minimum of support chips. By comparison, the Lisa cost five times as much as the first Mac and only ran at 5 MHz. Although it seems strange now, at the time the benefits of a fast CPU were not considered to be all that great. This model gained ascendancy with the IBM PC when it became clear that the support-chips paradigm had large backwards-compatibility problems, for example when the massive sales and huge install base of the C64 failed to carry over to the Amiga.

    In IBM land, Intel went on to wage a decades-long battle to have the CPU do everything. At one time, sound cards, network cards, and modems were complex beasts that did a lot of processing the CPU couldn't handle. Rapid advances in CPU speed led to things like win-modems, simpler sound cards (without the awesome wavetable and MIDI stuff), and network cards that weren't more powerful than the CPU (unlike an early ATM card I had). So, for a while it seemed that the highly CPU-centric model was here to stay.

    Right now, it seems the pendulum is swinging the other way, but I wouldn't dare to make a prediction about 10 years from now. The PC business can be screwy...

    - Apostate
  • by temojen ( 678985 ) on Tuesday March 21, 2006 @01:02PM (#14964903) Journal
    Does he have this concern with soundcards, HDD controllers and network cards too? They've all got DMA capability, coprocessors, and firmware. Network cards even have network connectivity, making them potentially WAY more dangerous than a video card.
