NVIDIA Launches New SLI Physics Technology 299

Thomas Hines writes "NVIDIA just launched a new SLI Physics technology. It offloads the physics processing from the CPU to the graphics card. According to the benchmark, it improves the frame rate by more than 10x. Certainly worth investing in SLI if it works."
  • You know what... (Score:2, Interesting)

    by fatduck ( 961824 )
    Sounds like an ATI-killer to me! What ever happened to the hype about dedicated physics chips?
    • by GundamFan ( 848341 )
      Competition is good!

      If ATI were out of business, do you think nVidia would ever innovate again?

      A monopoly is always bad for the consumer... this is one of the reasons socialism doesn't work.
      • by geekoid ( 135745 ) <dadinportland&yahoo,com> on Monday March 20, 2006 @04:27PM (#14959486) Homepage Journal
        "A monopoly is always bad for the consumer... this is one of the reasons socalism doesn't work."

        You can have a socialist government, and market competition.
        The USSR's "implementation" of socialism was flawed. Don't get that confused with actual socialism.
        • by Rei ( 128717 ) on Monday March 20, 2006 @05:24PM (#14959928) Homepage
          Your comment reminds me a bit of this article [uncyclopedia.org]. Concerning the reasons for the lack of success of the American Institute of Communist Studies' program for granting certificates certifying something that someone said is "communist":

          "And lastly, for reasons unknown, the AICS decided that half of the advisory board would consist of Communists and half of Libertarians. Since Communists believe that practically no one is a Communist including each other; and Libertarians believe that just about everything is indicative of Communism including most extant forms of Capitalism, the board reached an impasse in about half a second. "
    • What ever happened to the hype about dedicated physics chips?

      1. "Game" Physics tend to be more fun than real-world physics. (Who really wants to compute orbits for their starfighter? We want to bank on afterburners!)

      2. Game programmers haven't yet managed to create complex enough engines to demand physics engines. See point 1.

      3. Game production is hideously expensive these days, and game programmers are already stretched to the limit. If you try to get realistic physics like rigid body dynamics [wikipedia.org] into the gam
      • All your points are certainly valid, but I'd say the next era of physics in games is just around the corner. Go watch the spore video [google.com] to see an example of what's coming.

        Besides, who doesn't like rag dolling? I played through HL2 just so I could toss bodies around with the gravity gun. :)
      • Game programmers haven't yet managed to create complex enough engines to demand physics engines.

        It's hard to decipher what you're trying to say here, but if you're claiming that game physics haven't yet become sophisticated enough for developers to give up scratching together simple calculations and turn to third-party, dedicated, professional physics engines, you're dead wrong [wikipedia.org].

    • What ever happened to the hype about dedicated physics chips?

      The original article appears to be slashdotted.

      So could somebody tell me how wide the floats are in this "SLI" engine? [I don't even know what "SLI" stands for.]

      AFAIK, nVidia [like IBM/Sony "cell"] uses only 32-bit single-precision floats [and, as bad as that is, ATi uses only 24-bit "three-quarters"-precision floats].

      What math/physics/chemistry/engineering types need is as much precision as possible - preferably 128 bits.

      Why? Because t
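
      For a concrete sense of the precision worry above, here is a minimal, GPU-free C++ sketch (an illustration only, not tied to any card or API) of how 32-bit floats drift under long accumulations while 64-bit doubles hold up far better; the loop count and values are arbitrary, chosen just to make the drift visible.

          #include <cstdio>

          int main() {
              // Sum ten million copies of 0.1 in single and double precision.
              // The exact answer is 1,000,000.
              float  sum_f = 0.0f;
              double sum_d = 0.0;
              for (int i = 0; i < 10000000; ++i) {
                  sum_f += 0.1f;
                  sum_d += 0.1;
              }
              std::printf("float : %.3f\n", sum_f);   // drifts visibly away from 1e6
              std::printf("double: %.3f\n", sum_d);   // stays very close to 1e6
              return 0;
          }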

  • "Physics" (Score:5, Funny)

    by 2.7182 ( 819680 ) on Monday March 20, 2006 @04:10PM (#14959326)
    This is a little misleading. The hardware is really just fast at computing, not specifically designed for "physics". For example, it doesn't have a built-in ODE solver.
    • "Technology" (Score:3, Interesting)

      by Anonymous Coward
      The "technology" is specifically designed for physics. The hardware is not, but the driver, API, and havok engine enhancements are. This is therefore "physics technology".

      Besides, I rather think this is what nVidia had in mind when they first started making SLI boards. It was always obvious that the rendering benefit from SLI wasn't going to be cost-effective. Turning their boards into general purpose game accelerators has probably been in their thoughts for a while.
    • Re:"Physics" (Score:5, Insightful)

      by Quaoar ( 614366 ) on Monday March 20, 2006 @04:26PM (#14959481)
      I dunno what company would release a game that needs to SOLVE ODEs on the fly... I imagine you'd solve the equations beforehand, and put them in a nice form where all you need to do is multiply/add terms. If a company wants a cloak to behave realistically in their game, I'm sure they just find the proper coefficients in development, and all the game has to do is crunch the numbers on the fly.
      • 1. Design game that needs to solve ODEs on the fly.
        2. ???
        3. Win Nobel Prize.

        Seriously, let me, uh, see your hardware/code before you go patent it. Just curious, you know.
        • Heh I think the ??? these days can be replaced with "PATENT IT AND WAIT"
        • by Animats ( 122034 ) on Tuesday March 21, 2006 @01:29AM (#14962090) Homepage
          We already have that patent. [animats.com] For some years, we were locked into a licensing and noncompete agreement, which is why we haven't done much in that area for a while except cash the checks. But that noncompete period is now over. Stay tuned for further developments.

          Our approach produces better-looking movement than the low-end physics packages. We don't have the "boink problem", where everything bounces as if it were very light. Heavy objects look heavy. Our physics has "ease in" and "ease out" in collisions, as animators put it, derived directly from the real physics. When we first did this, back in the 200MHz era, it was slow for real time (a two-player fighter was barely possible) but now, game physics can get better.

          Take a look at our videos. [animats.com] Few if any other physics systems can even do the spinning top correctly, let alone the hard cases shown.

      • For a game, the best way to solve ODEs is numerically. Since you don't need the precision of the exact solution, the solutions are considerably simpler computationally once you've linearized them. Doing RK4 on the fly is precisely the best solution to the problem. Well, depending on the stiffness.. but you can always fall back on plain ol' trapezoid rule if you just wanna know, "what does the thing do until it hits the ground" to enough precision to be pretty.

        solving a linearized ODE is just plain ol' or
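
        As an illustration of the fixed-step integration described above, here is a minimal C++ RK4 sketch for a one-dimensional falling body with quadratic drag. The state layout, drag coefficient, and 60 Hz step size are invented for the example and are not taken from any actual engine.

            #include <cmath>
            #include <cstdio>

            struct State { double y;  double v; };   // height and vertical velocity
            struct Deriv { double dy; double dv; };  // their time derivatives

            // Right-hand side of the ODE: gravity plus simple quadratic drag.
            // The drag coefficient is an arbitrary value chosen for the example.
            Deriv f(const State& s) {
                const double g = 9.81, k = 0.02;
                return { s.v, -g - k * s.v * std::abs(s.v) };
            }

            // One classic fourth-order Runge-Kutta step of size h.
            State rk4_step(const State& s, double h) {
                Deriv k1 = f(s);
                Deriv k2 = f({ s.y + 0.5 * h * k1.dy, s.v + 0.5 * h * k1.dv });
                Deriv k3 = f({ s.y + 0.5 * h * k2.dy, s.v + 0.5 * h * k2.dv });
                Deriv k4 = f({ s.y + h * k3.dy,       s.v + h * k3.dv });
                return { s.y + h / 6.0 * (k1.dy + 2*k2.dy + 2*k3.dy + k4.dy),
                         s.v + h / 6.0 * (k1.dv + 2*k2.dv + 2*k3.dv + k4.dv) };
            }

            int main() {
                State s{100.0, 0.0};           // drop from 100 m at rest
                const double h = 1.0 / 60.0;   // one step per 60 Hz frame
                while (s.y > 0.0)
                    s = rk4_step(s, h);
                std::printf("hits the ground at %.2f m/s\n", s.v);
            }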
  • by Anonymous Coward on Monday March 20, 2006 @04:11PM (#14959335)
    This physics system is used for visual physics (i.e., realistic graphical effects), not gameplay physics, which are still done on the CPU.

    Therefore you get a 10x framerate increase over running massively intensive effects on the CPU.

    This is good, because games will look nicer. But if you don't have the GPU grunt, you can simply disable the effects (or cut them down) in game - it won't affect the gameplay.
  • SLI? (Score:5, Insightful)

    by temojen ( 678985 ) on Monday March 20, 2006 @04:11PM (#14959339) Journal
    Why does this require SLI? You can do stream processing on most relatively-modern accelerated 3d video cards.
    • Re:SLI? (Score:4, Informative)

      by Aranth Brainfire ( 905606 ) on Monday March 20, 2006 @04:20PM (#14959424)
      It doesn't, according to the article.
    • I think the point is that most SLI systems are bottlenecked on the CPU, whereas most single card systems still bottleneck on the Graphics card. I'm not sure if this is actually true, but that's the impression I got from the article. By offloading some of the physics processing, you can theoretically remove that bottleneck and get slightly better performance (until you bottleneck on something else, perhaps even the CPU again).

      This also means that the bottleneck will more often be the graphics card, which
    • I can't get the article to load, but looking at Havok's site with information about it, it doesn't.

      Basically, all you need is a video card that supports shader model 3. I believe this is all 6000 series GeForces (nVidia) and all X1000 series Radeons (ATI).

      It also appears that they are working hard to parallelize their physics engine, so the bit about SLI is just icing on the cake- it can support multiple cards on one machine.
  • Nice (Score:5, Interesting)

    by BWJones ( 18351 ) * on Monday March 20, 2006 @04:12PM (#14959348) Homepage Journal
    This will be critically important as programs start to push particle and geometry modeling. I remember back when I had my Quadra 840av in 1993, I popped a couple of Wizard 3dfx Voodoo cards in it when they first started supporting SLI and the performance benefits were noticeable. Of course we were all hoping for the performance to continue to scale, but 3dfx started getting interested in other markets, including defense, and was then bought by Nvidia, making me wonder if SLI would ever really take off. It's nice to see that the technology is still around and flourishing.

    • Re:Nice (Score:2, Informative)

      by jonoid ( 863970 )
      I don't mean to flame, but how did you put Voodoo cards in a Quadra? They never made NuBus Voodoo cards, only PCI. Perhaps you mean a PowerMac of some sort?
      • You know.... I think you are absolutely correct. I've had so many Macs, but it must have been my first PowerMac 9600....which would have made it around late 1996 or early 1997 or so. Thanks for the clarification, because as I remember in the dim recesses of my mind the 840av was the one that I had three Radius cards in allowing me to play Hornet with three monitors. Wow....it seems so long ago.

        • Re:Nice (Score:3, Funny)

          by Thing 1 ( 178996 )
          late 1996 or early 1997 [...] Wow....it seems so long ago.

          It's because we're getting closer to Advanced Technology #1.

          Like in Civilization, the way olden times rush by quickly, but once you start getting closer and closer to modern times, it starts taking longer and longer and then it's 5:30 in the morning and you can only sleep a half hour before school?

          Yeah, that's what technology's doing to all of us. ;-)

  • co-processor (Score:5, Interesting)

    by ZachPruckowski ( 918562 ) <zachary.pruckowski@gmail.com> on Monday March 20, 2006 @04:13PM (#14959356)
    How does this work in relation to AMD's consideration of a physics coprocessor or another specialized processor? It seems like that solution is superior.
    • This has a one-up on things like PhysX [ageia.com], in my opinion, since everyone needs a video card, but you don't really need a "physics" card.

      Maybe that will change, but if the GPU can do the work, why invest in a separate piece of hardware?

  • General purpose GPUs (Score:5, Interesting)

    by Mr. Vandemar ( 797798 ) on Monday March 20, 2006 @04:13PM (#14959360) Homepage
    I've been waiting for this for a while. It's the obvious next step in GPU design. I have a feeling GPUs are going to become more and more general, and eventually accelerate the majority of inherently parallel processes, while the CPU executes everything else. We don't even have to change the acronym. Just call it a "Generic Processing Unit"...
    • by supra ( 888583 )
      And if you continue down this line of thinking, you realize that the GPU and CPU are asymptotically approaching each other.
      Hence the Cell processor.
  • Press release. (Score:3, Interesting)

    by Goalie_Ca ( 584234 ) on Monday March 20, 2006 @04:13PM (#14959363)
    Of course it's nothing more than a press release, but it raises numerous questions:

    1) What limitations are there on calculations? A GPU is not as general as a CPU, and it would probably suck when dealing with branches, especially when they aren't independent.

    2) How much faster could this actually be? Is it simply a matter of looking to the future? (i.e., we can already run with aniso and AA at high resolutions, so 5 years from now they'll be "overpowered"). IMO the next logical step is full-fledged HDR and then more polygons.

    3) What exactly is expected of these? General physics shouldn't be expected, but I can understand if they handle small effects here or there.
  • by Hortensia Patel ( 101296 ) on Monday March 20, 2006 @04:14PM (#14959374)
    I don't think this is a general physics processor. It seems to be aimed at "eyecandy" physics calculations - mostly particle systems - whose results don't need to feed back into application logic. Which makes sense, given that GPU->CPU readbacks are a notorious performance killer.

    Potentially shiny, but not really revolutionary or new. People have been doing particle system updates with shaders for a while now.
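
    A rough CPU-side sketch of the "eyecandy" particle update described above: every particle is advanced independently and nothing is read back for game logic, which is exactly the shape of work that ports well to a shader pass. All names and constants here are illustrative, not part of NVIDIA's or Havok's API.

        #include <vector>

        struct Particle { float x, y, z; float vx, vy, vz; float life; };

        // Pure "visual" physics: each particle is updated independently and
        // nothing feeds back into gameplay decisions, so the whole loop could
        // run as one shader pass over a texture of positions and velocities.
        void update_particles(std::vector<Particle>& ps, float dt) {
            const float gravity = -9.81f;
            for (Particle& p : ps) {
                p.vy   += gravity * dt;
                p.x    += p.vx * dt;
                p.y    += p.vy * dt;
                p.z    += p.vz * dt;
                p.life -= dt;   // fade-out timer, a purely cosmetic property
            }
        }

        int main() {
            std::vector<Particle> ps(1000, Particle{0, 0, 0, 1, 5, 0, 2});
            update_particles(ps, 1.0f / 60.0f);
        }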
  • by Anonymous Coward on Monday March 20, 2006 @04:17PM (#14959395)
    This neither requires SLI nor is it limited to NVIDIA chips. NVIDIA is just launching it publicly. ATI will be showing it off behind closed doors this week.
    • ATI will be showing it off behind closed doors this week.

      ...but considering how ATI's been lately, showing it off this week means they might be able to deliver a working product this year -- but just as likely not.

      I really don't mean to be nasty, but it seems like an awful lot of what ATI has announced recently has taken an _awfully_ long time to really become available.

      The usual explanation for this is that the effort that goes into designing the chip for the XBox distracts the company involved (

    • Good, because I have my heart set on a Crossfire system, not an SLI system (personal preference).
  • 10x faster? (Score:5, Funny)

    by Anonymous Coward on Monday March 20, 2006 @04:18PM (#14959407)
    10x faster? They might as well just say it's infinity times faster so that we know they are bullshitting from the second we read it...
    • Re:10x faster? (Score:4, Interesting)

      by LLuthor ( 909583 ) <lexington.luthor@gmail.com> on Monday March 20, 2006 @04:34PM (#14959545)
      10 times faster is not all that unreasonable.

      I used Brook to compute some SVM calculations, and my 7800GT was about 40x faster than my Athlon64 3000+ (even after I hand-optimized some loops using SSE instructions). So it's perfectly understandable for physics to be 10x faster on the GPU.
      • Re:10x faster? (Score:3, Insightful)

        by richdun ( 672214 )
        The GPU may be 10x faster at physics calculations, but the summary says framerate improvements of 10x - so how realistic is something like 600 fps? Ridiculous, even if you had a monitor/graphics system capable of 600 refreshes per second.
          The GPU may be 10x faster at physics calculations, but the summary says framerate improvements of 10x - so how realistic is something like 600 fps? Ridiculous, even if you had a monitor/graphics system capable of 600 refreshes per second.

          Emphasis mine. You already know what I'm going to say, based on my emphasis, right?

          Regardless, the summary doesn't even say that. It says according to the benchmark, it got a 10x framerate improvement. The benchmark happened to be a very intensive physics simulation.
          • You already know what I'm going to say, based on my emphasis, right?

            Hehe, no whatever could you mean?

            I just love what makes it through as summaries on here. I think the grandparent of my original post was right - you have to immediately call BS on something that has "benchmark" and "10x" in it. Very few posters got what you pointed out, that it was a physics-intensive benchmark and that its results can't be reasonably extrapolated to all cases. We almost need a Slashdot for Dummies that is just links
        • Re:10x faster? (Score:3, Informative)

          by niskel ( 805204 )
          The article compared fancy physics effects running at ~6fps on the CPU versus ~60fps on the GPU. This is completely understandable. It does nothing for current games, and you most definitely will not see framerates of 600.
    • 10x faster? They might as well just say it's infinity times faster so that we know they are bullshitting from the second we read it...

      Everything I've ever read (and it's been a lot) says that moving proper algorithms from CPUs to GPUs routinely gets 10x speedups. If you don't believe it... try to play an FPS game with software emulation and no graphics hardware... I can promise you the speedup from the hardware is well above 10x.
  • Competition (Score:2, Informative)

    by Anonymous Coward
    Don't forget that http://www.ageia.com/ [ageia.com] is already doing this, and is set to ship its cards sometime this year, hopefully. Of course, the significant difference between the two is that you would only have to buy one card for the SLI solution.
    • Re:Competition (Score:3, Insightful)

      by LLuthor ( 909583 )
      Many many people already have a capable GPU and would only need a driver/software update.

      The PhysX card is considerably more cumbersome to use for the average gamer, and is consequently less likely to be supported by game developers. Not to mention the fact that the cards are likely to be quite expensive.
      • We can go one of two ways:

        1. Absolutely everything is dealt with by general purpose chips which can handle anything you throw at them, or:
        2. Everything can be dealt with by its own dedicated unit. Physics, graphics, AI, audio, everything.

        Either way, it works best if you can properly thread your engines so that things can be properly parallel processed. Fix that first, then we can start worrying about if our games need PPUs, GPUs, CPUs or any other PU we can come up with.

        Personally I'm in favour of separate
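
        A bare-bones sketch of the "thread your engines first" point above: one possible way (of many) to overlap the physics step with rendering of the previous frame. The stand-in functions just sleep to simulate work; this is a toy layout, not a recommendation from any engine vendor.

            #include <chrono>
            #include <cstdio>
            #include <thread>

            // Stand-ins for real engine work; here they only sleep to simulate load.
            void step_physics(double dt) {
                std::this_thread::sleep_for(std::chrono::milliseconds(5));
                std::printf("physics stepped by %.4f s\n", dt);
            }
            void render_previous_frame() {
                std::this_thread::sleep_for(std::chrono::milliseconds(5));
                std::printf("previous frame rendered\n");
            }

            // One frame: run the physics update concurrently with rendering of the
            // last frame's results, then join before anything depends on the new state.
            void run_frame(double dt) {
                std::thread physics(step_physics, dt);
                render_previous_frame();
                physics.join();
            }

            int main() {
                for (int i = 0; i < 3; ++i) run_frame(1.0 / 60.0);
            }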
      You'd still have to buy two cards for the SLI solution because SLI is two (or more, ugh) cards by definition. I agree a graphics card solution would be an easier sell. Maybe Ageia could license their chip to ATI/NVIDIA and integrate it onto one board so that it was only a driver update for the user.

        "The physX card is considerably more cumbersome to use for the average gamer..."
        It would just be another driver for the gamer. The onus would be on the developers to support it.

        "Not to mention the fact that the cards
  • PCI Express (Score:3, Insightful)

    by CastrTroy ( 595695 ) on Monday March 20, 2006 @04:19PM (#14959416)
    Why not have a complete physics card? It would be a nice use for that PCI Express bus, which only has video cards as an option right now. That way you could just buy the physics card without having to upgrade the video card. Although this is all kind of weird. Start offloading everything to specialized cards and you pretty much have a multiple-CPU machine, where each CPU is specially tuned to do a specific type of processing. Might be the leap necessary to maintain Moore's law.
    • How does stacking processors affect Moore's law? That doesn't increase IC complexity, otherwise we might as well claim we've far surpassed Moore's law because Livermore's BlueGene install has 6.8 trillion transistors...
    • You're not the first one [ageia.com] to think of that. Of course this is the old cycle of reincarnation rearing its ugly head again. Sadly, I don't think the PhysX card will be that great of a success unless a LOT of game developers get on board. It's just rather expensive for what it offers and adds yet another layer of complexity to a system that can already be hard to get running correctly.
    • Re:PCI Express (Score:3, Informative)

      by soldack ( 48581 )
      Lots of other things use PCI-Express including:
      Single and Dual Port 4X SDR and DDR InfiniBand over PCI-Express x8
      Dual port 2Gb and 4Gb FibreChannel over PCI-Express x4
      Ethernet (multiport 1 gigabit and 10 gigabit), over PCI-Express x4
      Multi port FireWire 800 over PCI-Express x1
      DualChannel UltraSCSI320 over PCI-Express x1

      There are probably more... PCI-Express grew out of InfiniBand. They cut out the networking to make it cheaper for use just inside a single system. Ironically, they put a lot of the networking b
  • by Jerry Coffin ( 824726 ) on Monday March 20, 2006 @04:21PM (#14959434)
    A few years ago, they were being slammed for doing load balancing where they offloaded graphics processing onto the CPU when/if the CPU was less busy than the GPU. Now the GPUs are enough faster that they can frequently expect to be "ahead" of the CPU -- so now they're starting to work on doing the opposite, offloading work from the CPU to the GPU instead.

    Of course, the basic idea isn't exactly brand new - some of us have been writing shaders to handle heavy-duty math for a while. The difference is that up until now, most real support for this has been more or less experimental (e.g. the Brook system [stanford.edu] for doing numeric processing on GPUs). Brook is also sufficiently different from an average programming language that it's probably fairly difficult to put to use in quite a few situations.

    Having a physics-oriented framework will probably make this capability quite a bit easier to apply in quite a few more situations, which is clearly a good thing (especially for nVidia and perhaps ATI, of course).

    The part I find interesting is that Intel has taken a big part of the low-end graphics market. Now nVidia is working at taking more of the computing high-end market. I can see it now: a game that does all the computing on a couple of big nVidia chips, and then displays the results via Intel integrated graphics...
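
    To make the "shaders for heavy-duty math" idea above concrete: the operations that map well are element-wise kernels over large arrays, like the SAXPY below. On the CPU it is a plain loop; a system like Brook expresses roughly the same kernel and runs it as a fragment program over textures. The code shown is only a CPU reference version, not Brook syntax.

        #include <cstdio>
        #include <vector>

        // SAXPY: y[i] = a * x[i] + y[i]. Each element is independent, which is
        // why this shape of computation translates directly into a GPU fragment
        // program operating on textures instead of arrays.
        void saxpy(float a, const std::vector<float>& x, std::vector<float>& y) {
            for (std::size_t i = 0; i < x.size(); ++i)
                y[i] = a * x[i] + y[i];
        }

        int main() {
            std::vector<float> x(4, 1.0f), y(4, 2.0f);
            saxpy(3.0f, x, y);
            std::printf("%.1f %.1f %.1f %.1f\n", y[0], y[1], y[2], y[3]);   // 5.0 x4
        }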

  • I wonder if these cards could be useful for numerical computation? I could use extra CPU power for solving a linear system.
  • Applications should be built to be more efficient and to take advantage of modern hardware, instead of simply relying on consumers purchasing faster hardware.
    • Yeah, I don't get these game programmers, always writing shitty bloated code any old Slashbot could best.
    • Except that, in the world of games and graphics, the programmers are writing efficient apps. The demands they (i.e., we) make of graphics nowadays mean that the hardware has to be faster and faster too.

      Look a few years back and see what the cutting edge of graphics was, and look to the kind of things we get today. Go forward a few years and we will be having realistic computer-generated images.

      Now, if they can only work on making websites more scalable for a slashdotting, I could read the article!
    • Yes, you could probably re-write a lot of applications in lower level languages, and have them run twice as fast. But it would cost 10 times as much.

      It's far cheaper to write applications in high level languages, and run them on today's hardware, than it is to hyper-optimize for yesterday's hardware.
  • by Homology ( 639438 ) on Monday March 20, 2006 @04:27PM (#14959484)
    Modern graphics cards can be used to bypass security measures as an unprivileged user (reading kernel memory, say). Theo de Raadt of OpenBSD reminded [theaimsgroup.com] users how modern X works:

    I would like to educate people of something which many are not aware of -- how X works on a modern machine.

    Some of our architectures use a tricky and horrid thing to allow X to run. This is due to modern PC video card architecture containing a large quantity of PURE EVIL. To get around this evil the X developers have done some rather expedient things, such as directly accessing the cards via IO registers, directly from userland. It is hard to see how they could have done other -- that is how much evil the cards contain. Most operating systems make accessing these cards trivially easy for X to do this, but OpenBSD creates a small security barrier through the use of an "aperture driver", called xf86(4) (...)

    • Sorry, it's hard to take someone seriously when they use "evil" like that, especially in regard to hardware.

      I know the hacker-jargon use of "evil", but this is really overused.

      OTOH, it's Theo, and he likes to hear himself talk.
    • Non-root, user-level access to IO ports (by authorized programs) is not evil; it's what allows non-kernel level display servers. It keeps some really complicated stuff out of the kernel, thus improving system stability.
    • by xactoguy ( 555443 ) on Tuesday March 21, 2006 @03:58AM (#14962416)
      ... Most of you didn't get the point. It's not that you can access the GPU from userland (it depends on that access, but that's not the point). The main point is that the current gen of programmable GPUs allows you to (theoretically) directly access kernel memory, as pointed out later in the thread by Theo:


      > Are these new programable cards capable of reading main memory, which
      > OpenBSD would not be able to prevent if machdep.allowaperture were
      > set to something other than 0?

      Yes, they have DMA engines. If the privilege seperate X server has a
      bug, it can still wiggle the IO registers of the card to do DMA to
      physical addresses, entirely bypassing system security.


      Thus, a resourceful attacker could theoretically get access to kernel memory through anything that allows access to the video card. An unusual and probably difficult-to-exploit hole, but a possible hole nonetheless.
  • A company called Ageia is making a physics processing card [gamespot.com] that will handle physics calculations. It will be supported by City of Heroes/Villains [vnewscenter.com] when it is available.
  • by l33t-gu3lph1t3 ( 567059 ) <arch_angel16.hotmail@com> on Monday March 20, 2006 @04:39PM (#14959584) Homepage
    Real-time cinematic-quality graphics rendering: HARD.
    Physics acceleration that allows for rather impressive collisions and water: MUCH EASIER.

    Maximum output for minimum input. Having physics acceleration in the GPU makes sense as you don't have to buy an extra accelerator card.
  • by TomorrowPlusX ( 571956 ) on Monday March 20, 2006 @04:43PM (#14959608)
    I can't read the article since it's slashdotted, but here's what I want to know:

    First, what physics API are they using? This is, after all, a little like OpenGL vs DirectX. You need a physics API to do this stuff, and there are a *lot* of portable, high-quality APIs out there: Havok, Newton, Ageia, and the open source ODE (which I use). The APIs aren't interchangeable, and aren't necessarily free.

    Second, at least when I'm doing this work, there's a *lot* of back and forth between the physics and my game engine. Maybe not a whole lot of data, but a lot of callbacks -- a lot of situations where the collision system determines A & B are about to touch and has to ask my code what to do about it. And my code has to do some hairy stuff to forward these events to A & B ( since physics engines have their own idea of what a physical object instance is, and it's orthogonal to my game objects, so I have to have some container and void ptr mojo ) and so on and so forth. If all this is running on the GPU, sure the math may be fast but I worry about all the stalls resulting from the back and forth. Sure, that can be parallelized and the callbacks can be queued, but still.
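
    The "container and void ptr mojo" described above usually amounts to the pattern sketched below: the physics engine stores an opaque user-data pointer on each body and hands it back in the collision callback, where the game casts it to its own object type. This is a generic, hypothetical illustration of the pattern; ODE, Havok, and Newton each expose it through their own concrete APIs.

        #include <cstdio>

        // The game's own notion of an object.
        struct GameObject {
            const char* name;
            void on_touched(const GameObject& other) const {
                std::printf("%s was touched by %s\n", name, other.name);
            }
        };

        // What a physics engine typically exposes: a rigid body carrying an opaque
        // user-data pointer, plus a collision callback invoked with the two bodies.
        struct RigidBody { void* user_data; };

        void collision_callback(RigidBody& a, RigidBody& b) {
            // Forward the engine-level event back to the game-level objects.
            auto* ga = static_cast<GameObject*>(a.user_data);
            auto* gb = static_cast<GameObject*>(b.user_data);
            ga->on_touched(*gb);
            gb->on_touched(*ga);
        }

        int main() {
            GameObject crate{"crate"}, barrel{"barrel"};
            RigidBody body_a{&crate}, body_b{&barrel};
            collision_callback(body_a, body_b);   // pretend the engine found a contact
        }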

    Anyway, I want info, not marketing.

    Oh Christ, and finally, I work on a Mac. When will I see support? (lol, this is me falling off my chair, laughing and crying... sobbing, because I know the answer). Can we at least assume/hope that they'll provide a software fallback API, and that that API will be available for Linux and Mac? After all, NVIDIA has Linux and Mac ports of Cg, so why not this? I'm keeping my fingers crossed.
    • Havok is the most widely used physics API in the gaming industry, and it is the one being targeted by this implementation.

      That said, it would presumably be possible to implement other APIs (if there is sufficient demand), given that the GPU hardware is now general enough to handle that level of computation.
  • Microsoft has also announced a new server technology that offloads all calls from Slashdot to a separate system, avoiding the dreaded "Slashdotting" effect.
  • Liars (Score:2, Insightful)

    by nnnneedles ( 216864 )
    "..improves the frame rate by more than 10x"

    Liars end up in Hell.
  • .....that makes games fun to play?
  • BF2, for example. This game is ALL physics... I love it personally, and one of the coolest (and crappiest) things is when you get shelled by artillery 500 yards into the air. Your limbs are flying everywhere, and the FPS decrease is noticeable when there are 10 or so soldiers all ragdolled in the air (I'm sure particles have a lot to do with this too, but I am not an FPS programmer). FEAR is another great example. This is just the turning point in realism for FPS games, and that is very cool IMHO.
    • BF2, for example. This game is ALL physics... I love it personally, and one of the coolest (and crappiest) things is when you get shelled by artillery 500 yards into the air. Your limbs are flying everywhere, and the FPS decrease is noticeable when there are 10 or so soldiers all ragdolled in the air (I'm sure particles have a lot to do with this too, but I am not an FPS programmer).

      Uh, that's highly unlikely. The physics of a flying body is no more difficult to compute than the physics of a running body. "Particl

  • by Jherek Carnelian ( 831679 ) on Monday March 20, 2006 @05:14PM (#14959838)
    The guys over at http://www.gpgpu.org/ [gpgpu.org] have been doing various math calculations, including 'physics', on GPUs for a while now. One big problem is that the only real API is OpenGL. So not only do you have to be a smart math programmer (which is pretty rare to begin with), you also have to understand graphics programming and then figure out how to map traditional math operations onto the graphics operations that OpenGL makes available. It isn't that hard to do simple things like matrix math, but really optimizing it for good performance requires almost wizard-level understanding of OpenGL and the underlying hardware implementation.

    The cards' math capabilities would be so much more accessible (and thus used by so many more programmers) if Nvidia (and ATI) would come out with standard math-library interfaces to their cards. Give us something that looks like FFTW and has been tweaked by the card engineers for maximum performance, and then we will see everybody and his brother using these video cards for math co-processing.
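
    As a sketch of the kind of "FFTW-like" interface being asked for above: a vendor-tuned library could accept plain arrays and hide all of the OpenGL plumbing behind ordinary calls. Everything below is hypothetical wish-list code; the body is a trivial CPU fallback so the sketch compiles, whereas a real vendor version would upload the data to the card, run a tuned shader, and read the result back.

        #include <cstdio>
        #include <vector>

        // Hypothetical "FFTW-style" interface for GPU math: the caller passes
        // plain arrays and never touches OpenGL.
        namespace gpumath {

        void multiply_matrices(const std::vector<float>& a,   // n x n, row-major
                               const std::vector<float>& b,   // n x n, row-major
                               std::vector<float>& c,         // n x n result
                               std::size_t n) {
            c.assign(n * n, 0.0f);
            for (std::size_t i = 0; i < n; ++i)
                for (std::size_t k = 0; k < n; ++k)
                    for (std::size_t j = 0; j < n; ++j)
                        c[i * n + j] += a[i * n + k] * b[k * n + j];
        }

        }  // namespace gpumath

        int main() {
            std::vector<float> a{1, 2, 3, 4}, b{5, 6, 7, 8}, c;
            gpumath::multiply_matrices(a, b, c, 2);
            std::printf("%.0f %.0f / %.0f %.0f\n", c[0], c[1], c[2], c[3]);   // 19 22 / 43 50
        }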
  • By the end of next week there will be a game out on the market that will require one of these cards...
  • Certainly worth investing in SLI if it works.

    How about, Certainly worth investing in SLI if it works on the specific game(s) you most want to play at higher speeds and/or resolutions.

    Otherwise, not worth investing in at all.

  • The (next) holy grail of games is the fully destructible environment, where damage isn't a sprite on a plane but actual particle rearrangement of walls and buildings and such. This is a pretty strong step in that direction.

    The problem is that the internal logic of games like Quake and Half-Life is that, if the environment reacts correctly with respect to the physics of weapons, any game environment will quite quickly be reduced to piles of rubble and little more. Think of the pictures of Europe in WWII,
  • by fallen1 ( 230220 ) on Monday March 20, 2006 @05:53PM (#14960174) Homepage
    While I hate to ride a horse into the ground and then feed off its bones, every time I hear something like this happening I immediately think "Amiga". Why? I would guess that it is because the Amiga had a CPU and then it had dedicated chips to handle other functions - math, graphics, sounds, etc. This arrangement created a computer system that did not get surpassed for MANY years after its demise and, some would say, it still hasn't been bested in many areas (multi-tasking is one of those).

    Each time I hear that an "advance" has been made and I read that it is basically re-integrating various components back into the primary system or tying those components tighter to the CPU, I can't help but scream "AMIGA!" Of course, this leads to co-workers walking wider paths around me while avoiding eye contact ;-).

    Still, all of these advances lead me to believe that we might be going back to a dedicated-chip style of computing, BUT what I am also hoping for is a completely upgradeable system where I can pull the, say, physics processor out and plug a newer or better chip in without having to replace the entire motherboard or daughterboard. Which, of course, leads me right back to that whole screaming scenario :) The Amiga style of computing may yet live again.
