GPUs To Power Supercomputing's Next Revolution 78

evanwired writes "Revolution is a word that's often thrown around with little thought in high-tech circles, but this one looks real. Wired News has a comprehensive report on computer scientists' efforts to adapt graphics processors for high-performance computing. The goal is to use these NVIDIA and ATI chips for non-graphics number crunching in complex scientific calculations. Alongside this week's release of its wicked-fast GeForce 8800, NVIDIA announced the first C-compiler environment for the GPU; Wired reports that ATI plans to release at least some of its proprietary code to the public domain to spur non-graphics development of its technology. Meanwhile, lab results are showing some striking comparisons between CPU and GPU performance. Stanford's distributed computing project Folding@Home launched a GPU beta last month and is now publishing data putting donated GPU performance at 20-40 times the efficiency of donated CPU performance."
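The "C-compiler environment" the summary mentions exposes the GPU's data-parallel model to C programmers: you write a small kernel that runs once per data element. Below is a minimal sketch of that style in plain C, not actual GPU code - the `saxpy_kernel` and `launch` names are illustrative, and the serial loop stands in for what the GPU would execute across thousands of threads:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative sketch: GPU C environments express work as a "kernel"
 * invoked once per data element. On a GPU each invocation below would
 * be one hardware thread; here a serial loop emulates the launch. */
static void saxpy_kernel(size_t i, float a, const float *x, float *y)
{
    y[i] = a * x[i] + y[i];  /* identical math per element, no branching */
}

static void launch(size_t n, float a, const float *x, float *y)
{
    for (size_t i = 0; i < n; i++)  /* GPU: these run in parallel */
        saxpy_kernel(i, a, x, y);
}
```

The point of the model is that the per-element function has no cross-element dependencies, so the compiler is free to map each index to a separate parallel thread.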
  • Reply to #16789087 (Score:4, Insightful)

    by Dr. Eggman ( 932300 ) on Thursday November 09, 2006 @06:18PM (#16789255)
    "Let me see if I have this down right: With the progress of multi-core CPU's, especially looking at the AMD / ATI deal, PC's are moving towards a single 'super chip' that will do everything while phasing out the use of a truly separate graphics system. Meanwhile, supercomputers are moving towards using GPU's as the main workhorse. Doesn't that strike anybody else as a little odd?"

    I picture this:

    Before:
    CPU makers: "Hardware's expensive, keep it simple."
    GPU makers: "We can specialize the expensive hardware separately!"


    Now:
    CPU makers: "Hardware's cheaper and cheaper, let's keep up our profits by making our chips more inclusive."
    GPU makers: "We can specialize the cheap hardware in really really big number-crunch projects!"


    btw, why isn't the reply button showing up? I'm too lazy to hand type the address.
  • by tayhimself ( 791184 ) on Thursday November 09, 2006 @08:40PM (#16790125)
    Unfortunately, the new G80 is still not IEEE 754 compliant for single-precision (32-bit) floating-point math. It is mostly compliant, however, so it may be usable for some people. Forget it if you want to do 64-bit double-precision floats, though.
  • Re:So... (Score:3, Insightful)

    by mikael ( 484 ) on Friday November 10, 2006 @12:02PM (#16794350)
    With the progress of multi-core CPU's, especially looking at the AMD / ATI deal, PC's are moving towards a single 'super chip' that will do everything while phasing out the use of a truly separate graphics system.

    Not really...

    PCs run multiple processes with unpredictable branching - network protocol stacks, device drivers, word processors, plug-'n'-play devices. More CPU cores help to spread that load. On the desktop, 3D functionality was originally a bolt-on to the windowing system through a separate API; now it is integral to it. Even so, the new multi-core CPUs will still include graphics processing logic.

    In the past, supercomputers were built either from custom ASICs or simply from a large number of CPUs networked together in a particular topology.

    GPUs now support both floating-point textures and downloadable shader programs that execute in parallel. Combining these two features gives the GPU much of the functionality of a supercomputer.
    Until now, though, GPUs have supported only 16-bit floating-point precision rather than the 32-bit or 64-bit precision that traditional supercomputing applications such as FFT or computational fluid dynamics require.

    And since these applications are purely mathematical, with no conditional branching inside the innermost loops, they are well suited to being ported to the GPU. The only limitation has been that GPUs couldn't form scalable architectures - at least until SLI came along. So you've basically got supercomputing performance on a board, which fits into the scalable architecture of a supercomputer [ibm.com].
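A sketch of the kind of branch-free inner loop described above: one Jacobi relaxation sweep, the building block of simple heat/fluid solvers. This is plain C with an illustrative `jacobi_sweep` name; on a GPU, each interior grid point could map to its own thread because every point runs identical arithmetic with no conditionals.

```c
#include <assert.h>
#include <stddef.h>

/* One Jacobi sweep over a 1-D grid: each interior point becomes the
 * average of its two neighbours. The loop body is pure arithmetic -
 * no branches - which is exactly the shape that maps well to GPUs. */
static void jacobi_sweep(const float *in, float *out, size_t n)
{
    for (size_t i = 1; i + 1 < n; i++)   /* interior points only */
        out[i] = 0.5f * (in[i - 1] + in[i + 1]);
}
```

Repeating such sweeps until the grid stops changing solves the discrete Laplace equation; the per-point independence within a sweep is what lets the GPU run them all at once.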
