GPUs To Power Supercomputing's Next Revolution

evanwired writes "Revolution is a word that's often thrown around with little thought in high-tech circles, but this one looks real. Wired News has a comprehensive report on computer scientists' efforts to adapt graphics processors for high-performance computing. The goal for these NVIDIA and ATI chips is to tackle non-graphics number crunching for complex scientific calculations. Alongside this week's release of its wicked-fast new GeForce 8800, NVIDIA announced the first C-compiler environment for the GPU; Wired reports that ATI is planning to release at least some of its proprietary code to the public domain to spur non-graphics development of its technology. Meanwhile, lab results are showing some amazing comparisons between CPU and GPU performance: Stanford's distributed computing project Folding@Home launched a GPU beta last month and is now publishing data that puts donated GPU performance at 20-40 times the efficiency of donated CPU performance."
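For the curious, the data-parallel style that NVIDIA's new C-compiler environment (CUDA) exposes looks roughly like the sketch below. The kernel and variable names are illustrative, not taken from NVIDIA's documentation or the article; the idea is simply one GPU thread per array element.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// One GPU thread per array element: y = a*x + y (names are illustrative).
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));  // memory visible to CPU and GPU
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);  // 256-thread blocks
    cudaDeviceSynchronize();

    printf("y[0] = %.1f\n", y[0]);  // expect 5.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```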
  • Sweet (Score:5, Interesting)

    by AKAImBatman ( 238306 ) * <akaimbatman AT gmail DOT com> on Thursday November 09, 2006 @05:55PM (#16789069) Homepage Journal
    NVIDIA announced the first C-compiler environment for the GPU

    One more step toward GPU raytracing. We're already pushing ridiculous numbers of polygons, with less and less return for our efforts. The future lies in projects like OpenRT [openrt.de]. With any luck, we'll start being able to blow holes through levels rather than having to run the rat-maze. ;)
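To make the parent's point concrete: the core of a ray tracer is an embarrassingly parallel intersection test, one thread per ray, which is exactly the workload these GPUs are built for. A minimal hypothetical sketch in CUDA-style C (assumed names throughout; this is not OpenRT code):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

struct Ray { float3 o, d; };  // origin, normalized direction

// Distance along the ray to a sphere at center c with radius r,
// or -1.0f on a miss (standard quadratic-formula intersection).
__device__ float hitSphere(const Ray &ray, float3 c, float r) {
    float3 oc = make_float3(ray.o.x - c.x, ray.o.y - c.y, ray.o.z - c.z);
    float b = oc.x * ray.d.x + oc.y * ray.d.y + oc.z * ray.d.z;
    float cc = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - r * r;
    float disc = b * b - cc;
    return disc < 0.0f ? -1.0f : -b - sqrtf(disc);
}

__global__ void trace(Ray *rays, float *tHit, int nRays, float3 c, float r) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < nRays)
        tHit[i] = hitSphere(rays[i], c, r);  // one ray per thread
}

int main() {
    const int nRays = 256;
    Ray *rays; float *tHit;
    cudaMallocManaged(&rays, nRays * sizeof(Ray));
    cudaMallocManaged(&tHit, nRays * sizeof(float));
    for (int i = 0; i < nRays; ++i) {
        rays[i].o = make_float3(0, 0, 0);
        rays[i].d = make_float3(0, 0, 1);   // all rays look down +z
    }
    trace<<<1, nRays>>>(rays, tHit, nRays, make_float3(0, 0, 5), 1.0f);
    cudaDeviceSynchronize();
    printf("t[0] = %.2f\n", tHit[0]);       // expect 4.00 (sphere at z=5, r=1)
    cudaFree(rays);
    cudaFree(tHit);
    return 0;
}
```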
  • by Marxist Hacker 42 ( 638312 ) * <seebert42@gmail.com> on Thursday November 09, 2006 @05:57PM (#16789089) Homepage Journal
    Simple video games that run ENTIRELY on the GPU, mainly for developers. Got 3 hours (or I guess it's now going on 7 hours) to wait for an ALTER statement on a table to complete, and you're bored stiff? Fire up this video game, and while your CPU cranks away, you can be playing with virtually NO performance hit to the background CPU task.
  • by Baldrson ( 78598 ) * on Thursday November 09, 2006 @06:04PM (#16789167) Homepage Journal
    The 8800 looks like the first GPU that really enters the realm of the old-fashioned supercomputing architectures pioneered by Seymour Cray that I cut my teeth on in the mid-1970s. I can't wait to get my hands on their "C" compiler.
  • 8800GTX and HPC (Score:5, Interesting)

    by BigMFC ( 592465 ) on Thursday November 09, 2006 @06:08PM (#16789195)
    The specs on this board are pretty crazy: 128 single-precision FP units, each capable of an FP multiply-add or a multiply, operating at 1.35 GHz and no longer closely coupled to the traditional graphics pipeline. The memory hierarchy also looks interesting... this design is going to be seeing a lot of comparisons to the Cell processor. Memory is attached via a 384-bit bus (320-bit on the GTS) and operates at 900 MHz.

    The addition of a C compiler and GPGPU-specific drivers, available for Linux (!) as well as XP/Vista, means this is going to see widespread adoption among the HPC crowd. There probably won't be any papers on it published at SC06 in Florida next week, but over the next year there will likely be a veritable torrent of publications (there is already a LOT being done with GPUs). The new architecture really promotes GPGPU apps, and the potential performance/$ is compelling, especially factoring in development time, which should be significantly lower with this toolchain. A couple of 8800GTXes in SLI could give traditional clusters a run for their money on apps like FFTs. I can't wait till someone benchmarks FFT performance using CUDA; if anyone finds such numbers, post and let me know!
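For scale, those specs imply a peak of roughly 128 units x 1.35 GHz x 2 flops (multiply-add) = 345.6 single-precision GFLOPS. And for anyone who wants to try the FFT benchmark the parent asks about, a rough sketch using cuFFT (the FFT library that ships with the CUDA toolkit) might look like this; the problem size and timing method here are assumptions, not published numbers:

```cuda
#include <cstdio>
#include <cuda_runtime.h>
#include <cufft.h>

int main() {
    const int n = 1 << 20;                 // 1M-point complex transform
    cufftComplex *data;
    cudaMalloc(&data, n * sizeof(cufftComplex));

    cufftHandle plan;
    cufftPlan1d(&plan, n, CUFFT_C2C, 1);   // one 1-D complex-to-complex plan

    cudaEvent_t start, stop;               // GPU-side timers
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    cufftExecC2C(plan, data, data, CUFFT_FORWARD);  // in-place forward FFT
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("%d-point FFT: %.3f ms\n", n, ms);

    cufftDestroy(plan);
    cudaFree(data);
    return 0;
}
```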

  • power management (Score:2, Interesting)

    by chipace ( 671930 ) on Thursday November 09, 2006 @06:53PM (#16789465)
    I think that implementing the GPU as a collection of configurable ALUs is an awesome idea. I have two gripes:

    (1) Power management: I want at least 3 settings (lowest power, mid-range, and max-performance).

    (2) Where's the killer app? I value my electricity more than contributing to Folding and SETI.

    If they address these, I'm a customer... (I'm a cheap bastard who is fine with integrated 6150 graphics)
  • hybrids (Score:2, Interesting)

    by Bill Dog ( 726542 ) on Thursday November 09, 2006 @07:17PM (#16789607) Journal
    The following idea from TFA is what caught my eye:
    "In a sign of the growing importance of graphics processors, chipmaker Advanced Micro Devices inked a deal in July to acquire ATI for $5.4 billion, and then unveiled plans to develop a new "fusion" chip that combines CPU and GPU functions."

    I can see the coming age of multi-core CPUs not necessarily lasting very long now. We don't tend to need a large number of general-purpose CPUs. But a CPU+GPU chip, where the GPU has for example 128 1.35 GHz cores (from the NVIDIA press release), and with a new generation of compilers written to funnel sections of code marked parallelizable to the GPU portion and the rest to the CPU, would be tremendous (see the sketch below).

    Does Intel have any plans to try to acquire Nvidia?
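
A hand-written illustration of the split Bill Dog describes, in CUDA-style C. No such auto-parallelizing compiler is assumed to exist here, and all names are made up: the independent per-element loop is "funneled" to the GPU as one thread per element, while the serial setup and reduction stay on the CPU.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Parallelizable section: every element is independent, so a hypothetical
// compiler could ship it to the GPU portion of a fused CPU+GPU chip.
__global__ void scale_gpu(float *data, float s, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= s;
}

int main() {
    const int n = 1 << 16;
    float *d;
    cudaMallocManaged(&d, n * sizeof(float));   // memory visible to CPU and GPU
    for (int i = 0; i < n; ++i) d[i] = 1.0f;    // serial setup stays on the CPU

    scale_gpu<<<(n + 255) / 256, 256>>>(d, 3.0f, n);  // parallel part on the GPU
    cudaDeviceSynchronize();

    // Serial reduction back on the CPU (the "rest" of the code).
    double sum = 0.0;
    for (int i = 0; i < n; ++i) sum += d[i];
    printf("sum = %.1f\n", sum);                // expect 3.0 * n

    cudaFree(d);
    return 0;
}
```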
