GPUs To Power Supercomputing's Next Revolution
evanwired writes "Revolution is a word that's often thrown around with little thought in high-tech circles, but this one looks real. Wired News has a comprehensive report on computer scientists' efforts to adapt graphics processors for high-performance computing. The goal for these NVIDIA and ATI chips is to tackle non-graphics number crunching for complex scientific calculations. Alongside this week's launch of its wicked-fast GeForce 8800, NVIDIA announced the first C-compiler environment for the GPU; Wired reports that ATI is planning to release at least some of its proprietary code to the public domain to spur non-graphics development of its technology. Meanwhile, lab results are showing some amazing comparisons between CPU and GPU performance. Stanford's distributed computing project Folding@Home launched a GPU beta last month and is now publishing data putting donated GPU performance at 20-40 times the efficiency of donated CPU performance."
Sweet (Score:5, Interesting)
One more step toward GPU raytracing. We're already pushing ridiculous numbers of polygons, with less and less return for our efforts. The future lies in projects like OpenRT [openrt.de]. With any luck, we'll start being able to blow holes through levels rather than having to run the rat-maze.
What I'd like to see come from this (Score:5, Interesting)
8800 and Seymour's machines (Score:4, Interesting)
8800GTX and HPC (Score:5, Interesting)
The addition of a C compiler, plus drivers specific to GPGPU applications and available for Linux (!) as well as XP/Vista, means this is going to see widespread adoption amongst the HPC crowd. There probably won't be any papers on it published at SC06 in Florida next week, but over the next year there will probably be a veritable torrent of publications (there is already a LOT being done with GPUs). The new architecture really promotes GPGPU apps, and the potential performance/$ is compelling, especially factoring in development time, which should be significantly less with this toolchain. With a couple of 8800GTXes in SLI I could be giving traditional clusters a run for their money on apps like FFTs. I can't wait till someone benchmarks FFT performance using CUDA. If anyone finds such numbers, post them and let me know!
power management (Score:2, Interesting)
(1) Power Management : I want at least 3 settings (lowest power, mid-range and max-performance)
(2) Where's the killer app? I value my electricity more than contributing to folding and SETI.
If they address these, I'm a customer... (I'm a cheap bastard who is fine with integrated 6150 graphics)
hybrids (Score:2, Interesting)
"In a sign of the growing importance of graphics processors, chipmaker Advanced Micro Devices inked a deal in July to acquire ATI for $5.4 billion, and then unveiled plans to develop a new "fusion" chip that combines CPU and GPU functions."
I can see the coming age of multi-core CPUs not necessarily lasting very long now. We don't tend to need a large number of general-purpose CPU cores. But a CPU+GPU chip, where the GPU has for example 128 1.35GHz cores (from the NVIDIA press release), and with a new generation of compilers written to funnel sections of code marked parallelizable to the GPU portion, and the rest to the CPU, would be tremendous.
Does Intel have any plans to try to acquire Nvidia?