GPUs To Power Supercomputing's Next Revolution
evanwired writes "Revolution is a word that's often thrown around with little thought in high-tech circles, but this one looks real. Wired News has a comprehensive report on computer scientists' efforts to adapt graphics processors for high-performance computing. The goal for these NVIDIA and ATI chips is to tackle non-graphics number crunching for complex scientific calculations. Alongside the release of its wicked-fast new GeForce 8800, NVIDIA this week announced the first C-compiler environment for the GPU; Wired reports that ATI is planning to release at least some of its proprietary code to the public domain to spur non-graphics development of its technology. Meanwhile, lab results are showing some amazing comparisons between CPU and GPU performance. Stanford's distributed computing project Folding@Home launched a GPU beta last month that is now publishing data putting donated GPU performance at 20-40 times the efficiency of donated CPU performance."
So... (Score:5, Informative)
Acronym (Score:3, Informative)
Re: What makes a GPU so great (Score:5, Informative)
GPUs have dedicated circuitry to do math, math, and more math - and to do it *fast*. In a single cycle, they can perform mathematical computations that would, by comparison, take a general-purpose CPU an eternity.
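The point above is that a GPU applies the same arithmetic operation to many data elements at once instead of stepping through them one by one. A rough CPU-side analogy is vectorized array math; here's a minimal sketch (the function names are just for illustration):

```python
import numpy as np

# Scalar loop: one multiply-add per iteration, the way a general-purpose
# CPU core works through data element by element.
def saxpy_scalar(a, x, y):
    out = [0.0] * len(x)
    for i in range(len(x)):
        out[i] = a * x[i] + y[i]
    return out

# Data-parallel form: one expression applied to every element at once,
# which is roughly how a GPU issues the same instruction across
# hundreds of threads in parallel.
def saxpy_parallel(a, x, y):
    return a * x + y

x = np.arange(4, dtype=np.float32)
y = np.ones(4, dtype=np.float32)
print(saxpy_parallel(2.0, x, y))  # [1. 3. 5. 7.]
```

Both forms compute the same result; the difference is that the second expresses the work as one operation over the whole array, which is the shape of computation GPU hardware is built for.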
Practical results (Score:2, Informative)
Another overview article NYTIMES and literature? (Score:5, Informative)
Seconded. Anyone out there working on books that have examples? Please reply with any good 'how to' sources.
Source: http://www.nytimes.com/2006/11/09/technology/09chip.html?ref=technology [nytimes.com]
SAN JOSE, Calif., Nov. 8 -- A $90 million supercomputer made for nuclear weapons simulation cannot yet be rivaled by a single PC chip for a serious video gamer. But the gap is closing quickly.
Indeed, a new breed of consumer-oriented graphics chips has roughly the brute computing power of the world's fastest computing system of just seven years ago. And the latest advance came Wednesday when the Nvidia Corporation introduced its next-generation processor, capable of more than three trillion mathematical operations per second.
Nvidia and its rival, ATI Technologies, which was recently acquired by the microprocessor maker Advanced Micro Devices, are engaged in a technology race that is rapidly changing the face of computing as the chips -- known as graphical processing units, or G.P.U.'s -- take on more general capabilities.
In recent years, the lead has switched quickly with each new family of chips, and for the moment the new chip, the GeForce 8800, appears to give the performance advantage to Nvidia.
On Wednesday, the company said its processors would be priced at $599 and $449, sold as add-ins for use by video game enthusiasts and for computer users with advanced graphics applications.
Yet both companies have said that the line between such chips and conventional microprocessors is beginning to blur. For example, the new Nvidia chip will handle physics computations that are performed by Sony's Cell microprocessor in the company's forthcoming PlayStation 3 console.
The new Nvidia chip will have 128 processors intended for specific functions, including displaying high-resolution video.
And the next generation of the 8800, scheduled to arrive in about a year, will have "double precision" mathematical capabilities that will make it a more direct competitor to today's supercomputers for many applications.
"I am eagerly looking forward to our next generation," said Andy Keane, general manager of Nvidia's professional products division, a business the company set up recently to aim at commercial high-performance computing applications like geosciences and gene splicing.
The chips made by Nvidia and ATI are shaking up the computing industry and causing a level of excitement among computer designers, who in recent years have complained that the industry seemed to have run out of new ideas for gaining computing speed. ATI and Advanced Micro Devices have said they are working on a chip, likely to emerge in 2008, that would combine the functions of conventional microprocessors and graphics processors.
That convergence was emphasized earlier this year when an annual competition sponsored by Microsoft's research labs to determine the fastest sorting algorithm was won this year by a team that used a G.P.U. instead of a traditional microprocessor. The result is significant, according to Microsoft researchers, because sorting is a basic element of many modern computing operations.
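The sorting result above works because some sorting algorithms decompose into large batches of independent compare-and-swap steps that a GPU can run in parallel. The classic example is a bitonic sorting network; the article doesn't say which algorithm the winning team used, so this is just an illustrative sketch of the GPU-friendly style:

```python
def bitonic_sort(data):
    """Bitonic merge sort over a power-of-two-length list.

    The sort is a fixed network of compare-and-swap steps. Within each
    stage every comparison is independent of the others, so on a GPU
    each one can run in its own thread -- which is why sorting networks
    like this map well onto graphics hardware.
    """
    a = list(data)
    n = len(a)
    k = 2
    while k <= n:                      # size of the bitonic sequences
        j = k // 2
        while j > 0:                   # comparison distance this stage
            for i in range(n):         # each i is an independent thread
                partner = i ^ j
                if partner > i:
                    ascending = (i & k) == 0
                    if (a[i] > a[partner]) == ascending:
                        a[i], a[partner] = a[partner], a[i]
            j //= 2
        k *= 2
    return a

print(bitonic_sort([5, 1, 4, 2, 8, 7, 3, 6]))  # [1, 2, 3, 4, 5, 6, 7, 8]
```

The inner `for i` loop is exactly the part a GPU would execute as one thread per element, stage by stage.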
Moreover, while innovation in the world of conventional microprocessors has become more muted and largely confined to adding multiple processors, or "cores," to single chips, G.P.U. technology is continuing to advance rapidly.
"The G.P.U. has this incredible memory bandwidth, and it will continue to double for the foreseeable future," said Jim Gray, manager of Microsoft's eScience group.
Although the comparison has many caveats, both computer scientists and game designers said that the Nvidia GeForce 8800 had in some ways moved into the realm of the supercomputing power of the last decade.
The fastest of these...
Currently viable solutions (Score:2, Informative)
It's nice to see the name Acceleware mentioned in the NVIDIA press release, although they are missing from the 'comprehensive' report on Wired. It should be noted that they have been delivering high-performance computing solutions for a couple of years or so already. I guess now it's out of the bag that NVIDIA's little graphics cards had something to do with that.
Anyone know of any other companies that have already been commercializing GPGPU technology?
GPGPU companies (Score:2, Informative)
NVIDIA's CUDA Technology Explained (Score:2, Informative)
We go into NVIDIA's "CUDA" (Compute Unified Device Architecture) here [hothardware.com] and it's pretty interesting actually.
Re:Another overview article NYTIMES and literature (Score:4, Informative)
Ok, ok, here's the link [uni-dortmund.de]...
Re: accuracy problems (Score:4, Informative)
You're confusing your technologies. The RAM used on video cards these days is effectively the same RAM you use with your CPU. The memory cannot lose data or very bad things will happen to the rendering pipeline.
What you're thinking of is the intentional inaccuracy of the floating point calculations done by the GPU. In order to obtain the highest absolute graphical performance, most 3D drivers optimized for gaming attempt to drop the precision of the calculations to a degree that's unacceptable for engineering uses, but perfectly acceptable for gaming. NVidia and ATI make a lot of money by selling "professional" cards like the Quadro and the FireGL to engineering companies that need the greater precision. A lot of the difference is in the drivers (especially for the low-end models), but the cards do often have hardware technologies better suited to CAD-type work.
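The precision gap the parent describes is easy to see numerically: single-precision floats keep roughly 7 significant decimal digits, doubles roughly 16. A small sketch of where single precision silently drops information:

```python
import numpy as np

# Single precision (the format gaming-oriented GPU pipelines of this
# era computed in) cannot represent 1.0 + 1e-8: the small term falls
# below float32's machine epsilon (~1.2e-7) and is rounded away.
x32 = np.float32(1.0) + np.float32(1e-8)

# Double precision (epsilon ~2.2e-16) retains it.
x64 = np.float64(1.0) + np.float64(1e-8)

print(x32 == np.float32(1.0))  # True  -- the increment was lost
print(x64 == np.float64(1.0))  # False -- double precision keeps it
```

Harmless when shading a pixel, but in an iterative engineering simulation that kind of rounding accumulates, which is why the professional cards and drivers that preserve full precision command a premium.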