GPUs To Power Supercomputing's Next Revolution

evanwired writes "Revolution is a word that's often thrown around with little thought in high-tech circles, but this one looks real. Wired News has a comprehensive report on computer scientists' efforts to adapt graphics processors for high-performance computing. The goal for these NVIDIA and ATI chips is to tackle non-graphics number crunching for complex scientific calculations. Alongside this week's release of its wicked-fast GeForce 8800, NVIDIA announced the first C-compiler environment for the GPU; Wired reports that ATI is planning to release at least some of its proprietary code to the public domain to spur non-graphics development of its technology. Meanwhile, lab results are showing some amazing comparisons between CPU and GPU performance. Stanford's distributed computing project Folding@Home launched a GPU beta last month that is now publishing data putting donated GPU performance at 20-40 times the efficiency of donated CPU performance."
  • So... (Score:5, Informative)

    by Odin_Tiger ( 585113 ) on Thursday November 09, 2006 @05:57PM (#16789087) Journal
    Let me see if I have this down right: With the progress of multi-core CPUs, especially looking at the AMD/ATI deal, PCs are moving towards a single 'super chip' that will do everything while phasing out the use of a truly separate graphics system. Meanwhile, supercomputers are moving towards using GPUs as the main workhorse. Doesn't that strike anybody else as a little odd?
  • Acronym (Score:3, Informative)

    by benhocking ( 724439 ) <benjaminhocking@nOsPAm.yahoo.com> on Thursday November 09, 2006 @05:59PM (#16789113) Homepage Journal
    For those who are curious, CUDA stands for "compute unified device architecture".
  • by NerveGas ( 168686 ) on Thursday November 09, 2006 @06:02PM (#16789145)
    "I thought .. What is it that a CPU does that a GPU doesn't?"

    GPUs have dedicated circuitry to do math, math, and more math - and to do it *fast*. In a single cycle, they can perform mathematical computations that take general-purpose CPUs an eternity, in comparison.
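    The point above can be sketched in plain Python. A GPU's speed comes from applying the same small operation (a "kernel") to every element of a large array independently, so the hardware can run thousands of those applications at once. This is only a conceptual model of the data-parallel style, not real GPU code; the kernel and function names are made up for illustration.

    ```python
    # Conceptual sketch of GPU-style data parallelism (not real GPU code).
    # The same "kernel" runs once per element; because each invocation is
    # independent, a GPU could execute all of them concurrently, while a
    # CPU-style loop walks the elements one at a time.

    def saxpy_kernel(a, x, y):
        """One 'thread's' work: a*x + y for a single element pair."""
        return a * x + y

    def gpu_style_saxpy(a, xs, ys):
        # On a GPU, each (x, y) pair would map to its own hardware thread.
        return [saxpy_kernel(a, x, y) for x, y in zip(xs, ys)]

    xs = [1.0, 2.0, 3.0, 4.0]
    ys = [10.0, 20.0, 30.0, 40.0]
    print(gpu_style_saxpy(2.0, xs, ys))  # [12.0, 24.0, 36.0, 48.0]
    ```

    The per-element independence is the whole trick: there is no loop-carried state, so nothing forces the iterations to run in order.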

  • Practical results (Score:2, Informative)

    by Anonymous Coward on Thursday November 09, 2006 @06:04PM (#16789165)
    Nice to see the mention of Acceleware in the press release. While a lot of the article is about lab results, Acceleware has been delivering actual GPU powered products for a couple of years now.
  • by JavaManJim ( 946878 ) on Thursday November 09, 2006 @06:06PM (#16789179)
    Excellent news! Below is the link, registration required, for the New York Times. I will try to paste the article.

    Second. Anyone out there working on books that have examples? Please reply with any good 'how to' sources.

    Source: http://www.nytimes.com/2006/11/09/technology/09chip.html?ref=technology [nytimes.com]

    SAN JOSE, Calif., Nov. 8 -- A $90 million supercomputer made for nuclear weapons simulation cannot yet be rivaled by a single PC chip for a serious video gamer. But the gap is closing quickly.

    Indeed, a new breed of consumer-oriented graphics chips has roughly the brute computing power of the world's fastest computing system of just seven years ago. And the latest advance came Wednesday when the Nvidia Corporation introduced its next-generation processor, capable of more than three trillion mathematical operations per second.

    Nvidia and its rival, ATI Technologies, which was recently acquired by the microprocessor maker Advanced Micro Devices, are engaged in a technology race that is rapidly changing the face of computing as the chips -- known as graphical processing units, or G.P.U.'s -- take on more general capabilities.

    In recent years, the lead has switched quickly with each new family of chips, and for the moment the new chip, the GeForce 8800, appears to give the performance advantage to Nvidia.

    On Wednesday, the company said its processors would be priced at $599 and $449, sold as add-ins for use by video game enthusiasts and for computer users with advanced graphics applications.

    Yet both companies have said that the line between such chips and conventional microprocessors is beginning to blur. For example, the new Nvidia chip will handle physics computations that are performed by Sony's Cell microprocessor in the company's forthcoming PlayStation 3 console.

    The new Nvidia chip will have 128 processors intended for specific functions, including displaying high-resolution video.

    And the next generation of the 8800, scheduled to arrive in about a year, will have "double precision" mathematical capabilities that will make it a more direct competitor to today's supercomputers for many applications.

    "I am eagerly looking forward to our next generation," said Andy Keane, general manager of Nvidia's professional products division, a business the company set up recently to aim at commercial high-performance computing applications like geosciences and gene splicing.

    The chips made by Nvidia and ATI are shaking up the computing industry and causing a level of excitement among computer designers, who in recent years have complained that the industry seemed to have run out of new ideas for gaining computing speed. ATI and Advanced Micro Devices have said they are working on a chip, likely to emerge in 2008, that would combine the functions of conventional microprocessors and graphics processors.

    That convergence was emphasized earlier this year when an annual competition sponsored by Microsoft's research labs to determine the fastest sorting algorithm was won this year by a team that used a G.P.U. instead of a traditional microprocessor. The result is significant, according to Microsoft researchers, because sorting is a basic element of many modern computing operations.

    Moreover, while innovation in the world of conventional microprocessors has become more muted and largely confined to adding multiple processors, or "cores," to single chips, G.P.U. technology is continuing to advance rapidly.

    "The G.P.U. has this incredible memory bandwidth, and it will continue to double for the foreseeable future," said Jim Gray, manager of Microsoft's eScience group.

    Although the comparison has many caveats, both computer scientists and game designers said that the Nvidia GeForce 8800 had in some ways moved into the realm of the computing power of the supercomputing world of the last decade.

    The fastest of these
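    The sorting result mentioned in the pasted article is worth a sketch. The article doesn't say which algorithm the winning team used, but a classic GPU-friendly sort is Batcher's bitonic sorting network: within each stage, every compare-and-swap touches a disjoint pair of elements, so a GPU can run them all in parallel. Below is a sequential pure-Python version of that network, for illustration only (it requires a power-of-two input length).

    ```python
    # Bitonic sort: a sorting network whose compare-and-swap steps within
    # each stage are independent, which is why it maps well onto GPUs.
    # This Python version runs the steps sequentially and requires the
    # input length to be a power of two.

    def bitonic_merge(seq, ascending):
        """Merge a bitonic sequence into sorted order."""
        n = len(seq)
        if n <= 1:
            return seq
        half = n // 2
        # These compare-and-swaps are independent of one another: on a
        # GPU, each pair (i, i+half) could be handled by its own thread.
        for i in range(half):
            if (seq[i] > seq[i + half]) == ascending:
                seq[i], seq[i + half] = seq[i + half], seq[i]
        return (bitonic_merge(seq[:half], ascending) +
                bitonic_merge(seq[half:], ascending))

    def bitonic_sort(seq, ascending=True):
        """Sort a list whose length is a power of two."""
        n = len(seq)
        if n <= 1:
            return list(seq)
        half = n // 2
        # Sort the halves in opposite directions to form a bitonic
        # sequence, then merge.
        first = bitonic_sort(seq[:half], True)
        second = bitonic_sort(seq[half:], False)
        return bitonic_merge(first + second, ascending)

    print(bitonic_sort([3, 7, 4, 8, 6, 2, 1, 5]))  # [1, 2, 3, 4, 5, 6, 7, 8]
    ```

    The fixed, data-independent comparison pattern is what makes the network attractive on graphics hardware, even though it does more comparisons than quicksort on a CPU.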
  • by Conradaroma ( 1025261 ) on Thursday November 09, 2006 @06:17PM (#16789241)

    It's nice to see the name Acceleware mentioned in the NVIDIA press release, although they are missing from the 'comprehensive' report on Wired. It should be noted that they have been delivering high-performance computing solutions for a couple of years or so already. I guess now it's out of the bag that NVIDIA's little graphics cards had something to do with that.

    Anyone know of any other companies that have already been commercializing GPGPU technology?

  • GPGPU companies (Score:2, Informative)

    by BigMFC ( 592465 ) on Thursday November 09, 2006 @06:20PM (#16789259)
    Check out Peakstream (http://www.peakstreaminc.com/). They're a Silicon Valley startup doing a lot of tool development for multicore chips, GPUs and Cell.
  • by MojoKid ( 1002251 ) on Thursday November 09, 2006 @06:20PM (#16789267)

    We go into NVIDIA's "CUDA" (Compute Unified Device Architecture) here [hothardware.com] and it's pretty interesting actually.

  • by pingbak ( 33924 ) on Thursday November 09, 2006 @08:22PM (#16789977)
    Google "Dominik Goeddeke" and read his GPGPU tutorial. It's excellent, as far as tutorials go, and helped me bootstrap.

    Ok, ok, here's the link [uni-dortmund.de]...
  • by AKAImBatman ( 238306 ) * <akaimbatman@gmaYEATSil.com minus poet> on Friday November 10, 2006 @12:58AM (#16791278) Homepage Journal
    Parent Post [slashdot.org]

    Great if you want fast answers, but the RAM used in GPUs isn't as robust accuracy-wise as normal RAM.

    You're confusing your technologies. The RAM used on video cards these days is effectively the same RAM you use with your CPU. The memory cannot lose data or very bad things will happen to the rendering pipeline.

    What you're thinking of is the intentional inaccuracy of the floating point calculations done by the GPU. In order to obtain the highest absolute graphical performance, most 3D drivers optimized for gaming attempt to drop the precision of the calculations to a degree that's unacceptable for engineering uses, but perfectly acceptable for gaming. NVidia and ATI make a lot of money by selling "professional" cards like the Quadro and the FireGL to engineering companies that need the greater precision. A lot of the difference is in the drivers (especially for the low-end models), but the cards do often have hardware technologies better suited to CAD-type work.
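    The precision trade-off described above can be demonstrated on any CPU. The sketch below emulates 32-bit floating point by round-tripping each partial sum through Python's `struct` module (`'f'` is the IEEE 754 single-precision format) and compares it against Python's native 64-bit floats. This is an illustration of why reduced precision is fine for pixels but risky for engineering math, not actual GPU driver behavior.

    ```python
    import struct

    def to_float32(x):
        """Round a 64-bit Python float to 32-bit (single) precision."""
        return struct.unpack('f', struct.pack('f', x))[0]

    # Sum 0.1 a hundred thousand times, once at full float64 precision and
    # once rounding every partial sum to float32, the way a reduced-precision
    # pipeline would. The exact answer is 10000.0.
    total64 = 0.0
    total32 = 0.0
    for _ in range(100_000):
        total64 += 0.1
        total32 = to_float32(total32 + 0.1)

    print(f"float64 sum: {total64!r}")  # a hair off 10000.0
    print(f"float32 sum: {total32!r}")  # drifts visibly further
    ```

    A shader shading a pixel never notices an error in the fourth decimal place; a structural simulation accumulating millions of such operations certainly does, which is the market the Quadro/FireGL cards serve.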
