
AMD RV790 Architecture To Change GPGPU Landscape? 102

Posted by ScuttleMonkey
from the continuing-the-leapfrog-process dept.
Vigile writes "To many observers, the success of the GPGPU landscape has really been pushed by NVIDIA and its line of Tesla and Quadro GPUs. While ATI was the first to offer support for consumer applications like Folding@Home, NVIDIA has since taken command of the market with its CUDA architecture and programs like Badaboom and others for the HPC world. PC Perspective has speculation that points to ATI addressing the shortcomings of its lineup with a revised GPU known as RV790 that would both dramatically increase gaming performance as well as more than triple the compute power on double precision floating point operations — one of the keys to HPC acceptance."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • OpenCL? (Score:4, Interesting)

    by Yvan256 (722131) on Monday March 09, 2009 @02:37PM (#27125505) Homepage Journal

    I hope all these new things will be compatible with OpenCL.

    • Re: (Score:2, Informative)

      by Anonymous Coward

      Since OpenCL is just an abstraction layer, like OpenGL and DirectX, most modern hardware already supports it; it just needs driver support.

      • by Chees0rz (1194661)
        Yes, and if the hardware does NOT support the standard, then the drivers will be doing all the work (workarounds), and we're right back where we started: SLOW computations.

        These hardware devices MUST be designed, developed, and tested for the OpenCL standard... and believe me, they are.
  • nVidia rules (Score:3, Insightful)

    by Anonymous Coward on Monday March 09, 2009 @02:40PM (#27125555)
    ... the "rename the same old shit four times to try and con people"-market, that's for sure.
    • Re:nVidia rules (Score:5, Informative)

      by i.of.the.storm (907783) on Monday March 09, 2009 @03:17PM (#27126065) Homepage
      It's sad that this is actually almost true... GeForce 8800GT->9800GT->GT2x0 (I think 250 or something) are all the same GPU...
      • Re: (Score:3, Informative)

        by StarHeart (27290) *

        No, they are all of the same base architecture, but aren't the same card. The 8800GT and the 9800GT are pretty close; probably the biggest difference is that some 9800GT cards are 55nm chips instead of 65nm. On the other hand, there is a lot of difference between the 8800GT and the GTX260. The GTX260 has 32 dedicated double precision processors that the 8800GT does not. My rough understanding is that those double precision processors are roughly equal to 1.5x a Q6600 (quad core), or 6 cores. The GTX260 also comes with

      • by root_42 (103434)

        Actually it's the GTS250, which uses the G92b chip. The changes compared to the other G92 based chips are relatively small though. Hence the similar chip-name.

  • by residieu (577863) on Monday March 09, 2009 @02:42PM (#27125585)
    Waiting for GPGPGPUs
  • by jandrese (485) <kensama@vt.edu> on Monday March 09, 2009 @02:46PM (#27125647) Homepage Journal
    So this is what some anonymous guy on the internet thinks might happen? Granted, he has a lot of material in there, but in the end it's all just guesswork. Apparently he's a big fan of cheaper lower end video cards as well, and is hoping that ATI releases one.
    • AMD's double-precision floating-point performance is already great. What they lack is the rest of it. The programming model is pretty bad compared to CUDA (nobody is using Brook+), and they seem to be basically waiting for OpenCL to fix that. The bottlenecks in most attempts to use AMD chips for GPGPU code are also not really the floating-point units themselves, but the rest of the architecture; it's hard to keep the ALUs fed with your data without a magic compiler, a better programming model, a better architecture

    • And, of course, like most people who say "My favored company will come out with the bestest thing EVAR!", he's ignoring the fact that nVidia won't sit still. I don't know what's coming next from nVidia. What I do know is that they currently have a powerful card for gaming and GPGPU (the GTX285) that supports double precision as well as single precision, though DP is much slower. So it's fairly safe to say their next generation card will also support DP, and will probably be faster than their current card.

      To me,

  • ...because since I learned that BOINC now supports CUDA (but still has no love for ATI's GPGPU offerings), I'm about to ditch my ATI cards for a few Nvidia ones.

    • Re: (Score:3, Informative)

      by Tweenk (1274968)

      CUDA = an Nvidia-specific way to do GPGPU...

      Personally I'm waiting for OpenCL, which would be to GPGPU what OpenGL was for 3D graphics when it was released - essentially a vendor and platform neutral general processing interface to the GPU.

      • by mdm-adph (1030332)

        Hey -- whatever it's called, I'm just about to make a purchase decision based upon the fact that my hardware isn't supported. Somebody needs to get coding. :P

      • by heson (915298)
        Sadly, experience dictates that whatever card you buy now will be insignificant in performance by the time OpenCL is mature enough to use.
  • LOLNO (Score:5, Insightful)

    by MostAwesomeDude (980382) on Monday March 09, 2009 @03:06PM (#27125941) Homepage

    As far as I know, the RV790 will be in the R600/R700 family and will work almost perfectly with existing R600/R700 code. While I have no guarantees on this, current talks with AMD employees haven't given off any indication that this chipset will be radically different from its cousins.

  • by Belial6 (794905) on Monday March 09, 2009 @04:12PM (#27126861)
    What I want from the GPU is features like what the CPUs have, so that the GPU can have multiple VMs running on it. The only reason I don't run inside a VM as my primary computing environment is that graphics acceleration pretty much sucks in it. When AMD bought ATI I expected virtualized video to be one of their early announcements.

    Imagine if your VMed OS could believe it had 100% control of the video card, but your video card would display on its own 'surface', and still use full hardware acceleration for the process. As far as I can tell, video is the only serious stumbling block left in virtualizing the x86 architecture.
    • by kriebz (258828)

      That's an interesting idea and maybe it will happen one day, but hardware virtualization hasn't trickled down that far yet. It's still at the mid-range server level, except for a few power users, developers, and engineers. Cards now have dynamic virtual memory mapping, which might just make this possible, but certainly not simple.

      In the Land of UNIX Where Everything Works you can send GLX over the network for 3D graphics wherever the card lives, whether it's a VM host or a cluster headnode. That's probably more useful than emulating the 25 year old VGA BIOS and umpteen stupid extensions.

      • In the Land of UNIX Where Everything Works you can send GLX over the network for 3D graphics wherever the card lives, whether it's a VM host or a cluster headnode. That's probably more useful than emulating the 25 year old VGA BIOS and umpteen stupid extensions.

        That's a neat idea, I had forgotten OpenGL worked like that. However, I don't really see a use case. You're going to virtualize an X11 app and have it connect to the X11 server on the host? Surely this is something you only want to do for one app at a time, in which case why the VM?

    • Re: (Score:3, Interesting)

      by fast turtle (1118037)

      VirtualBox is supposed to have started solving this problem. It's beta and still experimental, but if it works well, then it's exactly what I've been looking for, as it means I can finally run XP in a VBox setup under 64-bit Gentoo Linux.

  • Some guy who does not know very much posts a long speculation article, all of it guesswork based on his limited understanding. And then this is posted as news.

    The RV790 is just a higher-clocked RV770. There are no additional shader units, and no shader units converted to 64-bit; it's just a ~10% clock speed increase, giving about 10% more performance.

    The RV800 will come at the end of the year; that will have much more power.

    • Just like the parent says: the actual article is a work of fiction and speculation with no hard facts on future products.... merely "what if's".
    • Some more information on how RV7x0 calculates 64-bit floating point:

      All shader processors in RV7x0 are natively 32-bit. There are 5 ALUs in each shader processor. When RV7x0 calculates a 64-bit MUL operation, it does so by ganging 4 of those 32-bit ALUs together. When RV7x0 calculates a 64-bit ADD operation, it combines 2 32-bit ALUs.

      That's why RV7x0's 64-bit MUL throughput is 1/5 of its 32-bit MUL throughput. There is no "group of 64-bit ALUs" like the article thinks.
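      The ALU arithmetic described above can be sketched in a few lines. This is a hedged illustration, not vendor documentation: the unit counts (160 shader processors of 5 ALUs each, i.e. RV770's 800 stream processors) are an assumption based on publicly reported specs.

```python
# Illustrative sketch of the comment's claim: 64-bit ops are built by
# ganging 32-bit ALUs, so DP throughput is a fixed fraction of SP throughput.
SHADER_PROCESSORS = 160   # assumed RV770 layout
ALUS_PER_SP = 5           # 5 ALUs per shader processor

muls_32bit_per_cycle = SHADER_PROCESSORS * ALUS_PER_SP        # 800: one per ALU
muls_64bit_per_cycle = SHADER_PROCESSORS * (ALUS_PER_SP // 4) # 4 ALUs ganged per 64-bit MUL
adds_64bit_per_cycle = SHADER_PROCESSORS * (ALUS_PER_SP // 2) # 2 ALUs ganged per 64-bit ADD

print(muls_64bit_per_cycle / muls_32bit_per_cycle)  # 0.2 -> the 1/5 MUL ratio above
print(adds_64bit_per_cycle / muls_32bit_per_cycle)  # 0.4 -> 2/5 for ADDs
```

      The ratios fall out of the ganging alone, so they hold regardless of clock speed or the exact number of shader processors.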

  • His predictions about double precision appear to be based on a misunderstanding about how the 4800 series works. Here's what he says about it: "That 680 GFLOPs would be assuming AMD converts 2/5 of the stream units to double precision. Now, if AMD were to convert 3/5 of those units to double precision, a single card could do slightly over 1 TFLOP." He seems to believe that 1/5 of the stream units support double precision, and they could simply convert some additional ones to support it as well. But tha
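    Whatever one thinks of the premise, the scaling in the quoted figures is at least internally consistent: if DP throughput were proportional to the fraction of stream units converted, moving from 2/5 to 3/5 is a 1.5x step. A quick check of the article's own arithmetic (the 680 GFLOPS figure is the article's assumption, not a measurement):

```python
# Scaling check on the article's quoted numbers; no claim about real hardware.
dp_gflops_two_fifths = 680.0        # article's assumed DP rate with 2/5 of units converted
scale = (3 / 5) / (2 / 5)           # converting 3/5 instead of 2/5 of the units -> 1.5x
dp_gflops_three_fifths = dp_gflops_two_fifths * scale

print(dp_gflops_three_fifths)       # 1020.0 -> the article's "slightly over 1 TFLOP"
```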
  • I would rather have quality Open Source drivers. Yeah, you threw the specs "over the wall", but it would be nice if you were a bit more active. Like giving us an actual Open Source driver. Or patches. Or something. We shouldn't be doing your work for you.
