AMD RV790 Architecture To Change GPGPU Landscape?
Vigile writes "To many observers, the success of GPGPU has really been driven by NVIDIA and its line of Tesla and Quadro GPUs. While ATI was the first to offer support for consumer applications like Folding@Home, NVIDIA has since taken command of the market with its CUDA architecture and programs like Badaboom, along with others for the HPC world. PC Perspective has speculation pointing to ATI addressing the shortcomings of its lineup with a revised GPU known as RV790 that would both dramatically increase gaming performance and more than triple the compute power on double-precision floating-point operations — one of the keys to HPC acceptance."
OpenCL? (Score:4, Interesting)
I hope all these new things will be compatible with OpenCL.
Re: (Score:2, Informative)
Since OpenCL is just an abstraction layer, like OpenGL and DirectX, most modern hardware can already handle it; it just needs driver support.
Re: (Score:1)
These hardware devices MUST be designed, developed, and tested for the OpenCL standard... And believe me, they are.
Re: (Score:2)
Surely not!
I'd heard it was recently ratified as an ISO standard...
Re: (Score:2)
Really. What's the ISO #?
Put up or shut up.
Re: (Score:2)
It was a joke in reference to a company recently pushing ISO to ratify a non-standard.
But you're right, I should probably have used a tag.
Re:OpenCL? (Score:4, Informative)
I say HUH?
OpenCL is supported not only by Apple but also by AMD and nVidia. The standard is being managed by a not-for-profit.
Compared to CUDA it is actually very open.
It is currently vaporware, but everything starts out that way for the most part.
OpenCL is no more closed BS than CUDA or DX is.
I just hope that it actually becomes a working standard.
Re: (Score:2)
According to wiki:
Re: (Score:2)
What's with the 'tude? Do you work for Nvidia or something?
The truth is, if Nvidia wanted CUDA to be a standard, they should have opened it up to Khronos or whoever to make it generalized. They didn't, and it will remain in the niche that it is in.
I'm no Apple fan, but at least they recognized a huge hole and did something to fill it.
Would a Windows-only GPGPU standard really become the de facto standard? Maybe for games that might use it, but who does HPC on Windows?
Re: (Score:2)
CAD (Score:1)
Until my CAD programs use DirectX, I won't call it 'standard'. Sure, most games on Windows use DirectX, but that doesn't mean OpenGL is pointless.
Re: (Score:1)
Re: (Score:2)
If my memory serves me correctly, CUDA only works with NVIDIA GPUs. That basically makes it USELESS on ATI or Intel or any other GPU. Again, you call something that has just been released "vaporware" while something that was released in Feb 2007 is the de facto standard. So CUDA was released earlier. That does not mea
Re: (Score:2)
Re: (Score:2)
NVIDIA has about 60% of the market right now. While it has the majority of the market right now, that doesn't mean it will continue to hold it. So NVIDIA OWNING the market is an exaggeration and an oversimplification.
I went to Dell and HP and they both offer ATI cards in
Re: (Score:2)
Re: (Score:2)
I'm sorry if that reality hurts your feelings. I am actually sorry that it turned out that way, as I was an early OpenGL adopter, and for a while it looked like they might bring true competition to the 3D market, and I am a firm believer in competition being ultimately good for the market.
So you were actually around programming OpenGL in the very early '90s, or perhaps you consider 'early adopter' the time around when GLQuake was released ('97)? Whichever way you look at it, Direct3D only gained relevance much later, when OpenGL was already well established, which makes statements like "it looked like they might bring true competition to the 3D market" sound completely off.
And I'd expect anyone who's been doing 3D graphics for as long as you claim you have to realize that, irrespectively of a m
Re: (Score:2)
Re: (Score:2)
Your proclamations that CUDA won make about as much sense as proclaiming that Glide won the 3D API wars in 1998.
CUDA is about 2 years old, GPGPU is a nascent technology, and everybody was doing their own thing because there was no standard to develop against. Direct3D and OpenGL provided something that everybody could implement, unlike Glide. OpenCL provides the same for the GPGPU landscape. The authors of CUDA (i.e., nVidia) are aboard the OpenCL boat too.
Re: (Score:2)
Re: (Score:1)
If OpenCL picks up, then CUDA will be nothing but a layer in the software stack to save the Nvidia driver team some lines of code:
OpenCL -> CUDA -> Hardware
Nvidia has already accepted this by announcing OpenCL support- I d
Re: (Score:2)
OpenCL fanboy religion.
OF COURSE, open standards are more desirable than closed ones, EXCEPT when the open standard doesn't outperform the market standard OR, in OpenCL's case, DOESN'T EXIST.
There are capitalists who want to make their buck NOW, not wait two years just to find out that they STILL have to wait another X years for something to roll out. If you want to do GPU offload processing, mathematical processing, or the state-of-the-art game NOW, you're stuck with CUDA. If you want to wait for th
Re: (Score:2)
You call OpenCL vaporware when it was first proposed last June, had an initial draft by November, and was released in December. By my count it's less than 9 months old. I don't know about you, but in my world it might be considered vaporware if that timeline were years instead of months.
nVidia rules (Score:3, Insightful)
Re:nVidia rules (Score:5, Informative)
Re: (Score:3, Informative)
No, they are all of the same base architecture, but they aren't the same card. The 8800GT and the 9800GT are pretty close; probably the biggest difference is that some 9800GT cards are 55nm chips instead of 65nm. On the other hand, there is a lot of difference between the 8800GT and the GTX260. The GTX260 has 32 dedicated double-precision processors that the 8800GT does not. My rough understanding is that those double-precision processors are roughly equal to 1.5x a Q6600 (quad core), or 6 cores. The GTX260 also comes with
Re: (Score:2, Informative)
Re: (Score:3, Informative)
Re: (Score:2)
Actually it's the GTS250, which uses the G92b chip. The changes compared to the other G92-based chips are relatively small, though; hence the similar chip name.
Re: (Score:2)
Waiting for: (Score:5, Funny)
Re: (Score:1, Insightful)
Your wish has been granted:
http://www.gpgpgpu.com/
Re: (Score:2, Funny)
Re: (Score:3, Funny)
And because you love Rick Rolls, you clicked it anyway? ;)
Re:Waiting for: (Score:4, Funny)
Re: (Score:1)
http://www.youtube.com/watch?v=oHg5SJYRHA0 [youtube.com]
Re: (Score:2)
Me too (Score:2)
Well, waiting for cheapie GPGPUs, anyway.
Re: (Score:2)
I want ASGPUs. I want a fat rendering pipeline that is optimized for, and can only render, dancing babies.
Re: (Score:2)
Re: (Score:2)
Well, I haven't heard that Gnu Privacy Guard has announced support for Programmable Graphics Processing Units, but I'm sure it's only a matter of time.
GPGPU= General Purpose GPU (Score:5, Informative)
General-purpose GPUs = massively parallel FLOPS possible. (Think matrix math, real-time sims, lab testing, SETI, etc.)
Still separate from a CPU, which has additional capabilities.
For the older folks, think of this as a math co-processor :) [with its own fan]
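To make the "math co-processor" analogy a bit more concrete, here's a rough sketch, in plain C rather than actual GPU code, of the kind of embarrassingly parallel arithmetic a GPGPU is built for; the function name and array sizes are purely illustrative.

#include <stdio.h>
#include <stddef.h>

/* Every iteration below is independent of every other one, so a GPGPU can
 * hand each element to one of its hundreds of ALUs instead of looping on a
 * single core. The CPU still runs the rest of the program. */
static void scale_and_add(size_t n, float a, const float *x, float *y)
{
    for (size_t i = 0; i < n; i++)      /* on a GPU, each i becomes a thread */
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    float x[4] = {1, 2, 3, 4}, y[4] = {0, 0, 0, 0};
    scale_and_add(4, 2.0f, x, y);
    printf("%f %f %f %f\n", y[0], y[1], y[2], y[3]);
    return 0;
}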
Re: (Score:3, Informative)
My wife told me to add "fluid dynamics!"
(note: I often read slashdot comments aloud to her, and sometimes she throws back replies that I dutifully pass along)
Re:GPGPU= General Purpose GPU (Score:5, Funny)
I'm sorry to break this to you, but... *whispers* she's not real...
Slashdot readers having wives was already crazy... but them being interested in it too... Yeah, right...
Re:Apparently I'm behind on my acronyms... (Score:5, Insightful)
Re: (Score:3, Insightful)
It is bad journalism on the part of the Slashdot editors to force readers to Google for acronyms. Common, long-standing acronyms, like CPU, are one thing, but GPGPU should absolutely be defined in the summary. I find it hard to believe some people pay money for this site, and that "editors" get paid money for their "editing."
Re: (Score:2)
The small amount I pay to subscribe to Slashdot is just about the best bargain I get on the intertubes. Yes, I often have to Google stuff that I find in stories and comments, but now that Firefox lets me just highlight a word or phrase, right-click, and Google it, I don't mind a bit. I find that I learn a lot in the process.
[NOW will you take that black mark off of my soul, Commander Taco?]
Re: (Score:2, Offtopic)
Question: What do you get from the subscription, other than being able to read stories half an hour earlier? (No idea why I would need that.)
Re: (Score:2)
Back in the days before AdBlock, another benefit to subscribing was the removal of ads.
Re: (Score:2)
Re: (Score:2, Funny)
What in the screaming blue hell is a GPGPU?
I think you meant "screaming green hell"
That's a lot of pages supporting guesswork. (Score:5, Insightful)
also, seems to be guessing at the wrong thing (Score:3, Informative)
AMD's double-precision floating-point performance is already great. What they lack is the rest of it. The programming model is pretty bad compared to CUDA (nobody is using Brook+), and they seem to be basically waiting for OpenCL to fix that. The bottlenecks in most attempts to use AMD chips for GPGPU code are also not really the floating-point units themselves, but the rest of the architecture; it's hard to keep the ALUs fed with your data without a magic compiler, a better programming model, a better architec
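To illustrate what "keeping the ALUs fed" means in practice, here is a hedged sketch in plain C. It assumes a VLIW-style design like AMD's RV7x0, where one shader processor can issue several independent 32-bit operations per clock (see the 5-ALU description further down the thread); the functions themselves are made up, and only the dependency structure matters.

/* Plenty of independent work: the five multiplies below have no dependencies
 * on each other, so a compiler can pack them into one wide instruction and
 * keep all of a shader processor's ALUs busy. */
float dot5(const float v[5], const float w[5])
{
    return v[0]*w[0] + v[1]*w[1] + v[2]*w[2] + v[3]*w[3] + v[4]*w[4];
}

/* A serial dependency chain: each step needs the previous result, so most of
 * the ALUs in the group sit idle every clock, no matter how clever the
 * compiler is. */
float chain(float x)
{
    float r = x;
    for (int i = 0; i < 5; i++)
        r = r * r + 1.0f;
    return r;
}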
Yep (Score:2)
And, of course, like most people who do the "My favored company will come out with the bestest thing EVAR!" routine, he's ignoring the fact that nVidia won't sit still. I don't know what's coming next from nVidia. What I do know is they currently have a powerful card for gaming and GPGPU (GTX285) that does support double precision as well as single precision, though DP is much slower. So it's fairly safe to say their next-generation card will also support DP, and will probably be faster than their current card.
To me,
Well, I hope they hurry up... (Score:2)
...because since I learned that BOINC now supports CUDA (but still has no love for GPGPU), I'm about to ditch my ATI cards for a few Nvidia ones.
Re: (Score:3, Informative)
CUDA = an Nvidia-specific way to do GPGPU...
Personally I'm waiting for OpenCL, which would be to GPGPU what OpenGL was to 3D graphics when it was released: essentially a vendor- and platform-neutral general processing interface to the GPU.
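For a feel of what that vendor-neutral interface looks like, here is a minimal sketch of an OpenCL-1.0-style SAXPY host program in C. It is a sketch under assumptions, not a definitive implementation: error checking is omitted, the header is <CL/cl.h> on most platforms (<OpenCL/opencl.h> on Mac OS X), and the kernel name and sizes are just illustrative.

#include <stdio.h>
#include <CL/cl.h>

/* The kernel is plain text; whichever vendor's driver is installed compiles
 * it for the GPU (or CPU) that is present, which is the whole point versus
 * a vendor-specific toolkit. */
static const char *src =
    "__kernel void saxpy(float a, __global const float *x, __global float *y)\n"
    "{\n"
    "    size_t i = get_global_id(0);\n"
    "    y[i] = a * x[i] + y[i];\n"
    "}\n";

int main(void)
{
    enum { N = 1024 };
    float a = 2.0f, x[N], y[N];
    for (int i = 0; i < N; i++) { x[i] = (float)i; y[i] = 1.0f; }

    cl_int err;
    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, &err);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "saxpy", &err);

    /* Copy the input arrays into device buffers. */
    cl_mem bx = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof x, x, &err);
    cl_mem by = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                               sizeof y, y, &err);
    clSetKernelArg(k, 0, sizeof a, &a);
    clSetKernelArg(k, 1, sizeof bx, &bx);
    clSetKernelArg(k, 2, sizeof by, &by);

    /* One work-item per array element; the runtime spreads them over the ALUs. */
    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, by, CL_TRUE, 0, sizeof y, y, 0, NULL, NULL);

    printf("y[10] = %f\n", y[10]);   /* expect 2 * 10 + 1 = 21 */
    return 0;
}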
Re: (Score:2)
Hey -- whatever it's called, I'm just about to make a purchase decision based upon the fact that my hardware isn't supported. Somebody needs to get coding. :P
Re: (Score:1)
LOLNO (Score:5, Insightful)
As far as I know, the RV790 will be in the R600/R700 family and will work almost perfectly with existing R600/R700 code. While I have no guarantees on this, current talks with AMD employees haven't given any indication that this chipset will be radically different from its cousins.
Re: (Score:1, Informative)
Re: (Score:1)
Yes.
Re: (Score:2)
Both. Nvidia for Linux, ATI for Windows.
Re: (Score:2)
What I want from the GPU (Score:3, Interesting)
Imagine if your VMed OS could believe that it had 100% control of the video card, but your video card would display on its own 'surface', and still use full hardware acceleration for the process. As far as I can tell, video is the only serious stumbling block left in virtualizing the x86 architecture.
Re: (Score:1)
That's an interesting idea and maybe it will happen one day, but hardware virtualization hasn't trickled down that far yet. It's still at the mid-range server level, except for a few power users, developers, and engineers. Cards now have dynamic virtual memory mapping, which might just make this possible, but certainly not simple.
In the Land of UNIX Where Everything Works you can send GLX over the network for 3D graphics wherever the card lives, whether it's a VM host or a cluster headnode. That's probably more u
Re: (Score:1)
In the Land of UNIX Where Everything Works you can send GLX over the network for 3D graphics wherever the card lives, whether it's a VM host or a cluster headnode. That's probably more useful than emulating the 25-year-old VGA BIOS and umpteen stupid extensions.
That's a neat idea; I had forgotten OpenGL worked like that. However, I don't really see a use case. You're going to virtualize an X11 app and have it connect to the X11 server on the host? Surely this is something you only want to do for one app at a time, in which case why the VM?
Re: (Score:3, Interesting)
VirtualBox is supposed to have started solving this problem. It's beta and still experimental, but if it works well, then it's exactly what I've been looking for, as it means I can finally run XP in a VirtualBox setup under 64-bit Gentoo Linux.
The article has nothing to do with reality (Score:1)
Some guy who does not know very much posts a long speculation article, all speculation done with his limited understanding. And then this is posted as news.
The RV790 is just a higher-clocked RV770. There are no more shader units. There are no shader units converted to 64-bit. It's just a ~10% clock speed increase, giving about 10% more performance.
The RV800 will come at the end of the year; that is the one that will have much more power.
Mod Parent Up (Score:2)
How 64-bit operations on RV7x0 work (Score:1)
Some more information on how RV7x0 calculates 64-bit floating point:
All shader processors in RV7x0 are natively 32-bit. There are 5 ALUs in each shader processor. When RV7x0 calculates a 64-bit MUL operation, it does so by ganging 4 of those 32-bit ALUs together. When RV7x0 calculates a 64-bit ADD operation, it combines 2 32-bit ALUs.
That's why RV7x0's 64-bit MUL throughput is 1/5 of its 32-bit MUL throughput. There is no "group of 64-bit ALUs" like the article thinks.
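A quick back-of-envelope check of that ratio in C, taking the ALU counts above as given (they are the parent's description, not an official AMD spec):

#include <stdio.h>

int main(void)
{
    const int alus_per_sp   = 5;  /* 32-bit ALUs in one RV7x0 shader processor        */
    const int alus_per_dmul = 4;  /* 32-bit ALUs ganged together for one 64-bit MUL   */

    int sp_muls_per_clock = alus_per_sp;                  /* five 32-bit MULs per clock */
    int dp_muls_per_clock = alus_per_sp / alus_per_dmul;  /* only one 4-ALU group fits  */

    printf("64-bit MUL rate is %d/%d of the 32-bit MUL rate\n",
           dp_muls_per_clock, sp_muls_per_clock);         /* prints 1/5 */
    return 0;
}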
Misunderstanding about double precision (Score:1)
Drivers (Score:2)
I would rather have quality Open Source drivers. Yeah, you threw the specs "over the wall", but it would be nice if you were a bit more active. Like giving us an actual Open Source driver. Or patches. Or something. We shouldn't be doing your work for you.