ATI's 1GB Video Card 273
Signify writes "ATI recently released pics and info about its upcoming FireGL V7350 graphics card. The workstation graphics accelerator features 1GB of GDDR3 memory. From the article: 'The high clock rates of these new graphics cards, combined with full 128-bit precision and extremely high levels of parallel processing, result in floating point processing power that exceeds a 3GHz Pentium processor by a staggering seven times, claims the company.'"
Re:use as a cpu? (Score:3, Insightful)
Re:use as a cpu? (Score:5, Insightful)
a) Tough market to crack. AMD's been around for years, and they're still trying to gain significant ground on Intel. (As in mindshare.) May as well spend the effort battling each other to remain at the top of their field, rather than risk losing focus and faltering.
b) These chips are specialised for graphics processing. Just because you can make a kick-ass sports car, doesn't mean you can make a decent minivan.
So? (Score:5, Insightful)
Re:So? (Score:5, Insightful)
Re:use as a cpu? (Score:1, Insightful)
http://personal.inet.fi/surf/porschenet/text/fut_plan/varrera/index.html
enjoy...
now if only they knew how to make drivers (Score:2, Insightful)
Seriously, I've owned six different ATI cards from different product lines this year, and only two of them installed properly with the drivers that came on the CD. That just ain't right.
Re:use as a cpu? (Score:2, Insightful)
Easier? In some respects. Trivially easy? Not quite.
In contrast general purpose code has (on average) one branch every 7 cycles.
One branch every 5-7 instructions, not cycles.
Other than that, good comment, I didn't know about the (lack of) branch support in GPUs.
Re:use as a cpu? (Score:2, Insightful)
Re:use as a cpu? (Score:5, Insightful)
The thing is that you could make an x86 that runs on GDDR3 and the like, but it would be rather expensive, and nobody (well, no mass market anyway) is going to pay to produce it if only a few thousand people can actually afford it. In time the costs will come down, but until then we common folk just have to stick with whatever AMD/Intel/whoever are producing.
But anyway, the main point I made, maybe not in a very technically accurate way, was that it's easier to build something that performs well in one area, than to build something that does everything amazingly well (without costing the earth to buy it).
This is great news to software developers (Score:1, Insightful)
Re:Too bad its still an ATI... (Score:3, Insightful)
This is anthropocentric (Score:5, Insightful)
GPUs are not faster than CPUs because the engineers can "concentrate on one area" instead of "spreading their work around". It's not that the floating point performance of the x86 would be faster if only Intel had the time to pay attention to it. That's ridiculous.
GPU tasks are highly parallel. CPU tasks are not. nVidia can toss 24 pipelines onto a chip and realize a huge performance gain. Intel can't, because on typical serial workloads much of the time those pipelines would sit empty, waiting on results from earlier instructions.
This fundamental difference is what separates the two domains, not it being "easier to build something that performs well in one area, than to build something that does everything amazingly well (without costing the earth to buy it)."
You need to keep your science and your homey folk wisdom separate.