ATI's 1GB Video Card 273
Signify writes "ATI recently released pics and info about its upcoming FireGL V7350 graphics card. The workstation graphics accelerator features 1GB of GDDR3 memory. From the article: 'The high clock rates of these new graphics cards, combined with full 128-bit precision and extremely high levels of parallel processing, result in floating point processing power that exceeds a 3GHz Pentium processor by a staggering seven times, claims the company.'"
use as a cpu? (Score:5, Interesting)
They obviously could make some very powerful chips.
Re:use as a cpu? (Score:3, Insightful)
Re:use as a cpu? (Score:2, Insightful)
Re:use as a cpu? (Score:5, Insightful)
The thing is that you couuuuld make an x86 that runs using GDDR3 etc, but it would be rather expensive, and nobody (well, no majority market anyway) is going to pay to produce that, if only a few thousand people can actually afford it. In time the costs will come down, but until then we common folk just have to stick with whatever AMD/Intel/Whoever are producing.
But anyway, the main point I made, maybe not in a very technically accurate way, was that it's easier to build something that performs well in one area, than to build something that does everything amazingly well (without costing the earth to buy it).
This is anthrocentric (Score:5, Insightful)
GPUs are not faster than CPUs because the engineers can "concentrate on one area" instead of "spreading their work around". It's not that the floating point performance of the x86 would be faster if only Intel had the time to pay attention to it. That's ridiculous.
GPU tasks are highly parallel. CPU tasks are not. nVidia can toss 24 pipelines onto a chip and realize a huge performance gain. Intel can't, because much of the time those pipelines will be empty waiting for the results of the other lines.
This fundamental difference is what separates the two domains, not it being "easier to build something that performs well in one area, than to build something that does everything amazingly well (without costing the earth to buy it)."
You need to keep your science and your homey folk wisdom separate.
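To make the parallelism point concrete, here's a toy sketch in plain C with pthreads (shade(), the image size, and the thread count are all made-up placeholders): each output pixel depends only on its own coordinates, so the worker threads never have to wait on each other's results.

#include <pthread.h>
#include <stdint.h>

#define W        640
#define H        480
#define NTHREADS 4

static uint32_t framebuffer[W * H];

/* Stand-in for real per-pixel shading work. */
static uint32_t shade(int x, int y)
{
    return (uint32_t)(x ^ y) * 2654435761u;
}

static void *worker(void *arg)
{
    int id = (int)(intptr_t)arg;
    /* Each thread owns its own horizontal band: no locks, no shared results. */
    for (int y = id * (H / NTHREADS); y < (id + 1) * (H / NTHREADS); y++)
        for (int x = 0; x < W; x++)
            framebuffer[y * W + x] = shade(x, y);
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, (void *)(intptr_t)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    return 0;
}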
Re:use as a cpu? (Score:5, Insightful)
a) Tough market to crack. AMD's been around for years, and they're still trying to gain significant ground on Intel. (As in mindshare.) May as well spend the effort battling each other to remain at the top of their field, rather than risk losing focus and faltering.
b) These chips are specialised for graphics processing. Just because you can make a kick-ass sports car, doesn't mean you can make a decent minivan.
Re:use as a cpu? (Score:2)
Re:use as a cpu? (Score:2)
Re:use as a cpu? (Score:3, Informative)
Re:use as a cpu? (Score:3, Interesting)
This is the one part of the Pentium 4 chip that is very much like the parallel execution units of a video card. If you want to maximize performance of the SIMD unit, you want a simple program with packed SIMD instructi
Re:use as a cpu? (Score:5, Informative)
[1] True of ray tracing. Almost true of current graphics techniques.
Re:use as a cpu? (Score:2, Interesting)
However, how difficult would it be to write an operating system that offloaded floating point operations to the GPU, and everything else to the CPU?
Seems like that would be making the most efficient use of available resources... (then again, isn't that the idea of the Cell processor?)
Not likely OS level (Score:2, Informative)
For more general purpose FPOPs you will have a hell of a time getting enough gain in floating point performance to overcome the overhead chatter between the CPU and GPU that would be required to keep the states in sync.
I'd go so far as to say if the process can't be almost completely moved onto the GPU (the CPU will still need to feed it data and suck up results), don't bother.
But I'm talking out of my ass, it could work. I'm just
Re:use as a cpu? (Score:5, Informative)
Funny you should mention that. The Intel 386 (and up) architecture has built in support for a floating point coprocessor, so it can offload floating point operations. In the early days, you could buy a 387 math coprocessor to accelerate floating point performance. Then Intel integrated the 387 coprocessor onto the 486 series CPUs, and today we just know it as "the floating point unit" (although it's been much revised, parallelized, and made superscalar).
As for offloading to a GPU, well... that's what we do today. It's called Direct3D, or Mesa, or Glide, or your favorite 3D acceleration library. The problem with this approach is that it requires very specialized code. It's not something that can be done automatically for just any code, as the overhead of loading the GPU, setting up the data, and retrieving the results would far exceed the performance gains. Only in extreme cases does it pay off: the workload has to be extremely parallelizable, with almost no branching and predictable calculations. Basically, it ends up that the algorithm has to be extensively tailored to the GPU. Even IBM has had major issues offloading general purpose operations to their special processing units, and those are much more closely coupled to the CPU.
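A back-of-envelope sketch of that trade-off, with completely invented numbers (the 7x figure is just lifted from the claim in the summary, and the 1ms setup cost is a guess): the fixed cost of talking to the GPU only pays for itself once the batch of work gets large enough.

#include <stdio.h>

int main(void)
{
    double cpu_flops = 6e9;    /* assumed: ~3GHz CPU doing 2 FLOPs/cycle        */
    double gpu_flops = 4.2e10; /* assumed: ~7x the CPU, per the claim above     */
    double overhead  = 1e-3;   /* assumed: 1ms to set up, upload, and read back */

    for (double n = 1e3; n <= 1e9; n *= 10) {
        double t_cpu = n / cpu_flops;
        double t_gpu = overhead + n / gpu_flops;
        printf("%10.0f ops: CPU %.6fs  GPU %.6fs  -> %s wins\n",
               n, t_cpu, t_gpu, t_cpu < t_gpu ? "CPU" : "GPU");
    }
    return 0;
}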
Re:use as a cpu? (Score:2)
Funny you should mention that. The Intel 386 (and up) architecture has built in support for a floating point coprocessor, so it can offload floating point operations.
Actually, the FPU as an adjunct to the Intel x86 architecture goes back to at least the 8087 [wikipedia.org]
Re:use as a cpu? (Score:2)
Yes that is what the cell does. And it is already being done for the things that they are good at.
You have to remember that GPUs' floating point precision tends to be limited. I am not sure that they are even IEEE single precision, much less double. They are great for things like... graphics and video playback. In fact they are used for that a lot. They are not all that useful for science and engineering tasks. One of the things that I hear every now and then that scientific computer users compla
Re:use as a cpu? (Score:2)
you don't render pixels independently... (Score:2)
In fact, unlike ray tracing, where you proceed pixel by pixel, checking all the geometry, in modern techniques you render the parts of the geometry in order into the frame buffer, and due to the magic of z-buffering, the end result frame buffer is complete for
Re:you don't render pixels independently... (Score:2)
Actually, that's pretty much the definition of a parallel process.
The point is the completion of one pixel isn't dependent on the completion of others. The fact that so much of the 'setup' is the same for all pixels just means you can achieve the equivalent of a million independently computed pixels with far less actual effort.
In other words, if they -weren't- so easy to parallelize these optimizations would be impossible to make.
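For anyone curious what that "magic of z-buffering" boils down to, here's a minimal sketch (the sizes, colors, and depths are arbitrary): each pixel keeps the nearest depth seen so far, so geometry can be thrown at the framebuffer in any order and no pixel ever consults another.

#include <float.h>
#include <stdint.h>
#include <stdio.h>

#define W 640
#define H 480

static float    zbuf[W * H];
static uint32_t color[W * H];

static void clear_buffers(void)
{
    for (int i = 0; i < W * H; i++) {
        zbuf[i]  = FLT_MAX;   /* "infinitely far away" */
        color[i] = 0;
    }
}

/* Called once per covered pixel while rasterizing a triangle. */
static void plot(int x, int y, float z, uint32_t c)
{
    int i = y * W + x;
    if (z < zbuf[i]) {        /* closer than whatever is already there? */
        zbuf[i]  = z;
        color[i] = c;         /* overwrite; no other pixel is consulted */
    }
}

int main(void)
{
    clear_buffers();
    plot(100, 100, 5.0f, 0xff0000u);  /* far fragment drawn first...         */
    plot(100, 100, 2.0f, 0x0000ffu);  /* ...near one still wins, order aside */
    printf("pixel (100,100) = %06x\n", (unsigned)color[100 * W + 100]);
    return 0;
}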
Re:use as a cpu? (Score:2, Insightful)
Easier? In some respects. Trivially easy? Not quite.
In contrast general purpose code has (on average) one branch every 7 cycles.
One branch every 5-7 instructions, not cycles.
Other than that, good comment, I didn't know about the (lack of) branch support in GPUs.
Re:next thing you know... (Score:2)
Re:use as a cpu? (Score:3, Interesting)
One could make a blisteringly fast, ultra-RISC processor that completes every R-to-M instruction in one clock cycle {read the instruction from memory on the rising edge ["tick"], let the logic matrix outputs stabilise during the high period and write the result to memory on the falling edge ["tock"]} and every M-to-R or M-to-M in two {an extra tick is needed to read the operand from memory and the intervening tock is wasted; unless you can
Re:use as a cpu? (Score:2)
My wife gave me two thumbs up... (Score:5, Funny)
Re:My wife gave me two thumbs up... (Score:2)
And when she does you're either going to ebay that card or be very lonely at night.
Re:My wife gave me two thumbs up... (Score:2)
Re: (Score:2)
Re:My wife gave me two thumbs up... (Score:2)
Just make a pipe that takes the exhaust of the fans to the outside of the walls. You don't even need to vacuum anymore - the constant wind that now blows through your apartment doesn't let any dust gather.
Hot hot hot! (Score:5, Funny)
so... (Score:5, Funny)
Re:so... (Score:2)
Re:so... (Score:2)
Re:so... (Score:2)
Re:so... (Score:2)
Thankfully (Score:2)
Yes, but thankfully, due to today's powerful use of offloading work to support cards, you'll still be able to hear the fiddles from the sound card until the last possible moment.
Re:so... (Score:5, Funny)
Awesome! (Score:5, Funny)
Re:Awesome! (Score:5, Informative)
Re:Awesome! (Score:2)
I really miss the old days when I knew my friend's EGA card was better than my CGA card and my CGA card was better than a monochrome graphics card. When I got a 386 with a VGA card I made sure to really brag about it to him because he was still stuck in lame 16 color land. Muhahahaha. These days I have no fucking clue if my NVidia GeForce 6600 is slower or faster than an ATI Rade
Re:Awesome! (Score:2)
Re:Awesome! (Score:2)
So? (Score:5, Insightful)
Re:So? (Score:5, Insightful)
Re:So? (Score:2)
Re:So? (Score:2)
256 MB is small (Score:5, Interesting)
follow Nvidia into Physics? (Score:5, Interesting)
it would be nice not having to purchase a top-notch CPU, GPU, and PPU (Physics Processing Unit) in the future, rolling the PPU and GPU together
Re:follow Nvidia into Physics? (Score:2)
what... teh.....fuk (Score:2)
Is it just me or are graphics cards getting ridiculously insane? I know I don't need this thing cus the last game I bought for my comp was 2002's Star Wars: GB. Maybe I'm just a lamer and a l00ser...
Re:what... teh.....fuk (Score:5, Informative)
Re:what... teh.....fuk (Score:3, Informative)
Width of the floats? (Score:3, Interesting)
Signify: full 128-bit precision
TheRaven64: or researchers doing GPGPU things. To people in the second category, it's not a graphics card, it's a very fast vector co-processor (think SSE/AltiVec, only a lot more so)
Traditionally, ATi floating point numbers were only 24 bits wide [i.e. only "three-quarters" of single precision, which is 32 bits].
nVidia, the IBM/Sony Cell, and AltiVec support only 32-bit floats.
MMX supported no floats whatsoever. SSE supported 32-bit floats. SSE2 & SSE3 support 64-bit f
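If you want to see why the width matters, here's a tiny C example (the loop count and increment are chosen only to make the rounding visible): the same accumulation done in 32-bit and 64-bit floats.

#include <stdio.h>

int main(void)
{
    float  s32 = 0.0f;
    double s64 = 0.0;

    for (int i = 0; i < 10000000; i++) {
        s32 += 0.1f;
        s64 += 0.1;
    }
    printf("32-bit sum: %f\n", s32);  /* drifts visibly away from 1000000 */
    printf("64-bit sum: %f\n", s64);  /* lands much closer to 1000000     */
    return 0;
}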
Re:Width of the floats? (Score:2)
Please don't joke about this. (Score:3, Informative)
Yes, they are 128-bit floats. They're needed for doing HDR.
PLEASE don't joke about this.
Do you have any idea how many math/physics/chem/engineering geeks would just kill for 128-bits in hardware?
It would be very, very cruel to get their hopes up like that, only to find out that you were being sarcastic...
Re:Please don't joke about this. (Score:3, Informative)
Re:what... teh.....fuk (Score:3, Interesting)
Re:what... teh.....fuk (Score:2)
Part of it is "why not?" Even if you could get away with 4 or 6 GB, if you are building the type of workstation that would use this chip, bringing the RAM up to 8GB is a drop in the bucket as far as money goes. Even ECC 1GB and 2GB sticks of RAM are cheap compared to what this card will run.
Re:what... teh.....fuk (Score:2)
Re:what... teh.....fuk (Score:2)
Re:what... teh.....fuk (Score:2)
Useful or hype? (Score:2)
Is it worth someone's money to buy such a card?
Too bad its still an ATI... (Score:4, Informative)
Graphics card makers should get with the program and stop releasing FireGLs and Quadros. Just release really kick-ass 3D accelerators for all.
That way we can all have full OpenGL support and not the lame OpenGL game drivers from ATI. Nvidia's gaming card OpenGL drivers are better than ATI's.
Re:Too bad its still an ATI... (Score:2)
Not by enough to matter, and it will probably never see the developers it needs to work right, because nVidia's too much of SGI's bitch to do anything about the situation so that they could open-source their drivers...
Re:Too bad its still an ATI... (Score:3, Insightful)
Re:Too bad its still an ATI... (Score:2)
I still have issues now and then running Blender on my 6 month old ATI card, not to mention my Linux laptop having no hardware acceleration because ATI doesn't seem to release too many Radeon Xpress 200M drivers that actually work.
As with all video card battles, I'm assuming nVidia will come out with one any moment now, right before ATI releases a physics engine, and back and forth and back and forth until they both get tired for a while. (all the while I'll be saving up for some p
ATI/nVidia make gen purpose chips ? (Score:2)
Re:ATI/nVidia make gen purpose chips ? (Score:2)
ATI Sucks for driver support with Linux (Score:2, Informative)
I really don't give a flying *uck if any company, be it ATI or Nvidia, comes out with the latest and greatest video card if it does not have proper driver support! Anyone who's run Linux for a while knows the drill.
Re:ATI Sucks for driver support with Linux (Score:2, Informative)
ATI works great for me with Linux (Score:3, Interesting)
I can certainly say that my laptop, with its ATI Radeon Xpress 200M chip, works wonderfully under Linux. Yes, I'm talking about their binary driver distribution, using the latest version of their drivers. I'm also using the Xorg 6.9 xserver. It's fully 3D accelerated, as shown by the output of the following command:
$ glxinfo | grep OpenGL
OpenGL vendor string: ATI Technolo
Re:ATI works great for me with Linux (Score:2, Informative)
Also frustrating is the lack of support for games played through Cedega; I just installed the Cedega timedemo tonight a
Re:ATI works great for me with Linux (Score:2)
Re:ATI works great for me with Linux (Score:2)
New for 2010! (Score:2)
(Oh, yeah -- and we think there may be a CPU in there somewhere, too.)
Re: (Score:2)
Re:"workstation cards" what r they 4? (Score:3, Interesting)
Re:"workstation cards" what r they 4? (Score:3, Interesting)
Re:"workstation cards" what r they 4? (Score:2)
I just finished a meeting with our lead Mechanical Engineer, who uses some dual PA-RISC thing from HP which, if memory serves, is around 2 or 3 years old. His comment (while showing us various models of the current project) was that if the next project increased as much in complexity as the current project did from the former, he would like to get new workstations for his team.
I should also comment that it appears to me that given the right people Pro/E can become surpri
Can I reallocate that memory as system memory? (Score:2, Interesting)
Ars Technica discussion on why so much GPU memory (Score:2)
http://arstechnica.com/reviews/os/macosx-10.4.ars/13 [arstechnica.com]
http://arstechnica.com/reviews/os/macosx-10.4.ars/14 [arstechnica.com]
http://arstechnica.com/reviews/os/macosx-10.4.ars/15 [arstechnica.com]
now if only they knew how to make drivers (Score:2, Insightful)
Seriously, I've owned 6 different ATI cards of differing lines this year, and only 2 of them installed properly with the drivers that came on the CD. That just ain't right.
Re:now if only they knew how to make drivers (Score:2)
Yes but (Score:2, Funny)
Not for gaming, for graphics workstations!! (Score:3, Informative)
This card is for people who need serious rendering of highly detailed scenes and 3D objects, not serious frame rates for games. It's for applications where image quality, complexity, and accuracy are much more important than frame rate. The GPUs in these high end workstation cards are geared in a totally different manner and actually suck for video games! They are great for CAD/CAM, medical imaging (like from CAT and EBT scanners), chemical modeling, and lots of other hard core scientific and 3D development type stuff.
Finally! (Score:5, Interesting)
It took them long enough; this is definitely the direction to go.
Almost 4 years ago Silicon Graphics gave a final revision [sgi.com] hurrah to their best graphics product: InfiniteReality. A pipe sported 1GB of dedicated texture memory, 10GB of frame buffer memory, 8 channels per pipe, and 192GB/s of internal memory bandwidth.
And an Onyx system could have up to 16 pipes! That's 8.3M pixels per pipe, or 133M pixels from a full system! And all in 48-bit RGBA. And those are just the raw numbers, there were a great many high end features only found on InfiniteReality. Don't ask what it costs ; )
Sorry for the passionate post. It seems that Slashdot is very PC-ish and narrow in its viewpoint (Imagine a Beowulf of... Can it run Doom3
I've had the pleasure of using a small Onyx system. Too bad SGI is dead dead dead. Still, they provide a good target for everyone to shoot for. Some day the above power will be available for a few hundred dollars for the average person. Though I think it will be at least 5 years before the quality and features of InfiniteReality4 are at a consumer level. And never will we have workstations like SGI's again ; (
oooo... (Score:2)
Re:Finally! (Score:2)
Who writes this stuff? (Score:2)
The result should be a finer level of detail throughout the visible spectrum, enhancing details in shadows and making highlights come to life.
Unless I'm the Predator [imdb.com] and have some special monitor that no one else has, this comment about the "visible spectrum" is ridiculous. Of course it's going to improve fidelity throughout the visible spectrum; do you think they'd just focus on the color green, or try to improve that oh-so-important infrared fidelity?
Based on a cutting-edge 90nm process t
Re:Not bad... (Score:3, Interesting)
They're not really for gaming as much as they are for developing stuff.
Re:Not bad... (Score:3, Informative)
-Rick
No (Score:2)
Re:No (Score:2)
Re:Obligatory... (Score:2, Funny)
Re:Obligatory... (Score:2)
Re:Whoa. (Score:4, Informative)
Re:Whoa. (Score:2, Funny)
So...Intel?
*Ducks*
Re:Whoa. (Score:2)
If you showed a modern CPU to a guy from 1996 he'd say exactly the same thing. He'd probably also drool onto your motherboard.
Re:$ 2000 ! yikes.. give me two (Score:2)
ATI loves you in the ass (Score:2, Informative)
http://www.osnews.com/story.php?news_id=13844 [osnews.com]
If you are talking about XGI, ATI just bought them and closed the code.
http://www.linuxgames.com/news/feedback.php?identiferID=8255&action=flatview [linuxgames.com]
Re:Good thing this is a workstation card (Score:3, Informative)
Because 90% of programming is an exercise in caching, and if you can just cache the textures you can let your GPU just get 'em instead of waiting for it to finish saying "g-g-g-give me a t-t-t-tex... t-t-text-t-t-... gimme a damn bitmap!"
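In that spirit, here's a minimal direct-mapped texture cache sketch (the texture size, slot count, and slow_fetch stand-in are all invented for illustration): hits hand back the copy already on hand, misses pay the slow fetch exactly once.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define CACHE_SLOTS 64
#define TEX_BYTES   (256 * 256 * 4)   /* assumed 256x256 RGBA textures */

struct slot {
    uint32_t id;       /* which texture currently lives here */
    int      valid;
    uint8_t  data[TEX_BYTES];
};

static struct slot cache[CACHE_SLOTS];

/* Stand-in for the slow path: pulling a texture across the bus. */
static void slow_fetch(uint32_t id, uint8_t *dst)
{
    memset(dst, (int)(id & 0xff), TEX_BYTES);
}

static const uint8_t *get_texture(uint32_t id)
{
    struct slot *s = &cache[id % CACHE_SLOTS];   /* direct-mapped lookup */
    if (!s->valid || s->id != id) {              /* miss: go get it      */
        slow_fetch(id, s->data);
        s->id    = id;
        s->valid = 1;
    }
    return s->data;                              /* hit: already on hand */
}

int main(void)
{
    const uint8_t *a = get_texture(7);   /* miss: fetched the slow way  */
    const uint8_t *b = get_texture(7);   /* hit: same slot, no re-fetch */
    printf("same cached copy: %s\n", a == b ? "yes" : "no");
    return 0;
}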