
ATI's 1GB Video Card 273

Posted by ScuttleMonkey
from the choke-on-the-price dept.
Signify writes "ATI recently released pics and info about its upcoming FireGL V7350 graphics card. The card, a workstation graphics accelerator, features 1GB of GDDR3 memory. From the article: 'The high clock rates of these new graphics cards, combined with full 128-bit precision and extremely high levels of parallel processing, result in floating point processing power that exceeds a 3GHz Pentium processor by a staggering seven times, claims the company.'"
This discussion has been archived. No new comments can be posted.

ATI's 1GB Video Card

Comments Filter:
  • use as a cpu? (Score:5, Interesting)

    by Toba82 (871257) on Wednesday March 22, 2006 @01:44AM (#14969791) Homepage
    Why doesn't ATi (or nVidia for that matter) make CPUs?

    They obviously could make some very powerful chips.
    • Re:use as a cpu? (Score:3, Insightful)

      by Tyler Eaves (344284)
      Because for most general computing tasks, floating point doesn't matter.
    • Re:use as a cpu? (Score:5, Insightful)

      by Kenshin (43036) <> on Wednesday March 22, 2006 @01:51AM (#14969818) Homepage
      I'm thinking:

      a) Tough market to crack. AMD's been around for years, and they're still trying to gain significant ground on Intel. (As in mindshare.) May as well spend the effort battling each other to remain at the top of their field, rather than risk losing focus and faltering.

      b) These chips are specialised for graphics processing. Just because you can make a kick-ass sports car doesn't mean you can make a decent minivan.
      • What the hell is a Porsche Cayenne, if not a minivan with regular doors?
    • Re:use as a cpu? (Score:5, Informative)

      by TheRaven64 (641858) on Wednesday March 22, 2006 @01:59AM (#14969836) Journal
      Building a GPU is trivially easy relative to building a CPU. Here are a few reasons why:
      • You have an OpenGL driver for the GPU and a JIT for shader language programs. This means you can completely throw out the instruction set between minor revisions if you want to. An x86 CPU must mimic bugs in the 486 to be compatible with software that relied on them.
      • You have an easy problem. Graphics processing is embarrassingly parallel. You can pretty much render every pixel in your scene independently[1]. This means that you can almost double the performance simply by doubling the number of execution units. To see how well this works for general purpose code, see Itanium.
      • The code you are running is fairly deterministic and unbranching. Up until a year or two ago, GPUs didn't even support branch instructions. If you needed a branch, you executed both code paths and threw the result you didn't need away. Now, branches exist, but they are very expensive. This doesn't matter, since they are only used every few thousand cycles. In contrast general purpose code has (on average) one branch every 7 cycles.
      GPUs and CPUs are very different animals. If all you want is floating point performance, then you can get a large FPGA and program it as an enormous array of FPUs. This will give you many times the maximum theoretical floating point throughput of a 3GHz P4, but will be almost completely useless for over 99% of tasks.

      [1] True of ray tracing. Almost true of current graphics techniques.
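      To make the "execute both paths and discard" point concrete, here is a rough Python sketch of shader-style branchless selection (illustrative only; real GPUs do this per-lane in hardware with predication/select operations):

```python
def gpu_style_abs(xs):
    # A GPU without branch instructions evaluates BOTH code paths
    # for every element...
    then_path = [v for v in xs]     # "if" branch: identity
    else_path = [-v for v in xs]    # "else" branch: negate
    # ...then blends the results with a 0/1 mask instead of branching.
    mask = [1 if v >= 0 else 0 for v in xs]
    return [m * t + (1 - m) * e
            for m, t, e in zip(mask, then_path, else_path)]

print(gpu_style_abs([-3, 1, -2]))  # [3, 1, 2]
```

      Both paths are always paid for, which is why branches were (and to a degree still are) expensive on GPUs.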

      • Re:use as a cpu? (Score:2, Interesting)

        by moosesocks (264553)
        I understand that CPUs and GPUs have dramatically different roles.

        However, how difficult would it be to write an operating system that offloaded floating point operations to the GPU, and everything else to the CPU?

        Seems like that would be making the most efficient use of available resources... (then again, isn't that the idea of the Cell processor?)
        • Not likely OS level (Score:2, Informative)

          by HornWumpus (783565)
          But you can write very specific GPU code to solve some parallel FP problems.

          For more general-purpose FP ops you will have a hell of a time getting enough gain in floating point performance to overcome the overhead chatter between the CPU and GPU that would be required to keep the states in sync.

          I'd go so far as to say that if the process can't be nearly completely moved onto the GPU (the CPU will still need to feed it data and suck up results), then don't bother.

          But I'm talking out of my ass, it could work. I'm just

        • Re:use as a cpu? (Score:5, Informative)

          by pchan- (118053) on Wednesday March 22, 2006 @04:22AM (#14970172) Journal
          However, how difficult would it be to write an operating system that offloaded floating point operations to the GPU, and everything else to the CPU.

          Funny you should mention that. The Intel 386 (and up) architecture has built-in support for a floating point coprocessor, so it can offload floating point operations. In the early days, you could buy a 387 math coprocessor to accelerate floating point performance. Then Intel integrated the 387 coprocessor onto the 486 series CPUs, and today we just know it as "the floating point unit" (although it's been much revised, parallelized, and made superscalar).

          As for offloading to a GPU, well... that's what we do today. It's called Direct3D, or Mesa, or Glide, or your favorite 3D acceleration library. The problem with this approach is that it requires very specialized code. It's not something that can be done automatically for just any code, as the overhead of loading the GPU, setting up the data, and retrieving the results would far exceed the performance gains. Only in extreme cases does it pay off: the workload has to be extremely parallelizable, with almost no branching and predictable calculations. Basically, the algorithm has to be extensively tailored to the GPU. Even IBM has had major issues offloading general purpose operations to their special processing units, and those are much more closely coupled to the CPU.
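          A hypothetical back-of-the-envelope cost model shows why the transfer and setup overhead dominates for small workloads. All the numbers below are made up for illustration, not measurements of any real card:

```python
def offload_pays_off(n_floats, ops_per_float=10, cpu_flops=1e9,
                     gpu_flops=1e10, bus_bytes_per_s=1e9,
                     launch_latency=1e-4):
    # GPU route: fixed launch/setup latency, plus copying inputs and
    # results over the bus (4-byte floats, both directions), plus compute.
    transfer = 2 * n_floats * 4 / bus_bytes_per_s
    gpu_time = launch_latency + transfer + n_floats * ops_per_float / gpu_flops
    # CPU route: just compute in place, no copies.
    cpu_time = n_floats * ops_per_float / cpu_flops
    return gpu_time < cpu_time

print(offload_pays_off(1_000))      # False: overhead swamps the gain
print(offload_pays_off(1_000_000))  # True: enough work to amortize it
```

          Even with a GPU assumed to be 10x faster at raw FLOPS, small jobs lose to the round trip, which is the poster's point about needing the whole process to live on the GPU.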
          • Funny you should mention that. The Intel 386 (and up) architecture has built in support for a floating point coprocessor, so it can offload floating point operations.

            Actually, the FPU as an adjunct on the Intel x86 architecture goes back to at least the 8087 []

        • Short answer
          Yes, that is what the Cell does. And it is already being done for the things they are good at.

          You have to remember that GPUs' floating point precision tends to be limited. I am not sure that they are even IEEE single precision, much less double. They are great for things like... graphics and video playback. In fact they are used for that a lot. They are not all that useful for science and engineering tasks. One of the things that I hear every now and then is that scientific computer users compla
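          The precision concern is easy to demonstrate. This Python sketch emulates 32-bit floats (the best the GPUs of the era offered) against the 64-bit doubles scientific code expects:

```python
import struct

def f32(x):
    # Round-trip through 4-byte storage to emulate single precision.
    return struct.unpack('f', struct.pack('f', x))[0]

single = double = 0.0
for _ in range(10000):
    single = f32(single + f32(0.1))  # rounded to a 24-bit mantissa each step
    double += 0.1                    # full 53-bit mantissa

# Single precision drifts much farther from the exact answer, 1000.0.
print(abs(single - 1000.0) > abs(double - 1000.0))  # True
```

          With ATI's traditional 24-bit floats the drift would be worse still, which is why "not even IEEE single" mattered to scientific users.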
      • You're absolutely right on, but I've got a minor picky point about the first part. While the instruction set must remain stable pretty much downrange forever in a given architecture, this is only really true for the front-end instruction set, which isn't what the CPU actually executes on any current IA-32[e] design. Only the decode stage of things is bound by the instruction set, and the actual bulk of the work is done in the internal architecture which can, and does, change between different designs.
      • For example, if you need to render a triangle, transforming the coordinates and finding the u,v (for texture mapping), then calculating the normals and interpolating them all has a lot of setup math. You don't want to do this setup math for each pixel.

        In fact, unlike ray tracing, where you proceed pixel by pixel, checking all the geometry, in modern techniques you render the parts of the geometry in order into the frame buffer, and due to the magic of z-buffering, the end result frame buffer is complete for
        • the end result frame buffer is complete for all pixels at the same time.

          Actually, that's pretty much the definition of a parallel process. ;)

          The point is the completion of one pixel isn't dependent on the completion of others. The fact that so much of the 'setup' is the same for all pixels just means you can achieve the equivalent of a million independently computed pixels with far less actual effort.

          In other words, if they -weren't- so easy to parallelize these optimizations would be impossible to make.
      • Re:use as a cpu? (Score:2, Insightful)

        by pherthyl (445706)
        Building a GPU is trivially easy relative to building a CPU

        Easier? In some respects. Trivially easy? Not quite.

        In contrast general purpose code has (on average) one branch every 7 cycles.

        One branch every 5-7 instructions, not cycles.

        Other than that, good comment, I didn't know about the (lack of) branch support in GPUs.

    • Re:use as a cpu? (Score:3, Interesting)

      by ajs318 (655362)
      There doesn't seem to be a market for a new CPU architecture.

      One could make a blisteringly fast, ultra-RISC processor that completes every R-to-M instruction in one clock cycle {read the instruction from memory on the rising edge ["tick"], let the logic matrix outputs stabilise during the high period and write the result to memory on the falling edge ["tock"]} and every M-to-R or M-to-M in two {an extra tick is needed to read the operand from memory and the intervening tock is wasted; unless you can
    • Are there any open source projects out there that utilize the specific features of the GPU? I imagine it would be great for things like video transcoding. Maybe it could even be used to compile programs. At the times when I'm only browsing the web or typing up a word document it would be nice to have the GPU working on something useful, rather than using my CPU cycles.
  • by Anonymous Coward on Wednesday March 22, 2006 @01:51AM (#14969815)
    ...when I told her that I would buy an ATI card that would allow us to decrease the gas bill for our furnace next winter. Guys, you just have to give your better half a good argument and this graphics card is installed in your computer in no time. Just don't mention that you need to buy a better air conditioner for the summer... she'll discover that one. ;)
    • Just don't mention that you need to buy a better air conditioner for the summer... she'll discover that one. ;)

      And when she does you're either going to ebay that card or be very lonely at night.
    • At that time, argue that you are going to use the thermal output to generate power for the cooling systems. Bonus, massive uptimes!
    • It all started with the GeForce 6800 GPU a couple of years back. My patented process provides a "double-whammy" argument for the wife - A lower heating bill + a lower electricity bill!

      You can check out one of the diagrams I provided in my patent application here [].
    • Just don't mention that you need to buy a better air conditioner for the summer... she'll discover that one. ;)

      Just make a pipe that takes the exhaust of the fans to the outside of the walls. You don't even need to vacuum anymore - the constant wind that now blows through your apartment doesn't let any dust gather.

  • by Anthony Boyd (242971) on Wednesday March 22, 2006 @01:51AM (#14969817) Homepage
    It's called the FireGL because it puts out heat at levels equivalent to a large fire. -T
  • Awesome! (Score:5, Funny)

    by Rank_Tyro (721935) <ranktyro11@LAPLA ... m minus math_god> on Wednesday March 22, 2006 @01:53AM (#14969821) Journal
    Now I can upgrade to Windows "Vista."
    • Re:Awesome! (Score:5, Informative)

      by Jozer99 (693146) on Wednesday March 22, 2006 @02:30AM (#14969930)
      Appreciate the joke, but for folks out there who think he is serious, Microsoft has said that the Intel GMA 900 and ATI Radeon X200 are the minimum graphics cards for using the "new" DirectX GUI. Vista will work on computers with lesser graphics hardware, but in a compatibility mode similar to Windows XP's GUI.
      • Microsoft has said that the Intel GMA 900 and ATI Radeon X200 are the minimum graphics cards for using the "new" DirectX GUI.

        I really miss the old days when I knew my friend's EGA card was better than my CGA card and my CGA card was better than a monochrome graphics card. When I got a 386 with a VGA card I made sure to really brag about it to him because he was still stuck in lame 16 color land. Muhahahaha. These days I have no fucking clue if my NVidia GeForce 6600 is slower or faster than an ATI Rade

  • So? (Score:5, Insightful)

    by Tebriel (192168) on Wednesday March 22, 2006 @01:55AM (#14969825)
    Other than high-end graphics work, what the hell will this mean? Are you seriously saying that we will be seeing games needing that much video memory anytime soon? Hell, they have a hard enough time getting people to buy cards with 256 MB of RAM.
    • Re:So? (Score:5, Insightful)

      by Bitter and Cynical (868116) on Wednesday March 22, 2006 @02:18AM (#14969897)
      Other than high-end graphics work, what the hell will this mean?
      Nothing. These cards are not meant for gaming; in fact, if you did try to use one for gaming you'd be very upset. The FireGL line is a workstation card meant for things like CAD or render farms that are very memory intensive and require a high level of precision. It's not meant for delivering high frame rates, and no gamer would stick this card in his machine.
      • by atrus (73476)
        The GPU is actually identical except for a few features masked off on the Radeon cards which are available on the FireGL. The framerate would be nearly identical. It would be a monumental waste of money.
      • by Alioth (221270)
        I had one of the early Fire GL Pro cards (not primarily for gaming). But I also played the odd game, mostly MSFS. I discovered a bug in their drivers that MSFS revealed, but they refused to do anything about it 'because it wasn't a gaming card'. So you'll be doubly disappointed if you find something that doesn't work right in a game because they won't accept the bug.
    • 256 MB is small (Score:5, Interesting)

      by emarkp (67813) <> on Wednesday March 22, 2006 @03:34AM (#14970074) Journal
      Try rendering medical image data as a 3D texture (well, three textures actually, one for each primary image). With 300 images, 256KB per image, x3 textures, that comes out to 225MB just for the textures. I deal with datasets like these routinely, and more video memory is a welcome development.
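      The arithmetic, spelled out (taking 256KB to mean 0.25MB per slice):

```python
# 300 image slices, 256 KB each, one copy in each of three textures.
slices, mb_per_slice, textures = 300, 256 / 1024, 3
total_mb = slices * mb_per_slice * textures
print(total_mb)  # 225.0
```

      So a single routine dataset already eats most of a 256MB card before geometry or the framebuffer are counted.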
  • by arjovenzia (794122) on Wednesday March 22, 2006 @01:55AM (#14969826)
    With all that beef behind them, I sure hope they will follow Nvidia (I actually have no doubt that they will) in offloading physics to the GPU. []

    it would be nice not having to purchase a top-notch CPU, GPU, and PPU (Physics Processing Unit) in the future, rolling the PPU and GPU together

  • This graphics card has MORE RAM than my entire computer (and faster RAM, too), a faster processor, and probably a bigger heat sink.

    Is it just me or are graphics cards getting ridiculously insane? I know I don't need this thing cus the last game I bought for my comp is 2002's Star Wars: GB. Maybe I'm just a lamer and l00ser...
    • by TheRaven64 (641858) on Wednesday March 22, 2006 @02:05AM (#14969861) Journal
      This is a workstation card, not a games card. The people buying this are likely to be either CAD/CAM people with models that are over 512MB (the workstation it plugs into will probably have a minimum of 8GB of RAM), or researchers doing GPGPU things. To people in the second category, it's not a graphics card, it's a very fast vector co-processor (think SSE/AltiVec, only a lot more so).
      • by temojen (678985)
        Stream processor, not vector processor. The programming models are different.

      • Signify: full 128-bit precision

        TheRaven64: or researches doing GPUPU things. To people in the second category, it's not a graphics card it's a very fast vector co-processor (think SSE/AltiVec, only a lot more so)

        Traditionally, ATi floating point numbers were only 24 bits wide [i.e. only "three-quarters" of single precision, which is 32 bits].

        nVidia, the IBM/Sony Cell, and AltiVec support only 32-bit floats.

        MMX supported no floats whatsoever. SSE supported 32-bit floats. SSE2 & SSE3 support 64-bit f

        • Most likely 4D single precision vectors, or 2D double precision vectors if they want to market it to researchers who have been cooking their own stream processors from FPGAs.
      • by dbIII (701233)
        the workstation it plugs into will probably have a minimum of 8GB of RAM
        Will it really need that much system memory? The first SGI workstation I saw had double the amount of video memory to system memory.
        • Oh, yeah. 8 GB is common.
          Part of it is: why not? Even if you could get away with 4 or 6 GB, if you are building the type of workstation that would use this chip, bringing the RAM up to 8GB is a drop in the bucket as far as money goes. Even ECC 1GB and 2GB sticks of RAM are cheap compared to what this card will run.
      • The argument that a workstation card would suck at playing games couldn't be farther from the truth. Have you guys actually tried playing games on Quadros and FireGLs?
    • This isn't a gaming chip, this is part of their pro line. These are for CAD and rendering purposes.
    • Well, if your computer is slower than this graphics card, either admit you don't need a fast computer or fork over $500 to eMachines to get a modern one. Graphics cards are no clock-speed demons; this thing is clocked at less than 1GHz, the speed of a CPU from 2001.
  • What I wonder is: can current software and hardware make use of those massive specs, and is the memory bandwidth high enough for the GPU to benefit from a gig of video RAM? Or is it all just a gimmick?

    Is it worth someone's money to buy such a card?

  • by Jackie_Chan_Fan (730745) on Wednesday March 22, 2006 @02:01AM (#14969844)
    ATI's OpenGL drivers are flaky on their non-FireGL line of cards. Some suspect that's by design.

    Graphics card makers should get with the program and stop releasing FireGLs and Quadros. Just release really kick-ass 3D accelerators for all.

    That way we can all have full OpenGL support and not the lame OpenGL game drivers by ATI. Nvidia's gaming card OpenGL drivers are better than ATI's.

    • That way we can all have full OpenGL support and not the lame OpenGL game drivers by ATI. Nvidia's gaming card OpenGL drivers are better than ATI's.

      Not by enough to matter, and will probably never see the developers it needs to work right because nVidia's too much of SGI's bitch to do anything about the situation so they can open-source their drivers...

    • Wouldn't you rather have a card/drivers that have been designed for higher framerates, but with lower rendering precision? You don't need 128-bit floating point precision to play Quake. You're not going to care if a vertex on your health pack is 0.000005 whatevers off to the left of where it should be, and you're not even going to see that onscreen unless you go in realllly close. I'm maybe oversimplifying things, but I tried a few years ago running Counter-Strike Source with a Quadro graphics card, then some R
    • I'd like to second that!

      I still have issues now and then running Blender on my 6-month-old ATI card, not to mention my Linux laptop having no hardware acceleration because ATI doesn't seem to release too many Radeon Xpress 200M drivers that actually work.

      As with all video card battles, I'm assuming nVidia will come out with one any moment now, right before ATI releases a physics engine, and back and forth and back and forth until they both get tired for a while. (all the while I'll be saving up for some p
  • If these chips are so powerful, and they do seem to be somewhat general purpose (at least by evidence of people making things like pi calculators and other small examples utilizing the graphics hardware), why isn't Intel/AMD using these same techniques with their main chips?
    • They are; it is just that things occur on different scales. Graphics processing is hugely parallel; most other code unfortunately isn't. There are fundamental differences that must be dealt with, and a general purpose CPU is much more difficult to design than a dedicated GPU.
  • In a nutshell, see the subject.
    I really don't give a flying *uck if any company, be it ATI or Nvidia, comes out with the latest and greatest video card if it does not have proper driver support! Anyone who's run Linux for a while knows the drill.
    • That's why I buy nVidia... if the performance is the same, may as well have driver support.
    • Flogging generic statements like "ATI sucks for Linux" is not very accurate. A better way of putting it is "ATI sucks for some cards under Linux".

      I can certainly say that my laptop, with its ATI Radeon Xpress 200M chip, works wonderfully under Linux. Yes, I'm talking about their binary driver distribution. Using the latest version of their drivers. I'm also using the Xorg 6.9 xserver. It's fully 3D accelerated, as shown in the following command:

      $ glxinfo | grep OpenGL
      OpenGL vendor string: ATI Technolo
      • Try turning on the whiz-bang hardware-accelerated effects in KDE. You get a nice garbage screen if you turn on "Use translucency/shadows" in Window Behavior under Desktop in the Control Center. Apparently it works great on the FOSS drivers for the Radeon 8500 and below, but I get a screen full of garbage on my 9800. Probably not a big deal for most people, but it's very annoying to me.

        Also frustrating is the lack of support for games played through Cedega, I just installed the Cedega timedemo tonight a
      • As someone who's been getting increasingly into OpenGL programming, I can tell you glxinfo isn't everything; just because it says it supports OpenGL doesn't mean it won't randomly segfault when you throw a supposedly supported feature at it :(
      • I have a similar chip in my laptop. Xgl causes the system to completely hang itself. That is not a good driver.
  • A fire-breathing, liquid-helium-cooled 256-core graphics engine, complete with 2KW independent power supply, temperature throttling, and VR interfaces.

    (Oh, yeah -- and we think there may be a CPU in there somewhere, too.)
  • I use Pro/E at my job right now but have only ever used it on this 4-year-old Dell "workstation" with some kind of 4 y/o Fire card in it. But though this equipment is old, I have never for a second felt like anything was slow at all while using Pro/E. So what are these insane cards for? I'm sure they are for something; I just would genuinely like to know what it is. I would have thought they were for CAD, but now I see other people running Pro/E just fine on laptops with integrated Intel graphics.
    • I work a lot with the visualisation end of the market and recently have been working with NASA on the CEV project(s). Some models that we deal with are in the gigabyte range just for the geometry of a single subassembly. This card would make viewing some of these things far easier, as you can preprocess and schlepp almost all the geometry to the video card as a VBO and never have to pass it over the bus again. Makes for tremendous performance gains.
    • As a fellow Pro/ENGINEER user, this is not my experience. What version are you using and how big are your models? The latest version is a hog (as always). I can't imagine using it on an old Dell with a FireGL and doing anything very complicated. I have to admit I'm not a fan of ATI cards; their OpenGL support seems to be very flaky. But I like the larger memory on these new cards and the price is good. Price-wise this card would seem to compare favorably to a top model WildCat Realizm or a top model nV
    • Funny you should word it like that...

      I just finished a meeting with our lead Mechanical Engineer, who uses some dual PA-RISC thing from HP which, if memory serves, is around 2 or 3 years old. His comment (while showing us various models of the current project) was that if the next project increased as much in complexity as the current project did from the former, he would like to get new workstations for his team.
      I should also comment that it appears to me that given the right people Pro/E can become surpri
  • Can I reallocate that memory as system memory?
  • Last year, Ars Technica talked about how newer OSes are leveraging fast GPUs for advanced graphics. The main problem is the bottleneck between system memory and GPU/VRAM. One solution is to move the bottleneck to the other side of the backing store. []
  • sounds great and all, but have they gotten around to paying their own programmers to make drivers that actually work, and install off the CD it comes with, instead of outsourcing it to a few guys in their basement?

    Seriously, I've owned 6 different ATI cards of differing lines this year, and only 2 of them installed properly with the drivers that came on the CD. That just aint right.
    • Are you crazy? You actually use the drivers supplied on CDs? I thought everyone knew that those are usually outdated piles of junk, be it nVidia, ATI, Via, SIS, Intel, HP, Creative, Ricoh... anybody. Always get the latest drivers off the website and toss the CDs that come with your hardware, immediately if not sooner.
  • Yes but (Score:2, Funny)

    by Anonymous Coward
    Does it get you banned in World of Warcraft?
  • by Anonymous Coward on Wednesday March 22, 2006 @03:59AM (#14970124)
    Ok, I cannot believe the absurd number of posts I am seeing from lamers who think this thing is for video games. Hello, people! Both ATI and NVidia have had separate high-end workstation lines for years now! This is nothing new. Where have you people been?

    This card is for people who need serious rendering of highly detailed scenes and 3D objects, not serious frame rates for games. For applications where image quality, complexity, and accuracy are much more important than frame rate. The GPUs in these high-end workstation cards are geared in a totally different manner and actually suck for video games! These are great for CAD/CAM, medical imaging (like from CAT and EBT scanners), chemical modeling, and lots of other hard-core scientific and 3D development type stuff.
  • Finally! (Score:5, Interesting)

    by LookoutforChris (957883) on Wednesday March 22, 2006 @04:27AM (#14970183) Homepage
    That's 1GB of unified memory, so less than 1GB is available for textures ; (

    It took them long enough; this is definitely the direction to go.

    Almost 4 years ago Silicon Graphics gave a final-revision hurrah [] to their best graphics product: InfiniteReality. A pipe sported 1GB dedicated texture memory, 10GB of frame buffer memory, 8 channels per pipe, and 192GB/s internal memory bandwidth.

    And an Onyx system could have up to 16 pipes! That's 8.3M pixels per pipe, or 133M pixels from a full system! And all in 48-bit RGBA. And those are just the raw numbers, there were a great many high end features only found on InfiniteReality. Don't ask what it costs ; )

    Sorry for the passionate post. It seems that Slashdot is very PC-ish and narrow in its viewpoint (Imagine a Beowulf of... Can it run Doom 3... etc.) so I couldn't resist blabbing about high-end kit that's off topic.

    I've had the pleasure of using a small Onyx system. Too bad SGI is dead dead dead. Still, they provide a good target for everyone to shoot for. Some day the above power will be available for a few hundred dollars for the average person. Though I think it will be at least 5 years before the quality and features of InfiniteReality4 are at a consumer level. And never will we have workstations like SGI's again ; (
    • Imagine ImageMagick ported to that... No really, I've been waiting ~15 minutes for the output of identify -verbose on a 25MPixel 48bit RGB image. I'm sure glad I didn't use the 100MPixel scan.
    • Why does the PC only have 8 GB/sec bandwidth (theoretical maximum on the most advanced systems) between CPU and main memory? PCs could have 192 GB/sec bandwidth, but Intel and other high tech companies want to maximize their profits by selling new technology one piece at a time.
  • FTFA:

    The result should be a finer level of detail throughout the visible spectrum, enhancing details in shadows and making highlights come to life.

    Unless I'm the Predator [] and have some special monitor that no one else has, this comment about the "visible spectrum" is ridiculous. Of course it's going to improve fidelity throughout the visible spectrum, do you think they'd just focus on the color green, or try to improve that all-so-important infrared fidelity?

    Based on a cutting-edge 90nm process t
