ATI's 1GB Video Card

Signify writes "ATI recently released pics and info about its upcoming FireGL V7350 graphics card. The card is a workstation graphics accelerator featuring 1GB of GDDR3 memory. From the article: 'The high clock rates of these new graphics cards, combined with full 128-bit precision and extremely high levels of parallel processing, result in floating point processing power that exceeds a 3GHz Pentium processor by a staggering seven times, claims the company.'"
  • Re:use as a cpu? (Score:5, Informative)

    by TheRaven64 ( 641858 ) on Wednesday March 22, 2006 @01:59AM (#14969836) Journal
    Building a GPU is trivially easy relative to building a CPU. Here are a few reasons why:
    • You have an OpenGL driver for the GPU and a JIT for shader language programs. This means you can completely throw out the instruction set between minor revisions if you want to. An x86 CPU must mimic bugs in the 486 to be compatible with software that relied on them.
    • You have an easy problem. Graphics processing is embarrassingly parallel. You can pretty much render every pixel in your scene independently[1]. This means that you can almost double the performance simply by doubling the number of execution units. To see how well this works for general purpose code, see Itanium.
    • The code you are running is fairly deterministic and rarely branches. Up until a year or two ago, GPUs didn't even support branch instructions. If you needed a branch, you executed both code paths and threw away the result you didn't need (see the branch-free sketch just after this comment). Now branches exist, but they are very expensive. This doesn't matter, since they are only used every few thousand cycles. In contrast, general purpose code has (on average) one branch every 7 instructions.
    GPUs and CPUs are very different animals. If all you want is floating point performance, then you can get a large FPGA and program it as an enormous array of FPUs. This will give you many times the maximum theoretical floating point throughput of a 3GHz P4, but will be almost completely useless for over 99% of tasks.

    [1] True of ray tracing. Almost true of current graphics techniques.
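
    To make the "execute both code paths" trick concrete, here is a minimal C sketch. It is purely illustrative (my own toy, not how any particular GPU is wired): early shader hardware effectively evaluated both arms of an if and blended the results with a predicate mask, much like this:

        #include <stdio.h>

        /* Branch-free select: evaluate BOTH code paths, then keep one.
           This mimics how early GPUs handled "if" in shaders: no branch
           instruction, just a predicated blend of the two outcomes. */
        static float select_no_branch(int cond, float if_true, float if_false)
        {
            float mask = (float)(cond != 0);            /* 1.0f or 0.0f */
            return mask * if_true + (1.0f - mask) * if_false;
        }

        int main(void)
        {
            float x = -3.5f;
            /* abs(x) without branching: compute both x and -x, keep one. */
            float ax = select_no_branch(x < 0.0f, -x, x);
            printf("|%.1f| = %.1f\n", x, ax);
            return 0;
        }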

  • by Jackie_Chan_Fan ( 730745 ) on Wednesday March 22, 2006 @02:01AM (#14969844)
    ATI's OpenGL drivers are flaky on their non-FireGL line of cards. Some suspect that's by design.

    Graphics card makers should get with the program and stop releasing FireGLs and Quadros. Just release really kick-ass 3D accelerators for everyone.

    That way we could all have full OpenGL support instead of ATI's lame gaming OpenGL drivers. Nvidia's gaming-card OpenGL drivers are better than ATI's.

  • by TheRaven64 ( 641858 ) on Wednesday March 22, 2006 @02:05AM (#14969861) Journal
    This is a workstation card, not a games card. The people buying this are likely to be either CAD/CAM people with models that are over 512MB (the workstation it plugs into will probably have a minimum of 8GB of RAM), or researchers doing GPGPU things. To people in the second category, it's not a graphics card; it's a very fast vector co-processor (think SSE/AltiVec, only a lot more so).
  • by Vskye ( 9079 ) on Wednesday March 22, 2006 @02:05AM (#14969865)
    In a nutshell, see the subject.
    I really don't give a flying *uck if any company, be it ATI or Nvidia, comes out with the latest and greatest video card if it does not have proper driver support! Anyone who's run Linux for a while knows the drill.
  • by kitejumping ( 953022 ) on Wednesday March 22, 2006 @02:09AM (#14969871) Homepage
    That's why I buy Nvidia... if the performance is the same, you may as well have driver support.
  • Re:Awesome! (Score:5, Informative)

    by Jozer99 ( 693146 ) on Wednesday March 22, 2006 @02:30AM (#14969930)
    I appreciate the joke, but for folks out there who think he is serious: Microsoft has said that the Intel GMA 900 and ATI Radeon X200 are the minimum graphics cards for using the "new" DirectX GUI. Vista will work on computers with lesser graphics hardware, but in a compatibility mode similar to Windows XP's GUI.
  • Re:Whoa. (Score:4, Informative)

    by Jozer99 ( 693146 ) on Wednesday March 22, 2006 @02:35AM (#14969941)
    Explained in detail above. Suffice it to say that CPUs and GPUs are radically different. With GPUs, ATI can throw out old architectures and create new ones whenever they want (quite often). Since the hardware is accessed through a driver, users aren't limited in what programs they can run. With CPUs, everyone is stuck with x86, which dates back to the late 1970s; you can't break compatibility with it.

    GPUs mostly do simple floating point calculations, so they are basically massively parallel FPUs. If they need to do a non-floating-point calculation, they are quite slow. CPUs can do floating point calculations, but also many other types of calculations, and are about equally good at everything. For the sake of heat, power consumption, size, and cost, the FPU on a CPU is not nearly as large as a GPU. If each processing unit on a CPU were the size/power of a specialized processor (GPU, etc.), the chip would be gigantic: hard to make, expensive to buy, power-hungry, and emitting unimaginable heat.
  • Not likely OS level (Score:2, Informative)

    by HornWumpus ( 783565 ) on Wednesday March 22, 2006 @02:45AM (#14969962)
    But you can write very specific GPU code to solve some parallel FP problems.

    For more general-purpose floating point work, you will have a hell of a time getting enough gain in floating point performance to overcome the overhead chatter between the CPU and GPU required to keep their states in sync.

    I'd go so far as to say that if the process can't be almost completely moved onto the GPU (the CPU will still need to feed it data and collect results), then don't bother.

    But I'm talking out of my ass, it could work. I'm just skeptical.
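
    Roughly, the skepticism comes down to a break-even inequality: offloading wins only if transfer time plus GPU time beats doing the work in place. Here's a back-of-envelope C sketch; every constant in it is an assumption made up for illustration, not a measurement of any real card or bus:

        #include <stdio.h>

        /* Back-of-envelope offload check. All constants are illustrative
           assumptions, not measurements of any particular card or bus. */
        int main(void)
        {
            double bytes       = 64.0 * 1024 * 1024; /* data shipped each way   */
            double bus_bw      = 4.0e9;              /* bytes/sec over the bus  */
            double bus_latency = 10e-6;              /* per-transfer latency    */
            double t_cpu       = 0.50;               /* seconds to do it on CPU */
            double t_gpu       = 0.05;               /* seconds of GPU compute  */

            double t_transfer = 2.0 * (bytes / bus_bw + bus_latency); /* both ways */
            double t_offload  = t_transfer + t_gpu;

            printf("offload %.3fs vs cpu %.3fs -> %s\n", t_offload, t_cpu,
                   t_offload < t_cpu ? "worth it" : "don't bother");
            return 0;
        }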

  • by temojen ( 678985 ) on Wednesday March 22, 2006 @02:54AM (#14969985) Journal
    Stream processor, not vector processor. The programming models are different.
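
    A rough C illustration of the difference, under deliberately simplified definitions of the two models (my sketch, not anyone's formal taxonomy): vector-style code may index anywhere and carry state across elements, while a stream kernel sees only its own element, which is exactly the restriction that lets the hardware run all elements in parallel.

        #include <stdio.h>
        #include <stddef.h>

        /* Vector-style: free to index anywhere and keep running state. */
        static void vector_style(float *out, const float *a, size_t n)
        {
            float running = 0.0f;
            for (size_t i = 0; i < n; i++) {
                running += a[i];        /* loop-carried state is allowed */
                out[i] = running;
            }
        }

        /* Stream-style: the kernel sees ONLY its own element plus some
           uniform constants -- no neighbors, no accumulator. */
        static float stream_kernel(float a, float scale)
        {
            return a * scale + 1.0f;
        }

        static void stream_style(float *out, const float *a, size_t n,
                                 float scale)
        {
            for (size_t i = 0; i < n; i++)
                out[i] = stream_kernel(a[i], scale); /* each i independent */
        }

        int main(void)
        {
            float a[4] = { 1, 2, 3, 4 }, v[4], s[4];
            vector_style(v, a, 4);
            stream_style(s, a, 4, 2.0f);
            printf("vector: %g %g %g %g\n", v[0], v[1], v[2], v[3]);
            printf("stream: %g %g %g %g\n", s[0], s[1], s[2], s[3]);
            return 0;
        }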
  • Re:use as a cpu? (Score:1, Informative)

    by Anonymous Coward on Wednesday March 22, 2006 @03:17AM (#14970040)
    The overhead of transferring all of this stuff to the graphics card for processing and back would be way too much for almost all FP operations. Even on a general purpose CPU, the overhead of fetching the data is usually greater than the time it takes to process it. That's why there are caches: they exist to minimize this overhead.

    However if you have large amounts of data and want to process it over and over, you can afford to transfer it to a dedicated processor. This also happens to be what this card is designed to do.
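
    The arithmetic of "process it over and over" is easy to sketch. In this toy C model (all timings invented for illustration), the one-time upload/download cost is amortized across passes, so the card pulls ahead once you run enough passes over the same resident data:

        #include <stdio.h>

        /* Amortizing the transfer: ship the data once, run many passes
           on the card, read back once. Timings are made-up examples. */
        int main(void)
        {
            double t_upload   = 0.030; /* one-time: send data to card   */
            double t_download = 0.030; /* one-time: read results back   */
            double t_pass_gpu = 0.002; /* one processing pass on card   */
            double t_pass_cpu = 0.020; /* the same pass on the host CPU */

            for (int passes = 1; passes <= 100; passes *= 10) {
                double gpu = t_upload + passes * t_pass_gpu + t_download;
                double cpu = passes * t_pass_cpu;
                printf("%3d passes: gpu %.3fs, cpu %.3fs\n",
                       passes, gpu, cpu);
            }
            return 0;
        }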
  • by yellowcord ( 607995 ) on Wednesday March 22, 2006 @03:37AM (#14970085)
    Try turning on the whiz-bang hardware-accelerated effects in KDE: you get a nice garbage screen if you enable "Use translucency/shadows" in Window Behavior under Desktop in the Control Center. Apparently it works great with the FOSS drivers on the Radeon 8500 and below, but I get a screen full of garbage on my 9800. Probably not a big deal for most people, but it's very annoying to me.

    Also frustrating is the lack of support for games played through Cedega. I installed the Cedega time demo tonight, and Guild Wars almost, but not quite, works: characters are missing heads and the frame rate is about 2 frames per second. I can get decent frame rates with a different setting, but then it crashes if I go to another district. This may be fixable, but so far my next computer is going to have an nVidia GPU.
  • by Anonymous Coward on Wednesday March 22, 2006 @03:59AM (#14970124)
    OK, I cannot believe the absurd number of posts I am seeing from lamers who think this thing is for video games. Hello, people! Both ATI and Nvidia have had separate high-end workstation lines for years now. This is nothing new. Where have you people been?

    This card is for people who need serious rendering of highly detailed scenes and 3D objects, not serious frame rates for games: applications where image quality, complexity, and accuracy are much more important than frame rate. The GPUs in these high-end workstation cards are tuned in a totally different manner and actually suck for video games! They are great for CAD/CAM, medical imaging (like CAT and EBT scanners), chemical modeling, and lots of other hard-core scientific and 3D development work.
  • Re:use as a cpu? (Score:5, Informative)

    by pchan- ( 118053 ) on Wednesday March 22, 2006 @04:22AM (#14970172) Journal
    However, how difficult would it be to write an operating system that offloaded floating point operations to the GPU, and everything else to the CPU?

    Funny you should mention that. The Intel 386 (and up) architecture has built-in support for a floating point coprocessor, so it can offload floating point operations. In the early days, you could buy a 387 math coprocessor to accelerate floating point performance. Then Intel integrated the 387 coprocessor onto the 486 series CPUs, and today we just know it as "the floating point unit" (although it has since been much revised, parallelized, and made superscalar).

    As for offloading to a GPU, well... that's what we do today. It's called Direct3D, or Mesa, or Glide, or your favorite 3D acceleration library. The problem with this approach is that it requires very specialized code. It's not something that can be done automatically for just any code, as the overhead of loading the GPU, setting up the data, and retrieving the results would far exceed the performance gains. Only in extreme cases does it pay off: the workload has to be extremely parallelizable, with almost no branching and predictable calculations. Basically, the algorithm ends up having to be extensively tailored to the GPU. Even IBM has had major issues offloading general purpose operations to their special processing units, and those are much more closely coupled to the CPU.
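
    Here is a schematic of that load/setup/retrieve round trip in C. The helper functions are hypothetical placeholders, not any real driver API; circa 2006 you would realize them with OpenGL render-to-texture and a fragment shader. The stub bodies merely simulate the data movement so the sketch compiles and runs:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        typedef struct { float *mem; int n; } GpuBuffer;  /* hypothetical */

        static GpuBuffer gpu_upload(const float *data, int n) /* host -> card */
        {
            GpuBuffer b = { malloc(n * sizeof(float)), n };
            memcpy(b.mem, data, n * sizeof(float));
            return b;
        }

        static void gpu_run_kernel(GpuBuffer *b)          /* one shader pass */
        {
            for (int i = 0; i < b->n; i++)
                b->mem[i] *= 2.0f;                        /* toy kernel */
        }

        static void gpu_readback(GpuBuffer *b, float *out, int n) /* card -> host */
        {
            memcpy(out, b->mem, n * sizeof(float));
            free(b->mem);
        }

        int main(void)
        {
            float data[4] = { 1, 2, 3, 4 };
            /* Every step below crosses the bus; for four floats the two
               transfers dwarf the kernel, which is exactly the point. */
            GpuBuffer buf = gpu_upload(data, 4);
            gpu_run_kernel(&buf);
            gpu_readback(&buf, data, 4);
            printf("%g %g %g %g\n", data[0], data[1], data[2], data[3]);
            return 0;
        }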
  • Re:use as a cpu? (Score:3, Informative)

    by fbjon ( 692006 ) on Wednesday March 22, 2006 @05:19AM (#14970279) Homepage Journal
    No, it's a 404: real linky [personal.inet.fi]
  • Re:Not bad... (Score:3, Informative)

    by RingDev ( 879105 ) on Wednesday March 22, 2006 @10:54AM (#14971468) Homepage Journal
    This is a workstation card, not a gamer's card. 1 gig of memory on a graphics card will not help any video game on the market (even the 512MB cards are overkill). Like the other FireGL cards, it is not all that great for gaming, but is extremely impressive for rendering: great for 3D content creation, scientific modeling, and other rendering-intensive activities.

    -Rick
  • by KingBahamut ( 615285 ) on Wednesday March 22, 2006 @12:07PM (#14972095)
    From a support standpoint, we at the Ubuntu Forums find the support of ATI cards very frustrating. Their drivers often don't work, and when they do, the results are spotty at best.

    http://ubuntuforums.org/showthread.php?t=148531 [ubuntuforums.org]
    http://ubuntuforums.org/showthread.php?t=122094 [ubuntuforums.org]
    http://ubuntuforums.org/showthread.php?t=148415 [ubuntuforums.org]
    http://ubuntuforums.org/showthread.php?t=141090 [ubuntuforums.org]
    http://ubuntuforums.org/showthread.php?t=137343 [ubuntuforums.org]
    http://ubuntuforums.org/showthread.php?t=76147 [ubuntuforums.org]
    http://ubuntuforums.org/showthread.php?t=75001 [ubuntuforums.org]

    This is probably the largest complaint about graphics cards that we get on the Ubuntu Forums and the UDSF (http://doc.gwos.org/ [gwos.org]). I even remember being told at one point that ATI is so focused on DirectX development that they likely don't care much about developing open source drivers, or even a decently working proprietary driver.

    There have been a few petitions to change this:
    http://www.petitiononline.com/atipet/petition.html [petitiononline.com]
    http://www.petitiononline.com/ati3/petition.html [petitiononline.com]

    And countless others. The community asks, almost begs, and all ATI does is laugh.

    It's sad, really sad.

    http://wiki.cchtml.com/index.php/Main_Page [cchtml.com]
  • by mosel-saar-ruwer ( 732341 ) on Wednesday March 22, 2006 @02:42PM (#14973771)

    Yes, they are 128-bit floats. They're needed for doing HDR.

    PLEASE don't joke about this.

    Do you have any idea how many math/physics/chem/engineering geeks would just kill for 128-bit floats in hardware?

    It would be very, very cruel to get their hopes up like that, only to find out that you were being sarcastic...
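
    For what it's worth, the standard software consolation prize is "double-double" arithmetic: represent a value as the unevaluated sum of two 64-bit doubles, for roughly 106 bits of significand. A minimal C sketch using Knuth's two-sum (illustrative only; real libraries handle many more edge cases):

        #include <stdio.h>

        typedef struct { double hi, lo; } dd;

        /* Knuth's two-sum: s = a + b exactly, with e the rounding error. */
        static dd two_sum(double a, double b)
        {
            double s = a + b;
            double v = s - a;
            double e = (a - (s - v)) + (b - v);
            dd r = { s, e };
            return r;
        }

        static dd dd_add(dd a, dd b)      /* simplified double-double add */
        {
            dd s = two_sum(a.hi, b.hi);
            s.lo += a.lo + b.lo;
            return two_sum(s.hi, s.lo);   /* renormalize */
        }

        int main(void)
        {
            dd big   = { 1e30, 0.0 };
            dd small = { 1.0,  0.0 };
            dd sum   = dd_add(big, small);
            /* In plain doubles the 1.0 would vanish; here it survives
               in the low word. */
            printf("hi = %.17g, lo = %.17g\n", sum.hi, sum.lo);
            return 0;
        }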

  • why the hell would anyone want 256mb of textures on an already stuttering GPU.

    Because 90% of programming is an exercise in caching, and if you can just cache the textures you can let your GPU just get 'em instead of waiting for it to finish saying "g-g-g-give me a t-t-t-tex... t-t-text-t-t-... gimme a damn bitmap!"
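
    In the same spirit, here's a toy direct-mapped cache lookup in C, purely my own illustration (no real driver manages texture residency this simply): check whether the texture is already resident in video memory before paying for a trip across the bus.

        #include <stdio.h>

        #define SLOTS 256

        static int resident[SLOTS]; /* texture id held in each slot, 0 = empty */

        static const char *fetch_texture(int id)
        {
            int slot = id % SLOTS;           /* direct-mapped placement */
            if (resident[slot] == id)
                return "hit: already in video memory";
            resident[slot] = id;             /* evict and pull over the bus */
            return "miss: fetched across the bus";
        }

        int main(void)
        {
            printf("%s\n", fetch_texture(42));  /* miss */
            printf("%s\n", fetch_texture(42));  /* hit */
            printf("%s\n", fetch_texture(298)); /* 298 % 256 == 42: evicts 42 */
            printf("%s\n", fetch_texture(42));  /* miss again after eviction */
            return 0;
        }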
  • by Rufus211 ( 221883 ) <rufus-slashdotNO@SPAMhackish.org> on Wednesday March 22, 2006 @02:57PM (#14973965) Homepage
    Sorry, but it's almost certainly talking about 4x 32-bit single-precision floats: four 32-bit color components per pixel multiply out to "128-bit" only in marketing terms.
