ATI's 1GB Video Card

Signify writes "ATI recently released pics and info about its upcoming FireGL V7350 graphics card, a workstation graphics accelerator featuring 1GB of GDDR3 memory. From the article: 'The high clock rates of these new graphics cards, combined with full 128-bit precision and extremely high levels of parallel processing, result in floating point processing power that exceeds a 3GHz Pentium processor by a staggering seven times, claims the company.'"
  • Re:use as a cpu? (Score:3, Insightful)

    by Tyler Eaves ( 344284 ) on Wednesday March 22, 2006 @01:49AM (#14969806)
    Because for most general computing tasks, floating point doesn't matter.
  • Re:use as a cpu? (Score:5, Insightful)

    by Kenshin ( 43036 ) <kenshin@lunarOPENBSDworks.ca minus bsd> on Wednesday March 22, 2006 @01:51AM (#14969818) Homepage
    I'm thinking:

    a) Tough market to crack. AMD's been around for years, and they're still trying to gain significant ground on Intel. (As in mindshare.) May as well spend the effort battling each other to remain at the top of their field, rather than risk losing focus and faltering.

    b) These chips are specialised for graphics processing. Just because you can make a kick-ass sports car, doesn't mean you can make a decent minivan.
  • So? (Score:5, Insightful)

    by Tebriel ( 192168 ) on Wednesday March 22, 2006 @01:55AM (#14969825)
    Other than high-end graphics work, what the hell will this mean? Are you seriously saying that we will be seeing games needing that much video memory anytime soon? Hell, they have a hard enough time getting people to buy cards with 256 MB of RAM.
  • Re:So? (Score:5, Insightful)

    by Bitter and Cynical ( 868116 ) on Wednesday March 22, 2006 @02:18AM (#14969897)
    Other than high-end graphics work, what the hell will this mean?
    Nothing. These cards are not meant for gaming; in fact, if you tried to use one for gaming you'd be very upset. The FireGL line is a workstation card meant for things like CAD or render farms that are very memory-intensive and require a high level of precision. It's not meant for delivering a high frame rate, and no gamer would stick this card in his machine.
  • Re:use as a cpu? (Score:1, Insightful)

    by Segfault666 ( 662554 ) on Wednesday March 22, 2006 @02:48AM (#14969971)
    ... Just because you can make a kick-ass sports car, doesn't mean you can make a decent minivan.

    http://personal.inet.fi/surf/porschenet/text/fut_plan/varrera/index.html

    enjoy...

  • by xamomike ( 831092 ) on Wednesday March 22, 2006 @03:20AM (#14970047) Homepage
    Sounds great and all, but have they gotten around to paying their own programmers to make drivers that actually work and install off the CD it comes with, instead of outsourcing it to a few guys in their basement?

    Seriously, I've owned 6 different ATI cards of differing lines this year, and only 2 of them installed properly with the drivers that came on the CD. That just ain't right.
  • Re:use as a cpu? (Score:2, Insightful)

    by pherthyl ( 445706 ) on Wednesday March 22, 2006 @04:06AM (#14970143)
    Building a GPU is trivially easy relative to building a CPU

    Easier? In some respects. Trivially easy? Not quite.

    In contrast general purpose code has (on average) one branch every 7 cycles.

    One branch every 5-7 instructions, not cycles.

    Other than that, good comment, I didn't know about the (lack of) branch support in GPUs.
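    [Ed. note: the branch-handling point above can be sketched in toy Python. This is illustrative only -- the function name is made up, and real GPUs use hardware lane masks rather than Python lists -- but it shows why branchy code hurts a wide SIMD/GPU design: when lanes disagree on a branch, both sides get computed and a mask picks the survivors.]

    ```python
    # Toy sketch of SIMD/GPU branch divergence: every "lane" in a group
    # computes BOTH sides of the branch, and a per-lane mask selects one,
    # so divergent branches waste half (or more) of the work done.

    def simd_if_else(conds, then_vals, else_vals):
        then_side = [v * 2 for v in then_vals]   # all lanes pay for this...
        else_side = [v + 1 for v in else_vals]   # ...and for this too
        return [t if c else e for c, t, e in zip(conds, then_side, else_side)]

    # Four lanes, two taking each path: all four still executed both paths.
    print(simd_if_else([True, False, True, False], [1, 2, 3, 4], [1, 2, 3, 4]))
    # -> [2, 3, 6, 5]
    ```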

  • Re:use as a cpu? (Score:2, Insightful)

    by clydemaxwell ( 935315 ) on Wednesday March 22, 2006 @08:26AM (#14970710)
    Because the OP's 'obviously' statement seemed to imply he thought current GPUs were a good marker of CPU quality -- i.e., he was impressed by the GPU specs, and so wondered why they don't make CPUs (7 times better than current). So the GP was right in pointing out that it's not a very good indicator of CPU strength, and thus we have little reason to believe nvidia would make a good CPU. Tada!
  • Re:use as a cpu? (Score:5, Insightful)

    by somersault ( 912633 ) on Wednesday March 22, 2006 @08:49AM (#14970793) Homepage Journal
    The problem is that graphics accelerators work in a very limited domain, so the gfx card engineers can concentrate on the things it needs to be fast at (floating point calculations, transferring memory contents efficiently). Normal CPUs have a much wider scope, and while I'm sure the engineers that design/upgrade x86 processors do try their best to make the chips fast, they have to spread their work around and make sure that every area is decent, rather than one area spectacular. Also, graphics cards are fairly self-contained, while processors have their motherboard/chipset (which for Intel would include the memory controller) to dictate which type of RAM they can use, how much bandwidth the whole system has, etc.

    The thing is that you couuuuld make an x86 that runs using GDDR3 etc, but it would be rather expensive, and nobody (well, no majority market anyway) is going to pay to produce that, if only a few thousand people can actually afford it. In time the costs will come down, but until then we common folk just have to stick with whatever AMD/Intel/Whoever are producing.

    But anyway, the main point I made, maybe not in a very technically accurate way, was that it's easier to build something that performs well in one area, than to build something that does everything amazingly well (without costing the earth to buy it).
  • by Anonymous Coward on Wednesday March 22, 2006 @09:37AM (#14971005)
    Now I can program my graphics engine even more poorly, and when people complain it runs slow, I'll just tell them to buy an even better graphics card!
  • by somersault ( 912633 ) on Wednesday March 22, 2006 @09:44AM (#14971033) Homepage Journal
    Wouldn't you rather have a card/drivers that have been designed for higher framerates, but with lower rendering precision? You don't need 128-bit floating point precision to play Quake. You're not going to care if a vertex on your health pack is 0.000005whatevers off to the left of where it should be, and you're not even going to see that onscreen unless you go in realllly close. I'm maybe oversimplifying things, but I tried a few years ago running Counter-Strike Source with a Quadro graphics card, then some Radeon that was lying around; the Radeon had much better framerates. It may have been a generation up from the Quadro, but it would still have been way cheaper to buy initially, and people are *not* going to pay for that 'kick ass 3D accelerator' when a more budget version does the same thing, but with less memory, and drivers that are optimised to make games fast, rather than very precisely rendered.
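    [Ed. note: the precision point is easy to put rough numbers on. A back-of-the-envelope sketch, using assumed figures (a vertex coordinate of ~1000 world units, ~10 screen pixels per world unit), shows that even ordinary 32-bit floats already round far below anything a player could see:]

    ```python
    import numpy as np

    # Worst-case rounding step (one unit in the last place) of a 32-bit
    # float vertex coordinate at a magnitude of ~1000 world units.
    coord = np.float32(1000.0)
    ulp = float(np.spacing(coord))
    print(ulp)                     # ~6.1e-05 world units

    # Assuming one world unit spans ~10 screen pixels, the worst-case
    # positional error is a tiny fraction of a pixel -- invisible in-game.
    error_in_pixels = ulp * 10.0
    print(error_in_pixels)         # ~6.1e-04 pixels
    ```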
  • by Anonymous Coward on Wednesday March 22, 2006 @11:40AM (#14971883)
    I'm tired of hearing this anthropocentric nonsense about chips.

    GPUs are not faster than CPUs because the engineers can "concentrate on one area" instead of "spreading their work around". It's not that the floating point performance of the x86 would be faster if only Intel had the time to pay attention to it. That's ridiculous.

    GPU tasks are highly parallel. CPU tasks are not. nVidia can toss 24 pipelines onto a chip and realize a huge performance gain. Intel can't, because much of the time those pipelines will be empty waiting for the results of the other lines.

    This fundamental difference is what separates the two domains, not it being "easier to build something that performs well in one area, than to build something that does everything amazingly well (without costing the earth to buy it)."

    You need to keep your science and your homey folk wisdom separate.
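    [Ed. note: the parallelism argument above can be made concrete with a toy sketch. The function names are illustrative, not real GPU code: in the first loop every iteration depends only on its own input, so 24 pipelines could each take a slice; in the second, each step waits on the previous result, so extra pipelines would sit idle.]

    ```python
    # GPU-style task: outputs are independent, so all iterations could
    # run at once across parallel pipelines.
    def shade_pixels(pixels):
        return [min(p * 2, 255) for p in pixels]

    # CPU-style task: each step needs the previous result, so the work
    # is a serial chain of dependencies no amount of pipelines can split.
    def running_balance(transactions):
        balance, out = 0, []
        for t in transactions:
            balance += t
            out.append(balance)
        return out

    print(shade_pixels([10, 200, 90]))     # -> [20, 255, 180]
    print(running_balance([5, -2, 7]))     # -> [5, 3, 10]
    ```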
