ATI's Stream Computing on the Way 129

SQLGuru writes to tell us that ATI has announced plans to release a new graphics product that could provide a shake-up for high-performance computing. From the article: "ATI has invited reporters to a Sept. 29 event in San Francisco at which it will reveal 'a new class of processing known as Stream Computing.' The company has refused to divulge much more about the event other than the vague 'stream computing' reference. The Register, however, has learned that a product called FireStream will likely be the star of the show. The FireStream product marks ATI's most concerted effort to date in the world of GPGPUs, or general-purpose graphics processing units. Ignore the acronym hell for a moment because this gear is simple to understand. GPGPU backers just want to take graphics chips from the likes of ATI and Nvidia and tweak them to handle software that normally runs on mainstream server and desktop processors."
This discussion has been archived. No new comments can be posted.

  • World beyond x86 (Score:2, Insightful)

    by Cybert4 ( 994278 ) *
    So sick of x86. Look at all the cool stuff the graphics card makers are coming up with. Intel needs to buy NVidia to get real innovation done. I'm sure they have cool stuff cooking up, though. Let's get engineers going and let's get innovating!
    • by tomstdenis ( 446163 ) <tomstdenis.gmail@com> on Wednesday September 20, 2006 @02:10PM (#16148135) Homepage
      Except that GPGPUs are not a competitor for x86. Tell me how fast your C compiler will work on that nvidia or ATI card.

      If you're gonna beef up and make more general a GPU, you might as well call it a Cell ... oh wait. IBM did that.

      NEXT!

      Tom
    • by dnoyeb ( 547705 )
      I think people became immunized to CPU makers' hype after the first Intel MMX, which was supposed to replace modems and graphics cards, etc...

      Graphics card manufacturers have yet to cease pumping the 8hit. It is, however, peculiar that ATI would announce a technology that could potentially devalue their parent company's (AMD) product. I suppose next AMD will announce more graphics processing capabilities on their CPUs. What's good for the goose, right?
      • I doubt it's going to devalue the AMD product. I think it's going to start a new trend in computing: being able to choose what kind of horsepower you want in your server/workstation. One CPU and one FPU, or one CPU and three FPUs, or three CPUs and one FPU, or even more. Need more floating-point power? Stick in another streaming processor. Or even another special-purpose processor...
        And all of it based on and connected through HyperTransport.
      • We need to get rid of the variable length encoding. The very basis of x86 is a running joke for anyone who is clued.
        • Re: (Score:3, Insightful)

          by AKAImBatman ( 238306 ) *
          We need to get rid of the variable length encoding. The very basis of x86 is a running joke for anyone who is clued.

          Oh really? Then perhaps you'd care to clue the rest of us in? I see very little impact from the x86's VLE instruction set. Only if you make assumptions about the underlying core based on the instruction set (which would not be a wise thing to do) could I see VLE as an issue.
        • by low-k ( 309815 )
          Ditto the first responder. ISA goop matters not very much. Modern x86 processors (Intel or AMD) translate the x86 goop into relatively clean RISC-like operations. From that point in the processor onward, the encoding/decoding warts of x86 don't matter. Almost any half-way sane ISA (and that's a generous "half-way" as it includes x86) can be decomposed into RISC-like operations. Several years ago when anybody cared about Itanium, Intel even had a paper published in an ACM/IEEE conference on how to i
    • Re:World beyond x86 (Score:5, Informative)

      by ErikTheRed ( 162431 ) on Wednesday September 20, 2006 @02:31PM (#16148312) Homepage
      So sick of x86. Look at all the cool stuff the graphics card makers are coming up with. Intel needs to buy NVidia to get real innovation done. I'm sure they have cool stuff cooking up, though. Let's get engineers going and let's get innovating!
      Intel's buying power [yahoo.com] (Total Current Assets - Total Current Liabilities): around US$ 8.5B

      NVidia's current market cap [yahoo.com]: US$ 10.83B

      And that's assuming Intel won't have to write down a ton of their current inventory (all their old Netburst crap). They'd have to issue a ton of new stock to pay for the purchase - I don't think their shareholders would go for it.
      • Re: (Score:3, Insightful)

        by Amouth ( 879122 )
        "NVidia's current market cap: US$ 10.83B"

        To control it they only need $5.42B.

        • Yeah, but you can guarantee that if they start considering a buyout nVidia's market cap will go up.
          • by Amouth ( 879122 )
            Intel would just have to quietly plan a one-day aggressive buyout of voting stock..

            It could happen.. not that I think it ever will.
    • "So sick of x86. Look at all the cool stuff the graphics card makers are coming up with. Intel needs to buy NVidia to get real innovation done."

      This makes no sense. Your logic is:

      1) x86 sucks. Intel makes x86.
      2) Graphics card makers are doing great stuff. NVidia is a graphics card maker.
      3) Intel designed the x86, so therefore Intel's product designs suck.
      4) NVidia is making cool stuff, so NVidia's designs are good.

      Your conclusion: Intel should buy NVidia so innovation can start.

      Your conclusion is in dire
  • by DaveM753 ( 844913 ) on Wednesday September 20, 2006 @01:56PM (#16148023)
    Not "Steam Computing"...
    • by ortcutt ( 711694 )
      Steam Computing in action:

      http://www.youtube.com/watch?v=HIgsqYJmtJs&eurl= [youtube.com]
      • by kfg ( 145172 ) *
        I've thought of messing around with something like that, but I have a fondness for gravity drives.

        I'm cuckoo for cuckoo clocks, cuckoo for cuckoo clocks, cuckoo. . .

        Oh, sorry. Anyway, it's true you have to do some work to reset them, but I'm not actually averse to work. You can lift barbells to generate waste heat, or you can lift cement blocks and sand to generate electricity.

        KFG
      • Wow, that was disappointing. I was hoping to see something that somehow used the steam for computation (using "valvsistors" or something). : (

    • Well, it's about time we got some decent peripherals for the difference engine.
  • Stream eh? (Score:3, Funny)

    by ZipR ( 584654 ) on Wednesday September 20, 2006 @01:56PM (#16148024)
    Perhaps they're following Valve's lead and are introducing 'episodic' computing.
  • So AMD's 4X4 could essentially become an 8-way processor? That might be cool, especially when the quads come out.
  • Fantastic hardware, I'm sure. But since they won't release the specs, we'll all have to rely on their drivers. Good job that ATI has such a stellar record when it comes to releasing stable, reliable drivers on Linux.
    • by ahsile ( 187881 )
      Is there any indication the driver situation will improve with the buyout by AMD?
    • RTFA (Score:1, Informative)

      by Anonymous Coward
      I know it's Slashdot, but.......


      ATI has only recently allowed developers to tap into its CTM (close to the metal) interface, which lets software interact directly with the underlying hardware.

      Presumably, ATI will announce an even more open stance at its event next week.


      It does still leave room for doubt, though.
    • Not to diss Linux, but I can see why ATI only gives a small portion of their effort to Linux drivers (and that effort, I hear, has improved of late). The majority of their cards are DirectX and geared at... Windows gamers. It's their bread and butter. That they provide Linux drivers at all is a nicety. Many manufacturers of consumer goods for the PC don't give any Linux support. Sadly for the Linux community, the only way the Linux effort will improve is if more big-time games come to Linux. Right now if you
      • The amount of time ATi devotes to their Linux drivers is beside the point. All we want are specs so that we can write our own drivers!

        Besides, what about all the people using BSD (and the 3 people using HURD) that are completely unsupported by ATi?

  • General purpose graphical processing unit. If only there were some kind of unit that would process instructions centrally, it could be used for all sorts of things! We could call it... hmmm... a central processing unit? Naw, that just doesn't have enough zazz. Seriously, this is about as silly as trying to sell "physics acceleration cards". I don't want general processing to have to be stuck going through a PCI bus. Yuck.
    • Re:GPGPUs... (Score:5, Insightful)

      by shawnce ( 146129 ) on Wednesday September 20, 2006 @02:15PM (#16148192) Homepage
      The point of a thing like this is to ship data in bulk to the VRAM attached to the GPU, then have the GPU grind away on that data using the large memory bandwidth available on the adapter, and then, once finished, pull the data back off the adapter. Also note that PCIe is much, much better than any prior PCI/AGP bus for feeding this type of thing.
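      Roughly, the round trip looks like the sketch below. This is illustrative only: it uses NVIDIA's CUDA toolkit as a stand-in, since ATI has not published its CTM calls, so none of these function names are ATI's.

      // Minimal sketch of the bulk upload / GPU compute / bulk readback pattern.
      // CUDA is used purely as an illustration of the general GPGPU round trip.
      #include <cuda_runtime.h>
      #include <vector>

      __global__ void scale(float *data, int n, float k) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i < n) data[i] *= k;   // the same simple operation applied to every element
      }

      void process_on_gpu(std::vector<float> &host, float k) {
          float *dev = 0;
          size_t bytes = host.size() * sizeof(float);
          int n = (int)host.size();
          cudaMalloc(&dev, bytes);                                      // space in the VRAM on the adapter
          cudaMemcpy(dev, host.data(), bytes, cudaMemcpyHostToDevice);  // ship the data in bulk over PCIe
          int threads = 256;
          int blocks = (n + threads - 1) / threads;
          scale<<<blocks, threads>>>(dev, n, k);                        // grind away using the adapter's memory bandwidth
          cudaMemcpy(host.data(), dev, bytes, cudaMemcpyDeviceToHost);  // pull the results back off the adapter
          cudaFree(dev);
      }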
      • I've always thought of data projection queries in terms of Z-buffer processing. It would be interesting to see what a GPGPU could do for such queries.

        For example, pricing, products, and services often have start and end date-times. Given a particular date, the effective pricing is the most recently started data set that hasn't expired yet. It sounds easy in English, but it tends to be a rather brutal set of unions and hierarchical-join queries to implement.

        But given the I/O intensive nature of such pro
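        The selection rule described above boils down to "of all rows whose window contains the date, take the one that started last." A plain CPU-side sketch of just that rule (illustrative C++, hypothetical PriceRow type, no GPU involved):

        #include <ctime>
        #include <vector>

        struct PriceRow { std::time_t start, end; double price; };

        // Returns the row that started most recently and has not yet expired,
        // or a null pointer if nothing is in effect at time t.
        const PriceRow *effective_price(const std::vector<PriceRow> &rows, std::time_t t) {
            const PriceRow *best = 0;
            for (size_t i = 0; i < rows.size(); ++i) {
                const PriceRow &r = rows[i];
                if (r.start <= t && t < r.end &&           // the window contains t
                    (best == 0 || r.start > best->start))  // the most recently started row wins
                    best = &r;
            }
            return best;
        }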

    • Re: (Score:2, Insightful)

      by Anonymous Coward
      Nah, think about it a bit more. AMD buys ATI, AMD has HyperTransport, ATI has chips capable of running 48 specialised threads *alongside* your normal CPU, admittedly initially PCIe (which isn't that shabby), but eventually they *have* to put it on HyperTransport, with direct access to RAM yada yada yada. I can see database servers LOVING this, and scientific visualisation software, 3D renderfarms etc. etc.
    • Re: (Score:3, Insightful)

      by cowscows ( 103644 )
      It sounds to me like it's not entirely general purpose, just a recognition of the fact that the sorts of optimizations graphics have benefited from can easily be shifted to some other specific applications.

      So there are, for example, some specific common database operations that could be significantly more efficient with some optimized hardware. It's just that there's not necessarily a big enough market to design, test, produce, and sell cards designed just for that and make a profit.
      • An example [cmu.edu] of this has already been presented. A GPU can be useful for doing large sorts, which is something databases tend to do quite often in complex queries. It was a win if there was more than 1MB of data to sort.

        The benefit comes in that the GPU is tightly connected to a bunch of fast RAM that isn't being competed over by the general purpose CPU(s).

        So, you throw 128MB or so of data onto the GPU, and you can get it sorted several times faster than a regular CPU could do it. Presumab
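        For the curious, a GPU-side sort can be written in a few lines today with NVIDIA's Thrust library. Treat this purely as an illustration (Thrust post-dates the CMU work, and the paper's authors wrote their own shader-based sorts):

        #include <thrust/copy.h>
        #include <thrust/device_vector.h>
        #include <thrust/host_vector.h>
        #include <thrust/sort.h>

        // Copies the keys into VRAM, sorts them entirely on the GPU, and copies them back.
        void gpu_sort(thrust::host_vector<int> &keys) {
            thrust::device_vector<int> d = keys;             // bulk transfer to the GPU's RAM
            thrust::sort(d.begin(), d.end());                // the sort runs on the GPU
            thrust::copy(d.begin(), d.end(), keys.begin());  // bulk transfer of the sorted keys back
        }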

    • Those of you who haven't worked in the computer graphics field for long (or ever) may not be aware that there have been many cycles of "let's get some specialized hardware in here to help with graphics" followed by that hardware eventually being incorporated into the general-purpose CPU. This is simply the first time it's occurred for the mainstream market.

      Maxim
    • They could shorten it to GP2U later on.
    • You are missing the point of stream processing. GPUs excel at filling pixels with data - which is to say they are good at doing the same thing to a whole bunch of pixels. The idea of a stream is to visualize all of those pixels flying by. Also, to carry the metaphor forward, the stream flows in one direction, hence the end of the stream does not flow back to the start of the stream. For graphics, that means the destination pixels (i.e. the screen) depend on things upstream from them (what you are drawing, i.e. pi
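      To make the one-direction idea concrete, here is a hedged sketch (CUDA syntax as a stand-in for any GPGPU interface) in which every output element gathers from read-only inputs upstream of it and nothing is ever scattered back into the source stream:

      // Illustrative gather-only stream kernel: each thread reads its neighbours
      // from the input stream and writes exactly one output; the input is never modified.
      __global__ void box_blur_1d(const float *in, float *out, int n) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i <= 0 || i >= n - 1) return;                   // skip the edges for simplicity
          out[i] = (in[i - 1] + in[i] + in[i + 1]) / 3.0f;    // gather from "upstream", write downstream once
      }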
  • As much as I understand it, a GPU works much more efficiently than a CPU because it is much flatter than a CPU - whereas a CPU works very quickly on one thread at a time, a GPU can work on a bunch more threads, each a lot slower. But the GPU adds up to more flops. Okay. Now, as I understand it, a Pentium 2 generally performs 1/5 the amount of work that a modern single-core CPU does. As far as I understand it, a Pentium 2 requires very, very little die space to make if we make it with a modern process. Which
    • by pilkul ( 667659 )
      I imagine if such a thing existed there would be considerable demand for it. However, I'm no expert in CPU design but somehow I doubt that Intel is able to just slap 100 Pentium 2s on one die and have it work just like that. There were no plans for the possibility of multiple cores when that CPU or its interface with the motherboard was designed. A new core would likely have to be engineered from the ground up -- which is pretty much what ATI is doing.
    • by Amouth ( 879122 )
      The logic of just making a P2 smaller via newer tech isn't right.. some things shrink and some things don't.. also you have to look at the timing issues for the paths..

      then you have to figure out how to stick them all on one die.. and they have to talk to each other..

      and while you can make some things smaller.. what about the cache...

      a video card doesn't have to fit in a 1-2 in^2 area.. it can be a full-length ATX card with GBs of fast RAM and multi-core specialized processors..

      a CPU.. like Intel and AMD ha
    • by Phleg ( 523632 )

      I'm not sure why some variant of this argument wouldn't hold true for GPUs, but there is a practical limit to how many cores a CPU can have on-die.

      With the first set of dual-core CPUs, each CPU has its own L1 and L2 cache. However, this isn't optimal -- basically, it heavily favors scheduling processes on the last CPU they ran on, to increase the likelihood that needed memory is already in the L1/L2 cache. So, most current dual-core architectures use individual L1 caches per CPU, but a shared L2 cach

  • It seems that GPGPU applications are turning the GPU into something similar to the old math coprocessors, but for parallelizable, SIMD math.

    I predict that they will eventually go the way of the FPU.
    • I predict that they will eventually go the way of the FPU.

      You mean back into the main CPU core where they (GPUs) came from in the first place?
      • Exactly. GPGPU is basically hijacking graphics hardware to perform parallelizable vector operations. However, the GPU isn't general enough for many applications. Hardware is improving in this regard, but if GPGPU actually takes off, it would really make more sense to have a generic vector processor. However, this could also happen by improving the SIMD abilities of the current round of hardware, like Altivec and SSE.
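        For reference, the SSE route is already usable from ordinary C/C++ today. A hedged example of the kind of 4-wide operation involved (header and intrinsic names assume an x86 toolchain; n is assumed to be a multiple of 4):

        #include <xmmintrin.h>

        // Computes out[i] = a[i] * b[i] + c[i], four floats per iteration with SSE.
        void madd_sse(const float *a, const float *b, const float *c, float *out, int n) {
            for (int i = 0; i < n; i += 4) {
                __m128 va = _mm_loadu_ps(a + i);   // unaligned loads, 4 floats each
                __m128 vb = _mm_loadu_ps(b + i);
                __m128 vc = _mm_loadu_ps(c + i);
                _mm_storeu_ps(out + i, _mm_add_ps(_mm_mul_ps(va, vb), vc));
            }
        }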
      • Exactly right. If you want to do a lot of massively parallel calculations, design a CPU to do it. Don't ship tons of data back and forth on a slow[er than the memory-to-CPU] bus.
    • by kfg ( 145172 ) *
      I predict that they will eventually go the way of the FPU.

      All my life's a circle, sunrise and sundown. . .

      And the wheel of reincarnation turns again.

      KFG
    • by Khyber ( 864651 )
      I'll put more money on HyperTransport, which ATI will have access to now that AMD and ATI have joined forces. Who needs PCIe or X when you could just use HyperTransport as your data transfer route? That thing's got far more bandwidth and speed. I'd love to see a system designed completely around HT. Maybe one or two expansion slots, but most anything else could just be socket-based (need to upgrade? Just buy a new GPU/SPU instead of a whole freaking card).
  • GPGPU primer (Score:5, Informative)

    by daVinci1980 ( 73174 ) on Wednesday September 20, 2006 @02:19PM (#16148219) Homepage
    (Full disclosure: I work for a major manufacturer of 3-D accelerators.)

    There are lots of good sites that talk about GPGPU [gpgpu.org]. Wikipedia [wikipedia.org] has an okay article on the subject as well, and NVIDIA has a primer (PDF) [nvidia.com] on the subject. But the summary of this article is a bit overly broad.

    GPGPU isn't about moving arbitrary processing to the GPU, rather it's about moving specific, computationally expensive computing to the massively parallel GPU.

    Effectively, the core idea of GPGPU solutions is that you compute a 256x256 block (or another granularity) of solutions entirely in one pass.

    NVIDIA has several examples [nvidia.com] on their website, specifically the GPGPU Disease and GPGPU Fluid samples. The Mandelbrot computation they have there could also be considered an example. (More samples here [nvidia.com]).

    GPGPU has already been utilized to perform very fast (comparable to the CPU) FFTs. In an article in GPU Gems 2 (a very good book if you're interested in doing GPGPU work), they indicate that a 1.8x speedup can be had over performing FFTs on the CPU. I've heard that there are now significantly faster implementations as well.
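    As a hedged sketch of the "256x256 of solutions in one pass" idea, here is the Mandelbrot example written as a compute kernel, one thread per cell of the grid (CUDA syntax used purely as illustration; NVIDIA's own samples are shader-based):

    // Illustrative only: each thread computes the iteration count for its own cell,
    // so a single launch evaluates the whole 256x256 grid in one pass.
    __global__ void mandelbrot(int *iters, int width, int height, int max_iter) {
        int px = blockIdx.x * blockDim.x + threadIdx.x;
        int py = blockIdx.y * blockDim.y + threadIdx.y;
        if (px >= width || py >= height) return;

        float cx = -2.0f + 3.0f * px / width;   // map the cell to a point in the complex plane
        float cy = -1.5f + 3.0f * py / height;
        float x = 0.0f, y = 0.0f;
        int i = 0;
        while (x * x + y * y < 4.0f && i < max_iter) {
            float xt = x * x - y * y + cx;
            y = 2.0f * x * y + cy;
            x = xt;
            ++i;
        }
        iters[py * width + px] = i;             // one result per thread
    }

    // Host side: mandelbrot<<<dim3(16, 16), dim3(16, 16)>>>(dev_iters, 256, 256, 1000);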
    • In radio astronomy, correlations and FFTs are a large part of the computations we do. Some tests we've done on Nvidia 7900s show those could be sped up by a factor of 300 over a general CPU. The big problem is bandwidth.
      The amount of processing and bandwidth needed for the new telescopes coming online is staggering. For LOFAR we're using an IBM BlueGene with 12,000 cores (Stella, nr. 12 in the Top500), using a 144 Gb/s connection, and for SKA the numbers are going to be orders of magnitude larger. The pos
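      For reference, offloading a batch of FFTs looks roughly like the sketch below with NVIDIA's cuFFT library (used here as a stand-in; the 7900 tests mentioned above would have used hand-written shader code, and whatever ATI ships will have its own interface):

      #include <cufft.h>

      // Illustrative batched in-place forward FFTs on data already resident in device memory.
      void forward_ffts(cufftComplex *dev_data, int fft_size, int batch) {
          cufftHandle plan;
          cufftPlan1d(&plan, fft_size, CUFFT_C2C, batch);         // one plan covers the whole batch
          cufftExecC2C(plan, dev_data, dev_data, CUFFT_FORWARD);  // the transforms run on the GPU
          cufftDestroy(plan);
      }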
  • If "Stream Computing" is even half as revolutionary as "Blast Processing," [wikipedia.org] count me way the hell in!

    SEGA!!!
  • It seems that lately, with the Cell processor, dual core, and now the use of processors optimized for matrix operations as general-purpose units, there's a craze for improving the computation of highly parallelizable tasks. What about the other tasks where dependencies abound and the only way to improve them is through decreased cache latency and more processor cycles?

    Or are there not as many of those tasks as we were led to believe?

    • by 2short ( 466733 )

      There's plenty of those tasks. There's just not a heck of a lot to be done about it. The apparent recent focus on parallel tasks is partly because chip makers are running out of easy ways to make non-parallel tasks any faster. But it's relatively easy to do the same task more times in parallel at the same speed. Which probably doesn't help until the software gets re-written to take advantage of that, assuming it can be.

      On the other hand, my impression is that a lot of tasks that seem like they can't bene

  • We could have had this long ago if not for the fact that the AGP bus is slow as hell when moving data from graphics card memory back to main memory/CPU.

    Sure, you could do computations in graphics memory, if you didn't mind waiting forever to read the results back.

    • by ad0gg ( 594412 )
      PCI video was around before AGP and wasn't asymmetrical in bandwidth. :P
    • We had HyperTransport back in the late AGP days, but now it has been opened up to HTX and co-processors. I hope that AMD and ATI can come out with an HTX card with this chip on it.
    • Seriously, how long does it take to get the answer "42" transferred over the bus?
  • Good for them (Score:4, Insightful)

    by gillbates ( 106458 ) on Wednesday September 20, 2006 @02:25PM (#16148267) Homepage Journal

    In the original PC, the VGA interface gave the CPU a direct window into the video memory. Your CPU was your GPU as well - the only thing the graphics card did was convert the raster of bytes in a certain location to a signal recognizable by the monitor. As such, the hardware wasn't optimized for the kinds of operations that would become typical in the games that followed. So video card manufacturers began a mitigation strategy which involved moving the computationally complex parts of rendering off to the video card, where the onboard processor could render much more quickly and more efficiently than the CPU itself. The drawback of this approach was that to take full advantage of your video hardware you had to run a certain buggy, unstable, and rather insecure operating system. Typically, the drivers were written only for Windows. Reinstalling Windows became a semi-annual ritual for serious gamers.

    But, if ATI is successful in standardizing the GPGPU architecture, we may be able to take advantage of the video hardware on platforms other than Windows. While Linux has typically suffered a dearth of FPS games because of the lack of good hardware rendering support in the past, this has the potential to make Linux the next serious gamer's platform.

    Which is a good thing, IMHO.

    • by suv4x4 ( 956391 )
      The drawback of this approach was that to take full advantage of your video hardware you had to run a certain buggy, unstable, and rather insecure operating system. Typically, the drivers were written only for Windows. Reinstalling Windows became a semi-annual ritual for serious gamers.

      It's totally Microsoft's fault that Linux is not interesting to most gamers, or that GPL doesn't allow proprietary drivers to be used on Linux. Yea. Totally Microsoft's fault.

      As for gamers, they have lots of stupid rituals
      • I agree with much of your post, but not this part:

        Tell me a piece of consumer software (sorry this doesn't include servers, compilers and hacker tools), that has become successful on Linux.

        It depends on what you mean by "successful." If you mean market penetration, then not much. But I look at "successful" as whether it does what I need it to do and does it better.

        For starters, GNU Lilypond [lilypond.org], which is light-years ahead of software like Finale in its flexibility. True, it doesn't (yet) have all of

        • by suv4x4 ( 956391 )
          None of those are commercial consumer applications, which is what I'm talking about. You can't sell to consumers on Linux. So games are out of the question.
    • Re: (Score:1, Insightful)

      by Anonymous Coward

      So video card manufacturers began a mitigation strategy which involved moving the computationally complex parts of rendering off to the video card,

      Oh, you mean those massive VME bus cards that SGI created, and the IRIS Graphics Library they also created to access the power of their graphics cards.

      The drawback of this approach was that to take full advantage of your video hardware you had to run a certain buggy, unstable, and rather insecure operating system.

      Oh, you mean after SGI created an open vers

      • You're kidding, right?

        I hate to feed the trolls, but the counterargument is so irresistible: How many gamers would have seriously considered buying the systems you mention?

        I know, I know. I must have had it hard because I had to settle for a $2000 PC back when we had to walk to school, both ways, in the snow...

        Good graphics hardware has always been available to those who could afford to pay an engineer's salary. But even the average engineer would think twice about buying the systems you mention

  • That "stream computing" sound to me like "Citrix on steroids", i. e. dumb terminals which get high-quality, real-time graphics and sound
  • to make it enlarge my pr0n really, really fast. Me likes me pr0n really, really big!
  • And who really needs this much extra processing power? If you already have an Athlon X2, or Core 2 Duo, how often have you maxed out your processor? How much of this is just bragging rights and penis extending?
    • by COMON$ ( 806135 )
      Seriously? Wow, I mean just wow. Either you are considerably shortsighted, young, or just plain dumb.

      Core 2 Duo? Look at those of us who use VMware, any sort of video rendering, or multi-tasking on a general level and we kill those processors in a heartbeat.

    • by dahl_ag ( 415660 )
      I have looked into GPGPU in the past for some of the experiments that I do with neural networks. LOTS of floating-point operations... very simple, but lots of them. Aside from image processing, Wikipedia offers this list of applications: http://en.wikipedia.org/wiki/GPGPU#Applications [wikipedia.org] Yeah, your average desktop user doesn't need this power, but there are plenty of people who certainly could use it.
    • Right now, probably very few people need it. But the (admittedly brief) history of computers shows pretty clearly that applications will always expand to fill the new space opened up by more powerful hardware, whether it be storage (RAM, Harddrives), or speed (Processor's GHz, ISA/PCI/AGP/PCIe). And of course, there are some applications that will always benefit from more processing power -- calculating more digits of pi, running simulations, adding more polygons to game levels, etc.
  • We've had stream computing for several years on both platforms; see "Metaprogramming GPUs with Sh" by McCool and Du Toit. What is significant is that this follows the purchase by AMD. Leveraging the purchase and providing closer coupling of the CPU and the GPU (HyperTransport?) for GPGPU could allow them to steal a march on Intel the same way they did with the 64-bit instruction set.
  • by inio ( 26835 ) on Wednesday September 20, 2006 @03:22PM (#16148769) Homepage
    Stream processing is not new. There have been academic projects working on massively parallel systems for decades. One particular project I know of is UCSC's Kestrel [ucsc.edu] processor, a 512-way 8-bit stream processor. In the late '90s this thing blew high-end desktops out of the water for linear processing tasks like image convolution, and at a fraction of the power.
  • GPGPU backers just want to take graphics chips from the likes of ATI and Nvidia and tweak them to handle software that normally runs on mainstream server and desktop processors.

    This sounds like the GPU-based programs that OS X uses to perform Core Graphics and Core Video operations.
  • New Generation (Score:2, Interesting)

    by Ice Wewe ( 936718 )
    How much will this new generation of cards cost? I'm all for more power, because 2GHz just doesn't cut it anymore, but I'm not willing to shell out much more for the card than I would for the CPU. I paid $200 CDN for my AMD 3200+ a year ago, so that means I'd expect at least dual-core performance from a video card that costs me the same amount now. I have a 7900GTX in my system (which cost a fucking fortune!) because I like to do high-performance gaming (Transgaming, fix your damn Cedega so I can play on
  • ... down to the smaller architectural bits & pieces:

    ftp://ftp.research.microsoft.com/pub/tr/TR-2005-184.pdf [microsoft.com]

    published last year.
