AMD Betting Future On the GPGPU

arcticstoat writes with an interview in bit-tech: "AMD's manager of Fusion software marketing, Terry Makedon, revealed that 'AMD as a company made a very, very big bet when it purchased ATI: that things like OpenCL will succeed. I mean, we're betting everything on it.' He also added: 'I'll give you the fact that we don't have any major applications at this point that are going to revolutionize the industry and make people think "oh, I must have this," granted, but we're working very hard on it. Like I said, it's a big bet for us, and it's a bet that we're certain about.'"
  • AMD lost that bet (Score:5, Informative)

    by blair1q ( 305137 ) on Tuesday May 31, 2011 @03:22PM (#36300506) Journal

    AMD famously overpaid by 250% for ATI, then delayed any Fusion products for two years, then wrote off all of the overpayment (which they had been required to carry as a "goodwill" asset). At that point, they lost the money.

    Luckily for them, ATI was still good at its job, and kept up with nVidia in video HW, so AMD owned what ATI was, and no more. But their gamble on the synergy was a total bust. It cracked their financial structure and forced them to sell off their manufacturing plants, which drastically reduced their possible profitability.

    What they have finally produced, years later, is a synergy, but of subunits that are substandard. This is not AMD's best CPU and ATI's best GPU melded into one delicious silicon-peanut-butter cup of awesomeness. It's still very, very easy to beat the performance of the combination with discrete parts.

    And instead of leading the industry into this brave new sector, AMD gave its competition a massive head-start. So it's behind on GPGPU, and will probably never get the lead back. Not that its marketing department has a right to admit that.

  • by JBMcB ( 73720 ) on Tuesday May 31, 2011 @03:26PM (#36300556)

    Mathematica 8 can use OpenCL (and CUDA). I think the new MATLAB can, too.

  • by TimothyDavis ( 1124707 ) <tumuchspaam@hotmail.com> on Tuesday May 31, 2011 @04:06PM (#36301008)
    If you look at the AMD Fusion design, they have already addressed this. The memory will be shared between CPU and GPU, and the 'transfer' from CPU to GPU will simply be a pointer exchange. In fact, Fusion is doing away with the concept of the GPU as a discrete device - the GPU that is presented to the OS is really only a virtual device wrapping a bunch of the vector processing units.
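
    A minimal host-side sketch of that pointer-exchange idea, assuming a stock OpenCL 1.1 C API (CL_MEM_USE_HOST_PTR; error handling trimmed). Whether the runtime genuinely shares the pages or quietly keeps a shadow copy is implementation-dependent - on a shared-memory part it can avoid the copy, on a discrete card it typically won't:

    /* build: cc zerocopy_sketch.c -lOpenCL */
    #define _POSIX_C_SOURCE 200112L
    #include <stdio.h>
    #include <stdlib.h>
    #include <CL/cl.h>

    int main(void) {
        cl_platform_id plat;
        cl_device_id dev;
        cl_int err;
        clGetPlatformIDs(1, &plat, NULL);
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
        cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, &err);

        /* Page-aligned host memory improves the odds the runtime shares
         * the pages with the GPU instead of shadow-copying them. */
        size_t n = 1 << 20;
        float *host = NULL;
        posix_memalign((void **)&host, 4096, n * sizeof(float));
        for (size_t i = 0; i < n; i++) host[i] = (float)i;

        /* CL_MEM_USE_HOST_PTR wraps the existing allocation: conceptually
         * a pointer exchange rather than a transfer. The spec still lets
         * an implementation cache a copy on a discrete card. */
        cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_USE_HOST_PTR,
                                    n * sizeof(float), host, &err);

        /* Map/unmap is the portable way to hand ownership of the pages
         * back and forth between host code and kernels. */
        float *view = (float *)clEnqueueMapBuffer(q, buf, CL_TRUE, CL_MAP_READ,
                                                  0, n * sizeof(float),
                                                  0, NULL, NULL, &err);
        printf("seen through the mapping: %f\n", view[42]);
        clEnqueueUnmapMemObject(q, buf, view, 0, NULL, NULL);

        clReleaseMemObject(buf);
        clReleaseCommandQueue(q);
        clReleaseContext(ctx);
        free(host);
        return 0;
    }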
  • by Sir_Sri ( 199544 ) on Tuesday May 31, 2011 @04:09PM (#36301048)

    I don't think he means OpenCL specifically. OpenCL is a tool that connects you to GPU hardware. GPU hardware is designed for a different problem than the CPU, so the two have different performance characteristics. In the not-too-distant future, heterogeneous multi-core chips that handle both the CPU and GPU workloads of today will be mainstream, and there will be general-purpose computing tools (of which OpenCL, along with CUDA, is a relatively early generation) to access that hardware.

    While I don't agree that this is the entire future, it's certainly part of it. Right now you can have 1200 mm^2 of top-tier parts in a computer, roughly split half and half between CPU and GPU - but not every machine needs that, and it's hard to cool much more than that. As long as there's a market that demands maximum performance and uses all of that silicon, CPU/GPU integration won't be total. But, especially in mobile and non-top-end machines, there will be 'enough' performance in 600-800 mm^2 of space, which can be a single IC package combining CPU and GPU.

    It is, I suppose, a bit like the integration of the math co-processor into the CPU two decades ago. GPUs are basically just really big co-processors, and eventually all that streaming, floating-point math will belong in the CPU. That transition doesn't even have to be painful: a 'cheap' Fusion product could be 4 CPU cores and 4 GPU cores, whereas an expensive product might be an 8-core CPU in one package and 8 cores of GPU power on a separate card, but otherwise the same parts (with the same programming API). Unified memory will probably obsolete the dedicated GPU eventually, but GPU RAM is designed for streaming, in-order operations, whereas CPU RAM is built for out-of-order, random block access. RAM that handles either pattern equally well (or well enough) would solve that problem; until then, architecturally I would have GPU RAM hold a *copy* of the piece of memory that the GPU portion of a Fusion part talks to.

    As for what the huge market is: OpenCL gives you easier access to the whole rendering subsystem for non-rendering purposes. So your 'killer' apps are laptops, tablets, mobile phones, low-powered desktops - really, anything anyone does any sort of 3D on (games, Windows 7, that sort of thing), so basically everything, all your drawing tools.

    The strategy is poorly articulated with OpenCL, but I see where they're going. I'm not sure what Intel is doing in this direction, though, which will probably be the deciding factor; and nVidia, rather than positioning for a buyout (by Intel), seems ready to jump to SoC/ARM-type products. Intel doesn't seem to have the GPU know-how to make a good combined product, but they can certainly try to fix that.
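
    A minimal sketch of the 'same programming API' point above, assuming a stock OpenCL 1.1-era C host (the array size and kernel are illustrative; error handling trimmed). The identical kernel source runs whether the device behind the queue is a discrete card, a Fusion-style integrated GPU, or the CPU itself:

    /* build: cc kernel_sketch.c -lOpenCL */
    #include <stdio.h>
    #include <CL/cl.h>

    static const char *src =
        "__kernel void scale(__global float *x, float a) {\n"
        "    size_t i = get_global_id(0);\n"
        "    x[i] = a * x[i];\n"
        "}\n";

    int main(void) {
        cl_platform_id plat;
        cl_device_id dev;
        cl_int err;
        clGetPlatformIDs(1, &plat, NULL);
        /* Swap CL_DEVICE_TYPE_GPU for CL_DEVICE_TYPE_CPU and nothing else
         * changes - that is the portability OpenCL is selling. */
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
        cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, &err);

        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
        clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "scale", &err);

        float data[1024];
        for (int i = 0; i < 1024; i++) data[i] = (float)i;
        cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                    sizeof(data), data, &err);

        float a = 2.0f;
        clSetKernelArg(k, 0, sizeof(cl_mem), &buf);
        clSetKernelArg(k, 1, sizeof(float), &a);

        /* One work-item per element: the data-parallel model GPUs are
         * built for, expressed through a vendor-neutral API. */
        size_t global = 1024;
        clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);

        printf("data[3] = %f (expect 6.0)\n", data[3]);

        clReleaseMemObject(buf);
        clReleaseKernel(k);
        clReleaseProgram(prog);
        clReleaseCommandQueue(q);
        clReleaseContext(ctx);
        return 0;
    }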
