The Outlook On AMD's Fusion Plans 122

PreacherTom writes "Now that AMD's acquisition of ATI is complete, what do the cards hold for the parent company? According to most experts, it's a promising outlook for AMD. One of the brightest stars in AMD's future could be the Fusion program, which will 'fuse' AMD's CPUs with ATI's GPUs (graphics processing units) in a single, unified processor. The product is expected to debut in late 2007 or early 2008. Fusion brings hopes of energy efficiency, with the CPU and GPU residing on a single chip. Fusion chips could also ease the impact on users who plan to use Windows Vista with Aero, an advanced interface that will only run on computers that can handle a heavy graphics load. Lastly, the tight architecture provided by Fusion could lead to a new set of small, compelling devices that can handle rich media."
This discussion has been archived. No new comments can be posted.

  • power efficiency?? (Score:2, Interesting)

    by Klintus Fang ( 988910 ) on Thursday November 16, 2006 @05:00PM (#16875496)
    Yeah, I'm also wondering how putting the two hottest components on the motherboard (the GPU and CPU) into the same package is a power savings... :-/ Maybe on the low end of the market, where the performance of the GPU is irrelevant, but for those who actually care about GPU performance, putting the two most power-hungry and memory-bandwidth-hungry components together doesn't seem like a good idea.
  • by scuba_steve_1 ( 849912 ) on Thursday November 16, 2006 @05:01PM (#16875536)
    ...which may explain how AMD has managed to keep their costs low over the years. Word of mouth is compelling...even to the point that many folks that I know are now biased against Intel...even though we are at a unique point where AMD's advantage has eroded...at least for the moment.
  • Re:Airport fun (Score:3, Interesting)

    by PFI_Optix ( 936301 ) on Thursday November 16, 2006 @05:10PM (#16875670) Journal
    So...there are only servers and enthusiasts in the market?

    Wow. And here I was thinking there was this vast market for things called "workstations" where businesses didn't need high-end video cards and home systems where users didn't require the best 3D accelerators on the market. Shows what I know.

    Even most enthusiasts only replace their video cards every 12-18 months. If a CPU/GPU combo were in the same price range as a current video card (not far-fetched), then there'd be no reason not to use a combo chip.

    But hey, feel free to waste hundreds of dollars just because you think you know how things will work. Don't let the facts get in the way of your fanaticism.

  • but... (Score:2, Interesting)

    by Hangin10 ( 704729 ) on Thursday November 16, 2006 @05:14PM (#16875768)
    Does this mean ATI will be opening up its GPU programming specs, or merely what is being stated (that the graphics chip and CPU will share a die)?
  • Maybe... (Score:5, Interesting)

    by MobyDisk ( 75490 ) on Thursday November 16, 2006 @05:20PM (#16875892) Homepage
    The article says that this might be attractive to businesses: I can see that, since most businesses don't care about graphics. This is similar to businesses buying computers with cheap on-board video. But that means they will be profiting on the low end. It seems like this is more of a boon for laptops and consoles: currently, laptops with decent video cards are expensive and power-hungry. Same with consoles. But for mid-range and high-end systems, there must be a modular bus connecting these two parts, since they are likely to evolve at different rates and to be swapped out individually.
  • Linux Drivers (Score:3, Interesting)

    by turgid ( 580780 ) on Thursday November 16, 2006 @05:24PM (#16875944) Journal

    I've been an nVidia advocate since 1999 when I bought a TNT2 Ultra for playing Quake III Arena under Linux on my (then) K6-2 400.

    I'm on my 4th nVidia graphics card, and I have 6 machines, all running Linux. One is a 10-year-old UltraSPARC, one has an ATI card.

    Despite slashbot rantings about the closed-source nVidia drivers, and despite my motley collection of Frankenstein hardware, I've never had a problem with the nVidia stuff. The ATI stuff is junk. The drivers (open source) are pathetic, the display is snowy, and the performance is rubbish.

    I hope AMD do something about the Linux driver situation.

    My next machine will be another AMD, this time with dual dual-core processors and I'll be doing my own Slackware port, but I'll be buying an nVidia graphics card.

  • by Anonymous Coward on Thursday November 16, 2006 @05:33PM (#16876108)
    RAM. Look at the Xenos GPU (Xbox 360) or the PS3 GPU. Both have the RAM soldered directly over the GPU package. They can't be in the same chip because of the different fabrication processes, but they can be glued together for higher overall speed. But upgrades will suck...
  • Re:this will fail (Score:3, Interesting)

    by NSIM ( 953498 ) on Thursday November 16, 2006 @05:42PM (#16876280)
    Do you really want to have to replace an entire system when you upgrade? You buy a Dell, a new game comes out 6 months later, and your system can't play it reasonably well. So then you either a) buy a new system or b) put in a video card and not use the one on the proc.

    Integrating the GPU with the CPU will be about driving down cost and power consumption, which is not usually a high priority for folks who want to run the latest and greatest games and get all the shiniest graphics. So I'd be very surprised if this is intended to hit that part of the market; more likely it's designed to address the same market segment that Intel hits with graphics embedded in the CPU's supporting chipset.

    That said, having the CPU & GPU combined (from the point of view of register and memory access, etc.) might open up some interesting new possibilities for using the power of the GPU for certain non-graphics functions.

    Back in the day at Intergraph we had a graphics processor that could be combined with a very expensive (and, for the time, powerful) dedicated floating-point array processor. To demonstrate the power of that add-on, somebody hand-coded an implementation of the Mandelbrot fractal algorithm on it, and it was blisteringly fast. I can imagine similar highly parallelized algorithms doing very well on a GPU/CPU combo.
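    As a rough illustration of why Mandelbrot rewards that kind of hardware, here is a minimal sketch in plain C (not the Intergraph code, just a toy version written for this point): every pixel is computed independently, so the loop nest can be spread across however many execution units are available.

        /* Toy Mandelbrot renderer: each pixel is independent, which is why
         * array processors and GPUs handle this kind of workload so well. */
        #include <stdio.h>

        #define W 80
        #define H 40
        #define MAX_ITER 256

        static int mandel_iters(double cr, double ci)
        {
            double zr = 0.0, zi = 0.0;
            int n;
            for (n = 0; n < MAX_ITER && zr * zr + zi * zi <= 4.0; n++) {
                double t = zr * zr - zi * zi + cr;
                zi = 2.0 * zr * zi + ci;
                zr = t;
            }
            return n;
        }

        int main(void)
        {
            static char img[H][W + 1];
            int x, y;

            /* Each (x, y) is independent, so this nest parallelizes trivially,
             * e.g. with an OpenMP "parallel for" or one GPU thread per pixel. */
            for (y = 0; y < H; y++) {
                for (x = 0; x < W; x++) {
                    int n = mandel_iters(-2.5 + 3.5 * x / W, -1.25 + 2.5 * y / H);
                    img[y][x] = (n == MAX_ITER) ? '#' : ' ';
                }
                img[y][W] = '\0';
            }
            for (y = 0; y < H; y++)
                puts(img[y]);
            return 0;
        }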
  • by hairpinblue ( 1026846 ) on Thursday November 16, 2006 @05:48PM (#16876378) Journal
    I can appreciate that an integrated CPU/GPU combination may have advantages in many arenas. It feels like a Bad Idea, though, in the same way that televisions with integrated VHS players were a bad idea, and all-in-one stereo systems didn't become a Good Idea until they came down both in price and physical size. In general I'm not comfortable with someone else bundling my technology for me. I'll be more than happy to accept the cost of keeping up to date with researching the individual components, and accept the small performance drawback of the data bus between processor, memory, and the video card. In some ways it feels like a cog in the wheel of advancing TC and DRM. In other ways it's really inevitable, since video display is such an enormously processor-intensive task.

    The computer, for the majority of the population, has become an entertainment device similar to what the television and radio were in progressively earlier generations. Even with the push to F/OSS taking off and catching the attention of more and more consumers, the end tasks are solidifying and standardizing for the vast majority of the population. Logically speaking, why wouldn't the industry begin to solidify and standardize more and more of the components within the product? Look for the reintroduction of integrated audio chipsets, and maybe even their integration into the processor core, for a single unified network entertainment box (SuneB) rather than a real computer. Then where will the F/OSS movement go? By the time the SuneB hits we'll be back to OS-on-a-chip (much like the Amiga had 20 years ago, or TV set-top network boxes, which the Amiga became in Escom's and later QVD's hands, have, or DVD players have). Technology really seems more and more cyclical every time I see it evolving and progressing.

    As a hobbyist, though, this sort of move makes me uncomfortable and maybe even a little bit sad. I've always liked the puzzles that computers bring: programming, building, troubleshooting, compiling, security monitoring, maintaining, and even the jargon and zealotry that comes with being a computer enthusiast. When computers have become a standard black box commodity what will be the next hobby puzzle to hold my interest?

    Oh. And yes. I'd like to claim intellectual property on the SuneB. Sure, the industry will call it something else and all the patents will have a different name, but at least, 10 years from now when a SuneB clone company is the driving force on the stock market, I can sit back and think to myself,"Somewhere on Slashdot there's a post proving that I should be a billionaire rather than a corporate wage-slave."
  • GPU or GPGPU? (Score:2, Interesting)

    by tbcpp ( 797625 ) on Thursday November 16, 2006 @05:50PM (#16876426)
    From what I understand (and I could be wrong), AMD/ATI is aiming more at the GPGPU market. So we're talking more of a souped-up AltiVec processor in the CPU instead of a full-blown GPU. It sounds like they're simply adding a 64-pipeline vector processor to the existing x86-64 core. I'm not sure if this is a bad idea.

    I remember programming assembly graphics code in BASIC back in the day. You would set the VGA card to mode 13h and then write to...what was it now...0xa00? That's probably wrong. Anyway, whatever you wrote to that portion of memory would go to the screen.

    If you had a huge SIMD co-processor, would it not be possible to rival modern GPUs with this model? Not to mention being able to do some cool stuff like having a video input card dump data directly into that portion of memory. So you could have video in with the CPU at complete idle.
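    For the record, the mode 13h framebuffer sits at segment 0xA000 (physical address 0xA0000): 320x200, one palette-index byte per pixel. A minimal sketch of that old technique, assuming a real-mode DOS compiler in the Borland/Turbo C mould (it won't build on a modern OS; it only shows the "write to memory and it appears on screen" model described above):

        /* Sketch of direct VGA mode 13h framebuffer access under real-mode DOS
         * (Borland/Turbo C style): 320x200, one byte per pixel, framebuffer at
         * segment 0xA000 (physical 0xA0000). */
        #include <dos.h>

        static unsigned char far *vga = (unsigned char far *) MK_FP(0xA000, 0);

        static void set_mode_13h(void)
        {
            union REGS r;
            r.x.ax = 0x0013;           /* INT 10h, AX=0013h: 320x200, 256 colors */
            int86(0x10, &r, &r);
        }

        static void put_pixel(int x, int y, unsigned char color)
        {
            vga[y * 320 + x] = color;  /* whatever lands here shows up on screen */
        }

        int main(void)
        {
            int x;
            set_mode_13h();
            for (x = 0; x < 320; x++)
                put_pixel(x, 100, 15); /* draw a white horizontal line */
            /* a real program would wait for a keypress and restore text mode 03h */
            return 0;
        }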
  • Not what you think (Score:2, Interesting)

    by Anonymous Coward on Thursday November 16, 2006 @05:53PM (#16876468)
    Many people are reading this as an integrated GPU and CPU. I don't see it that way. I see it as adding a generic vector processor to the CPU, similar to the Cell processor and similar to future plans Intel has described. Vector processors are similar to SSE, 3DNow!, etc. They are SIMD processors that can execute very regular mathematical computations (video and audio encoding/decoding) VERY quickly, but aren't much good for generic algorithms.
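    To make the SSE comparison concrete, here is a minimal sketch using the SSE intrinsics from <xmmintrin.h>: each instruction operates on four packed floats at once, which is exactly the "very regular math, very fast" pattern described above; a wider on-die vector unit would simply extend the same idea.

        /* Minimal SSE example: add two float arrays four lanes at a time.
         * Same SIMD idea as 3DNow!/AltiVec, expressed with Intel's intrinsics. */
        #include <stdio.h>
        #include <xmmintrin.h>

        int main(void)
        {
            float a[8]   = {1, 2, 3, 4, 5, 6, 7, 8};
            float b[8]   = {10, 20, 30, 40, 50, 60, 70, 80};
            float out[8];
            int i;

            for (i = 0; i < 8; i += 4) {
                __m128 va = _mm_loadu_ps(&a[i]);              /* load 4 floats */
                __m128 vb = _mm_loadu_ps(&b[i]);
                _mm_storeu_ps(&out[i], _mm_add_ps(va, vb));   /* 4 adds per op */
            }

            for (i = 0; i < 8; i++)
                printf("%g ", out[i]);
            printf("\n");
            return 0;
        }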
  • by Vellmont ( 569020 ) on Thursday November 16, 2006 @05:57PM (#16876544) Homepage
    The people claiming this will fail all seem to miss the market this is aimed at. It's obviously not intended to compete with high-end, or even middle-of-the-road, graphics processors. Those boards require gobs of VERY fast video memory. My guess is this thing is aimed at the space between on-board video (which are really just 2-D chips) and a full 3-D graphics card. Anyone buying this has no intention of buying a super-duper graphics card anyway.

    With Vista coming out soon, PC makers are going to want a low-cost 3-D accelerated solution to be able to run some (or maybe all) of the eye candy that comes with Vista.
  • Re:Stock tip ... (Score:3, Interesting)

    by Michael Woodhams ( 112247 ) on Thursday November 16, 2006 @06:00PM (#16876582) Journal
    Yeah, but the heatsink for the processor/graphics card combo system will be righteous.

    "Righteous" = "big"?

    Intel was making 130W CPUs until AMD got better performance with 60W (although Intel has now overtaken AMD on this). I've got a 40W GPU which is as powerful as a 100W GPU from a couple of years ago.

    A state-of-the-art CPU plus a mid-to-high-range GPU today could come in at around 130W. The 130W CPU heatsink problem is solved (for noisy values of "solved").

    Also, it is much easier to deal with a big heatsink on the motherboard than on a video card - the size and weight are much less restricted.

    Hm, perhaps if AMD starts making 100+W Fusion chips, they'll start supporting Intel's BTX form factors (which were largely designed to improve cooling). As a silent-computing nut, I think this would be a Good Thing.
  • Re:Heat??? (Score:5, Interesting)

    by Pulzar ( 81031 ) on Thursday November 16, 2006 @07:58PM (#16878096)
    Although CPUs have gotten better in the past year, GPUs (particularly ATI's) still keep outdoing each other in just how much power they can suck.


    You're talking about the high-end "do everything you can" GPUs... ATI is dominating the (discrete) mobile GPU industry because their mobile GPUs use so little power. Integrating (well) one of those into a CPU should still result in a low-power chip.
  • by modeless ( 978411 ) on Thursday November 16, 2006 @09:27PM (#16879024) Journal
    I highly doubt AMD is planning on using combined CPU/GPU solutions on their mainstream desktop parts, and they are absolutely not going to do so for server parts

    I think they are, and I think it's the right choice. The GPU that will be integrated will not be today's GPU, but a much more general processor. Look at NVidia's G80 for the beginning of this trend; they're adding non-graphics-oriented features like integer math, bitwise operations, and soon double-precision floating point. G80 has 128 (!) fully general-purpose SISD (not SIMD) cores, and soon with their CUDA API you will be able to run C code on them directly instead of hacking it up through DirectX or OpenGL.

    AMD's Fusion will likely look a lot more like a Cell processor than, say, Opteron + X1900 on the same die. ATI is very serious about doing more than graphics: look at their CTM initiative (now in closed beta); they are doing the previously unthinkable and publishing the *machine language* for their shader engines! They want businesses to adopt this in a big way. And it makes a lot of sense: with a GPU this close to the CPU, you can start accelerating tons of things, from scientific calculations to SQL queries. Basically *anything* that is parallelizable can benefit.

    I see this as nothing less than the future of desktop processors. One or two x86 cores for legacy code, and literally hundreds of simpler cores for sheer calculation power. Forget about games, this is much bigger than that. These chips will do things that are simply impossible for today's processors. AMD and Intel should both be jumping to implement this new paradigm, because it sets the stage for a whole new round of increasing performance and hardware upgrades. The next few years will be an exciting time for the processor business.
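    To put the "anything parallelizable" claim in concrete terms, here is the shape of workload that maps well onto hundreds of simple cores: a filter-and-aggregate pass, essentially a SQL-style WHERE plus SUM, written in plain C purely as an illustration. A CUDA- or CTM-style API would run the per-element test on one thread per element and combine the partial sums in a reduction step.

        /* Plain-C illustration of a data-parallel, SQL-ish aggregate:
         * SELECT SUM(amount) WHERE amount > threshold.
         * On a GPGPU, each element test is independent and the additions
         * become a parallel reduction; here it is a single loop. */
        #include <stdio.h>
        #include <stddef.h>

        static double filtered_sum(const double *amount, size_t n, double threshold)
        {
            double total = 0.0;
            size_t i;
            for (i = 0; i < n; i++) {       /* independent per-element test */
                if (amount[i] > threshold)
                    total += amount[i];     /* reduction step on a GPU */
            }
            return total;
        }

        int main(void)
        {
            double data[] = {12.0, 3.5, 99.0, 42.0, 7.25};
            printf("%.2f\n", filtered_sum(data, sizeof data / sizeof data[0], 10.0));
            return 0;
        }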
  • Re:GPU or GPGPU? (Score:1, Interesting)

    by Anonymous Coward on Thursday November 16, 2006 @10:21PM (#16879430)
    Exactly. This could be the move that makes physics acceleration ubiquitous. Sure, el cheapo systems can be built that use the on-package or on-die GPU capabilities for low-end graphics. But a higher-end (and gamer-relevant) use would be to have the integrated "GPU" doing physics calculations while an add-in board continues handling the traditional GPU tasks. This would be *far* superior to the current add-in board approach, because the tight CPU-physics integration would allow for some damned sweet gameplay enhancements (as opposed to the current "make it prettier" effects).
  • Re:Airport fun (Score:3, Interesting)

    by BigFootApe ( 264256 ) on Thursday November 16, 2006 @10:32PM (#16879508)
    I believe they would simply re-purpose the onboard shaders for general computing.
