AMD Fusion To Add To x86 ISA

Giants2.0 writes "Ars Technica has a brief article detailing some of the prospects of AMD's attempt to fuse the CPU and GPU, including the fact that AMD's Fusion will modify the x86 ISA. From the article, 'To support CPU/GPU integration at either level of complexity (i.e. the modular core level or something deeper), AMD has already stated that they'll need to add a graphics-specific extension to the x86 ISA. Indeed, a future GPU-oriented ISA extension may form part of the reason for the company's recently announced "close to metal"™ (CTM) initiative.'"
  • Am I the only one? (Score:5, Insightful)

    by man_of_mr_e ( 217855 ) on Monday November 20, 2006 @07:19PM (#16922638)
    Am I the only one who thinks this is a bad idea? Either I change video cards more often than CPUs or CPUs more often than graphics cards, but in either case I seldom want to upgrade both at the same time. Although I suppose I wouldn't mind a better GPU "for free" with my CPU, I suspect it won't be "for free".
    • by r_jensen11 ( 598210 ) on Monday November 20, 2006 @07:22PM (#16922674)
      I'm guessing that, as with integrated graphics, having (a) shared GPU/CPU(s) would allow having an additional video card. I seriously doubt they're going to remove the PCIe 16x slot from motherboards any time soon.
      • by mrchaotica ( 681592 ) * on Monday November 20, 2006 @07:50PM (#16923042)
        I seriously doubt they're going to remove the PCIe 16x slot from motherboards any time soon.

        What I'd like to see is for AMD to put the CPU and GPU on separate chips, but make them pin-compatible and attach them both to the hypertransport bus. How cool would it be to have a 4-socket motherboard where you could plug in 4 CPUs, 4 GPUs*, or anything in between?

        *Obviously if it were all GPUs it wouldn't be a general-purpose PC, but it would make one hell of a DSP or cluster node!

        • I like your solution (Score:3, Interesting)

          by TubeSteak ( 669689 )
          It neatly sidesteps the fact that high-end GPUs are massive compared to a CPU core

          Intel Core Duo CPU [tomshardware.com]
          ATI X1800 GPU [legitreviews.com]

          BUT, you'd also have to squeeze all the other microchips that are on a high-end graphics card board... I don't know if you'd be able to squish all that into a CPU sized area. And if you can't, you're just changing the form factor & moving the graphics card onto a faster bus.

          Anyone have a better idea how you can put quality graphics into a CPU?
          • by mabinogi ( 74033 ) on Tuesday November 21, 2006 @12:29AM (#16925256) Homepage
            Why don't you ask AMD, as they've apparently already considered it, or they wouldn't be talking about putting both the CPU and the GPU in the same package.

            Without knowing anything about it, it would seem that if CPU+GPU in the same package is possible, then CPU + GPU in two separate CPU sized packages would be possible.
        • Re: (Score:3, Interesting)

          by obeythefist ( 719316 )
          Remember the characteristics of performance GPUs at the moment - AMD's Fusion technologies are not aimed at performance gaming at all; they're squarely aimed at the embedded/budget/mobile segments of the market.

          Just look at the memory speed of your average CPU (667MHz DDR2) and your average GPU (1GHz DDR3 is good).

          Now, hooking your GPU up to the main system memory takes away a huge chunk of your performance. Saves you a lot of money though. If you're not sure, go look at how overclocking video memory aff
          • Re: (Score:3, Interesting)

            by mrchaotica ( 681592 ) *

            A couple of questions for you to consider:

            1. Which has more bandwidth, 16x PCIe or HyperTransport?
            2. What's stopping AMD from simply putting a memory controller that can talk to 1GHz DDR3 on these chips (remember, AMD CPUs have on-die memory controllers)?

            Also, keep in mind that we're talking about a technology in the early stages of development. The current performance of off-the-shelf hardware is irrelevant; the issue is whether the basic technology has the potential to be made to do it.

    • by hawkbug ( 94280 ) <psxNO@SPAMfimble.com> on Monday November 20, 2006 @07:22PM (#16922678) Homepage
      Yeah, I thought that same thing at first. However, I don't think we are the target market. I think laptops and OEMs will be the market for this. Just imagine a Mac mini-type computer from Dell or somebody. Onboard video has been around for ages, but if the board could be smaller since the GPU is on the CPU, then you'd save space and power so the machine could be smaller and theoretically cheaper.
      • by cnettel ( 836611 ) on Monday November 20, 2006 @07:34PM (#16922840)
        On the other hand, the real payoff of low latency won't surface if every operation means going through a driver, which only then realizes "oh, I have a single instruction for this thing, let's head back to the caller". This means that game writers will either still need to batch up complex operations, that the driver will then translate into batches of suitable instructions, or that we'll see games/applications with radically different codepaths. Any attempt to benefit optimally from the integrated approach will perform badly on a separate card, while code tuned to a separate card won't come close to harnessing the good points of an underpowered, but lower latency, local graphics implementation.

        It's almost like they would add L3 in a non-transparent manner, that is, expecting the developers to write the code moving suitable data into the cache and addressing that data in a radically different manner, while still also supporting the normal style of memory access, where you of course need to care about the cache, but not so explicitly. (The Cell's explicit local RAM for each unit, and the whole design of that beast, comes to mind. At least ALL PS3s will have one, but the expected target market for Fusion-only adaptations is much less clear cut.)

        And, yeah, this is quite like the situation almost ten years ago, when 3D cards were hot and new. Writing a pipeline to feed those cards was quite different from rolling your own hacked-up software renderer. (And with T&L and shaders, the move has been even greater.)

        But maybe then I'm just speculating a bit too much here. It would make sense that AMD is designing these instructions to fit into the existing driver model (or at least the DX10 one), so that you can get pretty good performance by just doing the relevant translation there.

        • Re: (Score:3, Insightful)

          by mrchaotica ( 681592 ) *

          Shouldn't this kind of thing be the compiler or library writers' problem, not the application developers'?

          • by gripen40k ( 957933 ) on Monday November 20, 2006 @08:13PM (#16923324)
            From what I understand I think the parent is right. If you use OpenGL, you don't worry about pipelining or however else the computer actually 'makes' the graphics, you just code it. Buuuutt.... I guess you would need to compile two versions of the same thing and put it on the same game disk, or figure out some kind of neat system so that translations are done in real time with hardware (much faster than the soft approach).
            • Re: (Score:2, Insightful)

              by Alien Being ( 18488 )
              "Buuuutt.... I guess you would need to compile two versions of the same thing and put it on the same game disk, or figure out some kind of neat system so that translations are done in real time with hardware (much faster than the soft approach)."

              I don't think so. The application can be linked against a single graphics library. The GL just swaps some function pointers when special hardware is available.
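              A minimal C sketch of the function-pointer swap described above. All names here (draw_triangles, has_fused_gpu, and so on) are hypothetical, and the feature check is just a stand-in, not a real API:

              ```c
              /* One graphics library, one entry point; the implementation behind the
               * function pointer is chosen once at startup. */
              #include <stdbool.h>
              #include <stdio.h>

              typedef void (*draw_fn)(const float *verts, int count);

              static void draw_software(const float *verts, int count) {
                  (void)verts;
                  printf("rasterizing %d vertices on the CPU\n", count);
              }

              static void draw_fused_gpu(const float *verts, int count) {
                  (void)verts;
                  /* A real library would emit the hypothetical GPU-extension
                   * instructions here (inline asm or intrinsics). */
                  printf("issuing %d vertices to the on-die GPU units\n", count);
              }

              static draw_fn draw_triangles = draw_software;   /* default path */

              static bool has_fused_gpu(void) {
                  return false;   /* stand-in for a CPUID-style feature check */
              }

              void graphics_init(void) {
                  if (has_fused_gpu())
                      draw_triangles = draw_fused_gpu;   /* swap the pointer once */
              }

              int main(void) {
                  float verts[9] = {0};
                  graphics_init();
                  draw_triangles(verts, 3);   /* application code never changes */
                  return 0;
              }
              ```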
            • Most people associate these with their fixed functionality paths and the coding for the same.

              That'd be right for the older games or the older hardware.

              It'd not be right for the new hardware or the new games...

              The new GPUs use programmable vertex and fragment shaders, and the fixed functionality paths go through an emulation of those paths in GLSL or HLSL. There's not much left that isn't merely a simplified computer, like a DSP is for signal processing; this is merely one that is designed for graphics and sim
          • I know that graphics gurus used to have to write a lot of inline assembly and other optimizations in order to get certain parts of their code to run a little bit faster. So, the majority of people won't need to delve into that kind of depth... but there are some who push the envelope a bit and who, traditionally, needed to get their hands dirty on the low level stuff. No idea if this is still the case... maybe someone a little more involved in pushing the envelope could answer... Mr. Carmack, are you out the
      • Yeah, except that the dies that go in a mobile and a desktop are not always different designs. Just the ones that can run at low voltage make it there. Granted, the Turions are 754-pin so they're not the same as the AM2 desktops, but it was the case back in the day (e.g. you could throw a Turion in a 754-pin desktop and use it there).

        So a "laptop chip" is not always a distinct design. Even in the Intel world with the 479-pin Core 2 Duo mobiles and 775-pin desktops they're likely very similar internally (onc
      • by ruiner13 ( 527499 ) on Monday November 20, 2006 @08:19PM (#16923406) Homepage
        Dead on. Think of the power savings for laptops, not needing to use energy to drive a PCIe slot with a graphics chip that only gets replaced when the laptop does. It would also allow for really slick interfaces on smaller devices, such as tablets, PDAs, etc. It would also have one hell of a bandwidth rate to the processor, including full-speed access to the computer's RAM. I don't think they'd give it dedicated memory due to die size, but it sure would beat going over a PCIe bus like today's shared-memory integrated chipsets.
        • Re: (Score:3, Insightful)

          by 241comp ( 535228 )
          How about the ability to have dual video cards within a laptop? The low-power on-CPU video for when you're just browsing the web or whatnot, and the latest Nvidia/ATI PCIe power-hog for when you're gaming... think of the battery life you could have when reading your email at the airport without sacrificing the ability to play your favorite games at full res.
    • Re: (Score:2, Interesting)

      by Anonymous Coward
      Well that all depends on how much performance can be gained by the integration. I seem to remember this as a weak argument for why numeric co-processors shouldn't be integrated into the 486DX...
    • Re: (Score:3, Insightful)

      Am I the only one who thinks this is a bad idea? Either I change video cards more often than CPUs or CPUs more often than graphics cards, but in either case I seldom want to upgrade both at the same time. Although I suppose I wouldn't mind a better GPU "for free" with my CPU, I suspect it won't be "for free".

      Look at it this way: nowadays you can get a computer with a video "card" onboard the motherboard, but nothing prevents you from disabling it and installing a separate video card. Most likely, that's what's gon
      • If we're talking about a substandard "onboard" kind of graphics, then I suppose it's not a big deal. You know, like Intel onboard graphics or whatever. But if we're talking about high-end video, which is what I would suppose it would be (otherwise there would be no purpose in trying to make it "ultra efficient"), then that changes far too fast.

        I don't want to pay $500 premium on my CPU for something that will be outdated in 6 months. I mean, I can sell off my old high end video card in 6 months and buy th
      • If you're a hard-core enthusiast, your upgrades may cost more.
      • And that lovely less-than-average performing integrated GPU is taking up space that could be used for another CPU core, so you're sacrificing your SMP for the sake of that.

        Unless you get two sockets, so two GPUs and two CPUs. Some bizarre new AMD crossfire monster. But the memory throughput would still be weaker than using PCIe cards. 4x4 would be better performance.

        Slightly off topic - has anyone noticed that the Vista EULA only allows two CPUs? What does this mean for Intel and AMD quad cores? Everyone has to
    • I'm betting that this will be just like how many motherboards come with 'integrated graphics' to fall back on instead of having a dedicated graphics card. All they're doing is shifting the GPU (as well as the cost) to the CPU core from the motherboard. Pricewise, this will probably suck; however, performance will most likely be greatly increased, so it will be a good thing for OEMs and those who buy such machines.
      • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Monday November 20, 2006 @07:46PM (#16922978) Homepage Journal
        All they're doing is shifting the GPU (as well as the cost) to the CPU core from the motherboard.

        They're also eliminating all of the components between the CPU core and the GPU. In theory they could have a HT chip that handled all of the I/O and didn't even present a traditional system bus, if they felt they didn't need expansion slots. Thus you could eliminate the PCI/PCI-E bus and all the things needed to support it; at minimum however you are eliminating the bus between the North Bridge and the GPU and all that entails... which is a lot.


    • Either I change video cards more often than CPUs or CPUs more often than graphics cards, but in either case I seldom want to upgrade both at the same time.

      I'm really guessing that anyone looking for high-performance 3D acceleration isn't going to be the target for this product. Video cards get a lot of their performance by using insanely fast memory. My guess is this design would use the system memory just like integrated graphics controllers do now.

      I'll venture that this GPU/CPU integration is really aimed at
      • I'll venture that this GPU/CPU integration is really aimed at the low end market to cheaply increase graphics performance for Vista. The integrated graphics chips that exist now are really just 2d chips, and have little or none of the acceleration that Vista wants for all its eye-candy.

        There's also been some speculation that GPUs will go multi-core / multi-socket because their architectures are inherently more amenable to that and also because the current crop of ~700M transistor GPUs are friggin expens

      • It's also possible that you could use both the Fusion processor and your graphics accelerated card at the same time (though I kind of doubt any game or graphics-subsystem is going to support that).

        The integrated processor would be good for any other parallel computations you happen to want to do, though, such as physics.

    • Current CPUs are 64-bit. Current GPUs go as high as 256-bit. Dunno about you, but if someone offered me a full 256-bit multi-threaded multi-core CPU, I sure as hell wouldn't be upgrading it for a few years. (Yeah, yeah, I doubt that's what AMD are planning, but it would be truly cool if they did. Or hot. Or is that the heatsink?)
      • When a CPU is described as '64-bit' it means one (and usually both) of the following things:
        1. It supports 64-bit pointers, allowing it to address 2^64 bytes of memory (typically less in reality, since no one is likely to need the full 64 bits of address space for a few years).
        2. It has 64-bit registers, so it can operate on 64-bit values (integers, usually, since every chip has supported 64-bit floating point values for years) directly, rather than splitting them into two 32-bit values.

        Most GPUs only operat
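        As a small, purely illustrative C example of the two meanings just listed (nothing AMD-specific), on a 64-bit x86 system:

        ```c
        /* (1) 64-bit pointers: the size of an address.
         * (2) 64-bit registers: 64-bit integer math in one instruction.
         * The "256-bit" figure quoted for GPUs typically refers to memory-bus
         * or wide SIMD register width, not pointer size. */
        #include <stdint.h>
        #include <stdio.h>

        int main(void) {
            printf("pointer size: %zu bits\n", 8 * sizeof(void *));   /* 64 on x86-64 */

            uint64_t big = 0xFFFFFFFFULL + 1;   /* no splitting into two 32-bit halves */
            printf("64-bit value: %llu\n", (unsigned long long)big);
            return 0;
        }
        ```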

    • Am I the only one who thinks this is a bad idea? Either I change video cards more often than CPUs or CPUs more often than graphics cards, but in either case I seldom want to upgrade both at the same time. Although I suppose I wouldn't mind a better GPU "for free" with my CPU, I suspect it won't be "for free".

      People said the same thing about memory controllers, FPUs, on-motherboard audio, etc. Nowadays, nobody would go out of their way to get a special chip without an FPU. It simply wouldn't be cheaper to avoid g

    • Re: (Score:3, Informative)

      by timeOday ( 582209 )
      in either case I seldom want to upgrade both at the same time.
      What if the new architecture's "graphics" pipelines aren't dedicated exclusively to graphics and can be used to speed up other tasks? Or if the total power consumption of the integrated component is 50% of separate components? Or if a non-upgradeable "PC on a chip" with no expansion bus offers great performance at 50% the cost of a traditional PC?
      • by rbanffy ( 584143 ) on Monday November 20, 2006 @09:53PM (#16924196) Homepage Journal
        As added benefits:

        - With a public and standard ISA, you will have Linux-compatible drivers shortly

        - With a public and standard ISA, people will have a single standard to code against. Library support should be excellent.

        - While your über-FPUs/vector accelerators/stream processors (what GPUs are made of) are not GPU-ing something, they can accelerate SSL, physics processing and any other vector-friendly activity you may have. Playing Flash content, maybe.

        - GPUs are memory-hungry. The added memory bandwidth will benefit all software, not only graphics-intensive stuff.

        - There is nothing that precludes you from using a stand-alone GPU, provided you have the drivers. But your CPU will have a couple high performance units that can give it a hand. Think asymmetric SLI.

        We will see how well the idea performs by watching the Cell processor (a CPU with 8 "GPU"s attached) in the PS3. That's roughly the same idea.

        In the meantime, I bet it will work just fine.
    • Re: (Score:3, Insightful)

      by Sparohok ( 318277 )
      The history of computer architecture is a ceaseless march toward higher integration and higher generality. That is, what was once special purpose hardware in a separate unit is now implemented in firmware as part of a more flexible, general purpose processor.

      This march is littered with those standing by the wayside saying things like, "Who needs floating point in the CPU? Leave it on a separate chip!" or "I want to be able to upgrade my CPU without buying a new memory controller!" or "If you integrate sound
    • by Rycross ( 836649 )
      I see a lot of people commenting that you'll probably be able to disable it and slide in another video card, but I'm not seeing the other obvious suggestion: your integrated gpu becomes a backup gpu for your main video card.

      See, the big trend for high-performance video nowadays is SLI, which involves sticking a bunch of identical video cards in a computer and connecting them with a bridge. How much more work would be necessary to have that backup gpu support your main card?
    • I am more concerned with being able to select the best CPU/GPU combo, and not being stuck with a great CPU and lousy video card or vice versa. And by "lousy video card", I don't really mean poor performance so much as a lack of decent drivers. This is something that can change after buying the hardware; under Linux, the quality of the Nvidia drivers (which I currently hold my nose and use; last time I tried to use an ATI driver, they hadn't yet ported it to the version of X.org that I wanted to use) varie

  • by Rosco P. Coltrane ( 209368 ) on Monday November 20, 2006 @07:20PM (#16922650)
    ISA is definitely the future to interface a CPU and a GPU, but I keep hearing about this VLB technology that's even hotter!
    • I was thinking the same thing. I can't wait for them to put a new TLA architecture into my MAC or IBM. Because ISA is totally the VLB of the 2000s. It may even outcool USB and PCI-e.
    • by MadEE ( 784327 )
      Bah You haven't seen the new ISA EXTREME!
    • by misleb ( 129952 )
      My money is on microchannel. Nobody ever lost money betting on IBM.

      -matthew
    • by LoRdTAW ( 99712 )
      VLB.... Bah. I'm sticking with EISA.
  • by User 956 ( 568564 ) on Monday November 20, 2006 @07:24PM (#16922706) Homepage
    'To support CPU/GPU integration at either level of complexity (i.e. the modular core level or something deeper), AMD has already stated that they'll need to add a graphics-specific extension to the x86 ISA.

    x86 is a great multi-purpose architecture, but the reason we're seeing greater and greater offload onto a GPU is that the GPU is great at a specific task. So my question is, how long until we see widespread PPU (Physics processing unit) usage, and beyond that, a physics extension to the x86 ISA? Or will we all just be computing on the grid at that point?
    • Re: (Score:3, Insightful)

      So my question is, how long until we see widespread PPU (Physics processing unit) usage, and beyond that, a Physics extension to the x86 ISA?

      Never, since it looks like physics can efficiently run on GPUs now.
  • Can it run Linux? OK JUST KIDDING!

    No, the real thing lingering in my mind is: why? If I want to upgrade the GPU, now I have an additional cost because I have to upgrade the CPU as well, and vice versa. So, from an economic perspective, is this the best way to go?

    • Re: (Score:2, Insightful)

      by zpapasmurf ( 761470 )
      No, but SLI isn't economical either. It's not the most bang for the buck, it's the most bang... period.
    • by Anonymous Coward on Monday November 20, 2006 @07:33PM (#16922832)
      Can it run Linux? OK JUST KIDDING!

      Why joke? It is an important question.

      All the current nvidia and ati graphics cards require proprietary, closed-source drivers.
      If the GPU is to be integrated into the CPU, either they will have to keep the new ISA a secret or we will finally start getting access to the information required to really write Free graphics drivers.
    • by nomel ( 244635 )
      Does it run linux? Well, with the processing power of some of the GPU cards used for things like SETI@Home, I imagine that's not all that far fetched. Could use the GPU as a second processor perhaps! :D
  • Advantages? (Score:3, Insightful)

    by Tainek ( 912325 ) on Monday November 20, 2006 @07:34PM (#16922836)
    A lot of people seem to be having issues working out why AMD is doing this.

    People are forgetting that they are not always the target market for computers (this isn't aimed at you if you upgrade one more than the other).

    For example, what is easier for your computer-illiterate father to do: change one slot component, or install a graphics card and a CPU?

    It also allows for even smaller form-factor computers.

    I will concede that these gains are pretty small, though; I can't see it being worth it.
    • Re: (Score:3, Interesting)

      people are forgetting that they are not always the target market for computers

      Until this was posted, I couldn't figure out the "why" to this problem. The "why" is indeed, pre-built home systems.

      Think about your average Joe that doesn't know a USB from USPS. He's not going to concern himself with more statistics than he has to when it comes to buying a computer. If he can get his CPU and his graphics rolled into one component at a lower cost than having them separate, he will without a thought. It doesn'

      • by Barny ( 103770 )
        Not to mention mobile phones, house phones, HDTVs, Blu-ray/HD-DVD players... just about anything that needs a bit of CPU grunt and must have a display output would benefit from integrating a good CPU and graphics.

        Imagine getting a nice, soft-edged, menu-effect-enabled interface on your DVD player instead of the |> || |>|> symbols flashing every time you push a button.

  • by WidescreenFreak ( 830043 ) on Monday November 20, 2006 @07:37PM (#16922872) Homepage Journal
    I guess I'm showing my age. As soon as I saw "ISA" I immediately thought, "Why the HELL are they thinking about bringing this [wikipedia.org] back?"

    :(
    • by njchick ( 611256 ) on Monday November 20, 2006 @08:30PM (#16923492) Journal
      It's not your age. It's just a problem of the current TLA namespace. Another reason to switch to XTLA (extended three letter acronyms).
      • Another reason to switch to XTLA (extended three letter acronyms).

        Nah, expanded TLAs are far more flexible. They allow you to have more than one expansion at the same time from different manufacturers and don't require you to upgrade to a new version of English to use them...

  • ... maybe they should call it 3DNow or something?
  • What happened... (Score:4, Insightful)

    by dduardo ( 592868 ) on Monday November 20, 2006 @07:40PM (#16922892)
    What happened to the RISC philosophy? Keep the hardware simple and let the compiler do the work.

    No, let's create 1000 more instructions for graphics, 1000 for physics, and 1000 more just for the heck of it.
    • by MadEE ( 784327 )
      That philosophy pretty much went out the window when we started seeing consumer 3D video accelerators hitting the market.
    • Re: (Score:3, Informative)

      by forkazoo ( 138186 )
      What happened to the RISC philosophy?


      We decided we wanted cheap, fast hardware, and we decided the philosophy made more sense at the software level.
      • by keeboo ( 724305 )
        >> What happened to the RISC philosophy?
        > We decided we wanted cheap, fast hardware, and we decided the philosophy made more sense at the software level.

        Uh... More likely you folks have decided you want to run DOS and Windows.
        Since both were (are) locked to the x86 ISA, it gave this decrepit architecture a reason to live.
        • Re: (Score:3, Informative)

          by Sj0 ( 472011 )
          That decrepit architecture is the fastest consumer hardware platform in existence.
          • by keeboo ( 724305 )
            That decrepit architecture is the fastest consumer hardware platform in existence.

            I agree.
            If Volkswagen had a massive consumer base to justify producing an old-style Beetle model capable of reaching 400km/h, you bet they would do so.

            But you see, despite such an interesting market for x86 processors, how many companies are able to invest massive amounts of money in order to make a speedy x86 processor?
            Compare this to the development money spent on the UltraSPARC T1. Geez, I bet those Chinese guys spent a ni
            • by Dunbal ( 464142 )
              an old-style Beetle model capable of reaching 400km/h

                    Suddenly I want one of those...
        • Re: (Score:3, Interesting)

          by forkazoo ( 138186 )

          Uh... More likely you folks have decided you want to run DOS and Windows.
          Since both were (are) locked to the x86 ISA, it gave this decrepit architecture a reason to live.

          Personally, I have no particular ties to x86 or DOS or Windows. I wrote my previous post from an Ubuntu box, and I am writing this one from a PPC box. But, all that "decrepitness" that makes x86 unclean is actually pretty damned useful. The wacky instruction encoding is horrible to look at, but also means that you generally see better co

    • by Nahor ( 41537 )
      What happened to the RISC philosophy?

      It's a bit more complicated than that. The x86 at its very core is RISC: it converts all the CISC instructions into RISC-like micro-instructions.

      But then, I read somewhere that Intel is now starting to do the opposite again with its "Wide Dynamic Execution": it combines several micro-instructions into one macro-instruction and also combines CISC instructions into even bigger ones.

    • by RelliK ( 4466 )
      I guess I should clarify. RISC "philosophy" lives on, but practicality has long been dead. Modern CPUs have RISC microcode with an x86 -> RISC translator in front. The translator adds a bit of overhead and uses up some silicon, but on the other hand CISC instructions are smaller, so you can fit more of them in a given amount of L1/L2 cache.
      • Re: (Score:3, Insightful)

        I guess I should clarify. RISC "philosophy" lives on, but practicality has long been dead. Modern CPUs have RISC microcode with an x86 -> RISC translator in front. The translator adds a bit of overhead and uses up some silicon, but on the other hand CISC instructions are smaller, so you can fit more of them in a given amount of L1/L2 cache.

        You are just plain wrong on many counts.

        RISC outsells CISC by a massive margin. Just look at the presence of PowerPC, MIPS and (the biggest of all) ARM in the embedde
    • Because doing graphics with a general-purpose CPU is a stupid idea. You'll always get better performance out of special-purpose hardware than general-purpose CPUs. The only reason to use these is for flexibility. When your requirements demand performance impossible with only software, you go to specialized hardware.

      Which would you prefer, a super-powerful CPU that does all the graphics and physics, but operates at 20 GHz and consumes 3000 watts, or a low-power system with separate CPU, GPU, and PPU (even
  • Realistically Intel will have to implement these instructions on their processors for any programs to use them. Macs aside, who's going to write a program that only a tiny (but growing as old PCs are replaced) percentage of the market can run?
    • by MadEE ( 784327 )
      My guess is that a lot of this will be abstracted by the video driver, so developers will not (largely) have to worry about it. Though it's not unheard of for developers to use special features of CPUs such as MMX, 3DNow!, etc., I don't see this as much different from the developer's standpoint.
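      For what it's worth, the usual way a driver or library copes with optional CPU features today is a runtime CPUID check plus a fallback path. A minimal sketch (GCC/Clang on x86 assumed; a hypothetical future graphics extension would presumably be detected the same way):

      ```c
      /* Query CPUID for optional ISA extensions and report what's available. */
      #include <cpuid.h>
      #include <stdio.h>

      int main(void) {
          unsigned int eax, ebx, ecx, edx;

          /* Standard leaf 1: SSE2 lives in bit 26 of EDX. */
          if (__get_cpuid(1, &eax, &ebx, &ecx, &edx))
              printf("SSE2:   %s\n", (edx & bit_SSE2) ? "yes" : "no");

          /* Extended leaf 0x80000001: AMD reports 3DNow! in bit 31 of EDX. */
          if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx))
              printf("3DNow!: %s\n", (edx & (1u << 31)) ? "yes" : "no");

          return 0;
      }
      ```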
  • by erice ( 13380 ) on Monday November 20, 2006 @07:57PM (#16923148) Homepage
    I'm sorry. I can't help it.

    Every time I see an article about AMD's "Fusion", I think
    "Everyone knows that the power consumption of modern cpu's has gotten out of hand. Still, you gotta give AMD credit for having the guts to propose the obvious solution: An on chip fusion reactor"
  • Close to metal... a reference to metal (e.g., metal&)?

    What kind of initiative is that?

  • by macemoneta ( 154740 ) on Monday November 20, 2006 @08:14PM (#16923336) Homepage
    If the video uses a documented instruction set, doesn't this imply that AMD/ATI CPU/GPU chips will be open source compatible? Shouldn't that be all the information needed (from the GPU perspective) to create a 3D hardware accelerated driver?
    • Even if the ISA is documented, writing an optimizing shader compiler is not easy. But my impression is that there are still some 3D-specific, fixed-function parts in the GPU that aren't triggered by instructions and thus aren't being documented, so the driver problem remains.
  • A super-FPU (Score:4, Informative)

    by thue ( 121682 ) on Monday November 20, 2006 @08:34PM (#16923534) Homepage
    As described by Ars Technica [arstechnica.com], the new NVIDIA G80 generation of GPUs is actually a collection of general stream processors, a type of FPU. The GPU functionality is then programmed in software. The article from Ars Technica points out that "These threads could do anything from graphics and physics calculations to medical imaging or data visualization." I assume the ATI GPU is moving in the same direction.

    So what AMD is adding to x86-64 is probably not just a GPU, but a new powerful general purpose massively parallel FPU.
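    To make the "stream processor" idea concrete, here is a toy C version of the pattern: the same small, independent kernel applied to every element of a large array, which is the shape of work that maps well onto many simple FPUs (the kernel itself is made up, not from the article):

    ```c
    #include <stdio.h>
    #include <stddef.h>

    /* Per-element "program": could just as well be a pixel shade, a physics
     * step, or one sample of a medical-imaging transform. */
    static float kernel(float x) {
        return 2.0f * x * x + 1.0f;
    }

    /* Every iteration is independent, so a pool of stream processors (or GPU
     * threads, or on-die units) could run them side by side. */
    static void stream_apply(const float *in, float *out, size_t n) {
        for (size_t i = 0; i < n; i++)
            out[i] = kernel(in[i]);
    }

    int main(void) {
        float in[4] = {1.0f, 2.0f, 3.0f, 4.0f}, out[4];
        stream_apply(in, out, 4);
        printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
        return 0;
    }
    ```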
    • by rbanffy ( 584143 )
      AMD is making the x86 more Cell-like.

    • Re:A super-FPU (Score:4, Informative)

      by Anonymous Coward on Monday November 20, 2006 @09:20PM (#16923890)
      Yes. I've pointed this out every time that Fusion has been mentioned here: a GPU is a parallel vector processor. The resources available for rendering games can just as easily be used to accelerate scientific applications, and integrating it into one die will reduce the power and cost requirements. Since GPUs are already becoming more general-purpose for more sophisticated shader programs, it makes a lot of sense to utilize those same resources for other applications without depending on incompatible shader architectures or PCI-Express add-on cards. It also gives AMD something to do with future die space besides creating 32-core processors that will be largely underutilized by software. People should think of this as AMD taking SSE out back and just saying, "to hell with the amateur hour, we're going to have some monster FP power." The end result is that you'll also be able to have superawesome graphics in games, as well as efficient scientific simulations.
  • This is the old integration-separation cycle continuing its course.

    When I programmed PC graphics cards, they were basically dumb devices, just converting whatever was in memory to the signals needed to drive the monitor. The CPU did all the drawing work.

    Then came the 2D accelerators, and later the 3D accelerators. The CPU sent them instructions about what to draw, and they did the drawing.

    Now, we're seeing a move back to putting graphics inside the CPU.

    The reason we're seeing these waves is, of course, that
  • The only point I can see in doing this is for more general use of the GPU than just rendering graphics. Graphics pipelines are pretty damn serial and the latency caused by putting the GPU on the north bridge is what, microseconds? Less? I don't see how this makes even the smallest difference in gameplay.

    However, if you're using the GPU for something else like massive DFTs or physics simulations or any of the other cool stuff people are coming up with these days, where memory access patterns are more rando

  • by smoker2 ( 750216 ) on Monday November 20, 2006 @09:01PM (#16923762) Homepage Journal
    Now with 5 cores! (and a separate core for those tricky areas)
  • by ravyne ( 858869 ) on Monday November 20, 2006 @09:11PM (#16923828)
    I've been following GPGPU stuff for a while now, casually at first but much more closely now with the AMD/ATI merger and the release of NVIDIA's G80 architecture. Both of these represent the first big steps toward GPGPU technology (buzzword: stream computing) becoming reality.

    The initial approach I suspect from the Fusion effort will basically be an R600-based, entry-level GPU tacked onto the CPU die. I'd imagine that this would have 4-8 quads (GPU 4-wide SIMD functional units) as standard. This would mostly be targeted at the IGP market for laptops and small and/or cheap desktops. It's likely that CTM will enable this additional horsepower to be used for general calculations, but its primary purpose will be to replace other IGP solutions.

    A little further out I see the new functional units being woven into the fabric of the CPU itself. This model is closely akin to having many 128-bit-wide extended SSE units, likely to have automatic scheduling of SIMD tasks (e.g., tell the CPU to multiply two large float arrays and the CPU balances the workload across the functional units automatically). A software driver will be able to utilize these units as a GPU, but the focus is now much more on computation. It functions as a GPU for low-end users, and supplements high-end users and gamers with discrete video cards by taking on additional calculations such as physics. Physics will benefit from being on the system bus (even though PCIe x16 is relatively fast) because the latency will be lower, and because the structures typically used to perform physics calculations reside in system memory. (A rough sketch of the array-multiply pattern appears after this comment.)

    Even further out I see computers very much becoming small grid computers unto themselves, though software will take a long time to catch up to what the hardware will be capable of. I see NVIDIA's CUDA initiative as the first step in this direction - provide a "sea of processors" view of the machine and allow tight integration into standard code without placing the burden of balancing the workload onto the programmer (which NVIDIA's CUDA C compiler attempts to do). NVIDIA's G80 architecture goes one further by migrating away from the vector-based architecture in favor of a scalar one - rather than 32 4-wide vector ALUs, they provide 128 scalar ALUs. Threading takes care of translating those n-wide calls into n separate scalar calls. Most scientific code does not lend itself well to the vector model, though over the years it has been shoe-horned into vector-centric algorithms because it was necessary to get adequate performance. Even graphics shaders are becoming less and less vector-centric, as NVIDIA research shows, because many effects (or portions thereof) are better suited to scalar code.

    Eventually, I think this model will grow such that the CPU will be replaced by, to coin a phrase, something called a CCU (Central Coordination Unit), whose only real responsibility is to route instructions to the correct execution units. Execution units will vary by type and number from system to system depending on what chips/boards you've plugged into your CCU expansion bus. The CCU will accept both scalar and broad-stroke (vector) instructions such as "multiply the elements of this array by that array and store the results in this other array," which will be broken down into individual elements and assigned to available execution units.

    All of this IMHO of course.
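    As a rough point of reference for the "multiply two large float arrays" example above, this is approximately what that looks like today on the 128-bit SSE units mentioned, hand-scheduled by the programmer; in the model described, the CPU would spread the same work across its on-die units automatically (x86 with SSE assumed, purely illustrative):

    ```c
    #include <stdio.h>
    #include <stddef.h>
    #include <xmmintrin.h>   /* SSE intrinsics */

    /* Element-wise multiply: four floats per SSE instruction, scalar tail. */
    static void mul_arrays(const float *a, const float *b, float *out, size_t n) {
        size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            __m128 va = _mm_loadu_ps(a + i);
            __m128 vb = _mm_loadu_ps(b + i);
            _mm_storeu_ps(out + i, _mm_mul_ps(va, vb));
        }
        for (; i < n; i++)
            out[i] = a[i] * b[i];
    }

    int main(void) {
        float a[6] = {1, 2, 3, 4, 5, 6}, b[6] = {2, 2, 2, 2, 2, 2}, out[6];
        mul_arrays(a, b, out, 6);
        for (int i = 0; i < 6; i++)
            printf("%.1f ", out[i]);
        printf("\n");
        return 0;
    }
    ```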
  • by tap ( 18562 ) on Monday November 20, 2006 @09:46PM (#16924144) Homepage
    It used to be that CPUs didn't come with floating point units. You had to buy a 287 or 387 to go with your 286 or 386, and they weren't cheap either. I think we paid $400 for a 20 MHz 387 back in the early 90s. Around the end of the 386s' use in desktops, competitors to Intel (Weitek, Cyrix, some others I think) had produced 387-compatible chips that were faster and cheaper than Intel's. For the 486, Intel decided to integrate the floating point unit, which made it pretty much impossible to buy someone else's chip. Sure, there were technical merits to that, but I'm sure the fact that it killed any possible competition in the FPU market wasn't lost on Intel's execs.

    Trying to bundle products is nothing new. A company that makes a whole package doesn't like it when parts of the package can be bought from other companies. Instead of just competing for the whole package (and against the few companies who can provide that), they need to compete for each individual part, and against every company that can make any one of those parts. If AMD puts the GPU in the CPU, then it's pretty hard for Nvidia to get OEMs to include their GPU. Nvidia will have to build a CPU that's as good as AMD's, and that's not going to happen any time soon.
    • Re: (Score:3, Insightful)

      by Prof.Phreak ( 584152 )
      I can imagine the real goal being DRM. With everything on the CPU and some instructions made privileged, they can force any program that wants to manipulate (decode?) video at a "fast" rate to call the OS to perform the decoding, allowing the OS to ensure the video has valid signatures before it proceeds.

      Sure folks would still be able to use libraries that run on the CPU, but if those are inefficient/slow compared to the specialized instructions... then who knows.

      Just being paranoid...
  • To me this sounds like some nForce mega-integration thing. I don't like it, because when I think about it I think about: drivers?! Maybe for the PS3 or something.
  • by Driving Vertigo ( 904993 ) on Monday November 20, 2006 @10:41PM (#16924528)
    I think people are forgetting the idea that the integrated "GPU" is going to be more like a programmable DSP than an actual graphics accelerator. We know the CPU is wonderful at computing integer math, but it's always been lacking in floating point. The modern GPU has become a floating point monster. The integration of this new element, including instructions to utilize it in native "stream processing," will likely cause a small leap forward in computing power. So, please, don't think of this as a replacement for your video card, but more like an evolution in the area of math coprocessors.
