
AMD Fusion To Add To x86 ISA

Posted by ScuttleMonkey
from the but-is-it-cold dept.
Giants2.0 writes "Ars Technica has a brief article detailing some of the prospects of AMD's attempt to fuse the CPU and GPU, including the fact that AMD's Fusion will modify the x86 ISA. From the article, 'To support CPU/GPU integration at either level of complexity (i.e. the modular core level or something deeper), AMD has already stated that they'll need to add a graphics-specific extension to the x86 ISA. Indeed, a future GPU-oriented ISA extension may form part of the reason for the company's recently announced "close to metal"™ (CTM) initiative.'"
  • by Anonymous Coward on Monday November 20, 2006 @07:24PM (#16922694)
    Well, that all depends on how much performance can be gained by the integration. I seem to remember this same weak argument being made against integrating the numeric co-processor into the 486DX...
  • by Chabil Ha' (875116) on Monday November 20, 2006 @07:28PM (#16922764)

    Can it run Linux? OK JUST KIDDING!

    No, the real thing lingering in my mind is: why? If I want to upgrade the GPU, I now have an additional cost because I have to upgrade the CPU as well, and vice versa. So, from an economic perspective, is this the best way to go?

  • Re:Advantages? (Score:3, Interesting)

    by fatty ding dong (1028344) on Monday November 20, 2006 @07:48PM (#16923012)
    people are forgetting that they are not always the target market for computers

    Until this was posted, I couldn't figure out the "why" of this. The "why" is, indeed, pre-built home systems.

    Think about your average Joe who doesn't know USB from USPS. He's not going to concern himself with more specs than he has to when it comes to buying a computer. If he can get his CPU and his graphics rolled into one component at a lower cost than having them separate, he will, without a second thought. It doesn't even come down to ease of repair, since most average users will have someone else repairing their system. It will simply come down to cost.

    I've had the same home-built box for 4 years; I just upgrade as needed (and sometimes as wanted). The average pre-built buyer doesn't care about upgrading. As soon as it breaks: fix it. If it costs too much to fix: replace it. The word "upgrade" isn't in the average computer purchaser's vocabulary. If the Fusion can last even a year and a half on average, that would be adequate, given that most pre-built buyers are conditioned to buy a new box every 2 years, give or take. For average computer buyers, it will work as long as AMD can keep the cost down. I will agree, however, that it's definitely not practical for those of us who buy new video cards almost as often as we buy milk.

  • by mrchaotica (681592) * on Monday November 20, 2006 @07:50PM (#16923042)
    I seriously doubt they're going to remove the PCIe 16x slot from motherboards any time soon.

    What I'd like to see is for AMD to put the CPU and GPU on separate chips, but make them pin-compatible and attach them both to the HyperTransport bus. How cool would it be to have a 4-socket motherboard where you could plug in 4 CPUs, 4 GPUs*, or anything in between?

    *Obviously if it were all GPUs it wouldn't be a general-purpose PC, but it would make one hell of a DSP or cluster node!

  • by macemoneta (154740) on Monday November 20, 2006 @08:14PM (#16923336) Homepage
    If the video uses a documented instruction set, doesn't this imply that AMD/ATI CPU/GPU chips will be open source compatible? Shouldn't that be all the information needed (from the GPU perspective) to create a 3D hardware accelerated driver?
  • Re:What happened... (Score:1, Interesting)

    by Anonymous Coward on Monday November 20, 2006 @09:01PM (#16923764)
    A major non-DOS, non-Windows OS just switched from a supposed RISC architecture to the x86/x64 architecture. I say supposed RISC, because there hasn't been a commercially successful RISC design in a long time, only slightly less CISCy designs. RISC suffers from the inherent flaw that the required addressing information does not scale down with instruction word size. That wasn't a problem when memory was fast enough to feed the CPU new instructions whenever it needed them. Current architectures have to go to great lengths to keep the CPU busy; there's no memory bandwidth left to waste on huge addressing information for tiny instruction units.
  • by ravyne (858869) on Monday November 20, 2006 @09:11PM (#16923828)
    I've been following GPGPU stuff for a while now, casually at first but much more closely now with the AMD/ATI merger and the release of nVidia's G80 architecture. Both of these represent the first big steps toward GPGPU technology (buzzword: stream computing) becoming reality.

    The initial approach I suspect from the Fusion effort will basically be an R600-based, entry-level GPU tacked onto the CPU die. I'd imagine that this would have 4-8 quads (a quad being a GPU's 4-wide SIMD functional unit) as standard. This would mostly be targeted at the IGP market for laptops and small and/or cheap desktops. It's likely that CTM will enable this additional horsepower to be used for general calculations, but its primary purpose will be to replace other IGP solutions.

    A little further out I see the new functional units being woven into the fabric of the CPU itself. This model looks a lot like having many 128-bit-wide extended SSE units, likely with automatic scheduling of SIMD tasks (e.g., tell the CPU to multiply two large float arrays and it balances the workload across the functional units automatically). A software driver will be able to utilize these units as a GPU, but the focus is now much more on computation. It functions as a GPU for low-end users, and supplements high-end users and gamers with discrete video cards by taking on additional calculations such as physics. Physics will benefit from being on the system bus (even though PCIe x16 is relatively fast) because the latency will be lower, and because the structures typically used to perform physics calculations reside in system memory.
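
    For concreteness, here is a minimal C sketch (illustrative only, not AMD's actual API) of the kind of work such 128-bit SIMD units already do today, multiplying two float arrays four elements at a time with SSE intrinsics; the array names and sizes are made up for the example.

    /* Minimal, illustrative sketch: multiplying two float arrays with the
     * existing 128-bit SSE intrinsics.  The Fusion-style units described
     * above would do the same kind of work, but with scheduling across the
     * functional units handled automatically.  Names and sizes are made up. */
    #include <stdio.h>
    #include <xmmintrin.h>              /* SSE intrinsics */

    #define N 16                        /* assume a multiple of 4 */

    int main(void)
    {
        float a[N], b[N], c[N];

        for (int i = 0; i < N; i++) {   /* fill with sample data */
            a[i] = (float)i;
            b[i] = 2.0f;
        }

        for (int i = 0; i < N; i += 4) {            /* 4 floats per operation */
            __m128 va = _mm_loadu_ps(&a[i]);
            __m128 vb = _mm_loadu_ps(&b[i]);
            _mm_storeu_ps(&c[i], _mm_mul_ps(va, vb));
        }

        printf("c[5] = %.1f\n", c[5]);  /* prints 10.0 */
        return 0;
    }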

    Even further out I see computers very much becoming small grid computers unto themselves, though software will take a long time to catch up to what the hardware will be capable of. I see nVidia's CUDA initiative as the first step in this direction - provide a "sea of processors" view of the machine and allow tight integration into standard code without placing the burden of balancing the workload onto the programmer (which nVidia's CUDA C compiler attempts to do). nVidia's G80 architecture goes one further by migrating away from the vector-based architecture in favor of a scalar one - rather than 32 4-wide vector ALUs, they provide 128 scalar ALUs. Threading takes care of translating those n-wide calls into n separate scalar calls. Most scientific code does not lend itself well to the vector model, though over the years it has been shoe-horned into vector-centric algorithms because it was necessary to get adequate performance. Even graphics shaders are becoming less and less vector-centric, as nVidia's research shows, because many effects (or portions thereof) are better suited to scalar code.
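
    To make the vector-versus-scalar contrast concrete, a small hypothetical C sketch (not tied to any real GPU ISA) of the same four-wide multiply written both ways; on a scalar design like the G80 described above, each iteration of the scalar loop could be issued as its own hardware thread.

    /* Hypothetical illustration: one 4-wide multiply expressed as a packed
     * vector operation and as four independent scalar operations.  A scalar
     * GPU can spread the second form across threads; the first form forces
     * the work into 4-wide SIMD whether the algorithm wants it or not. */
    #include <stdio.h>

    typedef struct { float x, y, z, w; } vec4;

    static vec4 mul_vec4(vec4 a, vec4 b)            /* vector-centric view */
    {
        vec4 r = { a.x * b.x, a.y * b.y, a.z * b.z, a.w * b.w };
        return r;
    }

    static void mul_scalar(const float *a, const float *b, float *r, int n)
    {
        for (int i = 0; i < n; i++)                 /* n independent lanes */
            r[i] = a[i] * b[i];
    }

    int main(void)
    {
        vec4 a = { 1, 2, 3, 4 }, b = { 5, 6, 7, 8 };
        float as[4] = { 1, 2, 3, 4 }, bs[4] = { 5, 6, 7, 8 }, rs[4];

        vec4 rv = mul_vec4(a, b);
        mul_scalar(as, bs, rs, 4);

        printf("%.0f %.0f\n", rv.w, rs[3]);         /* both print 32 */
        return 0;
    }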

    Eventually, I think this model will grow such that the CPU will be replaced by, to coin a phrase, a CCU (Central Coordination Unit) whose only real responsibility is to route instructions to the correct execution units. Execution units will vary by type and number from system to system depending on what chips/boards you've plugged into your CCU expansion bus. The CCU will accept both scalar and broad-stroke (vector) instructions such as "multiply the elements of this array by that array and store the results in this other array," which will be broken down into individual elements and assigned to available execution units.

    All of this IMHO of course.
  • I like your solution (Score:3, Interesting)

    by TubeSteak (669689) on Monday November 20, 2006 @09:28PM (#16923974) Journal
    It neatly sidesteps the fact that high-end GPUs are massive compared to a CPU core

    Intel Core Duo CPU [tomshardware.com]
    ATI X1800 GPU [legitreviews.com]

    BUT, you'd also have to squeeze in all the other chips that sit on a high-end graphics card's board... I don't know if you'd be able to squish all that into a CPU-sized area. And if you can't, you're just changing the form factor & moving the graphics card onto a faster bus.

    Anyone have a better idea how you can put quality graphics into a CPU?
  • by tap (18562) on Monday November 20, 2006 @09:46PM (#16924144) Homepage
    It used to be that CPUs didn't come with floating point units. You had to buy a 287 or 387 to go with your 286 or 386, and they weren't cheap either. I think we paid $400 for a 20 MHz 387 back in the early 90s. Around the end of the 386's run in desktops, competitors to Intel (Weitek, Cyrix, and some others, I think) had produced 387-compatible chips that were faster and cheaper than Intel's. For the 486, Intel decided to integrate the floating point unit, which made it pretty much impossible to buy someone else's chip. Sure, there were technical merits to that, but I'm sure the fact that it killed any possible competition in the FPU market wasn't lost on Intel's execs.

    Trying to bundle products is nothing new. A company that makes a whole package doesn't like it when parts of the package can be bought from other companies. Instead of just competing for the whole package (against the few companies who can provide that), they have to compete for each individual part, against every company that can make any one of those parts. If AMD puts the GPU in the CPU, then it's pretty hard for Nvidia to get OEMs to include their GPU. Nvidia would have to build a CPU that's as good as AMD's, and that's not going to happen any time soon.
  • by obeythefist (719316) on Monday November 20, 2006 @09:50PM (#16924170) Journal
    Remember the characteristics of performance GPUs at the moment - AMD's Fusion technologies are not aimed at performance gaming at all; they're squarely aimed at the embedded/budget/mobile segments of the market.

    Just look at the memory speed of your average CPU (667MHz DDR2) and your average GPU (1GHz DDR3 is good).

    Now, hooking your GPU up to main system memory takes away a huge chunk of your performance. It saves you a lot of money, though. If you're not sure, go look at how overclocking video memory affects performance; just about any of the THG/Anandtech/Extremetech sites will tell you.
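
    To put rough numbers on that gap, a back-of-the-envelope sketch in C; the bus widths are assumptions typical of the era (dual-channel 64-bit DDR2 for the system, a 256-bit GDDR3 bus on a high-end card), so treat the output as illustrative rather than exact.

    /* Back-of-the-envelope peak-bandwidth comparison.  Assumptions: two
     * 64-bit channels of DDR2-667 system memory vs. a 256-bit graphics
     * memory bus at an effective 1 GT/s.  Real parts vary. */
    #include <stdio.h>

    static double gb_per_s(double transfers_per_s, double bus_bits)
    {
        return transfers_per_s * (bus_bits / 8.0) / 1e9;
    }

    int main(void)
    {
        double ddr2  = 2.0 * gb_per_s(667e6, 64);   /* dual channel */
        double gddr3 = gb_per_s(1e9, 256);          /* one wide bus */

        printf("DDR2-667, dual channel : %4.1f GB/s\n", ddr2);   /* ~10.7 */
        printf("GDDR3 @ 1 GHz, 256-bit : %4.1f GB/s\n", gddr3);  /*  32.0 */
        return 0;
    }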

    This is just another method of doing your onboard graphics solution. It will perform much better than current integrated graphics solutions. But there's no way it can outperform a quad-SLI rig, even on paper.
  • by Driving Vertigo (904993) on Monday November 20, 2006 @10:41PM (#16924528)
    I think people are forgetting the idea that the integrated "GPU" is going to be more like a programmable DSP than an actual graphics accelerator. We know the CPU is wonderful at computing integer math, but it's always been lacking in floating point. The modern GPU has become a floating point monster. The integration of this new element, including instructions to utilize it for native "stream processing", will likely cause a small leap forward in computing power. So, please, don't think of this as a replacement for your video card, but more like an evolution in the area of math coprocessors.
  • Re:What happened... (Score:3, Interesting)

    by forkazoo (138186) <wrosecransNO@SPAMgmail.com> on Monday November 20, 2006 @11:27PM (#16924808) Homepage
    Uh... More likely you folks have decided you want to run DOS and Windows.
    Since both were (are) locked to the x86 ISA, it gave this decrepit architecture a reason to live.


    Personally, I have no particular ties to x86 or DOS or Windows. I wrote my previous post from an Ubuntu box, and I am writing this one from a PPC box. But, all that "decrepitness" that makes x86 unclean is actually pretty damned useful. The wacky instruction encoding is horrible to look at, but also means that you generally see better code density on x86 than you do on a more pure RISC architecture. RISC came at a time when instruction decoders were a really significant part of a CPU. Now, with increased transistor budgets, on a high performance CPU the decoder is a non-issue. Making the decoder simpler wouldn't get you any benefit, and it would reduce the effectiveness of your instruction bandwidth and instruction caches.

    I'm no x86 evangelist. My main personal server is an Alpha, I love my MIPS hardware, and I even have a VAX. But x86 hardware can't be beat for cheap speed. Not with anything currently out, anyhow. If somebody comes out with something that is elegant, cheap, and beats x86 for my typical workloads, I'll jump on board in a heartbeat. :)
  • by mrchaotica (681592) * on Tuesday November 21, 2006 @02:44AM (#16926324)

    A couple of questions for you to consider:

    1. Which has more bandwidth, 16x PCIe or HyperTransport? (A rough comparison is sketched below.)
    2. What's stopping AMD from simply putting a memory controller that can talk to 1GHz DDR3 on these chips (remember, AMD CPUs have on-die memory controllers)?

    Also, keep in mind that we're talking about a technology in the early stages of development. The current performance of off-the-shelf hardware is irrelevant; the issue is whether the basic technology has the potential to be made to do it.
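
    A rough answer to question 1 above, sketched in C; the figures assume first-generation PCIe (2.5 GT/s per lane, 8b/10b encoding) and a 16-bit HyperTransport link, and other link widths and clock speeds were shipping, so the numbers are illustrative rather than definitive.

    /* Illustrative peak bandwidth, per direction, for question 1 above.
     * Assumptions: PCIe 1.x x16 (2.5 GT/s/lane, 8b/10b encoding) vs. a
     * 16-bit HyperTransport link, double data rate, at two clock speeds. */
    #include <stdio.h>

    int main(void)
    {
        double pcie_x16 = 2.5e9 * 16 * 0.8 / 8 / 1e9;   /* payload bytes/s */
        double ht_1ghz  = 1.0e9 * 2 * 2 / 1e9;          /* 1 GHz, 16-bit   */
        double ht_26ghz = 2.6e9 * 2 * 2 / 1e9;          /* HT 3.0 ceiling  */

        printf("PCIe 1.x x16               : %4.1f GB/s per direction\n", pcie_x16);
        printf("HyperTransport 16b @ 1.0GHz: %4.1f GB/s per direction\n", ht_1ghz);
        printf("HyperTransport 16b @ 2.6GHz: %4.1f GB/s per direction\n", ht_26ghz);
        return 0;
    }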

