
AMD Fusion To Add To x86 ISA 270

Giants2.0 writes "Ars Technica has a brief article detailing some of the prospects of AMD's attempt to fuse the CPU and GPU, including the fact that AMD's Fusion will modify the x86 ISA. From the article, 'To support CPU/GPU integration at either level of complexity (i.e. the modular core level or something deeper), AMD has already stated that they'll need to add a graphics-specific extension to the x86 ISA. Indeed, a future GPU-oriented ISA extension may form part of the reason for the company's recently announced "close to metal"TM (CTM) initiative.'"
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by r_jensen11 ( 598210 ) on Monday November 20, 2006 @07:22PM (#16922674)
    I'm guessing that, as with integrated graphics, having (a) shared GPU/CPU(s) would allow having an additional video card. I seriously doubt they're going to remove the PCIe 16x slot from motherboards any time soon.
  • by hawkbug ( 94280 ) <psxNO@SPAMfimble.com> on Monday November 20, 2006 @07:22PM (#16922678) Homepage
    Yeah, I thought the same thing at first. However, I don't think we are the target market; laptops and OEMs will be. Just imagine a Mac mini-type computer from Dell or somebody. Onboard video has been around for ages, but if the board could be smaller because the GPU is on the CPU, you'd save space and power, so the machine could be smaller and theoretically cheaper.
  • Re:ISA? (Score:5, Informative)

    by MadEE ( 784327 ) on Monday November 20, 2006 @07:44PM (#16922944)
    ISA = Instruction Set Architecture
  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Monday November 20, 2006 @07:46PM (#16922978) Homepage Journal
    All they're doing is shifting the GPU (as well as the cost) to the CPU core from the motherboard.

    They're also eliminating all of the components between the CPU core and the GPU. In theory they could have an HT chip that handled all of the I/O and didn't even present a traditional system bus, if they felt they didn't need expansion slots. Thus you could eliminate the PCI/PCI-E bus and everything needed to support it; at minimum, however, you are eliminating the bus between the north bridge and the GPU and all that entails... which is a lot.

  • Re:What happened... (Score:3, Informative)

    by forkazoo ( 138186 ) <wrosecrans@@@gmail...com> on Monday November 20, 2006 @07:58PM (#16923152) Homepage
    What happened to the RISC philosophy?


    We decided we wanted cheap, fast hardware, and we decided the philosophy made more sense at the software level.
  • by timeOday ( 582209 ) on Monday November 20, 2006 @08:01PM (#16923202)
    in either case I seldom want to upgrade both at the same time.
    What if the new architecture's "graphics" pipelines aren't dedicated exclusively to graphics and can be used to speed up other tasks? Or if the total power consumption of the integrated component is 50% of separate components? Or if a non-upgradeable "PC on a chip" with no expansion bus offers great performance at 50% the cost of a traditional PC?
  • by ruiner13 ( 527499 ) on Monday November 20, 2006 @08:19PM (#16923406) Homepage
    Dead on. Think of the power savings for laptops: no need to spend energy driving a PCIe slot with a graphics chip that only gets replaced when the laptop does. It would also allow for really slick interfaces on smaller devices, such as tablets, PDAs, etc. And it would have one hell of a lot of bandwidth to the processor, including full-speed access to the computer's RAM. I don't think they'd give it dedicated memory, given die-size constraints, but it would sure beat going over a PCIe bus like today's shared-memory integrated chipsets.
  • A super-FPU (Score:4, Informative)

    by thue ( 121682 ) on Monday November 20, 2006 @08:34PM (#16923534) Homepage
    As described by Ars Technica [arstechnica.com], the new NVIDIA G80 generation of GPUs is actually a collection of general stream processors, a type of FPU. The GPU functionality is then programmed in software. The Ars Technica article points out that "These threads could do anything from graphics and physics calculations to medical imaging or data visualization." I assume ATI's GPUs are moving in the same direction.

    So what AMD is adding to x86-64 is probably not just a GPU, but a new powerful general purpose massively parallel FPU.
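    In sketch form (plain Python standing in for the hardware, and every name here is invented for illustration, not any real AMD or NVIDIA API): a stream processor is just an array of FPUs that applies whatever kernel it is handed to every element of a data stream, whether that data is pixels or particles.

```python
# Sketch of the stream-processing model: one kernel function is applied
# to every element of an input stream by an array of identical FPUs.
# All names are illustrative, not any real AMD/NVIDIA API.

def run_kernel(kernel, stream):
    """Model of a stream processor: apply `kernel` to each element.
    On real hardware the iterations run in parallel across the FPUs."""
    return [kernel(x) for x in stream]

# The same hardware path serves graphics...
def shade(pixel):
    r, g, b = pixel
    return (min(r * 1.2, 1.0), min(g * 1.2, 1.0), min(b * 1.2, 1.0))

# ...or physics: one Euler integration step per particle (pos, vel).
def step(particle, dt=0.01, g=-9.8):
    pos, vel = particle
    return (pos + vel * dt, vel + g * dt)

frame = run_kernel(shade, [(0.5, 0.4, 0.9), (1.0, 0.2, 0.0)])
bodies = run_kernel(step, [(0.0, 1.0), (10.0, -2.0)])
```

    The point is that nothing in the execution model is graphics-specific; only the kernel changes.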
  • Re:What happened... (Score:3, Informative)

    by Sj0 ( 472011 ) on Monday November 20, 2006 @08:51PM (#16923682) Journal
    That decrepit architecture is the fastest consumer hardware platform in existence.
  • Re:A super-FPU (Score:4, Informative)

    by Anonymous Coward on Monday November 20, 2006 @09:20PM (#16923890)
    Yes. I've pointed this out every time Fusion has been mentioned here: a GPU is a parallel vector processor. The resources available for rendering games can just as easily be used to accelerate scientific applications, and integrating everything onto one die will reduce power and cost. Since GPUs are already becoming more general-purpose to support more sophisticated shader programs, it makes a lot of sense to use those same resources for other applications without depending on incompatible shader architectures or PCI-Express add-on cards. It also gives AMD something to do with future die space besides building 32-core processors that will be largely underutilized by software. People should think of this as AMD taking SSE out back and just saying, "to hell with amateur hour, we're going to have some monster FP power." The end result is that you'll be able to have superawesome graphics in games as well as efficient scientific simulations.
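    To make the SSE comparison concrete (a toy illustration in plain Python; `vector_madd` and its lane widths are made up for the sketch, not real instructions): the elementwise arithmetic is identical, only the number of lanes processed per issue differs.

```python
# Illustration only: an SSE register handles 4 FP lanes per instruction,
# while a GPU-class stream unit handles a whole batch. The elementwise
# work (here a fused multiply-add, a*b + c) is the same in both cases.

def vector_madd(a, b, c, width):
    """Compute a*b + c elementwise, issued `width` lanes at a time.
    Returns the results plus the number of 'instructions' issued."""
    out, issues = [], 0
    for i in range(0, len(a), width):
        out.extend(x * y + z
                   for x, y, z in zip(a[i:i + width], b[i:i + width], c[i:i + width]))
        issues += 1
    return out, issues

n = 128
a, b, c = [1.0] * n, [2.0] * n, [3.0] * n
_, sse_issues = vector_madd(a, b, c, width=4)    # SSE-style: 32 issues
_, gpu_issues = vector_madd(a, b, c, width=128)  # stream-wide: 1 issue
```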
  • by Svartalf ( 2997 ) on Monday November 20, 2006 @09:43PM (#16924114) Homepage
    Most people associate these with their fixed functionality paths and the coding for the same.

    That'd be right for the older games or the older hardware.

    It'd not be right for the new hardware or the new games...

    The new GPUs use programmable vertex and fragment shaders, and the fixed-functionality paths go through an emulation of those paths in GLSL or HLSL. There's not much left that isn't essentially a simplified computer, much as a DSP is for signal processing; this one is simply designed for graphics and similar operations.

    The new games use their own shaders, etc., which is why GLSL is such a big deal, and why a tool to migrate HLSL over is just as big a deal.

    Who can say for certain whether this makes sense? I'm not going to venture a yes or no, because I can see ways they could pull it off cleanly and I can see ways it could let them fall flat on their face.
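    A toy sketch of the shader point (Python standing in for GLSL/HLSL; all function names here are invented): the old fixed-function path is now just one particular program among many that the programmable units can run.

```python
# The 'fixed-function pipeline' of older GPUs is now emulated as one
# shader program among many; any per-fragment function can be swapped in.
# Illustrative only -- real shaders are written in GLSL or HLSL.

def fixed_function_shade(fragment):
    """Emulates a classic fixed path: modulate texture color by a
    single diffuse light intensity."""
    color, light = fragment["tex"], fragment["light"]
    return tuple(ch * light for ch in color)

def custom_shade(fragment):
    """A game-specific shader running on the same hardware: grayscale
    via the standard luminance weights."""
    r, g, b = fragment["tex"]
    lum = 0.299 * r + 0.587 * g + 0.114 * b
    return (lum, lum, lum)

def render(fragments, shader):
    # The programmable units don't care which program they run.
    return [shader(f) for f in fragments]

frags = [{"tex": (1.0, 0.5, 0.0), "light": 0.8}]
```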
  • by Anonymous Coward on Monday November 20, 2006 @09:54PM (#16924200)
    ATI is owned by AMD.
  • by dunc78 ( 583090 ) on Tuesday November 21, 2006 @10:58AM (#16930542)
    I believe it stands for Instruction Set Architecture, with x86 being an example of an ISA, not the old bus of which you are thinking.
