Nvidia Working on a CPU+GPU Combo

Max Romantschuk writes "Nvidia is apparently working on an x86 CPU with integrated graphics. The target market seems to be OEMs, but what other prospects could a solution like this have? Given recent developments with projects like Folding@Home's GPU client, you can't help but wonder about the possibilities of a CPU with an integrated GPU. Things like video encoding and decoding, audio processing, and other applications could benefit a lot from a low-latency CPU+GPU combo. What if you could put multiple chips like these in one machine? With AMD+ATI and Intel's own integrated graphics, will basic GPU functionality eventually be integrated into all CPUs? Will dedicated graphics cards become a niche product for enthusiasts and pros, as audio cards largely already have?" The article is from the Inquirer, so a dash of salt might make this more palatable.
This discussion has been archived. No new comments can be posted.

  • by eldavojohn ( 898314 ) * <eldavojohn@gma[ ]com ['il.' in gap]> on Friday October 20, 2006 @12:26PM (#16517577) Journal
    Sounds like Nvidia is just firing back at the ATI-AMD claim from two months ago [theinquirer.net]. Oh, you say that you're integrating GPUs and CPUs? "Well, we can say that too!"

    What I don't understand is that I thought GPUs were made to offload a lot of graphics computations from the CPU. So why are we merging them again? Isn't a GPU supposed to be an auxiliary CPU only for graphics? I'm so confused.

    What I'm not confused about is the sentence from the above article:
    DAAMIT engineers will be looking to shift to 65 nanometre if not even to 45 nanometre to make such a complex chip as a CPU/GPU possible.
    Oh, I've worked with my fair share of DAAMIT engineers. They're the ones that go, "Yeah, it's pretty good but ... DAAMIT, we just need more power!"
  • by purpledinoz ( 573045 ) on Friday October 20, 2006 @12:30PM (#16517637)
    AMD and Intel have their own fabs that are at the leading edge of semiconductor technology. I highly doubt that nVidia will open up a fab for their chips. But who knows, IBM may produce their chips for them.

    I think the better option would be to have a graphics chip fit into a Socket 939 on a dual socket motherboard, with an AMD chip. It will have a high-speed link through hyper-transport, and would act just like a co-processor. I'm no chip designer, so I have no idea what the pros/cons of this are, or if it's even possible.
  • With integration.. (Score:3, Interesting)

    by Hangin10 ( 704729 ) on Friday October 20, 2006 @12:30PM (#16517643)
    With this integration, does that mean a standard for 3D? No more Nvidia/ATI drivers. The OSDEV guys would love this if it came to that. But how would this integration work? A co-processor space like MIPS? If so, does that mean graphics calculations have somewhat been moved back to the CPU? And what about the actual workings themselves? I'm guessing the registers would still be memory-mapped in some way (or I/O ports for x86, whatever).

    I'm thinking way too much. It did alleviate boredom for about a minute though...
  • by everphilski ( 877346 ) on Friday October 20, 2006 @12:32PM (#16517669) Journal
    What I don't understand is that I thought GPUs were made to offload a lot of graphics computations from the CPU. So why are we merging them again?

    A really, really fast pipe. It is a lot quicker to push stuff from CPU->GPU when they are on the same piece of silicon, versus over the PCIe or AGP bus. Speed is what matters; it doesn't look like they are moving the load one way or the other (although moving some load from the CPU to the GPU for vector-based stuff would be cool if they had a general-purpose toolkit, which I'd imagine one of these three companies will think about).
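
    A rough sense of what that fast pipe buys you, as a back-of-the-envelope C sketch; the buffer size and bandwidth figures are illustrative assumptions, not measured numbers or anything from the article:

        /* Back-of-the-envelope: time spent just copying a buffer to the GPU
         * at an assumed bus bandwidth versus an assumed on-die link.
         * All figures are illustrative assumptions. */
        #include <stdio.h>

        int main(void)
        {
            const double buffer_mb   = 64.0;  /* hypothetical per-frame data */
            const double bus_gb_s    = 4.0;   /* rough PCIe x16-class figure (assumed) */
            const double on_die_gb_s = 40.0;  /* assumed on-die/shared-cache link */

            double t_bus    = buffer_mb / 1024.0 / bus_gb_s    * 1000.0; /* ms */
            double t_on_die = buffer_mb / 1024.0 / on_die_gb_s * 1000.0; /* ms */

            printf("over the bus: %.2f ms per frame spent copying\n", t_bus);
            printf("on-die link:  %.2f ms per frame\n", t_on_die);
            return 0;
        }

    Latency (not modelled here) is arguably the bigger win for small, chatty CPU<->GPU exchanges; raw bandwidth only tells part of the story.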
  • I smell a pattern (Score:3, Interesting)

    by doti ( 966971 ) on Friday October 20, 2006 @12:32PM (#16517673) Homepage
    There seems to be a cycle of integrating and decoupling things.
    We had separate math co-processors, which were later integrated into the CPU.
    Then came the separate GPU, which will soon be integrated back in too.
  • by Ryan Amos ( 16972 ) on Friday October 20, 2006 @12:40PM (#16517769)
    It is; but if you combine them on the same die with a large shared cache and the on-chip memory controller... you can see where I'm going with this. Think of it as a separate CPU, just printed on the same silicon wafer. That means you only need one fan to cool it, and you can lose a lot of the heat-producing power-management circuitry on the video card.

    Obviously this is not going to be ideal for high end gaming rigs; but it will improve the quality of integrated video chipsets on lower end and even mid range PCs.
  • Thank MicroSoft (Score:5, Interesting)

    by powerlord ( 28156 ) on Friday October 20, 2006 @12:43PM (#16517811) Journal
    Okay, I admit, I haven't RTFA yet, but if GPUs do get folded back into CPUs, I think we need to thank MS.

    No. ... Seriously. Think for a minute.

    The major driving force right now in GPU development and purchases is games.

    The major factor that they have to contend with is DirectX.

    As of DirectX 10, a card either IS or IS NOT compliant. None of this "we are 67.3% compliant".

    This provides a known target that can be reached. I wouldn't be surprised if the DirectX 10 (video) feature set becomes synonymous with 'VGA graphics' given enough time.

    Yeah, sure, MS will come out with DX11, and those CPUs won't be compatible, but so what? If you upgrade your CPU and GPU regularly anyway to maintain the 'killer rig', why not just upgrade them together? :)
  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Friday October 20, 2006 @12:49PM (#16517871) Homepage Journal
    ATI/AMD is working on that right now. I think it comes after the next rev of hypertransport.
  • by purpledinoz ( 573045 ) on Friday October 20, 2006 @12:51PM (#16517889)
    It seems like this type of product would be marketed towards the budget segment, which really doesn't care about graphics performance. However, the huge advantage of having a GPU on the same silicon as the CPU would be a big boost in performance. The low cost advantage has already been attained with the integrated graphics chipsets (like nForce). So that would mean this might be marketed towards the high-performance crowd.

    But I highly doubt that nVidia will be able to get a CPU out that out-performs an Intel or AMD, which the high-performance junkies would want. Intel and AMD put a HUGE amount of money into research, development, and fabrication to attain their performance. This is going to be interesting to watch. Hopefully nVidia doesn't dig themselves into a hole with this attempt.
  • by Animats ( 122034 ) on Friday October 20, 2006 @01:10PM (#16518147) Homepage

    I've been expecting this for a while, ever since the transistor count of the GPU passed that of the CPU. Actually, I thought it would happen sooner. It's certainly time. Putting more transistors into a single CPU doesn't help any more, which is why we now have "multicore" machines. So it's time to put more of the computer into a single part.

    NVidia already makes the nForce line, the "everything but the CPU" part, with graphics, Ethernet, disk interface, etc. If they stick a CPU in there, they have a whole computer.

    Chip designers can license x86 implementations; they don't have to be redesigned from scratch. This isn't going to be a tough job for NVidia.

    What we're headed for is the one-chip "value PC", the one that sits on every corporate desk. That's where the best price/performance is.

  • by arth1 ( 260657 ) on Friday October 20, 2006 @01:24PM (#16518355) Homepage Journal
    At one time floating point was done in software; it still is on some CPUs.
    Then floating-point co-processors became available. For some applications you really needed faster floating point, so it was worth shelling out the big bucks for a chip to speed it up.

    Then people started using floats for the convenience, not because the accuracy was needed, and performance suffered greatly as a result. Granted, there are a lot of situations where accuracy is needed in 3D, but many of the calculations that are done could be better done in integer math and table lookups.
    Does it often matter whether a pixel has position (542,396) or (542.0518434,395.97862456)?
    Using a lookup table of twice the resolution (or two tables where there are non-square pixels) will give you enough precision for pixel-perfect placement, and can quite often speed things up remarkably. Alas, this and many other techniques have been mostly forgotten, and it's easier to leave it to the FPU or graphics card, even if you compute the same unnecessary calculations and conversions a million times.

    Fast FPUs, CPU extensions and 3D graphics routines are good, but I'm not too sure they're always used correctly. Does a new game that's fifty times as graphically advanced as a game from six years ago really need a thousand times the processing power, or is it just easier to throw hardware at the problem?
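
    A minimal C sketch of the lookup-table idea above: precompute values at twice the screen resolution and index with integers, instead of doing per-pixel float math and float->int conversion. The table size, amplitude and names are made up purely for illustration.

        #include <math.h>
        #include <stdio.h>

        #define WIDTH 640
        #define TABLE (WIDTH * 2)          /* twice the resolution, as suggested above */

        static const double PI = 3.14159265358979323846;
        static int wave_y[TABLE];          /* precomputed vertical offsets */

        static void build_table(int amplitude)
        {
            for (int i = 0; i < TABLE; i++)
                wave_y[i] = (int)lround(amplitude * sin(2.0 * PI * i / TABLE));
        }

        int main(void)
        {
            build_table(100);
            /* Per-pixel work is now an integer multiply and a table read:
               no sin(), no float->int conversion in the inner loop. */
            for (int x = 0; x < WIDTH; x += 160)
                printf("pixel x=%d -> y offset %d\n", x, wave_y[x * 2]);
            return 0;
        }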
  • by NerveGas ( 168686 ) on Friday October 20, 2006 @01:32PM (#16518455)
    I don't think the CPU->GPU pipe is much of a limitation. Going from AGP 4x to 8x gave very little speed benefit, and on PCI-E connections you have to go from the normal 16x down to 4x before you see any slowdown.

    Memory size and bandwidth are the usual limitations. Remember that if you want 2x AA, you double your memory usage, and if you want 4x AA, you quadruple it. So, that game that needed 128 megs on the video card, with 4x AA, can suddenly need 512.

    steve
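
    The arithmetic behind that claim, as a quick C sketch. It assumes brute-force supersampling, 32-bit colour plus 32-bit depth per sample, and a 1600x1200 resolution chosen for illustration; only the render targets scale this way (textures don't), so treat the 128 MB -> 512 MB figure above as a rough upper bound rather than a rule.

        #include <stdio.h>

        /* Rough buffer-size estimate: width x height x AA samples x bytes per sample. */
        static double buffer_mb(int w, int h, int aa_samples)
        {
            const int bytes_per_sample = 4 + 4;   /* 32-bit colour + 32-bit depth/stencil */
            return (double)w * h * aa_samples * bytes_per_sample / (1024.0 * 1024.0);
        }

        int main(void)
        {
            int w = 1600, h = 1200;
            printf("no AA: %.1f MB, 2x AA: %.1f MB, 4x AA: %.1f MB\n",
                   buffer_mb(w, h, 1), buffer_mb(w, h, 2), buffer_mb(w, h, 4));
            return 0;
        }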
  • by LWATCDR ( 28044 ) on Friday October 20, 2006 @01:39PM (#16518555) Homepage Journal
    Exactly.
    I was using floating point as an example.
    I don't know if Nvidia can pull this off without a partner. Building a really good x86 core isn't easy. I wonder if they might do a PPC or ARM core instead. That could make nVidia a big player in the cell phone and mobile video market. At some point there will be portable HD-DVD players.

    My crystal ball says:
    AMD will produce these products:
    1. A low-end CPU with integrated GPU for the Vista market. This will be a nice inexpensive option for home and corporate users. It might also end up in some set-top boxes. This will be the next-generation Geode.
    2. A family of medium and high-end video products that use HyperTransport to interface with the Opteron and Athlon 64 lines.

    Intel will:
    Adopt HyperTransport or reinvent it. Once we hit four cores, Intel will hit a front-side bus wall.
    Intel will produce a replacement for the Celeron that is a Core 2 Duo with integrated graphics on one die. This is to compete with AMD's new integrated solution.
    Intel will not go into the high-end graphics line.

    nVidia will fail to produce an x86+GPU to compete with AMD and Intel.
    nVidia produces an integrated ARM+GPU and dominates the embedded market. Soon every cellphone and media player has an nVidia chipset at its heart. ARM and nVidia merge.

    Of course I am just making all this up but so what, electrons are cheap.
  • by Anonymous Coward on Friday October 20, 2006 @01:40PM (#16518569)
    Except for the bottleneck of memory bandwidth. External video cards have tremendous memory bandwidth, several times that of main memory in a computer. Putting the GPU next to the CPU and having it share main memory bandwidth may not be that great for certain workloads.
  • by nine-times ( 778537 ) <nine.times@gmail.com> on Friday October 20, 2006 @01:54PM (#16518771) Homepage

    What I don't understand is that I thought GPUs were made to offload a lot of graphics computations from the CPU. So why are we merging them again? Isn't a GPU supposed to be an auxiliary CPU only for graphics? I'm so confused.

    You've already gotten some good answers here, but I'll throw in something that I haven't seen anyone else mention explicitly: GPUs aren't only being used for 3D animation anymore. GPUs started because, in order to make graphics for games better, you needed a specialized processor to handle the 3D calculations. However, GPUs have become, in some ways, more complex and powerful than the CPU, and as that has happened, other uses have been found for all that power. It turns out that there are lots of mathematical transformations that are more efficient on the specialized graphics processors, including audio/video processing and some data analysis. Some clever programmers have already started offloading some of their complex calculations from the CPU to the GPU.

    This has led many people to wonder: why don't we bring some of the GPU advancements back to the CPU somehow, so that we aren't swapping data back and forth between the CPU and GPU, between system RAM and video RAM? Apparently, it's not a stupid question.

  • do you really want to take away system RAM for video RAM?
    If using larger chips means I can get 2GB of combined RAM for the price of 1GB of system RAM and 256MB of video RAM? Absolutely.
  • by julesh ( 229690 ) on Friday October 20, 2006 @02:40PM (#16519509)
    Memory size and bandwidth are the usual limitations. Remember that if you want 2x AA, you double your memory usage, and if you want 4x AA, you quadruple it. So, that game that needed 128 megs on the video card, with 4x AA, can suddenly need 512.

    Of course, with the GPU integrated into the CPU you wouldn't need card-based RAM at all. You'd process your video in system RAM, and it would be as fast as the GPU accessing its own RAM is today (not shit like current shared-memory video cards). This gives you flexibility: if you're only using 128MB of RAM for your graphics, you can reuse the other 384MB as additional system RAM.
  • by Doctor Memory ( 6336 ) on Friday October 20, 2006 @09:59PM (#16524755)
    But I highly doubt that nVidia will be able to get a CPU out that out-performs an Intel or AMD
    Maybe they don't have to. If they can just make something that can accelerate MMX/3D Now (sort of a graphics pre-processor) and plug that into a Socket F [theregister.co.uk] slot, it'd be like a two-stage accelerator: first accelerate the calculations that produce the graphics, then accelerate the display. Maybe they could find a way to do a micro-op translation of MMX instructions into something more RISC-like, and run them on a RISC core.

    At least, I hope it's something like that, because I agree with PP: nVidia doesn't have much of a chance to beat both Intel and AMD at the x86 game.
