Nvidia Working on a CPU+GPU Combo
Max Romantschuk writes "Nvidia is apparently working on an x86 CPU with integrated graphics. The target market seems to be OEMs, but what other prospects could a solution like this have? Given recent developments with projects like Folding@Home's GPU client, you can't help but wonder about the possibilities of a CPU with an integrated GPU. Things like video encoding and decoding, audio processing, and other applications could benefit a lot from a low-latency CPU+GPU combo. What if you could put multiple chips like these in one machine? With AMD+ATI and Intel's own integrated graphics, will basic GPU functionality eventually be integrated into all CPUs? Will dedicated graphics cards become a niche product for enthusiasts and pros, as audio cards already largely have?" The article is from the Inquirer, so a dash of salt might make this more palatable.
Heard This One Before (Score:5, Interesting)
What I don't understand is that I thought GPUs were made to offload a lot of graphics computations from the CPU. So why are we merging them again? Isn't a GPU supposed to be an auxiliary CPU only for graphics? I'm so confused.
What I'm not confused about is the sentence from the above article: Oh, I've worked with my fair share of DAAMIT engineers. They're the ones that go, "Yeah, it's pretty good but
nVidia doesn't have a good chance with this. (Score:3, Interesting)
I think the better option would be to have a graphics chip fit into a Socket 939 on a dual socket motherboard, with an AMD chip. It will have a high-speed link through hyper-transport, and would act just like a co-processor. I'm no chip designer, so I have no idea what the pros/cons of this are, or if it's even possible.
With integration.. (Score:3, Interesting)
I'm thinking way too much. It did alleviate boredom for about a minute though...
Re:Heard This One Before (Score:5, Interesting)
The win is a really, really fast pipe. It is a lot quicker to push stuff from CPU->GPU when they are on the same piece of silicon, versus the PCIe or AGP bus. Speed is what matters; it doesn't look like they are moving the load one way or the other (although moving some load from CPU->GPU for vector-based stuff would be cool if they had a general-purpose toolkit, which I'd imagine one of these three companies will think about).
I smell a pattern (Score:3, Interesting)
We had separate math co-processors that were later integrated into the CPU.
Then came the separate GPU, which will soon be integrated back too.
Re:Heard This One Before (Score:3, Interesting)
Obviously this is not going to be ideal for high-end gaming rigs, but it will improve the quality of integrated video chipsets on lower-end and even mid-range PCs.
Thank MicroSoft (Score:5, Interesting)
No.
The major driving force right now in GPU development and purchase are games.
The major factor that they have to contend with is DirectX.
As of DirectX 10, a card either IS or IS NOT compliant. None of this "We are 67.3% compliant".
This provides a known target that can be reached. I wouldn't be surprised if the DirectX10 (video) featureset becomes synonymous with 'VGA Graphics' given enough time.
Yeah, sure, MS will come out with DX11, and those CPUs won't be compatible, but so what? If you upgrade your CPU and GPU regularly anyway to maintain the 'killer rig', why not just upgrade them together?
Re:Heard This One Before (Score:4, Interesting)
Re:Heard This One Before (Score:5, Interesting)
But I highly doubt that nVidia will be able to get out a CPU that outperforms an Intel or AMD chip, which the high-performance junkies would want. Intel and AMD put a HUGE amount of money into research, development, and fabrication to attain their performance. This is going to be interesting to watch. Hopefully nVidia doesn't dig themselves into a hole with this attempt.
It's a logical extension of the NVidia NForce line (Score:5, Interesting)
I've been expecting this for a while, ever since the transistor count of the GPU passed that of the CPU. Actually, I thought it would happen sooner. It's certainly time. Putting more transistors into a single CPU doesn't help any more, which is why we now have "multicore" machines. So it's time to put more of the computer into a single part.
NVidia already makes the nForce line, the "everything but the CPU" part, with graphics, Ethernet, disk interface, etc. If they stick a CPU in there, they have a whole computer.
Chip designers can license x86 implementations; they don't have to be redesigned from scratch. This isn't going to be a tough job for NVidia.
What we're headed for is the one-chip "value PC", the one that sits on every corporate desk. That's where the best price/performance is.
Re:Heard This One Before (Score:4, Interesting)
Then people started using floats for the convenience, not because the accuracy was needed, and performance suffered greatly as a result. Granted, there are a lot of situations where accuracy is needed in 3D, but many of the calculations that are done could be better done in integer math and table lookups.
Does it often matter whether a pixel has position (542,396) or (542.0518434,395.97862456)?
Using a lookup table of twice the resolution (or two tables where there are non-square pixels) will give you enough precision for pixel-perfect placement, and can quite often speed things up remarkably. Alas, this and many other techniques have been mostly forgotten, and it's easier to leave it to the MMU or graphics card, even if you compute the same unnecessary calculations and conversions a million times.
Fast MMUs, CPU extensions and 3D graphics routines are good, but I'm not too sure they're always used correctly. Does a new game that's fifty times as graphically advanced as a game from six years ago really need a thousand times the processing power, or is it just easier to throw hardware at a problem?
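The integer-math idea the comment describes can be sketched with 16.16 fixed-point arithmetic, a common trick from software rasterizers: keep sub-pixel precision in a plain integer and get the pixel back with a shift. The function names and the 16.16 split are illustrative choices, not anything from the article.

```python
FRAC_BITS = 16
ONE = 1 << FRAC_BITS  # represents 1.0 in 16.16 fixed point

def to_fixed(x):
    """Convert a float to 16.16 fixed point (illustration only)."""
    return int(round(x * ONE))

def fixed_to_pixel(f):
    """Round a fixed-point coordinate to the nearest integer pixel."""
    return (f + ONE // 2) >> FRAC_BITS

# Interpolate x across a scanline entirely in integer math:
# start at 10.25, step 0.75 per pixel.
x = to_fixed(10.25)
step = to_fixed(0.75)
pixels = []
for _ in range(4):
    pixels.append(fixed_to_pixel(x))
    x += step

print(pixels)  # [10, 11, 12, 13]
```

The inner loop is one integer add and one shift per pixel, which is exactly why this style was the norm before floating-point units got fast.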
Re:Heard This One Before (Score:5, Interesting)
Memory size and bandwidth are the usual limitations. Remember that if you want 2x AA, you double your memory usage, and if you want 4x AA, you quadruple it. So, that game that needed 128 megs on the video card, with 4x AA, can suddenly need 512.
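A quick back-of-envelope check of that scaling, under the simplifying assumption (as in the comment) that n-sample supersampling stores n times as many samples per pixel, so the framebuffer grows linearly with the sample count. The resolution and 32-bit color depth below are made-up example figures.

```python
def framebuffer_mb(width, height, aa_samples, bytes_per_pixel=4):
    """Color buffer size in MB with n-sample supersampling (simplified)."""
    return width * height * bytes_per_pixel * aa_samples / 2**20

# 1600x1200 at 32-bit color:
print(round(framebuffer_mb(1600, 1200, 1), 1))  # 7.3  (no AA)
print(round(framebuffer_mb(1600, 1200, 4), 1))  # 29.3 (4x AA: four times larger)
```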
steve
Re:Heard This One Before (Score:5, Interesting)
I was using floating point as an example.
I don't know if Nvidia can pull this off without a partner. Building a really good x86 core isn't easy. I wonder if they might do a PPC or ARM core instead. That could make nVidia a big player in the cell phone and mobile video market. At some point there will be portable HD-DVD players.
My crystal ball says:
AMD will produce these products:
1. A low-end CPU with an integrated GPU for the Vista market. This will be a nice, inexpensive option for home and corporate users. It might also end up in some set-top boxes. This will be the next-generation Geode.
2. A family of medium- and high-end video products that use HyperTransport to interface with the Opteron and Athlon 64 lines.
Intel will:
Adopt HyperTransport or reinvent it. Once we hit four cores, Intel will hit a front-side bus wall.
Produce a replacement for the Celeron that is a Core 2 Duo with integrated graphics on one die, to compete with AMD's new integrated solution.
Not go into the high-end graphics line.
nVidia will fail to produce an x86+GPU combo that can compete with AMD and Intel.
nVidia will produce an integrated ARM+GPU and dominate the embedded market. Soon every cellphone and media player will have an nVidia chipset at its heart. ARM and nVidia merge.
Of course I am just making all this up, but so what? Electrons are cheap.
Re:Heard This One Before (Score:1, Interesting)
Re:Heard This One Before (Score:3, Interesting)
What I don't understand is that I thought GPUs were made to offload a lot of graphics computations from the CPU. So why are we merging them again? Isn't a GPU supposed to be an auxiliary CPU only for graphics? I'm so confused.
You've already gotten some good answers here, but I'll throw in something that I haven't seen anyone else mention explicitly: GPUs aren't only being used for 3D animation anymore. GPUs started because, in order to make graphics for games better, you needed a specialized processor to handle the 3D calculations. However, GPUs have become, in some ways, more complex and powerful than the CPU, and as that has happened, other uses have been found for all that power. It turns out that there are lots of mathematical transformations that run more efficiently on the specialized graphics processors, including audio/video processing and some data analysis. Some clever programmers have already started offloading some of their complex calculations from the CPU to the GPU.
This has led many people to wonder: why not bring some of those GPU advancements back to the CPU somehow, so that we aren't swapping data back and forth between the CPU and GPU, and between system RAM and video RAM? Apparently, it's not a stupid question.
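For a feel of what kind of work gets offloaded: operations where every output element depends only on its own input (so thousands can run in parallel) are the ones that map well onto a GPU's many execution units. Plain Python stands in for both sides here; this is a hedged sketch of the pattern, not real GPU code, and the brightness-scaling example is invented.

```python
def scale_pixels(pixels, gain):
    """Embarrassingly parallel: each output depends on exactly one input,
    so a GPU could compute every element simultaneously."""
    return [min(255, int(p * gain)) for p in pixels]

frame = [10, 100, 200, 250]
print(scale_pixels(frame, 1.5))  # [15, 150, 255, 255]
```

On real hardware the same per-element transform would run as a shader (or, later, a GPGPU kernel) across millions of pixels at once, which is why the CPU->GPU offloading the comment mentions pays off.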
Re:It's a logical extension of the NVidia NForce l (Score:3, Interesting)
If using larger chips means I can get 2GB combined RAM for the price of 1GB system RAM and 256MB video RAM? Absolutely.
Re:Heard This One Before (Score:4, Interesting)
Of course, with the GPU integrated into the CPU you wouldn't need card-based RAM at all. You'd process your video in system RAM, and it would be as fast as the GPU accessing its own RAM is at the moment (not shit like today's shared-memory video cards). This also buys flexibility: if your card would have carried 512MB but you're only using 128MB for graphics, you can reuse the other 384MB as additional system RAM.
Re:Heard This One Before (Score:3, Interesting)
At least, I hope it's something like that, because I agree with PP: nVidia doesn't have much of a chance to beat both Intel and AMD at the x86 game.