They can get AMD for chump change right now. It fits well with their model of being vertically integrated. They could pump money into AMD to improve its x86 processors, then dump Intel. They could have AMD's GPU division build a mobile GPU for their mobile products. And AMD's CPU engineers would come in very handy for custom ARM CPU design on mobile.
I beg to differ, this approach really worked for me in grad school.
I didn't rely simply on my own recollection; instead, I got together with a friend right after class to go over the lecture again and make notes. This let both of us stop worrying about taking notes during class and really concentrate on what the professor was teaching.
The discussion afterward also helped us fill gaps in our understanding.
Really, what is the problem with this?
The problem is that a tool is being used weirdly. Is a PS3 really a more powerful parallel computer per dollar than the various cards from Nvidia and ATI? Maybe it is, but if it is, then I have a gripe against Nvidia and ATI.
It is not. Plus, with CUDA there is much more room to scale, since new GPUs come to market every so often. Why did they use PS3s?
Then you would be happy to know that Nvidia's new Fermi chip supports ECC throughout the architecture.
He should've used something like CUDA instead, for long-term gains. It would have delivered far better performance than the Xbox's GPU (which is quite dated now), plus easy scalability as better GPUs keep coming to market. His familiarity with Xbox programming might have helped him come up to speed with CUDA quickly.
Isn't this the OS X version that has OpenCL integrated into it? If so, isn't that considered a big enough improvement?