Link to the full paper
http://lss.fnal.gov/archive/2013/conf/fermilab-conf-13-035-cd.pdf
They're using M2070 (Fermi) GPUs. Kepler would perform even better; the latest ones have more than 6 GB of memory.
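If you're curious what you're actually running on, the CUDA runtime will tell you. Here's a quick sketch (just a standard cudaGetDeviceProperties loop, nothing taken from the paper) that prints each device's compute capability and total global memory, which is where the Fermi/Kepler gap shows up:

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        cudaError_t err = cudaGetDeviceCount(&count);
        if (err != cudaSuccess || count == 0) {
            std::printf("No CUDA devices found: %s\n", cudaGetErrorString(err));
            return 1;
        }
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            // totalGlobalMem is in bytes; compute capability 2.x = Fermi, 3.x = Kepler.
            std::printf("Device %d: %s, compute %d.%d, %.1f GB global memory\n",
                        i, prop.name, prop.major, prop.minor,
                        prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
        }
        return 0;
    }

Compile with nvcc and run; nothing fancier than the device-query sample that ships with the toolkit.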
I'm surprised no one has mentioned this more generic solution that's already in production:
They can get AMD for chump change right now. Fits well with their model of being vertically integrated. They could pump some money into AMD and get them to improve their x86 processors, and then dump Intel. They could get the GPU division of AMD to make a mobile GPU for their mobile products. And AMD's CPU engineers would come in very handy for custom ARM CPU design for mobile.
I beg to differ, this approach really worked for me in grad school.
I didn't rely simply on my own recollection, but got together with a friend right after class to go over the material again and make notes. That let both of us stop worrying about taking notes during class and really concentrate on what the professor was teaching.
The discussion after class also helped us fill the gaps in our understanding.
Really, what is the problem with this?
The problem is that a tool is being used weirdly. Is a PS3 really a more powerful parallel computer per dollar than the various cards from Nvidia and ATI? Maybe it is, but if it is, then I have a gripe against Nvidia and ATI.
It is not. Plus, with CUDA there is much more room to scale, with new GPUs coming to market every so often. Why did they use PS3s?
Then you would be happy to know that Nvidia's new Fermi chip supports ECC throughout the architecture.
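You can even check from software whether ECC is actually switched on. A minimal sketch using the ECCEnabled field of cudaDeviceProp (the Tesla/Quadro Fermi parts expose it; consumer GeForce boards generally don't support ECC):

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        cudaDeviceProp prop;
        if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
            std::printf("Could not query device 0\n");
            return 1;
        }
        // ECCEnabled is 1 when ECC is turned on for this device.
        std::printf("%s: ECC %s\n", prop.name,
                    prop.ECCEnabled ? "enabled" : "disabled or unsupported");
        return 0;
    }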
He should've used something like CUDA instead, for long-term gains. It would have delivered far better performance than the Xbox's GPU (which is quite dated now), plus easy scalability as better GPUs keep coming to market. His familiarity with Xbox programming might have let him come up to speed with CUDA quickly.
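The appeal is that a CUDA kernel is written against an abstract grid of threads rather than a particular chip, so the same code keeps spreading across however many multiprocessors the installed GPU happens to have. A throwaway vector-add sketch (not anything from his project, just the general shape):

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // One thread per element; the grid size is derived from n, so the same
    // kernel scales to bigger GPUs without changes.
    __global__ void vecAdd(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);

        // Host buffers with some recognizable test data.
        float *ha = (float*)malloc(bytes), *hb = (float*)malloc(bytes), *hc = (float*)malloc(bytes);
        for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

        // Device buffers and the usual copy in / launch / copy out dance.
        float *da, *db, *dc;
        cudaMalloc((void**)&da, bytes);
        cudaMalloc((void**)&db, bytes);
        cudaMalloc((void**)&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        vecAdd<<<blocks, threads>>>(da, db, dc, n);
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

        std::printf("c[0] = %f (expected 3.0)\n", hc[0]);

        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }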
Isn't this the OS X version that has OpenCL integrated into it? If so, isn't that considered a big enough improvement?
Similar experience here. My Compaq Presario R4000 laptop stopped working abruptly while I was sitting in the university lab. It wasn't even plugged in, was running off the battery, and the display just went blank. The LCD had probably conked out, because I could still run an external DFP off it. Anyway, since it was still under warranty, I shipped it off. I got a call from the service center saying that I had spilled liquid on it, that the hard drive, motherboard, CPU and LCD were all shot and had to be replaced, and they handed me a bill of around $700 (the original cost of the laptop was $1200). I was livid, but no amount of reasoning/cussing changed anything. "We'll ship it back if you don't want to fix it". Fucking turds, those Compaq folks. I'll never buy anything from them ever again, and everyone who's heard this story has refused to touch HP/Compaq again.
Oh, and the best part? I had purchased a $99 one-year warranty covering user damage when I bought the laptop. So they had to replace everything for me, essentially handing me a new laptop. I got lucky.
Yeah, Vista-32 on my laptop with 4GB RAM shows 3.5GB installed. A 32-bit OS only has 4GB of physical address space to begin with, and PCIe device memory has to be mapped below that 4GB line, so the RAM behind those mappings is lost. I have an 8600M with a 256MB framebuffer, so all the PCIe devices, including it, fit in that missing 0.5GB. I use Linux-32 as my primary desktop though, which I think uses HIGHMEM to access memory above 896MB (more knowledgeable kernel hackers can correct me if I'm wrong), and that isn't too efficient. One of these days I'll install a 64-bit OS, I keep telling myself.
Always draw your curves, then plot your reading.