Comment Re:Where is this interview itself? (Score 2) 338

What we don't know from these articles is why some or all of the AI computation can't be done on the GPU.

Because modern GPUs don't have a 1:1 mapping between their "cores" and general-purpose logic units. Once you hit a branch, you effectively collapse your "cores" down to the general-purpose logic associated with those "cores." (I think my GTX 660 Ti has like ~1400 cores and ... 8 general-purpose logic units?) For graphics you're essentially just doing vector/matrix math with no branching, so you can use all of the "cores" in parallel. On top of that, there are costs in streaming data to the GPU that you'd need to engineer around to get this to work. All my knowledge comes from CUDA land, so this may not be 100% accurate, but the general principle is probably correct. TL;DR - GPUs are not general-purpose chips like x86 CPUs, and there are major caveats when using them.
