Submission + - IBM Researchers Propose Device to Speed Neural Net Learning by up to 30,000X (arxiv.org)
skywire writes: We've all followed the recent story of AlphaGo beating a top Go master. Now IBM researchers Tayfun Gokmen and Yurii Vlasov have described what could be a game changer for machine learning: an array of resistive processing units that would use stochastic techniques to dramatically accelerate the backpropagation algorithm, speeding up neural network training by a factor of up to 30,000. They argue that such an array would be reliable, power-efficient, and buildable with current CMOS fabrication technology.
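For readers curious about the mechanism, here is a rough Python sketch, not the authors' implementation, of the stochastic pulse-coincidence idea the paper builds on: each backpropagation weight update is an outer product of an activation vector and an error vector, and coincidences of random pulse trains on the array's rows and columns reproduce that product on average. The stream length BL and learning rate below are illustrative assumptions.

import numpy as np

# Illustrative sketch only: approximate the backpropagation update
#   dW = lr * outer(delta, x)
# with random pulse streams, so each weight changes only when its
# row pulse and column pulse coincide (the stochastic update idea
# behind the proposed RPU array). BL and lr are assumed values.

rng = np.random.default_rng(0)

def stochastic_update(W, x, delta, lr=0.01, BL=10):
    """Update W in place using coincidences of random pulse trains.

    Assumes |x| and |delta| entries are at most 1 so they can be
    used directly as pulse probabilities.
    """
    n_out, n_in = W.shape
    dw_unit = lr / BL                      # weight change per coincidence
    p_x = np.clip(np.abs(x), 0.0, 1.0)     # column-line pulse probabilities
    p_d = np.clip(np.abs(delta), 0.0, 1.0) # row-line pulse probabilities
    sign = np.outer(np.sign(delta), np.sign(x))
    for _ in range(BL):
        x_pulse = rng.random(n_in) < p_x
        d_pulse = rng.random(n_out) < p_d
        # A weight is nudged only where both its row and column fire.
        coincidence = np.outer(d_pulse, x_pulse)
        W += dw_unit * sign * coincidence
    return W

# On average the stochastic result matches the exact outer-product rule.
W = np.zeros((3, 4))
x = rng.uniform(-1, 1, 4)
delta = rng.uniform(-1, 1, 3)
stochastic_update(W, x, delta)
print(W)
print(0.01 * np.outer(delta, x))  # exact update, for comparison

Because every cross-point device updates itself in parallel from its own row and column pulses, the whole array update takes constant time regardless of matrix size, which is where the claimed speedup comes from.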