You should go watch Jeff Hawkins's TED talk on HTMs (hierarchical temporal memory). It's old-ish (over 5 years), but he's referenced in the article and he founded the Redwood Neuroscience Institute. You should also be able to find a white paper or two on HTMs.

Jeff's theoretical model of the brain may have changed some in the last 5 years (I don't know, I haven't been paying attention), but HTMs were basically a hierarchical structure of nodes, with one layer feeding up to the layer above it. The nodes weren't traditional simple NN nodes. Each "node" was fairly complex and did two things: 1) it looked at the pattern of data on its inputs and assigned it a label (if it saw the same pattern again, it would get the same label), and 2) it kept track of the sequence of patterns over time, and the node's final output was a value representing the sequence with the highest probability. Nodes higher in the network would then take these values as their input, and so on. Higher nodes, when they determined "I think we're seeing a cat", could push this prediction down to lower nodes in order to help train them (I think).

Anyway, the point was that the "nodes" in Jeff's model were not simple NN nodes – they were complex (actually implemented as a Bayesian network, IIRC), and these complex nodes were wired together into a hierarchy. Jeff does a great job of arguing that his model is more biologically accurate than simple NNs.

Anyway, it's good to see these ideas getting some good funding behind them. They always seemed "right" to me.
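To make the two-step node behavior concrete, here's a toy sketch in Python. This is just my illustration of the idea as I described it (stable labels for repeated patterns, plus tracking which label sequences occur most often), not Numenta's actual HTM algorithm; the class name, window size, and counting scheme are all made up:

```python
from collections import defaultdict

class ToyHTMNode:
    """Toy sketch of the two things each HTM-style node does:
    (1) assign a stable label to each distinct input pattern, and
    (2) track label sequences over time, outputting the one seen
    most often so far (a crude stand-in for 'highest probability')."""

    def __init__(self, seq_len=2):
        self.labels = {}                    # pattern -> stable label
        self.seq_counts = defaultdict(int)  # label sequence -> count
        self.recent = []                    # sliding window of labels
        self.seq_len = seq_len

    def _label(self, pattern):
        # Step 1: the same pattern always gets the same label.
        if pattern not in self.labels:
            self.labels[pattern] = len(self.labels)
        return self.labels[pattern]

    def step(self, pattern):
        # Step 2: slide a window over the label stream, count each
        # window, and emit the most frequent one seen so far. That
        # output could feed a node one layer up.
        self.recent.append(self._label(pattern))
        if len(self.recent) > self.seq_len:
            self.recent.pop(0)
        if len(self.recent) == self.seq_len:
            self.seq_counts[tuple(self.recent)] += 1
        if not self.seq_counts:
            return None  # not enough input yet
        return max(self.seq_counts, key=self.seq_counts.get)

node = ToyHTMNode()
for p in ["A", "B", "A", "B"]:
    out = node.step(p)
print(out)  # the (A, B) sequence, as labels: (0, 1)
```

A real HTM node did this with a Bayesian network over learned spatial and temporal pools, not a lookup table, but the shape of the computation is the same: name the pattern, then name the sequence of names.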