"Deep learning" refers a family of machine learning techniques (such as neural-networks, convolutional neural-networks, stacked-autoencoders, etc.) that have a multi-layer architechture, typically allowing the system to learn highly non-linear functions of many variables. Each layer can be thought of as a simple learned function whose output is fed into the next layer. Such systems can often have thousands or millions of parameters to learn and thus require a LOT of training data and a fair bit of computing power/ runtime to train. But if you look at some area (e.g. object reccognition in computer vision), deep networks are currently the top techniques by a fair margin.
This seems more like basic-level stuff...
The devil is in the details. How do you best represent learning mathematically and computationally? What counts as a mistake, and what are the objectives? How do you encode these, and how do you penalize making those mistakes in the future? These are all challenging questions.
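To make one of those concrete, here's a toy sketch (plain Python, invented numbers): the objective gets encoded as a loss function, and "penalizing mistakes in the future" amounts to adjusting parameters in the direction that reduces that loss:

    # Toy model y = w * x fitting the target y = 2x; the "mistake" is the
    # squared error, and gradient descent is one way to penalize it.
    x, y_true, w, lr = 3.0, 6.0, 0.0, 0.05
    for _ in range(50):
        y_pred = w * x
        loss = (y_pred - y_true) ** 2      # the encoded objective
        grad = 2 * (y_pred - y_true) * x   # d(loss)/dw
        w -= lr * grad                     # nudge w toward a smaller future mistake
    print(w)                               # converges to ~2.0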
That strikes me as the sort of thing that would be "hardwired" in everything from nematodes to primates.
Machine learning approaches have often taken inspiration from biology; however, the exact neurological mechanisms of learning are not yet entirely understood. It's difficult to replicate nature. It's even more difficult when you don't yet understand nature.