Neural networks are good at generating correlations, but that's about all that they're good for.
No... What a supervised neural net does, in full generality, is tune a massively parameterized function to minimize some measure of its output error during the training process. It's basically a black box with a million (or billion) or so knobs on its side that can be tweaked to define what it does.
During training, these knobs are tweaked to bring the net's output for a given input as close as possible to a target output defined by the training data it was presented with. A key property of neural nets is that the function learned this way can generalize to unseen inputs outside the training set.
The main limitation of neural nets is that both the function being learned and the error measure being minimized need to be differentiable, since training works by gradient descent (following the error gradient downhill to minimize the error).
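To make this concrete, here's a minimal sketch of the whole idea with a single "knob": a one-parameter model trained by gradient descent on a differentiable (mean squared) error. The setup and numbers are purely illustrative, not from any particular library or paper.

```python
import numpy as np

# Toy "network": a single knob w, model y = w * x.
# The loss (mean squared error) is differentiable in w,
# so we can follow its gradient downhill.

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y_target = 3.0 * x          # the "training data" encodes w = 3

w = 0.0                     # start with the knob at zero
lr = 0.5                    # learning rate (step size)

for _ in range(100):
    y_pred = w * x
    error = y_pred - y_target
    grad = np.mean(2 * error * x)   # d(loss)/dw, computed analytically
    w -= lr * grad                  # tweak the knob against the gradient

print(w)   # converges to ~3.0
```

A real net is the same loop with millions of knobs, and with the gradient computed automatically by backpropagation instead of by hand.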
The range of problems that neural nets can handle is very large, including things such as speech recognition, language translation, natural-language image description, etc. It's a very flexible architecture - there are even neural Turing machines.
No doubt there is too much AI hype at the moment, and too many people equating machine learning (ML) with AI, but the recent advances both in neural nets and reinforcement learning (the ML technology at the heart of AlphaGo) are quite profound.
It remains to be seen how far we get in the next 20 (or whatever) years, but already neural nets are making computers capable of super-human performance in many of the areas to which they have been applied. The combination of NN + reinforcement learning is significantly more general and powerful, powering additional super-human capabilities such as AlphaGo. Unlike the old chestnut of AI always being 20 years away, AlphaGo stunned researchers by being capable of something *now* that was estimated to be at least 10 years away!
There's not going to be any one "aha" moment where computers achieve general human-level or beyond intelligence, but rather a gradual whittling away of the things that only humans can do, or do best, until eventually there's nothing left.
Perhaps one of the most profound benefits of neural nets over symbolic approaches is that they learn their own data representations for whatever they are tasked with, and these allow large chunks of functionality to be combined in simple lego-like fashion. For example, an image captioning neural net (capable of generating an English-language description of a photo) in its simplest form is just an image classification net feeding into a language model net... no need to come up with complex data structures to represent image content or sentence syntax and semantics, then figure out how to map from one to the other!
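The lego-like composition amounts to plain function composition: one net's learned feature vector is the interface the next net consumes. Here's a toy sketch of that wiring; the names (image_encoder, caption_decoder) are made up for illustration, and random weights stand in for trained ones, so the "caption" is nonsense - the point is only the plumbing.

```python
import numpy as np

rng = np.random.default_rng(0)

def image_encoder(image):
    """Stand-in image net: pixels -> fixed-size feature vector."""
    W = rng.normal(size=(64, image.size))   # untrained weights, for shape only
    return np.tanh(W @ image.ravel())

def caption_decoder(features, vocab, length=3):
    """Stand-in language model: greedily emit words conditioned on
    the image features (a toy recurrence, not a real RNN)."""
    U = rng.normal(size=(len(vocab), features.size))
    words, state = [], features
    for _ in range(length):
        scores = U @ state                  # score each vocab word
        idx = int(np.argmax(scores))
        words.append(vocab[idx])
        state = np.tanh(state + U[idx])     # fold the chosen word back in
    return " ".join(words)

# Composing the two nets is just feeding one's output into the other:
# the learned feature vector is the shared representation.
vocab = ["a", "dog", "cat", "on", "grass", "sofa"]
image = rng.uniform(size=(8, 8))
caption = caption_decoder(image_encoder(image), vocab)
print(caption)
```

In a real captioning system both pieces would be trained (often jointly fine-tuned), but the interface between them really is just a vector, which is what makes the recombination so cheap.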
This ability to combine neural nets in lego-like fashion means that advances can be used in combinatorial fashion... when we have a bag of tricks similar to what evolution has equipped the human brain with, the range of problems it can solve (i.e. intelligence level) should be similar. I'd guess that a half-dozen key advances is maybe all it will take to get a general-purpose intelligence of some sort, considering that the brain itself has only a limited number of functional areas (cortex, cerebellum, hippocampus, thalamus, basal ganglia, etc).