But still, unless it is buggy, all the intelligence in the system was engineered by the programmers; a "self-learning" algorithm only learns what it was engineered to learn, plus whatever it picked up accidentally through bugs.
We're perfectly capable of making systems we can't really understand, or which develop uses we hadn't anticipated. Lots of software does things the developers didn't have in mind when they wrote it. We can't understand the internal values in a complex artificial neural net; all we know is how we arrived at them and what they appear to do.
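A toy sketch of that opacity, under entirely hypothetical assumptions (a tiny 2-3-1 network trained on XOR with plain backpropagation; nothing here corresponds to any real system): we wrote every line of the training procedure, yet the learned weights that come out are just numbers with no individually readable meaning.

```python
import copy
import math
import random

random.seed(42)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# The XOR truth table: the one thing we *do* fully understand here.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

# Hidden layer: 3 units, each with 2 input weights + a bias;
# output unit: 3 weights + a bias. Randomly initialized.
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
w_out = [random.uniform(-1, 1) for _ in range(4)]
w_hidden_init = copy.deepcopy(w_hidden)

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hidden]
    o = sigmoid(sum(w_out[j] * h[j] for j in range(3)) + w_out[3])
    return h, o

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

loss_before = mse()

# Plain stochastic backpropagation: a procedure we engineered completely.
lr = 0.5
for _ in range(20000):
    for x, t in data:
        h, o = forward(x)
        d_out = (o - t) * o * (1 - o)  # output-unit error signal
        for j in range(3):
            # hidden-unit error, computed before w_out[j] is updated
            d_h = d_out * w_out[j] * h[j] * (1 - h[j])
            w_out[j] -= lr * d_out * h[j]
            w_hidden[j][0] -= lr * d_h * x[0]
            w_hidden[j][1] -= lr * d_h * x[1]
            w_hidden[j][2] -= lr * d_h
        w_out[3] -= lr * d_out

loss_after = mse()

# The trained weights are fully inspectable, yet no single number
# "means" anything on its own; we only know how we arrived at them.
print("hidden weights:", w_hidden)
print("loss: %.4f -> %.4f" % (loss_before, loss_after))
```

Even in this four-example toy, inspecting `w_hidden` tells you nothing a human can read off directly; scale that up to millions of weights and the "we know how, not what" problem is the normal case.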
Artificial neural nets are limited by their base complexity and by their inputs. Make one complex enough and give it enough input and you might well get general intelligence, although I doubt it's that simple.