Er, not really. As long as the "intelligence" takes the form of algorithms, human beings are devising sets of rules for computers to follow. That is not very intelligent - or, at least, the intelligence involved is indirect, remote and attenuated. The people who specify the software's behaviour must communicate what they want clearly, unambiguously, completely and consistently to the programmers, who then have to do the same in their code. In the end, the computer can do only what the original specifiers thought to ask for, in response to only those events they were able to conceive of. A physical analogy would be trying to tie your shoelaces with a pair of 30-foot-long tweezers - only much worse.
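To make that concrete, here is a deliberately toy sketch in Python (the event names and handlers are invented purely for illustration): a rule-based program can only dispatch on the cases its authors anticipated, and everything else falls through to a shrug.

    # Toy illustration: a hand-written rule table covers only the cases
    # its specifiers thought of ahead of time.
    HANDLERS = {
        "disk_full": lambda: print("Deleting temp files..."),
        "net_down": lambda: print("Retrying connection..."),
    }

    def respond(event):
        handler = HANDLERS.get(event)
        if handler:
            handler()
        else:
            # Anything the specifiers never conceived of ends up here.
            print("Unknown event %r: no rule exists, so the program shrugs." % event)

    respond("disk_full")    # anticipated by the specifiers: handled
    respond("solar_flare")  # never conceived of: the 30-foot tweezers drop the lace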
The very essence of real intelligence is the ability to recognise patterns immediately and respond to them in creatively flexible - if not always entirely new - ways. The art of building neural networks and similar systems that can work that way is still in its infancy.
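By contrast, a learned system responds by similarity to experience rather than by looked-up rules. A minimal sketch of the difference, using an invented toy dataset and nearest-neighbour matching as a stand-in for fancier pattern recognisers:

    import math

    # Toy "experience": labelled examples, not hand-written rules.
    examples = [((1.0, 1.0), "cat"), ((1.2, 0.9), "cat"),
                ((5.0, 5.0), "dog"), ((4.8, 5.3), "dog")]

    def classify(point):
        def dist(p, q):
            return math.hypot(p[0] - q[0], p[1] - q[1])
        # Respond to an input never seen verbatim, by similarity to
        # past examples rather than by an explicitly written rule.
        return min(examples, key=lambda ex: dist(ex[0], point))[1]

    print(classify((1.1, 1.1)))  # "cat" - nobody wrote a rule for this exact input
    print(classify((4.9, 5.1)))  # "dog"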
And even when those systems become "production strength", we will face their biggest problem: non-transparency. How far can you trust a superhuman intelligence that not only doesn't explain to you the reasons for its decisions, but is fundamentally unable to do so?
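To see why the inability is fundamental, consider what such a system could even point to. A minimal sketch, with invented toy data: after training, the complete "knowledge" of even the smallest neural unit is a handful of floating-point numbers, with no chain of reasons anywhere to be extracted.

    import math, random
    random.seed(0)

    # Toy training set: the output should follow the first coordinate.
    data = [((0.0, 0.0), 0), ((0.0, 1.0), 0), ((1.0, 0.0), 1), ((1.0, 1.0), 1)]
    w = [random.uniform(-1, 1), random.uniform(-1, 1)]
    b = 0.0

    def predict(x):
        z = w[0] * x[0] + w[1] * x[1] + b
        return 1.0 / (1.0 + math.exp(-z))   # sigmoid output

    for _ in range(1000):                    # plain gradient descent
        for x, y in data:
            g = predict(x) - y               # gradient of cross-entropy w.r.t. z
            w[0] -= 0.1 * g * x[0]
            w[1] -= 0.1 * g * x[1]
            b -= 0.1 * g

    print(predict((1.0, 0.0)))  # a confident decision...
    print(w, b)                 # ...and the entire "why": three raw numbers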
For details, see James P. Hogan's SF novel "The Two Faces of Tomorrow" (you can skip the fictional parts for our purposes here and just read the lectures on AI). By the way, Hogan was a computer engineer.