Lots of semantic discussion at this tier. Subtiers, even. Some of the posts treat AI as a linear strength metric that qualifies after some arbitrary point. That's a stupid definition. Better would be to pin it to some tipping point; not the singularity, but something like self-writing code.
AI isn't deterministic. Chess is deterministic. "Natural" voice navigation will be deterministic.
AI isn't even deductive. AI means building new decision trees, adaptive to conditions. Most life forms do this. In this context, humans boil down to execution of instinct code, all the way up to "love (and the expression of) is chemicals in the brain".
Now, we already have code that does this, at a crude level. In its day, Black & White set some niche records for the amount of behavioral code your Creature pet would build for itself. We have programs that don't know how to play chess, but can write a chess-playing decision tree that optimizes over iterations. It will reach a shitty roof well below current, human-written layouts. Programs that "learn" to play Space Invaders (or anything) play like shit.
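To make the "optimizes over iterations, then hits a shitty roof" point concrete, here's a minimal sketch. Everything in it is invented for illustration: a toy "coin run" game on a number line, and a hill-climbing learner that knows nothing about the game except its final score.

```python
import random

# Toy setup, all made up: coins sit on a number line, the agent starts at 0
# and gets a fixed budget of left/right moves.
COINS = {-4, -1, 2, 3, 5, 8}   # coin positions
STEPS = 12                     # moves allowed per episode

def play(actions):
    """Run one episode from position 0; score = distinct coins touched."""
    pos, touched = 0, set()
    for move in actions:       # each move is -1 (left) or +1 (right)
        pos += move
        if pos in COINS:
            touched.add(pos)
    return len(touched)

def hill_climb(iters=2000, seed=0):
    """Flip one move at a time; keep the mutant if it scores at least as well."""
    rng = random.Random(seed)
    plan = [rng.choice((-1, 1)) for _ in range(STEPS)]
    start = score = play(plan)
    for _ in range(iters):
        mutant = list(plan)
        mutant[rng.randrange(STEPS)] *= -1     # mutate one move
        s = play(mutant)
        if s >= score:         # accept sideways moves so the search can drift
            plan, score = mutant, s
    return start, score
```

A human glancing at the board writes the optimal route immediately: left to -1, then right through 2, 3, 5 to 8, which is 5 coins in 10 moves and the best possible within 12 steps (touching all 6 coins would need at least 16). The learner improves its random starting plan iteration by iteration but can only plateau at or below that human-written roof, and on harder games it stalls well below it.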
So, seeing as you read this far, I'll concede that we hit a familiar snag: either we already have AI (by this definition) or we're pegging AI to some arbitrary strength of original, generated thought. But the code equivalent to the instincts of even simple life forms would be a mess, so it'll be a while before we get self-preserving, self-sustaining, "self-aware" (enjoy ur semaniks.) AI.