Today we have computer navigation, plain-language database queries, and speech processing such as Siri. AI? No. Just elaborate table lookup.
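To make the "elaborate table lookup" point concrete, here's a toy sketch of an "assistant" that is literally nothing but a lookup table. Every pattern and reply here is invented for illustration; real assistants are far more sophisticated, but the point is that pattern-to-response mapping alone involves no understanding:

```python
# Toy "assistant" as a pure lookup table: no understanding,
# just pattern -> canned reply. All entries are hypothetical.
RESPONSES = {
    "set a timer": "OK, timer set.",
    "what's the weather": "It's sunny today.",
    "navigate home": "Starting navigation home.",
}

def respond(query: str) -> str:
    # Normalize the query and look it up; anything outside
    # the table simply is not "understood".
    key = query.lower().strip(" ?!.")
    return RESPONSES.get(key, "Sorry, I don't understand.")
```

Like Searle in the room, `respond()` shuffles symbols according to fixed rules; whether the system "understands" anything is exactly what's in dispute.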
You've got the beginnings of a well-known thought experiment called the Chinese Room:
Searle then supposes that he is in a closed room and has a book with an English version of the computer program, along with sufficient paper, pencils, erasers, and filing cabinets. Searle could receive Chinese characters through a slot in the door, process them according to the program's instructions, and produce Chinese characters as output. If the computer had passed the Turing test this way, it follows, says Searle, that he would do so as well, simply by running the program manually.
So the person running the commands manually, operating essentially as the computer's processor, has passed the Turing test. Yet neither the human nor the "computer" really understands Chinese.
What you are implying about Siri et al. is that human beings operate differently. But how do you know that human intelligence isn't just a series of elaborate table lookups? The stated purpose of the Chinese Room thought experiment is, according to Wikipedia, "to challenge the claim that it is possible for a digital computer running a program to have a 'mind' and 'consciousness' in the same sense that people do", but it actually demonstrates something else: as long as you define human "consciousness" as going beyond mere computation, it is impossible to test whether it exists. And since all we can observe of a human are inputs and outputs, those are the only basis on which we can compare human "intelligence" to AI. That is the point of the Turing test: to measure whether the inputs result in similarly intelligent outputs.
If AI research has taught us anything, it's that humans are much more intelligent than we thought we were. We have a lot of subconscious mental faculties that are beyond even our most complex computers. One big one is still the ability to make intelligent conversation. Siri may understand some requests well enough to deliver the desired response, but much of the time her level of comprehension falls short of a four-year-old's.
I do think, however, that if a computer could fool a dog into believing it was also a dog 100% of the time, then it would have the intelligence of a dog, with a caveat. The dog being fooled would need to understand the philosophical nature of the test and also understand how the computer is likely to fail. Otherwise it would be like asking Siri to provide feedback for an English paper; she just does not understand the question being asked.
The question is really more "are this AI's actions indistinguishable from those of a being we know is intelligent?" If the test administrator is qualified to judge that, and the test is run enough times for the results to be statistically significant, it's perfectly reasonable to suggest that because the actions are indistinguishable, so too must be the level of intelligence behind them.
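The "statistically significant" part can be sketched with an exact binomial test: if judges trying to pick out the machine are right no more often than a coin flip, the machine's behavior is indistinguishable from the human's at that sample size. This is a minimal illustration with made-up numbers, not a real Turing-test protocol:

```python
from math import comb

def binom_p_value(successes: int, trials: int, p: float = 0.5) -> float:
    # One-sided exact binomial test: probability of seeing at least
    # `successes` correct identifications out of `trials` if the judges
    # were merely guessing at chance rate `p`.
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))
```

With hypothetical numbers: 52 correct picks out of 100 trials is entirely consistent with guessing (large p-value), while 70 out of 100 would let the judges claim they really can tell the machine apart (p-value well below 0.05).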