I would imagine his intent was not that the machine would achieve success; the experiment is simply a nice framework for analysing whether the machine is able to comprehend information. Whether the machine is convincing or not, if it can abstract concepts in the same way we do, then you'd have a thinking machine. The pretending scenario seems like a simple way to make it harder for someone to pass with a cheap solution (i.e. a chatbot).
Ultimately producing real intelligence is what is sought, not passing the damned test.