I think more to the point, at least as far as I understood it, the Turing test was not meant to be a real test for whether an AI was actually intelligent.
The point of the test was essentially this: if a machine can imitate intelligence well enough that we can't tell the difference, then we may as well treat it as actual intelligence. As much as anything, Turing was making a philosophical point from a pragmatic standpoint. It doesn't make sense to ask whether a machine is "actually intelligent", only whether it's capable of behaving as though it is.
So it's not really about fooling some specific percentage of people, or running the test for some specific length of time. Those are just details of how you might actually conduct such a test; what you're testing for is whether the machine's "intelligence" has become indistinguishable, in its effects, from human intelligence.
So really, the point was to have something like a "blind taste test". You say you can tell the difference between Coke and Pepsi, but if I pour them into identical glasses, can you still tell which is which? If not, then maybe you shouldn't be expressing a preference. Similarly, if I can put a series of questions to a person and a computer, and no matter what questions I ask, I can't tell the human's responses from the computer's, then maybe we shouldn't think of the computer as less intelligent than the human.