We are at the point where a computer can read a novel and spit out a high school book report that would both fool and impress most English teachers, and it can do it in seconds, not days.
Not quite. It's very possible to do things that work part of the time and allow for very nice demos. But the systems very easily blow a gasket on wrong parses, out-of-domain knowledge, etc. Roughly, there are three problems: we don't know how to operationally represent meaning; we don't know how to handle concepts that are fuzzy around the edges, which is the case for pretty much every concept out there; and we don't know how to give a system all of the world knowledge a normal adult has.
Note that the advent of magnificent things like Wikipedia certainly helps, but as far as I know nobody is able to bootstrap a system from it yet.
There are also a lot of posts claiming the Turing test doesn't mean anything. However, none of them that I have read so far actually explains the claim, so I assume they are parroting their philosophy professor, who was probably referring to Searle's Chinese room argument.
If you ever work on dialogue systems, you'll find out how adaptive humans are in a communicative context. It is, in fact, relatively easy to nudge a human into phrasing things in a way your system handles better, and they won't even notice. That's because humans do it with each other all the time. It's not a bad thing at all, and it makes building efficient dialogue systems for real tasks a tad easier. But it can shift the focus of the Turing test from answering like a human to fooling a human, which is not the same problem at all and, annoyingly, a far easier and less interesting one.