Comment Re:"No idea how... the brain works" (Score 4, Informative) 230
(I work in this area of research.) You are right: the paper describes just a sequence-to-sequence transformation model that learns plausible replies to inputs, without actually "understanding" what is going on.
At the same time, we *are* making some headway on the "understanding" part as well, just not in this particular paper. Basically, we have ways to convert individual words into high-dimensional numerical vectors whose mathematical relations closely correspond to the semantics of the words, and we are now working on neural networks that build up such vectors even for larger pieces of text and use them for more advanced things. If anyone is interested, look up word2vec, "distributed representations" or "word embeddings" (or "compositional embeddings").
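To make that concrete, here is a toy numpy sketch of the kind of training word2vec does (skip-gram with negative sampling) at vastly larger scale. The corpus, embedding dimension, and hyperparameters are all made up for illustration; real systems train on billions of words.

```python
# Toy skip-gram with negative sampling, in the spirit of word2vec.
# Corpus, dimensions, and hyperparameters are illustrative only.
import numpy as np

corpus = ("the cat sat on the mat "
          "the dog sat on the rug "
          "the cat chased the dog").split()

vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 8          # vocabulary size, embedding dimension

rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(V, D))   # word ("input") vectors
W_out = rng.normal(scale=0.1, size=(V, D))  # context ("output") vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# (center, context) pairs within a +/-2 word window.
pairs = [(idx[corpus[i]], idx[corpus[j]])
         for i in range(len(corpus))
         for j in range(max(0, i - 2), min(len(corpus), i + 3))
         if i != j]

lr, n_neg = 0.05, 3
for epoch in range(200):
    for center, context in pairs:
        # One positive context plus a few random negative samples.
        targets = [context] + list(rng.integers(0, V, n_neg))
        labels = [1.0] + [0.0] * n_neg
        for t, y in zip(targets, labels):
            u = W_out[t].copy()
            g = sigmoid(W_in[center] @ u) - y   # logistic-loss gradient
            W_out[t] -= lr * g * W_in[center]
            W_in[center] -= lr * g * u

def most_similar(word, k=3):
    # Nearest neighbors by cosine similarity in the learned vector space.
    v = W_in[idx[word]]
    sims = W_in @ v / (np.linalg.norm(W_in, axis=1) * np.linalg.norm(v) + 1e-9)
    return [vocab[i] for i in np.argsort(-sims) if vocab[i] != word][:k]

print(most_similar("cat"))
```

On a corpus this tiny the neighbors are noisy, but the mechanism is the same one that, at scale, makes semantically related words end up close together in the vector space.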
If you already know what word2vec is, take a look at http://emnlp2014.org/tutorials...