Comment Re:No shit (Score 1) 99
I guess his point is that LLMs mostly do rote memorization, with so few proper reasoning steps that we may as well consider them incapable of understanding.
It is also very hard to distinguish between an LLM simply spitting out a learned answer and one doing actual reasoning from a more generic model to arrive at the answer. If the LLM was taught an answer to your question, then it can just reproduce the learned text without any (deeper) understanding of it. It may have only made some simple substitutions in the memorized data to tailor the output to your specific question. This is a big deal from my point of view. We do not know whether the model inside an LLM is simple enough compared to the model humans have (i.e. whether the Kolmogorov complexity of the LLM's model is not too much bigger than that of a human's).
It has been shown that LLMs can reason at least to a very limited level. It is not only memorization of the training data. They can do at least one reasoning step (e.g. a simple substitution rule or a simple modus ponens rule).
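To make "one reasoning step" concrete, here is a toy sketch (my own illustration, not from the comment) of a single modus ponens application: given a fact and a rule "if P then Q", derive Q. One rule application is one step; chaining several would be multi-step reasoning.

```python
def modus_ponens(facts, rules):
    """Apply each rule (premise, conclusion) once to the fact set.

    This is a single forward-chaining pass: conclusions derived
    here are not fed back in, so it models exactly one reasoning step.
    """
    derived = set(facts)
    for premise, conclusion in rules:
        if premise in derived:
            derived.add(conclusion)
    return derived

# Toy fact and rule names are made up for illustration.
facts = {"it_rains"}
rules = [("it_rains", "ground_wet")]
print(sorted(modus_ponens(facts, rules)))  # ['ground_wet', 'it_rains']
```

The point of the single pass is that an LLM reproducing this behavior only needs to match one premise to one rule, which is a much weaker claim than general reasoning.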