Re:So ChatGPT is a magnificent cut-and-paste machi (Score 1)
This is vacuous nonsense. LLMs have the ability to generalize. They can actually do shit. They can, for example, translate languages, base64 decode, solve simple ciphers, double recipes, and apply knowledge learned via ICL, all with varying degrees of success. While their behavior is generally rather rote, blanket dismissal as a glorified random number generator or a next-character predictor fails to speak in any useful way to demonstrated capabilities.
No, they can't generalize. They can produce random numbers and characters based on training data. If you're talking about 'summarize this paper', it is just producing random characters that fit the training data. If you think it 'thinks' or 'does shit', other than producing random characters that fit the training data (or context data, if you're loading it that way for MCP or tooling shit), then you have a fundamental misunderstanding of both how an LLM is 'trained' and how it functions. (We use the word 'training', but what we really mean is n-gram/token probability extraction from a given set of data.)
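To make concrete what that "probability extraction then sampling" framing looks like, here's a toy bigram sketch in Python. This is my own illustration, not how a transformer is actually trained, and the function names are made up for the example:

# Toy illustration of "token probability extraction" followed by sampling.
# A bigram count table, not a transformer; it only shows the idea of
# "count probabilities from data, then emit tokens that fit them".
import random
from collections import defaultdict, Counter

def build_bigram_model(corpus):
    """Count which token follows which in the training data."""
    counts = defaultdict(Counter)
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def sample_next(model, prev):
    """Pick the next token at random, weighted by training-data frequency."""
    followers = model.get(prev)
    if not followers:
        return None
    tokens, weights = zip(*followers.items())
    return random.choices(tokens, weights=weights, k=1)[0]

def generate(model, start, length=10):
    out = [start]
    for _ in range(length):
        nxt = sample_next(model, out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

if __name__ == "__main__":
    corpus = "the model predicts the next token the model samples the next token"
    model = build_bigram_model(corpus)
    print(generate(model, "the"))

An actual LLM replaces the lookup table with a learned function of the whole context window, but the sample-from-a-distribution step at the end is the same.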