Comment Re:PR article (Score 2) 244
What the hell kind of GPT-generated word salad did I just plow through?
> It is a result of the thought processes that create it. To create language
LLMs are not thinking. Nor are they creating language.
> You cannot build a LLM from a Markov model
Really? 'Cause I'm looking at research papers on arXiv right now that lay out the equivalences in their methodologies: Zekri, Odonnat, Benechehab, Bleistein, Boullé and Redko, last revised Feb 2025.
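And the equivalence isn't exotic: a decoder with a finite vocabulary and a bounded context window defines a Markov chain whose state is the window itself. Here's a minimal sketch of that framing (toy vocabulary and stand-in transition function are mine, not from the paper):

    import random

    # Toy demonstration: any next-token sampler that only sees the last K tokens
    # defines a Markov chain whose state space is "all length-K token windows".
    VOCAB = ["the", "cat", "sat", "mat", "on"]
    K = 2  # context window length

    def next_token_probs(window):
        """Stand-in for an LLM's softmax output: P(next token | last K tokens).
        Here it's just a made-up function of the window, for illustration."""
        seed = hash(tuple(window)) % (2**32)
        rng = random.Random(seed)
        weights = [rng.random() for _ in VOCAB]
        total = sum(weights)
        return {tok: w / total for tok, w in zip(VOCAB, weights)}

    def step(window):
        """One Markov transition: sample a token, slide the window."""
        probs = next_token_probs(window)
        tokens, weights = zip(*probs.items())
        nxt = random.choices(tokens, weights=weights)[0]
        return window[1:] + [nxt]  # new state depends only on the old state

    # The induced chain: state -> state, kernel given by the sampler.
    state = ["the", "cat"]
    for _ in range(5):
        state = step(state)
        print(state)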
> If you could store one state transition probability per unit of Planck space
It's not that I don't trust you, but until you show your working I am absolutely not trusting you.
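And "show your working" isn't an unreasonable ask, because the working is short. A back-of-envelope sketch of what it would look like (the numbers are my assumptions: ~50k-token vocabulary, 2048-token context, textbook Planck length and observable-universe radius):

    import math

    # Back-of-envelope check of the "one transition probability per Planck
    # volume" claim. All figures below are assumptions, not the GP's.
    vocab = 50_000            # typical LLM vocabulary size
    context = 2048            # context window length in tokens
    planck_len = 1.6e-35      # metres
    universe_radius = 4.4e26  # metres, comoving radius of observable universe

    # Number of distinct contexts (states) a full transition table would index.
    log10_states = context * math.log10(vocab)

    # Number of Planck volumes in the observable universe.
    universe_vol = (4 / 3) * math.pi * universe_radius**3
    planck_vol = planck_len**3
    log10_planck_cells = math.log10(universe_vol / planck_vol)

    print(f"contexts to index: ~10^{log10_states:.0f}")
    print(f"Planck volumes available: ~10^{log10_planck_cells:.0f}")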
> For LLMs to function, they have to "think", for some definition of thinking
'For LLMs to function, they have to "shit", for some definition of shitting.' That is the level of idiocy you just gave us, followed right up with "You can debate over terminology", so I'm more inclined to think you're a GPTbot than a human.
> so you have to "round" your position to nearby tokens, and there's often many tokens nearby
Omigod, it's as if I just heard a quick description of the Temperature parameter of an LLM.
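For anyone who hasn't met it: temperature just rescales the logits before the softmax, which is exactly the "how many nearby tokens are in play" knob. A minimal sketch (logit values invented for illustration):

    import math
    import random

    def sample_with_temperature(logits, temperature=1.0):
        """Softmax over logits / T, then sample. Higher T flattens the
        distribution, so more 'nearby' tokens become plausible picks."""
        scaled = [l / temperature for l in logits]
        m = max(scaled)
        exps = [math.exp(l - m) for l in scaled]   # subtract max for stability
        total = sum(exps)
        probs = [e / total for e in exps]
        return random.choices(range(len(logits)), weights=probs)[0], probs

    # Toy logits for five candidate tokens.
    logits = [3.0, 2.8, 2.5, 0.5, -1.0]
    for t in (0.2, 1.0, 2.0):
        _, probs = sample_with_temperature(logits, temperature=t)
        print(f"T={t}: " + ", ".join(f"{p:.2f}" for p in probs))

At low T the top-logit token dominates; at higher T the picks spread across the "nearby" tokens, which is all that "rounding to nearby tokens" amounts to.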
> As for this article, it's just silly
Jesus Christ, did the irony just send me blind?
> argument that LLMs do not operate in the same way as a human brain, and hallucinates that to "LLMs can't think"
For one thing, I've yet to see your definition of "think".
> isn't actually supported by the data; novelty metrics continue to rise, with no sign of his suppossed "cliff"
Well, that's funny, because again looking at arXiv I'm seeing "inference-time measures of improving novelty often trade-off gains in originality with a cost in output quality" — "Measuring LLM Novelty As The Frontier Of Original And High-Quality Output", October 2025.
> ignoring that the language models *are* reasoning
SHOW IT. Because you haven't yet.