LLMs are essentially sophisticated pattern-recognition algorithms.
No, they're not.
The fact is, we do not know "how" they work except at the very base level.
We use gradient descent to adjust weights in a very large collection of MLPs with a self-attention mechanism, and they're able to produce text.
Beyond that, we have to evaluate their behavior empirically.
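For concreteness, here is a minimal sketch of that "very base level": a toy block of self-attention plus an MLP, trained by plain gradient descent to predict the next token. The sizes, the random stand-in "corpus", and the name TinyBlock are illustrative assumptions, not any production architecture.

```python
# Minimal sketch: one transformer-style block (self-attention + MLP)
# trained by gradient descent on next-token prediction.
# Sizes and the random "corpus" are toy assumptions for illustration only.
import torch
import torch.nn as nn

vocab_size, d_model, seq_len = 50, 32, 16

class TinyBlock(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                 nn.Linear(4 * d_model, d_model))
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        x = self.embed(tokens)
        # Causal mask: each position may only attend to earlier positions.
        n = tokens.size(1)
        mask = torch.triu(torch.ones(n, n), diagonal=1).bool()
        a, _ = self.attn(x, x, x, attn_mask=mask)
        x = x + a
        x = x + self.mlp(x)
        return self.out(x)  # logits over the next token at each position

model = TinyBlock()
opt = torch.optim.SGD(model.parameters(), lr=0.1)  # plain gradient descent
tokens = torch.randint(0, vocab_size, (8, seq_len))  # stand-in for real text

for step in range(100):
    logits = model(tokens[:, :-1])
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()  # backpropagation computes the gradients...
    opt.step()       # ...and gradient descent moves the weights
```

That loop is the part we understand. What the billions of resulting weights end up computing is the part we have to study empirically.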
Based on their training, they compose sequences of tokens that approximate what would be expected in response to a prompt.
This is correct, but misleadingly limited.
Based on your training, you compose words that would be expected in response to a prompt.
Models generalize. It's what happens between the prompt and the answer that matters. You're trying to assert that it doesn't "think", while being wholly unable to define "think" in a way that isn't anthropocentric.
AI is to intelligence, as a movie is to motion.
To your big-I anthrointelligence, which you have defined to mean your subjective human experience, while defining their internal experience to be not that: sure, yes, I agree with that statement.
It's an entirely fucking useless statement, but a statement it remains.
When watching a movie, there is a very convincing appearance of motion, but in fact, nothing on the screen is actually moving. It can be so convincing that viewers using 3D glasses might instinctively recoil when an object appears to fly towards them. But there is no actual motion.
This is the simulation vs. reality argument, and it's flat-out logically wrong.
Intelligence is not a physical thing that can be simulated. It is a classification of action. LLMs can, in fact, act.
The characters have no intent, though humans assign intent to what the "characters" are saying and doing. The point is, it's an illusion. And in the same way, AI is an illusion, a fancy (and very useful) parlor trick.
Except this is a philosophical argument, not a physical one.
Next you'll tell me Achilles can't possibly beat the Tortoise.