The parrot is alive. It has feelings, personal experiences, an individual personality, opinions, and a unique perspective on the world.
An LLM is a machine. It is very complex, but it is not alive. It can be analyzed, and its output, given a certain input, is mathematically predictable and repeatable. Having a few billion levers, knobs, and dials does not make an LLM intelligent. Machine complexity only makes it harder to know in advance what the output will be. With enough logging and debug output, you can see exactly how it works. It isn't magic.
The LLM is a program that entirely relies on the input provided to generate an output.
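The point about predictability can be made concrete with a toy sketch. Nothing below is a real LLM; `toy_model` is a hypothetical stand-in for a model's forward pass, but it illustrates the property being claimed: a model is a fixed function from input to output, and with the parameters frozen and no randomness injected, the same prompt always produces the same result.

```python
# Toy stand-in for a model: a deterministic function of the input.
# The "weights" play the role of the billions of levers, knobs and dials.

def toy_model(prompt, weights):
    # Deterministic arithmetic transform of the input characters.
    score = sum(weights[i % len(weights)] * ord(ch)
                for i, ch in enumerate(prompt))
    vocabulary = ["yes", "no", "maybe"]
    return vocabulary[score % len(vocabulary)]

weights = [3, 1, 4, 1, 5]  # frozen "parameters"

a = toy_model("Is the parrot alive?", weights)
b = toy_model("Is the parrot alive?", weights)
print(a == b)  # prints True: same input, same weights, same output, every time
```

Sampled randomness in real deployments doesn't change this: the randomness comes from an external seed fed in as more input, not from anything inside the machine.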
The parrot will bite you, "talk" at you, shit on you, snuggle you, ignore you, and do a variety of other things because that's how it feels at that moment. It has an inner state of thought and consciousness. The LLM has no such thing. It just sits there burning power in some Iowa data center near a corn field, waiting for the next instruction in an idle loop. Although "waiting" implies awareness. It is not actually waiting in the sense that I wait in the hallway for my daughter to get ready to go to the mall. It is waiting in the sense that her shoes sit in the hallway doing nothing until she interacts with them. Only then do her shoes "do" anything, as she picks them up and puts her feet in them. Without her input, her shoes do nothing. No different from the LLM: her shoes do not get bored, do not anticipate, do not wonder where they'll walk today.
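The "idle loop" above can be sketched in a few lines. This is an assumption-laden toy, not any real serving stack; `generate` is a hypothetical placeholder for the model's forward pass. The point it shows is structural: between inputs, the program is blocked and inert, with no state of mind to occupy the gap.

```python
# Sketch of the idle loop: inert between requests, like the shoes.

def generate(prompt):
    # Hypothetical stand-in for running the model: pure input -> output.
    return f"response to: {prompt!r}"

def serve():
    while True:
        prompt = input("> ")  # blocks here; no thought, no anticipation
        if prompt == "quit":
            break
        print(generate(prompt))
```

Everything that happens, happens inside `generate`, and only because a prompt arrived.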
This is all very basic philosophy of mind stuff from my second year at university. Every time someone posts about an LLM being intelligent, I smile slightly, shake my head a little, and wonder if an hour or two of undergrad philosophy lectures for everyone would finally kill off this incorrect notion that an LLM has -any- intelligence or ability to think.
It does not. An LLM is a machine. A very complex machine, but a machine nonetheless. No amount of additional model size or coding will ever make an LLM into an intelligent, self-aware being. The LLM is the wrong technology to ever produce that.
In school we used to call this "strong vs. weak AI". LLMs fall squarely into the weak AI box. If anyone has done any serious work on strong AI and made any real progress since I was a university kid, I'd love to see it. Weak AI can appear intelligent but never will be. Strong AI is the real deal; we have yet to see a single strong AI system, even an extremely primitive and stupid one.
Intelligence battle scores:
Parrot: 1
Every LLM ever made or ever will be: 0
Game over.
Parrot wins. Not because it understands whatever sounds it's making (possibly, but unlikely; the "talking bird" thing is just a distraction from the real discussion), but because it's a unique living creature with inner thoughts the LLM will never have. We can replace the parrot with an eagle, cat, dog, rat, cow, giraffe, or any other animal with a real brain, and all of this would still be true. I specify animals with brains because there are plenty of living organisms with no discernible brain that are clearly still alive. I'm quite certain bacteria do not have an inner state of mind, for example.
Go read John Searle (Berkeley), particularly his Chinese Room argument, for much more depth on this topic.