As much as I agree with the statement that contemporary LLMs certainly differ a lot from what we experience as "thinking" in other human beings, the problem with this line of argument remains that there is no consensus on what exactly constitutes "thinking."
The problem with this line of thinking is that it ignores the fact that we CAN say what is not thinking, and we've narrowed the problem down quite a bit.
It is generally agreed that chocolate bars do not think. Rocks do not think. Pocket calculators do not think. We know what thinking is not, even if we can't define it fully.
The illusion of intelligence evaporates if you use these systems for more than a few minutes.
Using AI effectively requires, ironically, advanced thinking skills and abilities. It's not going to make stupid people as smart as smart people; it's going to make smart people smarter and stupid people stupider. If you can't outthink the AI, there's no place for you.
Or actually invite reporters to secret chats.