i think you're reading a regurgitation viewpoint into what i'm saying. i put 'think' in quotes because i'm still wary of describing them as thinking, but yeah i totally agree they're way beyond regurgitation at this point. i mean, huge portions of human reasoning are language-based. the entire field of formal mathematics is founded on symbol manipulation, for example. so it makes sense that being good at working with language can result in reasoning.
Agreed.
We don't know exactly how our own brains work, but we have some ideas, and one of the (to me) most compelling theories posits that our reasoning ability is inextricably intertwined with our language ability, that the two co-evolved as parts of a single capacity: the ability to justify our decisions to others and to convince them to agree with us. The notion is that "reason" isn't mostly how we think; our real thinking and decision-making happens at a lower layer, and we then engage our reasoning "layer" to invent an explanation for the decision we've already made, so that we can verbalize it to others. The flip side of both reasoning and language is the ability to understand the explanations others verbalize, and to find and exploit holes in their reasoning.
Basically, we're social animals, and this reasoning/language ability evolved to let us gain social advantage over our peers and to reason collectively with them about the group's problems. Reasoning plus language lets us both collaborate to solve shared problems and persuade the rest of the group to do what we want.
Cognitive psychologists and evolutionary psychologists have a lot of evidence that something like this is going on. They can show that our decision-making is rarely driven by reasoning in the moment (though reasoning can sometimes convince us to revise a decision): we decide first and come up with an explanation afterwards. And, amazingly enough, we're just as good at inventing an explanation for a decision we didn't actually make as for one we did.
Anyway, the reason I mention this (aside from it being fascinating) is that if our own capacity for reason is tightly integrated with language, and the two are tied together because language is about symbolic modeling of reality and reason is fundamentally symbol manipulation, then there's every reason to expect that artificial language models may also be able to reason, given the right structure. Merely having a model of language isn't enough, but if you can add to that the ability to manipulate symbols in the right ways, reasoning is a reasonable outcome.