Indeed. What is hilarious is that, apparently, many people are suffering from similar issues and cannot actually put the "general" in general intelligence in whatever thinking they are capable of. And hence the hype continues, despite very clear evidence that it cannot deliver.
As to "AGI", that is a "never" for LLMs. The approach simply cannot do it. We still have no credible, practical mathematical model of how AGI could be done.
I would submit that automated theorem proving, or automated deduction (basically the same thing), is a theoretical model that is AGI. But it is not practical, because it gets bogged down in state-space explosion already on simple problems. Scaling it up to what a really smart mathematician can do would probably take more computing power than is available in this universe, as the effort grows exponentially with reasoning depth, with a high base. Incidentally, this was explored extensively around 1990. What came out of it are proof assistants (very useful!), where a smart human takes the system through a proof in baby steps and the system verifies the reasoning chain.
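That baby-steps workflow is visible in any modern proof assistant. A minimal sketch in Lean 4 (lemma names taken from Lean's standard library for Nat): the human chooses the strategy and spells out every rewrite, while the system only checks that each inference is valid — it does not search for the proof itself.

```lean
-- Human-guided proof of commutativity of addition on Nat.
-- The human picks induction and each rewrite; Lean's kernel
-- merely verifies that every step is a legal inference.
theorem my_add_comm (a b : Nat) : a + b = b + a := by
  induction a with
  | zero => simp                -- 0 + b = b + 0, closed by simp lemmas
  | succ n ih =>                -- ih : n + b = b + n
    rw [Nat.succ_add, ih, Nat.add_succ]
```

Left to search for such a chain of rewrites on its own, an automated prover faces a branching factor of all applicable lemmas at every step, which is exactly the exponential blow-up described above.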
But besides that one? No mathematical or algorithmic approaches that can create AGI are known. They all fail at the "general" aspect of things. Just like so many (but not all) humans do.