Gweihir is somewhat infamous for refusing to admit that LLMs have any possible use, or that anyone is using them productively today. His posts have grown increasingly strident of late.
The kicker is AGI. I'm not sure that, under a definition that actually matches the acronym, it's even possible, yet some companies claim to be attempting it. Usually, when you check, they've built a bunch of limitations into what they mean. A real AGI would be able to learn anything, which probably implies an infinite "stack depth". (It's not literally a stack, but functionally it serves the same purpose.)
I don't like the term "AGI" because it's nebulous and means different things to different people. The shifting vocabulary in the AI field is rough. In the 1980s people regularly talked about chess as an AI problem; now you can find plenty of people who say that's not AI. Ditto for Go: once Go became a defeated AI problem, it suddenly was no longer worthy of being considered AI. All the things I learned when I took an AI class ~25 years ago (neural networks, A* and other tree-search algorithms, etc.) are now often derided as not AI.
There's a group of people who want to keep shifting the goalposts until the only goal left is "human intelligence": if it's not human intelligence, it's not AI.
I think defining "AGI" as "the ability to learn anything" is close. But can your average human learn anything? I'm not so sure.
Does it matter if the same AI program can solve math problems (or protein folding, whatever) AND plan a warehouse robot's travel route AND summarize legal documents?
For now at least, throwing more people-time, processing time, and processing capacity at these models does seem to make a big difference. I've been playing around with some downloadable models, and the technology is improving quickly. I can't imagine what it will be like in 2, 5, 10, or 20 years.
I would bet on Zuckerberg over Gweihir.