Well, yes -- the lies and the exaggerations are a problem. But even if you *discount* the lies and exaggerations, they're not *all of the problem*.
I have no reason to believe this particular individual is a liar, so I'm inclined to entertain his argument as being offered in good faith. That doesn't mean I necessarily have to buy into it. I'm also allowed to have *degrees* of belief; while the gentleman has *a* point, that doesn't mean there aren't other points to make.
That's where I am on his point. I think he's absolutely right that LLMs don't have to be a stepping stone to AGI to be useful. Nor do I doubt they *are* useful. But I don't think we fully understand the consequences of embracing them and replacing so many people with them. The dangers of thoughtless AI adoption arise in that very gap between what LLMs do and what a sound step toward AGI ought to do.
LLMs, as I understand them, generate plausible-sounding responses to prompts; in fact, with the enormous datasets they have been trained on, they sound plausible to a *superhuman* degree. The gap between "accurately reasoned" and "looks really plausible" is a big, serious gap. To be fair, *humans* do this too -- satisfy their bosses with plausible-sounding but not reasoned responses -- but the fact that these systems are better at bullshitting than humans isn't a good thing.
On top of this, the organizations developing these things aren't in the business of making the world a better place -- or if they are in that business, they'd rather not be. They're making a product, and to make that product attractive, their models *clearly* strive to give the user an answer he will find acceptable, which is also dangerous in a system that generates plausible but not-properly-reasoned responses. Most of them rather transparently flatter their users, which sets my teeth on edge, precisely because that flattery is designed to manipulate my faith in responses which aren't necessarily defensible.
In the hands of people increasingly working in isolation from other humans with differing points of view, systems which don't actually reason but are superhumanly believable are, in my opinion, extremely dangerous. LLMs may be the most potent agent of confirmation bias ever devised. Now, I do think these dangers can be addressed and mitigated to some degree, but the question is: will they be, in a race to capture a new and incalculably valuable market where decision-makers, both vendors and consumers, aren't necessarily focused on the welfare of humanity?