It's my belief that the basic technologies of the LLMs are sufficient [for AGI]
Well, if you're going with "AGI is a really good LLM", okay, perhaps.
But if you're using AGI to describe a conscious intelligence (you know, what "AI" meant before the marketers started calling thermostats intelligent), i.e. a synthetic person, then no, almost certainly not. LLMs produce streams of words according to probabilities derived from their training data and guided by the input query or queries. That's why some of those word streams, while grammatically coherent, are factually nonsensical: it's misprediction (somewhat risibly labeled "hallucination" by the marketing types; protip: it's not hallucination). Putting words together according to probability is not reasoning. LLMs don't think.
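To make the "words by probability" point concrete, here's a toy sketch, a bigram counter, nothing remotely like a real transformer, and the tiny "corpus" is invented for illustration. Each next word is chosen only from what happened to follow the previous word in training, so the output stays locally fluent while carrying no model of meaning at all:

```python
import random

# Invented toy corpus, just for demonstration.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat saw the dog .").split()

# "Training": count which words follow which.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start, n, seed=0):
    """Emit up to n more words by sampling a plausible successor each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

print(generate("the", 8))
```

Every adjacent word pair it emits occurred in the corpus, so it reads smoothly, but nothing in the loop knows what a cat or a mat is. A real LLM replaces the count table with a neural network conditioned on far more context, which is why its fluency is so much more convincing; the underlying move, pick a probable next token, is the same.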
Having said that, an LLM might end up being part of an AGI (and yes, I do mean a synthetic person), but its role would almost certainly be limited to assembling output for a system that actually puts information together in a way that understands it, and only then tries to assemble the words that communicate that understanding to others. Understanding the information is not a technology anyone has demonstrated to date.
LLMs, because they can assemble grammatically coherent word streams from their training-data probabilities and the input query or queries, are outright subversive in their simulation of intelligence. We all love a nicely constructed sentence, and a nicely constructed series of sentences. But there's no "there" there.