The current approaches are effective in certain domains and useless in others. The LLM paradigm is NOT the way to achieve AGIs, and despite its popularity (with the hordes) it is a flawed and misleading path forward. We know from studying the brain that we learn incrementally, not by mass batch training. Humans learn one thing at a time, and we do it by back-connecting new information to things we already know, building up knowledge trees. LLMs are rigid: they have to absorb enormous amounts of information per training run, and doing so is expensive in resources. That is NOT how the human brain works, and as I've said many times, we do it on 20 watts, not with expensive vector processors and megawatts.
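To make the contrast concrete, here is a minimal sketch (my own illustration, not anything proposed in the text) of the two styles on a toy regression: a batch trainer that needs the whole dataset up front and repeated passes over it, versus an incremental learner that absorbs one example at a time from a stream and never revisits old data.

```python
# Toy contrast: batch training vs incremental (one-example-at-a-time) learning.
# Both fit w in y = w*x; the point is the shape of the process, not the model.

def batch_train(examples):
    """Batch style: the whole dataset must be present before any learning."""
    w = 0.0
    for _ in range(100):  # many passes over all the data at once
        grad = sum(2 * (w * x - y) * x for x, y in examples) / len(examples)
        w -= 0.01 * grad
    return w

class IncrementalLearner:
    """Online style: absorb one example at a time, keep only running state."""
    def __init__(self):
        self.w = 0.0

    def learn_one(self, x, y):
        # single-example gradient step; past data never needs to be stored
        self.w -= 0.1 * 2 * (self.w * x - y) * x

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2x
inc = IncrementalLearner()
for x, y in data * 50:  # examples arrive as a stream
    inc.learn_one(x, y)

print(round(batch_train(data), 2))  # ≈ 2.0
print(round(inc.w, 2))              # ≈ 2.0
```

Both converge here, but the incremental learner's cost per new fact is one tiny update, while the batch trainer's cost scales with the whole corpus every time.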
And as for the networks-of-many-neurons approach, it is architecturally wrong. By analogy: if you tried to design a CPU architecture starting from the gate level, you would soon run into trouble maintaining the project. For a CPU you have to work top-down, not bottom-up. Likewise for AGIs.
A more viable route to AGIs is to start at the highest levels, handling things modularly in terms of functions and tasks and the blocks that implement them. Another approach, more daring, is to make each knowledge object itself intelligent within a special framework. This is a little like Robert Hecht-Nielsen's confabulation theory, but not quite, and analogous in a way to the attention heads in LLM implementations, but not quite. It can amount to: the more you know, the smarter you get. That analogy breaks down, though, because increasing intelligence can require improving the methods by which knowledge is organized.
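One way to picture the "intelligent knowledge object" idea is a sketch like the following. Every name in it (`KnowledgeObject`, `relates_to`) is my own invention for illustration, under the assumption that each object stores back-links to previously learned concepts and answers queries by consulting what it already knows, so capability grows with accumulated knowledge.

```python
# Hypothetical sketch: each knowledge object carries back-links to prior
# knowledge and answers a relatedness query by recursively asking its links.

class KnowledgeObject:
    def __init__(self, name, back_links=()):
        self.name = name
        self.back_links = list(back_links)  # connections to things already known

    def relates_to(self, other, seen=None):
        """Each object decides for itself whether it connects to `other`."""
        seen = seen if seen is not None else set()
        if self.name in seen:  # avoid revisiting a concept
            return False
        seen.add(self.name)
        if self is other:
            return True
        # delegate the question to the objects this one back-connects to
        return any(link.relates_to(other, seen) for link in self.back_links)

# Build a tiny knowledge tree: each new concept back-connects to older ones.
number  = KnowledgeObject("number")
integer = KnowledgeObject("integer", [number])
prime   = KnowledgeObject("prime",   [integer])

print(prime.relates_to(number))   # True: prime -> integer -> number
print(number.relates_to(prime))   # False: links only point backward
```

The design choice that matters is that no central controller traverses the graph; the "intelligence" lives in each object's own query logic, which is the sense in which knowing more makes the whole system smarter.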
Anyway, current paradigms for deep learning / ML have a lot of flaws, much like quantum mechanics, where we know a lot about behavior but not much about why. In both cases it is clear the underlying models are flawed, and we have to evolve toward much better deep models of intelligence. And stop gobbling up hype like lemmings. Yeah, I'm bad at metaphors.