When I was younger, my take was that it would take 15-20 years to 'train' an 'artificial brain', just like it takes time to get a baby up to general intelligence.
I still like this idea.
Now we throw the whole internet, music libraries, and picture collections at a monster computer and have it digest it all at once. We end up with LLMs that are seemingly magnificent at the written word but actually have no clue about the meaning/context of what they write. We have seen wonderful things like Sora that seem to have only a rudimentary understanding of physics: water simulation seems pretty OK, but consistency is wacky, with people/cars/things disappearing and appearing, the human skeleton not staying consistent (swapping legs, finger horrorshows), etc.
I like to think that the AI has to LEARN these things, not just regurgitate what it's been fed. So treat it like a baby. Give an advanced machine one or two manipulators and a vision system (it'll learn how to use them) and give it children's toys (maybe simulated). The biggest problem will probably be how to give it 'curiosity' (what will an AGI do when it has no tasks to do?). Keep adding more toys and adding complexity to the toys, just as for a baby. Let the manipulators interact with water, mud, foam, soft things, hard things, unbreakable things, breakable things. Have it LEARN how things work: action & effect. Keep building up the compute capacity while doing this. Those things will go slowly; other things like learning arithmetic, mathematics, text, etc. will probably go faster. In the end you would have a machine that understands the world and has knowledge and reasoning skills. (Hopefully...)
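The "LEARN how things work, action & effect" loop could be sketched as a curiosity-driven forward model. This is purely my own illustrative toy (the world's "physics", the model, and the learning rate are all made up): the agent acts, predicts the outcome, and its prediction error is the curiosity signal, which fades as it figures out the dynamics.

```python
import random

# Toy "physics" the agent must discover: next state = state + action.
def world_step(state, action):
    return state + action

# A naive forward model the agent updates online; it starts out wrong.
class ForwardModel:
    def __init__(self):
        self.coef = 0.0  # initial belief: next_state = state + coef * action

    def predict(self, state, action):
        return state + self.coef * action

    def update(self, state, action, actual, lr=0.1):
        # Curiosity signal: how surprised the model is by the real outcome.
        error = actual - self.predict(state, action)
        if action != 0:
            self.coef += lr * error / action  # crude corrective step
        return abs(error)

def explore(steps=200, seed=0):
    rng = random.Random(seed)
    model = ForwardModel()
    state = 0.0
    surprises = []
    for _ in range(steps):
        action = rng.choice([-1.0, 1.0])   # poke the world
        nxt = world_step(state, action)    # observe the effect
        surprises.append(model.update(state, action, nxt))
        state = nxt
    return surprises

surprises = explore()
```

The point of the sketch is the shape of `surprises`: it starts at 1.0 (the model knows nothing) and decays toward zero as the action-effect mapping is learned, which is exactly when a curiosity-driven agent would want a new, more complex toy.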
Anyway, maybe this is more of a science-fiction scenario, but that is how I envisioned the AGI evolution. I'm sure we'll see wonderful things in the coming year or two, but if no REAL breakthrough is achieved we might again slide into the trough of disillusionment and enter the next AI winter...