It would, indeed, be *highly* preferable to pause, or at least slow, AI development in order to design and implement safeguards. But there are multiple groups striving to capture the first-mover advantage, and anyone who slows development will be bypassed. And since those groups aren't all under the same legal system, regulation won't enforce a pause either.
Consider https://ai-2027.com/ . That scenario currently seems a bit conservative if you check its postulated timeline against what has (publicly) been happening, though I expect engineering problems to slow things down at some point. Read both suggested endings (and the caveats). In that scenario cluster, the US dominates if AGI occurs before 2030, and China dominates if it occurs much later.
It would clearly be better for the US, China, and the corporations to agree to a set of safety rules. My imagination, however, isn't flexible enough to picture them actually doing so (as opposed to promising to do so, which they might well do).