That is the dumbest take. The kinds of mistakes that humans make are fundamentally different from the kinds of "mistakes" that LLMs make. Humans are also capable of evaluating their work and making appropriate changes and corrections. LLMs are not.
You wouldn't tolerate a 1% error rate from any other kind of program, let alone >60%. Using an LLM to write code requires more effort than just doing it yourself, not less. That makes it useless.
It ultimately doesn't matter where the mistake comes from: you either have processes in place to detect it before it reaches production, or you have unhappy customers. Humans can definitely "self-correct", but often enough fail to do so, hence the need for formal review, testing and QA processes long before AI became relevant.
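As a trivial illustration (just a sketch with a made-up parse_price function, not from any real codebase): the same check blocks the bug before release whether a human or an LLM introduced it.

    # Minimal sketch with a hypothetical parse_price(); the test doesn't care
    # who authored the bug, it just stops it from reaching production.

    def parse_price(text: str) -> float:
        """Convert a price string like '$1,299.99' to a float."""
        return float(text.replace("$", "").replace(",", ""))

    def test_parse_price_handles_thousands_separator():
        assert parse_price("$1,299.99") == 1299.99

    def test_parse_price_plain_number():
        assert parse_price("42") == 42.0

    if __name__ == "__main__":
        test_parse_price_handles_thousands_separator()
        test_parse_price_plain_number()
        print("all checks passed")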
You think a 60% error rate is high, but I don't think you are making the right comparison: it should be with a human coder trying to implement the same solution. You are never going to convince me that you only need one try to implement anything non-trivial.
The AI will likely not get it right the first time, but neither would the human. Iterating through some trial and error is likely going to be necessary regardless of whether the AI or the human is doing the coding.
Which of the two is more efficient depends on the case. I have encountered cases where it was more efficient to use an AI coding agent and others where it was not. Ultimately, these are new tools, and learning when and how to use them is likely going to become part of the job for most of us.