Because then the understanding that went into that code is nonexistent. By definition, nearly all LLM-generated code is tech debt right out of the gate, because a human didn't write it, thus it is not understood by anyone.
And since the EXACT same series of prompts will arrive at different code, I can't hand my series of prompts to anyone else to implement anything. At least when I give a specification to different developers and get different code back, I can go ask the devs how they arrived at that code. Generally that reveals missing assumptions, wrong assumptions, incorrect understandings, or different understandings, which can then be reconciled and iterated on.
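To make the non-determinism point concrete, here is a toy sketch (pure Python, no real model or API; the vocabulary and prompt are made up): generation samples the next token from a probability distribution, so with any temperature above zero the exact same input can legitimately produce a different output on every run.

```python
import random

# Toy stand-in for a language model: a weighted distribution over possible
# completions, the way temperature > 0 sampling works in practice.
VOCAB = {
    "use a dict": 0.4,
    "use a list of tuples": 0.35,
    "use a namedtuple": 0.25,
}

def toy_generate(prompt: str) -> str:
    """Same prompt in, potentially a different completion out on each call."""
    choices, weights = zip(*VOCAB.items())
    return f"{prompt} -> {random.choices(choices, weights=weights)[0]}"

if __name__ == "__main__":
    prompt = "How should I store the config?"
    for _ in range(3):
        # Identical input every iteration, yet the sampled output can differ.
        print(toy_generate(prompt))
```

Run it a few times and the "answer" changes even though nothing about the input did, which is exactly why a series of prompts is not a reproducible specification.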
With LLMs this entire process happens inside the black box, so there is absolutely no way to understand how it arrived at the output. Any flawed assumptions, missing assumptions, incorrect understandings, or different understandings on the LLM's part can NOT be ascertained; you just have to GUESS what it did wrong and iterate on that, hoping you arrive at the correct output. And since the LLM won't store any of this for future use unless you actively tell it to, you are setting yourself up for more work later.
LLM context is absurdly rigid compared to a human's, because it is still a program. It can't context switch like a human. Even though we know context switching is harmful to engineering productivity, we are still capable of it. LLMs are incapable of it; we have to tell them to switch, hence all the cheat sheets for LLMs floating around (Assume the role of X. Ignore instruction A, B, C. Include file A, B, C, etc.).
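As a rough illustration of what those cheat sheets amount to (all role names, file names, and prompt text below are invented), the "context switch" has to be spelled out by hand, from scratch, every time the task changes:

```python
# Toy sketch of manually rebuilding an LLM's context. A human just switches
# tasks; with an LLM you restate the role, the exclusions, and the inputs.
def build_context(role: str, ignore: list[str], include_files: list[str]) -> str:
    lines = [f"Assume the role of {role}."]
    lines += [f"Ignore instruction {item}." for item in ignore]
    lines += [f"Include file {path}." for path in include_files]
    return "\n".join(lines)

if __name__ == "__main__":
    # Every task change means re-stating all of this explicitly.
    print(build_context("a senior reviewer", ["B", "C"], ["main.py", "handler.py"]))
```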
I may not always remember the exact details of what I worked on years ago, but I remember the general gist of it. An LLM will never do that, or worse, will ALWAYS do it even when it isn't applicable; it has no way to tell the difference.