AI can write code, but it's not clear that it will ever solve the problem of verifying that the code it wrote actually does what people want it to do, in all cases. For important tasks, who is going to want to trust a codebase that is difficult or impossible for a human to review?
I currently work at a "Fortune 500" company, and people left and right are trusting LLM bots with both writing code and reviewing the resulting PRs, for things one could consider "important tasks". And yes, this is utterly irresponsible, but it is cheap and convenient, so it gets done anyway. People who don't want to participate in the AI-slop production will leave the company. I know I will.
Then take a look at how many people privately trust "AI agents" with everything from their emails to their banking. Madness, yes, but there is no denying it is happening.
Will people just take the AIs' word for it that their air-traffic-control system software is correct and reliable?
I happen to know someone who works on air-traffic-control IT systems. His boss has already found a loophole to make the AI slop happen there, too: just more outsourcing to external companies, after which some internal employee, who of course has no realistic chance of properly reviewing the externally supplied code, is asked to "sign off" on their contributions.