Still, at least those people understand what they have built. Even if you get AI to spit out a codebase that covers your needs, it can't make changes to it (to cover a new need, for example); you can only tell it to spit out an entirely new codebase (which may or may not still cover the other needs the previous one did). The example you mentioned (of developers failing to make a change) is an outlier; most software undergoes a lot of changes and additions over its lifetime.
Also, let's not forget that most of the bad designs you see are the result of MBAs mismanaging people: they don't hire an experienced lead architect to come up with a good design but only kids fresh out of college (or even coding bootcamps), they impose timelines that are too short (basically mistaking agile to mean "we can do this with half the people in half the time"), and they don't give developers time to write documentation (or don't have a management structure that enforces documentation-writing, or don't test applicants for proficiency in written English before hiring them).
Basically, MBAs look forward to AI as a way out of the mistakes they love to make when managing people, forgetting that the codebase AI gives them is far less maintainable (and more often than not far less secure) than what even a mismanaged team of "coders" produces. But anyway, I am always happy to be proven wrong, so let's see if AI will actually deliver software that is at least as useful, maintainable, and secure as what the average mismanaged team of humans delivers.