I think of this as pretty much replacing the kind of work that electrical engineers used to do with board design and circuit layout. Now they use an expensive tool like Altium, and while they may still tweak the output, by and large the layouts are generated by the software; only the high-level requirements are fed into it.
Ditto. Time was we coded in assembly. Then compilers came out, but people didn't trust them. I, for one, would have occasion to review the generated assembly to double-check that what got generated matched what I thought the C code should do. By the time I was doing this, compilers were mature enough that a discrepancy was virtually always me misunderstanding C semantics, not a compiler bug.
AI feels like the next iteration. I code in reasonably specific English. I may or may not carefully review the code, depending on how familiar I am with the toolset and whether the result is a toy, a throwaway tool, or production.
I'll give you yesterday's example. I'm working on a REST API which presents telemetry from a brand new hardware device. There's a lot of uncertainty in how the hardware, the control software, and my processing interact. In the past, if I wanted to understand what was going on, I'd run a bunch of queries and pore over the output by hand, or maybe copy and paste it into a spreadsheet, or write a Perl/Python script to analyze the output. Today I got an AI to create a browser app (took maybe 20 minutes if you count bug-fix iterations) to fetch the data and render it in a chart. Do I know how to write a good React app? Not in the slightest. Do I care, as long as I can see the shape of the resulting curve? Also not in the slightest; this is a throwaway. Does it help me do a better, faster job on my real work product, the REST server? Oh my God, yes.
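For scale, the old-school version of that throwaway is maybe thirty lines of Python. This is just a sketch of the idea, not the actual tool: the endpoint, field names, and demo data here are all hypothetical stand-ins, and in practice the samples would come from the REST API via `urllib` or `requests` rather than being hard-coded.

```python
# Throwaway telemetry peek. In real use, `demo` would come from something
# like json.load(urllib.request.urlopen("http://device/telemetry")) --
# that URL and the "value" field name are made up for illustration.
import json

def sparkline(values):
    """Render values as a crude ASCII sparkline to eyeball the curve shape."""
    bars = " .:-=+*#"
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid divide-by-zero on a flat series
    return "".join(bars[int((v - lo) / span * (len(bars) - 1))] for v in values)

def summarize(samples):
    """Boil a list of {"value": ...} samples down to the numbers I'd eyeball."""
    vals = [s["value"] for s in samples]
    return {
        "n": len(vals),
        "min": min(vals),
        "max": max(vals),
        "mean": sum(vals) / len(vals),
        "shape": sparkline(vals),
    }

if __name__ == "__main__":
    # Hard-coded stand-in for the fetched telemetry.
    demo = [{"t": i, "value": v} for i, v in
            enumerate([1.0, 1.2, 1.9, 3.1, 4.0, 3.6, 2.2, 1.4])]
    print(json.dumps(summarize(demo), indent=2))
```

The point isn't the script; it's that the AI-generated React version gave me an interactive chart instead of an ASCII squiggle, in about the same wall-clock time, with none of my attention spent on it.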