ehh, I've had the opposite experience. I've got way too many years writing C for embedded systems, but needed an Android app. So I asked ChatGPT to create an Android app for me that would do mDNS and Bluetooth discovery, pop up a dialog to let me choose from the discovered devices, and then connect over WiFi or Bluetooth as appropriate. After spending an hour downloading and installing the Android toolset, the program compiled the first time and did what I asked. I did my normal step-through-line-by-line verification, and it was fine for prototype code.
As someone who'd never written an Android app in my life, this was eye-opening. It would have taken me weeks to get there pre-Google, several days with Google, but about 10 minutes with an LLM. Now, would I trust it to write production embedded code? Ya know, with appropriate LLM-generated tests and human validation, I think so.
My son recently graduated with a degree in Software Engineering; I try to tell him that the future of that job is what we today would call a software architect or designer. The Software Engineer will be judged on how well they can write the specs and requirements fed into an AI to generate code, not on how well they remember the details of how a C++ lambda works, or whether they can write a complex regular expression and still read it six months later. We're not there yet, but there's so much money being poured into the problem, and it's such an easy-to-see evolution, that in 10 years the idea of actually paying attention to spaces and semicolons will seem quaint.