Okay, it's only a single data point, but here's a scenario that I can't imagine is unique.
I hired a junior full stack developer with a small portfolio of simple work. We started out with tasks matched to that skill level and grew the complexity over many months. As the complexity grew, the output became slower and, to be polite, messier. We're not talking about jumping from 'edit this html' to 'design and build this ERP' - think standard stuff, like build the API and the front end for a 10 view app. Eventually a brick wall was hit, there was frustration on both sides, and I decided to sever the relationship.
In doing a more thorough review, I started noticing some common syntactical choices - ones that got me curious enough to wonder why the developer would have reused certain specific variable names, for instance. So I decided to act on a hunch, and sure enough, asking ChatGPT to build various things would spit out code nearly verbatim to what I was seeing in the commits. It was also clear why there was a brick wall: the code the AI produced was nearly always based on an example you'd find on the platform developer's site, usually out of date and written against an older release. If you don't really know what you're doing and you're relying on an LLM to build things, you're in trouble. In this instance, the focus was on iterating with the AI rather than using its output as a shortcut to actually learning the platform.
So AI is fine, and I use it myself, but at this stage it's just a fancy search engine that can rewrite things. It's great for, say, translating a class between languages, or handing it a big chunk of JSON and asking it to build a model around it, or getting a (possibly broken) example of how you might accomplish some task in a language you're not completely expert in. But you can't accept its output as complete and ready to build on.
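To make the "JSON to model" chore concrete, here's a minimal sketch in Python of the kind of code an LLM will happily produce from a pasted blob. The payload and field names are invented for illustration - the point is that the mechanical mapping is easy to generate, but checking the types and required fields is still on you.

```python
import json
from dataclasses import dataclass

# Hypothetical API response -- the sort of blob you might paste into an
# LLM and ask for a model. All field names here are made up.
PAYLOAD = '{"id": 42, "name": "widget", "tags": ["a", "b"], "price": 9.99}'

@dataclass
class Product:
    id: int
    name: str
    tags: list
    price: float

    @classmethod
    def from_json(cls, raw: str) -> "Product":
        data = json.loads(raw)
        # An LLM will emit a mapping like this without complaint, but
        # verifying each field, its type, and whether it's optional is
        # exactly the review step you can't skip.
        return cls(id=data["id"], name=data["name"],
                   tags=data["tags"], price=data["price"])

product = Product.from_json(PAYLOAD)
print(product.name, product.price)  # widget 9.99
```

This is the "fancy rewriting" tier of work: a human still has to confirm the model matches the API contract, not just that the code runs.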