TL;DR: AI's huge initial productivity gains can create a false sense of competence that leaves you stranded when you hit problems beyond its reach.
I consider myself an experienced developer and AI user. I was recently tasked with getting an old commercial software product to work. It had not been maintained, was not compatible with current OSes and packages, and its huge installation guide, full of manual steps, was probably never fully correct. I also didn't know many of the underlying packages and application servers well, which normally wouldn't matter for this kind of 3rd-party product - until it did. Anyway, I decided the right approach would be to sanitize it all and dockerize it, and since I hadn't worked much with Docker, I turned to AI for help. It quickly found answers and solutions to lots of problems that came along. I could basically give it the whole installation guide and have it dockerize many steps first shot. It even helped with some of the 'application level' problems that started to emerge, and I became more and more ambitious in what I let the AI do. Huge success.
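To give a sense of what that 'first shot' dockerization of manual install steps can look like, here is a minimal sketch - every image tag, path, package, and script name in it is an illustrative placeholder, not from the actual product:

```dockerfile
# Hypothetical first pass: pin an old base image so the legacy
# product's OS and package assumptions still hold.
FROM ubuntu:14.04

# Install the runtime dependencies the installation guide lists manually
# (placeholder package names).
RUN apt-get update && apt-get install -y \
        openjdk-7-jre-headless \
        unzip \
    && rm -rf /var/lib/apt/lists/*

# Copy the vendor distribution in and run its installer non-interactively
# (placeholder archive and installer flag).
COPY legacy-product.zip /opt/
RUN cd /opt && unzip legacy-product.zip \
    && /opt/legacy-product/install.sh --silent

# Expose the embedded application server's port and start it.
EXPOSE 8080
CMD ["/opt/legacy-product/bin/start-server.sh"]
```

The appeal is obvious: each manual step from the guide becomes a repeatable, pinned instruction. The catch, as the rest of this story shows, is that you can assemble such a file without really understanding what each step does.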
That lasted until I reached a problem in one of the embedded application servers that none of the AIs could crack. I tried everything - fresh context windows, different AIs, providing all conceivably useful context (stack traces, even decompiled files) plus symptoms based on my own hunches and debugging skills. I also had the AI suggest what further information it might need. Nothing helped. This problem was completely out of reach for the AIs, even with my debugging help. And here was the worst part: because I had let the AI do so much and hadn't researched it myself, I kept trying to get the AI to fix it, wasting a lot of time because it ultimately couldn't. I was also no longer in a position to debug it myself. I didn't fully know what I was doing; I was missing rungs on the knowledge ladder. Eventually I had to throw in the towel and retrace everything the AI had done - why it was done that way, how and why it worked - properly research all of it, and THEN deep-debug the problem.
This is not a criticism of AIs as such; it is unreasonable to expect a quick fix for everything. It is just to show how some of the initial time savings (which were indeed huge) can be eaten away when the quick fixes stop working.