I use AI to help me code a lot. Not sure I like it. Very often it leads me down dead ends. Or has me spending way too much time explaining what I want. Or trying to cobble together incompatible pieces that were spit out at me. Or getting burned by bad information, out-of-date documentation, pseudo-code, etc.
I think a big problem in general is how the AIs are presented: largely on a Wikipedia model. That same idea of "information" or explainers also infected Google, for example, where instead of giving search results they try to fake their way through giving ANSWERS.
Same with AI. I feel like it should be seen (and present itself) with more of a "maybe you could try this...?" attitude. Or "some people seem to claim that..." Or "what about maybe...?" Instead of posing as a know-it-all. In a lot of the good writing I read, the author doesn't pose as an authority. Even the best authorities hedge with caveats and seem more excited to get you thinking about *how* to approach things than to hand you an ANSWER.
I think it's a version of mansplaining, mixed with marketing, mixed with hoodwinking. It betrays a set of VALUES (ones I don't hold), and makes the whole thing baldly IDEOLOGICAL. I doubt this attitude naturally emerges from GPT training; I suspect it's fine-tuned in (though maybe largely unconsciously).
It really is detrimental to these interfaces.