AI is a tantalizing tease. You can see glimpses of what it could do, but then it hallucinates and produces something that's worse than useless, and potentially dangerous.
As a software developer using AI, I can tell when it produces garbage or something that isn't viable, and I'll either ignore the suggestion or, if I'm prompting, steer it in the right direction. But I'm drawing on 35 years of experience to filter out the garbage it produces. It does speed up my work, mostly by saving me time digging through API documentation and the like, so I use it, but with caution.
Now, when you throw this tech at the masses and expect it to work correctly 100% of the time, that's another story. If I tell Alexa or Siri to turn something on or off, set a timer, vacuum my floor, or close the garage door, there is no room for interpretation. Either it understood my speech and does exactly what I said, or it asks for clarification, or it flat-out does nothing. The danger with AI is that it makes an assumption or hallucinates something: it opens my garage door while I'm on vacation because I said my car is too hot, and Alexa assumed the car was still in the garage and decided opening the door might cool it off.
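To make that concrete, here's a minimal sketch in Python of the fail-closed behavior I mean (the command phrases and handlers are hypothetical, not any real Alexa or Siri API): act only on an exact, unambiguous command, and otherwise refuse rather than infer.

```python
# Hypothetical fail-closed command dispatcher: exact match or nothing.
COMMANDS = {
    "close the garage door": lambda: print("closing garage door"),
    "open the garage door": lambda: print("opening garage door"),
    "set a timer": lambda: print("setting timer"),
}

def handle(utterance: str) -> None:
    action = COMMANDS.get(utterance.strip().lower())
    if action is not None:
        action()  # understood: do exactly what was said
    else:
        # No guessing, no "helpful" inference from context.
        print("Sorry, I didn't get that.")

handle("close the garage door")  # acts
handle("my car is too hot")      # refuses rather than inventing an action
```

The whole point is that last branch: an assistant that can't map my words to a known command should stop, not improvise.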
Back in the late 1980s, I was messing around with a tantalizing little piece of tech: an IC chip Radio Shack sold for about $15 that could recognize speech. The possibilities seemed pretty cool, so I began experimenting with it. After a bit I realized it was pretty much a worthless gimmick, because almost everything you said was converted ("best fit") into one of the handful of words it was trained to recognize. I don't remember the exact words, but they were things like "yes", "no", "on", "off", "go", "stop", etc. Since the algorithm almost always converted whatever you said into one of the words it recognized, it really only worked in a totally quiet environment where the only words spoken were the ones the device understood. Otherwise it would constantly trigger on words that didn't even sound close to what it thought they were.
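For flavor, here's a toy Python sketch of that failure mode (purely illustrative; the real chip compared audio features, not letters, and the similarity scoring here is made up): a matcher forced to return its closest vocabulary word no matter how poor the fit, next to one that rejects low-confidence input.

```python
# Toy "best fit" recognizer with a six-word vocabulary.
KNOWN_WORDS = {"yes", "no", "on", "off", "go", "stop"}

def similarity(heard: str, word: str) -> float:
    """Crude letter-overlap score; stands in for comparing audio features."""
    shared = set(heard) & set(word)
    total = set(heard) | set(word)
    return len(shared) / max(len(total), 1)

def best_fit(heard: str) -> str:
    # Always returns SOME vocabulary word, however poor the match;
    # this is why the chip kept triggering on unrelated speech.
    return max(KNOWN_WORDS, key=lambda w: similarity(heard, w))

def best_fit_with_rejection(heard: str, threshold: float = 0.6) -> str | None:
    # Same matcher, but it refuses when nothing is close enough.
    word = best_fit(heard)
    return word if similarity(heard, word) >= threshold else None

print(best_fit("taco"))                 # forced to answer: "stop"
print(best_fit_with_rejection("taco"))  # None: fails closed instead
```

The missing piece in the chip was that second function: permission to say "I don't know" instead of being forced to answer.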
AI has been like this for a long time, and it has "tricked" a lot of researchers over the last half century. At first the blocker was always hardware limitations (if only we had more RAM, more training data, faster processing), but now it's a fundamental misunderstanding by a lot of people of exactly what LLMs are doing, and what they really should be used for.