That's the real problem. 99.5% of the people using them, or being encouraged to use them, do NOT understand their limitations - and companies are doing their best to make sure that does not change. This is the same bullshit you see re: robotics in fast food and other complex but low-status jobs.
Robots are not going to replace people anytime soon, either. They are not adaptable enough, and the problems that arise in the real world are too varied. When systems fail and you have human workers, you can adapt and still generate *some* revenue, or get SOME work done. Most businesses can't afford to "be down" until you can fix the robots. But at least the limitations there are clear to most people - or if not, they BECOME clear within hours of seriously considering that level of automation.
LLMs are not as transparent, or as intuitive. In fact, they are the opposite, and AI companies actively encourage misunderstanding by inserting terms like "reasoning" or "analyzing" - when they do no such thing. They are simply sampling the next most probable token, over and over. Companies are representing them as something approaching AGI, when they are no such thing. It's not that their reasoning ability is poor or limited - it is that it is literally NON-EXISTENT.
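To make "sampling the next most probable token" concrete, here's a toy sketch in Python - the token scores are made up and real models are vastly larger, but generation is essentially this one step repeated, with nothing resembling deliberation anywhere in the loop:

```python
import math
import random

def softmax(scores):
    # Turn raw scores into probabilities that sum to 1.
    m = max(scores.values())
    exps = {tok: math.exp(s - m) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def next_token(scores, temperature=1.0):
    # Lower temperature sharpens the distribution, higher flattens it.
    probs = softmax({tok: s / temperature for tok, s in scores.items()})
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Hypothetical scores a model might assign after "Flight 204, you are cleared to":
scores = {"land": 3.1, "hold": 1.9, "taxi": 0.7, "climb": -0.5}
print(next_token(scores))  # usually "land" - but only probabilistically
```

There is no model of the airport, the aircraft, or the consequences in there - just relative likelihoods of strings.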
Worse, they are not being marketed OR deployed as assistants or force multipliers, but as *replacements* for entire processes, without human oversight or intervention - when they are in NO way suitable for, or well enough trained to do, any such thing.
Most things comply somewhat closely with the 80/20 rule... 20% of the work takes 80% of the time. When well trained and in a solid framework (which is a lot of work in and of itself), LLMs can do the other 80%, maybe about 80% of the time. That's a huge productivity boost - but it's being sold as much, much more than that. An Air Traffic Control LLM has been floated. Not as a joke. No one who "understands the limitations" would ever take that seriously - but people in positions of responsibility are still entertaining insanity like this.
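For anyone who wants the back-of-envelope version of that 80/80 claim (these are the paragraph's own illustrative numbers, not measurements):

```python
# Assumed numbers from the 80/20 framing above - illustrative, not measured.
easy_share_of_tasks  = 0.80  # fraction of tasks the LLM can attempt
easy_share_of_effort = 0.20  # by 80/20, those tasks take only 20% of the time
llm_success_rate     = 0.80  # how often it gets that easy slice right

tasks_automated = easy_share_of_tasks * llm_success_rate
effort_saved    = easy_share_of_effort * llm_success_rate

print(f"tasks handled end-to-end: {tasks_automated:.0%}")     # 64%
print(f"upper bound on effort saved: {effort_saved:.0%}")     # 16%
```

A real boost, and worth having - but "handles roughly two-thirds of the routine tasks" is a very different pitch from "replaces the process," and every failure in the remaining slice still lands on a human who understands the work.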