Exactly. As technologists, we need the output of computers to be precise and accurate. LLMs might be precise, but they're very often inaccurate, and that's not acceptable to us.
The average person doesn't live in a world where accuracy matters. A colleague said she used AI all the time, and I asked her how. She said she often tells it the contents of her fridge and asks it for a recipe that would use those ingredients. She said, "Yeah, and it's really accurate too." I don't know how you'd measure accuracy on a task like that, but it doesn't really matter. If you're just mixing some ingredients together in a frying pan, you probably can't go too far wrong. As long as you don't ask it for a baking recipe, it'll work out.
And I think that's what's going on. The people who love AI either don't know enough to realize when it's wrong, or are just asking it open-ended questions, the way you'd ask a fortune teller, and it spits out something generic enough that you can't disprove it anyway.