It's not just you (Score 5, Insightful)
We've known since the beginning that incorrect responses make up 30-70% of LLM output. Why? Because they're prediction engines, nothing more. They're fancy and they sound human, but they're built to be convincing, not to be right. Error is baked into the architecture, and it's getting worse: even setting model collapse aside, every attempt to patch the problem just makes it worse, because the flaw is the design itself.
This person sounds like they've never heard of the invented case citations that have landed several lawyers in trouble, or the vibe coders left with a pile of garbage the moment they move beyond trivial apps. I don't think they've been paying attention.
The solution is to do what we know works: use systems whose architectures prioritise factual (or at least accurately referenceable) responses instead of merely sounding good. That's not the current generation of LLMs, and it never will be. Wrong tool. Wrong job.
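To make "accurately referenceable" concrete, here's a minimal sketch in Python of the behaviour I mean: the answer layer only returns text it can tie back to a retrieved passage, and declines otherwise. Everything in it (the Passage type, the toy keyword retrieval, the made-up case citation) is hypothetical illustration, not any real system's API.

from dataclasses import dataclass

@dataclass
class Passage:
    source: str   # e.g. a case citation or document URL (placeholder values below)
    text: str

def answer_with_citation(question: str, corpus: list[Passage]) -> str:
    # Toy "retrieval": keyword overlap stands in for a real search index.
    terms = set(question.lower().split())
    best = max(corpus, key=lambda p: len(terms & set(p.text.lower().split())), default=None)
    if best is None or not (terms & set(best.text.lower().split())):
        # Refuse rather than produce a fluent guess with no source behind it.
        return "No supporting source found; declining to answer."
    # Every returned claim carries the passage it came from, so a reader
    # (or a lawyer) can check it instead of trusting confident-sounding output.
    return f"{best.text} [source: {best.source}]"

corpus = [Passage("Smith v. Jones, 123 F.3d 456 (hypothetical)",
                  "The court held that the contract was void.")]
print(answer_with_citation("What did the court hold about the contract?", corpus))

The point isn't the toy keyword matching; it's the contract: no citation, no answer. A pure next-token predictor has no such constraint anywhere in its architecture.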