Re:Crazy, damaged thinking is worse than deception (Score:1)
> hallucinations are an unfortunate side effect of their design
No, "hallucinations" are in fact a deliberate outcome of their design and represent the LLM working exactly as it was built to. "Hallucinations" is a bad word to use because it erroneously anthropomorphizes the algorithm. It's an intentional marketing gimmick meant to trick people into thinking AI is more sophisticated and magical than it really is and condition users to excuse its "mistakes" as temporary mishaps or glitches that can be overcome, rather than seeing them for the hard limitations of the technology that they actually are.
That last part is essential to companies like OpenAI, because their marketability depends on your belief that ChatGPT is supposed to give you the right answers rather than the most likely answers. No amount of hard-coded guardrails and benchmark tuning (smoke and mirrors, respectively) will ever change the fundamental nature of its design.
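To make "most likely answers" concrete, here's a toy sketch (made-up numbers, no real model involved) of what next-token decoding actually does: the model assigns probabilities to continuations based on patterns in its training data, and the decoder samples from that distribution. Truth never enters the loop.

    import random

    # Hypothetical next-token distribution after the prompt
    # "The first person to walk on Mars was" -- probabilities are
    # invented for illustration. They track what the training data
    # makes plausible, not what is true (nobody has walked on Mars).
    next_token_probs = {
        "Neil": 0.35,     # pattern-matched from "first man on the Moon"
        "Buzz": 0.20,
        "Elon": 0.15,
        "a": 0.25,
        "nobody": 0.05,   # the factually correct continuation ranks last
    }

    def sample_next_token(probs):
        # Weighted random choice, as temperature-based decoding does.
        tokens, weights = zip(*probs.items())
        return random.choices(tokens, weights=weights, k=1)[0]

    print(sample_next_token(next_token_probs))  # usually "Neil": fluent, confident, wrong

Everything layered on top of that sampler, guardrails included, is post-hoc filtering of the same distribution; the sampler itself has no notion of "correct", only "probable".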