Comment Re:"Now with 38% FEWER hallucinations!" (Score 1)
We really should not be using that term, because it implies that only some of an LLM's output is made up.
LLM output is factual/correct/valid/logical only coincidentally.
There are no rails for a GPT/LLM to go off of, no point where it "derails" into hallucination. There's no thinking mode versus hallucinating "mode" or whatever.
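To make the point concrete, here's a toy sketch of what generation actually is: one softmax-and-sample step per token, repeated. The vocabulary and logits below are made up for illustration (a real model scores tens of thousands of tokens), but the shape of the loop is the same, and nothing in it checks whether the chosen token turns out to be a fact or nonsense.

    import numpy as np

    def sample_next_token(logits, temperature=1.0):
        # The model's only job: turn scores into a probability
        # distribution and draw from it. There is no truth check
        # and no "hallucination mode" flag anywhere in this step.
        probs = np.exp(logits / temperature)
        probs /= probs.sum()
        return np.random.choice(len(probs), p=probs)

    # Made-up toy vocabulary and logits, purely for illustration.
    vocab = ["Paris", "London", "Mars", "cheese"]
    logits = np.array([3.1, 1.2, 0.4, -1.0])

    # "The capital of France is ..." -> the exact same sampling
    # step runs whether the draw comes up "Paris" or "cheese".
    print(vocab[sample_next_token(logits)])

Whether the output is right or wrong, the mechanism that produced it is identical.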