Comment Re: Bubble AI (Score 1) 31
Take this statement of what an LLM is:
"Given a sequence of words, what word is likely to come next?"
Hallucinations are an inherent feature of LLMs, expressly because next token prediction is what they do.
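To make that concrete, here is a minimal sketch of next-token sampling, assuming a hypothetical model object whose predict() returns a probability distribution over a vocabulary (the names are illustrative, not any real library's API):

import random

def next_token(model, context):
    # The model assigns a probability to every token in its vocabulary,
    # conditioned on the context seen so far.
    probs = model.predict(context)  # hypothetical: {token: probability}
    tokens, weights = zip(*probs.items())
    # Sampling picks a *plausible* token, not a *true* one -- there is no
    # truth check anywhere in this loop, which is where hallucination lives.
    return random.choices(tokens, weights=weights, k=1)[0]

def generate(model, prompt, n=20):
    context = list(prompt)
    for _ in range(n):
        context.append(next_token(model, context))
    return context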
There is a whole line of reasoning, using NP as a lens, that can provide insight into this, even if it is not exact: choosing the next word can be thought of as a decision problem.
NP is equal to SO-E, the class of second-order queries where the second-order quantifiers are only existentials.
coNP is equal to SO-A, the class of second-order queries where the second-order quantifiers are only universals.
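To ground the SO-E / SO-A distinction, the standard Fagin's-theorem example (with E as the edge relation of a graph) is graph 3-colorability, an SO-E query that guesses three color sets and checks them with a first-order formula, versus its complement, non-3-colorability, which is the corresponding SO-A query:

3COL: \exists R\, \exists G\, \exists B\; [\, \forall x\, (R(x) \lor G(x) \lor B(x)) \;\land\; \forall x \forall y\, (E(x,y) \rightarrow \neg(R(x) \land R(y)) \land \neg(G(x) \land G(y)) \land \neg(B(x) \land B(y))) \,]

non-3COL: \forall R\, \forall G\, \forall B\; [\, \exists x\, \neg(R(x) \lor G(x) \lor B(x)) \;\lor\; \exists x \exists y\, (E(x,y) \land ((R(x) \land R(y)) \lor (G(x) \land G(y)) \lor (B(x) \land B(y)))) \,]

The existential side only has to exhibit one witness (one coloring); the universal side has to rule out every candidate, which is the asymmetry being leaned on here.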
We think NP != coNP, and because LLMs are doing prediction, and are only successful because the task can be efficiently parallelized, the universal (coNP) form is simply not accessible to them.
The point is that hallucinations are something we can minimize, but will never eliminate. We are in fact exploiting the root cause of hallucination as the core feature.
The ability to find similar patterns and apply them to new problems is exactly why LLMs work as well as they do.