Re:Why doesn't this exist? (Score: 1)
That's not how LLMs work. They can only extrude what's in their training data, and they have no way to judge whether a case is relevant.
This is one of many reasons why LLMs are of limited usefulness. They don't encode knowledge, just words. They don't reason, in spite of OpenAI's claims. They're a clever trick with some uses, but they aren't a universal solution to knowledge work, nor a path towards that goal.