Comment Re:"Science" has the same problem, thank you RFKjr (Score 1) 77
LLMs are completely unable to verify.
That's an exaggeration. You can give an LLM access to real things, and it can use those real things to verify its claims. I flatly do not understand why the vendors are not doing this. It wouldn't make the models infallible, but it would go a long way toward improving the situation, and they are clearly not doing it. They could also use non-AI software tools to check the AI output. I'd bet you could use a plagiarism-detection tool for this purpose with little to no modification, but I'd also bet this kind of tool already exists anyway.
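To make the point concrete, here's a minimal sketch of what a non-AI verification pass might look like. Everything here is hypothetical illustration, not any vendor's actual pipeline: it just checks that every passage the model puts in quotation marks actually appears verbatim in the source it was given, using plain string matching.

```python
import re

def extract_quotes(answer: str) -> list[str]:
    """Pull out the double-quoted passages from the model's answer."""
    return re.findall(r'"([^"]+)"', answer)

def verify_quotes(answer: str, source_text: str) -> list[tuple[str, bool]]:
    """Flag each quoted passage as present or absent in the source text."""
    return [(q, q in source_text) for q in extract_quotes(answer)]

# Toy example: one real quote, one fabricated quote.
source = "The mitochondria is the organelle that generates most of the cell's ATP."
answer = 'The source states "generates most of the cell\'s ATP" and also "proves cells are conscious".'

for quote, ok in verify_quotes(answer, source):
    print(("OK  " if ok else "FAIL"), quote)
```

Trivially simple, and it would already catch a whole class of fabricated citations with zero AI involved, which is the point: the check doesn't have to be smart, it just has to be grounded in something real.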