This is not a good thing (Score 1)
Of course, users have no assurance that an LLM is "confessing" to every lie or hallucination. But it will 'fess up often enough to foster habitual, reflexive trust. That misplaced trust is already a big problem for society, and more of it is a very bad idea.
I predict that increased trust will make users even less critical of AI than they are of social media content. AI will become a bigger, more effective propaganda generator. It will be used to shape public opinion, convincing citizens to vote against their own best interests.