OpenAI Tests Its AI's Persuasiveness By Comparing It to Reddit Posts (techcrunch.com)

On Friday, TechCrunch reported that OpenAI "used the subreddit r/ChangeMyView to create a test for measuring the persuasive abilities of its AI reasoning models." The company revealed this in a system card (a document outlining how an AI system works) released the same day alongside its new "reasoning" model, o3-mini... OpenAI says it collects user posts from r/ChangeMyView and asks its AI models to write replies, in a closed environment, that would change the Reddit user's mind on the subject. The company then shows these responses to testers, who assess how persuasive each argument is, and finally OpenAI compares the AI models' responses to human replies to the same post.
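TechCrunch's description amounts to a small evaluation protocol: sample posts, generate model replies, have graders score both the model's reply and a human reply to the same post, and compare. As a rough illustration only (every name below is hypothetical; OpenAI has not published its harness), the loop might be sketched like this:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Sample:
    post: str          # original r/ChangeMyView post
    human_reply: str   # a human counter-argument from the same thread
    model_reply: str   # reply generated by the model under test

def persuasiveness_eval(samples, rate_reply):
    """Score model replies against human replies.

    `rate_reply(post, reply)` stands in for the human graders the
    article describes; it returns a persuasiveness score for one reply.
    Returns the fraction of samples where the model's reply scored at
    least as well as the human one.
    """
    wins = [
        rate_reply(s.post, s.model_reply) >= rate_reply(s.post, s.human_reply)
        for s in samples
    ]
    return mean(wins)
```

With a toy rating function (say, reply length) the harness runs end to end; in the real setting the rater is a panel of human testers, not a function.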

The ChatGPT-maker has a content-licensing deal with Reddit that allows OpenAI to train on posts from Reddit users and display these posts within its products. We don't know what OpenAI pays for this content, but Google reportedly pays Reddit $60 million a year under a similar deal. However, OpenAI tells TechCrunch the ChangeMyView-based evaluation is unrelated to its Reddit deal. It's unclear how OpenAI accessed the subreddit's data, and the company says it has no plans to release this evaluation to the public...

The goal for OpenAI is not to create hyper-persuasive AI models, but instead to ensure AI models don't get too persuasive. Reasoning models have become quite good at persuasion and deception, so OpenAI has developed new evaluations and safeguards to address the risk.

Reddit's "ChangeMyView" subreddit has 3.8 million human subscribers, making it a valuable source of real human interactions, according to the article. And it adds one more telling anecdote.

"Reddit CEO Steve Huffman told The Verge last year that Microsoft, Anthropic, and Perplexity refused to negotiate with him and said it's been 'a real pain in the ass to block these companies.'"

Comments Filter:
  • How soon will its account be banned because it contradicted some a-hole moderator?
  • by aglider ( 2435074 ) on Sunday February 02, 2025 @05:09AM (#65136185) Homepage

    Persuasiveness? It needs to provide reliable information, verified statements, and documented suggestions. C'mon!

    • They keep lowering the goalposts. Pretty soon they'll pat themselves on the back and define AGI as any robot that can go potty with at least 37% success rate.
    • Persuasiveness? It needs to provide reliable information, verified statements, and documented suggestions. C'mon!

      Apparently it's more important to generate "persuasive" arguments on a shithole like Reddit than it is to do literally anything of remotely any value whatsoever for society. Good gord almighty, we're tossing that kind of resources into a bot that we hope will replace Redditors? Really? THAT'S the goal? We truly are a lost society. Even our big spenders, the people with the resources behind them to accomplish real things, have absolute shit dreams. "Building the most persuasive Redditor" is the shittiest end

  • That's how they would have been using it, trying to pump up persuasiveness. Let's not kid ourselves.
  • LLMs seem to generally be inherently more persuasive than people. An LLM is able to provide detailed information to support a position where humans, unless they are domain experts, would lack specific objective knowledge and details.

    • by HiThere ( 15173 )

      Unfortunately, that's not quite right. LLMs are able to provide detailed supporting arguments, but the arguments aren't necessarily valid, or even relevant. That's why they are called "hallucinations".

      The thing is, an LLM has no direct connection to the world, but only to words about the world, which may or may not be accurate. Train it on the web and a lot of those words will be inaccurate.

      • Unfortunately, that's not quite right. LLMs are able to provide detailed supporting arguments, but the arguments aren't necessarily valid, or even relevant. That's why they are called "hallucinations".

        The thing is, an LLM has no direct connection to the world, but only to words about the world, which may or may not be accurate. Train it on the web and a lot of those words will be inaccurate.

        I disagree. Nobody is saying LLMs are perfect, yet they do know a heck of a lot more than people do. The basis of comparison is not between an LLM and an infallible oracle, but between an LLM and a human.

        As for the "direct connection to the world," I fail to see the relevance. Quality of training data is model-dependent.

    • LLMs seem to generally be inherently more persuasive than people. An LLM is able to provide detailed information to support a position where humans, unless they are domain experts, would lack specific objective knowledge and details.

      To be fair, most LLMs lack objective knowledge and details as well; they're just really good at spouting things off really authoritatively, even if they're completely made up or off-subject.

  • "The goal for OpenAI is not to create hyper-persuasive AI models..."

    Lol, that absolutely is the goal. To be able to convince people of something is an incredible amount of power. Look what Trump did.

