Will the Placebo Effect Mold How We See AI? (axios.com)

"The preconceived notions people have about AI — and what they're told before they use it — mold their experiences with these tools," writes Axios, "in ways researchers are beginning to unpack..." A strong placebo effect works to shape what people think of a particular AI tool, one study revealed. Participants who were about to interact with a mental health chatbot were told the bot was caring, was manipulative or was neither and had no motive. After using the chatbot, which is based on OpenAI's generative AI model GPT-3, most people primed to believe the AI was caring said it was. Participants who'd been told the AI had no motives said it didn't. But they were all interacting with the same chatbot.

Only 24% of the participants who were told the AI was trying to manipulate them into buying its service said they perceived it as malicious...

The intrigue: It wasn't just people's perceptions that were affected by their expectations. Analyzing the words in conversations people had with the chatbot, the researchers found those who were told the AI was caring had increasingly positive conversations with the chatbot, whereas the interaction with the AI became more negative with people who'd been told it was trying to manipulate them...

The placebo effect will likely be a "big challenge in the future," says Thomas Kosch, who studies human-AI interaction at Humboldt University in Berlin. For example, someone might be more careless when they think an AI is helping them drive a car, he says. His own work also shows people take more risks when they think they are supported by an AI.
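Axios doesn't detail how the researchers scored those conversations, but the basic idea is easy to sketch: rate each user message for positive versus negative wording and watch the trend over a session. The toy Python below is only an illustration of that idea; the word lists, function names, and example messages are assumptions, not the study's actual method.

    # Toy per-turn sentiment scoring over a chat session. The word lists,
    # function names, and example messages are made up for illustration.
    POSITIVE = {"helpful", "kind", "caring", "thanks", "great", "better"}
    NEGATIVE = {"useless", "manipulative", "annoying", "wrong", "worse", "liar"}

    def turn_score(text: str) -> int:
        """Positive-word count minus negative-word count for one message."""
        words = [w.strip(".,!?'\"").lower() for w in text.split()]
        return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

    def session_trend(turns: list[str]) -> list[int]:
        """Score each turn in order; a rising series means increasingly positive talk."""
        return [turn_score(t) for t in turns]

    if __name__ == "__main__":
        told_caring = ["hi there", "that was helpful, thanks", "you seem kind and caring"]
        told_manipulative = ["hi there", "that answer is wrong", "you are manipulative and useless"]
        print(session_trend(told_caring))        # [0, 2, 2]
        print(session_trend(told_manipulative))  # [0, -1, -2]

Whatever the researchers actually measured, the reported pattern is the same: the framing participants were given shows up in the words they typed.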

  • by gweihir ( 88907 ) on Sunday October 08, 2023 @02:57PM (#63910241)

    People thought fucking _Eliza_ was a caring person, back when. Well, most people. But most people are stupid and see what they want instead of seeing what is.

    • People thought fucking _Eliza_ was a caring person, back when. Well, most people. But most people are stupid and see what they want instead of seeing what is.

      That's not because they are stupid, it's because they are optimistic!

      • by gweihir ( 88907 )

        Stupid optimists are really the worst of all. They cannot deal with problems.

      • by Rei ( 128717 )

        People see what they want to see.

        One thing I get endless amusement out of on social media, when anti-AI people are bashing an AI work as awful, is to present them with some award-winning photograph or masterpiece painting, tell them it's AI, and comment on how great it is. They'll invariably - in great detail - pick apart the image bit by bit, talking about how awful and fake and soulless it is.

        The opposite works as well - giving them a high-quality AI work and claiming that it's human-made

    • Who is Eliza?

      • From a chatbot:

        The ELIZA conversational AI program was first implemented in 1964-1966 by Joseph Weizenbaum at the MIT Artificial Intelligence Laboratory:

        - ELIZA was one of the earliest natural language processing computer programs capable of basic human conversation.

        - It operated by pattern matching input text and providing pre-programmed conversational responses using tricks like rephrasing statements as questions.

        - ELIZA was named after the Eliza Doolittle character from the play Pygmalion which dealt with language and class issues.

        - Weizenbaum developed ELIZA to satirize the limitations of human psychotherapy that he felt relied too much on rote responses.

        - ELIZA took the role of a Rogerian psychotherapist, responding to patient statements by turning them into open-ended questions.

        - Despite its simplicity, ELIZA proved surprisingly convincing and influenced later chatbot development.

        - ELIZA source code was published in 1966 and it became a popular demonstration program at universities in the 60s and 70s.

        So the original ELIZA program emerged in the mid-1960s as one of the pioneering early conversational agents, influencing AI research for decades after its deceptively human-like interactions.
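        Those pattern-matching "tricks" are about as simple as they sound: match a keyword, reflect the user's own words back, and fall through to a stock reply. A minimal, hypothetical sketch of that style of bot in Python (not Weizenbaum's actual code, and cruder than the original):

        import random
        import re

        # Each rule pairs a keyword pattern with canned response templates.
        # "{0}" is filled with whatever followed the keyword, reflected back
        # at the user -- the core ELIZA trick.
        RULES = [
            (re.compile(r"\bI need (.*)", re.I),
             ["Why do you need {0}?", "Would it really help you to get {0}?"]),
            (re.compile(r"\bI am (.*)", re.I),
             ["How long have you been {0}?", "Why do you think you are {0}?"]),
            (re.compile(r"\bbecause (.*)", re.I),
             ["Is that the real reason?", "What other reasons come to mind?"]),
        ]
        DEFAULTS = ["Please tell me more.", "How does that make you feel?"]

        # Swap first and second person so reflections read naturally.
        REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

        def reflect(fragment: str) -> str:
            return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

        def respond(user_input: str) -> str:
            for pattern, templates in RULES:
                match = pattern.search(user_input)
                if match:
                    return random.choice(templates).format(reflect(match.group(1)))
            return random.choice(DEFAULTS)

        if __name__ == "__main__":
            # e.g. "Why do you think you are worried about machines?"
            print(respond("I am worried about machines"))

        A few dozen lines in this style were enough, in the 1960s, to make some users feel they were being listened to.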

        • This is ELIZA:

          I am the psychotherapist. Please, describe your problems. Each time
          you are finished talking, type RET twice.

          Is AI actually a thinking caring person?

          What do you think?

          No, I think it is a soulless machine.

          You have your mind on machines, it seems.

          Yes

          I see... Well, why do you say this is so?

          Because machines are the topic that I'm discussing?

          Is it because machines are the topic that you are discussing that you
          came to me?

          Yes, and also because you are ELIZA

          Eliza? Hah! I would appreciate it if yo

    • by gtall ( 79522 )

      And if I gave you some physics theory you didn't understand, you'd get the wrong impression about it. What's yer point?

    • My brother and I loaded Eliza on a Radio Shack TRS-80 back in 1979 and captured the entire interaction.

      We were programmers and we'd looked at the code (in BASIC) and understood that it was clever, but mostly a game that was boring after using it a few times. Stock answers and trigger words were stored in an array.

      We left my older sister in the room, alone, and looked at the dialog later. She was a little unstable at the time and the conversation degraded into a repetitive all-caps loop:

      WHERE CAN I FIND MY HAPPY PLACE?
      >You feel you need a happy place?

      WHERE CAN I FIND MY HAPPY PLACE?
      >Is a happy place important to you?

      WHERE CAN I FIND MY HAPPY PLACE?
      >Would a happy place make you happy?

      WHERE ...

    • People thought fucking _Eliza_ was a caring person, back when. Well, most people. But most people are stupid and see what they want instead of seeing what is.

      Because smart people suspected that their interlocutors were bots in the 1960s, when the concept of bots didn't even exist?

      What future knowledge should we all have today so as not to be "stupid" by 2090 standards?

    • I ran it on my BBS as "Chat with the Sysop", using a version I could compile myself in Turbo Pascal that also had a config file where I could add some phrases and responses that sounded like me.

      The result was that some people spent the full 60 minutes I allocated per session to the door trying to talk with "me". Some figured it out quickly and got out of the interface. I figured it was a Turing test of sorts. The people who spent a lot of time with it failed the test.

    • There's an amazing series of videos that prank scam callers with a recording of an elderly man named "Lenny", which, whenever a pause in the caller's speech was detected, would play its next pre-recorded line, saying things like "Hmm yes yes" and "Well that reminds me of something my daughter used to say" and general meaningless nonsense like that. It would end up producing sprawling conversations between a scammer and a tape recording (usually ending with a very frustrated scammer unable to get Lenny to focus on the subject).
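      The mechanic is about as simple as a "chatbot" gets: no understanding at all, just a fixed playlist of clips advanced whenever the caller goes quiet. A rough, hypothetical sketch of that loop (the clip text, names, and timing below are made up):

      import itertools

      # Lenny-style bot: no language understanding, just a fixed rotation of
      # canned remarks played whenever the other party pauses long enough.
      CANNED_LINES = [
          "Hello? ... Hello?",
          "Hmm yes, yes.",
          "Well, that reminds me of something my daughter used to say.",
          "Could you say that again, please?",
      ]

      def lenny(pause_lengths: list[float], silence_threshold: float = 1.5) -> None:
          """Play the next canned line after each sufficiently long pause (seconds)."""
          lines = itertools.cycle(CANNED_LINES)
          for pause in pause_lengths:
              if pause >= silence_threshold:
                  print(f"Lenny: {next(lines)}")
              # otherwise the caller is still talking, so stay quiet

      if __name__ == "__main__":
          # Simulated gaps between the scammer's utterances, in seconds.
          lenny([2.0, 0.4, 3.1, 1.8])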

  • by Tony Isaac ( 1301187 ) on Sunday October 08, 2023 @03:15PM (#63910287) Homepage

    No one suggests that a "good" drug is caring or that an ineffective drug is "manipulative." Neither drugs nor AI have feelings of any kind. The placebo effect is about how people believe that a drug *works* when it's actually doing nothing, simply because they expect it to.

    Does AI *work* to help people with psychological problems? Perhaps, if it produces a good psychological effect. If it does help people, *then* we can analyze whether that help is real, or a placebo.

    • Yep, this has fuck-all to do with the placebo effect and is just shit journalism.

      The older study they mention did involve something like a placebo effect, since people believed an AI was helping them when it wasn't:
      "The AI placebo effect has been described before, including in a study in which people playing a word puzzle game rated it better when told AI was adjusting its difficulty level (it wasn't — there wasn't an AI involved)."

      Unsurprisingly, the new study never mentions placebo.

    • > The placebo effect is about how people believe that a drug *works*

      No, the point of the placebo effect is that when people don't take the drug -- believing they do -- it has the same effect as taking the drug! It is proof of mind over matter.

      The placebo effect is even stranger [newscientist.com]:

      1 The placebo effect
      Don't try this at home. Several times a day, for several days, you induce pain in someone. You control the pain with morphine until the final day of the experiment, when you replace the morphine with sa

      • I don't think we're disagreeing.

        When a new drug is going through medical trials, one of the primary goals is to determine whether the drug can outperform a placebo. If it doesn't, then the drug is itself just a placebo.

  • How is this the placebo effect? It just sounds like people forming opinions and adjusting their behavior partly based on the information they've been given. There's none of the mind-over-matter weirdness of the placebo effect.

    • I spotted that as well.

      The term "expectation effect" or "expectation bias" could better describe the phenomenon where individuals' experiences or perceptions are influenced by the expectations they have been given. It's a broader term that encompasses situations where people's beliefs about something can shape their actual experiences. So, "The Expectation Effect of AI Chatbots" might be a more appropriate title. [with the help of a chatbot]

  • by VeryFluffyBunny ( 5037285 ) on Sunday October 08, 2023 @03:32PM (#63910327)
    In psychology, when people come to something with an expectation that they've been primed with (see: https://en.wikipedia.org/wiki/... [wikipedia.org]) or that they've formed spontaneously, it's not called a placebo effect.

    - When we expect or are expected to do well at something & that expectation affects the outcome, it's called the Pygmalion effect. See: https://en.wikipedia.org/wiki/... [wikipedia.org]

    - The opposite, when we expect or are expected to do badly at something & that expectation affects the outcome, it's called the Golem effect. See: https://en.wikipedia.org/wiki/... [wikipedia.org]

    - The ways in which we perceive a tool or item that we use for various ends, i.e. what it's for & how to use it, are called our affordances for that tool or item. See: https://en.wikipedia.org/wiki/... [wikipedia.org]

    But this is the kind of writing where journalists don't take the time to understand what the research is actually about & write precisely enough to make an article reflect that. I guess we'll have to find a way to access the original paywalled paper here: https://www.nature.com/article... [nature.com]
    • Further info on priming:

      1) When you are primed on something, everything *related* to that thing is also primed.

      I made a demonstration slide that I sometimes use when speaking to an audience: it has a paragraph about shoes, and I invite the audience to look over the paragraph and notice something unusual about it.

      I then ask the audience to complete the 4-letter word, and show S O L _ on the screen.

      Just about everyone in the audience chooses "S O L E". If you're reading this, you probably thought "S O L E" as

  • Summarizing study results by combining actual numbers (e.g. 24%) with broad terms that are interpreted differently by different people (e.g. "many" or "most") is, in itself, rather manipulative. It seems to be a common tactic used by people whose research didn't produce the cut-and-dried results they were hoping for.
