Will the Placebo Effect Mold How We See AI? (axios.com) 31
"The preconceived notions people have about AI — and what they're told before they use it — mold their experiences with these tools," writes Axios, "in ways researchers are beginning to unpack..."
A strong placebo effect works to shape what people think of a particular AI tool, one study revealed. Participants who were about to interact with a mental health chatbot were told the bot was caring, was manipulative, or was neither and had no motives. After using the chatbot, which is based on OpenAI's generative AI model GPT-3, most people primed to believe the AI was caring said it was, and participants who'd been told the AI had no motives said it didn't. But they were all interacting with the same chatbot.
Only 24% of the participants who were told the AI was trying to manipulate them into buying its service said they perceived it as malicious...
The intrigue: It wasn't just people's perceptions that were affected by their expectations. Analyzing the words in conversations people had with the chatbot, the researchers found those who were told the AI was caring had increasingly positive conversations with the chatbot, whereas the interaction with the AI became more negative with people who'd been told it was trying to manipulate them...
The placebo effect will likely be a "big challenge in the future," says Thomas Kosch, who studies human-AI interaction at Humboldt University in Berlin. For example, someone might be more careless when they think an AI is helping them drive a car, he says. His own work also shows people take more risks when they think they are supported by an AI.
Sure (Score:3)
People thought fucking _Eliza_ was a caring person, back when. Well, most people. But most people are stupid and see what they want instead of seeing what is.
Re: (Score:2)
People thought fucking _Eliza_ was a caring person, back when. Well, most people. But most people are stupid and see what they want instead of seeing what is.
That's not because they are stupid, it's because they are optimistic!
Re: (Score:3)
Stupid optimists are really the worst of all. They cannot deal with problems.
Re: (Score:2)
A pessimist is never disappointed.
Re: (Score:2)
People see what they want to see.
One thing I get endless amusement out of on social media, when anti-AI people are endlessly bashing an AI work as awful, is to present them with some award-winning photograph or masterpiece painting, tell them it's AI, and comment about how great it is. They'll invariably - in great detail - pick apart the image bit by bit, talking about how awful and fake and soulless it is.
The opposite works as well - giving them a high-quality AI work and claiming that it's human-made
Re: (Score:2)
It's like "audiophiles" who always pick the system that they think is more expensive, even if they are lied to.
Re: (Score:2)
Who is Eliza?
Re: (Score:2)
From a chatbot:
The ELIZA conversational AI program was first implemented in 1964-1966 by Joseph Weizenbaum at the MIT Artificial Intelligence Laboratory:
- ELIZA was one of the earliest natural language processing computer programs capable of basic human conversation.
- It operated by pattern matching input text and providing pre-programmed conversational responses using tricks like rephrasing statements as questions.
- ELIZA was named after the Eliza Doolittle character from the play Pygmalion which dealt with language and class issues.
- Weizenbaum developed ELIZA to satirize the limitations of human psychotherapy that he felt relied too much on rote responses.
- ELIZA took the role of a Rogerian psychotherapist, responding to patient statements by turning them into open-ended questions.
- Despite its simplicity, ELIZA proved surprisingly convincing and influenced later chatbot development.
- ELIZA source code was published in 1966 and it became a popular demonstration program at universities in the 60s and 70s.
So the original ELIZA program emerged in the mid-1960s as one of the pioneering early conversational agents, influencing AI research for decades after its deceptively human-like interactions.
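For the curious, here's a minimal sketch of the pattern-matching-and-reflection trick described above. It's in Python purely for illustration (the original ELIZA was written in MAD-SLIP, and the home-computer clones were usually BASIC), and the patterns and canned replies below are made up for the example, not taken from ELIZA's actual DOCTOR script:

    # Minimal ELIZA-style responder: regex patterns with canned replies,
    # plus pronoun "reflection" so statements can be echoed back as questions.
    import random
    import re

    # Swap first- and second-person words when echoing the user's words back.
    REFLECTIONS = {
        "i": "you", "me": "you", "my": "your", "am": "are",
        "you": "I", "your": "my", "are": "am",
    }

    # (pattern, list of reply templates); {0} is filled with the reflected match.
    RULES = [
        (re.compile(r"i need (.*)", re.I),
         ["Why do you need {0}?", "Would it really help you to get {0}?"]),
        (re.compile(r"i feel (.*)", re.I),
         ["Why do you feel {0}?", "How long have you felt {0}?"]),
        (re.compile(r"(.*) mother(.*)", re.I),
         ["Tell me more about your family."]),
    ]

    DEFAULTS = ["Please go on.", "I see. Can you elaborate?", "Why do you say that?"]

    def reflect(text):
        """Rewrite 'I need my coffee' as 'you need your coffee'."""
        return " ".join(REFLECTIONS.get(w.lower(), w) for w in text.split())

    def respond(user_input):
        for pattern, replies in RULES:
            match = pattern.search(user_input)
            if match:
                return random.choice(replies).format(reflect(match.group(1)))
        return random.choice(DEFAULTS)

    if __name__ == "__main__":
        print(respond("I need a happy place"))  # e.g. "Why do you need a happy place?"

A few dozen rules like these, falling back to stock non-answers when nothing matches, is essentially the whole trick.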
Re: (Score:2)
This is ELIZA:
I am the psychotherapist. Please, describe your problems. Each time
you are finished talking, type RET twice.
Is AI actually a thinking caring person?
What do you think?
No, I think it is a soulless machine.
You have your mind on machines, it seems.
Yes
I see... Well, why do you say this is so?
Because machines are the topic that I'm discussing?
Is it because machines are the topic that you are discussing that you
came to me?
Yes, and also because you are ELIZA
Eliza? Hah! I would appreciate it if yo
Re: (Score:2)
And if I gave you some physics theory you didn't understand, you'd get the wrong impression about it. What's yer point?
Re: (Score:2)
My brother and I loaded Eliza on a Radio Shack TRS-80 back in 1979 and captured the entire interaction.
We were programmers and we'd looked at the code (in BASIC) and understood that it was clever, but mostly a game that was boring after using it a few times. Stock answers and trigger words were stored in an array.
We left my older sister in the room, alone, and looked at the dialog later. She was a little unstable at the time and the conversation degraded into a repetitive all-caps exchange:
WHERE CAN I FIND MY HAPPY PLACE?
>You feel you need a happy place?
WHERE CAN I FIND MY HAPPY PLACE?
>Is a happy place important to you?
WHERE CAN I FIND MY HAPPY PLACE?
>Would a happy place make you happy?
WHERE ...
Re: (Score:2)
People thought fucking _Eliza_ was a caring person, back when. Well, most people. But most people are stupid and see what they want instead of seeing what is.
Because smart people suspected that their interlocutors were bots in the 1960s, when the concept of bots didn't even exist?
What future knowledge should we all have today so as not to be "stupid" by 2090 standards?
Eliza as a BBS door (Score:2)
I named it "Chat with the Sysop" and used a version that I could compile myself in Turbo Pascal, and also had a config file where I could add some phrases and responses that sounded like me.
The result was that some people spent the full 60 minutes I allocated at a time with the door trying to talk with "me". Some figured it out quick and got out of the interface. I figured it was a Turing test of sorts. The people who spent a lot of time with it failed the test.
Re: (Score:2)
There's an amazing series of videos that prank scam callers with a recording of an elderly man named "Lenny" who, at predetermined intervals when a pause in speaking was detected, would say things like "Hmm yes yes" and "Well that reminds me of something my daughter used to say" and general meaningless nonsense like that, and it would end up with these sprawling conversations between a scammer and a tape recording (usually ending with a very frustrated scammer unable to get Lenny to focus on the subj
Placebo effect is about effectiveness, not emotion (Score:4, Insightful)
No one suggests that a "good" drug is caring or that an ineffective drug is "manipulative." Neither drugs nor AI have feelings of any kind. The placebo effect is about how people believe that a drug *works* when it's actually doing nothing, because they believe it's doing something.
Does AI *work* to help people with psychological problems? Perhaps, if it produces a good psychological effect. If it does help people, *then* we can analyze whether that help is real, or a placebo.
Re: (Score:3)
Yep, this has fuck-all to do with the placebo effect and is just shit journalism.
The older study they mention did involve one, as people believed AI was helping them when it wasn't:
"The AI placebo effect has been described before, including in a study in which people playing a word puzzle game rated it better when told AI was adjusting its difficulty level (it wasn't — there wasn't an AI involved)."
Unsurprisingly, the new study never mentions placebo.
Re: (Score:2)
> The placebo effect is about how people believe that a drug *works*
No, the point of the placebo effect is that when people don't take the drug -- believing they do -- it has the same effect as taking the drug! It is proof of mind over matter.
The placebo effect is even stranger [newscientist.com]:
Re: (Score:2)
I don't think we're disagreeing.
When a new drug is going through medical trials, one of the primary goals is to determine whether the drug can outperform a placebo. If it doesn't, then the drug is itself just a placebo.
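As a toy illustration of that comparison (the numbers below are purely hypothetical, not from any real trial), the question boils down to whether the drug group improves measurably more than the placebo group:

    # Toy sketch: does the drug group improve more than the placebo group?
    from statistics import mean

    drug_improvement    = [4.1, 3.8, 5.0, 4.4, 3.9, 4.7]  # hypothetical scores
    placebo_improvement = [3.9, 4.2, 3.7, 4.5, 4.0, 4.1]

    diff = mean(drug_improvement) - mean(placebo_improvement)
    print(f"mean difference: {diff:.2f}")
    # If the difference is statistically indistinguishable from zero
    # (after a proper significance test), the drug is doing no more
    # than the placebo.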
How is this the placebo effect? (Score:2)
How is this the placebo effect? It just sounds like people forming opinions and adjusting their behavior partly based on the information they've been given. There's none of the mind-over-matter weirdness of the placebo effect.
Re: (Score:2)
I spotted that as well.
The term "expectation effect" or "expectation bias" could better describe the phenomenon where individuals' experiences or perceptions are influenced by the expectations they have been given. It's a broader term that encompasses situations where people's beliefs about something can shape their actual experiences. So, "The Expectation Effect of AI Chatbots" might be a more appropriate title. [with the help of a chatbot]
That's not the placebo effect (Score:5, Informative)
- When we expect or are expected to do well at something & that expectation affects the outcome, it's called the Pygmalion effect. See: https://en.wikipedia.org/wiki/... [wikipedia.org]
- The opposite, when we expect or are expected to do badly at something & that expectation affects the outcome, it's called the Golem effect. See: https://en.wikipedia.org/wiki/... [wikipedia.org]
- The ways in which we perceive a tool or item that we use for various ends, i.e. what it's for & how to use it, are called our affordances for that tool or item. See: https://en.wikipedia.org/wiki/... [wikipedia.org]
But this is the kind of writing where journalists don't take the time to understand what the research is actually about & write precisely enough to make an article reflect that. I guess we'll have to find a way to access the original paywalled paper here: https://www.nature.com/article... [nature.com]
Notes on priming (Score:3)
Further info on priming:
1) When you are primed on something, everything *related* to that thing is also primed.
I made a demonstration slide that I sometimes use when speaking to an audience. It has a paragraph about shoes, and I invite the audience to look over the paragraph and notice something unusual about it.
I then ask the audience to complete the 4-letter word, and show S O L _ on the screen.
Just about everyone in the audience chooses "S O L E". If you're reading this, you probably thought "S O L E" as
Re: (Score:2)
6040718, we have your number.
Speaking of manipulative (Score:2)
Summarizing study results by combining actual numbers (e.g. 24%) with broad terms that are interpreted differently by different people (e.g. "many" or "most") is, in itself, rather manipulative. It seems to be a common tactic used by people whose research didn't produce the cut-and-dried results they were hoping for.
Re: (Score:2)
The chatbots I use do not have embedded ads. What are you on about?
Betteridge's law of headlines (Score:2)
says "no"