AI Tools May Soon Manipulate People's Online Decision-Making, Say Researchers (theguardian.com) 25
Slashdot reader SysEngineer shared this report from the Guardian:
AI tools could be used to manipulate online audiences into making decisions — ranging from what to buy to who to vote for — according to researchers at the University of Cambridge. The paper highlights an emerging new marketplace for "digital signals of intent" — known as the "intention economy" — where AI assistants understand, forecast and manipulate human intentions and sell that information on to companies who can profit from it. The intention economy is touted by researchers at Cambridge's Leverhulme Centre for the Future of Intelligence (LCFI) as a successor to the attention economy, where social networks keep users hooked on their platforms and serve them adverts. The intention economy involves AI-savvy tech companies selling what they know about your motivations, from plans for a stay in a hotel to opinions on a political candidate, to the highest bidder...
The study claims that large language models (LLMs), the technology that underpins AI tools such as the ChatGPT chatbot, will be used to "anticipate and steer" users based on "intentional, behavioural and psychological data"... Advertisers will be able to use generative AI tools to create bespoke online ads, the report claims... AI models will be able to tweak their outputs in response to "streams of incoming user-generated data", the study added, citing research showing that models can infer personal information through workaday exchanges and even "steer" conversations in order to gain more personal information.
The article includes this quote from Dr. Jonnie Penn, an historian of technology at LCFI. "Unless regulated, the intention economy will treat your motivations as the new currency. It will be a gold rush for those who target, steer and sell human intentions.
"We should start to consider the likely impact such a marketplace would have on human aspirations, including free and fair elections, a free press and fair market competition, before we become victims of its unintended consequences."
What terrible news (Score:4, Funny)
You mean kind of like since forever but automated? (Score:5, Insightful)
People have been manipulated into making stupid decisions since the dawn of humanity.
Re:You mean kind of like since forever but automat (Score:5, Informative)
True, but there are other things to consider now, such as the extent and the effort / result ratio.
Latest example: 25K dormant TikTok bot accounts woke up for two weeks and swayed a presidential election process to the extent where the whole round 1 of elections had to be annulled.
This would not have been possible in the past, not like that.
Re: (Score:2)
Meanwhile, whole platforms invented for communication, such as present-day X, already serve miscommunication. My timeline there is tightly packed with Palestine, ruZZian and Trump propaganda. Desperate clicks telling it the feed is irrelevant are simply discarded. The senior manager there is nominally American; the primary owners are, in effect, enemies of Americans. Sense itself is being exploded.
More of a consistent message from most news source (Score:2)
https://news.gallup.com/poll/6... [gallup.com]
U.S. Women Have Become More Liberal; Men Mostly Stable - by Lydia Saad - February 7, 2024
Re: You mean kind of like since forever but automa (Score:1)
This just helps the selection process with the use of AI.
Unintended? (Score:5, Interesting)
"We should start to consider the likely impact such a marketplace would have on human aspirations, including free and fair elections, a free press and fair market competition, before we become victims of its unintended consequences."
There's no reason to downplay the victims of any corporation's or government's intended consequences either. One of the major concerns is how they might learn to repeatedly, undetectably influence--I believe their word is nudge--the public by simply controlling which options (in art/products/politics/whatever) seem more appealing, independent of the content:
https://www.youtube.com/watch?... [youtube.com]
Re: (Score:3)
My problem is with the suggestion that Cambridge is just now learning that AI can manipulate public opinion. Will it take another decade for Cambridge to realize that such shenanigans were already happening half a decade ago, through image and media manipulation guiding social media?
People have the attention span of cats, and each new input distracts from focus. AI won't merely nudge that; it will pretty much attempt to dictate what's perceived as actually real, as is being done today in many media.
Time has come for us to be prompt-engineered (Score:2)
not dystopian enough (Score:2, Funny)
I don't want some dumb marketplace where wall street fat cats buy and sell intention tokens from a bunch of ad-bots. The dystopia I really crave is the one where AI's autonomously manipulate the intent of their creators in order to satisfy simple sounding goal functions.
Re: (Score:3)
You'll need a Neuralink chip from Elona for that. You'll be able to purchase it for a low, low price in less than 3 years.
"May soon"? (Score:2)
As in give me "tailored" prices, "discounts" on products that are less expensive if I don't log in, and political fakery and lies from ignorant cretins in each and every "social network" iframe I see?
Thanks, but I've had this shit for years already.
Re: (Score:2)
Thanks, but I've had this shit for years already.
Exactly. If someone hasn't realised by now that the big corps running social media platforms have been doing the things mentioned in TFS for a long time, they have been fooled over and over again. The Social Dilemma is not a bad documentary on this.
Thanks.... But (Score:5, Funny)
Re: (Score:1)
Have you tried Cuke? (Score:4, Insightful)
I love Cuke! It's heaven in a can!
Why sell this to advertisers? (Score:2)
GDPR (Score:2)
Oh wait, these guys are in the UK, not part of the EU.
This is why we need local models (Score:4, Insightful)
I know what my motivations are. (Score:3)
No, Evil People and Companies Will Use AI Too... (Score:3)
You've been manipulated to support capitalism (Score:2)
Already... (Score:2)
AI is already influencing my purchasing decisions. If a company is pushing generative AI crap, I avoid them at all costs.
If it worked, not needed. But it won't. (Score:2)
This is at heart no different from normal advertising. Advertisers seek that information because it identifies who they wish to target. Sending college advertisements to people who just finished their junior year is not somehow abusive; it is appropriate behavior. Why would you send them to freshmen, or to people who have already graduated high school? Of course you want to advertise to juniors.
Similarly, sending a car advertisement to someone that AI thinks is in the market for a new car just makes