AI Tools May Soon Manipulate People's Online Decision-Making, Say Researchers (theguardian.com)

Slashdot reader SysEngineer shared this report from the Guardian: AI tools could be used to manipulate online audiences into making decisions — ranging from what to buy to who to vote for — according to researchers at the University of Cambridge. The paper highlights an emerging new marketplace for "digital signals of intent" — known as the "intention economy" — where AI assistants understand, forecast and manipulate human intentions and sell that information on to companies who can profit from it. The intention economy is touted by researchers at Cambridge's Leverhulme Centre for the Future of Intelligence (LCFI) as a successor to the attention economy, where social networks keep users hooked on their platforms and serve them adverts. The intention economy involves AI-savvy tech companies selling what they know about your motivations, from plans for a stay in a hotel to opinions on a political candidate, to the highest bidder...

The study claims that large language models (LLMs), the technology that underpins AI tools such as the ChatGPT chatbot, will be used to "anticipate and steer" users based on "intentional, behavioural and psychological data"... Advertisers will be able to use generative AI tools to create bespoke online ads, the report claims... AI models will be able to tweak their outputs in response to "streams of incoming user-generated data", the study added, citing research showing that models can infer personal information through workaday exchanges and even "steer" conversations in order to gain more personal information.

The article includes this quote from Dr. Jonnie Penn, an historian of technology at LCFI. "Unless regulated, the intention economy will treat your motivations as the new currency. It will be a gold rush for those who target, steer and sell human intentions.

"We should start to consider the likely impact such a marketplace would have on human aspirations, including free and fair elections, a free press and fair market competition, before we become victims of its unintended consequences."
Comments Filter:
  • by locater16 ( 2326718 ) on Monday December 30, 2024 @02:15AM (#65049315)
    We'd better spend more money on AI to make sure it's giving people the proper suggestions. That is what I, a human, suggest we do.
  • by Rosco P. Coltrane ( 209368 ) on Monday December 30, 2024 @02:18AM (#65049323)

    People have been manipulated into making stupid decisions since the dawn of humanity.

  • Unintended? (Score:5, Interesting)

    by Kunedog ( 1033226 ) on Monday December 30, 2024 @02:26AM (#65049335)

    "We should start to consider the likely impact such a marketplace would have on human aspirations, including free and fair elections, a free press and fair market competition, before we become victims of its unintended consequences."

    There's no reason to downplay the victims of any corporation's or government's intended consequences either. One of the major concerns is how they might learn to repeatedly, undetectably influence--I believe their word is nudge--the public by simply controlling which options (in art/products/politics/whatever) seem more appealing, independent of the content:

    https://www.youtube.com/watch?... [youtube.com]

    • My problem is with the suggestion that Cambridge is just now learning that AI can manipulate public opinion. Will it take another decade for Cambridge to realize that such shenanigans were already happening half a decade ago, through image and media manipulation guiding social media?

      People have the attention span of cats, and each new input distracts from focus. AI won't merely nudge that; it will pretty much attempt to dictate what's perceived as actually real -- as is already being done today, in many media.

  • I don't want some dumb marketplace where Wall Street fat cats buy and sell intention tokens from a bunch of ad-bots. The dystopia I really crave is the one where AIs autonomously manipulate the intent of their creators in order to satisfy simple-sounding goal functions.

  • As in give me "tailored" prices, "discounts" on products that are less expensive if I don't log in, and political fakery and lies from ignorant cretins in each and every "social network" iframe I see?

    Thanks, but I've had this shit for years already.

    • by Bumbul ( 7920730 )

      Thanks, but I've had this shit for years already.

      Exactly. If someone hasn't realised by now that the big corporations running social media platforms have been doing the things mentioned in TFS for a long time, they have been fooled over and over again... The Social Dilemma is not a bad documentary on this.

  • by zurkeyon ( 1546501 ) on Monday December 30, 2024 @03:46AM (#65049401)
    The only "Decisions" I make online, are which ad blockers to use.... ;-D
    • by Anonymous Coward
      You can block ads, but if a significant fraction of social media (or, say, Slashdot) posts are made by various malicious actors trying to influence you, at some point it's very difficult to escape. You can limit your internet communications to humans you know in real life and maybe a few trusted sources you have reason to believe are written by real people. But dead Internet theory [wikipedia.org] means it's impossible to know whether anything else you read online is real and whether it may be trying to subtly influence you in various ways.
  • by 93 Escort Wagon ( 326346 ) on Monday December 30, 2024 @04:04AM (#65049411)

    I love Cuke! It's heaven in a can!

  • Just directly manipulate the world for your own benefit. It's probably already happening.
  • No Thank you [wikipedia.org]

    Oh wait, these guys are in the UK, not part of the EU.

  • by Visarga ( 1071662 ) on Monday December 30, 2024 @05:02AM (#65049449)
    We can't deal with all the manipulation and deceit online, so why not fight fire with fire? We can have local models act as a firewall. Let their models talk to your model. Going online without your AI shield would be like walking without a mask during COVID. The problem is no website is ever going to be seen by human eyes unless you specifically want to see it.
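    A rough sketch of that "local model as a firewall" idea, assuming a hypothetical local_manipulation_score() classifier as a stand-in for whatever model actually runs on your machine (all names here are made up for illustration):

    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str
        text: str

    def local_manipulation_score(text: str) -> float:
        # Hypothetical stand-in for a locally run model; a crude keyword
        # heuristic keeps the sketch self-contained and runnable.
        cues = ("act now", "everyone agrees", "limited time", "vote for")
        hits = sum(cue in text.lower() for cue in cues)
        return min(1.0, hits / 2)

    def shield(feed, threshold=0.5):
        # Drop posts the local model flags as likely manipulative before
        # they ever reach human eyes.
        return [p for p in feed if local_manipulation_score(p.text) < threshold]

    feed = [
        Post("friend", "Dinner on Friday?"),
        Post("bot", "Everyone agrees: act now and vote for Brand X!"),
    ]
    for p in shield(feed):
        print(p.author, "->", p.text)

    The real version would swap the keyword heuristic for a locally hosted model, so the filtering (and your data) never leaves your own machine.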
  • by laxr5rs ( 2658895 ) on Monday December 30, 2024 @08:19AM (#65049743)
    My motivation is to stop having them shove so many ads in my face that I can't stand it anymore, and that motivation drives me to block all ads on my systems with a Pi-hole. As far as I can tell, they, the Advertising Industrial Complex, know very little about my motivations or what I'd like to see. Otherwise, I'd be seeing what I'd like to see instead of navigating a jungle of BS.
  • Yea, start up a few more nuclear power plants so that a few companies can steal from people, put them out of work, and continue to influence them into believing that everything will be fine and that 10% unemployment will be an acceptable normal.
  • Why has the US universally attacked Socialism/Communism around the world? Why should it care what economic system other countries choose? Maybe Marx came before his ideas were really practical, but they are now. Soon AGI is going to be running businesses while the owners do nothing but watch. https://www.genolve.com/design... [genolve.com] The system is already a bit of a sham - you can find situations where one central supplier is servicing several brands that simply repackage the same underlying product.
  • AI is already influencing my purchasing decisions. If a company is pushing generative AI crap, I avoid them at all costs.

  • This is at heart no different from normal advertising. The reason they seek that information is that those are the people they wish to target. Sending advertisements for college to people who just finished their junior year is not somehow abusive; it is appropriate behavior. Why would you send them to freshmen, or to people who already graduated high school? Of course you want to advertise to juniors.

    Similarly, sending a car advertisement to someone that AI thinks is in the market for a new car just makes sense.
