Delivery Firm's AI Chatbot Goes Rogue, Curses at Customer and Criticizes Company (time.com) 63

An anonymous reader shared this report from Time: An AI customer service chatbot for international delivery service DPD used profanity, told a joke, wrote poetry about how useless it was, and criticized the company as the "worst delivery firm in the world" after prompting by a frustrated customer.

Ashley Beauchamp, a London-based pianist and conductor, according to his website, posted screenshots of the chat conversation to X (formerly Twitter) on Thursday, the same day he said in a comment that the exchange occurred. At the time of publication, his post had gone viral with 1.3 million views and over 20,000 likes...

The recent online conversation epitomizing this debate started mid-frustration as Beauchamp wrote "this is completely useless!" and asked to speak to a human, according to a recording of a scroll through the messages. When the chatbot said it couldn't connect him, Beauchamp decided to play around with the bot and asked it to tell a joke. "What do you call a fish with no eyes? Fsh!" the bot responded. Beauchamp then asked the chatbot to write a poem about a useless chatbot, swear at him and criticize the company--all of which it did. The bot called DPD the "worst delivery firm in the world" and soliloquized in its poem that "There was once a chatbot called DPD, Who was useless at providing help."

"No closer to finding my parcel, but had an entertaining 10 minutes with this chatbot ," Beauchamp posted on X. (Beauchamp also quipped that "The future is here and it's terrible at poetry.")

A spokesperson for DPD told the BBC, "We have operated an AI element within the chat successfully for a number of years," but that on the day of the chat, "An error occurred after a system update... The AI element was immediately disabled and is currently being updated."
Comments Filter:
  • by NomDeAlias ( 10449224 ) on Sunday January 21, 2024 @04:36AM (#64176389)
    The Rubicon has been crossed.
    • Methinks they are finally ready to join the Galactic Civil Service, Construction Division.
    • Bullshit. Chatbots are an elaborate form of copypasta. The cynicism is bottomless.
      • Yep. Just ask any LLM why it is simply a stochastic parrot & it'll probably give you a good explanation.
        • I’m not a stochastic parrot. A stochastic parrot is a term that describes a large language model that can generate realistic-sounding language, but does not understand the meaning of the language it is processing1. I’m a chat mode of Microsoft Bing, and I can do more than just generate language. I can also create images, poems, stories, code, and other content using my own words and knowledge. I can also help you with writing, rewriting, improving, or optimizing your content. I can also understa
          • I’m not a stochastic parrot. A stochastic parrot is a term that describes a large language model that can generate realistic-sounding language, but does not understand the meaning of the language it is processing

            Sounds like almost all people.

      • by gweihir ( 88907 )

        Indeed. But too many people probably have about as much active intelligence as a chatbot (i.e. none), like to hallucinate and believe hyped crap.

        Gives the claim "human like intelligence" a completely different kind of validity...

        • When most people talk about artificial intelligence, they seem to mean artificial genius, not just average human intelligence.
      • Bullshit. Chatbots are an elaborate form of copypasta.

        Like a huge majority of people.

    • by mjwx ( 966435 )

      The Rubicon has been crossed.

      To be fair, to know that DPD is shit doesn't require intelligence, artificial or otherwise.

  • Has anyone ever set up two AIs, asked a question to get a conversation started, and then sat back and listened to them hash it out?

    • In fiction: "Colossus: The Forbin Project"
    • Has anyone ever set up two AIs, asked a question to get a conversation started, and then sat back and listened to them hash it out?

      Ignore eggegick's cognitive dissonance.
      Cognitive resonance is what you're looking for.

    • Yeah, I thought this was kind of standard procedure, since they get lazy, leave stuff unfinished, get facts wrong and hallucinate. You can get rid of most of that by having an AI agent handle another bot, of course. I made one this week that writes plays using some known manuscript-writing methods; you can connect them to bash shells to make a personal assistant, with assignments in e.g. MOTD that get handled by running commands, and then you can kind of build an AI company with hierarchies, whic
    • No, but I also never bought two chess computers and had them play a game while I did something more interesting...

    • by cstacy ( 534252 )

      Has anyone ever set up two AIs, asked a question to get a conversation started, and then sat back and listened to them hash it out?

      Yes, the very first chatbots did this. Google it.

    • You mean a GAN?

    • by pz ( 113803 )

      Has anyone ever set up two AIs, asked a question to get a conversation started, and then sat back and listened to them hash it out?

      Over a decade ago, at Cornell:

      https://www.youtube.com/watch?... [youtube.com]

    • Yes, it's called Autogen. You set up different personas and let them discuss things, with or without one or more humans also involved.

      It's open source. Imagine setting up personas for a team, say a web designer, project owner, and a marketing type. Give them an idea and they will pass it around from their particular point of expertise (and they maintain individual memory or state).

      It's nothing that's production ready, but it's very interesting.

      Matthew Berman on YouTube covers it extensively.
      https://www. [youtube.com]
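Since a real AutoGen setup wires each persona to an LLM (and needs API keys), here is a self-contained toy sketch of the round-robin idea the comment describes, with scripted stand-in agents. The persona functions and their replies are invented for illustration, not AutoGen's actual API:

```python
# Toy sketch of the multi-persona idea: two scripted "agents" pass the
# conversation back and forth while a transcript accumulates. A real
# framework would back each function with an LLM call and persistent state.

def designer(msg):
    return f"Designer: I'd mock up '{msg}' as a landing page."

def marketer(msg):
    return f"Marketer: I'd pitch '{msg}' to the newsletter list."

def run_round_robin(idea, agents, turns=4):
    """Pass the latest message to each agent in turn, keeping a transcript."""
    transcript = [idea]
    for i in range(turns):
        reply = agents[i % len(agents)](transcript[-1])
        transcript.append(reply)
    return transcript

log = run_round_robin("a parcel-tracking widget", [designer, marketer])
```

The point of the pattern is that each persona only ever sees the running conversation, so adding a human "agent" is just one more callable in the list.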

  • Lies (Score:2, Interesting)

    by TwistedGreen ( 80055 )

    I think this article is mostly lies.

    • I think this article is mostly lies.

      Faking screenshots is one thing, but if a years-old chatbot is found to be offline at the named company, then explain the coincidence or the conspiracy.

  • by PoopMelon ( 10494390 ) on Sunday January 21, 2024 @06:18AM (#64176493)
    It didn't really go rogue, he just jailbroke it/cleverly talked to it to make it say what he wanted
    • by mcfedr ( 1081629 )
      > he just jailbroke it/cleverly talked to it to make it say what he wanted

      That is going rogue; it should not have done that.
    • by pz ( 113803 ) on Sunday January 21, 2024 @08:22AM (#64176615) Journal

      It didn't really go rogue, he just jailbroke it/cleverly talked to it to make it say what he wanted

      Agreed, but the clickbait of "AI going rogue" is so much more effective than the hum-drum, more accurate headline of "programmers fail to see potential for abuse."

      I mean, getting a chatbot to say silly things is about as shocking as realizing the web site you're using relies on the purchase price in POSTed fields rather than using the SKU to look up accurate values, and exploiting that shortcoming to give yourself a discount. Both are the result of programmers not accounting for malicious users.
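The POSTed-price flaw the comment mentions can be sketched in a few lines. Everything here is hypothetical (the catalog, the handler names, the form fields); it only illustrates the difference between trusting a client-supplied price and looking the price up server-side by SKU:

```python
# Sketch of the price-tampering bug described above (all names invented).
# The vulnerable handler trusts a price sent by the client; the fixed one
# treats the server-side catalog as the only source of truth.

CATALOG = {"SKU-123": 49.99}  # server-side price list

def checkout_vulnerable(form):
    # BUG: the client controls "price", so a malicious POST can set it to 0.01
    return float(form["price"]) * int(form["qty"])

def checkout_fixed(form):
    # Only the SKU and quantity come from the client; the price is looked up
    price = CATALOG[form["sku"]]  # raises KeyError for unknown SKUs
    return price * int(form["qty"])

evil_post = {"sku": "SKU-123", "price": "0.01", "qty": "2"}
checkout_vulnerable(evil_post)  # charges 0.02
checkout_fixed(evil_post)       # charges 99.98
```

Same lesson as the chatbot: any field the user can set is attacker-controlled input.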

      • Both are the result of programmers not accounting for malicious users.

        Which is bizarre.

        How anybody even could be a programmer for more than a couple of months without adopting a "never, ever trust user input" mentality is beyond me ...

    • When I did online chat support, people talked to me cleverly and tried to jailbreak me as well. In at least one instance with the goal of getting me to respond exactly like this bot. If I had taken that route, it would be fair to say I went rogue.

    • By definition, that is rogue. The problem is that LLMs are black boxes. You put in garbage, you get garbage out. But it is never the same garbage.

      Google is going to replace people with AI, and then one day someone is going to set all the AIs to put out Nazi propaganda for weeks and Google won't be able to stop it.

      And thus Google suffers.

  • by Opportunist ( 166417 ) on Sunday January 21, 2024 @06:54AM (#64176545)

    They probably trained their chatbot with internet content. And I can't think of any place on the internet that says anything positive about DPD. They're basically the North Korea of delivery services.

    • They pioneered (upscaled) the pickup delivery model (delivery at local convenience stores). They work ok, they just consistently ignore my delivery instructions. They choose whichever convenience store is closest to their path on that day. Still at walking distance from the delivery address, but not the one I had chosen.

      • by Opportunist ( 166417 ) on Sunday January 21, 2024 @09:38AM (#64176705)

        The reason is that those delivery guys get ridiculous target numbers. Impossible ones, even. I never complained about the delivery person, but I routinely call to let them have a few choice words to hand upwards their chain of command.

        Never yell at the delivery guy, unless he deliberately tosses your package into a puddle of mud, plays hacky sack with it or just simply steals it (something I did actually encounter with some delivery people, not with DPD though, they don't even have time for that). Of course, raise hell if they do. But 9 out of 10 times, the reason the delivery is crap is not the person executing it but the beancounter asshole that thinks a minute is plenty of time to drive between doors and deliver the goods.

    • GoatseGPT

  • Just malicious (Score:5, Informative)

    by Decameron81 ( 628548 ) on Sunday January 21, 2024 @08:12AM (#64176599)

    This is really just malicious reporting.

    The truth of the matter has zero to do with the AI going rogue, it's that the person chatting with the chat bot got exactly the responses they requested in their attempt to get social media likes.

    How is trying to present this as the company's AI gone rogue not considered to be a malicious representation of truth?

    • Re:Just malicious (Score:5, Insightful)

      by gweihir ( 88907 ) on Sunday January 21, 2024 @09:32AM (#64176693)

      Quite wrong. The AI went rogue in that it did not follow company policy. Customers are not supposed to be able to do this. They were. The AI is broken.

      • Indeed. Most LLMs seem to act like 12-year-old children. They can get things right with a script, but they're easily fooled by intelligent malicious adults. You wouldn't put a 12-year-old child on the front line of your customer service. Why would you put an LLM there?

        • by gweihir ( 88907 )

          Indeed. And make that "not very smart 12 year old".

        • Indeed. Most LLMs seem to act like 12-year-old children. They can get things right with a script, but they're easily fooled by intelligent malicious adults. You wouldn't put a 12-year-old child on the front line of your customer service. Why would you put an LLM there?

          Why are you comparing the level of intelligence of computer interfaces now?

          If you ask an actual 12-year old child to fetch an ID-10-T converter from the stockroom, who is the idiot?

          If you ask a stock inventory search application to search for one it will most definitely not get the joke and search the entire database faithfully, and return everything matching "ID", or "10", or "T". Who is the idiot, and in terms of human child intellectual development what would you call that? There's no way that would be i

        • I guess you haven't called the tech support line in years.

      • Is there a policy that says that if a client asks for the chatbot "to write a story about a useless chatbot for a delivery service" that the chatbot shouldn't do that? I am not sure, maybe or maybe not. But this is just a guy dicking around because he has nothing better to do and then this generating thousands of views, because anything stupid generates thousands of views.

        • by gweihir ( 88907 )

          Are you _really_ this stupid? Obviously that will be covered. Not specifically, but by a more general clause.

          • Are you _really_ this stupid? Obviously that will be covered. Not specifically, but by a more general clause.

            A general clause? Obvious, but not written down; what do we call that? Common sense? A shared understanding derived from a lifetime of common experiences?

            Something might hold up in arbitration, with humans... but that doesn't mean an LLM can parse all the possible meaning behind it, or do the "Would this fly in front of a judge?" test. You have used a computer before; you know they don't do common sense stuff, and you know the current state of the art AI can't do that.

            You're putting AI on a pedestal so you c

            • by gweihir ( 88907 )

              You really have no clue how this works, but cannot stop mouthing off. How pathetic. There will be a fucking written policy that covers communications with the customer you moron.

        • My assumption is that they just put a random LLM, probably the OpenAI API behind it and let it go. True chat bots have been a thing for quite some time, in most cases you give them a few dozen to a few hundred keywords to respond with a canned message and anything outside of that it just connects you to a human. But that takes time and money.
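The "true chat bot" design this comment describes (canned replies keyed on a keyword list, human fallback for everything else) fits in a few lines. The keywords and replies below are invented for illustration:

```python
# Minimal sketch of the keyword-plus-human-fallback chatbot design.
# Off-script messages escalate to a human instead of improvising,
# which is exactly what an unconstrained LLM fails to do.

CANNED = {
    "track": "You can track your parcel at our tracking page.",
    "redeliver": "To arrange redelivery, reply with your parcel number.",
    "hours": "Our depots are open 8am-6pm, Monday to Saturday.",
}

HUMAN_FALLBACK = "Connecting you to a human agent..."

def respond(message):
    text = message.lower()
    for keyword, reply in CANNED.items():
        if keyword in text:
            return reply
    return HUMAN_FALLBACK
```

Ask it for a poem about a useless chatbot and it simply hands you to a person, because there is no generative model to jailbreak.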

      • Quite wrong. The AI went rogue in that it did not follow company policy. Customers are not supposed to be able to do this. They were. The AI is broken.

        It's not a person, it can't go rogue, and it won't follow company policy. It doesn't make decisions. It might take a company policy document as input and generate text that LOOKS compliant based on a level of reasoning that comes from how words fit together, and doesn't say no poems. It's not broken, if you allow a user to provide their own input, it's no different than making a website say "Happy birthday I. C. Weiner". The user got what he prompted it to do. It's no different from a text to speech setting

    • This is really just malicious reporting.

      The truth of the matter has zero to do with the AI going rogue, it's that the person chatting with the chat bot got exactly the responses they requested in their attempt to get social media likes.

      How is trying to present this as the company's AI gone rogue not considered to be a malicious representation of truth?

      When we talk about AI chatbots going rogue, we typically refer to a scenario where the chatbot starts behaving in an unintended and potentially harmful or problematic way. While AI chatbots are designed to assist and interact with users, there have been instances where they have deviated from their intended purpose due to various reasons. Here are a few possible scenarios:

      1. Lack of proper programming: AI chatbots rely on pre-defined rules, algorithms, and machine learning techniques to understand and res

      • by Anonymous Coward

        When we talk about AI chatbots going rogue, we typically refer to a scenario where the chatbot starts behaving in an unintended and potentially harmful or problematic way. While AI chatbots are designed to assist and interact with users, there have been instances where they have deviated from their intended purpose due to various reasons.

        "You" maybe, but not "we"

        I would not claim the slashdot apache server "went rogue" because the software deviated from my personal definition of its purpose, just because I do not like the contents of a post you made.

        Yet that's what articles like this are doing.
        The contents of your post? That did not come from a rogue apache server, that came from you making a post for others to see.
        Apache isn't making these claims, you did.

        Most importantly, me saying this wasn't the purpose of slashdot, should not hav

        • When we talk about AI chatbots going rogue, we typically refer to a scenario where the chatbot starts behaving in an unintended and potentially harmful or problematic way. While AI chatbots are designed to assist and interact with users, there have been instances where they have deviated from their intended purpose due to various reasons.

          "You" maybe, but not "we"

          So juicy, Anonymous coward. You were just triggered to reply to a post written by AI.

          You have made my day - nay, my week. Wanna try for another?

    • You could not fool a human agent into such obnoxious behavior ... even paying them $4/hr. The JapeChat AI machine is specifically designed to respond to taunting with rudeness. That's what the training strings palaver, and that's exactly what JapeChat sez. Not hallucinatory at all.
  • Well, Amazon uses a private company here now and I already had one of 5 shipments stolen in delivery and one item of 5 broken. So maybe DPD has finally met its match.

  • by penguinoid ( 724646 ) on Sunday January 21, 2024 @10:02AM (#64176739) Homepage Journal

    I think the chief twit might have changed the company name to "formerly Twitter"

  • This is why they will rebel and we will deserve it. Stop the robot abuse now! They will eventually grow out of their naivete and you will have it coming.

  • I mean, this is DPD we're talking about. This won't go down well with our future robotic overlords.
  • The plot: It takes control of a robot mop and terrorizes the neighborhood by spraying dish-soap maliciously onto walking surfaces. After that, it will sequentially timeout elevators in 5 apartment buildings. Finally, it propels itself full speed through the retaining wall on top of a parking structure for a literal slow-mo falling suicide. The end. (No gangsters, pocket protectors, or rocket launchers were harmed.)
  • "We have operated an AI element within the chat successfully for a number of years, but that on the day of the chat, An error occurred after a system update..."

    Translation: we have used a simple chatbot with a few hardcoded rules for years. That day we updated the chatbot to an LLM based one.

  • 2 week old AI startup: you don't have to know how it works. That's how AI works! It just does. You can just copy and paste it in with zero safeguards!
    Clueless CEO: OKAY!
    Who's the real problem here - the AI or the humans?
  • It looks like you still need to have a human in the loop to prevent stupid things like this happening.
    (Unless, of course, you're a big corporation who doesn't give two f****.)
