AI Businesses

Klarna Claims AI Is Doing Agents' Jobs 61

Buy-now-pay-later lender Klarna said its AI assistant, powered by OpenAI, is doing the equivalent work of 700 full-time agents and has had 2.3 million conversations, equal to two-thirds of the company's customer service chats, within the first month of being deployed. The AI tool resolved errands much faster and matched human levels on customer satisfaction, Klarna said.
This discussion has been archived. No new comments can be posted.

  • by SpzToid ( 869795 ) on Wednesday February 28, 2024 @01:32PM (#64276018)
    https://archive.is/jdLT1 [archive.is] We seem to have a pattern.
  • Of course it does (Score:5, Insightful)

    by rahmrh ( 939610 ) on Wednesday February 28, 2024 @01:34PM (#64276028)

    Step #1: give your human call center team metrics (time to get off the call and the like); to meet the metrics they get so bad that their satisfaction scores crater.
    Step #2: replace them with AI, which has an easy job since the call center was already managed down to zero satisfaction.

    Metrics (maybe only bad ones, but there seem to be a lot more bad metrics than good ones) always seem to make a call center efficient at being bad.

    • by dvice ( 6309704 )

      But once you have the AI working, you can actually measure AI against AI and start increasing the customer satisfaction score, without any actual additional cost for doing so.

      • by ebunga ( 95613 ) on Wednesday February 28, 2024 @02:21PM (#64276206)

        No costs other than the $37,000,000 per month cloud services bill that will triple once the providers know you can't move back.

        • by gweihir ( 88907 )

          Indeed. Turns out the cloud is _expensive_. The number of organizations leaving the cloud is growing, mostly because the promised simplification did not materialize and costs keep increasing.

          • ... because promised simplification did not materialize and cost is increasing.

            Sounds a lot like outsourcing in the 90's.

            • by gweihir ( 88907 )

              Yes, it does. Same mindless hype and drive towards it, same bad awakening when it became clear what it actually could and mostly could not deliver.

        • No costs other than the $37,000,000 per month cloud services bill

          They might need cloud services to train the AI.

          They need far fewer resources to run the AI.

          triple once the providers know you can't move back.

          If they use standard tools (TensorFlow, PyTorch, Keras, or whatever), they can move to a different cloud provider or buy a rack of GPUs and do it in-house.
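A minimal sketch of that portability point: framework code that just asks the current host for an accelerator, so the same training script runs unchanged on any cloud provider's GPU nodes or an in-house rack. The hard dependency on PyTorch is guarded here, since it may not be installed everywhere.

```python
def pick_device():
    """Pick whatever accelerator the current host offers.

    Only this probe differs between a cloud GPU node and an
    in-house rack; the rest of the script stays the same.
    PyTorch is optional: without it, fall back to CPU.
    """
    try:
        import torch
        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        pass
    return "cpu"
```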

    • Re:Of course it does (Score:4, Interesting)

      by alvinrod ( 889928 ) on Wednesday February 28, 2024 @02:27PM (#64276244)
      I wouldn't be surprised if for any business there's a clear-cut case of the Pareto distribution in call subjects. Make the AI good at handling the 80% of calls that represent 20% of possible cases, and it frees up humans to deal with the trickier calls that are fewer in number. AI won't completely replace humans, it will just make them more productive. This means we don't need as many people working in a call center, just like we don't need them working a switchboard to connect calls.

      It seems like many companies have already been doing this for years. I've dealt with front-line chatbots for a while now that are doing this. If AI makes that more efficient or can fill in the human that I might eventually need to talk to with the relevant information so our interaction is more efficient, I'm not going to complain.
      • If the AI can handle most of the calls and the humans handle the tricky ones, then the AI is making the humans more efficient. That is, the humans will add more value; in your scenario, potentially 400% more. Companies will respond to this by offering more complex and tailored solutions, thus requiring more call center staff.
        • Maybe. When ATMs were introduced, employment in banking went up as banks offered more services and opened more branches since ATMs made it more profitable to do so.

          But it doesn't always happen that way.

          When switchboards were automated, employment at telcos fell.

        • Perhaps, if their customers want more tailored and complex products. When the power loom was invented and it was possible to make socks without a bunch of women knitting them all by hand, people certainly bought more and the textile industry grew even larger and in want of more labor. Of course that was in large part due to the lower cost of textiles enabled by productivity increases. If the support is less expensive as a result of AI, I certainly would expect people to want more of it.
      • by mjwx ( 966435 )

        I wouldn't be surprised if for any business there's a clear-cut case of the Pareto distribution in call subjects. Make the AI good at handling the 80% of calls that represent 20% of possible cases, and it frees up humans to deal with the trickier calls that are fewer in number. AI won't completely replace humans, it will just make them more productive. This means we don't need as many people working in a call center, just like we don't need them working a switchboard to connect calls.

        It seems like many companies have already been doing this for years. I've dealt with front-line chatbots for a while now that are doing this. If AI makes that more efficient or can fill in the human that I might eventually need to talk to with the relevant information so our interaction is more efficient, I'm not going to complain.

        Yep, most telcos I've had to phone have had, for a number of years now, an automated voice that says "in a few words, please tell us why you're calling" rather than relying on me pressing the right button for billing or faults. It's still quite terrible, and if you have any kind of foreign accent (read: anything not American or English Estuary) it's quite often utterly useless.

    • by mjwx ( 966435 )

      Step #1: make your human call center team have metrics (getting off the call and similar) and to meet the metrics they become so bad their satisfaction scores are low.
      Step #2: replace with AI that has an easy job since the call center was managed to 0 satisfaction.

      Metrics (maybe only bad, but there seem to be a lot more bad metrics than good metrics) seem to always make a call center efficient at being bad.

      Let's not forget the company we're dealing with here: legalised loan shark Klarna.

      Their customers aren't going to be the sharpest tools in the shed to begin with, otherwise they'd never use such a service (there are far better options if you really need to finance something you can't afford the initial outlay on). So most of the customer service interaction is going to be expressly about getting them off the phone, because the primary cause of the customer's dissatisfaction is the customer's own decision to use Klarna.

      • On top of that, they owe money, so they cannot choose to stop being a customer no matter how much they hate the chatbot experience.

        • by rahmrh ( 939610 )

          Gotcha,

          I was not familiar with Klarna and assumed it was a normal company, not someone whose entire business is preying on people who already have money issues.

          Given that info, the AI has an even easier job of improving call metrics, since, as you stated, the calls may simply be people who were not aware of how bad a contract/deal Klarna gave them. So the entire call center task is either explaining to them what is going to happen and/or simply getting them to accept that they signed a bad deal.

  • ...better AI robots will be able to know every detail of a product's operation and all troubleshooting and repair techniques. They will be able to give perfect and accurate customer support.

    Current AI is approaching the competence level of a minimum wage moron reading a script.

    • by gweihir ( 88907 )

      Current AI is approaching the competence level of a minimum wage moron reading a script.

      ... and with a bad attitude. As some companies think these are the perfect customer service workers, I can understand why they are excited about AI.

    • Most of the time it might match competence with a script reader. Other times, it will hallucinate a new script.

      At this point, though, it will be far more knowledgeable than any script. Which is still a plus for consumers compared to the status quo. Anything is better than someone reading something they literally don't understand the meaning of as they say it.

      • by HiThere ( 15173 )

        "Current AIs literally don't understand the meaning of" anything "they say".

        That said, they can be a lot more flexible and encompassing than any static script. This is usually good, but can lead to "hallucinated" answers. You need to train very carefully to (almost always) avoid that.

        • So not that much worse than a person working in the call center?
        • A language model doesn't really need understanding - it can fake it with "knowledge" that it does have. The point is that it can always write something that *looks* like it understands, even if going off-script. Not the case with call center workers. I'm not saying this will work well - just better than what we have now.

      • by narcc ( 412956 )

        Anything is better than someone reading something they literally don't understand the meaning of as they say it.

        You know that chatbots don't actually understand anything, right?

        it will be far more knowledgeable than any script.

        Don't be so sure. A lot of work goes into keeping these things on script. You absolutely do not want surprises in the output.

        I expect that by the time the range of potential responses is narrow enough to be acceptably reliable and resistant to casual exploitation it won't be all that different from existing systems. Once the novelty wears off, my guess is that AI support bots will go over about as well as those automated phone systems that

        • You know that chatbots don't actually understand anything, right?

          Right, but the language model can make output text that strongly mimics understanding.

          A lot of work goes into keeping these things on script. You absolutely do not want surprises in the output.

          These don't work on a script. That's the bonus. They can sound like they understand what they are saying. A human operator can get lost in the script if the customer uses different wording that the operator doesn't understand. But that's something a language model is great at. It's not good at actually following a script. Putting bounds on it to try to keep it on a script will break it.

          The bar is so low that it will

          • by narcc ( 412956 )

            These don't work on a script.

            Yes and no. While they're not following a choose your own support adventure script (yet) they are trained on a large corpus of text with the expectation that the output will remain consistent with that text. There are a lot of things you absolutely don't want your support bot to say, after all.

            That's not easy to do, obviously, and it doesn't come cheap. A less expensive approach, and what I expect will quickly become the standard, is using the model not to generate responses, but to use the user's respons
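One plausible reading of that approach, sketched below: use the model only to classify the user's message into a known intent, then return a vetted canned answer. Everything here is invented for illustration, and the keyword matcher merely stands in for a real language-model classifier.

```python
# Pre-approved responses keyed by intent; nothing free-form
# ever reaches the customer. All text here is hypothetical.
CANNED = {
    "refund": "Refunds are issued within 5 business days.",
    "late_fee": "Late fees are listed in section 4 of your agreement.",
}

def classify(message):
    """Stand-in for a language-model intent classifier."""
    text = message.lower()
    if "refund" in text or "money back" in text:
        return "refund"
    if "fee" in text:
        return "late_fee"
    return None  # unrecognized intent

def reply(message):
    """Route to a canned answer, or escalate when unsure."""
    intent = classify(message)
    if intent is None:
        return "Let me connect you to a human agent."
    return CANNED[intent]
```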

  • FAIL! (Score:5, Funny)

    by weeboo0104 ( 644849 ) on Wednesday February 28, 2024 @02:16PM (#64276176) Journal

    The AI tool resolved errands much faster

    It doesn't seem to catch spelling errands though.

    • C'mon, it picked up the dry cleaning and the dog food.

    • That's not a misspelling. It might be intentional, in the hope that the reader conflates the two ideas. But I think "errand" here has a specific meaning relating to highly scriptable actions that a customer needs to take. The natural language processing facilitates the process, but it may only be set up to handle very specific tasks (errands). I think they are using the term errand to refer to the external API calls the bot makes to handle the task. It's not a physical trip.

  • What can go wrong? (Score:4, Interesting)

    by rskbrkr ( 824653 ) on Wednesday February 28, 2024 @02:19PM (#64276192)

    After months of resisting, Air Canada was forced to give a partial refund to a grieving passenger who was misled by an airline chatbot inaccurately explaining the airline's bereavement travel policy.

    https://www.wired.com/story/air-canada-chatbot-refund-policy/ [wired.com]

  • I imagine AI replacing management would be extremely successful as well. The information inputs that feed into management decisions are considerably less noisy than the inputs that "ground level" employees interact with on a day-to-day basis. And there's far less variation in not only the types of scenarios that management faces, but also in the decisions they can possibly make. E.g., ground level data is aggregated, distilled, and compressed into easily-understood metrics; and the basis that forms the backb

    • by bjoast ( 1310293 )
      Good joke. Management will be the last employee sector to suffer replacement.
      • Management gets fired all the time. AI could easily fire a manager by mistake and then be unable to reinstate them. The CEO or VP could decide that AI is doing so well that they can lay off more managers. The Board could decide that AI is so good that they get rid of the CEO. The AI can then buy enough stock to fire the Board.

      • by HiThere ( 15173 )

        TOP management will be the last to suffer replacement. I expect some middle management is already being replaced... or at least consolidated, with an AI assistant.

    • No idea where I first saw this, maybe here, but it seems fitting.

      In The Beginning Was The Plan

      Project management and planning are deeply intertwingled. More often than not,
      both are interpreted and communicated as reality, and that a plan is a plan is a plan,
      and not necessarily reality, is forgotten.

      The Plan

      In the beginning was The Plan.
      And then came the assumptions.
      And the assumptions were without form.
      And the plan was without substance.

      And darkness came upon the face of the workers.
      And they spoke amongst themselves

  • by xack ( 5304745 ) on Wednesday February 28, 2024 @02:34PM (#64276282)
    With mass unemployment from AI, even among those with advanced degrees, and illegal immigration taking most of the remaining jobs, AI should pay the dole, which has now been rebranded as UBI. I've already been modded troll a lot for talking about this, but Nvidia's $2 trillion market cap should be seized to help people who can't afford rent.
    • UBI + illegal immigration would not go well. Got to tackle the border problem first.

    • And all those monks who copied the Bible, horse carriage drivers, elevator operators, telephone switchboard operators, gas station jockeys, secretaries who learned shorthand, tape hangers for tape drives, punch card operators, data entry clerks... I cried when they all died of starvation because technology changed and took their jobs. But now with UBI we can save them! What a wonderful time to be alive!
    • The lack of outrage at Nvidia becoming a multi-trillion dollar company for directly causing mass unemployment is Stockholm Syndrome at its worst
    • With mass unemployment from AI, even among those with advanced degrees, and illegal immigration taking most of the remaining jobs, AI should pay the dole, which has now been rebranded as UBI.

      You realize this idea fails immediately. The AI isn't being paid to do work; how will you tax it if it doesn't have a salary?

      I know I know! We will tax the business on profits. ROFLMAO.

      You ain't getting anything from those fuckers.

  • Oh, it resolved errands, did it? Not errors, but errands? Yeah.
  • by thegarbz ( 1787294 ) on Wednesday February 28, 2024 @02:47PM (#64276344)

    The customer service experience of any company is the lowest-paid breathing human capable of forming sentences, reading from a script, and melting down if the script is deviated from. It is literally the perfect use case for an AI chatbot.

    Remember if you work like a robot, you will be replaced by a robot.

    • by gweihir ( 88907 )

      Indeed. And before that an expert system could have done it, except that these have trouble doing natural language.

      Remember if you work like a robot, you will be replaced by a robot.

      Exactly. AI is no threat at all to anybody with an actually good qualification. Too many people do not have that, or their qualification is so generic that they can be easily replaced.

      Hey, I completely agree with you. Must be kind of a first ;-)

    • I had to deal with some online chat and just had to repeat "let me speak to a human" three times before I'd get a real person on the other end who could help (it was the weekend, so it was impossible to use the phone outside of business hours). Otherwise the chat followed an obvious and useless script ("click the button you already clicked 10 times and we'll send yet another text to a phone you don't own", "Have you tried turning it off and back on again?", "would you like to upgrade to the premium protectio

      • Saying the word "agent" usually works and avoids any quirks of the bad natural language processing.

      • And you would have had the same experience talking to a human. Front-line support, even when humanoid in form, will offer you nothing more than trying to turn it off and on again, and will also leave you asking multiple times before you get elevated to level 2 support.

        There's nothing unique about chatbots here. This is the design of the system, and has been since back in the days when people thought AI was just a Steven Spielberg movie.

    • And that horrible scenario involves both customers and employees doing things they have no interest in or understanding of. Literally anything is better than what we have now.

  • For example, I just had Amazon customer support promise me a replacement for a stolen shipment, then do nothing. I guess AI can perform on that level as well.

  • "Teleperformance shares fell as much as 29% in Paris trading, the steepest drop since November 2022, amid regular halts for volatility. The company already uses AI to manage simple processes on behalf of its clients, it said in a statement in response to the stock drop."

    “The group’s current activity in no way reflects the negative conclusions in its business that could be drawn from the technological developments mentioned in this communication,” Teleperformance said.


    No, of course not. No reflection at all, none whatsoever. It's preposterous to even think such a thing!

  • when the customer satisfaction bar is set so low!
  • How many of these are people opening the chat, trying to ask a question, then quitting in disgust?

  • This kind of shit isn't AI, it's a damn basic decision tree. I most recently dealt with this kind of shit doing a return on Amazon. There's no kind of "intelligence" to them: it spits out a selection of pre-created responses that, depending on your selection, move you to the next branch of the decision tree. Then if you get to a point where the decision tree can't resolve your issue, or you've got an issue the decision tree doesn't cover, you get escalated to a human for assistance.
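The flow described above can be sketched as a literal decision tree: canned responses keyed by menu selection, with escalation to a human once no branch matches. All node names and replies here are made up for illustration.

```python
# Hypothetical support tree: each node has a canned prompt plus
# the branches a selection can move you to. Leaf nodes have no
# options, so anything further escalates.
TREE = {
    "start":   {"prompt": "What do you need help with?",
                "options": {"return": "returns", "billing": "billing"}},
    "returns": {"prompt": "Print the label and drop the parcel off.",
                "options": {}},
    "billing": {"prompt": "Your next statement is available online.",
                "options": {}},
}

def next_node(node, choice):
    """Follow the selected branch; escalate to a human when the
    tree has no branch for the issue (the only "intelligence"
    on offer)."""
    return TREE[node]["options"].get(choice, "human")
```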
