Klarna Claims AI Is Doing Agents' Jobs 61
Buy-now-pay-later lender Klarna said its AI assistant, powered by OpenAI, is doing the equivalent work of 700 full-time agents and has had 2.3 million conversations, equal to two-thirds of the company's customer service chats, within the first month of being deployed. The AI tool resolved errands much faster and matched human levels on customer satisfaction, Klarna said.
Re: (Score:1)
Re: (Score:2)
As interesting as Fani Willis' troubles might be, her situation isn't that relevant to a "news for nerds" site.
Also what makes an AI gay?
Re: (Score:2)
Re: (Score:2)
Touché
Non-paywalled article (Score:4, Interesting)
Of course it does (Score:5, Insightful)
Step #1: give your human call center team metrics (time-to-get-off-the-call and similar), and in meeting those metrics they get so bad that their satisfaction scores tank.
Step #2: replace them with an AI that has an easy job, since the call center was already managed down to zero satisfaction.
Metrics (maybe only bad ones, but there seem to be a lot more bad metrics than good ones) always seem to make a call center efficient at being bad.
Re: (Score:2)
But once you have the AI working, you can actually measure AI against AI and start increasing the customer satisfaction score, without any actual additional cost for doing so.
Re:Of course it does (Score:4, Insightful)
No costs other than the $37,000,000 per month cloud services bill that will triple once the providers know you can't move back.
Re: (Score:2)
Indeed. Turns out the cloud is _expensive_. The number of organizations leaving the cloud is growing, mostly because the promised simplification did not materialize and costs keep increasing.
Re: (Score:2)
... because the promised simplification did not materialize and costs keep increasing.
Sounds a lot like outsourcing in the 90's.
Re: (Score:3)
Yes, it does. Same mindless hype and drive towards it, same bad awakening when it became clear what it actually could and mostly could not deliver.
Re: (Score:2)
No costs other than the $37,000,000 per month cloud services bill
They might need cloud services to train the AI.
They need far fewer resources to run the AI.
triple once the providers know you can't move back.
If they use standard tools (TensorFlow, PyTorch, Keras, or whatever), they can move to a different cloud provider or buy a rack of GPUs and do it in-house.
Re:Of course it does (Score:4, Interesting)
It seems like many companies have already been doing this for years. I've dealt with front-line chatbots for a while now that are doing this. If AI makes that more efficient or can fill in the human that I might eventually need to talk to with the relevant information so our interaction is more efficient, I'm not going to complain.
Jevons paradox - more humans will be needed (Score:2)
Re: (Score:2)
Maybe. When ATMs were introduced, employment in banking went up as banks offered more services and opened more branches since ATMs made it more profitable to do so.
But it doesn't always happen that way.
When switchboards were automated, employment at telcos fell.
Re: (Score:2)
Re: (Score:2)
I wouldn't be surprised if for any business there's a clear cut case of the Pareto distribution in call subject. Make the AI good at handling the 80% of calls that represent 20% of possible cases and it frees up humans to deal with the trickier calls that are fewer in number. AI won't completely replace humans, it will just make them more productive. This means that we don't need as many working in a call center, just like we don't need them working at a switching board to connect the call.
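The triage idea above can be sketched as a toy example (the intent names and threshold are hypothetical, not anything Klarna has described):

```python
from collections import Counter

# Hypothetical intent counts from a month of support chats.
call_log = (
    ["refund_status"] * 800 + ["payment_date"] * 600 +
    ["update_card"] * 400 + ["fraud_dispute"] * 120 +
    ["legal_complaint"] * 80
)

# Intents common enough to script are handled by the bot;
# the long tail of trickier cases is escalated to a human.
BOT_THRESHOLD = 300
counts = Counter(call_log)
bot_intents = {intent for intent, n in counts.items() if n >= BOT_THRESHOLD}

def route(intent: str) -> str:
    return "bot" if intent in bot_intents else "human"

bot_share = sum(counts[i] for i in bot_intents) / len(call_log)
print(f"{bot_share:.0%} of calls auto-handled")  # 90% of calls auto-handled
```

In this made-up distribution, automating just three common intents covers 90% of call volume, which is the Pareto point the comment is making.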
It seems like many companies have already been doing this for years. I've dealt with front-line chatbots for a while now that are doing this. If AI makes that more efficient or can fill in the human that I might eventually need to talk to with the relevant information so our interaction is more efficient, I'm not going to complain.
Yep, most telcos I've had to phone have had an automated voice that says "in a few words, please tell us why you're calling", rather than relying on me pressing the right button for billing or faults, for a number of years now. It's still quite terrible, and if you have any kind of foreign accent (read: anything not American or English Estuary) it's quite often utterly useless.
Re: (Score:2)
Step #1: give your human call center team metrics (time-to-get-off-the-call and similar), and in meeting those metrics they get so bad that their satisfaction scores tank.
Step #2: replace them with an AI that has an easy job, since the call center was already managed down to zero satisfaction.
Metrics (maybe only bad ones, but there seem to be a lot more bad metrics than good ones) always seem to make a call center efficient at being bad.
Let's not forget the company we're dealing with here: the legalised loan shark Klarna.
Their customers aren't going to be the sharpest tools in the shed to begin with, otherwise they'd never use such a service (there are far better options if you really need something on finance and can't afford the initial outlay). So most of the customer service interaction is going to be expressly about getting them off the phone, because the primary cause of the customer's dissatisfaction is the customer's own decision to use Klarna.
Re: (Score:2)
On top of that, they owe money, so they cannot choose to stop being a customer no matter how much they hate the chatbot experience.
Re: (Score:2)
Gotcha.
I was not familiar with Klarna and assumed it was a normal company, not one whose entire business is preying on people who already have money issues.
Given that info, the AI has an even easier job of improving call metrics since, as you stated, the calls may simply be people who were not aware of how bad a deal Klarna gave them. So the entire call center task is either explaining to them what is going to happen and/or simply getting them to accept that they signed a bad deal.
Someday, in the future... (Score:2)
...better AI robots will be able to know every detail of a product's operation and all troubleshooting and repair techniques. They will be able to give perfect and accurate customer support.
Current AI is approaching the competence level of a minimum wage moron reading a script.
Re: (Score:2)
Current AI is approaching the competence level of a minimum wage moron reading a script
... and with a bad attitude. As some companies think these are the perfect customer service workers, I can understand why they are excited about AI.
Re: (Score:2)
Most of the time it might match the competence of a script reader. Other times, it will hallucinate new script.
At this point, though, it will be far more knowledgeable than any script, which is still a plus for consumers compared to the status quo. Anything is better than someone reading something they literally don't understand the meaning of as they say it.
Re: (Score:2)
"Current AIs literally don't understand the meaning of" anything "they say".
That said, they can be a lot more flexible and encompassing than any static script. This is usually good, but can lead to "hallucinated" answers. You need to train very carefully to (almost always) avoid that.
Re: (Score:2)
Re: (Score:2)
A language model doesn't really need understanding - it can fake it with "knowledge" that it does have. The point is that it can always write something that *looks* like it understands, even if going off-script. Not the case with call center workers. I'm not saying this will work well - just better than what we have now.
Re: (Score:2)
Anything is better than someone reading something they literally don't understand the meaning of as they say it.
You know that chatbots don't actually understand anything, right?
it will be far more knowledgeable than any script.
Don't be so sure. A lot of work goes into keeping these things on script. You absolutely do not want surprises in the output.
I expect that by the time the range of potential responses is narrow enough to be acceptably reliable and resistant to casual exploitation it won't be all that different from existing systems. Once the novelty wears off, my guess is that AI support bots will go over about as well as those automated phone systems that
Re: (Score:2)
You know that chatbots don't actually understand anything, right?
Right, but the language model can make output text that strongly mimics understanding.
A lot of work goes into keeping these things on script. You absolutely do not want surprises in the output.
These don't work on a script. That's the bonus. They can sound like they understand what they are saying. A human operator can get lost in the script if the customer uses different wording that the operator doesn't understand. But that's something a language model is great at. It's not good at actually following a script. Putting bounds on it to try to keep it on a script will break it.
The bar is so low that it will
Re: (Score:2)
These don't work on a script.
Yes and no. While they're not following a choose your own support adventure script (yet) they are trained on a large corpus of text with the expectation that the output will remain consistent with that text. There are a lot of things you absolutely don't want your support bot to say, after all.
That's not easy to do, obviously, and it doesn't come cheap. A less expensive approach, and what I expect will quickly become the standard, is using the model not to generate responses, but to interpret the user's response and pick from pre-approved replies.
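That approach, using the model only to classify the user's message and then replying from a vetted list, can be sketched like this (the intents and canned replies are made up for illustration, and `classify` is a stand-in for whatever model you'd actually call):

```python
# Vetted replies: the bot can only ever say one of these.
CANNED = {
    "refund_status": "Refunds post within 5-7 business days of approval.",
    "payment_date": "Your next payment date is shown on the Payments tab.",
    "unknown": "Let me connect you with an agent.",
}

def classify(message: str) -> str:
    """Stand-in for a language-model intent classifier.
    A real system would prompt the model to pick one label
    from the known set and map anything else to 'unknown'."""
    text = message.lower()
    if "refund" in text:
        return "refund_status"
    if "payment" in text or "due" in text:
        return "payment_date"
    return "unknown"

def reply(message: str) -> str:
    # The model never free-generates: its only job is choosing a label,
    # so the output space is exactly the approved responses.
    return CANNED.get(classify(message), CANNED["unknown"])

print(reply("where is my refund?"))
```

The point of the design is that surprises become impossible by construction: a misclassification routes to a human rather than producing an off-script answer.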
FAIL! (Score:5, Funny)
The AI tool resolved errands much faster
It doesn't seem to catch spelling errands though.
Re: (Score:2)
C'mon, it picked up the dry cleaning and the dog food.
Re:FAIL! (Score:5, Funny)
Actually, it dried the dog and cleaned the food. So mostly there.
Re: (Score:2)
That's not a misspelling. It might be intentional, in that they think the reader will conflate the ideas. But in this case, I think "errand" has a specific meaning relating to highly scriptable actions that a customer needs to take. The natural language processing would facilitate the process, but it may only be set up to handle very specific tasks (errands). I think they are using the term "errand" here to refer to the external API communication the bot is doing to handle the task. It's not a physical trip.
Re: (Score:2)
ErrAndrogenous ambidexterity?
It's kinda funny either way.
What can go wrong? (Score:4, Interesting)
After months of resisting, Air Canada was forced to give a partial refund to a grieving passenger who was misled by an airline chatbot inaccurately explaining the airline's bereavement travel policy.
https://www.wired.com/story/air-canada-chatbot-refund-policy/ [wired.com]
AI replacing management (Score:2)
I imagine AI replacing management would be extremely successful as well. The information inputs that feed into management decisions are considerably less noisy than the inputs that "ground level" employees interact with on a day-to-day basis. And there's far less variation not only in the types of scenarios that management faces, but also in the decisions they can possibly make. E.g., ground level data is aggregated, distilled, and compressed into easily-understood metrics; and the basis that forms the backbone
Re: (Score:2)
Re: (Score:3)
Management gets fired all the time. AI could easily fire a manager by mistake and then be unable to reinstate them. The CEO or VP could decide that AI is doing so well that they can lay off more managers. The board could decide that AI is so good that they get rid of the CEO. AI can then buy enough stock that it fires the board.
Re: (Score:2)
Top management will be the last employees to suffer replacement. I expect some middle management is already being replaced, or at least consolidated, with an AI assistant.
The Plan (Score:2)
No idea where I first saw this, maybe here, but it seems fitting.
In The Beginning Was The Plan
Project management and planning are deeply intertwingled. More often than not,
both are interpreted and communicated as reality, and that a plan is a plan is a plan,
and not necessarily reality, is forgotten.
The Plan
In the beginning was The Plan.
And then came the assumptions.
And the assumptions were without form.
And the plan was without substance.
And darkness came upon the face of the workers.
And they spoke amongst themselves
AI savings should pay welfare (Score:5, Insightful)
Re: (Score:3)
UBI + illegal immigration would not go well. Got to tackle the border problem first.
Re: (Score:2)
That's not gonna work.
Re: (Score:3)
Re: AI savings should pay welfare (Score:2)
Re: (Score:2)
With mass unemployment from AI, even among those with advanced degrees, and illegal immigration taking most of the remaining jobs, AI should pay the dole, which has now been rebranded UBI.
You realize this idea fails immediately. The AI isn't being paid to do work; how will you tax it if it doesn't have a salary?
I know I know! We will tax the business on profits. ROFLMAO.
You ain't getting anything from those fuckers.
Errands? (Score:2)
This should surprise no one (Score:4, Informative)
The customer service experience of any company is the lowest-paid breathing human capable of forming sentences reading from a script, then melting down if the script is deviated from. It is literally the perfect use case for an AI chatbot.
Remember if you work like a robot, you will be replaced by a robot.
Re: (Score:2)
Indeed. And before that an expert system could have done it, except that these have trouble doing natural language.
Remember if you work like a robot, you will be replaced by a robot.
Exactly. AI is no threat at all to anybody with an actually good qualification. Too many people do not have one, or their qualification is so generic that they can be easily replaced.
Hey, I completely agree with you. Must be kind of a first ;-)
Re: (Score:2)
I had to deal with some online chat, and I had to just repeat "let me speak to a human" three times before I'd get a real person on the other end who could help (it was the weekend, so using the phone outside of business hours was impossible). Otherwise the chat followed an obvious and useless script ("click the button you already clicked 10 times and we'll send yet another text to a phone you don't own", "Have you tried turning it off and back on again?", "would you like to upgrade to the premium protection plan?").
Re: (Score:2)
Saying the word "agent" usually works and avoids any quirks of the bad natural language processing.
Re: (Score:2)
And you would have had the same experience talking to a human. Front line support, even the kind that may be humanoid in form, will offer you nothing more than trying to turn it off and on again, and will also leave you asking multiple times before you get elevated to level 2 support.
There's nothing unique about Chatbots here. This is the design of the system, and has been since back in the days where people thought AI was just a Steven Spielberg movie.
Re: (Score:2)
And that horrible scenario involves both customers and employees doing things they have no interest in or understanding of. Literally anything is better than what we have now.
If the service is crappy enough... (Score:2)
For example, I just had Amazon customer support promise me a replacement shipment for a stolen one, only to do nothing. I guess AI can perform at that level as well.
Predictable News Release #475.3 (Score:2)
"Teleperformance shares fell as much as 29% in Paris trading, the steepest drop since November 2022, amid regular halts for volatility. The company already uses AI to manage simple processes on behalf of its clients, it said in a statement in response to the stock drop."
“The group’s current activity in no way reflects the negative conclusions in its business that could be drawn from the technological developments mentioned in this communication,” Teleperformance said.
.
No, of course not. No reflection at all, none whatsoever. It's preposterous to even think such a thing!
matching is easy (Score:2)
Yeah right (Score:2)
How many of these are people opening the chat, trying to ask a question, then quitting in disgust?
Not AI. (Score:1)