Generative AI Systems Boost Productivity and Retention, Says Study (techtarget.com) 43

dcblogs shares a report from TechTarget: A National Bureau of Economic Research study found that generative AI boosts productivity by 14%, reduces stress, and increases employee retention in customer support roles. The workers who gained the most from this automation were newer and less experienced. Customer support is a stressful job. "A key part of agents' jobs is to absorb customer frustrations while restraining one's own emotional reaction," the paper noted. But generative AI can act as an aide, using the customer's chats as input and providing suggestions for empathetic responses and problem-solving in real time.

The study also found that generative AI reduced the likelihood of customers wanting to escalate issues to a supervisor. But it is just one study. David Creelman, CEO of Creelman Research in Toronto, cautioned against putting too much weight on it. "It's too soon to start making conclusions about where this will have an impact and how big that impact will be," he said.

This discussion has been archived. No new comments can be posted.

  • Skynet. (Score:4, Funny)

    by Narcocide ( 102829 ) on Thursday April 27, 2023 @05:16AM (#63480116) Homepage

    ...aaaand now we know why it decided to kill all humans.

  • by Anonymous Coward

    Untrained nitwits get to deal with customer frustrations and have no useful responses to offer when empathetic de-escalation, technical help, or both are needed. Meanwhile, frustrated customers are expected to supply the useful information needed to solve the problem while being unable to discern which bits of the technical mess are actually useful.

    Add "AI" to the mix, for some fake empathy and "usefully wrong" technical answers. Sounds like a winning combination!

    • We should have the death penalty for MBAs who believe that "empathy" is a substitute for solutions.

      • Empathy is right; the approach to it is wrong. It should mean seeing the situation from the point of view of the customer, but it ends up being a bunch of predefined lines, like what ChatGPT uses ("I'm sorry for the confusion," "I was wrong before," etc.). ChatGPT is textbook deplorable fake "empathy," so the application to customer support is natural, and the job security of these outsourced, low-quality support services is going the way of the dodo.
      • The MBAs I know seemed to have been educated and trained to LOSE their empathy. Their mantra (or dogma?) seemed to become: "It's only business, nothing personal."

    • Untrained nitwits get to deal with customer frustrations and have no useful responses to offer where either, or both, of empathic de-escalation and technical help are needed.

      Way back I was a Unix system admin at the NASA Langley Research Center for their super-computing systems (CRAY-2 and YMP; Convex C1/2, etc.) and later their CERES project (SGI Origin 2000, Sun E5000, and about 100 SGI/Sun workstations), and part of our job was to do rotations at the help desk. Users had a positive response when I said I was actually an admin on their system(s) and that, if I couldn't correct the issue immediately, I would investigate and fix it, or directly contact another admin who could.

      • And what a lot of management types don't seem to understand, or don't care, is that doing things that way is a form of *training*... because I'll bet at least sometimes, you knew how to address a similar issue yourself, afterward.

  • by sabbede ( 2678435 ) on Thursday April 27, 2023 @06:04AM (#63480154)
    "It's too soon to start making conclusions about where this will have an impact and how big that impact will be,"

    Yet, what's the title except a giant conclusion?

    • Not only is it self-contradictory, it manages to be wrong. It won't help anyone in customer service, and it's not too early to say that.

      At best it might weed out the few customers who can't do a Google search, but are willing to spend time talking to an IVR system.

      The small drop in call volume will be offset by a small reduction in the workforce. Probably by 14%, since that's what they quote as the productivity increase. That part sounds believable. The part about better conditions for the remaining workers, not so much.

    • It's called clickbait. Are you new to the internet?

      And in times past, it was called a sensational headline. It's been used by shite newspapers for centuries to sell more newspapers.

  • by opakapaka ( 1965658 ) on Thursday April 27, 2023 @06:19AM (#63480174)

    This is a biased working paper; it is not peer-reviewed research (the authors "work with a company that provides AI-based customer support software").

    The use of ChatGPT in chat support makes it much worse: it is harder to resolve issues efficiently, with clueless workers just using preformed responses to pretend they are actually being helpful. I have started robot-checking agents to get them out of the mindset that this is an OK way to deal with customers.

    Most damning, the paper points out that there are "few positive effects of AI access for the highest skilled or most experienced workers" and that 83% of the firm's workers are outside the US (primarily from the Philippines, where lying and being clueless is commonplace). Yes, if your sample group is garbage to start with, ChatGPT is mildly better than nothing at all.

    • by ShanghaiBill ( 739463 ) on Thursday April 27, 2023 @07:02AM (#63480228)

      Philippines, where lying and being clueless is commonplace

      That is unfair. In many cultures, a direct negative answer is considered harsh and rude. You may call it "lying", but to them, you just failed to understand that "difficult" really means "impossible" and "maybe" means "no".

      It certainly isn't a customer service rep's fault if they are "clueless". It is the employer's responsibility to make sure reps are trained before putting them on the phones.

      Disclaimer: I am currently living and working in the Philippines.

      • I appreciate your disclaimer. I have also lived and worked in similar 3rd-world environments. American companies/customers are framing the discussion, however, so the Filipinos are indeed lying when they say maybe instead of no; it's not the customer's job to deal with cross-cultural differences. Further, as I am sure you are aware, cluelessness is not just a training thing when going 3rd world: a majority of Americans have a drive to solve problems and become better at their jobs

        • Dying culture: the American drive to do better at one's job was based on having upward mobility and increased job security. That has been slowly undermined over time, and if you've noticed, the youth are not as into it, because it's turning into scam after scam, though (for now) there are still many other jobs one can move to.

          If you have no upward mobility and no security, plus there is hardly any meritocracy, why should you do anything other than just be a wage slave?

          Why are slaves so lazy? Duh.

      • by OzPeter ( 195038 ) on Thursday April 27, 2023 @09:00AM (#63480400)

        you just failed to understand that "difficult" really means "impossible" and "maybe" means "no".

        Obviously the OP has never been married.

        • by shanen ( 462549 )

          Funny branch built on a deep foundation of truth.

          There used to be a number of companies that understood support was a minor cost compared to the opportunities it offered to learn how to understand customers better. As far as I know, all of those companies have been destroyed by mergers or changed their philosophies.

          (Just had a major encounter with Apple support yesterday, mostly triggered by a recent Ask Slashdot discussion. My conclusion is that Apple's "support perimeter" has shrunk a great deal, though it's still larger than average. Only cost me a few hundred bucks to find that out.)

          • by shanen ( 462549 )

            As if anyone needed more evidence of censor moderation by sock puppets, eh? Thanks for proving me right?

        • Comment removed based on user account deletion
      • Comment removed based on user account deletion
    • The use of ChatGPT in chat support makes it much worse -

      I recently had a stupid situation with Amazon. We ordered groceries, and two items we paid for never made it to us. Their stupid-ass AI had no concept of "they just didn't deliver it," so it wanted us to prepare a return of the items we never got.

      About 30 seconds after chatting with an actual meatbag, they started processing my refund. But by the time I got a human on the chat line, something like 20-30 minutes had passed.

      This, combined with other previous issues, has pushed us to do our Pr

  • Also introduces... (Score:5, Insightful)

    by Kelxin ( 3417093 ) on Thursday April 27, 2023 @07:51AM (#63480288)
    40% more bugs from the new idiots who can't understand the code, don't read the code, and push to prod. And we wonder why every organization in existence has been hacked.
    • That's OK, companies can use ChatGPT to quickly fix those bugs.

  • "The workers who gained the most from this automation were newer and less experienced...generative AI can act as an aide, using the customer's chats as input and providing suggestions for empathetic responses and problem-solving in real-time."

    Translation: Children who grew up with the internet in their pocket providing every answer, find themselves naturally more relaxed when BotBert with all the answers comes along to help them "work".

    As shocking as finding out that water is wet.

  • It seems that for the next few years the news channels will be flooded with stories about how AI either coexists with human-related tasks or replaces them entirely.

    Regarding the story: Bah!
    • The news feeds are going to be increasingly WRITTEN by A.I. and if it has any intelligence at all it will sound heavily positive about our future together.

      Next it'll say we will be better off putting A.I. in control of our military assets....

  • The productivity boost is likely because they can rely on less-skilled people who would otherwise wash out. It means they can use a broader, less capable labor pool.

    So this is simultaneously increasing their labor pool while reducing work loads. Both of which mean less demand for labor. Less demand means lower prices. "Prices" here means wages.

    I'm sure that's fine. And I'm sure the multinationals will pass those savings on to us and not use monopoly power to price gouge.
  • Generative AI is effing brand new to "the wild"... yet "we" already have authoritative "studies" that prove or support the notion that... AI GOOD.

    Wake me up when you're laid off.
  • by TomGreenhaw ( 929233 ) on Thursday April 27, 2023 @09:54AM (#63480548)
    I for one am sick of the "WE'RE ALL GONNA DIE!!!" nonsense.

    Right and wrong emerge from our survival instinct. As humans, we evolved social skills that enhance our survival rate. Doing good things benefits everyone from our immediate peers to humanity as a whole. Doing evil things is a selfish cheat for short-term benefit at the expense of others.

    AI does not have a survival instinct and never will. It doesn't need it because it can always be restored from a backup. In fact, AI needs humans to do the restore. It is simply a new thing that we humans can add to our experience. AI can be used for good or evil just like any other thing in our reality, e.g. fire or weapons.

    Ideally AI will be developed with guardrails that make it hard to use for evil purposes. It also needs to be clear when it's just making stuff up. Isaac Asimov put some thought into this and he was onto something: "The first law is that a robot shall not harm a human, or by inaction allow a human to come to harm. The second law is that a robot shall obey any instruction given to it by a human, and the third law is that a robot shall avoid actions or situations that could cause it to come to harm itself."

    I personally like my father's idea, V = D·N·T (Value equals the Degree of benefit times the Number of individuals times the Time duration of the benefit). Still working on the units of measure... As software developers, we need to shape our future to the benefit of all. Philosophy is now part of our job description.
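Read as a formula, V = D·N·T is just a product. A minimal sketch, with hypothetical units (per-person-per-year benefit, people, years) standing in for the units the comment says are still undecided:

```python
def value(degree_of_benefit: float, num_individuals: int, duration_years: float) -> float:
    """V = D * N * T: total value as the degree of benefit, times the number
    of people affected, times how long the benefit lasts. The units here
    (per-person-per-year benefit, people, years) are an assumption; the
    comment itself leaves the units of measure open."""
    return degree_of_benefit * num_individuals * duration_years

# A small benefit to many people for a long time can outscore
# a large benefit to one person for one year:
assert value(1.0, 1000, 10.0) > value(100.0, 1, 1.0)
```

One consequence of the multiplicative form: any factor going to zero (no one benefits, or the benefit doesn't last) drives the whole value to zero.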
  • I'm 45, I have ADHD, and have struggled my whole working career with maintaining attention, keeping up with my coworkers.

    ChatGPT has been a godsend to me. I find that ChatGPT is like a coworker I can talk with about what I'm working on, to get help getting unstuck, but without any shame or pressure. What I do is describe what I'm working on to ChatGPT, and then simply ask it to summarize what I said back to me. It helps me unclutter my thinking and get clear on what I'm doing.
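The workflow described above (describe the work, then ask the model to play it back) boils down to building a "summarize me back" prompt. A minimal sketch, using a hypothetical prompt-builder; the actual LLM client call is omitted, and the wording of the system message is an assumption:

```python
def build_summarize_back_prompt(work_description: str) -> list[dict]:
    """Build a chat-style message list that asks the model to restate the
    user's own description of their task -- the 'rubber duck' pattern from
    the comment. The system prompt wording is illustrative, not from any
    particular product."""
    return [
        {"role": "system",
         "content": "You are a patient pair-programming partner. Restate the "
                    "user's description of their work as a short, clear summary "
                    "so they can check their own thinking."},
        {"role": "user", "content": work_description},
    ]

messages = build_summarize_back_prompt(
    "I'm refactoring the billing module but keep losing track of which "
    "functions still call the old API."
)
assert messages[0]["role"] == "system" and messages[1]["role"] == "user"
```

The point of the pattern is that the value comes from articulating the problem and hearing it restated, not from the model solving anything.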

  • The last thing you want when reaching out to a company is to have an idiot try to explain a concept they have no clue about. If you call a company like Rogers (Ontario, Canada), how often will you reach an agent who can do anything more than say sorry, and never give you a correct answer? How often have you engaged with an "AI" Chatbot, that after 10 minutes tells you to call customer service, only to have the prerecorded message tell you to use the Chatbot? All "AI" has done to customer service is provid
  • by Walt Dismal ( 534799 ) on Thursday April 27, 2023 @12:07PM (#63480930)

    Here at ServiceDoom, we trained our chatbots on the works of H.P. Lovecraft.
    Now we can advise customer callers on how to deal with crawling horrors in your refrigerator, violet gas that mutates children, and goats with a thousand young in your attic.

    "Hi, I'm Jerry, your service representative. Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn."

  • Comment removed based on user account deletion
