First Empirical Study of the Real-World Economic Effects of New AI Systems (npr.org)

An anonymous reader quotes a report from NPR: Back in 2017, Brynjolfsson published a paper (PDF) in one of the top academic journals, Science, which outlined the kind of work that he believed AI was capable of doing. It was called "What Can Machine Learning Do? Workforce Implications." Now, Brynjolfsson says, "I have to update that paper dramatically given what's happened in the past year or two." Sure, the current pace of change can feel dizzying and kinda scary. But Brynjolfsson is not catastrophizing. In fact, quite the opposite. He's earned a reputation as a "techno-optimist." And, recently at least, he has a real reason to be optimistic about what AI could mean for the economy. Last week, Brynjolfsson, together with MIT economists Danielle Li and Lindsey R. Raymond, released what is, to the best of our knowledge, the first empirical study of the real-world economic effects of new AI systems. They looked at what happened to a company and its workers after it incorporated a version of ChatGPT, a popular interactive AI chatbot, into workflows.

What the economists found offers potentially great news for the economy, at least in one dimension that is crucial to improving our living standards: AI caused a group of workers to become much more productive. Backed by AI, these workers were able to accomplish much more in less time, with greater customer satisfaction to boot. At the same time, however, the study also shines a spotlight on just how powerful AI is, how disruptive it might be, and suggests that this new, astonishing technology could have economic effects that change the shape of income inequality going forward.
Brynjolfsson and his colleagues described how an undisclosed Fortune 500 company implemented an earlier version of OpenAI's ChatGPT to assist its customer support agents in troubleshooting technical issues through online chat windows. The AI chatbot, trained on previous conversations between agents and customers, improved the performance of less experienced agents, making them as effective as those with more experience. The use of AI led to a 14% increase in productivity on average, higher customer satisfaction ratings, and reduced turnover rates. However, the study also revealed that more experienced agents did not see significant benefits from using AI.
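
To make the headline number concrete, here is a minimal sketch of the kind of comparison behind a result like this: average chats resolved per hour for AI-assisted versus unassisted agents, split by experience level. The Python below uses entirely hypothetical data and field names, not figures from the study, and the actual paper relies on a far more careful identification strategy than a snapshot comparison like this.

    from statistics import mean

    # Hypothetical per-agent records: (experience_tier, ai_assisted, chats_resolved_per_hour)
    records = [
        ("novice", False, 1.8), ("novice", True, 2.3),
        ("novice", False, 1.7), ("novice", True, 2.2),
        ("veteran", False, 2.6), ("veteran", True, 2.7),
        ("veteran", False, 2.5), ("veteran", True, 2.6),
    ]

    def avg_rate(tier, assisted):
        # Mean resolution rate for one experience tier and assistance status.
        return mean(r for t, a, r in records if t == tier and a == assisted)

    for tier in ("novice", "veteran"):
        base, with_ai = avg_rate(tier, False), avg_rate(tier, True)
        uplift = (with_ai - base) / base * 100
        print(f"{tier}: {base:.2f} -> {with_ai:.2f} chats/hour ({uplift:+.1f}%)")

In this toy data the uplift is concentrated among the novices, which is the shape of the result the study reports.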

The findings suggest that AI has the potential to improve productivity and reduce inequality by benefiting workers who were previously left behind in the technological era. Nonetheless, it raises questions about how the benefits of AI should be distributed and whether it may devalue specialized skills in certain occupations. While the impact of AI is still being studied, its ability to handle non-routine tasks and learn on the fly indicates that it could have different effects on the job market compared to previous technologies.
Comments Filter:
  • by Big Hairy Gorilla ( 9839972 ) on Thursday May 04, 2023 @09:32AM (#63496696)
    AI will allow companies to process more people and transactions more quickly but will lead to a large increase in edge cases not covered by the "proper" use cases.

    Everyone shrugs, until they become an edge case themselves and there is no resolution to their issue.
    • I disagree; the whole point of AI is to industrialize the handling of edge cases with something that has the capacity to give every single case a bit more attention than the usual 'fill out a form and hope you get contacted' response a chatbot can usually provide.

      The scarier idea to me is that, as published models become thoroughly understood, it could become possible to manipulate them into providing specific adverse outcomes; a future hacker AI Version V knows that SmallCo XYZ is running a local version o

      • I wouldn't disagree that AI customer-facing systems would presumably, eventually, learn to address more and more edge cases over time. Which doesn't preclude new edge cases, rinse, repeat... but in the shorter term we can see that Google is dropping safeguards in order to keep Microsoft from grabbing all the mindshare. Geoffrey Hinton is all over the news saying as much. Companies are panicked that they will be left behind, so, damn the torpedoes, full speed ahead.

        But your second scenario is surely a possible and
    • Humans have more time to deal with edge cases because they aren't dealing with the big standard cases that the AI can manage nearly or just as well as a human. Labor multipliers mean that excess human labor can be redirected to other areas where it previously wasn't available to be utilized.
      • by Big Hairy Gorilla ( 9839972 ) on Thursday May 04, 2023 @02:38PM (#63497456)
        Hold on. Have you tried to get a Stripe account? Small businesses are mostly considered "high risk". I did this recently. It was a shit show. Millennials answer email to address edge cases, BUT they were useless, uninformed, and powerless... they referred me back to "the system", and in the end, when they insisted that I submit to biometric identification where they would store my data with a "trusted partner" in any country in the world, I said FU. Management has their eye on cutting costs and expanding revenue-generating activities... nobody gives a rat's ass about really solving the edge cases when revenues are ramping up... that would prove that management is earning their keep. I would not be hopeful in that regard.
      • Humans have more time to deal with edge cases because they aren't dealing with the big standard cases

        Historically, advances in technology never result in workers being given more time to solve problems. Instead, they are used to increase the amount of work a worker can perform in a given time, because that is what has the most immediate effect on profits.

    • AI will play the role of tier 1 reps and support. It will handle the bulk of things in most cases, many of which probably could have been handled in a different, more efficient way, but it bridges the gap of mindless human intervention. Then the humans can focus on the edge cases and fixing real issues. Skill, ingenuity, and creativity can be valued more by society than filling in a box and following procedures and instructions.
      • I just refuted the exact same argument.
        If AI allows for increased revenue and a decreased human workforce, that PROVES that management is making the right decisions for the company. They will all get a raise, and the edge cases will diminish in importance.

        Clearly this is my notion vs. your notion, so nothing is provable... you are hopeful... I am not. Based on observation of human nature, I'll stick with my prognostication; unfortunately, humans always do the wrong thing, especially when money is involved. Expe
  • by sinij ( 911942 ) on Thursday May 04, 2023 @09:33AM (#63496700)
    My take on this is that AI allows untrained and otherwise unqualified people to perform at the average level in a large number of white-collar jobs. This means that these jobs will turn into McJobs paying minimum wage.
    • Re: (Score:3, Interesting)

      by CAIMLAS ( 41445 )

      Exactly.

      That's the immediate 'short term' win for businesses. They can pay people less for these jobs in the short term.

      Medium term, the results will be undesirable. Companies will be hard pressed to develop ChatGPT content which is appropriate to newer products. This approach only works when you've got an established knowledge base - ChatGPT isn't able to instruct people on how to do things which it hasn't been trained on. And that training requires the highly skilled people to learn how to do it. You can'

      • by Zak3056 ( 69287 )

        The obvious solution to the problem you present is to have teams of subject matter experts do the job to train the AI, then push the resulting tool out to the button pushers. Even if you're charging, say, $10k per seat for the specific skill, that would still be a net win for a business that's paying button pushers (and fewer of them) rather than experts; and even if it costs you, say, $1M to generate the skill, if you're selling thousands of seats that's chump change.
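
        The arithmetic there is simple enough to spell out; the numbers below are the hypothetical ones from the comment, not anything from the study.

            seats_sold = 2000          # "thousands of seats"
            price_per_seat = 10_000    # $10k per seat for the packaged skill
            skill_dev_cost = 1_000_000 # $1M for experts to generate the training

            revenue = seats_sold * price_per_seat
            print(revenue, skill_dev_cost, revenue - skill_dev_cost)
            # 20000000 1000000 19000000 -> the $1M development cost is ~5% of revenue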

        Not saying this is desirable, only that it

        • by CAIMLAS ( 41445 )

          That isn't how expertise works, though. You can't just turn highly skilled people into trainers, for a few reasons:

          1) Training is a different skill.
          2) Training is hard.
          3) Certain things are difficult to quantify.
          4) If they're building training data, they're not working as experts. It's not a parallel task.

          The problem you're failing to address here also clearly indicates fairly short-term thinking on your part. Those experts became experts by doing. They grow old, get tired, and move on. It's going to take a

          • by Zak3056 ( 69287 )

            Everything you say above is true, but you're missing my point.

            Hypothetically, let's say we're in your medium term. Most of the workforce has been reduced to button pushers, which is working great (from the employer's point of view), but now there are new products that the button pushers simply can't use because they don't have the skills/experience, and there is no one to train the AI to help them. Let's say you have 250,000 jobs to fill using this skill that doesn't exist in the workforce. Rather than train

            • by CAIMLAS ( 41445 )

              With all due respect, you're missing my point.

              "you train 250, who then train the AI"

              Yes, and how do you train even one person in a skill, when nobody possesses that skill?

              You can't. Gaining the knowledge to train even one person is a holistic practice which takes months if not years for a highly skilled person to get to the level where they possess a corpus of knowledge on the topic sufficient to train another person - and the training of the first couple people will be a months+ long experience, and be very touch-and-go.

              • by Zak3056 ( 69287 )

                Yes, and how do you train even one person in a skill, when nobody possesses that skill?

                So you're suggesting that you can simultaneously create new products that need new skills, but not be able to train anybody to use those new products? Come on, that's ridiculous.

                Gaining the knowledge to train even one person is a holistic practice which takes months if not years for a highly skilled person to get to the level where they possess a corpus of knowledge on the topic sufficient to train another person - and the training of the first couple people will be a months+ long experience, and be very touch-and-go.

                Yes, I understood that--I used your arbitrary "one year" timeline when I wrote my response. You have a base of intelligent, flexible people, whom you presumably pay well, who learn your new skills and create the training set for the AI. They represent a very small number of people when compared to the m

      • And that training requires the highly skilled people to learn how to do it.
        You can't get highly skilled people doing something if you're only hiring minimum wage button pushers.
        your pipeline for skilled and capable workers will disappear.

        This isn't an issue at all. We already basically have this, which is why tech support has different levels and you have to escalate to the top for real problems.
        If you need 1000 employees, you have 900 button pushers and then a few dozen humans in the rotation to continue to advance from low to medium to high skill and train the AI.

        If you think of something like an AI surgeon then you could have 1 medical school and 1 real hospital where humans learned and then replicated that knowledge to the AI.

        The sta

        • by CAIMLAS ( 41445 )

          How do you think companies get "high level" tech support?

          Hint: it isn't by hiring people off the street, generally. The better support people are usually good to begin with, yes - but they're promoted from L1 or L2 to L3 or similar internally, because you need to know the product to get good.

          Button pushers will never get good.

          • How do you think companies get "high level" tech support?

            Hint: it isn't by hiring people off the street, generally. The better support people are usually good to begin with, yes - but they're promoted from L1 or L2 to L3 or similar internally, because you need to know the product to get good.

            Button pushers will never get good.

            Yes, but you don't need 10,000 L1 -> L2 -> L3.
            You could easily do it with 100 L1 -> L2 -> L3 and the other 9900 button pushers.
            Basically, you might still need a few skilled people to really understand the business and go through each level of training, but only a select few.
            You would only need a handful of people to make it to level 3, and that would be enough to train the AI.

  • by rsilvergun ( 571051 ) on Thursday May 04, 2023 @09:48AM (#63496732)
    Productivity and wages have been disconnected since the 70s. Worse, increased productivity is leading to _lower_ wages because we're literally competing for the work left over. New jobs aren't matching the rate of job destruction, and companies are increasingly going with the "Apple Computer" model where they focus on higher-end and luxury goods (Apple has trillions in cash just sitting around), so cheap consumer goods are no longer masking declining wages.

    Fact is, labor works like anything else: decrease demand by raising productivity and you get lower wages. A growing population, newly built cities and large scale social programs can counteract that, but we're out of room to expand (or at least out of water) and the economic pressures mean we can't just grow our way out of this mess.

    This study is just looking at the positive impacts for businesses (lower costs thanks to fewer workers). It's ignoring all the social effects. We're still very much an "if you don't work, you don't eat" kind of place. And ask yourself this: what are the jobs that are gonna replace all the ones productivity gains take? Be specific. Don't sit there and tell me "they'll be so futuristic you can't imagine". My civics teacher in high school told me that and he was full of shit. The guy who ran the newspapers, though, who told me his job would be gone in 20 years and to stay out of journalism, was spot on.
    • It's also, I think, an artifact of a goofy definition of "productivity."

      Number of service calls handled, to me, isn't real productivity; real productivity is not needing the service calls in the first place.

      I mean I could say that my productivity in adding two numbers together is phenomenal, because I'm using a computer, but who cares? Society doesn't need numbers added together.

      How does AI reduce the amount of land and chemicals required to grow a given amount of food? How does AI reduce the recidivism rate

      • That's just the nature of things. No matter how easy your app is, somebody is gonna need a password reset.

        But if AI does that reset for you and does it in a way that prevents the user from getting confused (if you've never worked a level 1 help desk you have no idea how hard it is for some people to reset a password...) then that's a huge productivity boost.
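
        The kind of guided flow being described might look, very roughly, like the hypothetical sketch below: the assistant walks the user through one step at a time and re-checks each step instead of dumping the whole procedure on them at once. Purely illustrative; no real help-desk system is being quoted here.

            # Hypothetical guided password-reset script an assistant might drive.
            RESET_STEPS = [
                "Open the login page and click 'Forgot your password?'",
                "Enter the email address you registered with.",
                "Open the reset email and click the link inside it.",
                "Choose a new password and type it twice.",
            ]

            def guide_reset(step_done):
                # step_done(prompt) should return True once the user confirms the step.
                for number, prompt in enumerate(RESET_STEPS, start=1):
                    while not step_done(f"Step {number}: {prompt} Done?"):
                        pass  # repeat or rephrase until the user confirms
                return "Password reset complete."

            # Example run that auto-confirms every step.
            print(guide_reset(lambda prompt: True))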

        I think the thing is you're thinking of productivity as "making things better". The guy who wrote this study is thinking in terms of "making thin
      • by Anonymous Coward

        >real productivity is not needing the service calls in the first place.
        Tangent: this reminds me of how wind power is sometimes touted as a major job creator compared to other forms of generation. It would be nicer for the consumer cost-wise if wind power *didn't* need so many jobs for upkeep.

    • A combination of the fact that no company pays its staff more than it needs to - at least in the long term - and the way that the supply of employees is generally in excess of demand means that in many areas the pressure on wages has been downward in the long term.

      On the other hand, we have seen median wages in China rise as they developed their economy to something more nearly approaching our level of sophistication; they HAVE drained their supply of people to the point where firms have had to pay more to ge

    • More productive labor is more valuable, not less valuable. The lack of wage growth is basically down to globalization. No one is willing to pay someone more for a job another person can do for less. No one is actually that patriotic and we'll all happily let China build consumer goods for us if it makes them cost less.

      Your proposed solutions don't really do much other than try to redistribute wealth. In 100 years the shape of wealth distribution won't have changed much, but everyone will still be much be
      • Yes, it's more valuable, to the person making money off it. But that doesn't mean there's a linear progression in labor usage. I'm not just going to hire more people because I'm making more money.

        Repeat after me: Companies don't hire because they have money; companies hire to meet demand.

        If the demand isn't there, they won't hire. Mass layoffs will lower demand. Apple Computer has shown you can have a multi-trillion-dollar company with minimal product demand just by catering to the top 10-15%. Compani
  • The findings suggest that AI has the potential to improve productivity and reduce inequality by benefiting workers who were previously left behind in the technological era.

    More like reduce stratification, and not in a good way. Now a lot of educated folks of some ability - people who could learn to use things like word processors and spreadsheets to make themselves effective at basic business administration tasks but were maybe not the sharpest knives in the drawer - are effectively on the same level as the barely literate.

    You are either going to have highly specialized knowledge, like post-grad level, in a narrow field, or you will be a completely replaceable cog able to offer little if anything ove

    • by CAIMLAS ( 41445 )

      Asking good questions is a skill.

      Most people do not have it.

      The people who can pose a good question, who can figure out how to make that prompt return useful results, will be highly valued.

      Unfortunately, being capable enough to do that is an extremely rare ability at this point with humans, and I doubt it'll get better.

  • There used to be a whole industry of cutting ice out of lakes and shipping it around the world. As refrigeration was invented, no doubt some newspaper wrote an editorial about how it was a tragedy that so many in the ice industry would be losing their jobs.
    Technology happens. Adapt or get left behind.

    • by DarkOx ( 621550 )

      I am about to write 'but this time it's different', which of course we all know is never really true - history rhymes - so I'll start by acknowledging that you are probably right, at least in the long run, but...

      What is a little different about this compared to, say, refrigeration is that it's so broadly applicable to so many industries.

      Similar technologies have come before with equally obvious broad applications, like, say, the internal combustion engine. However, those things have lacked immediacy in terms of

  • by RogueWarrior65 ( 678876 ) on Thursday May 04, 2023 @10:42AM (#63496854)

    Every human idea can simultaneously be used for good and bad. There are no exceptions including AI. AI has the potential to do enormous good by eliminating human fallibility and laziness (and yes expense) from things. Take, for example, instruction translations. AI will eliminate the possibility of product directions that make no sense because companies without the resources to hire first-rate translators will use AI instead. AI can test millions of variations of problem solutions and learn from what works and doesn't work. Putting aside the Hollywood FUD tropes of AI becoming self-aware and taking over the world, AI could also be mistrained such that when you ask it about a particular topic, it uses false information that it was told was true. This could explain why certain ideological groups are putting so much effort into being revisionist historians. When AI is the only source of information, whoever trained it controls the narrative and you have no recourse to correct slander. You won't be able to sue an AI.

  • by seth_hartbecke ( 27500 ) on Thursday May 04, 2023 @11:10AM (#63496920) Homepage

    Trust me, I'll bring this back around to the AI topic.

    To me and many of my friends, Google's search is getting terrible. And no, I'm not talking about potential political censorship. I'm talking "I need the answer to this tech problem" searches where all I get is marketing speak about a product some company wants to sell me that might solve my problem, or some other page that's "this is what this technology does" without the technical detail I want.

    When you search with Google, they watch every link you click and assume the "last link visited" had the right answer. When really, that might have been the point where I finally tossed my hands up in frustration and went over to try Bing or some other engine, or found a chat room, or broke out an old-school book, etc., to finally find my answer.

    For many, many years I stopped bookmarking useful web pages. I've started using bookmarks again in the last few years because Google can't find the actually useful page for me anymore.

    Here's where this intersects with AI. Yeah, they trained this AI on the prior "conversations," and now their low-skill employees are better. Great! Amazing!

    But as they continue to train the AI on the prior conversations, we're going to see the same effect that we're seeing with Google. The bad data will slowly corrupt and spread its way through the signal, and soon the AI will be making poor recommendations - especially, as other comments have suggested, in the case of people who are exceptions.

    We need to be careful as we train these AI systems on the responses "from the masses." It's useful input, but it needs to be weighted very carefully. It's not expert data or training cases.
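
    As a hedged sketch of what "weighted very carefully" could look like in practice, the Python below scores each logged conversation with hypothetical quality signals (expert review, verified resolution) and drops or down-weights the rest before it reaches the training set; the field names and thresholds are illustrative assumptions, not details from the article.

        # Hypothetical conversation logs with quality signals attached.
        conversations = [
            {"id": 1, "expert_reviewed": True,  "issue_resolved": True},
            {"id": 2, "expert_reviewed": False, "issue_resolved": True},
            {"id": 3, "expert_reviewed": False, "issue_resolved": False},
        ]

        def training_weight(conv):
            # Expert-reviewed transcripts count fully, unverified-but-resolved
            # ones count a little, everything else is excluded.
            if conv["expert_reviewed"]:
                return 1.0
            if conv["issue_resolved"]:
                return 0.2
            return 0.0

        training_set = [(c, training_weight(c)) for c in conversations if training_weight(c) > 0]
        print([(c["id"], w) for c, w in training_set])  # [(1, 1.0), (2, 0.2)]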

  • by rbrander ( 73222 ) on Thursday May 04, 2023 @01:15PM (#63497268) Homepage

    After 40 years in IT, I'm retired, and I look to a not-yet-retired buddy for guidance, since he's also the best programmer, with the deepest solutions, I've ever met. He, of course, jumped all over LLMs, downloaded a couple, began playing with them, and was soon working with them. One, at GitHub, helps you program: he can stop in the IDE and just ask for "routine to filter correct dates", even adding parameters to "correct", and get a routine in seconds that he just has to scan to check, since (like us all) he has written dozens of such routines. Saves 90% of programming time on things like that.
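
    For a sense of the kind of routine being described, something like the hypothetical helper below is what "filter correct dates" might come back as: a function that keeps only the strings that parse as valid dates in a given format. This is an illustration, not the code his assistant actually produced.

        from datetime import datetime

        def filter_valid_dates(values, fmt="%Y-%m-%d"):
            # Keep only the strings that parse as valid dates in the given format.
            valid = []
            for value in values:
                try:
                    datetime.strptime(value, fmt)
                    valid.append(value)
                except ValueError:
                    pass  # not a date in this format; drop it
            return valid

        print(filter_valid_dates(["2023-05-04", "2023-13-40", "not a date"]))  # ['2023-05-04']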

    So it helps more with basic programming, skipping the dull stuff - the danger here is to the junior programmer you might have handed that off to? He agreed - junior programmers would be using the AI the most heavily, though, being given much more of that work.

    But then came my killer question: "Is this going to affect your data-processing, get-the-job-done life...more than the invention of the spreadsheet? Remember having to stop and do all the calcs, write them down? Spreadsheets REALLY sped up a dozen common office jobs, more common jobs than programming. Even most programmers have to sort and organize piles of data, do repetitive calcs, part of every process. Would you go back from spreadsheets, if you could keep the AI helper?"

    NO WAY. Firm head-shake. Spreadsheets did MORE to speed up office work. At least for programmers. Absolutely for Accounting, pay, inventory, managing customers and sales.

    So: all I'm saying, is that before you start predicting a Brave New World that's Very Different, consider that our society swallowed up the efficiencies created by spreadsheets, and databases, and word processors, and internet communication instead of mailing paper, and for that matter telephones and cars. Employment and life continue.

    It's not bigger than spreadsheets. Calm down.

    • by narcc ( 412956 )

      he can stop in the IDE and just ask for "routine to filter correct dates", even adding parameters to "correct", and get a routine in seconds that he just has to scan to check, since (like us all) he has written dozens of such routines. Saves 90% of programming time on things like that.

      I've seen silly arguments like this before. The solution isn't to have a program write repetitive routines like that; it's to write a function so that you're not writing a bunch of repetitive code in the first place.
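
      To illustrate the point, a single generic helper parameterized by a parser covers the cases that would otherwise be near-duplicate generated routines; this is a hypothetical sketch, not code from anyone in this thread.

          from datetime import datetime

          def filter_valid(values, parse):
              # Keep only the values the given parser accepts without raising.
              kept = []
              for value in values:
                  try:
                      parse(value)
                      kept.append(value)
                  except (ValueError, TypeError):
                      pass
              return kept

          # The same routine handles dates, numbers, or anything else with a parser.
          print(filter_valid(["2023-05-04", "nope"], lambda s: datetime.strptime(s, "%Y-%m-%d")))
          print(filter_valid(["3.14", "pi"], float))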

      ChatGPT is phenomenally bad at writing code. I shouldn't need to explain why ... again. People are figuring this out, which is why the claims have softened from 'replacing programmers' to 'replacing juniors' to 'helping juniors'.

      Would you go back from spreadsheets, if you could keep the AI helper?"

      Spreadsheets are useful. They actually replaced programmers. At

    • So: all I'm saying, is that before you start predicting a Brave New World that's Very Different, consider that our society swallowed up the efficiencies created by spreadsheets, and databases, and word processors, and internet communication instead of mailing paper, and for that matter telephones and cars. Employment and life continue.

      It's not bigger than spreadsheets. Calm down.

      The trend line for this technology is insane. It doesn't matter what GPT-4 can do today or how these capabilities impact either the workforce or society generally.

      What matters is the rate at which progress is being made and the multitude of avenues for advancement. It matters that "emergent" capabilities nobody expected or can even explain are appearing and being amplified by subsequent efforts.

      The era of custom low-precision/cost/power matrix processors is in its infancy, as are LLMs generally. Spending and

  • As a software/firmware engineer, I'm convinced AI can generate bugs several orders of magnitude faster than I can!
