
Thousands of CEOs Just Admitted AI Had No Impact On Employment Or Productivity

An anonymous reader quotes a report from Fortune: In 1987, economist and Nobel laureate Robert Solow made a stark observation about the stalling evolution of the Information Age: Following the advent of transistors, microprocessors, integrated circuits, and memory chips of the 1960s, economists and companies expected these new technologies to disrupt workplaces and result in a surge of productivity. Instead, productivity growth slowed, dropping from 2.9% from 1948 to 1973, to 1.1% after 1973. Newfangled computers were actually at times producing too much information, generating agonizingly detailed reports and printing them on reams of paper. What had promised to be a boom to workplace productivity was for several years a bust. This unexpected outcome became known as Solow's productivity paradox, thanks to the economist's observation of the phenomenon. "You can see the computer age everywhere but in the productivity statistics," Solow wrote in a New York Times Book Review article in 1987.

New data on how C-suite executives are -- or aren't -- using AI shows history is repeating itself, complicating the similar promises economists and Big Tech founders made about the technology's impact on the workplace and economy. Despite 374 companies in the S&P 500 mentioning AI in earnings calls -- most of which said the technology's implementation in the firm was entirely positive -- according to a Financial Times analysis from September 2024 to 2025, those positive adoptions aren't being reflected in broader productivity gains.

A study published this month by the National Bureau of Economic Research found that among 6,000 CEOs, chief financial officers, and other executives from firms who responded to various business outlook surveys in the U.S., U.K., Germany, and Australia, the vast majority see little impact from AI on their operations. While about two-thirds of executives reported using AI, that usage amounted to only about 1.5 hours per week, and 25% of respondents reported not using AI in the workplace at all. Nearly 90% of firms said AI has had no impact on employment or productivity over the last three years, the research noted. However, firms' expectations of AI's workplace and economic impact remained substantial: Executives also forecast AI will increase productivity by 1.4% and increase output by 0.8% over the next three years. While firms expected a 0.7% cut to employment over this time period, individual employees surveyed saw a 0.5% increase in employment.


  • well (Score:5, Funny)

    by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Wednesday February 18, 2026 @09:06AM (#65996162) Homepage Journal

    individual employees surveyed saw a 0.5% increase in employment

    I'm going to Disneyland!

  • Incorrect (Score:5, Insightful)

    by GeekWithAKnife ( 2717871 ) on Wednesday February 18, 2026 @09:10AM (#65996172)
    It had no impact on them or their companies BUT it has increased employment and productivity for Nvidia, Anthropic, OpenAI, Microsoft, etc...

    I hear some 10x their productivity.

    You just need to understand who is selling the shovels to know who's profiting from the gold rush...
    • by gweihir ( 88907 )

      You just need to understand who is selling the shovels to know who's profiting from the gold rush...

      Exactly. The long-standing history of all hypes. Obviously, there is a sucker born every minute ...

    • I just sat all night and finished the work of 4 weeks in 5 hours. It's the first time I actually took the back seat to my Jetbrains AI setup and let "him" do 95%+ of the work. I didn't even have to look up API stuff or search the official docs - which are premium grade - of the toolkit or state management I'm using. I just asked him to do things.

      I've now got a 5-hour coherent and comprehensive commented chat log to revise and completely understand the intricacies of an async render queue issue we ran into al

      • I'm a senior webdev earning 50k

        Why are you making less than a senior webdev based in India?

        • Why are you making less than a senior webdev based in India?

          I'm not. I live in Germany where there's higher taxes but also healthcare, a functioning public infrastructure, better quality of life and a Government that isn't made up of child-raping anarchists on crack. I get 50k because I went down 20k during my last job search and now have a job that's 95% remote and doesn't care if I'm sitting at the beach in Portugal or Croatia as long as I can maintain and develop the internal app I'm assigned to. Which I

      • I'm a senior webdev earning 50k

        You're not a "senior" anything at $50k, and of course your level of work can be replaced with a plagiarism bot. And it likely will be, so I urge YOU to "prepare for incoming." My work is far beyond what an AI can do and will remain so. You're welcome.

        • My work is far beyond what an AI can do and will remain so.

          Smack center in the denial phase, are we? 8-)

          You're not a "senior" anything at $50k

          Well my very young padawan, at 39 years of programming experience and 25 years of professional work as a software dev, architect and tech lead I actually _am_ a senior dev. And I'm glad I could still score a cushy job at 50k annual salary. I have dev-buddies who made 250k a year and are now totally out of a job, have no chance of getting a contract, struggling to get

          • Senior citizen, not a senior in your position other than by duration. And there is no denial happening: I have access to all the latest AI out there through work and actively try to get it to do my job as I'd rather be a tool operator than have to actually use mental energy. Epic failure. But I'm an architect, not a mid-skill coder.
          • by jvkjvk ( 102057 )

            Yeah, I really doubt the rest of your story when I see this:

            >struggling to get by and getting ready to sell their 400k assets - that have depreciated significantly

            Just what assets have "depreciated significantly" when the DOW is at an all time high, and real estate, as well? I guess Bitcoin. Lol.

    • I've been in the chip biz forever. EDA co's are always claiming 10X since the 80's. The thing is, they have gotten way more than 10X, but never the way claimed. What has gotten the impressive gains is not say a layout tool that does manual layout 10X better. What has gotten the gains is P&R and synthesis. But these came with a hidden cost. Density and power. I'm sure many will argue but but but power is way down from the 80's. And I'd agree. But not because of eda. It is because of the fab process. The
    • I hear some 10x their productivity.

      There is no doubt that is true. In some cases, AI -10x their productivity, too. (How productive they were beforehand is left as an exercise to the reader).

    • by Sique ( 173459 )

      It had no impact on them or their companies BUT it has increased employment and productivity for Nvidia, Anthropic, OpenAI, Microsoft, etc...

      It increased production, not productivity. Those are two different measurements.

      The problem with all those productivity gains by IT is that they are mainly in the administrative part of production. Yes, you can churn out more reports per unit of time. But no one can eat reports, live in reports, clothe himself with reports, or build reports into a car. Actual productivity gains are industrial, not administrative. How much has IT improved the construction of houses? How much has IT improved the planting of food crops

  • Such a surprise (Score:5, Insightful)

    by gweihir ( 88907 ) on Wednesday February 18, 2026 @09:12AM (#65996176)

    Nobody could have predicted this. Well, except anybody with a working mind and a bit of understanding of IT history.

    The next AI Winter will be a frigging ice-age!

    • by fluffernutter ( 1411889 ) on Wednesday February 18, 2026 @10:03AM (#65996226)

      It never fails to amaze me how little executives of tech companies actually know about tech. Here is an example of them being taken for trillions because of it.

      • by gweihir ( 88907 )

        Indeed. "Managers" with no clues about their products and how their organizations work.

      • Yep, I work for a software company whose CEO knows nothing about software. But he does know about sales, and that's why the PE owners chose him. You can guess how that plays out.

      • by sjames ( 1099 )

        In at least some cases I suspect the upper management is more concerned with the health of the hype than with actual results. It helps them to lowball new hires.

  • A self-reported, perception-based study from a very narrow and short sampling period that doesn't actually measure productivity.

    • by Junta ( 36770 )

      It's what we have to work with right now: subjective assertions about the effectiveness of the technology in various fields. This also goes for advocacy, and it is extra distorted because of the stratospheric valuations that need to be justified, as well as the common revulsion, annoyance, and fear associated with LLMs.

    • Well, it's a meaningfully large sample, of people who know exactly how their business' productivity and employment have changed.
      • of people who know exactly how their business' productivity and employment have changed

        *of people who claim that they know exactly yada yada

        This study is based on executives taking surveys. It's not based on executives submitting financials for review.

        • We can assume, with sufficient safety, that executives who take the time to respond aren't lying in sufficient numbers to impact the results. If they didn't want to answer, or didn't want to answer honestly, all they had to do was ignore it.

          Besides, where is the incentive to lie about this?

          And why are you acting like the people whose jobs revolve around numbers like these don't know them?

          • We can assume, with sufficient safety, that executives who take the time to respond aren't lying in sufficient numbers to impact the results.

            No, you cannot. CEOs are mostly delusional. They think they're smart. The one thing you can count on is them lying to themselves.

            • Oh, come on now. Don't let the jokes replace reality.
              • Oh, come on now. Don't let the jokes replace reality.

                If you think billionaires are billionaires because they're smarter than everyone else, you're the joke and you're nowhere near reality. Some of them are the sharpest pencil in the box. Most of them are just the least scrupulous, luckiest, and best positioned.

                  • People become wealthy for a variety of reasons, some good and some bad. I don't see your approach as being one that is analytically useful. Cynical assumptions about the number of good vs. bad people in a group don't produce valid results.
  • by Dan East ( 318230 ) on Wednesday February 18, 2026 @09:22AM (#65996190) Journal

    Instead, productivity growth slowed, dropping from 2.9% from 1948 to 1973, to 1.1% after 1973.

    This makes no sense to me. Are you telling me that in 1948, when a record or file was requested, and then...

    A paper requisition was created and physically delivered (i.e., via the mail), the requisition was sorted by hand to eventually reach the correct person, who then went into a set of records to determine if the person requesting the file was allowed to access it, then that order went to some warehouse of files where the physical file was located, then a physical copy was made of the file, then the process ran essentially in reverse, sending the file back to the person who requested it.

    Imagine that was the process of EVERYTHING you needed a record for (like a birth certificate, or military service record, etc). The only way this was slightly efficient was if the scale was small enough (like getting a marriage record from the local courthouse).

    That's supposed to be more efficient than nearly instantaneous access via computerized records? Whatever this productivity metric measures, it is affected by factors other than computers being involved. You know, maybe the fact that we do far, far more complex things in this day and age than in 1948?

    • I'm wondering where he was talking about, because that doesn't look like US data.

      There were some big post-war spikes, but also a lot of negative growth before 1983.

      • Oh, he said 1973. Oops. But that meant he was mostly looking at a recessionary period. Whatever productivity increases came from those early computers was washed out by Carter's malaise.
    • by ZiggyZiggyZig ( 5490070 ) on Wednesday February 18, 2026 @10:13AM (#65996246)

      That's supposed to be more efficient than nearly instantaneous access via computerized records?

      I suppose that whatever gains we made by computerizing everything, we lost them by requesting much more documentation than before. Times of old used to be simpler, bureaucracy had time and resource limits. Now with information technology, we are drowning in a deluge of requests, documents, etc., which have grown boundlessly precisely because there are no time and resource boundaries anymore. In a way it's a variation of what Intel giveth, Microsoft taketh away.

      • But how much more documentation are we really requesting per person overall? The population has almost doubled since I was a kid, so there's a lot of increase in work to be done right there. The DMV isn't requesting any more documentation from me than the first time I went. Most things which didn't used to require documentation still don't. About the only place I've actually noticed an increase in paperwork is in medical institutions. Filing my taxes is still easy and even the IRS doesn't require me to prov

    • Instead, productivity growth slowed, dropping from 2.9% from 1948 to 1973, to 1.1% after 1973.

      This makes no sense to me. Are you telling me that in 1948, when a record or file was requested, and then...

      A paper requisition was created and physically delivered (IE via the mail), the requisition was sorted by hand to eventually reach the correct person, who then went into a set of records to determine if the person requesting the file was allowed to access it, then that order went to some warehouse of files where the physical file was located, then a physical copy was made of the file, then the process essentially in reverse, sending the file back to the person who requested it.

      Imagine that was the process of EVERYTHING you needed a record for (like a birth certificate, or military service record, etc). The only way this was slightly efficient was if the scale was small enough (like getting a marriage record from the local courthouse).

      That's supposed to be more efficient than nearly instantaneous access via computerized records? Whatever this productivity measures it is affected by other factors, besides computers being involved. You know, maybe the fact that we do far, far more complex things in this day and age than in 1948?

      You are mistaking economic productivity for personal productivity. It only matters what you earn, not what you actually do: dollar value produced vs. labor input. Donald J. Trump's worthless kids and every Kardashian are more productive than nearly all elite world-leading scientists who are literally advancing humanity. It sounds like you want to measure personal productivity, which is impossible to do objectively for many reasons, including the ones you gave. nVidia and nearly every RAM manufacturer saw their

    • I think the key is "productivity growth," not productivity. People were still moving in from the farms in 1948.
  • AI is great for watching us and they don't care about the false positives. It's also a blanket excuse for layoffs. Stockholders are also spoofed as expenses now look like investments. Management just has to fake productivity gains while they pad their resume with AI experience.
  • Picking a nit. (Score:4, Insightful)

    by sabbede ( 2678435 ) on Wednesday February 18, 2026 @09:29AM (#65996200)
    "individual employees surveyed saw a 0.5% increase in employment."

    No, they didn't. 100% of employees are employed. Always. By definition.

    • by ET3D ( 1169851 )

      No, it actually makes sense if a few of the employees were previously unemployed.

      • Then they weren't employees. The unemployed never are.

        Don't waste time trying to find a way for it to make sense. It was poorly worded. That's all there is to it.

      • So badly worded that it completely lost the meaning. Here's what the actual report says: "We also survey individual employees who predict a 0.5% increase in employment in the next 3 years as a result of AI. "
    • The response is based on a survey of employees so there is nothing wrong with this statement "individual employees surveyed saw a 0.5% increase in employment." Meaning that employees believe that AI resulted in a slight increase in employment. Which is unexpected.
      • Those words don't make sense in that order unless employees are working an extra 12 minutes. Assuming they work 40 hours a week.
      • Wording from the study itself: "We also survey individual employees who predict a 0.5% increase in employment in the next 3 years as a result of AI. "
  • ..the Recession they were paid to deny for the last few years.

    The MSM should be in prison for their complicity.

    • What's the evidence that they had to be paid to deny it? Nobody wants to be Cassandra.

      • What's the evidence that they had to be paid to deny it? Nobody wants to be Cassandra.

        Can you say lies for clickbait profits?

        The FUCK ELSE did CNN and like ilk financially survive on for the last 3 years.

  • ...all the more reason for everybody who needs AI to have their own compute boxes w/ all the GPUs/NPUs they need, without just tossing it all to datacenters that then have to be paid for by the public at large. Just make it clear to datacenters that that will no longer happen, so that they stop hosting such services. It'll also make things like RAM, SSDs et al once again available to the public at large, while the people who do believe in AI can also be the ones putting their money where their mouth is
    • by ledow ( 319597 ) on Wednesday February 18, 2026 @10:32AM (#65996298) Homepage

      It's only when you treat datacentres or AI as something special that the problems start.

      It's just another app, so why does that mean they get free rein to pollute rivers, or first dibs on power provision, or the ability to override planning laws that have been in place for a hundred years? It's nonsense.

      It's not AI that's causing those problems. It's people literally corrupting the law for quick profit, as always.

      If there's no power / permission / water for a new hospital? Guess what? We shouldn't be authorising that for a datacentre in the same place either.

  • Because a lot of companies just aren't hiring, waiting to see if AI can replace those jobs.

    You could argue that it can't, so eventually they will have to hire, but that doesn't help people now, and in the meantime they can just force existing workers to work harder. Basically forcing 996 on people.
    • 'Because a lot of companies just aren't hiring waiting to see if AI can replace those jobs.'

      Lots of companies do lots of things. Lots of companies hire AI experts etc. to see if there is any there there.

      Lots of companies aren't hiring at all.

      Lots of companies are hiring for lots of reasons.

      Lots of companies are not hiring because they decided they have too many employees already, or are doing things that do not need to be done, or not doing them well enough, or should be doing something else more important

      • by Junta ( 36770 )

        He's talking about jobs data, which was very muted, considering growth of the labor market.

        Though 'lots of reasons' is a fair observation. Some might be in a holding pattern waiting to see if LLM works out for them, some might be hiring because they think they need particular people to take advantage of LLM, some people might be firing people because they had LLM productivity gains, believe they did, or use it as a publicity stunt to send a 'we are AI aligned' message to investors. Some economic activity

  • The stat is based on the past. But the new models, they are real good. You'll see, in 6-18 months all jobs are gonna be done by AI.
    ChatGPT 5.x was terrible but now we've got ChatGPT 5.x+1 and now it's really good!

    Do you even prompt bro? /s

    • No, it's not "real good". It's better at some simple multi-step use cases but still eats shit on things humans can do trivially.

      I swear there's some sort of psychosis many people who use a lot of AI get, thinking it's more than it is.
  • by Somervillain ( 4719341 ) on Wednesday February 18, 2026 @10:47AM (#65996342)
    In my lifetime: the internet, smartphones, big data, cloud computing...all were technological revolutions. Here's what we saw:

    about 1 year of it being little more than a news story while businesses figured out what they were doing

    then after year 1, early adopters make a big splash earning money by filling existing needs using new technology (Uber, tindr/grindr, Salesforce, Netflix, Google, Amazon, etc)...it's obviously heavily subsidized by VC, but they're all over the news...and the hype cycle is about how amazing and innovative these services are (whether or not that claim has merit, you can debate)

    Then within 1.5-2 years, every decent company is either adopting this new technology or debating if they should. In the 4 examples above, it's safe to say, all of them found some use of it, although some companies need an iPhone app, big data, or cloud hosting more than others, obviously.

    We're in year 4 of LLMs. We haven't gotten out of the news phase. No one is disrupting existing markets.

    Beyond LLM vendors and suppliers of hardware, LLM-based AI is just a bunch of individual evangelists bragging about how it will make them 10x developers. We keep hearing the promise. We keep seeing demos. We're all excited...because we've spent our whole lives dreaming about HAL or Jarvis or Mother or whichever AI is in your favorite science fiction.

    But who is actually making money on this?...besides AI companies, hardware companies, and people involved with building datacenters?

    OK, so allegedly, this is producing 10x developers... OK, with 4 years of ChatGPT and these tools being mainstream for 2 years, why is life the same? Why aren't these 10x developers pushing faster releases? Why isn't /. filled with articles about a renaissance in software quality, with AI finding and fixing all those hidden bugs and eking out incremental performance gains? EVERY codebase that's over 5 years old I've ever seen in my life has some opportunity here and there to become more efficient. Why aren't there stories about AI leading to smaller binaries?

    We have analysts telling us how GLP-1 is impacting fast food and grocery sales...why aren't there similar stories about how ChatGPT is impacting software releases?...making them faster, more frequent, smaller, more performant, etc...why aren't companies bragging about releasing features 2x as fast?...they have 10x developers...the productivity gain of making 1 person 10x would impact release schedules tangibly...in fact, releases should go out more than 10x faster because now you don't have to rely on teams. If you can reduce your team from 10 to 2, you eliminate communication issues, timezone issues, personality conflict, etc. The more 1 person can do, the more efficient everything becomes. Collaboration is major overhead.

    Hell, just having AI write my documentation and paperwork would save me a fuckton of time.
    • The main issue I see with AI is it is being sold as a way to extrapolate reports, data, code, especially in the case of using LLM. Right now AI works best when analyzing data and interpolating results: For example using AI to analyze patient data to determine better predictors of cancer. (Test A + Test B is a better predictor than Test C alone). Where AI (LLM) is terrible is getting details right. The general framework of a code snippet is probably right. The code may not actually work or only work under ve
    • I look at this through the lens of automation, regardless of the technology.
      Automation tends to be successful when routine aspects of a process can be handled by a machine in such a way that the effort of finding and handling exceptions doesn't swamp the productivity gains of the mainline automation. But that's actually quite hard in practice, because it means that one has to identify and cordon off the parts of a process that are repeatable and where failures can be readily detected, and then create ways

    • No one is disrupting existing markets.

      Hm. I think you are not seeing the whole. Answer these questions:

      Who has the ability to sift through hours of video in minutes?

      Who has the ability to examine every post ever made by an individual?

      Who has the ability to cross-reference your credit reports, medical records, posting history, listen to your every phone call ever made, and watch video of you in gas stations and other businesses?

      No single person can do all of that. A group of people could do it, but will likely miss some details. AI could examine

  • Most of the time AI tools are a way to blow a lot of money. Sometimes you can get some useful work out of them, but it is a struggle you have to repeat for every project.
    I think I would rather hire some interns and new college grads. Teach them a few things and then you have someone more competent. AI, on the other hand, does not get more competent with repeated tasks; you have to wait for a new model to come out for it to "learn" anything. I can show a human something ONCE and have them doing it again the next day.

  • While it's normal to turn to CEOs, CTOs, etc. for a read on what effects AI is having, the reality is we're all in the phase of getting our collective hands dirty and discovering what AI can do.

    From that, tales will flow of the sometimes great things that can be done (hopefully with instructions), the mehs, and the spectacular failures. I've definitely observed some things AI has a talent for and others it is completely awful at. My list looks something like:

    Awful:
    Obscure things with little public information

  • The tech shows great promise, but is immature.
    I suspect that productivity will go down as people take time from their regular work to try to figure out how to use the new tools effectively.
    The transition will be uneven, with a lot of failures and wasted time.
    Meanwhile, hypemongers and lying salesweasels continue to promise perfect results now and clueless executives believe them.
    I suspect that there will be a backlash as workers are forced to use buggy, immature tech that slows them down.
    Here's a good artic

  • The world changed on Feb. 5, 2026: Claude Opus 4.6 and GPT-5.3 Codex were released.
    What you thought you knew about AI suddenly became quaint.
    • "THIS time, it's for real!"

      • If you don't code, or if you're only probing freebie AI, or if you've not challenged these two resources since Feb. 4th, you're 100% ignorant of what's happened and of the pace at which the world is already changing. These advanced tools are being used to build the next generation of AI. I can't fathom what that will be like.
        • You keep idolizing and throwing money at them. I code alright. But I know that every line of code that I don't write is coding that I don't practice, and that I will forget. It has happened with several languages already, because that's how learning and memory work. You do you; I don't want to become a braindead amplifier of the "greatness of our AI overlords", among the rest of the beef I have with the tech.

          • You don't get it. I don't blame you. Opus 4.6 is a huge advance that I still can't comprehend. Opus 4.6 effortlessly writes, in minutes, code that I would never put time and effort into. I can experiment with algorithms and data layouts like never before--in ways that were never practical before. It still takes knowledge to direct it, but far less effort. It's not perfect, but it's a LOT more accurate and builds test blocks and runs tests and edits its code all on its own. Prompts do not have to be carefully craf
  • I'm not 10x more efficient and can't say that I haven't written a line of code in 2026 or whatever, but AI (Claude, mostly) has taken over the mundane part of programming and I get to think at a higher level most of the time. Two years ago, I thought AI was garbage, and it was at that time. Now I've opened my mind and tried things again to much better results.

    • I'm more than 10X faster, because Claude Opus 4.6 (released Feb. 5th) lets me research methods that I would never take the time and herculean effort to investigate. Opus 4.6 writes accurate, complex code with simple prompts in minutes. It's utterly mind-blowing.
  • by rbrander ( 73222 ) on Wednesday February 18, 2026 @03:20PM (#65997402) Homepage

    In "Accidental Empires," the pseudonymous Robert X. Cringely wrote a history of the first wave of microcomputer development - mostly Jobs and Gates and Aldus/Adobe, etc. In the first chapter, he points out that new information technologies:

    1) take 30 years to sink in: telephone invented 1870, changed lives after 1900; motion pictures invented 1890, big industry by 1920s; television invented 1920s, major industry by 1950s.

    2) Rarely do what was envisioned at first. Bell thought phones would be used to broadcast music, and radio pioneers thought radio would be used two-way; each prediction was the opposite of what happened. Early phonographs had record heads, and people imagined spoken "letters" sent by mail...

    Predicting WHAT the LLMs will actually do at this point is asking to be a comedy meme in a few years. People will just have to muck around with the tools for a while before they find out what they will really do for us.

    Ironically, Cringely's point in that chapter is that the microcomputer revolution had to be invented by amateurs mucking around, because it takes too long for a new technology to settle in, to be worth the investment. He also notes that aviation was much-invented by the leftover planes and leftover pilots from WW1 just barnstorming and tinkering.

    This time, the investors think they CAN stick around until payday comes...but if this one is also still 25 years away (I think we're at year 5 now?), then I don't see the money lasting that long.

    • Clearly you haven't challenged Claude Opus 4.6 to write significant code. Since Feb. 5th, the day it was released, there's no more "mucking around".
  • Because they really have no idea what their employees do on a day-to-day basis. They think in terms of strategic initiatives, not "God, just get me through this problem so I can do REAL work!" problems. People are using it every day for things they know nothing about. Often without permission and on their own accounts, and sharing all sorts of corporate data because they just don't give a crap and know they will probably never get caught.
