AI

Stanford Report Highlights Growing Disconnect Between AI Insiders and Everyone Else 64

An anonymous reader quotes a report from TechCrunch: AI experts and the public's opinion on the technology are increasingly diverging, according to Stanford University's annual report on the AI industry, which was released Monday. In particular, the report noted a growing trend of anxiety around AI and, in the U.S., concerns about how the technology will impact key societal areas, such as jobs, medical care, and the economy. [...] Stanford's report provides more insight into where all this negativity is coming from, as it summarizes data around public sentiment of AI across various sources. For instance, it pointed to a report from Pew Research published last month, which noted that only 10% of Americans said they were more excited than concerned about the increased use of AI in daily life. Meanwhile, 56% of AI experts said they believed AI would have a positive impact on the U.S. over the next 20 years.

Expert opinion and public sentiment also greatly diverged in particular areas where AI could have a societal impact. Indeed, 84% of experts, the report authors noted, said that AI would have a largely positive impact on medical care over the next 20 years, but only 44% of the U.S. general public said the same. Plus, a majority (73%) of experts felt positive about AI's impact on how people do their jobs, compared with just 23% of the public. And 69% of experts felt that AI would have a positive impact on the economy. Given the supposed AI-fueled layoffs and disruptions to the workplace, it's not surprising that only 21% of the public felt similarly. Other data from Pew Research, cited by the report, noted that AI experts were less pessimistic on AI's impact on the job market, while nearly two-thirds of Americans (or 64%) said they think AI will lead to fewer jobs over the next 20 years.

The U.S. also reported the lowest trust in its government to regulate AI responsibly, compared with other nations, at 31%. Singapore ranked highest at 81%, per data pulled from Ipsos found in Stanford's report. Another source looked at regulation concerns on a state-by-state level and concluded that, nationwide, 41% of respondents said federal AI regulation will not go far enough, while only 27% said it would go "too far." Despite the fears and concerns, AI did get one accolade: Globally, those who feel like AI products and services offer more benefits than drawbacks slightly rose from 55% in 2024 to 59% in 2025. But at the same time, those respondents who said that AI makes them "nervous" grew from 50% to 52% during the same period, per data cited by the report's authors.


  • AI is useful but (Score:5, Informative)

    by phantomfive ( 622387 ) on Monday April 13, 2026 @11:40PM (#66092756) Journal
    AI is useful, without a doubt. There are times when it makes me feel good.

    The noise added to my life by AI slop that I am now forced to filter through is extremely irritating.

    On average I would say the irritation happens more times per day than the enjoyable moments.
  • by Khopesh ( 112447 ) on Monday April 13, 2026 @11:55PM (#66092774) Homepage Journal
There's a nice Mastodon post [programming.dev] I think is worth quoting here:

    For years I've been hearing that "One day AI will be smarter than humans and we'll all be doomed." "Nonsense," I said. "AI is very stupid, and not getting noticeably smarter." And I was right. But I didn't think about the fact that there were two ways that prophecy could be fulfilled.

  • Not surprising (Score:5, Insightful)

    by MunchMunch ( 670504 ) on Monday April 13, 2026 @11:56PM (#66092776) Homepage
    This isn't really surprising.

    The "AI experts" (oof) are the people who are best poised to reap any economic rewards either through being in tech pushing AI, or being otherwise invested (personally or financially) in AI succeeding. These include the people who magically think that productivity gains will benefit workers, as opposed to owners, which ignores 100 years of productivity-gain data.

    The "non-AI-experts" are presumably regular workers who see the C-suite and owners salivating at AI as the fastest way to stop paying actual humans to do a service, or for companies to degrade quality with a magical-thinking "I can't believe it's good enough!" mindset that actually gives customers a worse experience (for a greater profit).

    What this divergence doesn't rule out is that the "expert" class has well-founded reason for optimism, and the "non-expert" class has well-founded reason for pessimism. It just suggests that one side sees itself as the owner class in a corpo-owned dystopian cyberpunk future where wealth has access to skill and skill doesn't have access to wealth.
    • I don't think I've ever seen a technology aimed at developers pushed as hard as LLMs. Usually adoption arises organically.
Normally you see devs happily jumping on the bandwagon for anything that makes their lives easier, yet a lot of the LLMs are being forced upon them. Sure, there are devs who enjoy it, before anyone gets defensive. I'd love to see a genuine breakdown of the type of developer and their usage of LLMs. I find LLMs useful for web, as it's 99% all been done before, with tons of examples in a reasonably rigid environment (still sucks for UI unless you like purple buttons and carded drop shadows). I haven't had a whole lot o
        • Re: Not surprising (Score:4, Informative)

          by TurboStar ( 712836 ) on Tuesday April 14, 2026 @01:08AM (#66092834)

I use it daily on embedded and unusual platforms. Works great if the docs are good.

          • Perhaps that's why I'm failing. Struggling with some poorly documented lcd and an esp32. Would probably be more accurate if I were using a pi or something.
            • Re: (Score:2, Interesting)

              by TurboStar ( 712836 )

              Web development with a framework already has the problem constrained. You have to learn how to constrain the AI yourself when doing embedded or systems programming. It took me hundreds of hours and hundreds of dollars to get the hang of it.

            • by CAIMLAS ( 41445 )

              You could just use the LLM to reverse engineer the esp32/lcd and create usable documentation for it. I know a couple people who've done similar things to their janky unsupported equipment to create sufficient development docs to make it usable.

            • Perhaps that's why I'm failing. Struggling with some poorly documented lcd and an esp32. Would probably be more accurate if I were using a pi or something.

              I have a lot of success with obscure, mostly-undocumented systems. Which models are you using? There's an enormous difference in capability level between the top-tier models and the next step down. Also a pretty big cost difference.

Anything I have done with Python is pretty good. But your project needs to be broken into smaller scripts in a way that lets the AI focus on a few at a time. It is also horrible with larger scripts that were written by hand. But if you start with AI, it is pretty amazing.
        • by CAIMLAS ( 41445 )

          I'm not a "real developer" (because I'm able to think and look at systems holistically - zing! - and hate writing code, but like reading it and finding problems) but I've become a software architect in the past year. I've worked in ops my entire life, making (or maintaining) the tools that engineering and support depend on, and I've worked in support escalation and engineering support.

          I use about 30-50M tokens a month, more when I'm stuck going through UI garbage. I've done a net-new React SaaS with extensi

    • Re: (Score:3, Interesting)

      Some of those folks in the first group have also fooled themselves into thinking the aim is for everyone to be owner class, and can't imagine why anyone would doubt this. Others in that group are encouraging them to continue to think that way.

      • by gweihir ( 88907 )

        The thinking in the first group may be that everyone not in the owner class is lazy and just did not try hard enough. Obviously, that is nonsense. But these people may genuinely believe it.

  • by larryjoe ( 135075 ) on Tuesday April 14, 2026 @12:07AM (#66092780)

Climate change, vaccinations, impact of immigration ... people are easily swayed by pundits, demagogues, social media, podcasts, etc. Views on AI aren't always logical. For example, here on slashdot, it seems like a lot of people simultaneously believe that AI doesn't work and that it's replacing human jobs.

    • by Cyberpunk Reality ( 4231325 ) on Tuesday April 14, 2026 @01:14AM (#66092836)

      ...a lot of people simultaneously believe that AI doesn't work and that it's replacing human jobs.

      These are not incompatible positions if you believe that management is foolish, malicious, or both.

Management may be foolish, but that would mean those who went another way would be more likely to succeed. There is a sort of evolutionary survival of the fittest in the business world, and it works (see: Sears, Nokia, Blackberry, Lehman Brothers, etc.). If AI does not work, those who were quick to adopt it will soon fall. On the other hand, if it does work, those who believe it doesn't will soon be replaced by those who do. Market forces are beholden to no one.
        • by Tom ( 822 ) on Tuesday April 14, 2026 @04:04AM (#66092932) Homepage Journal

          There is, however, another market that moves faster than that one: The CEO market.

Any CEO who says "we don't do AI here, that's all bullshit" will find himself on the job market pretty fast in the current mood. So, everyone does AI. Not because it works as a business decision, but because it works as a job security decision.

          see also: "Nobody ever got fired for buying IBM"

      • by gweihir ( 88907 )

Indeed. However, I now think the replacement of human jobs will be far smaller than originally feared. Apparently, most of the promises currently made (and they are getting more fantastic by the day as the economic numbers continue to not even remotely pan out) are not realistic at all.

    • by doragasu ( 2717547 ) on Tuesday April 14, 2026 @03:09AM (#66092896)

I wouldn't say "it both doesn't work and is replacing human jobs", but I would say "it both doesn't work and is causing people to lose their jobs". For example, the recent firing of 30,000 people by Oracle to get some cash to build datacenters.

      • Both the efficacy of AI and the human job displacement of AI are currently unclear.

AI seems to work to some degree for some use cases and not for others. CEOs claim AI as the reason for layoffs, even though many suspect pandemic overhiring and short-term stock impact are the real reasons. Current AI use is most certainly not a valid reason for current layoffs. Perhaps anticipation of future AI use might be. However, no competent CEO would lay off current workers for future replacements, especially when t

  • by Anonymous Coward

    As long as the upper management to whom he reports are even bigger morons, we'll have to put up with all his AI 'expertise'.

  • by Borgmeister ( 810840 ) on Tuesday April 14, 2026 @12:36AM (#66092802) Homepage
24 months to deliver an ROI - or the wheels on the bus start coming off. Especially given the additional commodity expenses we now face. That doesn't mean AI will disappear - it's way too useful for that - but it will see a rationalisation. Railway Mania ultimately led to a later rationalisation, and branch lines closed. So I suspect it will be for datacentres. Costs of use will go up, quality in narrower fields will improve, and the race for generalised AI will cease. We rather got ourselves into a heavy-lift race, but AGI was no Moon to capstone it.
    • by gweihir ( 88907 )

If that long. But it really does not look like it will happen. The amounts of cash burned are just too great in comparison to the profits. Some LLM use will stay, agreed, but it may be limited to somewhat better search and to specialist models that do one thing or a small number of things.

    • My experience thus far: it does a pretty good job with frontend (JavaScript, HTML, CSS). Struggles with existing large (2M+ lines of C++) backend code base.

      • Re: (Score:2, Insightful)

        by CAIMLAS ( 41445 )

        Why does your codebase require 2M+ lines of context? Do you not have it broken out into functional parts which can be worked on in isolation, with complete documentation?

        I'm working on a 700K LoC project and even that's broken into numerous clearly delineated operational and functional components.

    • by CAIMLAS ( 41445 )

24 months might be a rough ROI horizon for the big providers, but frankly, I don't think that matters. Techniques to reduce memory and compute requirements for LLMs while retaining utility (not 'smartness') are improving substantially, now that the capability of a small model is "good enough" for a great deal of work.

      We're already at the point where a home user with a couple grand of equipment can be just as productive as someone using frontier models, often more productive. That's a cost that amortizes nic

  • by fredrated ( 639554 ) on Tuesday April 14, 2026 @12:54AM (#66092818) Journal

    but what did AI say?

  • by MpVpRb ( 1423381 ) on Tuesday April 14, 2026 @12:59AM (#66092824)

    All the general public sees is slop, scams and threats of job loss.
    Maybe all of those CEOs, hypemongers and pundits shouldn't have publicly said that AI will replace all jobs...over and over and over.

  • Expert bias (Score:5, Insightful)

    by misnohmer ( 1636461 ) on Tuesday April 14, 2026 @02:33AM (#66092876)
    Is it really surprising that experts in some technology are proponents of said technology and see more positive uses for it? I am not trying to debate here whether AI is good or bad, simply stating that experts in any emerging technology will typically have a more positive outlook on its uses.
  • by Tom ( 822 ) on Tuesday April 14, 2026 @04:01AM (#66092930) Homepage Journal

    So called "AI insiders" are almost exclusively people for whom AI is either an active research subject or a business opportunity. There is almost no money to be made from being sceptical about AI. Of course these people feel positive about AI.

    The common sense opinion here is more reliable, even if it is less informed.

    • Counterpoint: good-faith skepticism and criticism requires engagement, because you have to learn about the subject to identify its problems.

      See "The Center Has A Bias" https://lucumr.pocoo.org/2026/... [pocoo.org]

  • by skam240 ( 789197 ) on Tuesday April 14, 2026 @05:10AM (#66092976)

What's not to like about data centers using as much power as 100,000 homes moving into neighborhoods and jacking up the power bills for all the locals, all so that these companies can replace American workers with AI!?

  • by geekmux ( 1040042 ) on Tuesday April 14, 2026 @06:19AM (#66093002)

AI... insiders? Oh, you mean the founding millionaires and those who still have jobs and a way to sustain their very existence for the foreseeable future?

    You mean they're finding themselves disconnected from those who have already lost their jobs to Toddler-Grade AI, and may find themselves involuntarily part of The Unemployables, depending on age and capability?

    Go fucking figure.

  • by Brandano ( 1192819 ) on Tuesday April 14, 2026 @06:50AM (#66093018)

    It works great for a multitude of uses and experts are ready to extol its virtues... I can see a parallel.

    • by gweihir ( 88907 )

      Asbestos is actually very nice stuff. But it should never come into contact with unprotected humans. We may see something similar with LLM-type AI.

  • Apples and oranges (Score:4, Insightful)

    by jenningsthecat ( 1525947 ) on Tuesday April 14, 2026 @07:37AM (#66093026)

    The "insiders" and "everyone else" probably assign vastly different meanings and connotations to the same terms. For example:

    56% of AI experts said they believed AI would have a positive impact on the U.S. over the next 20 years

    "Positive"? In whose evaluation and for whose gain? Government policies, legislation, and attitudes since the Reagan era have made things vastly better for those who already had money, power, and connections, while making them objectively worse for the lower and middle classes. At this point, I think even upper middle class Americans are starting to feel the pinch.

    84% of experts, the report authors noted, said that AI would have a largely positive impact on medical care over the next 20 years, but only 44% of the U.S. general public said the same.

    When the US medical system already has large numbers of people taking out mortgages, resorting to GoFundMe drives, or declaring bankruptcy in order to pay for adequate healthcare, I'm surprised the optimism of the public is as high as 44%. I would have guessed it to be no higher than 25%. People are well aware that cost savings - and additional revenue generated by more effective and efficient treatments - usually end up in the pockets of the providers and not the patients.

    ... 69% of experts felt that AI would have a positive impact on the economy.

    Again, "the economy" is a very different beast when you're well-off than when you're poor. AI may have a positive economic impact for people whose financial well-being isn't threatened by it, while those who become jobless as a result may end up on the street. That's the nature of Ponzi schemes - see my sig below.

    • by DarkOx ( 621550 )

      and additional revenue generated by more effective and efficient treatments - usually end up in the pockets of the providers and not the patients.

Heh, I think if the money was ending up in the pockets of the providers, people would have a lot less issue with the direction US healthcare is going. The money is ending up in the pockets of the administrators, HMOs, insurers (both health and malpractice), litigators, and recipients of political donations.

      • Agreed. When I typed "providers" I was thinking of the companies, not of the front-line staff that deal directly with patients.

  • by Junta ( 36770 ) on Tuesday April 14, 2026 @08:39AM (#66093088)

    Issue is that the "AI insiders" are constantly gaslighting, and offering a different face to different audiences.

You are a software developer? Oh, well, you need to pay us money for the tooling and also pay more money for 'education', because using LLMs takes a whole new skillset; it's hard to use, and you can still leverage your expertise, but you need to give us money to be competitive.

    You pay software developers to do stuff? Oh, well you can lay off those losers because any rando can just prompt up a fully working piece of software with zero skill... "real soon now".

That is before the GenAI tendency to gaslight people day to day by accident (and also to be gaslit, sometimes by itself).

Based on my real life experience, it's not the GenAI in and of itself that is the trouble; it's just that all the most obnoxious people are more empowered by it. Busybodies who love to tell people how to do their jobs without actually knowing how those jobs are done can be more specific and verbose while still not knowing what is going on. Clickbait types who made slop before now make more slop than ever. And of course there are the "AI insiders" who want to suck up all the investment, with all the attendant ego problems (Musk, Altman, and Zuckerberg especially are even more insufferable than usual).

We keep reading reports of people becoming deluded after using AI for a while. It shouldn't be much of a surprise that the business sector dedicated to this is experiencing that effect in some way.

The people leading AI companies are really hyping it in a way that isn't grounded in reality, and investors are probably not technically savvy enough to see through it. AI company leaders are probably beginning to believe their own rhetoric. There are companies out there, and by way of their investors are dem

    • by gweihir ( 88907 )

      This is basically the original definition of hysterics.

    • by Junta ( 36770 )

The deluded state seems to be a result of the much-hyped chat interface designed to agree with you, which encourages lines of conversation that should instead be corrected or ignored. So you have sycophantic external 'validation' for your questionable thoughts.

      For the executives, they have a lot of that, with or without the ChatBot, but more importantly it *must* be true for them to get their payoff.

      • by CAIMLAS ( 41445 )

        You'd be well suited to give your agents prompts to be contrary, critical, and sarcastic. It adds some perspective - while also helping indicate the model's alignment on a specific topic.

    • by CAIMLAS ( 41445 )

It would be good for everyone if, after 2-4 years in the IT field as a developer, everyone were forced to take a 6-9-month paid sabbatical to work on their choice of one of: a working cattle or dairy ranch, a non-mechanized farm (orchards, avocados, greenhouse operation, etc.), or some other "foundational" industry where their work output has more direct bearing on reality - forestry and land management, water treatment, infrastructure maintenance.

      It would help put life in perspective, and likely result in m

  • by thehossman ( 198379 ) on Tuesday April 14, 2026 @02:35PM (#66093604)
    People who sell Face Eating Leopards: "Face Eating Leopards going to have a massive positive impact on society! Everyone should start integrating Face Eating Leopards into every aspect of their daily lives as soon as possible"

    People with faces: "I have some concerns about how Face Eating Leopards will impact my life."

At my job I use AI every day. It makes finding and using information in my company's scattered documentation, which was really difficult to research before, much easier. But that is the only positive thing I have to say about AI as a daily user who works to help field AI products in customer environments.
