AI Businesses

Growth of AI Adoption Slows Among US Workers, Study Says (axios.com) 34

The percentage of workers in the U.S. who say they are using AI at work has remained largely flat over the last three months, according to a new study commissioned by Slack. From a report: If AI's rapid adoption curve slows or flattens, a lot of very rosy assumptions about the technology -- and very high market valuations tied to them -- could change. Slack said its most recent survey found 33% of U.S. workers say they are using AI at work, an increase of just a single percentage point. That represents a significant flattening of the rapid growth noted in prior surveys.

Global adoption of AI use at work, meanwhile, rose from 32% to 36%. Between the lines: Slack also found that globally, nearly half of workers (48%) said they were uncomfortable telling their managers they use AI at work. Among the top reasons cited were a fear of being seen as lazy, cheating or incompetent.


  • Most larger workplaces are very much in a limited trial phase and haven't done a big rollout yet. These trials take time. Where I work we've had a limited set of Copilot licenses on trial for the past 6 months; even the weirdos who *want* to use it can't get a license, while some of us are forced into this sick experiment.

    • It's about 1/3. Is that considered low? 1/3 of everybody is a lot. Do more than 1/3 of all workers regularly use a word processor?
      • It's about 1/3. Is that considered low?

        In the corporate world, yes. If you are a Microsoft shop, with the push of a button Copilot is magically pushed to every Office 365 app. That's kind of my point. Eventually, if adopted at a corporate level, this AI stuff will be used whether people want it or not, but right now it's a management decision more than anything else.

    • by Rei ( 128717 )

      My office is held up by legal issues. It's a big ask for most workplaces to trust sending all of their private data over the internet to multiple third parties.

      I tried Cody with a local server, but it didn't work, and I have some skepticism about using a local model vs. something like Claude regardless. Cursor doesn't support local models at all.

      • by Rei ( 128717 )

        (When I say re: Cody "it didn't work", I mean I could use the model just fine from the terminal, serving it locally with ollama, but when *Cody* tried using it, the responses had nothing whatsoever to do with the prompt or the provided code - like, you could say "Write a simple Hello World program" and it'd go solve some unrelated math problem or whatnot. Clearly it was calling it with either a defective prompt or defective model settings.)
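
        A minimal sketch of the kind of terminal-side check I mean, assuming ollama on its default port (11434) and a placeholder model name; the exact model doesn't matter:

```python
# Query a locally served ollama model directly over its HTTP API.
# If this returns a sensible answer but the editor plugin returns nonsense,
# the problem is almost certainly the plugin's prompt template or model
# settings, not the model or the local server.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # ollama's default endpoint

payload = {
    "model": "llama3",  # placeholder; use whatever `ollama list` shows on your machine
    "prompt": "Write a simple Hello World program in Python.",
    "stream": False,    # return one JSON object instead of a token stream
}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["response"])
```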

  • They should be telling employees what their corporate AI policy is. They should also listen, with an open mind, to what employees see as helping them be more productive. Fears like the ones cited in the summary (being seen as lazy, cheating or incompetent) should not be an issue.

    AI is a company decision not an individual employee decision.

    • by cayenne8 ( 626475 ) on Tuesday November 12, 2024 @03:43PM (#64940697) Homepage Journal

      They should be telling employees what their corporate AI policy is. They should also listen, with an open mind, to what employees see as helping them be more productive. Fears like the ones cited in the summary (being seen as lazy, cheating or incompetent) should not be an issue.

      AI is a company decision not an individual employee decision.

      I should think that security would be one of the TOP risks for using AI...at least any AI resource that is located and operated externally to your company.

      I mean, any time a developer cuts and pastes code or specs into an external AI to try to get things generated, that is also potentially letting out company secrets, code and security concerns to a system that "learns" from the input given it, etc.

      This is sure to be a concern for any company with proprietary work going on, and especially so for work done for the various levels of government, with security clearance or health information issues, etc.

      Are any of these AI companies installing and guaranteeing that the AI being interacted with is ONLY contained within company walls so to speak?

      I'm not familiar with Copilot; I think it's an MS product? I would guess it could be on an internal server, but I'm not sure...

      Anyway, if I were a company owner, I'd be VERY hesitant to allow any of my work to be in any way submitted to any external AI entity...

      • That is why management should be doing their due diligence and deciding the company's stance on the use of AI. I know my company weighs security first and foremost before approving any AI use. There are a lot of claims about private-tenant use of AI, but I am not convinced about the privacy controls.

      • by jhecht ( 143058 )
        And don't forget performance and accuracy. An AI support server may be cheaper than humans, but if its responses are useless garbage, your customers are going to go away.
      • by Tom ( 822 )

        I mean, any time a developer cuts and pastes code or specs into an external AI to try to get things generated, that is also potentially letting out company secrets, code and security concerns to a system that "learns" from the input given it, etc.

        I'm honestly not much worried about code. There are very few secrets in code, much less than people think. Software is an execution thing, rarely a "nobody else knows this" thing.

        I'm much more worried about the PHBs pasting their top secret internal powerpoint presentation into ChatGPT so it can tell them which three words to change for "more engagement" or whatever the buzzword of the month is.

      • by Rei ( 128717 )

        I mean, any time a developer cuts and pastes code or specs into an external AI to try to get things generated, that is also potentially letting out company secrets, code and security concerns to a system that "learns" from the input given it, etc.

        LLMs don't learn in real time. A provider may or may not retain your data for training; that is disclosed in its policy. In some cases, no-data-retention terms are offered to paying clients while data from free users is retained for training, or whatnot.

        Ev

      • by Mitreya ( 579078 )

        Are any of these AI companies installing and guaranteeing that the AI being interacted with is ONLY contained within company walls so to speak?

        Yes. My brother works at a big company where this was a concern (after someone posted a bunch of proprietary stuff into the prompt).
        They have an "indoor" solution which is completely local and does not leave the company's premises. Naturally, you probably have to be a non-trivial-sized company to afford such a setup.

  • by nightflameauto ( 6607976 ) on Tuesday November 12, 2024 @03:29PM (#64940657)

    Once you experiment, you either find that it helps a bit, and retain it, or it slows you down a bit, and you set it aside. Early adopters are early adopters. There tends to be a fairly significant portion of the business world that's a bit conservative with new tech. Those folks aren't jumping on board until it seems to have given a competitor an edge they feel they are missing, or they see something truly compelling and amazing about it. Right now the "compelling" is coming from the AI prognosticators, not the AI itself. And the AI prognosticators are coming off very much like snake oil salesmen. The whole "it's gonna save the planet" thing hits the ear wrong, for a myriad of reasons, and the louder that shit gets yelled, the more people go, "Oh, it's one of those deals," and turn away.

    • by gweihir ( 88907 ) on Tuesday November 12, 2024 @04:47PM (#64940907)

      Right now the "compelling" is coming from the AI prognosticators, not the AI itself. And the AI prognosticators are coming off very much like snake oil salesmen. The whole "it's gonna save the planet" thing hits the ear wrong, for a myriad of reasons, and the louder that shit gets yelled, the more people go, "Oh, it's one of those deals," and turn away.

      Yep, pretty much. While many people cannot recognize a scam when it is just wrapped nicely enough, many people also can.

  • by Anonymous Coward

    I've seen three kinds of people interested in AI:
    1. Slashdot types that might be interested but at least need to keep up and have a formed opinion.
    2. Those that buy the marketing and think the current state of AI is really going to help the organization get more done.
    3. Folks that see this as giving them some edge. I tend to think these are the bottom of the class folks or they at least feel inadequate.

    I don't use the current garbage AI much because it doesn't do much for me that I'm not already easily doin

    • Case 3: why are they bottom of the barrel? AI is just a tool. If a tool can make me more productive, what's wrong with that?

      I know (like almost everyone else here, with rare exceptions) that AI is not magical and doesn't think or have a personality or need human rights or any of that crap. But it has proven very good at finding patterns and extracting information from very large, seemingly random data sets, among other things. Used by humans as a tool to enhance their work, not replace it, I believe it

    • by gweihir ( 88907 )

      Case 3: Yes, very much. It is basically the same effect that makes incompetent coders chase the latest language hypes, frameworks and tools in a vain hope that this one great tool will finally make them non-crappy coders. Of course that never works out.

      Case 2: For bullshit-work that may even be true. Not for any real work though. Of course, there is a lot of bullshit work being done in the corporate world.

  • I've installed every AI code assistant there is into my IDE so they can all argue.

  • There's a Methodology section in the Slack report [slack.com]. There were a lot of respondents (17,372). However, the report doesn't say how the respondents were selected. It's very likely that there was some or a great deal of self-selection. So, it's unclear how well the respondent population corresponds to the general population, i.e., it's unclear what these results mean.
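
    To make the self-selection point concrete, here's a toy simulation (my own made-up numbers, nothing from Slack's actual methodology): if the true adoption rate is 33% but AI users are twice as likely to answer the survey, the survey reports roughly 50%.

```python
# Toy simulation of self-selection bias in a survey.
# Assumptions (made up for illustration): true adoption is 33%,
# AI users respond at 10%, non-users respond at 5%.
import random

random.seed(0)

TRUE_ADOPTION = 0.33
POPULATION = 1_000_000
RESPONSE_RATE_USER = 0.10
RESPONSE_RATE_NONUSER = 0.05

responses = []
for _ in range(POPULATION):
    uses_ai = random.random() < TRUE_ADOPTION
    rate = RESPONSE_RATE_USER if uses_ai else RESPONSE_RATE_NONUSER
    if random.random() < rate:
        responses.append(uses_ai)

observed = sum(responses) / len(responses)
print(f"true adoption:     {TRUE_ADOPTION:.0%}")
print(f"observed adoption: {observed:.0%}")  # lands near 50%, not 33%
```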

  • Today's AI sucks mightily for useful work.
    AI is a research project, and future AI may possibly be very useful, but today's AI systems are just crap generators, suitable only for having fun and laughing at them.

    • by gweihir ( 88907 )

      Well, it does "better search" and "better crap". The first has some limited use and the second is useful for DoS attacks on assholes (and for all kinds of SPAM, fraud and phishing attacks). But that is about it. Nothing that would justify the massive investments into practical deployments.

      Whether it ever becomes a lot more useful, we will see. After 70 years of intense AI research, I do not think the big hopes (AGI or something close to it, even in a limited form only) will come true anytime soon and maybe

  • by rabun_bike ( 905430 ) on Tuesday November 12, 2024 @04:35PM (#64940859)
    Which stage are we on currently? Peak of Inflated Expectations heading to Trough of Disillusionment?
    • I asked ChatGPT and this is what it gave, lol. ------

      The "Peak of Inflated Expectations" is a phase in Gartner's Hype Cycle, which describes the progression of technological adoption and expectations over time. Identifying when a technology, such as artificial intelligence (AI), has reached this peak involves observing several key indicators: Indicators of the Peak of Inflated Expectations:

      1. Widespread Media Coverage:
      There is extensive and often sensational media coverage about the potential of A
  • Which is probably right on the mark in many cases.

  • One headline says AI will be a multi-billion-dollar industry, while another, like this one, suggests stuttering use of LLM-based copilots. Seems like many people don't understand its real value, which will be in things like cancer screening, protein folding, and autonomous driving, i.e. classic pattern-matching and classification-type problems. LLM offerings appear to be duping the ill-informed into thinking that they can do various forms of sentient work for them, when of course LLMs are no more t
  • All AI sites are blocked by the company firewall except Copilot, which is so slow and throttled that it is easier to just use ChatGPT on my phone and send the result to my work email, or run something in MLStudio on the private MacBook I also bring to work. :D

  • by OrangeTide ( 124937 ) on Tuesday November 12, 2024 @07:51PM (#64941485) Homepage Journal

    Once businesses start having to pay per employee for access, you'll find the adoption rate ticks down.

    The other part is that a lot of people tried AI tools out and found them lacking, or didn't understand how to leverage them. Eventually, people who know what they are doing will replace some of the people who don't.
