Microsoft Study Finds Relying on AI Kills Your Critical Thinking Skills

A new study (PDF) from researchers at Microsoft and Carnegie Mellon University found that increased reliance on AI tools leads to a decline in critical thinking skills. Gizmodo reports: The researchers tapped 319 knowledge workers -- people whose jobs involve handling data or information -- and asked them to self-report details of how they use generative AI tools in the workplace. The participants were asked to report tasks they were asked to do, how they used AI tools to complete them, how confident they were in the AI's ability to do the task, their ability to evaluate that output, and how confident they were in their own ability to complete the same task without any AI assistance.

Over the course of the study, a pattern revealed itself: the more confident the worker was in the AI's capability to complete the task, the more often they could feel themselves taking their hands off the wheel. The participants reported a "perceived enaction of critical thinking" when they felt like they could rely on the AI tool, presenting the potential for over-reliance on the technology without examination. This was especially true for lower-stakes tasks, the study found, as people tended to be less critical. While it's very human to have your eyes glaze over for a simple task, the researchers warned that this could portend concerns about "long-term reliance and diminished independent problem-solving."

By contrast, when the workers had less confidence in the AI's ability to complete the assigned task, they found themselves engaging their critical thinking skills more. In turn, they typically reported more confidence in their ability to evaluate what the AI produced and improve upon it on their own. Another noteworthy finding of the study: users who had access to generative AI tools tended to produce "a less diverse set of outcomes for the same task" compared to those without.
  • by Anonymous Coward on Friday February 14, 2025 @08:53PM (#65167629)

    Using AI makes you dumber because your mental muscles don't work as hard so they atrophy.

    • Re: (Score:2, Funny)

      by Anonymous Coward
Imagine a Gen Z'er trying to read a paper map.
    • by martin-boundary ( 547041 ) on Friday February 14, 2025 @10:32PM (#65167757)
      It's never that simple. AI parrots its training data with a lot of random noise. So using AI causes "reversion to the mean", represented by the empirical distribution of the training data.

There are two cases: either you are better than the mean(*) of the training data, in which case using AI makes you worse; or you are worse than the training data, in which case using AI can make you better.

However, there are caveats. The source data is never high quality (quality doesn't scale), and the sprinkled noise (aka hallucinations) from the AI's approximation of the empirical distribution produces low-quality output.

      TL;DR. AI produces output that passes a low bar. If that looks attractive to you, go for it.

      (*) choose a statistic of interest, obviously.
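The "reversion to the mean" argument above can be pictured with a toy simulation. To be clear, this is purely illustrative: the 50/50 blending rule, the skill scores, and the 50-point training mean are all made-up assumptions, not a description of how any real model is trained.

```python
# Toy illustration (assumed model, not real): treat the AI's output quality
# as a blend of the user's own skill and the training-data mean, plus noise.
import random

TRAINING_MEAN = 50.0  # hypothetical mean quality of the training data

def toy_model_output(user_skill, noise_sd=5.0):
    """Blend user skill with the training mean; add hallucination noise."""
    blended = 0.5 * user_skill + 0.5 * TRAINING_MEAN
    return blended + random.gauss(0, noise_sd)

random.seed(0)
# Average many samples to see where each user ends up on average.
expert = sum(toy_model_output(90) for _ in range(10_000)) / 10_000
novice = sum(toy_model_output(10) for _ in range(10_000)) / 10_000
print(round(expert), round(novice))  # roughly 70 and 30
```

Under these assumptions the expert (skill 90) is pulled down toward the mean and the novice (skill 10) is pulled up, which is the parent's point in miniature.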

      • Very succinct description .. I actually think I understand it better myself now ... that explains why some find it very useful and others find it full of shit... also explains a lot of arguments over how useful it is ..
        • If you are learning a new skill or technology or topic, AI can speed up the learning process to an extent.

          If you are already an expert in a skill, technology or topic, AI is more of a "whack a mole" search for useful information.

          What AI works well with is throwing out seemingly odd answers to questions which may be viable (and need proving out first!).

          For example, how many X can fit in Y cubic meters?
If X is of irregular shape, one can compute its volume by measuring the water displacement when it is subme

      • by Rei ( 128717 )

        AI parrots its training data with a lot of random noise

        That is not how LLMs work, mate.

        AI's approximation of the empirical distribution

        You're thinking of Markov chains.

All trained LLMs are static models of the empirical training distribution. The only questionable argument is how close they can theoretically get when the parameters have been optimized. This is basic statistics theory, not Markov chains.
          • by Rei ( 128717 )

            All trained LLMs are static models of the empirical training distribution.

            No. Once again, you're thinking of Markov chains.

LLMs develop an internal world model and function through a process of repeated logical decision-making steps, in which probability plays absolutely no role. There is certainly a degree of "fuzzy logic", but it's not probabilistic fuzzy logic; rather, it functions by passing the degree of confidence in the decision to the subsequent layer.

            To be more specific: the hidden state of a LLM i

            • 1) I wasn't talking about Markov chains. The fact that LLMs (without RAG) are pure Markov chains is not needed for my point, but since you insist on bringing them up, let's discuss.

              2) Where to start? I don't think you have a grasp of what a Markov chain is. As far as I can tell, you associate the phrase "Markov chain" with some specific algorithm you have in mind, which doesn't fit the architecture of LLMs. You get caught up in hundreds of details, and can't see the underlying truth.

              My best guess is that

              • by Rei ( 128717 )

                I wasn't talking about Markov chains.

                Your description of LLMs as probabilistic state engines is a description of Markov chains, whether you're aware of this or not.

Any computer program which iterates a state X by applying a fixed function of X and an optional stream of independent random numbers gives rise to a Markov chain.

                Even in the most pedantic description, which risks roping in human beings as Markov models (only being given an out due to quantum uncertainty), LLMs do not meet the Markov criteria be

                • Why do you insist on making up non-sequiturs? An abstract mapping is not a difficult concept for you, given that you appear comfortable with high dimensional constructions.

                  Markov models subsume all examples you've given, from purely deterministic to partially random and even self modifying code if you care. The secret sauce is no secret: you have to write down an appropriate state space. It's always possible to do, since that is exactly what was done once before to implement the physical LLM on a server f
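For what it's worth, the abstract construction being argued over here is easy to write down: take the state to be the entire token context, and any fixed map from context to a next-token distribution then defines a Markov chain on those states (the next state depends only on the current one). A minimal sketch, with a hypothetical hand-written transition table standing in for the network:

```python
import random

# Toy next-token distribution conditioned on the context (the Markov state).
# The table is hypothetical; a real LLM computes this with a neural network.
def next_token_dist(context: tuple) -> dict:
    last = context[-1] if context else "<s>"
    table = {
        "<s>": {"the": 0.6, "a": 0.4},
        "the": {"cat": 0.5, "dog": 0.5},
        "a":   {"cat": 0.5, "dog": 0.5},
        "cat": {"</s>": 1.0},
        "dog": {"</s>": 1.0},
    }
    return table[last]

def sample_chain(seed=0):
    rng = random.Random(seed)
    state = ()  # the Markov state is the whole context so far
    while not state or state[-1] != "</s>":
        dist = next_token_dist(state)
        token = rng.choices(list(dist), weights=list(dist.values()))[0]
        state = state + (token,)  # state update depends only on current state
    return state

print(sample_chain())
```

Whether that formal equivalence settles anything about "world models" is exactly what the two posters disagree on; the sketch only shows the state-space construction itself.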

  • Not surprising (Score:3, Insightful)

    by alvinrod ( 889928 ) on Friday February 14, 2025 @08:58PM (#65167635)
Our reliance on many labor-saving devices has led to overall physical atrophy in the population. It may be easier to measure and quantify muscles compared to brain activity, but the underlying idea is the same: use it or lose it.
    • Re: (Score:1, Interesting)

      by Anonymous Coward

      The lucky thing for all of the Indians who are using AI to generate their crap code is that they never had any critical thinking skills to begin with.

      In fact, AI code is at least 2 steps better than Indian written code.

UK taxi drivers get less Alzheimer's. Losing the ability to read printed maps: lower IQ. Using spell checkers and GPS, not reading books: more lowering of intelligence. AI will certainly increase the pace, because 9 out of 10 people do not read the warning that AI can be wrong. On the plus side, witch burning may make a return.
  • by davecb ( 6526 ) <davecb@spamcop.net> on Friday February 14, 2025 @09:12PM (#65167651) Homepage Journal
    Once upon a time, lint would critique your work, at the expense of often being wrong. I tried using LLMs for that, and they work only a little less well than lint.
I was insulted, though, when it misquoted my sentence and then said I had a grammar error in the misquoted part (:-))
  • by BishopBerkeley ( 734647 ) on Friday February 14, 2025 @09:18PM (#65167657) Journal
    Why was a study even needed to demonstrate this? Loss of critical thinking is abundantly manifest in the number of people who place trust in talk radio, who prefer navigation apps to paper maps, and those who design things in CAD without thinking about whether the part can actually be machined.

    And those who get their news from facebook, and those who linger in newsgroups, and those who google everything and ....
    • by Temkin ( 112574 )

      Dammit! I came here to make this very comment! My kingdom for a mod point!

    • and those who design things in CAD without thinking about whether the part can actually be machined.

That's why you do it with the manufacturing constraints in mind as you design it. It's all part of prototyping. Even if it can't be machined, maybe it can be die-cast or 3D printed.

    • Why was a study even needed to demonstrate this?

      Because the world is like Wikipedia. If you can't cite a reference, then it doesn't exist. Middle managers have to justify their decisions to upper management in case of failure, so this is useful for them.

    • by Luthair ( 847766 ) on Friday February 14, 2025 @11:22PM (#65167815)
I think it was more that previously society restricted access to stupidity; now it's readily accessible and seeking out suckers.
    • by Rei ( 128717 )

      Study was too long, so I had an AI summarize it.

      Me: Summarize this study for me: https://www.microsoft.com/en-u... [microsoft.com]

      ChatGPT said:
      A recent study titled "The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers" examines how Generative AI (GenAI) tools influence critical thinking among knowledge workers. Conducted by researchers from Carnegie Mellon University and Microsoft Research, the survey involved 319 participa

It's almost inconceivable to me that a company like Microsoft, which just invested hundreds of billions of USD into AI, would say anything like this from the corporate blowhole.

    Shareholders and the board are counting on AI to rule every human being and extract as much money as possible from each of them.

    Inside job?

    --
    Great things in business are never done by one person. They're done by a team of people. - Steve Jobs

  • by dskoll ( 99328 ) on Friday February 14, 2025 @09:36PM (#65167681) Homepage

    Cashier: That comes to $7.85

    Me: OK, here's $8.10

    Cashier (confused): But... why the extra $0.10?

    People stopped doing mental arithmetic once calculators were everywhere.

    • by dmay34 ( 6770232 )

It's literally the same "phenomenon" as for all physical fitness. If you don't work out your muscles, they will atrophy. If you don't work out your mind, it will too.

    • Cashier: That comes to $7.85
      Me: OK, here's $8.10
      Cashier (confused): But... why the extra $0.10?
      People stopped doing mental arithmetic once calculators were everywhere.

Whenever I do something like this, I always size up the cashier and make a silent bet with myself as to what the result will be. Older people usually seem better than younger ones at grasping what's expected -- probably more/longer experience dealing with cash themselves. Using your example, I once got back $0.15 + my original dime from a youngster (*sigh*) -- instead of a quarter, for you youngsters reading this. :-)

      • by dskoll ( 99328 )

        I still remember the old days when people made change without being told how much to return by the cash register. Your items cost $7.85 and you pay with a $20 bill... the cashier would make change by counting up: $7.95 (dime); $8.00 (nickel), $9.00 ($1 bill), $10 ($1 bill), $20 ($10 bill). People seem to have lost that simple trick for calculating change.

        Now excuse me; I think someone's on my lawn...

        • People seem to have lost that simple trick for calculating change.

          Except that it isn't calculating how much change to give; it's an algorithm for giving the right amount of change without needing to know how much that is. I learned that method for making change from my father back when I was a lad and I've been using it ever since when I was working a cash register. I've wondered, off and on, how many cashiers today could make change without having the register tell them how much change to give.
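The counting-up trick described above can be sketched in code. This is an illustrative reconstruction (US denominations assumed, coin/bill names are mine): the cashier hands back coins and bills while counting upward from the price, smallest pieces first, and never computes the change amount as such when narrating.

```python
# Counting-up change: hand back denominations while counting upward from
# the price to the amount tendered, announcing the running total each time.
DENOMS = [  # (value in cents, name)
    (1, "penny"), (5, "nickel"), (10, "dime"), (25, "quarter"),
    (100, "$1 bill"), (500, "$5 bill"), (1000, "$10 bill"), (2000, "$20 bill"),
]

def count_up_change(price_cents: int, paid_cents: int) -> list:
    """Return (denomination, running total) steps, counting up from the price."""
    owed = paid_cents - price_cents
    pieces = []
    # Classic greedy, largest denomination first, to pick the pieces...
    for value, name in sorted(DENOMS, reverse=True):
        n, owed = divmod(owed, value)
        pieces.extend([(value, name)] * n)
    # ...then hand them back smallest first, counting up like a cashier.
    steps, running = [], price_cents
    for value, name in reversed(pieces):
        running += value
        steps.append((name, running))
    return steps

# $7.85 paid with a $20: nickel (7.90), dime (8.00), $1 (9.00), $1 (10.00), $10 (20.00)
print(count_up_change(785, 2000))
```

(The grandparent happened to hand the dime before the nickel; within the cents either order reaches $8.00, and the multiset of pieces is the same.)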
    • Cashier: That comes to $7.85

      Me: OK, here's $8.10

      Cashier (confused): But... why the extra $0.10?

      People stopped doing mental arithmetic once calculators were everywhere.

In the example, giving $8.10 makes sense in case they want change of a quarter, instead of a dime and a nickel (along with the dime already in their pocket). Most people would rather have larger-value coins than an array of small-value coins. A handful of dozens of pennies in change would be rather annoying to most people.

      • by dfm3 ( 830843 )
        Yes, and half the time they hand you back a dime and three nickels after some head scratching.
  • by quonset ( 4839537 ) on Friday February 14, 2025 @09:45PM (#65167693)

    This exact same story was posted four days ago [slashdot.org].

  • I thought we stopped teaching critical thinking decades ago. Heck, it's already considered a prime reason we are where we are right now politically in the U.S.

    • I thought we stopped teaching critical thinking decades ago.

      Not everywhere, but expectations are high since Jan 20, 2025 -- can't have any of that critical thinking stuff now...

  • When you give up on critical thinking and expect a tool like Stack Overflow to do it for you, it didn't kill your critical thinking skills, you did that.

    Knowledge based tools can't hurt your critical thinking that way. It's your brain. You're supposed to apply critical thinking to them. The same people that don't do that with AI tools also don't do it with advice from teachers, books, news, politicians, priests, blogs, youtubers, total fucking strangers etc.

    We're just not willing to say most people are dumb

    • You have just outed yourself as the wolf in the pack of sheep..

Oh, you're right all right. People are super lazy. It's an evolutionary feature to conserve energy. They are dumb too. Bingo again. They like simple answers because complex ones take work to analyze and understand.

      Bam! You see laziness leads to dumbness. QED.

Only predators know that, and only psychopaths care so little as to say it out loud. Congrats. :-)
  • That means the AI is working fine. That's the whole entire point.

  • This is no different than what the internet has done to us already.

    Folks rarely commit anything to memory because it's dead simple to just look it up via ( enter your favorite search engine here ).

    If / when the day comes that the internet goes down for good, the human species is going to be in trouble since we've relied so heavily on
    said internet to show us how to do damn near everything there is to be done. :|

(I have previously instructed it to always respond in surfer speak) Nah, dude, relying on AI doesn't totally nuke your critical thinking skills, but it can make 'em a little rusty if you don't stay engaged, ya know? It's kinda like using a GPS all the time -- you might forget how to read a map, but if you still check the route and think about where you're going, your brain stays in the game. AI's rad for getting quick info and different perspectives, but ya
  • the researchers warned that this could portend to concerns about "long-term reliance and diminished independent problem-solving."

    However, there's nothing in the study about how actual critical thinking is reduced. All the study essentially says is that if someone trusts Tool A to do its job, then they won't think further about Tool A doing its job. Uhh ... of course.

All tools that make a job easier are supposed to reduce thinking about the replaced task. If that weren't so, the tool wouldn't be useful. A more useful research question is whether critical thinking about low-level tasks is replaced by critical thinking about high-level

    • "That's the goal and hope of not only AI but all tools, that tools uplevel human thinking and involvement, thus leading to not only greater efficiency but also the realization of tasks that weren't previously practical."

      I hope studies are being done.
Would be fairly easy to arrange, say, multiple software development teams to work on a complex task, some using AI and some not, and see if the AI helped produce better low-level code faster, but also if it meant the humans using AI were enabled to produce better hi
    • I thought the same thing. Didn't read TFA, but from TFS, it seems this study wasn't studying effects over time, but simply took a snapshot of the current situation.

So saying there's a decrease seems wrong to me; it seems more accurate to say that confident people think AI is incompetent and rely less on it, and vice versa.

      Which is something I could have told you already.

  • Or you lose them. Like most skills. AI makes most people intellectually lazy because it seems to provide answers that look good enough.

    Not a surprise. I observed this with a first year coding course: The students that relied on AI to do the simple tasks never learned anything a bit more advanced.

  • How did they determine that it wasn't just the people who have trouble with critical thinking that use AI more in the first place?
  • Then everything became automatic and easier so shifting isn't needed. I didn't have to think about this article because the AI already told me what's right and why. So now it's easier and I don't have to think what's right and why. See? Progress! In the future even Homer Simpson will be a genius using AI.
Look at podcast interviews from the 2020s: people were genuinely trying to be useful by providing critical insights.

Once insights are made cheap by AI, people are naturally discouraged from trying to produce those insights.

  • I research various human cellular pathways and treatments as a hobby.

    AI seems to not "piece" ideas together.

    For example, let's say:
    * paper #1 suggests that compound X activates pathway A
    * paper #2 suggests that activation of pathway A will then also activate pathway B.

    If I ask AI, what compounds activate pathway B, it is very unlikely to tell me compound X as a possibility.
    (bringing together research from both papers)
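The two-hop chaining the poster is asking for is essentially transitive closure over an "activates" graph. A toy sketch, with the hypothetical paper claims above as edges:

```python
# Toy literature graph: each edge is an "activates" claim from a paper.
# The compounds, pathways, and papers here are hypothetical placeholders.
ACTIVATES = {
    "compound X": {"pathway A"},   # paper #1: X activates pathway A
    "pathway A":  {"pathway B"},   # paper #2: A activates pathway B
}

def activators_of(target: str, graph: dict) -> set:
    """All nodes from which `target` is reachable via 'activates' edges."""
    result = set()
    for start in graph:
        stack, seen = [start], set()
        while stack:  # depth-first search from each candidate node
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            stack.extend(graph.get(node, ()))
        if target in seen and start != target:
            result.add(start)
    return result

print(activators_of("pathway B", ACTIVATES))  # includes "compound X"
```

Graph reachability is the easy part; the hard part the poster is pointing at is that current AI tools often fail to extract and connect the edges from separate papers in the first place.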

  • I suspect the more people trust a tool (especially one as unreliable as AI) the more they lean on it instead of their own critical thinking. And the less critical thinking they use, the more they rely on the AI bot. So it likely snowballs.

I've had a few people send me work or reports that were clearly written by AI, and a big obvious flag was the factual errors. When I asked people about the false claims, they admitted AI wrote the reports and they didn't bother to fact-check. When using AI people tend
  • Another study I have conducted finds that relying on Microsoft indicates you already have lost your critical thinking skills.

I'd argue that swiping through Facebook, Instagram, and headlines only has killed critical thinking as well.
