

Microsoft Study Finds Relying on AI Kills Your Critical Thinking Skills
A new study (PDF) from researchers at Microsoft and Carnegie Mellon University found that increased reliance on AI tools leads to a decline in critical thinking skills. Gizmodo reports: The researchers tapped 319 knowledge workers -- people whose jobs involve handling data or information -- and asked them to self-report how they use generative AI tools in the workplace. The participants reported the tasks they were asked to do, how they used AI tools to complete them, how confident they were in the AI's ability to do the task, their ability to evaluate that output, and how confident they were in their own ability to complete the same task without any AI assistance.
Over the course of the study, a pattern revealed itself: the more confident the worker was in the AI's capability to complete the task, the more often they could feel themselves taking their hands off the wheel. The participants reported less perceived enaction of critical thinking when they felt they could rely on the AI tool, pointing to the potential for over-reliance on the technology without examination. This was especially true for lower-stakes tasks, the study found, as people tended to be less critical. While it's very human to have your eyes glaze over for a simple task, the researchers warned that this could portend "long-term reliance and diminished independent problem-solving."
By contrast, the less confidence the workers had in the AI's ability to complete the assigned task, the more they found themselves engaging their critical thinking skills. In turn, they typically reported more confidence in their ability to evaluate what the AI produced and improve upon it on their own. Another noteworthy finding of the study: users who had access to generative AI tools tended to produce "a less diverse set of outcomes for the same task" compared to those without.
tldr; Using AI makes you dumber. (Score:3, Informative)
Using AI makes you dumber because your mental muscles don't work as hard so they atrophy.
Re: (Score:2, Funny)
Re:tldr; Using AI makes you dumber. (Score:5, Interesting)
There are two cases: either you are better than the mean(*) of the training data, in which case using AI makes you worse, or you are worse than it, in which case AI can make you better.
However, there are caveats. The source data is never high quality (quality doesn't scale), and the sprinkled noise (aka hallucinations) from the AI's approximation of the empirical distribution produces low-quality output.
TL;DR. AI produces output that passes a low bar. If that looks attractive to you, go for it.
(*) choose a statistic of interest, obviously.
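To make the mean argument concrete, here's a toy sketch (my own illustration, nothing from the study; the quality numbers and noise level are invented):

```python
# Toy model: output quality as a single number. "Deferring to the AI" yields
# roughly the training-data mean plus hallucination noise; "doing it yourself"
# yields your own skill level. All numbers below are made up for illustration.
import random

def ai_output(training_mean: float, noise: float = 0.15) -> float:
    return training_mean + random.gauss(0.0, noise)

random.seed(1)
training_mean = 0.6        # assumed average quality of the training data
trials = 10_000

for skill in (0.4, 0.8):   # a below-average and an above-average worker
    deferred = sum(ai_output(training_mean) for _ in range(trials)) / trials
    print(f"skill={skill:.1f}: solo quality={skill:.2f}, "
          f"deferring to AI gives about {deferred:.2f} on average")
# The 0.4 worker comes out ahead by deferring; the 0.8 worker comes out worse.
```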
Re: tldr; Using AI makes you dumber. (Score:2)
Pulling oneself up or down with AI (Score:2)
If you are learning a new skill or technology or topic, AI can speed up the learning process to an extent.
If you are already an expert in a skill, technology or topic, AI is more of a "whack a mole" search for useful information.
What AI is good at is throwing out seemingly odd answers that may be viable (and need proving out first!).
For example, how many X can fit in Y cubic meters?
If X is of irregular shape, one can compute its volume by measuring the water displacement when it is submerged.
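For what it's worth, here's a rough sketch of that estimate (the 60% packing efficiency is my assumption; irregular shapes rarely pack anywhere near 100%):

```python
# Estimate how many irregular items fit in a container, given the item volume
# measured by water displacement and an assumed packing efficiency.
def items_that_fit(container_m3: float, displaced_liters: float,
                   packing_efficiency: float = 0.6) -> int:
    item_m3 = displaced_liters / 1000.0          # 1000 liters per cubic meter
    usable_m3 = container_m3 * packing_efficiency
    return round(usable_m3 / item_m3)

# Example: items displacing 2.5 L each, packed into a 3 m^3 crate.
print(items_that_fit(3.0, 2.5))   # -> 720 with the 60% packing assumption
```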
Re: (Score:2)
That is not how LLMs work, mate.
You're thinking of Markov chains.
Re: (Score:2)
Re: (Score:3)
No. Once again, you're thinking of Markov chains.
LLMs develop an internal world model and function through a process of repeated logical decision-making steps, in which probability plays absolutely no role. There is certainly a degree of "fuzzy logic", but it's not probabilistic fuzzy logic; rather, it works by passing the degree of confidence in the decision to the subsequent layer.
To be more specific: the hidden state of a LLM i
Re: (Score:2)
2) Where to start? I don't think you have a grasp of what a Markov chain is. As far as I can tell, you associate the phrase "Markov chain" with some specific algorithm you have in mind, which doesn't fit the architecture of LLMs. You get caught up in hundreds of details, and can't see the underlying truth.
My best guess is that
Re: (Score:2)
Your description of LLMs as probabilistic state engines is a description of Markov chains, whether you're aware of this or not.
Even in the most pedantic description, which risks roping in human beings as Markov models (only being given an out due to quantum uncertainty), LLMs do not meet the Markov criteria be
Re: (Score:2)
Markov models subsume all examples you've given, from purely deterministic to partially random and even self-modifying code if you care. The secret sauce is no secret: you have to write down an appropriate state space. It's always possible to do, since that is exactly what was done once before to implement the physical LLM on a server f
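To illustrate the state-space point, here is a toy of my own (character-level, nothing like a real LLM): once you declare the entire context window to be the state, any fixed-context next-token sampler is formally a Markov chain.

```python
# Toy character-level chain: state = the last K characters, transition = pick
# one of the characters that followed that state in the corpus. The only point
# is that "next output depends on nothing but the current state" holds once
# the state is defined as the whole context.
import random
from collections import defaultdict

K = 3

def build_chain(text: str) -> dict:
    chain = defaultdict(list)
    for i in range(len(text) - K):
        chain[text[i:i + K]].append(text[i + K])
    return chain

def sample(chain: dict, seed: str, length: int = 40) -> str:
    out = seed
    while len(out) < length and out[-K:] in chain:
        out += random.choice(chain[out[-K:]])   # depends only on the state
    return out

corpus = "the cat sat on the mat and the cat ate the rat "
print(sample(build_chain(corpus), "the"))
```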
Not surprising (Score:3, Insightful)
Re: (Score:1, Interesting)
The lucky thing for all of the Indians who are using AI to generate their crap code is that they never had any critical thinking skills to begin with.
In fact, AI code is at least 2 steps better than Indian written code.
UK Taxi Drivers (Score:2)
Think of lint (Score:3)
I was insulted, though, when it misquoted my sentence and then said I had a grammar error in the misquoted part (:-))
Is 25 years of google not proof enough? (Score:5, Insightful)
And those who get their news from facebook, and those who linger in newsgroups, and those who google everything and
Re: (Score:2)
Dammit! I came here to make this very comment! My kingdom for a mod point!
Re: Is 25 years of google not proof enough? (Score:2)
and those who design things in CAD without thinking about whether the part can actually be machined.
That's why you do it with the manufacturing constraints in mind as you design it. It's all part of prototyping. Even if it can't be machined, maybe it can be die-cast or 3D printed.
Re: (Score:2)
Because the world is like Wikipedia. If you can't cite a reference, then it doesn't exist. Middle managers have to justify their decisions to upper management in case of failure, so this is useful for them.
Re:Is 25 years of google not proof enough? (Score:4, Insightful)
Re: (Score:2)
Study was too long, so I had an AI summarize it.
Shareholders Pissed! (Score:2)
It's almost inconceivable to me that a company like Microsoft, which just invested hundreds of billions of USD into AI, would say anything like this from the corporate blowhole.
Shareholders and the board are counting on AI to rule every human being and extract as much money as possible from each of them.
Inside job?
--
Great things in business are never done by one person. They're done by a team of people. - Steve Jobs
This is not a new phenomenon (Score:3)
Cashier: That comes to $7.85
Me: OK, here's $8.10
Cashier (confused): But... why the extra $0.10?
People stopped doing mental arithmetic once calculators were everywhere.
Re: (Score:2)
It's literally the same "phenomenon" as with physical fitness. If you don't work out your muscles, they will atrophy. If you don't work out your mind, it will too.
Re: (Score:2)
Cashier: That comes to $7.85
Me: OK, here's $8.10
Cashier (confused): But... why the extra $0.10?
People stopped doing mental arithmetic once calculators were everywhere.
Whenever I do something like this, I always size up the cashier and make a silent bet with myself as to what the result will be. Older people usually seem better than younger ones at grasping what's expected -- probably more/longer experience dealing with cash themselves. Using your example, I once got back $0.15 + my original dime from a youngster (*sigh*) -- instead of a quarter, for you youngsters reading this. :-)
Re: (Score:2)
I still remember the old days when people made change without being told how much to return by the cash register. Your items cost $7.85 and you pay with a $20 bill... the cashier would make change by counting up: $7.95 (dime); $8.00 (nickel), $9.00 ($1 bill), $10 ($1 bill), $20 ($10 bill). People seem to have lost that simple trick for calculating change.
Now excuse me; I think someone's on my lawn...
Re: (Score:2)
Except that it isn't calculating how much change to give; it's an algorithm for giving the right amount of change without needing to know how much that is. I learned that method for making change from my father back when I was a lad, and I've used it ever since whenever I've worked a cash register. I've wondered, off and on, how many cashiers today could make change without having the register tell them how much change to give.
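For the curious, the counting-up method is easy to write down; here's a sketch (the denominations and the exact coin order are my choices, and real cashiers vary the small-coin order):

```python
# "Counting up" change: hand back coins/bills that walk the running total from
# the price up to the amount tendered, without ever stating the difference.
# Works in cents to avoid floating-point trouble.
DENOMS = [2000, 1000, 500, 100, 25, 10, 5, 1]   # cents: $20 bill ... penny

def count_up_change(price_cents: int, tendered_cents: int) -> list:
    remaining = tendered_cents - price_cents
    handed = []
    for d in DENOMS:                  # standard greedy change...
        while remaining >= d:
            handed.append(d)
            remaining -= d
    return handed[::-1]               # ...but handed over smallest-first

price, paid = 785, 2000
running = price
for coin in count_up_change(price, paid):
    running += coin
    print(f"+${coin / 100:.2f} -> ${running / 100:.2f}")
# Counts up: $7.90, $8.00, $9.00, $10.00, $20.00
```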
Re: (Score:1)
Cashier: That comes to $7.85
Me: OK, here's $8.10
Cashier (confused): But... why the extra $0.10?
People stopped doing mental arithmetic once calculators were everywhere.
In the example, giving $8.10 makes sense in case they want a quarter in change, instead of a dime and a nickel (along with the dime already in their pocket). Most people would rather have larger-value coins than a bunch of small-value coins. Getting dozens of pennies in change would be rather annoying to most people.
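A quick sketch of that trade-off, if anyone wants to see the counts (plain greedy US coin change; the two payment options are just the ones from the example):

```python
# Compare how many coins come back for different ways of paying a $7.85 bill.
COINS = [25, 10, 5, 1]   # cents

def coins_back(change_cents: int) -> list:
    out = []
    for c in COINS:
        while change_cents >= c:
            out.append(c)
            change_cents -= c
    return out

price = 785
for paid in (800, 810):
    change = coins_back(paid - price)
    print(f"pay ${paid / 100:.2f}: get back {change} ({len(change)} coins)")
# pay $8.00: [10, 5] (2 coins); pay $8.10: [25] (1 coin)
```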
Re: (Score:2)
Editors must be using AI (Score:5, Informative)
This exact same story was posted four days ago [slashdot.org].
Re: (Score:2)
Lol knew it was a dupe! Good catch.
Re: (Score:2)
Re: (Score:2)
New phenomenon? (Score:1)
I thought we stopped teaching critical thinking decades ago. Heck, it's already considered a prime reason we are where we are right now politically in the U.S.
Re: (Score:2)
I thought we stopped teaching critical thinking decades ago.
Not everywhere, but expectations are high since Jan 20, 2025 -- can't have any of that critical thinking stuff now...
Replace AI with Stack Overflow (Score:2)
When you give up on critical thinking and expect a tool like Stack Overflow to do it for you, it didn't kill your critical thinking skills; you did that.
Knowledge based tools can't hurt your critical thinking that way. It's your brain. You're supposed to apply critical thinking to them. The same people that don't do that with AI tools also don't do it with advice from teachers, books, news, politicians, priests, blogs, youtubers, total fucking strangers etc.
We're just not willing to say most people are dumb
Re: Replace AI with Stack Overflow (Score:2)
Oh, you're right all right. People are super lazy. It's an evolutionary feature to conserve energy. They are dumb too. Bingo again. They like simple answers because complex ones take work to analyze and understand.
Bam! You see, laziness leads to dumbness. QED.
Only predators know that, and only psychopaths care so little as to say it out loud. Congrats.
That means the AI is working fine (Score:2)
That means the AI is working fine. That's the whole entire point.
Easy / Instant access to information (Score:2)
This is no different than what the internet has done to us already.
Folks rarely commit anything to memory because it's dead simple to just look it up via ( enter your favorite search engine here ).
If / when the day comes that the internet goes down for good, the human species is going to be in trouble, since we've relied so heavily on said internet to show us how to do damn near everything there is to be done. :|
Re: (Score:2)
I asked ChatGPT (Score:2)
Re: I asked ChatGPT (Score:2)
Re: (Score:2)
Maybe tell ChatGPT to talk so that Slashdot displays it correctly?
Re: (Score:2)
Isn't this study stating the obvious? (Score:2)
the researchers warned that this could portend "long-term reliance and diminished independent problem-solving."
However, there's nothing in the study about how actual critical thinking is reduced. All the study essentially says is that if someone trusts Tool A to do its job, then they won't think further about Tool A doing its job. Uhh ... of course.
All tools that make a job easier are supposed to reduce thinking about that replaced task. If that weren't so, the tool wouldn't be useful. A more useful research question is whether critical thinking about low-level tasks is replaced by critical thinking about high-level
Re: (Score:2)
I hope studies are being done.
Would be fairly easy to arrange, say, multiple software development teams to work on a complex task, some using AI and some not, and see if the AI helped produce better low-level code faster, but also if it meant the humans using AI were enabled to produce better hi
Re: Isn't this study stating the obvious? (Score:2)
I thought the same thing. Didn't read TFA, but from TFS, it seems this study wasn't studying effects over time, but simply took a snapshot of the current situation.
So saying there's a decrease seems wrong to me; it seems more accurate to say that confident people think AI is incompetent and rely less on it, and vice versa.
Which is something I could have told you already.
Thinking skills still need to be exercised (Score:2)
Or you lose them. Like most skills. AI makes most people intellectually lazy because it seems to provide answers that look good enough.
Not a surprise. I observed this with a first-year coding course: the students who relied on AI to do the simple tasks never learned anything even a bit more advanced.
AI is used a lot by managers (Score:2)
Correlation or causation? (Score:2)
I once had to shift (Score:2)
If AI can make those insights, why would I? (Score:1)
Look at podcast interviews from the 2020s: people were genuinely trying to be useful by providing critical insights.
Once insights are made cheap by AI, this naturally discourages people from trying to make those insights.
Depends on how deep you go in research (Score:1)
I research various human cellular pathways and treatments as a hobby.
AI seems to not "piece" ideas together.
For example, let's say:
* paper #1 suggests that compound X activates pathway A
* paper #2 suggests that activation of pathway A will then also activate pathway B.
If I ask AI what compounds activate pathway B, it is very unlikely to suggest compound X as a possibility (bringing together research from both papers).
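The kind of chaining I'm after is basically transitive closure over the claims; here is a minimal sketch (the compound and pathway names are placeholders, and real literature mining is obviously far messier):

```python
# Store "activates" claims as a directed graph and follow edges transitively,
# so compound X -> pathway A -> pathway B surfaces X as a candidate for B.
from collections import defaultdict

claims = [
    ("compound X", "pathway A"),   # paper #1
    ("pathway A", "pathway B"),    # paper #2
]

graph = defaultdict(set)
for src, dst in claims:
    graph[src].add(dst)

def activators_of(target: str) -> set:
    """Everything that directly or indirectly activates `target`."""
    found = set()
    for start in graph:
        stack, seen = [start], set()
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            if node == target and start != target:
                found.add(start)
            stack.extend(graph.get(node, ()))
    return found

print(activators_of("pathway B"))   # compound X and pathway A (order may vary)
```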
Probably a snowball effect at work (Score:2)
I've had a few people send me work or reports that were clearly written by AI, and a big obvious flag was the factual errors. When I asked people about the false claims, they admitted AI wrote the reports and they didn't bother to fact-check. When using AI people tend
another study (Score:2)
Another study I have conducted finds that relying on Microsoft indicates you already have lost your critical thinking skills.
I would argue reading headlines on phones is worse (Score:2)