Microsoft Study Finds Relying on AI Kills Your Critical Thinking Skills – Slashdot

The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
Using AI makes you dumber because your mental muscles don’t work as hard so they atrophy.
There are two cases: either you are better than the mean(*) of the training data, in which case using AI makes you worse, or you are worse than the training data, in which case using AI can make you better.
However, there are caveats. The source of data is never high quality (quality doesn’t scale), and the sprinkled noise (aka hallucinations) from the AI’s approximation of the empirical distribution produces low-quality output.
TL;DR. AI produces output that passes a low bar. If that looks attractive to you, go for it.
(*) choose a statistic of interest, obviously.

If you are learning a new skill, technology, or topic, AI can speed up the learning process to an extent. If you are already an expert in a skill, technology, or topic, AI is more of a "whack-a-mole" search for useful information. What AI works well at is throwing out seemingly odd answers to questions which may be viable (and need proving out first!). For example: how many X can fit in Y cubic meters? If X is of irregular shape, one can compute its volume by measuring the water displacement when it is submerged…

> AI parrots its training data with a lot of random noise

That is not how LLMs work, mate.

> AI’s approximation of the empirical distribution

You’re thinking of Markov chains.

> All trained LLMs are static models of the empirical training distribution.

No. Once again, you’re thinking of Markov chains. LLMs develop an internal world model and function through a process of repeated logical decision-making steps, in which probability plays absolutely no role. There is certainly a degree of "fuzzy logic", but it’s not probabilistic fuzzy logic; rather, it functions by passing the degree of confidence in a decision to the subsequent layer. To be more specific: the hidden state of a LLM i…
2) Where to start? I don’t think you have a grasp of what a Markov chain is. As far as I can tell, you associate the phrase “Markov chain” with some specific algorithm you have in mind, which doesn’t fit the architecture of LLMs. You get caught up in hundreds of details, and can’t see the underlying truth.
> My best guess is that I wasn’t talking about Markov chains.

Your description of LLMs as probabilistic state engines is a description of Markov chains, whether you’re aware of this or not. Any computer program which iterates a state X by applying a fixed function of X and an optional stream of independent random numbers gives rise to a Markov chain.

Even in the most pedantic description, which risks roping in human beings as Markov models (only being given an out due to quantum uncertainty), LLMs do not meet the Markov criteria be…

The lucky thing for all of the Indians who are using AI to generate their crap code is that they never had any critical thinking skills to begin with. In fact, AI code is at least 2 steps better than Indian-written code.

Dammit! I came here to make this very comment! My kingdom for a mod point!

> …and those who design things in CAD without thinking about whether the part can actually be machined.

That’s why you do it with the manufacturing constraints in mind as you design it. It’s all part of prototyping. Even if it can’t be machined, maybe it can be die-cast or 3D printed.

Why was a study even needed to demonstrate this?
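The "iterated state plus independent noise" construction the commenter describes can be made concrete with a toy example (this is an illustration of a Markov chain in general, not a claim about LLM internals):

```python
import random

# Toy Markov chain: the next state depends only on the current state and an
# independent random input -- exactly the construction described above.
def step(state, rng):
    return (2 * state + rng.randint(0, 1)) % 5

rng = random.Random(42)      # fixed seed so the run is reproducible
trajectory = [0]
for _ in range(10):
    trajectory.append(step(trajectory[-1], rng))
print(trajectory)
```

Because `step` looks only at the current state and fresh noise, the distribution of the next state given the whole history equals its distribution given the current state alone, which is the Markov property.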
Because the world is like Wikipedia: if you can’t cite a reference, then it doesn’t exist. Middle managers have to justify their decisions to upper management in case of failure, so this is useful for them.

Study was too long, so I had an AI summarize it.

Me: Summarize this study for me: https://www.microsoft.com/en-u… [microsoft.com]

ChatGPT said: A recent study titled "The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers" examines how Generative AI (GenAI) tools influence critical thinking among knowledge workers. Conducted by researchers from Carnegie Mellon University and Microsoft Research, the survey involved 319 participants…

It’s almost inconceivable to me that a company like Microsoft, which just invested hundreds of billions of USD into AI, would say anything like this from the corporate blowhole. Shareholders and the board are counting on AI to rule every human being and extract as much money as possible from each of them. Inside job?

-- Great things in business are never done by one person. They’re done by a team of people. - Steve Jobs

Cashier: That comes to $7.85
Me: OK, here’s $8.10
Cashier (confused): But… why the extra $0.10?
People stopped doing mental arithmetic once calculators were everywhere.

It’s literally the same "phenomenon" as with physical fitness: if you don’t work out your muscles, they will atrophy, and if you don’t work out your mind, it will too.
Whenever I do something like this, I always size up the cashier and make a silent bet with myself as to what the result will be. Older people usually seem better than younger at grasping what’s expected, probably from more/longer experience dealing with cash themselves. Using your example: I once got back $0.15 plus my original dime from a youngster (*sigh*), instead of a quarter, for you youngsters reading this. :-)

I still remember the old days when people made change without being told how much to return by the cash register. Your items cost $7.85 and you pay with a $20 bill… the cashier would make change by counting up: $7.95 (dime), $8.00 (nickel), $9.00 ($1 bill), $10 ($1 bill), $20 ($10 bill). People seem to have lost that simple trick for calculating change.
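The counting-up trick can be written as a short function. This is a toy sketch in cents with US denominations; it decomposes the change greedily and then announces it smallest-coin-first, so the order may differ slightly from a real cashier's habits:

```python
# Make change by "counting up" from the price to the amount tendered.
# All amounts are in cents; DENOMS are US bills/coins: $20, $10, $5, $1, ...
DENOMS = [2000, 1000, 500, 100, 25, 10, 5, 1]

def count_up_change(price, tendered):
    """Return the running totals announced while counting up to `tendered`."""
    owed = tendered - price
    coins = []
    for d in DENOMS:            # greedy decomposition, largest first
        while owed >= d:
            coins.append(d)
            owed -= d
    steps, total = [], price
    for c in sorted(coins):     # hand over smallest first, counting up
        total += c
        steps.append(total)
    return steps

# $7.85 paid with a $20 bill:
print(count_up_change(785, 2000))  # -> [790, 800, 900, 1000, 2000]
```

The running totals match the anecdote: nickel and dime up to $8.00, singles up to $10, a ten to reach $20.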
Now excuse me; I think someone’s on my lawn…

> Cashier: That comes to $7.85
> Me: OK, here’s $8.10
> Cashier (confused): But… why the extra $0.10?

In the example, giving $8.10 makes sense if they want change of a quarter instead of a dime and a nickel (along with the dime already in their pocket). Most people would rather have larger-value coins than an array of small-value coins; a handful of dozens of pennies in change would be rather annoying to most people.

This exact same story was posted four days ago [slashdot.org].

Lol, knew it was a dupe! Good catch.

I thought we stopped teaching critical thinking decades ago. Heck, it’s already considered a prime reason we are where we are right now politically in the U.S.

Not everywhere, but expectations are high since Jan 20, 2025; can’t have any of that critical thinking stuff now…

When you give up on critical thinking and expect a tool like Stack Overflow to do it for you, it didn’t kill your critical thinking skills; you did that. Knowledge-based tools can’t hurt your critical thinking that way. It’s your brain; you’re supposed to apply critical thinking to them. The same people that don’t do that with AI tools also don’t do it with advice from teachers, books, news, politicians, priests, blogs, youtubers, total fucking strangers, etc. We’re just not willing to say most people are dumb.

That means the AI is working fine. That’s the whole entire point.

This is no different than what the internet has done to us already. Folks rarely commit anything to memory because it’s dead simple to just look it up via (enter your favorite search engine here). If/when the day comes that the internet goes down for good, the human species is going to be in trouble, since we’ve relied so heavily on said internet to show us how to do damn near everything there is to be done. :|

Maybe tell ChatGPT to talk so that Slashdot displays it correctly?

> the researchers warned that this could portend concerns about "long-term reliance and diminished independent problem-solving."

However, there’s nothing in the study about how actual critical thinking is reduced. All the study essentially says is that if someone trusts Tool A to do its job, then they won’t think further about Tool A doing its job. Uhh … of course. All tools that make a job easier are supposed to reduce thinking about the replaced task. If that weren’t so, the tool wouldn’t be useful.
A more useful research question is whether critical thinking about low-level tasks is replaced by critical thinking about high-level tasks.

I thought the same thing. Didn’t read TFA, but from TFS, it seems this study wasn’t studying effects over time, but simply took a snapshot of the current situation. So saying there’s a decrease seems wrong to me; it seems more accurate to say that confident people think AI is incompetent and rely less on it, and vice versa. Which is something I could have told you already.

Or you lose them. Like most skills. AI makes most people intellectually lazy because it seems to provide answers that look good enough. Not a surprise. I observed this with a first-year coding course: the students that relied on AI to do the simple tasks never learned anything a bit more advanced.

Look at podcast interviews from the 2020s: people were genuinely trying to be useful by providing critical insights. Once insights are made cheap by AI, this naturally discourages people from trying to make those insights.

I research various human cellular pathways and treatments as a hobby. AI seems to not "piece" ideas together. For example, let’s say:
* paper #1 suggests that compound X activates pathway A
* paper #2 suggests that activation of pathway A will then also activate pathway B.
If I ask AI what compounds activate pathway B, it is very unlikely to tell me compound X as a possibility (bringing together research from both papers).

Another study I have conducted finds that relying on Microsoft indicates you already have lost your critical thinking skills.