The Impact of Generative AI on Critical Thinking [pdf]

greybox 202 points microsoft.com
Maro
A good model for understanding what happens to people as they delegate tasks to AI is to think about what happens to managers who delegate tasks to their subordinates. Sure, there are some managers who can remain sharp, hands-on and relevant, but many gradually lose their connection to the area they're managing and become pure process/project/people managers and politicians.

I.e., most managers can't help their team find a hard bug that is causing a massive outage.

Note: I'm a manager, and I spend a lot of time pondering how to spend my time, how to be useful, how to remain relevant, especially in this age of AI.

BeetleB
Indeed. I started using Sonnet for coding only about a month ago. It's been great in that I've finally written scripts I had floating around my brain for years, and very rapidly.

But the feeling of skill atrophy is very real. The other day I needed to write a function that recursively traverses a Python dictionary, making modifications along the way. It's the perfect task to give an LLM. I can easily see that if I always give this task to an LLM, a few years down the road I'll be really slow in writing it on my own, and will fail most coding interviews.
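
(For concreteness, a minimal sketch of the kind of recursive-traversal function described above; the names and the value-rewriting rule are hypothetical, just to illustrate the shape of the task.)

  def transform_values(node, fn):
      """Recursively walk nested dicts/lists, applying fn to every leaf value."""
      if isinstance(node, dict):
          return {key: transform_values(value, fn) for key, value in node.items()}
      if isinstance(node, list):
          return [transform_values(item, fn) for item in node]
      return fn(node)

  config = {"a": 1, "b": {"c": [2, 3], "d": 4}}
  print(transform_values(config, lambda v: v * 10))
  # {'a': 10, 'b': {'c': [20, 30], 'd': 40}}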

Also, while there is a high from producing a lot in a short amount of time, there is none of the satisfaction you get from programming. And debugging its bugs is as much fun as debugging your coworkers' bugs - except at least with coworkers you can have a more intelligent conversation about what they were trying to do.

hosh
This is exactly why I still play Go, practice martial arts, archery and use the command line for my dev workflow. Those are all arguably less efficient and obsolete. The AlphaGo series can defeat the strongest human Go players and firearms are more effective than unarmed martial arts and archery. GUI is easier for most people than the command line. Yet, I practice all these to develop my mind and body. I don't have to be a world-class Go player to benefit from learning to play Go.

This happens a lot in natural ecosystems. For example, many people plant trees and add a drip system. The trees grow to depend on the drip system, and never stretch and develop their roots -- or the relationship they have with the soil microbiome. This makes the trees prone to being knocked down when an unusually strong gust of wind comes through.

BeetleB
This is exactly why I still play Go, practice martial arts, archery and use the command line for my dev workflow.

And this is what worries me. Pre-LLM, I'd get all my practice done during work hours. I fear that with LLMs, I'll need to do all this practice on my free time, effectively reducing my truly "free" time.

hosh
I had someone give me a good use-case for LLMs -- doing the CRUD work that I really don't want to do, if you're short on funding.

On the other hand, there is something I learned from permaculture design. You need to allow what you want to grow in the ecosystem if you want them present. If I wanted to develop the next generation of developers, then I'd be giving that CRUD stuff I've done many times to a junior developer instead. It obviously depends on funding, and whether your investors are on board with that.

As far as keeping up and remaining competitive ... my suggestion is to learn these things in a way where you can broadly apply the underlying principles in other domains, especially since, as you age, your working memory will shrink. This isn't accumulating more of the same thing, but gaining deeper insights that can only be found once you've developed a sufficient level of skill. There are lots of ways this can be accomplished.

BeetleB
You need to allow what you want to grow in the ecosystem if you want them present. If I wanted to develop the next generation of developers, then I'd be giving that CRUD stuff I've done many times to a junior developer instead.

Indeed, one of my fears is that people won't develop those skills to begin with. In my team, the few senior people can pretty much be trusted with using LLMs and still producing quality code. But we decided against evangelizing it to the junior folks. They still have access via Copilot, but we won't gently suggest they try it out.

alephnerd
Pre-LLM, I'd get all my practice done during work hours. I fear that with LLMs, I'll need to do all this practice on my free time

If you can attach a valid business reason/excuse then it's fairly easy to get time dedicated to experimentation and learning.

I haven't been a coder for several years now, but I still try to justify a personal lab environment in order to understand the market and customer problems.

If you're an Engineer or EM, I think it would be even easier (attach experimentation to some sort of critical engineering initiative).

Even with Code Gen Copilots becoming a thing, between the lines most of us in engineering and business leadership recognize it cannot replace architecture or design. If it can, then you've solved the Turing or Chinese Room problem and that's a whole other story.

That said, if your day job isn't only "write this terraform/python script" you shouldn't be at risk.

BeetleB
If you can attach a valid business reason/excuse then it's fairly easy to get time dedicated to experimentation and learning.

This has the same problem as "doing it in free time". I have a finite number of goodie points with management, and I'd be using some when I request this - which means I have fewer things I can request from them in the future.

That said, if your day job isn't only "write this terraform/python script" you shouldn't be at risk.

I'm not worried about the present, or even the near future. The reason I have other valuable skills is because of the years I spent writing code. Eventually, I'll stop growing if I rely too much on LLMs. I may have to rely on them anyway if everyone else is using them and being more productive.

CamperBob2
The Turing test can be considered well and truly passed at this point, partially because it has always said more about the human taking the test than it did about the machine. As for the Chinese Room, it was never anything but a pointless exercise in question-begging. If anyone ever considered it relevant, the advent of LLMs should have immediately convinced them otherwise. Searle knew nothing about embedding vectors, never mind attention.

Great points otherwise, though. A strong focus on identifying, understanding, and solving customer problems is, in modern parlance, all you need.

MrMcCall
You might have my favorite profile page here, with the exception of DonHopkins.

Just beautiful concepts, inspiring, and well-represented in this comment.

Peace be with you, my friend.

MoonGhost
Coding is actually a hard skill which requires practice. Regular leetcoding for the sake of it should help. The problem is that it takes the whole brain and breaks other chains of thought. I'm thinking about dedicating full days to small tasks like this.
BeetleB
Regular leetcoding for the sake of it should help.

I got where I am without regular leetcoding. For me, regular leetcoding will only marginally improve my skills. And I definitely don't want to do regular leetcoding just to maintain my skills.

You know how many people love jogging outdoors but hate treadmills? Leetcoding is like treadmills. You may have to do it, but it sucks. Yes, some people love treadmills, but they're in the minority.

QuadmasterXLII
There's two kinds of leetcoding. Leetcoding via memorization obviously sucks, but leetcoding by clicking next problem until you get one that's totally alien, then letting it mull in the back of your head for a few days to crack it without reference material, is pretty fun
mechagodzilla
After being a professional programmer for ~20 years, and recently playing around with leetcode - my main issue with leetcode is that there's almost no overlap between leetcode problems and the problems I actually encounter in the wild. The validation tests often have silly corner cases that force you into a single answer to avoid timing out. It's frequently as much work to understand what the problem is actually asking as it is to implement a solution. Just as I've found ChatGPT to be pretty mediocre at writing the sort of code I work on while others swear by it, maybe some people's day job actually looks like writing leetcode all day? I know a lot of interviewers use it, but it feels so disconnected from actual engineering work.
QuadmasterXLII
My line of work (ML for medical imaging) is pretty dense with leetcode-likes, especially the classic “what’s the best time complexity? Great, now what’s the best space complexity?”
MoonGhost
I'm working on a sort of 'graph' library. It's leetcoding all the way. There are many separate containers and algorithms. The problem is to a) write them, b) optimize for memory, c) optimize for performance, d) find a 'good' balance, where 'good' is undefined. But it starts with architecture, which is based on some estimates of achievable functionality/performance.
bookofjoe
What if, like me, you hate both jogging outdoors and treadmills?
CamperBob2
I started using a C compiler for coding about 30 years ago. It's been great, but the feeling of skill atrophy is very real. I probably couldn't write useful code in x86 assembly any more, at least not without refreshing my memory first.

And you know what? That's just peachy keen. I don't need to write x86 assembly anymore. In fact, these days I do a lot of coding for ARM platforms, but I never learned ARM assembly at all. So it would take more than just a refresher course to bring me up to speed in that. I don't anticipate any such need, fortunately.

So... if I still need to write C in 10 years, why in the world would I consider that a good thing? Things in this business are supposed to progress toward higher and higher levels of abstraction.

BeetleB
I have extremely high faith that the C compiler will produce very reliable x86 assembly code.

I am extremely pessimistic that LLMs will ever reach that level of reliability. Or even close. They are a great helper, but that's all they'll be.

CamperBob2
What are some examples of prompts and responses that have made you pessimistic about the technology's ability to improve? Reliability has gotten insanely better over the past couple of years, and it's still improving.
BeetleB
What are some examples of prompts and responses that have made you pessimistic about the technology's ability to improve?

I'm not saying it won't improve. I think its lack of determinism puts a natural upper bound on reliability - something we don't have to worry about with compilers.

As for examples: Oh wow. As much as I love and use them for coding, almost every project I work on has cases of the LLM coming up with complex solutions to a task that do not work, where 1-2 lines of (simple!) code would have solved the problem. It's not just that it fails: it kept failing even after I told it how to do it right.

Sometimes it does a great job and solves a hard problem for me. Other times I lose a lot of time getting it to do beginner level stuff. It has huge gaps/blind spots.

medhir
the flip side to that “high” that comes with working super quickly for me has been a crash associated with a sinking feeling that I’ve outsourced too much.
mullingitover
I.e., most managers can't help their team find a hard bug that is causing a massive outage.

Strategy vs tactics. Managers aren't there to teach their reports skills, they hire them because they already have them. They're there to set priorities and overall direction.

BeetleB
He's not disputing that. The difference, though, is that if you're using LLMs as code assistants, your skills will atrophy the way the manager's skills will, but you are still a SW engineer.

While the manager doesn't need those skills, you still do.

mullingitover
LLMs and agents are going to make labels like 'sw engineer', 'qa tester', marketer, CFO, and CEO very squishy.

I think in the coming decade if you put yourself in a box and do a specialized task, and nothing more, you'll have a bad time. This is going to be an era where strategy is far more important than tactics.

Applejinx
This assumes all specialized tasks are fungible.

I can't disagree more strongly. If you're a specialized person with expertise and ability to perform outside the knowledge of all possible interns because you're developing something novel and not covered by standard materials, you'll be hard to compete with using AI because AI will direct people to standardized methods.

Granted, if you do a specialized task that is taught in schools and that anyone can do, that's trouble. But that's not tactics either, that's clock-punching. That's replaceable. You can talk 'strategy' all you like but if you're only exploiting the AI's ability to do what people already know, that's another box to be stuck in.

mullingitover
I think we're in agreement then, I'm saying the specialists of this stripe:

if you do a specialized task that is taught in schools

are in trouble, and so are you.

perform outside the knowledge of all possible interns

This is the strategic level thinking I'm referring to. Frankly this role might not be safe either, but if that's the case then we're probably headed for fully automated luxury communism no matter what we do.

farts_mckensy
What if you use it to handle boring BS work you don't care about and focus on what you actually want to do? I offload my work tasks to GPT and do other stuff during work hours. Play with my dog. Stretch. Paint. Work on other creative projects. I don't give a fuck about work as long as they keep sending me a paycheck. Oh no! My brain is going to atrophy from not manually synthesizing info from this report that no one reads anyway.
gotoeleven
Startup idea: Farts_Mckensyly

Do all the work Farts_Mckensy does at half the price using AI.

farts_mckensy
I assure you, it wouldn't be very lucrative.
balamatom
real
w10-1
This is a good analogy, and not just because of skills atrophy.

Managers grow the skills needed for their organization. Their team affects them.

A process-oriented team with quality/validation mindset has replaceable roles; the action is in the process. An expert team has people with tremendous skills and discretion doing what needs doing; the action is in selection and incentives. Managers adapt to their team requirements, in ways positive and negative: becoming rule-bound, privileging favorites, etc.

With AI this might be a positive insofar as it forces people to state the issues clearly, identifying relevant context, constraints, objectives, etc.

Agile benefitted software development by changing the granularity of delivery and planning -- essentially, helping people avoid getting lost in planning fantasies.

Similarly, I believe that the winner of the AI-for-development race (copilot et al) will not just produce good code, but build good developers by driving them to state requirements clearly and simply. A good measure here is the number of iterations to complete code.

An anti-pattern here, as with agile, is where planning devolves into exploring and exploring into incremental changes for no real benefit - polishing turds. Again, a good measure is the number of sessions to completion; too many and you know you don't know what you're doing, or the AI cannot grasp the trees for the forest.

causal
I think this is a really good analogy. Delegating problems to others is nothing novel or new to human experience.

Perhaps the biggest difference is the lack of feedback AI gives that humans can give: a subordinate can communicate if they feel like their manager is being too hands off. AI never questions management style.

natebc
It's also like trying to manage the most prolific bullshit artist the world has ever produced.
mattgreenrocks
I think this is the primary reason I have difficulty with management: my brain simply doesn't get a workout (in the fitness sense) in the cognitive way it is used to, but instead has to track all sorts of problems, many of which are emotional/political and thus not really something I can readily solve by thinking real hard about things.
farts_mckensy
That's what the money's for!
balamatom
Fellow manager, it will do you good to realize that we are not in an "age of AI". Out there it's the age of disinformation.
vunderba
I've been calling this out since ChatGPT went mainstream.

The seductive promise of solving all your problems is the issue. By reaching for it to solve any problem at an almost instinctual level you are completely failing to cultivate an intrinsically valuable skill - that of critical reasoning.

That act of manipulating the problem in your head—critical thinking—is ultimately a craft. And the only way to become better at it is by practicing it in a deliberate, disciplined fashion.

This is why it's pretty baffling to me when I see attempts at comparing LLMs to the invention of the calculator. A calculator is still used IN SERVICE of a larger problem you are trying to solve.

turnsout
Yeah, but the calculator analogy is apt. In the past, anyone who went to grade school could answer 6 * 7 off the top of their head, and do basic mental math. We've pretty much lost that.

With that said, I do worry that losing the ability to craft sentences (or code) is more problematic than losing the ability to do mental math.

bluefirebrand
Losing the ability to do mental math is probably not actually a big deal

Losing the ability to do calculations by hand on a piece of paper with a pencil probably actually is a big deal

When I went to school we still had to do a lot of calculations by hand on paper. Thus, if I use a calculator to get an answer, I'm capable of reproducing the answer by hand if necessary

With math, at least when I was learning it, we seemed to understand that the calculator is a useful tool that doesn't replace the need to develop underlying skills

I'm seeing the exact opposite behavior and mentality from the AI crowd. "You don't need to learn how to do that correctly anymore, you can just have the AI do it"

"Vibe Coding", literally the attitude that you don't need to understand your code anymore. You just have the AI generate it and go off vibes to decide if it's right or not

Yeah, I don't know how my car engine works. But I trust that the people who engineered it do, and the mechanics that fix it when it breaks do. There's no room for "Vibe Bridge Building" in reality

Anyone advocating for "Vibe coding" is admitting that it doesn't actually matter if the thing they build works or not

Unfortunately that seems to be a growing portion of software

marinmania
I wonder if people that were writing code in assembly complained that people learning more modern languages didn't really know how the 0s and 1s work.

I'm not sure where the line is, but there is a point where the abstraction works so well you really don't need to know how it works underneath.

I'm also not sure if a car mechanic needs to know how an engine works. I'm assuming almost none of them could design a car engine from scratch. They know just enough to know which parts needs to be replaced.

rqtwteye
I'm also not sure if a car mechanic needs to know how an engine works. I'm assuming almost none of them could design a car engine from scratch. They know just enough to know which parts needs to be replaced.

That's why, when your car has a problem, a lot of mechanics just blindly replace parts with the hope that something will fix it. You are much better off with a mechanic that understands how the car works. And you will save a lot of money.

marinmania
I disagree that any car mechanic working at a local auto body shop knows engines well enough to design one. They just know which parts are broken.

Similarly, we reach a point in coding where you don't really need to know how every API or language you use operates beneath the hood; you just need to be able to see where it's broken.

jdlshore
They did complain. But the key part is “the abstraction works so well.” It doesn’t, and I suspect it can’t.
balamatom
It can abstract over human agency.
bluefirebrand
there is a point where the abstraction works so well you really don't need to know how it works underneath

There is a point where most people might not need to know

There is never a point where no one needs to know

rqtwteye
Losing the ability to do mental math is probably not actually a big deal

I think it's a huge deal. I see a lot of people doing financing stuff who have no idea what a 20% interest rate really means. So they just go ahead and do it, because taking out a calculator is too tedious. I find it pretty crazy how many people can't figure out what 20% or even 10% of something is. A lot of financial offerings take advantage of the fact that people can't do even basic math.
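
(A concrete instance of the gap being described, with hypothetical numbers; the point is that the mental shortcut and the calculator agree, but only one of them is available mid-conversation.)

  balance = 5_000        # hypothetical loan balance
  print(balance * 0.20)  # 1000.0 -> a 20% rate costs about $1,000 a year here
  print(balance * 0.10)  # 500.0  -> 10% is just "shift the decimal one place"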

bluefirebrand
If taking out a calculator is too much of a burden when we all have smartphones with built in calculator apps, then the problem is not the lack of mental math skill.

Edit: At best, it shows that the person doesn't really know how to calculate that even with a calculator

At worst, it shows that the person lacks any kind of give-a-fuck at all

Either way, they likely would not have learned the mental math required to do this regardless of whether it was taught at school or not

pessimizer
At best, it shows that the person doesn't really know how to calculate that even with a calculator

What it shows is that 10% means nothing to them. It's just another number that a calculator will spit out. It's not "one out of ten" or "shift everything to the right." They have no ability to evaluate what the number means on the fly. They just plug it into the calculator and check if they have that much. They can't have a reasonable discussion about the number without stopping and calculating every interest rate they can think of, writing them down on a piece of paper, and writing the budget next to them.

A calculator here, of course, leads to more work, not less. I bet the vast majority of the time that somebody too lazy or anxiety-ridden to learn how to calculate 20% in their head is also going to be too lazy and bothered to take out a calculator every time they need to know, rather than just trying to bluff their way through conversations until they can get to a calculator.

Calculators are of no benefit to students who have no clear concept of the calculation. Just learn your times tables, they're one of the first things we teach kids.

The pragmatic pro-calculator "nobody needs to calculate those numbers in real life" school I think gets most of its support because constructivists often fail entirely to teach things that have been until now best learned by rote, hoping that an instinct for multiplication and division will naturally and precisely arise from children's mathematical souls. The fact that it doesn't is proof that doing arithmetic is bad and a waste of time actually. They love abstractions, but only vague ones that can't be reliably tested.

Either way, they likely would not have learned the mental math required to do this regardless of whether it was taught at school or not

That's the law of averages. It is not a law.

turnsout
My feeling is… just give it 6-12 months. All the low-quality apps that were "vibe-coded" will start to break down, have massive security breaches, or generally fall apart as new features are added.

Brace yourself for a wave of think pieces a year from now, all wringing their hands about the death of vibe coding, and the return of artisanal handmade software.

bluefirebrand
I have a similar feeling, but there's also a little voice in the back of my head trying to convince me that I'm just trying to cope with the fact I'm going to be made obsolete by a software industry that actually doesn't care if software breaks down and has massive security breaches as long as the AI furnace keeps getting coal
rqtwteye
I think both are true. The vibe-coders will run into massive problems. But AI will also improve a lot and probably be better than humans at some point.
throwway120385
Not to mention the absolute pile of "mathematics problems" that can't be solved except by pushing symbols around a page, which a calculator is absolutely useless at. Sure, I can have a calculator "calculate" an approximation for 4/3, but it can't help me manipulate the symbols around the improper fraction that I need to manipulate to calculate the radius given the surface area of a sphere. And it's of zero help in understanding the relevance of that "pattern" to whatever phenomenon I'm using the mathematics to reason about. That all requires human intelligence.

There are a lot of calculators and other tools that can push the symbols around and many that can even apply some special high-level mathematical rules to the symbols. But whether or not those rules are relevant to the task at hand is entirely a matter for a human to decide.
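
(A small sketch of the symbol-pushing being described, using sympy as one of the tools acknowledged above that can push the symbols around; whether the rearrangement is the relevant one is still the human's call.)

  import sympy as sp

  A, r = sp.symbols("A r", positive=True)

  # Surface area of a sphere: A = 4*pi*r^2; rearrange symbolically for r.
  radius = sp.solve(sp.Eq(A, 4 * sp.pi * r**2), r)[0]
  print(radius)                       # sqrt(A)/(2*sqrt(pi))
  print(radius.subs(A, 100).evalf())  # ~2.82 for a sphere with surface area 100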

belter
Would love to see eighth-grade students of today try this test from 1912... I sense disaster...

https://www.reddit.com/r/interestingasfuck/comments/13jhckh/...

fwip
Why? I'm pretty sure my public school education prepared me for all of these questions by 8th grade, excepting some notation that we no longer use and some specific history questions that are now less relevant.
BeetleB
These would have been quite doable by my 8th grade education.

But I am older than many. :-)

wiseowise
In the past, anyone who went to grade school could answer 6 * 7 off the top of their head, and do basic mental math. We've pretty much lost that.

Source?

turnsout
Observing people around me
askonomm
Must be a place with a pretty bad education system if people can't answer 6 * 7 off the top of their head.
turnsout
PS, if you're downvoting this, can you explain why?
BeetleB
We've pretty much lost that.

In the US :-)

And those skills are entirely context dependent. You're likely saying this from a SW engineer's point of view. Whereas I've worked in teams with physicists and electrical engineers. When you're in a meeting, and there is a technical discussion, and everyone can calculate in their head the effects after integrating a well known function and how that will impact the physical system, while you have to pull out a calculator/computer, you'll be totally lost.

You can argue that you could be as productive as the others if you were given the time (e.g. doing this on your own at your own cubicle), but no one will give you that time in meetings.

tsumnia
I won't disagree with their findings; however, I do think there is some need to counter the narrative that "LLM AI is worse for humans." Specifically, I think back to an example I use to explain why I was so motivated to have students complete typing practice while learning CS. In short, I use the analogy that when I am browsing the web for code snippets (like extracting files from a tar file), I will explicitly retype the command rather than rely on copy+paste. My logic is that typing out the command helps build the muscle memory so that someday I'll just REMEMBER the command.
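
(The command in question is the classic tar -xzf archive.tar.gz; a minimal standard-library equivalent in Python, with hypothetical file names, for anyone who would rather retype it as code.)

  import tarfile

  # Equivalent of `tar -xzf archive.tar.gz -C output_dir`.
  with tarfile.open("archive.tar.gz", "r:gz") as archive:
      archive.extractall(path="output_dir")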

That said, the counter to my own counter is "do I really need to memorize that?" Yes, yes, no internet and I'm screwed... but that's such a rare edge case. I am able to quickly find the command, and knowing that it is stored somewhere else may be enough for me, rather than memorization. I can see Gen AI falling into a similar pattern: I don't need to know explicitly how to do something, just that the task can be resolved through an LLM prompt.

Granted, we're still trying to figure out how to communicate with LLMs and we only really have 3 years of experience. Most of our insights have come from blog posts and a handful of research articles. I agree that Gen AI laziness is a growing issue, but I don't think it needs to go full Idiocracy sensationalist headline.

MrMcCall
we're still trying to figure out how to communicate with LLMs

Communicating with 'guess-the-next-token' machines is just ELIZA version X.Y.

You can also ask your dog if they want another treat, or if they want to play Quake. They're merely listening for key words and tone of voice and reacting to them according to their experience with that matrix. And LLMs don't even understand tone of voice, and they never will.

orangecat
And LLMs don't even understand tone of voice, and they never will.

My favorite part of AI discourse is the confident assertions that AIs will never be able to do things that they've been doing for months.

MrMcCall
They understand tone of voice? C'mon bruh.
tsumnia
I believe what orangecat means is that AIs don't understand tone ~yet~. However as we continue working on adding context and refining models, it could be a possibility in, say, 10 years. Think where we were 10 years ago - IoT was the big deal and IBM Watson was beating Jeopardy champions while still not understanding London isn't a US city.

AI's "humanity" will be a consistent debate among people - there will always be people that choose to think of AI as nothing more than a tin can with an opinion built from a math equation, myself included.

But these are the types of questions that we'll need to discuss as AI continues to progress while people continue to point out how AI can't do X (which makes them inferior to humans).

MrMcCall
I believe what orangecat means is that AIs don't understand tone ~yet~.

I don't see how you get that from what he quoted of mine then what he said.

Besides, their understanding 'tone' is not a sensible prediction, for the simple fact LLMs don't ever understand anything, whatsoever, any more than ELIZA was a psychologist. They digest and then regurgitate word sequences in complex ways ~ it's the intrinsic and sole nature of their design. We can hide that fact with layers of obfuscation or tricky ways of packaging their output, but they are what they are, and nothing more.

Like, human beings are never going to walk on the sun in our physical bodies; we are limited by our physiology, and there is no transcending it in dimensions beyond our capabilities.

tsumnia
I am in agreement with you that LLMs are lifeless regurgitation machines; what I'm implying isn't too far from how children learn/interact. One of the first concepts we teach children is word association to objects and once they've learned the term, they use it to describe whatever electrical signals are firing in their brain that made them decide "apple". If the difference is that LLMs use numerical representations to determine those "firings" in its decision making process, then would the numerical representations of an fMRI scanner hooked to a human be equivalent?

Of course not, because humans > robots. BUT, that does not discount the fact that humans learn through methods like memorization, which have their own reinforcement learning patterns. Many of the cognitive science* concepts have similar LLM infrastructure equivalents as well - "Mind Palace" is similar to RAG, the different relationships we have with others is akin to LLM role-playing, one/few-shot prompting is like templates.

I've recently been thinking about what types of discussions we'll have around AI in about 20 years - a new generation of young adults will be leaving school, where we are currently introducing them to AI tutor avatars. Combined with things like Neuro-sama, the AI-driven Twitch streamer, I'm curious how the NEXT generation will perceive AI, since they've grown up with it. This isn't a Futurama Marilyn Monroe Bot fantasy; it's sort of a reality (whether I agree with it or not).

orangecat
Oh no, I mean that there are current AI models that have some understanding of tone of voice. (With a definition of "understand" that is different from yours so as not to be a vacuous statement). I just now went to https://www.sesame.com/research/crossing_the_uncanny_valley_... and asked it to interpret my mood as I said the same words in different tones, and it did a reasonable job.
labrador
I'm a retired computer programmer. All my time is free time. I'm using AI as a cognitive amplifier. I'm learning at a much faster rate than I would without AI. I don't have to waste time doing google searches and reading thru irrelevant material to find something germane to my research.

I don't depend on AI for anything. I am not doing corporate work. Could it be that what people are experiencing is that they are becoming less suitable for corporate work as AI and robots replace them? Isn't this a good thing? Shouldn't the focus be on using AI to bring out the innate talents of humans that aren't profit driven?

johnea
That's a really inspiring hope, but, sadly, not the reality.

The current "AI" tech is in fact developed FOR profit. There is zero concern at the capital-investing level for enhancing any innate human talents. In fact, the effort is explicitly intended to REPLACE humans with automation in tasks that have traditionally required those innate human skills.

I do believe you find learning and research to be enhanced, and I agree in general that the tech has a great deal of possible benefit, but sadly that's not what ownership hopes to use it for (statistically speaking that is. not all ownership is created equally).

This is the luddite problem all over again: it's not the looms that are the problem, it's ownership's interest in shifting funds that would be paid to workers into the coffers of corporate profit.

Could advances in technology benefit ownership and still ensure that labor can earn a decent living? Of course it COULD! It's just that, given the precedent of human history, that's not how it WILL be used...

labrador
Computers were initially seen in the 60's and 70's as corporate and scientific tools. Many couldn't imagine one day we would have them in the home. Once that happened, people used computers for all sorts of non-profitable endeavors like painting, music, social discussion, etc... I believe AI will follow a similar trajectory. I can't really speak to how AI is used in corporations to improve the bottom line because I no longer work in them, but I do see how it enhances my personal learning and creativity. Given time, I expect more people to explore AI beyond profit-driven use cases, just like we did with personal computers.
zdragnar
You've already got your brain trained in the underlying fundamentals. The way you use AI is going to be slightly different from how someone who is just starting out would experience and adapt to it.

For the same reason, we were required to have graphing calculators, but not TI-92 or similar models in calculus class. While the utility is fine for people who have already learned the concepts, attempting to learn advanced algebra or other math with a symbolic solver available cripples your long term retention.

The question of "what is motivating the task" doesn't really factor very well into "how does this tool affect a novice", at least not in any similar circumstance I have seen.

labrador
If you're right, then we should not use AI in elementary and high schools as a learning tool. I suspect most people think we should.
borgdefenser
I feel exactly the same way about this "I'm using AI as a cognitive amplifier. I'm learning at a much faster rate than I would without AI."

I just don't know how much is actually being replaced, though. I think of corporate jobs I have done in the past. I can't think of anything I have ever been paid to do that would be replaced by a language model. It was either something that could have been automated without a language model but was not, for various reasons, or the output would just be amplified by a language model. In some cases my work would have been enormously amplified and better, but not "automated".

For some reason we don't seem to like this idea of a cybernetic relationship with a machine that benefits the human, even though that is exactly what we have been doing for at least 150 years. Maybe it is something in our brains that can't turn off a kind of predator/prey model. Then on top of that is the mass appeal of this infantile and collectivist idea that AI will do all the work while we collect our UBI trust fund allowance from artificial daddy.

sollewitt
One thing I've tried using Gemini for, and been really impressed with, is practicing languages. I find Duolingo doesn't really translate to fluency, because it doesn't really get you to struggle to express yourself - the topics are constrained.

Whereas, you can ask an LLM to speak to you in e.g. Spanish, about whatever topic you're interested in, and be able to stop and ask it to explain any idioms or vocabulary or grammar in English at any time.

I found this to be more like a "cognitive gym". Maybe we're just not using the tools beneficially.
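
(A minimal sketch of the conversation-partner setup described above, assuming the OpenAI Python SDK purely for illustration -- the comment used Gemini, and any chat-capable model would do; the model name and prompt wording are hypothetical.)

  from openai import OpenAI

  client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

  history = [{
      "role": "system",
      "content": ("You are a Spanish conversation partner. Reply only in Spanish, "
                  "unless I say 'explain' -- then explain the last idiom, vocabulary "
                  "item, or grammar point in English before continuing in Spanish."),
  }]

  def say(user_message: str) -> str:
      """Send one turn and keep the running conversation history."""
      history.append({"role": "user", "content": user_message})
      reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
      text = reply.choices[0].message.content
      history.append({"role": "assistant", "content": text})
      return text

  print(say("Hablemos de senderismo en los Pirineos."))
  print(say("explain"))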

kokanee
I remain perplexed that everyone is so focused on using LLMs to automate software engineering, when there are language-based professions (like Spanish tutor, in your example) that seem more directly threatened by language models. The only explanation I've heard is that the industry is so excited about reducing spend on software engineering salaries that they're trying to fit a square peg into a round hole, and largely ignoring the square holes.
thewebguyd
The only explanation I've heard is that the industry is so excited about reducing spend on software engineering salaries that they're trying to fit a square peg into a round hole, and largely ignoring the square holes.

I think that's really just it, and I agree with you. There are many other areas LLMs can, and should, be more useful and effort put toward both assisting and automating.

Instead, the industry is focusing on creative arts and software development because human talent for that is both limited and expensive, with a third factor of humans generally being able to resist doing morally questionable things (e.g., what if hiring for weapons systems software becomes increasingly difficult due to a lack of willingness to work on those projects, likewise for increasingly invasive surveillance tech, etc.)

We're rushing into basically the opposite of what AI should do for us. Automation should work to free us up to focus more on the arts and sciences, not take it away from us.

Greed at its finest.

zzbzq
I think it's because software engineers are the only group that can unanimously operate LLMs effectively and build them into larger systems. They'll automate their own jobs first and move on to building the toolkits to automate the others.
adelie
language-based professions like translation have been dying for years and no one has cared; they're not about to start now that the final nail's been put in the coffin.
rahimnathwani

  the topics are constrained
Is this true even if you have Duolingo Max and use the video calling feature?
bentt
Is this any different than saying that nowadays most people in the USA are physically weaker and less able to work on a farm than their predecessors? Sure, it's not optimal through certain lenses, but through other lenses it is an improvement. We are by any rights dependent on new systems to procure food, which is even more fundamental than other types of human cognition being preserved.
MrMcCall
Is this any different than saying that nowadays most people in the USA are physically weaker and less able to work on a farm than their predecessors?

Yes, far different, because we can still go to the gym and throw medicine balls around or swing kettle bells and do dead lifts and squats if we want to stay fit.

There is no substitute for exercising our ability to logically construct deterministic, hardened, efficient data flow networks that process specific inputs in specific environments to produce specific changes and outputs.

Maybe I'm the only one who understood the most important point the eminent Leslie Lamport explained in grisly detail the other day: namely, that logical thinking is both irreplaceable and essential. I'll add that that nerdiest of skillsets is also withering on the vine.

"Enjoy." --Daniel Tosh

orangecat
There is no substitute for exercising our ability to logically construct deterministic, hardened, efficient data flow networks that process specific inputs in specific environments to produce specific changes and outputs.

Factorio?

MrMcCall
Of course.

And every single microprocessor and their encompassing support systems and the systems they host and execute.

Every single system, even analog ones, because it's all just information flowing through switched systems, even if it's solely measured in something involving coulombs.

Also, fundamentally, living cells and the organisms that encompass them, because they all have a logically variable information flow both within them and between them, measured in molecules and energy.

They're extraordinary and beautiful.

aithrowawaycomm
This would be a valid POV if there were any solid evidence that LLMs truly increased worker productivity or reliability - at best it is a mixed bag. To stretch the food analogy, it seems like LLMs could be pure corn syrup, without any of the disease-resistant fruits and unnaturally plump chickens that actually make modern agriculture worthwhile.

Or, since LLMs seem to be addictive, it's like getting rid of the spinach farms and replacing them with opium poppies. (I really hate this tech.)

coffeefirst
People pay a fortune and expend endless hours to replace the basic physical activity that used to be a default part of the human experience. And a huge chunk of the population that doesn't do so suffers from life-altering metabolic disorders.

Let's... not do that for brainrot.

lenerdenator
So it does what Google searching did: it made retaining information an optional cognitive burden, and optional cognitive burdens are usually jettisoned.

Fortunately, my ADHD-addled brain doesn't need some fancy AI to make its cognition "Atrophied and Unprepared"; I can do that all on my own, thank you very much.

JohnMakin
It really doesn't though. Even when google was at its best, and showed you relevant non-spammy results, a degree of critical thinking was required when sifting through the results, evaluating the credibility of the author, website, etc. Do you fact check everything the AI spits out? The ability for people to critically think is basically gone. That's been trending since before AI, but it's really clear to me at this moment in time how bad it has gotten. It's a laziness of thinking that I don't think was the same with Google.
lenerdenator
It really doesn't though. Even when google was at its best, and showed you relevant non-spammy results, a degree of critical thinking was required when sifting through the results, evaluating the credibility of the author, website, etc. Do you fact check everything the AI spits out? That's been trending since before AI, but it's really clear to me at this moment in time how bad it has gotten. It's a laziness of thinking that I don't think was the same with Google.

Nah, it was already at zero before ChatGPT came to public attention.

drbojingle
It may not have been to the same degree, but reducing cognitive burden trends in the same direction. This might be bad, but it might be very good. Is the AI competing with you to get a promotion and replace you? Is it going to lie to you knowingly because it doesn't like you?
risyachka
No, not even close.

Google helps you find things that you process later on with your brain.

With AI your brain shuts off as you offload all thinking to asking questions. And asking questions is not thinking. Answering them is.

lenerdenator
Google helps you find things that you process later on with your brain.

I'm willing to bet that there were a lot of Google searches, pre-ChatGPT, that effectively were questions. Lots of "huh, I wonder" during conversations and the first result was taken as "the truth".

risyachka
Sure, and google gave you info that possibly contains your answer. You had to read it at least and analyze.

Now you are being spoon-fed.

derefr
How odd. I don't think I'm thinking any less hard when making use of LLM-based tools. But then, maybe I'm using LLMs differently?

I don't build or rely on pre-prompted agents to automate specific problems or workflows. Rather, I only rely on services like ChatGPT or Claude for their generic reasoning, chat, and "has read the entire web at some point" capabilities.

My use-cases break down into roughly equal thirds:

---

1. As natural-language, iteratively-winnowing-the-search-space versions of search engines.

Often, I want to know something — some information that's definitely somewhere out there on the web. But, from 30+ years of interacting with fulltext search systems, I know that traditional search engines have limitations in the sorts of queries that'll actually do anything. There are a lot of "objective, verifiable, and well-cited knowledge" questions that are just outside of the domain of Google search.

One common example of fulltext-search limitations, is when you know how to describe a thing you're imagining, a thing that may or may not exist — but you don't know the jargon term for it (if there even is one.) No matter how many words you throw at a regular search engine, they won't dredge up discussions about the thing, because discussions about the thing just use the jargon term — they don't usually bother to define it.

To find answers to these sorts of questions, I would previously have had to ask a human expert — either directly, or through a forum/chatroom/subreddit/Q&A site/etc.

But now, I've got a new and different kind of search engine — a set of pre-trained base models that, all by themselves, perform vaguely as RAGs over all of the world's public-web-accessible information.

Of course, an LLM won't have crystal clarity in its memory — it'll forget exact figures, forget the exact phrasing of quotations, etc. And if there's any way that it can be fooled or misled by some random thing someone made up somewhere on the web once, it will be.

But ChatGPT et al can sure tell me the right jargon term (or entire search query) to turn what was previously, to me, almost deep-web information, into public-web information.

---

2. As a (fuzzy-logic) expert system in many domains, that learned all its implications from the public information available on the web.

One fascinating thing about high-parameter-count pre-trained base models, is that you don't really need to do any prompting, or supply any additional information, to get them to do a vaguely-acceptable job of diagnosis — whether that be diagnosing your early-stage diabetic neuropathy, or that mysterious rattle in your car.

Sure, the LLM will be wrong sometimes. It's just a distillation of what a bunch of conversations and articles spread across the public web have to say about what are or aren't the signs and symptoms of X.

But those are the same articles you'd read. The LLM will almost always outperform you in "doing your own research" (unless you go as far as to read journal papers — I don't know of any LLM base model that's been trained on arXiv yet...). It won't be as good at medicine as a doctor, or as good at automotive repair as an automotive technician, etc. — but it will be better (i.e. more accurate) at those things than an interested amateur who's watched some YouTube videos and read some pop-science articles.

Which means you can just tell LLMs the "weird things you've noticed lately", and get it to hypothesize for you — and, as long as you're good at being observant, the LLM's hypotheses will serve as great lines of investigation. It'll suggest which experts or specialists you should contact, what tests you can perform yourself to do objective differential diagnostics, etc.

(I don't want to under-emphasize the usefulness of this. ChatGPT figured out my house had hidden toxic mold! My allergies are gone now!)

---

3. As a translator.

Large-parameter-count LLM base models are actually really, really good at translation. To the point that I'm not sure why Google Translate et al haven't been updated to be powered by them. (Google Translate was the origin of the Transformer architecture, yet it seems to have been left in the dust since then by the translation performance of generic LLMs.)

And by "translation", I do literally mean "translating entire documents from one spoken/written human language to another." (My partner, who is a fluently-bilingual writer of both English + [Traditional] Chinese, has been using Claude to translate English instructions / documents into Chinese for her [mostly monolingual Chinese] mother to better understand them; and to translate any free-form responses her mother is required to give, back into English. She used to do these tasks herself "by hand" — systems like Google Translate would provide results that were worse-than-useless. But my partner can verify that, at least for this language pair, modern LLMs are excellent translators, writing basically what she would write herself.)

But I also mean:

• The thing Apple markets as part of Apple Intelligence — translation between writing styles (a.k.a. "stylistic editing.") You don't actually need a LoRA / fine-tune to do this; large-parameter-count models already inherently know how to do it.

• Translating between programming languages. "Rewrite-it-in-Rust" is trivial now. (That's what https://www.darpa.mil/research/programs/translating-all-c-to... is about — trying to build up an agentive framework that relies on both the LLM's translation capabilities, and the Rust compiler's typing errors on declaration change, to brute-force iterate across entire codebases, RiiRing one module at a time, and then recursing to its dependents to rewrite them too.)

• Translating between pseudocode, and/or a rigorous description of code, and actual code. I run a data analytics company; I know far more about the intricacies of ANSI SQL than any man ought to. But even I never manage to remember the pile of syntax features that glom together to form a "loose index scan" query. (WITH RECURSIVE, UNION ALL, separate aliases for the tables used in the base vs inductive cases, and one of those aliases referenced in a dependent subquery... but heck if I recall which one.) I have a crystal-clear picture of what I want to do — but I no longer need to look up the exact grammar the SQL standard decided to use yet again, because now I can dump out, in plain language, my (well-formed) mental model of the query — and rely on the LLM to translate that model into ANSI SQL grammar.
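
(For the record, a minimal sketch of the loose-index-scan shape described in that last bullet -- WITH RECURSIVE, UNION ALL, one alias for the base case and one for the inductive case, with the dependent subquery living in the inductive branch. Table and column names are hypothetical; SQLite is used only so the snippet is self-contained and runnable.)

  import sqlite3

  conn = sqlite3.connect(":memory:")
  conn.executescript("""
      CREATE TABLE events (user_id INTEGER, payload TEXT);
      CREATE INDEX idx_events_user ON events(user_id);
      INSERT INTO events(user_id) VALUES (1),(1),(2),(5),(5),(9);
  """)

  # Emulates SELECT DISTINCT user_id without scanning every row: each recursive
  # step jumps straight to the next distinct value via the index.
  loose_index_scan = """
  WITH RECURSIVE distinct_users AS (
      SELECT MIN(user_id) AS user_id FROM events            -- base case
      UNION ALL
      SELECT (SELECT MIN(e.user_id)                         -- inductive case:
              FROM events AS e                              -- dependent subquery
              WHERE e.user_id > d.user_id)
      FROM distinct_users AS d
      WHERE d.user_id IS NOT NULL
  )
  SELECT user_id FROM distinct_users WHERE user_id IS NOT NULL;
  """
  print(conn.execute(loose_index_scan).fetchall())  # [(1,), (2,), (5,), (9,)]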

sitkack
Thanks, the paper is very readable.

Abstract The rise of Generative AI (GenAI) in knowledge workflows raises questions about its impact on critical thinking skills and practices. We survey 319 knowledge workers to investigate 1) when and how they perceive the enaction of critical thinking when using GenAI, and 2) when and why GenAI affects their effort to do so. Participants shared 936 first-hand examples of using GenAI in work tasks. Quantitatively, when considering both task- and user-specific factors, a user’s task-specific self-confidence and confidence in GenAI are predictive of whether critical thinking is enacted and the effort of doing so in GenAI-assisted tasks. Specifically, higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking. Qualitatively, GenAI shifts the nature of critical thinking toward information verification, response integration, and task stewardship. Our insights reveal new design challenges and opportunities for developing GenAI tools for knowledge work.

It is to be presented at the CHI Conference: https://chi2025.acm.org/

https://en.wikipedia.org/wiki/Conference_on_Human_Factors_in...

greybox
Microsoft Study Finds AI Makes Human Cognition “Atrophied and Unprepared”

“[A] key irony of automation is that by mechanising routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise,” the researchers wrote.

jldugger
Well, that's just a summary of a much, much older paper. Still a relevant paper, but somewhat disingenuous to attribute it to MS researchers.

[1] https://en.wikipedia.org/wiki/Ironies_of_Automation

nukem222
Certainly a common enough concern in people critiquing use of ChatGPT here. I'm more worried about "softer" problems, though—morality, values, persuasion, including deciding which of two arguments is more convincing and why.

But these have always been issues that humans commonly struggle with so idk.

pseudocomposer
Something like the annual (or otherwise periodic) credential exams of medical/legal professionals might make sense in fields where AI is very usable.

Basically, we might need to standardize 10-20% of work time being used to “keep up” automatable skills that once took up 80+% of work time in fields where AI-based automation is making things more efficient.

This could even be done within automation platforms themselves, and sold to their customers as an additional feature. I suspect/hope that most employers do not want to see these automatable skills atrophy in their employees, for the sake of long-term efficiency, even if that means a small reduction in short-term efficiency gains from automation.

bluefirebrand
suspect/hope that most employers do not want to see these automatable skills atrophy in their employees, for the sake of long-term efficiency, even if that means a small reduction in short-term efficiency gains from automation.

I wish you were right, but I don't think any industry is realistically trending towards thinking about long term efficiency or sustainability.

Maybe it's just me, but I see the opposite, constantly. Everything is focused on the next quarter, always. Companies want massive short term gains and will trade almost anything for that.

And the whole system is set up to support this behavior, because if you can squeeze enough money to retire out of a company in as short a time as possible, you can be long gone before it implodes

nopelynopington
I feel like my critical thinking has taken a nosedive recently. I changed jobs, and the work in the new job is monotonous and relies on automation like Copilot. Most of my day is figuring out why the AI code didn't work this time rather than solving actual problems. It feels like we're a year away from the "me" part being obsolete.

I've also turned to AI in side projects, and it's allowed me to create some very fast MVPs, but the code is worse than spaghetti - it's spaghetti mixed with the hair from the shower drain.

None of the things I've built are beyond my understanding, but I'm lazy and it doesn't seem worth the effort to use my brain to code.

Probably the most use my brain gets every day is wordle

oneofyourtoys
The year is 2035, the age of mental labor automation. People subscribe to memberships for "brain gyms", places that offer various means of mental stimulation to train cognitive skills like critical thinking and memory retention.

Common activities provided by these gyms include fixing misconfigured printers, telling a virtual support customer to turn their PC off and back on again, and troubleshooting mysterious NVIDIA driver issues (the company went bankrupt 5 years ago, but their hardware is still in great demand for frustration tolerance training).

jonahx
For those who read only the headline or article:

In this paper, we aim to address this gap by conducting a survey of a professionally diverse set of knowledge workers (n = 319), eliciting detailed real-world examples of tasks (936) for which they use GenAI, and directly measuring their perceptions of critical thinking during these tasks

So, they asked people to remember times they used AI, and then asked them about their own perceptions about their critical thinking when they did.

How are we even pretending there is serious scientific discussion to be had about these "results"?

Tossrock
"The Impact Of Taking A Survey About AI On Answers To A Survey About AI" doesn't have the same ring to it.
jhallenworld
Obvious advice for students: Human brains are neural networks- they have to be trained. If you have the already trained artificial neural network do all the work, it means your own neural network remains untrained.

You are tremendously better off getting a bad grade doing your own work than getting a good one using ChatGPT.

IndiaPaleAle
Many students who use AI do so not because they value their own learning, but because they prioritize the grades they earn in class.

I work in a high school, so I've seen this first-hand. To be fair, this mindset isn’t entirely their fault. Their parents, their future universities, and society as a whole place a high value on getting top grades, too.

In a system where college admissions are highly competitive, and where cheating with AI offers a high reward at low risk, even students who genuinely care about their learning will feel pressured to follow suit, just to remain in the game.

DrNosferatu
Then prompt the AI to provide its outputs in a way that keeps the human user engaged and aware of where they are in the thought process: maps, diagrams, repetition summaries.

We have the cognitive science to make this happen - or at least to learn how to structure it.
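
Something like this is already doable with a standing system prompt. A minimal sketch, assuming the OpenAI Python client; the instruction wording and model name are mine and purely illustrative, not anything from the paper:

    # Minimal sketch: a reusable "keep the human engaged" system prompt.
    # Assumes the OpenAI Python client (pip install openai) and an
    # OPENAI_API_KEY in the environment; prompt wording is illustrative.
    from openai import OpenAI

    client = OpenAI()

    ENGAGED_STYLE = (
        "Structure every answer so the reader stays an active participant: "
        "open with a one-line map of the reasoning steps, label each step, "
        "close with a short recap, and end with one question that asks the "
        "reader to verify or extend a step themselves."
    )

    def ask(question: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[
                {"role": "system", "content": ENGAGED_STYLE},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    print(ask("Walk me through how DNS resolution works."))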

masfuerte
This isn't a new thing. I noticed it in the 1990s in bank employees as their work became increasingly automated. As the software became better at handling exceptions, their skills atrophied further and they became even worse at handling the harder exceptions that remained.
kokanee
As our grade school teachers warned, my arithmetic skills are indeed brutally stunted thanks to calculators. The implications do seem even worse with folks using AI to do their technical debugging and decision making, though.
piltdownman
Moreover, participants perceived it to be more effort to constantly steer AI responses (48/319), which incurs additional Synthetic thinking effort due to the cost of developing explicit steering prompts. For example, P110 tried to use Copilot to learn a subject more deeply, but realised: “its answers are prone to several [diversions] along the way. I need to constantly make sure the AI is following along the correct ‘thought process’, as inconsistencies evolve and amplify as I keep interacting with the AI.”

While much is made of the 'diminished skill for independent problem-solving' caused by over-reliance, is there a more salient KPI than some iteration of this 'Synthetic Thinking Effort' by which to baseline and optimise the cost/benefit of AI usage versus traditional cognition?

tunesmith
I like to think of problems as having two components: specification and implementation.

When using GenAI (and/or "being a manager"), aren't they somewhat inversely related?

I find implementation-level programmers are generally poor at stating specifications. They often phrase problems in terms of lacking their desired solutions. They jump straight to implementation.

But a manager has to get skilled at giving specifications: being clear about what they expect, without stating how to do it. And that's also a skill that needs to be developed quickly to use GenAI well. I think getting good at specifying is definitely worthwhile, and I think GenAI is helping a lot of people get better at it quickly.

Overall, that seems like it should very much be considered part of "critical thinking".

rraghur
Sort of like GPS: once you get used to it to get anywhere, you stop developing any further directional sense, and even your existing capabilities start withering away.
causal
This is interesting because I don't feel like my directional sense has withered at all because of GPS, but I do think it was important that I develop a sense of how to navigate and use maps before I introduced GPS.

I find this is similar in my experience with AI: I pick up tidbits and tricks from AI when it's doing something I'm familiar with, but if I have it working with a completely novel framework or language it quickly races ahead and I'm essentially steering it blind, which inevitably fails.

1vuio0pswjnm7
Original HN titles:

Impact of Gen AI on Critical Thinking: Reduction in Cognitive Effort, Confidence

Impact of AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort

The Impact of Generative AI on Critical Thinking: Reductions in Cognitive Effort

Actual title of the paper:

The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers

Previous discussion:

10 Feb 2025 17:01:08 UTC https://news.ycombinator.com/item?id=43002458 (1 comment)

10 Feb 2025 22:31:05 UTC https://news.ycombinator.com/item?id=43006140 (0 comments)

11 Feb 2025 11:14:06 UTC https://news.ycombinator.com/item?id=43011483 (0 comments) [dead]

11 Feb 2025 14:13:36 UTC https://news.ycombinator.com/item?id=43012911 (1 comment)

12 Feb 2025 01:47:16 UTC https://news.ycombinator.com/item?id=43020846 (0 comments) [flagged] [dead]

14 Feb 2025 15:54:57 UTC https://news.ycombinator.com/item?id=43049676 (1 comment)

15 Feb 2025 12:06:01 UTC https://news.ycombinator.com/item?id=43057907 (101 comments)

MrMcCall
Usain Bolt didn't walk around on crutches all day.

Comedians' ability diminishes as they take time off.

Ahnold wasn't lounging around all day.

We should understand that fixing crap, nonsensical code is not a productive skillset. As Leslie Lamport said the other day, logically developing and coding out proper abstractions is the core skillset, and not one to be delegated to just anything or anyone.

It's ok; the bright side for folks like me is that you're just happily hamstringing yourselves. I've been trying to tell y'all, but I can only show y'all the water, not make you drink.

thombles
I'm, uh, not a fan of AI; however, in this case I would strongly recommend everybody ctrl-F the juicy quotes in the 404media article and see where they came from in the full text of Microsoft's study. Both of the leading quotes come from the _introduction_, where they're talking at a high level about a paper from 1983. It's enormous clickbait.
Qem
Paywalled, but full study available here: https://www.microsoft.com/en-us/research/wp-content/uploads/...
dang
Thanks! We've changed the URL to that from https://www.404media.co/microsoft-study-finds-ai-makes-human....

Submitters: please don't post paywalled articles unless there are workarounds (such as archived copies).

0x20cowboy
I love doing all aspects of building software. However, I’ve noticed that when I am feeling lazy I’ll just copy pasta a stack trace into an LLM and trust what it says is wrong. I won’t even read the stack trace.

I only tend to do that when I am tired or annoyed, but when I do it I can feel myself getting dumber. And it’s a weirdly satisfying feeling.

I just need a chair that doubles as a toilet and I’ll be all set.

rraghur
Just today I had Gemini write a shell script for me that had to generate a relative symlink. Getting it to work cross-platform on Linux and Mac took more than ten tries, and I stopped reading the output after the second.

In the end, I probably spent more time and learnt nothing. My initial take was that this is the kind of thing I don't care much about, so giving it to an LLM is OK... However, by the end of it I ended up more frustrated, and I lost out on the stimulation of working things out as well.
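
For what it's worth, this is one of those cases where skipping the shell entirely would have been less painful. A minimal sketch in Python (my own, not the script Gemini produced) that should behave the same on Linux and macOS:

    # Minimal sketch: create a symlink whose stored target is a path
    # relative to the link's own directory, using only the standard
    # library so Linux and macOS behave identically. Illustrative only.
    import os

    def make_relative_symlink(target: str, link_path: str) -> None:
        link_dir = os.path.dirname(os.path.abspath(link_path)) or "."
        rel_target = os.path.relpath(os.path.abspath(target), start=link_dir)
        if os.path.islink(link_path) or os.path.exists(link_path):
            os.remove(link_path)  # replace an existing link or file
        os.symlink(rel_target, link_path)

    # Example: make_relative_symlink("build/output/app", "bin/app")
    # creates bin/app -> ../build/output/app

os.path.relpath does the part that is fiddly to do portably in shell: GNU ln has --relative, but the BSD ln that ships with macOS does not.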

allenrb
I’ve told people at work, including my boss and his boss, that it will be time for me to go if and when my job ever becomes “translating business problems into something AI can work with.”

Right now I’m curious to see how long I can keep up with those using AI for more mundane assistance. So far, so good.

_heimdall
I'm often surprised that a study like this is even needed; the result seems obvious.

Critical thinking is a skill that requires practice to improve and maintain. Using LLMs pushes the tasks that would require critical thinking off to something/someone else. Of course the user will get worse at critical thinking when they do it less often.

RecycledEle
You decide to rot as AI does the work, or you decide to learn from the AI.

The same is true of managers. I have had managers who yelled at me to do things they did not understand. They rotted on the inside. Other managers learned every trick I brought to the company. They grew.

riffic
Can't it go the other way? Can't AI be developed to improve and strengthen human cognition? I'm incredibly naive and ill-informed, but I feel that it can go both ways (growth vs. fixed mindsets?).
moralestapia
I believe this to be true, and it has come at the worst possible time: post-COVID, with education levels through the floor.

I also believe, however, that humans who are able to reason properly will become much more valuable, for this very reason.

gatinsama
You can't delegate understanding. I don't mean you shouldn't, you can't.

If you don't understand what's happening, you have no way to know if the system is working as intended. And understanding (and deciding) exactly how the system works is the really hard part for any sufficiently complex project.

divtiwari
As part of Gen Z, I feel that, with regard to critical thinking skills, our generation got obliterated twice: first by social media (made worse by affordable data plans), then by GenAI tools. You truly need monk-level mind control to come out of their impact unscathed.
_aavaa_
It’s why I write all of my code directly in binary. Depending on a compiler, or god forbid Python, is really detrimental to me accomplishing my goal as a data scientist: allocating registers.
sunjester
Sounds like Microsoft is back at it again.
fbn79
People growing up using AI risk becoming like the French nobility during Louis XIV's reign.
deepfriedchokes
I seem to recall Socrates arguing that writing weakened the memory and hindered genuine learning. He probably wasn’t wrong, but the upside of writing was greater than the downside.
AtomBalm
Literacy makes human memory atrophied and unprepared. Kids these days can’t even recite the Iliad from memory!
AISnakeOil
Just as any muscle gets weaker the less you use it, the same goes for intelligence.
ChrisArchitect
Article from February.

Some discussion on the study: https://news.ycombinator.com/item?id=43057907

Deprogrammer9
Don't listen to Microscam yo!