I'm not really a fan of ChatGPT; I think it's overhyped, oversold, and fundamentally limited.
But I also completely disagree with his take; it's too broad and sweeping.
Offloading some cognitive load does not suddenly make people stupider. It just means that their focus can go elsewhere. Nor do tools that abstract away some processes.
A good example is driving. Does a manual transmission require more focus and thought? Sure! But that's focus and thought that can instead go into paying attention to surroundings. Same with having a GPS. Not having to think about the layout of the city, or how to use a map, just leaves less distraction and cognitive load.
Another is programming. Having high-level languages instead of having to write assembly does not make programmers worse. It just means that instead of having to worry about byte alignment, structure packing and nuanced cache details - they can instead focus on providing more features and functionality to the end user - which is a good thing!
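To make that concrete, here's a tiny illustrative sketch in plain C of the sort of detail I mean; the struct and field names are made up, and the exact byte counts depend on compiler and platform:

    #include <stdio.h>
    #include <stddef.h>

    /* The kind of low-level detail a C programmer ends up thinking about:
       field order changes the padding the compiler inserts, and with it
       the struct's size and cache footprint. */
    struct padded {
        char flag;   /* 1 byte, then (typically) 3 bytes of padding */
        int  count;  /* 4 bytes, wants 4-byte alignment             */
        char tag;    /* 1 byte, then more trailing padding          */
    };

    struct reordered {
        int  count;  /* widest field first...                       */
        char flag;   /* ...so the two chars occupy what was padding */
        char tag;
    };

    int main(void) {
        printf("padded:    %zu bytes\n", sizeof(struct padded));     /* typically 12 */
        printf("reordered: %zu bytes\n", sizeof(struct reordered));  /* typically 8  */
        printf("count sits at offset %zu in padded\n",
               offsetof(struct padded, count));
        return 0;
    }

In a higher-level language you'd just declare the fields and move on; the language worries about layout for you.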
ChatGPT at its best will be like those two examples: something that removes some existing load and lets us handle higher-level, more abstract problems.
I don’t think pg is talking about the short-term cognitive deficiencies of using AIs during the execution of a specific task. I read it as the long-term decline in capabilities because you don’t regularly exercise certain things:
- being able to organize and compose your thoughts
- being able to synthesize disparate ideas into a cohesive whole
- tuning word choice, diction, and flow for a particular audience
- choosing the narrative frame in which you present your idea (this one is huge)
More concerning than long-term decline is that the next generation never develops these skills at all.
That is indeed what factories have preferred since the beginning of industrialization.
I’m not so much concerned about AI taking over my livelihood as I am about businesses treating humans as automatons, as they have for over a century, if not longer. We even adapted our education system to crank out more automatons (see John Taylor Gatto’s book).
The flip side is understanding what a true autonomous agent is through the lens of Promise Theory. At the core of it, you trust humans to do what they promise to the best of their abilities. This is the heart of voluntary cooperation. For AIs to be truly autonomous, they cannot be proxies for promises humans make to each other. (Being a proxy for humans is how AI safety ethicists are approaching this.) Instead, AIs have to be capable of making and keeping promises in their own right, and we have to trust that they do what they promise to the best of their capabilities. … but not every business seems able to do that even with humans.
> Having high-level languages instead of having to write assembly does not make programmers worse. It just means that instead of having to worry about byte alignment, structure packing and nuanced cache details - they can instead focus on providing more features and functionality to the end user - which is a good thing!
It also means that now we seriously lack people capable of working close to the bare metal - and the same will happen to many other skills and niche knowledge that AI will be able to replace (e.g. why train statics engineers, if AI can solve the math for architects directly).
That will IMHO result in some bad things as well:
- There will be no one left to oversee and improve the work done by AI,
- It will further slow down overall progress, because most people will lack deeper insight into how things work, and AI will probably still be of limited use for making ground-breaking original discoveries,
- And finally, though as individuals we will probably keep our IQs, just applying them to something else, as a whole our civilization will lose some knowledge and skills - making us all dumber in a way.
"But I don't expect to convince anyone (over 25) to go out and [ignore ChatGPT]. The purpose of this article is not to change anyone's mind, but to reassure people already interested in [unassisted writing] -- people who know that [writing in isolation] is powerful, but worry because it isn't widely used. In a competitive situation, that's an advantage." -- slightly modified PG drivel on Lisp
http://www.paulgraham.com/avg.html
The scope is much broader than choice of computing language, since it affects the discourse of our whole civilization, and how we think about and discuss societal issues, and where we want to go as humanity.
On the other hand, if you want to be a thought leader, you’ll have less competition.
For me it’s extremely useful for revealing information I know I need to know, but I’m not yet sure how to query it.
That’s a skill I’ve developed over the years, and I’ll still need to use it, but it’s often a major time sink. Skipping it lets me focus far more on learning and applying rather than grasping in the dark for a while, digging, sorting through sources, verifying and validating, etc.
GPT will even surface studies pertaining to very specific topics by linking regular language to more sophisticated, field-specific jargon. I wouldn’t use it to learn what’s in the papers, but it’s extremely useful for simply finding what I need to know.
> Offloading some cognitive load does not suddenly make people stupider. It just means that their focus can go elsewhere. Nor do tools that abstract away some processes.
Agreed, but so much depends on where that focus goes instead. Whatever you do instead of writing is going to be what you improve at, while those other skills atrophy. That thing you spend focus on could be thinking deeply about abstract concepts, or composing sonnets, but it could also be mindlessly scrolling TikTok. Whether you ultimately buy Graham's thesis here depends on what you think the most likely outcome is.
This is overly optimistic. If this were true, most people driving an automatic transmission would be safer drivers in light of the lighter cognitive load. But that's not what happens. Instead the brain is overwhelmed by autonomous friending and vending apps until the car becomes an autonomous ending machine.
My n=1 experiment, driving a manual for 40 years in urban environments coast to coast, tells me that I can't focus well without it, and doing half of that data set on a motorcycle only reinforces that conclusion.
The automatic transmission and modern infotainment aren't really the same thing, they're different parts of the car. I think even if automatics had never existed, infotainment systems would still be around as we know them today.
A manual transmission car does not require more focus and thought, because changing gears very soon becomes a reflex, i.e. it doesn't engage the driver's prefrontal cortex. The same cannot be said for repetitive work in programming (e.g. programming in assembly or C).
IMO, this opinion deserves some thought; it is not as bad as it sounds.