For AI criticism and everything else
Sometimes when people talk about a problem in society, they strongly imply that most people are stupid.
This is wrong. Most people aren’t super knowledgeable about a lot of specific facts about the world (only half of Americans can name the three branches of government), but they’re intelligent when it comes to their own lives and the areas they work and spend time in. We should expect the average person to struggle with factual questions about abstract ideas and far-off events, but not so much about what’s right in front of them day to day.
If a claim about how society works implies that most people are incredibly stupid, much more stupid than anyone I encounter in my day to day life, I dismiss it. This simple test kills a lot of big claims about how the world works. I’ve been applying it in a lot of AI conversations recently. I’ve written about this a bit before but want to go into more detail.
Here’s a common claim that I think fails my test: “The reason Americans are so unhealthy is that doctors don’t tell people about healthy diets.”
I think most people know what’s considered healthy food. They maybe wouldn’t be able to perfectly break down ideal ratios of macronutrients, but they have a rough idea. The average person whose bad diet is making them unhealthy would probably be able to point to the bad diet as part of the problem. If I walked up to the average person and asked them to make an ideal meal plan for themselves to be maximally healthy, I think most people would do a decent job.
Stefan Schubert makes a similar observation about what he calls sleepwalk bias:
When we predict the future, we often seem to underestimate the degree to which people will act to avoid adverse outcomes. Examples include Marx's prediction that the ruling classes would fail to act to avert a bloody revolution, predictions of environmental disasters and resource constraints, Y2K, etc. In most or all of these cases, there could have been a catastrophe, if people had not acted with determination and ingenuity to prevent it. But when pressed, people often do act, and it seems that we often fail to take that into account when making predictions. In other words: too often we postulate that people will sleepwalk into a disaster. Call this sleepwalk bias.
I often use the idea of sleepwalk bias in conversations. However, what I’m pointing at here is something more extreme: assuming everyone is stupid about even their ordinary, everyday experiences. So I think it needs its own name. I’m calling it my "Are you presuming most people are stupid?" test.
There are a few claims about AI floating around that fail my test.
I was motivated to write this in response to this Time article: ChatGPT May Be Eroding Critical Thinking Skills, According to a New MIT Study. There are a few points in this article that break my rule.
Kosmyna, who has been a full-time research scientist at the MIT Media Lab since 2021, wanted to specifically explore the impacts of using AI for schoolwork, because more and more students are using AI. So she and her colleagues instructed subjects to write 20-minute essays based on SAT prompts, including about the ethics of philanthropy and the pitfalls of having too many choices.
The group that wrote essays using ChatGPT all delivered extremely similar essays that lacked original thought, relying on the same expressions and ideas. Two English teachers who assessed the essays called them largely “soulless.” The EEGs revealed low executive control and attentional engagement. And by their third essay, many of the writers simply gave the prompt to ChatGPT and had it do almost all of the work. “It was more like, ‘just give me the essay, refine this sentence, edit it, and I’m done,’” Kosmyna says.
What are these results actually telling us that the average person doesn’t already know? These seem to be the claims:
If you use a talking robot to write your essay for you, you won’t learn as much about the topic compared to writing the essay yourself.
Having a talking robot easily available to you makes you more likely to cheat on essay assignments.
Students using ChatGPT to write their essays for them aren’t stupid about what’s happening. Similar to students who just Google to find answers to homework problems, they’re aware that they’re making a trade-off between actual learning and saving time. This article is presuming that students are somehow blind to the idea that copying work from other places means they don’t actually learn. The average student isn’t like that. They make bad decisions when they cheat using talking robots, but they know what they’re doing.
Here’s another study referenced in the same article:
The MIT Media Lab has recently devoted significant resources to studying different impacts of generative AI tools. Studies from earlier this year, for example, found that generally, the more time users spend talking to ChatGPT, the lonelier they feel.
It’s hard for me to imagine walking up to someone I don’t know and saying “Hey, spending a lot of time staring at your screen talking to a robot instead of interacting with real people can make you feel lonely. The experience itself can be somewhat alienating because the robot doesn’t feel human.” I don’t know how you could assume this is useful unless you assume the average person is really stupid. Would you feel comfortable telling a stranger this? Would you be able to say it in a way that isn’t demeaning?
Another big claim that fails my test is that AI chatbots are useless. Roughly 10% of the world's population now chooses to use them weekly. If they were useless, this would mean that 10% of the world is so stupid that they can’t tell that this tool they’re using every single week isn’t providing any value to them at all. There’s basically nothing else like this that people interact with regularly. You might think that social media like TikTok is bad for people, but it’s not “useless.” Users have fun, learn interesting facts, or pick up subtle social vibes from the TikTok videos they watch. You can criticize AI and think it’s net bad to use, but that’s a different claim from saying it’s useless. When I hear people say that AI chatbots are useless, it’s hard not to read it as a claim that almost everyone is incredibly stupid.
There’s too much of this way of talking in AI conversations. There are a lot of great criticisms of AI and chatbots, and real reasons to worry. I think that students cheating with ChatGPT is a gigantic crisis in education without clear solutions. But when people talk as if everyone using chatbots is incredibly stupid, and that people exposed to this technology are blind to the simple obvious trade-offs involved in specific situations, I come away with less respect for them. It seems like they underestimate the average person in a way that shows a lack of curiosity, or a tendency to steamroll other people’s experiences if they’re having a slightly different reaction to new technology.
I don't like the word “stupid”. It carries a moral judgement and in the context of this post is never defined. I don’t see any falsifiable claims made here.
The author seems to be projecting their own above-average intelligence onto other people. He’s imagining their inner world to be somewhat like his when it’s anything but.
> but they’re intelligent when it comes to their own lives and the areas they work and spend time in. We should expect the average person to struggle with factual questions about abstract ideas and far-off events, but not so much about what’s right in front of them day to day.
This is cosmically untrue. My cleaners can’t work my vacuum. I’ve spent a year constantly re-explaining it. They can’t put the oven racks back the way they found them; they just force them in the wrong way around every time. No number of reminders seems to help. My landscaper could not work out that he had our landscape wiring crossed, and spent days coming back replacing bulbs, digging up wires and replacing them, randomly rewiring sections. Five minutes with a multimeter and I had it solved. I know a nurse who thinks deoxygenated blood is blue.
The average person tries to memorize a handful of things from someone smarter and then stays in their lane. That’s fine, I don’t think we should call them “stupid” but capable thinkers and problem solvers they are not.
I'm glad I'm not the only one who had a similar disagreement with the article.
I try to frame it in my head that people aren't generally stupid, but that we do a lot of stupid things. Or we don't do things (e.g. read, or think critically), which adds to our level of stupidity.
It's perhaps a privilege of being above-average intelligence, but these days I try to focus less on being smarter and more on being less stupid. I seem to get more bang for the buck.
I'm still pretty stupid, though.
My own anecdote:
I worked with a guy on military aircraft. He was a radar technician. He went to school for it. I also went to school for it. The school was pretty hard with a decently high washout rate, including a lot of 2 year EE graduates, for some reason.
One day, we're working on the flight line on a radar issue and he says something pretty stupid, but we're kind of buddies, so I ask him to elaborate.
Long story short, his belief was that radar tracked other (jet) aircraft airspeed by reading the reflections bounced off of the other jets' turning pistons and calculating airspeed by how fast their piston assembly rotated.
I was completely taken aback. If you think it through a little, there are multiple levels of fail there. I then had to explain how this particular system actually worked and walk him through the ludicrousness of each step of his beliefs.
How he 1.) fabricated this elaborate theory from the relatively simple section of training ("measure latency of returned energy transmissions"), and 2.) made it through tech school without washing out, I'll never know.
Stupid is an insult, but there is a clinical equivalent that all of the people you mentioned would meet (unfortunately).
The author seems to be pretending these people don't exist, and I think you made a good guess as to why.
> I know a nurse who thinks deoxygenated blood is blue.
That's hard to believe. Any nurse has drawn blood from veins and seen deoxygenated blood with their own eyes.
Nurses still believe all sorts of medical myths, unfortunately.
We tend to think about intelligence in terms of capacity, but "stupidity" is generally more about emotion and self control. I suspect every gambling addict has the brain power to figure out they have a problem. It's not a fundamentally complex issue. That doesn't mean they stop.
Students are often fully aware their use of ChatGPT is a bad idea. Like the gambling addict, that doesn't mean they stop. Forcing yourself to do your schoolwork has always been difficult, and they've been given a way out.
> Students are often fully aware their use of ChatGPT is a bad idea.
Often and fully aware? I sincerely doubt it. I’d bet it’s only a tiny minority of students who use ChatGPT who think it’s a bad idea.
Unlike the gambling addict, they can’t feel the immediate repercussions of their actions. Those will only become apparent in hindsight, long after they can correct course.
> I don’t know how you could assume this is useful unless you assume the average person is really stupid. Would you feel comfortable telling a stranger this? Would you be able to say it in a way that isn’t demeaning?
Something I've noticed from time to time in my career is the following:
1. Someone does something they know perfectly well they shouldn't have done, but they think they won't get caught.
2. They get caught.
3. They feign ignorance or confusion about the rules, hoping to lessen their punishment.
4. The organisation takes their claim of ignorance seriously, and introduces incredibly patronising training/rules/signage.
A person who doesn't notice this happening could easily get the impression their peers have room-temperature IQs.