AI optimism is a class privilege

2026-02-16 18:05 · joshcollinsworth.com


A while back, in a slightly earlier era of AI[1], a project was making the rounds which would read your GitHub profile, and create a personalized roast based on the contents.

It was intended, I assume, as a harmless, lighthearted novelty. I wanted to join in on the fun, so I put my profile in and tried it out.

I didn’t laugh at my roast.

It wasn’t clever, or funny, or even particularly unexpected. A tech-savvy stranger on Fiverr probably could’ve done better.

But more than that: I remember being surprised at how mean it was. Little of what the model produced even felt like a joke; instead, it just read as a slew of very personal insults.

And then I remember being surprised that the artificial cruelty actually affected me.

Despite knowing this was all a soulless (and as it turns out, humorless) machine making a poor attempt at comedy—one that nobody else even saw!—reading those words hurt. Bizarrely, I suppose, AI actually managed to hurt me.

And that was the first time I remember thinking about what AI was going to do to my children.

If I—a grown man with thick skin, hardened by decades of internet usage—can still be susceptible to highly personalized online bullying, what will it be like for my son, when some mean kid inevitably gets their hands on this technology and decides to put it to malicious use?

By the time my kids encounter real bullying, I’m sure derogatory jokes will be about the least harmful form of antagonism AI will be empowering. Imagine the damage one bad kid could cause using deepfakes, for example. Forget the days of starting a nasty rumor and spreading it around the school; now you can share a video of it happening.

Imagine the shame, intimidation, harassment, and trauma AI might enable a cruel juvenile to inflict—particularly once the tech has had another few years to improve. (To say nothing, of course, of what it might enable for an unethical adult.)

Imagine how absolutely unmitigable the damage would be.

My reaction wasn’t laughter; my reaction was horror at the realization that we’re racing to build the perfect bullying tool.

I was never exactly an optimist when it comes to AI. But that was the first time I realized exactly how dark the future I foresaw actually was.

Although it’s not an entirely correct description, I’ll use the term “AI optimist” a lot in this post, as it’s at least a serviceable label for a general group of people.

That group, to be a bit more descriptive, is made up of people who are excited about AI. This might include future developments, but they’re particularly excited about AI in the present and near term, and how they can use it right now. You might call them enthusiasts, or even believers, maybe. But in any case, they’re generally enthusiastic about AI, and aren’t overly concerned with costs or downsides.

You almost certainly know at least one or two of these people. Maybe you even are one. (If so: I’m not naive enough to think I’ll change your mind with this post, but I hope I’ll at least give you some things to think about.)

It seems to me that to be in this group—to regard AI, as it exists currently, with optimism and enthusiasm—requires at least a certain degree of privilege. Hence, the somewhat blunt title of this post.

I had long struggled to put the thought into words. But once it crystallized into this post’s titular sentence, I felt as though a great deal around me suddenly shifted into perspective.

So, that’s why I wrote this post; to share that perspective. It is my own, and it comes from my own experiences (and yes, through the lens of my own substantial privileges, class and otherwise). You can take it, or not, as you like.

It’s late 2025, and so you don’t need me to tell you how extreme opposing views on AI can be. Everyone has an opinion of AI, and the overwhelming majority fall to one far end of the spectrum or the other. There’s a vast divide between the sides, each fiercely passionate in their own entirely opposite ways.

For my part, I’m decidedly on the pessimist side of the chasm, for many reasons. Some I’ll get into here; others, I’ll mostly pass over, as they’ve been well covered elsewhere.

But for now, suffice to say: when I look around me at the impact AI is currently having, I see little reason for enthusiasm—let alone the little-questioned, quasi-religious belief that this fundamentally flawed technology might one day soon bring about some sort of economic, societal, and/or scientific revolution all on its own.

Come to think of it, “religious” might be a good word to describe how AI optimism feels, from the outside. It has fervent believers, prophecies from prominent figures to be taken on faith, and—of course, as with any religion—a central object of worship which can at all times be offered as The Answer, no matter what the question might happen to be.

In fairness: that’s not all AI optimists. I’m mostly describing the extreme ones.

Even among the more moderate optimists, though—ordinary people who just like the tech—the enthusiasm has always seemed…disproportionate, let’s say.

It was always perplexing to me that so many of my peers seemed so eager to be across the divide from me; that they were so much more impressed with AI than I was, and so indifferent to what I felt were alarming flaws and drawbacks.

They didn’t seem particularly different than me. In fact, many were my friends, connections, and people I looked up to.

We were looking at the same tech, with the same outcomes, and drawing entirely different conclusions. What was I missing?

The answer eventually hit me:

They see themselves as the ones benefiting from AI, and not as the ones it might cost.

I concede AI can occasionally be helpful for certain tasks, and I can understand the enthusiasm, as far as that goes. I don’t use it often, but I admittedly do use it some. (I do still write every word of every post on my own, however, hand-typed em dashes and all.)

I sometimes find AI helpful for generating reference images to use as starting points for illustrations, and occasionally for ideating, as a “rubber duck” of sorts. I also use it once in a while to compensate for my own color vision deficiency. But mostly, it helps me with code.

In full disclosure of all the mundane details: I mostly just use code completion suggestions in VS Code, even though they’re often hit and miss. I rarely use chat mode, and when I do, it’s mostly for rote tasks like format conversion or pattern matching. That’s pretty much it. Every time I’ve tried giving AI more responsibility than that, it’s let me down pretty spectacularly.

I’m deeply skeptical that AI offers a net productivity boost in general[2], but particularly that it’s capable of high-quality frontend code. I theorize good frontend is just too subjective, too visual, balances too many concerns, and is too underrepresented in training data. (That might explain why developers in other specialties seem to report better results.)

I can already hear the enthusiasts scoffing and getting up to leave, because I don’t use AI “the right way,” by vibe-coding with Cursor agentic MCP, or whatever the flavor of the week is. And it’s true; I’ve never gone that deep with it.

That’s partly because I’ve heard too many horror stories about leaked secrets, deleted databases, and wiped hard drives. I don’t like the idea of giving a non-deterministic black box full control of my machine and/or production.

But it’s also because I like using my brain. Any passion I have for what I do comes largely from the process of ideating, building, and creatively solving a problem. Having a machine do all that for me and skipping to the result is as unsatisfying as a book full of already-completed sudoku puzzles, or loading up a save file where somebody else already played the first two thirds of a video game. I don’t do these things just because I want the result; I also do them because I want the experience.

I want to improve! And it’s hard to imagine how that might happen if I’m not actually putting skills into practice.

All of that’s mostly beside the point anyway, though; my issues with AI have little to do with its level of effectiveness.

Even if my new coding buddy is severely prone to overconfidence, it’s still admittedly exciting when it makes tasks that would’ve been previously time-consuming and/or challenging quick and easy.

In order to be an AI optimist about this, however: that’s where I would have to stop thinking about it.

I would be forced to ignore what else my little coding buddy is getting up to when I’m not looking; the other impacts he’s having on other people’s lives.

Let’s take layoffs as an example.

In order to be an AI optimist, it seems to me you’d have to believe yours is not among the jobs at risk of being automated or downsized, and that you aren’t among the countless workers staring down displacement. (Or at least: not at risk of AI taking over the interesting and fulfilling parts of your work, as your role is reduced to acting as its manager.) After all, how could you feel enthusiastic about a threat to your own livelihood?[3]

You’d need to be high enough in the org chart; far enough up the pyramid; advanced enough along the career ladder.

To be an AI optimist, I’m guessing you must not be worried about where your next job might come from, or whether you can even find one. The current dire state of the job market, I have to assume, doesn’t scare you. You must feel secure.

Maybe it’s because you’ve already made a name for yourself. Maybe you’re known at conferences, or on podcasts. Maybe you’re just senior enough that your résumé opens doors for you.

Or maybe you’ve been promoted into leadership. Maybe you spend your days in important meetings.

Maybe this is all a lot easier to be optimistic about with the right charts and graphs in front of you.

You almost certainly aren’t a junior, though, or an intern, or somebody trying to break into the field. You must not be near the rising tide engulfing entry-level workers across my industry and a wide range of others. Because, infamously, nobody is hiring juniors anymore.[4]

It seems fairly safe to assume you aren’t in the first group against the wall, if you’re excited about the thing putting them there.

You probably aren’t a contractor, either, or working at a consultancy. And for that matter: you almost certainly aren’t an artist, or illustrator, or writer. You probably haven’t watched client dollars funneled upwards, with the bitter knowledge that this thing eroding your income is only possible because it brazenly plagiarized you and a million other people who do what you do.

AI optimism probably means you’re in a position where nobody is stealing your work, or bulldozing your entire career field.

That’s the thing about being bullish on AI: to focus on its benefits to you, you’re forced to ignore its costs to others.

AI optimism requires believing that you (and your loved ones) are not among those who will be driven to psychosis, to violence, or even to suicide by LLM usage. At the very least, this means you feel secure in your own mental health; likely, it also means you have a wider and more substantial support system propping up your wellbeing.

(Not to put too fine a point on it, but: those things are otherwise known as privileges.)

AI optimism requires you to believe that, whoever will be impacted by the sprawling data centers, the massive electricity demands, the water consumption, and the other environmental hazards of the AI boom[5], it won’t be you. Whatever disaster might happen, your neighborhood will be safe from it. Probably far away from it.

The harms of AI aren’t a standalone issue; as AI becomes a part of other technologies, systems, and parts of society, it’s exacerbating their existing problems, and accelerating damage already being done elsewhere.

I have to believe scammers are enthusiastic about AI; there’s likely never been a more helpful tool for fraud. Criminals and con artists have always been around, of course, but they’ve never had such powerful instruments at their disposal. After all, it’s much easier to rob somebody’s unsuspecting grandma when you can simply conjure a video of that person out of thin air, or perfectly imitate their voice on a phone call.[6]

But that’s a relatively small scale of harms, aimed at individuals. The broader harms come from AI interacting with systems, like governments and their substructures.

Malicious state actors (both in and outside of the US) are wielding AI as a ruthlessly efficient propaganda machine, disseminating disinformation that’s more convincing than ever, faster than ever previously possible. Much of what’s being produced serves to dehumanize and victimize vulnerable groups, like immigrants, refugees, queer people, and political dissidents. Mainly (but not exclusively), this is to bolster authoritarian power.

It’s hard to imagine how one could be optimistic about the technology empowering such horrors, but I suppose knowing it probably won’t affect you must help.

I doubt I could feel very good about the tech helping me write emails faster if I knew that same tech was helping to make me, or people close to me, a target of violence.

Even when the intent might be good, however, AI often amplifies existing harms.

In the rush to shove AI into everything possible, we’ve now injected it into parts of the justice system, too. It’s in everything from facial recognition and surveillance tech to data and administrative work. It’s even in the legal system.

In theory, this is an efficiency boost. In theory, a machine should be less biased than humans.

In reality, not only do these models make mistakes at a rate that is utterly unacceptable in this context; they mimic and amplify the inherent racism present in their own training data. (Tech is always a mirror of its creators; it is never neutral.) Compounding this problem, AI is non-deterministic, and something of a black box, offering little to no way to inspect, challenge, or appeal its results.

Needless to say, this deployment of AI has already had a profoundly devastating impact on real people’s lives—damage which shows no signs of slowing.

Forgive me, but I can’t imagine being excited that this technology which is rapidly accelerating inequality is also helping me save a little time on writing code.

I have to imagine such excitement would require me to think none of this could happen to me, or to anybody who matters to me.

Or, at the very least: that it’s all undeniably unfortunate, but ultimately, in service to some greater good. A justifiable tradeoff; a glitch to be ironed out.

AI optimism requires you to see the lives of at least some of your fellow humans as worthwhile sacrifices; bug reports in a backlog.

But even when there’s no larger system behind it, and even with no broader goal or agenda at all—malicious or otherwise—AI can still amplify existing harms.

One example at the top of my mind: Facebook was recently flooded with AI-generated videos of women being violently strangled. There was no apparent deeper purpose behind this horrifying wave of misogynistic terrorism, however; it just happened to be what the algorithm rewarded. That content generated engagement, and that engagement generated more of the same content.

A similar thing happened recently on TikTok, but this time it was videos of immigrants being ruthlessly brutalized that struck a nerve and triggered a proliferation of objectionable content across the platform.

Sometimes this effect is more or less benign (see: Shrimp Jesus); other times, a machine built to provoke a reaction will inevitably hit paydirt in the horrifying, the traumatizing, the inhumane, and the unacceptable.

AI isn’t just harmful on its own; it’s a force multiplier for existing harms. The intent behind it, if one even exists, is irrelevant; the impact is the same.

I think all of this is why so many of us are so pessimistic about AI; we can see very clearly the many ways it represents a threat to us, and to the things we care about.

For so many, AI stands to take away something important from us and those around us—safety, stability, creativity—and replace it with benefits for somebody else; productivity and profit, going mainly to those above us.

I think so many people are against AI because they see how it functions as a system for taking away from those with the least, to give even more to the already highly privileged.

This is why the promise of AI fixing everything and empowering workers is so important; it’s the linchpin of the whole operation. It’s required to get buy-in from the people who stand to lose the most.

So let’s talk about that next.

Some might argue I’m missing the entire point here, by focusing so much on the present. Optimism isn’t about what’s happening right now, they might say; it’s about the future!

Forget what AI actually is currently; the models will get better, the data centers more energy-efficient, the tokens cheaper, the mistakes rarer, the harms mitigated, and so on, until we have something that changes the world for the better; an actual benevolent technology that solves our problems, in whatever way. Maybe it even is, or leads to, AGI (actual human-level artificial intelligence; the thing AI used to mean before 2022).

I take issue with these predictions, for several reasons:

  • While I’m sure the technology and its costs will continue to improve, it’s hard to see how that would mitigate most of these harms. Many would just as likely be intensified by greater speed, efficiency, and affordability.

  • I’m wary of predictions in general,[7] but particularly those that bear little to no resemblance to observed reality. Absent a clear evidential link, prediction tends to be based purely on hype and speculation, and there’s a wild overabundance of both around AI.

    It’s reasonable to believe the tech will improve. It seems much less reasonable to think it might suddenly change into something new, develop presently impossible capabilities, or take us somewhere far distant with absolutely no clear path or connection between here and there.

  • Most of the utopian visions of AI center on the idea that AI is sentient, which it categorically, factually, is not. Language and statistics can convincingly mimic cognition, and our human brains are overly eager to anthropomorphize anything that vaguely imitates human behavior. Thinking and reasoning are very different than statistically emulating communication.[8]

  • Many LLM experts, including prominent researchers from OpenAI and Meta, tell us the models are already approaching their realistic ceiling, and cannot practically scale further beyond this plateau, for several reasons (not least of which: the exhaustion of training data that isn’t already tainted by AI). Several also tell us LLMs are a dead end when it comes to achieving AGI, and most generally agree it’s literally impossible to stop LLMs from making things up. (Really: actual people from OpenAI publicly admitted LLMs will never stop “hallucinating.” It’s an un-fixable bug, because it’s a core component of how LLMs work.)

    So if you believe LLMs are intelligent, or eventually will be, you’re fighting a lot of people who know a lot more than you do from every angle.

  • Even if we ignore all the technical limitations, or find ways around them: new advancements simply don’t work that way. They never have. (The equitable, worker-liberating way, that is.) Tech doesn’t free workers; it forces them to do more in the same amount of time, for the same rate of pay or less.

    If you become twice as productive, you don’t get twice the pay or twice the time off; you just get twice the workload—likely because somebody else doing the same job just got laid off, and now you’re doing their work, too.

    This sort of technology distributes instability to the many at the bottom, while consolidating benefit at the top—and there has arguably never been a more efficient mechanism for this than AI. I see absolutely no reason to believe this time will be different, especially because:

  • AI models exist in the consolidated hands of a precious few huge companies, which are themselves quite obviously happy to do away with as many of their own workers as they possibly can. AI will serve, and is already serving, corporate interests first and foremost—especially as these models continue to replace core infrastructure, like web search, and can be manipulated however the companies please.[9]

  • Regardless, even if you naively believe in the tech: you’re still willing to put up with all the harms and dangers of AI until that imagined potential future arrives—which brings us back to the original point.

Some might also point to positive use cases for AI. Accessibility is a popular one. (In fact, it’s so popular that online AI apologists have realized all they need to do is invoke the word “ableist” to shut down any discussion.)

Yes, there are good use cases for AI. I don’t think most reasonable people would argue with that. Like I said: I sometimes even use it myself, to compensate for my own physical inability. But calling out such cases tends to be a bad-faith attempt to justify all of AI’s other harms by using disabled people and others who might benefit from AI, rather than reckoning with the damage and rethinking our deployment of AI in order to maximize good and minimize harm for everyone. We don’t have to accept every use of AI and all of its impacts just because some of them might be beneficial.

Finally, let me take a moment to address anyone who might be thinking: sure, AI is being used for some bad things, but I’m not personally using it that way. What’s wrong with me just focusing on the good parts and enjoying the benefits to me?

My friend, that’s privilege. You are literally describing privilege.

Let me close this post the same way it began; with a personal example from my own family.

I have a newborn daughter.

I began writing this post before she was born, and mostly because of her, I’m now finishing it up several weeks later. (I’ve fit most of this writing into her nap schedule, typing as she sleeps beside me.)

And I can’t shake the thought that I’m welcoming her into a world where so much of the potential malicious misuse of AI could one day be directed at her.

Looking beyond all the things we’ve already talked about: technology in general has made things like stalking and abuse easier than ever. But AI goes even further. I live knowing AI will allow any degenerate pervert with an internet connection to create deepfakes of this little girl—up to and including pornography—without any consent at all, at barely the click of a button.

If this sounds like a horrifying, disturbed thought: it is! It absolutely is! But I’m not coming up with this on my own; this is already happening to untold numbers of women, many of whom are school-aged girls.

To be an AI optimist, I would need to turn away from this. Ignore it. Consider it all just part of the plan; a price to be casually paid (hopefully by somebody else) in exchange for…what? Writing software a little bit faster?

Optimism would require me to believe that my children probably won’t have that kind of experience, or any others I’ve described here.

To believe they’ll be in better schools. Better neighborhoods. Have better friends. Better support systems.

Won’t ever attract the attention of the wrong guy, or piss off the wrong girl.

Won’t ever live in the wrong places. Won’t ever find themselves in the wrong part of the system.

Won’t end up on the wrong side of the accelerated inequality.

AI optimism requires you to see yourself and your loved ones as safe from AI; as the passengers in the self-driving car, and not as the pedestrians it might run over.

I don’t know how you see yourself that way without a great deal of class privilege.

The rest of us?

I guess it’s hard to see the convenience as worth the price—let alone exciting—when you know you could be among the ones paying for it.



Comments

  • By nomdep 2026-02-16 21:29, 10 replies

    (I feel that I write a comment like this every few years)

    The author's catalog of harms is real. But it's worth noting that nearly identical catalogs were compiled for every major technological shift in modern history. The Internet destroyed print journalism and local retail, and enabled cyberbullying and mass surveillance. If we applied the same framework used here, Internet optimism in 2005 would also have been a form of "class privilege" (his term; I personally hate it).

    And the pattern extends well beyond the Internet. For example, mechanized looms devastated weavers, the automobile wiped out entire trades while introducing pollution and traffic deaths, and recorded music was supposed to kill live performances.

    In each case, the harms were genuine, the displacement was painful and unevenly distributed, and the people raising alarms were not irrational. They were often right about the costs. What they tended to miss was the longer trajectory: the way access to books, transportation, music, and information gradually broadened rather than narrowed, even if the transition was brutal for those caught in it.

    History doesn't guarantee a good outcome for AI, but the author does advocate from a position of "class privilege": of having access to good lawyers, good doctors, and good schools already, and not feeling the urgency of tools that might extend those things to people who don't.

    • By abeppu 2026-02-16 22:04

      > but the author does advocate from a position of "class privilege": of having access to good lawyers, good doctors, and good schools already, and not feeling the urgency of tools that might extend those things to people who don't

      I dunno, I think you can also take a really dim view of whether society as currently structured is set up to use AI to make any of those things more accessible, or better.

      In education, certainly we've seen large tech companies give away AI to students who then use it to do their work. Simultaneously teachers are sold AI-detection products which are unreliable at best. Students learn less by e.g. not actually doing the reading or writing, and teachers spend more of their time pointlessly trying to catch the very common practice.

      In medicine, in my most recent job search I talked to companies selling AI solutions both to insurers and to healthcare providers, to more quickly prepare filings to send to the other. I think the amount of paperwork per patient is just going to go up, with bots doing most of the actual form-filling, but the proportion of medical procedures that gets denied will be mostly unchanged.

      I am not especially familiar with the legal space, but given the adversarial structure of many situations, I'm inclined to expect that AI will allow firms to shower each other in paperwork, most of which will not be read by a human on either side. Clients may pay for a similar or higher number of billable hours.

      Even if the technology _works_ in the sense of understanding the context and completing tasks autonomously, it may not work for _society_.

    • By aprilthird2021 2026-02-16 21:50, 2 replies

      Everyone says this as if the previous cycles of labor displacement could not compound and this could not be the last straw. Same with how phones cause shorter attention spans, less thought, and more social isolation. People will say "oh they said the same thing about books and TV and video games"

      We could be at the end of the rope with how much we can displace unevenly and how much people will put up with another cycle of wealth concentration. Just like we might be at the end of the rope with how much our minds can be stunted and distracted before serious negative consequences occur

      • By robflynn 2026-02-16 22:11

        I am reminded of this; I feel like it's kind of a similar phenomenon: https://www.reddit.com/r/dataisbeautiful/comments/1m803ba/th...

      • By plagiarist 2026-02-16 22:13

        I think they are compounding. Prior to the internet we had more third spaces, less of an attention economy, fewer self-esteem problems from comparing our lives against influencers', warehouse and delivery jobs that didn't require pissing in a bottle to stay employed, and people were employed instead of doing gigs. We used to have some measure of privacy; that's gone.

        It's been this overpowered tool for the wealthy to gather more wealth by erasing jobs, and for the data brokers to perform intense surveillance.

    • By raincole 2026-02-16 22:18, 1 reply

      > But it's worth noting that nearly identical catalogs were compiled for every major technological shift in modern history.

      And it has been... quite a correct view? In the past few decades the US cranked up its Gini index from 0.35 to ~0.5, successfully eliminated single-earner housebuyers[0]. It's natural to assume the current technology shift will eliminate double-earner housebuyers too. The next one would probably eliminate PC-buyers if we're lucky!

      [0]: https://www.economist.com/united-states/2026/02/12/the-decli...

      • By _DeadFred_ 2026-02-17 20:12

        The scifi books were right in predicting future relationships would be poly; they just didn't explain it was because it was the only way people would be able to afford to live.

    • By random3 2026-02-16 21:51, 1 reply

      arguably the history of humanity was about automating humanity.

      - teeth and nails with knives (in various shapes from bones to steel)

      - feet with carriages and bicycles and cars

      - hands with mills and factories on steam engines to industrial robots

      Literally every automation was meant to help humans somehow, so this naturally entailed the automation of some human function.

      This automation is an automation of the human brain.

      While the "definition" of what's human doesn't end here (feelings, etc.), the utility does.

      With loss of utility comes loss of benefits.

      Mainly, your ability to differentiate yourself as a function of effort (physical or intellectual) gets diminished to zero. This poses some concerns wrt the ability to achieve goals and aspirations - like buying that house at some point, or ensuring your children's future - which potentially vanish for large swaths of the population — the "unfortunates". Which ones these are is hard to tell, but arguably the level of current resources (assets) becomes a better indicator of the future for generations to come, with work counting for less and less, down to nothing.

      By freezing utility based on one's own effort, you arguably freeze the structure of society in time. So yes, every instance sucked for the displaced party, but this one seems particularly broad (i.e. wider splash damage).

      • By NonHyloMorph 2026-02-16 23:10, 2 replies

        The term you're looking for is externalisation, not automation. Check out "The Fault of Epimetheus"; and, on the alienation of the machine by automation (ca. the late 1970s), one of its intellectual predecessors: Gilbert Simondon.

        • By random3 2026-02-17 5:33

          Thanks, both! Glad to get the explicit names for the things I'm "gesticulating" at. I haven't done any explicit reading on the topic, except for adjacent stuff like Analogia (Dyson), The Coming Wave (Suleyman), and a talk by Terry Winograd that I thought was on point: https://www.youtube.com/live/LcvYYXdXF8E. I have and do want to read Superintelligence and will check out both Stiegler and Simondon.

        • By plastic-enjoyer 2026-02-16 23:17

          I mean, if he is a reader of Nick Land, automation may be right.

    • By bsder 2026-02-16 22:58

      > enabled cyberbullying

      The problem here is that adults do not take bullying seriously and they take cyberbullying even less seriously.

      This is the fairly standard problem that we do not apply the existing rules and laws online with the same vigor as offline.

    • By antonvs 2026-02-16 21:47, 4 replies

      The assumption in your comment is that those changes were all net good. In hindsight though, the automobile has had possibly existential costs for humanity, the internet has provided most benefit to those who most abuse its power, and so on. In the end, it doesn’t seem as though you’ve actually made any sort of case.

      • By jdross 2026-02-16 21:50, 4 replies

        The set of people who believe the automobile (or the Internet) are net negatives taken as a whole for society is extremely small, for good reason

        • By Retric 2026-02-16 21:59, 1 reply

          Is it? Do you include everyone that’s died or lost a loved one due to personal automobiles in that assessment?

          We are so far post automobile that it's hard to compare, but many of the benefits are illusory when you consider how society has evolved with them; commutes, for example, used to be shorter. Similarly, the air used to be far cleaner, and that's after we got rid of leaded gas and required catalytic converters decades ago.

          • By notanastronaut 2026-02-17 15:54, 1 reply

            How many people have lived or had a loved one saved due to automobiles?

            We have the benefit of hindsight but we're also making judgment calls looking back on fuzzy recollections, forgetting just how the past used to be before an innovation came along.

            • By Retric 2026-02-17 17:38

              I agree it’s difficult to do these calculations as society evolves with technology. Trains enable long distance evacuation from hurricanes. Street cars and subways allow for medical transportation but it looks very different than an ambulance. Similarly do we exclude helicopters assuming cars were simply banned rather than our failing to design IC engines or whatever.

              That said, there are modern enclaves without cars, mostly on islands or in very remote locations. They make do just fine without cars; it's the low population density that's at issue for medical care.

        • By EdwardDiego 2026-02-16 22:01, 2 replies

          Let's refine terms - internal combustion engine driven automobiles have led to lead poisoning, air pollution, and CO2 emissions.

          • By zozbot234 2026-02-16 22:16, 1 reply

            The automobile on its own was actually far less polluting than the horse wrt. air quality. It's just that there's a whole lot more of the former than there ever was of the latter. Even wrt. climate change, it turns out that horses produce methane emissions which are far worse for the climate than carbon dioxide.

            • By mikelitoris 2026-02-16 23:27, 1 reply

              You are immensely discounting induced demand though.

              • By simianwords 2026-02-17 0:56

                induced demand is a good thing - it means there is more utility going around.

          • By simianwords 2026-02-17 0:55, 1 reply

            I would like to get actual numbers.

            1. how many people died because of lead poisoning, air pollution?

            2. how many people were saved and had qualitatively better lives because of automobiles?

            • By EdwardDiego 2026-02-28 7:20

              1. Is a very hard question to answer: airborne lead pollution isn't fatal, but it does impact cognition. Perhaps ask yourself how many people lead lower quality lives because of it, and the flow-on effects from that lower cognition.

              The people who did Freakonomics claimed that the drop in violent crime in the US in the 90s could be correlated to the phasing out of leaded fuel, but I'm not a statistician so can't speak to the accuracy of that correlation.

              2. How would you even measure that? How would you define a better life thanks to a vehicle?

              I feel like you're ice skating uphill a little given the deleterious effects of leaded fuels are well studied, but the question you're asking isn't.

        • By antonvs 2026-02-17 6:48

          That reason is along the lines of, “It is difficult to get a man to understand something when his salary depends on his not understanding it.”

          Coal miners will fight for coal mines, the oil industry will fight for dependence on oil, and so on. Sometimes they’re aware of what they’re doing, but in the case of a comment like the above, apparently not so much.

      • By shimman 2026-02-16 21:56

        I often wonder that if cable news was around say, during the American Civil War, how likely would the 13th, 14th, and 15th amendments have passed? I'd say extremely unlikely.

        Throughout our entire history as a species, abusers have always fucked the commons to the extreme using whatever tools they have available.

        I mean take something as "innocuous" as the cotton gin: prior to the cotton gin there was a real decline in slavery, but once it became dramatically easier to process cotton, slavery skyrocketed. Some of the worst laws the US has ever passed, like the Fugitive Slave Act, came during this period.

        To think that technological progress means prosperity is extremely delusional.

        We're still dealing with the ramifications of nuclear weapons, and a committed nuclear attack will assuredly happen again at some point in our species' history; we can only hope it doesn't take out all life on Earth when it happens.

      • By bad_haircut72 2026-02-16 21:54

        The Industrial Revolution and its consequences have been a disaster for the human race.

      • By MSFT_Edging 2026-02-16 22:13

        Seriously, these types of comments are always really narrow in their view.

        Industrialization has rapidly accelerated planet wide climate change that will have disastrous effects in many of our lifetimes. A true runaway condition will really test the merit of those billionaire bunkers.

        All for what? a couple hundred years of "advancement"? A blink in the lifespan of humanity, but dooms everyone to a hyper-competitive death drive towards an unlivable world.

        As a society, our understanding of "normal" has narrowed down to the last 80 years of civilization. A normal focused around consumption, which stands to take it all away just as fast.

        The techno-optimists never seriously propose any meaningful solution to millions losing their livelihoods and dignity so Sam Altman can add an extension to his doomsday bunker. They just go along with it as if they'll be invited down to weather the wet-bulb temperature.

    • By georgemcbay 2026-02-16 21:49, 1 reply

      > The author's catalog of harms is real. But it's worth noting that nearly identical catalogs were compiled for every major technological shift in modern history.

      I think both the scale (how many industries will be impacted effectively simultaneously) and speed of disruption that could be caused by AI makes it very different from anything we have seen before.

      • By akoboldfrying 2026-02-16 22:06

        I think it will be big, but I don't think it's bigger than the automation of manufacturing that began during the Industrial Revolution.

        Think about the physical objects in the room you're in right now. How many of them were made from start to finish by human hands? Maybe your grandmother knitted the woollen jersey you're wearing -- made from wool shorn using electric shears. Maybe a clay bowl your kid made in a pottery class on the mantelpiece. Anything else?

    • By zozbot234 2026-02-16 21:50, 1 reply

      Local retail and specialty print media are alive and well. Mass-market newspapers may be in trouble, but that's because it turns out most people were buying those for the classifieds, not really for the news. Even cyberbullying is mostly a matter of salience: it takes something that has always existed in the physical realm (bullying behavior) and moves it to the cyber environment where the mass public becomes aware of it.

      • By mft_ 2026-02-16 21:55

        > Mass-market newspapers may be in trouble, but that's because it turns out most people were buying those for the classifieds, not really for the news.

        Genuinely interested in some sort of data on this.

        My working assumption was that print news media was dying through a combination of free news availability on the internet, shifting advertising spending as a result, shifting ‘channels’ to social media, and shifting attention spans between generations.

    • By notanastronaut 2026-02-17 15:40

      Don't forget, the camera also destroyed art and put artists out of a job. Then digital cameras ruined it for the film industry. And on, and on.

    • By techpression 2026-02-16 22:27

      I don’t think we can haphazardly apply history like this, it’s never the same, we just like to find patterns where there are none.

      The biggest harm that would come from AI is "everything at once": we're not talking about a single craft, we're talking about the majority of them. All while moving the control of said technology to even fewer privatized companies. The printing press didn't centralize all knowledge and utility to a few entities; it spread them. AI is knowledge and history centralized, behind paywalls and company policies. Imagine picking up a book about the history of music where every second page carries an ad for McDonald's; this is how the internet ended up, and it's surely how LLM providers will end up.

      And sure, some will run some local model here and there, but it will be irrelevant in a global context.

  • By d_burfoot 2026-02-16 21:28, 4 replies

    > they mimic and amplify the inherent racism present in their own training data

    LLMs turn out to be biased against white men:

    https://www.lesswrong.com/posts/me7wFrkEtMbkzXGJt/race-and-g...

    > When present, the bias is always against white and male candidates across all tested models and scenarios. This happens even if we remove all text related to diversity.

    • By dogmayor 2026-02-16 22:06, 1 reply

      Important sentences immediately before the ones you quote.

      > For our evaluation, we inserted names to signal race / gender while keeping the resume unchanged. Interestingly, the LLMs were not biased in the original evaluation setting, but became biased (up to 12% differences in interview rates) when we added realistic details like company names (Meta, Palantir, General Motors), locations, or culture descriptions from public careers pages.

      • By daveguy 2026-02-16 22:10

        Hah. Even LLMs know Meta and Palantir are evil af.

    • By aprilthird2021 2026-02-16 22:09

      These are because of post-training. You have to give models such directives in post-training to correct the biases they bring in from scraping the whole internet (and other datasets like books, etc.) for data.

    • By biophysboy 2026-02-16 22:50

      Looking at the paper, the effect is significant but weak (5-7%), even with the conditionals that magnify the effect. I would be curious to see the effect if this experiment were performed on a slightly different categorical variable (e.g. how two white ethnicities are treated). I do think it's bad if preferences are "baked in" to the default though - prompting them away seems like a bad solution.

    • By 113 2026-02-16 22:13

      That's not a reliable source.

  • By abeppu 2026-02-16 21:32, 1 reply

    > You’d need to be high enough in the org chart; far enough up the pyramid; advanced enough along the career ladder.

    > To be an AI optimist, I’m guessing you must not be worried about where your next job might come from, or whether you can even find one. The current dire state of the job market, I have to assume, doesn’t scare you. You must feel secure.

    So I think even these people should not feel secure. The perceived value of expertise is decreased by AI, which routinely claims to have PhD-level mastery of a lot of material. I think even for people with deep experience, in the current job market, many firms are reluctant to hire or pay in a way that's commensurate with that expertise. If you're a leader whose clout in an organization is partly tied to how many people are under you in an org chart (it's dumb, but we have all seen it), maybe that will begin to shrink quarter after quarter. Unless you can make it genuinely obvious that a junior or mid-tier person could not write a prompt which causes a model to spew the knowledge or insight that you have won through years or decades of work, your job may become vulnerable.

    I think the class divide that is most relevant is more literal and old-school:

    - Do you _own_ enough of a business (or businesses) that that's how you get most of your income? If so, maybe there's a way that AI will either cause your labor costs to decrease or your productivity per worker to increase, and either way you're probably happy.

    - Can you invest specifically in the firms that are actively building AI, or applications thereof?

    We're back to owners vs workers, with the added dynamic that if AI lets you partially replace labor with capital, then owners of course take a bigger share of value created going forward.

    • By zombot 2026-02-17 12:14

      > then owners of course take a bigger share of value created going forward.

      As has been the case with every technological change since forever.
