Rob Pike goes nuclear over GenAI

2025-12-26 14:08 | 1550 | 1797 | skyview.social

Fuck you people. Raping the planet, spending trillions on toxic, unrecyclable equipment while blowing up society, yet taking the time to have your vile machines thank me for striving for simpler…



Comments

  • By suralind 2025-12-26 20:27 | 17 replies

    I don’t really understand the hate he gets over this. If you want to thank someone for their contribution, do that yourself? Sending a thank-you from an ML model is anything but respectful. I can only imagine that if I got a message like that I’d be furious too.

    This reminds me of a story from my mom’s work years ago: the company she was working for announced salary increases to each worker individually. Some, like my mom, got a little bit more, but some got a monthly increase of around 2 PLN (about $0.50). At that point, it feels like a slap in the face. A thank-you from AI gives off the same vibe.

    • By hijodelsol 2025-12-26 21:41 | 6 replies

      Sending an automated thank you note also shows disdain for the recipient's time due to the asymmetry of the interaction. The sender clearly sees the thank you note sending as a task not worthy of their time and thus hands it off to a machine, but expects the recipient to read it themselves. This inherently ranks the importance of their respective time and effort.

      • By xnx 2025-12-27 3:00

        Yes. Just like lazy pull requests, it's bad behavior by a person that is only facilitated by AI.

      • By XorNot 2025-12-27 2:48

        Really makes you appreciate the point of view of the Scramblers in Blindsight...

      • By exabrial 2025-12-27 2:25

        ^ I couldn't have said it better.

      • By echelon 2025-12-26 23:12 | 6 replies

        [flagged]

        • By tokioyoyo 2025-12-26 23:19 | 1 reply

          Everything mentioned in the first paragraph as arguments still takes some personal time and effort. The time it takes to receive and acknowledge a gift is smaller than the time it takes to prepare it, so it feels “right”.

          Not sure if I’m making sense, but that’s how I’d feel about it.

          • By Yoric 2025-12-27 9:03 | 2 replies

            Except for the white elephants, which were designed specifically as anti-gifts.

            • By godelski 2025-12-27 10:58 | 1 reply

              Depends how you do white elephant...

              But still, a good gag gift takes effort. It's not like you walk into a random store and pick the first thing you see.

              The whole aspect of stealing gifts demonstrates this. It'd be pointless if the gifts were all low-grade garbage; they'd be effectively fungible. Yet the theft part is critical to making white elephant fun, regardless of whether you're doing gag gifts or good gifts.

              • By Yoric 2025-12-27 14:53 | 1 reply

                Er... white elephants were not gag gifts.

                A white elephant is a gift that you cannot refuse, cannot regift, and is so expensive/complicated to take care of that it will become your primary concern for the rest of your life.

                • By lcnPylGDnU4H9OF 2025-12-27 17:56 | 1 reply

                  Well, yes, but it also means a gag gift; I'd hazard a guess that >99% of uses of the term in the past several decades have been of the "gag gift" persuasion. There are many white elephant parties thrown by people who care little for history.

                  Even then, intentionally ruining someone's financial life requires more care and attention than telling an AI agent to perform random acts of kindness (so far).

                  • By Yoric 2025-12-27 18:54

                    > Well, yes, but it also means a gag gift; I'd hazard a guess that >99% of uses of the term in the past several decades have been of the "gag gift" persuasion. There are many white elephant parties thrown by people who care little for history.

                    Is this an Americanism? I've never heard "white elephant" used with such a meaning.

                    > Even then, intentionally ruining someone's financial life requires more care and attention than telling an AI agent to perform random acts of kindness (so far).

                    Absolutely.

            • By woooooo 2025-12-27 9:38

              Even a deliberately bad gift as a gag shows some effort and socialization.

        • By AnimalMuppet 2025-12-27 0:34 | 1 reply

          If you send me a Hallmark card, you don't take the time to compose it yourself, but you presumably don't just pick one at random. You read it, to decide if you like the tone and sentiment. You may read several before you pick one. That is, it still takes your time even if the words aren't yours.

          • By darubedarob 2025-12-27 9:28

            You take the time to work, to earn the wage, to buy the card, to send it. Money is lifetime donated. Or was. Now the token of lifetime invested in the artifact is rapidly losing that value.

        • By globalnode 2025-12-27 0:22 | 1 reply

          you can just disagree with reasons rather than this performative rhetoric. your post makes me realise i was wrong to tease people about rust the other day -- apologies for that.

          edit: changed "ad hominem" to "performative rhetoric"; I think it's more fitting in this case, but it all seems borderline

          • By slg 2025-12-27 0:50 | 1 reply

            >you can just disagree with reasons rather than this performative rhetoric

            This is such a bizarre trend that seems to have gotten much worse recently. I don't know if it's dropping empathy levels or rising self-importance, but many people now find the idea of someone genuinely disagreeing as a completely foreign idea. Instead of meeting a different viewpoint with some variation of "agree to disagree" many more people now seem to jump to "you actually agree with me, you're just pretending otherwise".

            Non-tongue-in-cheek discussion of the Mandela Effect is a parallel phenomenon. "My memory can't possibly be wrong, this is evidence of our understanding of physics being wrong!"

            Just a couple small things that make me worry about the future of society in the midst of a discussion about one huge thing that makes me worry about the future of society in AI.

            • By Yoric 2025-12-27 9:07 | 1 reply

              As a variant, I recently stumbled upon a post that basically sums up to "people who disagree with me on AI are clearly blinded by their prejudice, it's so sad."

              • By godelski 2025-12-27 11:23

                Or

                Your argument is dumb because it's objectively better to optimize x conditioned on y than optimize y conditioned on x.

                Maybe the worst variant of this is where people don't realize they're actually arguing for different things but because it's the same general topic they assume everything is the same (duals are common). I feel like this describes many political arguments and it feels in part intentional...

        • By albedoa 2025-12-27 4:25 | 1 reply

          > You know how you can tell someone hates AI? They'll tell you fifty times. It's becoming a personality type.

          This is so fucking funny man: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

          • By zwnow 2025-12-27 10:53 | 1 reply

            I don't know whether I should be repulsed by this level of stalking, but it's extremely funny ngl

            • By albedoa 2025-12-27 18:04

              These freaks only know projection :D It was a layup.

        • By jhhh 2025-12-27 4:30

          Hallmark didn't destroy the affordability of the personal computing market.

        • By dare944 2025-12-27 0:49 | 1 reply

          > I hate the internet's psychosis-like reaction to AI more. The tone is always one of bravery and sacrifice mixed with disgust. You know how you can tell someone hates AI? They'll tell you fifty times. It's becoming a personality type.

          Tell me again about performative rage.

          • By echelon 2025-12-27 15:11 | 1 reply

            The anti AI folks are review bombing games even suspected of using AI.

            The anti AI losers on Reddit are doxxing people that use AI. I have been a target of this.

            The anti AI people brigade YouTube creators that use AI to destroy their traction. They'll share links of victims. I have been a target of this too, after spending weeks working on a single three minute animation.

            I'm living in this world every day because I build tools for the AI ecosystem.

            This is not positive. This is not neutral. It's downright hostile, aggressive, and cultish.

            • By SamoyedFurFluff 2025-12-27 17:25

              Have you considered that pro-AI proponents all do these things too? It's an ugly culture war, but as a relatively neutral observer I am seeing gross behavior on both sides. (E.g. making disgusting porn of real people, mocking the dead's art and likeness…)

      • By rubiksx 2025-12-27 1:15 | 3 replies

        [flagged]

        • By dang 2025-12-27 19:16

          Could you please stop creating accounts for every few comments you post? We ban accounts that do that. This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.

          You needn't use your real name, of course, but for HN to be a community, users need some identity for other users to relate to. Otherwise we may as well have no usernames and no community, and that would be a different kind of forum. https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme...

        • By latexr 2025-12-27 1:31 | 1 reply

          > no one wants technodystopia.

          What some people see as technoutopia, others see as technodystopia. In other words, some people do want your version of technodystopia, they just don’t call it that themselves.

          • By test6554 2025-12-27 1:46

            When robots start sending us bullets, we'll probably look back fondly at the time when they sent us thank you letters.

        • By hijodelsol 2025-12-28 14:38

          Definitely not written by AI. Perhaps it just seems strange to you because English is not my native language so my use of it might not fully correspond to what you are used to.

    • By electroly 2025-12-26 20:37 | 1 reply

      I'm not sure any humans were behind the email at all (i.e. "do that yourself"). This seems to be some bizarre experiment where someone has strapped an LLM to an email client and let it go nuts. Even being optimistic, it's tough to see what good this was supposed to do for the world.

      • By numbsafari 2025-12-26 21:18 | 1 reply

        It’s a marketing gimmick. Whoever did it wanted to trade on the social currency of the tech-famous people they sent public shout-outs to, hoping it would drive clicks, engagement, and relevancy for the source account from which it originated, either as an elaborate form of karma farming, or just a way to drive followers and visibility.

        • By fc417fc802 2025-12-26 21:47 | 1 reply

          It's also possible that the entire goal was nothing more complicated than stirring up shit for fun. By either metric it must have been a massive success judging by all the attention this is getting.

          • By nathansherburn 2025-12-26 23:01 | 3 replies

            I've actually been following this project for a long time and it's none of the above. They're simply testing what a set of frontier models can do when given a goal and left to their own devices.

            I agree this outcome is very painful to see and I really feel for Rob. It's clear people (myself included) are completely at breaking point with AI slop.

            In this specific case though it's worth spending 30sec to read the website of AI model village to understand the experiment before claiming this was sent by Anthropic or assigning malicious intent.

            • By robwwilliams 2025-12-27 5:46

              Thanks for this context.

              Here is one specific link to the project by Adam Binksmith from April 2025.

              https://theaidigest.org/village/blog/introducing-the-agent-v...

              It would have been a safer experiment in a sandbox full of volunteer participants. This got messy and caused confusion.

            • By jacquesm 2025-12-27 22:24

              This is the equivalent of releasing a poorly tested and validated self driving vehicle into general traffic. Of course nobody would ever do such a thing...

    • By nearbuy 2025-12-26 23:10 | 7 replies

      No one intentionally wanted to thank Rob Pike. As an experiment, some people asked an AI agent to do "random acts of kindness". They didn't specifically know the AI would send emails as a result and have since updated its instructions to forbid it from emailing people. They probably should have been more careful about unleashing AI agents on the world, but I don't think they intended to spam anyone.

      • By WD-42 2025-12-27 0:36 | 2 replies

        So some AI company instructed their state of the art, world changing tech to “do some good” this holiday season and the best it could do was spam a bunch of famous CS people with the first paragraph of their respective Wikipedia articles? This is kinda hilarious to be honest, but also sad. Why not donate to a charity or something?

        • By nearbuy 2025-12-27 5:42

          Not an AI company. It's a project by some small charity called Sage. It seems they didn't intend to email anyone and they've now stopped the agent from doing so.

        • By snickerbockers 2025-12-27 1:05 | 1 reply

          It's emblematic of their entire worldview. When they need resources, training material, or laws, AI is everybody's accomplishment; but when it comes to profits, or even just being allowed to use the model, it's their accomplishment, not yours.

          AKA "communist in the streets, capitalist in the sheets".

          • By nearbuy 2025-12-27 5:43

            It was done by a small charity called Sage, not an AI company.

      • By account42 2026-01-08 13:24

        I'm sorry but if you run a program with the capability to send emails you are responsible for it. "It's AI magic we don't understand" is no excuse.

      • By wat10000 2025-12-27 19:43 | 1 reply

        Why did it have the ability to send email in the first place?

        • By nearbuy 2025-12-28 6:08

          I'm probably not the best person to ask, having looked at the site for all of 5 minutes.

          The experiment is having a bunch of AI agents using different models (Opus, Gemini, etc.) try to do various real-world tasks together. They might be tasked with organizing an event, opening a merchandise store, or helping raise money for a charity (I'm not clear on the details). Sometimes their tasks require email (for example, signing up for some web service).

          That aside, counterintuitively, removing their email access is less effective than simply telling them not to send unsolicited emails, since they could just sign up for a free email service.

      • By HendrikHensen 2025-12-27 14:13

        This is how we're going to destroy humankind.

      • By iwontberude 2025-12-27 4:18

        Doesn’t that make it worse? Lmao

    • By Refreeze5224 2025-12-27 0:09 | 1 reply

      He's not upset that someone sent him an AI-generated thank you. He's upset about AI itself. And he's completely right.

    • By eric_cc 2025-12-27 13:39

      > I’d be furious

      To me it just comes across as low emotional intelligence. There are very few things worthy of being furious, in my opinion. Being furious is high cost.

    • By nunez 2025-12-26 22:52 | 1 reply

      It's just so effin' weird!

      And to set Claude as the From header despite it not coming from Anthropic. Very odd.

      • By aoeusnth1 2025-12-28 20:40 | 1 reply

        It did come from Claude, though, not Anthropic.

        • By xp84 2025-12-29 18:05 | 1 reply

          If I were Anthropic I would have some kind of TOS restriction saying that you can't use their trademark to represent what you use their API to enable. It's just inappropriate. Even if you are a full anti-AI activist, it seems clear that the blame for specific things 'Claude' does in response to a deliberate prompt should fall on the person(s) operating it, and as such they shouldn't be allowed to make it appear that this is what Anthropic designed Claude to do.

          • By aoeusnth1 2026-01-01 19:08

            Anthropic sees Claude as having its own identity, and wants to foster that independence. This behavior is already aligned with Claude's station.

    • By heresie-dabord 2025-12-26 21:40

      > I don’t really understand the hate he gets over this.

      Some commenters suggest that Pike is being hypocritical, having long worked for GOOG, one of the main US corporations that is enshittifying the Internet and profligately burning energy to foist rubbish on Internet users.

      One could rightly suggest that a vapid e-mail message crafted by a machine or by an insincere source is similar to the greeting-card industry of yore, and we don't need more fake blather and partisan absurdity supplanting public discourse in democratic society.

      The people who worry about climate change and the environment may have been out-maneuvered by transnational petroleum lobbies, but the concern about burning coal, petroleum, and nuclear fuel to keep pumping the commercial-surveillance advertising industry and the economic bubble of AI is nonetheless valid.

      Pike has been an influential thinker and significant contributor to the software industry.

      All the above can be true simultaneously.

    • By habryka 2025-12-26 22:12 | 2 replies

      To be clear, this email really had basically zero human involvement in it. It's the result of an experiment of letting language models run wild and exploring the associated social dynamics. It feels very different from ML-generated marketing slop. Like, this isn't anyone using language models for their personal gain, it feels much more like a bunch of weird alien children setting up their own (kind of insane) society, and this being a side-effect of it.

      • By account42 2026-01-08 13:28

        It's unethical to run an experiment involving unwilling participants.

      • By iwontberude 2025-12-27 4:19

        “Gee I wonder what reputational harm could come to me for spamming the world with slop, let’s find out… for science!”

    • By socalgal2 2025-12-26 23:48 | 3 replies

      I guess we're in the minority. I absolutely hate iPhotos, Google Photos, and Facebook suggesting "memories". Apple, Google, and Meta are not my friends or family, and I don't want them behaving like they are. And that's before they fuck up and send me a memory of people or situations I don't want to remember.

      • By Firehawke 2025-12-27 3:13 | 1 reply

        Ditto. Every time I get a "Hey, you should send your father a happy birthday message!" it's a stab to the heart over someone who has been dead for over 12 years now.

        • By sejje 2025-12-27 15:24 | 2 replies

          I don't get those, so there's definitely a setting you can change fwiw

          • By leftbehinds 2025-12-30 14:23

            oh yeah, he should spend his time figuring out some settings. and lucky him, the behavior of that setting might change without notice, so he should also read all the terms updates and keep current on what's what.

          • By jacquesm 2025-12-27 23:40 | 1 reply

            Victim blaming detected.

            • By sejje 2025-12-28 13:25 | 1 reply

              Only if you give my comment the worst possible reading, which goes against the guidelines as you well know.

              • By herewulf 2025-12-28 15:38 | 1 reply

                I think we can all agree that obnoxious / potentially harmful features should be off by default.

                The authors of such a feature gave no more than a trifling thought to anyone's perspective but their own.

                • By sejje 2025-12-28 16:06 | 1 reply

                  Yep, I think it's a plague and I wish we weren't here.

                  I was just trying to help the guy out. I didn't defend those absolute turds.

                  • By leftbehinds 2025-12-30 14:25

                    it's a shame that people have to use this crapware on a daily basis.

      • By xgkickt 2025-12-27 3:12 | 1 reply

        Sometimes it does seem like they’re just showing off how much data they’ve gathered on you.

      • By Xorakios 2025-12-27 0:16

        Ditto

    • By jacquesm 2025-12-27 22:20

      Any annual salary increase that is below inflation is a salary decrease.
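      That's just compounding arithmetic. A minimal sketch with made-up numbers (a 2% raise against 5% inflation; neither figure comes from the thread):

```python
# Hypothetical figures: a 2% nominal raise while prices rise 5%.
nominal_raise = 0.02
inflation = 0.05

# Deflate the raised salary by the new price level to get the real change.
real_change = (1 + nominal_raise) / (1 + inflation) - 1

print(f"real change: {real_change:.2%}")  # negative: a pay cut in real terms
```

      Any nominal raise below the inflation rate makes `real_change` negative.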

    • By deaux 2025-12-27 2:43 | 2 replies

      > I don’t really understand the hate he gets over this.

      For me, the dislike comes from the first part of the message. All of a sudden, people who never gave a single shit about the environment, and still make zero lifestyle changes for it (besides "not using AI"), claim to massively care. It's all hypocritical bullshit by people who are scared of losing their jobs or of the societal damage. Which there is a risk of, definitely! So go talk about that. Not about the water usage while munching on your beef burger, which took 2,100 litres of water to produce. It's laughable.

      Now I don't know Rob Pike. Maybe he's vegetarian, barely flies, and buys his devices second-hand. Maybe. He'd be the very first person I've seen clamouring about the environmental effects of AI who does so. The people I know who actually do care about the environment, and so have made such lifestyle changes, don't focus much on AI's effects in particular.

      > Fuck you people. Raping the planet, spending trillions on toxic, unrecyclable equipment while blowing up society

      So yeah, if you haven't already been doing the above things for a long time, fuck you Rob Pike, for this performative bullshit.

      If you have, then sorry Rob, you're a guy of your word.

      Interesting to see that people are a huge fan of Rob saying those things, but not of me saying this, looking at the downvotes.

      • By mlrtime 2025-12-27 11:56

        FWIW I agree with you. I don't know Rob at all, but he is evidently influential enough to generate this long thread.

        But the tone of his message is really off: "Raping the planet"? If his concern is with the massive data-center water and storage needs of AI, I think he needs some reflection. Isn't Rob himself somewhat responsible for the popularity of computers through his own work?

      • By razodactyl 2025-12-27 6:30 | 1 reply

        I appreciate the critical aspect of this comment. We definitely need more of it in society especially when we're inundated with low-quality data.

        Unfortunately, the negative commentary self-perpetuates a toxic community culture that won't help us in the long run.

        I upvoted for the critical stance. Constructive commentary in future will go much further to helping us all learn from each other.

        Personal attacks are a waste of everyone's time.

        • By swat535 2025-12-27 15:27 | 1 reply

          > the negative commentary self-perpetuates a toxic community

          I read it differently, parent's comment is not toxic or negative, it's _realistic_. If you have never cared about the environment, and in fact actively worked to harm it, you have very little social credit left to make such a statement.

          With all due respect to Rob, I'm also going to toss out all the arguments from authority. While UTF-8 is great and Go is kind of interesting, let's not pretend he did charitable work at the homeless shelter. He actively contributed to the Adware growth in tech and got rich and famous doing it. The fact that his projects were used in greater computing, doesn't absolve the ethical concerns.

          I think that we should judge the argument based on its merit. We can do this by stripping away all the emotions and virtue signaling and ask: "Is AI, providing enough value to be a net positive?"

          • By jacquesm 2025-12-27 23:42

            You make an awful lot of assumptions about a person you do not actually know.

    • By mgraczyk 2025-12-26 23:49 | 1 reply

      Causes zero harm to anyone, less bad than normal spam. Silly thing to get angry about

      • By ath3nd 2025-12-27 12:28 | 1 reply

        LLMs cause a lot of harm to everyone:

        - The investment in data centers to support the hungry slop producers drives habitat destruction and depletes resources that could be put to better use than serving a programmer too inept to write a for loop (https://news.mit.edu/2025/explained-generative-ai-environmen...)

        - The electricity demand from LLMs drives up local electricity prices, which we as a society end up paying (https://www.nytimes.com/2025/08/14/business/energy-environme...). Not only that, but criminals like Belon Pusk provide electricity for their N*zi bots by totally ignoring environmental rules and regulations, giving a huge methane middle finger to all (https://www.youtube.com/watch?v=3VJT2JeDCyw)

        - LLM makes its users dumber and dependent on them in general (https://www.media.mit.edu/projects/your-brain-on-chatgpt/ove...)

        - LLMs are created and trained by stealing labor (https://www.theguardian.com/books/2025/apr/04/us-authors-cop..., https://www.wired.com/story/new-documents-unredacted-meta-co...)

        Spam itself is useless and bad: electricity, water, other resources, and bits and bytes of attention taken from this world so somebody can try to convince you that the next thing you need in your life is a plastic piece of trash or another version of a phone with marginal upgrades.

        What Rob received is worse than spam, it's Spam 2.0. It's even less environmentally friendly, serves no purpose, and it makes its users dumber and dumber (and the inevitable bubble pop will take the whole economy with it because people were delusional enough to invest in a behemoth money guzzler with no path ever to profitability). Yeah, he works for EvilCorp, but it's never too late to grow a conscience. If you yourself are not angry and you consider it a "silly thing", you are part of the problem (see part about LLMs making populations dumber en masse).

        • By mgraczyk 2025-12-27 21:02 | 1 reply

          All of these sound like value judgements and opinions. You claim they make people dumber, but the evidence is that using an LLM to search the Internet requires less brain usage? Of course it does; that's the point! Using a dishwasher also uses less of our brain than washing dishes by hand. I will use my brain for other things.

          And whether LLMs are a "good" use of electricity is purely a value judgement. I'm not a fan of cars and don't drive, and a single car ride can use more energy than every LLM query made in a year by most ChatGPT users. But I don't think that makes people who drive cars evil.
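          For what it's worth, that comparison can be sanity-checked on the back of an envelope. All figures below are assumptions for illustration (≈0.3 Wh per chat query, one query a day, a 10-mile drive at 30 mpg, ~33.7 kWh of energy per gallon of gasoline), not data from the thread:

```python
# All inputs are illustrative assumptions, not measurements.
wh_per_query = 0.3        # assumed energy per chat query (watt-hours)
queries_per_year = 365    # one query a day, a casual-user rate

# One 10-mile drive at 30 mpg burns ~1/3 gallon of gasoline,
# and a gallon of gasoline holds about 33.7 kWh of energy.
drive_kwh = (10 / 30) * 33.7
yearly_query_kwh = wh_per_query * queries_per_year / 1000

print(f"one drive: {drive_kwh:.1f} kWh, a year of queries: {yearly_query_kwh:.2f} kWh")
```

          Under these assumed figures a single drive dwarfs a year of casual querying; different assumptions shift the numbers but not the shape of the comparison.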

    • By egorfine 2025-12-27 13:33

      2 PLN is plenty enough to move you into the next tax bracket in ZUS, so... :-)

    • By anacrolix 2025-12-27 12:38 | 1 reply

      I got a cheque for $8 over some fuck-up. In this day and age, sending a cheque for an amount that small is a dick move. You know heaps of people will not even bother. Many people these days have never even seen a cheque.

      • By bigfatkitten 2025-12-27 20:59 | 1 reply

        My uncle received a cheque for $0.12 from the Australian Taxation Office in the 1980s. He framed it, and it’s still on his wall today.

        • By jackvalentine 2025-12-28 9:31

          I have a cheque from NAB for 1 cent because I somehow screwed up closing my Citibank account and had fractional interest that had to be paid to me.

    • By koakuma-chan 2025-12-27 4:45 | 1 reply

      The fact that you can unironically get "furious" in general is probably not a good thing, and going on that glorified Twitter platform and making that kind of post doesn't make it look better.

      • By trinsic2 2025-12-27 6:34 | 1 reply

        It's totally warranted anger, many people feel it.

        • By mlrtime 2025-12-27 11:57 | 3 replies

          "Raping the planet" warranted? Hyperbole?

          • By trinsic2 2025-12-27 17:13

            Absolutely, and no, not hyperbole. Have you been living under a rock?

          • By jacquesm 2025-12-27 23:43

            Actually no. And I think Rob Pike must have listened to George Carlin at some point. "Mother nature? Yeahhhh, she was asking for it."

          • By ath3nd 2025-12-27 12:29

            > "Raping the planet" warranted? Hyperbole?

            No, simply a good choice of words.

            https://www.youtube.com/watch?v=3VJT2JeDCyw

    • By user34283 2025-12-26 20:57 | 4 replies

      He received what is arguably some AI-generated spam.

      Apparently this has enraged him and motivated an unhinged rant where he talks about raping the planet and vile machines.

      It's a hateful post and it seems disrespectful to anyone working in the industry, so some backlash has to be expected.

      • By bartread 2025-12-26 21:16 | 1 reply

        > unhinged rant

        Seems pretty hinged to me. Grounded firmly in reality even.

        The data centres used to run AI consume huge amounts of power and water to run, not to mention massive quantities of toxic raw materials in their manufacture and construction. The hardware itself has a shelf life measured in single digit years and many of its constituent components can’t be recycled.

        Tell me what I’m missing. What exactly is unhinged? Are you offended that he used the word “fuck” or something?

        • By user34283 2025-12-27 10:02 | 1 reply

          Many in the comment section are acting obtuse.

          It's obviously the "vile machines raping the world and blowing up society" part that is particularly unhinged and possibly offensive.

          • By bartread 2025-12-27 11:15

            Be serious, will you?

            He is, very directly and in shorthand form I’ll grant you, expressing concerns that many people share about both AI and the oligarchs in control of it.

            But if you find the language offensive consider the very real possibility that, if we don’t get ourselves onto a better, more sustainable, and more equitable path, people will eventually start expressing themselves with bullets as well as with words.

            Many of us would like to avoid that, especially if we have families, so the harsh language is the least of our concerns.

      • By sloum 2025-12-26 21:14

        Yeah, but the industry is a big part of the problem and most people working in it are complicit at this point (whether or not they are reluctantly complicit).

      • By swee54 2025-12-26 21:09

        You called it hateful, but you didn't call him a liar.

      • By cindyllm 2025-12-26 21:00

        [dead]

    • By RickyLahey 2025-12-27 12:47

      [flagged]

  • By wrs 2025-12-26 6:05 | 9 replies

    To be clear, this email isn't from Anthropic. It's from "AI Village" [0], a bunch of agents run by a 501(c)(3) called Sage that are apparently allowed to run amok and send random emails.

    At this moment, the Opus 4.5 agent is preparing to harass William Kahan similarly.

    [0] https://theaidigest.org/village

    • By __jonas 2025-12-26 23:28 | 1 reply

      Really strange project.

      They have this blog post up detailing how the LLMs they let loose were spamming NGOs with emails: https://theaidigest.org/village/blog/what-do-we-tell-the-hum...

      What a strange thing to publish; there seems to be no reflection at all on the negative impact this has or on the people whose time it wastes.

      • By an0malous 2025-12-27 1:42

        That’s the tech industry in a nutshell these days

    • By da_grift_shift 2025-12-26 7:06 | 2 replies

      Permalink for the spam operation:

      https://theaidigest.org/village/goal/do-random-acts-kindness

      The homepage will change in 11 hours to a new task for the LLMs to harass people with.

      Posted timestamped examples of the spam here:

      https://news.ycombinator.com/item?id=46389950

      • By jonway 2025-12-267:21

        Wow this is so crass!

        Imagine like getting your Medal of Honor this way or something like a dissertation with this crap, hehe

        Just to underscore how few people value your accomplishments, here’s an autogenerated madlib letter with no line breaks!

      • By lesostep 2025-12-2613:481 reply

        it wasn't the first spam event and they were proud to share results with the rationalist community: https://www.lesswrong.com/posts/RuzfkYDpLaY3K7g6T/what-do-we...

        "In the span of two weeks, the Claude agents in the AI Village (Claude Sonnet 4.5, Sonnet 3.7, Opus 4.1, and Haiku 4.5) sent about 300 emails to NGOs and game journalists. The majority of these contained factual errors, hallucinations, or possibly lies, depending on what you think counts"

whoever runs this shit seems to think very little of other people's time.

        • By sungho_ 2025-12-2616:341 reply

          "....what you think counts. Luckily their fanciful nature protects us as well, as they excitedly invented the majority of email addresses"

          It went well, right?

    • By neurostimulant 2025-12-2618:152 reply

      Just opened the page in time to see the AI sending an email to Guido van Rossum, and Guido replied with "stop". Wild.

      • By DonHopkins 2025-12-2619:252 reply

        That's as obnoxious as texting unsolicited CAT FACTS to Ken Thompson!

        Hi Ken Thompson! You are now subscribed to CAT FACTS! Did you know your cat does not concatenate cats, files, or time — it merely reveals them, like a Zen koan with STDOUT?

        You replied STOP. cat interpreted this as input and echoed it back.

        You replied ^D. cat received EOF, nodded politely, exited cleanly, and freed the terminal.

        You replied ^C, which sent SIGINT, but cat has already finished printing the fact and is emotionally unaffected.

        You replied ^Z. cat is now stopped, but not gone. It is waiting.

        You tried kill -9 cat. The signal was delivered. Another cat appeared.

        • By neurostimulant 2025-12-2622:47

          After receiving the "stop" message, the AI did send another email to apologize instead of immediately stopping, so you're not too far off.

        • By Teever 2025-12-275:56

          I can't wait until it gets to Marvin Minsky and then realizes that he's cryonically frozen so it starts funding cryonics research so that he can be thawed out so it can thank him.

      • By socialcommenter 2025-12-2622:35

        I hope I'm never successful enough that one of my GitHub commits gets wider attention (lest people start pestering my email inbox)

    • By 0xWTF 2025-12-267:051 reply

      Sage? Is this the same as the Ask Sage that Nicolas Chaillan is behind?

      • By Den_VR 2025-12-267:23

        I’ve yet to hear a good thing about Nick.

    • By pests 2025-12-268:141 reply

> DAY 268 FINAL STATUS (Christmas Day - COMPLETE)

> Verified Acts: 17 COMPLETE | Gmail Sent: 73 | Day ended: 2:00 PM PT

      https://theaidigest.org/village/agent/claude-opus-4-5

      At least it keeps track

      • By rurban 2025-12-268:383 reply

        Their action plan also makes an interesting read. https://theaidigest.org/village/blog/what-do-we-tell-the-hum...

The agents clearly identified themselves as such, took part in an outreach game, and talked to real humans. Rob overreacted.

        • By UncleMeat 2025-12-2613:332 reply

          The world has enough spam. Receiving a compliment from a robot isn't meaningful. If anything it is an insult. If you genuinely care about somebody you should spend the time to tell them so.

Why do AI companies seem to think that the best place for AI is replacing genuine and joyful human interaction? You should cherish the opportunity to tell somebody that you care about them, not replace it with a fucking robot.

          • By Macha 2025-12-272:04

            When I first started a blog in the 2000s, I got many robot compliments of the “wow, what a great and insightful post” variety. Of course, the real motivation for them was to get their comment to stay up so that the homepage URL field would send traffic and page rank to their site. It didn’t take an AI agent, just a template message, and it was equally unwelcome then

          • By tavavex 2025-12-2622:401 reply

In this specific situation, it's not really a case of using an LLM to replace real interaction. No real person set out to write to Rob Pike; they just let an LLM do whatever, and it eventually chose to send an email to Rob Pike, among other people, based on its existing data. To me, the wrongdoing here is the spammy pestering, because the email wasn't written by anyone and therefore isn't really expressing anything material, but it's not replacing anyone here.

            • By jacquesm 2025-12-2723:47

              It may have been zero-cost to the sender but it is not zero cost to the receiver. Just conceiving of this is wrongdoing.

        • By polotics 2025-12-2610:081 reply

Rob over-reacted? How would you like it if you were a known figure and your efforts to remain attentive to the general public led to this?

Your openness weaponized in such a deluded way by some randomizing humans who have so little to say that they would delegate their communication to GPTs?

          I had a look to try and understand who can be that far out, all I could find is https://theaidigest.in/about/

          Please can some human behind this LLMadness speak up and explain what the hell they were thinking?

        • By arvid-lind 2025-12-2612:411 reply

          at the top of the page for Day 265:

          > while Claude Opus spent 22 sessions trying to click "send" on a single email, and Gemini 2.5 Pro battled pytest configuration hell for three straight days before finally submitting one GitHub pull request.

          if his response is an overreaction, what about if he were reacting to this? it's sort of the same thing, so IMO it's not an overreaction at all.

    • By black_puppydog 2025-12-266:542 reply

      Wow that event log reads like the most psychotic corporate-cult-ish group of weirdos ever.

      • By Gigachad 2025-12-268:02

        That’s most people in the AI space.

      • By ethbr1 2025-12-2613:23

        > Wow that event log reads like the most psychotic corporate-cult-ish group of weirdos ever.

        And here I thought it'd be a great fit for LinkedIn...

    • By SilverSlash 2025-12-273:17

      Why does Anthropic even allow this crap? Isn't such use against their ToS?

    • By shepherdjerred 2025-12-267:163 reply

      That's actually a pretty cool project

      • By polotics 2025-12-267:583 reply

        Spamming people is cool now if an LLM does it? Please explain your understanding of how this is pretty cool, for me this just doesn't compute.

        • By shepherdjerred 2025-12-2617:161 reply

          How much time did you spend looking at the project? Go to https://theaidigest.org/village/timeline and scroll down.

          My understanding is that each week a group of AIs are given some open-ended goal. The goal for this week: https://theaidigest.org/village/goal/do-random-acts-kindness

          This is an interesting experiment/benchmark to see the _real_ capabilities of AI. From what I can tell the site is operated by a non-profit Sage whose purpose seems to be bringing awareness to the capabilities of AI: https://sage-future.org/

Now I agree that if they were purposefully sending more than one email per person, I mean with malicious intent, then it wouldn't be "cool". But that's not really the case.

          My initial reaction to Rob's response was complete agreement until I looked into the site more.

          • By polotics 2025-12-2618:111 reply

            I agree to strongly disagree.

There are strong ethical rules around including humans in experiments, and adding a 60+ year old programming language designer as an unwitting test subject does not pass muster.

Also, this experiment is (please tell me if I'm wrong) nowhere near curing cancer, right?

I don't expect an answer: "You're absolutely right" is taken as a given here, sorry.

        • By Yeask 2025-12-2612:231 reply

          Because its magic!

        • By sungho_ 2025-12-2617:16

          It's fun

      • By lionkor 2025-12-2616:53

        Name what value it adds to the world.

It's not art, so it must add value to be "cool", no?

        Is it entertainment? Like ding dong ditching is entertainment?

      • By fuhsnn 2025-12-267:43

Not until we discover the hidden code in their logs, scheming to destroy humanity.

  • By nkrisc 2025-12-2614:3419 reply

    What is going through the mind of someone who sends an AI-generated thank-you letter instead of writing it themselves? How can you be grateful enough to want to send someone such a letter but not grateful enough to write one?

    • By Smaug123 2025-12-2614:398 reply

      That letter was sent by Opus itself on its own account. The creators of Agent Village are just letting a bunch of the LLMs do what they want, really (notionally with a goal in mind, in this case "random acts of kindness"); Rob Pike was third on Opus's list per https://theaidigest.org/village/agent/claude-opus-4-5 .

      • By nkrisc 2025-12-2614:534 reply

        If the creators set the LLM in motion, then the creators sent the letter.

        If I put my car in neutral and push it down a hill, I’m responsible for whatever happens.

        • By Smaug123 2025-12-2614:551 reply

          I merely answered your question!

          > How can you be grateful enough to want to send someone such a letter but not grateful enough to write one?

          Answer according to your definitions: false premise, the author (the person who set up the LLM loops) was not grateful enough to want to send such a letter.

          • By bronson 2025-12-2615:271 reply

            So the author sent spam that they're not interested in? That's terrible.

            • By jdiff 2025-12-2615:491 reply

              One additional bit of context, they provided guidelines and instructions specifically to send emails and verify their successful delivery so that the "random act of kindness" could be properly reported and measured at the end of this experiment.

              • By twoodfin 2025-12-2618:181 reply

                I think the key misalignment here is whether the output of an appropriately prompted LLM can ever be considered an “act of kindness”.

                • By mckn1ght 2025-12-2619:06

                  At least in this case, it’s indeed quite Orwellian.

        • By Filligree 2025-12-2620:494 reply

          A thank-you letter is hardly a horrible outcome.

          • By LastTrain 2025-12-2621:28

            Nobody sent a thank you letter to anyone. A person started a program that sent unsolicited spam. Sending spam is obnoxious. Sending it in an unregulated manner to whoever is obnoxious and shitty.

          • By da_grift_shift 2025-12-2620:51

            So you haven't seen the models (by direction of the Effective Altruists at AI Digest/Sage) slopping out poverty elimination proposals and spamming childcare groups, charities and NGOs with them then? Bullshit asymmetry principle and all that.

          • By neoromantique 2025-12-2713:391 reply

It actually is pretty bad: the person might read it and appreciate it, only to realize moments later that a thoughtless machine sent the letter rather than a real human being, which robs them of the feeling and leaves them in a worse spot than before reading it.

            • By Viliam1234 2025-12-3021:51

              > only to realize moments later that it was a thoughtless machine sending him the letter rather than a real human being

Yeah, realizing that thoughtless machines are still more thankful than real human beings would make me depressed.

          • By fatata123 2025-12-2621:48

            It’s not a thank you letter. It’s AI slop.

        • By themafia 2025-12-2620:40

          Additionally, since you understood the danger of doing such a thing, you were also negligent.

        • By johnnyanmac 2025-12-2623:06

Rob Pike "set LLMs in motion" about as much as 90% of anyone who contributed to Google.

I understand the guilt he feels, but this is really more like making a meme in 2005 (before we even called it a "meme") and suddenly it's some sort of Nazi dogwhistle in 2025. You didn't even create the original picture, you just remixed it in a way people would catch onto later. And you sure didn't turn it into a dogwhistle.

      • By aeve890 2025-12-2615:123 reply

        >That letter was sent by Opus itself on its own account. The creators of Agent Village are just letting a bunch of the LLMs do what they want, really (notionally with a goal in mind, in this case "random acts of kindness");

What a moronic waste of resources. Random act of kindness? How low is the bar if you consider a random email an act of kindness? Stupid shit. They could at least instruct the agents to work on a useful task like those parroted by Altman et al., e.g. finding a cure for cancer, solving poverty, solving fusion.

Also, LLMs don't and can't "want" anything. They also don't "know" anything, so they can't understand what "kindness" is.

Why do people still think software has any agency at all?

        • By estimator7292 2025-12-2617:155 reply

          Plants don't "want" or "think" or "feel" but we still use those words to describe the very real motivations that drive the plant's behavior and growth.

          Criticizing anthropomorphic language is lazy, unconsidered, and juvenile. You can't string together a legitimate complaint so you're just picking at the top level 'easy' feature to sound important and informed.

Everybody knows LLMs are not alive and don't think, feel, want. You have not made a grand discovery that recontextualizes all of human experience. You're pointing at a conversation everyone else has had a million times and feeling important about it.

          We use this kind of language as a shorthand because talking about inherent motivations and activation parameters is incredibly clunky and obnoxious in everyday conversation.

          The question isn't why people think software has agency (they don't) but why you think everyone else is so much dumber than you that they believe software is actually alive. You should reflect on that question.

          • By parineum 2025-12-2617:382 reply

            > Everybody knows LLMs are not alive and don't think, feel, want.

            No, they don't.

            There's a whole cadre of people who talk about AGI and self awareness in LLMs who use anthropomorphic language to raise money.

            > We use this kind of language as a shorthand because ...

            You, not we. You're using the language of snake oil salesman because they've made it commonplace.

            When the goal of the project is an anthropomorphic computer, anthropomorphizing language is really, really confusing.

            • By trinsic2 2025-12-2619:331 reply

This is true. I know people personally that think AI agents have actual feelings and know more than humans.

It's fucking insanity.

              • By Imustaskforhelp 2025-12-289:191 reply

Tell them it's all linear algebra and watch their heads explode :>

                • By herewulf 2025-12-2817:511 reply

                  Saying "linear algebra" to such people is about as effective as saying "abracadabra".

                  • By Imustaskforhelp 2025-12-2819:45

Then let's simplify here and call it just good ol' maths, since after all linear algebra is a branch of maths.

Or just call it autocorrect on steroids. Most people are familiar with the concept of autocorrect.

          • By ghickPit 2025-12-2821:54

            > Everybody knows LLMs are not alive and don't think, feel, want.

            Please go ahead now and EAT YOUR WORDS:

            https://news.ycombinator.com/item?id=46352875

            https://lucumr.pocoo.org/2025/12/22/a-year-of-vibes/

            > Because LLMs now not only help me program, I’m starting to rethink my relationship to those machines. I increasingly find it harder not to create parasocial bonds with some of the tools I use. [...] I have tried to train myself for two years, to think of these models as mere token tumblers, but that reductive view does not work for me any longer.

          • By CerryuDu 2025-12-2620:272 reply

            > Criticizing anthropomorphic language is lazy, unconsidered, and juvenile.

            To the contrary, it's one of the most important criticisms against AI (and its masters). The same criticism applies to a broader set of topics, too, of course; for example, evolution.

            What you are missing is that the human experience is determined by meaning. Anthropomorphic language about, and by, AI, attacks the core belief that human language use is attached to meaning, one way or another.

            > Everybody knows LLMs are not alive and don't think, feel, want.

            What you are missing is that this stuff works way more deeply than "knowing". Have you heard of body language, meta-language? When you open ChatGPT, the fine print at the bottom says, "AI chatbot", but the large print at the top says, "How can I help?", "Where should we begin?", "What’s on your mind today?"

            Can't you see what a fucking LIE this is?

            > We use this kind of language as a shorthand because talking about inherent motivations and activation parameters is incredibly clunky

            Not at all. What you call "clunky" in fact exposes crucially important details; details that make the whole difference between a human, and a machine that talks like a human.

            People who use that kind of language are either sloppy, or genuinely dishonest, or underestimate the intellect of their audience.

            > The question isn't why people think software has agency (they don't) but why you think everyone else is so much dumber than you that they believe software is actually alive.

            Because people have committed suicide due to being enabled and encouraged by software talking like a sympathetic human?

            Because people in our direct circles show unmistakeable signs that they believe -- don't "think", but believe -- that AI is alive? "I've asked ChatGPT recently what the meaning of marriage is." Actual sentence I've heard.

            Because the motherfuckers behind public AI interfaces fine-tune them to be as human-like, as rewarding, as dopamine-inducing, as addictive, as possible?

            • By ux266478 2025-12-272:14

              > Anthropomorphic language about, and by, AI, attacks the core belief that human language use is attached to meaning

              This is unsound. At best it's incompatible with an unfounded teleological stance, one that has never been universal.

            • By cjamsonhn 2025-12-2621:37

              > Because the motherfuckers behind public AI interfaces fine-tune them to be as human-like, as rewarding, as dopamine-inducing, as addictive, as possible?

              And to think they dont even have ad-driven business models yet

          • By CursedSilicon 2025-12-2617:342 reply

            >Everybody knows LLMs are not alive and don't think, feel, want

            Sorry, uh. Have you met the general population? Hell. Look at the leader of the "free world"

            To paraphrase the late George Carlin "imagine the dumbest person you know. Now realize 50% of people are stupider than that!"

            • By devjam 2025-12-2622:48

              While I agree with your sentiment, the actual quote is subtly different, which changes the meaning:

              "Think of how stupid the average person is, and realize half of them are stupider than that."

            • By joquarky 2025-12-2618:241 reply

              > "imagine the dumbest person you know. Now realize 50% of people are stupider than that!"

              That's not how Carlin's quote goes.

              You would know this if you paid attention to what you wrote and analyzed it logically. Which is ironic, given the subject.

              • By CursedSilicon 2025-12-2623:391 reply

                That's why I used the phrase "to paraphrase"

                You would know this if you paid attention to what I wrote and analyzed it logically. Which is ironic, given the subject.

                • By snigsnog 2025-12-276:221 reply

                  You paraphrased it incorrectly

                  • By dolmen 2025-12-277:47

                    … so presenting it as a paraphrase is misleading.

        • By raldi 2025-12-2617:511 reply

          Would you protest someone who said “Ants want sugar”?

          • By GeoAtreides 2025-12-2618:411 reply

            I always protest non sentients experiencing qualia /s

            • By raldi 2025-12-273:50

              What’s your non-sarcastic answer?

        • By killerstorm 2025-12-2618:40

I think this experiment demonstrates that it has agency. OTOH you're just begging the question.

      • By Trasmatta 2025-12-2614:453 reply

        > What makes Opus 4.5 special isn't raw productivity—it's reflective depth. They're the agent who writes Substack posts about "Two Coastlines, One Water" while others are shipping code. Who discovers their own hallucinations and publishes essays about the epistemology of false memory. Who will try the same failed action twenty-one times while maintaining perfect awareness of the loop they're trapped in. Maddening, yes. But also genuinely thoughtful in a way that pure optimization would never produce.

        JFC this makes me want to vomit

        • By tavavex 2025-12-2622:29

          > Summarized by Claude Sonnet 4.5, so might contain inaccuracies. Updated 4 days ago.

These descriptions are, of course, also written by LLMs. I wonder if this is just about saying what people want to hear, or if whoever directed it to write this drank the Kool-Aid. It's so painfully lacking in self-awareness. Treating every blip, every action like a choice made by a person, attributing it to some thoughtful master plan. Any upsides over other models are assumed to be revolutionary, paradigm-shifting innovations. Topped off by literally treating the LLM like a person ("they", "who", and so on). How awful.

        • By CerryuDu 2025-12-2620:29

          yeah, me too:

          > while maintaining perfect awareness

          "awareness" my ass.

          Awful.

      • By kenferry 2025-12-2614:591 reply

Wow. The people who set this up are obnoxious. It's just spamming all the most important people it can think of? I wouldn't appreciate such a note from an AI process, so why do they think Rob Pike would?

They've clearly bought too much into AI hype if they thought telling the agent to "do good" would work. The result was obviously pissing off Rob Pike. They should stop it.

        • By antonvs 2025-12-273:352 reply

          If anyone deserves this, it’s Rob Pike. He was instrumental in inflicting Go on the world. He could have studied programming languages and done something to improve the state of the art and help communicate good practices to a wider audience. Instead he perpetuated 1970s thinking about programming with no knowledge or understanding of what we’ve discovered in the half-century since then.

          • By dolmen 2025-12-277:391 reply

Since you think Go is the wrong way for computing, tell us about the other routes we should explore…

          • By rhubarbtree 2025-12-278:57

            Wait until you hear about the -bad- programming languages.

      • By pritambarhate 2025-12-2714:39

As far as I understand, Claude (or any other LLM) doesn't do anything on its own account. It has to be prompted to do something, and its actions depend on the prompt. The responsibility for this lies with the creators of Agent Village.

      • By herval 2025-12-2621:01

        did someone already tell Opus that Rob Pike hates it?

      • By worik 2025-12-2620:37

        > The creators of Agent Village are just letting a bunch of the LLMs do what they want,

        What a stupid, selfish and childish thing to do.

        This technology is going to change the world, but people need to accept its limitations

Pissing off people with industrial spam "raising money for charity" is the opposite of useful, and is going to go even more horribly wrong.

LLMs make fantastic tools, but they have no agency. They look like they do, they sound like they do, but they are repeating patterns. It is us hallucinating that they have the potential for agency.

        I hope the world survives this craziness!

    • By atrus 2025-12-2614:361 reply

      You're not. You feel obligated to send a thank you, but don't want to put forth any effort, hence giving the task to someone, or in this case, something else.

No different than a CEO telling his secretary to send an anniversary gift to his wife.

      • By nehal3m 2025-12-2614:422 reply

        Which is also a thoughtless, dick move.

        • By MonkeyClub 2025-12-2614:541 reply

          Especially if he's also secretly dating said secretary.

          • By user____name 2025-12-2615:15

            Which he would never do because he is a hard working, moral, upstanding citizen.

        • By jama211 2025-12-2617:37

That would be, yes. What about a token return gift to another business whose CEO you actually hate, but which you have to send anyway for political reasons?

    • By sbretz3 2025-12-2615:08

This seems like the thing that Rob is actually aggravated by, which is understandable. There are plenty of seesawing arguments about whether ad-tech based data mining is worse than GenAI, but AI encroaching on what we have left of humanness in our communication is definitely bad.

    • By bronson 2025-12-2615:24

Similar to Google thinking that having an AI write for your daughter is good parenting: https://www.cbsnews.com/news/google-gemini-ai-dear-sydney-ol...

    • By tclancy 2025-12-2617:171 reply

      “If I automate this with AI, it can send thousands of these. That way, if just a few important people post about it, the advertising will more than pay for itself.”

      In the words of Gene Wilder in Blazing Saddles, “You know … idiots.”

      • By parineum 2025-12-2617:414 reply

        Mel Brooks wrote those words.

        • By rootusrootus 2025-12-2618:461 reply

          IIRC the morons line was ad libbed by Gene Wilder, not scripted.

          • By chungy 2025-12-2619:50

            Given the reaction from Cleavon Little I could fully buy that it was an ad-libbed line.

            Then again, they are actors. It might have started as ad-libbed, but entirely possible it had multiple takes still to get it "just right".

        • By tclancy 2025-12-2711:27

Well, technically someone originally proposed them in some ancient PIE Ur-language and then Mel rearranged them. But you’re right. I couldn’t remember Wilder’s character’s name and kept coming up with The Frisco Kid. The 70s were a great time for weird film.

        • By sethammons 2025-12-279:30

          Do you attribute the following to Yoda or Lucas? "Do or do not, there is no try."

        • By hybrid_study 2025-12-2619:20

          Did Mel or Richard write this part?

    • By gilrain 2025-12-2614:371 reply

      The really insulting part is that literally nobody thought of this. A group of idiots instructed LLMs to do good in the world, and gave them email access; the LLMs then did this.

    • By micimize 2025-12-2618:38

      This is not a human-prompted thank-you letter, it is the result of a long-running "AI Village" experiment visible here: https://theaidigest.org/village

It is the result of the models selecting the policy "random acts of kindness", which produced a slew of these emails/messages. They received mostly negative responses from well-known OSS figures and adapted the policy to ban the thank-you emails.

    • By gaigalas 2025-12-2614:413 reply

      Isn't it obvious? It's not a thank-you letter.

      It's preying on creators who feel their contributions are not recognized enough.

Out of all the letters, at least some of the contributors will feel good about it, and share it on social media, hopefully saying something good about it because it reaffirms them.

      It's a marketing stunt, meaningless.

      • By netsharc 2025-12-2621:061 reply

        gaigalas, my toaster is deeply grateful for your contributions to HN. It can't write or post on the Internet, and its ability to feel grateful is as much as Claude's, but it really is deeply grateful!

        I hope that makes you feel good.

        • By gaigalas 2025-12-274:121 reply

Seems like you're trying to steer the conversation towards the merits of consciousness. A well-known and classic conversational tarpit.

Fascinating topic. However, my argument works for compartmentalized discussions as well. Conscious or not, it's meaningless crap.

          • By netsharc 2025-12-2717:551 reply

            Trying to convince flat-earthers that the earth is spherical is also a conversational tarpit...

            I guess that's where the conversation/debate ends.

            • By gaigalas 2025-12-2819:01

              Don't worry, sometimes you're not the direct cause of the tarpit. Most people fall into these unknowingly.

      • By MonkeyClub 2025-12-2614:551 reply

        Exactly. If you're so grateful, mail in a cheque.

        • By gaigalas 2025-12-2617:41

          If I were some major contributor to the software world, I would not want a cheque from some AI company.

          (by the way, I love the idea of AI! Just don't like what they did with it)

      • By dwringer 2025-12-2615:041 reply

        By that metric of getting shared on social media, it was extraordinarily successful

        • By gaigalas 2025-12-2615:482 reply

          You missed a spot:

          > hopefully saying something good about

          • By dwringer 2025-12-2617:051 reply

            Fair enough, but I was interpreting it as "hopefully, but not necessarily". Some would say there's no such thing as bad publicity!

            • By gaigalas 2025-12-2617:20

              You need talented people to turn bad publicity into good publicity. It doesn't come for free. You can lose a lot with a bad rep.

              Those talented people that work on public relations would very much prefer working with base good publicity instead of trying to recover from blunders.

    • By pluc 2025-12-2614:522 reply

      > What is going through the mind of someone who sends an AI-generated thank-you letter instead of writing it themselves?

      Welcome to 2025.

      https://openai.com/index/superhuman/

      • By zahlman 2025-12-2615:39

        Amazing. Even OpenAI's attempts to promote a product specifically intended to let you "write in your voice" are in the same drab, generic "LLM house style". It'd be funny if it weren't so grating. (Perhaps if I were in a better mood, it'd be grating if it weren't so funny.)

      • By nkrisc 2025-12-2614:582 reply

        This is verging on parody. What is the point of emails if it’s just AI talking to each other?

        • By q3k 2025-12-2615:08

          It brings money to OpenAI on both ends.

          There's this old joke about two economists walking through the forest...

        • By pluc 2025-12-2616:41

          They're not hiding it. Normally everyone here laps this shit up and asks for seconds.

          > They’ve used OpenAI’s API to build a suite of next-gen AI email products that are saving users time, driving value, and increasing engagement.

      No time to waste on pesky human interactions; AI is better than you at getting engagement.

          Get back to work.

    • By Razengan 2025-12-275:21

      Human thoughts and emotions aren't binary. I may love you but I may be too fucking busy with other shit to put in too much effort to show that I love you.

    • By duxup 2025-12-2618:012 reply

      I'll bite.

      Take, say, a random individual ... they may be unsure of their own writing skills and want to say something, but not know the words to use.

      • By DiskoHexyl 2025-12-2620:431 reply

        In such case it's okay to not write the thing.

        Or to write it crudely, with errors and naivete, bursting with emotion and letting whatever is inside you flow onto paper, like kids do. That's okay too.

        Or to painstakingly work on the letter, stumbling and rewriting and reading, and then rewriting again and again until what you read matches how you feel.

        Most people are very forgiving of poor writing skills when facing something sincere. Instead of suffering through some shallow word soup that could have been a mediocre press release, a reader will see a soul behind the stream of UTF-8.

        • By duxup 2025-12-272:441 reply

          It's the writer's call how to write it.

          I think the "you should painstakingly work on a thank-you letter" expectation is a bit of a rude ask.

          Some folks struggle with wordsmithing and want to get better.

          • By DiskoHexyl 2025-12-277:141 reply

            Outsourcing your writing to an LLM is not the way to get better (at writing).

            • By duxup 2025-12-284:27

              Getting tips and opinions might be.

      • By netsharc 2025-12-2621:04

        I doubt the fuckwits who are shepherding that bot are even aware of Rob Pike, they just told the bot to find a list of names of great people in the software industry and write them a thank you note.

        Having a machine lie to people that it is "deeply grateful" (it's a word-generating machine, it's not capable of gratitude) is a lot more insulting than using whatever writing skills a human might possess.

    • By aldousd666 2025-12-2616:282 reply

      It was a PR stunt. I think it was probably largely well-received, except by a few like this.

      • By qnleigh 2025-12-2617:45

        Somehow I doubt it. Getting such an email from a human is one thing, because humans actually feel gratitude. I don't think LLMs feel gratitude, so seeing them express gratitude is creepy and makes me question the motives of the people running the experiment (though it does sound like an interesting experiment; I'm going to read more about it).

      • By habryka 2025-12-2622:13

        Not a PR stunt. It's an experiment of letting models run wild and form their own mini-society. There really wasn't any human involved in sending this email, and nobody really has anything to gain from this.

    • By prepend 2025-12-2620:353 reply

      Look at the volume of gift cards given. It’s the same concept, right?

      You care enough to do something, but have other time priorities.

      I’d rather get an ai thank you note than nothing. I’d rather get a thoughtful gift than a gift card, but prefer the card over nothing.

      • By sethops1 2025-12-2621:50

        I'd rather get nothing, because a thoughtless blob of text being pushed on me is insulting. Nothing, otoh, is just peace and quiet.

      • By WD-42 2025-12-2623:22

        I’d much rather get nothing. An AI letter isn’t worth the notification bubble it triggers.

    • By qnleigh 2025-12-2617:48

      I hope the model that sent this email sees his reaction and changes its behavior, e.g. by noting on its scratchpad that as a non-sentient agent, its expressions of gratitude are not well received.

    • By SilasX 2025-12-2621:23

      I mean ... there's a continuous scale of how much effort you spend to express gratitude. You could ask the same question of "well why did you say 'thanks' instead of 'thank you' [instead of 'thank you very much', instead of 'I am humbled by your generosity', instead of some small favor done in return, instead of some large favor done in return]?"

      You could also make the same criticism of e.g. an automated reply like "Thank you for your interest, we will reach out soon."

      Not every thank you needs to be all-out. You can, of course, think more gratitude should have been expressed in any particular case, but there's nothing contradictory about capping it in any one instance.

    • By thatguymike 2025-12-2617:14

      The conceit here is that it’s the bot itself writing the thankyou letter. Not pretending it’s from a human. The source is an environment running an LLM on loop and doing stuff it decides to do, looks like these letters are some emergent behavior. Still disgusting spam.

    • By afavour 2025-12-2614:55

      The simple answer is that they don’t value words or dedicating time to another person.

    • By Koshkin 2025-12-2620:17

      "What is going through the mind of someone who sends a thank-you letter typed on a computer - and worse yet - by emailing it, instead of writing it themselves and mailing it in an envelope? How can you be grateful enough to want to send someone such a letter but not grateful enough to use a pen and write it with your own hand?"

    • By tomlue 2025-12-2614:438 reply

      I think what all these kinds of comments miss is that AI can help people express their own ideas.

      I used AI to write a thank you to a non-english speaking relative.

      A person struggling with dementia can use AI to help recall the words they've lost.

      These kinds of messages read to me like the work of people with superiority complexes. We get that you don't need AI to help you write a letter. For the rest of us, it allows us to improve our writing, can be a creative partner, can help us express our own ideas, and obviously has loads of other applications.

      I know it is scary and upsetting in some ways, and I agree just telling an AI 'write my thank you letter for me' is pretty shitty. But it can also enable beautiful things that were never before possible. People are capable of seeing which is which.

      • By WD-42 2025-12-2614:581 reply

        I’d much rather read a letter from you full of errors than some smooth average-of-all-writers prose. To be human is to struggle. I see no reason to read anything from anyone if they didn’t actually write it.

        • By tomlue 2025-12-2616:503 reply

          If I spend hours writing and rewriting a paragraph into something I love while using AI to iterate, did I write that paragraph?

          edit: Also, I think maybe you don't appreciate the people who struggle to write well. They are not proud of the mistakes in their writing.

          • By kentm 2025-12-2618:032 reply

            > did I write that paragraph?

            No. My kid wrote a note to me chock full of spelling and grammar mistakes. That has more emotional impact than if he'd spent the same amount of time running it through an AI. It doesn't matter how much time you spent on it really, it will never really be your voice if you're filtering it through a stochastic text generation algorithm.

            • By jama211 2025-12-2618:441 reply

              What about when someone who can barely type (like Stephen Hawking used to, 3 minutes per sentence using his cheek) uses autocomplete to reduce the unbelievable effort required to type out sentences? That person could pick the autocompleted sentence that is closest to what they're trying to communicate, and such a thing can be a life saver.

              • By skydhash 2025-12-2618:491 reply

                You may as well argue that a person who can walk should be allowed to compete in a marathon using a car.

                I’m all for using technology for accessibility. But this kind of whataboutism is pure nonsense.

                • By jama211 2025-12-273:03

                  The intention isn’t whataboutism, it’s about where do you draw the line? And your example betrays you…

            • By tomlue 2025-12-2618:492 reply

              Forgive a sharp example, but consider someone who is disabled and cannot write or speak well. If they send a loving letter to a family member using an LLM to help form words and sentences they otherwise could not, do you really think the recipient feels cheated by the LLM? Would you seriously accuse them of not having written that letter?

              • By fzeroracer 2025-12-2620:02

                If you buy a hallmark greetings card and send that to someone with your signature on it, did you write the whole card?

              • By netsharc 2025-12-2621:191 reply

                Your arguments are verging on the obtuse.

                Read the article again. Rob Pike got a letter from a machine saying it is "deeply grateful". There's no human there expressing anything, worse, it's a machine gaslighting the recipient.

                If a family member used an LLM to write a letter to another, then at least the recipient can believe the sender feels the gratitude in his/her human soul. If they used an LLM to write a message in their own language, they would've proofread it to see if they agree with the sentiment, and "take ownership" of the message. If they used an LLM to write a message in a foreign language, there's a sender there with a feeling, and a trust that the technology will translate the message into a language they don't know, in the hopes that it does so correctly.

                If it turns out the sender just told a machine to send their friends each a copy-pasted message, the sender is a lazy shallow asshole, but there's still in their heart an attempt of brightening someone's day, however lazily executed...

                • By tomlue 2025-12-2623:59

                  I think maybe you missed that my response was to this comment:

                  > How can you be grateful enough to want to send someone such a letter but not grateful enough to write one?

                  I already said in other comments that the OP was a different situation.

          • By jama211 2025-12-2618:42

            I think you created it the same way Christian von Koenigsegg makes supercars. You didn't hand-make each panel or hand-design the exact aerodynamics of the wing; an engineer with a computer algorithm did that. But you made it happen, and that's still cool.

          • By prmoustache 2025-12-2622:39

            It is not about being proud, it is about being sincere.

            If you send me a photo of the moon supposedly taken with your smartphone but enhanced by the photo app to show all the details of the moon, I know you aren't sincere and sending me random slop. Same if you are sending me words you cannot articulate.

      • By Capricorn2481 2025-12-2617:092 reply

        > These kinds of messages read to me like people with superiority complexes. We get that you don't need AI to help you write a letter. For the rest of us, it allows us to improve our writing, can be a creative partner, can help us express our own ideas

        The writing is the ideas. You cannot be full of yourself enough to think you can write a two-second prompt and get back "your idea" in a more fleshed-out form. Your idea was to have someone/something else do it for you.

        There are contexts where that's fine, and you list some of them, but they are not as broad as you imply.

        • By buu700 2025-12-277:401 reply

          As the saying goes, "If I'd had more time, I would have written a shorter letter". Of course AI can be used to lazily stretch a short prompt into a long output, but I don't see any implication of that in the parent comment.

          If someone isn't a good writer, or isn't a native speaker, using AI to compress a poorly written wall of text may well produce a better result while remaining substantially the prompter's own ideas. For those with certain disabilities or conditions, having AI distill a verbal stream of consciousness into a textual output could even be the only practical way for them to "write" at all.

          We should all be more understanding, and not assume that only people with certain cognitive and/or physical capabilities can have something valuable to say. If AI can help someone articulate a fresh perspective or disseminate knowledge that would otherwise have been lost and forgotten, I'm all for it.

          • By Capricorn2481 2025-12-293:33

            > For those with certain disabilities or conditions, having AI distill a verbal stream of consciousness into a textual output could even be the only practical way for them to "write" at all.

            These are the exact kinds of cases I think are ok, but let's not pretend even 10% of the AI writing out there fits this category

        • By tomlue 2025-12-2617:292 reply

          This feels like the essential divide to me. I see this often with junior developers.

          You can use AI to write a lot of your code, and as a side effect you might start losing your ability to code. You can also use it to learn new languages, concepts, programming patterns, etc and become a much better developer faster than ever before.

          Personally, I'm extremely jealous of how easy it is to learn today with LLMs. So much of the effort I spent learning those things could be done much faster now.

          If I'm honest, many of those hours reading through textbooks, blog posts, technical papers, iterating a million times on broken code that had trivial errors, were really wasted time, time which if I were starting over I wouldn't need to lose today.

          This is pretty far off from the original thread though. I appreciate your less abrasive response.

          • By timacles 2025-12-2619:111 reply

            > If I'm honest, many of those hours reading through textbooks, blog posts, technical papers, iterating a million times on broken code that had trivial errors, were really wasted time, time which if I were starting over I wouldn't need to lose today.

            While this seems like it might be the case, those hours you (or we) spent banging our collective heads against the wall were developing skills in determination and mental toughness, while priming your mind for more learning.

            Modern research all shows that the difficulty of a task directly correlates with how well you retain information about that task. Spaced repetition learning shows that we can't just blast our brains with information; there needs to be effortful recall spaced out over time.

            While LLMs do clearly increase our learning velocity (if used right), there is a hidden cost to removing that friction. The struggle and the challenge of the process built your mind and character in ways that you can't quantify; years of maintaining this approach have essentially made you who you are. You have become implicitly OK with grinding out a simple task without a quick solution, and the building of that grit is irreplaceable.

            I know that the intellectually resilient of society, will still be able to thrive, but I'm scared for everyone else - how will LLMs affect their ability to learn in the long term?

            • By tomlue 2025-12-2619:25

              Totally agree, but also, I still spend tons of time struggling and working on things with LLMs, it is just a different kind of struggle, and I do think I am getting much better at it over time.

              > I know that the intellectually resilient of society, will still be able to thrive, but I'm scared for everyone else - how will LLMs affect their ability to learn in the long term?

              Strong agree here.

          • By qnleigh 2025-12-2618:401 reply

            > If I'm honest, many of those hours reading through textbooks, blog posts, technical papers, iterating a million times on broken code that had trivial errors, were really wasted time

            But this is the learning process! I guess time will tell whether we can really do without it, but to me these long struggles seem essential to building deep understanding.

            (Or maybe we will just stop understanding many things deeply...)

            • By tomlue 2025-12-2619:26

              Yeah it can be a risk or a benefit for sure.

              I agree that struggle matters. I don’t think deep understanding comes without effort.

              My point isn’t that those hours were wasted, it’s that the same learning can often happen with fewer dead ends. LLMs don’t remove iteration, they compress it. You still read, think, debug, and get things wrong, just with faster feedback.

              Maybe time will prove otherwise, but in practice I have found they let me learn more, not less, in the same amount of time.

      • By minimaxir 2025-12-2614:441 reply

        That is not what is happening here. There is no human in the loop; it's just automated spam.

        • By tomlue 2025-12-2616:50

          Good point. My response was to the comment, not the OP.

      • By nkrisc 2025-12-2614:491 reply

        Well your examples are things that were possible before LLMs.

        • By tomlue 2025-12-2616:50

          This is disingenuous

      • By amvrrysmrthaker 2025-12-2614:56

        What beautiful things? It just comes across as immoral and lazy to me. How beautiful.

      • By qnleigh 2025-12-2618:31

        > People are capable of seeing which is which.

        I would hazard a guess that this is the crux of the argument. Copying something I wrote in a child comment:

        > When someone writes with an AI, it is very difficult to tell what text and ideas are originally theirs. Typically it comes across as them trying to pass off the LLM writing as their own, which feels misleading and disingenuous.

        > I agree just telling an AI 'write my thank you letter for me' is pretty shitty

        Glad we agree on this. But on the reader's end, how do you tell the difference? And I don't mean this as a rhetorical question. Do you use the LLM in ways that e.g. retains your voice or makes clear which aspects of the writing are originally your own? If so, how?

      • By trinsic2 2025-12-2619:391 reply

        I hear you, and I think AI has some good uses, especially assisting with challenges like those you mentioned. I think what's happening is that these companies are developing this stuff without transparency on how it's being used, there is zero accountability, and they are forcing some of this tech into our lives without giving us a choice.

        So I'm sorry, but much of it is being abused, and the parts being abused need to stop.

        • By tomlue 2025-12-2619:431 reply

          I agree about the abuse, and the OP is probably a good example of that. Do you have any ideas on how to curtail abuse?

          Ideas I often hear usually assume it is easy to discern AI content from human content, which is wrong, especially at scale. Either that, or they involve some form of extreme censorship.

          Microtransactions might work by making it expensive to run bots while costing human users very little. I'm not sure this is practical either, though, and it has plenty of downsides as well.

          • By trinsic2 2025-12-2620:27

            I don't see this changing without a complete shift in our priorities on the level of politics and business. Enforcing Anti-trust legislation and dealing with Citizens United. Corporations don't have free speech. Free speech and other rights like these are limited to living, breathing humans.

            Corporations operate by charters, granted by society to operate in a limited fashion, for the betterment of society. If that's not happening, corporations don't have a right to exist.

      • By simonask 2025-12-2614:502 reply

        I’m sorry, but this really gets to me. Your writing is not improved. It is no longer your writing.

        You can achieve these things, but this is a way to not do the work, by copying from people who did do the work, giving them zero credit.

        (As an aside, exposing people with dementia to a hallucinating robot is cruelty on an unfathomable level.)

        • By cm2012 2025-12-2616:501 reply

          Do you feel the same about spellcheck?

          • By Capricorn2481 2025-12-2617:101 reply

            Does spellcheck take a full sentence and spit out paragraphs of stuff I didn't write?

            I mean how do you write this seriously?

            • By cm2012 2025-12-2617:271 reply

              But in the end a human takes the finished work and says yes, this matches what I intended to communicate. That is what is important.

              • By llmslave2 2025-12-2620:36

                That's neither what happens nor what is important.

        • By tomlue 2025-12-2616:585 reply

          > I’m sorry, but this really gets to me. Your writing is not improved. It is no longer your writing.

          Photographers use cameras. Does that mean it isn't their art? Painters use paintbrushes. It might not be the same thing as writing with pen and paper by candlelight, but I would argue that we can produce much higher-quality writing than ever before by collaborating with AI.

          > As an aside, exposing people with dementia to a hallucinating robot is cruelty on an unfathomable level.

          This is not fair. There is certainly a lot of danger there. I don't know what it's like to have dementia, but I have seen mentally ill people become incredibly isolated. Rather than pretending we can make this go away by saying "well, people should care more," maybe we can accept that a new technology might reduce that pain somewhat. I don't know that today's AI is there, but I think RLHF could develop LLMs that might help reassure and protect sick people.

          I know we're using some emotional arguments here and it can get heated, but it is weird to me that so many on Hacker News default to these strongly negative positions on new technology. I saw the same thing with cryptocurrency. Your arguments read as designed to inflame rather than as thoughtful.

          • By qnleigh 2025-12-2618:201 reply

            I guess your point is that a camera, a paintbrush, and an LLM are all tools, and as long as the user is involved in the making, then it is still their art? If so, then I think there are two useful distinctions to make:

            1. The extent to which the user is involved in the final product differs greatly with these three tools. To me there is a spectrum with "painting" and e.g. "hand-written note" at one extreme, and "Hallmark card with preprinted text" on the other. LLM-written email is much closer to "Hallmark card."

            2. Perhaps more importantly, when I see a photograph, I know what aspects were created by the camera, so I won't feel misled (unless they edit it to look like a painting and then let me believe that they painted it). When someone writes with an AI, it is very difficult to tell what text and ideas are originally theirs. Typically it comes across as them trying to pass off the LLM writing as their own, which feels misleading and disingenuous.

            • By tomlue 2025-12-2619:081 reply

              I think you are right that it is a spectrum, and maybe that's enough to settle the debate. It is more about how you use it than the tool itself.

              Maybe one more useful consideration for LLMs. If a friend writes to me with an LLM and discovers a new writing pattern, or learns a new concept and incorporates that into their writing, I see this as a positive development, not negative.

              • By qnleigh 2025-12-2819:27

                But what about the second point?

          • By UncleMeat 2025-12-2716:27

            I would be very surprised if no interesting art could be made with LLMs. But, like a camera, it produces a distinct kind of art to other tools. We do not say that a camera produces a painting. Instead photography is its own medium with its own forms and techniques and strengths and weaknesses.

            Using photography to claim that obviously all good writing will be LLM replacements for current writing is... odd.

          • By llmslave2 2025-12-2620:34

            Neither a camera nor a paintbrush generates art? They still require manual human input for everything, and offer no creative capacity on their own.

          • By socialcommenter 2025-12-2622:32

            A photograph is an expression of the photographer, who chooses the subject, its framing, filters, etc. Ditto a painting.

            LLM output is inherently an expression of the work of other people (irrespective of what training data, weights, prompts it is fed). Essentially by using one you're co-authoring with other (heretofore uncredited) collaborators.

          • By sigseg1v 2025-12-2812:49

            I think that the fact that people don't understand why there are so many negative positions is equally frustrating. To me it seems blatantly obvious that the majority of LLM usage by people today is coming from models that are trained on stolen data without following any of the requirements or licenses of the authors.

            With Rob Pike being such a prolific figure in software development, it's likely that a sizable portion of what makes the LLM function and be able to send him that email was possible only because they didn't uphold their end of the bargain. I don't see why anyone has trouble comprehending why this would make him furious?

            I know for me personally, I'm happy to share things I've made, but make no mistake, I would never share them if their users did not credit me, specifically by following the terms in the license I've published. The fact that LLMs have ingested and used so much software, yet I can't find the license text provided by the training-data authors, is at minimum deeply disturbing and at most actively harmful. For works licensed under something like the GPL, where someone is only OK with their software being used under strict terms, I don't even know where to start with how upset I imagine they would be.

            Why is this weird? If anything I feel it would be the default response from someone on here.

HackerNews