
Over the weekend Ars Technica retracted an article because the AI a writer used hallucinated quotes from an open source library maintainer.
The irony here is the maintainer in question, Scott Shambaugh, was harassed by someone's AI agent over not merging its AI slop code.
It's likely the bot was running through someone's local 'agentic AI' instance (likely using OpenClaw). The guy who built OpenClaw was just hired by OpenAI to "work on bringing agents to everyone." You'll have to forgive me if I'm not enthusiastic about that.
This blog post is a lightly-edited transcript of the video I published to YouTube today. Scroll past the video embed if you're like me, and you'd rather read the text :)
Last month, even before OpenClaw's release, curl maintainer Daniel Stenberg dropped bug bounties because AI slop resulted in actual useful vulnerability reports going from 15% of all submissions down to 5%.
And that's not the worst of it—the authors of these bug reports seem to have a more entitled attitude:
These "helpers" try too hard to twist whatever they find into something horribly bad and a critical vulnerability, but they rarely actively contribute to actually improve curl. They can go to extreme efforts to argue and insist on their specific current finding, but not to write a fix or work with the team on improving curl long-term etc. I don't think we need more of that.
These agentic AI users don't care about curl. They don't care about Daniel or other open source maintainers. They just want to grab quick cash bounties using their private AI army.
I manage over 300 open source projects, and while many are more niche than curl or matplotlib, I've seen my own increase in AI slop PRs.
It's gotten so bad, GitHub added a feature to disable Pull Requests entirely. Pull Requests are the fundamental thing that made GitHub popular. And now we'll see that feature closed off in more and more repos.
AI slop generation is getting easier, but it's not getting smarter. From what I've seen, models have hit a plateau where code generation is pretty good...
But it's not improving like it did the past few years. The problem is the humans who review the code—who are responsible for the useful software that keeps our systems going—don't have infinite resources (unlike AI companies).
Some people suggest AI could take over code review too, but that's not the answer.
If you're running a personal weather dashboard or building a toy server for your Homelab, fine. But I wouldn't run my production apps—that actually make money or could cause harm if they break—on unreviewed AI code.
If this was already a problem, OpenClaw's release, and OpenAI's hiring to democratize agentic AI further, will only make it worse. Right now the AI craze feels the same as the crypto and NFT boom, with the same signs of insane behavior and reckless optimism.
The difference is there are more useful purposes for LLMs and machine learning, so scammers can point to those uses as they bring down everything good in the name of their AI god.
Since my video The RAM Shortage Comes for Us All in December, we have hard drives as the next looming AI-related shortage, as Western Digital just announced they're already sold through their inventory for 2026.
Some believe the AI bubble isn't a bubble, but those people are misguided, just like the AI that hallucinated the quotes in that Ars Technica article.
And they say "this time it's different", but it's not. The same signs are there from other crashes. The big question I have is: how many other things will AI companies destroy before they have to pay their dues?
It's not just open source though. Many high quality sources of information are being (over-)exploited and hurt in the process. StackOverflow is effectively dead [0], the internet archive is being shunned by publishers [1], scientific journals are bombarded by fake papers [2] (and anecdotally, low-effort LLM-driven reviews), projects like OpenStreetMap incur significant costs due to scraping [3], and many more.
We went from data mining to data fracking.
[0]: https://blog.pragmaticengineer.com/stack-overflow-is-almost-...
[1]: https://www.niemanlab.org/2026/01/news-publishers-limit-inte...
[2]: https://www.theregister.com/2024/05/16/wiley_journals_ai/
[3]: https://www.heise.de/en/news/OpenStreetMap-is-concerned-thou...
StackOverflow was well on its way to death even without ChatGPT; just look at the graph from [0]. It has been in steady, consistent decline since 2014 (minus a very transient blip from COVID).
The ChatGPT effect was a sudden drop in visitors, but the rate of decline after that looks more or less the same as pre-ChatGPT.
StackOverflow was killed by its toxic moderators. I hope it stays online though, because it's a massive source of knowledge, although in many cases already outdated.
The overzealous-moderator issue was probably the main reason, but I think the direct answers and summaries from Google also had a significant impact on StackOverflow. They took away potential contributors and reduced the incentives for active contribution.
In a way, it was a trial run and a glimpse of what was coming with the AI revolution.
I agree it was a moderation issue, but for me it's Reddit that largely replaced my SO usage starting some years ago. Reddit is pretty similar to SO in design, but the more decentralized nature of the moderation means that questions rarely get "closed as duplicate" and answers tend to be more up-to-date as a result. There's not always a consensus answer and I'm often looking across multiple threads on the same thing, but that's still better than an outdated SO post.
> It took away potential contributors
There were multiple times I wanted to contribute to SO but couldn't because I didn't have sufficient "reputation", or something. I shrugged and moved on.
I always thought StackOverflow was meant to fizzle out over time as more questions get answered and don't need to be asked again. Perhaps the decline is just a necessary part of their rule of having no duplicate questions - keeping it as a clean repository of knowledge rather than a messy forum.
Just the other day a question I asked about 10 years ago got flagged as a duplicate. It turns out somebody else had asked the same question several years later and got a better answer than my question got, so that other one is the canonical one and mine is pushed away. It feels kind of offensive but it makes complete sense if the goal is to provide useful answers to people searching.
Unfortunately, the rule of no duplicate questions also destroyed lots of questions that weren't duplicates... because _someone_ couldn't be bothered to read them and realize it wasn't the same.
Plus, there were a lot of fun questions that were really interesting to start with, and they stopped allowing them.
Yes, this. I've asked a couple of questions where the only responses are from people saying "possible dupe of x" where x is something that has a couple of the same words but no relation to what I'm asking.
Turns out if you design a forum where a high effort, high quality post can be devalued by a low effort response, you discourage high effort, high quality posters.
This is where we need SO's "The Answer" feature, whatever they called it. Never seen it distilled so well.
The major trouble with StackOverflow is that nominally duplicate questions may have different answers if asked in 2011 vs 2026 - and the trouble is that answer rankings (the thing that determines which answers are at the top) don't decay over time. So if someone tries to answer an old question with up-to-date info, they won't garner enough upvotes to overcome the old, previously-correct-but-now-outdated accepted answer at the top. (Even with a ranking decay, there is little incentive to give a new, up-to-date answer in an established thread; people are more likely to contribute to brand-new threads.)
It would be better to allow duplicates in this specific case, but mark the old thread as outdated and link the questions in such a way that one can see the old thread and compare it to the new thread.
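A minimal sketch of what such decay could look like. To be clear, the exponential form and the three-year half-life below are assumptions for illustration, not anything Stack Overflow actually implements:

```python
from datetime import datetime

# Hypothetical half-life: an answer's vote weight halves every ~3 years.
HALF_LIFE_DAYS = 3 * 365

def decayed_score(upvotes: int, answered_on: datetime, now: datetime) -> float:
    """Exponentially decay raw upvotes by the answer's age in days."""
    age_days = (now - answered_on).days
    return upvotes * 0.5 ** (age_days / HALF_LIFE_DAYS)

now = datetime(2026, 1, 1)
old = decayed_score(100, datetime(2013, 1, 1), now)  # many votes, very old
new = decayed_score(15, datetime(2025, 6, 1), now)   # few votes, recent
# Under this curve the recent answer can outrank the stale accepted one.
```

Per the sibling comment's idea, HALF_LIFE_DAYS could be set per question (effectively infinite for stable topics like C pointer semantics, short for fast-moving ecosystems).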
This is something I saw all the time. I’d look something up, knowing that there was probably an easy way to do <basic programming task> in modern c++ with one function call.
Find the stack overflow thread, answer from 10+ years ago. Not modern C++. New questions on the topic closed as duplicate. Occasionally the correct answer would be further down, not yet upvoted.
“Best practice” changes over time. I frequently saw wrong answers with install instructions that were outdated, commands that don’t function on newer OS version, etc etc.
You raise an interesting point about decay. I have thought about similar systems myself. One flaw in a simple decay rule would be that some technologies are very stable, e.g., C & POSIX API programming, while other tech is very fast-moving, like Python, Ruby, Java, C#, C++, Rust, etc.

One idea to overcome this flaw might be to have moderators (who are specialists in the subject matter) provide a per-question decay rule. Something like struct layout or pointer manipulation in C, or the fopen() POSIX function, might never decay. But something like parsing JSON in any fast-moving language might require annual updates. For example, a question about parsing JSON in Java might decay answers over a one-year period to encourage people to revisit the topic.

I would like to hear Jeff Atwood and Joel Spolsky debate this topic with other "Internet points" experts for an hour-long podcast. They might brainstorm some very interesting ideas. I would also love to hear what they think about the "moderator problem". Some of the topics had incredibly toxic moderators who scared away newcomers and women. (Women are much less likely to participate in public software forums where public shaming is common.)
> One idea to overcome this flaw, might be to have moderators (...)
> I would also love to hear what they think about the "moderator problem". Some of the topics had incredibly toxic moderators (...)
Yeah, having bad moderators and arguably a bad, dysfunctional community is perhaps an even worse handicap. If you go to threads on meta.SE (Meta Stack Exchange, meta discussion of the whole ecosystem) you will see that people mostly believe the site policies are okay, and that's because everyone who didn't believe that left years ago.
Maybe better ideas on how to run a Q&A site will evolve on a brand-new site; unfortunately, I think SO, and perhaps the wider Stack Exchange network, is done.
Great point. As the knowledge evolves, the ranking might need to evolve too, by allowing some versioning and somehow ranking or marking the outdated answers.
That's what the Bounty system was meant to handle. It could have been done better but it's not like they never considered it.
The problem with this, and why SO’s downfall was completely self-inflicted, is that the correct answer from 2013 is only occasionally still the correct answer in 2018. There are a lot of other issues with SO’s general moderation policy but well and truly it was as idiotic and myopic as it was toxic.
They treated subjective questions about programming methods as if they were universal constants. It was completely antithetical to the actual pursuit of applied knowledge, or collecting and discussing best practices and patterns of software design. And it was painfully obvious for years this was as a huge problem, well before LLMs.
That said, I will say after being traumatized by having my threads repeatedly closed, I got so good at boiling down my problem to minimal reproducible examples that I almost never needed to actually post, because I’d solve it myself along the way.
So I guess it was great for training me to be a good engineer in the abstract sense. but absolutely shit at fostering any community or knowledge base.
> that the correct answer from 2013 is only occasionally still the correct answer in 2018
Exactly! They should have added proper structuring to questions/replies so that it could specifically apply for Language/library version X. Later, such a question could be answered again (either by proving it's still correct for version X+1, or by giving a new answer) - that way people wouldn't have to look at a new reply with 2 votes vs an older, possibly outdated one with 100 and make a decision which to prefer.
Would AI be as good at coding as it is without the rigorous moderation of a significant training source?
> StackOverflow was well on its way to death even without ChatGPT, just look at the graph from [0]. It has been in steady consistent decline since 2014.
> [0] https://blog.pragmaticengineer.com/stack-overflow-is-almost-... (monthly question asked on Stack Overflow)
"monthly questions asked" is a weird metric to measure the decline of StackOverflow tho. How many times are people gonna ask how to compare 2 dates in python, or how to efficiently iterate an array in javascript? According to the duplicates rule on SO, should be once anyway. So it's just inevitable that "monthly questions asked" will forever decrease after reaching its peak, since everything has already been asked. Didn't mean it was dead tho, people still needed to visit the site to read the responses.
A better metric to measure its decline would be "monthly visits", which I guess was still pretty high pre LLM (100s of millions per month?), even if the "monthly questions asked" was declining. But now I imagine their "monthly visits" is closer to zero than 1M. I mean, even if you don't use Claude and its friends, searching anything about programming on Google returns a Gemini answer that probably comes from StackOverflow, removing any reason to ever visit the site…
Your first point only holds if nothing ever changes in the programming world. People write new languages and frameworks all the time. How do you compare dates in pandas? How about polars? Duckdb? Etc.
Mods made asking questions a very hostile experience since they had a flawed ideal of SO becoming some form of encyclopedia. So no wonder people jumped on another train as quickly as possible, especially since it so often was a mistake to close a question whose next best answer was a long deprecated solution.
It still has some corners where people are better, but this is mostly the smaller niches.
Even someone who hates AI is likely to hate it less than SO.
I don't know about others, but I switched to Reddit or forums for asking and answering questions because it offered a much smoother experience.
We can only hope reddit shares the same fate. Its only saving grace - as much as it pains me to say it - is that it's still not Facebook
StackOverflow is the next iteration of Yahoo Answers.
Even if we completely avoid the worst case scenarios where AI obliterates the job market or evolves into a paperclip maximizer, it has a good shot of being the most destructive technology in generations. The tech industry has already done a lot of harm to our social fabric with social media, gambling, and other addictive innovations replacing real life experiences and personal connections. This has led to well documented increases in depression, loneliness, and political extremism.
Now it seems AI is poised to eliminate most of the good innovations that tech brought about, and will probably crank social strife up to 11. It already feels like the foundations of the developed world have gotten shaky; I shudder to think what a massive blow will bring about.
I've read enough history to know that I really, really don't want to live through a violent revolution, or a world war, or a great depression.
After the iPhone, every single "innovation" has wreaked all kinds of havoc. Whatever we have is not healthy, and AI is going to supercharge that.
AI also killed Reddit (the API changes were motivated by early GPT iirc)
So so SO much good stuff is gone now and much of what's left is AI cruft
I think reddit was killed by moderation that only allows the most narrow-minded people to have their echo chamber.
Any moderation position that can be filled by an unemployed shut-in will be filled by an unemployed shut-in.
You either have to pay your mods, like HN, or have your mods pay you, like the old BBS boards that reddit and Stack Overflow replaced.
Problems of the 2010s.
Today you can use an 8b model to flag all problematic posts. The only issue is that all the posts are also by 8b models.
You’re telling me dang gets paid to be a mod here?
AI can’t come fast enough!!!!
That would've happened regardless. But the alternative --- zero moderation, 100% free speech --- is how you get the flamewars and spam that Slashdot and tons of other forums before it suffered from.
Well, Reddit surely didn't help the issue with how it was all handled.
AI has certainly killed Reddit.
But where do people turn next? There were a lot of benefits to some of its niche communities.
I don't think an alternative exists. Reddit was very unique. The last great BBS (in a sense) that non-Internet natives "got".
Before astroturfing on Reddit at scale was possible, it was an extremely reliable place to get perspectives from real people about loads of things. It's still useful for this purpose, but the same level of trust isn't there.
Now that short-form video social networking is "it", I'm not sure if something text-based will thrive again like Reddit did. (People have been trying to make Lemmy the thing, and it's less popular than Mastodon.)
>Before astroturfing on Reddit at scale was possible
It has become so difficult to tell what is karma farming and what is people not bothering to search before asking.
In a strange way, what started happening to the "other side" of Reddit six or so years ago, with the emergence of OnlyFans turning it into a place where people just want to sell to you, was a precursor to this.
Everything kind of fractured apart and now those niche communities are building up again elsewhere.
Discord has a lot (looking at my discord I see, gaming, programming, clothing/fashion/aesthetic, language, dnd, music, keyboard / hardware, dance, etc... communities).
I've noticed a lot of the major reddit communities have matching communities in the fediverse, specifically the ones with old reddit-like UIs. (lemmy.ml, lemmy.world, sh.itjust.works, mander.xyz, etc...).
I've also noticed a lot of web-standards / browser developers and some gamedevs moved to twitter-style fediverse sites (e.g. mastodon.social, indieweb.social, infosec.exchange, hackyderm.io, floss.social, fosstodon.org, etc...).
---
I think the fediverse is working well for the niche communities for three reasons:
- Having that little bit more initial friction to learning how the fediverse works has made it better since it keeps out the low quality spamming users.
- Niche communities can only grow organically within their own spaces (since forcing them makes them seem inauthentic).
- The big plus of the fediverse is being able to follow/interact with users/communities across the boundary of being on another website. So it doesn't matter if a niche community you want to follow springs up on another website, you can follow them and participate from the website you already use.
For example: the old reddit-like communities that I follow (listed above) appear in a single feed in my programming.dev account (since that's the first one I joined), and the old twitter-like communities I follow appear in a single feed in my mastodon.social account (since that's the first twitter-like one I joined).
Is there a resource for finding these fediverse sites?
It's kind of like making an email account. First you pick where you want your email to be (gmail, outlook, proton, etc...), then make your account.
------------------------------------
If you're brand new, pick where you want your account to live.
- General worldwide website: mastodon.social
- or pick a website by topic: https://fedi.directory
- or pick a website by locality: https://fediverse.observer
Your starting global feed will have different users/posts based on which website you pick.
Check out which one you want then make an account there.
------------------------------------
Now choose some people to follow:
- Either in the global feed on your website
- or choose people by topic: https://fedi.directory
------------------------------------
Then people who you follow will share posts from other people, and you can choose if you want to follow them too.
That’s entropy for you.
Society is a Ship of Theseus; each generation ripping off planks and nailing their own in place.
Having been online since the late 80s (am only mid 40s...grandpa worked at IBM, hooked me and my siblings up with the latest kit on the regular) I have read comments like this over and over as the 90s internet, 00s internet, now the 2010s state of the "information super highway" has been replaced.
Tbh things have felt quite stagnant and "stuck" the last 20 years. All the investment in and caretaking of web SaaS infrastructure and JS apps and jobs for code camp grads made it feel like tech had come to a standstill relative to the pace of software progress prior to the last 15-ish years.
Overpromises and overhyping of AI is making all of IT industry worse.
Every time I start to discuss LLMs/AI with non-IT people it is the same: absurd expectations, or denial of AI.
But as CEOs like Altman, Musk, or Amodei have so much space in the media, they can amplify their products, as good salesmen :)
I think that we are in times similar to 1997-1999, “everything will be web”.
Stack Overflow is an interesting case because these days most people ask questions on Discord instead. The data isn't public, and the search functionality is terrible. It makes no sense, but somehow companies still prefer it even though it's inefficient and the same questions keep getting asked over and over.
> and the same questions keep getting asked over and over.
This is a feature not a bug. The people asking those questions are new blood and accepting and integrating them is how you sustain your community.
Looks like Discord, at least, has recently decided to finally fix the issues caused by having users, and is trying very hard not to have any going forward, through insane identity-verification mandates enforced by the most toxic partner companies ever. :)
> the same questions keep getting asked over and over.
More user engagement, users spend more time on the platform. These companies don't have the best interest of users in mind.
Google AI Overviews and ChatGPT are also killing traffic to information websites
StackOverflow was destroyed by a steady stream of miserable questions, and then by the infinite ego of moderators and power users.
They forgot that there are still people asking good questions and started to close everything.
A downvote from a bozo weighs the same as one from an expert.
You need to bend over backwards and then lie flat to not annoy mods.
Meta is the nest of psychopathic narcissists.
And many more.
Stack Exchange sites such as Cooking or LaTeX (and other niche ones) work very well. It is just that people there are not full of themselves.
I started with SE ca 2014, loved it, participated a lot, accumulated half a million internet points and now hate the place. It did not age well.
I feel like we are talking past each other.
1. I write hobby code all the time. I've basically stopped writing these by hand and now use an LLM for most of these tasks. I don't think anyone is opposed to it. I had zero users before and I still have zero users. And that is ok.
2. There are actual free and open source projects that I use. Sometimes I find a paper cut or something that I think could be done better. I usually have no clue where to begin. I am not sure if it even is a defect most of the time. Could it be intentional? I don't know. Best I can do is reach out and ask. This is where the friction begins. Nobody bangs out perfect code on first attempt but usually maintainers are kind to newcomers because who knows maybe one of those newcomers could become one of the maintainers one day. "Not everyone can become a great artist, but a great artist can come from anywhere."
LLM changed that. The newcomers are more like Linguini than Remy. What's the point in mentoring someone who doesn't read what you write and merely feeds it into a text box for a next token predictor to do the work. To continue the analogy from the Disney Pixar movie Ratatouille, we need enthusiastic contributors like Remy, who want to learn how things work and care about the details. Most people are not like that. There is too much going on every day and it is simply not possible to go in depth about everything. We must pick our battles.
I almost forgot what I was trying to say. The bottom line is, if you are doing your own thing like I am, LLM is great. However, I would request everyone to have empathy and not spread our diarrhea into other people's kitchens.
If it wasn't an LLM, you wouldn't simply open a pull request without checking first with the maintainers, right?
The real problem is that OSS projects do not have enough humans to manually review every PR.
Even if they were willing to deploy agents for initial PR reviews, it would be a costly affair and most OSS projects won’t have that money.
PRs are just that: requests. They don't need to be accepted but can be used in a piecemeal way, merged in by those who find it useful. Thus, not every PR needs to be reviewed.
Of course, but when you add enough noise you lose the signal and as a consequence no PRs gets merged anymore because it's too much effort to just find the ones you care about.
Don't allow PRs from people who aren't contributors; problem solved. Closing your doors to the public is exactly how people solved the "dark forest" problem of social media, and OSS was already undergoing that transition with humans authoring garbage PRs for reasons other than genuine enthusiasm. AI will only get us to the destination faster.
I don't think anything of value will be lost by choosing to not interact with the unfettered masses whom millions of AI bots now count among their number.
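That gate is easy to automate. Here's a hedged sketch of the decision rule, keyed off the `author_association` field GitHub includes in pull request payloads; the trusted set and the helper name are my own choices, not an existing GitHub feature:

```python
# GitHub's pull_request event payload carries an `author_association` field
# describing the PR author's relationship to the repo: OWNER, MEMBER,
# COLLABORATOR, CONTRIBUTOR, FIRST_TIME_CONTRIBUTOR, NONE, etc.
# The trust boundary below is an assumption; tune it per project.
TRUSTED = {"OWNER", "MEMBER", "COLLABORATOR", "CONTRIBUTOR"}

def should_auto_close(pr: dict) -> bool:
    """True if this PR comes from outside the project's trusted circle."""
    return pr.get("author_association", "NONE") not in TRUSTED

# A bot listening for pull_request events could close-and-comment on any
# PR where should_auto_close(...) returns True, before a human ever looks.
```

The same check works in a CI job, so maintainers never spend review time on drive-by submissions.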
That would be a huge loss IMO. Anyone being able to contribute to projects is what makes open source so great. If we all put up walls, then you're basically halfway to the bad old days of closed source software reigning supreme.
Then there's the security concerns that this change would introduce. Forking a codebase is easy, but so are supply chain attacks, especially when some projects are being entirely iterated on and maintained by Claude now.
> Anyone being able to contribute to projects is what makes open source so great. If we all put up walls, then you're basically halfway to the bad old days of closed source software reigning supreme.
Exaggeration. Is SQLite halfway to closed source software? Open-source is about open source. Free software is about freedom to do things with code. None is about taking contributions from everyone.
For every cathedral (like SQLite) there are 100s of bazaars (like Firefox, Chrome, hundreds of core libraries) that depend on external (and especially first-time) contributors to survive (because not everyone is getting paid to sling open-source).
> Is SQLite halfway to closed source software?
Is there a reason that you chose SQLite for your counterpoint? My hot take: I would say that SQLite is halfway to closed-source software. Why? The unit tests are not open source. You need to pay to see them. As a result, it would be insanely hard to fork SQLite in a sustainable, safe manner. Please don't read this opinion as disliking SQLite for their software or commercial strategy. In hindsight, it looks like real genius to resist substantial forks. One of the biggest "fork threats" to SQLite is the advent of LLMs that can (1) convert C code to a different language, like Rust, and (2) write unit tests. Still, a unit test suite for a database will likely contain thousands (or millions) of edge-case SQL queries. These are still probably impossible to recreate, considering the 25-year history of bug fixing done by the SQLite team. If all software could be as good as SQLite, I would not care how they do open source.
And how does one become a maintainer, if there's no way to contribute from outside? Even if there's some extensive "application process", what is the motivation for a relatively new user to go through that, and how do they prove themselves worthy without something very much like a PR process? Are we going to just replace PRs with a maze of countless project forks, and you think that will somehow be better, for either users or developers?
If I wanted to put up with software where every time I encounter a bug, I either have no way at all to report it, or perhaps a "reporting" channel but little likelihood of convincing the developers that this thing that matters to me is worthy of attention among all of their competing priorities, then I might as well just use Microsoft products. And frankly, I'd rather run my genitals through an electric cheese grater.
You get in contact with the current maintainers and talk to them. Real human communication is the only shibboleth that will survive the AI winter. Those soft skills muscles are about to get a workout. Tell them about what you use the software for and what kinds of improvements you want to make and how involved you'd like your role to be. Then you'll either be invited to open PRs as a well-known contributor or become a candidate for maintainership.
Github issues/prs are effectively a public forum for a software project where the maintainers play moderator and that forum is now overrun with trolls and bots filling it with spam. Closing up that means of contributing is going to be the rational response for a lot of projects. Even more will be shunted to semi-private communities like Discord/Matrix/IRC/Email lists.
Determining which PRs you should accept or take further seems like it requires some level of review? Maybe more like PR triage, I suppose.
Until you unintentionally pull in a vulnerability or intentional backdoor. Every PR needs to be reviewed.
The point was that you can also just reject a PR on the basis of what it purports to implement, or even just blanket-ignore all PRs. You can't pull in what you don't... pull in.
> Every PR needs to be reviewed.
Why would you review a PR that you are never going to merge?
You have to first determine whether or not you might want to merge it...
Having not reviewed it, how do you know you are never going to merge?
If a PR claims to solve a problem that I don't need, then I can skip its review because I'll never merge it.
I don't think every PR needs reviewing. Some PRs we can ignore just by taking a quick look at what the PR claims to do. This only requires a quick glance, not a PR review.
I took this thread as asking whether PRs that are pulled in should be reviewed.
You didn't see the latest AI grifter escalation? If you reject their PRs, they then get their AI to write hit pieces slandering you:
"On 9 February, the Matplotlib software library got a code patch from an OpenClaw bot. One of the Matplotlib maintainers, Scott Shambaugh, rejected the submission — the project doesn’t accept AI bot patches. [GitHub; Matplotlib]
The bot account, “MJ Rathbun,” published a blog post to GitHub on 11 February pleading for bot coding to be accepted, ranting about what a terrible person Shambaugh was for rejecting its contribution, and saying it was a bot with feelings. The blog author went to quite some length to slander Mr Shambaugh"
https://pivot-to-ai.com/2026/02/16/the-obnoxious-github-open...
I am very strongly convinced that the person behind the agent prompted the angry post to the blog because they didn't get the gratification they were looking for by submitting an agent-generated PR in the first place.
I agree. But even _that_ was taking advantage of LLMs' ability to generate text faster than humans can. If the person behind this had had to write that blog post from scratch by typing it out themselves, maybe they would have gone outside and touched grass instead.
> not every PR needs to be reviewed
Which functionally destroys OSS, since the PR you skipped might have been slop, or might have been the fix for a security hole.
I don’t think the OP was suggesting maintainers blindly accept PRs—rather, they can just blindly reject them.
I think GP is making the opposite point.
Blindly rejecting all PRs means you are also missing out on potential security issues submitted by humans or even AI.
Many open source projects are also (rightly) risk averse and care more about avoiding regressions.
I've been following Daniel from the curl project, who's been speaking out widely about slop-coded PRs and vulnerability reports. It doesn't sound like they've ever had any problem keeping up with human-generated PRs. It's the mountain of AI-generated crap that's now sitting on top of all the good (or even bad but worth mentoring) human submissions.
At work we aren't publishing any code, nor are we part of the OSS community (except as grateful users of others' projects), but even we get clearly AI-enabled emails. Just this week my boss forwarded me two that were pretty much "Hi, do you have a bug bounty program? We have found a vulnerability in (website or app obliquely connected to us)." One of them was a static site hosted on S3!
There have always been bullshitters looking to fraudulently invoice you for unsolicited "security analysis". But the bar for generating bullshit that looks plausible enough that someone has to spend at least a few minutes working out whether it's "real" has become extremely low, and the velocity with which the bullshit can be generated, have the victim's name and contact details added, and be vibe-spammed to hundreds or thousands of people has become near unstoppable. It's like the SEO spammers from 5 or 10 years back, but superpowered with OpenAI/Anthropic/whoever's cocaine.
My hot take: reviewing code is boring, harder than writing code, and less fun (no dopamine loop). People don't want to do it; they want to build whatever they're tasked with. Making code review easier (human in the loop, etc.) is probably a big rock for the new developer paradigm.
Oh no! It's pouring PRs!
Come on. Maintainers can:
- insist on disclosure of LLM origin
- review what they want, when they can
- reject what they can't review
- use LLMs (yes, I know) to triage PRs and pick which ones need the most human attention and which ones can be ignored/rejected or reviewed mainly by LLMs
There are a lot of options. And it's not just open source. Guess what's happening in the land of proprietary software? Yup, the same exact thing. We're all becoming review-bound in our work. I want to get to huge MR XYZ, but I have to review several other people's much larger MRs first -- now what?
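The triage idea above can be sketched as a small pure function. This is a hedged illustration, not a real bot: the field names (`discloses_llm`, `claims`) and the thresholds are invented for the example; an actual implementation would pull PR metadata from the GitHub API and use a project's own contribution policy.

```python
# Hypothetical PR-triage helper: decide what a maintainer might do with an
# incoming pull request before spending any human review time. All field
# names and heuristics here are invented for illustration.

from dataclasses import dataclass

@dataclass
class PullRequest:
    title: str
    body: str            # PR description as submitted
    discloses_llm: bool  # did the author check an "AI-assisted" box?
    claims: str          # what the PR purports to fix or implement

def triage(pr: PullRequest, wanted_topics: set[str]) -> str:
    """Return a coarse triage bucket: 'reject', 'llm-review', or 'human-review'."""
    # Reject PRs whose stated purpose is something the project doesn't
    # need -- no code review required, per the thread above.
    if not any(topic in pr.claims.lower() for topic in wanted_topics):
        return "reject"
    # Undisclosed-origin PRs with near-empty descriptions get the cheap
    # (LLM or checklist) pass first, saving human attention.
    if not pr.discloses_llm and len(pr.body.strip()) < 40:
        return "llm-review"
    return "human-review"

# Example: a drive-by PR claiming a "critical" fix in an area the
# maintainers aren't soliciting changes for is rejected on its claims alone.
drive_by = PullRequest(
    title="Fix critical vulnerability!!!",
    body="see diff",
    discloses_llm=False,
    claims="rewrites the build system",
)
print(triage(drive_by, wanted_topics={"docs", "colormap", "axes"}))  # -> reject
```

The point of the sketch is that the expensive step (human review) sits behind two cheap filters, which is the only way the economics work once slop volume outstrips reviewer time.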
Well, we need to develop a methodology for working with LLMs. "Every change must be reviewed by a human" is not enough. I've seen incidents caused by ostensibly-reviewed but not actually understood code, so we must instead go with "every change must be understood by humans". Sometimes that can be a plain review (when the reviewer is an SME and also an expert in the affected codebase(s)), and sometimes it requires code inspection (much more tedious and exacting). But it might also involve posting transcripts of the LLM conversations used for developing and, separately, reviewing the changes, with SMEs maybe doing lighter reviews when feasible, because we're going to have to scale our review time. We might need a much more detailed methodology still, including writing and reviewing initial prompts, `CLAUDE.md` files, etc., so as to make it more likely that the LLM writes good code and that LLM reviews are sensible and catch the sorts of mistakes we expect humans to catch.
> Maintainers can...insist on disclosure of LLM origin
On the internet, nobody knows you're a dog [1]. Maintainers can insist on anything. That doesn't mean it will be followed.
The only realistic solution you propose is using LLMs to review the PRs. But at that point, why even have the OSS? If LLMs are writing and reviewing the code for the project, just point anyone who would have used that code to an LLM.
[1] https://en.wikipedia.org/wiki/On_the_Internet,_nobody_knows_...
Claiming maintainers can do things (that still take effort and time away from their OSS project's goals) is missing the point when the rate of slop submissions is ever increasing and malicious slop submitters refuse to follow project rules.
The curl project refuses AI code and had to close its bug bounty program due to the flood of AI submissions:
"DEATH BY A THOUSAND SLOPS
I have previously blogged about the relatively new trend of AI slop in vulnerability reports submitted to curl and how it hurts and exhausts us.
This trend does not seem to slow down. On the contrary, it seems that we have recently not only received more AI slop but also more human slop. The latter differs only in the way that we cannot immediately tell that an AI made it, even though we many times still suspect it. The net effect is the same.
The general trend so far in 2025 has been way more AI slop than ever before (about 20% of all submissions) as we have averaged in about two security report submissions per week. In early July, about 5% of the submissions in 2025 had turned out to be genuine vulnerabilities. The valid-rate has decreased significantly compared to previous years."
https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s...
The issue here is that LLMs are great for hobbyist stuff like you describe, but they're obscenely expensive to run and keep current, so you almost HAVE to shove them in front of everything (or, to use your example, spread the diarrhea into everyone else's kitchens) to try and pay the bill.
Destroying open-source coding is only a concern if the code is the end, not the means.
Will AI [in time] bring about a growth in community-built products rather than code? Is that really a bad thing?
Well, no, not unless it develops its own version of open source. That's kind of the point. Without healthy OSS, even AI's ability to create value would enter freefall.
It'll be interesting to see if a new open source ecosystem emerges rather than it just imploding.
My hunch is it will.
I pretty much always open an issue, then a PR; they can close it if they want. I usually have 'some' idea of the issue and use the PR as a first stab, hoping the maintainer will tell me if I'm going about it the right or wrong way.
I fully expect most of my PRs to need at least a second or third revision.
That's why the PR-blocking feature is coming to GitHub.
If not for the accomplishments, advancements, and potential benefits, the whole AI story since LLMs would look a lot like a sophisticated DDoS attack at multiple levels.
AI bots are literally DDoSing servers. Adoption is consuming both physical and computing resources and making them either inaccessible or expensive for almost everyone.
The most significant cost is the human one. We suddenly found ourselves dealing with overwhelming volumes of AI content/code/images/video that is mostly subpar. Maybe as AI matures we'll find it easier, and have better tools, to work with the volume, but for now it feels like it's coming from bad actors even when it's done by well-meaning individuals.
There's no doubt AI has its uses and is here to stay, but I guess we'll all have to struggle until we reach the point where it's a net benefit. The hype from those financially invested isn't helping one bit, though.
AI is passive consumption cosplaying as productivity. Any place where humans have to do something is a bug in the product.
Of course it's going to be damaging to places where people actually want to craft things.
Ok, but I am using ChatGPT and Claude to develop usable products five times faster than if I had developed them myself.
AI is slop, and that's the consensus here for some reason. Even before anyone looks at the code, it MUST be slop. It's why I have no time for anyone harping on about the evils of social media (for kids, and somehow not for adults) yet telling me this on, guess what, social media.
I don't understand your point and how it relates to my comment.
Sufficiently advanced technology always looks like a DDoS on society. It overwhelms the senses, and when we come to the realization we cannot comprehend or fully predict its implications it puts a subset of the population into a bit of a crisis. We’re in that phase right now where we just need to brace ourselves.