Vibe coding kills open source

2026-01-26 13:01 · arxiv.org

Abstract: Generative AI is changing how software is produced and used. In vibe coding, an AI agent builds software by selecting and assembling open-source software (OSS), often without users directly reading documentation, reporting bugs, or otherwise engaging with maintainers. We study the equilibrium effects of vibe coding on the OSS ecosystem. We develop a model with endogenous entry and heterogeneous project quality in which OSS is a scalable input into producing more software. Users choose whether to use OSS directly or through vibe coding. Vibe coding raises productivity by lowering the cost of using and building on existing code, but it also weakens the user engagement through which many maintainers earn returns. When OSS is monetized only through direct user engagement, greater adoption of vibe coding lowers entry and sharing, reduces the availability and quality of OSS, and reduces welfare despite higher productivity. Sustaining OSS at its current scale under widespread vibe coding requires major changes in how maintainers are paid.
From: Miklos Koren
[v1] Wed, 21 Jan 2026 21:54:59 UTC (642 KB)

Comments

  • By WarmWash 2026-01-26 14:04 · 41 replies

    Small, bespoke, personalized, on-the-spot apps are the future with LLMs.

    The future will absolutely not be "How things are today + LLMs"

    The paradigm now for software is "build a tool shed/garage/barn/warehouse full of as much capability for as many uses as possible", but when LLMs can build you a custom(!) hammer or saw in a few minutes, why go to the shed?

    • By anticorporate 2026-01-26 14:19 · 6 replies

      I think you're missing the enormous value in apps being standardized and opinionated. Standardized means that in addition to documentation, the whole internet is available to help you. Opinionated means as a user of an app in a new domain, you don't have to make a million decisions about how something should work to just get started.

      Sure, there will be more personalized apps for those who have a lot of expertise in a domain and gain value from building something that supports their specific workflow. For the vast majority of the population, and the vast majority of use cases, this will not happen. I'm not about to give up the decades of experience I've gained with my tools for something I vibe coded in a weekend.

      • By tracker1 2026-01-26 16:21 · 1 reply

        I've seen plenty of "standardized" (i.e., "Enterprise") applications... I'd sooner take a bespoke hammer that's simple and easy to understand over a complex beast of HammerFactoryFactory that delivers a builder of custom hammer builders so you get the JobHammer you need as part of the IoC loader platform, which is then controlled through a 1.2 GB service orchestrator that breaks at 11am every third Tuesday for an hour. When all you need to do is post a "Help Wanted" poster on a piece of wood.

        • By anticorporate 2026-01-26 17:43 · 1 reply

          A standardized hammer can just be a carpenter's hammer, though. Putting a nail pull on the back side is making it opinionated in a way that gives users access to a tool that they may not have thought of if they built their own hammer, but very well might appreciate having.

          This isn't a defense of enterprise applications, though. They're more like a shed full of rusty tools, with thirty different coping saw blades and not a single handle, because corporate policy only allows you a handle if Joe from accounting says you can. And why would he, when his VP confidently said you can just hold the blade between your fingers?

          • By tracker1 2026-01-26 18:20

            You forgot to mention the IT Security team locking up all the screwdrivers to prevent potential cross-contamination with hammer usage.

      • By seniorThrowaway 2026-01-26 16:26 · 2 replies

        AIs/LLMs have already been trained on best practices for most domains. I've recently faced this decision and I went the LLM custom app path, because the software I needed was a simple internal business-type app. There are open-source and COTS software packages available for this kind of thing, but they tend to be massive suites trying to solve a bunch of things I don't need, and also a minefield of licensing, freemium feature gating, and subject to future abandonment or rug pulls into much higher costs. Something that has happened many times. Long story short, I decided it was less work to build the exact tool I need to solve my "right now" problem, architected for future additions. I do think this is the future.

        • By spamizbad 2026-01-26 17:39 · 2 replies

          > AIs/LLMs have already been trained on best practices for most domains.

          I've been at this long enough to see that today's best practices are tomorrow's anti-patterns. We have not, in fact, perfected the creation of software. And your practices will evolve not just with the technology you use but with the problem domains you're in.

          I don't mean this as an argument against LLMs or vibe coding. Just that you're always going to need a fresh corpus to train them on to keep them current... and if the pool of expertly written code dries up, models will begin to stagnate.

          • By seniorThrowaway 2026-01-26 18:54

            I've been doing this a long time too. The anti-patterns tend to come from the hype cycles of "xyz shiny tool/pattern will take away all the nasty human problems that end up creating bad software". Yes, LLMs will follow this cycle too, and, I agree we are in a kind of sweet spot moment for LLMs where they were able to ingest massive amounts of training material from the open web. That will not be the case going forward, as people seek to more tightly guard their IP. The (open) question is whether the training material that exists plus whatever the tools can self generate is good enough for them to improve themselves in a closed loop cycle. LLM generated code was the right tool for my job today; doesn't mean it's the right tool for everyone's job or that it always will be. One thing constant in this industry is change. Sold as revolutionary, which is the truth, in the sense of going in circles/cycles.

          • By thfuran 2026-01-26 21:13

            Also, they've been trained on common practices more than they've been trained on best practices. And best practice is heavily context dependent anyways.

        • By suddenlybananas 2026-01-26 16:32 · 1 reply

          What if there is a new domain?

          • By seniorThrowaway 2026-01-26 16:36 · 1 reply

            Then it is new for everyone, no?

            • By maleldil 2026-01-26 17:05

              Humans can learn from new experiences. LLMs have to be retrained (continuous learning isn't good enough yet), or you have to fit enough information into the context while still having enough for the task itself.
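The context-window tradeoff described in the comment above (reference material competes with the task itself for a fixed token budget) can be sketched as a toy budget split. All numbers and chunk names below are invented for illustration:

```python
# Toy sketch: a fixed context window must hold both reference material
# and the task itself, so reference chunks are packed into what's left
# after reserving room for the prompt and the answer.
CONTEXT_LIMIT = 8000   # tokens the model can see at once (hypothetical)
TASK_RESERVE = 3000    # tokens kept free for the task and the response

def pack_reference(chunks, budget=CONTEXT_LIMIT - TASK_RESERVE):
    """Greedily pick (text, token_count) chunks until the budget runs out."""
    picked, used = [], 0
    for text, tokens in chunks:
        if used + tokens > budget:
            break
        picked.append(text)
        used += tokens
    return picked, used

docs = [("api overview", 1200), ("auth guide", 2500), ("changelog", 4000)]
picked, used = pack_reference(docs)
print(picked, used)  # ['api overview', 'auth guide'] 3700
```

The greedy cut is the point: the 4000-token chunk is dropped not because it is useless but because the task reserve leaves no room for it.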

      • By theshrike79 2026-01-28 13:00

        SAP is THE MOST standardised "app" in the world.

        It's so standard that the usual paradigm is that your company adapts itself to the way SAP works, not the other way around. Gigantic corporations have tried to adapt SAP to themselves and failed; IIRC Lidl had a very expensive, high-profile failure with this.

        Being standardised isn't always a good thing.

      • By Bishonen88 2026-01-26 16:15 · 2 replies

        Expertise won't be needed (it already isn't). One can create copies of apps with vague descriptions referencing those big apps:

        "Create a copy of xyz. It needs to look and behave similarly. I want these features ... And on top of that ...". Millions of decisions aren't needed. A handful of vague descriptions of what one wants is all it takes today. I think Claude and co. can even take in screenshots.

        Documentation won't be needed either, IMO. Since humans won't write or read the code, they will simply ask LLMs if they have a question.

        I'm totally giving up various paid SaaS this year that I'd been paying for in previous years. Not only am I able to add the features I was wishing those tools had (which would never have made it into the real app because they're niche requests), but I'm saving money at the same time.

        And the above is just what's happening today. Claude Code is younger than 1 year old. Looking forward to coming back to this thread in a year and swallowing my words... but I'm afraid I won't have to.

        • By dimitri-vs 2026-01-26 17:33 · 2 replies

          But millions of decisions are needed, and always will be?

          "Create a copy of Amazon.com"

          OK, how did you want to handle 3PL fulfilment and international red tape?

          "No not that complicated, a minimal copy"

          How minimal? How many servers should I provision? How vertically integrated should we get?

          Etc.

          I really want to see someone build an app of any value with minimal decisions made.

          • By acessoproibido 2026-01-26 18:28 · 1 reply

            Amazon is not one app; it's hundreds of them bundled into some giant monster.

            You could easily replicate the store part of it minimally: at its core it's just an index of products, a basket, and a checkout system. There are other parts that make up the whole thing, of course.

            There is a lot of room between no value and a trillion-dollar company.
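The "index of products, a basket, and a checkout system" core mentioned above really is small; a minimal sketch in Python (product data and field names are invented for illustration):

```python
# Minimal "store core": a product index, a basket, and a checkout total.
# All SKUs, names, and prices here are made up.
PRODUCTS = {
    "sku-1": {"name": "Hammer", "price_cents": 1999},
    "sku-2": {"name": "Saw", "price_cents": 2499},
}

def add_to_basket(basket, sku, qty=1):
    """The basket is just a mapping of SKU -> quantity."""
    basket[sku] = basket.get(sku, 0) + qty
    return basket

def checkout(basket):
    """Total in cents, to avoid floating-point money bugs."""
    return sum(PRODUCTS[sku]["price_cents"] * qty
               for sku, qty in basket.items())

basket = {}
add_to_basket(basket, "sku-1")
add_to_basket(basket, "sku-2", 2)
print(checkout(basket))  # 6997
```

Everything that makes the real thing hard (payments, inventory, fraud, fulfilment) lives outside this core, which is the commenter's point about the gap between "no value" and "trillion-dollar company".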

          • By nerdsniper 2026-01-26 21:31

            It would be great if LLMs did this (asked the relevant, and very pointed, follow-up questions). Instead, today they kind of just go "okay sure yeah here it is. Here's as much of Amazon.com as I can code within my token budget. Good luck drawing the rest of the owl."

        • By generallyjosh 2026-01-27 11:19

          IMO, documentation becomes much more important if we're planning to hand off coding to the LLMs

          You can ask it about the code, sure, and it'll try to tell you how it works. But, what if there's a bug in the code? Maybe the LLM will guess at how it was supposed to work, or maybe it'll start making stuff up to justify the bug's existence (it's actually a hidden feature!)

          The docs say how the code should work. For an LLM that has to go relearn everything about your code base every time you invoke it, that's vitally important

      • By digiown 2026-01-26 16:22

        The apps/use cases for which such standardized and opinionated tools can economically exist mostly already exist, IMO. Vibe-coded tools fill an enormous space of semi-unique problems that each affect only a small number of people. For example, various scripts to automate tasks imposed by a boss. The best balance is probably to have LLMs use the standardized tools for you when available, so that things remain mostly scrutable.

        As the saying goes, 80% of users use only 20% of the features of your program, but they each use a different 20%. When the user vibecodes the program instead, only their specific 20% needs to be implemented.

      • By iknowSFR 2026-01-26 14:22 · 1 reply

        Then you’re going to be left behind. I’m going to be left behind.

        Every problem or concern you raise will adapt to the next world because those things are valuable. These concerns are temporary, not permanent.

        • By blibble 2026-01-26 15:10 · 2 replies

          > Then you’re going to be left behind.

          I really, really don't care

          I didn't get into programming for the money, it's just been a nice bonus

          • By frizlab 2026-01-26 15:17

            > I didn't get into programming for the money, it's just been a nice bonus.

            Exactly the same for me! I kind of feel like an artist whose art happens to sell more easily than a painter's or a musician's… But boy, would I be poor if this art were worthless!

          • By shimman 2026-01-26 17:38

            It's also such a weird claim: how the fuck are we going to be left behind when the skill is just entering text in a box... a skill we literally use for our jobs?

    • By II2II 2026-01-26 14:56 · 2 replies

      > when LLMs can build you a custom(!) hammer or saw in a few minutes, why go to the shed?

      Because software developers typically understand how to implement a solution to a problem better than the client does. If they don't have enough details to implement a solution, they will ask the client for details. And if the developer decides to use an LLM to implement a solution, they have the ability to assess the end product.

      The problem is software developers cost money. A developer using an LLM may reduce the cost of development, but it is doubtful that the reduction in cost will be sufficient to justify personalized applications in many cases. Most of the cases where it would justify the cost would likely be in domains where custom software is in common use anyhow.

      Sure, you will see a few people using LLMs to develop personalized software for themselves. Yet these will be people who understand how to specify the problem they are trying to solve clearly, will have the patience to handle the quirks and bugs in the software they create, and may even enjoy the process. You may even have a few small and medium sized businesses hiring developers who use LLMs to create custom software. But I don't think you're going to see the wholesale adoption of personalized software.

      And that only considers the ability of people to specify the problem they are trying to solve. There are other considerations, such as interoperability. We live in a networked world after all, and interoperability was important even before everything was networked.

      • By aspenmartin 2026-01-26 17:24 · 2 replies

        > Because software developers typically understand how to implement a solution to a problem better than the client. If they don't have enough details to implement a solution, they will ask the client for details. If the developer decides to use an LLM to implement a solution, they have the ability to assess the end product.

        Why do you think agents can't do that? They can't do it really well today, but if we cover a distance similar to what we covered in 2025, it'll be about a year before this starts getting decent, and maybe another year before it's excellent.

        > Sure, you will see a few people using LLMs to develop personalized software for themselves. Yet these will be people who understand how to specify the problem they are trying to solve clearly, will have the patience to handle the quirks and bugs in the software they create

        Only humans can do this?

        • By dimitri-vs 2026-01-26 17:40 · 1 reply

          Hallucinations are not solved, memory is not solved, prompt injection is not solved, context limits are waaay too low while tokens are way too expensive to take advantage of them, etc. These problems have existed since the very early days of GPT-4 and there is no clear path to solving them any time soon.

          You basically need AGI and we are nowhere close to AGI.

          • By aspenmartin 2026-01-26 21:14

            All of the issues you talk about are true. I don't personally care about AGI; it's kind of a mishmash of a real thing and a nice package for investors. What I do care about is what has been released and what it can do.

            All of those issues: they aren't solved, but we've made amazing progress on all of them. Continual learning is a big one, and labs are likely close to some POCs.

            Token cost per unit of performance goes down rapidly: GPT-4-level performance costs 10x less today than two years ago. This will continue to be the case as we keep pushing efficiency up.

            As for "are we close to AGI": to me these questions are rabbit holes and bait for flame wars, because no one can agree on what AGI means. Even if you do (superhuman performance on all economically viable tasks is maybe a more solid starting point), everyone fights about the ecological validity of the evals.

            All I'm saying is: taking coding in a complete vacuum, we're very close to the point where it is so obviously beneficial, and failure rates for many things fall so far below the critical thresholds, that automating even the things people say make engineers unique (working with people to navigate ambiguous issues they can't articulate well, making the right tradeoffs, etc.) starts looking less like a research challenge and more like an exercise in deployment.

        • By nicbou 2026-01-27 08:38

          My experience with LLMs is that you can't get them to clarify things by asking questions. They just assume facts and run with them.

          My experience with software development is that a lot of things are learned by asking probing questions informed by your experience. How would an LLM understand the subtle political context behind a requirement or navigate unwritten rules discussed in meetings a year ago?

      • By pianopatrick 2026-01-26 23:45

        > If they don't have enough details to implement a solution, they will ask the client for details.

        But LLMs are chatbots. Surely someone will eventually set up an LLM-based coding system that asks architecture and design questions before starting to code.

    • By palmotea 2026-01-26 14:26 · 1 reply

      > The paradigm now for software is "build a tool shed/garage/barn/warehouse full of as much capability for as many uses as possible" but when LLMs can build you a custom(!) hammer or saw in a few minutes, why go to the shed?

      1) Your specific analogy is missing something important: I don't want my tools working differently every time I use them, and it's work to use LLMs. A hammer is a too-simple example, but going with it anyway: when I need a hammer, I don't want my "LLM" generating a plastic one and then having to iterate for 30 minutes to get it right. It takes me far less than 30 minutes to go to my shed. A better example would be a UI: even if it were perfect, do you want all the buttons and menus to be different every time you use the tool, because you generate a new one each time instead of "going to the shed"?

      2) Then there's the question: can an LLM actually build, or does it just regurgitate? A hammer is an extremely well-understood tool that's been refined over centuries, so I think an LLM could do a pretty good job with one. There are lots of examples to learn from, but that also means the designs the LLM is referencing are probably better than the LLM's output. And for things not like that, things more unique, can the LLM even do it at all, or with a reasonable amount of effort?

      I think there's a modern phenomenon where making things "easier" actually results in worse outcomes, a degraded typical state vs. the previous status quo, because it turns what was once a necessity into a question of personal discipline. And it turns out when you remove necessity, a lot of people have a real hard time doing the best thing on discipline alone. LLMs might just enable more of those degenerate outcomes: everyone's using "custom" LLM generated tools all the time, but they all actually suck and are worse than if we just put that effort into designing the tools manually.

      • By tracker1 2026-01-26 16:31

        I started picturing AI generating tools like it does images of people... I mean, of course every other hammer will have an extra head off to the side, or split into 3 handles.

        Seriously though, you can tell AI what libraries and conventions you want to follow... that's been a lot of what I've done with it recently... I've been relatively pleased with the results.

        I've said several times that it's not perfect, but it is an overall force multiplier. It's much like working disconnected with an overseas dev team, but you get turnaround in minutes instead of the next morning in your email. The better the instructions/specs you give, the better the results. On my best day, I got about 3 weeks' worth of solo work done after about 3 hours of planning/designing and another 2-3 hours of iteration with Claude Code. On my worst day, it was frustrating and would have taken about the same amount of time doing it myself. On average, I'd say I get close to 5 days of work done in 5-6 hours of AI-assisted coding. Purely anecdotally.

        That said, I usually have a technical mind for how I want the solution structured as well as features and how those features work... often it clashes with the AI approach and sometimes it swims nicely. I'll also say that not all AI coding is the same or even close in terms of usefulness.

    • By jerf 2026-01-26 14:24

      "why go to the shed"

      A good question but there's a good answer: Debugged and tested code.

      And by that, I mean the FULL spectrum of debugging and testing. Not just unit tests, not even just integration tests, but, is there a user that found this useful? At all? How many users? How many use cases? How hard has it been subjected to the blows of the real world?

      As AI makes some of the other issues less important, the ones that remain become more important. It is completely impossible to ask an LLM to produce a code base that has been used by millions of people for five years. Such things will still have value.

      The idea that the near-future is an AI powered wonderland of everyone getting custom bespoke code that does exactly what they want and everything is peachy is overlooking this problem. Even a (weakly) superhuman AI can't necessarily anticipate what the real world may do to a code base. Even if I can get an AI to make a bespoke photo editor, someone else's AI photo editor that has seen millions of person-years of usage is going to have advantages over my custom one that was just born.

      Of course not all code is like this. There is a lot of low-consequence, one-off code, with all the properties we're familiar with on that front, like, there are no security issues because only I will run this, bugs are of no consequence because it's only ever going to be run across this exact data set that never exposes them (e.g., the vast, vast array of bash scripts that will technically do something wrong with spaces in filenames but ran just fine because there weren't any). LLMs are great for that and unquestionably will get better.
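The spaces-in-filenames failure mode mentioned above generalizes beyond bash: the bug and its fix can be shown with Python's subprocess, which makes the word-splitting explicit (this sketch assumes a POSIX system with `cat` available; the filename is made up):

```python
import os
import subprocess
import tempfile

# A filename with a space: the classic way quick one-off scripts break.
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "my file.txt")
with open(path, "w") as f:
    f.write("hello")

# Naive: splice the path into a shell string, as a throwaway script would.
# The shell word-splits on the space and looks for two files that don't exist.
naive = subprocess.run("cat " + path, shell=True,
                       capture_output=True, text=True)

# Robust: pass the path as a single argv element; no word splitting happens.
robust = subprocess.run(["cat", path], capture_output=True, text=True)

print(naive.returncode)  # non-zero: 'my' and 'file.txt' were not found
print(robust.stdout)     # hello
```

The naive version "ran just fine" for every filename without spaces, which is exactly the low-consequence-until-it-isn't property the comment describes.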

      However there will still be great value in software that has been tested from top to bottom, for suitability, for solving the problem, not just raw basic unit tests but for surviving contact with the real world for millions/billions/trillions of hours. In fact the value of this may even go up in a world suddenly oversupplied with the little stuff. You can get a custom hammer but you can't get a custom hammer that has been tested in the fire of extensive real-world use, by definition.

    • By rurp 2026-01-26 15:40 · 3 replies

      The vast majority of users make zero changes to the default settings of an app or device, even for software they use all the time and where some simple builtin adjustments would significantly improve their experience.

      I simply can't imagine a world where these same people all decide they constantly want to learn a completely unique UX for whatever piece of software they want to use.

      • By WarmWash 2026-01-26 17:29 · 2 replies

        You're missing the forest for the trees here.

        Users will not fumble with the complex web of nested settings that engineers wet dream about.

        But they will tell the LLM "I'd really like it if the tool bar only had the hammer and saw tools", and it will be done.

        I cannot see software going in any other direction than a blank front end that users prompt LLMs to run scripts on top of.

        Picture MS Word where the GUI is just a page and a sidebar for telling an LLM what you want it to do. And if it's not possible, the LLM could even write extensions and plugins that make it possible.

        Software is going to completely change.

        • By charcircuit 2026-01-26 18:16

          That only requires apps to be configurable. You don't need a whole new app to configure a toolbar.

        • By dimitri-vs 2026-01-26 17:51

          > Picture MS Word where the GUI is just a page and a sidebar for telling an LLM what you want it to do.

          Done. And it seems absolutely awful.

          "Please bold the text I have selected" instead of a preexisting bold button.

          Oh wait I can just tell it all the tools I commonly use and where to put them... Hmmm topbar or side bar. Wow so much fun getting to make all these decisions!

          Ok time to change fonts. "Please add a font picker so I can pick a font"

      • By ryandrake 2026-01-26 15:46

        All the people may not, but a decently skilled software engineer armed with an LLM, who doesn't have a lot of free time might be now be motivated to do it, whereas before it was like, "This thing is going to take months to replace, do I really want to write my own?"

      • By 7e 2026-01-26 15:57 · 1 reply

        The LLM will know how the user operates, their proclivities and brain structure, and will design UX perfectly suited to them, like a bespoke glove. They won't have to learn anything, it will be like a butler.

        • By parineum 2026-01-26 16:03

          Why not just say that the LLM will just do all the work while you're making up future, hypothetical capabilities of LLMs?

    • By candiddevmike 2026-01-26 14:18 · 1 reply

      Because whatever you use a LLM to build will inevitably need more features added or some kind of maintenance performed. And now you're spending $200+/mo on LLM subscriptions that give you a half-assed implementation that will eventually collapse under its own weight, vs just buying a solution that actually works and you don't have to worry about it.

      • By technofiend 2026-01-26 17:24

        >... and you don't have to worry about it.

        Not to be argumentative, but I have a concern that whoever I buy my solution from will have vibe coded it instead. I guess my support contract entitles me to hassle them about it, but I'm starting to worry it's just LLMs and vibe-coded apps all the way down.

    • By Ravus 2026-01-26 14:42

      I do not think that this is likely to be a successful model.

      When (not if) software breaks in production, you need to be able to debug it effectively. Knowing that external libraries do their base job is really helpful in reducing the search space and in reducing the blast radius of patches.

      Note that this is not AI-specific. More generally, in-house implementations of software that is not your core business bring costs that go beyond that of writing said implementation.

    • By ryandrake 2026-01-26 15:44

      The more I experiment with quickly coding up little projects with LLMs the more I am convinced of this. There is that saying: 90% of your customers use 10% of your software's features, but they each use a different 10%. Well, the ability to quickly vibe up a small bespoke app that does that 10% AND NOTHING ELSE is here now, and it kind of solves that problem. We don't need to put up with DoEverythingBloatWare (even open source DoEverything) when you can just have the bits and pieces you actually want/need.

      Also, you don't have to fear breaking updates--you know for sure that the software's UI will not just change out from under you because some designer had to pad their portfolio. Or that you're not going to lose a critical feature because the developer decided to refactor and leave it out.

      I'm currently going through and looking at some of the bigger, bloated, crashing slow-moving software I use and working on replacements.

    • By otikik 2026-01-26 14:25 · 1 reply

      > when LLMs can build you a custom(!) hammer or saw in a few minutes, why go to the shed?

      Because I thought I needed a hammer for nails (employee payroll), but then I realized I also need it to drive screws (sales), to solder (inventory management), and to clean up (taxes).

      Oh, and don't forget that next month the density of iron can drop by up to 50%.

      • By freedomben 2026-01-26 14:32

        Screw sales! I've definitely felt that way more than a few times :-D

        Good points. It does feel like that happens quite often

    • By YesBox 2026-01-26 17:10

      Hehe, you created quite the dog pile. Here's my woof:

      IMO you only need to look at the 30+ year history of Linux to see how wrong this prediction is. There will be a small group of people who do as you say, but the other 95% will happily pay for someone else to take care of their problem. Convenience is the supreme king of the American market.

    • By jayd16 2026-01-26 16:12 · 2 replies

      Why use a battle tested, secure, library that you know solves your problem when you can burden your project with custom code you need to maintain?

      • By seniorThrowaway 2026-01-26 16:30

        While quality libraries do exist, let's not pretend that most people validate and test the libraries they pull in, that abandoned/unmaintained libraries aren't widely used, or that managing the dependency hell caused by libraries is free.

      • By LtWorf 2026-01-27 18:27

        Nah the AI will maintain it!

    • By jcarrano 2026-01-26 17:55 · 1 reply

      The 3D printer of software: get a custom part quickly and cheaply, even if it's not suitable for mass production.

    • By wasmitnetzen 2026-01-26 14:58

      Because I will probably ask the AI for a rock instead of a bespoke hammer. If I even know what a nail is.

      I very much like to use the years of debugging and innovation others spent on that very same problem that I'm having.

    • By FeloniousHam 2026-01-26 15:56

      I can speak to this directly: I've customized a few extensions I use with VSCode, having the AI (nearly) completely generate and iterate on my feature request until it works. I don't have the time to learn the details (or the different languages) of the various projects, but I get huge benefit from the improvements.

      - PRO Deployer

      - MS Typescript

      - Typescript-Go

      - a bespoke internal extension to automate a lot of housekeeping when developing against tickets (git checks, branch creation, stash when switching, automatically connecting and updating ticket system)

    • By skybrian 2026-01-26 14:11 · 1 reply

      Maybe true for some apps, but I suspect we will still have a vibrant ecosystem of package managers and open source libraries and coding agents will know how to use them.

      • By marginalia_nu 2026-01-26 14:18 · 1 reply

        What would be the point of that? If LLMs ever actually become competent, surely they can just implement what they need.

        • By wongarsu 2026-01-26 14:25 · 1 reply

          The same reason they exist now. Why spend millions of tokens on designing, implementing and debugging something, followed by years of discovering edge cases in the real world, if I can just use a library that already did all of that?

          Sure, leftpad and python-openai aren't hugely valuable in the age of LLMs, but redis and ffmpeg are still as useful as ever. Probably even more useful now that LLMs can actually know and use all their obscure features

          • By storystarling 2026-01-26 17:15

            They know the syntax but seem to miss the architectural context. I've found that models will happily generate valid Redis commands that introduce race conditions or break state consistency the moment you have concurrency. It saves typing but you still need to design the locking strategy yourself.
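The race the comment describes (a non-atomic GET-then-SET under concurrency) can be reproduced without a Redis server; in this sketch the Redis calls are replaced by a plain dict shared between threads, with the Redis analogues noted in comments (counts and names are made up):

```python
import threading

# Stand-in for Redis so this runs standalone; the interleaving problem
# is the same.
store = {"counter": 0}
lock = threading.Lock()
N = 50_000

def unsafe(n):
    # What generated code often does: read, modify, write (~ GET then SET).
    # Two clients can read the same value, so one increment gets lost.
    for _ in range(n):
        v = store["counter"]
        store["counter"] = v + 1

def safe(n):
    # The locking strategy you still design yourself. With real Redis this
    # would be a single atomic INCR, or a WATCH/MULTI/EXEC transaction.
    for _ in range(n):
        with lock:
            store["counter"] += 1

def run(worker):
    store["counter"] = 0
    threads = [threading.Thread(target=worker, args=(N,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return store["counter"]

racy, correct = run(unsafe), run(safe)
print(racy, correct)  # correct is exactly 200000; racy is typically less
```

Every command the unsafe version issues would be valid Redis, which is exactly why the bug is easy to miss in generated code: the failure is in the interleaving, not the syntax.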

    • By squigz 2026-01-2614:24

      Because going to the shed to get a work-tested tool is still faster than waiting on an LLM and hoping it meets every use-case you're likely to run into with that tool.

      Whatever it is, the future will also certainly not be what it was a couple decades ago - that is, everyone inventing their own solution to solved problems, resulting in a mess of tools with no standardization. There is a reason libraries/frameworks/etc exist.

    • By rglover 2026-01-2614:491 reply

      Would you trust your hand next to a saw made by an LLM?

      • By tracker1 2026-01-2616:341 reply

        Maybe. Were the designs reviewed by qualified engineers, and did they go through rigorous QA cycles before being placed in front of me?

        • By rglover 2026-01-2617:59

          Nope. 100% vibe coded. The brand is DeValt. It may or may not come with a frayed power cord wrapped in electrical tape. As long as you stay away from any water source, it'll be fine.

    • By panta 2026-01-2617:001 reply

      StanLLM-generated hammer: "Dear valued customer, our automated systems have flagged activity on your StanLLM account that appears to be non-compliant with our Acceptable Use Policy. As a precautionary measure your account has been suspended. If you believe this suspension is in error, feel free to contact our customer support at /dev/null^H^H^H^H^H^H^H^H^Hsupport@..."

      • By onraglanroad 2026-01-2618:55

        That's what you get for using proprietary software instead of the GNU version: StaLLMan

    • By dimitri-vs 2026-01-2617:24

      This assumes an educated, passionate and patient user that 99% of people are not. They won't ask for a hammer - they will ask for a rock tied to a stick and get pissed off when it doesn't work like a hammer. They will ask for paint that doesn't drip. They will ask for electrical sockets they can install in their bathtub.

    • By MagicMoonlight 2026-01-2623:021 reply

      You could already pay Indians to make you software if you wanted poorly made software. Poorly making it with AI isn’t that big an improvement.

      You go to the shed because you know the hammer has undergone extensive testing and won’t immediately be hacked, costing you hundreds of millions.

      • By theshrike79 2026-01-2813:05

        It is.

        Getting poor software made by Indians is a cycle that takes months.

        With AI you get the first poor version within hours. Then you can iterate. In a week of hourly iteration you'll most likely have something usable.

        The Indians are still having meetings about when to have meetings and you can see from the logs they haven't accessed any of the documentation you gave them for the issue. Billing is on point though.

    • By GoatInGrey 2026-01-2617:52

      My personal theory is that as the excitement for LLMs normalizes, we'll land on something evolutionary rather than revolutionary. Prompt programming will take over what used to be quick/amateur micro programs (this includes userspace "developers"). Skilled developers will operate on a spectrum: all manual code at one extreme, and at the other extreme, manually defining just the essential patterns that prompt programming can reasonably build upon.

      I do not suspect that we will stay in an individualized programs Tower of Babel situation, if we ever enter it in the first place.

      It's anyone's guess as to what we end up settling on, of course. This is just a guess of mine.

    • By parliament32 2026-01-2617:30

      "Why hire a website designer when you could just drag and drop in WordPress?"

      And many businesses do, and it works, until it doesn't. I agree with you; that's the future I'm seeing for ad-hoc LLM apps.

    • By jvanderbot 2026-01-2616:41

      From what - OS libraries only? Assembly?

      The danger is not "Nobody uses OSS".

      The danger is "building software becomes exponentially more difficult without a commons to build from".

    • By eikenberry 2026-01-2621:23

      Bespoke personalized apps have been around forever in the form of shell scripts. LLMs will expand that capability to a lot more people but won’t change the basic dynamics. Just like shell scripts, LLMs will need lower level building blocks to work with. The main difference will be that the LLM ecosystem will include some libraries as well. Though, I believe, most of those libraries haven’t been written yet.

    • By xacky 2026-01-279:48

      This is also the general threat to books and encyclopedias. Grok, for all its point-of-view flaws, still generated a full encyclopedia in two months, something that took Wikipedia's human contributors 25 years and Britannica over three centuries. The only remaining issue is when we can finally trust AI to be correct more often than expert humans and to be aligned with human moral consensus.

    • By exe34 2026-01-2614:33

      along that line of thinking, I've been wondering if there are better building blocks. right now we're asking llms to use the bricks designed for the human hand building a cathedral - what do the bricks look like when we want AI to build many sheds for specific use? functional programming? would the database ideas of data storage like the longhorn vapourware make a come back?

    • By troutpanda 2026-01-278:54

      Finally a world where everyone uses different hammers and no one can work on anything unless you can ask the hammer-oracle to reverse engineer the other people's hammers and nails

    • By pier25 2026-01-2614:221 reply

      I don't think apps where people spend a lot of time are equivalent to small tools. You can vibe code a calculator but you probably spend most of your time on much more complex software.

      • By groundzeros2015 2026-01-2615:04

        A calculator that uses doubles for everything I guess.
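The jab lands because binary doubles cannot represent most decimal fractions exactly, so a naively vibe-coded calculator gives answers users consider wrong. A quick illustration of the problem and the standard-library fix:

```python
from decimal import Decimal

# Binary floating point: 0.1 has no exact double representation,
# so decimal-looking sums drift.
print(0.1 + 0.2 == 0.3)   # False
print(0.1 + 0.2)          # 0.30000000000000004

# Decimal arithmetic gives the answer a calculator user expects.
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```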

    • By matkoniecz 2026-01-2617:10

      > when LLMs can build you a custom(!) hammer or saw in a few minutes, why go to the shed?

      once such LLMs exist, this question may be worth considering

      they do NOT exist at this moment

    • By groundzeros2015 2026-01-2615:03

      Because it can’t really do that for any tools that matter.

    • By pjmlp 2026-01-2615:22

      Exactly, think StarTrek replicator.

    • By reactordev 2026-01-2614:121 reply

      Why need a tool at all when the LLM can just build the house? What is a hammer? What is a keyboard? What’s a “Drivers License”?

    • By MSFT_Edging 2026-01-2618:07

      This is only a realistic expectation for the single person who needs that hammer.

      Otherwise we'll all individually be burning power via GPUs to reinvent the wheel thousands of times.

      Also, look at the level of effort to maintain any bespoke machine. Assembly lines with interchangeable parts were a big deal for a reason. That follows with software via standard libraries and APIs. If you reinvent your software wheel every time, we'll all have careers trying to fix the deluge of slop that has been generated.

    • By nektro 2026-01-274:43

      the money's gonna run out and LLMs will disappear overnight

    • By estimator7292 2026-01-2623:12

      LLMs have been a godsend for throwaway software. I wouldn't want to count the number of hours I've spent hand rolling an application that exists for one purpose for about two weeks and then to the digital graveyard.

      Now that LLMs can do it while I make coffee, I can do more useful things with my time like writing business code and posting on HN.

      Beyond that, meh.

    • By HugoDz 2026-01-2614:43

      [dead]

    • By player1234 2026-01-2619:58

      [dead]

    • By computerthings 2026-01-272:49

      [dead]

    • By draxil 2026-01-2614:143 reply

      I think that's an optimistic interpretation of how good LLMs are?

      But I think the reality is: LLMs democratise access to coding. In a way this decreases the market for complete solutions, but massively increases the audience for building blocks.

      • By ipaddr 2026-01-2614:17

        That you get no credit for open sourcing. Why would creators spend time anymore?

      • By croes 2026-01-2614:181 reply

        >LLMs democratise access to coding

          Vibe coders don't code, they let the LLM code. So LLMs democratise access to coders.

        • By kibwen 2026-01-2614:252 reply

          Closed-source models aren't "democratizing" access to anything. If you wanted to hire a contractor to write some code for you, that's always been possible.

          • By fragmede 2026-01-2614:561 reply

            Part of democracy is that it's available to all citizens, and not just for the rich. Yes, it's always been possible to find someone, but not for $200/month that will work tirelessly wherever you want them to. 9:00 am Monday? great. 7pm Tuesday? Also great. 4 am on Sunday? Just as great, for an LLM.

            • By dns_snek 2026-01-2616:181 reply

              How long will this heavily subsidized price of $200/month last? Do you really think these companies are going to let you pocket all the surplus value forever?

              We all know that the music is going to stop eventually and that the landscape after that is going to look very different. Subsidies will stop and investors will want their trillions in returns. Talking about "democratization" while everyone is just using other people's money is completely premature.

              Airbnb "democratized travel" for a while and now they're more expensive than their predecessors.

              • By fragmede 2026-01-278:142 reply

                So as soon as all software has been written that can be written, they'll jack up the prices? After that comes to pass, I'm willing to pay them a premium on having AI write additional software.

                • By dns_snek 2026-01-2711:01

                  > So as soon as all software has been written that can be written,

                  I'm not sure where you got this from. They'll jack up the prices at some undetermined time in the future once the weight of reality overpowers the manic hype we've been experiencing for the past few years.

                  My expectation is that this will start unfolding within the next 5 years and people will start looking back to today as the golden age of AI where everything was cheap, exciting, and ad-free.

                • By croes 2026-01-279:46

                  Much sooner.

                  And if all software has been written that can be written what additional software do you want to write?

          • By pepperball 2026-01-2616:59

            [dead]

      • By blibble 2026-01-2615:451 reply

        > LLMs democratise access to coding

        by making the world dependent on 3, fascist adjacent, US tech companies?

        • By vel0city 2026-01-2616:251 reply

          I didn't know Mistral, Z.ai, Qwen, and Deepseek were all fascist adjacent US tech companies.

    • By InMice 2026-01-2614:21

      I like this take. "How things are today + LLM" is in some ways the best we can approximate, because one side is all we know and the other side is the future unfolding before our eyes. One of the coolest things about vibe coding, I find, is starting with a base like Django, then using vibe coding to build models and templates exactly how one wants for the UI/UX. Basically, maybe we still need humans for the guts and the low-level stuff, but that provides a base for fast, easy, personalized customization.

      I had a job where, in short, we had a lot of pain points with software that we had no resources permitted to fix. With a mix of past experience and googling, I started writing some internal web-based tools to fix these gaps. Everyone was happy. This is where I see vibe coding being really helpful: higher-level scripting and web-based tools. Just my opinion based on my experience.

  • By nicoburns 2026-01-2614:073 reply

    Something I've noticed is that AI code generation makes it easier/faster to generate code while shifting more of the work of keeping code correct and maintainable to the code review stage. That can be highly problematic for open source projects that are typically already bottlenecked by maintainer review bandwidth.

    It can be mitigated by PR submitters doing a review and edit pass prior to submitting a PR. But a lot of submitters don't currently do this, and in my experience the average quality of PRs generated by AI is definitely significantly lower than those not generated by AI.

    • By pgroves 2026-01-2615:341 reply

      I was expecting this to be the point of the article when I saw the title. Popular projects appear to be drowning in PRs that are almost certainly AI generated. OpencodeCli has 1200 open at the moment[1]. Aider, which is sort of abandoned, has 200[2]. AFAIK, both projects are mostly one maintainer.

      [1] https://github.com/anomalyco/opencode/pulls [2] https://github.com/Aider-AI/aider/pulls

      • By matkoniecz 2026-01-2617:14

        Even not-so-popular niche projects are getting LLM spam. Curiously, at least where I am active, most of it comes from accounts with India-related usernames.

        Some are opening PRs, some are posting comments in issues that repeat what was said already, just in more words.

    • By trey-jones 2026-01-2614:233 reply

      To me, an old guy, I would rather have LLM doing (assisting with) the code review than the actual code production. Is that stupid?

      • By electroly 2026-01-2615:382 reply

        LLMs are great at reviewing. This is not stupid at all if it's what you want; you can still derive benefit from LLMs this way. I like to have them review at the design level where I write a spec document, and the LLM reviews and advises. I don't like having the LLM actually write the document, even though they are capable of it. I do like them writing the code, but I totally get it; it's no different than me and the spec documents.

        • By trey-jones 2026-01-2616:10

          Right, I'd say this is the best value I've gotten out of it so far: I'm planning to build this thing in this way, does that seem like a good idea to you? Sometimes I get good feedback that something else would be better.

        • By torginus 2026-01-2617:102 reply

          If LLMs are great at reviewing, why do they produce the quality of code they produce?

          • By electroly 2026-01-2618:41

            Reviewing is the easier task: it only has to point me in the right direction. It's also easy to ignore incorrect review suggestions.

          • By gjadi 2026-01-2617:281 reply

            Imho it's because you worked before asking the LLM for input, thus you already have information and an opinion about what the code should look like. You can recognize good suggestions and quickly discard bad ones.

            It's like reading, for better learning and understanding, it is advised that you think and question the text before reading it, and then again after just skimming it.

            Whereas if you ask first for the answer, you are less prepared for the topic, and it's harder to form a different opinion.

            It's my perception.

            • By hxugufjfjf 2026-01-2618:23

              It's also because they are only as good as their given skills. If you tell them "code <advanced project> and make no x and y mistakes" they will still make those mistakes. But if you say "perform a code review and look specifically for x and y", then they may have some notion of what to do. That's my experience with using it for both writing and reviewing the same code in different passes.

      • By groundzeros2015 2026-01-2615:06

        This makes sense to me.

        I need to make decisions about how things are implemented. Even if it can pick “a way” that’s not necessarily going to be a coherent design that I want.

        In contrast for review I already made the choices and now it’s just providing feedback. More information I can choose to follow or ignore.

      • By Leynos 2026-01-2615:52

        Take a look at CodeRabbit and Sourcery if you want to give that a go.

    • By echelon 2026-01-2614:126 reply

      The maintainers can now do all the work themselves.

      With the time they save using AI, they can get much more work done. So much that having other engineers learn the codebase is probably not worth it anymore.

      Large scale software systems can be maintained by one or two folks now.

      Edit: I'm not going to get rate limited replying to everyone, so I'll just link another comment:

      https://news.ycombinator.com/item?id=46765785

      • By tracker1 2026-01-2616:43

        No, because proper QA/QC will be the bottleneck.... AI is ill-suited to test for fit/use. I built an ANSi terminal with AI assist (rust/wasm/canvas)... it literally took longer to get the scrollback feature working with keyboard and mousewheel interactions than it took to get the basic rendering correct. And there are still a few bugs in it.

        In the end, you should not just skip QA/QC and fitness testing. Many things can fit a technical spec and still be absolutely horrible. With AI-assisted development, imo it's that much more important to get the UX right. I don't want 10x the apps if they're all half-implemented garbage that looks like garbage, is hard to use, and is just painful to install and maintain.

        Library creation still has a place here... and so far, getting AI code assistants to actually understand and use a given library that may be less popular has been, at the very least, interesting.

      • By wooderson_iv 2026-01-2614:16

        Do you have anecdotes or evidence of this or is it speculative?

      • By j16sdiz 2026-01-2614:27

        Those are the most mentally exhausting tasks. Are you sure putting this burden on a single person is good?

      • By matkoniecz 2026-01-2617:12

        > So much that having other engineers learn the codebase is probably not worth it anymore.

        > Large scale software systems can be maintained by one or two folks now.

        No, LLMs are not so powerful yet.

      • By erelong 2026-01-2618:42

        Yeah, it should change things but also free up other energies to work on things

      • By shafyy 2026-01-2614:16

        Not sure if you're being sarcastic or not?

  • By marginalia_nu 2026-01-2614:163 reply

    > When OSS is monetized only through direct user engagement, greater adoption of vibe coding lowers entry and sharing, reduces the availability and quality of OSS, and reduces welfare despite higher productivity. Sustaining OSS at its current scale under widespread vibe coding requires major changes in how maintainers are paid.

    I can't think of even a single example of OSS being monetized through direct user engagement. The bulk of it just isn't monetized at all, and what is monetized (beyond a tip-jar situation where you get some coffee money every once in a while) is primarily sponsored by enterprise users, funded through support license sales, or through grants, or something like that. A few projects like Krita sell binaries on the Steam store.

    • By dfox 2026-01-2614:34

      There is this kind of webdev-adjacent niche where the model of using documentation (or even intentionally sub-par documentation) as a marketing funnel for consulting and/or "Pro" versions is a thing. These projects are somewhat vocal about vibe coding killing their business models. If these projects really create any meaningful value is another question.

    • By WorldMaker 2026-01-2617:12

      > what is monetized (beyond like a tip jar situation where you get some coffee money every once in a while) is primarily sponsored by enterprise users, support license sales, or through grants, or something like that

      All of those examples are the "direct user engagement" in question. No one tips a tip jar that they don't see. Enterprise users don't sponsor projects or buy licenses when they don't know they are using projects they should sponsor or buy a better license.

      If an LLM is automating your `npm install` you probably don't see the funding requests. Are you running `npm fund` in your spare time?

      If an LLM is automating your need to lookup library documentation you probably don't see that the library's own documentation has a Ko-Fi link or GitHub Sponsors request. Would you check library websites and GitHub repos on your own for such things without the need to read their documentation?

    • By nprateem 2026-01-2615:58

      Terraform, ansible, countless others. No community=no enterprise version, no awareness

HackerNews