Coding agents have replaced every framework I used

2026-02-07 13:07 · blog.alaindichiappari.dev


I don’t post a lot. But when I do, it’s because I think few people are saying out loud what I’m noticing.

I’ve been building a product from the ground up. Not the “I spun up a Next.js template” kind of ground up. I mean from network configuration to product design to pricing decisions. Truly end to end. And I’ve been doing it using frontier models and coding agents for hours and hours every single day, both on this project and in my full time work. I’ve been trying to stay away from the chaos and the hype, filtering hard for what is actually valuable.

Since December 2025, things have dramatically changed for the better. Many have noticed. Few are drawing the right conclusions.

Antirez likes to call it “automated programming”, and I really like that framing. It captures the essence far better than the shallow, almost dismissive label of “vibe coding”. Automation was at the core of most of the work and cultural revolutions of human history. The printing press, the loom, the assembly line. This one doesn’t differ much.

Most of my work is still there. I still have to deeply think about every important aspect of what I want to build. The architecture, the trade offs, the product decisions, the edge cases that will bite you at 3am. What’s gone is the tearing, exhausting manual labour of typing every single line of code.

At this point in time, models and tools, when put in a clean and maniacally well set up environment, can truly make the difference. I can be the architect without the wearing act of laying every single brick and spreading the mortar. I can design the dress without the act of cutting and sewing each individual piece of fabric. But I can do all of this with the experience on my back of having laid the bricks, spread the mortar, cut and sewn for twenty years. If I don’t like something, I can go in, understand it and fix it as I please, instructing once and for all my setup to do what I want next time.

Automated programming lets me build the tools I need so quickly that every blacksmith who ever lived would envy me deeply: finally able to really focus on the thing they have in mind, finally dedicating more of their craft to the art they conceive rather than the sweat of the forge.

It’s been months now that I have this thought crystallized in my mind. It is so clear to me that I genuinely don’t understand why everyone is not screaming it to the world.

We can finally get rid of all that middle work. That adapting layer of garbage we blindly accepted during these years. A huge amount of frameworks and libraries and tooling that has completely polluted software engineering, especially in web, mobile and desktop development. Layers upon layers of abstractions that abstract nothing meaningful, that solve problems we shouldn’t have had in the first place, that create ten new problems for every one they claim to fix.

Think about what happened. We, as an industry, looked at the genuine complexity of building software and instead of sharpening our thinking, we bought someone else’s thinking off the shelf. We wrapped everything in frameworks like wrapping a broken leg in silk. It looks nice. The leg is still broken.

In my mind, besides the self declared objectives, frameworks solve three problems. Two explicit and one obvious but never declared.

“Simplification”. Software engineers are scared of designing things themselves. They would rather accept someone else’s structure, force fitting it into their product, than take the time to start from the goal and work backwards to create the perfect suit for their idea. Like an architect blindly accepting another architect’s blueprints and applying them regardless of the context, the needs, the terrain, the new technological possibilities. We decided to remove complexity not by sharpening our mental models around the products we build, but by buying a one size fits all design and applying it everywhere. That is not simplification. That is intellectual surrender.

Automation. This is the only point I can actually, more or less, understand and buy. Boilerplate is boring work. I hate it. And I especially hate using libraries that I then need to study, keep updated, be aware of vulnerabilities for, just for the purpose of removing the creation of duplicated but necessary code. Think about ORMs, CRUD management, code generation, API documentation and so on. The grunt work that nobody wants to do but everybody needs done. Fair enough. But hold that thought, because this is exactly the point where everything changes.

Labour cost. This is the quiet one. The one nobody puts on the conference slide. For companies, it is much better having Google, Meta, Vercel deciding for you how you build product and ship code. Adopt their framework. Pay the cost of lock in. Be enchanted by their cloud managed solution to host, deploy, store your stuff. And you unlock a feature that has nothing to do with engineering: you no longer need to hire a software engineer. You hire a React Developer. No need to train. Plug and play. Easy to replace. A cog in a machine designed by someone else, maintaining a system architected by someone else, solving problems defined by someone else. This is not engineering. This is operating.

In my opinion, software engineering, the true kind, is back.

This is not just talk. I’ve been developing this way almost flawlessly for over two years at this point. But the true revolution clearly happened last year, and since December 2025 it has been obvious to anyone paying attention. From now on it will be even more so.

We have the chance again to get rid of useless complexity and keep working on the true and welcome complexity of our ideas, our features, our products. The complexity that matters. The complexity that is actually yours.

Automation and boilerplating have never been so cheap to overcome. I basically never write the same line of code twice. I’m instantly building the small tools I need, purpose built, exactly shaped around the problem at hand. I don’t need any fancy monorepo manager. A simple Makefile covers 100% of my needs for 99% of my use cases. If and when things get very complicated, I’ll think about it. But only then. Not a second before. This is engineering. You solve the problem you have, not the problem someone on a conference stage told you that you’ll eventually have.
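A minimal sketch of such a Makefile, with illustrative targets only; the project layout and toolchain here are assumptions, not a prescription:

```make
# Illustrative sketch: substitute your own build, test, and run commands.
.PHONY: build test run clean

build:
	mkdir -p bin
	go build -o bin/app ./cmd/app

test: build
	go test ./...

run: build
	./bin/app

clean:
	rm -rf bin
```

Four plain targets cover the build, test, run, clean cycle with no plugin system, no lock-in, and nothing to keep updated.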

Agents are really well prepared when it comes to basic tools. Tools that have been around not for months, but literally for decades. Bash was born in 1989, just preceding me by two months. The most mediocre model running at this time knows bash better than any person in the world. Bash is the universal adapter. It is not a coincidence that coding agents are shifting from complex and expensive MCP configurations to a simple agent loop with bash as a way to interact, literally, with the world. The oldest tool turned out to be the most future proof. There’s a lesson in there if you care to listen.
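That agent-loop-with-bash pattern can be sketched in a few lines. This is a toy under stated assumptions: `propose_command` is a hypothetical stand-in for a real model call, not any actual API, and a production loop would add sandboxing, safety checks, and conversation state.

```python
import subprocess

def propose_command(goal, history):
    """Stand-in for a model call (hypothetical, for illustration only).

    Given the goal and the transcript of (command, output) pairs so far,
    return the next bash command to run, or None when done.
    """
    if not history:
        return f"echo 'starting work on: {goal}'"
    return None  # a real model would keep iterating until the goal is met

def agent_loop(goal, max_steps=10):
    """Minimal agent loop: the model acts on the world only through bash."""
    history = []
    for _ in range(max_steps):
        cmd = propose_command(goal, history)
        if cmd is None:
            break
        # Bash is the entire tool interface: run the command, capture output.
        result = subprocess.run(
            ["bash", "-c", cmd], capture_output=True, text=True, timeout=60
        )
        history.append((cmd, result.stdout + result.stderr))
    return history

if __name__ == "__main__":
    for cmd, out in agent_loop("say hello"):
        print(f"$ {cmd}\n{out}", end="")
```

The whole "protocol" is a subprocess call; everything the model can do, it does by emitting shell commands and reading their output back.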

Really think about it.

Why do you ever need, for most of the use cases you can think of, a useless, expensive, flawed, often vulnerable framework, and the parade of libraries that comes with it, that you probably use for only 10% of its capabilities? With all the costs associated with it. From the “least” expensive: operational costs like keeping everything updated because they once again found a critical vulnerability in your Next.js version. To the most expensive one: the cost to your Design Choices. The invisible cost. The one you pay every day without even realizing it, because you’ve been paying it so long you forgot what freedom felt like.

If you keep accepting this trade off, you are not only losing the biggest opportunity we’ve seen in software engineering in decades. You are probably not recognizing your own laziness in once again buying whatever the hyperscalers have decided for you. You’re letting Google and Meta and Vercel be your architect, your designer, your thinker. And in exchange, you get to be their operator.

The tools are here. The models are here. The revolution already happened and most people are still decorating the old house.

Stop wrapping broken legs in silk. Start building things that are yours.



Comments

  • By rglover 2026-02-07 16:18 · 40 replies

    A significant number of developers and businesses are going to have an absolutely brutal rude awakening in the not too distant future.

    You can build things this way, and they may work for a time, but you don't know what you don't know (and experience teaches you that you only find most stuff by building/struggling; not sipping a soda while the AI blurts out potentially secure/stable code).

    The hubris around AI is going to be hard to watch unwind. What the moment is I can't predict (nor do I care to), but there will be a shift when all of these vibe code only folks get cooked in a way that's closer to existential than benign.

    Good time to be in business if you can see through the bs and understand how these systems actually function (hint: you won't have much competition soon as most people won't care until it's too late and will "price themselves out of the market").

    • By mark242 2026-02-07 17:19 · 16 replies

      I would argue that it's going to be the opposite. At re:Invent, one of the popular sessions was on creating a trio of SRE agents: one that did nothing but read logs and report errors, one that analyzed the errors, triaged them, and proposed fixes, and one to do the work and submit PRs to your repo.

      Then, as part of the session, you would artificially introduce a bug into the system, then run into the bug in your browser. You'd see the failure happen in browser, and looking at Cloudwatch logs you'd see the error get logged.

      Two minutes later, the SRE agents had the bug fixed and ready to be merged.

      "understand how these systems actually function" isn't incompatible with "I didn't write most of this code". Unless you are only ever a single engineer, your career is filled with "I need to debug code I didn't write". What we have seen over the past few months is a gigantic leap in output quality, such that re-prompting happens less and less. Additionally, "after you've written this, document the logic within this markdown file" is extremely useful for your own reference and for future LLM sessions.

      AWS is making a huge, huge bet on this being the future of software engineering, and even though they have their weird AWS-ish lock-in for some of the LLM-adjacent practices, it is an extremely compelling vision, and as these nondeterministic tools get more deterministic supporting functions to help their work, the quality is going to approach and probably exceed human coding quality.

      • By dasil003 2026-02-07 18:11 · 4 replies

        I agree with both you and the GP. Yes, coding is being totally revolutionized by AI, and we don't really know where the ceiling will be (though I'm skeptical we'll reach true AGI any time soon), but I believe there is still an essential element of understanding how computer systems work that is required to leverage AI in a sustainable way.

        There is some combination of curiosity about inner workings and precision of thought that has always been essential in becoming a successful engineer. In my very first CS 101 class I remember the professor alluding to two hurdles (pointers and recursion) that a significant portion of the class would not be able to surpass, and they would change majors. Throughout the subsequent decades I saw this pattern again and again with junior engineers, bootcamp grads, etc. There are some people who, no matter how hard they work, can't grok abstraction and unlock a general understanding of computing possibility.

        With AI you don't need to know syntax anymore, but to write the right prompts to maintain a system and (crucially) the integrity of its data over time, you still need this understanding. I'm not sure how the AI-native generation of software engineers will develop this without writing code hands-on, but I am confident they will figure it out, because I believe it to be an innate, often pedantic, thirst for understanding that some people have and some don't. This is the essential quality to succeed in software, both in the past and in the future. Although vibe coding lowers the barrier to entry dramatically, there is a brick wall looming just beyond the toy app/prototype phase for anyone without a technical mindset.

        • By athrowaway3z 2026-02-07 18:55 · 6 replies

          I can see why people are skeptical devs can be 10x as productive.

          But something I'd bet money on is that devs are 10x more productive at using these tools.

          • By bodge5000 2026-02-07 22:19 · 1 reply

            I get it's necessary for investment, but I'd be a lot happier with these tools if we didn't keep making these wild claims, because I'm certainly not seeing 10x the output. When I ask for examples, 90% of the time it's Claude Code (not a beacon of good software anyway, but if nearly everyone points to the same example, that tells you it's the best you can probably expect) and 10% weekend projects, which are cool, but not 10x cool. Opus 4.5 was released in Dec 2025; by this point people should be churning out year-long projects in a month, and I certainly haven't seen that.

            I've used them a few times, and they're pretty cool. If it was just sold as that (again, couldn't be, see: trillion dollar investments) I wouldn't have nearly as much of a leg to stand on

            • By scotty79 2026-02-08 0:10 · 6 replies

              Have you seen Moltbook? One dude coded a Reddit clone for bots in less than a week. How is that not at least 10x of what was achievable in the pre-AI world?

              Granted, he left the db open to the public, but some meat-powered startups did exactly the same a few years ago.

              • By coldtea 2026-02-08 1:14 · 2 replies

                Any semi-capable coder has been able to build a Reddit clone by themselves in a week since forever. It's a glorified CRUD app.

                The barrier to creating a full blown Reddit is the huge scaling, not the functionality. But with AWS, Azure, Google Cloud, and backends like S3, CF etc, this hasn't been a barrier for a decade or more, either.

                • By scotty79 2026-02-08 12:40

                  What I could do in a week is maybe set up an open source clone of reddit (that was written by many people for many months) and customize it a little bit.

                  And I have a pretty decent career behind me as a software developer, and my peers perceived me as kinda good.

                • By spiderfarmer 2026-02-08 3:32 · 1 reply

                  I think you’re wrong in several ways.

                  Even capable coders can’t create a Reddit clone in a week. Because it’s not just a glorified CRUD app. And I encourage you to think a bit harder before arguing like that.

                  Yes you can create a CRUD app in some kind of framework and style it like Reddit. But that’s like putting lines on your lawn and calling it a clone of the Bernabeu.

                  But even if you were right, the real barrier to building a Reddit clone is getting traction. Even if you went viral and did everything right, you’d still have to wait years before you have the brand recognition and SEO rankings they enjoy.

                  • By coldtea 2026-02-08 4:58 · 1 reply

                    >Because it’s not just a glorified CRUD app

                    In what way (that's not related to the difficulty of scaling it, which I already addressed separately)?

                    The point of my comment was:

                    "Somebody with AI cloning Reddit in a week is not as special as you make it out to be, all things considered. A Reddit clone is not that difficult, it's basically a CRUD app. The difficult part of replicating it, or at least all the basics of it, is its scaling - and even that wouldn't be as difficult for a dev in 2026, the era of widespread elastic cloud backends".

                    The Bernabeu analogy handwavingly assumes that Reddit is more challenging than a homegrown clone, but doesn't address in what way Reddit differs from a CRUD app, and how my comment doesn't hold.

                    And even if it did, it would be moot regarding the main point I make, unless the recent AI-clone also handles those differentiating non-CRUD elements and thus also differs from a CRUD app.

                    >But even if you were right, the real barrier to building a Reddit clone is getting traction.

                    True, but not relevant to my point, which is about the difficulty of cloning Reddit coding-wise, not business wise, and whether it's or isn't any great feat for someone using AI to do it.

                    • By spiderfarmer 2026-02-08 9:25 · 4 replies

                      Calling Reddit a CRUD app isn’t wrong, it’s just vacuous.

                      It strips away every part that actually makes Reddit hard.

                      What happens when you sign up?

                      A CRUD app shows a form and inserts a row.

                      Reddit runs bot detection, rate limits, fingerprinting, shadow restrictions, and abuse heuristics you don’t even see, and you don’t know which ones, because that knowledge is their moat.

                      What happens when you upvote or downvote?

                      CRUD says “increment a counter.”

                      Reddit says “run a ranking algorithm refined over years, with vote fuzzing, decay, abuse detection, and intentional lies in the UI.” As the number you see is not the number stored.

                      What happens when you add a comment?

                      CRUD says “insert record.”

                      Reddit applies subreddit-specific rules, spam filters, block lists, automod logic, visibility rules, notifications, and delayed or conditional propagation.

                      What happens when you post a URL?

                      CRUD stores a string.

                      Reddit fingerprints it, deduplicates it, fetches metadata, detects spam domains, applies subreddit constraints, and feeds it into ranking and moderation systems.

                      Yes, anyone can scaffold a CRUD app and style it like Reddit.

                      But calling that a clone is like putting white lines on your lawn and calling it the Bernabeu.

                      You haven’t cloned the system, only its silhouette.

                      • By Ma8ee 2026-02-08 11:12 · 1 reply

                        Why do you think the app they call a clone of Reddit do all of those things, or most, or any?

                        • By carlmr 2026-02-08 18:40 · 1 reply

                          I was thinking the exact same thing. Moltbook isn't that sophisticated. We're moving goal posts a lot here.

                          However, I do think 1 week is ambitious, even for a bad clone.

                          • By spiderfarmer 2026-02-08 20:08 · 1 reply

                            So if Reddit is just a CRUD app, what is Moltbook?

                            • By Ygg2 2026-02-08 23:16 · 1 reply

                              An impressive MVP of Reddit, with zero sophistication. It's a CRAP app.

                              • By spiderfarmer 2026-02-09 7:42

                                My point exactly. But if you're semi-capable and have a week of spare time, you can build a better Reddit clone, or so I heard.

                      • By bodge5000 2026-02-08 22:28 · 1 reply

                        > Reddit runs bot detection, rate limits, fingerprinting, shadow restrictions, and abuse heuristics you don’t even see, and you don’t know which ones, because that knowledge is their moat.

                        > Reddit says “run a ranking algorithm refined over years, with vote fuzzing, decay, abuse detection, and intentional lies in the UI.” As the number you see is not the number stored.

                        > etc...

                        The question is: is Moltbook doing this? That was the original point. It took a week to build a basic Reddit clone (the silhouette, as you call it) with AI, and that should surely be the point of comparison to what a human could do in that time.

                        • By spiderfarmer 2026-02-09 7:44 · 1 reply

                          "A basic Reddit clone"

                          So as we have established, it's not even a basic Reddit clone.

                          And anyone who says they can build one in a week is giving HN a bad reputation.

                          • By bodge5000 2026-02-10 9:49

                            That just seems like a completely different argument, Reddit only came into a part of this in relation to Moltbook

                      • By csande17 2026-02-08 10:59 · 1 reply

                        Moltbook didn't do any of that stuff either, though!

                        • By spiderfarmer 2026-02-08 20:08

                          So if Reddit is just a CRUD app, what is Moltbook?

                      • By coldtea 2026-02-08 15:10

                        Sorry, but this reads like AI slop.

              • By rsynnott 2026-02-08 20:00

                Remember the Ruby on Rails hype? You could make a twitter clone in an afternoon! It obviously wouldn't work properly, but, y'know...

                This is, like, not the industry's first run-in with "this makes you 10x more productive!"

              • By Ygg2 2026-02-08 18:46

                Have you seen the shitshow Moltbook was?

                Anyone could insert themselves, AI or not. Anyone could post any number of likes.

                This isn't a Reddit clone. This is Reddit written by high schoolers.

              • By bodge5000 2026-02-08 0:47 · 1 reply

                I mean, as has already been pointed out, the fact that it's a clone is a big reason why, but then I also think I could probably churn out a simple clone of Reddit in less than a week. We've been through this before with Twitter: the value isn't the tech (which is relatively straightforward), it's the user base. Of course Reddit has some more advanced features which would be more difficult, but I think the public db probably tells you that wasn't much of a concern to Moltbook either, so yeh, I reckon I could do that.

                • By scotty79 2026-02-08 12:43 · 1 reply

                  Double your estimate and switch the unit of time to the next larger one. That's how programmers' time estimates tend to go. So two months, and I'm right there with you.

                  • By bodge5000 2026-02-08 22:20

                    That only counts if it's something you care about. If you throw maintenance out the window (e.g. you don't close off your db) it gets a lot easier.

              • By jimbokun 2026-02-08 1:25

                1. Do you have insider knowledge of the Reddit code base and the Moltbook code base and how much it reproduced?

                2. Copying an existing product should take a minuscule fraction of the time it took to evolve the original.

                3. I glanced at some of the Moltbook comments which were meaningless slop, very few having any replies.

              • By Krei-se 2026-02-08 0:24

                Because it's a clone.

          • By J_Shelby_J 2026-02-07 19:52 · 3 replies

            Id wager my life savings that devs aren’t even 1.5x more productive using these tools.

            • By mattlondon 2026-02-07 21:43 · 1 reply

              Even if I am only slightly more productive, it feels like I am flying. The mental toll is severely reduced and the feel good factor of getting stuff done easily (rather than as a slog) is immense. That's got to be worth something in terms of the mental wellbeing of our profession.

              FWIW I generally treat the AI as a pair programmer. It does most of the typing and I ask it: why did you do this? Is that the most idiomatic way of doing it? That seems hacky. Did you consider edge case foo? Oh wait, let's call it a BarWidget, not a FooWidget; rename everything in all other code/tests/make/doc files. Etc etc.

              I save a lot of time typing boilerplate, and I find myself more willing (and a lot less grumpy!!!) to bin a load of things I've been working on when I realise they're the wrong approach or the requirements change (in the past I might try to modify something I'd been working on for a week rather than start from scratch again; with AI there is zero activation energy to start again the right way). That's super valuable in my mind.

              • By 9dev 2026-02-08 8:53

                I absolutely share your feelings. And I realise I'm way less hesitant to pick up the dredge tasks: migrating to new major versions of dependencies, adding missing edge case tests, adding CRUD endpoints, nasty refactorings. All these things you usually postpone, or go on HN procrastination sprees to avoid, are suddenly very simple undertakings that you can trivially review.

            • By zer00eyz 2026-02-07 20:12 · 3 replies

              Dead wrong.

              Because the world is still filled with problems that would once have been on the wrong side of the is it worth your time matrix ( https://xkcd.com/1205/ )

              There are all sorts of things that I, personally, should have automated long ago that I threw at Claude to do for me. What was the cost to me? A prompt and a code review.

              Meanwhile, on larger tasks an LLM deeply integrated into my IDE has been a boon. Having an internal debate on how to solve a problem? Try both, write a test, prove out which is going to be better. Pair program, function by function, with your LLM; treat it like a jr dev who can type faster than you if you give it clear instructions. I think you will be shocked at how quickly you can massively scale up your productivity.

              • By xeromal 2026-02-07 22:51 · 1 reply

                Yup, I've already revived like 6 of my personal projects that I had lost interest in, including 1 for my wife. For a few dollars, these are now actually running and being used by my family. These tools are a great enabler for people like me. lol

                I used to complain when my friends and family gave me ideas for something they wanted or needed help with because I was just too tired to do it after a day's work. Now I can sit next to them and we can pair program an entire idea in an evening.

                • By Micheasherman 2026-02-11 9:10 · 1 reply

                  If it is 20% slower for you to write with AI, but you are not stressed out and enjoy it so you actually code then the AI is a win and you are more productive with it.

                  I think that's what is missing from the conversation. It doesn't make developers faster, nor better, but it can automate what some devs detest and feel burned out having to write and for those devs it is a big win.

                  If you can productively code 40 hours a week with AI and only 30 hours a week without AI then the AI doesn't have to be as good, just close to as good.

                  • By xeromal 2026-02-11 15:27

                    I'm in agreement with you 100%. A lot of my job is coming into projects that have been running already and having to understand how the code was written, the patterns, and everything else. Generating a project with an LLM feels like doing the same thing. It's not going to be a perfect code base but it's enough.

                    Last night I was working on trying to find a correlation between some malicious users we had found and information we could glean from our internet traffic and I was able to crunch a ton of data automatically without having to do it myself. I had a hunch but it made it verifiable and then I was able to use the queries it had used to verify myself. Saved me probably 4 or 5 hours and I was able to wash the dishes.

              • By newAccount2025 2026-02-07 20:36

                The matrix framing is a very nice way to put it. This morning I asked my assistant to code up a nice debugger for a particular flow in my application. It's much better than what I would have had the time/patience to build myself for a nice-to-have.

              • By rubyn00bie 2026-02-07 21:28 · 1 reply

                I sort of have a different view of that time matrix. If AI is only able to help me do tasks that are of low value, where I previously wouldn't have bothered, is it really saving me anything? Before, where I'd simply ignore auxiliary tasks and focus on what matters, I'm now constantly detoured by them, thinking "it'll only take ten minutes."

                I also primarily write Elixir, and I have found most Agents are only capable of writing small pieces well. More complicated asks tend to produce unnecessarily complicated solutions, ones that may “work,” on the surface, but don’t hold up in practice. I’ve seen a large increase in small bugs with more AI coding assistance.

                When I write code, I want to write it and forget about it. As a result, I’ve written a LOT of code which has gone on to work for years without touching it. The amount of time I spent writing it is inconsequential in every sense. I personally have not found AI capable of producing code like that (yet, as all things, that could change).

                Does AI help with some stuff? Sure. I always forget common patterns in Terraform because I don't often have to use it. Writing some initial resources and asking it to "make it normal" is helpful. That does save time. Asking it to write a gen server correctly is an act of self-harm, because it fundamentally does not understand concurrency in Erlang/BEAM/OTP. It very much looks like it does, but it 100% does not.

                tldr; I think the ease of use of AI can cause us to over produce and as a result we miss the forest for the trees.

                • By zer00eyz 2026-02-07 22:23

                  > are only capable of writing small pieces well.

                  It excels at this, and if you have it deeply integrated into your workflow and IDE/dev env, the loop should feel more like pair programming, like tennis, than like it's doing everything for you.

                  > I also primarily write Elixir,

                  I would also venture that it has less to do with the language (it is a factor) and more to do with what you are working on. Domain will matter in terms of sample size (code) and understanding (language to support). There could be 1000s of examples in its training data of what you want, but if no one wrote a comment that accurately describes what it does...

                  > I think the ease of use of AI can cause us to over produce and as a result we miss the forest for the trees.

                  This is spot on. I stopped thinking of it as "AI" and started thinking of it as "power tools". Useful, and like a power tool you should be cautious because there is danger there... It isn't smart, it's not doing anything that isn't in its training data, but there is a lot there, everything, and it can do some basic synthesis.

            • By leptons 2026-02-07 21:21

              It probably depends on the developer, and on how much slop and how many bugs they're willing to tolerate.

          • By rirze 2026-02-07 21:18

            Like others are saying, AI will accelerate the gap between competent devs and mediocre devs. It is a multiplier. AI cannot replace fundamentals, or a good helmsman with a rational, detail-oriented mind. Having fundamentals (skill & knowledge) + using AI will be the cheat code for the next 10 years.

            The only historical analogue of this is perhaps differentiating a good project manager from an excellent one. No matter how advanced, technology will not substitute for competence.

          • By gtowey 2026-02-08 1:02 | 1 reply

            At the company I work for, despite pushing widespread adoption, I have seen exactly a zero percent increase in the rate at which major projects get shipped.

            • By qualifck 2026-02-08 23:18 | 1 reply

              This is what keeps getting me. People here keep posting benchmarks, bragging about 5x, 10x, 20x. None of the companies we work with are shipping anything faster.

              The evangelist response is to call it a skill issue, but looking around it seems like no one anywhere is actually pushing out new products meaningfully faster.

              • By gtowey 2026-02-11 5:14 | 1 reply

                I think it shows the barrier to building software quickly was never about how fast you can write the code.

                Maybe at a startup it is, but for any established company I find most of the friction is systemic management issues.

                • By Micheasherman 2026-02-11 9:12

                  Well, considering there are coders who type under 30 WPM and are senior level programmers I would have thought people realized that years ago.

                  Oh well

          • By seff 2026-02-07 20:00 | 1 reply

            I view the current tools as more of a multiplier of base skill.

            A 1x engineer may become a 5x engineer, but a -1x will also produce 5x more bad code.

            • By Retric 2026-02-07 21:13

              Several experiments have shown quality of output at every skill level drops.

              In many cases the quantity of output is good enough to compensate, but quality is extremely difficult to improve at scale. Beefing up QA to handle significantly more code of noticeably lower quality only goes so far.

          • By rsynnott 2026-02-08 19:58

            > But something I'd bet money on is that devs are 10x more productive at using these tools.

            If this were true, we should be seeing evidence of it by now, either in vastly increased output by companies (and open source projects, and indie game devs, etc), or in really _dramatic_ job losses.

            This is assuming a sensible definition of 'productive'; if you mean 'lines of code' or 'self-assessment', then, eh, maybe, but those aren't useful metrics of productivity.

        • By stuartaxelowen 2026-02-08 22:21

          It is tempting to think that we can delegate describing the mental model to AI, but all of this seems to boil down to humans making bets, and the fundamental bets engineers are making are about the formalisms that encode the product and make it valuable.

        • By rendaw 2026-02-08 4:01 | 1 reply

          What an awful professor! When I first tried to learn pointers, I didn't get it. I tried again 6 months later and suddenly it clicked. The same thing happened for another guy I was learning with.

          So the professor just gaslit years of students into thinking they were too dumb to get programming, and also left them with the developmental disability of "if you can't figure something out in a few days, you'll never get it".

          • By discreteevent 2026-02-08 7:59

            Indeed. Never let any idiot like this put you off. Back yourself and persist, persist, persist.

        • By plomme 2026-02-08 9:57

          I don’t think there will be an “AI native” generation of developers. AI will be the entity that “groks pointers” and no one else will know it or care what goes on under the hood.

      • By alexpotato 2026-02-07 23:01 | 1 reply

        Speaking as someone who has been both a SRE/DevOps from all levels from IC to Global Head of a team:

        - I 100% believe this is happening and is probably going to be the case in the next 6 months. I've seen Claude and Grok debug issues when they only had half of the relevant evidence (e.g. Given A and B, it's most likely X). It can even debug complex issues between systems using logs, metrics etc. In other words, everything a human would do (and sometimes better).

        - The situation described is actually not that different from being a SRE manager. e.g. as you get more senior, you aren't doing the investigations yourself. It's usually your direct reports that are actually looking at the logs etc. You may occasionally get involved for more complex issues or big outages but the direct reports are doing a lot of the heavy lifting.

        - All of the above being said, I can imagine errors so weird/complex etc that the LLMs either can't figure it out, don't have the MCP or skill to resolve it or there is some giant technology issue that breaks a lot of stuff. Facebook engineers using angle grinders to get into the data center due to DNS issues comes to mind for the last one.

        Which probably means we are all going to start to be more like airline pilots:

        - highly trained in debugging AND managing fleets of LLMs

        - managing autonomous systems

        - around "just in case" the LLMs fall over

        P.S. I've been very well paid over the years and being a SRE is how I feed my family. I do worry, like many, about how all of this is going to affect that. Sobering stuff.

        • By misir 2026-02-08 1:11 | 3 replies

          > Which probably means we are all going to start to be more like airline pilots:

          Airline pilots are still employed because of regulations. The industry is heavily regulated, and the regulations move very slowly because of its international cooperative nature. The regulations dictate how many crew members should be on board for each plane type, among various other variables. All the airlines have to abide by the rules of the airspace they're flying over to keep flying.

          The airlines, on the other hand, along with the technology producers (Airbus, for example), are pushing to reduce the number of heads in the cockpit. While their recent attempt to get rid of co-pilots in EASA land has failed [1], you can see the persistence and investment. The industry will continue to force through cost optimization as long as there's no barrier to prevent it. The cases where automation has failed will just be a cost of doing business, since the life of the deceased is no concern to the company's balance sheet.

          Given the lack of regulation in software, I suspect the industry will continue the cost optimization and eliminate humans in the loop, except in the regulated domains.

          [1] - https://www.easa.europa.eu/en/research-projects/emco-sipo-ex... ; while this was not a direct push to get rid of all pilots, it's a stepping stone in that direction.

          • By anonnon 2026-02-08 5:52

            It's crazy how many developers are starry-eyed optimists about all of this, just casually assuming that they'll still be highly-paid, well-respected professionals ("we'll be like pilots, mostly monitoring the autopilot") if this technology doesn't hit a wall in the next year or two, despite lacking any of the legal protections that other professions enjoy.

          • By alexpotato 2026-02-08 2:12

            The below post is a great explanation of how the airlines are essentially at the mercy of the pilot unions: https://philip.greenspun.com/flying/unions-and-airlines

          • By nunez 2026-02-11 1:28

            Regulation and strong unions are the only thing holding airlines back from doing what the cruise lines did long ago: importing all of their labor from cheaper countries and paying them trash while working them to the bone.

            In the meantime, captains at legacy airlines are the only ones getting paid well. Everyone else struggles to make ends meet. All while airlines constantly complain that they "can't find enough qualified pilots." Where have I heard this said before...

            Also, every pilot is subject to furloughs, which happen every time economic headwinds blow a little too hard, which resets their tenure, and their payscales, if they switch employers.

      • By pragmatic 2026-02-07 17:24 | 3 replies

        Now run that loop 1000 times.

        What does the code /system look like.

        It is going to be more like evolution (fit to environment) than engineering (fit to purpose).

        It will be fascinating to watch nonetheless.

        • By ThrowawayR2 2026-02-07 20:48 | 2 replies

          It'll probably look like the code version of this, an image run through an LLM 101 times with the directive to create a replica of the input image: https://www.reddit.com/r/ChatGPT/comments/1kbj71z/i_tried_th... Despite being provided with explicit instructions, well...

          People are still wrongly attributing a mind to something that is essentially mindless.

          • By therealdrag0 2026-02-08 1:09 | 1 reply

            Mind is irrelevant. We see the features performing before our eyes; it's wild that people still deny codegen's abilities.

            • By habinero 2026-02-08 2:12 | 1 reply

              They do okay-ish for things that don't matter and if you don't look that hard. If you do look, the "features" turn out to be very limited, not do what they claim, or not work at all.

              • By therealdrag0 2026-02-08 3:33 | 1 reply

                It’s still a collaborative and iterative process. That doesn’t mean they don’t work. I don’t need AI to one-shot my entire job for it to be crazy useful.

                • By habinero 2026-02-08 8:02

                  If you find it helpful, that's fine. I like it as spicy autocorrect, and turn it off when I find it annoying.

                  I actually do look into what people do because as much fun as being a hater is, it's important not to get lost in the sauce.

                  From what I've seen, it's basically all:

                  1. People tricking themselves into feeling productive but they're not, actually

                  2. People tricking themselves into feeling productive but they're actually doing sloppy work

                  3. Hobby or toy stuff

                  4. Stuff that isn't critical to get right

                  5. Stuff they don't know how to judge the quality of

                  6. ofc the grifters chasing X payouts and driving FOMO

                  7. People who find it kinda useful in some limited situations (me)

                  It has its uses for sure, but I don't find it transformative. It can't do the hard parts and for anything useful, I need to check exactly what it did, and if I do that, it's much faster to do myself. Or make a script to do it.

          • By pinkgolem 2026-02-07 21:54 | 1 reply

            I mean, if you told a chain of 100 humans to redraw a picture, I would expect it to go similarly, just much faster.

            • By camdenreslink 2026-02-08 2:05 | 1 reply

              If you handed a human an image and said please give me back this image totally unmodified, I bet the human could do it.

              • By cryptonector 2026-02-08 16:04

                Not if you were asking them to redraw the image as they saw it. That's what's happening in this particular case, only with an LLM.

        • By finebalance 2026-02-07 17:29

          "evolution (fit to environment) than engineering (fit to purpose)."

          Oh, I absolutely love this lens.

        • By skybrian 2026-02-07 17:36 | 5 replies

          Sure, if all you ask it to do is fix bugs. You can also ask it to work on code health things like better organization, better testing, finding interesting invariants and enforcing them, and so on.

          It's up to you what you want to prioritize.

          • By smashed 2026-02-07 17:58 | 3 replies

            I have some healthy skepticism on this claim though. Maybe, but there will be a point of diminishing returns where these refactors introduce more problems than they solve and just cause more AI spending.

            Code is always a liability. More code just means more problems. There has never been a code generating tool that was any good. If you can have a tool generate the code, it means you can write something on a higher level of abstraction that would not need that code to begin with.

            AI can be used to write this better quality / higher level code. That's the interesting part to me. Not churning out massive amounts of code, that's a mistake.

            • By Applejinx 2026-02-07 22:12

              Microsoft will be an excellent real-world experiment on whether this is any good. We so easily forget that giant platform owners are staking everything on all this working exactly as advertised.

              Some of my calculations going forward will continue to be along the lines of 'what do I do in the event that EVERYTHING breaks and cannot be fixed'. Some of my day job includes retro coding for retro platforms, though it's cumbersome. That means I'll be able to supply useful things for survivors of an informational apocalypse, though I'm hoping we don't all experience one.

            • By skybrian 2026-02-07 19:55 | 1 reply

              "What can we do to reduce the size of the codebase" seems like an interesting prompt to try.

              • By NitpickLawyer 2026-02-07 21:56 | 1 reply

                There's an interesting phenomenon I noticed with the "skeptics". They're constantly using what-ifs (aka goalpost moving), but the interesting thing is that those exact same what-ifs were "solved" earlier, only to be dismissed as "not good enough".

                This exact thing about optimisation has been shown years ago. "Here's a function, make it faster". With "glue" to test the function, and it kinda worked even with GPT4 era models. Then came alphaevolve where google found improvements in real algorithms (both theoretical i.e. packing squares and practical i.e. ML kernels). And yet these were dismissed as "yeah, but that's just optimisation, that's easyyyy. Wake me up when they write software from 0 to 1 and it works".

                Well, here we are. We now have a compiler that can compile and boot Linux! And people are complaining that the code is unmaintainable and that it's slow / unoptimised. We've gone full circle, but forgot that optimisation was easyyyy. Now it's something to complain about. Oh well...

                • By anon22981 2026-02-07 23:04 | 1 reply

                  I use LLMs daily and agents occasionally. They are useful, but there is no need to move any goal posts; they still easily do shit work in 2026.

                  All my coworkers use agents extensively in the backend and the amount of shit code, bad tests and bugs has skyrocketed.

                  Couple that with a domain (medicine) where our customer in some cases needs to validate the application's behaviour extensively, and it's a fucking disaster: very expensive iteration instead of doing it well upfront.

                  • By skybrian 2026-02-08 0:09 | 1 reply

                    I think we have some pretty good power tools now, but using them appropriately is a skill issue, and some people are learning to use them in a very expensive way.

                    • By andrekandre 2026-02-08 1:41 | 1 reply

                        > appropriately is a skill issue
                      
                      or maybe it's a ux issue?

                      maybe chatbot style interfaces are just an artifact of the medium?

                      people talk about setting up harnesses and feedback loops etc, but a lot of the ux is frankly a mess...

                      • By skybrian 2026-02-08 2:46

                        I find that chat is pretty good when you're describing what you want to do, for saying "actually, I wanted something different," or for giving it a bug report. For making fine adjustments to CSS, it would be nice if you could ask the bot for a slider or a color picker that makes live updates.

            • By habinero 2026-02-08 2:28

              I vigorously agree with all of this.

              It doesn't really matter for hobby projects or demos or whatever, but there's this whole group who thinks they can yell at the computer and have a business fall out and no.

          • By xtracto 2026-02-07 19:07 | 3 replies

            I agree, but want to interject that "code organization" won't matter for long.

            Programming languages were made for people. I'm old enough to have programmed in Z80 and 8086 assembler, and I've been through plenty of programming languages over my career.

            But building systems is becoming a matter of prompting an agent: build a flow that reads these two types of Excel files, cleans them, filters them, merges them, and outputs the result for the web (oh, and make it interactive and highly available).

            Code won't matter. You'll have other agents that check that the system is built right, agents that test the functionality, and agents that ask for and propose functionality and ideas.

            Most likely programming languages will become similar to the old telegraph texts (telegrams), which were heavily optimized for word/token count. They will be optimized to be LLM-grokable instead of human-grokable.

            It's going to be amazing.
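
That "two Excel files in, cleaned, merged, web output" flow is concrete enough to sketch. A minimal version in pandas; the shared `id` key column, the function names, and everything else here are illustrative assumptions, not from the comment:

```python
import pandas as pd

def clean(df: pd.DataFrame) -> pd.DataFrame:
    # Normalize column names and drop rows that are entirely empty.
    df = df.copy()
    df.columns = [str(c).strip().lower().replace(" ", "_") for c in df.columns]
    return df.dropna(how="all")

def build_report(a: pd.DataFrame, b: pd.DataFrame, key: str = "id") -> str:
    # Clean both inputs, inner-merge on the shared key, render an HTML table.
    merged = clean(a).merge(clean(b), on=key, how="inner")
    return merged.to_html(index=False)

# In practice the frames would come from pd.read_excel("a.xlsx") etc., and
# the HTML would sit behind whatever "interactive and highly available"
# layer the agent is asked to add.
```

The point of the comment stands either way: this is exactly the kind of glue an agent writes in one pass, and whether a human ever reads it is the open question.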

            • By manmal 2026-02-07 22:31 | 1 reply

              What you’re describing is that we’d turn deterministic engineering into the same march of 9s that FSD and robotics are going through now - but for every single workflow. If you can’t check the code for correctness, and debug it, then your test system must be absolutely perfect and cover every possible outcome. Since that’s not possible for nontrivial software, you’re starting a march of 9s towards 100% correctness of each solution.

              That accounting software will need 100M unit tests before you can be certain it covers all your legal requirements. (Hyperbole but you get the idea) Who’s going to verify all those tests? Do you need a reference implementation to compare against?

              Making LLM work opaque to inspection is kind of like pasting the outcome of a mathematical proof without any context (which is almost worthless AFAIK).

              • By andrekandre 2026-02-08 1:45

                  > Who’s going to verify all those tests?
                
                why, the user of course

            • By thesmtsolver2 2026-02-07 19:40

              Will you trust code like this to run airplanes?

              Remember, even Waymo has a ton of non-AI code it is built upon. We will still have PyTorch, embedded systems software, etc.

            • By skybrian 2026-02-07 19:58

              There are certainly people working on making this happen. As a hobbyist, maybe I'll still have some retro fun polishing the source code for certain projects I care about? (Using our new power tools, of course.)

          • By pragmatic 2026-02-07 18:14 | 1 reply

            You're assuming that scrum/agile/management won't take this over?

            What stakeholder is prioritizing any of those things and paying for it out of their budget?

            Code improvement projects are the White Whale of software engineering - obsessed over, but rarely worth it from a business point of view.

            • By skybrian 2026-02-07 20:08

              The costs for code improvement projects have gone down dramatically now that we have power tools. So, perhaps it will be considered more worthwhile now? But how this actually plays out for professional programming is going to depend on company culture and management.

              In my case, I'm an early-retired hobbyist programmer, so I control the budget. The same is true for any open source project.

          • By habinero 2026-02-08 2:23

            My unpopular opinion is AI sucks at writing tests. Like, really sucks. It can churn out a lot of them, but they're shitty.

            Actually writing good tests that exercise the behavior you want, guard against regressions, and aren't overfitted to your code is pretty difficult, really. You need to both understand the function and understand the structure to do it.

            Even for hobby projects, it's not great. I'm learning asyncio by writing a matrix scraper, and writing good functional tests as you go is worth it to make sure you actually do understand the concepts.
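
To make "overfitted" concrete, a toy illustration: a hypothetical `slugify` function (not from any project in this thread) with two test styles. The first memorizes a single input/output pair and says nothing about intent; the second pins the properties callers actually rely on, so it survives refactoring:

```python
import re

def slugify(title: str) -> str:
    # Lowercase, then collapse every run of non-alphanumerics into one hyphen.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def test_memorized():
    # Brittle style: one memorized pair, the kind generated tests often churn out.
    assert slugify("Hello, World") == "hello-world"

def test_behavior():
    # Robust style: asserts the invariants downstream code depends on.
    s = slugify("  Hello,  World!  ")
    assert s == s.lower()
    assert not s.startswith("-") and not s.endswith("-")
    assert "--" not in s and " " not in s
```

Both pass today; only the second catches a refactor that starts emitting double hyphens or stray whitespace, which is the distinction the comment is pointing at.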

          • By sifar 2026-02-07 20:29

            And what happens when these different objectives conflict or diverge? Will it be able to figure out the appropriate trade-offs, live with the results, and go meta to rethink the approach, or simply delude itself? We would definitely lose these skills if it continues like this.

      • By seba_dos1 2026-02-07 17:53 | 2 replies

        > Unless you are only ever a single engineer, your career is filled with "I need to debug code I didn't write".

        That's the vast majority of my job, and I've yet to find a way to make LLMs anything more than almost-but-not-entirely useless at helping me with it.

        (also, it's filled with that even when you are a single engineer)

        • By fragmede 2026-02-07 21:14

          And even if you are the single engineer, I'll be honest, it might as well have been somebody else that wrote the code if I have to go back to something I did seven years ago and unearth wtf it does.

        • By kaydub 2026-02-07 19:14 | 3 replies

          I hope you realize that means your position is in danger.

          • By seba_dos1 2026-02-07 19:59

            It would be in danger if LLMs could actually do that for me, but they're still very far from it and they progress slowly. One day I could start worrying, but it's not today.

          • By j-krieger 2026-02-08 10:45

            We have yet to see any large codebase that LLMs work on for a long time.

          • By rsynnott 2026-02-08 20:07

            Eh? Of all the things that LLMs are not very good at (a long list), they are particularly not good at debugging.

      • By robhlt 2026-02-07 19:17 | 4 replies

        It's nice that AI can fix bugs fast, but it's better to not even have bugs in the first place. By using someone else's battle tested code (like a framework) you can at least avoid the bugs they've already encountered and fixed.

        • By scoofy 2026-02-07 19:28 | 2 replies

          I spent Dry January working on a new coding project, and since all my nerd friends have been telling me to try coding with LLMs, I gave it a shot and signed up for Google Gemini...

          All I can say is "holy shit, I'm a believer." I've probably got close to a year's worth of coding done in a month and a half.

          Busy work that would have taken me a day to look up, figure out, and write -- boring shit like matplotlib illustrations -- is trivial now.

          Ideas that I'm not sure how to implement -- "what are some different ways to do this weird thing?" -- that I would have spent a week on trying to figure out a reasonable approach? It's basically got two or three decent ideas right away, even if they're not perfect. There was one vectorization approach I would never have thought of that I'm now using.

          Is the LLM wrong? Yes, all the damn time! Do I need to, you know, actually do a code review when I'm implementing ideas? Very much yes! Do I get into a back and forth battle with the LLM when it starts spitting out nonsense, shut the chat down, and start over with a newly primed window? Yes, about once every couple of days.

          It's still absolutely incredible. I've been a skeptic for a very long time. I studied philosophy, and the conceptions people have of language and Truth get completely garbled by an LLM that isn't really a mind that can think in the way we do. That said, holy shit it can do an absolute ton of busy work.

          • By poulsbohemian 2026-02-07 20:20 | 5 replies

            What kind of project / prompts - what's working for you? I spent a good 20 years in the software world but have been away doing other things professionally for a couple of years. Recently I was in the same place as you, with a new project and wanting to try it out.

            So I start with a generic Django project in VSCode, use the agent mode, and… what a waste of time. The auto-complete suggestions it makes are frequently wrong, and the actions it takes in response to my prompts tend to make a mess on the order of a junior developer. I keep trying to figure out what I'm doing wrong, as I'm prompting pretty simple concepts at it - if you know Django, imagine concepts like "add the foo module to settings.py" or "Run the check command and diagnose why the foo app isn't registered correctly". Before you know it, it's spiraling out of control with changes it thinks it is making, all of which are hallucinations.

            • By scoofy 2026-02-07 20:36

              I'm just using Gemini in the browser. I'm not ready to let it touch my code. Here are my last two prompts; for context, the project is about golf course architecture:

              Me, including the architecture_diff.py file: I would like to add another map to architecture_diff. I want the map to show the level of divergence of the angle of the two shots to the two different holes from each point. That is, when you are right in between the two holes, it should be a 180 degree difference, and should be very dark, but when you're on the tee, and the shot is almost identical, it should be very light. Does this make sense? I realize this might require more calculations, but I think it's important.

              Gemini output was some garbage about a simple naive angle to two hole locations, rather than using the sophisticated expected value formula I'm using to calculate strokes-to-hole... thus worthless.

              Follow up from me, including the course.py and the player.py files: I don't just want the angle, I want the angle between the optimal shot, given the dispersion pattern. We may need to update get_smart_aim in the player to return the vector it uses, and we may need to cache that info. We may need to update generate_strokes_gained_map in course to also return the vectors used. I'm really not sure. Take as much time as you need. I'd like a good idea to consider before actually implementing this.

              Gemini output now has a helpful response about saving the vector field as we generate the different maps I'm trying to create as they are created. This is exactly the type of code I was looking for.
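              The metric described in that first prompt -- the divergence between the two aim directions at each sample point -- reduces to a small piece of geometry. A hedged sketch: the aim vectors are assumed inputs here, whereas the project would derive them from something like its `get_smart_aim` dispersion model, which this ignores:

```python
import numpy as np

def aim_divergence_deg(aim_a, aim_b) -> float:
    # Angle in degrees between two aim vectors taken from the same point:
    # ~180 when standing between the two holes, ~0 when the shots coincide.
    va, vb = np.asarray(aim_a, float), np.asarray(aim_b, float)
    cos = np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```

              Mapped over the sample grid, with 180 degrees drawn dark and 0 drawn light, this would give the shading described in the prompt.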

            • By jacomoRodriguez 2026-02-07 21:29

              I recently started building a POC for an app idea. As the framework I chose Django, and I did not once write code myself. The whole thing was done in a GitHub codespace with Copilot in agentic mode, using mostly Sonnet and Opus models. For prompting, I did not give it specific instructions like "add x to settings". I told it "We are now working on feature X. X should be able to do a, b and c. B has the following constraints. C should work like this." I also have some instructions in the agents.md file which tell the model to, before starting to code, ask me all unclear questions and then make a comprehensive plan for what to implement. I would then go over this plan, clarify or change if needed - and then let it run for 5-15 minutes. And every time it just did it. The whole thing, with debugging, with tests. Sure, sometimes there were minor bugs when I tested - but then I prompted the problem directly, and sure enough it got fixed in seconds...

              Not sure why we had such different experiences. Maybe you are using other models? Maybe you're missing something in your prompts? Letting it start with a plan which I can then check definitely helped a lot. A summary of the app's workings and technical decisions (also produced by the model) maybe helped in the long run too.

            • By yoyonamite 2026-02-08 4:55

              I don't use VSCode, but I've heard that the default model isn't that great. I'd make sure you're using something like Opus 4.5/4.6. I'm not familiar enough with VSCode to know if it's somehow worse than Claude Code, even with the same models, but you can test Claude Code to rule that out. It could also be that you've stumbled upon a problem the AI isn't that good at. For example, I was diagnosing a C++ build issue, and I could tell the AI was off track.

              Most of the people that get wowed use an AI on a somewhat difficult task that they're unfamiliar with. For me, that was basically a duplicate of Apple's Live Captions that could also translate. Other examples I've seen are repairing a video file, or building a viewer for a proprietary medical imaging format. For my captions example, I don't think I would have put in the time to work on it without AI, and I was able to get a working prototype within minutes and then it took maybe a couple more hours to get it running smoother.

            • By kolinko 2026-02-07 22:48

              Also >20 years in software. The VSCode autocomplete, regardless of the model, never worked well for me. But Claude Code is something else - it doesn't do autocomplete per se - it will make modifications, test, debug if it fails, and iterate until it gets it right.

            • By graemep 2026-02-08 13:38

              Try Claude as others have said.

              For Django try generating tests and test data. This works reasonably well for me even with fairly small local LLMs on my laptop.

          • By robhlt 2026-02-07 19:49 | 1 reply

            I'm (mostly) a believer too, and I think AI makes using and improving these existing frameworks and libraries even easier.

            You mentioned matplotlib. Why does it make sense to pay for a bunch of AI agents to re-invent what matplotlib does and fix bugs that matplotlib has already fixed, instead of just having AI agents write code that uses it?

            • By scoofy 2026-02-07 20:11

              I mean, the thesis of the post is odd. I'll grant you that.

              I work mostly with python (the vast majority is pure python), flask, and htmx, with a bit of vanilla js thrown in.

              In a sense, I can understand the thesis. On the one hand, Flask is a fantastic tool, with a reasonable abstraction given the high complexity. I wouldn't want to replace Flask. On the other hand, HTMX is a great tool, but often imperfect for exactly what I'm trying to do. Most people would say "well, just use React!" except that I honestly loathe working with JS, and unless someone is paying me, I'll do it in Python. I could see working with an LLM to build a custom tool to make a version of HTMX that better interacts with Flask in the way I want it to.

              In fact, in the project I'm working on now I'm building complex heatmap illustrations that require a ton of data processing, so I've been building a model to reduce the NP-hard aspects of that process. However, the illustrations are the point, and I've already had a back and forth with the LLM about porting the project to HTML, or at least some web-based form of illustration, simply because I'd have much more control over the illustrations. Right now, matplotlib still suits me just fine, but if I had to port it, I could see just building my own tool instead of finding an existing framework and learning it.

              Frameworks are mostly useful because of group knowledge. I learn Flask because I don't want to build all these tools from scratch, and because it makes me literate in a very common language. The author is suggesting that these barriers -- at least for your own code -- functionally don't exist anymore. Learning a new framework is about as labor intensive as learning one you're creating as you go. I think it's short-sighted, yes, but depending on the project, yeah: when it's trivial to build the tool you want, it's tempting to do that instead of learning to use a similar tool that needs two adapters attached to work well on the job you're trying to do.

              At the same time, this is about scope. Anyone throwing out React because they want to just "invent their own entire web framework" is just being an idiot.

        • By james_marks 2026-02-0721:152 reply

          Because frameworks don’t have bugs? Or unpredictable dependency interactions?

          This is generous, to say the least.

          • By camdenreslink 2026-02-082:11

            Well-maintained, popular frameworks have GitHub issues that frequently get resolved in newly patched versions of the framework. Sometimes bugs get fixed that you hadn't even run into yet, so everybody benefits.

            Will your bespoke LLM code have that? Every issue will actually be an issue in production experienced by your customers, one that will have to be identified (better have good logging and instrumentation) and fixed in your codebase.

          • By g947o 2026-02-0811:12

            Frameworks that are (relatively) buggy and slow to address bugs lose popularity, to the point that people spontaneously create alternatives. This has happened many times.

        • By fullstackchris 2026-02-0720:251 reply

          > better to not have bugs in the first place

          you must have never worked on any software project ever

          • By Balinares 2026-02-0721:07

            Have you? Then you know that the number of defects scales linearly with the amount of code. As things stand, models write a lot more code than a skilled human does for a given requirement.

        • By MrDarcy 2026-02-0719:281 reply

          In practice using someone else’s framework means you’re accepting the risk of the thousands of bugs in the framework that have no relevance to your business use case and will never be fixed.

          • By g947o 2026-02-0811:14

            Yet people still use frameworks, before and after the age of LLMs. Frameworks must have done something right, I guess. Otherwise everyone would vibe-code their own little React in the codebase.

      • By pron 2026-02-0723:031 reply

        > Unless you are only ever a single engineer, your career is filled with "I need to debug code I didn't write".

        True, but there's usually at least one person who knows that particular part of the system that you need to touch, and if there isn't, you'll spend a lot of time fixing that bug and become that person.

        The bet you're describing is that the AI will be the expert, and if it can be that, why couldn't it also be the expert at understanding the users' needs so that no one is needed anywhere in the loop?

        What I don't understand about a vision where AI is able to replace humans at some (complicated) part of the entire industrial stack is why it would stop at any particular point. What makes us think that it can replace programmers and architects - jobs that require a rather sophisticated combination of inductive and deductive reasoning - but not the PMs, managers, and even the users?

        Steve Yegge recently wrote about an exponential growth in AI capabilities. But every exponential growth has to plateau at some point, and the problem with exponential growth is that if your prediction about when that plateau happens is off by a little, the value at that point could be different from your prediction by a lot (in either direction). That means that it's very hard to predict where we'll "end up" (i.e. where the plateau will be). The prediction that AI will be able to automate nearly all of the technical aspects of programming yet little beyond them seems as unlikely to me as any arbitrary point. It's at least as likely that we'll end up well below or well above that point.

        • By anonnon 2026-02-085:58

          > Steve Yegge recently wrote about an exponential growth in AI capabilities

          I'm not sure that the current growth rate is exponential, but the problem is that it doesn't need to be exponential. It should have been obvious the moment ChatGPT and stable diffusion-based systems were released that continued, linear progress of these models was going to cause massive disruption eventually (in a matter of years).

      • By panny 2026-02-080:34

        That doesn't change the fact that the submission is basically repeating the LISP curse. Best case scenario: you end up with a one-off framework and only you know how it works. The post you're replying to points out why this is a bad idea.

        It doesn't matter if you don't use 90% of a framework, as the submission bemoans. When everyone uses an identical API in different situations, you surface lots of different problems. Your framework and its users become a sort of Borg: when one of the framework's users discovers a problem, it's fixed and propagated out before it can even become a problem for the rest of the collective.

        That's not true of your Lisp-cursed, one-off, bespoke framework. You will repeat all the problems that all the other bespoke frameworks encountered; when they fixed their problems, they didn't fix them for you. You will find those problems over and over again. This is why free software dominates over proprietary software. The biggest problem in software is not writing it, it's maintaining it. Free software shares the maintenance burden, so everyone benefits. You bear the whole maintenance burden with your custom, one-off vibe-coded solutions.

      • By poulsbohemian 2026-02-0720:08

        I think back on the ten-plus years I spent doing SRE consulting, and the thing is, finding the problems and identifying solutions - the technical part of the work - was such a small part of the actual work. So often I would go to work with a client and discover that they already knew the problem, they just didn't believe it; my job was often about the psychology of the organization more than the technical knowledge. So you might say, "Great, so the agent will automatically fix the problem that the organization previously misidentified." That sounds great right up until it starts dreaming. It's not to say there aren't places for these agents, but I suspect it will ultimately be like any other technology we use, where it becomes part of the toolkit, not the whole.

      • By max51 2026-02-081:172 reply

        >I would argue that it's going to be the opposite. At re:Invent, one of the popular sessions was in creating a trio of SRE agents, one of which did nothing but read logs and report errors, one of which did analysis of the errors and triaged and proposed fixes, and one to do the work and submit PRs to your repo.

        If you manage a code base this way at your company, sooner or later you will hit a wall. What happens when the AI can't fix an important bug or is unable to add a very important feature? Now you are stuck with a big fat dirty pile of code that no human can figure out, because it wasn't written by a human and was never designed to be understood by one in the first place.

        • By jdswain 2026-02-081:32

          I treat code quality, and readability, as one of the goals. The LLM can help with this and refactor code much quicker than a human. If I think the code is getting too complex I change over to architecture review and refactoring until I am happy with it.

        • By ghiculescu 2026-02-081:211 reply

          What happens when humans can’t fix a bug or build an important feature? That is a pretty common scenario, that doesn’t result in the doomsday you imply.

          • By max51 2026-02-081:39

            There will always be bugs you can't fix, that doesn't mean we should embrace having orders of magnitude more of them. And it's not just about bugs, it's also about adding new features.

            This is tech debt on steroids. You are building an entire code base that no one can read or understand, praying that the LLM won't fuck up too much. And when it does, no one in the company knows how to deal with it other than by throwing more LLM tokens at it and praying that works.

            As I said earlier, using pure AI agents will work for a while. But when it doesn't you are fucked.

      • By pphysch 2026-02-0717:241 reply

        Automatically solving software application bugs is one thing, recovering stateful business process disasters and data corruption is entirely another thing.

        Customer A is in a totally unknown database state due to a vibe-coded bug. Great, the bug is fixed now, but you're still f-ed.

        • By andrekandre 2026-02-081:50

            > Automatically solving software application bugs
          
          The other issue is "fixing" false positives; I've seen it before with some AI tools: they convince you it's a bug, and it looks legit and passes the tests, but later on something doesn't quite work right anymore and you have to triage and revert it... it can be a real time sink.

      • By j-krieger 2026-02-0810:42

        > "understand how these systems actually function" isn't incompatible with "I didn't write most of this code"

        Except now you have code you didn't write and patches you didn't write either. Your "colleague" also has no long-term memory.

      • By agosta 2026-02-0723:33

        I believe the re:Invent session is this one, but correct me if I'm wrong: https://www.youtube.com/watch?v=rMPe622eGY0

      • By 113 2026-02-0719:341 reply

        > one to do the work and submit PRs to your repo

        Have we not seen loads of examples of terrible AI-generated PRs every week on this site?

        • By viraptor 2026-02-0719:43

          Because nobody posts the good ones. They're boring, correct, you merge them and move on to the next one. It's like there's a murder in the news every day but generally we're still all fine.

          Don't assume that when people make fun of some examples that there aren't thousands more that nobody cares to write about.

      • By brightball 2026-02-0723:40

        Was that session published online somewhere? I’d love to watch that.

      • By rsynnott 2026-02-0819:55

        An AI thing demoed well? Well, colour me shocked.

        Amazon, which employs many thousands of SREs (or, well, pseudo-SREs; AIUI it's not quite the conventional SRE role), is presumably just doing so for charitable purposes, if they are so easy to replace with magic robots.

      • By IhateAI 2026-02-0819:03

        [dead]

    • By geophile 2026-02-0716:524 reply

      The article gets at this briefly and moves on: "I can do all of this with the experience on my back of having laid the bricks, spread the mortar, cut and sewn for twenty years. If I don’t like something, I can go in, understand it and fix it as I please, instructing once and for all my setup to do what I want next time."

      I think this dynamic applies to any use of AI, or indeed any form of outsourcing. You can outsource a task effectively if you understand the complete task and its implementation very deeply. But if you don't, then you don't know whether what you are getting back is correct, maintainable, or scalable.

      • By SoftTalker 2026-02-0716:561 reply

        > instructing once and for all my setup to do what I want next time.

        This works up to a point, but eventually your "setup" gets complicated, some of your demands conflict, or have different priorities, and you're relying on the AI to sort it out the way you expect.

        • By tsycho 2026-02-0721:55

          But setups get equally complicated, even with human software engineers. The approach that the OP is talking about applies only to experienced, good architect-level SWEs, and I suspect that the code quality and its problems are going to be the same whether they are directing LLMs vs a set of junior SWEs to write the code.

          There is an inherent level of complexity in projects that solve some real world problem, due to all the code handling edge cases that were added incrementally over time.

      • By eqvinox 2026-02-0717:19

        > any use of AI, or indeed, any form of outsourcing

        Oh that's a good analogy/categorization, I hadn't thought about it in those terms yet. AI is just the next cheaper thing down from the current southeast asian sweatshop labor.

        (And you generally get what you pay for.)

      • By bodge5000 2026-02-0722:37

        On the face of it, this (or at least acting as a code reviewer from an experienced point of view) seems like the solution; the problem is that we all naturally get lazy and complacent. I actually think AI was at its best for coding a year or so ago, when it could kind of do part of the work but there's no way you could ever ship it. Code that works today but breaks in six months is far more insidious.

      • By zmmmmm 2026-02-0721:53

        It does beg the question of whether any of this applies to less experienced people. I have a hunch that the open-ended nature of what can be achieved with AI will actually lead right back to needing frameworks, just as much as we do now, if not more, when it comes to less experienced people.

    • By manuelabeledo 2026-02-0719:32

      There are people out there who truly believe that they can outsource the building of highly complex systems by politely asking a machine, and ultimately will end up tasking the same machine to tell them how these systems should be built.

      Now, if I were in business with any of these people, why would I be paying them hundreds of thousands, plus the hundreds of thousands in LLM subscriptions they need to barely function, when they cannot produce a single valuable thought?

    • By FeteCommuniste 2026-02-0716:432 reply

      I don't think there's going to be any catastrophic collapse but I predict de-slopping will grow to occupy more and more developer time.

      Who knows, maybe soon enough we'll have specially trained de-slopper bots, too.

      • By HighGoldstein 2026-02-0716:541 reply

        > Who knows, maybe soon enough we'll have specially trained de-slopper bots, too.

        Fire, meet oil.

        • By woeirua 2026-02-0717:073 reply

          The naysayers said we’d never even get to this point. It’s far more plausible to me that AI will advance enough to de-slopify our code than it is to me that there will be some karmic reckoning in which the graybeards emerge on top again.

          • By omnicognate 2026-02-0717:447 reply

            What point have we reached? All I see is HN drowning in insufferable, identical-sounding posts about how everything has changed forever. Meanwhile at work, in a high stakes environment where software not working as intended has actual consequences, there are... a few new tools some people like using and think they may be a bit more productive with. And the jury's still out even on that.

            The initial excitement of LLMs has significantly cooled off, the model releases show rapidly diminishing returns if not outright equilibrium and the only vibe-coded software project I've seen get any actual public use is Claude Code, which is riddled with embarrassing bugs its own developers have publicly given up on fixing. The only thing I see approaching any kind of singularity is the hype.

            I think I'm done with HN at this point. It's turned into something resembling Moltbook. I'll check back in a couple of years, when maybe things will have changed a bit around here.

            • By pengaru 2026-02-0718:00

              It's no coincidence HN is hosted by a VC. VC-backed tech is all about boom-bust hype cycles analogous to the lever pull of a giant slot machine.

            • By beoberha 2026-02-0718:491 reply

              > The initial excitement of LLMs has significantly cooled off, the model releases show rapidly diminishing returns if not outright equilibrium and the only vibe-coded software project I've seen get any actual public use is Claude Code, which is riddled with embarrassing bugs its own developers have publicly given up on fixing. The only thing I see approaching any kind of singularity is the hype.

              I am absolutely baffled by this take. I work in an objectively high stakes environment (Big 3 cloud database provider) and we are finally (post Opus 4.5) seeing the models and tools become good enough to drive the vast majority of our coding work. Devops and livesite is a harder problem, but even there we see very promising results.

              I was a skeptic too. I was decently vocal about AI working for single devs but could never scale to large, critical enterprise codebases and systems. I was very wrong.

              • By latexr 2026-02-0723:371 reply

                > I work in an objectively high stakes environment (Big 3 cloud database provider) and we are finally (post Opus 4.5) seeing the models and tools become good enough to drive the vast majority of our coding work

                Please name it. If it’s that good, you shouldn’t be ashamed of doing so and we can all judge by ourselves how the quality of the service evolves.

                • By skydhash 2026-02-081:26

                  > you shouldn’t be ashamed of doing so and we can all judge by ourselves how the quality of the service evolves.

                  That's kinda my bar at this point. On YouTube, there are so many talks and other videos about people using technology X to build Y software or manage Z infrastructure. But here all we get is slop, toys that should have been a shell script, or vague claims like the GP's.

                  Even ed(1) is more useful than what has been presented so far.

            • By sph 2026-02-0718:472 reply

              > I think I'm done with HN at this point.

              On the bright side, this forum is gonna be great fun to read in 2 or 3 years, whether the AI dream takes off, or crashes to the ground.

              • By pixelmelt 2026-02-0719:25

                I do not await the day where the public commons is trashed by everyone and their claudebot, though perhaps the segmentation of discourse will be better for us in the long run given how most social media sites operate.

              • By leptons 2026-02-0721:32

                Same as it was for "blockchain" and NFTs. Tech "enthusiasts" can be quite annoying, until whatever they hype is yesterday's fad. Then they jump on the next big thing. Rinse, repeat.

            • By kuboble 2026-02-0718:101 reply

              I am not in a high stakes environment and work on a one-person size projects.

              But for months I have almost stopped writing actual lines of code myself.

              The frequency and quality of my releases have improved. I got very good feedback on those releases from my customer base, and the number of bugs reported is no larger than in code written by me personally.

              The only downside is that I no longer know the code inside out, even if I read it all; it feels like code written by a co-worker.

              • By kaydub 2026-02-0719:20

                Feels like code written by a co-worker - no different than working on any decent-sized codebase anywhere.

                I've stopped writing code too. Who the fuck wants to learn yet ANOTHER new framework. So much happier with llm tools.

            • By woeirua 2026-02-0720:17

              You have your head in the sand. Anyone making this claim in 2026 hasn’t legitimately tried these tools.

            • By kaydub 2026-02-0719:19

              The excitement hasn't cooled off where I'm working.

              Honestly, I'm personally happy to see so many naysayers online, it means I'm going to have job security a little longer than you folks.

            • By AndrewKemendo 2026-02-0719:222 reply

              I make mission-critical software for robust multi-robot control, in production, flying real robots every day.

              16% of our production codebase is generated from claude or another LLM

              Just because you can’t do it doesn’t mean other people can’t

              Denial is a river

              • By ektjah 2026-02-0720:142 reply

                CTO at Gambit AI? How generous of you to talk your book while insulting us. At least we know what to avoid.

                • By AndrewKemendo 2026-02-084:10

                  What does “talk my book” mean?

                  I don’t have a book

                  Edit: Apparently a financial term to mean “talk up your stock” which…if you don’t think that’s a good metric then why would you consider it talking my book lol cmon mayne

                • By the__alchemist 2026-02-0720:391 reply

                  My guess: Their UASs run modified PX4 firmware.

              • By the__alchemist 2026-02-0720:25

                Yikes.

          • By kaydub 2026-02-0719:17

            The AI agents can ALREADY "de-slopify" the code. That's one of the patterns people should be using when coding with LLMs: keep an agent that only checks for code smells, testability, "slop", scalability problems, etc. alongside whatever agent you have writing the actual code.
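            (The pattern described here reduces to a simple loop. A hedged sketch: `writer` and `reviewer` are hypothetical callables standing in for separate agent sessions, not any real API.)

```python
from typing import Callable

def generate_then_review(task: str,
                         writer: Callable[[str], str],
                         reviewer: Callable[[str], list[str]],
                         max_rounds: int = 3) -> str:
    """Run a writer agent, then loop a separate review agent over its
    output until the reviewer reports no findings (or we give up)."""
    code = writer(task)
    for _ in range(max_rounds):
        findings = reviewer(code)  # e.g. smells, untestable code, "slop"
        if not findings:
            return code
        # Feed the findings back to the writer as a revision task.
        code = writer(f"{task}\nFix these review findings:\n" + "\n".join(findings))
    return code
```

            The key design point is that the reviewer sees only the code, not the writer's conversation, so it judges the output cold.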

          • By majormajor 2026-02-0719:16

            > The naysayers said we’d never even get to this point. It’s far more plausible to me that AI will advance enough to de-slopify our code than it is to me that there will be some karmic reckoning in which the graybeards emerge on top again.

            "The naysayers"/"the graybeards" have never been on top.

            If they had been, many of the things the author here talks about getting rid of never would've been popular in the first place. Giant frameworks? Javascript all the things? Leftpad? Rails? VBA? PHP? Eventually consistent datastores?

            History is full of people who successfully made money despite the downsides of all those things because the downsides usually weren't the most important thing in the moment of building.

            It's also full of people who made money cleaning it all up when the people who originally built it didn't have time to deal with it anymore. "De-slopify" is going to be a judgment question that someone will need to oversee, there's no one-size-fits-all software pattern, and the person who created the pile of code is unlikely to be in a position to have time to drive that process.

            Step 1: make money with shortcuts

            Step 2: pay people to clean up and smooth out most of those shortcuts

            I've bounced between both roles already a lot due to business cycles of startup life. When you're trying to out-scale your competitor you want to find every edge you can, and "how does this shit actually work" is going to be one of those edges for making the best decisions about how to improve cost/reliability/perf/usability/whatever. "It doesn't matter what the code looks like" is still hard to take seriously compared to the last few iterations of people pitching tools claiming the same. The turnaround loop of modifying code is faster now; the risk of a tar-pit of trying to tune on-the-fly a pile of ill-fitting spaghetti is not. It's gonna be good enough for a lot of people, Sturgeon's law - e.g. most people aren't great at knowing what usefully-testable code looks like. So let's push past today's status quo of software.

            If I was working on a boring product at a big tech co I'd be very worried, since many of those companies have been hiring at high salaries for non-global-impact product experiments that don't need extreme scale or shipping velocity. But if you want to push the envelope, the opportunity to write code faster should be making you think about what you can do with it that other people aren't yet. Things beyond "here's a greenfield MVP of X" or "here's a port of Y."

      • By kaydub 2026-02-0719:15

        We already do. It's the same tool.

    • By thephyber 2026-02-0722:021 reply

      This sounds entirely too doomer.

      There will obviously be companies that build a vibe coded app which too many people depend on. There will be some iteration (maybe feature addition, maybe bug fix) which will cause a catastrophic breakage and users will know.

      But there will also be companies who add a better mix of incantations to the prompts, who use version control and CI, who ensure the code is matched with tests, who maintain the prompts and requirements documents.

      The former will likely follow your projected path. The latter will do fine and may even thrive better than either traditional software houses or cheap vibe-coding shops.

      Then again, there are famous instances of companies that have tolerated terribly low investment in IT, including Southwest Airlines.

      • By rglover 2026-02-0722:071 reply

        I'd say you're absolutely right.

        The problem is...what is the distribution of companies who do it "right" to companies that don't?

        • By nekitamo 2026-02-0722:29

          The same as the distribution of companies which are profitable over time and grow steadily, vs the others which clumsily flail around to somehow stay alive. To the winner go the spoils, and the winners will be a tiny fraction of companies, same as it ever was.

          A way I look at it is that all net wealth creation in public companies has come from just 4% of businesses:

          https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2900447

          https://www.reddit.com/r/investing/comments/rk4udc/only_4_of...

          It'll be similar with software companies. 4% of them will hit on a unique cultural and organizational track which will let them thrive, probably using AI in one form or another. The other 96% will be lucky to stay alive.

          Same as it ever was.

    • By giancarlostoro 2026-02-0716:501 reply

      I find that instructing AI to use frameworks yields better results and sets you up for a better outcome.

      I use Claude Code with both Django and React, which it's surprisingly good with. I'd rather use software that's tried and tested. The only time I let it write its own is when I want ultra-minimal CSS.

      • By kenjackson 2026-02-0716:595 reply

        This. For areas where you can use tried-and-tested libraries (or tools in general), LLMs will generate better code when they use them.

        In fact, LLMs will be better than humans in learning new frameworks. It could end up being the opposite that frameworks and libraries become more important with LLMs.

        • By nottorp 2026-02-0717:551 reply

          > In fact, LLMs will be better than humans in learning new frameworks.

          LLMs don't learn? The neural networks are trained just once before release and it's a -ing expensive process.

          Have you tried using one on your existing code base, which is basically a framework for whatever business problem you're solving? Did it figure it out automagically?

          They know react.js and nest.js and next.js and whatever.js because they had humans correct them and billions of lines of public code to train on.

          • By giancarlostoro 2026-02-0718:442 reply

            If it's on GitHub, eventually it will cycle into the training data. I have also seen Claude pull down code from GitHub to look at.

            • By fauigerzigerk 2026-02-0719:081 reply

              Wouldn't there be a chicken and egg problem once humans stop writing new code directly? Who would write the code using this new framework? Are the examples written by the creators of the framework enough to train an AI?

              • By giancarlostoro 2026-02-0721:08

                There's tooling out there that's 100% vibe-coded and used by tens of thousands of devs daily; if that codebase found its way into training data, would it somehow ruin everything? I don't think this is really a problem. The problem will be that people will need to distinguish good codebases from bad ones; if you point out which code is bad during training, it makes a difference. There's a LOT of writing about how to write better code out there that I'm sure is already part of the training data.

            • By nottorp 2026-02-0722:56

              How much proprietary business logic is on public github repos?

              I'm not talking about "do me this solo founder saas little thing". I'm talking about working on existing codebases running specialized stuff for a functional company or companies.

        • By eqvinox 2026-02-0717:23

          > LLMs will be better than humans in learning new frameworks.

          I don't see a base for that assumption. They're good at things like Django because there is a metric fuckton of existing open-source code out there that they can be trained on. They're already not great at less popular or even fringe frameworks and programming languages. What makes you think they'll be good at a new thing that there are almost no open resources for yet?

        • By kaydub 2026-02-0719:24

          Yeah, I don't know why you'd drop using frameworks and libraries just because you're using an LLM. If you AREN'T using them you're just loading a bunch of solved problems into the LLMs context so it can re-invent the wheel. I really love the LLM because now I don't need to learn the new frameworks myself. LLMs really remove all the bullshit I don't want to think about.

        • By catlifeonmars 2026-02-0718:132 reply

          LLMs famously aren’t that good at using new frameworks/languages. Sure they can get by with the right context, but most people are pointing them at standard frameworks in common languages to maximize the quality of their output.

          • By tappio 2026-02-0718:491 reply

            This is no longer my experience. With a properly set-up feedback loop and the framework's documentation, it does not seem to matter much whether they are working with completely novel stuff or not. Of course, when that is not available they hallucinate, but who even does that anymore? Anyone can see that LLMs are just glorified auto-complete machines, so you really have to put a lot of work into the environment they operate in and into quick feedback loops. (Just like with 90% of developers made of flesh...)

            • By catlifeonmars 2026-02-0719:28

              Or you could use an off the shelf popular framework in Python and save yourself some time curating the context.

          • By giancarlostoro 2026-02-0721:091 reply

            I asked Claude to use some Dlang libraries even I had not heard of, and it built a full-blown proof-of-concept project for me, using obscure libraries nobody really knows. It just looked through docs and source code. Maybe three years ago what you describe would have been the case.

        • By lenkite 2026-02-0717:401 reply

          How will LLMs become better than humans at learning new frameworks when automated/vibe coders never manually write code against those new frameworks?

    • By harrisi 2026-02-0718:261 reply

      The aspect of "potentially secure/stable code" is very interesting to me. There's an enormous amount of code that isn't secure or stable already (I'd argue virtually all of the code in existence).

      This has already been a problem, and there are no real ramifications for it. Even something like Cloudflare stopping a significant amount of Internet traffic is not (as far as I know) investigated in an independent way. There's nobody potentially facing charges. With other civil engineering endeavors, there absolutely is: regular checks, government agencies to audit systems, penalties for causing harm, etc. are expected in those areas.

      LLM-generated code is the continuation of the bastardization of software "engineering." Now the situation is not only that nobody is accountable, but a black box cluster of computers is not even reasonably accountable. If someone makes a tragic mistake today, it can be understood who caused it. If "Cloudflare2" comes about which is all (or significantly) generated, whoever is in charge can just throw their hands up and say "hey, I don't know why it did this, and the people that made the system that made this mistake don't know why it did this." It has been and will continue to be very concerning.

      • By feastingonslop 2026-02-0718:292 reply

        Nobody is saying to skip testing the software. Testing is still important. What the code itself looks like, isn’t.

        • By JackSlateur 2026-02-0723:42

          Knowing the inner working of a complex system is a hard requirement of its testing.

        • By harrisi 2026-02-0810:00

          Testing is not proof that a software system is correct. Also, if the tests are generated as well, there's no basis for trust in how anything works or whether the tests cover the important aspects.

    • By straydusk 2026-02-0717:504 reply

      Have you considered that betting against the models and ecosystem improving might be a bad bet, and you might be the one who is in for a rude awakening?

      • By bsder 2026-02-0723:25

        My favorite open source projects still have zillions of open bugs. Lots of projects are shutting down accepting external contributions because the PRs are so terrible. Wipro still isn't bankrupt.

        Cheerleading is nice. Seeing the bug counts from my favorite projects significantly decrease would be evidence.

      • By rglover 2026-02-0721:39

        I'm not betting against them, I use them every day (but I don't "vibe code"—there's more intent). I'm just not treating them as a deity or other prayer-candle worthy entity. They're business tools. It's just a chat bot bro.

      • By squidbeak 2026-02-0718:151 reply

        I agree. We've been assured by these skeptics that models are stochastic parrots, that progress in developing them was stalling, and that skills parity with senior developers was impossible - and we've had to listen to a kind of self-indulgent, daydreaming relish about the eventual catastrophes companies adopting them would face. Perhaps these skeptics will eventually turn out to be right; who knows. But at this stage, what we're seeing is just the opposite: significant progress in model development last year, patterns for use being explored by almost every development team without widespread calamity and the first well-functioning automated workflows appearing for replacing entire teams. At this stage, I'd bet on the skeptics being the camp to eventually be forced to make the hard adjustments.

        • By imiric 2026-02-0721:031 reply

          Pray tell, how has the world benefited from a flood of all these superhuman developers? Where is the groundbreaking software that is making our lives better?

          • By squidbeak 2026-02-0817:16

            Is this reply meant to me? Because what I wrote was:

            > But at this stage, what we're seeing is just the opposite: significant progress in model development last year, patterns for use being explored by almost every development team without widespread calamity and the first well-functioning automated workflows appearing for replacing entire teams.

      • By rsynnott 2026-02-0820:18

        I mean, if it ever gets good, eh, I suppose I'll use it? Pre-emptively using it in case it one day works properly seems rather perverse, tho.

    • By drcode 2026-02-0717:142 reply

      I'm no fan of AI in terms of its long term consequences, but being able to "just do things" with the aid of AI tools, diving head first into the most difficult programming projects, is going to improve the human programming skills worldwide to levels never before imaginable

      • By kgeist 2026-02-0720:291 reply

        >is going to improve the human programming skills worldwide to levels never before imaginable

        "We found that using AI assistance led to a statistically significant decrease in mastery. On a quiz that covered concepts they’d used just a few minutes before, participants in the AI group scored 17% lower than those who coded by hand"

        https://www.anthropic.com/research/AI-assistance-coding-skil...

        • By drcode 2026-02-0723:59

          my comment is that it lowers the threshold to "just doing things"

          An experiment where people "HAVE TO DO something" either way is testing something different

          I know a fair amount about native Android app development now because of using AI to build several native apps. I would know zero about native Android development if I had never attempted to build a native Android app.

      • By pragmatic 2026-02-0717:283 reply

        How would it improve skills?

        Does driving a car improve your running speed?

        • By drcode 2026-02-0718:121 reply

          I have to stretch your analogy in weird ways to make it function within this discussion:

          Imagine two people who have only sat in a chair their whole lives. Then, you have one of them learn how to drive a car, whereas the other one never leaves the chair.

          The one who learned how to drive a car would then find it easier to learn how to run, compared to the person who had to continue sitting in the chair the whole time.

          • By leptons 2026-02-0721:40

            You made the analogy worse. It's nonsense. The original analogy is far better.

        • By FeteCommuniste 2026-02-0717:46

          I've found AI handy as a sort of tutor sometimes, like "I want to do X in Y programming language, what are some tools / libraries I could use for that?" And it will give multiple suggestions, often along with examples, that are pretty close to what I need.

        • By literalAardvark 2026-02-0718:12

          No, but it does improve your ability to get to classes after work

    • By divbzero 2026-02-0719:011 reply

      An HN post earlier this week declared that “AI is killing B2B SaaS”:

      https://news.ycombinator.com/item?id=46888441

      Developers and businesses with that attitude could experience a similarly rude awakening.

      • By rglover 2026-02-0720:36

        If you've ever been responsible for large systems, you'll know why that is an incredibly naive position to take.

    • By tenthirtyam 2026-02-0718:051 reply

      My expectation is that there'll never be a single bust-up moment, no line-in-the-sand beyond which we'll be able to say "it doesn't work anymore."

      Instead agent written code will get more and more complex, requiring more and more tokens (& NPU/GPU/RAM) to create/review/debug/modify, and will rapidly pass beyond any hope of a human understanding even for relatively simple projects (e.g. such as a banking app on your phone).

      I wonder, however, whether the complexity will grow slower or faster than Moore's law and our collective ability to feed the AIs.

      • By layer8 2026-02-0718:291 reply

        Maybe software systems will become more like biological organisms. Huge complexity with parts bordering on chaos, but still working reasonably well most of the time, until entropy takes its course.

        • By leptons 2026-02-0721:491 reply

          It's already like that, for a long time. Humans are quite capable of creating complex systems that become unwieldy the bigger they get. No one person can understand all of it. I will offer the AT&T billing system as an example that I'm all too familiar with as a customer, due to the pain it causes me. So many ridiculous problems with that system, it's been around a long time, and it is just so screwball.

          • By layer8 2026-02-0723:171 reply

            Biological systems are vastly more complex, and less brittle, in the sense that killing a single cell doesn't cause system failure the way removing an object file from a program often would. Just look at the complexity of a single cell, and try to imagine what an analog of similar complexity would be in software.

            • By leptons 2026-02-080:571 reply

              You're kind of jumping around in scope here, and I think you got it a little wrong.

              >Biological systems are vastly more complex, and less brittle, in the sense that killing a single cell doesn't cause system failure like for example removing an object file from a program often would.

              Removing a single node from a massively parallel system doesn't kill the system either, it only removes one node, another node will spin up and replace it, just like a failing cell would in a biological system. One liver cell doesn't do anything for the host on its own, it's part of a massively parallel system.

              > Just look at the complexity of a single sell, and try to imagine what an analog of similar complexity would be in software.

              Removing some "parts" from a cell certainly could kill it, or a million other things that lead to apoptosis, or turn it cancerous. But "parts" isn't analogous to software, DNA is. The same goes for a single node in a system - remove part of the code and you're going to have big problems - for that node. But probably won't kill the other nodes in the system (though a virus could).

              There are 3 billion base pairs in human DNA. I could imagine that there's more than 3 billion lines of code running important things throughout the world right now. Maybe even in one system, or likely soon there will be. With "AI" doing the coding, that number is going to explode, without anyone able to understand all of it. And so I could imagine that "AI" will probably lead to some kind of "digital cancer", the same way there are viruses and other analogues to biological systems.

              • By tenthirtyam 2026-02-0919:531 reply

                > "AI" will probably lead to some kind of "digital cancer"

                Gosh I've just imagined someone asking an AI agent to code a computer virus to infect software "X". The virus' code will be wonderfully complex and therefore so will the response of the AI responsible for keeping "X" uninfected and in good working order.

                I was imagining code becoming awesomely complex even without the adversarial element in play.

                • By tenthirtyam 2026-02-0920:56

                  Replying to myself here. Maybe coding will eventually be simply learning how to give an AI the right prompt. e.g. instead of

                    "Hey AI, create my new banking app with such-and-such functionality, appearance,  properties, APIs, network connections etc"
                  
                  we will instead do:

                    "Hey AI, you are a banking app on a user's cellphone. Connect to mybank.com, authenticate the user and allow the user to perform these-and-those actions in a sensible interface in accordance with the API spec. Don't let yourself be jailbroken."
                  
                  Then the virus writer's job changes into jailbreaking the AI. Obviously with an AI's assistance...?

                  Then it would be logical to have a single AI on the phone managing all the prompts in parallel: e.g. "Hey AI, be android by doing [actions]", "Hey AI, be firefox...", "Hey AI, be snapchat...", "Hey AI, be [insert app name]...".

    • By killerstorm 2026-02-0721:401 reply

      Software engineers have been confidently wrong about a lot of things.

      E.g. OOP and "patterns" in 90s. What was the last time you implemented a "visitor"?

      P. Norvig mentioned most of the patterns are transparent in Common Lisp: e.g. you can just use a `lambda` instead of "visitor". But OOP people kept doing class diagrams for a simple map or fold-like operation.
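      A minimal sketch of that contrast (illustrative names, not from any particular codebase): the classic visitor needs one class per operation plus double dispatch through `accept`, while the Lisp-style alternative is just a recursive function.

```python
# Classic OOP visitor: each node forwards to the visitor (double dispatch).
class Num:
    def __init__(self, value):
        self.value = value

    def accept(self, visitor):
        return visitor.visit_num(self)


class Add:
    def __init__(self, left, right):
        self.left, self.right = left, right

    def accept(self, visitor):
        return visitor.visit_add(self)


class Evaluator:
    """One visitor class per operation over the tree."""

    def visit_num(self, node):
        return node.value

    def visit_add(self, node):
        return node.left.accept(self) + node.right.accept(self)


# The same operation without the visitor machinery: a plain recursive function.
def evaluate(node):
    if isinstance(node, Num):
        return node.value
    return evaluate(node.left) + evaluate(node.right)


tree = Add(Num(1), Add(Num(2), Num(3)))
oop_result = tree.accept(Evaluator())  # dispatch through accept/visit_*
fn_result = evaluate(tree)             # dispatch through isinstance
```

      Both produce the same answer; the visitor version pays for its ceremony only when the language lacks first-class functions or when new operations must be added without touching the node classes.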

      AI producing flawed code and "not understanding" are completely different issues. Yes, AI can make mistakes, we know. But are you certain your understanding is really superior?

      • By joshribakoff 2026-02-081:221 reply

        Last time I used a visitor, it was probably last week when I created a lint rule. Visiting every node in a tree (ast or otherwise) with lambda is doing the pattern regardless of what you call the pattern. Tools like eslint still literally use the visitor pattern. I would point to software engineers dismissing tried and true ideas as the better generalization.

        • By killerstorm 2026-02-089:04

          That means you don't understand the visitor pattern: it's specifically an OOP pattern built on OOP dispatch.

          A tree traversal is a tree traversal.

    • By fennecbutt 2026-02-0718:59

      Business has been operating on a management/executive culture for many decades now.

      These people get paid millions a year to fly around and shake hands with people aka shit fuck all.

      At times in the past I have worked on projects that were rushed out and didn't do a single thing that they were intended to do.

      And you know what management's response was? They loved that shit. Ooooh it looks so good, that's so cool, well done. Management circle jerking each other, as if using everyone else's shafts as handles to climb the rungs of the ladder.

      It's just... like it kills me that this thing I love, technology/engineering/programming, things that are responsible for many of the best things present in our modern lives, has been twisted to create some of the worst things in our modern lives in the pursuit of profit. And the people in charge? They don't even care if it works or not, they just want that undeserved promotion for a job that a Simpsons-esque fucking drinking bird is capable of.

      I just want to go back to the mid 2000s. ;~;

    • By kaydub 2026-02-0719:111 reply

      The hubris is with the devs that think like you actually.

      • By rglover 2026-02-0721:401 reply

        Carefully, patiently, and intentionally in the direction of stability and good user experience?

        • By kaydub 2026-02-0721:57

          The arrogance to think this isn't possible using llm tools and that the opposite didn't already exist at plenty of software shops pre-llm tools...

    • By GorbachevyChase 2026-02-083:22

      Every single CVE at least prior to the last couple years was written by a human. Azure and AWS have failed or have been compromised in embarrassing ways. I don’t take this human good, machine bad argument seriously. On the bright side, high integrity software may finally be an expectation for customers paying for software as “barely works” is becoming practically free.

    • By karmasimida 2026-02-0719:40

      I give it a year; the realization will be brutal.

    • By ineedasername 2026-02-082:581 reply

      coding agents are an order of magnitude better than they were less than a year ago: Claude Code, arguably the best, didn't exist a year ago, and is nonetheless an order of magnitude better than when it launched in late February.

      It really doesn't show a great sense of things for someone working in tech to make such a strong statement as the above, given these circumstances. There is no reason to expect that improvement will not continue, that agents' ability to review code (drastically improved even since May/June) won't keep improving, or that, to the extent your concerns about current agents are valid, they won't be able to manage the surfacing of issues for review and remediation.

      This also ignores the growing awareness within the community of professional developers that, yes, blind trust cannot be given (naive as it was, if understandable given a lack of knowledge of the models' capabilities), and reasonable review of generated code at the end of a session is good practice. Developers are quite often already changing their habits with agents, if they weren't doing such review before.

      As for companies that may fire developers, rely near completely on agentic coding? I agree with the sentiment, but I really have not seen that happen to any great degree.

      I won't assume this is the case for you, you may very well have dug in depth and worked with agents like claude code, understanding how best to scaffold by creating skills and agents, and still come to your conclusions. I would hope so. For the most part though, a large number of the people making comments of this sort seem to betray directly or indirectly that they-- nearly by definition of distrusting these agents and therefore avoiding them-- have little idea of what they are dismissing precisely because they have not taken the time to do more than dismiss them after little more than cursory review.

      At the same time, many notable developers of significant accomplishment comment here nearly daily, sharing experiences of real capability and assistance from these agents.

      • By marscopter 2026-02-0818:28

        > the growing awareness within the community of professional developers that ... blind trust cannot be given, and reasonable review of generated code at the end of a session is good practice.

        This saddens me. It saddens me that this is a trend among professional developers. It's an old lesson. We've known it for decades. Running arbitrary code is probably not the best idea.

    • By bdcravens 2026-02-0716:38

      You still "find most stuff by building/struggling". You just move up stack.

      > there will be a shift when all of these vibe code only folks get cooked in a way that's closer to existential than benign

      For those who are "vibe code only", perhaps. But it's no different than the "coding bootcamp only" developers who never really learned to think holistically. Or the folks who learned the bare minimum to get those sweet dotcom boom dollars back in the day, and then had to return to selling cars when it all came crashing down.

      The winners have been, and will always be, those who can think bigger. The ones today who already know how to build from scratch but then find the superpower is in architecture, not syntax, and suddenly find themselves 10x more productive.

    • By cookiengineer 2026-02-0717:551 reply

      Come to the redteam / purpleteam side. We're having fun times right now. "Every software has bugs" is now on a whole new level, because people don't even care about SQL injection anymore. It's built right into every vibecoded codebase.

      Authentication and authorization is as simple as POST /api/create/admin with zero checks. Pretty much every API ever slop coded looks like this. And if it doesn't, it will forget about security checks two prompts later and reverse the previously working checks.
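      The injection failure mode described above can be sketched in a few lines. This is an illustrative stand-in (sqlite3, an invented `users` table), not code from any real vibecoded API:

```python
import sqlite3

# Toy database with two rows so the contrast is visible.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?)",
    [("alice", "user"), ("bob", "admin")],
)

# Hostile input: the classic always-true tail.
user_input = "nobody' OR '1'='1"

# Vulnerable: string interpolation lets the input rewrite the query,
# so the WHERE clause becomes name = 'nobody' OR '1'='1' and matches everything.
leaked = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'"
).fetchall()

# Parameterized: the driver treats the input strictly as data, not SQL,
# so it looks for a user literally named "nobody' OR '1'='1" and finds none.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
```

      The fix is one habit (placeholders instead of f-strings), which is exactly the kind of invariant that, per the comment above, can silently disappear two prompts later.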

      • By rglover 2026-02-0815:42

        Good lord. That's a level of carelessness that's just chef's kiss.

    • By j-krieger 2026-02-0810:40

      This article is a prime example. You're not foregoing frameworks. You're building your own. Except this time it's not "yours": you let an AI build an unholy fusion of all the frameworks in its dataset.

    • By MrDarcy 2026-02-0716:571 reply

      This comment ignores the key insight of the article. Design is what matters most now. Design is the difference between vibe coding and software engineering.

      Given a good design, software engineers today are 100x more productive. What they produce is high quality due to the design. Production is fast and cheap due to the agents.

      You are correct, there will be a reckoning for large-scale systems which are vibe coded. The author is also correct: well designed systems no longer need frameworks or vendors, and they are unlikely to fail because they were well designed from the start.

      • By goostavos 2026-02-0717:343 reply

        >software engineers today are 100x more productive

        Somebody needs to explain to my lying eyes where these 100xers are hiding. They seem to live in comments on the internet, but I'm not seeing the teams around me increase their output by two orders of magnitude.

        • By camdenreslink 2026-02-082:23

          I would say I'm like 1.2x more productive, and I think I'm more of the typical case (of course I read all of the code the LLM produces, so maybe that's where I've gone wrong).

        • By insin 2026-02-091:22

          If they did a year of work in ~3 days, presumably they're on a beach somewhere.

        • By MrDarcy 2026-02-0718:033 reply

          They are the people who have the design sense of someone like Rob Pike but lack his coding skill. These people are now 100x more capable than they were previously.

          • By devsda 2026-02-0718:40

            This is how you get managers saying

            "we have taken latest AI subscription. We expect you to be able to increase productivity and complete 5/10/100 stories per sprint from now on instead of one per sprint that we planned previously".

          • By seabrookmx 2026-02-0719:041 reply

            Citation needed. For both the existence of said people (how do you develop said design sense without a ton of coding experience?) and that they are 100x more productive.

            • By leptons 2026-02-080:431 reply

              If you produced 1 line of code per hour before "AI" because you suck, and now produce 100 lines of code per hour with AI, you are now a 100x programmer.

              I'm joking of course, but that's probably how some people see it.

              • By seabrookmx 2026-02-081:14

                No I think you're 100% correct. But these people also miss out on the irony that using "lines of code" as a metric is a literal meme amongst software developers.

          • By vips7L 2026-02-0718:20

            No they’re not.

    • By tempodox 2026-02-088:15

      We’ve seen so many instances of Worse Is Better™ materialize around software development (and other areas) that I don’t dare to be as optimistic as you.

    • By johannes1234321 2026-02-0723:131 reply

      The interesting aspect is that this allows people who couldn't do it before to build tools for their department's needs. In the past, they would have needed a budget to go to the IT department, and then complex processes to translate the requirements from business needs into a form the developers understand.

      Now they can quickly build simple tools. Yes, they are bad from software quality pov, but they solve problems.

      • By xtiansimon 2026-02-0922:05

        > "Now they can quickly build simple tools. Yes, they are bad from software quality pov, but they solve problems."

        Simple tools can make big mistakes.

        I've done it. It was an outlier case. A small mistake of logic that gave out money already paid once, in a neglected corner (so to speak) of the logic tree. One bad script can overshadow tens of good ones.

    • By farseer 2026-02-0720:191 reply

      I think it would be the opposite and we are all in for a rude awakening. If you have tried playing with Opus 4.6 you would know what I am talking about.

      • By nomel 2026-02-0720:50

        I tell my colleagues we're in the instantaneous peak of the AI developer relationship, especially for code monkeys. We're still valued, still paid really well, and our jobs will get easier and easier probably for the next 5-10 years! After that, maybe not so great for many of us, with the developers that use software as a means of their actual profession continuing to do just fine (hard math/science/optimization/business planning/project planners/etc).

        I think it's going to be an amazing shift from those that know intricate details of software to enabling those that have the best ideas that can be implemented with software (a shift from tool makers to tool users).

        I think many developers misunderstand the quality of software that people outside of software are willing to live with, if it does exactly what they want when they need it. For a user, it's all a black box, "do something I want or not", regardless of what's under the hood. Things like "elegant" and "clean" and "maintainable" are mostly academic and almost never matter for practical solutions to actual problems. This is something I learned far too late in my professional career, where the lazy dev with shite code would get the same recognition as the guy with beautiful code: does it solve the real-world problem or not?

        Safety critical, secure, etc, sure, but most is not. And, even with those, the libraries/APIs/etc are separate components.

    • By markus_zhang 2026-02-0717:35

      But by then many of us will already have starved. That's why I've always said that engineers should NOT integrate AI with internal data.

    • By aogaili 2026-02-0719:132 reply

      What makes you so sure of your statement?

      I have been building systems for 20 years, and I think the author is right.

      • By leptons 2026-02-080:44

        I've been building systems for 40 years, and I think... you know what, nevermind, it doesn't really matter what either of us thinks.

      • By rglover 2026-02-0815:24

        Observation.

    • By igleria 2026-02-0720:34

      > A significant number of developers and businesses are going to have an absolutely brutal rude awakening in the not too distant future.

      I pray (?) for times like the ones you predict. But companies can stay irrational longer than the average employee can afford.

    • By nojito 2026-02-0716:383 reply

      >A significant number of developers and businesses are going to have an absolutely brutal rude awakening in the not too distant future.

      Correct. Those who wave away coding agents and refuse to engrain them into their workflows are going to be left behind in the dust.

      • By HighGoldstein 2026-02-0716:563 reply

        > Correct. Those who wave away AI and refuse to engrain it into their workflows are going to be left behind in the dust.

        Similar to those who waved away crypto and are now left behind in the dust, yes?

        • By literalAardvark 2026-02-0718:211 reply

          Might not be the best counterexample, since everyone who bought BTC before Jan 2024 is now in massive profit.

          • By oblio 2026-02-0723:261 reply

            Bitcoin is one of the few real survivors of the crypto crash and even it has failed in its mission. Read what Satoshi meant and what Bitcoin is now.

            It's not a coin, it's gold bars.

            • By literalAardvark 2026-02-1011:37

              Gold bars with near instant, near free global delivery?

              Think you might need to update that idea.

        • By superze 2026-02-0717:381 reply

          You forgot NFTs

          • By FeteCommuniste 2026-02-0717:49

            Remember when the geniuses at Andreessen Horowitz were dumping hundreds of millions into the "metaverse?"

        • By pawelduda 2026-02-0717:101 reply

          I think Bitcoin and major cryptos outperformed a lot of assets over the last decade, so you could say it left some people behind in the dust, yes

          • By LunaSea 2026-02-0717:592 reply

            Like being ratioed with a 50% price crash?

            • By literalAardvark 2026-02-0718:23

              You mean just like META, NFLX, AMZN, TSLA, NVDA, CSCO, MSFT, GE, BAC ?

            • By pawelduda 2026-02-0718:34

              I can tell you what a decade is but I'll have to leave the reading comprehension to you

      • By otabdeveloper4 2026-02-0716:451 reply

        Doubt on that. AI usually only wastes time and produces bugs.

        > bbut you're holding it wrong, just two more prompts and three more agents and it will be a real boy

        So, you invented an IDE, except more opaque and expensive? Welcome to the club.

        • By verdverm 2026-02-0716:512 reply

          You both are likely incorrect, the answer lies in the middle rather than the extremes

          • By kaydub 2026-02-0719:581 reply

            I think a lot of the people who are pro-LLM are already in the middle. It's the naysayers who are sticking to a strict definition of "vibe coding", as if it's random people typing in "make me $app but make no bugs" and getting a 100% working $app with no bugs. You have the gastown folks trying to do that, but nobody serious putting in good work with LLM agents is "vibe coding" in that way.

            • By otabdeveloper4 2026-02-097:36

              > as if it's random people typing in "make me $app but make no bugs" and getting a 100% working $app with no bugs

              Yes, that's literally how LLMs are sold to management.

          • By redleggedfrog 2026-02-0717:20

            This is not just software development wisdom, it's life wisdom.

    • By wouldbecouldbe 2026-02-0718:08

      Yeah I completely disagree with the author actually, but also with you.

      The frameworks are what make the AI write easily understandable code. I let it run Next.js with an ORM, and it almost always creates very well defined API routes, classes & data models. Often better than I would do.

      I also ask it to be way more rigorous on the validation & error handling than I would ever be. It makes mistakes; I shout at it and it corrects them quickly.

      So the projects I've been "vibe coding" have a much better codebase than I used to have on my solo projects.

    • By bthornbury 2026-02-0717:483 reply

      Why does there seem to be such a divide in opinions on AI in coding? Meanwhile, those who "get it" have been improving their productivity for literally years now.

      • By paulhebert 2026-02-0718:392 reply

        I think there are a number of elements:

        - What you are working on. AI is better at solving already solved problems with lots of examples.

        - How fast/skilled you were before. If you were slow before then you got a bigger speed up. If AI can solve problems you can’t you unlock new abilities

        - How much quality is prioritized. You can write quality, bug free code with AI but it takes longer and you get less of a boost.

        - How much time you spend coding. If a lot of your job is design/architecture/planning/research then speeding up code generation matters less

        - How much you like coding. If you like coding then using AI is less fun. If you didn’t like coding then you get to skip a chore

        - How much you care about deeply understanding systems

        - How much you care about externalities: power usage, data theft, job loss, etc.

        - How much boilerplate you were writing before

        I’m sure that’s not a complete list but they are a few things I’ve seen as dividers

        • By ggregoire 2026-02-080:281 reply

          > How much you like coding. If you like coding then using AI is less fun.

          I'm surprised this is never brought up here on "Hacker" News. I've been reading HN for 14 years and all this time I thought people here enjoyed programming. Turns out the majority hates it apparently.

          • By paulhebert 2026-02-080:38

            Yeah, it’s strange to me. One of the reasons I chose this career is because I enjoy programming.

            I don’t get the same satisfaction when using AI coding tools. I feel like I’m the manager of a faceless team of mediocre employees

        • By paulhebert 2026-02-0718:501 reply

          A few more:

          - How much do you prioritize speed?

          - Do you have a big backlog of dev tasks ready to go?

          - What are the risks if your software doesn’t work?

          - Are you working on a green field or legacy project? Prototypes or MVPs?

          • By paulhebert 2026-02-0719:01

            - Do you prefer working as a manager or an individual contributor? Are you used to owning the code or managing others who write code?

            - How does your brain work? Do you rely on flow states? Do you get distracted waiting for the LLM response? How do you handle context switching?

      • By rglover 2026-02-0722:03

        I've been using it every day for nearly two years now, with your suggested productivity boost observed.

        The difference is that I'm not just letting agents willy nilly commit code. I treat them more like a companion and guide their steps (I use Cline w/ Sonnet/Opus 4.5/4.6). Not only do I save a ton of money on tokens, but the results end up being infinitely better than the "yolo" mode outcomes (even with excellent prompting/context).

        From my POV, the only divide is between a willingness to be an accountable professional versus someone who just "lets the AI do it" and whistles-with-hands-in-pockets when that code inevitably blows up in a way you couldn't predict (because you weren't checking, only the AI was, which you swore was "good enough").

        That approach works if you're just sitting on your couch hacking up toys to dice roll on X. But if you're trying to build reliable, deterministic systems that aren't constantly buzzing you awake at 3am, you're asking for a serious humbling if no one in your organization can explain how or why anything your business relies on works the way it does (that's operationally suicidal, imo, but hey—America).

        That gets misinterpreted as being a "luddite," when really it's just having been down the rabbit hole enough times to know that if you can't point to and understand why it's happening (and ideally, whodunit), you don't know shit.

      • By kaydub 2026-02-07 19:37

        There's a lot of reasons. There's a lot of breadth to "software engineering" (FAANG, web dev, embedded, OS, small business, etc.)

        I'm sure there are some places where LLMs are bad due to lack of training data. There are some places where the LLMs are bad because the code base is terrible (and there's always "rockstars" at these jobs that severely overestimate their skills because they're always the one fixing the mess... which they also probably caused). Some devs/engineers feel threatened. Many devs/engineers think they're special and super smart so surely no machine can do their job.

    • By arathis 2026-02-07 23:06

      The hard thing to watch is developers grapple with the idea that what makes them special isn’t so special anymore.

    • By habinero 2026-02-08 1:52

      Clearing out the digital asbestos is going to make a lot of consultants a lot of money.

    • By xnx 2026-02-07 23:35

      Possible, but this sounds like motivated reasoning.

    • By redleggedfrog 2026-02-07 17:15 · 4 replies

      The future is already here. Been working a few years at a subsidiary of a large corporation where the entire hierarchy of companies is pushing AI hard, at different levels of complexity, from office work up through software development. Regular company meetings across companies and divisions to discuss methods and progress. Overall not a bad strategy and it's paying dividends.

      An experiment was tried on a large and very intractable code-base of C++, Visual Basic, classic .asp, and SQL Server, with three different reporting systems attached to it. The reporting systems were crazy, controlled by giant XML files with complex namespaces and no-nos like the order of the nodes mattering. It had been maintained by offshore developers for maybe 10 years or more. The application was originally created over 25 years ago. They wanted to replace it with modern technology, but they estimated it'd take 7 years(!). So they just threw a team at it and said, "Just use prompts to AI and hand code minimally and see how far you get."

      And they did wonderfully (and this is before the latest Claude improvements and agents) and they managed to create a minimal replacement in just two months (two or maybe three developers full time I think was the level of effort). This was touted at a meeting and given the approval for further development. At the meeting I specifically asked, "You only maintain this with prompts?" "Yes," they said, "we just iterate through repeated prompts to refine the code."

      It has all mostly been abandoned a few months later. Parts of it are being reused, attempting a kind of "work in from the edges" approach to replacing parts of the system, but mostly it's dead.

      We have yet to have a postmortem on this whole thing, but I've talked to the developers, and they essentially made a different intractable problem of repeated prompting breaking existing features when attempting to apply fixes or add features. And breaking in really subtle and hard-to-discern ways. The AI-created unit tests didn't often find these bugs, either. They really tried a lot of angles trying to sort it out - complex .md files, breaking up the monolith to make the AI have less context to track, gross simplification of existing features, and so on. These are smarty-pants developers, too, people who know their stuff, got better than BS's, and they themselves were at first surprised at their success, then not so surprised later at the eventual result.

      There was also a cost angle that became intractable. Coding like that was expensive. There was a lot of hand-wringing from managers over how much it was costing in "tokens" and whatever else. I pointed out that if it's less cost than 7 years of development you're ahead of the game, to which they pointed out it would be a cost spread over 7 years, not in 1 year. I'm not an accountant, but apparently that makes a difference.

      I don't necessarily consider it a failed experiment, because we all learned a lot about how to better do our software development with AI. They swung for the fences but just got a double.

      Of course this will all get better, but I wonder if it'll ever get there like we envision, with the Star Trek, "Computer, make me a sandwich," method of software development. The takeaway from all this is you still have to "know your code" for things that are non-trivial, and really, you can go a few steps above non-trivial. You can go a long way not looking too closely at the LLM output, but there is a point at which it starts to be friction.

      As a side note, not really related to the OP, but the UI cooked up by the LLMs was an interesting "card" looking kind of thing, actually pretty nice to look at and use. Then, when searching for a wiki for the Ball x Pit game, I noticed that some of the wikis very closely resembled the UI for the application. Now I see variations of it all over the internet. I wonder if the LLMs "converge" on a particular UI if not given specific instructions?

      • By pragmatic 2026-02-07 18:19 · 1 reply

        These are the blog posts we need.

        This is the siren song of llm. "Look how much progress we made"

        Effort increases as time to completion decreases. The last 10% of the project takes 90% of the effort as you try to finish up, deploy, integrate, and find the gaps.

        LLMs are woefully incapable of that, as that knowledge doesn't exist in a markdown file. It's in people's heads and you have to pry it out with a crowbar, or, as happens to so many projects, they get released and no one uses them.

        See Google et al.: "We failed to find market fit on the 15th iteration of our chat app, we'll do better next time."

        • By J_Shelby_J 2026-02-07 20:07

          For complex code bases generated by AI the last 10% takes 190% of the effort because you end up tearing it all apart to rebuild it right.

      • By nottorp 2026-02-07 17:58

        I've noticed this in my small-scale tests. Basically, the larger the prompt gets (and it includes all the previously generated code, because that's what you want to add features to), the more likely it is that the LLM will go off the rails. Or forget the beginning of the context. Or go into a loop.

        Now if you're using a lot of separate prompts where you draw from whatever the network was trained on and not from code that's in the prompt, you can get usable stuff out of it. But that won't build you the whole application.

      • By kaydub 2026-02-07 19:54

        > I wonder if the LLMs "converge" on a particular UI if not given specific instructions?

        Purple. They really fucking like this purple gradient background for some reason lol.

      • By sonofhans 2026-02-07 17:49

        In a veritable ocean of opinions it is excellent to see a detailed, first-hand report. Many thanks!

    • By coldtea 2026-02-08 1:10

      Most systems are not needed anyway. They are built to be built, by companies not caring for how good, secure, or fast they are, or even how usable they are, as long as users are lured or forced to use them.

      Nothing special will happen when things are built "this way". People will just be used to lower quality, like they've been used to enshittified software for the past two decades.

      No "artisans" will be employed to fix any mess. The mess will be our software baseline going forward.

    • By radium3d 2026-02-09 23:19

      Nah, do you honestly think the thousands of sub-packages in your project are actually being maintained? False sense of security there.

    • By bodge5000 2026-02-07 22:13

      > Good time to be in business if you can see through the bs and understand how these systems actually function

      You missed out the most crucial and least likely requirement (assuming you're not self employed); management also need to be able to see through the bs.

    • By simianuuords 2026-02-07 17:40

      [dead]

    • By rugPool 2026-02-07 18:12

      Back in the 00s people like you were saying "no one will put their private data in the cloud!"

      "I am sick of articles about the cloud!"

      "Anyone know of message boards where discussing cloud compute is banned?"

      "Businesses will not trust the cloud!"

      Aside from logistics of food and medicine, most economic activity is ephemeral wank.

      It's memes. It's a myth. Allegory.

      These systems are electrical state in machines and they can be optimized at the hardware layer.

      Your Python or Ruby or whatever you ship 9,000 layers of state and abstraction above the OS running in the data center has little influence on how these systems actually function.

      To borrow from poker; software engineers were being handed their hat years ago. It's already too late.

  • By ipsento606 2026-02-07 15:50 · 8 replies

    > Software engineers are scared of designing things themselves.

    When I use a framework, it's because I believe that the designers of that framework are i) probably better at software engineering than I am, and ii) have encountered all sorts of problems and scaling issues (both in terms of usage and actual codebase size) that I haven't encountered yet, and have designed the framework to ameliorate those problems.

    Those beliefs aren't always true, but they're often true.

    Starting projects is easy. You often don't get to the really thorny problems until you're already operating at scale and under considerable pressure. Trying to rearchitect things at that point sucks.

    • By GoatInGrey 2026-02-07 16:48 · 3 replies

      To be blunt, I think it's a form of mania that drives someone to reject human-written code in favor of LLM-generated code. Every time I read writing from this perspective that exceeds a paragraph, I quickly realize the article itself was written by an LLM. When they automate this much writing, it makes me wonder how much of their own reading they automate away too.

      The below captures this perfectly. The author is trying to explain that vibe-coding their own frameworks lets them actually "understand" the code, while not noticing that the LLM-generated text they used to make this point is talking about cutting and sewing bricks.

      > But I can do all of this with the experience on my back of having laid the bricks, spread the mortar, cut and sewn for twenty years. If I don’t like something, I can go in, understand it and fix it as I please, instructing once and for all my setup to do what I want next time.

      • By Dfiesl 2026-02-07 17:33 · 1 reply

        I think the bit you quoted is a tie in with an earlier bit:

        “I can be the architect without the wearing act of laying every single brick and spreading the mortar. I can design the dress without the act of cutting and sewing each individual piece of fabric”

        To me, this text doesn’t read as being entirely written by an LLM, there is definitely an air of LLM about it though, so maybe the first draft was.

        • By daveguy 2026-02-08 1:20

          That additional jumbled, incoherent mixing of metaphors doesn't make it any better.

      • By potsandpans 2026-02-07 18:44 · 1 reply

        > Every time I read writing from this perspective that exceeds a paragraph, I quickly realize the article itself was written by an LLM.

        Likewise, I think that this mentality is a modern day conversion disorder.

        "Everything I don't agree with is an LLM, I have no proof it just feels good!"

        • By cortesoft 2026-02-07 20:34

          It’s the ‘woke’ of the computer world.

      • By GeoAtreides 2026-02-07 18:33 · 1 reply

        > it's a form of mania

        Correct. History is rife with examples of manias taking hold of societies. I recommend "Memoirs of Extraordinary Popular Delusions and the Madness of Crowds" by Charles Mackay[1], it's an absolutely fascinating book.

        [1] https://www.gutenberg.org/ebooks/24518

        • By lstodd 2026-02-08 0:36

          > absolutely fascinating book

          Indeed. Thank you for posting this link.

    • By fourside 2026-02-07 16:55 · 1 reply

      Yeah the “not invented here” syndrome was considered an anti pattern before the agentic coding boom and I don’t see how these tools make it irrelevant. If you’re starting a business, it’s still likely a distraction if you’re writing all of the components of your stack from scratch. Agentic tools have made development less expensive, but it’s still far from zero. By the author’s admission, they still need to think through all these problems critically, architect them, pick the right patterns. You also have to maintain all this code. That’s a lot of energy that’s not going towards the core of your business.

      What I think does change is that now you can more easily write components that are tailor-made to your problem and situation. Some of these frameworks are meant to solve problems at varying levels of complexity and need to worry about avoiding breaking changes. It’s nice to have the option to develop alternatives that are as sophisticated as your problem needs and not more. But I’m not convinced that it’s always the right choice to build something custom.

      • By solidasparagus 2026-02-07 22:43

        I'm not sure.

        The cost of replacement-level software drops a lot with agentic coding. And maintenance tasks are similarly much smaller time sinks. When you combine that with the long-standing benefits of in-house software (customizable to your exact problem, tweakable, often cleaner code because the feature set can be a lot smaller), I think a lot of previously obvious dependencies become viable to write in house.

        It's going to vary a lot by the dependency and scope - obviously owning your own React is a lot different than owning your own leftpad, but to me it feels like there's no way that agentic coding doesn't shift the calculus somewhat. Particularly when agentic coding makes a lot of nice-to-have mini-features trivial to add, so the developer experience gap between a maintained library and a homegrown solution is smaller than it used to be.

    • By exe34 2026-02-07 16:54 · 2 replies

      my problem with frameworks has always been that the moment I want to do something the framework writers aren't interested in, I now have three problems: my problem, how to implement it in the underlying platform and how to work around the framework to not break my feature.

      • By SoftTalker 2026-02-07 17:00 · 1 reply

        Yes this happens in every framework I've ever used. My approach used to be to try to work around it, but now I've got these local exceptions to what the framework does and that is inevitably where problems/bugs pop up. Now I simply say "we can't implement the feature that way in this framework, we need to rework the specification." I no longer try to work against the framework, it's just a massive time sink and creates problems down the road.

        It's like designing a kitchen and you don't make all the spaces some multiple of three inches. Now, standard cabinets and appliances will not fit. You will be using filler panels or need custom cabinetry. And anyone who later wants countertops or different cabinets will be working around this design too. Just follow the established standard practices.

        • By exe34 2026-02-07 17:10 · 2 replies

          I'm so glad software engineering isn't my job. I love solving problems, and I'm somewhat better at using code to do it than my peers (fellow scientists), but I would hate to have a boss/client that says "it needs to do X" and the framework writer (or SDK, ala Android/Xcode) say "no, that hurts my profits/privacy busting".

          • By skydhash 2026-02-08 1:38 · 1 reply

            I've never found something that was impossible to implement in any framework or SDK. Even in Android SDK land, you can easily get access to an OpenGL surface and import the whole world via the NDK. There's nothing limiting other than the OS itself and its mechanism.

            Same with Web frameworks. Even React (a library) has its escape hatches to let in the rest of the world.

            • By exe34 2026-02-08 7:51 · 1 reply

              okay, push notification without going through firebase. go!

              • By skydhash 2026-02-08 12:31 · 1 reply

                Where is your copy of the Android source code for the device you’re manufacturing? Because that’s how you can get the full feature set. Otherwise you will be restricted by Android’s aggressive suspend-and-kill policies.

                • By exe34 2026-02-08 16:10 · 1 reply

                  Where did I say I was manufacturing a device? You just proved my point.

                  • By skydhash 2026-02-08 17:09 · 1 reply

                    You wrote

                    > I would hate to have a boss/client that says "it needs to do X" and the framework writer (or SDK, ala Android/Xcode) say "no, that hurts my profits/privacy busting".

                    An answer to such request should be: "We would need to ship a custom version of Android". Just like if you need to setup a web server on a Linux system, you would need to be root. You don't choose a shared hosting and then complain about the lack of permissions.

                    • By exe34 2026-02-08 21:34

                      that's amazing, shared hosting on the device I bought. no thank you. I'll root the damn thing and do as I please. If future devices don't allow that, I won't have a reason to carry them in my pocket.

          • By andoando 2026-02-07 17:53

            Software companies all understand that so its not really a problem. Maybe in companies where the software isn't the main product

      • By devin 2026-02-07 17:56

        In Clojure land, the mantra has long been "libraries over frameworks" for this reason.

    • By kaydub 2026-02-07 20:09

      Yeah, I'm huge on using LLMs for coding, but one of the biggest wins for me is that the LLM already knows the frameworks. I no longer need to learn whatever newest framework there is. I'll stick to my frameworks, especially when using an LLM to code.

    • By bdangubic 2026-02-07 16:42

      after 3 decades as SWE I mostly found both i) and ii) to not be true, for the most part. a lot of frameworks are not built from the ground up as “i am building a thing to solve x” but “i had a thing and built something that may (or may not) be generally useful.” so a lot of them carry weight from what they were originally built from. then people start making requests to mold the framework to their needs, some get implemented, some don’t. those that don’t good teams will build extensions/plugins etc into the framework and pretty soon you got a monster thing inside of your codebase you probably did not need to begin with. i think every single ORM that i’ve ever used fits this description.

    • By tvbusy 2026-02-08 8:45

      Totally. Frameworks also make it a lot easier for new team members to contribute. React, for example, makes it a lot easier to hire. Any project with moderate size will require some kind of convention to keep things consistent and choosing a framework makes this easier.

      Now look at the cross team collaboration and it gets even harder without frameworks. When every team has their own conventions, how would they communicate and work together? Imagine a website with React, Vue, Angular all over the place, all fighting for the same DOM.

    • By marcosdumay 2026-02-07 17:14

      > Those beliefs aren't always true, but they're often true.

      You can probably tell with a high certainty, from the API in an hour or so.

    • By feastingonslop 2026-02-07 16:12 · 4 replies

      And there was a time when using libraries and frameworks was the right thing to do, for that very reason. But LLMs have the equivalent of way more experience than any single programmer, and can generate just the bit of code that you actually need, without having to include the whole framework.

      • By trescenzi 2026-02-07 16:28

        As someone who’s built a lot of frontend frameworks, this isn’t what I’ve found. Instead I’ve found that you end up with the middle-ground choice, which while effective is no better than the externally maintained library of choice. The reason to build your own framework is so it’s tailor-suited to your use cases. LLMs can help with the architecting required to do that, but you have to guide them, and to guide them you need expertise.

      • By plagiarist 2026-02-07 16:33 · 1 reply

        I would like a more reliable way to activate this "way more experience."

        What I see in my own domain I often recognize as superficially working but flawed in various ways. I have to assume the domains I am less familiar are the same.

        • By koverstreet 2026-02-07 17:42

          Claude's a smart junior engineer who's read a lot of books but is lacking in real-world experience.

          It definitely eliminates a lot of tedium, but needs a lot of guidance if you want good results.

      • By leecommamichael 2026-02-07 16:46

        > can generate just the bit of code that you actually need

        Design is the key. Codebases (libraries and frameworks not exempt) have a designed uniformity to them. How does a beginner learn to do this sort of design? Can it be acquired completely by the programmer who uses LLMs to generate their code? Can it be beneficial to recognize opinionated design in the output of an LLM? How do you come to recognize opinion?

        In my personal history, I've worked alongside many programmers who only ever used frameworks. They did not have coding design sensibilities deeper than a social populist definition of "best practice." They looked to someone else to define what they can or cannot do. What is right to do.

      • By mnicky 2026-02-07 16:19

        Critically, they will also enable faster future migration to a framework in case it proves useful.

  • By abcde666777 2026-02-07 14:20 · 19 replies

    It's strange to me when articles like this describe the 'pain of writing code'. I've always found that the easy part.

    Anyway, this stuff makes me think of what it would be like if you had Tolkien around today using AI to assist him in his writing.

    'Claude, generate me a paragraph describing Frodo and Sam having an argument over the trustworthiness of Gollum. Frodo should be defending Gollum and Sam should be on his side.'

    'Revise that so that Sam is harsher and Frodo more stubborn.'

    Sooner or later I look at that and think he'd be better off just writing the damned book instead of wasting so much time writing prompts.

    • By capyba 2026-02-07 14:23 · 11 replies

      Your last sentence describes my thoughts exactly. I try to incorporate Claude into my workflow, just to see what it can do, and the best I’ve ended up with is - if I had written it completely by myself from the start, I would have finished the project in the same amount of time but I’d understand the details far better.

      Even just some AI-assisted development in the trickier parts of my code bases completely robs me of understanding. And those are the parts that need my understanding the most!

      • By jatora 2026-02-07 15:24 · 1 reply

        I don't really understand how this is possible. I've built some very large applications, and even a full LLM data curation, tokenizer, pretrain, and post-train SFT/DPO pipeline with LLMs, and it most certainly took far less time than if I had done it manually. Sure it isn't all optimal... but it most certainly isn't subpar, and it is fully functional.

        • By Ocha 2026-02-07 15:36 · 3 replies

          So you skipped the code review and just checked that it does what you needed it to do?

          • By jatora 2026-02-08 1:17

            Nay, I periodically go through phases in most of my large applications where I do nothing but test, refine, refactor, consolidate, and modularize (the multi-thousand-line monoliths that LLMs love to make so much) for days before resuming my roadmaps.

            This part is honestly one of the most satisfying, where you merge the already existing functionality with beautiful patterns.

            I also make good use of precommit hook scripts to catch drift where possible
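            A drift check of the kind described could be sketched as the guts of a pre-commit hook. This is a hypothetical illustration, not the commenter's actual scripts; the 500-line threshold, the marker patterns, and the `check_drift` name are all invented:

```python
import re

def check_drift(staged_files, max_lines=500):
    """Hypothetical pre-commit check: flag the multi-thousand-line
    monoliths agents tend to produce, plus leftover debug markers."""
    problems = []
    for path, text in staged_files:
        n = text.count("\n") + 1
        if n > max_lines:
            problems.append(f"{path}: {n} lines exceeds {max_lines}")
        if re.search(r"TODO\(llm\)|breakpoint\(\)", text):
            problems.append(f"{path}: leftover debug/placeholder marker")
    return problems  # a non-empty list would fail the commit

issues = check_drift([("app.py", "x = 1\nbreakpoint()\n")])
# issues -> ["app.py: leftover debug/placeholder marker"]
```

            In a real hook, `staged_files` would be read from `git diff --cached --name-only` and the script would exit nonzero on any problem to block the commit.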

          • By enraged_camel 2026-02-07 16:02 · 1 reply

            I don't know how anyone can make this assumption in good faith. The poster did not imply anything along those lines.

            • By joshuahaglund 2026-02-07 16:52

              That looked like a leading question to me, asking for confirmation but not an outright assumption. Seems like a fair question

          • By TideAd 2026-02-07 17:01 · 1 reply

            GPT-5 codex variants with xhigh reasoning make great code reviewers.

            • By SamPatt 2026-02-07 17:51

              5.2 Codex is excellent at reviewing commits. I haven’t used 5.3, I assume it's as good or better.

              Especially for large commits, it's become indispensable.

      • By dvfjsdhgfv 2026-02-07 14:50 · 6 replies

        > if I had written it completely by myself from the start, I would have finished the project in the same amount of time but I’d understand the details far better.

        I believe the argument from the other camp is that you don't need to understand the code anymore, just like you don't need to understand the assembly language.

        • By hakunin 2026-02-07 15:14 · 2 replies

          Of all the points the "other side" makes, this one seems the most incoherent. Code is deterministic, AI isn’t. We don’t have to look at assembly, because a compiler produces the same result every time.

          If you only understand the code by talking to AI, you would’ve been able to ask AI “how do we do a business feature” and ai would spit out a detailed answer, for a codebase that just says “pretend there is a codebase here”. This is of course an extreme example, and you would probably notice that, but this applies at all levels.

          Any detail, anywhere cannot be fully trusted. I believe everyone’s goal should be to prompt ai such that code is the source of truth, and keep the code super readable.

          If ai is so capable, it’s also capable of producing clean readable code. And we should be reading all of it.

          • By cheema33 2026-02-07 16:39 · 2 replies

            “Of all the points the other side makes, this one seems the most incoherent. Code is deterministic, AI isn’t. We don’t have to look at assembly, because a compiler produces the same result every time.”

            This is a valid argument. However, if you create test harnesses using multiple LLMs validating each other’s work, you can get very close to compiler-like deterministic behavior today. And this process will improve over time.

            • By hakunin 2026-02-07 16:54 · 2 replies

              It helps, but it doesn't make it deterministic. LLMs could all be misled together. A different story would be if we had deterministic models, where the exact same input always results in the exact same output. I'm not sure why we don't try this tbh.

              • By verdverm 2026-02-07 17:00

                I've been wondering if there are better random seeds, like how there are people who hunt for good seeds in Minecraft

              • By exe34 2026-02-07 17:06 · 1 reply

                it's literally just setting T=0. except they are not as creative then. they don't explore alternative ideas from the mean.

                • By hakunin 2026-02-07 17:18

                  Are you sure that it’s T=0? My comment’s first draft said “it can’t just be setting temp to zero, can it?” But I felt like T is not enough. Try running the same prompt in new sessions with T=0, like “write a poem”. Will it produce the same poem each time? (I’m not where I can try it currently.)
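                  The mechanics being debated can be sketched in a toy model (this illustrates only the sampling math, not any vendor's API; real serving stacks add further nondeterminism from batched kernels and floating-point reduction order, which is part of why T=0 in practice may still not reproduce the same poem):

```python
import math
import random

def sample_next(logits, temperature, rng):
    """Toy next-token chooser: T -> 0 collapses softmax sampling
    into a deterministic argmax (greedy decoding)."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    weights = [math.exp(x / temperature) for x in logits]
    return rng.choices(range(len(logits)), weights=weights)[0]

logits = [2.0, 1.0, 0.5]  # made-up scores for three candidate tokens
# Greedy (T=0) is reproducible: every seed picks the same token.
greedy = {sample_next(logits, 0, random.Random(seed)) for seed in range(5)}
# At T=1 the choice is a weighted random draw, so it can vary by seed.
sampled = {sample_next(logits, 1.0, random.Random(seed)) for seed in range(50)}
```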

            • By otabdeveloper4 2026-02-07 16:48

              > just add more magic turtles to the stack, bro

              You're just amplifying hallucination and bias.

          • By ctoth 2026-02-07 15:44 · 2 replies

            > other side???

            > We don’t have to look at assembly, because a compiler produces the same result every time.

            This is technically true in the narrowest possible sense and practically misleading in almost every way that matters. Anyone who's had a bug that only manifests at -O2, or fought undefined behavior in C that two compilers handle differently, or watched MSVC and GCC produce meaningfully different codegen from identical source, or hit a Heisenbug that disappears when you add a printf ... the "deterministic compiler" is doing a LOT of work in that sentence that actual compilers don't deliver on.

            Also what's with the "sides" and "camps?" ... why would you not keep your identity small here? Why define yourself as a {pro, anti} AI person so early? So weird!

            • By hakunin 2026-02-07 15:52

              You just described deterministic behavior. Bugs are also deterministic. You don’t get different bugs every time you compile the same code the same way. With LLMs you do.

              Re: “other side” - I’m quoting the grandparent’s framing.

            • By danny_codes 2026-02-07 15:54 · 1 reply

              GCC is, I imagine, several orders of magnitude more deterministic than an LLM.

              • By hakunin 2026-02-07 16:00 · 1 reply

                It’s not _more_ deterministic. It’s deterministic, period. The LLMs we use today are simply not.

                • By philipswood 2026-02-07 17:57 · 1 reply

                  Build systems may be deterministic in the narrow sense you use, but significant extra effort is required to make them reproducible.

                  Engineering in the broader sense often deals with managing the outputs of variable systems to get known good outcomes to acceptable tolerances.

                  Edit: added second paragraph

                  • By hakunin 2026-02-07 19:32 · 1 reply

                    I'm not using a narrow sense. There is no elasticity here. See https://en.wikipedia.org/wiki/Deterministic_system

                    > significant extra effort is required to make them reproducible.

                    Zero extra effort is required. It is reproducible. The same input produces the same output. The "my machine" in "Works on my machine" is an example of input.

                    > Engineering in the broader sense often deals with managing the outputs of variable systems to get known good outcomes to acceptable tolerances.

                    You can have unreliable AIs building a thing, with some guidance and self-course-correction. What you can't have is outcomes also verified by unreliable AIs who may be prompt-injected to say "looks good". You can't do unreliable _everything_: planning, execution, verification.

                    If an AI decided to code an AI-bound implementation, then even tolerance verification could be completely out of whack. Your system could pass today and fail tomorrow. It's layers and layers of moving ground. You have to put the stake down somewhere. For software, I say it has to be code. Otherwise, AI shouldn't build software, it should replace it.

                    That said, you can build seemingly working things on moving ground, that bring value. It's a brave new world. We're yet to see if we're heading for net gain or net loss.

                    • By dvfjsdhgfv 2026-02-0721:581 reply

                      If we want to get really narrow, I'd say real determinism is possible only in abstract systems, to which you'd reply that's just my ignorance of all the factors involved and hence the incompleteness of the model. To which I'd point out the practical limitations involved with that. And for that reason, even though it is incorrect and I don't use it this way, I understand why some people use the quantifiers more/less with the term "deterministic", probably for lack of a better construct.

                      • By hakunin 2026-02-0722:481 reply

                        I don't think I'm being pedantic or narrow. Cosmic rays, power spikes, and falling cows can change the course of deterministic software. I'm saying that your "compiler" either has intentionally designed randomness (or "creativity") in it, or it doesn't. Not sure why we're acting like these are more or less deterministic. They are either deterministic or not inside normal operation of a computer.

                        • By philipswood 2026-02-088:411 reply

                          To be clear: I'm not engaging with your main point about whether LLMs are usable in software engineering or not.

                          I'm specifically addressing your use of the concept of determinism.

                          An LLM is a set of matrix multiplies and function applications. The only potentially non-deterministic step is selecting the next token from the final output and that can be done deterministically.

                          By your strict use of the definition they absolutely can be deterministic.

                          But that is not actually interesting for the point at hand. The real point has to do with reproducibility, understandability and tolerances.

                          3blue1brown has a really nice set of videos showing how the LLM machinery fits together.
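                          The point that token selection "can be done deterministically" is just greedy decoding. A toy sketch (illustrative only, not any specific model's API): at temperature 0 the argmax of the logits is taken, so the same logits always yield the same token, while any positive temperature reintroduces the sampled randomness being debated.

```python
import math
import random

def softmax(logits):
    """Numerically stable softmax over a list of floats."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def pick_token(logits, temperature=1.0, rng=None):
    """Pick the next token index: greedy (deterministic) at temperature 0,
    otherwise sampled from the temperature-scaled distribution."""
    if temperature == 0:
        # argmax: same logits in, same token out, every time
        return max(range(len(logits)), key=lambda i: logits[i])
    probs = softmax([x / temperature for x in logits])
    r = (rng or random).random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1
```

                          With temperature 0, repeated calls on the same logits always agree; with any positive temperature the answer depends on the random draw, which is the "slot machine" behavior discussed elsewhere in the thread.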

                          • By hakunin 2026-02-0916:30

                            > they absolutely can be deterministic.

                            They _can_ be deterministic, but they usually _aren't_.

                            That said, I just tried "make me a haiku" via Gemini 3 Flash with T=0 twice in different sessions, and both times it output the same haiku. It's possible that T=0 enables deterministic mode indeed, and in that case perhaps we can treat it like a compiler.

        • By AstroBen 2026-02-0716:241 reply

          That will never happen unless we figure out a far simpler way to prove the system does what it should. If you've ever had bugs crop up despite a full test suite, you know how incredibly hard this is to do

          LLMs can't read your mind. In the end they're always taking the english prompt and making a bunch of fill in the blank assumptions around it. This is inevitable if we're to get any productivity improvements out of them.

          Sometimes it's obvious and we can catch the assumptions we didn't want (the div isn't centered! fix it claude!) and sometimes you actually have to read and understand the code to see that it's not going to do what you want under important circumstances

          If you want a 100% perfect communication of the system in your mind, you should use a terse language built for it: that's called code. We'd just write the code instead

          • By exe34 2026-02-0717:07

            we can do both. we can write code for the parts where it matters and let the LLM code the parts that aren't as critical.

        • By dkersten 2026-02-0715:18

          People who really care about performance still do look at the assembly. Very few people write assembly anymore, a larger number do look at assembly every so often. It’s still a minority of people though.

          I guess it would be similar here: a small few people will hand write key parts of code, a larger group will inspect the code that’s generated, and a far larger group won’t do either. At least if AI goes the way that the “other side” says.

        • By Thanemate 2026-02-0715:58

          >I believe the argument from the other camp is that you don't need to understand the code anymore

          Then what stops anyone who can type in their native language from, ultimately, when LLMs are perfected, just ordering their own software instead of using anybody else's (speaking about native apps like video games, mobile, desktop, etc.)?

          Do they actually believe we'll need a bachelor's degree to prompt-program in a world where nobody cares about technical details, because the LLMs will be taking care of them? Actually, scratch that. Why would the companies who're pouring gorillions of dollars of investment even give access to such power in an affordable way?

          The deeper I look into the rabbit hole they think we're walking towards, the more issues I see.

        • By testuser312 2026-02-0715:081 reply

          At least for me, the game-changer was realizing I could (with the help of AI) write a detailed plan up front for exactly what the code would be, and then have the AI implement it in incremental steps.

          Gave me way more control/understanding over what the AI would do, and the ability to iterate on it before actually implementing.

          • By jbloggs777 2026-02-0716:48

            Indeed. This is very much the way I use it at work. Present an idea of a design, iterate on it, then make a task/todo list and work through the changes piecemeal, reviewing and committing as I go. I find pair design/discussion practical here too. I expect to see smaller teams working like this in the future.

            For small personal projects, it's more vibey... e.g. home automation native UIs & services for Mac & Windows, which I wouldn't otherwise start... more itches that can be scratched in my limited time.

        • By scrame 2026-02-0715:192 reply

          For quite a bit of software you would need to understand the assembly. Not everything is web services.

          • By verdverm 2026-02-0717:04

            I've only needed assembly once in more than 20 years of programming, and I'm not a webdev.

            It was during university, to get access to CPU counters for better instrumentation, like 15 years ago. Haven't needed it since.

          • By manofmanysmiles 2026-02-0716:331 reply

            I've found LLMs (since Opus 4.5) exceptionally good at reading and writing and debugging assembly.

            Give them gdb/lldb and have your mind blown!

            • By HarHarVeryFunny 2026-02-0717:431 reply

              Do you mean gdb batch mode (which I've heard of others using with LLMs), or the LLM using gdb interactively ?

              • By manofmanysmiles 2026-02-1120:19

                I wrote a wrapper python script debug.py that will run gdb as a subprocess, and then takes input from the args.

                Usage is somewhat like:

                $ debug.py start

                $ debug.py -c "break main"

                $ debug.py -c "continue"

                Cursor at least doesn't seem to like running interactive programs yet.
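                The commenter's actual script isn't shown; one plausible shape for such a wrapper (the FIFO path and the tail -f trick below are assumptions) is to keep a persistent gdb session reading from a named pipe, so each non-interactive debug.py call just appends a command:

```python
#!/usr/bin/env python3
"""Sketch of a debug.py-style gdb wrapper. The commenter's actual script
isn't shown; the FIFO path and the tail -f trick here are assumptions."""
import os
import subprocess
import sys

FIFO = "/tmp/debug_gdb_fifo"  # assumed location of the command pipe

def start(binary, fifo=FIFO):
    """Launch a persistent gdb session whose stdin is fed from a named pipe.

    `tail -f` keeps the pipe open between writes, so gdb doesn't exit
    when a single `debug.py -c ...` invocation finishes."""
    if not os.path.exists(fifo):
        os.mkfifo(fifo)
    return subprocess.Popen(
        "tail -f {} | gdb {}".format(fifo, binary),
        shell=True,
    )

def send(command, fifo=FIFO):
    """Append one gdb command to the pipe feeding the running session."""
    with open(fifo, "a") as f:
        f.write(command + "\n")

if __name__ == "__main__" and len(sys.argv) > 1:
    if sys.argv[1] == "start":
        start(sys.argv[2])
    elif sys.argv[1] == "-c":
        send(sys.argv[2])
```

                This mirrors the quoted usage: `debug.py start ./a.out` once, then `debug.py -c "break main"` and `debug.py -c "continue"` as separate, agent-friendly shell calls.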

      • By sowbug 2026-02-0716:341 reply

        That is what's hard about transitioning from coder to lead. A good coder makes full use of a single thread of execution. A good lead effectively handles the coordination of multiple threads. Different skills.

        An LLM coding assistant today is an erratic junior team member, but its destructive potential is nowhere near some of the junior human engineers I've worked with. So it's worth building the skills and processes to work with them productively. Today, Claude is a singular thing. In six months or a year, it'll be ten or a hundred threads working concurrently on dozens of parts of your project. Either you'll be comfortable coordinating them, or you'll nope out of there and remain an effective but solitary human coder.

        • By goobert 2026-02-0717:131 reply

          I use Conductor with git worktrees and will literally have 10 or 20 running at a time, getting pinged as they finish stuff for me to review, mostly knocking out small tickets and doing random POCs while I focus on bigger stuff. The bottleneck has literally become that the company doesn't have enough stuff to give me. It only really works, however, because I have a lot of context and understanding of the codebase. It's already here.
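          For anyone unfamiliar with the setup being described: git worktrees give each agent its own checkout on its own branch, so parallel agents can't trample each other's working directories. A minimal sketch (branch and directory names are illustrative; the demo runs in a throwaway repo):

```shell
# Demo runs in a throwaway repo; in practice you'd run `git worktree add`
# from your real repo with task-named sibling directories.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=dev -c user.email=dev@example.com commit --allow-empty -qm init

# one isolated checkout per agent task, each on its own branch
git worktree add "${repo}-login" -b agent-login-fix
git worktree add "${repo}-poc" -b agent-poc-cache

# each agent works in its own directory; review/merge as they finish
git worktree list

# clean up a finished or abandoned task
git worktree remove "${repo}-poc"
git branch -D agent-poc-cache
```

          Tools like Conductor automate exactly this add/review/remove loop, one worktree per agent session.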

          • By SamPatt 2026-02-0717:52

            I found coordinating skill usage across worktrees quite annoying; how are you managing this?

      • By karmasimida 2026-02-0719:43

        Then you are using it the wrong way

        Driving is a skill that needs to be learnt, same with working with agents.

      • By andoando 2026-02-0717:56

        Are you giving it huge todos in one prompt or working modularly?

        Claude set up account creation / login with SSO, OTP and email notifications in like 5 minutes and told me exactly what to do on the provider side. There's no way that wouldn't have taken me a few hours to figure out

        There is no way it's not faster at a large breadth of the work, unless maybe you're a fanatic who reviews and nitpicks every line of code to the extreme

      • By wtetzner 2026-02-0714:31

        > I would have finished the project in the same amount of time

        Probably less time, because you understood the details better.

      • By verdverm 2026-02-0716:56

        Have you added agents.md files?

        You have to do more than prompts to get the more impressive results

      • By throwaw12 2026-02-0715:54

        skill issue.

        Sorry for being blunt, but if you have tried once or twice and came to this conclusion, it is definitely a skill issue. I never got comfortable writing three lines of Java, Python, Go or any other language right away either; it took me hundreds of hours of doing nonsense, failing miserably, and finding out that I was building things which already exist in the std lib.

    • By wtetzner 2026-02-0714:304 reply

      > It's strange to me when articles like this describe the 'pain of writing code'.

      I find it strange to compare the comment sections for AI articles with those about vim/emacs etc.

      In the vim/emacs comments, people always state that typing in code hardly takes any time, and that thinking hard is where they spend their time, so it's not worth learning to type fast. Then in the AI comments, they say that with AI writing the code, they are freed up to spend more time thinking and less time coding. If writing the code was the easy part in the first place, and wasn't even worth learning to type faster, then how much value can AI be adding?

      Now, these might be disjoint sets of people, but I suspect (with no evidence of course) there's a fairly large overlap between them.

      • By falkensmaize 2026-02-0714:57

        What I never understand is that people seem to think the conception of the idea and the syntactical nitty gritty of the code are completely independent domains. When I think about “how software works” I am at some level thinking about how the code works too, not just high level architecture. So if I no longer concern myself with the code, I really lose a lot of understanding about how the software works too.

      • By geetee 2026-02-0715:18

        Writing the code is where I discover the complexity I missed while planning. I don't truly understand my creation until I've gone through a few iterations of this. Maybe I'm just bad at planning.

      • By thwarted 2026-02-0715:371 reply

        At first I thought you were referring to the debates over using vim or using emacs, but I think you mean to refer to the discussions about learning to use/switching to powerful editors like vim or emacs. If you learn and use a sharp, powerful editor and learn to type fast, the "burden" of editing and typing goes away.

        • By notpachet 2026-02-0717:06

          I wonder how many vibecoding/automatic-programming zealots are fluent in a modal editor.

    • By simonw 2026-02-0715:536 reply

      Have you really never found writing code painful?

      CI is failing. It passed yesterday. Is there a flaky API being called somewhere? Did a recent commit introduce a breaking change? Maybe one of my third-party dependencies shipped a breaking change?

      I was going to work on new code, but now I have to spend between 5 minutes and an hour or more (impossible to predict) solving this new frustration that just cropped up.

      I love building things and solving new problems. I'd rather not have that time stolen from me by tedious issues like this... especially now I can outsource the CI debugging to an agent.

      These days if something flakes out in CI I point Claude Code at it and 90% of the time I have the solution a couple of minutes later.

      • By sevensor 2026-02-0716:501 reply

        What you’ve described is very much not writing code though. It’s the tedious and unpleasant outcome of having a flaky or under resourced CI setup or pulling in a misbehaving dependency. Neither of those is typing code per se. I don’t think it’s fair to conflate that kind of problem with the creative work involved in implementation itself.

        “Writing code is boring and tedious” says more about the speaker than it does about programming.

        • By therealdrag0 2026-02-081:15

          So it helps with all of it. I've had AI fix flaky builds multiple times in one shot. Big time saver.

      • By g947o 2026-02-0811:311 reply

        Not everyone thinks this way. For sure, I don't enjoy every repetitive/tedious task or having to put out fires under time pressure, but I happen to be someone who enjoys solving problems. Many times, I helped others find and understand the root causes of bugs after they spent hours cluelessly debugging. (And often LLMs are completely lost as well, providing terrible suggestions or fixes.) In an extreme case, I helped fix a bug an entire team of several people had failed to address. And I always learn from such experiences, try to come up with solutions that would prevent the bug from happening in the first place, and think through the whole thing to be even more efficient at debugging.

        The point is that LLMs can't always do all of this. They don't necessarily help you think about the root causes or address the human part of them, and they don't help you build technical skills that you can use in the future. They fix it and it's done (possibly with a terrible short-term solution), unless the human wants to dive deep. For sure this happens all the time already, in fact probably more often than not, but LLMs are not going to help the situation.

        • By simonw 2026-02-0813:19

          I enjoy the kind of problem solving you are describing there too. That's why I like being able to point LLMs at them first - if they can find the fix I get to save a bunch of time and spend it on more interesting problems, and if they can't find the fix then I know I'm going to have a great time digging into a really gnarly problem myself!

      • By enraged_camel 2026-02-0716:341 reply

        Incidentally, I've been using AI to deal with the weird bugs, cryptic errors and generally horrendous complexities of a framework we've been using at work (Elixir's Ash). It's really nice to no longer have to read badly organized docs, search the Internet for similar problems and ask around in the developers' Slack/Discord.

        • By simonw 2026-02-0716:44

          So many of my coding agent sessions start with "clone <github URL to relevant dependency> into /tmp for reference" - it's such a great pattern, because incomplete or inaccurate documentation matters way less if the agent can dispatch a sub-agent to explore the codebase any time it needs to answer an obscure question.

      • By verdverm 2026-02-0717:07

        You got a solution; I've seen them recommend some pretty terrible bug fixes, especially in the CI realm, because they get rather clueless as the perspective gets higher or broader

      • By throwaw12 2026-02-0715:58

        > I point Claude Code at it and 90% of the time I have the solution a couple of minutes later.

        Same experience. I don't know why people keep saying code was the easy part; sure, only when you are writing boilerplate that is easy and where expectations are clear.

        I agree code is easier than some other parts, but not the easiest: the industry employed millions of us to write that "easy" thing.

        When working on large codebases or building something in the flow, I just don't want to read all the OAuth2 scopes Google requires me to obtain, my experience was never: "now I will integrate Gmail, let me do gmail.FetchEmails(), cool it works, on to the next thing"

      • By ratatatatata 2026-02-0717:55

        [dead]

    • By everforward 2026-02-0715:10

      I was talking to a coworker that really likes AI tooling and it came up that they feel stronger reading unfamiliar code than writing code.

      I wonder how much it comes down to that divide. I also wonder how true that is, or if they’re just more trusting that the function does what its name implies the way they think it should.

      I suspect you, like me, feel more comfortable with code we’ve written than having to review totally foreign code. The rate limit is in the high level design, not in how fast I can throw code at a file.

      It might be a difference in cognition, or maybe we just have a greater need to know precisely how something works instead of accepting a hand wavey “it appears to work, which is good enough”.

    • By jesse_dot_id 2026-02-0715:061 reply

      People are different. Some are painters and some are sculptors. Andy Warhol was a master draftsman but he didn't get famous off of his drawings. He got famous off of screen printing other people's art that he often didn't own. He just pioneered the technique and because it was new, people got excited, and today he's widely considered to be a generational artistic genius.

      I tend to believe that, in all things, the quality of the output and how it is received is what matters and not the process that leads to producing the output.

      If you use an LLM assisted workflow to write something that a lot of people love, then you have created art and you are a great artist. It's probable that if Tolkien was born in our time instead of his, he'd be using modern tools while still creating great art, because his creative mind and his work ethic are the most important factors in the creative process.

      I'm not of the opinion that any LLM will ever provide quality that comes close to a master work by itself, but I do think they will be valuable tools for a lot of creative people in the grueling and unrewarding "just make it exist first" stage of the creative process, while genius will still shine as it always has in the "you can make it good later" stage.

      • By thwarted 2026-02-0715:251 reply

        > I tend to believe that, in all things, the quality of the output and how it is received is what matters and not the process that leads to producing the output.

        Whether the ends justify the means is a well-worn disagreement/debate, and I think the only solid conclusion we've come to as a society is that it depends.

        • By jack_pp 2026-02-0716:031 reply

          That's a moral debate, not suitable for this discussion.

          The discussion at hand is about purity and efficiency. Some people are process oriented, perfectionists, purists that take great pride in how they made something. Even if the thing they made isn't useful at all to anyone except to stroke their own ego.

          Others are more practical and see a tool as a tool, not every hammer you make needs to be beautiful and made from the best materials money can buy.

          Depending on the context either approach can be correct. For some things being a detail oriented perfectionist is good. Things like a web framework or a programming language or an OS. But for most things, just being practical and finding a cheap and clever way to get to where you want to go will outperform most over engineering.

          • By thwarted 2026-02-0719:33

            It sure is myopic to think that the debate over whether the ends justify the means is solely a moral consideration, and then to literally list cases where weighing the value of the means against the ends is a judgment call that results in "it depends".

    • By jarjoura 2026-02-0720:08

      Your comment is spot on, but the nuance people who are still new to these LLMs don't yet see is the real reason "he'd be better off just writing the damned book instead."

      1. That prompt is always a slot machine. It's never 100% deterministic and that's why we haven't seen an explosion of claude skills. When it works for you, and it's magical, everyone is wowed. However, there is a set of users who then bang their head, wondering why their identical attempt is garbage compared to their coworker. "It must be a skills issue." No, it's just the LLM being an LLM.

      2. Coding agents are hyper-localized and refuse to consider the larger project when they solve something. So you end up with these "paper cuts": duplicated functions, or classes that do one thing differently. Now the LLM in future runs has to decide which of these classes or functions to use, and you end up with two competing implementations. Future you will bang your head trying to figure out how to combine them.

      3. The "voice" of the code it outputs is trained on public repositories so if your internal codebase is doing something unique, the LLM will consistently pick the voice it's trained on, forcing you to rewrite behind it to match your internal code.

      4. It has no chill. If I set any "important" rules in the prompt, it sometimes adheres to them at the expense of doing the "right" thing in its changes. Or it completely ignores them and does its own thing, exactly when it would have been the perfect time to follow the rule. This is your point: if I had just written the code myself, it would have taken fewer words than any "perfect" prompt needed to get the same code change.

    • By kmac_ 2026-02-0715:511 reply

      Current models won't write anything new, they are "just" great at matching, qualifying, and copying patterns. They bring a lot of value right now, but there is no creativity.

      • By throwaw12 2026-02-0716:023 reply

        95% of the industry wasn't creating creative value, it was repetitive.

        * auth + RBAC, known problem, just needs integration

        * 3rd party integration, they have API, known problem, just needs integration

        * make webpage responsive, millions of CSS lines

        * even video games: most engines are already written, just add your character and call a couple of APIs to move them in 3D space

        • By bilbo0s 2026-02-0716:17

          So true.

          You can only complain about creativity if you were actually being creative. 99.99999% of the industry was not.

          But sure, for the 0.000001% of the industry coming up with new deep learning algorithms instead of just TF/PyTorch monkey-ing, maybe the LLMs won’t help as much as a good foundation in some pretty esoteric mathematics.

        • By kmac_ 2026-02-0716:16

          That's why they bring a lot of value. Plus, new models and methods enable solutions that weren't available a decade ago.

        • By GeoAtreides 2026-02-0718:42

          Not sure what 95% of the industry created, but it did bring liquidity to my bank account and put a roof over my head.

          now what?

    • By Aperocky 2026-02-0715:22

      Tolkien's book is art; programs are supposed to do something.

      Now, some programs may be considered art (e.g. code golf) or considered art by their creator. I consider my programs and code only the means to get the computer to do what I want, and there are also easy ways to ensure that they do what we want.

      > Frodo and Sam having an argument over the trustworthiness of Gollum. Frodo should be defending Gollum and Sam should be on his side.

      Is exactly what programs are. Not the minutiae of the language within.

    • By alainrk 2026-02-0714:294 reply

      I agree with your point. My concern is more about the tedious aspects. You could argue that tedium is part of what makes the craft valuable, and there's truth to that. But it comes down to trade-offs, what could I accomplish with that saved time, and would I get more value from those other pursuits?

      • By estimator7292 2026-02-0714:31

        If you're gonna take this tack, at least be honest with yourself. Does your boss get more value out of you? You aren't going to get a kickback for being more productive, but your boss sure will.

      • By milowata 2026-02-0714:561 reply

        I had this moment recently when implementing Facebook OAuth. I don't need to spend mental cycles figuring that out, doing the back and forth with their API, pulling my hair out at their docs, etc. I just want it to work so I can build my app. AI just did that part for me and I could move on.

        • By normie3000 2026-02-0715:49

          Integrating auth code is probably a good example of code you want to understand, rather than just seeing that it appears to work.

      • By marginalia_nu 2026-02-0715:301 reply

        I honestly think the stuff AI is really good at is the stuff around the programming that keeps you from the actual programming.

        Take a tool like Gradle. A bigger pain in the ass than using an actual cactus as a desk chair. It has a staggering rate of syntax and feature churn with every version upgrade, sprawling documentation that is clearly written by space aliens, and every problem is completely ungoogleable, as every single release does things differently and no advice stays valid for more than 25 minutes.

        It's a comically torturous DevEx. You can literally spend days trying to get your code to compile again, and not a second of that time will be put toward anything productive. Sheer frustration. Just tears. Mad laughter. Rocking back and forth.

        "Hey Claude, I've upgraded to this week's Gradle and now I'm getting this error I wasn't getting with last week's version, what could be going wrong?" makes all that go away in 10 minutes.

        • By normie3000 2026-02-0715:47

          I'm glad to hear the gradle experience hasn't changed in the decade since I started avoiding it.

      • By wtetzner 2026-02-0714:322 reply

        I think it's still an open question whether it's actually a net savings of time.

        • By chasd00 2026-02-0714:411 reply

          One thing I’ve noticed is that effort may be saved but not as much time. The agent can certainly type faster than me but I have to sit there and watch it work and then check its work when done. There’s certainly some time savings but not what you think.

          • By FeteCommuniste 2026-02-0714:471 reply

            Another thing I've noticed is that using AI, I'm less likely to give existing code another look to see if there's already something in it that does what I need. It's so simple to get the AI to spin up a new class / method that gets close to what I want, so sometimes I end up "giving orders first, asking questions later" and only later realizing that I've duplicated functionality.

            • By kaydub 2026-02-0721:34

              Always have another agent around that is just reviewing code. "Look for duplicate code, code smells, bad architecture, future scalability problems..."

              I usually keep one agent/prompt/context ripping through new features and another 1-3 agents/prompts/contexts reviewing, testing, planning.

        • By strange_quark 2026-02-0715:58

          The absence of evidence is evidence in its own way. I don’t understand how there haven’t been more studies on this yet. The one from last year that showed AI made people think they were faster but were actually slower gets cited a lot, and I know that was a small study with older tools, but it’s amazing that that hasn’t been repeated. Or maybe it has and we don’t know because the results got buried.

    • By dkersten 2026-02-0715:23

      “ What’s gone is the tearing, exhausting manual labour of typing every single line of code.”

      Yeah, this was always the easy part.

    • By bilekas 2026-02-0716:301 reply

      Writing the code should be the easy part, and one of the smaller time sinks actually. The fruits of the labour are in the planning, the design, the architecture and the requirements that you want to achieve now and potentially in the future. These all require a serious amount of effort and foresight to plan out.

      When you're ready, maybe you've done some POCs in areas you were unsure about, maybe some good skeleton work to see a happy path draw the shadow of a solution; iterate over your plans and then put some real "code"/foundation in place.

      It's a beautiful process. Starting out, I used to just jump deep into a project code-first and hit that workaround button one too many times; it's far more expensive, we all know that.

      • By kaydub 2026-02-0721:311 reply

        So waterfall sdlc...

        • By bilekas 2026-02-0721:39

          If the shoe fits your work flow, call it what you want. Change it how you like for all I care. But coding shouldn't be the majority of your time.

    • By bdcravens 2026-02-0716:29

      I don't find writing code painful, but I do find it tedious. The amount of time wasted on boilerplate keeps me from getting to the good stuff. LLMs let me speed run through all of that.

      To take it back to your example, let's imagine Tolkien spending a ton of time setting up his typewriter, making sure he had his correction tape handy, verifying his spelling and correcting mistakes, ensuring his tab stops were set up to his writing standard, checking for punctuation marks, etc. Now imagine eliminating all that crap so he can focus on the artistic nature of the dialogue.

    • By mycall 2026-02-0714:521 reply

      Isn't that what Tolkien did in his head? Write something, learn what he liked/didn't like then revise the words? Rinse/repeat. Same process here.

      • By irishcoffee 2026-02-0715:421 reply

        If Tolkien had not lived an entire life, fought in a war, been buddies with other authors, and also been a decent writer, the story doesn’t exist. And an LLM won’t come up with it.

        An LLM isn’t coming up with the eye of Sauron, or the entire backstory of the ring, or gollum, etc etc

        The LLM can’t know Tolkien had a whole universe built in his head that he worked for decades to get on to paper.

        I’m so tired of this whole “an LLM just does what humans already do!” And then conflating that with “fuck all this LLM slop!”

        • By mycall 2026-02-0818:27

          In the age of OpenClaw and agentic processes, the LLM doesn't need to dream up everything by itself. There are tons of tools it can use. Using those tools, the LLM can generate a random world, write about it, and score it too -- with enough iterations, a best seller could pop out.

    • By karel-3d 2026-02-0718:361 reply

      Sometimes you are not writing Lord of the Rings.

      Sometimes you are writing marketing copy for a new Nissan that's basically the same as last year's Nissan, yet you need to sell it somehow. Nobody will REALLY read it for more than 2 seconds and your words will be immediately forgotten. Maybe some AI is good then.

      • By g947o 2026-02-0811:42

        Copy & paste will do the job as well. Just remember to update the year number with a tool we have had for decades named "find/replace". Since nobody reads it, nobody notices it's almost the same thing, and there is no chance you get fired because the LLM made things up that you didn't review/catch.

    • By bufordtwain 2026-02-07 16:40

      I didn't fully realize how much pain there was until I started delegating the coding to AI. It's very freeing. Unfortunately I think this will soon lead to mass layoffs.

    • By n4r9 2026-02-07 14:23 · 1 reply

      Pain can mean tedium rather than intellectual challenge.

      • By wtetzner 2026-02-07 14:32 · 3 replies

        I really struggle to understand how people can find coding more tedious than prompting. To each their own I guess.

        • By TuringTest 2026-02-07 15:04 · 1 reply

          I can only speak for myself but for me, it's all about the syntax. I am terrible at recalling the exact name of all the functions in a library or parameters in an API, which really slows me down when writing code. I've also explored all kinds of programming languages in different paradigms, which makes it hard to recall the exact syntax of operators (is comparison '=' or '==' in this language? Comments are // or /*? How many parameters does this function take, and in what order...) or control structures. But I'm good at high level programming concepts, so it's easy to say what I want in technical language and let the LLM find the exact syntax and command names for me.

          I guess if you specialise in maintaining a code base with a single language and a fixed set of libraries then it becomes easier to remember all the details, but for me it will always be less effort to just search the names for whatever tools I want to include in a program at any point.

          • By gertlex 2026-02-07 15:30

            I agree with a bunch of this (I'm almost exclusively doing Python and bash; bash is the one I can never remember more than the basics of). I will give the caveat that I historically haven't made use of fancy IDEs with easy lookup of function names, so I would semi-often be fixing "ugh, I got the function name wrong" mistakes.

            Similar to how you outlined multi-language vs specialist, I wonder if "full stack" vs "niche" work unspokenly underlies some of the camps of "I just trust the AI" vs "it's not saving me any time".

        • By dgacmu 2026-02-07 14:44 · 1 reply

          Some code is fun and some sucks?

          There's a joke that's not entirely a joke that the job of a Google SWE is converting from one protobuf to another. That's generally not very fun code, IMO (which may differ from your opinion and that's why they're opinions!). Otoh, figuring out and writing some interesting logic catches my brain in a way that dealing with formats and interoperability stuff doesn't usually.

          We're all different, but we all probably have things we like more than others.

          • By wtetzner 2026-02-07 14:48 · 1 reply

            I mean, I agree if it's really just "machine translate this code to use the approved method of doing this thing". That seems like a perfect use case for AI. Though one would think Google would already have extensive code mod infrastructure for that kind of thing.

            But those aren't the stories you hear about with people coding with AI, which is what prompted my response.

            • By dgacmu 2026-02-07 15:10

              They do and I think a lot of that is LLM'd these days, though that's just what I hear third-hand.

              I do agree that this:

              > What’s gone is the tearing, exhausting manual labour of typing every single line of code.

              seems more than a little overblown. But I do sympathize with not feeling motivated to write a lot of glue and boilerplate, and that "meh" often derails me on personal projects where it's just my internal motivation competing against my internal de-motivation. LLMs have been really good there, especially since many of those are cases where only I will run or deal with the code and it won't be exposed to the innertubes.

              Maybe the author can't touch type, but that's a separate problem with its own solution. :)

        • By kaydub 2026-02-07 21:41 · 1 reply

          How long have you been coding? How many languages and frameworks have you worked with? Where has your professional focus been?

          I've written professional code in production for the past 15+ years in VB, C# (MVC2/MVC3 + Razor), PHP (Yii, Yii2, Symfony), Perl, Python (Flask, CherryPy), Java (Spring MVC, Spring Boot, JSF, J2EE), Golang, Rust, Ruby. I've worked on build/CI pipelines from Jenkins, CircleCI, GitHub, GitLab, TeamCity, etc. I've had to deploy/manage infrastructure from bare metal to the cloud with Ansible, Puppet, SaltStack, Terraform, CloudFormation. I've had to run on MySQL, Postgres, MariaDB, SQL Server and use ActiveMQ, RabbitMQ, Kafka, SQS, SNS, MSK, Kinesis (of all flavors). I could literally keep going and going and going.

          I'm tired. It's way easier to prompt than keep track of all this shit at this point. I don't need to know how to implement $feature or $tool in each and every framework, I'll let the machines worry about that.

          • By wtetzner 2026-02-07 21:49 · 1 reply

            I've been writing professional code for 20 years at this point, using many languages, libraries, frameworks, etc. But I certainly don't use them all at the same time.

            This also just feels like we're solving the wrong problem. Using AI doesn't fix any of it, it just makes it easier to make the problem worse faster.

            • By kaydub 2026-02-07 21:56 · 1 reply

              You haven't worked somewhere where you're working on multiple different microservices each written by different teams in different languages?

              • By n4r9 2026-02-07 23:08 · 1 reply

                That absolutely sounds like a case where AI will only plaster over the problem and eventually make it worse.

                • By kaydub 2026-02-08 3:19 · 1 reply

                  I don't see how...

                  • By n4r9 2026-02-09 9:04

                    AI can hack features onto needlessly complex architectures well beyond what humans will put up with. PMs and upper management tend to be laser-focused on short-term gains and incentivize this behaviour. So when you eventually need to step back and consolidate/simplify, it's much more difficult to do so, even with AI assistance.

    • By franze 2026-02-07 14:58 · 2 replies

      Claude Opus 4.6:

      “He’s a liar and a sneak, Mr. Frodo, and I’ll say it plain — he’d slit our throats in our sleep if he thought he could get away with it,” Sam spat, glaring at the hunched figure scrabbling over the stones ahead. “Every word out of that foul mouth is poison dressed up as helpfulness, and I’m sick of pretending otherwise.” Frodo stopped walking and turned sharply, his eyes flashing with an intensity that made Sam take half a step back. “Enough, Sam. I won’t hear it again. I have decided. Sméagol is our guide and he is under my protection — that is the end of it.” Sam’s face reddened. “Protection! You’re protecting the very thing that wants to destroy you! He doesn’t care about you, Mr. Frodo. You’re nothing to him but the hand that carries what he wants!” But Frodo’s expression had hardened into something almost unrecognizable, a cold certainty that brooked no argument. “You don’t understand what this Ring does to a soul, Sam. You can’t understand it. I feel it every moment of every day, and if I say there is still something worth saving in that creature, then you will trust my judgment or you will walk behind me in silence. Those are your choices.” Sam opened his mouth, then closed it, stung as if he’d been struck. He fell back a pace, blinking hard, and said nothing more — though the look he fixed on Gollum’s retreating back was one of pure, undisguised loathing.

      • By Calavar 2026-02-07 15:23

        Claude already knows who the characters Frodo, Sam, and Gollum are, what their respective character traits are, and how they interacted with each other. This isn't the same as writing something new.

      • By dmbche 2026-02-07 16:24 · 1 reply

        Do you find this interesting to make and read?

        • By franze 2026-02-07 19:01

          I am probably the only person who ever willingly created a complete AI-generated book and willingly read it front to cover. Last summer. I called it "Claude Code: A Primer", a Claude Code origin story. Good book, completely made up.

          The technology is here, let's explore it. And when somebody states something in an HN comment, let's just try it. Imperfect method, but better than only talking about AI hypothetically.

          Will AI write better books than any written until now? More insight than ever created before? Would we read them? Is it even possible? If not, why not? What is missing?

          Those are the questions I find fascinating. I for one want to find out. With experimentation, not via preconceived beliefs.

    • By echelon 2026-02-07 14:24 · 7 replies

      Please forgive me for being blunt, I want to emphasize how much this strikes me.

      Your post feels like the last generation lamenting the new generation. Why can't we just use radios and slide rules?

      If you've ever enjoyed the sci-fi genre, do you think the people in those stories are writing C and JavaScript?

      There's so much plumbing and refactoring bullshit in writing code. I've written years of five nines high SLA code that moves billions of dollars daily. I've had my excitement setting up dev tools and configuring vim a million ways. I want starships now.

      I want to see the future unfold during my career, not just have it be incrementalism until I retire.

      I want robots walking around in my house, doing my chores. I want a holodeck. I want to be able to make art and music and movies and games. I will not be content with twenty more years of cellphone upgrades.

      God, just the thought of another ten years of the same is killing me. It's so fucking mundane.

      The future is exciting.

      Bring it.

      • By abcde666777 2026-02-07 14:41 · 1 reply

        I think my take on the matter comes from being a games developer. I work on a lot of code for which agentic programming is less than ideal - code which solves novel problems and sometimes requires a lot of precise performance tuning, and/or often has other architectural constraints.

        I don't see agentic programming coming to take my lunch any time soon.

        What I do see it threatening is repetitive quasi carbon copy development work of the kind you've mentioned - like building web applications.

        Nothing wrong with using these tools to deal with that, but I do think that a lot of the folks from those domains lack experience with heavier work, and falsely extrapolate the impact it's having within their domain to be applicable across the board.

        • By kaydub 2026-02-07 21:53

          I knew nothing about game development a few months ago. Now I've built a simple Godot game. I'm sure the game is all pretty common (a simple 2D naval combat game), but it's still impressive that a couple of claude/gemini/codex CLI sessions spit out a working game. (Admittedly, I'm not a professional artist, so THAT part has been painful, since I can't rely on generative AI to do it; I have to do it myself with Aseprite. But maybe a professional artist would know HOW to prompt for the artwork.)

          Agentic programming still needs devs/engineers. It's only going to take your lunch if you let it. And by that, I mean the FUD and complete refusal to give good faith attempts to use the ai/llm tools.

      • By wtetzner 2026-02-07 14:35 · 1 reply

        > Your post feels like the last generation lamenting the new generation.

        > The future is exciting.

        Not the GP, but I honestly wanted to be excited about LLMs. And they do have good uses. But you quickly start to see the cracks in them, and they just aren't nearly as exciting as I thought they'd be. And a lot of the coding workflows people are using just don't seem that productive or valuable to me. AI just isn't solving the hard problems in software development. Maybe it will some day.

        • By kaydub 2026-02-07 21:54

          What hard problems are you working on?

      • By cruffle_duffle 2026-02-07 15:56

        > If you've ever enjoyed the sci-fi genre, do you think the people in those stories are writing C and JavaScript?

        To go off the deep end… I actually think this LLM assistant stuff is a precondition to space exploration. I can see the need for an offline compressed corpus of all human knowledge that can do tasks and augment the humans aboard the ship. You'll need it because the latency back to Earth is a killer even for a "simple" interplanetary trip to Mars: that's 4 to 24 minutes round trip! Hell, even the moon has enough latency to be annoying.

        Granted, right now the hardware requirements and rapid evolution make it infeasible to really "install it" on some beefcake system, but I'm almost positive the general form of Moore's law will kick in and we'll have SOTA models on our phones in no time. These things will be pervasive, and we will rely on them heavily while out in space and on other planets for every conceivable random task.

        They’ll have to function reliably offline (no web search), which means they probably need to be absolutely massive models. We’ll have to find ways to selectively compress knowledge. For example, we might allocate more of the model weights to STEM topics and perhaps less to, I dunno, the fall of the Roman Empire, Greek gods, or the career trajectory of Pauly Shore. But perhaps not, because who knows: maybe a deep familiarity with Bio-Dome is what saves the colony on Kepler-452b.

      • By objclxt 2026-02-07 14:34

        > Your post feels like the last generation lamenting the new generation [...] There's so much plumbing and refactoring bullshit in writing code [...] I've had my excitement

        I don't read the OP as saying that: to me they're saying you're still going to have plumbing and bullshit, it's just your plumbing and bullshit is now going to be in prompt engineering and/or specifications, rather than the code itself.

      • By creata 2026-02-07 14:37 · 1 reply

        > I want to be able to make art and music and movies and games.

        Then make them. What's stopping you?

        • By echelon 2026-02-07 14:42 · 1 reply

          I want to live forever and set foot on distant planets in other galaxies.

          Got a prescription for that too?

          I've made films for fifteen years. I hate the process.

          Every one of my friends and colleagues that went to film school found out quickly that their dreams would wither and die on the vine due to the pyramid nature of studio capital allocation and expenditure. Not a lot of high autonomy in that world. Much of it comes with nepotism.

          There are so many things I wish to do with technology that I can't because of how much time and effort and energy and money are required.

          I wish I could magic together a P2P protocol that replaced centralized social media. I wish I could build a completely open source GPU driver stack. I wish I could make Rust compile faster or create an open alternative to AWS or GCP. I wish for so many things, but I'm not Fabrice Bellard.

          I don't want to constrain people to the shitty status quo. Because the status quo is shitty. I want the next generation to have better than the bullshit we put up with. If they have to suffer like we suffered, we failed.

          I want the future to climb out of the pit we're in and touch the stars.

            • By nradov 2026-02-07 15:44

            Computing technology always becomes cheaper and more powerful over time. But it's a slow process. The rate of improvement for LLMs is already decreasing. You will die of old age before the technology that you seem to be looking for arrives.

      • By estimator7292 2026-02-07 14:32 · 1 reply

        Burn the planet to the ground because your life is boring. Extremely mature stance you've got there

        • By echelon 2026-02-07 14:33

          This is 1960s-era anti-nuclear all over again.

          People on Reddit posting AI art are getting death threats. It's absurd.

      • By plagiarist 2026-02-07 16:54

        Oh, no, you're imagining the wrong subgenre of sci-fi. These robots are actually owned and operated by billionaires.

HackerNews