Code is cheap. Show me the talk

2026-01-30 12:05 · nadh.in

Linus Torvalds once said, 'Talk is cheap. Show me the code'. That is no longer the case.

TL;DR: Software development, as it has been done for decades, is over. LLM coding tools have changed it fundamentally, for better or worse.

“Talk is cheap. Show me the code.” — Linus Torvalds, August 2000

When Linus Torvalds, the creator of Linux, made this quip in response to a claim about a complex piece of programming in the Linux kernel, [1] I was an oblivious, gangly, fledgling teenage n00b coder learning by copy-pasting open source Perl and VB snippets over dialup internet.

The quip has since become an adage in the software world. The gist of it back then was that it was easy to talk about all the software one would like to build, or that could hypothetically be built, but unless one actually put in the effort and proved it, talk wasn’t of much value. Writing and proving good software was a high-effort, high-cost, high-skill endeavour.

Even when armed with a crystal clear software development plan and the exact know-how to implement it, any sufficiently complex piece of programming is high-effort, tedious, and time consuming to actually write and get to a form where it is functional, reliable, and at least reasonably future-ready. In the process of developing software, any number of unforeseen complexities and gotchas can arise with many unresolvable trade-offs,[2] both technical and external. It is not uncommon for software architectures to change mid-way multiple times. The cost of just trying things out is so prohibitively high that the significant majority of ideas are simply never tried out.

After all, the real bottleneck is good old physical and biological human constraints—cognitive bandwidth, personal time and resources, and most importantly, the biological cost and constraints of having to sit for indefinite periods, writing code with one’s own hands line by line even if it is all in one’s head, while juggling and context-switching through the mental map of large systems. And if it is more than one individual, a whole host of interpersonal coordination and communication dynamics come into play. It is thus very difficult to prototype and try out not just grand ideas, but even reasonably simple ones. Like many of us, I have generally appended most ideas to a bottomless wishlist, where they very likely stay forever. That’s how I have programmed and written software on a regular basis and enjoyed it—from hobby stuff to critical systems that millions of people depend on—for about 25 years.

All that has now been thrown out of the window, of course, for better or worse.

Coming back to Linus, fast-forward 25 years, when he merges a chunk of AI-generated code into his toy project and comments “Is this much better than I could do by hand? Sure is.”, [3] I, no longer the fledgling n00b, but someone with decades of software development scars and calluses (both physical and metaphorical), am able to grasp its implications. Not only that, now with a sizeable amount of first-hand experience with LLM-assisted coding, I am compelled to say, software development, as it has been done for decades, is over. Along with that, many other things are too.

I say that with the full awareness that it smacks of Fukuyama’s The End of History, [4] but I will reiterate:

Software development, as it has been done for decades, is over.

I

I was lucky to be in the transitionary Goldilocks era to witness and to partake in the breakneck evolution of the internet and software landscape—dialup to DSL to gigabit; Basic, Visual Basic 4/5/6 and Delphi; rise and fall of cgi-bin; Altavista to Google; XMLHttpRequest kicking off Web 2.0; rise and fall of Flash; death of IE and the rise of Chrome; WAP to Symbian to Android and smartphone apps; the demise of SourceForge and the massive proliferation and success of FOSS (Free and Open-source Software); git and GitHub; rise of SaaS; ExpertsExchange to StackOverflow; the growth of the Linux world; sysadmin to devops to whateverOps; the ominous birthing of Node.js and MongoDB in the same year; microservices; the explosion of VC-funded software “unicorns”; crypto and web3 shams; the rapid darkening of patterns; widespread enshittification and monetisation of privacy, attention, and dignity; and the monumental bloating of software that has since become the norm.

All throughout this, I have been writing, maintaining, and deploying software both as a professional developer and as a FOSS hobbyist dabbling in a gazillion languages, frameworks, tools, and methodologies. From thinking that “indenting code is lame” (cringe) as a teen, through copy-pasting, CVS, svn, and git, and the space-vs-tab battles, to maturing to “whatever floats your boat” while still regularly compressing PNGs to shave off a few KBs, I have been a dabbler, a dilettante, and an addict, someone who has unconditionally enjoyed writing code and developing software.

But now? How I develop software now is not how I have done it all these years, all the right, wrong, good, bad, easy and hard bits combined. With the advent of code-assisting LLMs, it has been completely flipped on its head, and I don’t think there is any going back.

Now, that is some “Tears in rain”-esque [5] monologue right there.

Code

Barring a bunch of obvious objective 101s, there is no universal measure of what makes a codebase good or great. Styles, idioms, patterns, architectures all vary greatly. Even objectively provable technical choices are subject to trade-offs that defy consensus. For a software developer like me, historically, there have been a few rule-of-thumb indicators for quick evaluation of software. When I evaluate a FOSS project, I look at a bunch of factors, all a mix of objective and subjective, weighted differently under different contexts—the project’s age; is the commit activity overly sparse or frantic; frameworks and dependencies; is code consistently organised and commented without being over-abstracted; is there a community around it; are maintainers responsive; can I actually get it up and running quickly from a clear README; the quality and depth of its documentation …

Many of these rule-of-thumb signals give a reasonable glimpse of the mental model and the style of working of the maintainers and the likely future trajectory of the project. For example, concise comments, README, and documentation indicate thoughtfulness, extra effort, and empathy for other developers (and self). Mainly because, for mortal developers like me, documentation and tests are a necessity, but unpleasant, boring, and tedious things to write and maintain.

Well, those notions have now been abruptly and violently defenestrated by LLMs. They can now one-shot generate stunning looking documentation pages, dense (ironically, pedantically detailed) READMEs, build great looking user interfaces, neatly organise code with proper idioms, patterns, and comments. One can no longer know whether such a repository was “vibe” coded by a non-technical person who has never written a single line of code, or an experienced developer, who may or may not have used LLM assistance. These no longer indicate the quality of a codebase. On the contrary, the more stunning or perfect looking something is, the more suspicious it is now—was it low-effort, one-shot vibe coded?

With the tell-tale, rule-of-thumb measures of code and software quality being outright dead, without a much closer inspection and a bit of expert forensic analysis, it is now difficult to tell the wheat from the “slop”. One is now slowly being compelled to also look much more closely at the provenance of software—the who, why, their track record, and plans of governance.

Effort

Historically, it would take a reasonably long period of consistent effort and many iterations of refinement for a good developer to produce 10,000 lines of quality code that not only delivered meaningful results, but was easily readable and maintainable. While the number of lines of code is not a measure of code quality—it is often the inverse—a good-quality 10,000-line codebase indicated significant time, effort, focus, patience, expertise, and often, skills like project management that went into it. Human traits.

Now, LLMs can not only one-shot generate that in seconds, they can handle many technical aspects of the software development workflow, from testing to sysadmin to publishing. Unlike the unpredictable outcomes of frenzied vibe coding, when steered with human expertise, the output can be high quality and highly effective.[6] This has been my personal experience as well. On a regular basis, I have been compressing work that would have taken me weeks and months to mere days and even hours. That too, without vibe coding, an AGENT.md file, or any fancy multi-agent workflows or orchestration. Just an LLM agent CLI at arm’s length.

As a developer with a bottomless wishlist of things I wish I could have done or tried out, I have been able to use LLM tools to not just rapidly prototype and validate complex ideas, but actually write good quality production-grade software (my own subjective metric, of course) with better code than I could have written manually—things where I knew exactly what I had to do, but was constrained by physical limits, and also things that were unclear to me and needed novel ideas, approaches, and leaps. All the while, learning and bettering my own understanding of things.

The physiological, cognitive, and emotional cost I generally incur to achieve the software outcomes I want or am capable of engineering, has undoubtedly reduced by several orders of magnitude. The time and bandwidth this has freed up, I now spend on engineering, architecting, debating, tinkering, trying to expand my imagination, and writing much more concise and meaningful code that I actually want to write.

Remember the old adage, “programming is 90% thinking and 10% typing”? It is now, for real.

Slop

Given all that, what is the value of code as an artefact, when it can be generated at an industrial scale within seconds by someone who has never written any code? Barring obviously bad LLM-generated code, when code is neatly structured and functional (yes, LLMs can write good code when steered competently), what makes it valuable or not? We wouldn’t want LLM-generated code in systems out there in the real world, but would instead prefer pure unadulterated human code, yes? Well, that would be a wonderful joke.[7] [8] [9] [10] [11]

The reality is that the significant majority of the code written by humans globally on a daily basis is likely borderline junk.[12] Software development is not even a discipline that has reached any objective level of maturity. Medical doctors and civil engineers go through rigorous training to be issued licenses that are contingent on real world ramifications of their work. How about software developers and engineers? The world runs on shoddily engineered, poorly cobbled together, bloated systems with garbage code that humans have written, mostly directed by people in positions of power with perverse incentives who have absolutely no technical know-how nor any grounding in the humanities—the tyranny of non-tech “tech leaders”.[13]

One could, to trigger emotions, argue that AI slop is at least neatly formatted, well documented, and more syntactically consistent than the vast majority of human-written code. ( ͡° ͜ʖ ͡°)

Kidding aside, I am no fan of AI slop. Reading those obvious soulless LLM-generated messages and articles on the (dead) internet[14] is a waste of neuronal activation in the amygdala, if there is any activation at all. That so many people across the world LLM-speak and emote in the exact same manner on the internet, is creepy self-Pluribus-ification.[15] Without human creation, perfection and flaws, language, literature, art, music etc. are unenjoyable (to most). Infinite, instantly-generatable stuff without human constraints and limits, is actually very difficult to value.

As is code, then? Well, code is a bit different from art, literature, or any form of direct communication and evocation. Code was always a means to an end. Unlike poetry or prose, end users don’t read or care about code. They don’t care what language or framework or the architecture the hundred systems running behind a portal are made of. Code is hidden. They interact with the effect and outcomes of code through various forms of UX. I say that, slightly begrudgingly, as someone who enjoys writing, organising, and even nurturing code. For those who are immersed in it, there is an element of creativity and art in it, and many like me, are borderline curmudgeons on all things software.[16]

Ignoring outright bad code, in a world where functional code is so abundant that “good” and “bad” are indistinguishable, ultimately, what makes functional AI code slop or non-slop? I am strongly inclined to think that it is the framework of accountability, and ironically, the element of humanness. That is, all things (code) being equal, the ability to hold someone accountable at least emotionally and morally (and sometimes legally), for an artefact, instills value.

When one gets that big pull request (PR) on an open source repository, irrespective of its quality, if it is handwritten by a human, there is an intrinsic value and empathy for the human time and effort that is likely ascribed to it. It is known that there is a physical and cognitive cost that has been paid writing a lot of code before raising a PR. That is what makes that code “expensive” and not cheap.

When a PR is obviously LLM-generated, irrespective of how good it is, the first reaction is likely to be “slop!”, because it is no longer possible to instantly ascertain the human effort behind it. On the other hand, the effort required to read and validate it is disproportionately high—setting aside people who have also offloaded the reading of code to LLMs. It may very well be the best possible functional code, but it is one of infinitely many variations that could have been generated with no human cost or effort. Emotionally, it feels wrong and unfair to be burdened by such code dumps.

And, at that point, our reality has become a version of Borges’ Library of Babel.[17]

FOSS

Speaking of libraries, FOSS is perhaps the greatest public commons that humanity has created. The genesis of FOSS and its predecessors, various schemes for sharing code, can be traced to the fundamental premise that software was prohibitively expensive and required immense specialist skills to create. Only a tiny handful of people in the world had the ability to do that, and everyone else was naturally forced to use the creations of the few (proprietary or not). While the global developer ecosystem has exploded since then, the ratio of makers to users has largely remained the same. Large-scale FOSS collaboration and community dynamics all stem from that—codebases as valuable shared artefacts.

What happens in a world where code is cheap and small to medium-sized software libraries and modules can be quickly created by an expert, perfectly customised and attuned to their needs, no matter how niche? Forget expertise, a world where anyone reasonably savvy can vibe code the small things they need for their private use, however they please. I see this happening everywhere. What is happening to StackOverflow[18] is also happening to software, although not as dramatically. This seems to strike at the very heart of the human dynamics, societal conditions, and incentives that drive FOSS collaboration and sharing. Add to that the impending Cambrian explosion of FOSS projects manufactured at an unprecedented scale: for the high-quality FOSS projects that remain and thrive, expert governance, curation, and trust are likely to become more valuable than the code itself.

Missing the forest for the trees

Humans have produced amazing software when there was no syntax highlighting, IDEs, or any kind of tooling. And humans also produce trash despite all the tooling and resources in the world. A good competent developer with good articulation skills and care for quality will use LLMs, or any other tools, in their own ways to produce quality outcomes. An incompetent developer with poor articulation skills or one with a lack of care for quality, will produce bad stuff, LLMs or not.

Thus, the extreme proponents of manic “agentic” vibe coding,[19] and the outright denouncers of LLMs, are both missing the forest for the trees. There is a pragmatic middle path, where people who have the experience, expertise, competence, and ability to articulate can use these tools to get the outcomes they desire with the right sets of trade-offs.

Vibe coding has its place, especially for non-technical people, who, for the first time, can tinker, explore, have fun, and empower themselves with software. I see this happening all around me. However, the fanatical acolytes of vibe coding are missing a very important thing that makes humans take artefacts seriously—finitude. They’re generating a vast Borgesian library where they themselves are likely to be lost in an ocean of slop generated by sycophantic agents. Slop, not because the code is of poor quality, but because anything that can be generated infinitely without effort and has no meaningful provenance, is very hard to value or take seriously. Humans fundamentally do not deal well with an infinite supply of anything, especially choices. Completely unsurprising because we are heavily constrained biological beings that have evolved on a finite planet with finite resources to live out finite lifetimes.

And then, the denouncers, they can’t seem to get past the argument from incredulity.[20] They denounce LLMs because they don’t personally like them for whatever reason, or have been unable to get desirable outcomes, or had the wrong expectations about them, or have simply gotten sick of them. But that is immaterial because there is a sizeable population who are using the exact same tools fruitfully and have the opposite experience. I am one of them.

All that said, the widespread braindead and outright stupid and harmful implementations of these technologies fuelled by hype, frenzy, and greed are an unfortunate reality and a massive cause of concern. The AI-business bubble is perhaps one of the biggest in history. The rise of FOSS AI technologies makes one hopeful. However, to incorrectly conflate bad actors, bad actions, bean-counting, and nonsensical implementations with fundamental, physical capabilities of these technologies—not theoretical, but the regular, proven, and practical—is irrational. It is missing the forest for the trees.

The human cost

All of this has been from the perspective of an experienced developer and engineer. For someone who has been weathered and bruised enough, these AI technologies provide extremely effective and powerful assistance.

But what about the young folks who are just starting out? If one does not have their fundamentals in place, if one has not developed an innate and nuanced understanding of systems and the process of software development, then these technologies are unreliable, dangerous genies. One asks for code, it gives code. One asks for changes, it gives changes. Soon, one is stuck with a codebase whose workings one doesn’t understand, and one is forced to go back to the genie and depend on it helplessly. And because one is hooked on and dependent on the genie, the natural circumstances that otherwise would allow for foundational and fundamental skills and understanding to develop, never arise, to the point of cognitive decline.[21] What then happens to an entire generation of juniors, who never get an opportunity to become seniors meaningfully?

Personally, I don’t care about the extreme vibe coders or denouncers or even slop. We are all going to drown in a deluge of slop, from which, many islands of sanity, recovery, and a new order of software will emerge. The real concern is for generations of learners who are being robbed of the opportunity to acquire the expertise to objectively discern what is slop and what is not. Even worse, the possibility that experienced folks who use these tools effectively, will feel disincentivised from mentoring and training junior folks in foundational ways, something that was a natural part of societal evolution. And not just with software development, but the wholesale offloading of agency and decision-making to black boxes.

Talk

At this point, for a hands-on developer, reading and critically evaluating code have become more important than learning syntax and typing it out line by line. Of course, that is still an important skill, because the ability to read code effectively comes from that in the first place. But, the daily software development workflows have flipped over completely.

An experienced developer who can talk well, that is, imagine, articulate, define problem statements, architect and engineer, has a massive advantage over someone who cannot, more disproportionately than ever. Knowledge of specific language, syntax, and frameworks—code—is no longer a bottleneck. The physiological constraints of yore are no longer impediments. The machinery for instantly creating code at scale is now a commodity and available to everyone, just a pip install equivalent away. It requires no special training, no new language or framework to learn, and has practically no entry barriers—just good old critical thinking and foundational human skills, and competence to run the machinery.

Conventional software development methodologies and roles—Waterfall[22] to Agile,[23] developer to tester, senior to junior—have fundamentally changed with traditional boundaries consolidating into unimaginably fast, compressed, blurry, iterative “agentic” loops. The dynamics of people, organisations, and public communities in software development, the very human incentives for sharing and collaboration,[24] [25] [26] are all changing.

For the first time ever, good talk is exponentially more valuable than good code. The ramifications of this are significant and disruptive. This time, it is different.



Comments

  • By overgard 2026-01-30 22:52

    I asked Codex to write some unit tests for Redux today. At first glance it looked fine, and I continued on. I then went back to add a test by hand, and after looking more closely at the output there were like 50 wtf worthy things scattered in there. Sure they ran, but it was bad in all sorts of ways. And this was just writing something very basic.

    This has been my experience almost every time I use AI: superficially it seems fine, once I go to extend the code I realize it's a disaster and I have to clean it up.

    The problem with "code is cheap" is that, it's not. GENERATING code is now cheap (while the LLMs are subsidized by endless VC dollars, anyway), but the cost of owning that code is not. Every line of code is a liability, and generating thousands of lines a day is like running up a few thousand dollars of debt on a credit card thinking you're getting free stuff and then being surprised when it gets declined.

    • By acedTrex 2026-01-30 23:10

      I've always said every line is a liability; it's our job to limit liabilities. That has largely gone out the window these days.

      • By ky3 2026-01-31 10:42

        EWD 1036: On the cruelty of really teaching computing science (1988)

        “My point today is that, if we wish to count lines of code, we should not regard them as ‘lines produced’ but as ‘lines spent’: the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger.”

      • By elgenie 2026-01-31 2:13

        No code is as easy to maintain as no code.

        No code runs as fast as no code.

      • By doug_durham 2026-01-31 7:48

        A better formulation is "every feature is a liability". Taking it to the line of code level is too prescriptive. Occasionally writing more verbose code is preferable if it makes it easier to understand.

        • By catdog 2026-01-31 10:20

          > A better formulation is "every feature is a liability". Taking it to the line of code level is too prescriptive.

          Amount of code is a huge factor but maybe not the best wording here. It's more a thing of complexity where amount of code is a contributing metric but not the only one. You can very easily have a feature implemented in a too complex way and with too much code (esp. if an LLM generated the code but also with human developers). Also not every feature is equal.

          > Occasionally writing more verbose code is preferable if it makes it easier to understand.

          Think this is more a classic case of "if the metric becomes a goal it ceases to be a metric" than it being a bad metric per se.

        • By phi-go 2026-01-31 8:06

          This sounds wrong, features have to be the value of your code. The required maintenance and slow down to build more features (technical debt) are the liability, which is how I understood the relationship to "lines of code" anyway.

          • By TeMPOraL 2026-01-31 8:37

            Wrong or not, the industry embraced it.

            I can sort of understand it if I squint: every feature is a maintenance burden, and a risk of looking bad in front of users when you break or remove it, even if those users didn't use this feature. It's really a burden to be avoided when the point of your product is to grow its user base, not to actually be useful. Which explains why even Fisher-Price toys look more feature-ful and ergonomic than most new software products.

      • By cs_sorcerer 2026-01-31 1:00

        Followed by: even better is no code, and best is deleting code.

        It’s one of those things that has always struck me as funny about programming: how less usually really is more.

      • By chr15m 2026-01-31 0:48

        > every line is a liability, it's our job to limit liabilities.

        Hard agree!

        • By ares623 2026-01-31 8:23

          But more code from AI means stocks go up. Stocks are assets. If you generate enough code the assets will outnumber the liabilities. It’s accounting 101. /s

      • By ramraj07 2026-01-31 14:57

        Agreed: every line you ship, whether you wrote it or not, you are responsible for. In that regard, while I write a lot of code completely with AI, I still endeavor to keep the lines as minimal as possible. This means you never write both the main code and the tests using AI. I'd rather have no tests than AI tests (we have a QA team writing those up). This kinda works.

      • By nomel 2026-01-30 23:58

        The only people I've known that share this perspective are those that hate abstraction. Going back to their code, to extend it in some way, almost always requires a rewrite, because they wrote it with the goal of minimum viable complexity rather than understanding the realities of the real world problem they're solving, like "we all know we need these other features, but we have a deadline!"

        For one off, this is fine. For anything maintainable, that needs to survive the realities of time, this is truly terrible.

        Related, my friend works in a performance-critical space. He can't use abstractions, because the direct, bare-metal, "exact fit" implementation will perform best. They can't really add features, because it'll throw the timing of other things off too much, so they usually have to re-architect. But that's the reality of their problem space.

        • By johnmwilkinson 2026-01-31 0:24

          I believe this is conflating abstraction with encapsulation. The former is about semantic levels, the latter about information hiding.
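          A toy sketch of that distinction (all names here are hypothetical, just for illustration): encapsulation hides the representation behind an interface, while abstraction introduces a higher semantic level even when nothing is hidden.

```python
# Encapsulation: information hiding. The representation (_celsius)
# sits behind an interface; callers are not meant to touch it.
class Thermometer:
    def __init__(self, celsius):
        self._celsius = celsius  # hidden detail

    def fahrenheit(self):
        return self._celsius * 9 / 5 + 32

# Abstraction without hiding: the tuples are plain, fully accessible
# data, but "distance" lets callers think in terms of points rather
# than coordinate arithmetic -- a semantic level, not a wall.
def distance(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
```

          Nothing stops a caller from poking at `p[0]` directly, yet `distance` is still an abstraction; that is the gap between the two terms.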

          • By nomel 2026-01-31 0:37

            Maybe I am? How is it possible to abstract without encapsulation? And also, how is it possible to encapsulate without abstracting some concept (intentionally or not) contained in that encapsulation? I can't really differentiate them, in the context of naming/referencing some list of CPU operations.

            • By Retric 2026-01-31 1:43

              > How is it possible to abstract without encapsulation?

              Historically, pure machine code with jumps etc. lacked any form of encapsulation, as any data could be accessed and updated by anything.

              However, you would still use abstractions. If you pretend the train is actually going 80.2 MPH instead of somewhere between 80.1573 MPH to 80.2485 MPH which you got from different sensors you don’t need to do every calculation that follows twice.
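              A minimal sketch of that train example (numbers from above; the function name is made up): the raw readings stay fully accessible, i.e. nothing is encapsulated, yet everything downstream reasons about a single abstracted speed.

```python
# Abstraction without encapsulation: anyone can still read the raw
# sensor values, but downstream code works with one number.
raw_readings_mph = [80.1573, 80.2485]  # disagreeing sensors

def estimated_speed(readings):
    # Collapse the disagreeing sensors into one working value.
    return sum(readings) / len(readings)

speed = estimated_speed(raw_readings_mph)
# Every calculation that follows uses `speed`, not the raw readings;
# the abstraction holds even though the details remain within reach.
```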

              • By nomel 2026-01-31 2:22

                I'm using the industry definition of abstraction [1]:

                > In software, an abstraction provides access while hiding details that otherwise might make access more challenging

                I read this as "an encapsulation of a concept". In software, I think it can be simplified to "named lists of operations".

                > Historically pure machine code with jumps etc lacked any form of encapsulation as any data can be accessed and updated by anything.

                Not practically, by any stretch of the imagination. And, if the intent is to write silly code, modern languages don't really change much, it's just the number of operations in the named lists will be longer.

                You would use calls and returns (or just jumps if not supported), and then name and reference the resulting subroutine in your assembler or with a comment (so you could reference it as "call 0x23423 // multiply R1 and R2"), to encapsulate the concept. If those weren't supported, you would use named macros [2]. Your assembler would use named operations, sometimes expanding to multiple opcodes, with each opcode having a conceptually relevant name in the manual, which abstracted a logic circuit made with named logic gates, consisting of named switches, that shuffled around named charge carriers. Say your code just did a few operations; the named abstraction for that list of operations (which all these things are) would be "blink_light.asm".

                > If you pretend the train is actually going 80.2 MPH instead of somewhere between 80.1573 MPH to 80.2485 MPH which you got from different sensors you don’t need to do every calculation that follows twice.

                I don't see this as an abstraction as much as a simple engineering compromise (of accuracy) dictated by constraint (CPU time/solenoid wear/whatever), because you're not hiding complexity as much as ignoring it.

                I see what you're saying, and you're probably right, but I see the concepts as equivalent. I see an abstraction as a functional encapsulation of a concept. An encapsulation, if not nonsense, will be some meaningful abstraction (or a renaming of one).

                I'm genuinely interested in an example of an encapsulation that isn't an abstraction, and an abstraction that isn't a conceptual encapsulation, to right my perspective! I can't think of any.

                [1] https://en.wikipedia.org/wiki/Abstraction_(computer_science)

                [2] https://www.tutorialspoint.com/assembly_programming/assembly...

                • By Retric 2026-01-31 2:42

                  > I can't think of any.

                  Incorrect definition = incorrect interpretation. I edited this a few times but the separation is you can use an abstraction even if you maintain access to the implementation details.

                  > assembler

                  Assembly language, which is a different thing. Initially there was no assembler; someone had to write one. In the beginning, every line of code had direct access to all memory, in part because limiting access required extra engineering.

                  Though even machine code itself is an abstraction across a great number of implementation details.

                  > I don't see this as an abstraction as much as a simple engineering compromise (of accuracy) dictated by constraint (CPU time/solenoid wear/whatever), because you're not hiding complexity as much as ignoring it.

                  If it makes you feel better, consider the same situation with 5 sensors, X of which have failed. The point is you don’t need to consider all information at every stage of a process. Instead of all the underlying details, you can write code that asks: do we have enough information to get a sufficiently accurate speed? What is it?

                  It doesn’t matter if the code could still look at the raw sensor data; you, the programmer, prefer the abstraction, so it persists even without anything beyond yourself enforcing it.

                  IE: “hiding details that otherwise might make access more challenging”

                  You can use TCP/IP or anything else as an abstraction even if you maintain access to the lower level implementation details.
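                  A minimal sketch of that speed abstraction (class name, reading shape, and threshold all invented for illustration): the raw readings stay fully reachable, but callers can choose to ask only the two questions above.

                  ```javascript
                  // Hypothetical sketch: a speed abstraction over raw sensor readings.
                  class SpeedEstimator {
                    constructor(readings) {
                      // Raw data stays accessible: abstraction without encapsulation.
                      this.readings = readings;
                    }
                    // "Do we have enough information for a sufficiently accurate speed?"
                    hasReliableSpeed(minSensors = 3) {
                      return this.readings.filter((r) => r.ok).length >= minSensors;
                    }
                    // The abstraction: one number instead of every reading.
                    speed() {
                      const good = this.readings.filter((r) => r.ok).map((r) => r.mph);
                      return good.reduce((a, b) => a + b, 0) / good.length;
                    }
                  }
                  ```

                  Nothing stops a caller from reading `estimator.readings` directly; the abstraction holds only because the programmer prefers it.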

                  • By nomel 2026-01-313:182 reply

                    I genuinely appreciate your response, because there's a good chance it'll result in me changing my perspective, and I'm asking these questions with that intent!

                    > You are thinking of assembly language which is a different thing. Initially there was no assembler, someone had to write one.

                    This is why I specifically mention opcodes. I've actually written assemblers! And...there's not much to them. It's mostly just replacing the names given to the opcodes in the datasheet back to the opcodes, with a few human niceties. ;)
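                    A toy sketch of that "not much to them" claim (mnemonics and opcode values are invented, not from any real datasheet): the core of an assembler is a lookup table from the manual's names back to bytes.

                    ```javascript
                    // Invented mnemonic-to-opcode table for a fictional ISA.
                    const OPCODES = { NOP: 0x00, LDA: 0xa9, STA: 0x8d, JMP: 0x4c };

                    // Translate each "MNEMONIC [hex operand]" line into bytes.
                    function assemble(lines) {
                      return lines.flatMap((line) => {
                        const [mnemonic, operand] = line.trim().split(/\s+/);
                        if (!(mnemonic in OPCODES)) {
                          throw new Error(`unknown mnemonic: ${mnemonic}`);
                        }
                        const bytes = [OPCODES[mnemonic]];
                        if (operand !== undefined) bytes.push(parseInt(operand, 16));
                        return bytes;
                      });
                    }
                    ```

                    A real assembler adds labels, macros, and multiple passes, but the essence is this renaming.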

                    > consider the same situation with 5 sensors, X of which have failed

                    Ohhhhhhhh, ok. I kind of see. Unfortunately, I don't see the difference between abstraction and encapsulation here. I see the abstraction, speed, as being the encapsulation of a set of sensors, ignoring irrelevant values.

                    I feel like I'm almost there. I may have edited my previous comment after you replied. My "no procrastination" setting kicked in, and I couldn't see.

                    I don't see how "The former is about semantic levels, the latter about information hiding." are different. In my mind, semantic levels exist as compression and encapsulation of information. If you're saying encapsulation means "black box", then that could make sense to me, but "inaccessible" isn't part of the definition, just "containment".

                    • By johnmwilkinson 2026-01-316:271 reply

                      Computer Science stole the term abstraction from the field of Mathematics. I think mathematics can be really helpful in clearing things up here.

                      A really simple abstraction in mathematics is that of a numeric base (e.g. base 10) for representing numbers. Being able to use the symbol 3 is much more useful than needing to write III. Of course, numbers themselves are an abstraction: perhaps you and I can reason about 3 and 7 and 10,000 in a vacuum, but young children, or people who have never been exposed to numbers without units, struggle to understand. Seven… what? Dogs? Bottles? Days? Numbers are an abstraction, and Arabic digits are a particular abstraction on top of that.

                      Without that abstraction, we would have insufficient tools to do more complex things such as, say, subtract 1 from 1,000,000,000. This is a problem that most 12 year olds can solve, but the greatest mathematicians of the Roman empire could not, because they did not have the right abstractions.

                      So if there are abstractions that enable us to solve problems that were formerly impossible, this means there is something more going on than “hiding information”. In fact, this is what Dijkstra (a mathematician by training) meant when he said:

                      "The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise."

                      When I use open(2), it’s because I’m operating at the semantic level of files. It’s not sensible to think of a “file” at a lower level: would it be on disk? In memory? What about socket files? But a “file” isn’t a real thing, it’s an abstraction created by the OS. We can operate on files, these made up things, and we can compose operations together in complex, useful ways. The idea of a file opens new possibilities for things we can do with computers.

                      I hope that explanation helps!

                      • By johnmwilkinson 2026-01-3116:11

                        Expanding on this regarding the difference between abstraction vs encapsulation: abstraction is about the distillation of useful concepts while encapsulation is a specific tactic used to accomplish a behavior.

                        To continue with the idea of numbers, let’s say you asked someone to add 3 and 5. Is that encapsulation? What information are you hiding? You are not asking them to add coins or meters or reindeer. 3 and 5 are values independent of any underlying information. The numbers aren’t encapsulating anything.

                        Encapsulation is different. When you operate a motor vehicle, you concern yourself with the controls presented. This allows you, as the operator, to need only a tiny amount of knowledge to interact with an incredibly complex machine. These details have been encapsulated. There may be particular abstractions present, such as the notions of steering, acceleration, and braking, but the way you interact with these will differ from vehicle to vehicle. Additionally, encapsulation is not concerned with the idea of steering; it is concerned with how to present steering in this specific case.

                        The two ideas are connected because using an abstraction in software often involves encapsulation. But they should not be conflated, or the likely result is bad abstractions and unwieldy encapsulation.

                    • By Retric 2026-01-313:381 reply

                      > It's mostly just replacing the names given to the opcodes in the datasheet back to the opcodes

                      Under the assumption that the input data is properly formatted, you can generate machine code. This is, however, an abstraction which can fail, as nothing forces a user to input valid files.

                      So we have an abstraction without any encapsulation.

                      • By nomel 2026-02-0422:17

                        I can only see that as being the case if you weren't aware of it. Otherwise, the awareness would be explicit intent to fail on malformed input, which seems like just as much an encapsulation?

                        But, that's a great example! Thank you. This makes it clear that I'm probably sometimes wrong. ;)

        • By jahsome 2026-01-313:132 reply

          I don't see how the two are related, personally. I'm regularly accused of over-abstraction specifically because I aspire to make each abstraction do as little as possible, i.e. fewest lines possible.

          • By galaxyLogic 2026-01-316:541 reply

            "Abstracting" means extracting the common parts of multiple instances, and making everything else a parameter. The difficulty for software is that developers often start by writing the abstraction, rather than having multiple existing instances and then writing code that collects the common parts of those instances into a single abstraction. I guess that is what "refactoring" is about.

            In the sciences and humanities, abstraction is applied the proper way: studying the instances first, then describing a multitude of existing phenomena by giving names to their common, repeating descriptions.

            • By nomel 2026-02-0220:42

              This matches my "ideal" way of writing software, which is something close to "reverse waterfall": start with the non-negotiable truths at the lowest level, then work your way up towards the goal, which is sufficiently defined. As you go, the patterns become apparent, collapsing into nice abstractions.

              The code always ends up nice and clean and modular. And, since I'm working towards the idea, I can say "here are the modular pieces I have to work with, dictated by the fundamentals beneath; how do I use them to accomplish the task?". When working from the idea, I think it's easier to want to write something to achieve the immediate task, in a "not see the forest for the trees" kind of way (abstractions become about the goal, rather than the reality underneath). Of course, both directions are required, but I get the best "separation of concerns" going in reverse.

          • By nomel 2026-01-313:25

            I call that lasagna code! From what I've seen, developers start with spaghetti, overcompensate with lasagna, then end up with some organization more optimized for the human, that minimizes cognitive load while reading.

            To me, abstraction is an encapsulation of some concept. I can't understand how they're practically different, unless you encapsulate true nonsense, without purpose or resulting meaning, which I can't think of an example of, since humans tend to categorize/name everything. I'm dumb.

    • By acemarke 2026-01-315:221 reply

      Hi, I'm the primary Redux maintainer. I'd love to see some examples of what got generated! (Doubt there's anything we could do to _influence_ this, but curious what happened here.)

      FWIW we do have our docs on testing approaches here, and have recommended a more integrated-style approach to testing for a while:

      - https://redux.js.org/usage/writing-tests

      • By overgard 2026-02-0121:31

        Unfortunately I think I cleaned up the code before committing it, so I don't have an exact example! I did actually read that usage page though after looking at those tests, and that helped me in fixing the tests (maybe in retrospect I should have pointed the AI at the docs page first).

        I think the main issue I was having with it was reusing the store object instead of creating a new one for each test. The other issue I was seeing was that it was creating mock objects and APIs for things that weren't even being tested (a lot of scope creep), and one of those APIs was basically copy-pasted between two files (code duplication). It was also just testing things that weren't really necessary (i.e., testing Redux itself, instead of my usage of it).

        Another issue was it just taking a complex approach to fixing something that could be more easily solved. For instance, I had debug: true turned on for redux-undo, so I was seeing some unnecessary log messages in the tests. Codex identified this and asked if I wanted to disable them, so I said yes. What it did, though, instead of setting debug: false or disabling it in tests, was patch console.log to look for redux-undo prefixes. Technically it worked, but it was kind of byzantine!
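        Roughly the pattern it produced, reconstructed from memory as a sketch (the helper name and prefix are illustrative): instead of turning the option off at the source, it filtered the noise at the sink.

        ```javascript
        // Wrap a logging function so messages starting with a given prefix
        // are silently dropped -- the "byzantine" fix, generalized.
        function makeFilteredLog(log, prefix) {
          return (...args) => {
            if (typeof args[0] === 'string' && args[0].includes(prefix)) return;
            log(...args);
          };
        }

        // The byzantine version: console.log = makeFilteredLog(console.log, 'redux-undo');
        // The simple fix would have been passing debug: false in the redux-undo config.
        ```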

        None of this was a terrible disaster or anything, especially since I started pretty small, but I think what made me miss some of the issues at first glance is this is my first usage of Redux in a non-toy project so while I understand the fundamentals fine, it was easy to sneak working-but-bad stuff past me until I sat down with the code to write a test on my own and started to see the issues.

    • By visarga 2026-01-314:461 reply

      > "write some unit tests for Redux today"

      The equivalent of "draw me a dog" -> not a masterpiece!? Who would have thought? You need to come up with a testing methodology, write it down, and then ask the model to go through it. It likes to make assumptions about unspecified things, so you've got to be careful.

      More fundamentally I think testing is becoming the core component we need to think about. We should not vibe-check AI code, we should code-check it. Of course it will write the actual test code, but your main priority is to think about "how do I test this?"

      You can only know the value of code up to the level of its testing. You can't commit your eyes into the repo, so don't do "LGTM" vibe-testing of AI code; that's walking a motorcycle.

      • By overgard 2026-02-0121:37

        I think you're assuming a lot about my prompting. I know to be specific with LLMs.

    • By sjsizjhaha 2026-01-3023:29

      Generating code was always cheap. That’s part of the reason this tech has to be forced on teams. Similar to the move to cloud, it’s the kind of cost that’s only gonna show up later - faster than the cloud move, I think. Though, in some cases it will be the correct choice.

    • By akst 2026-01-311:131 reply

      ATM I feel like LLMs writing tests can be a bit dangerous at times; there are cases where it's fine and cases where it's not. I don't really think I could articulate a systemised basis for identifying either case, but I know it when I see it, I guess.

      Like the other day: I gave it a bunch of use cases to write tests for. The use cases were correct, the code was not, and when it saw one of the tests break, it sought to rewrite the test. You risk suboptimal results when an agent is dictating its own success criteria.

      At one point I did try using separate Claude instances to write tests; then I'd get the other instance to write the implementation, unaware of the tests. But it's a bit too much setup.

      • By icedchai 2026-01-3119:161 reply

        I work with individuals who attempt to use LLMs to write tests. More than once, it's added nonsensical, useless test cases. Admittedly, humans do this, too, to a lesser extent.

        Additionally, if their code has broken existing tests, it "fixes" them not by fixing the code under test, but by changing the tests... (assert status == 200 becomes 500, and code gets deleted.)

        Tests "pass." PR is opened. Reviewers wade through slop...

        • By sigotirandolas 2026-02-0110:502 reply

          The most annoying thing is that even after cleaning up all the nonsense, the tests still contain all sort of fanfare and it’s essentially impossible to get the submitter to trim them because it’s death by a thousand cuts (and you better not say "do it as if you didn’t use AI" in the current climate..)

          • By akst 2026-02-0412:47

            That’s also another thing. Sometimes the output is just junk, like there wasn’t really any intention behind the test to prevent a certain likely scenario from arising.

            Sometimes it just adds tests that lock in specific quirks of the code that weren’t necessarily intentional.

          • By icedchai 2026-02-0115:231 reply

            Yep. We've had to throw PRs away and ask them to start over with a smaller set of changes since it became impossible to manage. Reviews went on for weeks. The individual couldn't justify why things were done (and apparently their AI couldn't, either!)

            • By sigotirandolas 2026-02-0220:53

              Luckily those I work with are smart enough that I've not seen a PR thrown away yet, but sometimes I'm approving with more "meh, it's fine I guess" than "yeah, that makes sense".

    • By cheema33 2026-01-311:485 reply

      This is how you do things if you are new to this game.

      Get two other, different, LLMs to thoroughly review the code. If you don’t have an automated way to do all of this, you will struggle and eventually put yourself out of a job.

      If you do use this approach, you will get code that is better than what most software devs put out. And that gives you a good base to work with if you need to add polish to it.

      • By overgard 2026-01-314:111 reply

        I actually have used other LLMs to review the code in the past (not today, but in the past). It's fine, but it doesn't tend to catch things like "this technically works, but it's loading a footgun." For example, in the Redux tests I mentioned in my original post, the tests were reusing a single global store variable. It technically worked, the tests ran, and since these were the first tests I introduced in the code base there weren't any issues, even though this made the tests non-deterministic... but it was a pattern that was easily going to break down the line.
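        A sketch of the fix for that pitfall, with a hand-rolled store standing in for Redux's real store creation (names here are illustrative): each test builds its own store through a factory, so no state leaks between tests.

        ```javascript
        // Factory: every call returns a fresh, independent store.
        function makeStore(reducer, initialState) {
          let state = initialState;
          return {
            getState: () => state,
            dispatch: (action) => { state = reducer(state, action); return action; },
          };
        }

        // Example reducer; in each test, call makeStore() instead of sharing
        // one module-level store, so no test sees a previous test's state.
        const counter = (state = 0, action) =>
          action.type === 'inc' ? state + 1 : state;
        ```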

        To me, the solution isn't "more AI", it's "how do I use AI in a way that doesn't screw me over a few weeks/months down the line", and for me that's by making sure I understand the code it generated and trim out the things that are bad/excessive. If it's generating things I don't understand, then I need to understand them, because I have to debug it at some point.

        Also, in this case it was just some unit tests, so who cares, but if this was a service that was publicly exposed on the web? I would definitely want to make sure I had a human in the loop for anything security related, and I would ABSOLUTELY want to make sure I understood it if it were handling user data.

        • By cstejerean 2026-01-318:342 reply

          How long ago was this past? A review with the latest models should absolutely catch the issue you describe, in my experience.

          • By t_mahmood 2026-01-3113:05

            Ah, the "it works on my computer" edition of LLM.

          • By overgard 2026-01-3122:12

            December. Previous job had cursor and copilot automatically reviewing PRs.

      • By summerlight 2026-01-313:022 reply

        The quality of generated code does not matter. The problem is when it breaks at 2 AM and you're burning thousands of dollars every minute. You don't own the code that you don't understand, but unfortunately that does not mean you don't own the responsibility as well. Good luck writing the postmortem; your boss will have lots of questions for you.

        • By icedchai 2026-01-3119:18

          Frequently the boss is encouraging use of AI for efficiency without understanding the implications.

          And we'll just have the AI write the postmortem, so no big deal there. ;)

        • By charcircuit 2026-01-318:53

          AI can help you understand code faster than without AI. It allows me to investigate problems that I have little context in and be able to write fixes effectively.

      • By timcobb 2026-01-314:22

        > you will struggle and eventually put yourself out of a job.

        We can have a discussion without the stakes being so high.

      • By lelanthran 2026-01-3113:45

        > If you do use this approach, you will get code that is better than what most software devs put out. And that gives you a good base to work with if you need to add polish to it.

        If you do use this approach, you'll find that it will descend into a recursive madness. Due to the way these models are trained, they are never going to look at the output of two other models and go "Yeah, this is fine as it is; don't change a thing".

        Before you know it you're going to have change amplification, where a tiny change by one model triggers other models (or even itself) to make other changes, which triggers further changes, etc ad nauseum.

        The easy part is getting the models to spit out working code. The hard part is getting it to stop.

      • By 3kkdd 2026-01-312:303 reply

        I'm sick and tired of these empty posts.

        SHOW AN EXAMPLE OF YOU ACTUALLY DOING WHAT YOU SAY!

        • By alt187 2026-01-314:042 reply

          There's no example because OP has never done this, and never will. People lie on the internet.

          • By timcobb 2026-01-314:21

            I've never done this because I haven't felt compelled to: I want to review my own code. But I imagine this works okay, and isn't hard to set up by asking Claude to set it up for you...

          • By senordevnyc 2026-02-012:05

            What? People do this all the time. Sometimes manually, by invoking another agent with a different model and asking it to review the changes against the original spec. I just set up some reviewer/verifier sub-agents in Cursor that I can invoke with a slash command. I use Opus 4.5 as my daily driver, but I have reviewer sub-agents running Gemini 3 Pro and GPT-5.2-codex, and they each review the plan as well, and then the final implementation against the plan. Both sometimes identify issues, and Opus then integrates that feedback.

            It’s not perfect so I still review the code myself, but it helps decrease the number of defects I have to then have the AI correct.

        • By cheema33 2026-02-027:21

          The setup is much simpler than you might think. I have 4 CLI tools I use for this setup. Claude Code, Codex, Copilot and Cursor CLI. I asked Claude Code to create a code reviewer "skill" that uses the other 3 CLI tools to review changes in detail and provide feedback. I then ask Claude Code to use this skill to review any changes in code or even review plan documents. It is very very effective. Is it perfect? No. Nothing is. But, as I stated before, this produces results that are better than what an average developer sends in for PR review. Far far better in my own experience.

          In addition to that, we do use CodeRabbit plugin on GitHub to perform a 4th code review. And we tell all of our agents to not get into gold-plating mode.

          You can choose not to use modern tools like these to write software. You can also choose to write software in binary.

        • By Foreignborn 2026-01-312:48

          these two posts (the parent and then the OP) seem equally empty?

          by level of compute spend, it might look like:

          - ask an LLM in the same query/thread to write code AND tests (not good)

          - ask the LLM in different threads (meh)

          - ask the LLM in a separate thread to critique said tests (too brittle, against testing guidelines, testing implementation and not behavior, etc). Fix those. (decent)

          - ask the LLM to spawn multiple agents to review the code and tests. Fix those. Spawn agents to critique again. Fix again.

          - Do the same as above, but spawn agents from different families (so Claude calls Gemini and Codex).

          —-

          these are usually set up as /slash commands like /tests or /review so you aren’t doing this manually. Since this can take some time, people might work on multiple features at once.

    • By zamalek 2026-01-3023:451 reply

      The main issue I've seen is it writing passing tests; that the code is correct is a big (and often incorrect) assumption.

      • By 0x696C6961 2026-01-310:20

        The majority of devs do the same thing.

    • By hahahahhaah 2026-01-317:33

      Living off only deep fried potatoes and large cola bottles is cheap...

    • By jjimmy2k 2026-02-0112:47

      A better agent can fix it in the future.

    • By pmarreck 2026-01-317:09

      nobody is going to care about your bespoke codework if there is no downstream measurable difference

  • By Waterluvian 2026-01-3015:237 reply

    I think if your job is to assemble a segment of a car based on a spec using provided tools and pre-trained processes, it makes sense if you worry that giant robot arms might be installed to replace you.

    But if your job is to assemble a car in order to explore what modifications to make to the design, experiment with a single prototype, and determine how to program those robot arms, you’re probably not thinking about the risk of being automated.

    I know a lot of counter arguments are a form of, “but AI is automating that second class of job!” But I just really haven’t seen that at all. What I have seen is a misclassification of the former as the latter.

    • By enlyth 2026-01-3015:345 reply

      A software engineer with an LLM is still infinitely more powerful than a commoner with an LLM. The engineer can debug, guide, change approaches, and give very specific instructions if they know what needs to be done.

      The commoner can only hammer the prompt repeatedly with "this doesn't work can you fix it".

      So yes, our jobs are changing rapidly, but this doesn't strike me as being obsolete any time soon.

      • By javier_e06 2026-01-3015:524 reply

        I listened to a segment on the radio where a college teacher told their class that it was okay to use AI to assist you during a test, provided:

        1. Declare in advance that AI is being used.

        2. Provide verbatim the question-and-answer session.

        3. Explain why the answer given by the AI is a good answer.

        Part of the grade will come from 1, 2, and 3.

        Fair enough.

        • By chasd00 2026-01-3020:34

          It’s better than nothing, but the problem is students will figure out they can feed step 2 right back to the AI, logged in via another session, to get 3.

        • By bheadmaster 2026-01-3016:051 reply

          This is actually a great way to foster the learning spirit in the age of AI. Even if the student uses AI to arrive at an answer, they will still need to, at the very least, ask the AI for an explanation that will teach them how it arrived at the solution.

          • By jdjeeee 2026-01-3016:343 reply

            No this is not the way we want learning to be - just like how students are banned from using calculators until they have mastered the foundational thinking.

            • By bheadmaster 2026-01-3021:321 reply

              That's a fair point, but AI can do much more than just provide you with an answer like a calculator.

              AI can explain the underlying process of manual computation and help you learn it. You can ask it questions when you're confused, and it will keep explaining no matter how off the topic you go.

              We don't consider tutoring bad for learning - quite the contrary, we tutor slower students to help them catch up, and advanced students to help them fulfill their potential.

              If we use AI as if it were an automated, tireless tutor, it may change learning for the better. Not that learning was anywhere near great as it was.

              • By Arainach 2026-01-313:072 reply

                You're assuming the students are reading any of this. They're not, they're just copy/pasting it.

                • By bheadmaster 2026-01-319:13

                  Well, you can lead the horse to water, but you can't make him drink.

                  If you assume all students are lazy assholes who want to cheat the system, then I doubt there's anything that would help them learn.

                • By myko 2026-01-3111:55

                  Also, so much of the LLM's answer is fluff, when not outright wrong.

            • By graemep 2026-01-315:392 reply

              There is research that shows that banning calculators impedes the learning of maths. It is certainly not obvious to me that calculators will have a negative effect - I certainly always allowed my kids to use them.

              LLMs are trickier and use needs to be restricted to stop cheating, just as my kids had restrictions on what calculators they could use in some exams. That does not mean they are all bad or even net bad if used correctly.

              • By thesz 2026-01-317:161 reply

                  > There is research that shows that banning calculators impedes the learning of maths.
                
                Please share what you know. My search found a heap of opinions and just one study where use of calculators made children less able to calculate by themselves, not the ability to learn and understand math in general.

              • By lelanthran 2026-01-3117:401 reply

                > There is research that shows that banning calculators impedes the learning of maths.

                I've seen oodles of research concluding the opposite at the primary level (grades 1- 5, say). If your mentioned research exists, it must be very well hidden :-/

                • By graemep 2026-02-0111:071 reply

                  There were 79 studies used in this meta-analysis, so it cannot be that well hidden: https://psycnet.apa.org/record/1987-11739-001

                  • By lelanthran 2026-02-0114:39

                    > There were 79 studies used in this meta analysis so it cannot be that well hiddne: https://psycnet.apa.org/record/1987-11739-001

                    From the first page of that study

                    > Do calculators threaten basic skills? The answer consistently seemed to be no, provided those basic skills have first been developed with paper and pencil.

                    So, yeah, there are no studies I have found that support any assertion along the lines of:

                    >>> There is research that shows that banning calculators impedes the learning of maths.

                    If you actually find any, we still have to consider that things like this meta-study you posted are already 74 studies ahead in confirming that you are wrong.

                    Best would be for you to find 75 studies that confirm your hypothesis. Unfortunately, even though I read studies all the time, and even at one point had full access via institutional license to full-text of studies, and spent almost all of my after-hours time between 2009 and 2011 actually reading papers on primary/foundational education, I have not seen even one that supports your assertion.

                    I have read well over a hundred papers on the subject, and did not find one. I am skeptical that you will find any.

            • By stevofolife 2026-01-3018:353 reply

              Calculators don't tell you step by step. AI can.

              • By sethops1 2026-01-311:331 reply

                Symbolic computation is a thing. How do you think Wolfram Alpha worked for the 20 years before AI?

              • By simianparrot 2026-01-3019:58

                And it’s making that up as well.

              • By danaris 2026-01-3021:14

                Yeah; it gets steps 1-3 right, 4-6 obviously wrong, and then 7-9 subtly wrong such that a student, who needs it step by step while learning, can't tell.

        • By moffkalast 2026-01-3021:01

          That's roughly what we did as well. Use anything you want, but in the end you have to be able to explain the process and the projects are harder than before.

          If we can do more now in a shorter time then let's teach people to get proficient at it, not arbitrarily limit them in ways they won't be when doing their job later.

        • By aesch 2026-01-3016:53

          Props to the teacher for putting in the work to thoughtfully grade an AI transcript! As I typed that, I wondered if a lazy teacher might then use AI to grade the student's AI transcript.

      • By Waterluvian 2026-01-3015:371 reply

        I think it's a bit like the Dunning-Kruger effect. You need to know what you're even asking for and how to ask for it. And you need to know how to evaluate if you've got it.

        This actually reminds me so strongly of the Pakleds from Star Trek TNG. They knew they wanted to be strong and fast, but the best they could do is say, "make us strong." They had no ability to evaluate that their AI (sorry, Geordi) was giving them something that looked strong, but simply wasn't.

        • By JoelMcCracken 2026-01-3021:00

          Oh wow this is a great reference/image/metaphor for "software engineers" who misuse these tools - "the great pakledification" of software

      • By icedchai 2026-01-310:341 reply

        Yep, I've seen a couple of folks pretending to be junior PMs, thinking they can replace developers entirely. The problem is, they can't write a spec. They can define a feature at a very high level, on a good day. They resort to asking one AI to write them a spec that they feed to another.

        It's slop all the way down.

        • By graemep 2026-01-315:47

          People have tried that with everything from COBOL to low-code. It's even succeeded in some problem domains (e.g. things people code with spreadsheet formulas), but there is no general solution that replaces programmers entirely.

      • By fatherwavelet 2026-01-31 11:31 (1 reply)

        A "commoner"... Could you possibly be more full of yourself?

        • By enlyth 2026-01-31 12:11

          That was literally the opposite of my intention. Maybe the choice of word wasn't perfect, but basically, I was trying to highlight that domain expertise is still valuable in the specific scenario of software engineering.

          The same could be said about any other job, if you put me against a construction worker and give us both expensive power tools, he will still do a better job than me because I have no experience in that domain.

      • By bambax 2026-01-30 15:44

        Agree totally.

    • By Buttons840 2026-01-30 15:51 (1 reply)

      My job is to make people who have money think I'm indispensable to achieving their goals. There's a good chance AI can fake this well enough to replace me. Faking it would be good enough in an economy with low levels of competition; everyone can judge for themselves if this is our economy or not.

      • By HappMacDonald 2026-01-31 1:35

        I mean it sounds to me like a beautiful corporate poison. :)

    • By figassis 2026-01-30 18:02 (1 reply)

      I don’t think this is the issue “yet”. It’s that no matter what class you are, your CEO does not care. Mediocre AI work is enough to give them immense returns and an exit. He’s not looking out for the unfortunate bag holders. The world has always had tolerance for highly distributed crap. See Windows.

      • By dasil003 2026-01-30 21:01 (1 reply)

        This seems like a purely cynical take lacking any substantive analysis.

        Despite whatever nasty business practices and shitty UX Windows has foisted on the world, there is no denying the tremendous value that it has brought, including impressive backwards compatibility that rivals some of the best platforms in computing history.

        AI shovelware pump-n-dump is an entirely different short term game that will never get anywhere near Microsoft levels of success. It's more like the fly-by-nights in the dotcom bubble that crashed and burned without having achieved anything except a large investment.

        • By figassis 2026-01-30 21:19

          You misunderstand me. While I left Windows over a decade ago, I recognize it was a great OS in some aspects. I was referring to the recent AI-fueled Windows developments and ad-riddled experiences. Someone decided that is fine, and you won't see orgs or regular users drop it...tolerance.

    • By HorizonXP 2026-01-30 15:29 (1 reply)

      This is actually a really good description of the situation. But I will say, as someone who prided himself on being the second type you described, I am becoming very concerned about how much of my work was misclassified. It does feel like a lot of the work I did in the second class is being automated, where maybe previously it overinflated my ego.

      • By skydhash 2026-01-30 15:36 (2 replies)

        SWE is more like Formula 1, where each race presents a unique combination of track, car, driver, and conditions. You may have tools to build the thing, but designing the thing is the main issue. Code editor, linter, test runner, build tools are for building the thing. Understanding the requirements and the technical challenges is designing the thing.

        • By Waterluvian 2026-01-30 15:41

          The other day I said something along the lines of, "be interested in the class, not the instance" and I meant to try to articulate a sense of metaprogramming and metaanalysis of a problem.

          Y is causing Z and we should fix that. But if we stop and study the problem, we might discover that X causes the class of Y problem so we can fix the entire class, not just the instance. And perhaps W causes the class of X issue. I find my job more and more being about how far up this causality tree can I reason, how confident am I about my findings, and how far up does it make business sense to address right now, later, or ever?

        • By altmanaltman 2026-01-30 15:49 (2 replies)

          is it? I really fail to see the metaphor as an F1 fan. The cars do not change that much; only the setup does, based on track and conditions. The drivers are fairly consistent through the season. Once a car is built and a pecking order is established in the season, it is pretty unrealistic to expect a team with a slower car to outcompete a team with a faster car, no matter what track it is (since the conditions affect everyone equally).

          Over the last 16 years, Red Bull has won 8 times, Mercedes 7 times and McLaren 1. Which means, regardless of the change in tracks and conditions, the winners are usually the same.

          So either every other team sucks at "understanding the requirements and the technical challenges" on a clinical basis or the metaphor doesn't make a lot of sense.

          • By skydhash 2026-01-30 15:57

            Most projects don’t change that much either. Head over to a big open source project, and more often than not you will only see tweaks. Being able to make those tweaks requires a very good understanding of the whole project (Naur’s theory of programming).

            Also in software, we can do big refactors. F1 teams are restricted to the version they’ve put in the first race. But we do have a lot of projects that were designed well enough that they’ve never changed the initial version, just build on top of it.

          • By Waterluvian 2026-01-30 15:51 (1 reply)

            I wonder how true this was historically. I imagine race car driving had periods of rapid, exciting innovation. But I can see how a lot of it has probably reached levels of optimization where the rules, safety, and technology change well within the realm of diminishing returns. I'm sure there's still a ridiculous amount of R&D though? (I don't really know race car driving)

            • By altmanaltman 2026-01-30 17:09 (1 reply)

              Sure, there are crazy levels of R&D, but that mostly happens in the off season or when there is a change in regulations, which happens every 4-5 years usually. Interestingly, this year the entire grid starts with new regs and we don't really know the pecking order yet.

              But my whole point was that race to race, it really isn't that much different for the teams as the comment implied and I am still kind of lost how it fits to SWE unless you're really stretching things.

              Even then, most teams don't even make their own engines, etc.

              • By skydhash 2026-01-30 21:31

                Do you really think that a rainy Canada is the same as Jeddah, or Singapore? And what is the purpose of the free practice sessions?

                You’ve got the big bet of designing the car between seasons (which is kind of like the big architectural decisions you make at the beginning of the project). Then you’ve got the refinement over the season, which is like bug fixes and performance tweaks. There are the parts upgrades, which are like small features added on top of the initial software.

                For the next season, you either improve on the design or start from scratch depending on what you’ve learned. In the first case, it is the new version of the software. In the second, that’s the big refactor.

                I remember that the reserve drivers may do a lot of simulations to provide data to the engineers.

    • By crazylogger 2026-01-30 16:19

      You are describing traditional (deterministic?) automation before AI. With AI systems as general as today's SOTA LLMs, they'll happily take on the job regardless of whether the task falls into class I or class II.

      Ask a robot arm "how should we improve our car design this year", it'll certainly get stuck. Ask an AI, it'll give you a real opinion that's at least on par with a human's opinion. If a company builds enough tooling to complete the "AI comes up with idea -> AI designs prototype -> AI robot physically builds the car -> AI robot test drives the car -> AI evaluates all prototypes and confirms next year's design" feedback loop, then theoretically this definitely can work.

      This is why AI is seen as such a big deal - it's fundamentally different from all previous technologies. To an AI, there is no line that would distinguish class I from II.

    • By mips_avatar 2026-01-30 19:49

      Well a lot of managers view their employees as doing the former, but they’re really doing the latter

    • By raincole 2026-01-30 15:39 (2 replies)

      > I know a lot of counter arguments are a form of, “but AI is automating that second class of job!”

      Uh, it's not the issue. The issue is that there isn't that much demand for the second class of job. At least not yet. The first class of job is what feeds billions of families.

      Yeah, I'm aware of the lump of labour fallacy.

      • By Waterluvian 2026-01-30 15:47

        Discussing what we should do about the automation of labour is nothing new and is certainly a pretty big deal here. But I think you're reframing/redirecting the intended topic of conversation by suggesting that "X isn't the issue, Y is."

        It wanders off the path like if I responded with, "that's also not the issue. The issue is that people need jobs to eat."

      • By blktiger 2026-01-30 16:08

        It depends a lot on the type of industry I would think.

  • By gipp 2026-01-30 14:14 (8 replies)

    I see a lot of the same (well thought out) pushback on here whenever these kinds of blind hype articles pop up.

    But my biggest objection to this "engineering is over" take is one that I don't see much. Maybe this is just my Big Tech glasses, but I feel like for a large, mature product, if you break down the time and effort required to bring a change to production, the actual writing of code is like... ten, maybe twenty percent of it?

    Sure, you can bring "agents" to bear on other parts of the process to some degree or another. But their value to the design and specification process, or to live experiment, analysis, and iteration, is just dramatically less than in the coding process (which is already overstated). And that's without even getting into communication and coordination across the company, which is typically the real limiting factor, and in which heavy LLM usage almost exclusively makes things worse.

    Takes like this seem to just have a completely different understanding of what "software development" even means than I do, and I'm not sure how to reconcile it.

    To be clear, I think these tools absolutely have a place, and I use them where appropriate and often get value out of them. They're part of the field for good, no question. But this take that it's a replacement for engineering, rather than an engineering power tool, consistently feels like it's coming from a perspective that has never worked on supporting a real product with real users.

    • By simonw 2026-01-30 14:48 (2 replies)

      I'm not sure you're actually in disagreement with the author of this piece at all.

      They didn't say that software engineering is over - they said:

      > Software development, as it has been done for decades, is over.

      You argue that writing code is 10-20% of the craft. That's the point they are making too! They're framing the rest of it as the "talking", which is now even more important than it was before thanks to the writing-the-code bit being so much cheaper.

      • By krupan 2026-01-31 3:18 (1 reply)

        When we say generating code is only a small percentage, that does not imply that the rest is just talking. Simon, you were part of a relatively small, fast-moving project in Django and the news website it powered, with, from what I understand, a pretty small team. Have you worked as part of a team of 10, 20, 100, 1000 engineers? It's different.

        • By simonw 2026-01-31 5:22

          "Talking" here doesn't literally mean talking. It means figuring out the scope of the problem, researching solutions, communicating with stakeholders, debating architecture, building exploratory prototypes, breaking down projects - it's all the stuff that isn't writing the code.

          I've worked at various sizes of organization. Most notably I joined Eventbrite when they were less than 100 developers and stayed while they grew to around 1,000.

      • By Imustaskforhelp 2026-01-30 15:22

        > Software development, as it has been done for decades, is over.

        Simon, I guess vb-8558's comment here is really nice (definitely worth a read); they mention how much coding has changed from, say, 1995 to 2005 to 2015 to 2025.

        Directly copying a line from their comment here: "For sure, we are going through some big changes, but there is no 'as it has been done for decades'."

        Recently Economic Media made a relevant video about all of this too: How Replacing Developers With AI is Going Horribly Wrong [https://www.youtube.com/watch?v=ts0nH_pSAdM]

        My point is that this pure mentality of "code is cheap, show me the talk" is weird/net negative (even if I may talk more than I code), simply because code and coding practices are something that I can learn and hone over my experience, whereas talk itself, to me, means non-engineers trying to create software, and that's all great, but without really understanding the limitations (that still exist).

        So the point I am trying to make is that when the OP mentioned code is 10-20% of the craft, they didn't mean the rest is talk. They meant all the rest is architectural decisions and everything surrounding the code. Quite frankly, the idea behind AI/LLMs is to automate that too and convert it into pure text, and I feel like the average layman significantly overestimates what AI can and cannot do.

        So the whole notion of "show me the talk", at least as more people from non-engineering backgrounds try it, might be net negative, with people not really understanding the tech as it is; quite frankly, even engineers are having a hard time catching up with all that is happening.

        I do feel like the AI industry just has too many words floating around right now. To be honest, I don't want to talk right now; let me use the tool, see how it goes, and have a moment of silence. The whole industry is moving faster than even the new-JS-framework-every-day days.

        To have a catchy end to my comment: There is just too much talk nowadays. Show me the trust.

        I do feel like information has become saturated and we are transitioning from the "information" age to "trust" age. Human connections between businesses and elsewhere matter the most right now more than ever. I wish to support projects which are sustainable and fair driven by passion & then I might be okay with AI use case imo.

    • By techblueberry 2026-01-30 14:24

      Yeah, in a lot of ways, my assertion is that "code is cheap" actually means the opposite of what everyone thinks it does. Software engineering is even more about the practices we've been developing over the past 20 or so years, not less.

      Like Linus’ observation still stands. Show me that the code you provided does exactly what you think it should. It’s easy to prompt a few lines into an LLM; it’s another thing to know exactly how to safely and effectively change low-level code.

      Liz Fong-Jones told a story on LinkedIn about this at Honeycomb: she got called out for dropping a bad set of PRs in a repo, because she didn’t really think about the way the change was presented.

    • By patrickmay 2026-01-30 14:26

      > Takes like this seem to just have a completely different understanding of what "software development" even means than I do, and I'm not sure how to reconcile it.

      You're absolutely right about coding being less than 20% of the overall effort. In my experience, 10% is closer to the median. This will get reconciled as companies apply LLMs and track the ROI. Over a single year the argument can be made that "We're still learning how to leverage it." Over multiple years the 100x increase in productivity claims will be busted.

      We're still on the upslope of Gartner's hype cycle. I'm curious to see how rapidly we descend into the Trough of Disillusionment.

    • By wrs 2026-01-30 16:39 (1 reply)

      My recent experience demonstrates this. I had a couple weeks of happily cranking out new code and refactors at high speed with Claude’s help, then a week of what felt like total stagnation, and now I’m back to high velocity again.

      What happened in the middle was I didn’t know what I wanted. I hadn’t worked out the right data model for the application yet, so I couldn’t tell Claude what to do. And if you tell it to go ahead and write more code at that point, very bad things will start to happen.

      • By chasd00 2026-01-30 20:43

        I've been using LLMs through the web to help with discrete pieces of code and scripts for a while now. I’ve been putting it off (out of fear?) but I finally sat down with Claude Code on the console and an empty directory to see what the fuss was about. Over about a total of 4 hrs and maybe $15 pay as you go, it became clear things are drastically different now in web dev. I’m not saying it changed for good or bad, just that things have definitely changed and will never go back.

    • By mehagar 2026-01-30 15:41

      The book Software Engineering at Google makes a distinction between software engineering and programming. The main difference is that software engineering occurs over a longer time span than programming. In this sense, AI tools can make programming faster, but not necessarily software engineering.

    • By jatins 2026-01-30 20:06

      Did you read the article? Author is one of the more thoughtful and least hype guys you'll find when it comes to these things

    • By mupuff1234 2026-01-30 14:30

      They're also great for writing design docs, which is another significant time sink for SWEs.

HackerNews