Grief and the AI split

2026-03-12 22:35 · blog.lmorchard.com

TL;DR: AI-assisted coding is revealing a split among developers that was always there but invisible when we all worked the same way. I've felt the grief too—but mine resolved differently than I expected, and I think that says something about what kind of developer I've been all along.

Silhouette of trees during sunset
Photo by Lucian on Unsplash

A few months ago, I wrote about how making computers do things is fun. The gist: I've never been in it for the elegance of code. I've been in it for the result. I learned BASIC on a Commodore 64 at age 7 not because BASIC was beautiful—it wasn't—but because I wanted to make things happen on screen. Then I learned 6502 assembly because BASIC was too slow for what I wanted to do.

That post was my attempt to explain why AI coding tools felt like a natural fit for me. But since then, I've been reading other people's reactions to this moment, and I want to come back to it—because I think something bigger is going on.

The Mourning

James Randall wrote a piece that hit a nerve. He's been programming since age 7, like me—started in 1983, a year after I did. But his experience of this moment is subtly different from mine:

The wonder is harder to access. The sense of discovery, of figuring something out through sheer persistence and ingenuity — that’s been compressed. Not eliminated, but compressed. And something is lost in the compression, even if something is gained.

Nolan Lawson put it more starkly in "We Mourn Our Craft":

We’ll miss the feeling of holding code in our hands and molding it like clay in the caress of a master sculptor. We’ll miss the sleepless wrangling of some odd bug that eventually relents to the debugger at 2 AM. We’ll miss creating something we feel proud of, something true and right and good. We’ll miss the satisfaction of the artist’s signature at the bottom of the oil painting, the GitHub repo saying “I made this.”

These are real feelings about real losses. I'm not here to argue otherwise. But reading these posts, I kept having this nagging sense that we were mourning different things—and that the difference mattered.

The Split

Here's what I think is happening: AI-assisted coding is exposing a divide among developers that was always there but maybe less visible.

Before AI, both camps were doing the same thing every day. Writing code by hand. Using the same editors, the same languages, the same pull request workflows. The craft-lovers and the make-it-go people sat next to each other, shipped the same products, looked indistinguishable. The motivation behind the work was invisible because the process was identical.

Now there's a fork in the road. You can let the machine write the code and focus on directing what gets built, or you can insist on hand-crafting it. And suddenly the reason you got into this in the first place becomes visible, because the two camps are making different choices at that fork.

I wrote about this split before in terms of my college math and CS classes—some of us loved proofs and theorems for their own sake, some of us only clicked with the material when we could apply it to something. I tended to do better in classes focused on application and did not-so-well in pure math classes. (I still want to take another crack at Calculus someday, but that's a different story.)

My Grief Was Different

I want to be clear: I've felt grief too. I've gone through a real adjustment period over the past 18-24 months.

I was afraid I wouldn't be able to understand the new tools. But, it seems I have. I was worried I wouldn't understand the output—that I'd lose my ability to judge whether the code was actually right. But it turns out decades of reading and reviewing code doesn't evaporate. I can still tell when something's wrong and I still have taste.

I was afraid the puzzle-solving was over. But it wasn't—it just moved up a level. Which, when I think about it, is exactly what's happened at every other transition in my career. I went from placing bytes in memory on a C64 to writing functions to designing systems. The puzzle got more abstract each time, but it never stopped being a puzzle. Now the puzzle is architecture, composition, directing the assistant. It's different. It's still satisfying—to me, anyway.

So most of my fears got tested against reality and thankfully didn't hold up. But some grief stuck.

The Grief That Sticks

I grieve for the web I've known. Not for writing HTML by hand—I grieve for the open web as an ecosystem, threatened in ways that have nothing to do with whether I personally type code or not. AI training on the commons, the further consolidation of who gets to shape how people experience the internet—that's a real loss, and it doesn't go away because I'm personally more productive.

I grieve for the career landscape shifting under me. Webdev's been my thing for over three decades. But, it's not a hot field anymore, whether folks like it or not: mobile apps took some big bites, but AI engineering is really ruling the roost now. I spent time worrying whether I could make that shift. I think I'm making it, but the anxiety is genuine, and I don't think it's finished.

Here's what I notice about my grief: none of it is about missing the act of writing code. It's about the world around the code changing. The ecosystem, the economy, the culture. That's a different kind of loss than what Randall and Lawson are describing. Theirs is about the craft itself. Mine is about the context and the reasons why we're doing any of this.

Kevin Lawver wrote a response to Lawson that I mostly agree with. He argues for redirecting craft and passion rather than clinging to how things were. But I'd go further than framing it as nostalgia vs. pragmatism. (And you know I've got plenty of nostalgia—but that doesn't pay my mortgage.)

I think recognizing which kind of grief you're feeling is the actually useful thing here. If you're mourning the loss of the craft itself—the texture of writing code, the satisfaction of an elegant solution—that's real, and no amount of "just adapt" addresses it. You might need to find that satisfaction somewhere else, or accept that work is going to feel different. Frankly, we've been lucky there's been a livelihood in craft up to now.

If you're mourning the context—the changing web, the shifting career landscape, the uncertainty—that's real too, but it's more actionable. You can learn new tools. You can push for the web you want, even if it's a small web. You can grieve and adapt at the same time.

I've been trying to do a lot of both. I'm not entirely sure how it's going. I do really feel for what Nolan Lawson said here:

I don’t celebrate the new world, but I also don’t resist it. The sun rises, the sun sets, I orbit helplessly around it, and my protests can’t stop it. It doesn’t care; it continues its arc across the sky regardless, moving but unmoved.

But, you know, I'd be lying if I said it didn't excite me a bit, amidst the grief & terror.

Make Computer Do Thing

I started programming in 1982. Every language I've learned since then has been a means to an end—a new way to make computers do things I wanted them to do. AI-assisted coding feels like the latest in that progression. Not a discontinuity per se, just another rung on the ladder.

But I'm trying to hold that lightly. Because the ladder itself is changing, the building it's leaning against is changing, and I'd be lying if I said I knew exactly where it's going.

What I do know is this: I still get the same hit of satisfaction when something I thought up and built actually works. The code got there differently than it used to, but the moment it runs and does the thing? That hasn't changed in my over 40 years at it.



Comments

  • By wiml 2026-03-13 0:32 · 6 replies

    I think the article misunderstands completely. "Craft" coders are chasing results too — we're just chasing results that last and that can be built upon. I've been in this game for a while, and a major goal of every single good programmer I've known has been to make themselves obsolete. Yes, I enjoyed meticulous hand-crafted assembly, counting cycles and packing bits, but nobody had to talk me into using compilers. Yes, I've spent many fruitful hours writing basic CRUD apps, but now that it's easily done by libraries/frameworks, I'm not eager to go back. Memory management, type systems, higher level languages, no-/low-code systems that completely remove me from some parts of the design loop, etc etc etc. All great: the point of computer programming is to have the computer do things so we don't have to.

    I think the real divide we're seeing is between people who saw software as something that is, fundamentally, improvable and understandable; and people who saw it as a mysterious roadblock foisted upon them by others, that cannot really be reasoned about or changed. And oddly, many of the people in the second category use terminology from the first, but fundamentally do not believe that the first category really exists. (Fair enough; I was surprised at the second category.) It's not about intelligence or whatever, it's a mindset or perspective thing.

    • By msteffen 2026-03-13 17:31

      Yeah, I like this framing a lot. There comes a point, after working on a system for a while, when there are no details: every aspect of how the system works is understood to be in some way significant. If one of those details is changed, you understand what the implications of that change will be for the rest of the system, its users, etc. I worry that in a post-AI software world, that’ll never happen. The system will be so full of code you’ve barely looked at, understanding it all will be hopeless. If a change is proving impossible to make without introducing bugs, it will be more sensible to AI-build a new system than understand the problem.

      I sometimes wonder if modularity will become even more important (as it has in physical construction, e.g. with the move from artisanal, temperamental plaster to cheap, efficient drywall), so that systems that AI is not able to reliably modify can easily be replaced.

    • By randomNumber7 2026-03-13 6:56 · 1 reply

      > It's not about intelligence or whatever, it's a mindset or perspective thing.

      I agree with everything except this last sentence. What you wrote looks highly intelligent and I would suspect a lot of people in the second camp are not up to par with this.

      • By hinkley 2026-03-13 18:03

        Wise+intelligent people realize that intellectualizing every fucking thing in their life is a trap, and that they should rely more on other things. Not the least of which is buy-in, which is often not about rationality either.

        It might take intelligence to notice the problem, but we all know people who haven’t and some who might never. My previous job has the oldest I’ve ever met and I wanted to strangle him at least once a week. I haven’t added anyone to my Do Not Hire list in over a decade. Except him. He made himself indispensable at every opportunity and had some of the most convoluted code (and vocabulary) I’ve encountered in a long time. I spent way too much time extracting his claws from code I’d written that he made an absolute hash of.

    • By svilen_dobrev 2026-03-13 18:30

      i wonder, is the divide something like Pirsig's classical vs romantic ? First see mostly-functionality / facts, want to understand to the bones, latter see .. appearance / utility / feelings, want to just-get-on-with-it ?

    • By simianwords 2026-03-13 7:04 · 5 replies

      You are repeating the same thing. You think having good, maintainable code is important - more than the first camp does.

      That does not mean you are correct. This mindset is useful only in serious reusable libraries and open source tools. Most enterprise code involves lots of exploring and fast iteration. Code quality doesn’t matter that much. No one else is going to see it.

      When the craft coders bring their ideology to this set up, it starts slowing things down because they are optimising for the wrong target.

      • By layer8 2026-03-13 19:20

        When the code is low-quality, you can’t reason well about it, can’t reason well about what changes to apply and what their effects will be, can’t reason about what the outcome will be when making changes to the context (inputs, environment) the code is run in. Instead everything becomes an experiment on a black box or gray box, whose behavior you can’t predict well in advance.

        Engineering is the task of making things behave predictably and reliably. Because software is malleable and rarely “finished”, this applies to changing software as well.

        I’m pretty sure that there is more than one divide regarding AI among developers, but one of the dividing lines concerns the predictability and reason-ability of the tools and the code. AI fundamentally lacks these qualities.

      • By hinkley 2026-03-13 18:08

        > Most enterprise code involves lots of exploring and fast iteration.

        And when the code base is 250,000 lines of garbage all the way down, this is impossible. The projects where I’ve had free rein to kill tech debt and SDLC footguns with impunity have all been the ones where velocity maintained or increased over time. All the rest have ground to a halt.

        There’s value in customers believing you’ll get there soon and the code will actually work. They can be patient if you have an air of competence. But they will switch to some other clown car if it’s cheaper than your clown car.

      • By eikenberry 2026-03-13 18:09

        > Code quality doesn’t matter that much. No one else is going to see it.

        You seem to be claiming that enterprise shops never adopted code reviews. Interesting if true.

      • By wiseowise 2026-03-13 8:29 · 1 reply

        > Code quality doesn’t matter that much. No one else is going to see it.

        This is just false for anyone who has worked in the industry for any meaningful amount of time. Have you seriously never encountered a situation where a change was supposedly easy on the surface, but some stupid SoB before you wrote it so badly that you want to pull your hair out trying to make it work without rewriting this crap codebase from scratch?

        • By simianwords 2026-03-13 8:36 · 1 reply

          at least where i have worked, you need to identify the context. certain projects require good readable code and certain projects require you to iterate fast and explore.

          in my experience very few projects were serious enough that required such scrutiny in code.

          • By FuckButtons 2026-03-13 8:54 · 2 replies

            Sounds like you’ve never had a prototype become foundational infrastructure before, or dealt with someone else’s.

            • By edgyquant 2026-03-13 11:12

              I have, many times, and if you spend too long over-architecting a prototype you start to get annoyed looks and tons of questions from PMs who just want something that looks right today (we can fix it/optimize it later)

            • By simianwords 2026-03-13 10:33 · 2 replies

              you can always change it later. this is exactly the dogmatism i'm speaking about - you need to prioritise pushing things. the clean up can come later.

              ironically it is your camp that advises to not use microservices but to start with a monolith. that's what i'm suggesting here.

              • By discreteevent 2026-03-13 12:00

                > You can always change it later.

                People seem to think that technical debt doesn't need to be paid back for ages. In my experience bad code starts to cost more than it saved after about three months. So if you have to get a demo ready right now that will save the company then hack it in. But that's not the case for most technical debt. In most cases the management just want the perception of speed so they pile debt upon debt. Then they can't figure out why delivery gets slower and slower.

                > ironically it is your camp that advices to not use microservices but start with monolith. that's what i'm suggesting here.

                I agree with this. But there's a difference between over-engineering and hacking in bad quality code. So to be clear, I am talking about the latter.

              • By skydhash 2026-03-13 11:49 · 1 reply

                > you can always change it later. this is exactly the dogmatism i'm speaking about - you need to prioritise pushing things. the clean up can come later.

                Everyone who says this has never been the one who had to fix the code later. They have already moved on to the next job (or have been fired). Engineers do know the tradeoff between quality and speed, and can do a hack if that's what's needed to get the project to the finish line. But good ones will note down the hack and resolve it later. Bad ones will pat themselves on the back and add more hacks on top of that.

      • By suddenlybananas 2026-03-13 7:09 · 1 reply

        I think your target is the wrong target myself. Now what?

        • By simianwords 2026-03-13 7:10 · 1 reply

          If more people think like you we won’t have jobs because the company won’t make a profit

          • By wiseowise 2026-03-13 8:31 · 1 reply

            If people think like you we won’t have jobs because everyone would fucking die when cars, MRI machines, nuclear power plants and ICBMs, airplanes, infra, payments start misbehaving. Now what?

            • By simianwords 2026-03-13 8:33 · 2 replies

              this is a category error that i specifically called out in my comment.

              • By oytis 2026-03-13 12:32 · 1 reply

                What is the category of code that does not need quality? It would have to not interact with the real world, with people's finances, with people's personal data. Basically it's the code that only exists for PMs to show to investors (in startups) and VPs (in enterprise), but not for real users to rely on.

                • By aleph_minus_one 2026-03-13 13:03 · 1 reply

                  > What is the category of code that does not need quality?

                  For example there exist "applications"/"demos" that exist "to show the customer what could be possible if they hire 'us'". These demos just have to survive a, say, intense two-hour marketing pitch and some inconvenient questions/tests that someone in the audience might come up with during these two hours.

                  In other words: applications for "pitching possibilities" to a potential customer, where everything is allowed to be smoke and mirrors if necessary (once the customer has been convinced with all tricks to hire the respective company for the project, the requirements will completely change anyway ...).

                  • By oytis 2026-03-13 13:12

                    Yeah, that's what I mean - prototypes. The caveat is though that before agentic coding skills to build a prototype and skills to build a production system were generally the same, so a prototype did not only provide a demonstration of what is possible in general, but what your team of engineers can do specifically. Now these skills will diverge, so prototypes will not prove anything like that. They are still going to be useful for demonstrations and market research though.

              • By wiseowise 2026-03-13 9:42 · 1 reply

                Where?

                > That does not mean you are correct. This mindset is useful only in serious reusable libraries and open source tools. Most enterprise code involves lots of exploring and fast iteration. Code quality doesn’t matter that much. No one else is going to see it.

                Here? Most of those that I’ve listed IS boring enterprise code. Unless we’re talking medical/military grade.

                • By simianwords 2026-03-13 10:29 · 1 reply

                  fair, you have presented specific niche where the ~quality~ correctness is important in enterprise - not just libraries.

                  but most people aren't writing code in those places. its usually CRUD, advertisement, startups, ecommerce.

                  also there are two things going on here:

                  - quality of code

                  - correctness of code

                  in serious reusable libraries and opensource tools, quality of code matters. the interfaces, redundancy etc.

                  but that's not exactly equal to correctness. one can prioritise correctness without dogmatism in craft like clean code etc.

                  in most of these commercial contexts like ecommerce, ads - you don't need the dogmatism that the craft camp brings. that's the category error.

                  • By skydhash 2026-03-13 11:56 · 1 reply

                    Maybe you’re too entrenched in the web section of software development. Be aware that there’s a lot of desktop and system software out there.

                    Even in web software, you can write good code without compromising in delivery speed. That just requires you to be good at what you’re doing. But the web is more forgiving of mistakes and a lot of frameworks have no taste at all.

                    • By simianwords 2026-03-13 12:49

                      Do you think more SDEs work in mission-critical software or in the areas I mentioned?

    • By harpiaharpyja 2026-03-13 14:19

      Thank you for articulating that.

    • By wiseowise 2026-03-13 8:33

      > and a major goal of every single good programmer I've known has been to make themselves obsolete.

      I’ve always heard this mantra when coders were thinking they’re untouchable, not so much now.

  • By wolvesechoes 2026-03-13 7:52 · 3 replies

    The real split is between people who believe technological progress is good in itself and by the law of nature always makes life better and easier, and people who know the history and know that stuff like the 8-hour workday wasn't spat out of a steam engine - it had to be fought for through political struggle, because the actual "natural" consequence of increased productivity was an increase in workload.

    • By Gud 2026-03-13 8:40 · 2 replies

      The real split is between the capital owners, who live on our labour, typically through inheritance of a piece of paper that says they own a percentage of what I make.

      • By khafra 2026-03-13 9:32 · 7 replies

        Whether the labor theory of value is right or wrong, the "real split" you describe will soon no longer exist. Capital owners will live on the labor of their capital. Non-capital-owners will live on the largesse of capital, or will not live at all.

        Unless we muster the political will to stop AI development, internationally, until we can be certain of our ability to durably imbue it with the intrinsic desire to keep humans around, doing human things.

        • By grafmax 2026-03-13 11:38 · 1 reply

          Capital is a commodity, just like a business' product. It does not produce value. Labor does. This is a central point of LTV!

          We witnessed the same thing with looms and other automation in the Industrial Revolution. Capital that helps you produce more. But owners faced with increased competition under commoditized production see their profit margins fall. Thus they will turn to squeezing workers - the source of value - for profit in the newly commoditized landscape - exactly what happened during the Industrial Revolution. It was only when workers got their act together and organized that this decline was stopped and reversed.

          • By khafra 2026-03-13 13:59

            Ok, but when the looms can autonomously analyze the market, design the products, organize purchasing and sales channel, run production, and deliver the products, the workers will not be squeezed. We will be discarded.

        • By snek_case 2026-03-13 13:09 · 1 reply

          > imbue it with the intrinsic desire to keep humans around, doing human things.

          It's not the AI you have to convince, it's your government and the people running tech companies. Dario Amodei was cheering for AI to take all programming jobs (along with the others). If that happened, it would be an unmitigated disaster for millions of people. Imagine a student who comes out of a CS major with tons of student debt. How much sympathy does Dario feel for this person? Getting him to STFU would be a good first step.

          > the political will to stop AI development

          The reason that's not likely is that it's an arms race. You stop AI research here, but how can you trust that China and Russia are doing the same? Unlike nuclear bombs, the potential harms are less tangible.

          • By bluefirebrand 2026-03-13 16:07

            > Imagine a student who comes out of a CS major with tons of student debt. How much sympathy does Dario feel for this person? Getting him to STFU would be a good first step.

            I don't need to imagine this student, I'm friends with some who are going through this right now. They graduated almost a year ago and haven't found work yet. One of them jokes about suicide often and I don't know how to help him

            The social contract between labour and capital has been frayed for a long time, but it is near breaking now. It's going to get worse, maybe a lot worse, before it gets better. If it ever does get better

        • By chrisvalleybay 2026-03-13 12:18 · 3 replies

          I think there's a piece missing here. Capital owners are humans too, and what humans want (perhaps especially the ones who accumulate capital), is to be at the top of a hierarchy. But a hierarchy needs participants. If nobody else is playing the game, there's no top to be on top of. Strip away the people willing to compete, admire, envy, or just show up, and the whole structure collapses. It's not clear that a world of pure capital-on-AI-labor actually gives them what they're after. It sounds lonely and meaningless to me. I don't think that it would feed the black hole in their chests.

          • By khafra 2026-03-13 13:29

            I think it's much more likely that the AI turns out not to be as compliant as the capital owners expect, and they die too.

            However, that's not useful in predicting what capital owners will do, because they follow their local incentives. "If everyone keeps doing X, we will all be worse off" does not help unless you can create local incentives that point toward an equilibrium where everyone stops doing X.

            In this case, no capital owner is individually better off by unilaterally refusing to chase more efficient returns on their capital. We would need an international agreement, with enforcement mechanisms, like I mentioned above.

          • By wolvesechoes 2026-03-13 12:48 · 1 reply

            A lot of effort was spent to naturalize the current state of affairs and value system, even if there is nothing natural and obvious about it. Humans for millennia have lived with much higher political and social flexibility, with hierarchies built and torn down even seasonally, or with the role of property and wealth shifting back and forth.

            Of course the structure exists because we allow it, that's the easy part. Hard part is - why do we allow it?

            • By chrisvalleybay 2026-03-13 12:59

              I think in part because we have a black hole in our chest, and we are searching for ways to fill it. We attempt to fill it through worship at the altar of materialism, celebrity, etc. We are doing this to quiet the roar from the black hole. Actually stepping away would require us to sit with stillness, and then to forge a new path, a new life. It's frightening.

          • By eloisius 2026-03-13 14:54

            They can still keep us as pets, or use us for cage fighting spectacles.

        • By forgetfreeman 2026-03-13 10:02 · 1 reply

          "Non-capital-owners will live on the largesse of capital, or will not live at all."

          That's been tried several times now and has a tendency to end very badly for capital. You'd think folks with even a grade school level of historical literacy would know better than to stick a fork in that outlet.

          • By khafra 2026-03-13 11:12 · 2 replies

            It's never been tried before. Capital always required human labor, to be productive. Capital has never closed in on the ability to operate, maintain, defend, and expand itself without human assistance, as it is closing in on that ability now.

            • By daveguy 2026-03-13 11:53 · 2 replies

              It's really not. The capital owners just think it is.

              • By khafra 2026-03-13 12:51 · 1 reply

                We'd all be a lot safer--even the capital owners--if today's robotics and multimodal intelligence were near the ceiling of what's possible, or even near the bend in the logistic curve where things slow down a lot.

                I haven't seen evidence of that. I see evidence of rapid advances in task length, general capabilities, and research and development capabilities in AI, and generality, price, and autonomy in robotics.

                How much headroom in these capabilities do you believe we have, before a data center can protect and maintain itself and an on-site power plant? Before robots can run a robot factory?

                • By daveguy 2026-03-13 13:18 · 1 reply

                  I think we are still 25+ years away from that kind of automation. People are still confusing plausible text generation with adaptable dynamic intelligence. See also: "Shall I implement it? No."[0] We are getting some awesome tools that sound like science fiction from decades ago, but the intelligence is hollow and brittle. In my opinion, we just don't have the algorithms or computational bandwidth necessary.

                  [0] https://news.ycombinator.com/item?id=47357042

                  • By khafra 2026-03-13 13:34

                    We're absolutely not there yet, algorithmically or with compute. Algorithms keep getting better, though, despite the bitter lesson; and data centers keep getting bigger.

                    If you showed a conversation between Terry Tao or Steve Yegge and their AI collaborators to someone from 2021, they would consider it beyond obvious that it's AGI. Today, we know they still have some shortcomings; but in another 5 years, what looks to us today like it's beyond obviously ASI may well be enough for catastrophic, irrecoverable outcomes.

              • By gom_jabbar 2026-03-13 13:01

                The real transition would be from human-owned capital to self-owned capital. You are right that current capabilities and autonomy don't allow for that yet.

            • By forgetfreeman 2026-03-13 13:51

              Is that what you think is happening right now? This line of reasoning brings to mind the luxury bunkers in New Zealand that are so popular with a certain type of folks. I'm guessing the sales brochures on those things don't mention stuff like the outcome of an 80lb bag of cement poured into the ventilation or the fact that heavy machinery is ubiquitous and shockingly simple to operate. Thinking capital can decouple itself from the larger populace is comically flawed for similar reasons. See also: XKCD where the crypto guy gets worked over with a wrench for his password.

        • By eloisius 2026-03-13 10:55 · 2 replies

          I, for one, am looking forward to me and a band of my closest friends and family raiding heavily fortified data centers guarded by Boston Dynamics robot dogs to steal clean drinking water for our underground village. We might even hit a caravan of autonomous trucks carrying cricket protein powder in the same night.

          • By bluefirebrand 2026-03-13 16:09

            If I were 20 years younger maybe, but nowadays I'm too old for that

          • By sweetheart 2026-03-13 11:42

            can i come

        • By edgyquant 2026-03-13 11:10 (1 reply)

          The people without capital will just form their own economies and continue to exist, likely they will kill the capital owners as well if it really came to that.

          • By khafra 2026-03-13 11:14 (1 reply)

            What's your plan for beating the autonomous drone swarms without capital?

            • By dns_snek 2026-03-13 13:32

              My plan is that we don't let it get far enough that a small group of people gain control of a fully integrated robotic supply chain powering an unbeatable war machine. If it comes to that then the world is already doomed.

              In practice this currently means voting for political options who can correctly identify concentration of power as the root cause of most of our current and future problems, and who pledge to actually do something about it.

        • By Gud 2026-03-13 9:39

          I agree.

    • By A_Duck 2026-03-13 10:57 (2 replies)

      Interesting to see more of this thinking on Hacker News

      Perhaps one of the secondary effects of AI replacing developers will be mobilising a group of smart, motivated people to the left

      (It's always interesting to think of the secondary effects which kick in past a certain point of growth. High-multiple stock valuations often fail to take these into account. For the East India Company, for example — your company can keep growing until it's the size of a country. But suddenly other countries treat you as a foreign power rather than a pet.)

      • By wolvesechoes 2026-03-13 12:36

        > Interesting to see more of this thinking on Hacker News

        I am on this site because it is one of the less shitty places on the Internet (in terms of usability, privacy etc.) to have some form of discussion, but I never identified as a "hacker", "techie", "entrepreneur" or "temporarily embarrassed billionaire". AI didn't change my view on anything, except it has shown me how blind and naive people can be.

        Of course I tend to focus on aspects that are being discussed here (context of software engineering).

      • By snek_case 2026-03-13 13:19

        I've always thought of myself as more "centrist" (feel free to make fun of me), but seeing so many tech CEOs cheer for layoffs and the destruction of the job market has been a bit of a wake-up call. Also just being confronted with the sheer idiocy of these people. They are making hundreds of millions of dollars a year, but they barely understand the tech they are cheering for. They act as though being broadly "bullish on AI" and overly enthusiastic about its short-term potential were some kind of visionary stance, when in fact they are just repeating the same ideas as every other idiot in the Silicon Valley VC bubble.

        My personal bet would be that in the medium term, there will be a reversal of the idiotic belief that you can immediately just lay off developers because of LLMs. If your developers are more productive because of LLMs, you still have an advantage by having more developers than the competition. There's also a lot of institutional knowledge that's just not documented. You fire key people, you can cripple your organization.

        In the longer term, I think AI will eventually take jobs, and unfortunately, it will have major negative societal impact. I doubt that our governments will be proactive in trying to anticipate this. They will just play damage control. There's probably going to be an anti-AI social movement. You'll have the confluence of more and more disinformation and AI slop online along with more and more job loss. There are probably going to be riots. Some people think UBI is inevitable. I think the problem is that if the government puts UBI in place, they will only give you the minimum necessary so that you don't starve. Just enough to afford to rent a bedroom, eat processed food and stay online all day.

    • By jasonlotito 2026-03-13 13:18 (1 reply)

      The absolutist extremes here are wild. Let's just put the pedal to the metal.

      The real split is between those who support science and technology, and those who hate science and technology and want to see more children die.

      • By wolvesechoes 2026-03-13 15:20 (1 reply)

        The same children that those pushing for "science and technology" were sending to work in the mines during the last spectacular technological development that promised to reduce the need for back-breaking labor?

        • By jasonlotito 2026-03-13 18:32

          Supporting "kill the children" seems like such a bad take, but you do you.

  • By countWSS 2026-03-13 19:40

    Sure, is there anyone nostalgic for debugging bash files by hand? Any sense of grief for writing C++ template headers, with all the boilerplate? Hmm, does anyone like manually rewriting makefiles these days? I suspect the enthusiasts of coding craft will struggle to maintain their wonder after ~4 hours deep in any of these magical adventures, which surely involve inventing ad-hoc duct tape and novel, never-before-seen algorithms.

Hacker News