LLMs can be exhausting

2026-03-15 20:56 · tomjohnell.com

Some days I get in bed after a tortuous 4-5 hour session working with Claude or Codex wondering what the heck happened. It's easy to blame the model - there's so many options to choose from:

  1. They're dumbing down the model to save money.
  2. Context rot!
  3. Codex/Claude Code/[insert harness] is getting bloated.

It's not uncommon for me to come back to the problem the next day, my own context window cleared from rest, and find a fast and fulfilling path forward with the help of the LLM. What's going on?

I'm tired & experiments are too slow

As I get more tired, the quality of my prompts degrades

This one seems pretty obvious. If I am becoming mentally fatigued, I will write worse prompts, and because of that, the AI will do a worse job. Here's an example of what happens when I'm really tired: I kick off a somewhat meaty prompt (after 30% of the context was used to align with the AI on the problem), realize right after submitting that I missed some key context, interrupt the LLM, provide the context, and then have it proceed. Without a doubt, interrupting Claude Code or "steering" in Codex leads to worse outcomes.

Feedback loop is too slow and context is bloated

Some of the work I'm doing right now requires parsing some large files. There are bugs in that parsing logic that I'm trying to work through with the LLM. The problem is, every tweak requires re-parsing, and it's a slow process. I liken it to a slot machine that takes 10 minutes to spin. To add insult to injury, some of these tasks take quite a bit of context to get rolling on a new experiment, and by the end of the parsing job, the LLM is 2% away from compaction. That then leads to either a very dumb AI or an AI that is pretending to know what's going on with the recent experiment once it's complete.

The happy path with AI

Avoiding the doom-loop psychosis caused by bad prompting

If I reach the point where I am not getting joy out of writing a great prompt, then it's time to throw in the towel. That has to be the first signal. If I'm half-assing it, being short, interrupting, and getting frustrated, then time to take a break.

There's some metacognition that needs to take place here. Am I being less descriptive because I haven't actually thought through this problem and I'm hoping the AI will just fill the gaps? That can be a very seductive trap to fall into. AIs are getting quite good at filling in undefined requirements, something that I remember having to do as a software engineer myself, but they're not good enough yet.

There are times I write a prompt with so much clarity in my desired end-state that I'm already celebrating the end-result when I submit the prompt, because I know the AI is going to CRUSH IT. That's the feeling I need to look for in every prompt. If it's more a feeling of uncertainty or impatience, it's just not going to pan out.

Recognizing slow feedback loops and making those the problem

In the case of my parsing problem I mentioned above, it was too slow and the feedback loop was painful. I want my slot machine to take seconds/minutes to spin, not 15/20/30 minutes. In these cases, I've started to spin up a new session with the LLM, lay out my problem with feedback loop speed, express my desire to get to a sub 5-minute loop, give it an example of a failure case, and ask it to reproduce that failure case as quickly as possible. This is starting to sound familiar ... TDD anyone?

I was always the scrappy engineer. Sure I wrote tests, but I was never one to stop and create elaborate test cases or integration tests for bespoke problems. That was too time consuming, and also, I was getting paid even if my feedback loop wasn't perfect.

It's been quite the journey to fight that feeling that writing elaborate tests is time-consuming when working with AI. If you give an LLM clear success criteria: "Reproduce this specific failure case and make sure the clock time is less than 5 minutes to do it. Feel free to experiment with ways to optimize the code path or omit certain pieces that are unnecessary to reproduce" - the AI will not only reproduce the problem (maybe slowly the first time), but it will create levers for a faster feedback cycle. With that fast feedback cycle, it will consume less context and be SMARTER. This can seriously save hours of debugging time.
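To make that concrete, here's a rough sketch of the kind of test I'm describing - the `parse_records` function, the "broken-record" failure case, and the 5-minute budget are all made-up stand-ins for whatever the real parser and bug look like:

```python
import time

# Hypothetical stand-in for the real parser; the failure mode here
# (a record missing its "=" delimiter) is invented for illustration.
def parse_records(lines):
    records = []
    for line in lines:
        key, sep, value = line.partition("=")
        if not sep:
            raise ValueError(f"malformed record: {line!r}")
        records.append((key, value))
    return records

def test_reproduces_failure_quickly():
    # Instead of re-parsing the full file, keep only the smallest
    # slice of input known to trigger the bug.
    minimal_input = ["a=1", "b=2", "broken-record"]

    start = time.monotonic()
    try:
        parse_records(minimal_input)
        reproduced = False
    except ValueError:
        reproduced = True
    elapsed = time.monotonic() - start

    assert reproduced, "expected the known failure case to reproduce"
    assert elapsed < 300, "feedback loop must stay under 5 minutes"
```

Once the failure reproduces in seconds instead of a full parsing run, each iteration with the LLM burns almost no context on waiting.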

Conclusion

When I am exhausted from working with an LLM - it might actually be a "skill issue". I need to recognize when I'm tired and entering the doom-loop psychosis. Cognitive outsourcing of requirements is seductive, but it's a trap. If I'm not enjoying the act of writing the perfect prompt and absolutely confident I will return to a result I'm 95% happy with, I need to either take a break or ponder if I've really thought through the problem. If things are moving slow and it feels as though context is filling up too quickly - I need to make that the problem to solve. Find a path, with the help of the LLM, to iterate faster and use up less context.

You can subscribe to my blog via email or RSS feed.


Comments

  • By Schlagbohrer 2026-03-16 11:57 · 4 replies

    Reminds me of the best saying I ever got from my CS professor. She would make us first write out our code and answer the question, "What will the output be?" before we were allowed to run it.

    "If you don't know what you want your code to do, the computer sure as heck won't know either." I keep this with me today. Before I run my code for the first time or turn on my hardware for the first time, I ask myself, "What _exactly_ am I expecting to see here?" and if I can't answer that it makes me take a closer and more adversarial look at my own output before running it.

    • By swat535 2026-03-16 13:54

      Isn't this the whole idea of TDD? Write your assertions, then write the code to fulfill it.

    • By Tenemo 2026-03-16 13:31 · 3 replies

      I'm not 100% convinced. While iterating fast on an early prototype, what's wrong with legitimately not knowing what e.g. the data structure will end up looking like? Just let it run, check the debugger/stdout/localhost page and adjust: "Oh, right, the entries are missing canonical IDs, but at the same time all the comments are already in them, forgot they would be there – neat". What's wrong with that? Especially at uni, when working on low-stakes problems.

      • By ssivark 2026-03-16 14:31

        > what's wrong with legitimately not knowing what e.g. the data structure will end up looking like?

        But that's not what the above comment said.

        > Just let it run, check debugger/stdout/localhost page and adjust: "Oh, right, the entries are missing canonical IDs, but at the same time there are already all the comments in them, forgot they would be there

        So you did have an expectation that the entries should have some canonical IDs, and anticipated/desired a certain specific behavior of the system.

        Which is basically the meaning of "what will the output be?" when simplified for programming novices at university.

      • By salawat 2026-03-16 16:20

      This is a restatement of the old wisdom that to safely use a tool you must be "10% smarter than it is." Or stated differently, you must be "ahead" of the tool (capable of accurately modeling and predicting the outcome), not "behind" (only reacting). TDD is kind of an outgrowth of it. I've lived by the wisdom, but admit that for me there is a lot of fun in the act of verifying hypotheses in the course of development, even in the "test case gap" when you're writing the lines of code that don't make a difference in terms of making a long term test case go from red to green, or doing other exploratory work where the totality of behavior is not well charted. Those times are the best. "Moodily scowling at the computer screen again," has been a status update from chilluns on what I'm doing more times than I like to admit.

  • By cglan 2026-03-15 23:37 · 12 replies

    I find LLMs so much more exhausting than manual coding. It’s interesting. I think you bump into how much a single human can feasibly keep track of pretty fast with modern LLMs.

    I assume that until LLMs are 100% better than humans in all cases, as long as I have to be in the loop there will be a pretty hard upper bound on what I can do, and it seems like we’ve roughly hit that limit.

    Funny enough, I get this feeling with a lot of modern technology. iPhones, all the modern messaging apps, etc make it much too easy to fragment your attention across a million different things. It’s draining. Much more draining than the old days

    • By afandian 2026-03-16 11:39 · 1 reply

      Same feeling as pair programming in my experience.

      If your consciousness is driving, your brain is internally aligned. You type as you think. You can get flow state, or at least find a way to think around a problem.

      If you're working with someone else and having to discuss everything as you go, then it's just a different activity. I've collaboratively written better code this way in the past. But it's slower and more exhausting.

      Like pair programming, I hope people realise that there's a place for both, and doing exclusively one or the other full time isn't in everyone's best interests.

      • By fluoridation 2026-03-16 13:04

        I've had a similar experience, where I pair-programmed with a coworker for a few days in a row (he understood the language better and I understood the problem better) and we couldn't be in the call for more than an hour at a time. Still, although it was more tiring, I found it quite engaging and enjoyable. I'd much rather bounce ideas back and forth with another person than with an LLM.

    • By superfrank 2026-03-16 4:05 · 3 replies

      > I find LLMs so much more exhausting than manual coding

      I do as well, so I totally know what you're talking about. There's part of me that thinks it will become less exhausting with time and practice.

      In high school and college I worked at this Italian place that did dine-in, to-go, and delivery orders. I got hired as a delivery driver and loved it. A couple years in there was a spell where they had really high turnover, so the owners asked me to be a waiter for a little while. The first couple months I found the small talk and the need to always be "on" absolutely exhausting, but over time I found my routine and it became less exhausting. I definitely loved being a delivery driver far more, but eventually I did hit a point where I didn't feel completely drained after every shift of waiting tables.

      I can't help but think coding with LLMs will follow a similar pattern. I don't think I'll ever like it more than writing the code myself, but I have to believe at some point I'll have done it enough that it doesn't feel completely draining.

      • By qq66 2026-03-16 6:42 · 3 replies

        I think it's because traditionally, software engineering was a field where you built your own primitives, then composited those, etc... so that the entire flow of data was something that you had a mental model for, and when there was a bug, you simply sat down and fixed the bug.

        With the rise of open source, there started to be more black-box compositing, you grabbed some big libraries like Django or NumPy and honestly just hoped there weren't any bugs, but if there were, you could plausibly step through the debugger and figure out what was going wrong and file a bug report.

        Now, the LLMs are generating so many orders of magnitude more code than any human could ever have the chance to debug, you're basically just firing this stuff out like a firehose on a house fire, giving it as much control as you can muster but really just trusting the raw power of the thing to get the job done. And, bafflingly, it works pretty well, except in those cases where it doesn't, so you can't stop using the tool but you can't really ever get comfortable with it either.

        • By nvardakas 2026-03-16 8:41

          Very good catch. The mental model thing is real. I've caught myself approving LLM-generated code that works but that I couldn't debug if it broke at 2am. With libraries you at least had docs and a community. With generated code, the only source of truth is... asking the same LLM again and hoping it's consistent.

        • By xienze 2026-03-16 7:42

          > I think it's because traditionally, software engineering was a field where you built your own primitives, then composited those, etc... so that the entire flow of data was something that you had a mental model for

          Not just that, but the fact that with programming languages you can have the utmost precision to describe _how_ the problem needs to be solved _and_ you can have some degree of certainty that your directions (code) will be followed accurately.

          It’s maddening to go from that to using natural language which is interpreted by a non-deterministic entity. And then having to endlessly iterate on the results with some variation of “no, do it better” or, even worse, some clever “pattern” of directing multiple agents to check each other’s work, which you’ll have to check as well eventually.

        • By chii 2026-03-16 7:11 · 1 reply

          > bafflingly, it works pretty well, except in those cases where it doesn't

          so as a human, you would make the judgement that the cases where it works well more than make up for the mistakes. Comfort is a mental state, and discomfort can be easily defeated by separating your own identity and ego from the output you create.

      • By prmph 2026-03-16 10:45

        I think what will eventually help is something I call AI-discipline. LLMs are a tool, no more, no less. Just like we now recognize unbridled use of mobile phones to be a mental health issue, causing some to strictly limit their use, I think we will eventually recognize that the best use of LLMs is found by being judicious and intentional.

        When I first started dabbling in the use of LLMs for coding, I almost went overboard trying to build all kinds of tools to maximize their use: parallel autonomous worktree-based agents, secure sandboxing for agents to do as they like, etc.

        I now find it much more effective to use LLMs in a targeted and minimalist manner. I still write architecturally important and tricky code by hand, using LLMs to do several review passes. When I do write code with LLMs, I almost never allow them to do it without me in the loop, approving every single edit. I limit the number of simultaneous sessions I manage to at most 3 or 4. Sometimes, I take a break of a few days from using LLMs (and often from writing any code at all), and just think and update the specs of the project(s) I'm working on at a high level, to ensure I'm not doing busy-work in the wrong direction.

        I don't think I'm missing anything by this approach. If anything, I think I am more productive.

      • By apsurd 2026-03-16 6:09

        Thanks for the story. I also spent time as a delivery driver at an Italian restaurant. It was a blast, in the sense that I look back at that slice of life with pride and becoming. Never got the chance to be a waiter, but they definitely were characters and worked hard for their money. Also the cooking staff. What a hoot.

    • By hombre_fatal 2026-03-15 23:45 · 5 replies

      I think the upper limit is your ability to decide what to build among infinite possibilities. How should it work, what should it be like to use it, what makes the most sense, etc.

      The code part is trivial and a waste of time in some ways compared to time spent making decisions about what to build. And sometimes even a procrastination to avoid thinking about what to build, like people who polish their game engine (easy) to avoid putting in the work of planning a fun game (hard).

      The more clarity you have about what you’re building, the larger the blocks of work you can delegate / outsource.

      So I think one overwhelming part of LLMs is that you don’t get the downtime of working on implementation since that’s now trivial; you are stuck doing the hard part of steering and planning. But that’s also a good thing.

      • By SchemaLoad 2026-03-15 23:52 · 2 replies

        I've found writing the code massively helps your understanding of the problem and what you actually need or want. Most times I go into a task with a certain idea of how it should work, and then reevaluate having started. While an LLM will just do what you ask without questioning, leaving you with none of the learnings you would have gained having done it. The LLM certainly didn't learn or remember anything from it.

        • By jeremyjh 2026-03-16 0:04 · 1 reply

          In some cases, yes. But I’ve been doing this awhile now and there is a lot of code that has to be written that I will not learn anything from. And now, I have a choice to not write it.

          • By orbisvicis 2026-03-16 2:37 · 2 replies

            Ehh, I find that the most tedious code is also the most sensitive to errors, stuff that blurs the divide between code and data.

            • By jeremyjh 2026-03-16 2:56

              I doubt we're talking about the same sort of things at all. I'm talking about stuff like generic web CRUD. Too custom to be generated deterministically, but recent models crush it and make fewer errors than I do. But that is not even all they can do. But yes, once you get into a large complicated code base it's not always worth it, but even there one benefit is to develop more test cases - and more complicated ones - than I would realistically bother with.

            • By claytongulick 2026-03-16 16:07

              I actually like writing the tedious code by hand.

              The whole time I'm doing it, I'm trying to think of better ways. I'm thinking of libraries, utilities or even frameworks I could create to reduce the tedium.

              This is actually one of the things I dislike the most about LLM coding: they have no problem with tedium and will happily generate tens of thousands of lines where a much better approach could exist.

              I think it's an innovation killer. Would any of the ORMs or frameworks we have today exist if we'd had LLMs this whole time?

              I doubt it.

        • By stavros 2026-03-16 1:17 · 1 reply

          It depends on how you use them. In my workflow, I work with the LLM to get the desired result, and I'm familiar with the system architecture without writing any of the code.

          I've written it up here, including the transcript of an actual real session:

          https://www.stavros.io/posts/how-i-write-software-with-llms/

          • By jeremyjh 2026-03-16 3:09 · 1 reply

            Thanks for writing this up.

            I just woke up recently myself and found out these tools were actually becoming really, really good. I use a similar prompt system, but not as much focus on review - I've found the review bots to be really good already but it is more efficient to work locally.

            One question I have, since you mention using lots of different models: do you ever have to tweak prompts for a specific model, or are these things pretty universal?

            • By stavros 2026-03-16 5:09

              I don't tweak prompts, no. I find that there's not much need to, the models understand my instructions well enough. I think we're way past the prompt engineering days, all models are very good at following instructions nowadays.

      • By galaxyLogic 2026-03-16 0:14 · 2 replies

        Right, when you're coding with an LLM it's not you asking the LLM questions, it's the LLM asking you questions about what to build, how it should work exactly, whether it should do this or that under what conditions. Because the LLM does the coding, it's you who has to do more thinking. :-)

        And when you make the decisions, it is you who is responsible for them. Whereas if you just do the coding, the decisions about the code are left largely to you and nobody much sees them, only how they affect the outcome. Now the LLM is in that role, responsible only for what the code does, not how it does it.

        • By eucyclos 2026-03-16 5:34 · 1 reply

          Hehe, speak for yourself - as a 1x coder on a good day, having a nonjudgmental partner who can explain stuff to me is one of the best parts of writing with an LLM :)

          • By galaxyLogic 2026-03-16 8:26

            I like that aspect of it too. The LLM never seems to get offended, even when I tell it it's wrong. Just trying to understand why some people say it can feel exhausting. Instead of focusing on narrowly defined coding tasks, the work has changed and you are responsible for a much larger area of work, and expectations are similarly higher. You're supposed to produce 10x code now.

        • By lelanthran 2026-03-16 15:21

          > Because the LLM does the coding, it's you who has to do more thinking. :-)

          I keep seeing this sentiment, but it sure sounds wrong to me.

          Coding requires thinking (in humans, at any rate). When you're doing coding, you're doing both coding-thinking and the design thinking.

          Now you're only doing one half of it.

      • By clickety_clack 2026-03-15 23:50

        I’d love to see what you’ve built. Can you share?

      • By grey-area 2026-03-16 0:33

        Maintenance is the hard part, not writing new code or steering and planning.

      • By ipaddr 2026-03-16 4:45

        You can outsource that to another LLM

    • By raincole 2026-03-16 0:21 · 1 reply

      If you care about code quality, of course it is exhausting. It's supposed to be. Now there is more code whose quality you have to assure in the same length of time.

      • By onion2k 2026-03-16 7:16 · 2 replies

        If you care about code quality you should be steering your LLM towards generating high quality code rather than writing just 'more code' though. What's exhausting is believing you care about high quality code, then assuming the only way to get high quality code from an LLM is to get it to write lots of low quality code that you have to fix yourself.

        LLMs will do pretty much exactly what you tell them, and if you don't tell them something they'll make up something based on what they've been trained to do. If you have rules for what good code looks like, and those are a higher bar than 'just what's in the training data' then you need to build a clear context and write an unambiguous prompt that gets you what you want. That's a lot of work once to build a good agent or skill, but then the output will be much better.

        • By Frieren 2026-03-16 8:06

          > write an unambiguous prompt

          That's an oxymoron. Prompts are by definition ambiguous; otherwise you would be writing code.

        • By lelanthran 2026-03-16 15:23

          > LLMs will do pretty much exactly what you tell them

          That bit is quite funny :-)

    • By Cthulhu_ 2026-03-16 9:57 · 2 replies

      I suspect it's because you need to keep more things in your head yourself; after a while of coding by hand, it becomes more labor and doesn't cost as much brain power anymore. But when offloading the majority of that coding to an LLM, you're left with the higher level tasks of software engineering, you don't get the "breaks" while writing code anymore.

      • By lelanthran 2026-03-16 10:14

        How often, in your life, did you write code without stopping, in the middle of writing, to go back and review assumptions that turned out to be wrong?

        I'm not talking about "oh, this function is deprecated, have to use this other one", but more "this approach is wrong, maybe delete it all and try a different approach".

        Because IME an AI never discards an approach; they just continue adding band-aids and conditionals to make the wrong approach work.

      • By simonask 2026-03-16 10:04 · 1 reply

        The tactical process of writing the code is also when you discover the errors in your design.

        Like, did we think waterfall suddenly works now just because typing can be automated? No.

        • By danielbln 2026-03-16 14:41 · 1 reply

          Who says you can't iterate on a design just because an LLM does the manual typing?

          • By simonask 2026-03-16 22:06

            I meant to write “tactile”, not “tactical”, but missed it before the edit window expired.

            Anecdotally, ask people who knit whether their brain is stimulated. Physically engaging with the thing you are making is part of the process that makes it actually good.

    • By rhysfonixone 2026-03-16 9:44

      Working with LLMs for coding tasks feels more like juggling, I think. You're fixating on the positions of all of the jobs you're handling simultaneously, and while muscle memory (in this metaphor, the LLMs) is keeping each individual item in the air, you're actively managing, considering your next trick/move, getting things back on track when one object drifts from what you'd anticipated, etc. It simultaneously feels markedly more productive and requires carefully divided (and mentally taxing) focus. It's an adjustment, though I do worry whether there's a real tangible trade-off at play and I'm losing my edge for instances where I need to do something carefully, meticulously and manually.

    • By gotwaz 2026-03-16 4:33

      Theory of Bounded Rationality applies. Tech tools scale systemic capability limits. 3-inch chimp brain limits don't change. The story writes itself.

    • By Sparkyte 2026-03-16 13:29

      It feels no different than inheriting someone's code base when you start at a company. I hate this feeling. AI removes the developer's attachment and first-hand understanding of the code.

    • By ModernMech 2026-03-16 19:24

      Classic coding was the process of incrementally saying "Ah, I'm getting it!" -- as you compile your code and it works better each time, you get a little dopamine hit from "solving" the puzzle. This creates states where time can pass with great alacrity as we enter these little dopamine-induced trances we call "flow", which we all experience.

      AI is not that, it's a casino. Every time you put words into the prompt you're left with a cortisol spike as you hope the LLM lottery gives you a good answer. You get a little dopamine spike when it does, but it's not the same as when you do it yourself because it's punctuated by anxiety, which is addictive but draining. And I personally have never gotten into a state of LLM-induced "flow", but maybe others have and can explain that experience. But to me there's too much anxiety around the LLM from the randomness of what it produces.

    • By akomtu 2026-03-16 1:31 · 1 reply

      You used to be a Formula 1 driver. Now you are an instructor for a Formula 1 autopilot. You have to watch it at all times with full attention for it's a fast and reckless driver.

      • By esafak 2026-03-16 5:41 · 1 reply

        You're being generous to the humans; we're more like Ladas in comparison.

        • By p_v_doom 2026-03-16 7:07

          That may not be a bad comparison. An F1 car is a really fast, really specialized car that is also extremely fragile. A Lada may not be too fast, but it's incredibly versatile and robust even after decades of use. And has more luggage space.

    • By empath75 2026-03-16 15:44

      I go through phases with it, where there are times I am extraordinarily productive and times where I can't even bear to open a terminal window.

    • By senectus1 2026-03-16 0:12 · 2 replies

      I imagine code reviewing is a very different sort of skill than coding. When you vibe code (assuming you're reading the code that is written for you) you become a code reviewer... I suspect you're learning a new skill.

      • By qudat 2026-03-16 0:54 · 2 replies

        It’s easier to write code than read it.

        • By j3k3 2026-03-16 1:08

          I'd argue the read-write procedures are happening simultaneously as one goes along, writing code by hand.

        • By Leynos 2026-03-16 8:12

          It's important to enforce the rules that make the code easier to read.

      • By pessimizer 2026-03-16 0:38

        The way I've tried to deal with it is by forcing the LLM to write code that is clear, well-factored and easy to review i.e. continually forcing it to do the opposite of what it wants to do. I've had good outcomes but they're hard-won.

        The result is that I could say that it was code that I myself approved of. I can't imagine a time when I wouldn't read all of it, when you just let them go the results are so awful. If you're letting them go and reviewing at the end, like a post-programming review phase, I don't even know if that's a skill that can be mastered while the LLMs are still this bad. Can you really master Where's Waldo? Everything's a mess, but you're just looking for the part of the mess that has the bug?

        I'm not reviewing after I ask it to write some entire thing. I'm getting it to accomplish a minimal function, then layering features on top. If I don't understand where something is happening, or I see it's happening in too many places, I have to read the code in order to tell it how to refactor the code. I might have to write stubs in order to show it what I want to happen. The reading happens as the programming is happening.

  • By rednafi 2026-03-15 23:59 · 8 replies

    I have always enjoyed the feeling of aporia during coding. Learning to embrace the confusion and the eventual frustration is part of the job. So I don’t mind running in a loop alongside an agent.

    But I absolutely loathe reviewing these generated PRs - more so when I know the submitter themselves has barely looked at the code. Now corporate has mandated AI usage and is asking people to do 10k LOC PRs every day. Reviewing this junk has become exhausting.

    I don’t want to read your code if you haven’t bothered to read it yourselves. My stance is: reviewing this junk is far more exhausting. Coding is actually the fun part.

    • By bmurphy1976 2026-03-16 2:01 · 2 replies

      > Now corporate has mandated AI usage and is asking people to do 10k LOC PRs every day.

      That's a big red flag if I ever saw one. Corporate should be empowering the engineering team to use AI tooling to improve their own process organically. Is this true or an exaggeration? If it's true, I'd start looking for a more balanced position at a more disciplined org.

      • By rednafi 2026-03-16 3:39

        True at Doordash, Amazon, and Salesforce - speaking from experience.

      • By chewbacha 2026-03-16 4:46 · 1 reply

        Mandates are becoming normal. Most devs don’t seem to want to but they want to keep their jobs.

        • By shimman 2026-03-16 16:06

          Definitely a sign that workers aren't being exploited.

    • By civvv 2026-03-16 13:50

      10k LoC per day? Wow, my condolences to you.

      On a different note: something I just discovered is that if you google "my condolences", the AI summary will thank you for the kindness before defining its meaning, fun.

    • By hnthrow0287345 2026-03-16 12:50 · 2 replies

      >Reviewing this junk has become exhausting.

      Nitpick it to death. Ask the submitter questions on how everything works. Even if it looks good, flip a coin and reject it anyway. Drag that review time out. You don't want unlucky PRs going through, after all.

      Corporate is not going to wake up and do the sensible thing on its own.

      • By rednafi 2026-03-16 12:54

        Ha ha I wish. Then both corporate and your coworkers hate you.

        Also, there is no point in asking questions when you know that they just yoloed it and won't be able to answer anything.

        We have collectively lost our common sense and reasonable people are doing unreasonable things because there's an immense amount of pressure from the top.

    • By anonzzzies 2026-03-16 0:24 | 2 replies

      I always wonder where HNers work or have worked; we do ERP and troubleshooting on legacy systems for medium to large corps. PRs by humans were always pretty random and barely looked at, even though a human wrote them (copy/pasted from SO and changed somewhat); if you ask what the code does, they cannot tell you. This is not the exception, this is the norm as far as I can see outside HN. People who talk a lot, don't understand anything, and write code that is almost alien. LLMs, for us, are a huge step up. There is a 40-level nested if with a loop to prevent it from failing on a missing case in a critical ERP system at Shell (the company); an LLM would not do that. It is a nightmare, but keeping things like that running makes us a lot of money.

      • By sarchertech 2026-03-16 0:50 | 2 replies

        I currently work at one of the biggest tech companies. I've been doing this for over 20 years, and I've worked at scrappy startups, unicorns, and medium-sized companies.

        I've certainly seen my share of what I call slot-driven development, where a developer just throws things at the wall until something mostly works. And plenty of cut-and-paste development.

        But it’s far from the majority. It’s usually the same few developers at a company doing it, while the people who know what they’re doing furiously work to keep things from falling apart.

        If the majority of devs were doing this nothing would work. My worry is that AI lets the bad devs produce this kind of work on a massive scale that overwhelms the good devs ability to fight back or to even comprehend the system.

        • By rednafi 2026-03-16 3:47 | 1 reply

          I also work at a huge company, and this observation is true. The way AI is being rammed down our throats is burning out the best engineers. OTOH, the mediocre simian army “empowered” by AI is pushing slop like there’s no tomorrow. The expectation from leadership, who tried Claude for a single evening, is that you should be able to deliver everything yesterday.

          The resilience of the system has taken a massive hit, and we were told that it doesn’t matter. Managers, designers, and product folks are being asked to make PRs. When things cause Sev0 or Sev1 incidents, engineers are being held responsible. It’s a huge clown show.

          • By gopher_space 2026-03-16 4:52

            > The expectation from leadership, who tried Claude for a single evening, is that you should be able to deliver everything yesterday.

            "Look, if the AI fairy worked like that our company would be me and the investors."

            I should make t-shirts. They'll be worth a fortune in ironic street cred once the AI fairy works like that.

        • By anonzzzies 2026-03-16 0:58 | 1 reply

          Tech companies, sure. How about massive non-software companies? I don't know of one where this is not the norm, and I have been inside very many of them as a supplier for the past 30 years. Tech companies are a bit different, as they usually have leadership that prioritizes these things.

          • By sarchertech 2026-03-16 2:35

            Non-tech companies too. You can't build large-scale software with everyone merging PRs like that. My guess is that if you're a supplier, you are getting a pretty severe sampling bias.

      • By nightpool 2026-03-16 0:51 | 2 replies

        I would hope that most people who are technically competent enough to be on HN are competent enough to quit orgs with coding standards that bad. Or they're masochists who have taken on the challenge of working to fix them.

        • By duskdozer 2026-03-16 7:57

          Half the posts here are talking about how they 100xd their output with the latest agentic loop harness, so I'm not sure why you would get that impression.

        • By heromal 2026-03-16 3:50

          Neither of those. The pay is great and if all leadership cares about is making the whole company "AI Native" and pushing bullshit diffs, I'll play ball.

    • By shiandow 2026-03-16 0:50

      The one thing I don't quite get is how running a loop alongside an agent is any different from reviewing those PRs.

    • By bsjshshsb 2026-03-16 1:39

      Use AI to review.

    • By dyauspitr 2026-03-16 7:07

      I do "TDD" LLM coding and only review the tests. That way, if the tests pass, I ship it. It hasn't bitten me in the ass yet.

    • By xyzal 2026-03-16 7:51 | 1 reply

      10k, really? Are you supposed to understand all that code? This is crazy and a one-way street to burnout.

      • By rednafi 2026-03-16 11:49

        Yep, and now we are encouraged to use AI to review the code as well. But if shit hits the fan, you are held responsible.

HackerNews