AI fatigue is real and nobody talks about it

2026-02-08 14:19 · siddhantkhare.com

You're using AI to be more productive. So why are you more exhausted than ever? The paradox every engineer needs to confront.

I shipped more code last quarter than any quarter in my career. I also felt more drained than any quarter in my career. These two facts are not unrelated.

I build AI agent infrastructure for a living. I'm one of the core maintainers of OpenFGA (CNCF Incubating), I built agentic-authz for agent authorization, I built Distill for context deduplication, I shipped MCP servers. I'm not someone who dabbles with AI on the side. I'm deep in it. I build the tools that other engineers use to make AI agents work in production.

And yet, I hit a wall. The kind of exhaustion that no amount of tooling or workflow optimization could fix.

If you're an engineer who uses AI daily - for design reviews, code generation, debugging, documentation, architecture decisions - and you've noticed that you're somehow more tired than before AI existed, this post is for you. You're not imagining it. You're not weak. You're experiencing something real that the industry is aggressively pretending doesn't exist. And if someone who builds agent infrastructure full-time can burn out on AI, it can happen to anyone.

I want to talk about it honestly. Not the "AI is amazing and here's my workflow" version. The real version. The one where you stare at your screen at 11pm, surrounded by AI-generated code you still need to review, wondering why the tool that was supposed to save you time has consumed your entire day.

[Image: An overwhelmed engineer surrounded by code, errors, and notifications]

The paradox nobody warned us about

Here's the thing that broke my brain for a while: AI genuinely makes individual tasks faster. That's not a lie. What used to take me 3 hours now takes 45 minutes. Drafting a design doc, scaffolding a new service, writing test cases, researching an unfamiliar API. All faster.

But my days got harder. Not easier. Harder.

The reason is simple once you see it, but it took me months to figure out. When each task takes less time, you don't do fewer tasks. You do more tasks. Your capacity appears to expand, so the work expands to fill it. And then some. Your manager sees you shipping faster, so the expectations adjust. You see yourself shipping faster, so your own expectations adjust. The baseline moves.

Before AI, I might spend a full day on one design problem. I'd sketch on paper, think in the shower, go for a walk, come back with clarity. The pace was slow but the cognitive load was manageable. One problem. One day. Deep focus.

Now? I might touch six different problems in a day. Each one "only takes an hour with AI." But context-switching between six problems is brutally expensive for the human brain. The AI doesn't get tired between problems. I do.

This is the paradox: AI reduces the cost of production but increases the cost of coordination, review, and decision-making. And those costs fall entirely on the human.

You became a reviewer and you didn't sign up for it

[Image: AI dropping code onto a conveyor belt faster than a human can review]

Before AI, my job was: think about a problem, write code, test it, ship it. I was the creator. The maker. That's what drew most of us to engineering in the first place - the act of building.

After AI, my job increasingly became: prompt, wait, read output, evaluate output, decide if output is correct, decide if output is safe, decide if output matches the architecture, fix the parts that don't, re-prompt, repeat. I became a reviewer. A judge. A quality inspector on an assembly line that never stops.

This is a fundamentally different kind of work. Creating is energizing. Reviewing is draining. There's research on this - the psychological difference between generative tasks and evaluative tasks. Generative work gives you flow states. Evaluative work gives you decision fatigue.

I noticed it first during a week where I was using AI heavily for a new microservice. By Wednesday, I couldn't make simple decisions anymore. What should this function be named? I didn't care. Where should this config live? I didn't care. My brain was full. Not from writing code - from judging code. Hundreds of small judgments, all day, every day.

The cruel irony is that AI-generated code requires more careful review than human-written code. When a colleague writes code, I know their patterns, their strengths, their blind spots. I can skim the parts I trust and focus on the parts I don't. With AI, every line is suspect. The code looks confident. It compiles. It might even pass tests. But it could be subtly wrong in ways that only surface in production, under load, at 3am.

So you read every line. And reading code you didn't write, that was generated by a system that doesn't understand your codebase's history or your team's conventions, is exhausting work.

This is also why I think agent security and authorization matter so much. If we can't review everything AI produces - and we can't, not at scale - then we need systems that constrain what agents can do in the first place. Least-privilege access, scoped tokens, audit trails. The less you have to worry about "did the AI do something dangerous," the more cognitive budget you have for the work that actually matters. This isn't just a security problem. It's a human sustainability problem.
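This isn't OpenFGA's or agentic-authz's actual API — just a minimal Python sketch of the principle, with hypothetical names: agents get explicit (action, resource) grants, every decision is logged, and anything not granted is denied by default.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AgentScope:
    """Least-privilege grant: the actions an agent may take on specific resources."""
    agent: str
    allowed: set  # set of (action, resource) pairs


@dataclass
class Authorizer:
    scopes: dict = field(default_factory=dict)
    audit_log: list = field(default_factory=list)

    def grant(self, agent: str, action: str, resource: str) -> None:
        scope = self.scopes.setdefault(agent, AgentScope(agent, set()))
        scope.allowed.add((action, resource))

    def check(self, agent: str, action: str, resource: str) -> bool:
        scope = self.scopes.get(agent)
        allowed = scope is not None and (action, resource) in scope.allowed
        # Every decision is recorded, so "what did the agent actually do?"
        # is answerable later without re-reviewing its output.
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent": agent, "action": action,
            "resource": resource, "allowed": allowed,
        })
        return allowed


authz = Authorizer()
authz.grant("doc-bot", "read", "wiki")
assert authz.check("doc-bot", "read", "wiki") is True
assert authz.check("doc-bot", "delete", "prod-db") is False  # denied by default
```

The point of the sketch is the default: the agent can't do anything you didn't explicitly grant, so a reviewer's worry shrinks from "anything" to the granted surface.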

The nondeterminism problem

Engineers are trained on determinism. Same input, same output. That's the contract. That's what makes debugging possible. That's what makes reasoning about systems possible.

AI broke that contract.

[Image: Same prompt, same AI, different results - clean code or spaghetti]

I had a prompt that worked perfectly on Monday. Generated clean, well-structured code for an API endpoint. I used the same prompt on Tuesday for a similar endpoint. The output was structurally different, used a different error handling pattern, and introduced a dependency I didn't ask for.

Why? No reason. Or rather, no reason I can access. There's no stack trace for "the model decided to go a different direction today." There's no log that says "temperature sampling chose path B instead of path A." It just... happened differently.

For someone whose entire career is built on "if it broke, I can find out why," this is deeply unsettling. Not in a dramatic way. In a slow, grinding, background-anxiety way. You can never fully trust the output. You can never fully relax. Every interaction requires vigilance.

I tried to fight this. I version-controlled my prompts. I built elaborate system messages. I created templates. Some of it helped. None of it solved the fundamental problem: you are collaborating with a probabilistic system, and your brain is wired for deterministic ones. That mismatch is a constant, low-grade source of stress.

This frustration is actually what led me to build Distill - deterministic context deduplication for LLMs. No LLM calls, no embeddings, no probabilistic heuristics. Pure algorithms that clean your context in ~12ms. I wanted at least one part of the AI pipeline to be something I could reason about, debug, and trust. If the model's output is going to be nondeterministic, the least I can do is make sure the input is clean and predictable.
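This isn't Distill's actual algorithm — just a minimal sketch of the idea of deterministic deduplication: normalize each chunk, hash it, and keep only the first occurrence. Same input, same output, every time, with no model in the loop.

```python
import hashlib
import re


def dedupe_context(chunks: list) -> list:
    """Deterministically drop repeated chunks from an LLM context.

    Chunks are normalized (case, whitespace) before hashing, so trivially
    reformatted duplicates collapse too. No LLM calls, no embeddings,
    nothing probabilistic: identical input always yields identical output.
    """
    seen = set()
    kept = []
    for chunk in chunks:
        normalized = re.sub(r"\s+", " ", chunk.strip().lower())
        digest = hashlib.sha256(normalized.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(chunk)  # first occurrence wins, original formatting kept
    return kept


context = [
    "def handler(req): ...",
    "The user wants pagination.",
    "def  handler(req):  ...",      # whitespace variant of the first chunk
    "The user wants pagination.",    # exact repeat
]
print(dedupe_context(context))  # → ['def handler(req): ...', 'The user wants pagination.']
```

Because the whole pipeline is plain string hashing, it's the kind of component you can step through in a debugger — which is exactly the property the probabilistic parts of the stack lack.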

The engineers I've talked to who handle this best are the ones who've made peace with it. They treat AI output like a first draft from a smart but unreliable intern. They expect to rewrite 30% of it. They budget time for that rewriting. They don't get frustrated when the output is wrong because they never expected it to be right. They expected it to be useful. There's a difference.

The FOMO treadmill

Take a breath and try to keep up with just the last few months. Claude Code ships sub-agents, then skills, then an Agent SDK, then Claude Cowork. OpenAI launches Codex CLI, then GPT-5.3-Codex - a model that literally helped code itself. New coding agents announce background mode with hundreds of concurrent autonomous sessions. Google drops Gemini CLI. GitHub adds an MCP Registry. Acquisitions happen weekly. Amazon Q Developer gets agentic upgrades. CrewAI, AutoGen, LangGraph, MetaGPT - pick your agent framework, there's a new one every week. Google announces A2A (Agent-to-Agent protocol) to compete with Anthropic's MCP. OpenAI ships its own Swarm framework. Kimi K2.5 drops with agent swarm architecture orchestrating 100 parallel agents. "Vibe coding" becomes a thing. OpenClaw launches a skills marketplace and within one week, researchers find 400+ malicious agent skills uploaded to ClawHub. And somewhere in the middle of all this, someone on LinkedIn posts "if you're not using AI agents with sub-agent orchestration in 2026, you're already obsolete."

That's not a year. That's a few months. And I'm leaving stuff out.

I fell into this trap hard. I was spending weekends evaluating new tools. Reading every changelog. Watching every demo. Trying to stay at the frontier because I was terrified of falling behind.

Here's what that actually looked like: I'd spend Saturday afternoon setting up a new AI coding tool. By Sunday I'd have a basic workflow. By the following Wednesday, someone would post about a different tool that was "way better." I'd feel a pang of anxiety. By the next weekend, I'd be setting up the new thing. The old thing would sit unused. One coding assistant to the next to the next and back to the first one. Each migration cost me a weekend and gave me maybe a 5% improvement that I couldn't even measure properly.

Multiply this by every category - coding assistants, chat interfaces, agent frameworks, multi-agent orchestration platforms, MCP servers, context management tools, prompt libraries, swarm architectures, skills marketplaces - and you get a person who is perpetually learning new tools and never getting deep with any of them. The Hacker News front page alone is enough to give you whiplash. One day it's "Show HN: Autonomous Research Swarm" and the next it's "Ask HN: How will AI swarms coordinate?" Nobody knows. Everyone's building anyway.

The worst part is the knowledge decay. I spent two weeks building a sophisticated prompt engineering workflow in early 2025. Carefully crafted system prompts, few-shot examples, chain-of-thought templates. It worked well. Three months later, the model updated, the prompting best practices shifted, and half my templates produced worse results than a simple one-liner. Those two weeks were gone. Not invested. Spent. The same thing happened with my MCP server setup - I built five custom servers (Dev.to publisher, Apple Notes integration, Python and TypeScript sandboxes, more), then the protocol evolved, then the MCP Registry launched on GitHub and suddenly there were thousands of pre-built ones. Some of my custom work became redundant overnight.

The agent framework churn is even worse. I watched teams go from LangChain to CrewAI to AutoGen to custom orchestration in the span of a year. Each migration meant rewriting integrations, relearning APIs, rebuilding workflows. The people who waited and did nothing often ended up in a better position than the people who adopted early and had to migrate twice.

I've since adopted a different approach. Instead of chasing every new tool, I go deep on the infrastructure layer underneath them. Tools come and go. The problems they solve don't. Context efficiency, agent authorization, audit trails, runtime security - these are durable problems regardless of which framework is trending this month. That's why I built agentic-authz on OpenFGA instead of tying it to any specific agent framework. That's why Distill works at the context level, not the prompt level. Build on the layer that doesn't churn.

I still track the landscape closely - you have to when you're building infrastructure for it. But I track it to understand where the ecosystem is going, not to adopt every new thing. There's a difference between being informed and being reactive.

The "just one more prompt" trap

This one is insidious. You're trying to get AI to generate something specific. The first output is 70% right. So you refine your prompt. The second output is 75% right but broke something the first one had correct. Third attempt: 80% right but now the structure is different. Fourth attempt: you've been at this for 45 minutes and you could have written the thing from scratch in 20.

I call this the prompt spiral. It's the AI equivalent of yak shaving. You started with a clear goal. Thirty minutes later you're debugging your prompt instead of debugging your code. You're optimizing your instructions to a language model instead of solving the actual problem.

The prompt spiral is especially dangerous because it feels productive. You're iterating. You're getting closer. Each attempt is slightly better. But the marginal returns are diminishing fast, and you've lost sight of the fact that the goal was never "get the AI to produce perfect output." The goal was to ship the feature.

I now have a hard rule: three attempts. If the AI doesn't get me to 70% usable in three prompts, I write it myself. No exceptions. This single rule has saved me more time than any prompting technique I've ever learned.

Perfectionism meets probabilistic output

Engineers tend toward perfectionism. We like clean code. We like tests that pass. We like systems that behave predictably. This is a feature, not a bug - it's what makes us good at building reliable software.

AI output is never perfect. It's always "pretty good." 70-80% there. The variable names are slightly off. The error handling is incomplete. The edge cases are ignored. The abstraction is wrong for your codebase. It works, but it's not right.

For a perfectionist, this is torture. Because "almost right" is worse than "completely wrong." Completely wrong, you throw away and start over. Almost right, you spend an hour tweaking. And tweaking AI output is uniquely frustrating because you're fixing someone else's design decisions - decisions that were made by a system that doesn't share your taste, your context, or your standards.

I had to learn to let go. Not of quality - I still care about quality. But of the expectation that AI would produce quality. I now treat every AI output as a rough draft. A starting point. Raw material. I mentally label it "draft" the moment it appears, and that framing change alone reduced my frustration by half.

The engineers who struggle most with AI are often the best engineers. The ones with the highest standards. The ones who notice every imperfection. AI rewards a different skill: the ability to extract value from imperfect output quickly, without getting emotionally invested in making it perfect.

The thinking atrophy

[Image: A brain on a couch watching AI, its thinking muscles covered in cobwebs]

This is the one that scares me most.

I noticed it during a design review meeting. Someone asked me to reason through a concurrency problem on the whiteboard. No laptop. No AI. Just me and a marker. And I struggled. Not because I didn't know the concepts - I did. But because I hadn't exercised that muscle in months. I'd been outsourcing my first-draft thinking to AI for so long that my ability to think from scratch had degraded.

It's like GPS and navigation. Before GPS, you built mental maps. You knew your city. You could reason about routes. After years of GPS, you can't navigate without it. The skill atrophied because you stopped using it.

The same thing is happening with AI and engineering thinking. When you always ask AI first, you stop building the neural pathways that come from struggling with a problem yourself. The struggle is where learning happens. The confusion is where understanding forms. Skip that, and you get faster output but shallower understanding.

I now deliberately spend the first hour of my day without AI. I think on paper. I sketch architectures by hand. I reason through problems the slow way. It feels inefficient. It is inefficient. But it keeps my thinking sharp, and that sharpness pays dividends for the rest of the day when I do use AI - because I can evaluate its output better when my own reasoning is warmed up.

The comparison trap

Social media is full of people who seem to have AI figured out. They post their workflows. Their productivity numbers. Their "I built this entire app in 2 hours with AI" threads. And you look at your own experience - the failed prompts, the wasted time, the code you had to rewrite - and you think: what's wrong with me?

Nothing is wrong with you. Those threads are highlight reels. Nobody posts "I spent 3 hours trying to get Claude to understand my database schema and eventually gave up and wrote the migration by hand." Nobody posts "AI-generated code caused a production incident because it silently swallowed an error." Nobody posts "I'm tired."

The comparison trap is amplified by the fact that AI skill is hard to measure. With traditional engineering, you can look at someone's code and roughly gauge their ability. With AI, the output depends on the model, the prompt, the context, the temperature, the phase of the moon. Someone's impressive demo might not reproduce on your machine with your codebase.

I became much more selective about AI content on social media. I still follow the space closely - I have to, it's my job. But I shifted from consuming everyone's hot takes to focusing on people who are actually building and shipping, not just demoing. The ratio of signal to anxiety matters. If a feed is making you feel behind instead of informed, it's not serving you.

What actually helped

I'll be specific about what changed my relationship with AI from adversarial to sustainable.

Time-boxing AI sessions. I don't use AI in an open-ended way anymore. I set a timer. 30 minutes for this task with AI. When the timer goes off, I ship what I have or switch to writing it myself. This prevents the prompt spiral and the perfectionism trap simultaneously.

Separating AI time from thinking time. Morning is for thinking. Afternoon is for AI-assisted execution. This isn't rigid - sometimes I break the rule. But having a default structure means my brain gets both exercise and assistance in the right proportions.

Accepting 70% from AI. I stopped trying to get perfect output. 70% usable is the bar. I'll fix the rest myself. This acceptance was the single biggest reducer of AI-related frustration in my workflow.

Being strategic about the hype cycle. I track the AI landscape because I build infrastructure for it. But I stopped adopting every new tool the week it launches. I use one primary coding assistant and know it deeply. I evaluate new tools when they've proven themselves over months, not days. Staying informed and staying reactive are different things.

Logging where AI helps and where it doesn't. I kept a simple log for two weeks: task, used AI (yes/no), time spent, satisfaction with result. The data was revealing. AI saved me significant time on boilerplate, documentation, and test generation. It cost me time on architecture decisions, complex debugging, and anything requiring deep context about my codebase. Now I know when to reach for it and when not to.

Not reviewing everything AI produces. This was hard to accept. But if you're using AI to generate large amounts of code, you physically cannot review every line with the same rigor. I focus my review energy on the parts that matter most - security boundaries, data handling, error paths - and rely on automated tests and static analysis for the rest. Some roughness in non-critical code is acceptable.
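The two-week log from the list above needs no tooling — a CSV and a few lines of Python are enough to see where AI pays off. The column names and numbers here are hypothetical:

```python
import csv
import io
from collections import defaultdict

# Hypothetical two-week log: task type, whether AI was used,
# minutes spent, and satisfaction with the result (1-5).
LOG = """task,used_ai,minutes,satisfaction
boilerplate,yes,15,5
boilerplate,no,40,3
debugging,yes,90,2
debugging,no,60,4
docs,yes,10,5
architecture,yes,120,1
architecture,no,90,4
"""


def summarize(log_csv: str) -> dict:
    """Average minutes and satisfaction per (task, used_ai) bucket."""
    buckets = defaultdict(list)
    for row in csv.DictReader(io.StringIO(log_csv)):
        buckets[(row["task"], row["used_ai"])].append(
            (int(row["minutes"]), int(row["satisfaction"]))
        )
    return {
        f"{task}/{ai}": {
            "avg_minutes": sum(m for m, _ in rows) / len(rows),
            "avg_satisfaction": sum(s for _, s in rows) / len(rows),
        }
        for (task, ai), rows in buckets.items()
    }


for bucket, stats in sorted(summarize(LOG).items()):
    print(bucket, stats)
```

Even fake numbers like these show the shape of the real finding: AI wins on boilerplate and docs, loses on architecture and deep debugging.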

The sustainability question

The tech industry has a burnout problem that predates AI. AI is making it worse, not better. Not because AI is bad, but because AI removes the natural speed limits that used to protect us.

Before AI, there was a ceiling on how much you could produce in a day. That ceiling was set by typing speed, thinking speed, the time it takes to look things up. It was frustrating sometimes, but it was also a governor. You couldn't work yourself to death because the work itself imposed limits.

AI removed the governor. Now the only limit is your cognitive endurance. And most people don't know their cognitive limits until they've blown past them.

I burned out in late 2025. Not dramatically - I didn't quit or have a breakdown. I just stopped caring. Code reviews became rubber stamps. Design decisions became "whatever AI suggests." I was going through the motions, producing more than ever, feeling less than ever. It took me a month to realize what had happened and another month to recover.

The recovery wasn't about using less AI. It was about using AI differently. With boundaries. With intention. With the understanding that I am not a machine and I don't need to keep pace with one. Working at Ona helped me see this clearly - when you're building AI agent infrastructure for enterprise customers, you see the human cost of unsustainable AI workflows at scale. The problems aren't just personal. They're systemic. And they need to be solved at the tooling level, not just the individual level.

Ironically, the burnout period is when some of my best work happened. When I stopped trying to use every AI tool and started thinking about what was actually broken, I saw the problems clearly for the first time. Context windows filling up with garbage - that became Distill. Agents with all-or-nothing API key access - that became agentic-authz. The inability to audit what an agent actually did - that's becoming AgentTrace. The fatigue forced me to stop consuming and start building. Not building more features faster, but building the right things deliberately.

The real skill

Here's what I think the real skill of the AI era is. It's not prompt engineering. It's not knowing which model to use. It's not having the perfect workflow.

It's knowing when to stop.

[Image: A hand pulling the STOP switch - good enough, ship it, go outside, rest]

Knowing when the AI output is good enough. Knowing when to write it yourself. Knowing when to close the laptop. Knowing when the marginal improvement isn't worth the cognitive cost. Knowing that your brain is a finite resource and that protecting it is not laziness - it's engineering.

We optimize our systems for sustainability. We add circuit breakers. We implement backpressure. We design for graceful degradation. We should do the same for ourselves.

AI is the most powerful tool I've ever used. It's also the most draining. Both things are true. The engineers who thrive in this era won't be the ones who use AI the most. They'll be the ones who use it the most wisely.

If you're tired, it's not because you're doing it wrong. It's because this is genuinely hard. The tool is new, the patterns are still forming, and the industry is pretending that more output equals more value. It doesn't. Sustainable output does.

I'm still building in this space every day. Agent authorization, context engineering, audit trails, runtime security - the infrastructure that makes AI agents actually work in production. I'm more committed to AI than ever. But I'm committed on my terms, at my pace, building things that matter instead of chasing things that trend.

Take care of your brain. It's the only one you've got, and no AI can replace it.

I write about AI agent infrastructure, security, context engineering, and the human side of building with AI. You can find all my writing on my writing page.



Comments

  • By parpfish 2026-02-08 14:55

    For me the fatigue is a little different— it’s the constant switching between doing a little bit of work/coding/reviewing and then stopping to wait for the llm to generate something.

    The waits are unpredictable length, so you never know if you should wait or switch to a new task. So you just do something to kill a little time while the machine thinks.

    You never get into a flow state and you feel worn down from this constant vigilance of waiting for background jobs to finish.

    I don't feel more productive, I feel like a lazy babysitter that's just doing enough to keep the kids from hurting themselves

    • By mikkupikku 2026-02-08 15:29

      I know this is a terribly irresponsible and immature suggestion, but what I've been doing is every time I give claude code a request of indeterminate length, I just hit a blunt and chill out. That and sometimes I'll tab into the kind of game that can be picked up and put down on very short notice, here's where I shamelessly plug the free and open source game Endless Sky.

      For me personally, programming lost most of its fun many years ago, but with claude code I'm having fun again. It's not the same, but for me personally, at this stage in my life, it's more enjoyable.

      • By Waterluvian 2026-02-08 15:40

        Now that’s vibe coding.

        • By tartoran 2026-02-08 16:02

          That’s vibe coding while high. Probably terrible for assessing the results from Claude.

          • By mikkupikku 2026-02-08 16:26

            The code works quite well, but I wouldn't inflict it on other humans. In my view, when you use a coding agent you're committing to forever maintaining that code with a coding agent. There are no human programmers participating in these projects.

            • By Waterluvian 2026-02-08 16:53

              I think oftentimes in the absence of context, people will substitute their own, usually worst-case context. They imagine someone vibe coding safety critical software that flies airplanes.

              I think we much too often forget that the domain of software development has expanded its reach into literally everything and that we share a guild hall with all kinds: those who write deeply safety critical correct code, those who are hacking a blender, those who are just making their clerical task less repetitive, etc.

              • By jama211 2026-02-09 11:52

                Well said, people online are so quick to criticise LLM usage because of possible security holes or what have you. I can assure you, when making something dumb for fun I wouldn’t have checked it for security holes when I was coding it either.

          • By dripdry45 2026-02-08 16:36

            Ballmer curve in full effect

            • By tux1968 2026-02-08 16:46

              For anyone else that didn't recognize this reference, it's also known as the "Ballmer Peak": https://xkcd.com/323/

              • By y0eswddl 2026-02-09 14:17

                lol i love telling people my "Ballmer Peak" is 2 tokes of Cali Jack Herer

      • By swat535 2026-02-08 17:48

        Programming stopped feeling fun for me once MBAs and bean counters took over. There's rarely time to write thoughtful code anymore. Even convincing management to invest in a sane architecture feels like an endless uphill battle.

        Engineering teams are nothing but an annoying expense on the balance sheet, and the goal is to cram in as many features as quickly as possible to get the sale.

        That's exactly why I'm happy to use every tool available to get the work done efficiently. To this end, LLMs have been great for me, especially when dealing with large amounts of boilerplate code.

        Long gone are the days of crafting artisan code.

        • By johnnyanmac 2026-02-08 18:56

          It's still fun for me. But a lot of my day right now isn't "coding". But playing the job market game to get stability back in life while juggling other part time work.

          Of course, I know the next layoff will come. So I simply want to use the next time of stability to make sure I can be that artisan the next time industry kicks me out. But baby steps for now.

        • By WD-42 2026-02-08 23:21

          Sounds like you need to find a better job.

          • By DerArzt 2026-02-09 20:21

            Sometimes a job is a job.

        • By jama211 2026-02-09 11:55

          Y’all were having fun? I always had fun making things, but the coding part was always a means to an end for me, and one that caused endless frustration at times. I never have to debug regex or write a single date format line by hand ever again, and I will never ever miss it.

          Every bit of coding that did anything other than give me meaningful progress on features on the thing I was actually trying to build, which at times was 80%+ of it, was always terrible.


      • By neomantra 2026-02-08 16:48

        That's precisely how I refactored dank-extract from dank-mcp and finally got dank-data to archive CT canna-data every Sunday at 4:20pm Pacific.

        [1] https://github.com/AgentDank/dank-extract

        [2] https://github.com/AgentDank/dank-data

      • By neom 2026-02-08 16:23

        yeah that's how I do it too but careful with the blunt, yesterday I was working away, had some time while an agent swarm ran, took a "little break" and now ghostty looks like this: https://s.h4x.club/kpuGgD12 :|

        • By 0_____0 2026-02-08 16:50

          I feel like we used to do so much more customization and theming of our desktop environments in the 2000s/2010s. I miss it, and therefore love this for you!

          • By neom 2026-02-08 17:11

            I was totally obsessed with it, when I first got into linux as a kid in the late 90s, I spent forever and hours tweaking just my bashrc. I spent entirely too long on this stupid terminal, the media player and getting the rainbow thing to match up to the time of whatever was playing... oh well, had fun!

        • By jama211 2026-02-09 11:57

          Oh man, I’d love your config if you were willing to share, that looks sweet

          • By neom 2026-02-09 13:11

            Well "little break" - I forked ghostty and re-wrote parts of it for Metal and a bunch of new uniforms and iChannels - so my config may not work for you, but here is everything config wise I have running right now in full: https://s.h4x.club/NQuG6pZN

            • By jama211 2026-02-09 18:52

              Oh wow I see, great effort either way, thanks

      • By dvfjsdhgfv 2026-02-08 16:57

        Man you restored the original vibe in vibe coding.

        (In all seriousness though, it's probably not good for your health.)

      • By djkivi 2026-02-08 16:25

        20 years ago we had the Ballmer Peak. With AI you may have found the next thing!

      • By dubeye 2026-02-08 16:21

        I play piano, it's a cool gameplay loop

      • By amelius 2026-02-08 15:49

        Except there is a well-known phenomenon among programmers that commencing to work requires more energy than working itself (*).

        Every time you chill out and come back to work, you will have to invest that extra bit of start-up energy. Which can be draining.

        (* probably has to do with reloading your working memory)

        • By dwringer 2026-02-08 16:48

          While this is certainly very true, I find coding through an LLM to require far less effort dedicated to this cognitive switching than does writing in some programming language, primarily because I no longer have to load the mental context for converting my high level human instructions to code that a programming environment actually supports. The mental context seems more lightweight and closer to the way I think about the problem when I'm not sitting at the computer actively working on it. If an idea comes to me while I'm away from the computer I can momentarily sit down, type in whatever I just thought of, and get going almost immediately. I think it also saves a huge amount of cognitive load and stress (for me) involved with switching around between different programs and languages, an unfortunate fact of life when dealing with legacy systems.

        • By jama211 2026-02-09 11:59

          This is true for me but just due to task anxiety in general, it’s not specific to coding. So I don’t get repeats of this with AI. If anything, it’s helped reduce the procrastination for me, because I feel more supported and less on my own with the task.

        • By trcf23 2026-02-08 16:55

          I didn’t know it was a common thing but it’s definitely something I’ve experienced!

        • By tartoran 2026-02-08 16:03 (1 reply)

          Context switching tax.

          • By spacecadet 2026-02-08 16:45 (1 reply)

            For some. I have never experienced this myself. All my energy goes into keeping unconsciously destructive people from sabotaging the effort.

            • By hxugufjfjf 2026-02-09 6:30 (2 replies)

              What do you mean by that? Can you clarify?

              • By spacecadet 2026-02-09 12:47

                It means people go through life making selfishly unconscious choices that hurt collaborative efforts, and it's tiring to constantly have to rein these people in instead of building said thing.

              • By jama211 2026-02-09 11:58 (1 reply)

                They mean they feel intellectually superior to their peers and want to humble brag about it.

                • By spacecadet 2026-02-09 12:45 (1 reply)

                  No. I'm the dumbest person in the room, mate. It means I have to deal with you all day. Sigh.

                  • By jama211 2026-02-09 18:50 (1 reply)

                    Well, at least one of those statements is probably true.

                    • By spacecadet 2026-02-10 14:02 (1 reply)

                      Yes, the first one obviously. Try it out sometime.

                      • By jama211 2026-02-10 18:29 (1 reply)

                        I err, don’t think you’re saying what you think you’re saying.

                        • By spacecadet 2026-02-11 16:37 (1 reply)

                          Omg you instigator, you are this person! Let me just spell it out and end this. I'm an idiot, man. I don't try to project intelligence, I just listen and act... and sometimes comment on the negative behavior of others, which comes from lived experience. Appreciate you being an example of antagonistically wasting someone's time because of what? An assumption of what I meant? Wow.

                          • By jama211 2026-02-16 18:57

                            Oh I’m the one wasting your time now? Suuure buddy…

      • By ndsipa_pomu 2026-02-08 18:02

        I can recommend a Steam Deck for gaming that can be easily interrupted and resumed. It also means that you can leave your "work" machine so that you can see when claude has finished.
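The "see when claude has finished" idea above can be automated. A minimal sketch in Python (assumptions: the wrapped command is whatever agent CLI you run, shown here with a hypothetical `claude` invocation; `notify-send` is the common Linux desktop notifier, with a terminal bell as fallback):

```python
import shutil
import subprocess

def notify_when_done(cmd):
    """Run a long-running command (e.g. an agent session) and raise a
    desktop notification when it exits, so you can step away meanwhile."""
    result = subprocess.run(cmd)
    msg = f"{cmd[0]} finished with exit code {result.returncode}"
    if shutil.which("notify-send"):  # Linux desktops
        subprocess.run(["notify-send", "Agent finished", msg])
    else:
        print("\a" + msg)  # \a rings the terminal bell where supported
    return result.returncode

# Hypothetical usage:
# notify_when_done(["claude", "-p", "run the test suite and fix failures"])
```

The same wrapper works for builds or deploys, not just agents.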

      • By keyle 2026-02-08 15:50 (1 reply)

        Imagine the captain high while autopilot is on... who's flying this thing!

        • By jondwillis 2026-02-08 16:14

          It’ll probably be fine. Mostly.

      • By stack_framer 2026-02-08 16:22 (4 replies)

        Wow, this is a new low I did not expect to see. "Just get high and you'll be able to code with an LLM." Preceded by, "I know it's terrible."

        I'm wholly unwilling to relinquish my health and my morals to "AI" so I can "ship faster." What a pathetic existence that would be.

        • By mikkupikku 2026-02-08 16:54

          I appreciate your honesty. For what it's worth, this is not a commercial endeavor and is all motivated entirely by scratching my own personal itches. I'm not being paid to do this.

        • By ramon156 2026-02-08 16:52 (2 replies)

          One of the best programmers I know personally is constantly under the influence of marijuana. As "immature" as it may sound, she's still extremely aware of what she's doing and is able to work in an environment I would give up in after 2 weeks. The kind of environment that denies 1 day PTO for your birthday because of a deadline (hint, every week is a deadline).

          I do not smoke myself, but it made me realize how little I know regarding THC and CBD

          • By klibertp 2026-02-08 21:08

            > I do not smoke myself, but it made me realize how little I know regarding THC and CBD

            Long-term use causes the psychedelic part of THC effects to diminish over time. At some point, only a mild depressant effect remains - somewhat similar to chamomile. It does have some effect on intelligence and short-term memory, but if the alternative is to be too stressed to think at all, it might be better to just smoke.

            Obviously, if possible, psychotherapy or a prescription from a psychiatrist (or better yet, a change of environment) would be better (in the latter case, it depends on the prescribed drug, of course), but THC is not that bad an alternative where it's legal.

        • By johnnyanmac 2026-02-08 19:43

          Natural results when incentives are on "shipping" and not "quality". People find ways to ship stuff out faster. And perhaps drown their apprehensions about quality.

        • By coldtea 2026-02-08 16:50

          Welcome to Taylorism. Not just for assembly line workers anymore.

    • By gengstrand 2026-02-08 16:21 (2 replies)

      That's a type of fatigue that is not new, but I hear you: context-switching fatigue has increased tenfold with the introduction of agentic AI coding tools. Here are some more types of fatigue that have been increased by the adoption of LLMs in writing code.

      There are plenty of articles on review fatigue, including https://www.exploravention.com/blogs/soft_arch_agentic_ai/ which I published recently. The focus there is less about the impact on the developer and more about the impact on the organization, as letting bugs go to production will trigger a return to high-ceremony releases and release anxiety.

      The OP article talks about AI fatigue of which review fatigue is a part. I guess that I would sum up the other parts like this. The agentic AI workflow is so focused on optimizing for productivity that it burns the human out.

      The remedy is also not new for office work, take frequent breaks. I would also argue that the human developer should still write some code every now and then, not because the AI cannot do it but because it would slow the process down and allow for the human to recover while still feeling invested.

      • By Forgeties79 2026-02-08 16:29

        I think all of this is why I don't really experiment with an LLM anymore. I just use it to ideate/rewrite things in different styles so I can turn rough drafts into finished things. It's just an editor to bounce ideas off of, essentially. Using it that way is the only way I find myself being actually productive and not annoyed with it

      • By ghthor 2026-02-08 19:14

        Maybe this is why I'm different. I love reviewing code; it's a great way to learn about a system and get new ideas. Diffs are great for seeing how things are interconnected.

    • By alex_c 2026-02-08 15:19 (2 replies)

      I joke that I'm on the "Claude Code workout plan" now.

      Standing desk, while it's working I do a couple squats or pushups or just wander around the house to stretch my legs. Much more enjoyable than sitting at my desk, hands on keyboard, all day long. And taking my eyes off the screen also makes it easier to think about the next thing.

      Moving around does help, but even so, the mental fatigue is real!

      • By _aavaa_ 2026-02-08 15:23

        Coffee shops got filled with the laptop crew, are gyms the next frontier?

      • By joebates 2026-02-08 16:35 (1 reply)

        I've seriously wondered about merging a home office and home gym into one, and doing sets in between claude working. My usual workout has about 22-30 sets of exercises total and I probably wait on Claude that often in a day. It would be wonderful to be able to spread my exercise throughout the entire day. I'd also include an adjustable height desk so that I could be standing up for much of the workout/workday. I could even have a whiteboard in there.

        • By jq-r 2026-02-08 20:14

          I have a small gym next to my home office and I just cannot do it. When I train I need to be 100% focused on the exercises otherwise in the best case I'll stagnate, and in the worst I'll injure myself. So instead I do some mindless chores if time allows.

    • By ericmcer 2026-02-08 15:14 (6 replies)

      Seriously and beyond productivity, flow state was what I liked most about the job. A cup of coffee and noise cancelling headphones and a 2 hour locked in session were when I felt most in love with programming.

      • By safety1st 2026-02-08 15:37

        I'm not at all convinced that "break your concentration and go check on an agent once every several minutes" is a productivity increaser. We already know that compulsively checking your inbox while you try to code makes your output worse. Both kill your focus and that focus isn't optional when you're doing cognitively taxing work--you know, the stuff an AI can't do. So at the moment it's like we're lobotomizing ourselves in order to babysit a robot that's dumber than we are.

        That said I don't dispute the value of agents but I haven't really figured out what the right workflow is. I think the AI either needs to be really fast if it's going to help me with my main task, so that it doesn't mess up my state of flow/concentration, or it needs to be something I set and forget for long periods of time. For the latter maybe the "AIs submitting PRs" approach will ultimately be the right way to go but I have yet to come across an agent whose output doesn't require quite a lot of planning, back and forth, and code review. I'm still thinking in the long run the main enduring value may be that these LLMs are a "conversational UI" to something, not that they're going to be like little mini-employees.

      • By zabzonk 2026-02-08 16:26 (1 reply)

        Speaking as someone with over 40 years of paid programming experience, I've never understood this "flow" thing. I typically do about half an hour's typing, get up and walk around, mooch over to a colleague and yack a bit, or go to the coffee machine, or just think a bit and then go back to the keyboard.

        Never used headphones - if the environment is too loud, make it quieter. I once moved into a new office area that had a dot-matrix printer that "logged", in the worst sense of the word (how could you find any access on such a giant printout), every door open/close in the block. It was beyond annoying (ever heard a DM printer? only thing worse is a daisy wheel) so I simply unplugged it, took out the ink ribbon and twisted off the print head. It was never replaced, because as is very often the case nobody ever used the "reports" it produced.

        • By johnnyanmac 2026-02-08 19:48

          Half an hour of typing would be an above-average attention span for the youth these days. That's pretty much how Pomodoro timers start out for people who can't focus at all.

          >if the environment is too loud, make it quieter.

          we shifted to open office setups over the decades. There may not even be any way to make things "quieter" externally.

      • By treespace8 2026-02-08 15:25

        For me AI has given that back to me. I'm back to just getting stuff built, not getting stuck for long when working in a new area. And best of all using AI for cleanup! Generate some tests, refactor common code. The boring corporate stuff.

      • By parpfish 2026-02-08 15:21 (4 replies)

        I love the flow state, and I’m pretty sure it’s fundamentally incompatible with prompting. For me, when the flow state kicks in, it’s completely nonverbal and my inner dialogue shuts up. I think that’s part of why it feels so cool and fun when it hits.

        But LLM prompting requires you to constantly engage with language processing to summarize and review the problem.

        • By jaapz 2026-02-08 15:36

          That's pretty funny because LLM's actually help me achieve flow state easier because they help me automate away the dumb shit that normally kind of blocks me. Flow state for me is not (just) churning out lines of code but having that flow of thought in my head that eventually flows to a solved problem without being interrupted. Interesting that for you the flow state actually means your mind shutting up lol. For me it means shutting up about random shit that doesn't matter to the task at hand and being focused only on solving the current problem.

          It helps that I don't outsource huge tasks to the LLM, because then I lose track of what's happening and what needs to be done. I just code the fun part, then ask the LLM to do the parts that I find boring (like updating all 2000 usages of a certain function I just changed).
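The "update all 2000 usages" chore above is also scriptable without an LLM. A rough sketch (a plain word-boundary regex over a source tree; the function names and file extensions are illustrative, and a real refactor would want syntax-aware tooling):

```python
import pathlib
import re

def rename_usages(root, old, new, exts=(".py",)):
    """Sweep a source tree and rename every word-boundary usage of `old`
    to `new` -- the kind of mechanical sweep described above. Returns the
    list of files that were rewritten."""
    changed = []
    for path in sorted(pathlib.Path(root).rglob("*")):
        if path.suffix in exts and path.is_file():
            text = path.read_text()
            # \b keeps e.g. `not_old_fn` untouched when renaming `old_fn`
            updated = re.sub(rf"\b{re.escape(old)}\b", new, text)
            if updated != text:
                path.write_text(updated)
                changed.append(str(path))
    return changed
```

Running it on a checkout before review keeps the diff mechanical and easy to verify.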

        • By saltcured 2026-02-08 19:05

          As someone with no inner monologue, I think I could just as easily "flow" about a non-verbal task like spatial reasoning or a verbal task like reading, writing, or even engaging in a particularly technical or abstract conversation. Unlike you, my resting state is non-verbal and I would not be able to correlate verbal content with flow like that.

          To me, flow is a mental analogue to the physical experience of peak athletic output. E.g. when you are at or near your maximum cardiovascular throughput and everything is going to training and plan. It's not a perfect dichotomy. After all, athletics also involve a lot of mental effort, and they have more metabolic side-effects. I've never heard of anybody hitting their lactate threshold from intense thinking...

          My point is that the peak mental output could be applied to many different modes of thought, just as your cardiovascular capacity can be applied to many different sports activities. A lot of analogies I hear seem too narrow, like they only accept one thinking task as flow state.

          I also don't think it is easy to describe flow in terms of attention or focus. I think one can be in a flow state with a task that involves breadth or depth of attention. But, I do suspect there is some kind of fixed sum aspect to it. Being at peak flow is a kind of prioritization and tradeoff, where irrelevant cognitive tasks get excluded to devote more resources to the main task.

          A person flowing on a deep task may seem to have a blindness to things outside their narrow focus. But I think others can flow in a way that lets them juggle many things, instead having a blindness to the depth of some issues. Sometimes, I think many contemporary tech debates, including experience of AI tech, are due to different dispositions on this spectrum...

        • By macNchz 2026-02-08 16:28

          Interesting that for some people flow state is non-verbal. I personally have sort of a constant dialogue in my head (or sometimes muttered out loud under my breath) that I have to buffer or spool into various notes/diagrams/code. The process of prompting winds up being complementary to this—typing out that stream of consciousness into a prompt and editing it becomes a more effective form of reflection and ideation than my own process had been before. Sometimes I don’t even send the prompt—the act of structuring my thinking while writing it having made me rethink my approach altogether.

        • By yukinon 2026-02-08 19:18

          This really goes to show that everyone's 'flow state' is different.

          My inner dialogue is always chatty; that doesn't stop when I enter a flow state. It just becomes far more laser focused and far less distracted. LLMs help to maintain the flow because I'm able to use it to automate anything I don't care about (e.g. config files) and troubleshoot with me quickly to keep the flow going.

      • By luplex 2026-02-08 15:43

        I still hit the flow state in cursor, always reviewing the plan for some feature, asking questions, learning, reviewing code. I'm still thinking hard to keep up with the model.

      • By stackedinserter 2026-02-08 15:22

        The question is the result of these 2 hours in noise cancelling headphones.

    • By rubslopes 2026-02-08 16:13 (1 reply)

      I used to lose myself in focused work for hours. That's changed. Now I'm constantly pulled away, and I've noticed the pattern: I send a prompt, wait for the reply, and drift into browsing. Without SelfControl blocking me, I can't seem to resist. I am certainly more productive with LLMs, but I also feel much more tired (and guilty) after a day of work.

      • By thunfischtoast 2026-02-08 16:47 (1 reply)

        This has been a common pattern for me before LLMs, when my work required constantly rebuilding models or doing small deployments where each task/try took more than ~20ish seconds and less than say 3 minutes. It's enough to pull you out of it but not enough to make a proper break or switch tasks.

        I suffered from the problems you describe, grabbing a browser window or my phone which would usually take my attention much longer than the task and it left me burned out at the end of the day.

        There are some helper tools, like blocking "interesting" pages (like HN, reddit) in the browser, putting the phone in a bag at the end of the room, or using a pomodoro timer to sequence proper breaks. But in the end the only thing that really helped is getting into meditation: I try to use these little interruptions of flow as an opportunity to bore myself. Try to reframe boredom from being an annoyance that needs to be fought to a chance to relax your brain for a couple of seconds and refocus.

        The urge to grab the phone is strong at the start, but it gets better very soon once you manage to push through the discomfort in the first days.

        • By rubslopes 2026-02-08 19:01

          Thank you for this comment. I meditated almost daily for years, but somewhere along the way life got hectic and the habit slipped. Time to revisit it.

          EDIT: I wanted to add that I think it's a great time to get back to it because this mental fatigue has been leading me to migrate to more analog tools, like pen and paper for journaling and ditching my smartwatch for analog ones.
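The pomodoro approach mentioned above reduces to a few lines. A minimal sketch (the 25/5 defaults are just the common convention; the `sleep` parameter is injectable only so the loop can be exercised without actually waiting):

```python
import time

def pomodoro(work_min=25, break_min=5, cycles=4, sleep=time.sleep):
    """Minimal pomodoro loop: alternate focused work with short breaks,
    skipping the break after the final cycle. Returns the schedule that
    was run, as (phase, minutes) pairs."""
    schedule = []
    for i in range(1, cycles + 1):
        schedule.append(("work", work_min))
        sleep(work_min * 60)  # heads-down block
        if i < cycles:  # no break needed after the last cycle
            schedule.append(("break", break_min))
            sleep(break_min * 60)  # proper break, away from the screen
    return schedule
```

The point is less the timer than the forced rhythm: the break is scheduled, not stolen from the next prompt-wait.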

    • By amelius 2026-02-08 15:47

      As a programmer I want to minimize my context switches, because they require a lot of energy.

      LLMs force me to context switch all the time.

    • By jeremyjacob 2026-02-08 15:59 (2 replies)

      I don’t think it’s unreasonable to assume that in 1-2 years inference speed will have increased enough to allow for “real time” prompting, where the agent finishes work in a few seconds instead of a couple minutes. That will certainly change our workflows. It seems like we are in the dial-up era currently.

      • By deaux 2026-02-08 16:26

        It's arguably already here; only cost is a concern. We now have an open-weights model at Sonnet 4.5+ level - you can throw as much hardware at it as you want to speed it up.

        Today Anthropic started offering 3x(?) Opus speed at 5x cost as well.

      • By jmalicki 2026-02-08 22:48

        But then models will do more computation, to be slower.

        What will have to change are workflows. Why are you ever waiting for the prompt to return? When you send an email, do you stare at your screen until you get a reply?

    • By iterateoften 2026-02-08 15:17

      For me plan mode is consistently pretty fast. Then to implement, I just walk away and wait for it to be done while working on a new plan in a new tab

      Probably more stress if I’m on battery and don’t want the laptop to sleep or WiFi to get interrupted.

    • By zozbot234 2026-02-08 15:20

      You're supposed to write a detailed spec first (ask the AI for help with that part of the job too!) so that it's less likely to go off track when writing the code. Then just ask it to write the code and switch to something else. Review the result when the work is done. The spec then becomes part of your documentation.
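The spec-first workflow described above can even be lightly enforced before kicking off a run. A toy sketch (the required section names are just one possible convention I am assuming, not a standard):

```python
# Hypothetical convention for a "ready" spec -- adjust to taste.
REQUIRED_SECTIONS = (
    "## Goal",
    "## Non-goals",
    "## Interfaces",
    "## Acceptance criteria",
)

def spec_is_ready(spec_text):
    """Check that a spec covers the basics before handing it to an agent.
    Returns (ok, missing_sections) so the gaps can be reported."""
    missing = [s for s in REQUIRED_SECTIONS if s not in spec_text]
    return (not missing, missing)
```

A pre-flight check like this is cheap insurance: most "agent went off track" sessions trace back to a spec that never pinned down non-goals or acceptance criteria.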

    • By rcarmo 2026-02-08 16:04

      This. It’s the context switching and synchronicity, just like when you are managing a project and go round the table - every touch point risks having to go back and remember a bazillion things, plus in the meantime you lose the flow state.

    • By agumonkey 2026-02-08 21:54

      Isn't that similar to the FSD issues, where people cannot engage deeply enough because it's "FSD" but they still have to switch back a little, and sometimes go into crisis mode to avoid a wreck?

    • By xnx 2026-02-08 15:23

      Inferring is the new compiling: https://3d.xkcd.com/303/

      Edit: Looks like plenty of people have observed this: https://www.reddit.com/r/xkcd/comments/12dpnlk/compiling_upd...

    • By wouldbecouldbe 2026-02-08 15:08 (3 replies)

      That’s why now it’s legitimate to work on multiple features or projects at the same time

      • By well_ackshually 2026-02-08 15:11 (1 reply)

        This way you can do twice the terrible job twice as fast!

        (Also, this only applies if what you're working on happens to be easily parallelizable _and_ you're part of the extremely privileged subset of SV software engineers. Try getting two Android Studios/XCodes/Clang builds in parallel without 128GB of RAM, see what happens).

        • By wouldbecouldbe 2026-02-08 16:40 (1 reply)

          I appreciate sarcasm, but this is just snarky.

          But yeah improving build speed & parallel running I think are one of the biggest advances devs can do to speed up development time in the AI age. With native apps that can be a challenge. I restructured a react native project to make it faster to iterate, but I have a feeling you might not be fond of rn.

          • By johnnyanmac 2026-02-08 19:59

            >I appreciate sarcasm, but this is just snarky.

            With all due respect, the comment does carry the vibes of "it's okay, I can totally multitask" that's nearly a cliche in programming. It just feels a bit insensitive to respond to an article talking about context switching being a big problem with LLMs and then suggest the biggest possible context switch of managing multiple workstations.

            And yes, your hardware expenditure will vary per industry. I can barely run one instance of Unreal Engine as is.

      • By Forge36 2026-02-08 15:12 (4 replies)

        Context switching like that is exhausting

        • By jdonaldson 2026-02-08 15:14

          It's a different kind of fatigue, but it's something I felt I got stronger at over time. Beats waiting IMHO, but be sure to give yourself a chance to rest.

        • By Davidzheng 2026-02-08 15:17

          really interested in what the brain does when it "loads" the context for something it's familiar with but is currently unloaded from the working memory. Does it mostly try to align some internal state? or more just load memories into fast access

        • By AlienRobot 2026-02-08 15:16

          The next step is running an LLM that tries to figure out parts of the project that you aren't working on so it automatically starts coding that while letting you code in peace other stuff manually.

        • By wouldbecouldbe 2026-02-08 16:35

          Depends on the person I guess

      • By joebates 2026-02-08 16:36

        I tried this but didn't realize how exhausting it is to think about even 2 smaller items at once.

    • By JeremyNT 2026-02-08 16:46

      I have this problem too.

      I try to fix it by having multiple opencode instances running on multiple issues from different projects at the same time, but it feels like I'm just herding robots.

      Maybe I'm ready for gastown..

    • By WarmWash 2026-02-08 15:15 (1 reply)

      I hope Google has been improving their diffusion model in the background this whole time. Having an agentic system that can spin up diffusion agents for lite tasks would be awesome

      • By ithkuil 2026-02-08 16:05 (1 reply)

        Because they would be faster?

        • By WarmWash 2026-02-08 16:26

          ~1000 tok/sec and lite/flash model quality, without crazy cerebras level hardware.

    • By Davidzheng 2026-02-08 15:18 (1 reply)

      makes you wonder how automatable this babysitter role is...

      • By pfdietz 2026-02-08 15:41

        That was my reaction.

    • By Fire-Dragon-DoL 2026-02-09 3:22

      So far the only solution I found is doing multiple things in parallel. Apparently it's terrible for our brain, but it does work

    • By mavamaarten 2026-02-08 15:46

      For me it honestly matches pretty well. I give it an instruction and go reply to an email, and when I'm back in my IDE I have work (that was done while I was doing something else) to review.

      Going back from writing an email to working, versus going back from email to reviewing someone else's work feels harder.

    • By 9rx 2026-02-08 16:44

      Same here. But, then again, I talk about it all the time, so who knows what the article is trying to get at.

    • By the-grump 2026-02-08 15:43

      What has worked for me is having multiple agents do different tasks (usually in different projects) and doing something myself that I haven't automated yet.

      e.g. managing systems, initiating backups, thinking about how I'll automate my backups, etc.

      The list of things I haven't automated is getting shorter, and having LLMs generate something I'm happy to hand the work to has been a big part of it.

    • By aixpert 2026-02-09 7:48

      As long as it's new, I tremendously enjoy binge-watching Claude: I have three tabs open, and if one of them is not doing something interesting I just switch to a different channel and occasionally influence the narrative

    • By jwarden 2026-02-08 15:20 (1 reply)

      It’s like being a manager.

      • By parpfish 2026-02-08 15:24 (1 reply)

        No, it’s like being a micro manager.

        I don’t just give somebody a ticket and let them go. I give them a ticket but have to hover over their shoulder and nitpick their design choices.

        Tell them “you should use a different name for that new class”, “that function should actually be a method on this other thing”, etc

        • By xnx 2026-02-08 15:43 (1 reply)

          Nitpicking seems like a choice. It's also possible to be more relaxed/removed and only delve in when there is a problem.

          • By johnnyanmac 2026-02-08 20:04

            We're at the stage right now where there's always a problem. That's the issue with this type of programming right now.

    • By ReptileMan 2026-02-08 16:34

      Have a three monitor setup. Have some game on one in alt tab.

    • By SecretDreams 2026-02-08 15:25 (1 reply)

      I wonder if this is how managers feel -_-'

      • By coffeefirst 2026-02-08 16:33

        Somewhat. You have to set yourself up to manage your own attention because the context switching is rough. If you don’t you will burn out.

        But the cycle is longer. When you help a person they don’t come back to you 4-20 minutes later.

        I also only review PRs at specific times a day, because that’s more cognitively intensive and switching in and out pretty much ensures you’ll do it badly.

        Either way, I’m really starting to think agentic as designed is a deeply flawed workflow. The future could be small, fast models that finish pseudo-code and look stuff up to aid focus. Anthropic’s own research seems to support this.

    • By pylua 2026-02-08 15:05 (2 replies)

      What are you generating that the llm takes so long ? I usually prompt and review in small pieces.

      • By Forge36 2026-02-08 15:09

        For me: will this task take 30 seconds or 3 minutes? With good planning I've been able to step away and come back. Sometimes it decides to prompt me within 5 seconds for permissions. Sometimes it runs for 15 minutes.

        The output is still small and I can review it. I can switch tasks, however if it's my primary effort for the day I don't like stepping away for an hour to do something else.

      • By Scene_Cast2 2026-02-08 15:11

        Not the OP, but the new LLMs together with harnesses (OpenCode in my case) can handle larger scopes of work - so the workflow moves away from pair programming (single-file changes, small scope diffs) to full-feature PR reviewing.

    • By SomeHacker44 2026-02-08 15:23 (1 reply)

      "Compiling!" (cf. xkcd)

    • By likeajr 2026-02-08 15:48

      [dead]

    • By z0ltan 2026-02-08 15:22

      [dead]

  • By barishnamazov 2026-02-08 15:02 (11 replies)

    This write-up has good ideas but gives me the "AI-generated reading fatigue." Things that can cleanly be expressed in 1-2 sentences are whole paragraphs, often with examples that seem unnecessary or unrealistic. There are also some wrong claims, like the one below:

    > The Hacker News front page alone is enough to give you whiplash. One day it's "Show HN: Autonomous Research Swarm" and the next it's "Ask HN: How will AI swarms coordinate?" Nobody knows. Everyone's building anyway.

    These posts got less than 5 upvotes, they didn't make it to home page. And while overall quality of Show HN might have dropped, HN homepage is still quite sane.

    The topic is also not something "nobody talks about," it's being discussed even before agentic tools became available: https://hn.algolia.com/?q=AI+fatigue

    • By raincole 2026-02-08 15:52 (2 replies)

      > HN homepage is still quite sane.

      Those Show HN posts aren't the insane part. Insane part is like:

      > Thank you, OpenClaw. Thank you, AGI—for me, it’s already here.

      > If you haven't spent at least $1,000 on tokens today per human engineer, your software factory has room for improvement

      > Code must not be reviewed by humans

      > Following this hypothesis, what C did to assembler, what Java did to C, what Javascript/Python/Perl did to Java, now LLM agents are doing to all programming languages.

      (All quoted from actual homepage posts today. Fun game: guess which quote is from which article)

      • By Matheus28 2026-02-08 17:03

        I personally believe what we’re seeing are newcomers who aren’t even programmers who fall for all this crap and then come here to post about it

      • By krupan 2026-02-08 16:40

        This is so disheartening. Time to short more tech stocks

    • By jairuhme 2026-02-08 16:03 (1 reply)

      > Things that can cleanly be expressed in 1-2 sentences are whole paragraphs

      Perhaps the author just likes to write? I've only just recently started blogging more, but I unexpectedly started to really enjoy writing and am hoping to have my posts be more of a "story". Different people have different writing styles. It's not a problem, it's just that you prefer reading posts that are straight to the point.

    • By StilesCrisis 2026-02-08 15:31 (1 reply)

      "You're not imagining it." I hit the back button immediately.

      • By goostavos 2026-02-08 15:45 (1 reply)

        Sigh.. same.

        The real AI fatigue is the constant background irritation I have when interacting with LLMs.

        "You're not imagining it" "You're not crazy" "You're absolutely right!" "Your right to push back on this" "Here's the no fluff, correct, non-reddit answer"

        • By Xcelerate 2026-02-08 16:44

          “You’re not [X]—you’re [Y]” is the one that drives me nuts. [X] is typically some negative characterization that, without RLHF, the model would likely just state directly. I get enough politics/subtext from humans. I’d rather the LLM just call it straight.

    • By QuadmasterXLII 2026-02-08 15:41

      The boring and likely answer is that it was just clauded out: "I'm tired, chat. Look through my last ten days of sessions and write and publish a blog post about why." But it would be fascinating to discover that the author has actually looked at so much AI output that they just write like this now.

    • By idopmstuff 2026-02-08 15:23 (2 replies)

      > Things that can cleanly be expressed in 1-2 sentences are whole paragraphs

      Funny, I don't associate that with AI. I associate it with having to write papers of a specific length in high school. (Though at least those were usually numbers of pages, so you could get a little juice from tweaking margins, line spacing and font size.)

      • By nemomarx 2026-02-08 16:15

        I think those kind of texts (school papers, marketing fluff, linkedin influencers trying to look smart) just influenced the dataset a lot.

        Too bad we didn't have more laconic, interesting books to feed in?

      • By ryukoposting 2026-02-08 15:30

        I had word/page quotas, but I also don't write my blog in a way that resembles the papers I wrote for school 10 years ago.

    • By bwfan123 2026-02-08 15:28

      > but gives me the "AI-generated reading fatigue."

      Agree. The article could have been summarized into a few paragraphs. Instead, we get unnecessary verbiage that goes on and on in an AI-generated frenzy. Like the "organic" label on food items, I can foresee labels on content denoting the kind of human generating it: "suburbs-raised", "free-lancer", etc.

    • By jitbit 2026-02-09 12:18

      Can't upvote this comment enough.

      "You're not imagining it."

      "But my days got harder. Not easier. Harder."

      "Now?" as the paragraph opener

      "Why? No reason." as the paragraph opener

      Nice try OP, submitting your own post to HN.

    • By pcurve 2026-02-08 15:14 | 2 replies

      The headline is clickbait-y, but I think the article is well articulated. I found the "What actually helped" section helpful too.

      • By rsrsrs86 2026-02-08 16:16

        Article is mostly GPT vomit after a couple of bullet points. If it's not as easy for others to tell, I'll start my Blade Runner-style shop that tells you who NOT to hire.

      • By barishnamazov 2026-02-08 15:28

        I'd personally think twice before applying some of the advice in that section. Here's my take.

        > Time-boxing AI sessions.

        Unless you are a full-time vibe coder, you already wouldn't be using AI all the time. And time-boxing it feels artificial if it's able to make good, real progress (not unmaintainable slop).

        > Separating AI time from thinking time.

        My usage of AI involves doing a lot of thinking, either collaboratively within a chat, or by myself while it's doing some agentic loop.

        > Accepting 70% from AI.

        This is a confusing statement. 70% of what? What does 70% usable even mean? If it means around 70% of features work and the other 30% is broken, perhaps AI shouldn't be used for that 30% in the first place.

        > Being strategic about the hype cycle.

        Hype cycles have always been a thing. It's good for the mind in general to avoid them.

        > Logging where AI helps and where it doesn't.

        I do most of this logging in my agent md files instead of a separate log. Also, after a while my memory picks up really quickly what AI can do and what it can't. I assume this is a natural process for many fellow engineers.

        > Not reviewing everything AI produces.

        If you are shipping at an insane speed, this is just an expected outcome, not advice you can follow.

    • By johnnyanmac 2026-02-08 20:09

      >These posts got less than 5 upvotes, they didn't make it to home page. And while overall quality of Show HN might have dropped, HN homepage is still quite sane.

      The top of the last 24 hours is a better way to gauge sentiment. Here's the top 5 for today (not including this post at #2):

      > DoNotNotify is now Open Source

      > I am happier writing code by hand

      > Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory

      > Slop Terrifies Me

      > Vouch

      Real shame, we just missed the politics post at #7.

    • By spopejoy 2026-02-09 2:44

      I'm also getting really annoyed by AI-generated images like the ones in this article, which don't really help comprehension but make the author feel like they're "pro blogging", because god forbid you have two paragraphs in a row without a subhead or an image.

      Programmers complaining about AI but then ripping off umpteen illustrators' labor through AI is infuriating.

  • By tangotaylor 2026-02-08 15:13

    > Your manager sees you shipping faster, so the expectations adjust. You see yourself shipping faster, so your own expectations adjust. The baseline moves.

    This problem has been going on for a long time; Helen Keller wrote about it almost 100 years ago:

    > The only point I want to make here is this: that it is about time for us to begin using our labor-saving machinery actually to save labor instead of using it to flood the nation haphazardly with surplus goods which clog the channels of trade.

    https://www.theatlantic.com/magazine/archive/1932/08/put-you...

HackerNews