Ask HN: AI Depression

2026-02-13 12:12

Hi,

Throw-away account because my original one is easily identifiable.

Does anyone else feel depressed about the AI push and hype? I'm around 45 and have been happily hacking and delivering stuff for 25 years.

I use AI daily — it's a useful tool. But the gap between the marketing and the reality for many of us is hard to describe. The corporations, LinkedIn gurus, and podcasters declaring our obsolescence are overwhelmingly people who have never built or maintained anything complex in their lives. I'm sick of posts portraying developers as awesome managers orchestrating fleets of Codex and Claude Code instances — I don't know a single person who actually has access to unlimited quotas for that.

I'm now scared to publish open source because some random AI agent might spam my repo with garbage PRs and issues. Are we really expected to deliver mediocre C compilers while emitting millions of tons of CO2 into the atmosphere just to make a handful of rich people even richer? And suddenly we have something like Moltbook to pollute our planet even more. Where are we going with this?

Does anybody else feel something like this? I'm seriously thinking about leaving the industry to protect my mental health, or switching to some tech that is hard for AI.


Comments

  • By bakibab 2026-02-13 19:18

    I actually opened HN to ask something similar. Thank you for putting this out there. Sadly, people who haven't delivered anything complex genuinely believe this is the end of the programmer role. I'm 43 and went through depression about my place in the industry. It was scary.

    Then I decided to build something complex using Claude, and within a week I realized that whoever claims "90% of code is written by LLMs" is not being totally honest. The parts left out of such posts tell a different story: programming is going to get harder, not easier.

    The project started great but turned into a large ball of spaghetti. It became really hard to extend: every feature I wanted to add required Claude to rearrange large portions of the codebase. Debugging and reading logs are also very expensive tasks. If you don't have a mental model of the codebase, you have to rely on the LLM to read the logs and figure things out for you.

    Overall, my impression is that we need to use this as just another tool and get proficient at it, instead of thinking it will do everything.

    Also, the recent Anthropic partnership with Accenture suggests AI won't do it all [0]. If it could, why train humans?

    So please don't leave the industry. I think it will get worse before it gets better. We need to stick around and plan to outlast this hype period.

    [0] https://www.anthropic.com/news/anthropic-accenture-partnersh...

  • By bigstrat2003 2026-02-13 18:05 | 1 reply

    I also struggle. I do not personally find LLMs useful for my job, but it does concern me that the hype might become so strong that I'm unable to find a job where I'm allowed to simply not use the tool if it isn't providing me value. And if it comes to that I'll use the stupid thing because I'd rather do that than go homeless, but I will certainly hate my job if it becomes just "supervise the LLMs".

    In order to combat that worry, I'm trying to focus on gratitude that I have had a career where I got paid for doing fun things (programming), rather than worrying about what if my career stops being fun. Many people never get that chance, after all, and live their entire lives working menial jobs just to put food on the table. I'm also trying to make my career less important to my own mental happiness by focusing on other things that are good and will not go away even if my career stops being fun (for me, that means my marriage and my faith).

    It's not easy to do, at all. And it doesn't help with the worry that I might lose my job entirely because the industry abandons sense and fires people in favor of LLMs. But it does help a little, and I'm hoping that with practice the mental discipline will get easier and I can let go of some of the anxiety.

    • By tacostakohashi 2026-02-13 19:01

      I think the hype is already so strong, and coming from so high up, that it's not wise to say anything negative about it out loud. It's just like diversity / DEI / net zero etc. was five years ago, everyone has to comply and play along with it, at least performatively.

      Even though I don't personally find AI terribly useful for my own actual work, I keep people happy by talking about it where possible. If someone has a suggestion involving something fairly repetitive, I tell them "that sounds like a great use case for an AI agent", even if it is, in fact, a great use case for a shell script.

      If someone has an inane question, I tell them "Have you tried asking copilot about this?" - it's the new "Let me google that for you...".

      If someone has a request to add a new feature that seems useless and counterproductive, I tell them "That's a great idea! How about you do that, using AI?", instead of getting into a debate about why it won't work in practice.

      I'm finding that mentioning AI in these contexts keeps people happy enough, without extensive personal use, for now at least.

  • By 0xecro1 2026-02-13 13:15 | 1 reply

    Hi, I’ve been working with embedded Linux for 18 years.

    I’ve been actively trying to apply AI to our field, but the friction is real. We require determinism, whereas AI fundamentally operates on probability.

    The issue is the Pareto Principle in overdrive: AI gets you to 90% instantly, but in our environment, anything less than 100% is often a failure. Bridging that final 10% reliability gap is the real challenge.

    Still, I view total replacement as inevitable. We are currently in a transition period where our job is to rigorously experiment and figure out how to safely cross that gap.

    Good luck!

    • By jacquesm 2026-02-13 13:25 | 1 reply

      And by not doing the 90% yourself you lack the understanding you need to be able to tackle the remaining 10%.

      • By 0xecro1 2026-02-13 13:52 | 2 replies

        Absolutely agree. I do vibe-code, but I still review every line of that 90% — I don't move forward until I understand it and trust the quality. Right now, that human verification step is non-negotiable.

        That said, I have a hunch we're heading toward a world where we stop reading AI-generated code the same way we stopped reading assembly. Not today, not tomorrow, but the direction feels clear.

        Until then — yes, we need to understand every bit of what the AI writes.

        • By AnimalMuppet 2026-02-13 14:46 | 1 reply

          I disagree. Compilers were deterministic. Complicated, but deterministic. You could be sure that it was going to emit something sensible.

          AI? Not so much. Not deterministic. Sure, the probability of something bizarre may go down. But with AI, as currently constituted, you will always need to review what it does.

          • By 0xecro1 2026-02-13 15:05 | 2 replies

            I think the comparison is slightly off. The compiler was never the author — it was the verifier.

            The real comparison is:

            1. Human writes code (non-deterministic, buggy) → compiler catches errors

            2. AI writes code (non-deterministic, buggy) → compiler catches errors

            In both cases, the author is non-deterministic. We never trusted human-written code without review and compilation either (plus lots of tests). The question isn't whether AI output needs verification — of course it does. The question is whether AI plus human review produces better results faster than a human alone.

            • By jacquesm 2026-02-13 16:50

              The compiler isn't so much a verifier as a translator. Verification wasn't the initial focus, but over time it became more and more important.

            • By apothegm 2026-02-13 15:32

              The compiler catches certain classes of errors. And AI can spit out unmaintainable code or code with incorrect logic or giant security holes a lot faster than humans can review it.

        • By james_marks 2026-02-14 0:18

          My impression is that if you are reviewing the edits, that is not “vibe coding”, it’s AI-assisted programming.

HackerNews