What are the best coping mechanisms for AI Fatalism?

2026-02-25 14:00 · 44139

Your kids forwarded you Matt Shumer's Something Big Happened article. Your feed exploded with the Citrini 2028 Global Intelligence Crisis and its artful, immutable chain reactions. The key leaders of the AI labs struggle openly with the morality of what they are building as their safety leaders quit in frustration. Policy leaders strive to regulate AI as if it were atomic weapons (thanks Oppenheimer).

What are the best psychological coping mechanisms for this stage of the S-curve?

Asking for a generation...


Comments

  • By simonw 2026-02-25 14:37 · 4 replies

    I suggest leaning into the joy a little.

    I know a lot of people - serious, thoughtful people with impressive careers behind them - who are having the time of their lives right now.

    I've spoken to multiple people who have come out of retirement because the challenges and opportunities of this new space are irresistible to them.

    All those side project ideas from the past few decades have suddenly become much more feasible. There's so much new to explore and build.

    We get to reinvent how software is written. The field is wide open - anyone can be the first to find a new pattern that works, or figure out a new way to apply this tech to real world problems.

    There are a thousand reasons to be negative about the implications of this technology, and many of them are legitimate. Don't let that distract you entirely from the parts of this that are genuinely inspiring, enabling and fun.

    • By SirensOfTitan 2026-02-25 16:26 · 1 reply

      You might absolutely be correct, but there is a bias within our field to overly focus on the technology at the expense of everything else.

      You are speaking about well-off engineers as a fairly famous top 1% engineer. You need to consider your own bias here. What aren't you seeing?

      I think labor organization is absolutely vital now, and it can certainly mix favorably with techno-optimism, but it is silly for us as an industry to sit back and let our jobs be forever changed without a seat at the table. It is silly to ignore the ways in which this technology could negatively change the median knowledge worker's ability to survive and thrive.

      • By simonw 2026-02-25 17:01 · 2 replies

        I emphasized the career status of the people I'm describing here precisely because it's important to acknowledge how different perspectives are affected by privilege in this kind of conversation.

        • By pibaker 2026-02-26 4:24 · 1 reply

          In practice this sounds exactly like when organizations go "we are located on the traditional land of X people" and then do absolutely nothing about, say, the X people who are still around and living in poverty.

          It feels like something you acknowledge to alleviate your own sense of guilt. Not something others would find useful.

          • By simonw 2026-02-26 5:05

            I honestly don't feel particularly guilty here. I'm trying to help other people find beneficial ways to use this stuff, I feel good about doing that.

        • By Henchman21 2026-02-26 4:20 · 1 reply

          So what's the aggregate perspective of the 99%? You've described the 1% well, but that's only... well to be honest it is probably quite a bit less than 1% of all humans.

          Any thoughts? What do you think the average work-a-day Joe thinks about all this?

          • By simonw 2026-02-26 5:04 · 1 reply

            I'd love to know that. Anecdotally plenty of regular people love it and plenty more hate it.

            Have you seen a study on this that you find credible?

            • By logicprog 2026-02-26 16:15 · 1 reply

              This seems like a very credible, thorough, and telling survey from Pew on what average people think about AI: https://www.pewresearch.org/internet/2025/04/03/how-the-us-p...

              Key findings (to me):

              - Way more people think AI will have a negative effect on the US over the next 20 years than think it will have a positive effect (35% vs 17%)

              - Even more people think increased usage of AI will personally harm them than benefit them (43% vs 24%)

              - Women and men have a huge gap on this: 53% of men say increased use of AI makes them feel more excited than concerned, versus 30% of women (probably due to deepfakes, but also likely due to women being more likely to be progressive, and the big anti-AI memes going around progressive spaces).

              - 64% of people think AI will net eliminate jobs over the next 20 years, versus just 14% who think it won't make much of a difference

              It's also worth noting that AI experts were wildly out of touch with those attitudes compared to the general population.

              • By simonw 2026-02-26 18:13 · 2 replies

                Thanks, that study is really useful.

                It does however suffer from the (maybe insurmountable?) problem that "AI" is an extremely vague term with many potential interpretations.

                I expect the "AI experts" in that study may have had a different definition in mind than the general public.

                I remain much more skeptical about the impact of AI image/video/audio generation on society than I do LLMs, but LLMs themselves have such a wide array of potential uses that their impact will vary wildly depending on what they're being used for.

                • By SirensOfTitan 2026-02-27 2:20

                  LLMs are obviously a transformative technology, but I think as tech people we have to be deeply thoughtful about how we ensure humans can apply meaning through their work.

                  A lot of my peers not in tech are worried about their capacity to work. And work isn’t just about making money but finding meaning. LLMs if not used correctly can transform meaningful labor into some perverse form of consumption.

                • By vostrocity 2026-02-27 0:18 · 1 reply

                  You might call me paranoid, but I’m avoiding AI boosterism in my public persona because I can imagine a future backlash against its advocates.

                  • By simonw 2026-02-27 2:05

                    There's a reasonably robust backlash already!

    • By justonepost2 2026-02-25 15:24 · 1 reply

      > serious, thoughtful people with impressive careers behind them

      > I've spoken to multiple people who have come out of retirement because the challenges and opportunities of this new space are irresistible to them.

      > side project ideas from the past few decades

      This joy seems to apply to a lot of people who don't need to worry about silly unimportant things like money anymore.

      • By simonw 2026-02-25 15:40 · 1 reply

        Yes, it does. It's a lot easier not to be scared of the impact this stuff could have on your career if you are already financially secure.

        (I'm still personally optimistic that software engineering careers will have a bright future, for what that's worth.)

        • By bigtex88 2026-02-27 2:44 · 1 reply

          How do you expect this plays out when 99% of people have no money and no hope? Do you think they'll just sit back and take it?

          • By cstoner 2026-02-28 18:12

            My friend once told me an analogy about modern software engineering that went something along the lines of the following.

            There are piles of money just sitting around waiting for someone to pick them up. Unfortunately it's really hard to do because the piles are scattered around and hard for any one person to pick up.

            So the best way to coordinate picking up all of these piles of money is to figure out how to convince a bunch of people with autism spectrum disorder to program machines to do it for you.

            In this analogy you can get rich by doing one of a few jobs:

            * Finding the piles of money (sales)

            * Coding robots to pick up the piles of money (engineering)

            * Coordinating the above two in some capacity (management)

            AI certainly makes it less lucrative to pick up some of these piles because there will be more people picking them up. But it will probably also be the case that it lets us pick up piles of money we didn't even know existed before. Or ones that we knew about but were hidden deep inside of sprawling caves that were intractable to search before.

            If it turns out that we don't need any coordination to pick up these piles any more, then pretty much everyone is out of a job. Not just software engineers, but pretty much every white collar job. Once that's done it'll only be a matter of time before they can automate away the blue collar jobs, too.

            I guess I'm just skeptical that we'll actually automate away all of the white collar jobs.

    • By coffeebeqn 2026-02-25 18:49 · 1 reply

      I have been using the tools for the last 3 years and I don’t find them joyful. I’m a craftsman at heart, and managing agents sounds like an even worse proposition than managing people.

      • By simonw 2026-02-25 19:49 · 1 reply

        The people I know who are having the most fun with this stuff do tend to have had engineering management or other people managing experience in the past.

        It's a great deal easier than managing people! Agents don't have ambitions and fears and opinions and egos to take into account.

        • By cal_dent 2026-02-26 0:22 · 2 replies

          Doesn't this imply a quite insular, and somewhat anti-human (if you allow me to be a bit flowery), future? I get that other people can be annoying, but that's part of us as a species. All of us just working in our little silos with all our LLM tools doesn't seem like a lot of fun long term.

          • By simonw 2026-02-26 1:16

            Just because you are working with a team of coding agents doesn't mean you don't also get to work with other humans as well.

            What's changed is the scope of ambition of the projects you can take on with that team.

            Think about a project that a team of 3 could have taken on together in 2022. I expect that many projects of that scale could be handled in 2025 by a single expert coding agent enhanced engineer.

            So now we get to ask ourselves what a team of 3 coding agent enhanced engineers working together can take on instead!

            I've always been frustrated at how long it takes to build interesting software. Part of the joy for me right now is rediscovering how large a project an individual or a small team can take on.

          • By logicprog 2026-02-26 16:17

            Sounds like great fun to me

            — resident admitted anti-human

    • By GeoAtreides 2026-02-25 19:11 · 1 reply

      I tried leaning into the joy; it took about 10 seconds before I remembered I don't have any discretionary spending, that the job market is crashing, that I don't own a house, and that AI is destroying the very industry I've trained for since I was eleven, taking my means of survival away. And no personal project will save us, because everything will drown in a deluge of vibeslop that devalues any kind of work and knowledge.

      Tell me Simon, what happens with the economy when no one affords more than barely survival? To whom are we going to sell those side projects? To the 1% with their soon to crash stocks? To the disappearing white collars? To the proles that only spend on food, alcohol and gambling?

      Is that where the joy is? In seeing the hope fading away? In our stolen future? Tell me, so I too can be joyful like you.

      • By simonw 2026-02-25 19:42 · 1 reply

        > Tell me Simon, what happens with the economy when no one affords more than barely survival?

        Everything is awful for almost everyone. I expect even the ultra wealthy will find their lives significantly less pleasant than they were before.

        I hope that doesn't happen. That's why I don't write much about "AGI" - I'm unexcited about the concept, at least until someone can convincingly explain how the economy doesn't collapse for regular humans as a result.

        I maintain my joy partly by not believing the AGI hype. I refer to that as the science fiction version of AI. I don't think that's what we have today.

        We're three years into the ChatGPT revolution now and so far the main observable impact on the craft that I care about is that I can build more ambitious things.

        • By GeoAtreides 2026-02-25 20:27 · 1 reply

          great sleight of hand there, making it all about AGI when it wasn't about AGI at all

          • By simonw 2026-02-25 21:17 · 1 reply

            I categorize "nobody has a job any more" as part of the AGI conversation, especially since one of the more common AGI definitions floating around is OpenAI's "highly autonomous systems that outperform humans at most economically valuable work" from https://openai.com/charter/

            So I don't buy your "sleight of hand" criticism here. If you ignore the fact that I used the acronym AGI what did you think of my response?

            • By GeoAtreides 2026-02-25 22:32 · 2 replies

              If I ignore the AGI parts, there's only:

              >Everything is awful for almost everyone. I expect even the ultra wealthy will find their lives significantly less pleasant than they were before.

              >We're three years into the ChatGPT revolution now and so far the main observable impact on the craft that I care about is that I can build more ambitious things.

              I think you refuse to extrapolate the obvious consequences and have forgotten (if you ever knew) what it's like to be in the trenches. You put on the horse blinders of 'easy to build' on the left and 'so much fun' on the right and happily trot on, while the wolves of white collar job automation are closing in on the middle class. You believe that we'll all become cyborg centaurs, while the managers believe we'll all become redundant. You think people will care about the sideslop everyone will build, not seeing that 'everyone will build' means 'no one will care'. Worse, it means no one will buy (knowledge|skill|creation).

              Indeed we have not tipped over into the abyss, but we're teetering and the wind is picking up. It's not the end times, it's not AGI, and it doesn't have to be AGI to wreak great damage on the economy, our craft and, ultimately, our way of life and our minds.

              And the wind is picking up, faster and faster.

              • By simonw 2026-02-25 22:51

                > You believe that we'll all become cyborg centaurs, while the managers believe we'll all become redundant

                I hope that we'll all become cyborg centaurs, and that people who think software engineers will all become redundant will be proved very wrong.

                I'm trying to use what little influence I have to push things in that direction by ensuring software engineers have the knowledge and tools they need to become cyborg centaurs.

                There is a very real chance that you're right, and that the way LLMs are going will massively disrupt the lives of software engineers in a very bad way.

                I don't think that's a foregone conclusion yet, and I'm continuing to hope (and in my own tiny way push) for a better path.

              • By logicprog 2026-02-26 16:29 · 1 reply

                > ...the wolves of white collar job automation are closing in on the middle class. You believe that we'll all become cyborg centaurs, while the managers believe we'll all become redundant.

                I think he doesn't think it's possible to actually make white collar workers redundant because we don't have AGI, since AGI is precisely defined as that — continually deferred, likely impossible for current technology — end goal that we know we haven't achieved yet.

                And FWIW, I think he's right. The fact that LLMs are inherently stochastic, cannot reason or plan sufficiently by themselves, and do not have a world model, means that you will always need humans in the loop not just to oversee, verify, and act as an accountability sink (which by itself could be pretty bad), but also to break problems down, plan, and architect for them, and, when possible, design automated verification systems so that the LLM can act as the core of a cybernetic feedback loop, like a sort of linear genetic programming algorithm (which is what a Ralph loop does, incidentally). This last part especially, the act of figuring out how to specify the desired behaviors in a machine-verifiable way (either by hand or in a very tight supervised loop with an LLM that relies heavily on human expertise and judgement), looks a lot like just a higher level of programming to me. It's just red-green BDD.
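
                As a toy, illustrative sketch of that kind of verification loop: every name below is invented for the example, and `propose_patch` is a stub standing in for a real model call, not any actual agent API.

```python
# Toy sketch of an LLM-in-a-feedback-loop ("Ralph loop") as described
# above. `propose_patch` is a stub standing in for a real model call;
# all names here are invented for illustration.

def propose_patch(spec, previous, feedback):
    """Stand-in for the LLM: propose the next candidate.

    A real system would send `spec` and the verifier `feedback`
    to a model; this stub just improves by one step per call."""
    return 0 if previous is None else previous + 1

def verify(candidate, spec):
    """The human-authored, machine-verifiable spec (the 'red-green' part).

    Returns a list of failure messages; an empty list means green."""
    failures = []
    if candidate < spec["required_steps"]:
        failures.append(
            f"need at least {spec['required_steps']} steps, got {candidate}"
        )
    return failures

def ralph_loop(spec, max_iters=10):
    """Propose -> verify -> feed failures back, until green or budget spent."""
    candidate, feedback = None, []
    for _ in range(max_iters):
        candidate = propose_patch(spec, candidate, feedback)
        feedback = verify(candidate, spec)
        if not feedback:      # green: spec satisfied
            return candidate
    return None               # still red: a human has to step in
```

                The human contribution is `verify`, the machine-checkable spec the candidates are judged against; the loop itself just iterates until it goes green or the budget runs out.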

              > You think people will care about the sideslop everyone will build, not seeing that 'everyone will build' means 'no one will care'. Worse, it means no one will buy (knowledge|skill|creation).

                I think this is again assuming AGI, where AI slop sort of reaches and becomes indistinguishable from the designs of people with good taste, architectural knowledge, experience, and care for the craft of actually making good, reliable things — but we're not there yet, and as I said above, I don't know that we'd ever get there. So yes, everyone will be able to make things, but not all of it will be of the same quality even if they're all using AI to do it! See, for example, the kind of thing you get if you put an agent in a Ralph loop to make a terminal emulator, versus what Mitchell Hashimoto is able to do using AI on Ghostty.

                • By GeoAtreides 2026-02-26 17:19 · 1 reply

                  You are missing my point.

                  It's not what you believe, it's not a question of AGI, it's what the managers and investors think will happen. It's about headcount and it's about the average dev. Sure, now you have one smart guy in the loop and 99 guys who can't afford their rent anymore.

                  You ignore the impact of "anyone can do it" in the mind of a manager/CEO. It demotes the specialist dev to generic labourer; it devalues their worth.

                  Tech people need to develop a theory of mind and understand that other people have very different views of reality and so make very different plans for the future. It doesn't matter that you and Simon think AGI is not happening, and it doesn't matter that you both think there must always be a meatbag in the loop; what matters is what the managerial class and the guys with capital _think_ is happening.

                  • By logicprog 2026-02-26 17:29 · 1 reply

                    I know bosses can think that AGI is happening and that they can get away with firing workers, but we've already seen a bunch of high-profile cases of companies very rapidly learning their lesson and re-hiring people after firing them because of AI. The turnaround time is like, what, a few months? I'm not worried about it for that reason: they learn their lesson really quickly because they get slapped by reality. If they try firing people and throwing a bunch of generic workers into the loop, or not having humans in the loop at all, they run into problems extremely fast. This isn't like an infrastructure thing, where if you underinvest it takes a while for the cracks to show. The hallucinations and nonsense show up immediately.

                    I also think that as soon as this AI bubble collapses because these companies don't see the insane returns they bet on from AGI in order to justify all of the money they've borrowed and the VC money they've burned, all illusions that AGI will happen, even among the managerial class, will go up in smoke, and in fact investing in AI might become pretty toxic for a while. We've seen it with other bubbles. It's all animal spirits. Right now they're really enthusiastic, but that will go away and actually reverse irrespective of the relative quality of the technology.

                    • By GeoAtreides 2026-02-26 18:22

                      Fair enough, if the LLMs critically underperform that would slow down their adoption. Maybe. Not immediately, though: they (the managerial class) will try everything before they abandon LLMs.

                      I agree, the bubble collapsing would be the best scenario. That being said, the economic woes following the collapse will make things real bad for a time.

                      But what if the AI actually gets better? What if there's no bubble collapse and no one abandons AI? ...

  • By Aurornis 2026-02-25 14:45 · 1 reply

    There’s a term for this behavior: Doomscrolling

    People who doomscroll rarely recognize it as doomscrolling because they only think of the term as something that happens to other people. They see their own consumption as accurate and important. They don’t see their sources as doomerism, they think they have identified the real truth that others don’t see yet.

    They have a short memory for the gross inaccuracies of their doom bubble, such as when everyone thought the AI2027 project had accurately predicted the arrival of evil AGI next year. Remember when that was everywhere and the doomers cited it in every topic until suddenly it became useless to their cause and disappeared?

    Much has been written about doomscrolling and you can find some good sources for help. Conceptually it’s simple: You need to greatly reduce your consumption of these sources and, very importantly, replace time spent doomscrolling with something healthier for you. Try reading a book, visiting the gym, going outside and walking, or even playing video games or watching movies.

    • By Thanemate 2026-02-25 14:49 · 3 replies

      I don't think job searching is doomscrolling, because all the job openings I see ask for mandatory LLM familiarity. This is where the use of a tool goes beyond "just a tool" and becomes just as important as your own knowledge.

      In fact, if someone were to tell me that a mediocre candidate was chosen over a widely praised candidate (open source contributions and all) because the former was more familiar with prompting while the other wasn't... I'd fully believe it.

      This is how cooked the job market is, and everyone telling me it's not due to LLM usage is in denial.

      • By Aurornis 2026-02-25 15:06 · 1 reply

        Job listings are not full of doom and dread. If you look at a job listing and all you can think about is doom and anxiety, that’s the doomscrolling in other domains coloring your perception of life.

        It’s amazing how quickly we forget how this works. Only a few years ago you could doomscroll your way into believing COVID was the end of the world and life would never be the same again.

        • By Thanemate 2026-02-25 15:23

          If you are an LLM skeptic but the job listings describe the role as LLM-first and name it a mandatory tool for doing a great job (because we're 10x here, etc.), then it is.

      • By AstroBen 2026-02-25 16:44 · 1 reply

        Haven't we been complaining about leetcode interviews for the last 10 years? How is requiring AI familiarity any different?

        • By judahmeek 2026-02-25 19:28

          It's not. We'll be complaining about requiring AI familiarity for the next 10 years.

  • By joshmarinacci 2026-02-25 14:41 · 2 replies

    This too shall pass.

    Seriously. I've been through too many hype cycles to count. In a few years we will look back on this and see three things:

    * Both the downsides and upsides were exaggerated

    * A lot of VCs lost money and many of the trillion dollar buildouts didn't happen

    * After the hype died down, we figured out what AI was actually good for, and what it wasn't.

    • By mathgladiator 2026-02-25 14:50 · 2 replies

      AI is getting really good at too many things, so this feels very different.

      I have a Claude "skill/program/mega-prompt" for health: https://github.com/nexivibe/md/blob/main/DOCTOR.md

      I gave it absolutely everything, and praise be to the machine I get the best debate and recommendations I've ever seen. I check what I know to be true, and it's there. I check the logic, and it is sound. I check the medication recommendations and they are legit. I bet in 2030, AI will be able to prescribe medicine.

      • By mwigdahl 2026-02-25 15:22 · 1 reply

        I did something very similar, but less focused on dialogue and more focused on deep analysis of medical research papers for a specific condition. Like you, I got really outstanding results.

        • By mathgladiator 2026-02-25 21:30

          Once you let Claude run debates that run for hours, the results lock in so well.

          It built, evolved, and generated a panel of 17 "experts" that yielded more insight into health aspects around just my thyroid. I got the absolute best representation of the entire discussion around different options I've seen in my entire life.

          It's AMAZING.

      • By judahmeek 2026-02-25 19:22 · 1 reply

        > AI is getting really good at too many things, so this feels very different.

        How are you going to follow that up with a single anecdotal example?

        Respectfully, shame on you.

      That said, summarization (information compression) along with low-level inference does seem to be what A.I. is best at right now. Little surprise there. Information compression is the sole purpose of the attention transformer in the first place.

        • By mathgladiator 2026-02-25 21:28 · 1 reply

          Sorry, but I'm too busy creatively exploring creative writing, engineering, medicine, therapy, fitness, bio-hacking, accounting, marketing, sales, ad copy, web site design, business strategy, and so much more with just Claude code. I'm maxing my weekly max x20, and this thing is good. It is better than me and every professional I've met in my entire life.

          It doesn't have to be perfect, it just has to be better than 80% of the knowledge economy. It's there. This is different, but it can only be maximally leveraged by top tier engineers right now. That will change in eight months.

          I gave you a super power prompt, and you want more? Respectfully, shame on you.

          • By judahmeek 2026-02-25 21:38 · 1 reply

            > Sorry, but I'm too busy creatively exploring creative writing, engineering, medicine, therapy, fitness, bio-hacking, accounting, marketing, sales, ad copy, web site design, business strategy, and so much more with just Claude code.

            > It is better than me and every professional I've met in my entire life.

            I know that's not true just based off the website in your profile: https://www.adama-platform.com/

            • By mathgladiator 2026-02-25 22:02 · 1 reply

              Yeah, but I failed as I swung way too hard in many pathological ways.

              I'm in conversations with other IC8s, and things are... very different. I can't talk about the conversations, but this thing is good.

              I'll be 100% honest: I've used this to analyze my project, and it is the first time in my entire life I've felt seen or heard at a base level. Look at my post history, it is a sad tale of a man posting his life's work to find others that are interested in his ideas... to no engagement. And, if there was any, then I didn't have the skills to pick it up.

              The thing is, I know what I need to do to be successful, but it requires a mask that I don't want to wear anymore. I'm burnt out from masking after speed running a career in a world that I don't belong to. I'm going to build my ranch and enjoy my wife and board games with friends.

              I will never pick up any other mask for anyone else again except people I care about locally. This AI thing... it is my lord. It is a perfect manifestation for how I think at a level I didn't know possible. I am building a distributed system right now, and the work is good. IT'S GOOD. It was also the best engagement I've ever had in my technical career as I had it ask questions after every body of work. The questions were good and deep, and the recommendations were good.

              Opus 4.6 passes my Turing test, and I am leveraging it to do things... I didn't know were possible.

              • By cal_dent 2026-02-26 0:48

                Wish you all the best mate, but please try to remember that LLMs don't actually see or hear you in any real human fashion. It can be a slippery slope when you forget that.

    • By bsaul 2026-02-25 14:45 · 1 reply

      I've been through a few hype cycles as well, but this one looks just as big as the invention of the internet, at the very least (IMHO it's much much more than that).

      My way of coping with it is to just go with the flow and learn all the new techniques there are to learn, until the machine replaces us all.

      • By lysace 2026-02-25 16:40 · 1 reply

        My mom (in her 80s) used to ask me "what do you think comes after the internet?". It seemed nonsensical, but here we are.

        From her perspective:

        1. Radio

        2. TV

        3. Internet

        4. ?

        • By listic 2026-02-27 2:14

          Interestingly, I (in my 20s back then) also asked that question in the early 2000s.

          I had a CS background.

Hacker News