"Token anxiety", a slot machine by any other name

2026-02-16 18:23 · jkap.io

You're absolutely right!

I realize it's gauche to blog about some shit you saw on bluesky, but yesterday I saw a post that encapsulated so much of what has been bumming me out about the rise of coding agents over the last year. this dread had been slowly building from seeing blogs about using claude code from your phone while getting ready for work, while commuting, while waiting to pick your kids up from school, but now it's come to a head.

Token Anxiety

i think i mostly echo this for myself. with so much that can be done, i often feel like i should be doing something, always


— Tim Kellogg (@timkellogg.me) February 15, 2026 at 6:44 AM

now obviously the opinions of founder-brained SF social bubble weirdos should be immediately discounted; they are the spiders georg of this industry. but at the same time they are playing into the dreams of management, the worker that never stops working, that's always online, that's infinitely Productive, always shipping, always wants to get back to work. I imagine this archetype exists in other industries but my experience is limited to tech so I will stick to that.

my fear is that this will become the norm. anecdotal evidence tells me that more and more companies are adopting AI for their engineers to use, encouraging (and in some cases requiring) its use in an effort to boost productivity, despite no actual evidence pointing to these improvements and Anthropic-funded research indicating that AI usage reduces skill retention.

so where does this lead us? we know that some US tech companies are starting to embrace the "996" schedule popularized in China's tech industry. enforced usage of coding agents makes that push even easier—is it really work if all you're doing is telling the computer what to do and then reviewing it to make sure it didn't do anything wrong and also babysitting it all hours of the day?

many have already observed that coding agents, which require constant attention and often generate low-quality code with (by design) random results, are a slot machine. they are loot boxes. they are gambling. you are constantly pulling the lever and hoping you get the SSR SaaS Passive Income product. you will not get this, but maybe you will. just one more prompt, one more pull, one more revision, one more go at being Absolutely Right.

if you suffer from token anxiety, you have a gambling addiction. I'm sorry that it's not being formally treated as such, but you can take some solace in the fact that novel forms of gambling often take time to be recognized.

now we can put our thinking caps on and follow a pretty easy chain of events. coding agents can trigger our gambling instincts with slot machine-like behavior; tech companies are pushing engineers to work more and encouraging or enforcing the use of coding agents to get there; gambling is addictive; heavy users of coding agents self-report symptoms of gambling addiction.

you see where this is going, right? by enforcing the use of inherently addictive technology in the workplace, employers are (whether intentionally or not) making their workers addicted to work. this seems bad!

one has to wonder how common this will become. will this become the norm? obviously there will be companies with a shred of ethics and empathy for their workers that choose to buck this trend, but if they become the minority there will be fewer and fewer jobs for those who value having free time. we've already reached a point where trying to get a job in this industry requires a gradual erosion of ethics and standards; how much worse does that get?

all I know is that if we keep down this road, I'm gonna bail out to get HVAC certified and make youtube videos about fucked up commercial systems. I can't do this shit forever.

#agents #ai #blog posts that i really shouldn't be writing and publishing while actively looking for work



Comments

  • By ctoth 2026-02-16 20:11, 23 replies

    The gambling analogy completely falls apart on inspection. Slot machines have variable reward schedules by design — every element is optimized to maximize time on device. Social media optimizes for engagement, and compulsive behavior is the predictable output. The optimization target produces the addiction.

    What's Anthropic's optimization target??? Getting you the right answer as fast as possible! The variability in agent output is working against that goal, not serving it. If they could make it right 100% of the time, they would — and the "slot machine" nonsense disappears entirely. On capped plans, both you and Anthropic are incentivized to minimize interactions, not maximize them. That's the opposite of a casino. It's ... alignment (of a sort)

    An unreliable tool that the manufacturer is actively trying to make more reliable is not a slot machine. It's a tool that isn't finished yet.

    I've been building a space simulator for longer than some of the people diagnosing me have been programming. I built things obsessively before LLMs. I'll build things obsessively after.

    The pathologizing of "person who likes making things chooses making things over Netflix" requires you to treat passive consumption as the healthy baseline, which is obviously a claim nobody in this conversation is bothering to defend.

    • By crystal_revenge 2026-02-17 7:07, 7 replies

      > What's Anthropic's optimization target??? Getting you the right answer as fast as possible!

      What makes you believe this? The current trend at all major providers seems to be: get you to spin up as many agents as possible so that you can be billed more and their request numbers go up.

      > Slot machines have variable reward schedules by design

      LLMs from all major providers are optimized using RLHF, in ways we don't entirely understand, to keep you engaged.

      These are incredibly naive assumptions. Anthropic/OpenAI/etc don't care if you get your "answer solved quickly", they care that you keep paying and that all their numbers go up. They aren't doing this as a favor to you and there's no reason to believe that these systems are optimized in your interest.

      > I built things obsessively before LLMs. I'll build things obsessively after.

      The core argument of the "gambling hypothesis" is that many of these people aren't really building things. To be clear, I certainly don't know if this is true of you in particular, it probably isn't. But just because this doesn't apply to you specifically doesn't mean it's not a solid argument.

      • By hnbad 2026-02-17 8:40, 1 reply

        > The current trend in all major providers seem to be: get you to spin up as many agents as possible so that you can get billed more and their number of requests goes up.

        I was surprised when I saw that Cursor added a feature to set the number of agents for a given prompt. I figured it might be a performance thing - fan out complex tasks across multiple agents that can work on the problem in parallel and get a combined solution. I was extremely disappointed when I realized it's just "repeat the same prompt to N separate agents, let each one take a shot and then pick a winner". Especially when some tasks can run for several minutes, rapidly burning through millions of tokens per agent.

        At that point it's just rolling dice. If an agent goes so far off-script that its result is trash, I would expect that to mean I need to rework the instructions and context I gave it, not that I should try the same thing again and hope that entropy fixes it. But editing your prompt offline doesn't burn tokens, so it's not what makes them money.

        • By reasonableklout 2026-02-17 8:48

          Cursor and others have a subagent feature, which sounds like what you wanted. However, there has to be some decision making around how to divide up a prompt into tasks. This is decided by the (parent) model currently.

          The best-of-N feature is a bit like rolling N dice instead of one. But it can be quite useful if you use different models with different strengths and weaknesses (e.g. Claude/GPT-5/Gemini), rather than assigning all to N instances of Claude, for example. I like to use this feature in ask mode when diving into a codebase, to get an explanation a few different ways.
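          The difference between the two strategies can be sketched in a few lines. Everything below is hypothetical: `call_model`, the model names, and the judge are stand-ins, not any provider's real API.

```python
# Hypothetical stand-in for a provider API call; a real implementation
# would hit an LLM endpoint and return its completion.
def call_model(model: str, prompt: str) -> str:
    return f"[{model}] {prompt}"

def best_of_n(prompt: str, models: list[str], judge) -> str:
    # Same prompt sent to N models; N-1 completions are paid for,
    # then discarded once the judge picks a winner.
    candidates = [call_model(m, prompt) for m in models]
    return judge(candidates)

def subagent_fanout(subtasks: list[str], worker: str) -> str:
    # One worker per subtask; every completion contributes to the result.
    return "\n".join(call_model(worker, t) for t in subtasks)
```

          With best_of_n you would at least pass different models (e.g. claude / gpt-5 / gemini) so the dice are weighted differently; with N copies of the same model it really is just rolling the same die N times.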

      • By tokioyoyo 2026-02-17 8:19, 4 replies

        > What makes you believe this?

        Simply, cut-throat competition. Given multiple nations are funding different AI-labs, quality of output and speed are one of the most important things.

        • By philipwhiuk 2026-02-17 13:55, 2 replies

          Dating apps also have cut-throat competition and none of them are optimised for minimising the time you spend on the app.

          • By dymk 2026-02-17 15:41, 1 reply

            They don’t, they’re all owned by Match group

            • By whattheheckheck 2026-02-18 18:23, 1 reply

              So why can't a better one start?

              • By dymk 2026-02-19 0:20

                Because Match buys them out. PoF, Tinder, Hinge started independent, and were bought by Match once they showed promise.

          • By _345 2026-02-17 20:53

            ~90% of them are owned by Match

        • By forgetfreeman 2026-02-17 9:51, 2 replies

          sigh We're doing this lie again? Quality of Outcome is not, has never been, and if the last 40 years are anything to go on will never be a core or even tangential goal. Dudes are trying to make the stock numbers go up and get paid. That's it. That's all it ever is.

          • By wordpad 2026-02-17 14:27, 2 replies

            You're just being pedantic and cynical.

            The goal of any business is, in principle, profit; by your terms all of them are misaligned.

            The fact of the matter is that customers are receiving value, and that value has been a good proxy for which companies will grow to be successful and which will fail.

            • By forgetfreeman 2026-02-17 15:17, 1 reply

              I'm being neither pedantic nor cynical. Do you need a refresher on value proposition vs actual outcomes over the last few decades of breathlessly hyped tech bubbles? Executive summary: the portions of the tech industry that attract the most investment consistently produce the worst outcomes; the more cash, the shittier the result. It's also worth noting that "value" is defined as anything you can manipulate someone to pay for.

            • By piperswe 2026-02-17 15:36

              I mean, yeah. All businesses are misaligned, unless a fluke aligns the profit motive with the consumers for a brief period.

          • By stoneforger 2026-02-17 12:44

            Hey man people either get it or they don't. We're doomed.

        • By delusional 2026-02-17 8:40, 2 replies

          How is nation-states funding private corporations "cut-throat competition"?

          • By tokioyoyo 2026-02-17 10:13, 1 reply

            Ok, to be very honest I wrote that in the middle of having a couple of drinks. I guess, what I mean is, countries are funding AI labs because it can turn into a “winner-takes-it-all” competition. Unless the country starts blocking the leading providers.

            Private companies will turn towards the best, fastest, cheapest (or some average of them). Country borders don't really matter. All labs are fighting to get the best thing out in public for that reason, because winning comes with money, status, prestige, and actually changing the world. Incentives like this are rare.

            • By irishcoffee 2026-02-17 16:39, 1 reply

              > countries are funding AI labs because it can turn into a “winner-takes-it-all” competition.

              Winner takes what exactly? They can rip off react apps quicker than everyone else? How terrifying.

              • By tokioyoyo 2026-02-17 23:00, 1 reply

                Like I understand this commentary, but it’s so detached from reality. My dad in his 70s is writing Excel macros, even though he never touched that in his life. There are a ton of cases like this, but people can’t see reality out of their domains.

                • By irishcoffee 2026-02-18 1:02, 1 reply

                  That’s so dope excel finally let old people learn this! They removed the agelock?!

                  • By tokioyoyo 2026-02-18 6:15, 1 reply

                    Come on man, you know exactly what I mean. You can keep coming up with these arguments, but the world has moved on already. I genuinely don’t know a single person in 3 different countries from age 12+ who does not use LLMs at least once a week. We have to adapt, or choose to not play the “game”.

                    • By irishcoffee 2026-02-18 16:55

                      Your counter to "excel was never hard to learn" is "people use LLMs all day long" ??

                      I uh, think the LLM use has compromised your critical thinking skills.

          • By enraged_camel 2026-02-17 9:47, 2 replies

            What does this even mean? Are you disputing the fact that AI labs are competing with each other because they are funded by nation-states?

            • By malfist 2026-02-17 13:33

              Why do you have to compete if you can just say "but China!" And get billions more dollars from the government

            • By co_king_5 2026-02-17 12:44

              [dead]

        • By psychoslave 2026-02-17 9:58

          Cut-throat competition between nations is usually called war. In war, gathering as much information as possible on everyone is certainly a strategic wanna-do. Selling psyops about how many benefits will come to everyone willing to join a one-sided industrial dependency is also a thing. Giving a significant boost to potentially adversarial actors is not a thing.

          That said, the universe doesn't obligate us to think the cosmos is all about competition. Cooperation is always a viable path, often with far more long-term benefits at scale.

          Competition is superfluous, self-inflicted masochism.

      • By d1sxeyes 2026-02-17 15:40

        There’s a line to be trod between returning the best result immediately and forcing multiple attempts. Google got caught red-handed reducing search quality to increase ad impressions; there's no reason to think the AI companies (of which Google is one) won't slowly gravitate to the same.

      • By NeutralCrane 2026-02-17 16:24

        My (possibly dated) understanding is that OpenAI/Anthropic are charging less than it costs right now to run inference. They are losing money while they build the market.

        Assuming that is still true, then they absolutely have an incentive to keep your tokens/requests to the absolute minimum required to solve your problem and wow you.

      • By YetAnotherNick 2026-02-17 7:25, 2 replies

        What they bill is unrelated to their cost. If they can produce an answer in 1/10th of the tokens, they can charge 10x more per token, likely even more.

        • By Drakim 2026-02-17 8:16, 3 replies

          That is simply not true: token price is largely determined by the token prices of rival services (even before their own operational costs). If everybody else charges about $1 per million tokens, then they will also charge about $1 per million tokens (or slightly above/below), regardless of how many answers per token they can provide.

          • By YetAnotherNick 2026-02-17 9:55, 1 reply

            It only matters if the rivals have the same performance. Opus pricing is 50x Deepseek's, and more than 100x that of small models. Pricing should match rivals if performance is the same, and if they can produce a model with 10x lower token usage, they can charge 10x.

            Gemini increased the same Flash's price by something like 5x IIRC when it got better.

            • By shafyy 2026-02-17 10:48, 2 replies

              I bet that the actual "performance" of all the top-tier providers is so similar that branding has a bigger impact on whether you think Claude or ChatGPT performs better.

              • By wordpad 2026-02-17 14:50

                Performance or perception of performance

                Potato potato Tomato tomato

              • By co_king_5 2026-02-17 16:15, 1 reply

                [dead]

                • By shafyy 2026-02-20 13:23

                  Your cousin sounds like a solid dude, haha

          • By sixtyj 2026-02-17 8:34

            This applies when there is a large number of competitors.

            Now companies are fighting for the attention of a finite number of customers, so they keep their prices in line with those around them.

            I remember when Google started with PPC - because few companies were using it, it cost a fraction of recent prices.

            And the other issue to solve is a future lack of electricity for land-based data centers. If everyone wants to use LLMs but data center capacity is finite due to available power, token prices can go up. But IMHO devs will find innovative, less energy-demanding approaches to tokens, so token prices will probably stay low.

          • By Aerroon 2026-02-17 11:11

            Opus 4.6 costs about 5-10x of GLM 5.

        • By lelanthran 2026-02-17 10:45, 1 reply

          What businesses charge for a product is completely unrelated to what it costs them.

          They charge what the market will bear.

          If "what the market will bear" is lower than the cost of production then they will stop offering it.

          • By Hasnep 2026-02-17 11:19, 1 reply

            Companies make a loss on purpose all the time.

            • By lpnam0201 2026-02-17 12:36

              Not forever. If that's their main business then they will eventually have to profit or they die.

      • By r_lee 2026-02-18 15:46

        I'm also seeing a lot of new rambling in Sonnet 4.6 compared to 4.5: more markdown slop, and pointing out details in the context that aren't too useful, etc...

        which then causes increased token usage because you need to prompt multiple times.

        Idk, maybe it's just me though.

      • By co_king_5 2026-02-17 12:43

        [dead]

    • By CGMthrowaway 2026-02-16 20:16, 5 replies

      > The gambling analogy completely falls apart on inspection. Slot machines have variable reward schedules by design — every element is optimized to maximize time on device. Social media optimizes for engagement, and compulsive behavior is the predictable output. The optimization target produces the addiction.

      Intermittent variable rewards, whether produced by design or merely as a byproduct, will induce compulsive behavior, no matter the optimization target. This applies to Claude.

      • By ctoth 2026-02-16 20:24, 5 replies

        Sometimes I will go out and I will plant a pepper plant and take care of it all summer long and obsessively ensure it has precisely the right amount of water and compost and so on... and ... for some reason (maybe I was on vacation and it got over 105 degrees?) I don't get a good crop.

        Does this mean I should not garden because it's a variable reward? Of course not.

        Sometimes I will go out fishing and I won't catch a damn thing. Should I stop fishing?

        Obviously no.

        So what's the difference? What is the precise mechanism here that you're pointing at? Because "sometimes life is disappointing" is apparently a reason to do nothing. And yet.

        • By roblh 2026-02-16 20:45, 3 replies

          It's not a binary thing, it's a spectrum. There are many elements of uncertainty in every action imaginable. I'm inclined to agree with the other commenter though: the LLM slot machine is absolutely closer on that spectrum to gambling than your examples are.

          Anthropic's optimization target is getting you to spend tokens, not produce the right answer. It's to produce an answer plausible enough but incomplete enough that you'll continue to spend as many tokens as possible for as long as possible. That's about as close to a slot machine as I can imagine. Slot rewards are designed to keep you interested as long as possible, on the premise that you _might_ get what you want, the jackpot, if you play long enough.

          Anthropic's game isn't limited to a single spin either. The small wins (small prompts with well defined answers) are support for the big losses (trying to one shot a whole production grade program).

          • By Aurornis 2026-02-16 22:08, 2 replies

            > Anthropic's optimization target is getting you to spend tokens, not produce the right answer.

            The majority of us are using their subscription plans with flat rate fees.

            Their incentive is the precise opposite of what you say. The less we use the product, the more they benefit. It's like a gym membership.

            I think all of the gambling addiction analogies in this thread are just so strained that I can't take them seriously. Even the basic facts aren't even consistent with the real situation.

            • By samrus 2026-02-17 4:05, 1 reply

              That's a bit naive. Anthropic makes way more money if they get you to use past your plan's limits and wonder if you should get the next tier or switch to tokens.

              • By mikkupikku 2026-02-17 11:10, 1 reply

                The price jump between subscription tiers is so high that relatively few people will upgrade instead of waiting a few more hours. And even if somebody does upgrade to the next subscription level, Anthropic still has an incentive to provide satisfactory answers as quickly as possible, to minimize tokens used per subscription, and because there is plenty of competition, any frustrated user is a potentially lost customer.

                I swear this whole conversation is motivated reasoning from AI holdouts who so desperately want to believe everybody else is getting scammed by a gambling scheme that they don't stop and think about the situation rationally. Insofar as Claude is dominant, it's only because Claude works the best. There is meaningful competition in this market; as soon as Anthropic drops the ball they'll be replaced.

            • By RGamma 2026-02-17 10:28, 1 reply

              And we're still in the expansion phase, so LLM life is actually good... for now.

              • By Aerroon 2026-02-17 11:15, 2 replies

                It's not going to get worse than now though. Open models like GLM 5 are very good. Even if companies decide to crank up the costs, the current open models will still be available. They will likely get cheaper to run over time as well (better hardware).

                • By jplusequalt 2026-02-17 14:12, 1 reply

                  >Open models like GLM 5 are very good. Even if companies decide to crank up the costs, the current open models will still be available.

                  https://apxml.com/models/glm-5

                  To run GLM-5 you need access to many, many consumer grade GPUs, or multiple data center level GPUs.

                  >They will likely get cheaper to run over time as well (better hardware).

                  Unless they magically solve the problem of chip scarcity, I don't see this happening. VRAM is king, and to have more of it you have to pay a lot more. Take the RTX 3090 as an example: the card is ~6 years old now, yet it still runs around $1.3k. If you wanted to run GLM-5 at I4 quantization (the lowest listed in the link above) with a 32k context window, you would need *32 RTX 3090s*. That's $42k spent on obsolete silicon. If you wanted to run this on newer hardware, you could reasonably expect to double that number.
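                  The arithmetic behind a GPU count like that can be sketched as follows; the parameter count and KV-cache size below are illustrative assumptions for the estimate, not official GLM-5 figures.

```python
import math

def gpus_needed(params_billions: float, bits_per_param: float,
                kv_cache_gb: float, gpu_vram_gb: float = 24) -> int:
    # Weights: 1B params at 8 bits/param occupy roughly 1 GB.
    weights_gb = params_billions * bits_per_param / 8
    # Total VRAM (weights + KV cache) split across cards, rounded up.
    return math.ceil((weights_gb + kv_cache_gb) / gpu_vram_gb)

# Assuming a ~1.4T-parameter model at 4-bit quantization plus ~60 GB of
# KV cache for a 32k context (both figures hypothetical), on 24 GB 3090s:
print(gpus_needed(1400, 4, 60))  # -> 32
```

                  This ignores per-GPU overhead and activation memory, so real deployments tend to need a little more headroom, not less.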

                  • By RGamma 2026-02-17 14:27

                    I mean, it would make sense to see this as a hardware investment in a virtual employee that you actually control (or rent from someone who makes this possible for you), not as a private assistant. Ballparking your numbers, we would need at least an order of magnitude price-performance improvement for that, I think.

                    Also, how much bang for the buck do those 3090s actually give you compared to enterprise-grade products?

                • By RGamma 2026-02-17 11:36

                  That's good to hear. I'm not really up-to-date on the open models, but they will become essential, I'm sure.

          • By 8note 2026-02-16 21:07, 3 replies

            im on a subscription though.

            they want me to not spend tokens. that way my subscription makes money for them rather than costing them electricity and degrading their GPUs

            • By sweetjuly 2026-02-16 22:10

              Wouldn't that apply only to a truly unlimited subscription? Last I looked all of their subs have a usage limit.

              If you're on anything but their highest tier, it's not altogether unreasonable for them to optimize for the greatest number of plan upgrades (people who decide they need more tokens) while minimizing cancellations (people frustrated by the number of tokens they need). On the highest tier, this sort of falls apart but it's a problem easily solved by just adding more tiers :)

              Of course, I don't think this is actually what's going on, but it's not irrational.

            • By samrus 2026-02-17 4:04

              For subscription users, Anthropic makes more money if you hit your usage limit and wonder if the next plan, or switching to tokens, would be better. Especially given the FOMO you probably have from all these posts talking about people's productivity.

            • By lelanthran 2026-02-17 10:54, 1 reply

              > im on a subscription though.

              Understood.

              > they want me to not spend tokens.

              No, they want you to expand your subscription. Maybe buy 2x subscriptions.

              • By mikkupikku 2026-02-17 11:12

                He's not going to do that if all Claude can do is waste tokens for hours.

          • By pixl97 2026-02-16 20:57, 1 reply

            > you'll continue to spend as many tokens as possible for as long as possible.

            I mean this only works if Anthropic is the only game in town. In your analogy if anyone else builds a casino with a higher payout then they lose the game. With the rate of LLM improvement over the years, this doesn't seem like a stable means of business.

            • By tsimionescu 2026-02-17 7:41, 1 reply

              While I don't know if this applies to AI usage, actual gambling addicts most certainly do not shop around for the best possible rewards: they stick more or less to the place they got addicted at initially. Not to mention, there are plenty of people addicted to "casinos" that give zero monetary rewards, such as Candy Crush or Farmville back in the day, and Genshin Impact or other gacha games today.

              So, if there's a way to get people addicted to AI conversations, that's an excellent way to make money even if you are way behind your competitors, as addicted buyers are much more loyal than other clients.

              • By mikkupikku 2026-02-17 11:14

                You're taking the gambling analogy too seriously. People do in fact compare different LLMs and shop around. How gamblers choose casinos is literally irrelevant because this whole analogy is nothing more than a retarded excuse for AI holdouts to feel smug.

        • By krackers 2026-02-16 23:48, 1 reply

          The timescale is one difference: it's hard to get "sucked in" to the gambling-like mindless state when the timescales are over seasons as opposed to minutes. There's a reason gambling isn't done in a correspondence format.

          • By sph 2026-02-17 11:38

            In human physiology/psychology as well, the chance of addiction is itself a function of timescale. This is why a nicotine patch is much less addictive than insufflated nicotine (hours to reach peak effect vs seconds), and why addictive software attaches plenty of sensory experiences to every action, to keep the user engaged.

        • By mejutoco 2026-02-19 10:51

          Not OP, but they said:

          > Intermittent variable rewards ... will induce compulsive behavior

          As a dog owner this is why you need to give a variable amount of treats (sometimes zero) to your dog when they obey a command, for the "jackpot effect".

          For example, if I land a trick while skating it gives me a boost. Is that addictive behaviour? Not sure. It gets me to exercise.

          My point is that variability is probably part of what gets you back to pepper planting and fishing. That intermittent variable rewards reinforce a behaviour seems to be a fact. Whether this is good or bad for a specific activity is left as an exercise for the reader.

          EDIT: grammar

        • By DANmode 2026-02-17 3:45

          Are you a pepper farmer taking this approach to feed your family,

          or a hobbyist gardener?

        • By outofpaper 2026-02-16 20:41, 1 reply

          ??? I'm pretty sure you know what the differences are. Go touch grass and tell me it's the same as looking at a plant on a screen.

          Dealing with organic and natural systems will, most of the time, have a variable reward. The real issue comes from systems and services designed to only be accessible through intermittent variable rewards.

          Oh, and don't confuse Claude's artifacts working most of the time with them actually being optimized to work that way. They're optimizing to ensure token usage, i.e. LLMs have been fine-tuned to default to verbose responses. Verbose responses are impressive to less experienced developers, make certain types of errors easier to detect (e.g. improper typing), and make you use more tokens.

          • By squeaky-clean 2026-02-16 21:40, 1 reply

            So gambling is fine as long as I'm doing it outside. Poker in a casino? Bad. Poker in a foresty meadow, good. Got it.

            • By mikkupikku 2026-02-16 21:47

              Basically true tbqh. Poker is maybe the one exception, but you're almost always better off gambling "in the wild" e.g. poker night with your buds instead of playing slots or anything else where "the house" is always winning in the long run. Are your losses still circulating in your local community, or have they been siphoned off by shareholders on the other side of the world? Gambling with friends is just swapping money back and forth, but going to a casino might as well be lighting the money on fire.

      • By Aurornis 2026-02-16 22:07

        > Intermittent variable rewards, whether produced by design or merely as a byproduct, will induce compulsive behavior, no matter the optimization target.

        This is an incorrect understanding of intermittent variable reward research.

        Claims that it "will induce compulsive behavior" are not consistent with the research. Most rewards in life are variable and intermittent and people aren't out there developing compulsive behavior for everything that fits that description.

        There are many counter-examples, such as job searching: It's clearly an intermittent variable reward to apply for a job and get a good offer for it, but it doesn't turn people into compulsive job-applying robots.

        The strongest addictions to drugs also have little to do with being intermittent or variable. Someone can take a precisely measured abuse-threshold dose of a drug on a strict schedule and still develop compulsions to take more. Compulsions at a level that eclipse any behavior they'd encounter naturally.

        Intermittent variable reward schedules can be a factor in increasing anticipatory behavior and rewards, but claiming that they "will induce compulsive behavior" is a severe misunderstanding of the science.

      • By bonoboTP 2026-02-16 21:23

        And that's only bad if it's illusory or fake. This reaction evolved because it's adaptive. In slot machines the brain is tricked into believing there is some strategy or method to crack, and the reward signals make the addict feel some kind of progress is being made in return for some kind of effort.

        The variability in eg soccer kicks or basketball throws is also there but clearly there is a skill element and a potential for progress. Same with many other activities. Coding with LLMs is not so different. There are clearly ways you can do it better and it's not pure randomness.

      • By pixl97 2026-02-16 20:41

        >Intermittent variable rewards,

        So you're saying businesses shouldn't hire people either?

    • By scuff3d 2026-02-16 21:40 (1 reply)

      Right. A platform that makes money the more you have to use it is definitely optimizing to get you the right answer in as few tokens as possible.

      There is absolutely no incentive to do that, for any of these companies. The incentive is to make the model just bad enough you keep coming back, but not so bad you go to a competitor.

      We've already seen this play out. We know Google made their search results worse to drive up ad revenue. The exact same incentives are at play here, only worse.

      • By ctoth 2026-02-16 21:46 (4 replies)

        Please go read how the Anthropic max plan works.

        IF I USE FEWER TOKENS, ANTHROPIC GETS MORE MONEY! You are blindly pattern matching to "corporation bad!" without actually considering the underlying structure of the situation. I believe there's a phrase for this to do with probabilistic avians?

        • By eaglelamp 2026-02-17 3:20 (1 reply)

          As an investor in Anthropic, which pricing strategy would you support? That's the question you need to ask, not what their current pricing strategy happens to be during the win-the-market phase.

          • By materielle 2026-02-17 8:03

            It’s sort of surprising how naive developers still are given the countless rug pulls over the past decade or two.

            You’re right on the money: the important thing to look at are the incentive structures.

            Basically all tech companies from the post-great financial crisis expansion (Google, post Balmer Microsoft, Twitter, Instagram, Airbnb, Uber, etc) started off user-friendly but all eventually converged towards their investment incentive structure.

            One big exception is Wikipedia. Not surprising since it has a completely different funding model!

            I’m sure Anthropic is super user friendly now, while they are focused on expansion and founding devs still have concentrated political sway. It will eventually converge on its incentive structures to extract profit for shareholders like all other companies.

        • By scuff3d 2026-02-17 3:32 (1 reply)

          The Max plan has usage limits, and you can buy more... Which is exactly what I'm talking about...

          And the incentive is even stronger for the lower tiers. They want answers to be just good enough to keep you using it, but bad enough that you're pushed towards buying the higher tier.

          • By mikkupikku 2026-02-17 11:18 (1 reply)

            Have you actually used a max plan? You have to try really damn hard to get close to the max plan usage. I don't think that's something that realistically happens by accident, you have to be deliberately spawning a huge number of subagents or something.

            • By philipwhiuk 2026-02-17 13:56

              Isn't this why OpenAI just hired the OpenClawd guy? To encourage people to build more agents?

        • By maplethorpe 2026-02-16 23:33

          What if I use zero tokens, as I'm currently doing? Do they get any money then?

        • By otikik 2026-02-17 15:26

          We are in the pre-enshittification phase.

    • By mh2266 2026-02-17 8:31 (1 reply)

      Anthropic themselves have described CC as a slot machine:

      https://www-cdn.anthropic.com/58284b19e702b49db9302d5b6f135a...

      (cmd-f "slot machine")

      • By wiseowise 2026-02-17 14:13

        No, no, you misunderstand! It means something else!

    • By mrbungie 2026-02-16 20:18

      > What's Anthropic's optimization target??? Getting you the right answer as fast as possible!

      Are you totally sure they are not measuring/optimizing engagement metrics? Because at least I can bet OpenAI is doing that with every product they have to offer.

    • By RamblingCTO 2026-02-17 14:58

      Thank you! I don't get how so many people want to see dark patterns everywhere. All arguments miss the big counterargument: in a world where you have competitors, even free ones, you can't fuck around. You need to get it working. It's not a slot machine for me. How on earth are people using it? And if it were, I'd take my money elsewhere (Kimi for example, OpenRouter or whatever). It needs to do my work as correctly as possible. That's the business they are in. Tech folks talking about economics is so cringe. It's always just "corporations bad". As if they exist in a vacuum.

    • By samrus 2026-02-17 4:02 (1 reply)

      > What's Anthropic's optimization target??? Getting you the right answer as fast as possible!

      That is a generous interpretation. Might be correct. But they don't make as much money if you quickly get the right answer. They make more money if you spend as many tokens as possible being on that "maybe next time" hook.

      I'm not saying they're actually optimizing for that. But Charlie Munger said "show me the incentives, and I'll show you the outcome"

      • By cedilla 2026-02-17 10:22 (1 reply)

        I know for sure that each and every AI I use wants to write whole novellas in response to every prompt unless I carefully remind it to keep responses short over and over and over again.

        This didn't used to be the case, so I assume that it must be intentional.

        • By djaro 2026-02-17 12:37

          I've noticed this getting a lot worse recently. I just want to ask a simple question, and end up getting a whole essay in response, an 8-step plan, and 5 follow-up questions. Lately ChatGPT has also been referencing previous conversations constantly, as if to prove that it "knows" me.

          "Should I add oregano to brown beans or would that not taste good?"

          "Great instinct! Based on your interests in building new apps and learning new languages, you are someone who enjoys discovering new things, and it makes sense that you'd want to experiment with new flavor profiles as well. Your combination of oregano and brown beans is a real fusion of Italian and Mexican food, skillfully synthesizing these two cultures.

          Here's a list of 5 random unrelated spices you can also add to brown beans:

          Also, if you want to, I can create a list of other recipes that incorporate oregano. Just say the words "I am hungry" and I will get right back to it!"

          Also, random side note, I hate ChatGPT asking me to "say the word" or "repeat the sentence". Just ask me if I want it and then I say yes or no, I am not going to repeat "go oregano!" like some sort of magic keyphrase to unlock a list of recipes.

    • By Aurornis 2026-02-16 21:32

      > The gambling analogy completely falls apart on inspection.

      The analogy was too strained to make sense.

      Despite being framed as a helpful plea to gambling addicts, I think it’s clear this post was actually targeted at an anti-LLM audience. It’s supposed to make the reader feel good for choosing not to use them by portraying LLM users as poor gambling addicts.

    • By mossTechnician 2026-02-16 20:43 (2 replies)

      At one point, people said Google's optimization target was giving you the right search results as soon as possible. What will prevent Anthropic from falling into the same pattern of enshittification as its predecessors, optimizing for profit like all other businesses?

      • By mikkupikku 2026-02-16 20:56

        I stopped using Google years ago because they stopped trying to provide good search results. If Anthropic stops trying to provide a good coding agent, I'll stop using them too.

      • By trashb 2026-02-17 10:58

        Slightly off topic actually but I'll put it here.

        I found it interesting that Google removed the "summary cards" supposedly "to improve user experience"; however, the AI overview was added back.

        I suspect the AI overview is much more influenceable by advertising money than the summary cards were.

    • By toss1 2026-02-16 23:03

      Doesn't the alignment sort of depend on who is paying for all the tokens?

      If Dave the developer is paying, Dave is incentivized to optimize token use along with Anthropic (for the different reasons mentioned).

      If Dave's employer, Earl, is paying and is mostly interested in getting Dave to work more, then what incentive does Dave have to minimize tokens? He's mostly incentivized by Earl to produce more code, and now also by Anthropic's accidentally variable-reward coding system, to code more... ?

    • By jplusequalt 2026-02-17 13:56

      >The pathologizing of "person who likes making things chooses making things over Netflix" requires you to treat passive consumption as the healthy baseline, which is obviously a claim nobody in this conversation is bothering to defend

      I think their greater argument was to highlight how agentic coding is eroding work life balance, and that companies are beginning to make that the norm.

    • By randusername 2026-02-17 15:17

      Disagree. Unreliability is intractable because of the human, not the tool.

      Even a perfect LLM will not be able to produce perfect outputs because humans will never put in all the context necessary to zero-shot any non-trivial query. LLMs can't read your mind and will always make distasteful assumptions unless driven by users without any unique preferences or a lot of time on their hands to ruminate on exactly how they want something done.

      I think it will always be mostly boring back-and-forth until the jackpot comes. Maybe future generations will align their preferences with the default LLM output instead of human preferences in that domain, though.

    • By timcobb 2026-02-17 13:40

      > The gambling analogy completely falls apart on inspection.

      yeah I think the bluesky embed is much more along the lines of what I'm experiencing than the OP itself.

    • By pjc50 2026-02-17 9:31

      > "person who likes making things chooses making things over Netflix"

      This is subtly different. It's not clear that the people depicted like making things, in the sense of enjoying the process. The narrative is about LLMs fitting into the already-existing startup culture. There's already a blurry boundary between "risky investment" and "gambling", given that most businesses (of all types, not just startups) have a high failure rate. The socially destructive characteristic identified here is: given more opportunity to pull the handle on the gambling machine, people are choosing to do that at the expense of other parts of their life.

      But yes, this relies on a subjective distinction between "building, but with unpredictable results" and "gambling, with its associated self-delusions".

    • By phplovesong 2026-02-17 17:01

      Claude RARELY gets it right even by the fifth try. Usually I write the damn thing myself when my account is on "cooldown".

    • By jrflowers 2026-02-18 8:16

      > What's Anthropic's optimization target???

      It is a business that sells monthly subscriptions

    • By bandrami 2026-02-17 6:18 (1 reply)

      > What's Anthropic's optimization target??? Getting you the right answer as fast as possible!

      Wait, what? Anthropic makes money by getting you to buy and expend tokens. The last thing they want is for you to get the right answer as fast as possible. They want you to sometimes get the right answer unpredictably, but with enough likelihood that this time will work that you keep hitting Enter.

      • By theon144 2026-02-17 22:21

        Given that pre-paid plans are the most popular way to subscribe to Claude, it quite plainly is a "the fewer tokens you use, the more money Anthropic makes" kind of situation.

        In an environment where providers are almost entirely interchangeable and the tiniest of perceived edges (because there's still no benchmark unambiguously judging which model is "better") makes or breaks user retention, I just don't see how it's not ludicrous on its face to claim that any LLM provider would be incentivized to give unreliable answers at some high-enough probability.

    • By evmaki 2026-02-16 22:43 (2 replies)

      The LLM is not the slot machine. The LLM is the lever of the slot machine, and the slot machine itself is capitalism. Pull the lever, see if it generates a marketable product or moment of virality, get rich if you hit the jackpot. If not, pull again.

      • By ASalazarMX 2026-02-17 17:23

        I don't know why you were downvoted. This is the FOMO that encourages agent gambling, automated experimentation in the hopes of accidentally striking digital gold before your peers do. A million monkeys racing 24/7 to create the next Harry Potter first.

        Ideas are a dime a dozen, now proofs of concept are a load of tokens a dozen.

      • By mikkupikku 2026-02-17 11:20

        [flagged]

    • By beepbooptheory 2026-02-17 14:28

      You may have a point, but either way: immediately taking it personally like this and creating a whole semi-rant that includes something to the effect of "I've been doing this since before you were born" really makes you sound like a person with a gambling problem.

      Trust me, we all feel like the house is our friend until it isn't!

  • By BoxFour 2026-02-16 20:30 (2 replies)

    I wish the author had stuck to the salient point about work/life balance instead of drifting into the gambling tangent, because the core message is actually more unsettling. With the tech job market being rough and AI tools making it so frictionless to produce real output, the line between work time and personal time is basically disappearing.

    To the bluesky poster's point: Pulling out a laptop at a party feels awkward for most; pulling out your phone to respond to claude barely registers. That’s what makes it dangerous: It's so easy to feel some sense of progress now. Even when you’re tired and burned out, you can still make progress by just sending off a quick message. The quality will, of course, slip over time; but far less than it did previously.

    Add in a weak labor market and people feel pressure to stay working all the time. Partly because everyone else is (and nobody wants to be at the bottom of the stack ranking), and partly because it’s easier than ever to avoid hitting a wall by just "one more message". Steve Yegge's point about AI vampires rings true to me: A lot of coworkers I’ve talked to feel burned out after just a few months of going hard with AI tools. Those same people are the ones working nights and weekends because "I can just have a back-and-forth with Claude while I'm watching a show now".

    The likely result is the usual pattern for increases in labor productivity. People who can’t keep up get pushed out, people who can keep up stay stuck grinding, and companies get to claim the increase in productivity while reducing expenses. Steve's suggestion of shorter workdays sounds nice in theory, but I would bet significant amounts of money the 40-hour work week remains the standard for a long time to come.

    • By nharada 2026-02-16 20:59

      Another interesting thing here is that the gap between "burned out but just producing subpar work" and "so crispy I literally cannot work" is even wider with AI. The bar for just firing off prompts is low, but the mental effort required to know the right prompts to ask and then validate is much higher so you just skip that part. You can work for months doing terrible work and then eventually the entire codebase collapses.

    • By Aurornis 2026-02-16 21:43 (1 reply)

      > With the tech job market being rough and AI tools making it so frictionless to produce real output, the line between work time and personal time is basically disappearing.

      This isn't generally true at all. The "all tech companies are going to 996" meme comes up a lot here but all of the links and anecdotes go back to the same few sources.

      It is very true that the tech job market is competitive again after the post-COVID period where virtually nobody was getting fired and jobs were easy to find.

      I do not think it's true that the median or even 90th percentile tech job is becoming so overbearing that personal time is disappearing. If you're at a job where they're trying to normalize overwork as something everyone is doing, they're just lying to you to extract more work.

      • By BoxFour 2026-02-16 22:06

        It would never show up as some explicit rule or document. It just sort of happens when a few things line up: execs start off-handedly praising 996, stack ranking is still a thing, and the job market is bad enough that getting fired feels genuinely dangerous.

        It starts with people who feel they’ve got more to lose (like those supporting a family) working extra to avoid looking like a low performer, whether that fear is reasonable or not. People aren’t perfectly rational, and job-loss anxiety makes them push harder than they otherwise would. Especially now, when "pushing harder" might just mean sending chat messages to claude during your personal time.

        Totally anecdotal (strike 1), and I'm at a FAANG which is definitely not the median tech job (strike 2), but it’s become pretty normal for me to come back Monday to a pile of messages sent by peers over the weekend. A couple years ago even that was extremely unusual; even if people were working on the weekend they at least kept up a facade that they weren't.

  • By HolyLampshade 2026-02-17 13:39 (17 replies)

    I know I'm running a bit late to the party here, but maybe someone can provide some color that I (on the slightly older end of the spectrum when it comes to this) don't fully understand.

    When people talk about leaving their agents to run overnight, what are those agents actually doing? The limited utility I've had using agent-supported software development requires a significant amount of hand-holding, maybe because I'm in an industry with limited externally available examples to build a model off of (though all of the specifications are public, I've yet to see an agent build an appropriate implementation).

    So it's much more transactional...I ask, it does something (usually within seconds), I correct, it iterates again...

    What sort of tasks are people putting these agents to? How are people running 'multiple' of these agents? What am I missing here?

    • By jascha_eng 2026-02-17 14:05 (1 reply)

      My impression so far is that the parallel agent story is a fabrication of "ai influencers" and the labs themselves.

      I might run 3-4 claude sessions because that's the only way to have "multiple chats" to e.g. ask unrelated things. Occasionally a task takes long enough to keep multiple sessions busy, but that's rather rare, and if it happens it's because the agent runs a long-running task like the whole test suite.

      The story of running multiple agents to build full features in parallel... doesn't really add up in my experience. It kinda works for a bit if you have a green field project where the complexity is still extremely low.

      However, once you have a feature interaction matrix larger than, say, 3x3, you have to hand-hold the system so it doesn't make stupid assumptions. Or you prompt very precisely, but this also takes time and prevents you from ever reaching the parallel situation.

      The feature interaction matrix size is my current proxy "pseudo-metric" for when agentic coding might work well and at which abstraction level.

      • By nerdsniper 2026-02-17 15:34 (2 replies)

        This is exactly my experience as well. The feature interaction matrix is growing as models get better, and I tend to build "prompt library components" for each project which saves time on "you prompt very precisely but this also takes time".

        But so far that doesn't change the reality - I can't find any opportunities to let an agent run for more than 30 minutes at best, and parallel agents just seem to confuse each other.

        • By jascha_eng 2026-02-17 16:00 (1 reply)

          idk I haven't really hit the point with any llm that it comes up with useful abstractions on its own unless those abstractions have been in the training data.

          E.g. imagine building a google docs clone where you have different formatting options. Claude would happily build bold and italic for you but if afterwards you add headings, tables, colors, font size, etc. It would just produce a huge if/else tree instead of building a somewhat sensible text formatting abstraction.

          Tbf I wouldn't actually know how to build this myself, but e.g. bold and italic work together, whereas a "code block" thing should probably not work with font color, and putting a table inside one also makes no sense.

          Claude might get some of these interactions intuitively correct but at some point you'll have so many NxM interactions between features that it just forgets half of them and then the experience becomes sloppy and crashes on all edge cases.

          The point of good software engineering is to simplify the matrix to something that you can keep arguing about e.g. classify formatting options into categories and then you only have to argue and think about how those categories interact.

          This is the kind of thing LLMs just aren't really good at if the problem space isn't in the training data already => doing anything remotely novel. And I haven't seen it improve at this either over the releases.

          Maybe this kind of engineering will eventually be dead because claude can just brute force the infinitely growing if/else tree and keep it all in context but that does not seem very likely to me. So far we still have to think of these abstraction levels ourselves and then for the sub-problems I can apply agentic coding again.

          Just need to make sure that Claude doesn't breach these abstractions, which it also happily does to take shortcuts, btw.

          • By nerdsniper 2026-02-17 18:09 (2 replies)

            FWIW I’ve used LLMs to invent new things. Not super groundbreaking fundamental research, but they were able to use physics to design a device that didn’t exist yet, from first principles.

            • By mrguyorama 2026-02-17 18:56 (1 reply)

              Pics or it didn't happen

              More seriously, what in the world "novel" physics device did you invent?

              • By nerdsniper 2026-02-17 20:01 (1 reply)

                I didn’t say “novel physics” or “physics device”.

                • By mrguyorama 2026-02-17 20:45

                  Okay, so rereading it as pedantically as you seem to insist:

                  You "invented" ("Designed") a "device" "using physics", and nobody has designed that "device" before, making it novel.

                  "From first principles" is a fun statement because people like Aristotle also thought they were reasoning from "first principles" and look how far it got them. The entire point of science is that "first principles" are actually not something we have access to, so we should instead prioritize what literally happens and can be observed. It's not possible as far as we know to trick mother nature into giving us the answer we want rather than the real answer.

                  Did you ever actually build or test this "device"?

            • By _345 2026-02-17 20:18

              Would you share a bit more?

        • By rubenflamshep 2026-02-17 15:52

          Same. The only situation where I've consistently gotten a system to run for 20+ minutes was a data-analysis task with tight guardrails and explicit multi-phase operations.

          Outside that I'm juggling 2-3 sessions at most with nothing staying unattended for more than 10 minutes.

    • By jonahrd 2026-02-17 14:22 (2 replies)

      I might be able to shine a little light on this.

      I came from embedded, where I wasn't able to use agents very effectively for anything other than quick round trip iterative stuff. They were still really useful, but I definitely could never envision just letting an agent run unattended.

      But I recently switched domains into vaguely "fullstack web" using very popular frameworks. If I spend a good portion of my day going back and forth with an agent, working on a detailed implementation plan that spawns multiple agents, there is seemingly no limit* to the scope of the work they are able to accurately produce. This is because I'm reading through the whole plan and checking for silly gotchas and larger implementation mistakes before I let them run. It's also great because I can see how the work can be parallelized at certain parts but blocked at others, and how much work can be parallelized at once.

      Once I'm ready, I can usually let it start with not even the latest models, because the actual implementation is so straightforwardly prompted that it gets it close to perfectly right. I usually sit next to it and validate it while it's working, but I could easily imagine someone letting it run overnight to wake up to a fresh PR in the morning.

      Don't get me wrong, it's still more work than just "vibing" the whole thing, but it's _so_ much more efficient than actually implementing it, especially when it's a lot of repetitive patterns and boilerplate.

      * I think the limit is how much I can actually keep in my brain and spec out in a well thought out manner that doesn't let any corner cases through, which is still a limit, but not necessarily one coming from the agents. Once I have one document implemented, I can move on to the next with my own fresh mental context which makes it a lot easier to work.

      • By QuadrupleA 2026-02-17 15:35 (1 reply)

        The amount of boilerplate people talk about seems like the fault of these big modern frameworks honestly. A good system design shouldn't HAVE so much boilerplate. Think people would be better off simplifying and eliminating it deterministically before reaching for the LLM slot machine.

        • By jonahrd 2026-02-17 16:52 (1 reply)

          I'm not so sure I agree. To me it's somewhat magical that I can write even this amount of code and have this stuff just magically work on pretty much every platform via docker, the web platform, etc. Maybe this again is me having started with embedded, but I am blown away at the ratio of actual code to portability we currently have.

          • By ASalazarMX 2026-02-17 17:17

            > To me it's somewhat magical that I can write even this amount of code

            It's because you're not writing it, you adopted the role of Project Manager or Chief Engineer. How much cognitive debt are you accumulating?

      • By rubenflamshep 2026-02-17 16:14

        Interesting. What would you say is your ratio of "sit down and make the implementation" time to "multi-agent system builds the thing" time?

    • By LeonidBugaev 2026-02-17 15:34 (2 replies)

      I had a few useful examples of this. In order to make it work you need to define your quality gates and a rather complex spec. I personally use https://github.com/probelabs/visor for creating the gates. A gate can be a code review, or a check of how well the implementation aligns with the spec, etc., and basically it makes the agent loop until it passes. One of the tips, especially when using Claude Code, is to explicitly ask it to create "tasks", and also to use subagents. For example, to validate and restructure all my documentation, I would ask it to create a task to research the state of my docs, then a task per specific detail, then a task to re-validate quality after it has finished. You can also play around with the gates using simpler tooling, for example https://probelabs.com/vow/

      Hope it helps!

      • By thwarted 2026-02-17 18:05

        > One of the tips, especially when using Claude Code, is to explicitly ask it to create "tasks", and also to use subagents. For example, to validate and restructure all my documentation, I would ask it to create a task to research the state of my docs, then a task per specific detail, then a task to re-validate quality after it has finished.

        This is definitely a way to keep those who wear Program and Project manager hats busy.

      • By HolyLampshade 2026-02-17 18:23

        That is interesting. Never considered trying to throw one or two into a loop together to try to keep it honest. Appreciate the Visor recommendation, I'll give it a look and see if I can make this all 'make sense'.

    • By meta_1995 2026-02-17 14:33 (1 reply)

      Not a dev but doing some side projects.

      As I build with agents, I frequently run into new issues that aren't in scope for the task I'm on and would cause context drift. I have the agent create a GitHub issue with a short problem description and keep going on the current task. In another terminal I spin up a new agent and just tell it “investigate GH issue 123” and it starts diving in, finds the root cause, and proposes a fix. Depending on what parts of the code the issue fix touches and what other agents I've got going, I can have 3-4 agents more or less independently closing out issues/creating PRs for review at a time. The agents log their work in a work log - what they did, what worked and what didn't, problems they encountered using tools - and about once a day I have an agent review the work log and update the AGENTS.md with lessons learned.

      • By dummydummy1234 2026-02-18 1:37

        What are you using as the environment for this? I am running into similar issues; I can't really spin up a second agent because they would collide. Just a newly cloned repo?

    • By Leynos 2026-02-17 15:40

      With 5.3 Codex, the execplans skill, and a well-specified implementation task, you can get a good couple of hours' work in a single turn. That's already in the scope of "set it up before bed and review it in the morning".

      If you have a loop set up, e.g., using OpenClaw or a Ralph loop, you can stretch that out further.

      I would suggest that when you get to that point really, you want some kind of adversarial system set up with code reviews (e.g., provided by CodeRabbit or Sourcery) and automation to feed that back into the coding agent.

    • By mikemarsh 2026-02-17 14:26

      > what are those agents actually doing?

      Providing material for attention-grabbing headlines and blog posts, primarily. Can't (in good conscience, at least) claim you had an agent running all night if you didn't actually run an agent all night.

    • By candiddevmike 2026-02-17 15:21 (1 reply)

      Maybe it's the programmer equivalent of rolling coal.

    • By dudeinhawaii 2026-02-17 18:56

      If you visualize it as AI agents throwing a rope to wrangle a problem, and then visualize a dozen of these agents throwing their ropes around a room, and at each other -- very quickly you'll also visualize the mess of code that a collection of agents creates without oversight. It might even run, and some might say that's all that matters, but... at what cost in code complexity, performance waste, cascading bugs, etc.?

      Is it possible? Yes, I've had success with having a model output a 100 step plan that tried to deconflict among multiple agents. Without re-creating 'Gas town', I could not get the agents to operate without stepping on toes. With _me_ as the grand coordinator, I was able to execute and replicate a SaaS product (at a surface level) in about 24hrs. Output was around 100k lines of code (without counting css/js).

      Who can prove that it works correctly though? An AI enthusiast will say "as long as you've got test coverage blah blah blah". Those who have worked on large-scale products know that tests passing is basically "bare minimum". So you smoke test it, hope you've covered all the paths, toss it up, and try to collect money from people? I don't know. If _this_ is the future, it will collapse under the weight of garbage code, security and privacy breaches, and who knows what else.

    • By mmasu 2026-02-17 17:14

      I will give you an example I heard from an acquaintance yesterday - this person is very smart but not strictly “technical”.

      He is building a trading automation for personal use. In his design he gets a message on whatsapp/signal/telegram and approves/rejects the trade suggestion.

      To define specifications for this, he defined multiple agents (a quant, a data scientist, a principal engineer, and trading experts - “warren buffett”, “ray dalio”) and let the agents run until they reached a consensus on what the design should be. He said this ran for a couple of hours (so not strictly overnight) after he went to sleep; in the morning he read and amended the output (10s of pages equivalent) and let it build.

      This is not a strictly-defined coding task, but there are now many examples of emerging patterns where you have multiple agents supporting each other, running tasks in parallel, correcting/criticising/challenging each other, until some definition of “done” has been satisfied.

      That said, personally my usage is much like yours - I run agents one at a time and closely monitor output before proceeding, to avoid finding a clusterfuck of bad choices built on top of each other. So you are not alone my friend :-)

    • By flerchin 2026-02-17 13:46 (1 reply)

      This is my experience of it too. Perhaps if it was chunking through a large task like upgrading all of our repos to the latest engine supported by our cloud provider, I could leave it overnight. Even then it would just result in a large daylight backlog of "not quite right" to review and redo.

      • By HolyLampshade 2026-02-17 15:22 (1 reply)

        I think that's the issue I have with using these tools so far (definitely professionally, but even in pet projects for embedded systems). The mental load of having to go back through and make sure all of the lines of code do what the agent claims they do, even with tests, is significantly more than it would take to learn the implementation myself.

        I can see the utility in creating very simple web-based tools where there's a monstrous wealth of public resources to build a model off of, but even the most recent models provided by Anthropic, OpenAI, or MSFT still seem to fall just short of perfection. And every time I find an error I'm left wondering what other bugs I'm not catching.

        • By flerchin 2026-02-17 18:18

          What I tell my kids is: you know how, when you ask AI about something you know very well, its answers are always somewhat wrong? It's like that for things you do not know very well, too.

    • By cbg0 2026-02-17 15:40 (1 reply)

      This is very dependent on what kind of work you're asking the agent to do. For software, I've had quite a bit of success providing detailed API specifications and asking an LLM to build a client library for that. You can leave it running unattended as long as it knows what it's supposed to build and it won't need a lot of correction since you're providing the routes, returned statuses and possible error messages.
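      The spec-first approach described above can be sketched as a small script. This is a hypothetical illustration, not the commenter's actual setup: the endpoint names and the prompt wording are invented, and the point is only that the routes, statuses, and error bodies are spelled out up front so the agent has little room to improvise.

      ```python
      # Hypothetical sketch: a structured API spec rendered into an
      # unambiguous agent prompt. All endpoint names are invented.
      API_SPEC = [
          {
              "method": "GET",
              "path": "/v1/widgets/{id}",
              "returns": {200: "Widget object", 404: '{"error": "not_found"}'},
          },
          {
              "method": "POST",
              "path": "/v1/widgets",
              "returns": {201: "Widget object", 422: '{"error": "validation_failed"}'},
          },
      ]

      def render_prompt(spec):
          """Turn the structured spec into a constrained task prompt."""
          lines = ["Build a client library covering exactly these endpoints:"]
          for ep in spec:
              lines.append(f"- {ep['method']} {ep['path']}")
              for status, body in sorted(ep["returns"].items()):
                  lines.append(f"    {status}: {body}")
          lines.append("Do not add endpoints that are not listed above.")
          return "\n".join(lines)
      ```

      The closing "do not add endpoints" line is doing the real work: the less the model is allowed to invent, the less correction the run needs afterward.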

      Do some people just create complete SaaSlop apps with it overnight? Of course, just put together a plan (by asking the LLM to write the plan) with everything you want the app to do and let it run.

      • By skydhash 2026-02-18 1:26

        > it won't need a lot of correction since you're providing the routes, returned statuses and possible error messages.

        Wouldn't it be better to set up API docs (Postman, RapidAPI,…), extract an OpenAPI spec from that, then use a generator for your language of choice (NSwag,…)?

    • By YetAnotherNick 2026-02-17 14:04

      There has only been one coding instance where I let the agent run for like 7 hours: to generate Playwright tests. Once the scaffolding is done, it is just a matter of writing a test for each of the components. But yeah, even for that I didn't just fire and forget.

    • By mikrotikker 2026-02-17 13:45 (1 reply)

      I wrote a program to classify thousands of images but that was using a model running on my gaming PC. Took about 3 days to classify them all. Only cost me the power right?

      • By irishcoffee 2026-02-17 16:36

        Power, gaming rig, internet, somewhere to store the rig, probably pay property taxes too.

        You can draw the line wherever you want. :) Personally, I wish I'd built a new gaming rig a year ago so I could mess with local models and pay all these same costs.

    • By mrnotcrazy 2026-02-17 17:03

      I have agents run at night to work through complicated TTRPG campaigns. For example, I have a script that runs all night simulating NPCs before a session. The NPCs have character sheets + motivations, and the LLMs do one prompt per NPC in stages so combat can happen after social interactions. If you run enough of these and write the prompts well, you can save a lot of time. You can't like... simulate the start of a campaign and then jump in. It's more like: you know there is a big event, you already have characters, you can throw them in a folder to see how things would cook all else being equal, and then use that to riff off of when you actually write your notes.

      I think of my agents like golems from Discworld: they are defined by their script. Adding texture to them improves the results, so I usually keep a running tally of what they have worked on and add that to the header. They are a prompt in a folder that a script loops over and sends to Gemini (spawning an agent and moving to the next golem script).
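      The "golem" loop described above might look something like this. It's a minimal sketch under assumptions: the commenter's actual script isn't shown, the `.txt`-file convention and the `call_model` callable are invented, and you would swap in a real Gemini API call for `call_model`.

      ```python
      # Sketch of the golem pattern: each agent is a prompt file in a
      # folder; the script loops over them, prepends a running tally of
      # past work ("texture"), and ships each prompt to the model.
      from pathlib import Path

      def build_golem_prompt(script_text, history):
          """Prepend the running tally header to the golem's base script."""
          if not history:
              return script_text
          header = "Previous work:\n" + "\n".join(f"- {h}" for h in history)
          return header + "\n\n" + script_text

      def run_golems(folder, call_model, history=None):
          """Loop over every golem script in the folder, collect results."""
          history = history or []
          results = {}
          for path in sorted(Path(folder).glob("*.txt")):
              prompt = build_golem_prompt(path.read_text(), history)
              results[path.stem] = call_model(prompt)  # e.g. a Gemini call
              history.append(f"{path.stem}: done")     # grow the tally
          return results
      ```

      Because the tally accumulates as the loop runs, later golems see what earlier ones did, which is what lets staged runs (social interactions before combat) build on each other.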

      I was also curious to see if it could be used for developing some small games; whenever I ran into a problem I couldn't be bothered to solve, or needed a variety of something, I would let a few LLMs work on it so in the morning I had something to bounce off. I had pretty good success with this for RTS games and shooting games, where variety is well documented and creativity is allowed. I imagine there could be a use here; I've been calling it dredging because I imagine myself casting a net down into the slop to find valuables.

      I did have an idea where all my sites and UI would be checked against some UI heuristic like Oregon State's inclusivity heuristic but results have been mixed so far. The initial reports are fine, the implementation plans are ok but it seems like the loop of examine, fix, examine... has too much drift? That does seem solvable but I have a concern that this is like two lines that never touch but get closer as you approach infinity.

      There is some usefulness in running these guys all night, but I'm still figuring out when it's useful and when it's a waste of resources.

    • By BatteryMountain 2026-02-17 16:23 (1 reply)

      Spin up a mid-sized Linux VM (any machine with 8 or 12 cores, at least 16GB RAM, and NVMe will do). Add 10 users. Install Claude 10 times (one per user). Clone the repo 10 times (one per user). Have a centralized place to get tasks from (db, Trello, txt, etc) - this is the memory. Have a cron wake up every 10 minutes and call your script. Your script calls Claude in non-interactive mode + auto accept. It grabs a new task, takes a crack at it, and creates a pull request. That is 6 tasks per hour per user, times 12 hours. Go from there and refine the harnesses/skills/scripts that the Claudes can use.

      In my case, I built a small api that claude can call to get tasks. I update the tasks on my phone.

      The assumption is that you have a semi-well structured codebase already (ours is 1M LOC C#). You have to use languages with strong typing + a strict compiler. You have to force Claude to build the code frequently (hence the CPU cores + RAM + NVMe requirement).

      If you have multiple machines doing work, have a single one as the master and give Claude ssh to the others, and it can configure them and invoke work on them directly. The use case for this is when you have a beefy Proxmox server with many smaller containers (think .NET + Debian). Give the main server access to all the "worker servers". Let Claude document this infrastructure too and the different roles each machine plays. Soon you will have a small ranch of AIs doing different things, on different branches, making pull requests and putting feedback back into the task manager for you to upvote or downvote.

      Just try it. It works. Your mind will be blown what is possible.
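      One tick of the cron loop described above could be sketched roughly like this. This is a guess at the shape, not the commenter's actual script: the `tasks.txt` queue file, the prompt wording, and the repo path are invented, and `claude -p` (print/non-interactive mode) with `--dangerously-skip-permissions` (auto-accept) are real Claude Code flags, though your harness may need different ones.

      ```python
      # Sketch of one cron tick for one user: claim the next task from a
      # shared queue (here, a plain text file with one task per line),
      # then run the agent on it non-interactively.
      # crontab entry (hypothetical): */10 * * * * python3 agent_tick.py
      import subprocess
      from pathlib import Path

      def claim_next_task(queue_file):
          """Pop the first pending task from the shared queue file."""
          path = Path(queue_file)
          if not path.exists():
              return None
          tasks = [t for t in path.read_text().splitlines() if t.strip()]
          if not tasks:
              return None
          path.write_text("\n".join(tasks[1:]))  # remove the claimed task
          return tasks[0]

      def build_agent_cmd(task):
          """Non-interactive Claude invocation (print mode, auto-accept)."""
          prompt = f"Work on this task, then open a pull request: {task}"
          return ["claude", "-p", prompt, "--dangerously-skip-permissions"]

      def tick(queue_file, repo_dir):
          """One cron tick: claim a task and run the agent in the repo."""
          task = claim_next_task(queue_file)
          if task:
              subprocess.run(build_agent_cmd(task), cwd=repo_dir, check=True)
      ```

      A flat file is the weak point here (two users can race on the claim); the commenter's move to a small task API is the natural fix, since the server can hand out each task exactly once.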

      • By hattmall 2026-02-17 16:44 (1 reply)

        So is this something you do with a monthly subscription or is this using API tokens?

        • By BatteryMountain 2026-02-17 18:07 (1 reply)

          At first we used Claude Max x5, but we are using the API now.

          We only give it very targeted tasks, no broad strokes. We have a couple of "prompt" templates, which we select when creating tasks. The new opus model one shots about 90% of tasks we throw at it. Getting a ton of value from diagnostic tasks, it can troubleshoot really quickly (by ingesting logs, exceptions, some db rows).

          • By hattmall 2026-02-18 3:19

            Thanks. In your example, are you saying that you had 10 Claude accounts, or were all 10 user accounts able to work within the allotment of a single Claude subscription? I've only ever dealt with the API, and it got way too expensive quickly for the quality I was getting back.

    • By wiseowise 2026-02-17 14:11

      > what are those agents actually doing

      Generate material for yet another retarded twitter hype post.


HackerNews