Comments

  • By FrasiertheLion 2026-03-03 8:42 | 4 replies

    AI has normalized single 9's of availability, even for non-AI companies such as GitHub that have to rapidly adapt to AI-aided scale-ups in usage patterns. Understandably so, because GPU capacity is pre-allocated months to years in advance, in large discrete chunks dedicated to either inference or training, with a modest buffer that exists mainly so you can cannibalize experimental research jobs during spikes. It's just not financially viable to keep spades of reserve capacity, especially these days, when supply chains are already under great strain and we're starting to be bottlenecked on chip production. And if they got around it by serving a quantized or otherwise ablated model (a common strategy in some instances), all the new users would be disappointed and it would damage trust.

    Fewer 9's are a reasonable tradeoff for the ability to ship AI to everyone, I suppose. That's one way to prove the technology isn't reliable enough to be shipped into autonomous kill chains just yet lol.
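
    For reference, a quick back-of-the-envelope sketch of what each 9 actually buys you (a hedged illustration of the arithmetic, not anyone's actual SLA math):

      # Rough downtime budget per "nine" of availability, per 30-day month.
      MINUTES_PER_MONTH = 30 * 24 * 60

      for nines in range(1, 5):
          availability = 1 - 10 ** -nines   # 1 nine = 90%, 2 nines = 99%, ...
          downtime = MINUTES_PER_MONTH * (1 - availability)
          print(f"{availability:.2%} uptime -> {downtime:,.1f} min of downtime/month")

    A single 9 allows roughly 72 hours of downtime a month; three 9's allow about 43 minutes.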

    • By TacticalCoder 2026-03-03 12:23

      > AI has normalized single 9's of availability, ...

      FWIW I use AI daily to help me code...

      And apparently the output of LLMs is normalizing single 9's too, which may or may not be sufficient.

      Between the security SNAFUs, the performance issues, the gigantic amount of kitchen-sink boilerplate generated (which will require maintenance, and that has always been the killer), and now the uptime issues, I realize we all need to use more of our brains, not less, when using these AI tools. And that's not even counting the times when the generated code simply doesn't do what it should.

      For a start, if you don't know jack shit about infra, it looks like you're already in for a whole world of hurt: when that agent rm -rf's your entire Git repo and FUBARs your OS because you had no idea how to compartmentalize it, you'll feel bad. Same once all your secrets get publicly exposed.

      It looks like you won't just need a strong grounding in coding: you'll also need to be at ease with the entire stack. Learning to be a "prompt engineer" definitely sounds like the easy part. Trivial, even.
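
      On the compartmentalization point, a minimal sketch of the idea in Python ("agent-cli" is a hypothetical stand-in for whatever agent you run, not a real tool): give the agent a throwaway copy of the repo and a stripped environment, so both the worst-case rm -rf and the secret leak stay contained.

        import os
        import shutil
        import subprocess
        import tempfile

        def run_agent_sandboxed(repo_path: str, prompt: str) -> str:
            """Run a (hypothetical) agent CLI against a disposable copy of the repo."""
            scratch = os.path.join(tempfile.mkdtemp(prefix="agent-"), "repo")
            shutil.copytree(repo_path, scratch)       # agent may trash this copy, not the original
            clean_env = {"PATH": os.environ["PATH"]}  # no API keys or tokens inherited from the shell
            subprocess.run(["agent-cli", prompt], cwd=scratch, env=clean_env, check=False)
            return scratch                            # diff against repo_path and review before merging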

    • By direwolf20 2026-03-03 10:11 | 1 reply

      That's supposing the autonomous kill chain needs more than one 9. There are wars going on right now with less than 20% targeting accuracy.

      • By mrbombastic 2026-03-03 17:56

        Are we going to do the same "everything is binary" engineer thing with bombs and civilian casualties that we did with self-driving? There is also an accountability crisis that will unfold if we loose these things on the world. It is not just "one metric is better than human operators, therefore take your hands off the wheel and hope for the best". Please file a ticket with support if your child's school was accidentally destroyed.

    • By gaigalas 2026-03-03 9:23 | 1 reply

      "It's fine, everyone does it"

      • By KronisLV 2026-03-03 14:42 | 1 reply

        There's probably a curve of diminishing returns when it comes to how much effort you throw in to improve uptime, which also directly affects the degree of overengineering around it.

        I'm not saying that it should excuse straight-up bad engineering practices, but I'd rather have them iterate on the core product than pursue near-perfect uptime (and maybe even make their Electron app more usable: switching conversations shouldn't sometimes take 2-4 seconds when those could be stored locally, and there should at least be a bare-minimum indicator that something is happening, instead of "Let me write a plan" followed by nothing that distinguishes progress from a silently dropped connection).

        Sorry about the usability rant, but my point is that I'd expect medical systems and planes to have amazing uptime, whereas I wouldn't be so demanding of most lower-stakes things. The context I've not mentioned so far is that I've seen whole systems get developed poorly because the team overengineered the architecture and crippled their ability to iterate, sometimes assuming they'd need scale when a simpler but better-developed architecture would have sufficed!

        Ofc there's a difference between sometimes having to wait in a queue for a request to be serviced, or having a few requests dropped here and there and needing to retry them, vs your system having a cascading failure that it can't automatically recover from and that brings it down for hours. Not having enough cards feels like it should result in the former, not the latter.
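
        To make the distinction concrete, here's a toy sketch of the former (a bounded queue that sheds load), which is roughly what "not enough cards" should degrade into:

          import queue

          pending: queue.Queue = queue.Queue(maxsize=100)   # finite buffer in front of the capacity

          def accept(request) -> bool:
              """Bounded queueing: overload becomes a fast, retryable rejection."""
              try:
                  pending.put_nowait(request)
                  return True       # queued; served when capacity frees up
              except queue.Full:
                  return False      # shed load (e.g. a 503); no unbounded backlog, no cascade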

        • By gaigalas 2026-03-03 15:57 | 1 reply

          I kind of agree. The AI train depends more on having a cute user interface than on being actually reliable.

          • By KronisLV 2026-03-03 18:56 | 1 reply

            Ehh, I'd say there's not much difference between the UI being, for all intents and purposes, frozen (whether because a status indicator is missing or because a request was actually dropped) and the UI doing nothing because there's nothing going on, you know?

            Or having the Electron UI be sluggish 99% of the time during daily use vs dealing with the 1% of the time when there are outages. I'd rather have the 99% be good and the 1% not work than have 99.9% be miserable and 0.1% not work.

            • By gaigalas 2026-03-03 21:38

              Yep, it's not like electricity which is an essential service.

              If electricity fails for one second, chaos breaks loose, as so many things depend on it. Imagine having several micro-blackouts a day? We wouldn't tolerate it. It's so reliable that normal people don't need redundancy for it; they just tap into the stream, which is always available.

              AI is definitely shaping up to NOT become like that. We're designing for an unreliable system ("try again" buttons, etc.), and the use cases follow that design.
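
              The "try again button" design, as a minimal client-side sketch (assuming, for illustration, that the wrapped call raises ConnectionError on failure): reliability gets bolted on by the caller instead of guaranteed by the service.

                import random
                import time

                def call_with_retries(fn, attempts: int = 5):
                    """Retry with exponential backoff and jitter around an unreliable call."""
                    for attempt in range(attempts):
                        try:
                            return fn()
                        except ConnectionError:
                            if attempt == attempts - 1:
                                raise                                   # give up; surface the "try again" button
                            time.sleep(2 ** attempt + random.random())  # ~1s, 2s, 4s... plus jitter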

    • By Copyrightest 2026-03-03 11:37

      [dead]

  • By thekid314 2026-03-03 5:10 | 1 reply

    Yeah, the influx of people is disrupting my work, but it brings me joy to witness OpenAI’s decline in consumer support. So much for their Jony Ive product, whatever it was.

    • By camillomiller 2026-03-03 5:11 | 3 replies

      I am so baffled that someone with the stature of Jony Ive fell prey to scam Altman's empty promises. I would have expected much more of him.

      • By skywhopper 2026-03-03 9:34

        Seriously? Jony Ive is in his Cash In era. He long ago stopped being relevant, and was a huge drag on Apple for a decade. He’s perfectly happy to take billions for doing nothing, I’m sure.

      • By rhubarbtree 2026-03-03 7:50

        What were the empty promises?

      • By chihuahua 2026-03-03 5:41 | 1 reply

        Altman put all of his attribute points into lying.

        • By Sammi 2026-03-03 10:36

          He's a Bard with all his points in Charisma. He doesn't do anything except sing fairy tale songs.

          That's the prettier fantasy version. The other is that he is Gríma Wormtongue.

  • By adithyassekhar 2026-03-03 5:57 | 8 replies

    Are employees from Anthropic botting this post now? This should be one of the most upvoted posts on this website, but it's nowhere in the first 3 pages.

    Also remember: using Claude to code might make the company you're working for richer. But you are forgetting your skills (seen it first hand), and you're not learning anything new. Professionally you are downgrading. Your next interview won't be testing your AI skills.

    • By raincole 2026-03-03 9:25 | 4 replies

      > Your next interview won't be testing your AI skills.

      You are living under quite a big rock.

      • By loevborg 2026-03-03 10:01 | 1 reply

        Literally every interview I've done recently has included the question: "What's your stance on AI coding tools?" And there's clearly a right and wrong answer.

        • By koito17 2026-03-03 14:39 | 1 reply

          In my case, the question was "how are you using AI tools?", trying to see whether you're still in the metaphorical stone age of copy-pasting code into chatgpt.com or making use of (at the time) modern agentic workflows. Not sure how good an idea this is, but at least it was a question that popped up only after passing the technical interviews. I want to believe its purpose was to gauge whether applicants were keeping up with dev tooling or potentially stagnating.

          • By danielbarla 2026-03-03 16:22

            To be fair, this topic seems to be quite divisive, and it seems like something that definitely should be discussed during an interview. Who is right and wrong is one thing, but you likely don't want to be working for a company whose take on this topic is incompatible with yours.

      • By bakugo 2026-03-03 10:09 | 1 reply

        If you can only code with AI, soon you won't have interviews at all because there's no reason to hire you, as the managers can just type the prompts themselves. Or at least that's what I've been led to believe by the marketing.

        • By malka1986 2026-03-03 10:57 | 1 reply

          Unless you are doing stuff that does not need to be maintained, there is still a need for a skilled human to maintain proper software architecture.

          It is the managers who are doomed. The future is small teams of devs answering directly to the CTO.

          • By PessimalDecimal 2026-03-03 11:46

            My guess is this is correct. To the extent coding with agents becomes dominant, the need for non-technical managers to coordinate large numbers of developers will decrease.

      • By brunooliv 2026-03-03 10:25

        What rock?

        C'mon, let's be real here: it's either "testing AI skills" or "using AI agents like you would on the daily".

        The signal you get from leetcode is already dubious for asserting proficiency; it's mostly used as a filter for "Are you willing to cram useless knowledge and write code under pressure to get the job?", just like system design is. You won't be doing any system design for "scale" anywhere in big tech because you have architects for that, nor do you need to "know" anything; it's mostly gatekeeping. But the truth is, LLMs democratized both leetcode and system design anyway. Anyone with the right prompting skills can now get to an output that's good for 99% of cases, and the other 1% is reserved for architects/staff engineers to "design" for you.

        The crux of the matter is that companies do not want to shift how they approach interviews for the new era, because we have collectively decided that the current process is good enough as-is. Again, I'd argue this is questionable, given how these services sometimes break with every new product launch or "under load" (where YO SYSTEM DESIGN SKILLZ AT).

      • By adithyassekhar 2026-03-03 9:27 | 1 reply

        I wish I could edit that. Read: "...AI skills alone".

        • By rozap 2026-03-03 16:38

          Some people think so. I interviewed someone who, on a screenshare, would just type every question I said, verbatim, into antigravity. Then he'd look at the output for a second and say "Hm this looks good" (it was not) and then run the code and paste the error back into the prompt. It was a surreal experience. I didn't end the interview early because it was so incredibly wild I couldn't even believe it. I don't think he had a single thought the entire time that wasn't motivated by the LLM output.

    • By vidarh 2026-03-03 10:08 | 2 replies

      If you're not learning anything new, you're doing it wrong.

      There's a massive gap between just using an LLM and using it optimally, e.g. with a proper harness, customised to your workflows, with sub-agents etc.

      It's a different skill set, and if you're going to go into another job that requires manual coding without any AI tools, then by all means focus on keeping those skills sharp.

      Meanwhile, my last interview already did test my AI skills.

      • By polairscience 2026-03-03 10:17 | 1 reply

        Do you have any descriptions or analyses of what's considered doing it "properly" on the cutting edge? I'm very curious. Only part of my profession is coding, but it would be nice to get insight into how people who really try to learn with these tools work.

        • By vidarh 2026-03-03 11:22 | 1 reply

          I would say the first starting point is to run your agent somewhere you're comfortable giving it mostly unconstrained permissions (e.g. --dangerously-skip-permissions for Claude Code), but more importantly, to set up sub-agents to hand off most work to.

          A key factor to me in whether you're "doing it right" is whether you're sitting there watching the agent work because you need to intervene all the time, or whether you go do other stuff and review the code when the agent thinks it's done.

          To achieve that, you need a setup with skills and sub-agents that 1) lets the model work semi-autonomously from the planning stage until commit, and 2) keeps as much as possible out of the main context.

          E.g. at one client, the Claude Code plugin I've written for them will pull an issue from Jira, ask for clarification if needed, then augment the ticket with implementation details and write a detailed TODO list. Once that's done, the TODO items are fed to a separate implementation agent to do the work, one by one. This keeps the top-level agent free to orchestrate, with little entering its context, so it can keep working for hours without stopping.

          Once it's ready to commit, it invokes a code-review agent. Once the code-review agent is satisfied (possibly after re-work), it goes through a commit checklist and offers to push.

          None of these agents are rocket science. They're short and simple, because the point isn't for them to have lots of separate context, but mostly to tell them the task and move that step out of the main agent's context.

          I've worked on a lot more advanced setups too, but to me, table stakes beyond minimising permissions is to have key workflows laid out in a skill and to delegate each step to a separate sub-agent.
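
          For a rough idea of the shape (a sketch only; run_subagent is a hypothetical stand-in for however your harness spawns sub-agents, not a real API):

            def run_subagent(role: str, task: str) -> str:
                """Hypothetical stand-in: a real harness would spawn a sub-agent with a
                fresh context here and return only its short summary."""
                return f"[{role}] done: {task[:40]}"

            def work_ticket(ticket: str) -> None:
                # The orchestrator keeps only plans and summaries in its context;
                # full diffs and tool output stay inside each sub-agent.
                todo = run_subagent("planner", f"Augment and break down into TODO items: {ticket}")
                for item in todo.splitlines():
                    print(run_subagent("implementer", item))  # one item per fresh context
                review = run_subagent("code-reviewer", "Review the working tree against the plan")
                run_subagent("committer", f"Address findings, run the checklist, offer to push: {review}")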

          • By afro88 2026-03-03 18:38 | 1 reply

            Nice setup, but GP said:

            > how people who really try to learn with these tools work

            This setup is potentially effective sure, but you're not learning in the sense that GP meant.

            For GP: Personally I've reached the conclusion that it's better for my career to use agents effectively and operate at this new level of abstraction, with final code review by me and then my team as normal.

            • By vidarh 2026-03-04 10:14

              > This setup is potentially effective sure, but you're not learning in the sense that GP meant.

              Then GP didn't mean anything useful. I've learned how to build those setups. I learn to build by orchestrating groups of agents, and I get to spend far more of my time focusing on architecture rather than on minutiae that are increasingly irrelevant.

      • By gck1 2026-03-03 10:29

        > Meanwhile, my last interview already did test my AI skills.

        Curious to hear more about this.

    • By thepasch 2026-03-03 8:29 | 2 replies

      > But you are forgetting your skills

      Depends on what you consider your "skills". You can always relearn syntax, but you're certainly not going to forget your experience building architectures and developing a maintainable codebase. LLMs only do the what for you, not the why (or you're using them wrong).

      • By adithyassekhar 2026-03-03 8:41 | 1 reply

        There are three sides to this depending on when you started working in this field.

        For the people who started before the LLM craze: they won't lose their skills if they just focus on their original roles. The truth is that in most companies people are being assigned more than their original roles: backend developers are tasked with frontend, devops and QA duties, and then the other roles are let go. This is happening right now. https://www.reddit.com/r/developersIndia/comments/1rinv3z/ju... When this happens, they don't care, or don't have the mental capacity to care, about a codebase in a language they've never worked in before. People here talk about guiding the LLMs, but at most places they are too exhausted to carry that context and just let Claude review its own code.

        For the people who are starting right now: they're discouraged from all sides from writing code themselves. They'll never understand why an architecture is designed a certain way. Sure, ask the LLM to explain, but it's like learning to swim by reading a book. They have to blindly trust the code and keep hitting it like a slot machine, burning tokens, which makes these companies more money.

        For the people who are yet to begin: sorry you have to start in a world where a few companies hold everyone's skills hostage.

        • By skydhash 2026-03-03 11:14

          > For the people who are starting right now: they're discouraged from all sides from writing code themselves. They'll never understand why an architecture is designed a certain way. Sure, ask the LLM to explain, but it's like learning to swim by reading a book.

          This! There are several forces that act on how code is written, and getting the software to work is only one. Abstraction is another, which itself reflects two needs: not repeating code, and solving the metaproblem instead of the direct one. Simplicity is another factor (solving only the current problem). Then there's making the design manifest in how the files are arranged, ...

          As a developer, you need to guarantee that the code you produced works. But how the computer works is not how we think. We invented a lot of abstractions between the two, knowing the performance cost of each one. And we also invented a lot of techniques to help us further, but most of them are only learned once you've experienced the pain of not knowing them. That's when you start saying things like "code smell", "technical debt" and "code is a liability" even when things do work.

      • By the_bigfatpanda 2026-03-03 8:50

        The syntax argument is correct, but from what I am seeing, people _are_ using it wrong, i.e. they have started offloading most of their problem solving to the LLM first: not just using it to maybe refine their ideas, but starting there.

        That is a very real concern. I've had to chase engineers to ensure that they are not blindly accepting everything the LLM says, encouraging them to first form some sense of what the solution could be and only then use the LLM to refine it further.

        As more and more thinking is offloaded to LLMs, people lose their gut instinct about how their systems are designed.

    • By AlexeyBelov 2026-03-03 6:04 | 3 replies

      > Your next interview won't be testing your AI skills

      Not that I disagree with your overall point, but have you interviewed recently? 90% of the companies I interacted with required (!) AI skills, and wanted me to tell them exactly how I "leverage" AI to increase my productivity.

      • By adithyassekhar 2026-03-03 6:12 | 3 replies

        Are they just looking for AI skills? If so, that's terrifying.

        • By AlexeyBelov 2026-03-03 9:33 | 1 reply

          The usual LeetCode-ish tasks, often system design, but then deep AI usage. "I use Copilot" isn't going to fly at all, as far as I understand.

          • By nDRDY 2026-03-03 9:37

            Are you allowed to leverage AI to answer the leetcode questions? Otherwise, it seems it is the interviewers who are behind the times!

        • By ternwer 2026-03-03 7:46 | 1 reply

          I think most are looking for both.

          AI/LLM knowledge without programming knowledge can make a mess.

          Programming knowledge without AI/LLM knowledge can also make a mess.

          • By palmotea 2026-03-03 8:25 | 2 replies

            > AI/LLM knowledge without programming knowledge can make a mess.

            That makes sense.

            > Programming knowledge without AI/LLM knowledge can also make a mess.

            How? I'd imagine that most typically means continuing to program by hand. But even someone like that would probably know enough to not mindlessly let an LLM agent go to town.

            • By thepasch 2026-03-03 8:34 | 1 reply

              > How? I'd imagine that most typically means continuing to program by hand.

              I think the use of LLMs is assumed by that statement. The point is that even experienced programmers can get poor results if they're not aware of the tech's limitations and best practices. It doesn't mean you get poor results by default.

              There is a lot of hype around the tech right now; plenty of it overblown, but a lot of it also perfectly warranted. It's not going to make you "ten times more productive" outside of maybe laying the very first building blocks on a green field: the infamous first 80% that only takes 20% of the time anyway. But it does allow you to spend a lot more time designing and drafting, and a lot less time actually implementing, which, if you were spec-driven to begin with, has always been little more than a formality in the first place.

              For me, the actual mental work never happened while writing code; it happened well in advance. My workflow hasn't changed that much; I'm just not the one who writes the code anymore, but I'm still very much the one who designs it.

              • By ternwer 2026-03-03 8:44

                Yes, I've seen many people become _too_ hands-off after an initial success with LLMs, and get bitten by not understanding the system.

                Hirers, above, are more focused on the opposite side, though: engineers who try AI once, see a mess or hallucinations, and decide it's useless. There is some learning involved in figuring out how to wield it.

            • By column 2026-03-03 11:00 | 1 reply

              "How?" <- It shows a lack of curiosity?

              "probably know enough" <- that's exactly the point of the question, is the candidate clueless about AI/LLM.

              • By palmotea 2026-03-03 14:52 | 1 reply

                > "How?" <- It shows a lack of curiosity?

                We're talking about a codebase here. How does "lack of curiosity" about LLMs "make a mess"?

                > "probably know enough" <- that's exactly the point of the question, is the candidate clueless about AI/LLM.

                Probably knows enough about what's a good vs a bad change. If you're "clueless about AI/LLM" but know a bad change when you see one, how do you "make a mess"?

                It's 2026; even a developer who's never touched an LLM before has heard about LLM hallucinations. If you've got programming knowledge, you should know how to make changes (e.g. you're not going to commit 200 files for a tiny change, because you know that doesn't smell right), which should guard against "making a mess."

                My point is that it doesn't seem reasonable to assume symmetry here, i.e. that if you don't know both things, you'll make a mess. That would also imply everything built before 2022 was a mess, because those developers knew programming but not LLMs, which is an unreasonable claim to make.

                • By ternwer 2026-03-03 23:53

                  I was too cute in trying to be terse, but I meant a mess while using AI:

                  > [Employers], above, are more focused on the opposite side, though: engineers who try AI once, see a mess or hallucinations, and decide it's useless. There is some learning involved in figuring out how to wield it.

        • By tmountain 2026-03-03 6:43 | 1 reply

          Probably, I think hand coding is going the way of the dodo and the ox cart.

          • By adithyassekhar 2026-03-03 7:39

            Sorry but focusing on the hand coding part misses the whole picture and would derail the conversation. Comparisons like that are often dishonest.

            Hiring someone who writes Rust with Claude but has never written anything in it themselves, never faced the edge cases, never made the wrong decisions, feels naive to me. At the end of the day it's still a next-token generator, an impressive one. It can hold context but can't relate to anything outside that context. Someone needs to take accountability.

      • By PacificSpecific 2026-03-03 9:14

        I've done a couple flirty interviews and so far it hasn't come up. So take hope, it's not all bad.

      • By nDRDY 2026-03-03 9:36 | 1 reply

        What a time to be alive. I once got roasted in an interview because I said I would use Google if I didn't know something (in this context, the answer to a question that could easily be found in the language and compiler documentation).

        • By column 2026-03-03 10:02

          You are not alone. The silver lining is they show their true colors early.

    • By skeledrew 2026-03-03 9:49

      > not learning anything new

      Huge disagree. Or, more likely, "it depends on how you use it". I've learned a lot since I started using AI to help me with my projects, as I prompt it in such a way that if I'm going about something the "wrong" way, it'll tell me and suggest a better approach. Or just generally help me fill out my knowledge whenever I'm vague in my planning.

    • By gck1 2026-03-03 10:28 | 1 reply

      > But you are forgetting your skills (seen it first hand), and you're not learning anything new.

      This is just false. I may forget how to write code by hand, but I'm playing with things I never imagined I'd have the time and ability to, and getting engineering experience that 15 years of hands-on engineering couldn't give me.

      > Your next interview won't be testing your AI skills.

      Which will be a very good signal to me that it's not a good match. If my next interview is leetcode-style, I will fail catastrophically, but then again, I no longer have any desire to be a code writer - AI does it better than me. I want to be a problem solver.

      • By adithyassekhar 2026-03-03 10:47 | 1 reply

        > getting engineering experience that 15 years of hands-on engineering couldn't give me.

        This is the equivalent of how watching someone climb Mount Everest on a TV show or YouTube makes you feel like you did it too. You never did; your brain just got the feeling that you did, and it'll never motivate you to do it yourself.

        • By gck1 2026-03-03 11:25

          This is only true for fully unsupervised "vibe coding". But you'll find this will not work for anything beyond a basic todo list app.

          You free up the time you'd spend actually writing code, but on the other hand you have to do way more reading, planning, making architectural decisions, etc. This is what it feels like engineering should be.

    • By eggsandbeer 2026-03-03 7:21

      [dead]

    • By mihaaly 2026-03-03 7:58 | 1 reply

      > Professionally you are downgrading

      It is quite the contrary!

      You learn to use a very powerful tool. It is a tool, like a text editor or a compiler.

      But you focus more on the logic and function instead of the syntax details and whims of the computer languages used in concert.

      The analogy from construction is being elevated from bricklayer to engineer. Or using variously shaped shovels and a wheelbarrow versus mechanized tools like excavators and dumpers for earthworks.

      ... of course, for those whose focus is on being a master bricklayer, which is noble (no pun intended, said with an agreeing straight face: bricklaying is a fine skill with beautiful outputs in its area of use), AI is really unnecessary. An existential threat, but unnecessary.

      • By adithyassekhar 2026-03-03 8:12

        I agree with you that syntax details are not important, but they haven't been important for a long time thanks to better editors and linters.

        > But you focus more on the logic and function instead of the syntax details and whims of the computer languages used in concert.

        This is exactly my point. I learned about logical mistakes when my first if/else broke. The only reason you or I can guide these tools toward good logic is that we dealt with bad logic before all this. I use Claude myself a lot because it saves me time. But we're building a culture where no one ever reads the code; instead, we're building black boxes.

        Again, you could see it as the next step in abstraction, but not when everyone's this dependent on a few companies prepared to strip the world of its skills so they can sell them back.

HackerNews