LLMs as the new high level language

2026-02-03 10:00 federicopereiro.com

(originally published here)

[UPDATE 2026-02-12]: an interesting counterargument from the HN thread (thanks to everyone who commented!) is that the analogy breaks if you’re still reading and verifying the code. When you’re writing JS, you don’t look at the output of the interpreter; when you’re programming in C, you don’t look at the executable. So I think that means that the analogy is correct if most coding becomes vibe coding.

Following this hypothesis, what C did to assembler, what Java did to C, what Javascript/Python/Perl did to Java, now LLM agents are doing to all programming languages.

What do I mean by LLM agents? I mean that the main development stack of a human will soon be:

  • Multiple: a number of agents working in parallel.
  • Autonomous: agents that require feedback from the human only every once in a while, but mostly work autonomously.

How can we determine if the hypothesis is true? If a human developer can now build an order of magnitude more (10x) using multiple autonomous agents compared to what the human would be able to build without them, then the hypothesis is true. I’m not sure of it yet (as of January 2026) but I’m seriously considering it.

For many that have been in the software business for a while, the mind reels with objections. Let’s address the easy ones first:

  • 10x lines of code is not building 10x more, it’s just slop: the measure should be done on the actual functional output delivered, not the lines of code. If we go with the hypothesis, the “lines of code” are really the instructions to the LLM.
  • LLMs are only for those who don’t know how to code: while there will be many new programmers thanks to LLMs, that doesn’t mean that experienced programmers won’t benefit from using LLM agents. Evidence shows that many experienced programmers are producing a lot more output thanks to LLMs.
  • LLMs are for those who don’t want to think/work: if you are using LLMs to do more than you did before, you’ll have to think and work more, not less. It’s more demanding to manage a fleet of agents, and you’ll have to design way more (since you’re building x times what you were building before in the same amount of time).
  • LLMs are going to make our coding skills rot: probably. But at work we are not usually concerned about our assembler, or our C chops rotting, if they exist at all. Most of us practice those chops in our free time, because we cannot defend the case that we’d be more productive working in assembler or C at work (for most types of software development).
  • The code that LLMs make is much worse than what I can write: almost certainly; but the same could be said about your assembler or your C code. As long as what the LLM generates is sufficiently efficient, it will run and it will be ready to use. The system will be uglier, but it will still work.
  • Using LLM agents is expensive: if they already give you 50% more productivity, and your salary is an average salary, they are not. And LLMs will only get cheaper. They are expensive only in absolute terms, not in relative terms.
  • I tried using LLM agents one afternoon and they wasted my time: there’s a learning curve involved. It takes a while to get the hang of working with multiple LLM agents. Think of the hours and days you spent fighting the tools and syntax of your programming stack, until you more or less got it.

(None of the above are defensible, I think, though emotionally they are not easy to accept. Many of these are definitely not easy to accept for me)

Now for two objections that go to the crux of the matter:

  • Quality: aren’t LLMs generating code that will soon become a dumpster fire? Are we not building on foundations of sand?
  • Understandability: won’t LLMs generate so much code we can never hope to understand it? Even if the systems work, are we not forever in peril of not controlling them because we don’t understand them?

I would like to use quality and understandability as the goals for any acceptable framework of LLM programming. Economically, only quality is indisputable as a goal. Understandability might be a romantic dream or a good long-term bet (I’m choosing the latter, but you can of course be agnostic).

Now for what’s new: LLMs are far more nondeterministic than previous high-level languages. They can also help you figure things out at the high level (descriptions) in a way that no previous layer could.

Let’s try to find the common elements of what this near future would look like:

  • Documentation: a set of markdown pages that contain the specification of the system: purpose, main entities, endpoints, constraints, core flows, coding standards.
  • Implementation: the codebase, plus all of the data. This is what runs and what holds state. The codebase should be reconstructable from the documentation, and the data should be consistent with its description in the documentation.
  • Dialogs: multiple agents churning away at their tasks. They produce text while they’re thinking through the solution, some of it code: this is the dialog (which is expressible as a markdown page). A human can inspect this stream of text, code changes and commands at any time; a human can also enter the dialog. Some dialogs can be waiting for human input. When an agent completes its work, the dialog is no longer alive, but it is still accessible.
  • Tasks: a dynamic set of discrete pieces of work, each expressed as a markdown page. They should be reconstructable from the documentation plus the existing state of the codebase. Tasks should be nestable, and they have a status (pending, in progress, waiting for human interaction, done).
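
As a concrete sketch, a task page under this scheme might look like the following (the file name, field names and subtasks are hypothetical, invented to match the description above):

```markdown
# Task: add-csv-export

Status: in progress
Parent: reporting-v2        <!-- tasks are nestable -->

## Goal
Let users export any report as CSV, per the "Reports" section
of the documentation.

## Constraints
- Follow the coding standards page.
- No new dependencies without human approval.

## Subtasks
- [x] Endpoint for CSV download
- [ ] Streaming for large reports (waiting for human interaction)
```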

Looking at this, we see two stocks and two flows. The two stocks are the “tions” (documentation and implementation), which are the accretions of the system. And we also see two flows, which are the dialogs and tasks. The dialogs and the tasks build both the documentation and the implementation. It’s also possible for the human to modify the documentation and the implementation directly, but that won’t happen that often, as most of the flow is agentic and the human will spend most of their time interacting with the agents.

How will agents be structured? Since agents can play multiple roles (the underlying models are general purpose), I think we can leave as much freedom as possible here. If any agent can enter any dialog, and any human can enter any dialog, we can let the human experiment with different possibilities:

  • Agents working on tasks independently, start to finish.
  • Managing agents that are in charge of orchestrating what’s next.
  • QA agents to try to break new features.
  • Reviewing agents that take a new unmerged feature and review it without the context of the builder.
  • Merging agents that resolve conflicts.

The important thing is that the human can, either manually or automatically, spin up agents with instructions that can be either one-offs or a chunk of the documentation.

There’s an opportunity for a new type of world wide web – or rather, for making the existing web much more free and web-like, breaking the silos of applications. That opportunity is MCP, a standard for tool calling by LLMs which everyone and their mother is rushing to support; it can be considered a general XMLHttpRequest. This opens the possibility of having your AI agents take any functionality and data that’s siloed in an existing application and put it in a dynamic canvas of your own choosing.
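
Concretely, MCP messages are JSON-RPC 2.0 under the hood; a minimal sketch of the request an agent sends to invoke a tool looks like this (the tool name and arguments are hypothetical, not a real CRM API):

```python
import json

def make_tool_call(request_id, tool_name, arguments):
    """Build an MCP 'tools/call' request; MCP messages are JSON-RPC 2.0."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical tool exposed by a CRM's MCP server.
req = make_tool_call(1, "get_records", {"object": "Opportunity", "limit": 10})
print(json.dumps(req, indent=2))
```

The server answers with a matching JSON-RPC response, whose result the LLM can then weave into whatever page or widget you asked for.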

My original vision for cell was a grid of code and data (the dataspace) that you can fully understand and that is already deployed. This is not enough. This will be just the “grid”. Surrounding the grid will be a set of dynamic pages, where documentation and functionality come together.

Documentation won’t just be documentation: you will be able to embed functionality, either from your own application (which will be supported in the grid) or from external applications. You can have mini dashboards or widgets that you can bring to fullscreen. Or you can navigate to another page. Your cell will be a collection of pages, plus the grid, plus the agents that are working on it. And a lot of it can be accessible from the outside.

This all still requires a server for these reasons:

  • Receive requests while you’re not online.
  • Persist data.
  • Keep the agents working.
  • Many calls cannot be done directly from the browser for security reasons, so they require a server to make the request.

What about quality and understandability? If instead of a big stack we use a good substrate, the LLM output will be much smaller and more understandable. If this is the case, we can vastly increase the quality and performance of the systems we build.

The frontend of the system is now the documentation and the agents; the backend is the stack/substrate.

Open questions:

  • How do we store the documentation and dialog alongside the implementation?
  • How do we use version control systems?

Edit 2026-02-03: I’m getting mindblown by the idea of MCP as XMLHttpRequest, and LLMs as high-level glue. You can just say “get me X data from salesforce”, and the LLM will get it, and it will do a nice little ad-hoc visualization of it wherever you want it, in a doc page, or another page of your application. It’s truly the end of the silo.



Comments

  • By heikkilevanto 2026-02-08 10:33 (6 replies)

    If we consider the prompts and LLM inputs to be the new source code, I want to see some assurance we get the same results every time. A traditional compiler will produce a program that behaves the same way, given the same source and options. Some even go out of their way to guarantee they produce the same binary output, which is a good thing for security and package management. That is why we don't need to store the compiled binaries in the version control system.

    Until LLMs start to get there, we still need to save the source code they produce, and review and verify that it does what it says on the label, and not in a totally stupid way. I think we have a long way to go!

    • By afavour 2026-02-08 14:32 (2 replies)

      > If we consider the prompts and LLM inputs to be the new source code, I want to see some assurance we get the same results every time.

      There’s a related issue that gives me deep concern: if LLMs are the new programming languages we don’t even own the compilers. They can be taken from us at any time.

      New models come out constantly and over time companies will phase out older ones. These newer models will be better, sure, but their outputs will be different. And who knows what edge cases we’ll run into when being forced to upgrade models?

      (and that’s putting aside what an enormous step back it would be to rent a compiler rather than own one for free)

      • By devsda 2026-02-08 18:13 (1 reply)

        > New models come out constantly and over time companies will phase out older ones. These newer models will be better, sure, but their outputs will be different.

        IIUC, same model with same seed and other parameters is not guaranteed to produce the same output.

        If anyone is imagining a future where your "source" git repo is just a bunch of highly detailed prompt files and "compilation" just needs an extra LLM code generator, they are signing up for disappointment.

        • By carlmr 2026-02-08 18:27

          >IIUC, same model with same seed and other parameters is not guaranteed to produce the same output.

          Models are so large that random bit flips make such guarantees impossible with current computing technology:

          https://aclanthology.org/2025.emnlp-main.528.pdf

      • By nerdsniper 2026-02-08 16:55

        Presumably, open models will work almost, but not quite, as well and you can store those on your local drive and spin them up in rented GPUs.

    • By energy123 2026-02-08 11:24 (3 replies)

      Greedy decoding gives you that guarantee (determinism). But I think you'll find it to be unhelpful. The output will still be wrong the same % of the time (slightly more, in fact) in equally inexplicable ways. What you don't like is the black box unverifiable aspect, which is independent of determinism.
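
      A toy sketch of what greedy decoding means (pure Python, not a real decoder): at temperature 0 you take the argmax and the output is fully deterministic, while any nonzero temperature samples from the softmax:

```python
import math
import random

def sample_token(logits, temperature):
    """Pick a token index from raw logits (toy decoder, not a real LLM)."""
    if temperature == 0:
        # Greedy decoding: always the argmax, fully deterministic.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Temperature sampling: softmax over scaled logits, then draw.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]
    return random.choices(range(len(logits)), weights=weights)[0]

logits = [2.0, 1.5, 0.3]
greedy = {sample_token(logits, 0) for _ in range(100)}
sampled = {sample_token(logits, 1.0) for _ in range(100)}
print(greedy)   # always {0}
print(sampled)  # usually several distinct tokens
```

      In real serving stacks even the argmax path can diverge across runs (floating-point reduction order, batching), so the determinism here is only of the toy.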

      • By willj 2026-02-08 14:28

        If you’re using a model from a provider (not one that you’re hosting locally), greedy decoding via temperature = 0 does not guarantee determinism. A temperature of 0 doesn’t result in the same responses every time, in part due to floating-point precision and in part due to lack of batch invariance [1]

        [1] https://thinkingmachines.ai/blog/defeating-nondeterminism-in...

      • By layer8 2026-02-08 12:38

        What people don’t like is that the input-output relation of LLMs is difficult, if not impossible, to reason about. While determinism isn’t the only factor here (you can have a fully deterministic system that is still unpredictable in practical terms), it is still a factor.

      • By trklausss 2026-02-08 13:55

        The question is: if we keep the same context and model, and the same LLM configuration (quantization etc.), does it provide the same output at same prompt?

        If the answer is no, then we cannot be sure to use it as a high-level language. The whole purpose of a language is providing useful, concise constructs to avoid something not being specified (undefined behavior).

        If we can't guarantee that the behavior of the language is going to be the same, it is no better than prompting someone some requirements and not checking what they are doing until the date of delivery.

    • By fpereiro 2026-02-09 8:48

      Mario Zechner has a very interesting article where he deals with this problem (https://mariozechner.at/posts/2025-06-02-prompts-are-code/#t...). He's exploring how structured, sequential prompts can achieve repeatable results from LLMs, which you still have to verify. I'm experimenting with the same, though I'm just getting started. The idea I sense here is that perhaps a much tighter process of guiding the LLM, with current models, can get you repeatable and reliable results. I wonder if this is the way things are headed.

    • By properbrew 2026-02-08 12:48 (2 replies)

      > I want to see some assurance we get the same results every time

      Genuine question, but why not set the temperature to 0? I do this for non-code related inference when I want the same response to a prompt each time.

    • By pjmlp 2026-02-08 10:43 (3 replies)

      Anyone doing benchmarks with managed runtimes, or serverless, knows it isn't quite true.

      Which is exactly one of the examples the AOT-only, no-GC crowd uses for why theirs is better.

      • By zozbot234 2026-02-08 11:16 (1 reply)

        Reproducible builds exist. AOT/JIT and GC are just not very relevant to this issue, not sure why you brought them up.

        • By pjmlp 2026-02-08 11:42

          Because they are dynamic compilers!

      • By dgb23 2026-02-08 13:38 (1 reply)

        But there is functional equivalence. While I don't want to downplay the importance of performance, we're talking about something categorically different when comparing LLMs to compilers.

        • By pjmlp 2026-02-08 17:26

          Not when those LLMs are tied to agents, replacing what would be classical programming.

          Using low code platforms with AI based automations, like most iPaaS are now doing.

          If the agent is able to retrieve the required data from a JSON file, fill an email with the proper subject and body, and send it to another SaaS application, that is one less integration middleware that needed to be written.

          From any practical business point of view, it is an application.

      • By manuelabeledo 2026-02-08 11:04 (2 replies)

        Even those are way more predictable than LLMs, given the same input. But more importantly, LLMs aren’t stateless across executions, which is a huge no-no.

        • By zozbot234 2026-02-08 11:19 (1 reply)

          > But more importantly, LLMs aren’t stateless across executions, which is a huge no-no.

          They are, actually. A "fresh chat" with an LLM is non-deterministic but also stateless. Of course agentic workflows add memory, possibly RAG etc. but that memory is stored somewhere in plain English; you can just go and look at it. It may not be stateless but the state is fully known.

          • By manuelabeledo 2026-02-08 11:31

            Using the managed runtime analogy, what you are saying is that, if I wanted to benchmark LLMs like I would do with runtimes, I would need to take the delta between versions, plus that between whatever memory they may have. I don’t see how that helps with reproducibility.

            Perhaps more importantly, how would I quantify such “memory”? In other words, how could I verify that two memory inputs are the same, and how could I formalize the entirety of such inputs with the same outputs?

        • By pjmlp 2026-02-08 11:43 (2 replies)

          Are you certain you can predict the JIT-generated machine code, given the JVM bytecode?

          Without taking into account anything else that the JIT uses in its decision tree?

          • By manuelabeledo 2026-02-08 12:49 (1 reply)

            For a single execution, to a certain extent, yes.

            But that’s not the point I’m trying to make here. JIT compilers are vastly more predictable than LLMs. I can take any two JVMs from any two vendors, and over several versions and years, I’m confident that they will produce the same outputs given the same inputs, to a certain degree, where the input is not only code but GC, libraries, etc.

            I cannot do the same with two versions of the same LLM offering from a single vendor, that had been released one year apart.

            • By pjmlp 2026-02-08 17:30 (1 reply)

              Good luck mapping OpenJDK with Azul's cloud JIT, in generated machine code.

              • By manuelabeledo 2026-02-08 18:33

                The output being the actual program output, not the byte code. No one is arguing that in the scope of LLMs.

          • By ThunderSizzle 2026-02-08 12:04

            Enough so that I've never had a runtime issue because the compiler did something odd once and correct the next time. At least in C#. If Java is doing that, then stop using it...

            If the compiler had an issue like LLMs do, then half my builds would be broken, running the same source.

    • By aurareturn 2026-02-08 11:37 (2 replies)

      > If we consider the prompts and LLM inputs to be the new source code, I want to see some assurance we get the same results every time.

      Give a spec to a designer or developer. Do you get the same result every time?

      I’m going to guess no. The results can vary wildly depending on the person.

      The code generated by LLMs will still be deterministic. What is different is the tooling the product team uses to create that product.

      At a high level, does using LLMs to do all or most of the coding ultimately help the business?

      • By jug 2026-02-08 11:42 (1 reply)

        This comparison holds up to me only in the long standing debate "LLMs as the new engineer", not "LLMs as a new programming language" (like here).

        I think there are important distinctions there, predictability being one of them.

        • By intrasight 2026-02-08 12:08

          Even as a SSWE I do often wonder if I am but a high-level language.

  • By manuelabeledo 2026-02-08 14:12 (5 replies)

    After re-reading the post once again, because I honestly thought I was missing something obvious that would make the whole thing make sense, I started to wonder if the author actually understands the scope of a computer language. When he says:

    > LLMs are far more nondeterministic than previous higher level languages. They also can help you figure out things at the high level (descriptions) in a way that no previous layer could help you dealing with itself. […] What about quality and understandability? If instead of a big stack, we use a good substrate, the line count of the LLM output will be much less, and more understandable. If this is the case, we can vastly increase the quality and performance of the systems we build.

    How does this even work? There is no universe I can imagine where a natural language can be universal, self descriptive, non ambiguous, and have a smaller footprint than any purpose specific language that came before it.

    • By onlyrealcuzzo 2026-02-08 16:26 (1 reply)

      You're going to be pretty hard pressed to do Rust better than Rust.

      There's minimal opportunity with lifetime annotations, and I'm sure only very small opportunities elsewhere, too.

      The idea of replacing Rust with natural language seems insane. Maybe I'm being naive, but I can't see why or how it could possibly be useful.

      Rust is simply Chinese unless you understand what it's doing. If you translate it to natural language, it's still gibberish, unless you understand what it does and why first. In which case, the syntax is nearly infinitely more expressive than natural language.

      That's literally the point of the language, and it wasn't built by morons!

      • By manuelabeledo 2026-02-08 16:43 (1 reply)

        I believe the author thinks of this problem in terms of “the LLM will figure it out”, i.e. it will be trained on enough code that compiles, that the LLM just needs to put the functional blocks together.

        Which might work to a degree with languages like JavaScript.

        • By onlyrealcuzzo 2026-02-08 18:12

          That point makes no sense.

          If the LLM is not perfect at scale - extraordinarily unlikely that it would be - then it becomes relevant to understand the actual language.

          That's either natural language that's supposed to somehow be debuggable - or it's a language like Rust - which actually is.

    • By fpereiro 2026-02-09 8:48

      @manuelabeledo: during 2025 I've been building a programming substrate called cell (think language + environment) that attempts to be both very compact and very expressive. Its goal is to massively reduce complexity to make general purpose code more understandable (I know this is laughably ambitious and I'm desperately limited in my capabilities of pulling through something like that). But because of the LLM tsunami, I'm reconsidering the role of cell (or any other successful substrate): even if we achieve the goal, how will this interact with a world where people mostly write and validate code through natural language prompts? I never meant to say that natural language would itself be this substrate, or that the combination of LLMs and natural languages could do that: I still see that there will be a programming language behind all of this. Apologies for the confusion.

    • By slfnflctd 2026-02-08 15:17

      To be generous and steelman the author, perhaps what they're saying is that at each layer of abstraction, there may be some new low-hanging fruit.

      Whether this is doable through orchestration or through carefully guided HITL by various specialists in their fields - or maybe not at all! - I suspect will depend on which domain you're operating in.

    • By coldtea 2026-02-08 17:23 (1 reply)

      >After re-reading the post once again, because I honestly thought I was missing something obvious that would make the whole thing make sense, I started to wonder if the author actually understands the scope of a computer language.

      The problem is that you restrict the scope of a computer language to the familiar mechanisms and artifacts (parsers, compilers, formalized syntax, etc), instead of taking it to be "something we instruct the computer with, so that it does what we want".

      >How does this even work? There is no universe I can imagine where a natural language can be universal, self descriptive, non ambiguous, and have a smaller footprint than any purpose specific language that came before it.

      Doesn't matter. Who said it needs to be "universal, self descriptive, non ambiguous, and have a smaller footprint than any purpose specific language that came before it"?

      It's enough that it can be used to instruct computers more succinctly and at a higher level of abstraction, and that a program will come out at the end which is more or less (it doesn't have to be exact) what we wanted.

      • By manuelabeledo 2026-02-08 18:31 (1 reply)

        If you cannot even provide a clear definition of what you want it to be, then this is all science fiction.

        • By coldtea 2026-02-08 19:43 (2 replies)

          Doesn't have to be "a clear definition"; a rough definition within some quite lax boundaries is fine.

          You can just say to Claude, for example, "Make me an app that accepts daily weight measurements and plots them in a graph" and it will make one. Tell it to use that framework or this pattern, and it will do so too. Ask for more features as you go, in similarly vague language. At some point your project is done.

          Even before AI, the vast majority of software was not written with any "clear definition" to begin with: there's some rough architecture and idea, and people code as they go, and often have to clarify or rebuild things to get them as they want, or discover they want something slightly different, or that the initial design had some issues and needs changing.

          • By collingreen 2026-02-08 19:58 (1 reply)

            This is the most handwaving per paragraph I've ever seen.

            I think a fair summarization of your point is "LLM generated programs work well enough often enough to not need more constraints or validation than natural language", whatever that means.

            If you take that as a true thing then sure why would you go deeper (eg, I never look at the compiled bytecode my high level languages produce for this exact reason - I'm extremely confident that translation is right to the point of not thinking about it anymore).

            Most people who have built, maintained, and debugged software aren't ready to accept the premise that all of this is just handled well by LLMs at this point. Many many folks have lots of first hand experience watching it not be true, even when people are confidently claiming otherwise.

            I think if you want to be convincing in this thread you need to go back one step and explain why the LLM code is "good enough" and how you determined that. Otherwise it's just two sides talking totally past each other.

            • By coldtea 2026-02-08 20:31 (2 replies)

              >This is the most handwaving per paragraph I've ever seen.

              Yes: "LLM generated programs work well enough often enough to not need more constraints or validation than natural language" is a fair summarization of my point.

              Not sure of the purpose of the "whatever that means" that you added. It's clear what it means. Though, casual language seems to be a problem for you. Do you only ever discuss in formally verified proofs? If so, that's a you problem, not an us or LLM problem :)

              >Most people who have built, maintained, and debugged software aren't ready to accept the premise that all of this is just handled well by LLMs at this point.

              I don't know who those "most people are". Most developers already hand those tasks to LLMs, and more will in the future, as it's a market/job pressure.

              (I'm not saying it's good or good enough as a quality assessment. In fact, I don't particularly like it. But I am saying it's "good enough" as in, people will deem it good enough to be shipped).

              • By manuelabeledo 2026-02-08 22:41

                > I don't know who those "most people are". Most developers already hand those tasks to LLMs, and more will in the future, as it's a market/job pressure.

                This is definitely not true. Outside of the US, very few devs can afford to pay for the compute and/or services. And in a couple of years, I believe, devs in the US will be in for a rude awakening when the current prices skyrocket.

              • By collingreen 2026-02-09 8:15 (1 reply)

                The "whatever that means" isn't a judgement jab at your point, merely acknowledging the hand waving of my own with "good enough".

                I hope this comment thread helps with your cheeky jab that I might have a problem understanding or using casual language.

                I'm not sure if it's moving the goalpost or not to back away from a strong claim that LLMs are at the "good enough" (whatever that means!) level now and instead fall back to "some devs will just ship it and therefore that's good enough, by definition".

                Regardless, I think we agree that, if LLMs are "good enough" in this way then we can think a lot less about code and logic and instead focus on prompts and feature requests.

                I just don't think we agree on what "good enough" is, if current LLMs produce it with less effort than alternatives, and if most devs already believe the LLM generated code is good enough for that.

                I use LLMs for a lot of dev work but I haven't personally seen these things one- or even many- shot things to the level I'd feel comfortable being on call for.

                • By coldtea 2026-02-09 8:21

                  >I just don't think we agree on what "good enough" is, if current LLMs produce it with less effort than alternatives, and if most devs already believe the LLM generated code is good enough for that.

                  Don't need to consider what they think, one can just see their "revealed preferences", what they actually do. Which for the most part is adopting agents.

                  >I use LLMs for a lot of dev work but I haven't personally seen these things one- or even many- shot things to the level I'd feel comfortable being on call for.

                  That's true for many devs one might have working for their team as well. Or even one's self. So we review, we add tests, and so on. So we do that when the programming language is a "real" programming language too, doesn't have to change when it is natural language to an agent. What I'm getting at, is, that this is not a show stopper to the point of TFA.

          • By manuelabeledo 2026-02-08 22:27

            You do need a clear definition of what this “LLM as a high level language” is supposed to be. Otherwise it’s all just wishful thinking.

            “It’s good enough” so it generates apps that could otherwise be boilerplate. OK, I guess? But that’s not what OP was talking about in their post.

    • By jmalicki 2026-02-08 23:15

      In the same way in Rust you can download a package with Cargo and use it without reimplementing it, an LLM can download and explore all written human knowledge to produce a solution.

      Or how you can efficiently loop over all combinations of all inputs in a short computer program; it will just take a while!

      If you have a programming language where finding an efficient algorithm is a compiler optimization, then your programs can get a lot shorter.

  • By tomaytotomato 2026-02-07 22:59 (4 replies)

    I would like to hijack the "high level language" term to mean dopamine hits from using an LLM.

    "Generate a Frontend End for me now please so I don't need to think"

    LLM starts outputting tokens

    Dopamine hit to the brain as I get my reward without having to run npm and figure out what packages to use

    Then out of a shadowy alleyway a man in a trenchcoat approaches

    "Pssssttt, all the suckers are using that tool, come try some Opus 4.6"

    "How much?"

    "Oh that'll be $200.... and your muscle memory for running maven commands"

    "Shut up and take my money"

    ----- 5 months later, washed up and disconnected from cloud LLMs ------

    "Anyone got any spare tokens I could use?"

    • By cyberax 2026-02-07 23:12 (1 reply)

      > and your muscle memory for running maven commands

      Here's $1000. Please do that. Don't bother with the LLM.

    • By allovertheworld 2026-02-08 6:49

      aka a mind virus

    • By jatora 2026-02-08 3:11

      If you're disconnected from cloud LLM's you've got bigger problems than coding can solve lol

    • By imiric 2026-02-08 8:42

      I can't tell if your general premise is serious or not, but in case it is: I get zero dopamine hits from using these tools.

      My dopamine rush comes from solving a problem, learning something new, producing a particularly elegant and performant piece of code, etc. There's an aspect of hubris involved, to be sure.

      Using a tool to produce the end result gives me no such satisfaction. It's akin to outsourcing my work to someone who can do it faster than me. If anything, I get cortisol hits when the tool doesn't follow my directions and produces garbage output, which I have to troubleshoot and fix myself.

HackerNews