JSLinux Now Supports x86_64

2026-03-09 16:43 (bellard.org)

Run Linux or other Operating Systems in your browser! The following emulated systems are available:


Comments

  • By brucehoult 2026-03-10 0:21

    Out of interest I tried running my Primes benchmark [1] on the x86_64 and x86 Alpine images and the riscv64 Buildroot image, all in Chrome on an M1 Mac Mini. All timings are from a 2nd run, so that all needed code is already cached locally.

    x86_64:

        localhost:~# time gcc -O primes.c -o primes
        real    0m 3.18s
        user    0m 1.30s
        sys     0m 1.47s
        localhost:~# time ./primes
        Starting run
        3713160 primes found in 456995 ms
        245 bytes of code in countPrimes()
        real    7m 37.97s
        user    7m 36.98s
        sys     0m 0.00s
        localhost:~# uname -a
        Linux localhost 6.19.3 #17 PREEMPT_DYNAMIC Mon Mar  9 17:12:35 CET 2026 x86_64 Linux
    
    x86 (i.e. 32 bit):

        localhost:~# time gcc -O primes.c -o primes
        real    0m 2.08s
        user    0m 1.43s
        sys     0m 0.64s
        localhost:~# time ./primes
        Starting run
        3713160 primes found in 348424 ms
        301 bytes of code in countPrimes()
        real    5m 48.46s
        user    5m 37.55s
        sys     0m 10.86s
        localhost:~# uname -a
        Linux localhost 4.12.0-rc6-g48ec1f0-dirty #21 Fri Aug 4 21:02:28 CEST 2017 i586 Linux
    
    
    riscv64:

        [root@localhost ~]# time gcc -O primes.c -o primes
        real    0m 2.08s
        user    0m 1.13s
        sys     0m 0.93s
        [root@localhost ~]# time ./primes
        Starting run
        3713160 primes found in 180893 ms
        216 bytes of code in countPrimes()
        real    3m 0.90s
        user    3m 0.89s
        sys     0m 0.00s
        [root@localhost ~]# uname -a
        Linux localhost 4.15.0-00049-ga3b1e7a-dirty #11 Thu Nov 8 20:30:26 CET 2018 riscv64 GNU/Linux
    
    
    Conclusion: as also seen in QEMU (likewise started by Bellard!), RISC-V is a *lot* easier to emulate than x86. If you're building code specifically to run in emulation, use RISC-V: builds faster, smaller code, runs faster.

    Note: quite different gcc versions, with x86_64 being 15.2.0, x86 9.3.0, and riscv64 7.3.0.

    [1] http://hoult.org/primes.txt

    • By dmitrygr 2026-03-10 0:29

      MIPS (the arch RISC-V is mostly a copy of) is even easier to emulate: unlike RV, it does not scatter immediate bits all over the instruction word, which makes it easier for an emulator to extract immediates. If you need emulated perf, MIPS is the easiest of all.

      • By brucehoult 2026-03-100:432 reply

        That's a very small effect in the overall decoding of an instruction even in a pure interpretive emulator, and undetectable in a JIT.

        Also MIPS code is much larger.

        • By thesz 2026-03-10 12:39

          MIPS code is not much larger.

          There are two interesting ISA differences between MIPS and RISC-V: MIPS does not have branch-on-condition, only branch on zero/non-zero, and MIPS has 16-bit immediates with appropriate sign extension (all zeroes for ORI, all ones for ANDI). The first difference makes MIPS programs about 10% larger; the second makes them smaller (RISC-V immediates are effectively 11.5 bits due to mandatory sign extension, while 13 bits are required to cover 95% of immediates in a MIPS-like scheme), by a percent or so, I think.

          • By dmitrygr 2026-03-11 2:38

            ANDI is not 1-extended (though that would be nice); it is 0-extended. MIPS has only two extension modes for immediates: sign-extended and zero-extended. All logical ops are 0-extended, all arith ops are sign-extended.
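
            To make the decode-cost point concrete, here is a sketch in C (my illustration, not code from any particular emulator) of recovering immediates on each ISA. A RISC-V B-type branch offset has to be reassembled from four separate bit fields, while a MIPS I-type immediate is one contiguous 16-bit field, extended according to the opcode:

```c
#include <stdint.h>

/* RISC-V B-type branch offset: imm[12|10:5] lives in bits 31:25,
   imm[4:1|11] in bits 11:7; the lowest bit is implicitly zero. */
static int32_t rv_branch_imm(uint32_t insn) {
    return ((int32_t)(insn & 0x80000000) >> 19)  /* bit 31     -> imm[12], sign-extended */
         | ((insn & 0x00000080) << 4)            /* bit 7      -> imm[11] */
         | ((insn >> 20) & 0x7e0)                /* bits 30:25 -> imm[10:5] */
         | ((insn >> 7) & 0x1e);                 /* bits 11:8  -> imm[4:1] */
}

/* MIPS I-type immediate: one contiguous field in bits 15:0,
   sign-extended for arithmetic ops, zero-extended for logical ops. */
static int32_t  mips_simm(uint32_t insn) { return (int16_t)insn; }
static uint32_t mips_zimm(uint32_t insn) { return insn & 0xffff; }
```

            Whether that extra shuffling matters is exactly the dispute here: it costs a handful of shifts and masks per decoded instruction, which can show up in a pure interpreter but mostly disappears in a JIT.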

            • By thesz 2026-03-11 22:37

              Thanks. My memory failed me; I implemented MIPS 18 years ago.

        • By dmitrygr 2026-03-10 12:05

          Entirely disagreed. In a simple step-by-step emulator it can be as much as 30% of the time spent. In a JIT it is indeed less of an effect.

      • By anthk 2026-03-10 9:54

    • By saagarjha 2026-03-10 11:30

      > If you're building code specifically to run in emulation, use RISC-V: builds faster, smaller code, runs faster.

      I don't really think this bears out in practice. RISC-V is easy to emulate, but that does not make it fast to emulate. Emulation performance is largely determined by other factors in which RISC-V has no unique advantage.

    • By camel-cdr 2026-03-10 15:28

      x86 is a lot easier to JIT to Arm or RISC-V though, because it has fewer registers.

    • By vexnull 2026-03-10 0:59

      Interesting to see the gcc version gap between the targets. The x86_64 image shipping gcc 15.2.0 vs 7.3.0 on riscv64 makes the performance comparison less apples-to-apples than it looks - newer gcc versions have significantly better optimization passes, especially for register allocation.

      • By brucehoult 2026-03-10 4:03

        The RISC-V one has just never been touched since it was created in 2018.

        > newer gcc versions have significantly better optimization passes

        So what you're saying is that with a modern compiler RISC-V would win by even more?

        TBH I doubt much has changed with register allocation on register-rich RISC ISAs since 2018. On i386, yeah, quite possible.

    • By unit149 2026-03-10 6:54

      [dead]

  • By maxloh 2026-03-09 18:06

    Unfortunately, he didn't attach the source code for the 64-bit x86 emulation layer, or the config used to compile the hosted image.

    For a more open-source version, check out container2wasm (which supports x86_64, riscv64, and AArch64 architectures): https://github.com/container2wasm/container2wasm

    • By zamadatix 2026-03-09 18:19

      https://github.com/copy/v86 might be a more 1:1 fully open sourced alternative.

      • By maxloh 2026-03-09 18:25

        Not really. x86_64 is not supported yet: https://github.com/copy/v86/issues/133

        • By zamadatix 2026-03-09 21:00

          Sure, and there are probably some other things lacking, but JSLinux also supports a lot more than a CLI Linux userspace on x86-64. E.g. compare the lack of a graphical interface: https://github.com/container2wasm/container2wasm/issues/196

          It looks like container2wasm uses a forked version of Bochs to get the x86-64 kernel emulation to work. If one pulled that out separately and patched it a bit more to have the remaining feature support it'd probably be the closest overall. Of course one could say the same about patching anything with enough enthusiasm :).

    • By zoobab 2026-03-10 8:48

      "he didn't attach the source code for the 64-bit x86 emulation layer"

      It's not open source? If that's the case, it should be in his FAQ.

  • By simonw 2026-03-09 19:27

    The thing I most want to use this (or some other WASM Linux engine) for is running a coding agent against a virtual operating system directly in my browser.

    Claude Code / Codex CLI / etc are all great because they know how to drive Bash and other Linux tools.

    The browser is probably the best sandbox we have. Being able to run an agent loop against a WebAssembly Linux would be a very cool trick.

    I had a play with v86 a few months ago but didn't quite get to the point where I hooked up the agent to it - here's my WIP: https://tools.simonwillison.net/v86 - it has a text input you can use to send commands to the Linux machine, which is pretty much what you'd need to wire in an agent too.

    In that demo try running "cat test.lua" and then "lua test.lua".

    • By the_mitsuhiko 2026-03-09 20:42

      > The thing I most want to use this (or some other WASM Linux engine) for is running a coding agent against a virtual operating system directly in my browser.

      That exists: https://github.com/container2wasm/container2wasm

      Unfortunately I found the performance to be enough of an issue that I did not look much further into it.

      • By stingraycharles 2026-03-10 1:37

        Did anyone expect anything different though, when running a full-blown OS in JavaScript?

        • By pancsta 2026-03-10 9:07

          WASM != JS (fortunately)

    • By d_philla 2026-03-09 20:53

      Check out Jeff Lindsay's Apptron (https://github.com/tractordev/apptron), comes very close to this, and is some great tech all on its own.

      • By progrium 2026-03-09 21:31

        It's getting there. Among other things, it's probably the quickest way to author a Linux environment to embed on the web: https://www.youtube.com/watch?v=aGOHvWArOOE

        Apptron uses v86 because it's fast. Would love for somebody to add 64-bit support to v86. However, Apptron is not tied to v86. We could add Bochs like c2w, or even JSLinux, for 64-bit; I just don't think it would be fast enough to be useful for most.

        Apptron is built on Wanix, which is sort of like a Plan9-inspired ... micro hypervisor? Looking forward to a future where it ties different environments/OS's together. https://www.youtube.com/watch?v=kGBeT8lwbo0

    • By apignotti 2026-03-09 20:27

      We are working on exactly this: https://browserpod.io

      For a full-stack demo see: https://vitedemo.browserpod.io/

      To get an idea of our previous work: https://webvm.io

      • By otterley 2026-03-10 2:10

        How’s performance relative to bare metal or hardware virtualization?

        • By johndough 2026-03-10 8:16

          I ran two experiments:

          ~20x slower for a naive recursive Fibonacci implementation in Python (1300 ms for fib(30) in this VM vs 65ms on bare metal. For comparison, CPython directly compiled to WASM without VM overhead does it in 140ms.)

          ~2500x slower for 1024x1024 matrix multiplication with NumPy (0.25 GFLOPS in VM vs 575 GFLOPS on bare metal).

          • By apignotti 2026-03-10 8:21

            This is not correct. You are using WebVM here, not BrowserPod.

            WebVM is based on x86 emulation and JIT compilation, which at this time lowers vector instructions as scalar. This explains the slowdowns you observe. WebVM is still much faster than v86 in most cases.

            BrowserPod is based on a pure WebAssembly kernel and WebAssembly payload. Performance is close to native speed.

            • By johndough 2026-03-10 9:44

              I see. I thought they were the same and https://vitedemo.browserpod.io/ failed to load, so I was not able to test there.

              • By apignotti 2026-03-10 9:55

                Demo works as expected for me, please share information if possible or join our Discord for further help: https://discord.leaningtech.com

                • By johndough 2026-03-10 10:34

                  I dug a bit deeper. The cause was that navigator.storage.getDirectory() is not available in Firefox private browsing mode.

                  The performance is pretty amazing. fib(35) runs in 60ms, compared to 65ms in NodeJS on Desktop.

                  But I can't find a shell. Is there only support for NodeJS at the moment?

                  • By apignotti 2026-03-10 11:26

                    Only node is supported as of version 1.1, but the next version is fully focused on command line tooling (git, bash, ssh, grep, ...).

                    See the launch blog post for our full timeline: https://labs.leaningtech.com/blog/browserpod-10

                    Also, could I ask you to quickly edit your previous comment to clarify you were benchmarking against the older project?

                    • By johndough 2026-03-10 11:40

                      Unfortunately, that comment cannot be edited anymore. Maybe @dang can change it or remove the comment chain. I am fine with both.

    • By andai 2026-03-10 2:56

      I run agents as a separate Linux user. So they can blow up their own home directory, but not mine. I think that's what most people are actually trying to solve with sandboxing.

      (I assume this works on Macs too, both being Unixes, roughly speaking :)

    • By johnhenry 2026-03-10 1:25

      Are you describing bolt.new? (Unfortunately, it looks like their open source project is lagging behind https://github.com/stackblitz-labs/bolt.diy)

    • By zitterbewegung 2026-03-10 0:09

      While this may be a better sandbox, actually having a separate computer dedicated to the task seems like a better solution still, and you will get better performance.

      Besides, prompt injection and simpler exploits should be addressed before building a virtual computer in a browser, and if you are simulating a whole computer you take a huge performance hit as another trade-off.

      On the other hand using the browser sandbox that also offers a UI / UX that the foundation models have in their apps would ease their own development time and be an easy win for them.

    • By repstosb 2026-03-10 8:10

      > The thing I most want to use this (or some other WASM Linux engine) for is running a coding agent against a virtual operating system directly in my browser.

      Well, there it is, the dumbest thing I'll read on the internet all week.

      Most of the engineering in Linux revolves around efficiently managing hardware interfaces to build up higher-level primitives, upon which your browser builds even higher-level primitives, that you want to use to simulate an x86 and attached devices, so you can start the process again? Somewhere (everywhere), hardware engineers are weeping. I'll bet you can't name a single advantage such a system would have over cloud hosting or a local Docker instance.

      Even worse, you want this so your cloud-hosted imaginary friend can boil a medium-sized pond while taking the joyful bits of software development away from you, all for the enrichment of some of the most ethically-challenged members of the human race, and the fawning investors who keep tossing other people's capital at them? Our species has perhaps jumped the shark.

      • By thepasch 2026-03-10 14:14

        > while taking the joyful bits of software development away from you

        Quick question: by "joyful bits of software development," do you mean the bit where you design robust architectures, services, and their communication/data concepts to solve specific problems, or the part where you have to assault a keyboard for extended periods of time _after_ all that interesting work so that it all actually does anything?

        Because I sure know which of these has been "taken from me," and it's certainly not the joyful one.

        • By repstosb 2026-03-12 18:57

          I guess I enjoy solving problems, and recognize that the devil is always in the details, so I don't get much satisfaction until I see the whole stack working in concert. I never had much esteem for "architects" who sketch some blobs on the whiteboard and then disappear. I certainly wouldn't want to be "that guy" for anyone else, and I'm not even sure I could do it to an LLM.

      • By simonw 2026-03-10 12:52

        > Well, there it is, the dumbest thing I'll read on the internet all week.

        Rude.

        In case you're open to learning, here's why I think this is useful.

        The big lesson we've learned from Claude Code, Codex CLI et al over the past twelve months is that the most useful tool you can provide to an LLM is Bash.

        Last year there was enormous buzz around MCP - Model Context Protocol. The idea was to provide a standard for wiring tools into LLMs, then thousands of such tools could bloom.

        Claude Code demonstrated that a single tool - Bash - is actually much more interesting than dozens of specialized tools.

        Want to edit files without rewriting the whole thing every time? Tell the agent to use sed or perl -e or python -c.

        Look at the whole Skills idea. The way Skills work is you tell the LLM "if you need to create an Excel spreadsheet, go read this markdown file first and it will tell you how to run some extra scripts for Excel generation in the same folder". Example here: https://github.com/anthropics/skills/tree/main/skills/xlsx

        That only works if you have a filesystem and Bash style tools for navigating it and reading and executing the files.

        This is why I want Linux in WebAssembly. I'd like to be able to build LLM systems that can edit files, execute skills and generally do useful things without needing an entire locked down VM in cloud hosting somewhere just to run that application.

        Here's an alternative swipe at this problem: Vercel have been reimplementing Bash and dozens of other common Unix tools in TypeScript purely to have an environment agents know how to use: https://github.com/vercel-labs/just-bash

        I'd rather run a 10MB WASM bundle with a full existing Linux build in it than reimplement it all in TypeScript, personally.

        • By repstosb 2026-03-10 22:39

          I agree, bash, sed, etc. are great, but a VM running inside a browser seems like the least efficient way to access them. Even if you're stuck on Windows, Cygwin has been a thing for 30 years now, and WSL for ten or so. There should be plenty of ways to set up a sandbox without having to simulate an entire machine.

          It sounds like what you're really trying to recreate is the Software Tools movement from 50 years ago, where there was a push to port the UNIX/BTL utilities to the widest possible variety of systems to establish a common programming and data manipulation environment. It was arguably successful in getting good ports available just about anywhere, evolving into GNU, etc., but it never really reached its apotheosis. That style of clear, easy-to-read-and-write software was still largely killed off by a few big industry players pushing a narrative that "enterprise" has to mean relational databases and distributed objects. It would be FASCINATING if AI coding agents are the force that brings it back.

          • By simonw 2026-03-10 23:23

            This isn't meant to be a daily driver. I'd like the option to build systems that occasionally run filesystem agent loops on an ad-hoc basis, for any user. A browser is a really good platform for that.

            • By repstosb 2026-03-12 18:54

              So are Cygwin and WSL, though, for those who don't already have the luxury of being on Linux or UNIX (incl. MacOS). I'm sure there are uses for running full-system emulators inside a browser, but access to bash and sed and gawk doesn't seem like one of them. Seriously, if that's the best way to get access to good text manipulation tools, why aren't you ditching your entire OS?

              • By simonw 2026-03-12 19:26

                Because bash and sed and suchlike turn out to be the most useful tools for unlocking the abilities of AI agents to do interesting things - more so than previous attempts like MCP.

        • By lioeters 2026-03-10 14:30

          > 10MB WASM bundle with a full existing Linux build

          We'll get there I'm sure of it. In case you hadn't seen: https://github.com/edubart/webcm

          > Linux RISC-V virtual machine, powered by the Cartesi Machine emulator, running in the browser via WebAssembly

          > a single 32MiB WebAssembly file containing the emulator, the kernel and Alpine Linux operating system. Networking supports HTTP/HTTPS requests, but is subject to CORS restrictions

      • By yjftsjthsd-h 2026-03-10 16:50

        > I'll bet you can't name a single advantage such a system would have over cloud hosting or a local Docker instance.

        Cheaper than renting a server, more isolated than a container.

        • By repstosb 2026-03-10 22:26

          But Docker is free (unless you're a fairly large business, in which case containerd is still free, and you can either pay for the front-end license or figure out how to set up one of the free alternatives), and from what perspective are the isolations available for the containerd process inferior to those available for your browser process? The former was at least designed from the ground up with security, auditing, quotas etc. in mind, and offers better per-container granular control than your browser offers per-tab.

          • By yjftsjthsd-h 2026-03-11 0:32

            I would argue the exact opposite: Linux is great, but it wasn't really designed with a focus on containing hostile software, and while containers have come to be a decent security barrier, they're still one kernel bug away from compromise. On the other hand, the browser is very accustomed to being the most exposed security-sensitive software on a machine, and modern browsers and wasm in particular are designed against that threat. Heck, wasm is so good for security that Mozilla started compiling components to wasm and then back into native code to get memory safety ( https://hacks.mozilla.org/2020/02/securing-firefox-with-weba... ).

    • By ZeWaka 2026-03-10 2:41

      It's relatively easy to spin up a busybox WASM v86 solution

    • By kantord 2026-03-09 22:37

      This is not the technical solution you want, but I think it provides the result that you want: https://github.com/devcontainers

      tldr; devcontainers let you completely containerize your development environment. You can run them on Linux natively, or you can run them on rented computers (there are some providers, such as GitHub Codespaces) or you can also run them in a VM (which is what you will be stuck with on a Mac anyways - but reportedly performance is still great).

      All CLI dev tools (including things like Neovim) work out of the box, but also many/most GUI IDEs support working with devcontainers (in this case, the GUI is usually not containerized, or at least does not live in the same container. Although on Linux you can do that also with Flatpak. And for instance GitHub Codespaces runs a VsCode fully in the browser for you which is another way to sandbox it on both ends).

      • By stavros 2026-03-09 22:52

        This is interesting (and I've seen it mentioned in some editors), but how do I use it? It would be great if it had bubblewrap support, so I don't have to use Docker.

        Do you know if there's a cli or something that would make this easier? The GitHub org seems to be more focused on the spec.

    • By jraph 2026-03-09 20:24

      Simon, this HN post didn't need to be about Gen AI.

      This thing is really inescapable these days.

      • By dang 2026-03-10 18:13

        It's normal for HN to be preoccupied with the major technical trend of the moment, and this is unquestionably the biggest technical trend in many years.

        People can argue about where to insert it in the list, but it is certainly among the top 5 of recent decades (smartphones, the web, PCs, etc.). That's why it's inescapable.

        Your complaint isn't really about simonw's comment, but rather the fact that it was heavily upvoted - in other words, you were dissenting from the community reaction to the comment. That's understandable; in fact it's a fundamental problem with forums and upvoting systems: the same few massive topics suck in all the smaller ones until we get one big ball of topic mud: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que....

      • By simonw 2026-03-09 20:28

        Parallel thread: https://news.ycombinator.com/item?id=47311484#47312829 - "I've always been fascinated by this, but I have never known what it would be useful for."

        I should have replied there instead, my mistake.

        • By stavros 2026-03-09 23:03

          I don't know man, I didn't see anyone say "this post didn't need to be about <random topic>"; HN has just become allergic to LLMs lately.

          I'm excited about them, and I think discussions of how to combine two exciting technologies are exactly what I'd like to see here.

          • By bakugo 2026-03-09 23:33

            Has there ever been any other topic that was not only the subject of the majority of submissions, but also had a subset of users repeatedly butting into completely unrelated discussions to go "b-but what about <thing>? we need to talk about <thing> here too! how can I relate this to <thing>? look at my <thing> product!"?

            You can't just roll into a random post to tell people about your revolutionary new AI agent for the 50th time this week and expect them not to be at least mildly annoyed.

            • By stavros 2026-03-09 23:34

              I'm with you, but he wasn't telling us about his agent, he was saying "this is a cool technology and I've been wanting to use it to make a thing". The thing just happened to be LLM-adjacent.

              • By bakugo 2026-03-09 23:50

                Almost all of his comments "just happen" to be LLM-adjacent. At some point it stops "just happening" and it becomes clear that certain people (or their AI bots) are frequenting discussion spaces for the sole purpose of seeking out opportunities to bring up AI and self-promote.

                • By stavros 2026-03-09 23:52

                  Simon has been here since way before LLMs were a thing, and it's fairly obvious (to me, at least) that he's genuinely excited about LLMs, he's not just spamming sales or anything.

                • By yokoprime 2026-03-10 0:17

                  You are not reading his material, I suppose? It’s really one of the better sources for informed takes on LLMs.

                  • By bakugo 2026-03-10 0:45

                    I just went and read one of his recent posts at: https://simonwillison.net/2026/Mar/5/chardet/

                    The entire thing is just quotes and a retelling of events. The closest thing to a "take" I could find is this:

                    > I have no idea how this one is going to play out. I’m personally leaning towards the idea that the rewrite is legitimate, but the arguments on both sides of this are entirely credible.

                    Which effectively says nothing. It doesn't add anything to the discussion around the topic, informed or not, and the post doesn't seem to serve any purpose beyond existing as an excuse to be linked to and siphon attention away from the original discussion (I wonder if the sponsor banner at the top of the blog could have something to do with that...?)

                    This seems to be a pattern, at least in recent times. Here's another egregious example: https://simonwillison.net/2026/Feb/21/claws/

                    Literally just a quote from his fellow member of the "never stops talking about AI" club, Karpathy. No substance, no elaboration, just something someone else said or did pasted on his blog followed by a short agreement. Again, doesn't add anything or serve any real purpose, but was for some reason submitted to HN[1], and I may be misremembering but I believe it had more upvotes/comments than the original[2] at one point.

                    [1] https://news.ycombinator.com/item?id=47099160

                    [2] https://news.ycombinator.com/item?id=47096253

                    • By simonw 2026-03-10 4:32

                      I think my coverage of the Mark Pilgrim situation added value in that most people probably aren't aware that Mark Pilgrim removed himself from internet life in 2011, which is relevant to the chardet story.

                      That second Karpathy example is from my link blog. Here's my post describing how I try to add something new when I write about things on my link blog: https://simonwillison.net/2024/Dec/22/link-blog/

                      In the case of that Karpathy post I was amplifying the idea that "Claw" is now the generic name for that class of software, which is notable.

          • By dang 2026-03-10 18:20

            > HN has just become allergic to LLMs lately.

            It's very much a bimodal distribution: an enthusiast subset and an allergic subset. It's impossible to satisfy both, but that's the dynamic of HN anyhow: guaranteed to dissatisfy everybody! It's a strange game; the only way to win is to complain.

            • By stavros 2026-03-10 18:34

              Yeah, but I don't know, a bit more intellectual curiosity would be good. Ah well, what can you do.

          • By purerandomness 2026-03-09 23:38

            You haven't been around here in the Blockchain/NFT/Smart Contract dark ages, have you?

            • By stavros 2026-03-09 23:39

              Naw man I just signed up.

              • By yokoprime 2026-03-09 23:59

                I chuckled. Everything on earth is recent if you look at it from a cosmic timeframe I guess

                • By stavros 2026-03-10 0:10

                  To be fair, it really was annoying when everything was blockchain.

                  • By girvo 2026-03-10 6:40

                    On the other hand man was it easy to make money at the time. I guess that’s probably true now for those in the AI space too

                  • By Towaway69 2026-03-10 7:08

                    Aren't there blockchain agents? Surely there must be agents running on the blockchain as smart contracts.

                • By Towaway69 2026-03-10 7:07

                  I wonder in what timeframe the cosmic timeframe is recent.

                  It's turtles all the way down ....

                  ;)

            • By fsloth 2026-03-10 7:41

              TBH I’ve been here a while and never quite saw the point of the above, but I do feel LLMs are a new, valuable affordance in computer use.

              I mean I don’t have to remember the horrible git command line anymore which already improves my experience as a dev 50%.

              It’s not all hype BS this time.

              • By latexr 2026-03-10 11:14

                > I mean I don’t have to remember the horrible git command line anymore

                Every time I see a comment like this, I have to wonder what the heck other devs were doing. Don’t you know there were shell aliases, and snippet managers, and a ton of other tools already? I never had to commit special commands to memory, and I could always reference them faster than it takes to query any LLM.

                • By fsloth 2026-03-10 16:42

                  You do realize it does not help _me_ at all if _you_ have found your perfect custom setup.

                  Because it’s custom there is no standard curriculum you could point me to etc.

                  So it’s great you’ve found a setup that works for you, but I hope you realize it’s silly to become indignant that I don’t share it.

                  • By latexr 2026-03-10 18:19

                    The point I’m making is that there are tons of solutions: deterministic, fast, low-energy, customisable. Which is why I said “I have to wonder what the heck other devs were doing”. As in, have you never looked for a solution to your frustration? It’s hard to believe there was nothing out there that would have improved your Git command-line experience. Like, say, one of the myriad GUI tools that exist.

                    > Because it’s custom there is no standard curriculum you could point me to etc.

                    Not true. There are tons of resources out there not only explaining the solutions but even how different people use them and why.

                    If I sat with you for ten minutes and you explained the exact difficulties you have, I’m sure I could have suggested something.

                    • By fsloth 2026-03-10 18:33

                      I use a git gui :)

                      So the only time I need terminal, it’s for something non-obvious.

                      ”There are tons of resources”

                      This is not a standard curriculum as such though.

                      I’ve tried to come to terms with posix for 25 years and am so happy I don’t need to anymore. That’s just me!

        • By darig 2026-03-09 22:38

          [dead]

      • By yokoprime 2026-03-09 23:56

        What topics are allowed in your opinion? I very much enjoyed Simon’s comment as it is a use case I also was thinking of.

      • By brumar 2026-03-10 4:23

        Why not let upvotes do their thing? I enjoyed this comment.

      • By grimgrin 2026-03-09 23:57

        A bit cute that you interacted with the one AI thread. There are other threads!

    • By bakugo 2026-03-09 22:51

      [flagged]

      • By dang 2026-03-10 18:10

        Please don't cross into personal attack on this site. We ban accounts that do that, and you've unfortunately done it repeatedly in this thread. Current comment was the worst case of this by far, but https://news.ycombinator.com/item?id=47317411, for example, is also on the wrong side of the line.

        https://news.ycombinator.com/newsguidelines.html

      • By iamjackg 2026-03-10 0:06

        Nobody is promoting a product. Simon is just sharing an experiment he attempted. No products being sold here.

        • By pelcg 2026-03-10 0:28

          Maybe not, but in the past some here have seen the blog itself as the product being promoted here.

          Even in this thread alone (https://news.ycombinator.com/item?id=47314929), some commenters are clearly annoyed with the way AI is being shoved into every place where they do not want it.

          I don't care, but I can see why many here are getting tired of it.

HackerNews