A proposal to restrict sites from accessing a user's local network

2025-06-04 18:15 · github.com

A proposal to restrict sites from accessing a user's local network without permission - explainers-by-googlers/local-network-access

This proposal is an early design sketch by the Chrome Secure Web and Network team to describe the problem below and solicit feedback on the proposed solution. It has not been approved to ship in Chrome.

  • Chrome Secure Web and Network team

Currently public websites can probe a user's local network, perform CSRF attacks against vulnerable local devices, and generally abuse the user's browser as a "confused deputy" that has access inside the user's local network or software on their local machine. For example, if you visit evil.com it can use your browser as a springboard to attack your printer (given an HTTP accessible printer exploit).

Local Network Access aims to prevent these undesired requests to insecure devices on the local network. This is achieved by deprecating direct access to private IP addresses from public websites, and instead requiring that the user grants permission to the initiating website to make connections to their local network.

Note: This proposal builds on top of Chrome's previously paused Private Network Access (PNA) work, but differs by gating access on a permission rather than via preflight requests. This increases the level of user control (at the expense of new permissions that have to be explained to the user) but removes the explicit "device opt-in" that the preflight design achieved. We believe this simpler design will be easier to ship, in order to mitigate the real risks of local network access today. Unlike the previous Private Network Access proposal, which required changes to devices on local networks, this approach should only require changes to sites that need to access the local network. Sites are much easier to update than devices, and so this approach should be much more straightforward to roll out.

  • Stop exploitation of vulnerable devices and servers from the drive-by web.
  • Allow public websites to communicate to private network devices when the user expects it and explicitly allows it.

An adjacent goal is that we want a path for browsers to be good stewards of OS-level local network access permissions. These OS-level permissions are increasingly common (on iOS, and more recently on macOS); the fact that the browser has been granted the permission (for legitimate browser functionality the user may want to use, like mirroring the contents of a tab on a local device) should not by itself expose users' local devices to the risks of the open web.

  • Break existing workflows and services that rely on a public web frontend that can control local network devices.
    • As long as there is some path forward we should be okay with breaking some use cases (e.g., iframe and HTML subresources that aren't explicitly sourced from local hostnames), but overall we want to minimize breakage.
  • Solve the local network HTTPS problem.
    • As stated in the original Private Network Access explainer: Provide a secure mechanism for initiating HTTPS connections to services running on the local network or the user's machine. This piece is missing to allow secure public websites to embed non-public resources without running into mixed content violations, with the exception of http://localhost which is embeddable. While a useful goal, and maybe even a necessary one in order to deploy Private Network Access more widely, it is out of scope of this specification.

The most common case is for users who don't have any services or devices on their local network that expect connections from websites. Today, browsers freely allow JavaScript and subresource requests to these devices without any indication to the end user. Unless this behavior is expected by the user, users should not be exposed to this risk by default.

Public web frontend for controlling or setting up local devices (such as an IoT device, home router, etc.).

Device manufacturers want to be able to give users an easy process for setting up a new device, and one method that is used is to have a page hosted on the manufacturer's public website which then communicates with the device via the user's browser, explicitly relying on the browser's vantage point inside the user's network.

This also reduces the complexity needed on the device itself -- for example, a smart toothbrush does not need to support a full webserver. Additionally, by being a public webpage under the control of the manufacturer, the setup page is always up to date.

We propose gating the ability for a site to make requests to the user's local network behind a new "local network access" permission. Any origin that has not been granted this permission would be blocked from making such requests.

As defined in the original Private Network Access proposal, we organize an IP network into three layers from the point of view of a node, from most to least private:

  • Localhost: accessible only to the node itself, by default
  • Private IP addresses: accessible only to the members of the local network (e.g. RFC1918)
  • Public IP addresses: accessible to anyone

We call these layers address spaces: loopback, local, and public.

(Note: The original PNA proposal called these local, private, and public. Changing this was considered in WICG/private-network-access#91 but reverted due to already using the "private network access" name and values in headers implemented by sites and device manufacturers.)

We note that local includes RFC 1918/RFC 4193 private/local IP addresses and RFC 6762 link-local names (.local hostnames). (See discussion on the original PNA proposal repository.)
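As an illustration, the three address spaces can be sketched as a classification function over IPv4 literals. This is a rough sketch, not spec text: the function names and range list are illustrative, real classification must also cover IPv6 (including RFC 4193 ULAs), and treating RFC 3927 link-local addresses as "local" is an assumption.

```javascript
// Parse a dotted-quad IPv4 literal into a 32-bit integer.
function ipv4ToInt(ip) {
  const parts = ip.split(".").map(Number);
  if (parts.length !== 4 || parts.some((p) => !Number.isInteger(p) || p < 0 || p > 255)) {
    throw new Error(`not an IPv4 literal: ${ip}`);
  }
  return parts.reduce((acc, p) => acc * 256 + p, 0);
}

// Test whether `ip` falls inside the CIDR range `base/prefixLen`.
function inRange(ip, base, prefixLen) {
  const mask = prefixLen === 0 ? 0 : (~0 << (32 - prefixLen)) >>> 0;
  return ((ipv4ToInt(ip) & mask) >>> 0) === ((ipv4ToInt(base) & mask) >>> 0);
}

// Classify into the explainer's three spaces: loopback, local, public.
function addressSpaceOf(ip) {
  if (inRange(ip, "127.0.0.0", 8)) return "loopback";
  if (
    inRange(ip, "10.0.0.0", 8) ||        // RFC 1918
    inRange(ip, "172.16.0.0", 12) ||     // RFC 1918
    inRange(ip, "192.168.0.0", 16) ||    // RFC 1918
    inRange(ip, "169.254.0.0", 16)       // RFC 3927 link-local (assumed "local" here)
  ) {
    return "local";
  }
  return "public";
}
```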

We define a local network request as a request crossing an address space boundary to a more-private address space. That is, any of the following are considered to be local network requests:

  1. public -> local
  2. public -> loopback
  3. local -> loopback

Note that local -> local is not a local network request, as well as loopback -> anything. (See "cross-origin requests" below for a discussion on potentially expanding this definition in the future.)
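The boundary rule above can be sketched with a privacy ranking over the three spaces, so that the three listed cases (and only those) fall out of a single comparison. Names are illustrative, not spec text.

```javascript
// Higher rank = more private.
const PRIVACY_RANK = { public: 0, local: 1, loopback: 2 };

// A request is a "local network request" only when it crosses into a
// strictly more-private address space than its initiator's.
function isLocalNetworkRequest(initiatorSpace, targetSpace) {
  return PRIVACY_RANK[targetSpace] > PRIVACY_RANK[initiatorSpace];
}
```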

A request is considered to be going to a local space if:

  • The hostname is a private IP address literal (per RFC 1918 etc.), or
  • The hostname is a .local domain (per RFC 6762), or
  • The fetch() call is annotated with targetAddressSpace="local" (see "Integration with Fetch" below).

In these cases we know a priori that the request is local.

Similarly, if a request is to a loopback IP literal (e.g., 127.0.0.1), localhost, or the fetch() call is annotated with targetAddressSpace="loopback", then the request is loopback.
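The a priori rules above can be sketched as a helper over a request URL. This is a hedged sketch: the helper name is hypothetical, the IPv4-literal matching is simplified, and `targetAddressSpace` mirrors the proposed fetch() option rather than any shipping API.

```javascript
// Decide, before DNS resolution, whether a request targets "local" or
// "loopback". Returns null when the space is unknown until resolution.
function aprioriAddressSpace(url, targetAddressSpace) {
  if (targetAddressSpace === "local" || targetAddressSpace === "loopback") {
    return targetAddressSpace; // explicit developer annotation wins
  }
  const host = new URL(url).hostname;
  if (host === "localhost" || /^127\.\d+\.\d+\.\d+$/.test(host)) {
    return "loopback";
  }
  if (host.endsWith(".local")) return "local"; // RFC 6762 name
  if (/^10\./.test(host) || /^192\.168\./.test(host) ||
      /^172\.(1[6-9]|2\d|3[01])\./.test(host)) {
    return "local"; // RFC 1918 IP literal (simplified matching)
  }
  return null; // e.g. a public hostname that may still resolve privately
}
```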

Separately, a request may eventually end up being considered a local network request if the request's hostname resolves to a private or loopback IP address. (We do not know this a priori however, and so cannot exempt these requests from mixed content blocking -- see "Mixed Content" below.)

When a site makes a local network request, the UA should check if the origin has already been granted the "local network access" permission. If not, the request should be blocked while the UA displays a prompt to the user asking whether they want to allow the origin to make requests to their local network. If the user denies the permission prompt, the request fails. If the user accepts the permission prompt, the request continues.

To reduce breakage (due to the lack of local network HTTPS), the permission also exempts requests that are known to be local or loopback from mixed content blocking (see "Mixed Content" below.)

For a user who is not expecting a site to connect to their local network, when example.com tries to call fetch("http://192.168.0.1/routerstatus"), the user's browser will ask whether or not to allow example.com to make connections to the local network. Since the user is not expecting this behavior, they can deny the permission request, and example.com is blocked from making these connections.

An existing site run by a device manufacturer that talks to a local device by making fetch() requests would potentially make minor modifications (for example, ensuring that they either use a private IP address or .local name when referring to the device, or adding the targetAddressSpace="local" property to their fetch() calls). When the site first tries to make a request to the device, the user sees a permission prompt. If the user expects the site to be communicating with devices on their local network, they can choose to grant the permission and the site will continue to function. If the user does not expect this or does not want to grant the permission to the site, they choose to not grant the permission to the site, and no local network requests from the site will be allowed.

The Fetch spec does not integrate the details of DNS resolution, only defining an obtain a connection algorithm, thus Local Network Access checks are applied to the newly-obtained connection. Given complexities such as Happy Eyeballs (RFC6555, RFC8305), these checks might pass or fail non-deterministically for hosts with multiple IP addresses that straddle IP address space boundaries.

After we have obtained a connection, if we detect a local network request:

  • If the client is not in a secure context, block the request.
  • Check if the origin has previously been granted the local network access permission; if not, prompt the user.
  • If the user grants permission, the request will proceed.

However, the requirement that local network requests be made from secure contexts means that any insecure request will be blocked as mixed content unless we can know ahead of time that the request should be considered a local network request.

To make this easier for developers, we propose adding a new parameter to the fetch() options bag to explicitly tag the address space of the request. For example:

fetch("http://router.com/ping", {
  targetAddressSpace: "local",
});

This would instruct the browser to allow the fetch even though the scheme is non-secure and obtain a connection to the target server. The targetAddressSpace should be either local or loopback.

If the remote IP address does not belong to the IP address space specified as the targetAddressSpace option value, then the request is failed. This ensures the feature cannot be abused to bypass mixed content in general. See "Mixed content" below for more details.
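That enforcement can be sketched as a post-connection check: the declared targetAddressSpace must match the space the connection actually resolved to, or the request fails. A minimal sketch assuming strict matching; the function name is illustrative.

```javascript
// After obtaining a connection, fail the request if the developer's
// declared target space does not match the resolved address space.
// This prevents targetAddressSpace from acting as a general
// mixed-content bypass for public servers.
function checkTargetAddressSpace(declaredSpace, resolvedSpace) {
  if (declaredSpace !== undefined && resolvedSpace !== declaredSpace) {
    throw new TypeError(
      `request tagged targetAddressSpace="${declaredSpace}" resolved to ${resolvedSpace}`
    );
  }
}
```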

There is a challenge in the combination of (1) requiring secure contexts in order to make PNA requests, (2) trying to load PNA subresources over HTTP (due to the lack of local network HTTPS), and (3) applying mixed content blocking. The Fetch specification applies mixed content upgrading and blocking steps well before we have obtained a connection.

We can know ahead of the mixed content checks whether a request has a local target address if:

  • The hostname is a private IP address literal (per RFC1918 etc.), or
  • The hostname is a .local domain, or
  • The fetch() call is annotated with the { targetAddressSpace: "local" } option.

If the request meets any of those requirements, we skip steps 6 and 7 of main fetch (upgrading mixed content and blocking mixed content) and mark the request as a local network request. Then, after obtaining the connection the local network access checks can fully run, and if the origin is not granted the local network access permission the request will be blocked. Additionally, if the request ends up not resolving to a local network endpoint, we need to block the request (as it should have been blocked as mixed content).

A new parameter targetAddressSpace will be added as a fetch() API option, allowing a developer to specify that the request should be treated as going to a local or loopback address space. This allows for HTTP local network requests that are not private IP literals or .local domains as long as they are explicitly tagged with the target address space. This also allows the developer to deterministically request the permission.

Documents and WorkerGlobalScopes store an additional address space value. This is initialized from the IP address the document or worker was sourced from.

Local connection attempts that use WebRTC should also be gated behind the Local Network Access permission prompt.

In the WebRTC spec, algorithms that add candidates to the ICE Agent already have steps that ensure “administratively prohibited” addresses are not used. We can modify these algorithms to perform the following steps if the candidate has a loopback or local address:

  • Check if the origin has previously been granted the local network access permission; if not, prompt the user.
  • If the user grants permission, the algorithm will continue.
  • If the user denies the permission, we won’t add the candidate to the ICE Agent and it won’t be used when establishing a connection.

Note that these checks are done asynchronously and don't block resolving the methods where they are used, i.e., setRemoteDescription() and addIceCandidate().

The same checks should also be performed when connecting to STUN/TURN servers with loopback or local addresses.
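The candidate-gating steps above can be sketched as pure logic over a candidate's address space and the current permission state. In the real algorithm a "prompt" state would trigger the permission prompt; this simplified sketch only admits candidates once the permission is resolved to granted. Names are illustrative, not WebRTC API surface.

```javascript
// Decide whether the ICE agent may use a gathered or remote candidate.
// permissionState follows the Permissions API values:
// "granted" | "denied" | "prompt".
function mayUseCandidate(candidateSpace, permissionState) {
  if (candidateSpace !== "local" && candidateSpace !== "loopback") {
    return true; // public candidates are unaffected by this proposal
  }
  // Loopback/local candidates are only usable with the permission granted;
  // on "prompt" the UA would ask the user before admitting the candidate.
  return permissionState === "granted";
}
```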

By default the ability to make local network requests will be limited to top-level documents that are secure contexts. There are use cases where a site needs to be able to delegate this permission into a subframe. To support these use cases, a new policy-controlled feature ("local-network-access") will be added that will allow top-level documents to delegate access to this feature to subframes.

The permission will be integrated with the Permissions API, which will allow sites to query the status of the permission grant.

HTML subresource fetches go through the standard Fetch algorithm, but will not have the ability to specify an explicit targetAddressSpace. This includes subframe navigations.

For HTTP subresources, only "local names" (i.e., private IP literals or .local hostnames) are allowed for local network requests. This is required for resolving the mixed content problem (see "Mixed content" above, and see "Considered alternatives" for more involved methods that have been discussed).

The establish a WebSocket connection algorithm depends on the Fetch algorithm (in the updated WHATWG spec), so Websockets should behave like other Fetch requests and trigger local network access prompts without additional work.

Requests from a service worker go through the Fetch algorithm, and will be included in local network access restrictions.

The previously proposed Private Network Access work (PNA for short, also previously referred to as CORS-RFC1918) required a secure CORS preflight response from the private subresource server. If the preflight failed, the request would be blocked. If the request was for an insecure resource (e.g., due to the lack of trusted local HTTPS), they proposed a separate permission prompt to allow the connection to a specific endpoint device and relax the mixed content restrictions on the connection.

A lot of effort and developer outreach went into this effort, but it was never able to ship. Chrome currently has an opt-in mode behind an enterprise policy, and for a while Chrome shipped a restriction where private network access was restricted to secure contexts only (with a deprecation trial for developers who could not yet meet this requirement due to having HTTP-only private network endpoints). The previous plan was to build and ship the "mixed content permission prompt" to get these remaining developers out of the reverse origin trial.

PNA met a lot of different developer and user needs, and in the "good case" (secure website talking to a local network device that had a publicly trusted TLS certificate and a "PNA-aware" server) could be quite seamless, since it required no user intervention. In the non-ideal cases, PNA accumulated a lot of workarounds to address use cases that would result in a permission prompt in many cases (a device chooser style prompt for insecure devices). For example:

Even if the local device has opted in to connections from a top level site, we believe there is value in user awareness and control over this exchange.

The use of preflights (without any user consent speed bump) also exposed its own risks. For example, timing attacks could be used to determine valid IP addresses on the network (crbug.com/40051437, WICG/private-network-access#41).

Given the risks of allowing sites to use the browser as an access point into the user's local network, we could simply block all local network requests (or just any requests that cross from a public address space to a more private one). This would be simplest, but would break many existing use cases, or require expensive workarounds.

As more operating systems are implementing local network access permissions at the application level, we believe it is the duty of user agents to broker access to that privilege in order to be good stewards of it. A user may have legitimate reasons to grant access for their browser (e.g., Chrome's "cast this tab" functionality) but never want a site to be able to access their local network.

In our proposal above we recommend restricting local network HTML subresources to "assumed local" hostnames (such as .local domains) as a middle ground that meets developer needs, is relatively easy to deploy, and doesn't require complex technical or specification work to accomplish.

Below are some alternatives that have been considered for addressing the mixed content problem.

In order to restrict local network access to secure contexts (which is necessary in order for a permission prompt to make sense), we need some resolution of the mixed content problem. The Fetch specification orders the "upgrade or block mixed content" steps before we have obtained a connection, and thus before we can know the IP address.

Currently, some developers can work around this by getting publicly trusted certificates for their servers running on local networks (e.g., Plex getting Let's Encrypt certs under a different subdomain for every install) but it is a substantial engineering and maintenance burden. Even for developers that go to the trouble of using publicly trusted certificates, fallback to HTTP is required in some network circumstances.

This could be a "treat hostname as public" / "treat hostname as local" property that could be specified in a response header or in a meta tag (or a response header meta equiv).

An initial idea was to add this on to CSP, however that was rejected due to CSP already being a bit overloaded and this not being a great conceptual fit for it. A separate "Sec-Treat-Origin-As-Private" header (or something along those lines) could be used to list origins that should be assumed to resolve to private IP addresses.

Additionally, it might be useful if such a header could be specified via an http-equiv meta tag (like one can do with CSP).

Top-level navigations remain a risk after restrictions on subresource local network requests are in place. For an attacker, main frame navigations are noisier (compared to subresource requests and iframe navigations), although popunder techniques could potentially be used to hide navigations from the user.

To prevent these, we could block or show an interstitial warning when a public page navigates to a local one. To avoid too much breakage or over-warning, we could scope protections to the riskiest cases: "complex" requests such as POST navigations and GET requests with URL parameters. We might also be able to "defang" navigations by stripping URL parameters to reduce the risk of exploitation.

We could instead define a local network request as "any request targeting an IP address in the local or loopback address space, regardless of the requestor's address space", while maintaining the exception for not blocking same-origin requests. This is a stricter / broader definition, and would likely cause more widespread breakage than our proposal to only consider requests that cross from one address space class to a more private one.

This would also allow the mixed content relaxation that is granted when an origin is given the local network access permission to apply to local -> local or loopback -> local requests. This has been raised as a concern by developers in WICG/private-network-access#109. Note that the status quo could continue here for now -- i.e., the top level page can get around mixed content checks by remaining on HTTP.

A recent study (Schmidt et al., S&P 2025) found that the transitivity of the local networks permission on iOS was the hardest for users to understand. It might be beneficial to scope the permission grant to an origin to the specific local network the user is currently connected to. Should the user grant permission to example.com on their home network, it may be reasonable for a UA to re-prompt the user if they bring their device to a different location and visit example.com again.

PNA 1.0 worked to add a new targetAddressSpace parameter to fetch() to label the target address space of the request, so that mixed content checks could be relaxed (and then enforced if that connection ended up being public, to avoid it being a mixed content blocking bypass). The challenge is how to handle HTML subresources (e.g., img, iframe, etc.). We could add a new property to these HTML elements allowing developers to "label" them as public/local/loopback. This would function similarly to the parameter on fetch() and allow the user agent to initially bypass mixed content checks (and to know it should trigger the permission prompt). Currently we don't think this is necessary, as most use cases that require explicitly marking the targetAddressSpace should be able to switch to using fetch().

Security

  • While the local network access permission exempts requests to a priori known local endpoints from mixed content blocking, the page should still be considered to have loaded mixed content if such a request is made. This means that, for example, browsers can choose to show a different security UI for pages that make insecure connections to the local network.
  • Compared to the original PNA proposal, a site granted the local network access permission has more power to probe and connect to devices on the local network, regardless of whether those devices expect it.
  • There is some risk of users accepting the permission without understanding it (which couldn't happen with preflights).

Privacy

  • Compared to the original PNA proposal, no local network connections are allowed until the user has explicitly granted permission to a site.
  • Compared to the original PNA proposal, there are no preflights (and thus no risk of timing/probing attacks from them).

The previous PNA proposal (using preflights) was positively received by Mozilla (mozilla/standards-positions#143) and WebKit (WebKit/standards-positions#163).

Many thanks for valuable feedback and advice from:

  • Titouan Rigoudy, Jonathan Hao, and Yifan Luo who worked on the original PNA proposals and specification, and generously discussed their work with me and helped brainstorm paths forward.


Comments

  • By mystifyingpoi | 2025-06-04 19:13 | 6 replies

    I like this on the first glance. The idea of a random website probing arbitrary local IPs (or any IPs for that matter) with HTTP requests is insane. I wouldn't care if it breaks some enterprise apps or integrations - enterprises could reenable this "feature" via management tools, normal users could configure it themselves, just show a popup "this website wants to control local devices - allow/deny".

    • By buildfocus | 2025-06-04 21:08 | 13 replies

      This is a misunderstanding. Local network devices are protected from random websites by CORS, and have been for many years. It's not perfect, but it's generally quite effective.

      The issue is that CORS gates access only on the consent of the target server. It must return headers that opt into receiving requests from the website.

      This proposal aims to tighten that, so that even if the website and the network device both actively want to communicate, the user's permission is also explicitly requested. Historically we assumed server & website agreement was sufficient, but Facebook's recent tricks where websites secretly talked to apps on your phone have broken that assumption - websites might be in cahoots with local network servers to work against your interests.

      • By xp84 | 2025-06-04 21:14 | 3 replies

        Doesn't CORS just restrict whether the webpage JS context gets to see the response of the target request? The request itself happens anyway, right?

        So the attack vector that I can imagine is that JS on the browser can issue a specially crafted request to a vulnerable printer or whatever that triggers arbitrary code execution on that other device. That code might be sufficient to cause the printer to carry out your evil task, including making an outbound connection to the attacker's server. Of course, the webpage would not be able to discover whether it was successful, but that may not be important.

        • By jonchurch_ | 2025-06-05 14:14

          I think CORS is so hard for us to hold in our heads in large part due to how much is stuffed into the algorithm.

          It may send an OPTIONS request, or not.

          It may block a request being sent (in response to OPTIONS) or block a response from being read.

          It may restrict which headers can be set, or read.

          It may downgrade the request you were sending silently, or consider your request valid but the response off limits.

          It is a matrix of independent gates essentially.

          Even the language we use is imprecise; CORS itself is not really doing any of this or blocking things. As others pointed out, it's the Same-Origin Policy that is the strict one, and CORS is really an exception engine to allow us to punch through that security layer.

        • By tombakt | 2025-06-04 21:21 | 9 replies

          No, a preflight (OPTIONS) request is sent by the browser first prior to sending the request initiated by the application. I would be surprised if it is possible for the client browser to control this OPTIONS request more than just the URL. I am curious if anyone else has any input on this topic though.

          Maybe there is some side-channel timing that can be used to determine the existence of a device, but not so sure about actually crafting and delivering a malicious payload.

          • By varenc | 2025-06-05 3:32 | 3 replies

            This tag:

                <img src="http://192.168.1.1/router?reboot=1">
            
            triggers a local network GET request without any CORS involvement.

            • By grrowl | 2025-06-05 7:31 | 2 replies

              I remember back in the day you could embed <img src="http://someothersite.com/forum/ucp.php?mode=logout"> in your forum signature and screw with everyone's sessions across the web

              • By lobsterthief | 2025-06-05 14:00 | 1 reply

                Haha I remember that. The solution at the time for many forum admins was to simply state that anyone found to be doing that would be permabanned. Which was enough to make it stop completely, at least for the forums that I moderated. Different times indeed.

                • By sedatk | 2025-06-06 0:43

                  Or you could just make the logout route POST-only. Problem solved.

              • By anthk | 2025-06-05 7:41 | 1 reply

                <img src="C:\con\con"></img>

                • By jbverschoor | 2025-06-05 13:39

                  It's essentially the same, as many apps use HTTP server + html client instead of something native or with another IPC.

            • By lyu07282 | 2025-06-05 6:26 | 2 replies

              Exactly, you can also trigger forms for POST or DELETE etc. This is called CSRF if the endpoint doesn't validate some token in the request. CORS only protects against unauthorized XHR requests. All decades-old OWASP basics really.

              • By formerly_proven | 2025-06-05 11:10 | 2 replies

                That highly ranked comments on HN (an audience with way above average-engineer interest in software and security) get this wrong kinda explains why these things keep being an issue.

                • By mrguyorama | 2025-06-06 18:41

                  I'm betting HN is vastly more normal people and manager types than people want to admit.

                  None of us had to pass a security test to post here. There's no filter. That makes it pretty likely that HN's community is exactly as shitty as the rest of the internet's.

                  People need to stop treating this community like some club of enlightened elites. It's hilariously sad and self-congratulatory.

                • By lyu07282 | 2025-06-05 15:06 | 1 reply

                  I don't know why you are getting downvoted; you do have a point. Some of the comments appear to know what CORS headers are, but not their purpose or how they relate to CSRF, which is worrying. It's not meant as disparaging. My university taught a course on OWASP, thankfully; otherwise I'd probably also be oblivious.

                  • By asmor | 2025-06-05 22:04 | 1 reply

                    If you're going cross-domain with XHR, I'd hope you're mostly sending json request bodies and not forms.

                    Though to be fair, a lot of web frameworks have methods to bind named inputs that allow either.

                    • By bawolff | 2025-06-06 7:20

                      This misses the point a bit. CSRF usually applies to people who want only same domain requests and dont realize that cross domain is an option for the attacker.

                      In the modern web its much less of an issue due to samesite cookies being default .

              • By bawolff | 2025-06-06 7:18

                > Exactly you can also trigger forms for POST or DELETE etc

                You can't do a DELETE from a form; you have to use AJAX, and a cross-origin DELETE needs a preflight.

                To nitpick, CSRF is not the ability to use forms per se, but relying solely on the existence of a cookie to authorize actions with side effects.

            • By buildfocus | 2025-06-05 12:59 | 1 reply

              The expectation is that this should not work: well-behaved network devices shouldn't accept a blind GET like this for destructive operations, and there are plenty of other good reasons for that. There's no real alternative unless you're also going to block page redirects and links to these URLs, which trigger a similar GET; that would make it impossible to access any local network page without typing it manually.

              While it clearly isn't a hard guarantee, in practice it does seem to generally work, as these have been known issues without apparent massive exploits for decades. That CORS restrictions block probing (no response provided) does help make this all significantly more difficult.

              • By oasisbob | 2025-06-05 16:50 | 1 reply

                "No true Scotsman allows GETs with side effects" is not a strong argument

                It's not just HTTP where this is a problem. There are enough http-ish protocols where protocol smuggling confusion is a risk. It's possible to send chimeric HTTP requests at devices which then interpret them as a protocol other than http.

                • By bawolff | 2025-06-06 7:14

                  Yes, which is why web browsers way back even in the netscape navigator era had a blacklist of ports that are disallowed.

          • By LegionMammal978 | 2025-06-04 21:28 | 8 replies

            The idea is, the malicious actor would use a 'simple request' that doesn't need a preflight (basically, a GET or POST request with form data or plain text), and manage to construct a payload that exploits the target device. But I have yet to see a realistic example of such a payload (the paper I read about the idea only vaguely pointed at the existence of polyglot payloads).
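For reference, the "simple request" (no-preflight) conditions the comment refers to boil down to a safelisted method plus a safelisted Content-Type essence. A minimal sketch, ignoring the handful of other safelisted headers; names are illustrative:

```javascript
// CORS-safelisted methods and Content-Type essences: requests matching
// these are sent without a preflight OPTIONS request.
const SAFE_METHODS = new Set(["GET", "HEAD", "POST"]);
const SAFE_CONTENT_TYPES = new Set([
  "application/x-www-form-urlencoded",
  "multipart/form-data",
  "text/plain",
]);

function skipsPreflight(method, contentType) {
  if (!SAFE_METHODS.has(method.toUpperCase())) return false;
  if (contentType === undefined) return true;
  // Only the MIME essence (the part before any ";") is checked.
  const essence = contentType.split(";")[0].trim().toLowerCase();
  return SAFE_CONTENT_TYPES.has(essence);
}
```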

            • By MajesticHobo2 2025-06-0421:471 reply

              There doesn't need to be any kind of "polyglot payload". Local network services and devices that accept only simple HTTP requests are extremely common. The request will go through and alter state, etc.; you just won't be able to read the response from the browser.

              • By EGreg 2025-06-0423:17

                Exactly. People who are answering must not have been aware of “simple” requests not requiring preflight.

            • By Sophira 2025-06-0513:561 reply

              I can give an example of this; I found such a vulnerability a few years ago now in an application I use regularly.

              The target application in this case was trying to validate incoming POST requests by checking that the incoming MIME type was "application/json". Normally, you can't make unauthorized XHR requests with this MIME type as CORS will send a preflight.

              However, because of the way it was checking for this (checking whether the Content-Type header contained the text "application/json"), it was relatively easy to construct a Content-Type header that bypasses CORS:

              Content-Type: multipart/form-data; boundary=application/json

              It's worth bearing in mind in this case that the payload doesn't actually have to be form data - the application was expecting JSON, after all! As long as the web server doesn't do its own data validation (which it didn't in this case), we can just pass JSON as normal.

              This was particularly bad because the application allowed arbitrary code execution via this endpoint! It was fixed, but in my opinion, something like that should never have been exposed to the network in the first place.
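
              A minimal sketch of the flaw described above (function names are illustrative, not the actual application's code):

              ```javascript
              // Hypothetical reconstruction of the flawed validation: the server
              // only checks that the header CONTAINS "application/json".
              function naiveIsJson(contentType) {
                return contentType.includes('application/json');
              }

              // A correct check parses out the media type before comparing.
              function strictIsJson(contentType) {
                return contentType.split(';')[0].trim().toLowerCase() === 'application/json';
              }

              // This header is CORS-"simple" (multipart/form-data), so the browser
              // sends it without a preflight -- yet the naive check accepts it.
              const smuggled = 'multipart/form-data; boundary=application/json';
              naiveIsJson(smuggled);  // true  -> accepted, and no preflight ever ran
              strictIsJson(smuggled); // false -> bypass closed
              ```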

              • By apitman 2025-06-066:02

                This is a great example; thanks.

            • By freeone3000 2025-06-0422:231 reply

              Oh, you can only send arbitrary text or form submissions. That’s SO MUCH.

            • By mirashii 2025-06-0422:081 reply

              You don't even need to be exploiting the target device, you might just be leaking data over that connection.

              https://news.ycombinator.com/item?id=44169115

              • By notpushkin 2025-06-057:40

                Yeah, I think this is the reason this proposal is getting more traction again.

            • By kbolino 2025-06-0421:29

              Here's a formal definition of such simple requests, which may be more expansive than one might expect: https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/COR...

            • By chuckadams 2025-06-0513:24

              Some devices don't bother to limit the size of the GET, which can enable a DoS attack at least, a buffer overflow at worst. But I think the most typical vector is a form-data POST, which isn't CSRF-protected because "it's on localhost so it's safe, right?"

              I've been that sloppy with dev servers too. Usually not listening on port 80 but that's hardly Ft Knox.

            • By drexlspivey 2025-06-0422:541 reply

              It can send a json-rpc request to your bitcoin node and empty your wallet

              • By LegionMammal978 2025-06-0519:471 reply

                Do you know of any such node that doesn't check the Content-Type of requests and also has no authentication?

                • By drexlspivey 2025-06-0611:581 reply

                  Bitcoin Core if you disable authentication

                  • By LegionMammal978 2025-06-071:021 reply

                    There's no such thing, short of forking it yourself. You can set the username and password to admin:admin if you want, but Bitcoin Core's JSON-RPC server requires an Authorization header on every request [0], and you can't put an Authorization header on a cross-origin request without a preflight.

                    [0] https://github.com/bitcoin/bitcoin/blob/v29.0/src/httprpc.cp...

                    • By drexlspivey 2025-06-078:09

                      Good to know, I remember you used to be able to disable it via config but looks like I was wrong

            • By bawolff 2025-06-067:25

              I think that is because it is so old that it's basically old news and mostly mitigated.

              https://www.kb.cert.org/vuls/id/476267 is an article from 2001 on it.

          • By rafram 2025-06-051:522 reply

            You’re forgetting { mode: 'no-cors' }, which makes the response opaque (no way to read the data) but completely bypasses the CORS preflight request and header checks.

            • By jonchurch_ 2025-06-0513:582 reply

              This is missing important context. You are correct that the preflight will be skipped, but there are further restrictions when operating in this mode. They don't guarantee your server is safe, but they do force operation under a "safer" subset of verbs and header fields.

              The browser will restrict the headers and methods of requests that can be sent in no-cors mode (silently censoring them, in the case of headers).

              Anything besides GET, HEAD, or POST will result in an error in the browser, and not be sent.

              All headers will be dropped besides the CORS safelisted headers [0]

              And Content-Type must be one of application/x-www-form-urlencoded, multipart/form-data, or text/plain. Attempting to use anything else will see the header replaced by text/plain.

              [0] https://developer.mozilla.org/en-US/docs/Glossary/CORS-safel...
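
              A sketch modeling the sanitization listed above (simplified; the authoritative rules live in the Fetch spec, and the header names here are illustrative):

              ```javascript
              // Model of what the browser does to a no-cors request before
              // sending it: disallowed methods error out, non-safelisted headers
              // are silently dropped, and disallowed Content-Type media types
              // are replaced with text/plain.
              const SAFELISTED = new Set(['accept', 'accept-language', 'content-language', 'content-type']);
              const NO_CORS_CONTENT_TYPES = new Set([
                'application/x-www-form-urlencoded', 'multipart/form-data', 'text/plain',
              ]);

              function sanitizeNoCors(method, headers) {
                if (!['GET', 'HEAD', 'POST'].includes(method.toUpperCase())) {
                  throw new TypeError(`'${method}' is not allowed in no-cors mode`);
                }
                const kept = {};
                for (const [name, value] of Object.entries(headers)) {
                  const n = name.toLowerCase();
                  if (!SAFELISTED.has(n)) continue; // silently dropped
                  if (n === 'content-type' &&
                      !NO_CORS_CONTENT_TYPES.has(value.split(';')[0].trim().toLowerCase())) {
                    kept[name] = 'text/plain'; // disallowed media types get replaced
                    continue;
                  }
                  kept[name] = value;
                }
                return kept;
              }

              // An auth header and a JSON body both get neutered:
              sanitizeNoCors('POST', { 'X-Api-Key': 's3cret', 'Content-Type': 'application/json' });
              // -> { 'Content-Type': 'text/plain' }
              ```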

              • By rafram 2025-06-0514:081 reply

                That’s just not that big of a restriction. Anecdotally, very few JSON APIs I’ve worked with have bothered to check the request Content-Type. (“Minimal” web frameworks without built-in security middleware have been very harmful in this respect.) People don’t know about this attack vector and don’t design their backends to prevent it.

                • By jonchurch_ 2025-06-0514:251 reply

                  I agree that it is not a robust safety net. But in the instance you're citing, that's a misconfigured server.

                  What framework allows you to set up a misconfigured parser out of the box?

                  I don't mean that as a challenge; as a server framework maintainer I'm genuinely curious. In Express we would definitely allow people to opt into this, but you have to explicitly make the choice to go and configure body-parser's JSON handling to accept all content types via a no-op type-checking function.

                  Meaning, it's hard to get into this state!

                  Edit to add: there are myriad ways to misconfigure a webserver to make it insecure without realizing it. But IMO that is the point of using a server framework! To make it less likely devs will footgun, via sane defaults that prevent these scenarios unless someone really wants to make a different choice.

                  • By rafram 2025-06-0515:55

                    SvelteKit for sure, and any other JS framework that uses the built-in Request class (which doesn’t check the Content-Type when you call json()).

                    I don’t know the exact frameworks, but I consume a lot of random undocumented backend APIs (web scraper work) and 95% of the time they’re fine with JSON requests with Content-Type: text/plain.

              • By afavour 2025-06-0514:191 reply

                I think you’re making those restrictions out to be bigger than they are.

                Does no-cors allow a nefarious company to send a POST request to a local server, running in an app, containing whatever arbitrary data they’d like? Yes, it does. When you control the server side the inability to set custom headers etc doesn’t really matter.

                • By jonchurch_ 2025-06-0514:36

                  My intent isn't to convince people this is a safe mode, but to share knowledge in the hope someone learns something new today.

                  I didn't mean it to come across that way. The spec does what the spec does; we should all be aware of it so we can make informed decisions.

            • By chuckadams 2025-06-0513:33

              Thankfully, no-cors also restricts most headers, including setting Content-Type to anything but the built-in form types. So while CSRF doesn't even need a click because of no-cors, it's still not possible to do CSRF against a JSON-only API. Just be sure the server is actually set up to restrict the content type -- most frameworks will "helpfully" accept and convert form-data by default.

          • By bawolff 2025-06-067:12

            > No, a preflight (OPTIONS) request is sent by the browser first prior to sending the request initiated by the application.

            Note: preflight is not required for any type of request that browser js was capable of making prior to CORS being introduced. (Except for local network)

            So a simple GET or POST does not require OPTIONS, but if you set a header it might require OPTIONS (unless it's a header you could set in the pre-CORS world).

          • By thayne 2025-06-0516:04

            It depends. GET requests are assumed not to have side effects, so often don't have a preflight request (although there are cases where it does). But of course, not all sites follow those semantics, and it wouldn't surprise me if printer or router firmware used GETs to do something dangerous.

            Also, form submission famously doesn't require CORS.

          • By layer8 2025-06-0422:54

            There is a limited, but potentially effective, attack surface via URL parameters.

          • By rerdavies 2025-06-0423:032 reply

            I can confirm that local websites that don't implement CORS via the OPTIONS request cannot be browsed with mainstream browsers. This does nothing to prevent non-browser applications running on the local network from accessing your website, though.

            As far as I can tell, the only thing this proposal does that CORS does not already do is provide some level of enterprise configuration control to guard against the scenario where your users are using compromised internet sites that can ping around your internal network for agents running on compromised desktops. Maybe? I don't get it.

            If somebody would fix the "no https for local connections" issue, then IoT websites could use authenticated logins to fix both problems. Non-https websites also have no access to browser crypto APIs, so roll-your-own auth (the horror) isn't an option either. Frustrating!

            • By dgoldstein0 2025-06-055:55

              I don't believe this is true? As others have pointed out, preflight OPTIONS requests only happen for non-simple requests. CORS response headers are still required to read a cross-domain response, but that still leaves a huge window for a malicious site to send side-effectful requests to local network devices running badly implemented web servers.

            • By rerdavies 2025-06-0515:58

              [edit]: I was wrong. Just tested this a moment ago, and it turns out NOT to be true. My web server during normal operation is currently NOT getting OPTIONS requests at all.

              Wondering whether I triggered CORS requests when I was struggling with IPv6 problems. Or maybe it triggers when I redirect index.html requests from IPv6 to IPv4 addresses. Or maybe I got caught by the earlier rollout of version one of this proposal? There was definitely a time while I was developing PiPedal when none of my images displayed because my web server wasn't doing CORS. But whatever my excuse might be, I was wrong. :-/

        • By nbadg 2025-06-0421:252 reply

          Or simply perform a timing attack as a way of exploring the local network. I'm not sure whether the browser implementation returns immediately after the request is made (e.g. when the fetch API is called) but before the response is received; presumably it doesn't, which would expose it to timing attacks as a way of exploring the network.

          • By baobun 2025-06-0512:45

            eBay, for one, has been (was?) fingerprinting users like this for years.

            https://security.stackexchange.com/questions/232345/ebay-web...

          • By dgoldstein0 2025-06-056:02

            Almost every JS API for making requests is asynchronous, so they do return after the request is made. The exception is synchronous XHR calls, but I'm not sure if those are still supported.

            ... Anyhow, I think it doesn't matter, because you can listen for the error/failure of most async requests. CORS errors are equivalent to network errors (the browser tells the JS it got status code 0 with no further information), but the timing of that could lead to some sort of inference? Hard to say what that would be, though. Maybe if you knew the target webserver was slow but would respond to certain requests, a slower failed local request could mean it actually reached a target device.

            That said, why not just fire off simple http requests with your intended payload? Abusing the csrf vulnerabilities of local network devices seems far easier than trying to make something out of a timing attack here.

      • By sidewndr46 2025-06-0513:322 reply

        This is also a misunderstanding. CORS only applies to the Layer 7 communication. The rest you can figure out from the timing of that.

        Significant components of the browser, such as WebSockets, have no such restrictions at all.

        • By James_K 2025-06-0515:361 reply

          Won't the browser still append the "Origin" field to WebSocket requests, allowing servers to reject them?

          • By bstsb 2025-06-0522:12

            yes, and that's exactly how discord's websocket communication checks work (allowing them to offer a non-scheme "open in app" from the website).

            they also had some kind of RPC websocket system for game developers, but that appears to have been abandoned: https://discord.com/developers/docs/topics/rpc

        • By afiori 2025-06-0515:492 reply

          A WebSocket starts as a normal http request, so it is subject to cors if the initial request was (eg if it was a post)

          • By hnav 2025-06-0517:23

            websockets aren't subject to CORS, they send the initiating webpage in the Origin header but the server has to decide whether that's allowed.

          • By odo1242 2025-06-0516:25

            Unfortunately, the initial WebSocket HTTP request is defined to always be a GET request.
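
            Putting those two observations together: the handshake is a plain GET, so the server-side Origin check is the only browser-provided line of defense. A sketch (the handshake object shape here is illustrative, not any particular library's API):

            ```javascript
            // The browser appends an Origin header to every WebSocket handshake,
            // but it is purely advisory: the SERVER must reject origins it does
            // not recognize. Allowed origin below is an assumed example value.
            const ALLOWED_WS_ORIGINS = new Set(['https://app.example.com']);

            function acceptHandshake(handshake) {
              // A WS handshake is always a GET, so CORS preflights never apply;
              // the Origin allowlist is the whole check.
              return handshake.method === 'GET' &&
                     ALLOWED_WS_ORIGINS.has(handshake.headers['origin']);
            }

            acceptHandshake({ method: 'GET', headers: { origin: 'https://app.example.com' } }); // true
            acceptHandshake({ method: 'GET', headers: { origin: 'https://evil.example' } });    // false
            ```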

      • By rnicholus 2025-06-051:05

        CORS doesn’t protect you from anything. Quite the opposite: it _allows_ cross origin communication (provided you follow the spec). The same origin policy is what protects you.

      • By hsbauauvhabzb 2025-06-0421:151 reply

        CORS prevents the site from accessing the response body. In some scenarios, a website could, for example, blindly attempt to authenticate to your router and modify settings by guessing your router's brand/model and password.

      • By spr-alex 2025-06-0518:59

        I made a CTF challenge 3 years ago that proves why local devices are not so protected. exploitv99 bypasses PNA with timing, as the other commenter points out.

        https://github.com/adc/ctf-midnightsun2022quals-writeups/tre...

      • By friendzis 2025-06-056:42

        > The issue is that CORS gates access only on the consent of the target server. It must return headers that opt into receiving requests from the website.

        False. CORS only gates non-simple requests (via an OPTIONS preflight); simple requests are sent regardless of CORS config, with no gating whatsoever.

      • By Aeolun 2025-06-0422:511 reply

        How would Facebook do that? They scan all likely local ranges for what could be your phone, and have a web server running on the phone? That seems more like a problem of allowing the phone app to start something like that and keep it running in the background.

        • By londons_explore 2025-06-055:19

          WebRTC allows you to find the local ranges.

          Typically there are only 256 IPs, so a scan of them all is almost instant.
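
          A sketch of the enumeration step (assuming a /24, with a leaked local address like 192.168.1.37; the IP is an example value):

          ```javascript
          // Once a local IP leaks (historically via WebRTC ICE candidates), the
          // containing /24 yields at most a few hundred addresses to probe. The
          // enumeration is trivial; pairing it with no-cors fetches and timing
          // measurements is what makes it an attack.
          function candidatesForSlash24(localIp) {
            const prefix = localIp.split('.').slice(0, 3).join('.');
            // .0 (network) and .255 (broadcast) are excluded.
            return Array.from({ length: 254 }, (_, i) => `${prefix}.${i + 1}`);
          }

          const hosts = candidatesForSlash24('192.168.1.37');
          hosts.length; // 254
          hosts[0];     // '192.168.1.1' -- often the router
          ```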

      • By esnard 2025-06-058:351 reply

        Do you have a link talking about those Facebook's recent tricks? I think I missed that story, and would love to read an analysis about it

      • By IshKebab 2025-06-0421:371 reply

        I think this can be circumvented by DNS rebinding, though your requests won't have the authentication cookies for the target, so you would still need some kind of exploit (or a completely unprotected target).

      • By ars 2025-06-0523:47

        > but Facebook's recent tricks where websites secretly talked to apps on your phone have broken that assumption - websites might be in cahoots with local network servers to work against your interests.

        This isn't going to help for that. The locally installed app, and the website, can both, independently, open a connection to a 3rd party. There's probably enough fingerprinting available for the 3rd party to be able to match them.

      • By ameliaquining 2025-06-0422:18

        Is this kind of attack actually in scope for this proposal? The explainer doesn't mention it.

      • By h4ck_th3_pl4n3t 2025-06-0521:40

        > Local network devices are protected from random websites by CORS

        C'mon. We all know that 99% of the time, Access-Control-Allow-Origin is set to * and not to the specific IP of the web service.

        Also, CORS is not in the control of the user while the proposal is. And that's a huge difference.

      • By kmeisthax 2025-06-0421:291 reply

        THE MYTH OF "CONSENSUAL" REQUESTS

        Client: I consent

        Server: I consent

        User: I DON'T!

        ISN'T THERE SOMEBODY YOU FORGOT TO ASK?

        • By cwillu 2025-06-054:42

          Does anyone remember when the user-agent was an agent of the user?

    • By jm4 2025-06-051:127 reply

      This sounds crazy to me. Why should websites ever have access to the local network? That presents an entirely new threat model for which we don’t have a solution. Is there even a use case for this for which there isn’t already a better solution?

      • By loaph 2025-06-053:001 reply

        I've used https://pairdrop.net/ before to share files between devices on the same LAN. It obviously wouldn't have to be a website, but it's pretty convenient since all my devices I wanted to share files on already have a browser.

        • By A4ET8a8uTh0_v2 2025-06-059:04

          Same use case, but I remember getting approval prompts ( though come to think of it, those were not mandated, but application specific prompts to ensure you consciously choose to share/receive items ). To your point, there are valid use cases for it, but some tightening would likely be beneficial.

      • By necovek 2025-06-0515:39

        Not a local network, but localhost example: due to the lousy private certificate capability APIs in web browsers, this is commonly used for signing with electronic IDs for countries issuing smartcard certificates for their citizens (common in Europe). Basically, a web page would contact a web server hosted on localhost which was integrated with PKCS library locally, providing a signing and encryption API.

        One of the solutions in the market was open source up to a point (Nowina NexU), but it seems it's gone from GitHub

        For local network, you can imagine similar use cases — keep something inside the local network (eg. an API to an input device; imagine it being a scanner), but enable server-side function (eg. OCR) from their web page. With ZeroConf and DHCP domain name extensions, it can be a pretty seamless option for developers to consider.

      • By Thorrez 2025-06-0513:121 reply

        >Why should websites ever have access to the local network?

        It's just the default. So far, browsers haven't really given different IP ranges different security.

        evil.com is allowed to make requests to bank.com. Similarly, evil.com is allowed to make requests to foo.com, even if foo.com's DNS resolves to 127.0.0.1.

        • By chuckadams 2025-06-0513:42

          > It's just the default. So far, browsers haven't really given different IP ranges different security.

          I remember having "zone" settings in Internet Explorer 20 years ago, and ISTR it did IP ranges as well as domains. Don't think it did anything about cross-zone security though.

      • By EvanAnderson 2025-06-0514:21

        > Is there even a use case for this for which there isn’t already a better solution?

        I deal with a third-party hosted webapp that enables extra functionality when a webserver hosted on localhost is present. The local webserver exposes an API allowing the application to interact more closely with the host OS (think locally-attached devices and servers on the local network). If the locally-installed webserver isn't present, the hosted app hides the extra functionality.

        Limiting browser access to the loopback range (127.0.0.0/8) would be fine by me, as a sysadmin, so long as I have the option to enable it for applications where it's desired.

      • By Thorrez 2025-06-0513:08

        >That presents an entirely new threat model for which we don’t have a solution.

        What attack do you think doesn't have a solution? CSRF attacks? The solution is CSRF tokens, or checking the Origin header, same as how non-local-network sites protect against CSRF. DNS rebinding attacks? The solution is checking the Host header.
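
        A minimal sketch of those two checks (the hostnames and the header-object shape are illustrative assumptions, not any specific device's config):

        ```javascript
        // Two server-side checks for a local-network web UI: reject cross-site
        // writes via the Origin header (CSRF), and refuse unexpected Host
        // headers (DNS rebinding, where attacker.com resolves to a local IP).
        const TRUSTED_ORIGINS = new Set(['http://router.local']);
        const EXPECTED_HOSTS = new Set(['router.local', '192.168.1.1']);

        function isRequestAllowed(headers) {
          const host = (headers['host'] || '').split(':')[0];
          if (!EXPECTED_HOSTS.has(host)) return false; // rebinding: Host carries the attacker's domain
          const origin = headers['origin'];
          if (origin === undefined) return true;       // same-origin navigations may omit Origin
          return TRUSTED_ORIGINS.has(origin);          // CSRF: Origin is browser-set, JS can't spoof it
        }

        isRequestAllowed({ host: '192.168.1.1', origin: 'https://evil.example' }); // false
        isRequestAllowed({ host: 'router.local', origin: 'http://router.local' }); // true
        ```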

      • By charcircuit 2025-06-054:032 reply

        >for which we don’t have a solution

        It's called ZTA, Zero Trust Architecture. Devices shouldn't assume the LAN is secure.

        • By udev4096 2025-06-066:28

          Exactly, LAN is not a "secure" network field. Authenticate everything from everywhere all the time

        • By esseph 2025-06-066:15

          You got grandma running ZTA now?

          This is a problem impacting mass users, not just technical ones.

    • By lucideer 2025-06-0420:185 reply

      > normal users could configure it themselves, just show a popup "this website wants to control local devices - allow/deny".

      MacOS currently does this (per app, not per site) & most users just click yes without a second thought. Doing it per site might create a little more apprehension, but I imagine not much.

      • By mastazi 2025-06-0422:453 reply

        Do we have any evidence that most users just click yes?

        My parents who are non-technical click no by default to everything, sometimes they ask for my assistance when something doesn't work and often it's because they denied some permission that is essential for an app to work e.g. maybe they denied access to the microphone to an audio call app.

        Unless we have statistics, I don't think we can make assumptions.

        • By technion 2025-06-051:18

          The amount of "malware" infections I've responded to over the years that involved browser push notifications to Windows desktops is completely absurd. Chrome and Edge clearly ask for permissions to enable a browser push.

          The moment a user gets this permissions request, as far as I can tell they will hit approve 100% of the time. We have one office where the staff have complained that it's impossible to look at astrology websites without committing to desktop popups selling McAfee. Which implies those staff, having been trained to hit "no", believe it's impossible to do.

          (yes, we can disable with a GPO, which I heavily promote, but that org has political problems).

        • By Aeolun 2025-06-0422:53

          As a counter example, I think all these dialogs are annoying as hell and click yes to almost everything. If I’m installing the app I have pre-vetted it to ensure it’s marginally trustworthy.

        • By lucideer 2025-06-056:321 reply

          I have no statistics, but I wouldn't consider older parents the typical case here. My parents never click yes on anything, but my young colleagues in non-engineering roles in my office do. And I'd say even a decent % of my engineering colleagues do too, especially the vibe coders. And they all spend a lot more time on their computers than my parents do.

          • By mixmastamyk 2025-06-0515:31

            Interesting parallel between the older parents, who (may have finally learned to) deny, and the young folks, supposed digital natives, a majority of whom don't really understand how computers work.

      • By paxys 2025-06-0422:192 reply

        People accept permission prompts from apps because they consciously downloaded the app and generally have an idea about the developer and what the app does. If a social media app asks for permission to access your photos it's easy to understand why; same with a music streamer wanting to connect to your smart speaker.

        A random website someone linked me to wanting to access my local network is a very different case. I'm absolutely not giving network or location or camera or any other sort of access to websites except in very extreme circumstances.

        • By poincaredisk 2025-06-0422:371 reply

          "Please accept the [tech word salad] popup to verify your identity"

          Maybe this won't fool you, but it would trick 90% of internet users. (And even if it was 20% instead of 90%, that's still way too much.)

          • By quacksilver 2025-06-0512:05

            I have seen it posed as 'This site has bot protection. Confirm that you are not a bot by clicking yes', trying to mimic the modern Cloudflare / Google captchas.

        • By lucideer 2025-06-058:25

          To be clear: implementing this in browser on a per site basis would be a massive improvement over in-OS/per-app granularity. I want this popup in my browser.

          But I was just pointing out that, while I'll make good use of it, it still probably won't offer sufficient protection (from themselves) for most.

      • By mystified5016 2025-06-0420:375 reply

        I can't believe that anyone still thinks a popup permission modal offers any type of security. Windows UAC has shown quite definitively that users will always click through any modal in their way without thought or comprehension.

        Besides that, approximately zero laypersons will have even the slightest clue what this permission means, the risks involved, or why they might want to prevent it. All they know is that the website they want is not working, and the website tells them to enable this or that permission. They will all blindly enable it every single time.

        • By knome 2025-06-0512:54

          I wonder how much of that is on the modal itself. We could instead pop up an alert that said "blocked an attempt to talk to your local devices, since this is generally a dangerous thing for websites to do. <dismiss>. to change this for this site, go to settings/site-security", making approval a more annoying, multi-click, deliberate affair, and defaulting the knee-jerk single-click dismissal to the safer option of refusal.

        • By ameliaquining 2025-06-0422:15

          I don't think anyone's under the impression that this is a perfect solution. But it's better than nothing, and the options are this, nothing, or a security barrier that can't be bypassed with a permission prompt. And it was determined that the latter would break too many existing sites that have legitimate (i.e., doing something the end user actively wants) reason to talk to local devices.

        • By lxgr 2025-06-0514:03

          I think it does, in many (but definitely not all) contexts.

          For example, it's pretty straightforward what camera, push notification, or location access means. Contact sharing is already a stretch ("to connect you with your friends, please grant...").

          "Local network access"? Probably not.

        • By A4ET8a8uTh0_v2 2025-06-059:06

          Maybe. But eventually they will learn. In the meantime, other users, who at least try to stay somewhat safe ( if it is even possible these days ), can make appropriate adjustments.

        • By xp84 2025-06-0421:202 reply

          This is so true. The modern Mac is a sea of Allow/Don't Allow prompts, mixed with the slightly more infantilizing alternative of the "Block" / "Open System Preferences" where you have to prove you know what you're doing by manually browsing for the app to grant the permission to, to add it to the list of ones with whatever permission.

          They're just two different approaches with the same flaw: People with no clue how tech works cannot completely protect themselves from any possible attacker, while also having sophisticated networked features. Nobody has provided a decent alternative other than some kind of fully bubble-wrapped limited account using Group Policies, to ban all those perms from even being asked for.

          • By donnachangstein 2025-06-0422:591 reply

            > The modern Mac is a sea of Allow/Don't Allow prompts

            Remember when they used to mock this as part of their marketing?

            https://www.youtube.com/watch?v=DUPxkzV1RTc

            • By GeekyBear 2025-06-051:502 reply

              Windows Vista would spawn a permissions prompt when users did something as innocuous as creating a shortcut on their desktop.

              Microsoft deserved to be mocked for that implementation.

              • By Gigachad 2025-06-057:101 reply

                MacOS shows a permission dialog when I plug my AirPods in to charge. I have no idea what I'm even giving permission for, but it pops up every time.

                • By GeekyBear 2025-06-0515:201 reply

                  Asking you if you trust a device before opening a data connection to it is simply not the same thing as asking the person who just created a shortcut if they should be allowed to do that.

                  • By esseph 2025-06-066:22

                    How do you know the person created the shortcut and not some malware trying to get a user to click on an executable and elevate permissions?

              • By AStonesThrow 2025-06-052:041 reply

                I once encountered malware on my roommate’s Windows 98 system. It was a worm designed to rewrite every image file as a VBS script that would replicate and re-infect every possible file whenever it was clicked or executed. It hid the VBS extensions and masqueraded as the original images.

                Creation of a shortcut on Windows is not necessarily innocuous. It was a common first vector to drop malware as users were accustomed to installing software that did the same thing. A Windows shortcut can hide an arbitrary pathname, arbitrary command-line arguments, a custom icon, and more; these can be modified at any time.

                So whether it was a mistake for UAC to be overzealous or obstructionist, or Microsoft was already being mocked for poor security, perhaps they weren’t wrong to raise awareness about such maneuvers.

                • By GeekyBear 2025-06-052:201 reply

                  A user creating a shortcut manually is not something that requires a permissions prompt.

                  If you want to teach users to ignore security prompts, then completely pointless nagging is how you do it.

                  • By esseph 2025-06-0614:481 reply

                    Programs running during the user session are often running as that user.

                    The "correct answer" to this is probably that there isn't a good answer here.

                    Security is a damn minefield and it's getting worse every day.

                    • By GeekyBear 2025-06-0618:511 reply

                      There is no universe in which it makes sense to ask the very user who just created a shortcut if they should have permission to create that shortcut.

                      This is why Microsoft was so widely mocked for just how bad their initial implementation of UAC was.

                      • By esseph 2025-06-07 3:42 | 1 reply

                        "iPhone Shortcuts always asks permission to access file"

                        https://discussions.apple.com/thread/254931245

                        iOS Shortcut danger

                        https://cyberpress.org/unveiling-risks-of-ios-shortcuts/

                        But anywho, cve.org lists 78 shortcut vulnerabilities across many platforms.

                        I know you'd like to believe the world we live in shouldn't require permissions for a user to create a shortcut and then access it, but that... is actually the world we live in, and has been for a very long time.

                        Security is hard and it's not getting any easier as system complexity increases.

                        If you don't believe me, ask your favorite LLM. I asked Gemini and got back what I expected to.

                        • By GeekyBear 2025-06-07 15:38

                          If the user manually creating a shortcut is so dangerous, why did Microsoft remove that permissions prompt when they fixed their terrible initial UAC implementation?

          • By Gigachad 2025-06-05 7:08 | 1 reply

            A better option would be to put Mark Zuckerberg in prison for deploying malware to a massive number of people.

            • By bmacho 2025-06-07 10:52

              And everyone who worked on it, and also everyone who keeps working in any division at Meta after learning that it is organized crime.

      • By lxgr 2025-06-05 14:00

        And annoyingly, for some reason it does not remember this decision properly. Chrome asks me about local access every few weeks, it seems.

        Yes, as a Chromecast user, please do give me a break from the prompts, macOS – or maybe just show them for Airplay with equal frequency and see how your users like that.

      • By grokkedit 2025-06-04 20:32 | 2 replies

        The problem is that without allowing it, web UIs like Synology's won't work, since they require your browser to connect to the local network... as it is, it's not great

        • By jay_kyburz 2025-06-04 20:48 | 2 replies

          This proposal is for websites outside your network contacting inside your network. I assume local IPs will still work.

          • By Marsymars 2025-06-04 21:34

            Note that the proposal also covers loopbacks, so domain names for local access would also still work.

          • By grokkedit 2025-06-05 13:09

            I'm replying to the comment that explains how macOS currently works

        • By planb 2025-06-04 20:42 | 2 replies

          Why? I’d guess requests from a local network site to itself (maybe even to others on the same network) will be allowed.

          • By zbuttram 2025-06-04 22:08

            With the proposal in the OP, I would think so yes. But the MacOS setting mentioned directly above is blanket per-app at the OS level.

          • By grokkedit 2025-06-05 13:09

            yes, but I'm replying to the comment that explains how macOS currently works

    • By broguinn 2025-06-05 12:46 | 1 reply

      This Web Security lecture by Feross Aboukhadijeh has a great example of Zoom's zero-day from 2019 that allowed anyone to force you to join a zoom meeting (and even cause arbitrary code execution), using a local server:

      https://www.youtube.com/watch?v=wLgcb4jZwGM&list=PL1y1iaEtjS...

      It's not clear to me from Google's proposal if it also restricts access to localhost, or just your local network - it'd be great if it were both, as we clearly can't rely on third parties to lock down their local servers sufficiently!

      edit: localhost won't be restricted:

      "Note that local -> local is not a local network request, as well as loopback -> anything. (See "cross-origin requests" below for a discussion on potentially expanding this definition in the future.)"

      • By Thorrez 2025-06-05 13:02 | 1 reply

        >edit: localhost won't be restricted:

        It will be restricted. This proposal isn't completely blocking all localhost and local IPs. Rather, it's preventing public sites from communicating with localhost and local IPs. E.g:

        * If evil.com makes a request to a local address it'll get blocked.

        * If evil.com makes a request to a localhost address it'll get blocked.

        * If a local address makes a request to a localhost address it'll get blocked.

        * If a local address makes a request to a local address, it'll be allowed.

        * If a local address makes a request to evil.com it'll be allowed.

        * If localhost makes a request to a localhost address it'll be allowed.

        * If localhost makes a request to a local address, it'll be allowed.

        * If localhost makes a request to evil.com it'll be allowed.
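        All eight cases reduce to one rule: a request is gated only when it crosses into a more-private address space. A rough sketch in Python (the naming is mine, and the stdlib ipaddress module only approximates the proposal's actual address-space classification):

```python
import ipaddress

def address_space(ip_str: str) -> str:
    """Roughly classify an IP into the proposal's three address spaces.
    (Real browsers resolve hostnames first and handle more edge cases.)"""
    ip = ipaddress.ip_address(ip_str)
    if ip.is_loopback:
        return "loopback"
    if ip.is_private or ip.is_link_local:
        return "local"
    return "public"

def needs_permission(initiator: str, target: str) -> bool:
    """True when the request crosses from a less-private space into a
    more-private one, i.e. it would be gated on the new permission."""
    rank = {"public": 0, "local": 1, "loopback": 2}
    return rank[address_space(initiator)] < rank[address_space(target)]
```

        For example, needs_permission("8.8.8.8", "192.168.1.10") is the public-site-to-printer case above, while needs_permission("127.0.0.1", "8.8.8.8") stays allowed.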

        • By broguinn 2025-06-05 20:28

          Ahh, thanks for clarifying! It's the origin being compared, not the context - of course.

    • By donnachangstein 2025-06-04 19:18 | 5 replies

      [flagged]

      • By kulahan 2025-06-04 19:52 | 2 replies

        I agree fully with him. I don’t care what part of your job gets harder, or what software breaks if you can’t make it work without unnecessarily invading my privacy. You could tell me it’s going to shut down the internet for 6 months and I still wouldn’t care.

        You’ll have to come up with a really strong defense for why this shouldn’t happen in order to convince most users.

        • By Aeolun 2025-06-04 22:55 | 1 reply

          It just means I run a persistent client on your device that is permanently connected to the mothership, instead of only when you have your browser open.

          • By kulahan 2025-06-05 20:57

            I’m so glad most people don’t truly consider software devs to be real engineers, because this is a perfect example of why that word deserves so much more respect than this field gives it.

        • By donnachangstein 2025-06-04 20:03 | 1 reply

          [flagged]

          • By GlacierFox 2025-06-04 20:13

            I like your "you've been *** my ass for 35 years, please feel free to keep doing it for all eternity" attitude.

      • By zaptheimpaler 2025-06-04 19:25 | 2 replies

        I'm sure it will require some work, but this is the price of security. The idea that any website I visit can start pinging/exploiting some random unsecured testing web server I have running on localhost:8080 is a massive security risk.

        • By duskwuff 2025-06-04 19:34 | 1 reply

          Or probing your local network for vulnerable HTTP servers, like insecure routers or web cameras. localhost is just the tip of the iceberg.

          • By donnachangstein 2025-06-04 19:42 | 4 replies

            Can you define "local network"? Probably not. Most large enterprises own publicly-routable IP space for internal use. Internal doesn't mean 192.168.0.0/24. foo.corp.example.com could resolve to 9.10.11.12 and still be local. What about IPv6? It's a nonsense argument fraught with corner cases.

            • By duskwuff 2025-06-04 20:01 | 1 reply

              > Can you define "local network"?

              Sure - a destination is "local" if your machine has a route to that IP which isn't via a gateway.

              If your network is large enough that it consists of multiple routed network segments, and you don't have any ACLs between those segments, then yeah, you won't be fully protected by this browser feature. But you aren't protected right now either, so nothing's getting worse, it's just not getting better for your specific use case.

              • By donnachangstein 2025-06-04 20:28 | 2 replies

                > Sure - a destination is "local" if your machine has a route to that IP which isn't via a gateway.

                Fantastic. Well, Google doesn't agree.

                The proposal defines it along RFC1918 address space boundaries. The spitballing back and forth in the GitHub issues about which imaginary TLDs they will or won't also consider "local" is absolutely horrifying.

                • By account42 2025-06-05 10:18

                  Cool so it will protect 99.999% of home networks. Compared to 0% which are protected now. Sounds great!

            • By mystifyingpoi 2025-06-04 19:48

              Not to be snarky, but that's a good example of "perfect being the enemy of good". You are totally right that there are corner cases, sure. But that doesn't stop us from tackling the low hanging fruit first. Which is, as you say, localhost and LAN (if present).

            • By eschaton 2025-06-05 0:53

              It should not even be able to communicate with the local network at all; it's a goddamn web page. It should be restricted to communicating with the server that hosts it, and that's it.

            • By emushack 2025-06-05 3:13

              They define it in the explainer this was originally based on: (https://github.com/WICG/private-network-access/blob/main/exp...)

              Quote: We extend the RFC 1918 concept of private IP addresses to build a model of network privacy.

              Concretely, there are 3 kinds of private network requests:

                  public -> private
                  public -> local
                  private -> local
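              Those boundaries are easy to check mechanically; a sketch using Python's stdlib ipaddress module (the explainer's final definition also covers loopback and link-local space, which this deliberately omits):

```python
import ipaddress

# The three RFC 1918 private IPv4 blocks the explainer starts from.
RFC1918_BLOCKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(ip_str: str) -> bool:
    """True if the address falls in one of the RFC 1918 private ranges."""
    ip = ipaddress.ip_address(ip_str)
    return any(ip in block for block in RFC1918_BLOCKS)
```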

        • By donnachangstein 2025-06-04 19:32 | 1 reply

          [flagged]

          • By hollerith 2025-06-04 19:39

            The whole browser is a massive security leak. What genius thought it was a good idea for the web page I visit in the morning to get the weather forecast to be able to run arbitrary code and to communicate with arbitrary hosts on my local network?

      • By Wobbles42 2025-06-04 20:58

        I do understand this sentiment, but isn't the tension here that security improvements by their very nature are designed to break things? Specifically the things we might consider "bad", but really that definition gets a bit squishy at the edges.

      • By protocolture 2025-06-05 4:27

        This attitude kept IE6 in production well after its natural life should have concluded.

      • By aaomidi 2025-06-04 19:51

        I’m sorry but this proposal is absolutely monumentally important.

        The fact that I have to rely on random extensions to accomplish this is unacceptable.

  • By socalgal2 2025-06-05 1:23 | 6 replies

    I wish they (Apple/Microsoft/Google/...) would do similar things for USB and Bluetooth.

    Lately, every app I install wants Bluetooth access to scan all my Bluetooth devices. I don't want that. At most, I want the app to have to declare in its manifest some specific device IDs (a short list) that it is allowed to connect to, and have the OS limit its connections to only those devices. For example, the Bose app should only be able to see Bose devices, nothing else. The CVS (pharmacy) app should only be able to connect to CVS devices, whatever those are. All I know is the app asked for permission. I denied it.

    I might even prefer it if the app had to register the device IDs and then the user were prompted, the same way camera and GPS access are prompted via the OS. The OS might see a device that CVS.app registered for in its manifest and pop up "CVS app would like to connect to device ABC: just this once / only while the app is running / always" (similar to the way iOS handles location).

    By ID, I mean some prefix that a company registers for its devices: bose.xxx, where the app's manifest says it wants to connect to "bose.*" and the OS filters.

    Similarly for USB, and maybe local network devices. Come up with an ID scheme and have the OS prevent apps from connecting to anything not matching that ID. Effectively, don't let apps browse the network, USB, or Bluetooth.
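    Such a manifest filter could be sketched in a few lines (all names here are hypothetical; no current OS implements this scheme):

```python
import fnmatch

# Hypothetical app manifest: the app declares device-ID patterns up front,
# and the OS only ever shows it devices matching those patterns.
MANIFEST_ALLOWED = ["bose.*"]

def visible_devices(all_devices: list[str]) -> list[str]:
    """Filter scan results down to what the manifest declares."""
    return [d for d in all_devices
            if any(fnmatch.fnmatch(d, pat) for pat in MANIFEST_ALLOWED)]
```

    A Bose app with this manifest would see bose.qc45 but never cvs.kiosk1.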

    • By 3eb7988a1663 2025-06-05 3:18 | 5 replies

      I am still holding out hope that eventually at least Apple will offer fake permission grants to applications. Oh, app XYZ "needs" to see my contact list to proceed? Well it gets a randomized fake list, indistinguishable from the real one. Similar with GPS.

      I have been told that WhatsApp does not let you name contacts without sharing your address book back to Facebook.

      • By ordu 2025-06-06 7:09

        Yeah, I'd like that too. I can't use my bank's app because it wants some weird permissions, like access to my contacts. I refuse to grant that, because I see no use in it for me, and so the app refuses to work.

      • By quickthrowman 2025-06-05 16:44

        Apps are not allowed to force you to share your contacts on iOS, report any apps that are asking you to do so as it’s a violation of the App Store TOS.

      • By nothrabannosir 2025-06-05 3:54 | 2 replies

        In iOS you can share a subset of your contacts. This is functionally equivalent and works as you described for WhatsApp.

        • By shantnutiwari 2025-06-05 8:46 | 3 replies

          >In iOS you can share a subset of your contacts.

          the problem is, the app must respect that.

          WhatsApp, for all the hate it gets, does.

          "Privacy"-focused Telegram doesn't-- it wouldn't work unless I shared ALL my contacts-- when I shared only a few, it kept complaining that I had to share ALL of them

          • By blacklion 2025-06-05 10:14 | 2 replies

            Is this something specific to the iOS Telegram client?

            On Android, Telegram works with contacts access denied and maintains its own, completely separate contact list (shared with desktop Telegram and other clients logged in to the same account). I've been using Telegram for longer than I've been using a smartphone, and it has a completely separate contact list (as it should).

            WhatsApp, by contrast, cannot be used without access to contacts: it doesn't let you create a WhatsApp-only contact and complains that it has no place to store one until you grant access to the phone's contact list.

            To be honest, I prefer to have separate contact lists on all my communication channels, and even the sharing of contacts between the phone app and the e-mail app (Gmail) bothers me.

            Telegram is good in this respect, since it can use its own contact list, not synchronized or shared with anything else; WhatsApp is not.

            • By kayodelycaon 2025-06-05 18:39

              I’ve never allowed Telegram on iOS to access my contacts, camera, or microphone and it’s worked just fine.

            • By HnUser12 2025-06-05 11:29

              Looks to me like it was a bug. Not giving access to any contacts broke the app completely but limited access works fine except for an annoying persistent in app notification.

          • By pabs3 2025-06-08 9:55

            How does Telegram know that the subset you gave access to isn't ALL the contacts? Seems like iOS should not leak that bit of info to the app.

          • By nothrabannosir 2025-06-05 10:37

            iOS generally solves this through App Store submission reviews so I’m surprised this isn’t a rule and that telegram got away with it. “Apps must not gate functionality behind receiving access to all contacts vs a subset” or something. They definitely do so for location access, for example.

        • By WhyNotHugo 2025-06-05 9:41

          WhatsApp specifically needs phone numbers, and you can filter which contacts you share, but not which fields. So if your family uses WhatsApp, you'd share those contacts, but you can't share ONLY their phone numbers; WhatsApp also gets their birthdays, addresses, personal notes, and any other personal information you might have stored.

          I think this feature is pretty meaningless in the way that it's implemented.

          It's also pretty annoying that applications know they only have partial permission, so they keep prompting for full permission all the time anyway.

      • By yonatan8070 2025-06-06 8:16

        Also for the camera, just feed them random noise or a user-selectable image/video

      • By baobun 2025-06-05 12:56

        GrapheneOS has this feature (save for faking GPS) fwiw

    • By totetsu 2025-06-05 1:46 | 1 reply

      Like the github 3rd party application integration. "ABC would like to see your repositories, which ones do you want to share?"

      • By kuschku 2025-06-05 12:21

        Does that UI actually let you choose? IME it just tells me what orgs & repos will be shared, with no option to choose.

    • By bsder 2025-06-05 22:06

      > Lately, every app I install, wants bluetooth access to scan all my bluetooth devices.

      Blame Apple and Google and their horrid BLE APIs.

      An app generally has to request "ALL THE PERMISSIONS!" to get RSSI, which most apps use as a (really stupid, bug-prone, broken) proxy for distance.

      What everybody wants is "time of flight"--but for some reason that continues to be mostly unsupported.

    • By rjh29 2025-06-05 10:29

      Safari doesn't support Web MIDI, apparently for this reason (fingerprinting), but that makes using any kind of MIDI web app impossible.

    • By Thorrez 2025-06-05 13:17 | 1 reply

      Are you talking about web apps, mobile apps, desktop apps, or browser extensions?

      • By socalgal2 2025-06-05 21:22 | 1 reply

        All of them.

        • By Thorrez 2025-06-06 14:54

          I think webapps already have to ask for permission for USB and bluetooth.

          Desktop apps on Windows and Linux are generally able to do anything. Read any file, etc. Locking them down with a permission system would be a big change.

    • By _bent 2025-06-05 16:32

      Apple does this for iOS 18 via the AccessorySetupKit

  • By paxys 2025-06-04 21:35 | 4 replies

    It's crazy to me that this has always been the default behavior for web browsers. A public website being able to silently access your entire filesystem would be an absurd security hole. Yet all local network services are considered fair game for XHR, and security is left to the server itself. If you are a developer and run your company's webapp on your dev machine for testing (with loose or non-existent security defaults), facebook.com or google.com or literally anyone else could be accessing it right now. Heck, think of everything people deploy unauthed on their home network because they trust their router's firewall. Does every one of them have the correct CORS configuration?

    • By 3abiton 2025-06-05 17:59 | 1 reply

      I majored in CS and I had no idea this was possible: public websites you visit have access to your local network. I need some time to process this. Besides what is suggested in the post, are there any ways to limit this abusive access?

      • By bmacho 2025-06-07 11:16 | 1 reply

        There are no mechanisms in browsers yet. The best you can do is use the OS to forbid your whole browser from accessing your local network (and use another browser only for your local network). Ask ChatGPT for methods to sandbox your browser.

        • By 3abiton 2025-06-08 1:52

          Thanks! I have already set up iptables rules for VMs to deny them local network access. I'll use the same trick for local access now, I guess.

    • By Too 2025-06-05 18:32

      What’s even crazier is that nobody learned this lesson and new protocols are created with the same systematic vulnerabilities.

      Talking about MCP agents if that’s not obvious.

    • By thaumasiotes 2025-06-05 5:24

      > Does every one of them have the correct CORS configuration?

      I would guess it's closer to 0% than 0.1%.

    • By reassess_blind 2025-06-05 8:34 | 1 reply

      The local server has to send Access-Control-Allow-Origin: * for this to work, right?

      Are there any common local web servers or services that use that as the default? Not that it’s not concerning, just wondering.

      • By meindnoch 2025-06-05 10:05 | 1 reply

        No. Simple requests [1] - such as a GET request, or a POST request with a text/plain Content-Type - don't trigger a CORS preflight. The request is made, and the browser merely blocks the requesting JS code from seeing the response if the necessary CORS response header is missing. But by that point the request has already been made. So if your local service has a GET endpoint like http://localhost:8080/launch_rockets, or a POST endpoint that doesn't strictly validate the body Content-Type, then any website can trigger it.

        [1] https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/COR...
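        A self-contained sketch of that point (the /launch_rockets endpoint is this comment's hypothetical): the server below sets no CORS headers at all, yet the request still reaches it and the side effect still fires. A browser page could trigger the same handler with a no-cors fetch or even an <img> tag.

```python
import http.server
import threading
import urllib.request

fired = []  # records every "launch" the endpoint performs

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # CORS never stops this request from *arriving*; a missing
        # Access-Control-Allow-Origin header only hides the response
        # from the page's JavaScript, after the damage is done.
        if self.path == "/launch_rockets":
            fired.append(self.client_address[0])
            self.send_response(200)
        else:
            self.send_response(404)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Stand-in for fetch("http://localhost:PORT/launch_rockets", {mode: "no-cors"})
urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/launch_rockets")
server.shutdown()
```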

        • By reassess_blind 2025-06-05 10:17

          I was thinking in terms of response exfiltration, but yeah, better put that /launch_rockets endpoint behind some auth.

HackerNews