Brave overhauled its Rust adblock engine with FlatBuffers, cutting memory 75%

2026-01-05 17:34 | brave.com

Brave has overhauled its Rust-based adblock engine to reduce memory consumption by 75%, bringing better battery life and smoother multitasking to all users.

This is the 36th post in an ongoing series describing new privacy features in Brave. This post describes work done by Mikhail Atuchin (Sr. Staff Engineer), Pavel Beloborodov (Sr. Software Engineer) and Anton Lazarev (Staff Adblock Engineer). It was written by Shivan Kaul Sahib (VP, Privacy and Security).

Brave has overhauled its Rust-based adblock engine to reduce memory consumption by 75%, bringing better battery life and smoother multitasking to all users. The upgrade represents roughly 45 MB of memory savings for the Brave browser on every platform (Android, iOS and desktop) by default, and scales even higher for users with additional adblocking lists enabled. These performance boosts are live in Brave v1.85, with additional optimizations coming in v1.86.

Screenshot comparison of versions 1.79.118 and 1.85.118 of Brave, demonstrating a drop in memory consumption from 162 MB to 104 MB.

As announced in June and October last year, we achieved this major memory milestone by iteratively refactoring the adblock-rust engine to use FlatBuffers, a compact and efficient storage format. This architectural transition allowed us to move the roughly 100,000 adblock filters shipped by default from standard, heap-allocated Rust data structures (such as Vecs, HashMaps, and structs) into a specialized, zero-copy binary format. 
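As a rough illustration of what zero-copy buys here, consider a hand-rolled flat format. This is a deliberately simplified sketch, not Brave's actual FlatBuffers schema: all filters live in one contiguous byte buffer, and lookups hand out `&str` slices that borrow from that buffer instead of allocating a `String` per filter.

```rust
/// Pack filters as: [u32 count][u32 end-offset per filter][utf8 bytes].
fn pack(filters: &[&str]) -> Vec<u8> {
    let mut buf = Vec::new();
    buf.extend_from_slice(&(filters.len() as u32).to_le_bytes());
    let mut end = 0u32;
    for f in filters {
        end += f.len() as u32;
        buf.extend_from_slice(&end.to_le_bytes());
    }
    for f in filters {
        buf.extend_from_slice(f.as_bytes());
    }
    buf
}

/// Borrow the i-th filter straight out of the buffer: no allocation, no copy.
fn get(buf: &[u8], i: usize) -> &str {
    let count = u32::from_le_bytes(buf[0..4].try_into().unwrap()) as usize;
    assert!(i < count);
    let off = |j: usize| -> usize {
        u32::from_le_bytes(buf[4 + 4 * j..8 + 4 * j].try_into().unwrap()) as usize
    };
    let data = 4 + 4 * count; // start of the string bytes
    let (start, end) = (if i == 0 { 0 } else { off(i - 1) }, off(i));
    std::str::from_utf8(&buf[data + start..data + end]).unwrap()
}

fn main() {
    let buf = pack(&["||ads.example^", "##.banner"]);
    // The whole filter set is one allocation; reads are just slicing.
    assert_eq!(get(&buf, 0), "||ads.example^");
    assert_eq!(get(&buf, 1), "##.banner");
}
```

FlatBuffers layers schemas, vtables, and forward/backward compatibility on top of this idea, but the memory saving comes from the same property: the whole filter set occupies one contiguous allocation rather than roughly 100,000 separately heap-allocated structures.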

Along the way, we completed several other key performance optimizations (some of these are coming in v1.86):

  1. Memory management: Used stack-allocated vectors to reduce memory allocations by 19% and improved building time by ~15%.
  2. Matching speed: Improved filter matching performance by 13% by tokenizing common regex patterns.
  3. Sharing resources: Resources are shared between instantiations of adblock engines, saving ~2 MB of memory on desktop.
  4. Storage efficiency: Optimized internal resource storage memory by 30%.
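The tokenization idea in item 2 can be sketched as follows; this is a hedged, std-only illustration rather than adblock-rust's actual code. Literal alphanumeric runs are pulled out of a pattern and used as a cheap containment pre-check, so most URLs never reach the full regex engine:

```rust
/// Extract literal alphanumeric runs ("tokens") from a filter pattern.
fn tokens(pattern: &str) -> Vec<String> {
    pattern
        .split(|c: char| !c.is_ascii_alphanumeric())
        .filter(|t| t.len() >= 4) // short runs are too common to be selective
        .map(|t| t.to_ascii_lowercase())
        .collect()
}

/// Cheap pre-check: a URL that matches the pattern must contain every literal
/// token, so if any token is absent we can skip the expensive full match.
fn might_match(url: &str, pattern: &str) -> bool {
    let url = url.to_ascii_lowercase();
    tokens(pattern).iter().all(|t| url.contains(t.as_str()))
}

fn main() {
    let pattern = "/banner/*/tracker.js";
    // Most URLs fail the token check and never reach the regex engine.
    assert!(might_match("https://cdn.test/banner/x/tracker.js", pattern));
    assert!(!might_match("https://cdn.test/styles/app.css", pattern));
}
```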

Saving 45+ MB of memory is a significant milestone in the world of browser performance and a massive win for users on mobile and older hardware. While Brave already improves performance on the Web by blocking invasive ads and trackers, our latest engineering effort ensures that our own built-in protections are as lightweight and invisible as possible. Unlike adblocking in other browsers, Brave’s adblocking engine is built into the browser and maintained by our privacy team. Such deep optimizations are impossible for extension-based blockers, which are restricted by browser extension APIs and sandboxing. This native architecture is also why Brave’s own ad and tracker blocking is entirely unaffected by Manifest V3.

This performance boost is the culmination of several months of in-depth cross-team engineering work between our performance and privacy teams. It marks a significant leap in the browser’s efficiency and ensures that we continue shipping best-in-class privacy to over 100 million users.



Comments

  • By nicoburns 2026-01-05 22:31 (3 replies)

    Brave's adblocking engine is a neat example of open source and the ease of sharing libraries in Rust. It uses Servo crates (also used by Firefox) to parse CSS and evaluate selectors, and is then itself published as a crate on crates.io where it can be pulled in by others who may want to use it.

    • By wodenokoto 2026-01-06 4:23 (1 reply)

      So brave has two CSS engines? One for rendering and one for blocking?

      • By hu3 2026-01-06 10:42 (3 replies)

        Yes. For blocking you can afford a less mature CSS engine; it's a tradeoff for performance.

        • By nicoburns 2026-01-06 13:14

          The `selectors` crate is pretty mature to be fair. It's what's used in Firefox for all CSS selector matching. The main advantage of using it is that it's modular so you can just pull that part out without the entire CSS engine.
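          For a sense of how little you need to pull in, a Cargo.toml along these lines (versions left unpinned here; check crates.io for current releases) gets you Servo's parsing and selector-matching crates without the rest of the CSS engine:

```toml
[dependencies]
# Servo's CSS tokenizer/parser and its selector-matching crate.
cssparser = "*"  # pin real versions in practice
selectors = "*"
```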

        • By db48x 2026-01-06 11:12

          Also the filters for adblocking have extended the CSS selector syntax to add extra features, and you might not want those to leak into your parser for stylesheets.

        • By Retr0id 2026-01-06 12:18 (2 replies)

          I wonder if anti-adblock devs will ever take advantage of the difference between the two

          • By theultdev 2026-01-06 15:09 (3 replies)

            Used to work in this realm. It doesn't matter.

            Easylist will contact you, strongarm you into disabling your countermeasures and threaten to block all JS on your page if you don't comply.

            So no ad servers can load, no prebid, nothing will function/load if the user has an adblocker that uses easylist (all of them) installed.

            • By jzebedee 2026-01-06 17:59

              That's amazing. I just assumed the ad lists were volunteer maintained like a wiki. I'll be sure to use Easylist now that I know they're also advocating for users while punishing bad advertisers.

            • By satvikpendem 2026-01-06 17:45 (1 reply)

              That is hilarious to be fair, like a modern day Robin Hood.

              • By theultdev 2026-01-06 17:51 (2 replies)

                It was an interesting experience.

                I'm still not entirely sure it's for the best or not.

                On one hand, it is a concentration of power into the hands of a few people.

                On the other, it is for a good cause, to maintain a list of ad network and site banners that drain resources, cause privacy issues, etc.

                The Easylist people aren't saints. They get paid off by Google to allow "Acceptable Ads". So nowadays you just show a different campaign if your user is running an adblocker.

                • By satvikpendem 2026-01-06 17:57 (1 reply)

                  Well, I run EasyList for most ads and then also other filters to remove "acceptable ads" so it works well for me.

                  • By theultdev 2026-01-06 18:00

                    I wasn't implying there weren't workarounds.

                    Just that Easylist is indirectly funded by ad revenue (google)

                    When it comes to ads, it's about the bulk of people, most don't run anything other than the default lists.

                • By antonok 2026-01-07 17:46

                  You're probably thinking of Adblock Plus? Acceptable Ads is their program; EasyList has no such policy or ties.

            • By acdw 2026-01-06 16:52

              HAHAHAH Hell yeah that's praxis baby die mad about it

          • By goku12 2026-01-06 12:39

            That's a lot of work to bypass the blocks on a browser that's far from the market leader. Now, even if the browser does become popular enough in the future to be targeted, the developers would probably gain enough resources and support to replace one of the engines with the other.

    • By nineteen999 2026-01-05 23:28 (4 replies)

      At risk like node/npm with all the supply-chain attacks then?

      Or is there something that cargo does to manage it differently (due diligence?).

      • By nicoburns 2026-01-05 23:53 (1 reply)

        You can use "cargo vendor" to copy-paste your dependencies C-style, and audit them all if you want. Mozilla does this for Firefox.

        Cargo does have lock files by default. But we really need better tooling for auditing (and enforcing that auditing has happened) to properly solve this.
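        Concretely, `cargo vendor` copies every dependency into a local directory and prints a snippet like this for `.cargo/config.toml`, which redirects crates.io to the vendored, auditable copies:

```toml
[source.crates-io]
replace-with = "vendored-sources"

[source.vendored-sources]
directory = "vendor"
```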

        • By drnick1 2026-01-06 0:18 (3 replies)

          I think the broader point being made here is that the C-style approach is to extract a minimal subset of the dependency and tightly review it and integrate it into your code. The Rust/Python approach is to use cargo/pip and treat the dependency as a black box outside your project.

          • By Cyph0n 2026-01-06 0:30 (3 replies)

            Advocates of the C approach often gloss over the increased maintenance burden, especially when it comes to security issues. In essence, you’re signing up to maintain a limited fork & watch for CVEs separately from upstream.

            So it's ultimately a trade off rather than a strictly superior solution.

            Also, nothing in Rust prevents you from doing the same thing. In fact, I would argue that Cargo makes this process easier.

            • By ronjakoi 2026-01-06 1:17 (4 replies)

              But that's what Linux distros are for, package maintainers watch the CVEs for you, and all you have to do is "apt upgrade"

              • By Cyph0n 2026-01-06 1:34 (1 reply)

                Not sure I follow. Suppose you tore out a portion of libxml2 for use in your HTTP server. A CVE is filed against libxml2 that is related to the subset you tore out. Obviously, your server doesn't link against libxml2. How exactly would distro maintainers know to include your package in their list?

                • By saagarjha 2026-01-06 5:12 (1 reply)

                  You’d list it in your attribution?

                  • By Cyph0n 2026-01-06 12:27 (1 reply)

                    I am unfamiliar with the details of distro packaging. Do they commonly use the attribution to route CVEs?

                    Regardless, the maintenance burden remains.

                    • By BenjiWiebe 2026-01-07 5:51

                      I believe some distros require un-vendoring before accepting the package.

                      If the code you vendored was well hidden so the distro maintainer didn't notice, perhaps the bad guys would also fail to realize you were using (for instance) libxml2, and not consider your software a target for attack.

              • By ntqz 2026-01-06 1:54

                That's assuming you're using dynamically linked libraries/shared libraries. They're talking about "vendoring" the library into a statically linked binary or its own app-specific DLL.

              • By nyrikki 2026-01-06 14:53

                Be very careful with that assumption.

                The distros try, but hit one complex problem with a project that holds strong opinions and you may not have a fix.

                The GNOME keyring secrets being available to any process running under your UID, unless that process opts into a proxy, is one example.

                Looking at how every browser and busybox is exempted from apparmor is another.

                It is not uncommon to punt the responsibility to users.

              • By MrJohz 2026-01-06 2:19

                In theory yes, but in practice I don't think you could build something like Servo very easily like that. Servo is a browser, but it's also purposefully designed to be a browser-developer's toolkit. It is very modular, and lots of pieces (like the aforementioned CSS selector library) are broken out into separate packages that anyone can then use in other projects. And Servo isn't alone in this.

                However, when you install Servo, you just install a single artefact. You don't need to juggle different versions of these different packages to make sure they're all compatible with each other, because the Servo team have already done that and compiled the result as a single static binary.

                This creates a lot of flexibility. If the Servo maintainers think they need to make a breaking change somewhere, they can just do that without breaking things for other people. They depend internally on the newer version, but other projects can still continue using the older version, and end-users and distros don't need to worry about how best to package the two incompatible versions and how to make sure that the right ones are installed, because it's all statically built.

                And it's like this all the way down. The regex crate is a fairly standard package in the ecosystem for working with regexes, and most people will just depend on it directly if they need that functionality. But again, it's not just a regex library, but a toolkit made up of the parts needed to build a regex library, and if you only need some of those parts (maybe fast substring matching, or a regex parser without the implementation), then those are available. They're all maintained by the same person, but split up in a way that makes the package very flexible for others to take exactly what they need.

                In theory, all this is possible with traditional distro packages, but in practice, you almost never actually see this level of modularity because of all the complexity it brings. With Rust, an application can easily lock its dependencies, and only upgrade on its own time when needed (or when security updates are needed). But with the traditional model, the developers of an application can't really rely on the exact versions of dependencies being installed - instead, they need to trust that the distro maintainers have put together compatible versions of everything, and that the result works. And when something goes wrong, the developers also need to figure out which versions exactly were involved, and whether the problem exists only with a certain combination of dependencies, or is a general application problem.

                All this means that it's unlikely that Servo would exist in its current form if it were packaged and distributed under the traditional package manager system, because that would create so much more work for everyone involved.

            • By darkwater 2026-01-06 12:27

              And advocates of the opposite approach created the dependencies hellscape that NPM is nowadays.

            • By cpuguy83 2026-01-06 5:17

              I mean, that's exactly what you are doing with every single dependency you take on regardless of language.

          • By SkiFire13 2026-01-06 9:36

            Let's be real about dependencies https://wiki.alopex.li/LetsBeRealAboutDependencies seems to give a different perspective on C dependencies though.

          • By estebank 2026-01-06 16:58

            > the C-style approach is to extract a minimal subset of the dependency and tightly review it and integrate it into your code. The Rust/Python approach is to use cargo/pip and treat the dependency as a black box outside your project.

            The Rust approach is to split-off a minimal subset of functionality from your project onto an independent sub-crate, which can then be depended on and audited independently from the larger project. You don't need to get all of ripgrep[1] in order to get access to its engine[2] (which is further disentangled for more granular use).

            Beyond the specifics of how you acquire and keep that code you depend on up to date (including checking for CVEs), the work to check the code from your dependencies is roughly the same and scales with the size of the code. More, smaller dependencies vs one large dependency makes no difference if the aggregate of the former is roughly the size of the monolith. And if you're splitting off code from a monolith, you're running the risk of using it in a way that it was never designed to work (for example, maybe it relies on invariants maintained by other parts of the library).

            In my opinion, more, smaller dependencies managed by a system capable of keeping track of the specific version of code you depend on, with structured data that allows you to perform checks on all your dependencies at once in an automated way, is a much better engineering practice than "copy some code from some project". Vendoring is anathema to proper security practices (unless you have other mechanisms to deal with the vendoring, at which point you have a package manager by another name).

            [1]: https://crates.io/crates/ripgrep

            [2]: https://crates.io/crates/grep/

      • By Jalad 2026-01-05 23:36 (1 reply)

        Supply-chain attacks aren't really a property of the dependency management system

        Not having a dependency management system isn't a solution to supply chain attacks, auditing your dependencies is

        • By ashishb 2026-01-05 23:52 (7 replies)

          > auditing your dependencies is

          How do you do that practically? Do you read the source of every single package before doing a `brew update` or `npm update`?

          What if these sources include binary packages?

          The popular Javascript React framework has 15K direct and 2K indirect dependencies - https://deps.dev/npm/react/19.2.3

          Can anyone even review it in a month? And they publish a new update weekly.

          • By minitech 2026-01-06 0:00 (1 reply)

            > The popular Javascript React framework has 15K direct and 2K indirect dependencies - https://deps.dev/npm/react/19.2.3

            You’re looking at the number of dependents. The React package has no dependencies.

            Asides:

            > Do you read the source of every single package before doing a `brew update` or `npm update`?

            Yes, some combination of doing that or delegating it to trusted parties is required. (The difficulty should inform dependency choices.)

            > What if these sources include binary packages?

            Reproducible builds, or don’t use those packages.

            • By ashishb 2026-01-06 0:44 (2 replies)

              > You’re looking at the number of dependents. The React package has no dependencies.

              Indeed.

              My apologies for misinterpreting the link that I posted.

              Consider "devDependencies" here

              https://github.com/facebook/react/blob/main/package.json

              As far as I know, these 100+ dev dependencies are installed by default. Yes, you can probably avoid it, but it will likely break something during the build process, and most people just stick to the default anyway.

              > Reproducible builds, or don’t use those packages.

              A lot of things are not reproducible/hermetic builds. Even GitHub Actions is not reproducible https://nesbitt.io/2025/12/06/github-actions-package-manager...

              Most frontend frameworks are not reproducible either.

              > don’t use those packages.

              And do what?

              • By nicoburns 2026-01-06 0:55 (2 replies)

                > As far as I know, these 100+ dev dependencies are installed by default.

                devDependencies should only be installed if you're developing the React library itself. They won't be installed if you just depend on React.

                • By ashishb 2026-01-06 1:35 (2 replies)

                  > They won't be installed if you just depend on React.

                  Please correct me if I am wrong, here's my understanding.

                  "npm install installs both dependencies and dev-dependencies unless NODE_ENV is set to production."

                  • By sponnath 2026-01-06 1:42 (1 reply)

                    It does not recursively install dev-dependencies.

                    • By ashishb 2026-01-06 2:11 (2 replies)

                      > It does not recursively install dev-dependencies.

                      So, these ~100 [direct] dev dependencies are installed by anyone who does `npm install react`, right?

                      • By frio 2026-01-06 2:17

                        No. They’re only installed if you git clone react and npm install inside your clone.

                        They are only installed for the topmost package (the one you are working on), npm does not recurse through all your dependencies and install their devDependencies.

                      • By SkiFire13 2026-01-06 13:35

                        > ~100 [direct]

                        When you do `npm install react` the direct dependency is `react`. All of react's dependencies are indirect.

                  • By rafram 2026-01-06 15:54

                    Run `npm install react` and see how many packages it says it added. (One.)

                • By remexre 2026-01-06 15:01

                  If you're trying to audit React, don't you either need to audit its build artifacts rather than its source, or audit those dev dependencies too?

              • By timcobb 2026-01-06 2:18 (1 reply)

                > And do what?

                Keep on keepin on

          • By johncolanduoni 2026-01-06 7:09

            The best tool for your median software-producing organization, who can’t just hire a team of engineers to do this, is update embargoes. You block updating packages until they’ve been on the registry for a month or whatever by default, allowing explicit exceptions if needed. It would protect you from all the major supply-chain attacks that have been caught in the wild.

            > The popular Javascript React framework has 15K direct and 2K indirect dependencies - https://deps.dev/npm/react/19.2.3

            You’re looking at dependents. The core React package has no dependencies.

          • By QuiEgo 2026-01-06 1:30

            In security-sensitive code, you take dependencies sparingly, audit them, and lock to the version you audited and then only take updates on a rigid schedule (with time for new audits baked in) or under emergency conditions only.

            Not all dependencies are created equal. A dependency with millions of users under active development with a corporate sponsor that has a posted policy with an SLA to respond to security issues is an example of a low-risk dependency. Someone's side project with only a few active users and no way to contact the author is an example of a high-risk dependency. A dependency that forces you to take lots of indirect dependencies would be a high-risk dependency.

            Here's an example dependency policy for something security critical: https://github.com/tock/tock/blob/master/doc/ExternalDepende...

            Practically, unless your code is super super security sensitive (something like a root of trust), you won't be able to review everything. You end up going for "good" dependencies that are lower risk. You throw automated fuzzing and linting tools at it, and these days ask AI to audit it as well.

            You always have to ask: what are the odds I do something dumb and introduce a security bug vs what are the odds I pull a dependency with a security bug. If there's already "battle hardened" code out there, it's usually lower risk to take the dep than do it yourself.

            This whole thing is not a science, you have to look at it case-by-case.

          • By j1elo 2026-01-06 0:05

            If that is really the case (I don't know numbers about React), projects with sane security criteria would either only jump between versions that have passed a complete verification process (think industry certifications), or decide that such an enormous number of dependencies renders the framework an undesirable tool and simply avoid it. What's not serious is living the life and incorporating 15-17K dependencies blindly because YOLO.

            (so yes, I'm stating that 99% of JS devs who _do_ precisely that, are not being serious, but at the same time I understand they just follow the "best practices" that the ecosystem pushes downstream, so it's understandable that most don't want to swim against the current when the whole ecosystem itself is not being serious either)

          • By goku12 2026-01-06 12:58

            > How do you do that practically? Do you read the source of every single package before doing a `brew update` or `npm update`?

            There are several ways to do this. What you mentioned is the brute-force method of security audits. That may be impractical as you allude to. Perhaps there are tools designed to catch security bugs in the source code. While they will never be perfect, these tools should significantly reduce the manual effort required.

            Another obvious approach is to crowd source the verification. This can be achieved through security advisory databases like Rust's rustsec [1] service. Rust has tools that can use the data from rustsec to do the audit (cargo-audit). There's even a way to embed the dependency tree information in the target binary. Similar tools must exist for other languages too.
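            For Rust, that crowd-sourced audit looks like this in practice; cargo-audit checks your Cargo.lock against the RustSec advisory database:

```shell
cargo install cargo-audit   # one-time install of the RustSec checker
cargo audit                 # scan Cargo.lock against rustsec.org advisories
```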

            > What if these sources include binary packages?

            Binaries can be audited if reproducible builds are enforced. Otherwise, it's an obvious supply chain risk. That's why distros and corporations prefer to build their software from source.

            [1] https://rustsec.org/

          • By rcxdude 2026-01-06 0:07 (2 replies)

            More useful than reading the code, in most cases, is looking at who's behind the code. Can you identify the author? Do they have an identity and reputation in the space? Are you looking at the version of the package they manage? People often freak out about the number of packages in such ecosystems but what matters a lot more is how many different people are in your dependency tree, who they are, and how they operate.

            (The next most useful step, in the case where someone in your dependency tree is pwned, is to not have automated systems that update to the latest version frequently. Hang back a few days or so at least so that any damage can be contained. Cargo does not update to the latest version of a dependency on a build because of its lockfiles: you need to run an update manually)

            • By nicoburns 2026-01-06 20:16

              > More useful than reading the code, in most cases, is looking at who's behind the code. Can you identify the author? Do they have an identity and reputation in the space?

              That doesn't necessarily help you in the case of supply chains attacks. A large proportion of them are spread through compromised credentials. So even if the author of a package is reputable, you may still get malware through that package.

          • By nicoburns 2026-01-05 23:55 (1 reply)

            Normally it would only be the diff from a previous version. But yes, it's not really practical for small companies or individuals atm. Larger companies do exactly this.

            We need better tooling to enable crowdsourcing and make it accessible for everyone.

      • By shatsky 2026-01-06 3:01

        I don't know much about node, but cargo has a lock file with hashes, which prevents dep substitution unless the dev decides to update the lock file. Updating the lock file carries the same risks as the initial decision to depend on the deps.
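        For illustration, a Cargo.lock entry looks roughly like this (the package name and checksum below are placeholders, not real values); the checksum pins the exact bytes of the published crate, so a substituted tarball fails the build:

```toml
[[package]]
name = "some-dep"
version = "1.2.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "<hex digest of the published .crate file>"
```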

      • By pxc 2026-01-06 0:33 (3 replies)

        Edit: I misremembered a Rust crates capability (pre- and post-install hooks), so my comment was useless and misleading.

        • By TheDong 2026-01-06 0:55

          Rust crates run arbitrary code more often at build/install time than npm packages do.

          Some people use 'pnpm', which only runs installScripts for a whitelisted subset of packages, so an appreciable fraction of the npm ecosystem (those that don't use npm or yarn, but pnpm) do not run scripts by default.

          Cargo compiles and runs `build.rs` for all dependencies, and there's no real alternative which doesn't.

        • By steveklabnik 2026-01-06 0:49 (2 replies)

          Rust crates can run arbitrary code at build time: https://doc.rust-lang.org/cargo/reference/build-scripts.html

        • By nasso_dev 2026-01-06 0:53 (1 reply)

          Aren't procedural macros and build.rs arbitrary code being executed at build time?

          • By tuetuopay 2026-01-06 9:56

            Pretty much, yes. And they don’t have much as far as isolation goes. It’s a bit frightening honestly.

            It does unlock some interesting things to be sure, like sqlx’ macros that check the query at compile time by connecting to the database and checking the query against it. If this sounds like the compiler connecting to a database, well, it’s because it is.

    • By shatsky 2026-01-06 2:18 (6 replies)

      And yet the Rust ecosystem practically killed runtime library sharing, didn't it? With this mentality that every program is not a building block of a larger system to be used by maintainers but a final product, statically linked with concrete dependency versions specified at development time. And then even multiple worker processes of the same app can't share common code in memory, like this lib, or a UI toolkit, multimedia decoders, etc., right?

      PS. Actually I'll risk sharing my thoughts about it (I'm new to Rust): https://shatsky.github.io/notes/2025-12-22_runtime-code-shar...

      • By RadiozRadioz 2026-01-06 9:22

        As a user and developer, runtime is my least favourite place for dependency and library errors to occur. I can't even begin to count the hours, days, I've spent satisfying runtime dependencies of programs. Cannot load library X, fix it, then cannot load library Y, fix it, then library Z is the wrong version, then a glibc mismatch for good measure, repeat.

        I'd give a gig of my memory to never have to deal with that again.

      • By no_wizard 2026-01-06 2:34 (5 replies)

        if I recall correctly Rust does not support any form of dynamic linking or library loading.

        Most of the community I’ve interacted with are big on either embedding a scripting engine or WASM. Lots of momentum on WASM based plugins for stuff.

        It’s a weakness for both Rust and Go if I recall correctly

        • By SkiFire13 2026-01-06 9:19

          > if I recall correctly Rust does not support any form of dynamic linking or library loading.

          Rust supports two kinds of dynamic linking:

          - `dylib` crate types create dynamic libraries that use the Rust ABI. They are only useful within a single project though, since they are only guaranteed to work with the crate that depended on them at compilation time.

          - `cdylib` crate types with exported `extern "C"` functions; this creates a typical shared library in the C way, but you also need to implement the whole interface in a C-like unsafe subset of Rust.

          Neither is ideal, but if you really want to write a shared library you can do it, it's just not a great experience. This is part of the reason why it's often preferred to use scripting languages or WASM (the other reason being that scripting languages and WASM are sandboxed and hence more secure by default).

          I also want to note that a common misconception seems to be that Rust should allow any crate to be compiled to a shared library. This is not possible for a series of technical reasons, and whatever solution will be found will have to somehow distinguish "source only" crates from those that will be compilable as shared libraries, similarly to how C++ has header-only libraries.
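          A minimal sketch of the `cdylib` route (illustrative names, assuming `crate-type = ["cdylib"]` in Cargo.toml): the exported surface is C-shaped, and the safety obligations move to the boundary.

```rust
/// Exported with the C calling convention; in a real cdylib you would also
/// add #[no_mangle] so the symbol keeps its unmangled name for C callers.
pub extern "C" fn filter_count(list: *const u8, len: usize) -> usize {
    // Rebuild a slice from the raw pointer; the caller guarantees validity.
    let bytes = unsafe { std::slice::from_raw_parts(list, len) };
    // Count newline-separated filter lines — a stand-in for real engine work.
    bytes.split(|&b| b == b'\n').filter(|l| !l.is_empty()).count()
}

fn main() {
    // Exercise the exported function in-process.
    let list = b"||ads.example^\n##.banner\n";
    assert_eq!(filter_count(list.as_ptr(), list.len()), 2);
}
```

          Anything richer than scalars and byte buffers (Strings, Vecs, trait objects) has to be flattened into C-representable types at this boundary, which is the "not a great experience" part.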

        • By shatsky 2026-01-06 2:40 (3 replies)

          It does support dynamic libs, but virtually all important Rust software seems to be written without any consideration for it.

          • By johncolanduoni 2026-01-06 6:59 (3 replies)

            Rust ABI (as opposed to C ABI) dynamic libraries are incredibly fragile with regard to compiler/build environment changes. Trying to actually swap them out between separate builds is pretty much unsupported. So most of the benefits of dynamic libraries (sharing code between different builds, updating an individual dependency) are not achieved.

            They’re only really useful if you’re distributing multiple binary executables that share most of the underlying code, and you want to save some disk space in the final install. The standard Rust toolchain builds use them for this purpose last time I checked.

            • By josephg 2026-01-06 7:26

              Yep that’s right. I’ve been working on a game with bevy. The Bevy game engine supports being dynamic linked during development in order to keep compile times down. It works great.

            • By ComputerGuru 2026-01-06 18:35

              People thinking C++ libraries magically solve this ABI issue is the other side of the coin. I’ve filed numerous bugs against packages shipping precompiled libraries that misuse the C ABI, so that (owned) objects cross the ABI boundary and end up causing heap corruption (with a segfault only if you’re lucky) and other much more subtle heisenbugs.

            • By goku12 2026-01-0612:021 reply

              Rust does support C ABI through cdylib (as opposed to the unstable dylib ABI). This is used widely, especially for FFI. An example of this is Python modules in Rust using PyO3 [1].

              [1] https://pyo3.rs/v0.15.1/#using-rust-from-python
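              As a rough sketch of what a C-ABI export from Rust looks like (the function name is hypothetical, and in a real `cdylib` you'd also set `crate-type = ["cdylib"]` in Cargo.toml; here `main` calls the symbol directly so the snippet runs standalone):

```rust
// Hypothetical example of a C-ABI export from Rust. In a real cdylib this
// symbol would be loaded by a foreign caller (Python via ctypes, C, another
// Rust binary via dlopen); here main() calls it directly.
#[no_mangle]
pub extern "C" fn add_u32(a: u32, b: u32) -> u32 {
    // Wrapping add: C callers won't expect a Rust panic on overflow.
    a.wrapping_add(b)
}

fn main() {
    assert_eq!(add_u32(2, 3), 5);
    println!("add_u32(2, 3) = {}", add_u32(2, 3));
}
```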

              • By johncolanduoni 2026-01-0616:301 reply

                Yeah but you can’t use the vast majority of crates that way. You have to create a separate unsafe C ABI, and then use it in the caller. Ergonomically, it’s like your dependency was written in C and you had to write a safe wrapper.

                • By ComputerGuru 2026-01-0618:37

                  C++ has the opposite problem, where people think they can just dynamically or statically link against any API and be OK. You can’t cross the ABI barrier without a) knowing it’s there, and b) respecting its rules.

                  You get lucky when all assets have been compiled with the same toolchain (with the same options) but will lose your mind when you have issues caused by this thing neither you nor the package authors knew existed.

          • By undeveloper 2026-01-067:421 reply

            the rust abi is explicitly unstable. there are community projects to bring dynamic linking, but it's mostly not worth it.

          • By nottorp 2026-01-069:522 reply

            RAM is cheap mmmkay?

            Or at least it used to be when they designed the thing…

            • By RealityVoid 2026-01-0610:181 reply

              Is it a RAM problem though? My understanding is that each process loads the shared library in its own memory space, so it's only a ROM/HDD space problem.

              • By nottorp 2026-01-0610:19

                If you stop using shared libraries each application will have its own copy in ram…

            • By goodpoint 2026-01-0613:55

              The problem is vulnerable dependencies and having to update hundreds of binaries when a vuln is fixed.

        • By PunchyHamster 2026-01-068:041 reply

          Go supports plugins (essentially libraries), but they come with a bunch of caveats.

          You can also link to C libs from both. I guess you could technically make a Rust lib with a C interface and load it from Rust, but that's obviously suboptimal.

          • By goku12 2026-01-0612:23

            The dynamic libraries that use the unstable Rust ABI are called `dylib`s, while those that use the stable C ABI are called `cdylib`s. Suppose a stable version of the Rust ABI were defined: what would be the point of putting dynamic libraries that follow this ABI in the system? Only Rust would be able to open them, whereas system shared libraries are traditionally expected to work across languages using the C ABI and language-specific wrappers. By extension, this is a problem that affects all languages that have more complex features than C. Why would this be considered a Rust flaw?

        • By oooyay 2026-01-065:381 reply

          Go definitely supports dynamic libraries

          • By no_wizard 2026-01-065:461 reply

            I don’t mean Dylibs like you find on macOS, I mean loading a binary lib from an arbitrary directory and being able to use it, without compiling it into the program.

            It’s been some time since I looked into this so I wanted to be clear on what I meant. I’d be elated to be wrong though

            • By Groxx 2026-01-066:051 reply

              Both handle that just fine. Go does this via cgo, and has for over a decade.

              You do still need to write the interfacing code, but that's true for all languages.

              • By vlovich123 2026-01-066:452 reply

                Then by that argument Rust also supports dynamic linking. Actually it’s even better because that approach sacrifices less performance (if done well) than cgo inherently implies.

                • By josephg 2026-01-067:23

                  Well, Rust does support dynamic linking. It just doesn’t (yet) offer a stable ABI. So you need to either use C FFI over the dynamic linking bridge, or make sure all linked libraries are compiled with the same version of the rust compiler.

                • By Groxx 2026-01-066:46

                  It was built to do that, yes

      • By pezezin 2026-01-069:261 reply

        In any modern OS with CoW forking/paging, multiple worker processes of the same app will share code segments by default.

        • By rollcat 2026-01-0611:21

          COW on fork has been a given for decades.

          You can't COW two different libraries, even if the libraries in question share the source code text.

      • By Groxx 2026-01-066:21

        Not really? You just need to define the stable ABI: you do that via `#[repr(C)]` and other FFI stuff that has been around since essentially the beginning. Then it handles it just fine, for both the code using a runtime library and for writing those runtime libraries.

        People writing Rust generally prefer to stay within Rust though, because FFI gives up a lot of safety (normally) and is an optimization boundary (for most purposes). And those are two major reasons people choose Rust in the first place. So yeah, most code is just statically compiled in. It's easier to build (like in all languages) and is generally preferred unless there's a reason to make it dynamic.
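        A minimal sketch of that `#[repr(C)]` approach (hypothetical type and function names; runnable standalone, though in practice the struct and function would sit on opposite sides of a library boundary):

```rust
// Hypothetical sketch: #[repr(C)] gives the struct a defined C layout, so
// both sides of an FFI / dynamic-library boundary agree on field order and
// padding. extern "C" likewise pins down the calling convention.
#[repr(C)]
pub struct Point {
    pub x: f64,
    pub y: f64,
}

#[no_mangle]
pub extern "C" fn point_norm(p: Point) -> f64 {
    (p.x * p.x + p.y * p.y).sqrt()
}

fn main() {
    let p = Point { x: 3.0, y: 4.0 };
    assert_eq!(point_norm(p), 5.0);
}
```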

      • By terafo 2026-01-062:366 reply

        Dynamic libraries are a dumpster fire with how they are implemented right now, and I'd really prefer everything to be statically linked. But ideally, I'd like to see exploration of a hybrid solution, where library code is tagged inside a binary, so if the OS detects that multiple applications are using the same version of a library, it's not duplicated in RAM. Such a design would also allow for libraries to be updated if absolutely necessary, either by runtime or some kind of package manager.

        • By vlovich123 2026-01-066:48

          OSes already typically look for duplicated code pages as opportunities to dedupe. There don’t need to be special cases for code pages, because the OS will also find runtime heap duplicates that appear to be read-only (e.g. your JS JIT pages shared between sites).

          One challenge will be that the likelihood of two random binaries having generated the same code pages for a given source library (even if pinned to the exact source) can be limited by linker and compiler options (eg dead code stripping, optimization setting differences, LTO, PGO etc).

          The benefit of sharing libraries is generally limited unless you’re using a library that nearly every binary may end up linking which has decreased in probability as the software ecosystem has gotten more varied and complex.

        • By shatsky 2026-01-062:53

          I believe NixOS-like "build time binding" is the answer, especially with Rust's "if it compiles, it works". Software shares code in the form of libraries, but any set of installed software built against some concrete version of a lib it depends on will use that concrete version forever (until an update replaces it with new builds against a different concrete version).

        • By johncolanduoni 2026-01-067:051 reply

          The system you’re proposing wouldn’t work, because without additional effort in the compiler and linker (which AFAIK doesn’t exist) there won’t be perfectly identical pages for the same static library linked into the same executable. And once you can update them independently, you have all the drawbacks of dynamic libraries again.

          Outside of embedded, this kind of reuse is a very marginal memory savings for the overall system to begin with. The key benefit of dynamic libraries for a system with gigabytes of RAM is that you can update a common dependency (e.g. OpenSSL) without redownloading every binary on your system.

          • By 0xdeafbeef 2026-01-0611:40

            Also, won't most of the lib be removed due to dead code elimination? And used code will be inlined where applicable, so nothing to dedup in reality

        • By littlestymaar 2026-01-064:19

          I wish the standard way of using shared libraries would be to ship the .so the programs want to dynamically link to alongside the program binary (using RUNPATH), instead of expecting them to exist globally (yes, I mean all shared libraries even glibc, first and foremost glibc, actually).

          This way we'd have no portability issue, same benefit as with static linking except it works with glibc out of the box instead of requiring to use musl, and we could benefit from filesystem-level deduplication (with btrfs) to save disk space and memory.

        • By SkiFire13 2026-01-069:23

          What you're describing is not static linking, it's embedding a dynamically linked library in another binary.

        • By tliltocatl 2026-01-068:31

          IMHO dynamic libraries are a dumpster fire because they are often used as a method to provide external interfaces, rather than just to share common code.

      • By lmm 2026-01-063:084 reply

        > And yet Rust ecosystem practically killed runtime library sharing, didn't it?

        Yes, it did. We have literally millions of times as much memory as in 1970 but far less than millions of times as many good library developers, so this is probably the right tradeoff.

        • By VorpalWay 2026-01-0610:21

          C++ already killed it: templated code is only instantiated where it is used, so with C++ it is a random mix of what goes into the separate shared library and what goes into the application using the library. This makes ABI compatibility incredibly fragile in practice.

          And increasingly, many C++ libraries are header only, meaning they are always statically linked.

          Haskell (or GHC at least) is also in a similar situation to Rust as I understand it: no stable ABI. (But I'm not an expert in Haskell, so I could be wrong.)

          C is really the outlier here.

        • By BobbyTables2 2026-01-064:161 reply

          Static linking is still better than shipping a whole container for one app. (Which we also seem to do a lot these days!)

          It still boggles my mind that Adobe Acrobat Reader is now larger than Encarta 95… Hell, it’s probably bigger than all of Windows 95!

          • By tcfhgj 2026-01-0610:56

            Whole container or even chromium in electron

        • By speed_spread 2026-01-0613:221 reply

          It's not just about memory. I'd like to have a stable Rust ABI to make safe plugin systems. Large binaries could also be broken down into dynamic libraries to make rebuilds much faster, at the cost of leaving some optimizations on the table. This could be done today with a semi-stable versioned ABI. New app builds would be able to load older libraries.

          The main problem with dynamic libraries is when they're shared at the system level. That we can do away with. But they're still very useful at the app level.

          • By SkiFire13 2026-01-0613:451 reply

            > I'd like to have a stable Rust ABI to make safe plugin systems

            A stable ABI would allow making more robust Rust-Rust plugin systems, but I wouldn't consider that "safe"; dynamic linking is just fundamentally unsafe.

            > Large binaries could also be broken down into dynamic libraries and make rebuilds much faster at the cost of leaving some optimizations on the table.

            This can already be done within a single project by using the dylib crate type.
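              For reference, opting into that looks roughly like this in the library crate's Cargo.toml (a sketch; the exact setup depends on the workspace):

```toml
# Sketch: build this crate as a Rust `dylib` so binaries in the same
# workspace link it dynamically. Only sound when everything is built
# with the same compiler version and flags.
[lib]
crate-type = ["dylib"]
```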

            • By speed_spread 2026-01-0614:282 reply

              Loading dynamic libraries can fail for many reasons but once loaded and validated it should be no more unsafe than regular crates?

              • By VorpalWay 2026-01-0617:40

                You could check that mangled symbols match, and have static tables with hashes of structs/enums to make sure layouts match. That should cover low level ABI (though you would still have to trust the compiler that generated the mangling and tables).

                A significantly more thorny issue is to make sure any types with generics match, e.g. if I declare a struct with some generic and some concrete functions, and this struct also has private fields/methods, those private details (that are currently irrelevant for semver) would affect the ABI stability. And the tables mentioned in the previous paragraph might not be enough to ensure compatibility: a behaviour change could break how the data is interpreted.

                So at minimum this would redefine what is a semver compatible change to be much more restricted, and it would be harder to have automated checks (like cargo-semverchecks performs). As a rust developer I would not want this.

              • By remexre 2026-01-0615:07

                What properties are you validating? ld.so/libdl don't give you a ton more than "these symbols were present/absent."

        • By goodpoint 2026-01-0613:56

          It's really bad for security.

  • By drnick1 2026-01-0523:047 reply

    I am surprised there does not exist a community fork of Brave yet that strips out all of the commercial stuff (rewards, AI, own updates), making it suitable for inclusion in the repos of mainstream free/libre Linux distros.

    • By w0ts0n 2026-01-0523:071 reply

      There is quite a lot of costs associated with running a browser (at scale). Brave is looking to offer something that does what you mention called Brave-origin.

      Brendan talks about this a bit more here: https://x.com/BrendanEich/status/2006412918783619455

      • By drnick1 2026-01-0523:494 reply

        This is good news, but I am confused by the following:

        """

        Brave Origin is:

        1/ new, optional, separate build (stripped down, no telemetry/rewards/wallet/vpn/ai);

        2/ free on Linux, one time buy elsewhere.

        """

        So the stripped down version (at least the non-Linux one) will not be open source?

        • By dfajgljsldkjag 2026-01-0523:531 reply

          Open source software can be sold for money. For example redhat selling cds with rhel on them, for quite a big sticker price. Free if you build it yourself but you have to pay to get a ready to use version.

          • By chii 2026-01-063:361 reply

            or figure out how to build it yourself from source. But you can count on the lazy tax - esp. for windows (as building from source on linux is likely to be much more convenient).

            • By HPsquared 2026-01-0613:16

              Maybe that's why it is free on Linux!

        • By pxc 2026-01-060:36

          open-source ≠ gratis binaries

          Rules that require the distribution of source code don't require the distribution of binaries.

        • By WackyFighter 2026-01-0610:45

          That seems like a pretty reasonable proposition IMO.

          As other people have mentioned you can resell open source software. I have a big box Linux distro on my shelf here.

        • By johnebgd 2026-01-0613:58

          I’d pay a monthly fee for a browser where I’m not the product anymore and it respects my privacy.

    • By brnt 2026-01-0610:411 reply

      This. I use Brave because it has a great, fast adblocker and is fast generally. Unticking all the wallet/AI crap upon install is an acceptable price, but if somebody is going to release Braveium I'm going to use it right away.

      • By thisislife2 2026-01-0613:062 reply

        Is its adblocker as good as uBlock Origin?

        • By brnt 2026-01-0613:27

          You can add the same lists as in uBlock. I haven't seen any ads in years, so yes.

        • By jorvi 2026-01-0617:021 reply

          It is better (especially than the Manifest V3 version) because it has first party access / integration.

          In general, of 3rd party blockers, uBlock Origin isn't even the best, AdGuard is.

          • By oktoberpaard 2026-01-0617:111 reply

            > In general, of 3rd party blockers, uBlock Origin isn't even the best, AdGuard is.

            Why? I thought uBlock Origin on Firefox was the most effective combination available (assuming that you use the same filter lists).

            • By jorvi 2026-01-070:44

              The main reason for people repeating "uBlock Origin + Firefox is best" is because CNAME uncloaking didn't work on most Chromium browsers, even on Manifest V2. It does on Brave.

              AdGuard works better simply because there's a bunch of people being paid to work on it. There's more optimization and fewer bugs. The UI is a whole lot more polished. Blocklists have improved syntax, and the lists themselves are updated more frequently to catch site breakage. EasyList often has breakage on their lists for months even after being reported on their GitHub, but reporting the same breakage to AdGuard results in it being fixed in days if not hours. And they do adjacent projects like AdGuard Home (sort of a commercial Pi-hole) too.

              FWIW, big names in adblocking work for these companies too. AFAIK, FanBoy (EasyList + EasyPrivacy + his own lists) gets paid by Brave to maintain the lists. So in a way, Brave is funding adblocking for everyone :)

    • By uyzstvqs 2026-01-0617:40

      You can disable all of that within seconds. There's no reason for it not to be included because of that, as all the code running on the client is open-source. If distros only shipped software without commercial interests (why even..?), it'd be an unusable mess of barely maintained hobby projects.

      And you should really be using https://flathub.org/en/apps/com.brave.Browser

    • By t0lo 2026-01-060:492 reply

      You can hide bat with one click as soon as you install brave

      • By bastawhiz 2026-01-064:152 reply

        Linux distros won't host the code for the commercial bits. It doesn't matter if you can hide it, it's the fact that it's there at all

        • By DaSHacka 2026-01-066:521 reply

          Yet they have no issue with Mozilla?

          • By drnick1 2026-01-070:55

            Mozilla does not have commercial bits. They do receive money from Google to be the default search engine, and the binaries they build report telemetry, but the versions found in Linux repos often either patch out the telemetry or disable it.

        • By WackyFighter 2026-01-0610:461 reply

          Most distros have a way of installing proprietary software via enabling additional repos after install.

          • By bastawhiz 2026-01-0915:42

            And you can do that if you want with Brave

      • By aorth 2026-01-066:34

        For technical users who are in the know, yes. I would not recommend Brave to less-technical friends and family knowing that they would surely be duped by some dark patterns in Brave's UI/UX.

        Even Firefox, which is the best we have currently, surprises us a few times a year with questionable decisions. Still, it's what I recommend to people.

    • By rb666 2026-01-068:511 reply

      Isn't this what Helium is doing? I have been using it as daily driver for half a year, works a charm. Only would like better 1Password integration.

      • By feverzsj 2026-01-069:561 reply

          Helium is based on ungoogled-chromium. It enables Manifest V2 by simply reverting some code changes made by Google. So if Google decides to remove Manifest V2 entirely, Helium will also lose its uBlock Origin support.

        • By pitkali 2026-01-0610:29

          Removing all manifest v2 support is also a code change that can be reverted. Of course, the larger the change, the more work it's likely to require to maintain it in the future.

    • By ipsum2 2026-01-0523:241 reply

      It's annoying, but Brave makes it pretty easy to remove, so you only have to do it once per installation.

      • By drnick1 2026-01-0523:521 reply

        As far as I am concerned, this is more about Brave going through a vetting process and an independent build than only turning off annoyances.

        • By cromka 2026-01-070:10

          Indeed. It's basically like trying to get them into Debian main repo. If it works, then it's truly free.

  • By nialv7 2026-01-0611:023 reply

    162 to 104 is not 75% reduction... Who calculates reduction percentage like that?!

    • By llm_nerd 2026-01-0616:181 reply

      To be fair, they claim the adblock engine saw a 75% reduction in memory usage, and in the images they're showing the main browser process generally (I assume? I don't use Brave), of which the adblock engine is only a part, but one with a substantial impact on usage.

      • By skaul 2026-01-0616:531 reply

        That is correct.

        • By nialv7 2026-01-0622:36

          thanks for the clarification. is this 45MB reduction for the whole browser? or is this 45MB per tab?

    • By jonkoops 2026-01-0617:49

      "Brave has overhauled its Rust-based adblock engine to reduce memory consumption by 75%"

      This only claims that the memory usage of the adblock engine was reduced, not the total memory consumption of the browser.

    • By roelschroeven 2026-01-0616:11

      I'm guessing the same kind of people who don't understand the difference between 0.002 dollars and 0.002 cents (http://verizonmath.blogspot.com/2006/12/verizon-doesnt-know-...).
