To update blobs or not to update blobs

2026-03-03 13:08 · 135 · codon.org.uk

A lot of hardware runs non-free software. Sometimes that non-free software is in ROM. Sometimes it’s in flash. Sometimes it’s not stored on the device at all, it’s pushed into it at runtime by another piece of hardware or by the operating system. We typically refer to this software as “firmware” to differentiate it from the software run on the CPU after the OS has started, but a lot of it (and, these days, probably most of it) is software written in C or some other systems programming language and targeting Arm or RISC-V or maybe MIPS and even sometimes x86. There’s no real distinction between it and any other bit of software you run, except it’s generally not run within the context of the OS. Anyway. It’s code. I’m going to simplify things here and stop using the words “software” or “firmware” and just say “code” instead, because that way we don’t need to worry about semantics.

A fundamental problem for free software enthusiasts is that almost all of the code we’re talking about here is non-free. In some cases, it’s cryptographically signed in a way that makes it difficult or impossible to replace it with free code. In some cases it’s even encrypted, such that even examining the code is impossible. But because it’s code, sometimes the vendor responsible for it will provide updates, and now you get to choose whether or not to apply those updates.

I’m now going to present some things to consider. These are not in any particular order and are not intended to form any sort of argument in themselves, but are representative of the opinions you will get from various people and I would like you to read these, think about them, and come to your own set of opinions before I tell you what my opinion is.

THINGS TO CONSIDER

  • Does this blob do what it claims to do? Does it suddenly introduce functionality you don’t want? Does it introduce security flaws? Does it introduce deliberate backdoors? Does it make your life better or worse?

  • You’re almost certainly being provided with a blob of compiled code, with no source code available. You can’t just diff the source files, satisfy yourself that they’re fine, and then install them. To be fair, even though you (as someone reading this) are probably more capable of doing that than the average human, you’re likely not doing that even if you are capable because you’re also likely installing kernel upgrades that contain vast quantities of code beyond your ability to understand. We don’t rely on our personal ability, we rely on the ability of those around us to do that validation, and we rely on an existing (possibly transitive) trust relationship with those involved. You don’t know the people who created this blob, you likely don’t know people who do know the people who created this blob, these people probably don’t have an online presence that gives you more insight. Why should you trust them?

  • If it’s in ROM and it turns out to be hostile, then nobody can fix it, ever.

  • The people creating these blobs largely work for the same company that built the hardware in the first place. When they built that hardware they could have backdoored it in any number of ways. And if the hardware has a built-in copy of the code it runs, why do you trust that that copy isn’t backdoored? Maybe it isn’t and updates would introduce a backdoor, but in that case if you buy new hardware that runs new code aren’t you putting yourself at the same risk?

  • Designing hardware where you’re able to provide updated code and nobody else can is just a dick move. We shouldn’t encourage vendors who do that.

  • Humans are bad at writing code, and code running on ancillary hardware is no exception. It contains bugs. These bugs are sometimes very bad. This paper describes a set of vulnerabilities identified in code running on SSDs that made it possible to bypass the drives’ encryption. The SSD vendors released updates that fixed these issues. If the code couldn’t be replaced then anyone relying on those security features would need to replace the hardware.

  • Even if blobs are signed and can’t easily be replaced, the ones that aren’t encrypted can still be examined. The SSD vulnerabilities above were identifiable because researchers were able to reverse engineer the updates. It can be more annoying to audit binary code than source code, but it’s still possible.

  • Vulnerabilities in code running on other hardware can still compromise the OS. If someone can compromise the code running on your wifi card and you don’t have a strong IOMMU setup, they’re going to be able to overwrite your running OS.

  • Replacing one non-free blob with another non-free blob increases the total number of non-free blobs involved in the whole system, but doesn’t increase the number that are actually executing at any point in time.
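
The auditability point above can be made concrete. As a minimal sketch - using a stand-in file at the hypothetical path /tmp/fw.bin rather than a real vendor image - even plain POSIX tools can pull embedded version strings and debug messages out of an unencrypted blob as a cheap first clue; dedicated tools like strings(1), binwalk, or a disassembler go much further:

```shell
# Create a stand-in blob for demonstration purposes; a real firmware
# image would come from the vendor's update package.
printf 'FWHDR\001\002hello-firmware-v1.2\000' > /tmp/fw.bin

# Poor man's strings(1): replace non-printable bytes with newlines,
# then keep runs of six or more printable characters. Version strings
# and debug messages often survive compilation intact.
tr -c '[:print:]' '\n' < /tmp/fw.bin | grep -E '.{6,}'
```

On the stand-in blob this surfaces the embedded version string; on a real blob it’s only a starting point for reverse engineering, not an audit.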
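
The IOMMU caveat above is also checkable rather than hypothetical: on Linux the kernel exposes its DMA isolation groupings under /sys/kernel/iommu_groups. A minimal sketch, assuming a Linux system (the directory is a standard sysfs interface, but is often empty or absent inside VMs and containers):

```shell
# If the kernel has an active IOMMU, each isolated device set appears
# as a numbered subdirectory of /sys/kernel/iommu_groups.
groups=/sys/kernel/iommu_groups
if [ -d "$groups" ] && [ -n "$(ls -A "$groups" 2>/dev/null)" ]; then
  echo "IOMMU active: $(ls "$groups" | wc -l) groups"
else
  echo "no IOMMU groups: a compromised peripheral may have unrestricted DMA"
fi
```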

Ok we’re done with the things to consider. Please spend a few seconds thinking about what the tradeoffs are here and what your feelings are. Proceed when ready.

I trust my CPU vendor. I don’t trust my CPU vendor because I want to, I trust my CPU vendor because I have no choice. I don’t think it’s likely that my CPU vendor has designed a CPU that identifies when I’m generating cryptographic keys and biases the RNG output so my keys are significantly weaker than they look, but it’s not literally impossible. I generate keys on it anyway, because what choice do I have? At some point I will buy a new laptop because Electron will no longer fit in 32GB of RAM and I will have to make the same affirmation of trust, because the alternative is that I just don’t have a computer. And in any case, I will be communicating with other people who generated their keys on CPUs I have no control over, and I will also be relying on them to be trustworthy. If I refuse to trust my CPU then I don’t get to computer, and if I don’t get to computer then I will be sad. I suspect I’m not alone here.

Why would I install a code update on my CPU when my CPU’s job is to run my code in the first place? Because it turns out that CPUs are complicated and messy and they have their own bugs, and those bugs may be functional (for example, some performance counter functionality was broken on Sandy Bridge at release, and was then fixed with a microcode blob update), and if you apply the update your hardware works better. Or it might be that you’re running a CPU with speculative execution bugs and there’s a microcode update that provides a mitigation for that even if your CPU is slower when you enable it, but at least now you can run virtual machines without code in those virtual machines being able to reach outside the hypervisor boundary and extract secrets from other contexts. When it’s put that way, why would I not install the update?
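
As a concrete illustration of the mechanism, on x86 Linux systems the kernel reports which microcode revision is currently loaded through standard interfaces. A minimal sketch, assuming such a system (the fields are x86-specific and may be missing in containers or on other architectures):

```shell
# /proc/cpuinfo exposes a "microcode" field per logical CPU on x86.
grep -m1 '^microcode' /proc/cpuinfo 2>/dev/null \
  || echo "no microcode field (non-x86 CPU or restricted environment)"

# The microcode driver also exposes a per-CPU sysfs node when loaded.
ver=/sys/devices/system/cpu/cpu0/microcode/version
[ -r "$ver" ] && cat "$ver" || true
```

Comparing that revision before and after a distro update is how you confirm a new microcode blob was actually applied.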

And the straightforward answer is that theoretically it could include new code that doesn’t act in my interests, either deliberately or not. And, yes, this is theoretically possible. Of course, if you don’t trust your CPU vendor, why are you buying CPUs from them in the first place? But maybe they’ve been corrupted since you bought it (in which case don’t buy any new CPUs from them either), or maybe they’ve just introduced a new vulnerability by accident. You’re also in a position to determine whether the alleged security improvements matter to you at all. Do you care about speculative execution attacks if all software running on your system is trustworthy? Probably not! Do you need to update a blob that fixes something you don’t care about and which might introduce some sort of vulnerability? Seems like no!

But there’s a difference between a recommendation for a fully informed device owner who has a full understanding of threats, and a recommendation for an average user who just wants their computer to work and to not be ransomwared. A code update on a wifi card may introduce a backdoor, or it may fix a flaw that lets someone compromise your machine via a hostile access point. Most people are just not going to be in a position to figure out which is more likely, and there’s no single answer that’s correct for everyone. What we do know is that where vulnerabilities in this sort of code have been discovered, updates have tended to fix them - and no such update has yet been identified as a real-world vector for system compromise.

My personal opinion? You should make your own mind up, but you also shouldn’t impose that choice on others, because your threat model is not necessarily their threat model. Code updates are a reasonable default, but they shouldn’t be unilaterally imposed, and nor should they be blocked outright. And the best way to shift the balance of power away from vendors who insist on distributing non-free blobs is to demonstrate the benefits of that code being free - a vendor who ships free code enables their customers to improve it, add new functionality, and make the hardware more attractive.

It’s impossible to say with absolute certainty that your security will be improved by installing code blobs. It’s also impossible to say with absolute certainty that it won’t. So far evidence tends to support the idea that most updates that claim to fix security issues do, and there’s not a lot of evidence to support the idea that updates add new backdoors. Overall I’d say that providing the updates is likely the right default for most users - and that that should never be strongly enforced, because people should be allowed to define their own security model, and whatever set of threats I’m worried about, someone else may have a good reason to focus on different ones.


Comments

  • By LaSombra 2026-03-03 16:42 · 2 replies

    I think this comes from this Mastodon thread, https://snac.lx.oliva.nom.br/lxo/p/1771789687.181567

    • By LegionMammal978 2026-03-08 00:01

      They do talk past each other a bit, and I find it difficult to follow, but overall, I'm more sympathetic to Garrett's position than Oliva's.

      As far as I understand: GNU Linux-libre, a deblobbed variant of the Linux kernel, excludes the ability to update proprietary CPU microcode. Oliva, an important Linux-libre maintainer, says that (e.g.) Intel's proprietary microcode is inherently a backdoor, and that the ability to replace it only with new proprietary microcode is also a backdoor and an attack. Furthermore, new microcode updates cannot plausibly benefit the user and may only cause further harm to the user, thus Linux-libre (as distributed) makes efforts not to facilitate them.

      Garrett is arguing against this notion, saying that microcode updates can very plausibly benefit the user in ways that cannot be mitigated in higher layers; that there have been no publicly-known cases of a microcode update introducing security vulnerabilities that were not already present; and thus, that it is beneficial to the user to have the ability (but not the requirement!) to update microcode blobs.

      Both of them seem to agree it is better to have free software over proprietary blobs in all components of the system, though they both accuse each other of not fully standing for that position (Oliva accuses Garrett of "overlooking" the inherent backdoor nature of proprietary microcode; and Garrett takes issue with Oliva treating "installable software" as ethically distinct from firmware ROMs w.r.t. software freedom).

      Personally, I'm not a fan of software or libraries that take active measures to make me use them in a certain way, so I'd lean toward Garrett's position, but thankfully no one is forcing me to use Linux-libre.

    • By awesome_dude 2026-03-07 21:54 · 1 reply

      After reading that thread I immediately thought: why is there always that guy yelling "But the extreme case doesn't hold, therefore it's invalid"?

      They just come off as an uninformed troll - the truth is that it's very rare in life for any single thing to be a perfect solution.

      The best anyone can do is make an effort to move toward that goal whilst we look for better solutions AND we move away from solutions that are definitely not working in the direction of better solutions.

      In this case, we know for a fact that obscurity is a weaker and worse solution to open and honest security postures (for the most part), and the fact that we have the /opportunity/ to inspect things is infinitely better than not having that choice at all.

      • By tyteddffc 2026-03-07 22:43 · 1 reply

        Which of the two are you referring to?

        • By awesome_dude 2026-03-08 00:25

          > Light » 2026-02-22 @light@noc.social
          >
          > @lxo Do you genuinely honestly actually audit the source code of every single piece of software running on your system and compile it all yourself, including web code? Either you have a lot of time on your hands and a lot of skill, or you're running a very minimal system, or you actually don't.
          >
          > @lxo And even if you do, most people* can't. So for them, they need third-party audits, which as I have previously pointed out, can be done without source code. Or otherwise they try to get their software from sources they trust.
          >
          > *For example, rocket scientists and brain surgeons

          > Alexandre Oliva » 2026-02-22 @lxo@snac.lx.oliva.nom.br
          >
          > I don't have to. that's the power of community. security doesn't work in absolutes, and auditability is an imperfect deterrent, but it's infinitely better than the moves to prevent auditability that hostile vendors adopt
          >
          > I do audit the rare cases of web blobs that are imposed on me, because I can't count on community for those, and my security depends on it even when my freedom has been unjustly taken away

          > Light » 2026-02-22 @light@noc.social
          >
          > @lxo Then you personally know other programmers that you trust to audit it for you. Again, most people don't have that.

          > Alexandre Oliva » 2026-02-22 @lxo@snac.lx.oliva.nom.br
          >
          > that's missing the point. auditability alone is already quite a deterrent. that some of us actually engage in auditing is a bonus that benefits everyone, even if it doesn't happen very often. it's kind of the panopticon effect, but for the better.

HackerNews