Attackers can exploit two newly discovered local privilege escalation (LPE) vulnerabilities to gain root privileges on systems running major Linux distributions.
Local privesc, don't care. If anyone still thinks that they can draw a security boundary anywhere with a shared kernel, they should really look at kernel CVE database (and be horrified). For every fancy titled exploit there are twenty that you've never heard of.
You can sort of do it if you carefully structure your program to restrict syscall use and then use some minimal and well audited syscall filtering layer to hide most of the kernel. But you really have to know what you're doing and proper security hardening will break a lot of software. To get a basic level of security, you have to disable anything with the letters "BPF", hide all virtual filesystems like /proc, /sys, disable io_uring and remove every CONFIG_* you see until something stops working. Some subsystems seem more vulnerable than others (ironically netfilter seems to be a steady source of vulnerabilities).
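To make that concrete, here's a minimal sketch of the crudest version of such a filtering layer, classic seccomp strict mode. Real sandboxes use SECCOMP_MODE_FILTER (e.g. via libseccomp) with a carefully chosen allowlist, so treat this as an illustration rather than a hardening recipe:

    /* Minimal sketch: classic seccomp "strict" mode via prctl(2).
     * After the prctl() call the only syscalls this thread may make are
     * read(), write(), _exit() and sigreturn(); anything else gets SIGKILL.
     * Real sandboxes use SECCOMP_MODE_FILTER (e.g. via libseccomp) to allow
     * a slightly larger, carefully chosen set. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <linux/seccomp.h>

    int main(void)
    {
        /* Do everything that needs other syscalls (open files, mmap, drop
         * privileges) before entering the sandbox. */
        if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0) {
            perror("prctl(PR_SET_SECCOMP)");
            return 1;
        }

        const char msg[] = "sandboxed: write() still works\n";
        write(1, msg, sizeof msg - 1);

        /* glibc's _exit() may issue exit_group(), which strict mode forbids,
         * so make the raw exit syscall instead. */
        syscall(SYS_exit, 0);
    }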
> they should really look at kernel CVE database
When quoting kernel CVEs as evidence of insecurity, especially so seemingly authoritatively, please make sure you're informed about what Linux kernel CVEs actually mean.
A CVE (for any product) does not automatically mean there is actually a vulnerability there, or that it is exploitable, unless explicitly noted (in the CVE or credibly by someone else). Proof of concepts, reproducibility or even any kind of verification are not a part of the CVE process.
For the Linux kernel in particular, the CVE process is explicitly to be "overly cautious" [1]. In practice, this means the Linux security team requests a CVE for anything that has a mere whiff of being theoretically exploitable. Of course, that doesn't mean the bug that was fixed was actually exploitable: possibly not even theoretically, and certainly not necessarily in practice.
As a result, you can't use CVEs reported by the Linux kernel to make claims about the (lack of) practical security of any Linux system, including your desktop. The CVEs reported by the Linux kernel are there to notify very well informed users of the kernel so they can do further risk assessments, not to be taken at face value as a sign of insecurity. [The latter is true for the entire CVE system - they're not to be taken at face value as signs something is wrong. But it's especially true for the kernel.]
You're right. I review each one carefully, so here I mean only the real ones. It's still a massive amount of vulnerabilities, even after excluding obscure drivers or features that aren't used on headless systems.
This is a common complaint with the whole CVE process to begin with, and isn't even a Linux thing.
After the Linux Foundation became a CNA (CVE Numbering Authority), it started issuing CVEs for a broad range of "vulns", such as local denial-of-service, memory errors with no viable exploit path, and logic flaws lacking meaningful security implications.
Looking at the raw number of CVEs is not very meaningful
Indeed. They issue a CVE for every bugfix, because it's long been the position of the linux maintainers that there's no meaningful distinction between a security bug and a regular bug.
And I'm not sure I can fault them for that, tbh. When you're a kernel, it's very hard to prove that something is a "non-security" bug -- especially when we count DoS as a security bug.
> memory errors with no viable exploit path
i don't appreciate putting "vulns" in scare quotes, if that was your intent
swiss cheese theory. all it takes is someone changing a component that allows that vulnerability to be chained into an exploit, which has happened many times.
these should be tracked, and in fact, it's very helpful to assign cves to them
but yeah, raw numbers is less useful. in fact, cves as a "is it secure or not" metric are pretty rough. it makes it easier to convince vendors to keep their software up to date, though...
Additionally, having simpler vulns labelled allows more juniors to work on coding fixes for them and get their feet wet in that particular subfield.
The way I deal with this at work is: we both work for a person who can fire us for looking at them funny. The threat of dismissal is sufficient for us to expect our peers to be rude neighbors but not criminal ones. If the divisions get big enough that this gets blurry, well then it’s simple enough to ask for private VMs/separate Kube clusters. The Conway’s Law aspects of server maintenance cycles when you report to separate directors/VPs is self evident.
And of course collocating different classes of work can lead to a bug in a low priority task taking down a high priority one. So those also shouldn’t run in the same partition. Once you’ve taken both of those into account, you’ve already added some security in depth. It’s hard even to escalate a remote exploit into a privilege escalation into attacking a more lucrative neighbor.
> anyone still thinks that they can draw a security boundary anywhere with a shared kernel
Containers are everywhere.
They don't work as reliable security boundaries; they're developer/ops tools.
Thomas, what are your thoughts on micro-vms such as kata containers? You can use them as a backend for docker in place of runc.
I'm sure you're well aware, but for the readers, they are isolated with a CPU's VT instructions which are built to isolate VMs. I still think "containers don't contain" in a very Dan Walsh Boston accent, but this seems like a respectable start.
I have no strong opinion other than that untrusting cotenants shouldn't directly share a kernel.
They're slow and so unsuitable for dev work. They might be somewhat better for prod, but it depends on a wide selection of unproven hypervisors.
Which "unproven" hypervisors are those? Kata works with Firecracker.
I think they mean in regard to cross-kernel attacks. VMs didn't protect against speculative execution attacks.
I believe there are even more coarse-grained timing attacks with DMA and memory that are waiting to be abused.
No, that's true, VMs don't protect against microarchitectural attacks. But neither does shared-kernel isolation; in fact, shared-kernel is even worse at it. So if that's the concern, it doesn't make much sense in the threat model.
I use my laptop logged in as root, so that's not an issue!
The best part is they absolutely can install drivers without your permission unless your system is encrypted. So it's even worse!
Not really relevant, the threat being discussed is for multi-user systems.
And your pulse audio service is running as which user now? This is a local exploit but for any system supporting the mentioned combination of services, aka a lot of them, including the RHEL derivatives and likely Ubuntu.
https://almalinux.org/blog/2025-06-18-test-patches-for-cve-2...
> And your pulse audio service is running as which user now?
I'm not sure, I appear to be running pipewire. But assuming it's not my own account: not a user that will initiate an attack. A user account that allows logins or runs external servers would have to get compromised first, and at that point it can use the exploit directly with no need to touch pulseaudio.
If there's only one directory in your /home, it's very unlikely the urge for admins to patch this is directed at you.
Pipewire runs under the pipewire user, managed by systemd or OpenRC. Which means any of their managed processes can start a new pipewire user process.
A local priv-esc is one exploit [0] away from a remote one.
[0] https://www.bleepingcomputer.com/news/security/hackers-explo...
> Pipewire runs under the pipewire user, managed by systemd or OpenRC. Which means any of their managed processes can start a new pipewire user process.
The box I checked has no pipewire user and it's running under the account I logged in with.
> A local priv-esc is one exploit [0] away from a remote one.
That only matters for accounts that talk to the outside world.
If I'm the only user, I'm not depending on security features to keep my account and the pipewire account safe from each other. Privilege escalation is a big threat for systems that are running in a significantly different way.
If you play sound, such as from a browser, or a file you didn't record yourself, then your account is talking to the outside world.
Or you could just use NetBSD like SDF does.
You've worked in security theatre before then ;)
Yet people use container based isolation all the time in practice and the sky doesn't fall.
Also, every security domain in an Android system shares a kernel, yet Android is one of the most secure systems out there. Sure, it uses tons of SELinux, but so what? It still has a shared kernel, and a quite featureful one at that.
I don't buy the idea that we can't do intra-kernel security isolation and so we shouldn't care about local privilege escalation.
Android delegated some security features to a different kernel called Trusty that is separated from the main Linux kernel using virtualisation. That kernel runs high value security services.
Yes, but that's not the main load-bearing security part of the system. Trusty doesn't isolate apps from each other. It doesn't isolate work profiles from user profiles. Regular SELinux-augmented thoughtfully-used uid- and process-isolation does that.
If you weren't aware, containers aren't a security boundary. Things like bubblewrap are.
Semantics make hard assertions about "containers" worthless. It depends on what one means by a container exactly, since Linux has no such concept and our ecosystem doesn't have a strict definition.
What do you think bubblewrap is, if not a container runtime?
bubblewrap is actually worse - there are known escapes in there that haven't been fixed for years
Wouldn't Android's kernel have most of the hardening steps / disabled features described in GP's comment?
No. Things like eBPF, strace, and packet filtering are enabled. Android uses SELinux and other facilities to limit the amount of code the kernel will allow to access these features. Big difference from their being compiled out of the kernel entirely as the OP suggests is necessary.
Container isolation can fail at shared libraries in shared layers too, can't it? My evil service is based on the same cooltechframework base layer as your safety-critical hardware control service, and if there is a mistake in the framework...
then it affects each one separately since they are separate processes. The fact they run the same code is irrelevant if the data is separate.
Separate processes running the same shared instructions. If you compromise and modify those shared instructions, the other container runs instructions of your choosing.
Layers are COW so one container modifying a layer has no effect on other containers started from the same image. Of course, preexisting vulnerabilities will remain but they'd have to be separately exploited in each container.
Worse, you cannot disable eBPF because too many packages demand it.
Namely, nftables and its filtering.
Ironically Ubuntu 24 now blocks users from accessing namespaces because that kernel interface had a bunch of local privilege escalations, breaking programs that want to use them for isolation.
For the last 10 years or so, namespaces in Linux were the source of the absolute highest number of local privilege escalations and sometimes even arbitrary code executions in kernel space. Building a kernel without user namespace support has been go-to advice for multiuser systems for almost as long. Ubuntu is just late to the game because they mostly have server or single-user-desktop customers.
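For anyone wondering why this one interface is such a rich source of bugs, here's a sketch of how little it takes for a completely unprivileged process to create a user namespace and become uid 0 inside it, which is exactly what exposes all the normally root-only kernel paths (the mitigations mentioned in the comments vary by distro and are approximate):

    /* Sketch: how little it takes for an unprivileged process to reach the
     * user-namespace code paths discussed above. After unshare(CLONE_NEWUSER)
     * plus a one-line uid_map write, the process is uid 0 *inside* the new
     * namespace, which unlocks kernel interfaces that are otherwise root-only
     * and that have historically produced a steady stream of LPE bugs.
     * (Some distro kernels block this, e.g. a kernel.unprivileged_userns_clone=0
     * style sysctl or the Ubuntu 24.04 AppArmor restriction mentioned above.) */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        unsigned outer_uid = getuid();

        if (unshare(CLONE_NEWUSER) != 0) {   /* no privileges needed by default */
            perror("unshare(CLONE_NEWUSER)");
            return 1;
        }

        /* Map our outer uid to uid 0 inside the namespace we just created. */
        char map[64];
        int len = snprintf(map, sizeof map, "0 %u 1\n", outer_uid);
        int fd = open("/proc/self/uid_map", O_WRONLY);
        if (fd < 0 || write(fd, map, len) != len)
            perror("uid_map");

        printf("uid inside the new namespace: %u\n", (unsigned)getuid()); /* 0 */
        return 0;
    }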
I've even seen namespaces used for hiding malicious software in Ubuntu systems too.
Actually I think device drivers got you beat there, but no one's suggesting we break them for users' safety. Ubuntu today is more user hostile than Windows.
Device drivers are worse if you just count the numbers. But they are usually far less exploitable because very often you need to have the corresponding hardware plugged in or even need to manipulate said hardware to provide crafted inputs. So in reality, device driver problems are almost never exploitable.
Seems ironic considering namespaces are highly utilized for isolation/security purposes.
Because GP is talking about theoretical vectors of attack in highly secure environments. Whereas you are now discussing why hackers don’t target devices with zero financial gain.
Also just because syscall A might be vulnerable to a particular type of attack, it doesn’t mean that service B uses that syscall, let alone calls it in a way that can be exploited.
I think a majority of systems security people, if asked, would say they assume an attacker with code execution on a Linux system can raise privileges.
I think in the land of people with ill intent to exploit such things they have more potential targets and security vulnerabilities than they can spend time exploiting. A given vulnerability may be terrible, but it might not coincide with something worth bothering with for a given person with ill intent. There's a factor of human choice / payoff at play.
udisks, not counting its dependencies, has 265,334 LoC. pmount, in contrast, has 19,978 LoC, or >13x less.
sudo, another setuid binary with a lot of policy code, has 210 CVEs / 430.150 kLoC = ~0.5 CVE per kLoC.
57.5% of CVEs have a CVSS >= 7, so 0.5 * 0.575 = 0.2875 CVE7/kLoC.
As a back-of-envelope estimate,
udisks: 0.2875 CVE7/kLoC * 265.334 kLoC = ~76.3 high-severity CVEs;
pmount: 0.2875 CVE7/kLoC * 19.978 kLoC = ~5.7 high-severity CVEs.
It's incredible to me that sudo has that many LoC. I'd assume it would just ask the OS to execute something without restrictions, not have any logic to do so itself.
Asking the OS to do something without restrictions is not very difficult; sudo does that by virtue of its existence (it's setuid). The extra code is deciding when not to do that.
The problem isn't even setuid exactly but the size of the TCB. Setuid encourages a design in which tons of stuff that doesn't need to run as root runs as root anyway just because it's part of the same binary that needs elevated privileges. It's a footgun, but one can handle even a footgun safely if you practice trigger discipline and assume every gun is loaded.
Sudo (and other setuid programs) could in principle use privilege separation to punt everything not absolutely essential to an unprivileged context and thereby reduce the size of the TCB.
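As a rough illustration of the pattern (a sketch, not how sudo is actually structured; it assumes the binary is installed setuid root): fork a worker, have it permanently drop to the invoking user before touching anything untrusted, and keep only the one privileged step in the parent.

    /* Illustrative sketch of setuid privilege separation (not sudo's actual
     * design). Assumes the binary is installed setuid root. The forked worker
     * permanently drops to the invoking user's uid/gid before handling any
     * untrusted input; only the short parent path ever runs with euid 0. */
    #define _GNU_SOURCE
    #include <grp.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void drop_to_invoking_user(void)
    {
        uid_t ruid = getuid();   /* real uid = whoever ran the setuid binary */
        gid_t rgid = getgid();

        /* Order matters: supplementary groups and gid first, uid last. */
        if (setgroups(0, NULL) != 0 ||
            setresgid(rgid, rgid, rgid) != 0 ||
            setresuid(ruid, ruid, ruid) != 0) {
            perror("drop privileges");
            _exit(1);
        }
    }

    int main(void)
    {
        pid_t pid = fork();
        if (pid == 0) {
            drop_to_invoking_user();
            /* Unprivileged worker: parse config, prompt, evaluate policy... */
            printf("worker euid=%d (unprivileged)\n", (int)geteuid());
            _exit(0);
        }

        int status;
        waitpid(pid, &status, 0);
        if (WIFEXITED(status) && WEXITSTATUS(status) == 0) {
            /* The only code that runs privileged: the one action needing root. */
            printf("parent euid=%d doing the privileged action\n", (int)geteuid());
        }
        return 0;
    }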
OpenDoas, a portable version of OpenBSD's doas, has 4260 LoC while doing most of what you'd expect. Sudo just has a lot of policy tools that most don't even know about, but they add to its surface area.
OpenDoas is used by default by Alpine linux for example.
I remember last time I installed it, there was neither sudo nor doas preinstalled.
sudo was officially deprecated in 3.15 and moved to community in the next release https://gitlab.alpinelinux.org/alpine/tsc/-/issues/1
sudo has a lot of machinery for representing complex policies which involve partial access to elevated (or just different) permissions, and with more conditions than just a correct password for the requesting user. The kernel itself just sees a binary running as root which may drop some of those permissions before starting another process.
(And this isn't even the most arcane part of linux userland authorization and authentication. PAM is by far the scariest bit, very few people understand it and the underlying architecture is kinda insane)
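Coming back to what the kernel sees: that "drop some of those permissions before starting another process" step boils down to something like the sketch below, where target_uid/target_gid are hypothetical placeholders for whatever the policy code resolved (real tools also call initgroups(), scrub the environment, open PAM sessions and so on).

    /* Sketch of the kernel-visible step: a root (setuid) process switches to a
     * target identity and execs the requested command. target_uid/target_gid
     * are hypothetical stand-ins for whatever the policy code resolved. */
    #include <grp.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        uid_t target_uid = 0;   /* root here, but could be any permitted user */
        gid_t target_gid = 0;

        if (argc < 2) {
            fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
            return 2;
        }

        /* Groups first, gid next, uid last - after dropping uid we may no
         * longer be allowed to change the others. */
        if (setgroups(0, NULL) != 0 ||
            setgid(target_gid) != 0 ||
            setuid(target_uid) != 0) {
            perror("switch identity");
            return 1;
        }

        execvp(argv[1], &argv[1]);
        perror("execvp");        /* reached only if the exec failed */
        return 127;
    }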
When I'm told my code needs to "just" do X, it is usually the case that it needs to do a bunch of other stuff to enable X.
I think it's all the stuff to do with using a shared sudoers across a network of hosts. They could really clean up the language if they removed all of that gunk, as it's not reflective of how sudo is deployed these days.
Even these days, I don't like having deployment SSH keys, or anything of that nature, unless the users are sudo-restricted. You might say that's obsolete in today's world of kubernetes/clouds, but there are many many use cases not met by these things and even for the clouds, someone needs to run them.
There's also full sudo session logging and a logging server now, along with binaries to replay all those logs. Whether those LOC reflect the logging server, I don't know.
It literally replays in the terminal like a movie. It's nice, but I worry too much about the security implications (passwords captured, etc) to roll it out.
edit:
Ah yes, sudoreplay. What you can see in this video is a playback via it. That's not the guy typing; that's sudoreplay time-accurately replaying what happened.
have you heard of script/scriptreplay?
script --log-timing file.tm --log-out script.out
# do something in a terminal session ...
scriptreplay --log-timing file.tm --log-out script.out
# replay it, possibly pausing and increasing/decreasing playback speed
Why does anything at all need to be executed without restrictions though
Should your calculator ask who you are to compute 2+2? Contrary to popular belief, access control was stapled onto the computation space. There was a time when it was considered an unnecessary extravagance. It only became a mandate that machines give a shit about who you are once we started using computers as the basis of business systems.
Accounts thereafter ruined everything.
> It only became […] that machines give a shit about who you are once we started using computers as the basis of business systems.
Once we started using connected machines for much of anything, and people with flexible morals noticed that there was trust in the system(s) ripe for exploitation, for fun or profit or both.
I remember SMTP hosts being open by default because it wasn't a problem, that very quickly changed once spam was noted as potentially profitable.
There were accounts all over from quite early on, in academic environments before businesses took much of an interest, if only to protect user A from user B's cockups ("rm -rf /home /me/tmp") though to some extent also because compute time was sometimes a billable item, just not on single user designed OSs¹.
[1] Windows, for example, pre NT & 95 (any multi-user features you might have perceived in WfW 3.x were bolted on haphazardly and quite broken WRT actual security)
There is no such API on Linux, it is accomplished by sudo having the setuid bit set, which instructs the kernel to start it as root regardless of the current user. It's probably one of the worst legacy designs still in use - if any binary has setuid set, it runs as root, no questions asked. Conversely, you also have no way of elevating privileges for a running binary. This really should have been solved decades ago with a robust API for authentication and authorisation of running processes to gain and lose privileges, like what Windows has. Having a filesystem bit grant root privileges to a program is insane. There are probably a dozen CVEs waiting to be discovered with silently corrupting the filesystem and setting that bit on your binary.
> if any binary has setuid set, it runs as root
More precisely, it runs as the file owner. Which is often root.
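A tiny demo makes the distinction obvious; build it, chown it to some other user and set the setuid bit (the commands in the comment are one way to do that, with a hypothetical binary name), then run it as yourself:

    /* Tiny demo of what the setuid bit does. Build it, then (for example)
     *     sudo chown root demo && sudo chmod u+s demo
     * and run it as a normal user: the real uid is the invoking user, while
     * the effective uid is the file owner - swapped by the kernel at exec
     * time, no questions asked. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        printf("real uid:      %d (who ran me)\n", (int)getuid());
        printf("effective uid: %d (who owns the file)\n", (int)geteuid());
        return 0;
    }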
For anyone thinking this is unnecessarily pedantic, it’s not.
I didn’t exactly know what setuid did. I learned something today. :)
You might also research what the setgid bit on directories does; it's useful sometimes.
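For the curious, here's a small sketch of what it does (Linux semantics, made-up paths): files created inside a setgid directory inherit the directory's group rather than the creator's primary group, which is handy for shared project directories.

    /* Sketch of the setgid bit on a directory (Linux semantics): with S_ISGID
     * set, files created inside inherit the directory's group instead of the
     * creator's primary group. Paths are made up; normally you'd just chgrp
     * the directory and chmod g+s it. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        mkdir("shared", 0770);
        chmod("shared", 02770);              /* leading 2 = setgid bit */

        int fd = open("shared/newfile", O_CREAT | O_WRONLY, 0660);
        if (fd < 0) { perror("open"); return 1; }
        close(fd);

        struct stat dir_st, file_st;
        stat("shared", &dir_st);
        stat("shared/newfile", &file_st);

        /* The effect is visible when the directory's group differs from the
         * creating process's primary group. */
        printf("dir gid=%d file gid=%d\n", (int)dir_st.st_gid, (int)file_st.st_gid);
        return 0;
    }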
There's been some work on alternatives to setuid sudo. For example run0 from systemd.
https://en.wikipedia.org/wiki/Doas is all you need
I can't for the life of me find a list of 210 sudo CVEs. Are you sure this is correct?
I got it from here [0]. I didn't notice it was a keyword search, so it's an overcount. Thanks for correcting me.
Going off its security advisories page [1] and this tracker [2], it seems to be around 43 CVEs, most rated high severity.
So the actual rate would be 43 CVE / 430 kLoC = ~0.1 CVE per kLoC, so ~26.5 CVEs for udisks and ~2 for pmount.
[0] https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=sudo
You can search by CPE here: https://nvd.nist.gov/products/cpe/search and search for e.g.:
cpe:2.3:a:sudo_project:sudo:*:*:*:*:*:*:*:*
cpe:2.3:a:todd_miller:sudo:*:*:*:*:*:*:*:*
The above pair are the same "sudo", but split arbitrarily, perhaps varying by assigning authority preference.
(There are some other "sudo"-named projects too.) Those CPE IDs were determined by a brute-force-ish XML grep:
xml select -N cpe-23="http://scap.nist.gov/schema/cpe-extension/2.3" -t --match '//cpe-23:cpe23-item' --if 'contains(@name,":sudo:")' -v "@name" -n official-cpe-dictionary_v2.3.xml
Now, mapping CVE<->CPE is a trickier problem, it's not 1:1 (a single CVE can affect multiple product versions), and harder here since sudo (1986-ish) predates CVEs (1999) by a decade, and CPE (2009) by two. The most capable searches seem to be via non-free APIs or "vulnerability management $olutions", plus a few CLI tools that need a lot of care and feeding.
This web service is free: https://cve.circl.lu/ But you cannot search directly by CPE right now; you can start a search by vendor, then filter by product:
todd_miller sudo: 58 vulnerabilities
sudo_project sudo: 42 vulnerabilities
Except, for reasons I don't understand, there are duplicates because they somehow source "unique" but overlapping CVEs from multiple databases. The true number might be 50 combined, of varying severity/concern, but I give up now. I'm going to go mutter into my beard for a while.
Ubuntu is switching to a Rust implementation of sudo: https://www.phoronix.com/news/Ubuntu-25.10-sudo-rs-Default
Repo here: https://github.com/trifectatechfoundation/sudo-rs
It's permissively licensed, unfortunately. Wonder why. It's not a library. But it ought to improve security in the long run.
> It's permissively licensed, unfortunately. Wonder why.
I've been loosely involved in setting this up, so I can say a little: the people that funded the initial work wanted it permissively licensed. My (somewhat informed) conjecture is that they rank making things secure - even in closed source apps that could now take the code - higher than barring closed forks. It also tracks with the Rust ecosystem in general - APL or derivatives are very common in that ecosystem.
Read the sudo license; that argument doesn't make sense when sudo's license is even more permissive.
Do they not think that the switch is premature? I am pretty sure the Rust version has a lot of logic bugs that have not yet been found.
> I am pretty sure the Rust version has a lot of logic bugs
What makes you say that? I'm not trying to be argumentative, I'm genuinely interested.
I'm a pretty big advocate of Rust, and while Rust does protect against certain classes of bugs and probably encourages better unit test hygiene and thus higher code quality, it does not protect against logic bugs, so it's possible for previous exploit vectors from the historical CVEs to resurface. Thus it's not an unreasonable prior to assume there are vulnerabilities lurking.
On the other hand, if the replacement isn't targeting the full sudo feature set, and is also reducing the amount of code and/or making architectural improvements like keeping most code from running as root, then the blast radius of such logic bugs can be reduced.
Whenever a complex system is rewritten, there are a lot of bugs and regressions in it.
Yup, hence why the downvotes. :( I thought it was a no-brainer, but I guess not.
Saying "a lot" and especially that it's still "a lot" is not a no-brainer.
> Some functionality is not supported, such as […] storing config in LDAP using sudoers.ldap, and cvtsudoers.
* https://github.com/trifectatechfoundation/sudo-rs?tab=readme...
Well that makes it useless for $WORK (for now), as we use LDAP as our central policy repo (and more generally our user account store). Will have to wait until (at least) that's implemented before we can even consider it.
> It's permissively licensed, unfortunately. Wonder why.
So it can be used and distributed with fewer legal hassles.
> It's permissively licensed, unfortunately.
Well damn that's a shame. I just hate it when people let others use their work in a way they choose, that happens to be less restrictive than my own personal choices.
/s of course.
Worked out for Linux, which remains a largely open, collaborative ecosystem. Meanwhile, all the BSDs are good for is serving as less-good Linuxes that can be shoved into proprietary products. Google is choking out AOSP, which they can do because of Android's "less restrictive" license.
Copyleft licenses are demonstrably better for open source projects in the long run. We've had enough time to prove that out now.
The success of Linux over BSD has more to do with a lawsuit in the early 90's over whether or not BSD infringed on Unix's source code, which made Linux the only viable open source Unix-like operating system if you had to ask a legal department the question.
Look beyond the OS, and much of the tech stack is dominated by non-copyleft open source projects. Both the major web servers--Apache and nginx--are permissively licensed, for example. Your SSL stacks are largely permissively licensed; indeed, most protocol servers seem to me to largely be permissively licensed rather than copyleft.
And I should also point out a clear example where copyleft has hobbled an ecosystem: Clang and LLVM have ignited a major compiler-based ecosystem of ancillary tools for development such as language servers. The gcc response to this is... to basically do nothing, because tight integration of the compiler into other components might allow workarounds that release the precious goodness of gcc to proprietary software, and Stallman has resisted letting emacs join in this revolution because he doesn't want a dependency on non-copyleft software. An extra cruel irony is that Clang appears to be an existential threat to the proprietary EDG compiler toolchain, which would mean it took a permissive license to do what the goal of the copyleft license was in the first place: kill proprietary software.
I think it's pretty reductive to boil down Linux's success to the choice of license. There's governance model, development model, institutional inertia, ... - and the Linux ecosystem contains tons of permissively licensed pieces of software, some of which massively contributed to its success (the once-default webserver that came with its own permissive license, the APL). Even the kernel includes APL, BSD-2-Clause and MPL'ed code.
To the contrary, GNU Hurd is GPL'ed and is much less successful than the linux kernel.
That is an extremely cherry-picked example. There are plenty of examples of permissively-licensed software that is very successful, and no evidence that the license choice is why Linux won.
Good take. Also note the very well thought out decision from Linus and team to keep GPLv2, it is a balancing game.
In the end, if you want projects to succeed they need contributors. Unfortunately, some of them need to be reminded to play fair more than others, and in those cases the legalese helps.
I'm not even going to point out the hundreds of counter examples to your argument.
You clearly didn't understand my point: I'm not arguing about whether GPL is better than MIT or BSD or even SSPL/etc.
My point is that if someone else chooses to release their software with less restrictions on it than I would choose, that's literally none of my business.
They wrote the fucking thing, they get to choose how it's fucking licensed.
Plenty of organisations (and thus people) skip using GPL licensed software due to inability or unwillingness to be bound by its terms.
I'm still waiting for the day the GPL camp says they're not going to use things like OpenSSH, Apache, Nginx, Postgres, Python, Ruby - because they're too fucking permissive.
Given that enshittification is a thing, and embrace extend extinguish is a thing, I'm inclined to agree with you there, without the /s.
This is yet another case where my policy of stripping out unnecessary dependencies has paid off. thunar-volman and kde solid both pull in udisks by default, but back in 2017 I started maintaining a fork of the default Gentoo ebuild to eliminate the dependency on udisks. The thunar-volman case is a great example of why Gentoo USE flags are useful not only for customizing a system but for security, by making it easier to reduce the attack surface through disabling features that upstreams leave enabled by default.
As someone who has been using linux quite happily on the desktop for more than 20 years now, I have to say it remains an eternal experiment, feature wise as well as security wise.
That's certainly an interesting standpoint.
I use both privately and professionally, and while I accept that security-wise (even with SELinux) they feel lacking, feature-wise they far exceed Windows, which I use as my other OS, except in gaming experience.
I wish I had something like GrapheneOS on desktops (yes I know about Qubes)
> feature-wise they far exceed Windows
I tried Ubuntu last year, and it felt very limited compared to Windows. It lacked very basic features like face/fingerprint login, hybrid sleep, factory reset, live FDE (or post-installation FDE), fast fractional HiDPI, two-finger right-click, "sudo" on dock etc.
There is https://grsecurity.net/ but it's not free. It's developed by people with much more experience defending against attackers than all of the other projects combined.
Looks like grsecurity has a different view of ethics than I do.
Just searching grsecurity on HN turns up some interesting stuff.
Who are they?
Chromium OS gets very close, they also have fully-functional VM-based isolation for Linux applications with GPU acceleration.
Unfortunately, there's no popular non-Google distro of it.
The fact that Chromium OS has been teetering on the edge of deprecation/merging with Android/Fuchsia for a decade I think has deterred people from building stuff on top of it.
It also seems to have a lot of new code every year for very few new features. It's as if they get every new intern to rewrite a bit of the innards, and then next summer another intern rewrites it again.
OTOH, it has been used for multiple container-optimized distros by now:
First CoreOS, which forked into Flatcar Linux (now funded by Microsoft) and Fedora CoreOS (rewritten from a Gentoo/ChromeOS base to a Fedora base), and Google's Container-Optimized OS (used heavily in Google Kubernetes Engine).
A lot of code for very few user-visible changes is the nature of operating systems. Making light of the people who work on ChromeOS just makes you sound ignorant.
> I wish I had something like GrapheneOS on desktops (yes I know about Qubes)
SecureBlue and Kicksecure are the closest equivalents.
Don't know much about SecureBlue but Kicksecure isn't comparable to Qubes at all. It's a hardened distro, not a way to isolate workloads through virtualisation. Depending on what you're trying to achieve they can both fit but they are fundamentally very different in their approach to security.
> I swear to god reading comprehension is approaching zero due to chatgpt.
> I wish I had something like GrapheneOS on desktops
Secureblue is essentially as close to GrapheneOS as Desktop Linux can get. Neither my response nor the original question required qubes comparisons. It was merely mentioned.
No the closest alternative is https://grsecurity.net/
Factually wrong from that very site
> grsecurity® is the only drop-in Linux kernel replacement offering high-performance, state-of-the-art exploit prevention against both known and unknown threats.
While secureblue is a full desktop distro (not just a kernel) that integrates key grapheneos hardening tools like their hardened malloc and forks of their hardened chromium and works with flatpak as a base for hardened application deployment.
grsecurity does literally none of that.
Yes grsecurity offers actual hardening instead of touting snakeoil.
You are literally saying that hardening the kernel is the same as having the desktop environment hardened and a basis for app isolation. And to add a cherry on top of that both secureblue and kicksecure use almost all the same hardening additions to the linux kernel as grsecurity.
You do not understand what you are talking about because if you did you'd be embarrassed for how braindead your response is.
Qubes is definitely hard to daily drive. With its ancient default XFCE design, it looks really ugly. Plus no hardware acceleration.
What's hard about it exactly? It's my daily driver. You can install KDE, too: https://forum.qubes-os.org/t/kde-changing-the-way-you-use-qu...
same! qubes is probably the actual solution for now, but i've seen some grapheneos people work on https://secureblue.dev/ and that seems a lot more "normal"
I have been meaning to try out secureblue and hopefully even run it on production VMs in proxmox. Is it stable yet?
Re:"Eternal experiment"... have you seen Windows 11? Or even 10? The devs can't keep their hands off of the thing, changing, breaking and fixing every component every few months.
Adding ads to every possible surface, finding new ways to obfuscate built-in spyware.
I don’t think we need to “whatabout” Windows. I don’t think anyone would say they are trying too many experiments… actually, Windows feels like it was mostly made by overworked folks doing the bare minimum to not get fired. No time for experiments or caring.
If you think Linux is an experiment, you should see the other OSes.
I'm pretty sure that the BSD family is pretty mature and secure. Linux is just good enough for most people.
A big part of the difference is that the BSDs are designed by a governing committee. They usually don't have 15 different solutions for the same problem, but instead 2-3 solutions that work well.
Take filesystems: the official filesystems are UFS(1/2) and ZFS. They have GEOM in place of LVM and LUKS, and more.
That being said, the majority of money and development goes into Linux, which by itself may make it a better system (eventually).
Edit: Of course UFS is not deprecated.
I can't help but make the comparison with cryptographic network protocols, where the industry started with a kitchen-sink approach (e.g. pluggable cipher suites in TLS) and ended up moving towards fixed primitives (e.g. Wireguard mostly uses DJB-originated techniques, take them or leave them).
The general lesson from that seems to be that a simpler, well-understood, well-tested and mostly static attack surface is better than a more complex, more fully-featured and more dynamic attack surface. I wonder whether we'll see a trend towards even more boring Linux distributions which focus on consistency over modernity. I wouldn't complain if we did.
The main strength of WireGuard is that it’s simple. It’s like 10% of the code size of IPSEC.
Less code means less possibility for bugs, and is easier to audit.
In my book, WireGuard perfectly follows the UNIX philosophy of making a simple tool that does exactly one thing and does it well.
> A big part of the difference is that the BSDs are designed by a governing committee. They usually don't have 15 different solutions for the same problem, but instead 2-3 solutions that work well.
The right comparison is not between a particular BSD and Linux, it's between a particular BSD and a Linux distro.
I feel the BSDs are much more different from each other than the average Linux distros are.
Average/most popular distros, maybe.
The full range of distros are very different from each other. Consider Void, Alpine, Gentoo, Chimera, NixOS.....
Different C libraries, init systems, different default command line utilities....
That's nothing. Alpine can run Glibc binaries with compat libraries.
Try running a FreeBSD binary under OpenBSD.
> A big part of the difference is that the BSDs are designed by a governing committee
While I cannot agree nor disagree on the quality of BSDs (haven't used one in 20 years), I find it funny that in this case a design by committee is proof of quality.
I guess it's better than design by headless chicken which is how the Linux user-space is developed. Personally, I am a big fan of design by dictatorship, where one guy at the top either has a vision or can reject silly features and ideas with strong-enough words (Torvalds, Jobs, etc.) - this is the only way to create a cohesive experience, and honestly if it works for the kernel, there's no reason it shouldn't work in userspace.
> While I cannot agree nor disagree on the quality of BSDs (haven't used one in 20 years), I find it funny that in this case a design by committee is proof of quality.
I don't think "design" is correct word: organized, managed, or ran perhaps.
> The FreeBSD Project is run by FreeBSD committers, or developers who have direct commit access to the master Git repository.[1] The FreeBSD Core Team exists to provide direction and is responsible for setting goals for the FreeBSD Project and to provide mediation in the event of disputes, and also takes the final decision in case of disagreement between individuals and teams involved in the project.[2]
* https://en.wikipedia.org/wiki/FreeBSD_Core_Team
There is no BDFL, à la Linux or formerly Python: it's a 'board of directors'. Decisions are mostly dispute / policy-focused, and less technical for a particular bit of code.
They fill the same position as a BDFL though.
They decide what gets included in the default distribution, they set the goals and provider sponsorships for achieving them.
So yes, board of directors is probably more fitting.
And then of course you have the people with a commit bit. They can essentially work on whatever they like, but inclusion into the main branch is still up to the core team.
There was a huge debate some years ago when Netgate sponsored development/porting of WireGuard to FreeBSD, and the code was of a poor quality, and was ultimately removed from FreeBSD 13.
Similar to Debian's governing structure.
UFS is not deprecated on FreeBSD.
I believe it is the default on netBSD
BSD doesn’t count, everybody agrees it is the best OS they aren’t using.
> I'm pretty sure that the BSD family is pretty mature and secure.
Not to mention illumos-based systems too.
I ran OpenSolaris for a while on my laptop and it's quite nice. However the lack of support by practically any software vendor made many things a pain.
Since then even more stuff went to the Web, but I really doubt illumos got any extra traction.
Most of our server infrastructure runs on illumos at $work. SmartOS/Triton handles our "cloud" and OmniOS runs our storage. The linux monoculture problem luckily can still be handled with zones and bhyve, and I do trust illumos developers' competence to deliver good quality secure software a lot more than linux developers' as well.
Now if FreeBSD (or indeed illumos) would get CUDA-support we could stop using linux for GPU nodes too.
> Now if FreeBSD (or indeed illumos) would get CUDA-support we could stop using linux for GPU nodes too.
Could you not run Linux CUDA binaries under FreeBSD's Linuxulator?
It is possible, yes, but I would prefer to have full linux-free support for production use. There is on-going work for FreeBSD Cuda, though[0]. Just have to wait and see.
[0] https://www.freebsd.org/status/report-2024-04-2024-06/#_part...
>is pretty mature and secure
They are still missing something like the capability-based security that iOS and Android have, where apps have to be granted access to use things like files or the camera. It may have been considered secure a couple decades ago, but they have fallen behind the competition.
FreeBSD literally has Capsicum: https://en.wikipedia.org/wiki/Capsicum_(Unix) That might be the most pure capability system out of all of them, though it's not something that works without application modification (yet). Android and iOS applications can automatically work with the native capability framework because they rely on higher-level SDK APIs. But AFAIU those capability systems are very coarse-grained, in the sense that it's difficult to leverage the capability system internally within a single application. And keeping lower-level APIs (e.g. for C and POSIX filesystem I/O) nominally working (if at all) requires some impure hacks. All of which makes them very similar to FreeBSD Jails or Linux containers in that respect.
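For a feel of what the Capsicum pattern looks like in code, here's a rough sketch (FreeBSD; error handling mostly elided; "input.txt" is just a hypothetical input file): acquire the resources you need, limit the rights on each descriptor, then enter capability mode.

    /* Rough sketch of the Capsicum pattern on FreeBSD. After cap_enter() the
     * process cannot reach any new global namespace - no new paths, sockets
     * or PIDs - only the descriptors it already holds, with whatever rights
     * were left on them. */
    #include <sys/capsicum.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("input.txt", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        cap_rights_t rights;
        cap_rights_init(&rights, CAP_READ, CAP_SEEK);
        cap_rights_limit(fd, &rights);   /* this fd can now only be read/seeked */

        cap_enter();                     /* point of no return */

        char buf[128];
        ssize_t n = read(fd, buf, sizeof buf);
        if (n > 0)
            write(STDOUT_FILENO, buf, (size_t)n);

        /* open("/etc/passwd", O_RDONLY) here would fail with ECAPMODE. */
        return 0;
    }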
I wouldn't consider any of these systems "secure", though, as a practical matter. In terms of preventing a breakout, I'd trust an application on OpenBSD with strict pledge and unveil limits, or a Linux process in a classic seccomp sandbox (i.e. only read, write, and exit syscalls), more than any of those other systems. Maybe Capsicum, too, but I'm not familiar enough with the implementation to know how well it limits kernel code surface area. But any application that can poke at (directly or indirectly) complicated hardware, like the GPU, is highly problematic unless there are proofs of correctness for any series of inputs that can be sent by the process (which I don't think is the case).
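And the pledge/unveil combination mentioned above looks roughly like this; a sketch with a hypothetical path, not a complete program:

    /* Sketch of OpenBSD's unveil()/pledge(). unveil() limits which paths are
     * visible at all to the process; pledge() limits it to broad syscall
     * "promise" groups; violating a pledge kills the process. */
    #include <err.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        if (unveil("/var/db/app/data.txt", "r") == -1 ||   /* only visible path */
            unveil(NULL, NULL) == -1)                      /* lock the list */
            err(1, "unveil");

        /* stdio: basic I/O on open fds; rpath: may open files read-only. */
        if (pledge("stdio rpath", NULL) == -1)
            err(1, "pledge");

        FILE *f = fopen("/var/db/app/data.txt", "r");      /* allowed */
        if (f) fclose(f);

        /* fopen() of any other path now fails, and an unpledged syscall
         * such as socket() would abort the process. */
        return 0;
    }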
You can use Jails and limit access to hardware resources for each jail. Still not as dynamic, but will get the job done.
Sure, but this is not done automatically for the user.
For the types of computers BSD is typically run on, just unplug the webcam.
IMO, the real problem with trying to enforce capability-based systems on desktop/server environments is the correct API isn't implemented. `capabilities(7)` is only a tiny subset of `credentials(7)`, `PR_SET_NO_NEW_PRIVS` is an abomination, `SCM_RIGHTS` has warts, and `close_range` is fundamentally braindead.
We need at least the following sets: effective, permitted, bounding (per escalation method?), and the ability to make a copy of all of the preceding to automatically apply to a child (or to ourselves if we request an atomic change). Linux's `inheritable` set is just confusing, and confusion means people will use it wrong. At least we aren't Windows.
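For readers who haven't met these interfaces, here's roughly what the existing knobs look like in practice (a minimal sketch; the bounding-set drop only succeeds with CAP_SETPCAP):

    /* Minimal sketch of the pieces criticized above, as they exist today:
     * PR_SET_NO_NEW_PRIVS is a blunt one-way switch (no future execve() can
     * grant privileges via setuid bits or file capabilities), and the
     * bounding set is shrunk one capability at a time via prctl(). */
    #include <stdio.h>
    #include <sys/prctl.h>
    #include <linux/capability.h>

    int main(void)
    {
        /* Irreversible, inherited by children. */
        if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) != 0)
            perror("PR_SET_NO_NEW_PRIVS");

        /* Drop CAP_NET_RAW from the bounding set so it can never be
         * (re)acquired; needs CAP_SETPCAP to actually succeed. */
        if (prctl(PR_CAPBSET_DROP, CAP_NET_RAW, 0, 0, 0) != 0)
            perror("PR_CAPBSET_DROP");

        printf("no_new_privs = %d\n", prctl(PR_GET_NO_NEW_PRIVS, 0, 0, 0, 0));
        return 0;
    }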
> They are still missing something like capability based security
...like Capsicum?
No, that requires explicit changes by programs to use meaning that malware can ignore it and steal your browser's cookies and take secret photos with your webcam.
So the capability-based security framework is not missing unlike your original statement?
My original statement is about how users have to explicitly give programs access to the files and the webcam before they can use them. This is missing.
iOS is so insecure that thousands of people have been hacked and at least 1 person was killed.
The last place in security is iOS.
Including OpenBSD?
>20 years ago
So while Windows was letting everyone be root?
Software is rarely "done", so is quite naturally always an evolving experiment of sorts.
how much harder is container escaping compared to vm escaping? i understand that containers are not truly meant to be security boundaries but they are often thought of and even used as such.
> how much harder is container escaping compared to vm escaping?
The answer heavily depends on your configuration. Unprivileged with a spartan syscall filter and a security profile is very different than privileged with the GPU bindmounted in (the latter amounts to a chroot and a separate user account).
Hence if I ever get money for an infrastructure pentest, I want to include a scenario that scares me a bit: The hijacked application server. The pentesters give me a container with whatever tooling they want and a reverse shell and that gets deployed in the dev-infrastructure, once privileged and once unprivileged, both with a few secrets an application server would have. I'd just reuse a deployment config from some job. And then have at it.
And yes, this will most likely be a mess.
Situational, but if you're in default configurations it's comparable. Both will need some form of unknown vuln. It boils down to whether you trust more the Linux namespacing logic and container runtime glue or the hypervisor logic.
That goes for all (active) software really. Otherwise people call it obsolete or abandoned.
We're talking about a local privilege escalation here.
That assumes:
1) The attacker already has an account on the system
2) The app `udisks` is installed on the system.
Everyone is fighting the same battle and it's a good thing. It is happening because the rest of the system is hard enough to attack these days. This is true for all major OSes.
Only fanboys bend reality to make this into a good-vs-bad argument.
> I have to say it remains an eternal experiment
You just defined 'life' in general.
Let us not pretend other OSes are flawless either. Microsoft is constantly patching, and Apple has been the source of so many hacks that thousands of VIPs were affected and a person was murdered.
What a weird comment - if Apple software had fewer exploits then the murder would have been averted? And those 'VIPs', whoever they are - would it be less significant if they were normies? I sincerely hope none of my coding mistakes ever causes a VIP to be murdered.
Local root privilege escalation is mostly irrelevant these days. It’s only useful as part of an exploit chain, really. It’s not like shell servers are still around.
An exploit chain, like combining it with the PAM issue they mentioned in the very same article, affecting Fedora.
The article was about two issues that combine to make a single local-privilege-escalation, so the PAM thing isn't a separate exploit chain, it's just part of getting local root in this vulnerability.
What the parent poster meant is that you first need a way to run arbitrary code before local privilege escalation matters, so the exploit chain has to include _something_ that gets you local code execution.
I tend to agree with the parent poster, for most modern single-user linux devices, local privilege escalation means almost nothing.
Like, I'm the only user on my laptop. If you get arbitrary code execution as my user, you can log my keystrokes, steal my passwords and browser sessions, steal my bitcoin wallet, and persist reasonably well.... and once you've stolen my password via say keylogging me typing `sudo`, you now have root too.
If you have a local privilege escalation too, you still get my passwords, bitcoin wallet, etc, and also uh... you can persist yourself better by injecting malware into sshd or something or modifying my package manager? idk, seems like it's about the same.
Some services don't run as the same user who logs into the laptop.
> ...for most modern single-user linux devices, local privilege escalation means almost nothing.
I haven't actually looked at the numbers, but I strongly suspect that it's true that the overwhelming majority of single-user Linux devices out there are Android devices. If that's true, then it's my understanding that Android does bother to fairly properly sandbox programs from each other... so an escalation to root would actually be a significant gain in access.
Android is not a single user system. Every app, every service basically everything gets its own user.
Applications have different user IDs and different SELinux contexts.
Android security is tight
Does Android use Udisks? I assumed it did not, due to the difference in architecture over most traditional GNU/Linux desktop systems
I have no idea if Android uses udisks. It has been something like a decade since I last looked at 'ps' output on an Android machine, so any information on the topic I might have had has faded away with time.
This type of exploit is a goldmine for attackers; it means they have a window of a few months to years to turn any basic access into root. It doesn't have to be a super complex exploit chain; anyone running WordPress botnets is going to add this to their arsenal.
Usually they don't need to be root to access and exfiltrate data anyway.
There are plenty of shell servers in academic environments...
An attacker doesn't need a shell server to run code locally: you chain it with an exploit against a service, you have root, and now you have lateral attack capabilities.