
Righty-ho, I’m back from Rust Nation, and busily horrifying my teenage daughter with my (admittedly atrocious) attempts at doing an English accent1. It was a great trip with a lot of good conversations and some interesting observations. I am going to try to blog about some of them, starting with some thoughts spurred by Jon Seager’s closing keynote, “Rust Adoption At Scale with Ubuntu”.
For some time now I’ve been debating with myself, has Rust “crossed the chasm”? If you’re not familiar with that term, it comes from a book that gives a kind of “pop-sci” introduction to the Technology Adoption Life Cycle.
The answer, of course, is it depends on who you ask. Within Amazon, where I have the closest view, the answer is that we are “most of the way across”: Rust is squarely established as the right way to build at-scale data planes or resource-aware agents and it is increasingly seen as the right choice for low-level code in devices and robotics as well – but there remains a lingering perception that Rust is useful for “those fancy pants developers at S3” (or wherever) but a bit overkill for more average development3.
On the other hand, within the realm of Safety Critical Software, as Pete LeVasseur wrote in a recent rust-lang blog post, Rust is still scrabbling for a foothold. There are a number of successful products but most of the industry is in a “wait and see” mode, letting the early adopters pave the path.
The big idea that I at least took away from reading Crossing the Chasm and other references on the technology adoption life cycle is the need for “reference customers”. When you first start out with something new, you are looking for pioneers and early adopters that are drawn to new things:
What an early adopter is buying [..] is some kind of change agent. By being the first to implement this change in the industry, the early adopters expect to get a jump on the competition. – from Crossing the Chasm
But as your technology matures, you have to convince people with a lower and lower tolerance for risk:
The early majority want to buy a productivity improvement for existing operations. They are looking to minimize discontinuity with the old ways. They want evolution, not revolution. – from Crossing the Chasm
So what most convinces people to try something new? The answer is seeing that others like them have succeeded.
You can see this at play in both the Amazon example and the Safety Critical Software example. Clearly seeing Rust used for network services doesn’t mean it’s ready to be used in your car’s steering column4. And even within network services, seeing a group like S3 succeed with Rust may convince other groups building at-scale services to try Rust, but doesn’t necessarily persuade a team to use Rust for their next CRUD service. And frankly, it shouldn’t! They are likely to hit obstacles.
All of this was on my mind as I watched the keynote by Jon Seager, the VP of Engineering at Canonical, which is the company behind Ubuntu. Similar to Lars Bergstrom’s epic keynote from years past on Rust adoption within Google, Jon laid out a pitch for why Canonical is adopting Rust that was at once visionary and yet deeply practical.
“Visionary and yet deeply practical” is pretty much the textbook description of what we need to cross from early adopters to early majority. We need folks who care first and foremost about delivering the right results, but are open to new ideas that might help them do that better; folks who can stand on both sides of the chasm at once.
Jon described how Canonical focuses their own development on a small set of languages: Python, C/C++, and Go, and how they had recently brought in Rust and were using it as the language of choice for new foundational efforts, replacing C, C++, and (some uses of) Python.
Jon talked about how he sees it as part of Ubuntu’s job to “pay it forward” by supporting the construction of memory-safe foundational utilities. Jon meant support both in terms of finances – Canonical is sponsoring the Trifecta Tech Foundation’s work to develop sudo-rs and ntpd-rs and sponsoring the uutils org’s work on coreutils – and in terms of reputation. Ubuntu can take on the risk of doing something new, prove that it works, and then let others benefit.
Remember how the Crossing the Chasm book described early majority people? They are “looking to minimize discontinuity with the old ways”. And what better way to do that than to have drop-in utilities that fit within their existing workflows?
With new adoption comes new perspectives. On Thursday night I was at a dinner5 organized by Ernest Kissiedu6. Jon Seager was there along with some other Rust adopters from various industries, as were a few others from the Rust Foundation and the open-source project.
Ernest asked them to give us their unvarnished takes on Rust. Jon made the provocative comment that we needed to revisit our policy around having a small standard library. He’s not the first to say something like that, it’s something we’ve been hearing for years and years – and I think he’s right! Though I don’t think the answer is just to ship a big standard library. In fact, it’s kind of a perfect lead-in to (what I hope will be) my next blog post, which is about a project I call “battery packs”7.
The broader point though is that shifting from targeting “pioneers” and “early adopters” to targeting “early majority” sometimes involves some uncomfortable changes:
Transition between any two adoption segments is normally excruciatingly awkward because you must adopt new strategies just at the time you have become most comfortable with the old ones. [..] The situation can be further complicated if the high-tech company, fresh from its marketing success with visionaries, neglects to change its sales pitch. [..] The company may be saying “state-of-the-art” when the pragmatist wants to hear “industry standard”. – Crossing the Chasm (emphasis mine)
Not everybody will remember it, but in 2016 there was a proposal called the Rust Platform. The idea was to bring in some crates and bless them as a kind of “extended standard library”. People hated it. After all, they said, why not just add dependencies to your Cargo.toml? It’s easy enough. And to be honest, they were right – at least at the time.
I think the Rust Platform is a good example of something that was a poor fit for early adopters, who want the newest thing and don’t mind finding the best crates, but which could be a great fit for the Early Majority.8
Anyway, I’m not here to argue for one thing or another in this post, but more for the concept that we have to be open to adapting our learned wisdom to new circumstances. In the past, we were trying to bootstrap Rust into the industry’s consciousness – and we have succeeded.
The task before us now is different: we need to make Rust the best option not just in terms of “what it could be” but in terms of “what it actually is” – and sometimes those are in tension.
Later in the dinner, the talk turned, as it often does, to money. Growing Rust adoption also comes with growing needs placed on the Rust project and its ecosystem. How can we connect the dots? This has been a big item on my mind, and I realize in writing this paragraph how many blog posts I have yet to write on the topic, but let me lay out a few interesting points that came up over this dinner and at other recent points.
First, there are more ways to offer support than $$. For Canonical specifically, as they are an open-source organization through-and-through, what I would most want is to build stronger relationships between our organizations. With the Rust for Linux developers, early on Rust maintainers were prioritizing and fixing bugs on behalf of RfL devs, but more and more, RfL devs are fixing things themselves, with Rust maintainers serving as mentors. This is awesome!
Second, there’s an interesting trend about $$ that I’ve seen crop up in a few places. We often think of companies investing in the open-source dependencies that they rely upon. But there’s an entirely different source of funding, and one that might be even easier to tap, which is to look at companies that are considering Rust but haven’t adopted it yet.
For those “would be” adopters, there are often individuals in the org who are trying to make the case for Rust adoption – these individuals are early adopters, people with a vision for how things could be, but they are trying to sell to their early majority company. And to do that, they often have a list of “table stakes” features that need to be supported; what’s more, they often have access to some budget to make these things happen.
This came up when I was talking to Alexandru Radovici, the Foundation’s Silver Member Director, who said that many safety critical companies have money they’d like to spend to close various gaps in Rust, but they don’t know how to spend it. Jon’s investments in Trifecta Tech and the uutils org have the same character: he is looking to close the gaps that block Ubuntu from using Rust more.
Well, first of all, you should watch Jon’s talk. “Brilliant”, as the Brits have it.
But my other big thought is that this is a crucial time for Rust. We are clearly transitioning in a number of areas from visionaries and early adopters towards that pragmatic majority, and we need to be mindful that doing so may require us to change some of the way that we’ve always done things. I liked this paragraph from Crossing the Chasm:
To market successfully to pragmatists, one does not have to be one – just understand their values and work to serve them. To look more closely into these values, if the goal of visionaries is to take a quantum leap forward, the goal of pragmatists is to make a percentage improvement–incremental, measurable, predictable progress. [..] To market to pragmatists, you must be patient. You need to be conversant with the issues that dominate their particular business. You need to show up at the industry-specific conferences and trade shows they attend.
Re-reading Crossing the Chasm as part of writing this blog post has really helped me square where Rust is – for the most part, I think we are still crossing the chasm, but we are well on our way. I think what we see is a consistent trend now where we have Rust champions who fit the “visionary” profile of early adopters successfully advocating for Rust within companies that fit the pragmatist, early majority profile.
It strikes me that open-source is just an amazing platform for doing this kind of marketing. Unlike a company, we don’t have to do everything ourselves. We have to leverage the fact that open source helps those who help themselves – find those visionary folks in industries that could really benefit from Rust, bring them into the Rust orbit, and then (most important!) support and empower them to adapt Rust to their needs.
This last part may sound obvious, but it’s harder than it sounds. When you’re embedded in open source, it seems like a friendly place where everyone is welcome. But the reality is that it can be a place full of cliques and “oral traditions” that “everybody knows”9. People coming with an idea can get shut down for using the wrong word. They can readily mistake the, um, “impassioned” comments from a random contributor (or perhaps just a troll…) for the official word from project leadership. It only takes one rude response to turn somebody away.
So what will ultimately help Rust the most to succeed? Empathy in Open Source. Let’s get out there, find out where Rust can help people, and make it happen. Exciting times!
Also, Ubuntu using a non-GPL licensed userland means they can pull all kinds of tricks to allow more TiVoization in the Linux ecosystem.
Combine this with what Amutable (systemd guys) are building, and you can have monolithic, closed source, non-user-modifiable Linux distributions or flavors.
Ubuntu and companies which embed Linux into their products will love this from a business perspective.
Consider: An end-to-end signature-enabled, verified, attestable Linux environment with completely closed-source util-linux and userland packages, down to "ls" and "cd". Deliciously apocalyptic.
We're two stops away from this, and there is no shortage of momentum or funding to enable the future.
Sure, but util-linux and the BSDs won't suddenly cease to exist. If you don't like what Ubuntu is doing, just don't use it.
Upstream debian has been much more stable for as long as Ubuntu has existed...
> Sure, but util-linux and the BSDs won't suddenly cease to exist. If you don't like what Ubuntu is doing, just don't use it.
And then websites and applications stop working if you're not using a verified, attested, locked-down OS and you're stuck with your nice free software system that will not do your online banking, let you chat with your friends, or access your company resources.
At that point I'll just move into the woods with a typewriter and chat with my friends via HAM radio
Edit: Also, why would some userspace components in a slightly-less-free license cause this to happen? if the powers-that-be want to shut you out of the internet, they can do it now; lots of proprietary software already exists.
> Also, why would some userspace components in a slightly-less-free license cause this to happen?
It won't, in itself, but it appears to be yet another little push forward on the slippery slope that probably will end where it appears to inevitably end.
but again, the bsd userspace already has a permissive license. if the mustache twirling villains want to lock down stuff, they can do it now. they don't need any push forward.
Yeah, but people don't really want to use the BSD userspace. A lot of the Linux stuff people want to build on assumes a GNU userland and it's not trivial to build a BSD/Linux that actually does relevant computer stuff.
But in places where that stuff isn't relevant, we already see a lot of locked-down devices like the Nintendo Switch and PlayStation based on BSD precisely because they can leverage free software but still lock it down. macOS with its BSD userland is also kind of like this -- the OS is getting gradually more locked down over time, but the frog boils slowly.
If you tighten the screws too hard and fast then people will scream and yell and maybe leave your business for a competitor -- even though it's technically feasible, that means you can't disallow access to banking websites for generic-browser-on-generic-OS now. But we are, brick by brick, building a foundation where that will seem inevitable.
The argument is basically that making it easier to lock down general purpose computing devices like desktop computers (by, for example, making a non-GPL drop-in replacement for GNU *utils) will eventually aid in making it happen. The powers that be will use tried-and-true arguments about security and think-of-the-kids etc to make it seem like running a mutable, untrusted OS is an unacceptable risk.
>that means you can't disallow access to banking websites for generic-browser-on-generic-OS now. But we are, brick by brick, building a foundation where that will seem inevitable.
If you have too much non-standard stuff going on in your browser or mobile device, this is already happening, to a degree. Not a hard block, but increasing difficulties
People give away their freedoms all the time. Most people are walking around with facebook and tiktok tracking their every move. they don't care.
Some linux users aren't going to stop this sort of thing from happening. If Chase Bank wants to only allow MacOS and Windows 11 computers to access their website, the 1% of their userbase that uses something else isn't going to move the needle, and 99% of their users won't care (or even notice).
If this was going to happen, it would have already happened. The pieces are all there already.
> People give away their freedoms all the time. Most people are walking around with facebook and tiktok tracking their every move. they don't care.
This is absolutely true. I'm saying someone should care, because it does matter.
> Some linux users aren't going to stop this sort of thing from happening. If Chase Bank wants to only allow MacOS and Windows 11 computers to access their website, the 1% of their userbase that uses something else isn't going to move the needle, and 99% of their users won't care (or even notice).
For some businesses, losing 1% of your customers is actually a lot of customers and a lot of money, and all else being equal they would prefer to not lose them.
> If this was going to happen, it would have already happened. The pieces are all there already.
No, they really aren't. Again, it's perhaps technically feasible to flip the switch, but it doesn't make business sense yet.
How many people are doing online banking without running on a fully cryptographically verifiable/attestable OS? This means everyone not using a TPM, Secure Boot, etc. This means grandpa with an old Windows 10 machine or an old Mac that perhaps he should not still be using but he doesn't care, he just wants to pay his bills. I don't have numbers of course but I bet you this starts looking like a hell of a lot more than 1% of the userbase.
There are web APIs for this sort of thing in all major browsers but no one is really using them yet. But they exist for a reason, much like Windows 11 requires a TPM for a reason, and this tech will at some point be deployed for things like online banking. Of course it will.
> If this was going to happen, it would have already happened. The pieces are all there already.
Same things were said for:
- Removal of DRM from music: Happened.
- Age verification on the internet: Happening.
- Locked down personal devices: Happened.
- Total surveillance in cities: Happened.
- Not being able to buy but only rent: Happened in many digital formats.
- Internet activation of software: Happened.
- Tracking individual persons real-time: Happened.
- Browser attestation: Google is trying hard.
- Attestation for Internet Banking: Reality in S. Korea.
etc. etc.

This resonates. The after-effects of age verification and the general exclusion of freedom-loving coders are going to leave me standing here in the tumbleweeds with my 90s Toyota and a laptop with solar panels, unregulated radio frequencies my only communication with the outside world.
It's like those movies coming true. I've already had casual user accounts frozen just for accessing via VPN, or some other inscrutable reason.
I'm with you and the only solace in this dystopia is the fact that I increasingly feel like I just don't care. I don't really like using computers anymore. I liked them when they represented freedom and creativity.
So fine, exclude me from all your platforms, there's nothing there for me. It's all bad content from bad people (or increasingly: not even people) running on bad software. I'm not giving up my freedom to partake in that, I'd rather just stop using your shit.
(But I would very much like to be able to pay my bills and buy my train tickets, so I'll play your game and have a smartphone. Fine. You win this round.).
Yeah. If it were a thing it would have happened by now. The friction to lock users down would be very bad for business.
I don't use Ubuntu anywhere, so there's no actions I need to take.
> Upstream debian has been much more stable for as long as Ubuntu has existed...
Well, I have used Debian since before Ubuntu existed, and it was never unstable to begin with. I understand the value of more eyes looking into something and its advantages, but let's say Ubuntu has acted out of selfish reasons towards Debian in some cases. I personally took a side in one of those debates, even.
Yes, I follow debian-devel, and even led a Debian derivative distro for some time.
Meanwhile, the Canonical employee who's responsible for some aspects of apt has decided to insert Rust code. Because of this, and just this, Debian dropped 4 entire architectures. https://lists.debian.org/debian-devel/2025/10/msg00285.html
>I plan to introduce hard Rust dependencies and Rust code into APT, no earlier than May 2026. This extends at first to the Rust compiler and standard library, and the Sequoia ecosystem. ... If you maintain a port without a working Rust toolchain, please ensure it has one within the next 6 months, or sunset the port. It's important for the project as a whole to be able to move forward and rely on modern tools and technologies and not be held back by trying to shoehorn modern software on retro computing devices.
If you think Canonical isn't going to lead Debian around by the nose on this you haven't been paying attention.
further down that thread
https://lists.debian.org/debian-devel/2025/10/msg00288.html
> Rust is already a hard requirement on all Debian release architectures and ports except for alpha, hppa, m68k, and sh4 (which do not provide sqv).
It seems to me that the APT change was just a nail in the coffin of these older architectures, which would have eventually been sunset anyway, due to sqv not being available. If you really want to run some kind of Linux on these very old machines, godspeed, but you can't expect them to be maintained by a project with its fingers in so many pies forever.
Yep. And nothing you've linked or pointed out changes the claim I made: that re: rust, Canonical employees are making the decisions, not Debian.
The thing with open source, and many industry standards like ISO and ECMA, is that whoever shows up gets to call the shots.
So when it isn't going in the direction we care about, maybe more people with a different mindset should join.
It is like complaining about who wins elections without bothering to cast a valid vote.
Well, it's not always true.
Look at how the proposal for making netplan the default network manager in Debian went. Not good, from Canonical's perspective.
Making /tmp behave the way the systemd guys want also didn't go according to plan. The behavior was modified somewhat because of the discussion.
Rust's influence doesn't come from Canonical per se, but from its promise to eradicate memory-related bugs. The initial hype was off the charts, but it's coming down, and the shortcomings are becoming obvious.
Canonical is trying to affect Debian, that's true, but it's not always a given.
The fact that Canonical has always been happy to ship software that they know full well shouldn't be shipped doesn't fill me with hope that it will even work decently without causing massive issues for everyone (remember when they started to use pulseaudio? In the end it was such a mess that the solution was to abandon it).
It was rough for a while, but my debian machine still runs pulseaudio and it works pretty well. I agree that ubuntu doesn't do enough testing before releasing stuff, but I am grateful that so many people are willing to grind themselves against the bugs before they hit more conservative distributions
Remove it right now
> If you don't like what Ubuntu is doing, just don't use it.
They said this about systemd too, but look: LFS dropped non-systemd support.
So... Don't use LFS. Nothing is stopping you from using Linux with whatever init system or user space you want.
It used to be called GNU/Linux for a reason. Android/Linux is certainly not a GPL userland, and there are others like it.
There is also a reason why all the GNU/Linux competition in the embedded space, including the Linux Foundation's own Zephyr, isn't GPL-licensed.
People seem to forget Linux is only a kernel.
> People seem to forget Linux is only a kernel.
I certainly don't and that's why I'm advocating the userspace shall stay GPL. The freedom has two pillars. Kernel and userspace. If you mow one of the two down, you lose everything.
Yet there are already distros like Chimera Linux and Alpine Linux.
That train is long gone, as folks rather have business friendly FOSS projects.
If we stop fighting for anything just because someone said it's long gone, we'd have nothing.
The world's history has changed through wars where some people said that winning was impossible.
Nothing is set in stone. The world is changing more drastically than ever. Assuming that we can't change things, or that things will stay a certain way, is a funny fallacy at best.
Permanence is an illusion. The pendulum is on the move. It might be moving in a way I don't like, but it can't continue like that forever. I'm just doing what I feel right. I'd rather die trying than regretting that I didn't try.
Well, that is why I dislike Proton, but hey games! Courtesy of Microsoft's ecosystem.
Folks rather have folks friendly FOSS projects :)
There's nothing about using permissive licenses that reduces freedom. Even if someone makes a closed fork of some software down the line, the original will always be there and will still be just as free. Comparing permissive licensing to a loss of freedom is not a valid comparison.
> Even if someone makes a closed fork of some software down the line, the original will always be there and will still be just as free.
Like MinIO, Solaris, Elasticsearch, Hashicorp Suite and countless others. The versions before the license changes are healthy as a doornail. You're absolutely right.
Some of them were re-forked, some were not.
Also, sometimes that closed fork is the only viable option, making the hardware it's running on an expensive doornail. I also don't like that.
I remember using SDKs and software forked from open ones with version numbers like "1.8.7-really1.9.0-internal-thishardwareonly-special-3.2.5-unlocked" which only runs on a distro from 2006 when it's full moon on 29th of February, and the sum of digits of the date is divisible by 7 and 11 at the same time.
Can you patch this? I guess you can, but where's the source? I bet somebody deleted it by accident and it's not present anymore.
Permissive licenses don't take away the four freedoms, but add a fifth one. The ability to take the other four away. Without prior notice. This is what I don't like personally.
In short, I don't like doornails which are not actual doornails. Permissive licenses enable that freedom.
History probably says I'm being naive, but I feel like I don't hate this possibility (hear me out!).
Personally, I'd always choose to use a distribution with open source userland packages and utils, but if closed source alternatives exist and conform to the same specifications (i.e. we get "embrace" without the "extend and extinguish") then I don't mind if a company has closed-source tech, especially if it'll help their business case, potentially boosting funding for open source Linux projects.
Maybe that's all naive; I guess we'll find out if Ubuntu really does go for more and more closed source options.
I understand the optimism, but after being burned by what Microsoft did to the Linux community for the last 20+ years, I'll just distance myself more from Ubuntu ecosystem.
When you put Snaps, Juju, uutils, etc. as a list, it all smells like a path to lockdown, not dissimilar to what RedHat did with their "unbranded" patches recently (IBM being IBM, which was unsurprising).
Also, remembering how Canonical worked together with Microsoft on some projects like WSL, which felt like "Surrender servers to Linux, and save the Windows desktop by allowing Linux run as a slave inside a VM" type of deal, I do not trust them a bit.
So, Linux is maturing, but it'll also bring a couple of very big cracks through ecosystem, and it'll be noisy and painful. Personally, I'm on Debian for the last 20+ years, and not planning to move anywhere for now.
I understand that there needs to be an economy, but money is not worth destroying what we're standing on, be it physical, like our planet, or virtual, like the free software and the culture we built around it.
> i.e. we get "embrace" without the "extend and extinguish"
This only ever happens when the party trying to EEE is fighting a losing battle. If they have the upper hand, they will always get to the extend and extinguish part. Do we think movements for user freedom have the upper hand right now?
To be fair, no one should be using Ubuntu. They are the free CD people from the 2000s. They are the Apple of Linux: marketing wins, but low quality.
They use outdated Linux (the Debian family) because it's lower cost to maintain.
All around, never use debian-family outside servers. Fedora is the future. Maybe OpenSUSE too. (Note these are not Arch or related to Arch)
> They used outdated linux (Debian-family) because its lower cost to maintain.
Ubuntu forks Sid, and evolves from there. They don't downstream Debian Stable.
> All around, never use debian-family outside servers. Fedora is the future. Maybe OpenSUSE too. (Note these are not Arch or related to Arch)
Daily driving Debian stable on servers and Testing on desktops for more than two decades. Testing is a rolling distribution and you install it once (ever). The only time I reinstalled it was to migrate to 64 bit architecture back in the day.
Also, considering stable to stable upgrades take 5 minutes, I have no problems with Debian Stable, either.
Fedora is nice, but it's RedHat's lab. While I have nothing against them, it's not as user-oriented as it looks. Debian Testing is much more stable than many (if not almost all) of the alternative distros, and follows versions reasonably well.
If I want cutting edge, I can go the Arch or Gentoo way. Lastly, Debian is an iceberg. It looks simple from the outside, and once you start to develop for it, you understand why Debian is considered one of the gold standards. The underbelly is a rich ecosystem of very well designed yet simple subsystems.
>> All around, never use debian-family outside servers. Fedora is the future.
That take, in and of itself, also feels... uncommon?
My experience matches yours more or less, I've run both Debian (and their LTS project version at one point) and Ubuntu LTS on my servers, both have been generally okay, albeit with a snag or two along the way.
https://blog.kronis.dev/blog/debian-and-grub-are-broken
https://blog.kronis.dev/blog/debian-updates-are-broken
https://blog.kronis.dev/blog/ubuntu-lts-is-broken
Aside from a few cases of not-very-serious configurations with off the shelf hardware having issues that I get to write the occasional rant about (back when I had an "Everything is broken" section in my blog), it's been surprisingly stable otherwise.
I've had far more issues with RHEL-compatible distros (hate that they killed CentOS, Oracle Linux is sometimes weird but kinda works, outside of work stuff I'd personally reach for Rocky Linux which is a nicer experience) both when it comes to running stuff like Docker (way before Podman was even stable, RHEL-compatibles didn't play nicely with Docker when it came to SELinux and networking) and also support for slightly more uncommon consumer hardware, like my netbook touchpad didn't work at all by default on Fedora, but did work on DEB distros.
The 10 year EOL is really nice, though, and if they had something as nice as Proxmox (for free), I'd probably be using RPM distros for my hypervisors right now!
That's also kind of why I think saying that either of those don't have much of a future would be an odd statement - in my experience, both have their occasional issues but are still generally good for desktop and server use cases.
As an addendum, however, snaps suck, viva la Linux Mint for desktop, plus, Cinnamon is a nice desktop and it's still close enough to Ubuntu LTS I run on servers if I ever need that familiarity in regards to packages!
*systemd
Once upon a time Mandrake was great for consumer hardware, alongside SuSE, both kind of ignored nowadays, then came Ubuntu, which no one apparently should be using.
So we're kind of left out of options, because there is hardly another distro on Distrowatch that has a similar success rate being installed on random laptops that normies want to try GNU/Linux on.
So we will have a closed OS just like macOS and Windows but linux based. I don't see why it would stop all the other open source distros to exist.
Systemd is just another init system. People said the same thing about how it can exist with other ones in a level playing field.
By virtue of having some motivated backers, not only have they pushed everyone else out of any distro which matters or acts as a root for others, they have also formed a neat little company called Amutable, which produces tech allowing anyone to lock down any installation into an immutable, untouchable state.
Yeap, systemd is just another init system existing on a level playing field. They just dare to be successful by tackling problems that people have today over trying to deliver solutions designed in 1989.
> They just dare to be successful by tackling problems that people have today over trying to deliver solutions designed in 1989.
Thanks for your input. Can you please elaborate on these problems a bit more? I'm pretty new to this Linux thing. I've been using it for just 20 years or so, and only managing quite a few hundred servers. systemd didn't make my life drastically different or smoother.
Oh, I also used to be the tech lead of a Debian derivative, and also did some country-wide rollouts of the thing we developed, but I'm sure that adds nothing to my already extremely limited knowledge of how things work.
Maybe this is because I'm a noob, or I'm not using enough machines, or I don't have enough downtime, IDK.
Any info will be greatly appreciated, thanks.
Because well-funded projects start to hire developers all over the place to add dependencies, and it's very difficult to do otherwise when you have an army of salaried people who do that 40 hours a week.
If that means that the massive fragmentation stops and we get an OS that 95% of Linux users install, it might not be that bad.
I run and develop on various Linux distributions and have failed to see that fragmentation for the last couple of decades, sorry.
I've only used GNU/Linux since 2012, but I do think we have to face the fact that there is a fair amount of ~~choice~~ fragmentation in the ecosystem. Deb/RPM/Flatpak/Snap/PKGBUILD/Nix, GNOME/KDE/Cosmic/Cinnamon/Xfce/LXQt/MATE/Budgie/Sway/Hyprland, AppArmor/SELinux, GTK/Qt/Electron/Tauri/WxWidgets; there are even distributions which use musl libc instead of GNU libc, or non-systemd inits. Sure, you can just pick one and focus on it, but if someone else picks something else then they may need to duplicate some effort to get things working on their preferred setup.
When you lay out your project in a sound and standards-compliant way, packaging doesn't matter much. RPM and DEB tooling automatically builds your code and packages it. DEB also has a lot of tools which allow you to make sure that everything is done correctly. I'm sure RPM has similar tools, but I didn't use them a lot.
The desktop environment doesn't matter much, because GTK and Qt work on every Linux desktop. I'm using KDE, but I don't know which tools I use are GTK, which are Qt, etc. The Qt and GTK teams collaborate a ton on both the window management and desktop underpinnings side. Also there are tons of standards, and things just work if you follow them. Even the standard libraries of programming languages and the Linux userland give you the tools to utilize these standards.
C libraries are mostly interoperable. I operate with GNU's C library, but aside from interesting behavioral differences, the API is not different.
If you're not writing daemons, you have no business with your init system in 99% of cases, unless you want to utilize a special feature of one of them. You can just ship the service files. The daemon() function is part of libc, not your init system.
In total, after your code builds, you can add these layers step by step, one at a time, and have a codebase which works everywhere with minimal effort.
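To make the daemon() point above concrete, here is a minimal Rust sketch (assuming the `libc` crate on a Linux target; the service loop is a placeholder): the daemonization call comes from libc, and the resulting binary can then be described by whichever service-file format a given init system expects.

```rust
// Minimal sketch: daemonizing is a plain libc call, not an init-system feature.
// Assumes the `libc` crate (0.2.x) as a dependency, on a Linux target.
fn main() {
    // daemon(nochdir, noclose): 0, 0 means chdir to "/" and redirect
    // stdin/stdout/stderr to /dev/null after forking into the background.
    let rc = unsafe { libc::daemon(0, 0) };
    if rc != 0 {
        eprintln!("daemon() failed: {}", std::io::Error::last_os_error());
        std::process::exit(1);
    }

    // Placeholder for the actual long-running service loop.
    loop {
        std::thread::sleep(std::time::Duration::from_secs(60));
    }
}
```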
Eroding users' rights is good if it means users have fewer choices, because choice is bad? I suppose it would mean that resources could concentrate in a smaller, more focused set of software, but I really can't see how that would justify the harm caused.
Just think about how easy it would be though - imagine - one single OS, one single version always immediately up to date, one consistent set of installed software, attestation to ensure no adversaries are attempting to modify or install unsupported software, full accurate and thorough analytics, what a dream...
> Just think about how easy it would be though
Endlessly painful. Right?
The defining characteristic is that everyone is using it, not that it's your personal ideal operating system. We have a few major players trying to create their version of the one single OS. They're already nobody's ideal, and they have the luxury of telling people to go elsewhere if the system isn't right for them. Imagine how much worse it would be if they had to support everything.
Yea, what a Fall of Rome type dream. Just look what happened when people over-relied on a single specific measure: we had CrowdStrike, with around 8.5 million devices crashed to a BSoD. Identical OS, identical apps, identical updates, identical crashes at the same time.
If you centralise, then the question is not whether it will fail, but when.
Isn't that what Apple purports to sell? There's also Haiku, but I don't know to what extent it matches that description.
> Also, Ubuntu using a non-GPL licensed userland means they can pull all kinds of tricks to allow more TiVoization in the Linux ecosystem.
Can we stop this conspiracy nonsense? They have explicitly stated that licensing was not a motivation, and even if you think they're lying it wouldn't make any sense anyway! No Tivoization is foiled by Coreutils being GPL. That's ridiculous for so many reasons, not least you can just use the BSD versions, as Apple does (and they still release the source code!).
Well, why not license it as GPL then? If they don't care of course.
Because GPL doesn't play well with static linking, the new favourite of programming languages rediscovering the pre-1990's ways of most operating systems linkers (aka binders).
Good question. Probably just because most Rust projects don't use GPL and they copied that. I searched but couldn't find an answer.
I wouldn't be entirely surprised if they change it to GPL just to shut people up.
> I wouldn't be entirely surprised if they change it to GPL just to shut people up.
They don't and won't.
> I searched but couldn't find an answer.
Here's the answer: https://github.com/uutils/coreutils/issues/2757. This is a link I found a long time ago and saved to reference when the need arises.
From the (current) lead author:
> The license has been decided way before my time. I have 0 interest in starting a license debate (I care if the license is DFSG - Debian Free Software Guidelines) and spending time on it. I would rather use my limited time to make rust/coreutils ready for production.
More debate: https://github.com/uutils/coreutils/issues/834

From what I understood, they don't "believe" in GPL and don't like the idea of "having to keep it open". They believe in Developer Freedom(TM), not User Freedom(TM), so they don't care whether their code is closed by others or not.
To summarize #834: We don't like GPL. We'll do MIT, thanks.
Lol, that won't happen. The whole point for doing this is getting rid of yet another copyleft component. Especially GPLv3 stuff, companies hate it.
> Can we stop this conspiracy nonsense?
If they can earn my trust, why not? I'm not a pointlessly stubborn person. I have changed my views in the past, and can certainly change them in the future. This is my view resulting from my experiences, and the jury is still out from my perspective. If you want to trust Canonical and Co., you can. Don't let me stop you.
> They have explicitly stated that licensing was not a motivation, and even if you think they're lying it wouldn't make any sense anyway!
Who prevents someone from forking Ubuntu and going that extra mile, especially now that we have a company which wants to enable exactly that lockdown?
> No Tivoization is foiled by Coreutils being GPL.
Belts have holes. They can be used to hold or to choke. Adding more holes to a belt allows more different uses.
> That's ridiculous for so many reasons,
Can you give me more reasons to believe that I'm a tinfoil-wearing crazy weirdo?
> not least you can just use the BSD versions, as Apple does (and they still release the source code!).
Same Apple that removes any GPLv3 (and possibly GPLv2) tool from their OS with every iteration. Same Apple which provides no way to verify that what's published is what's running on their hardware. Same Apple which provides SIP to seal their system partitions, which can't be modified without breaking tons of guarantees and seals. Same Apple which controls everything from their processor to their software, without any gaps.
Having the source has no meaning there. You can't use that source. You can't modify the machine you use, you can't install any other OS or just test something.
Ubuntu oozes over debian like a parasitic malaise of vile chicanery. Their entire goal is to, essentially, use the work of untold millions, provided for free, to build their own cathedral.
People complain about AWS and others taking OSS projects and profiting wildly off them, but they're all children compared to the machinations of Ubuntu.
It's been this way from the start. The constant conflict of interest, where Ubuntu devs manipulate debian democratic processes, and the inclusion of systemd and its ridiculous swiss-army-knife implementation of an init system and surrounding tools.
You ever see those knives, the huge ones, with a screwdriver, pliers, and 100 other tools on them? Thing is, they're all crap. They work for an odd case, but you always need to reach for a real tool.
That's systemd. Whether systemd-timesync, its dns services, or anything else it does, it's kiddie time. Barely functional, broken in inane ways, and leaving any professional in a situation of endless masking of a myriad of barely cogent and horribly malign services.
We didn't gain anything with systemd, except for an init system 1000 times larger, codebase wise, and a collection of tools you have to replace anyhow.
And now these guys want to do Amutable. I hope they fail, for if they succeed, they will sink us further into this absurd system. Systemd for death. For despair. For dislike. For dumb. For disregard, devilment, debased, disturbed, systemd is all these things, and more, all packaged for you, all presented to you, all given to you, to all of us, dragging us down, destroying us.
If there is an apocalypse, it'll be somehow some bug in systemd that causes it. Nukes will fly due to its broken code, viral containment systems will fail due to buffer overruns in its code, systemd is the end-of-world waiting to happen, its over-complicated, poorly written code a guillotine waiting to fall upon us all.
I run thousands of sysvinit, and thousands of systemd systems.
Which ones, do you think, have the worse record of "something stupid" bringing down a service, a machine, preventing a boot-up, you name it? It's systemd.
I swear to God that Trump exists because of systemd somehow. I place all the ills, perils, at the feet of systemd. It represents everything wrong in the tech ecosystem, its tendrils spreading dark, deep disturbing dreams of madness through all it touches.
Just learning how to use systemd, destroys the logic centres of the mind, rendering advocates incapable of productive work.
All wrong and ill that befalls this world, is at its feet.
I suspect through some incomprehensible twist of fate, the entire fabric of the universe may unravel, undoing all that is, and ever was. All lost, all gone, all because of systemd.
I am beginning to suspect I dislike systemd.
(Send $19.95 to my address, if you wish to subscribe to my newsletter, and hear my REAL, UNFILTERED opinions about systemd)
I could have gotten behind the first three paragraphs until you started mentioning systemd. After that everything you've written sounds like the rambling of a crazy person.
systemd won the init system wars because it is pretty damn good from a technical perspective. The competition didn't even try to participate.
>Barely functional, broken in inane ways, and leaving any professional in a situation of endless masking of a myriad of barely cogent and horribly malign services.
Everything you're complaining about was even more true of previous init systems and barely true for systemd at all.
Meanwhile Ubuntu is a garbage fire from a technical perspective. Snaps are garbage and forced down your throat.
systemd won because of politics, and absolutely nothing more. Debian, the root of the most used tree of Linux distros, only adopted systemd due to pressure from Redhat/Gnome, and threats that if it didn't?
Gnome would no longer work on Debian.
Understand, that many of the issues "resolved" by systemd, were redhat issues. And further, not even init issues.
For example, the most predominant being "predictable NIC names", which were already a thing with Debian. Or bootup times, of which Debian had excellent parallelization and boot times of a similar scope to how "fast" systemd was.
There's really nothing good, from a technical perspective, when something is enlarged 1000x the requirement. If you look at the code for sysvinit, it's maybe 10k lines. Systemd is > 1M lines of code, likely approaching 1.5M by now. So I suppose, 100x the size.
It needs to be understood that the more code you have, the more bugs. It's just the way it is. There have been more security issues in core systemd yearly, than sysvinit in its entire lifespan. That's not even systemd's fault, it's just a simple fact, you have more code, you have more bugs.
And when you say "systemd", you're likely referring to all the inane nonsense it does? How broken it is managing mounts, which really isn't an init's job anyhow? Or the absurd nature of having shutdown and startup identical, with automatic ordering, so you end up in all sorts of ridiculous edge cases?
Why would anyone presume that start and stop MUST be mirror images of each other. The very logic is broken, and shows an immense lack of comprehension of how the real world works.
And you speak of "it won" for superior this and that? At the start, it didn't even have an easy way to extend stop time. Hell, even now it just sends SIGTERM and a nanosecond later SIGKILL, as if shutting down a box FAST FAST NOW NOW is more important than data integrity or properly closed tcp connections or processes doing any form of proper cleanup.
The number of mysql/database issues caused by this behaviour in the early days was insane.
Look, I get you like systemd. But it's provided no real value, and certainly, even if there is some? The drawbacks outweigh it as the sun turns meat to leather.
> There's really nothing good, from a technical perspective, when something is enlarged 1000x the requirement. If you look at the code for sysvinit, it's maybe 10k lines. Systemd is > 1M lines of code, likely approaching 1.5M by now. So I suppose, 100x the size.
The systemd repo is a mono repo for other tools in addition to the init system.
I've heard from many sysadmins and distribution maintainers that systemd has been amazing. We went from ad hoc shell scripts to declarative plain text files. I think that's a huge win.
> We went from ad hoc shell scripts to declarative plain text files. I think that's a huge win.
Current sysadmin and former distro maintainer here, who respectfully disagrees with you and your friends.
Many, if not all, software packages followed a well-defined SYS-V service file stub, esp. after so-called "Parallel SYS-V". We were able to order services, define dependencies and deterministically boot systems at the speed of light. Nothing broke, and the systems fully supported the "pull the plug if you want, it won't break" promise.
While I don't hate systemd, I don't like many of its ways. It's something like X11 before auto-configuration support, for me. The less I touch it, the less grumpy I am. Technical parts aside, remembering the ugliness surrounding it (people, ecosystem and predatory aspects) makes me really angry sometimes.
Tip: Research "Amutable" and what they are up to.
The only plus for Amutable, is that it may finally cause a sane systemd fork.
My immense, strong suspicion here, is that they believe they can use their control over the systemd project, to add immense layers of code and change, to support Amutable's needs.
When this happens, there will likely be pushback of some sort. I'm hoping a fork will happen at that time, and even better, hoping that maybe the project can go someplace saner.
Getting rid of all tcp support (eg, systemd providing inetd functionality) from an init system would be an excellent start. The absurdity of pid 1 having networking hooks is absolute madness.
Splitting start/stop ordering would be an additional benefit.
Removing all daemons, and all support code, and forking them (for legacy support) would be next. No horribly enacted systemd-timesyncd or systemd-resolved.
Dropping the absurd journal and returning to a syslog solution would be next. Literal kiddie town, to have no centralized logging as a default when first created. There are now attempts to entirely re-skin the cat, with systemd-journal-gatewayd, yet every single appliance and piece of hardware supports... that's right, syslog protocol, not systemd's proprietary journalling protocol or formats.
There is so much about systemd that is just about re-writing the entire universe, not for immense gain, not for immense improvement, but instead for the tiniest, smallest shred of edge-case betterment, and meanwhile, creating massive, overwhelming denigration of every other aspect of that same use case.
Has the journal improved anything for anyone, anywhere, in any real, meaningful way? Absolutely not. All searching, etc is available on text files with | grep. Zero improvement.
Has the journal improved performance? No.
And the ridiculous and absurd and inane concept of the journal being removed at each reboot?
It's as if the people writing systemd, had absolutely no real-world experience with servers, maintaining them, or working with them, and simply made design decisions predicated upon rumour, with no actual understanding of edge cases, or why things are, or were, as they are.
--
An example would be some aspects of Hyundais. They are relatively new, in many ways, to much of the market they have entered. Yes, I know, decades may not seem like that, but it is so. And until they stole all of Toyota's QA methods by hiring its engineers (who also took all the documentation), they were of horrible quality.
That said, I sat in one of their newer SUVs, an electric, the other day. Their dashboard, down at the bottom, ended in a sharp corner. When I sat in the car, I realised that should I be in an accident, or even brake aggressively, my kneecap would mash into this non-rounded, extremely square, sharp angle. I could literally see my kneecap being sliced/popped off.
This sort of "it's silly to have round everywhere, let's do something new aesthetically, and make it a sharp edge down there!", coupled with "There aren't many people 6'3" in S. Korea, so we'll never notice how dangerous this is", is a prime example of this.
The authors had no idea of edge cases, and the litany of bug reports over the last decade has shown all their supposed improvements filed away, as they have basically had to conform to logical design standards, developed by people far wiser than they, over the last half century.
No, someone-new-to-the-entire-unix-ecosystem, the phrase "but we can just" isn't a viable means to determine sensible design methodology.
Go ahead, enact change, just make sure it makes some sense.
b112 wrote a substantial comment already, but I'll give a single line summary:
yes, systemd changed how I manage my systems, but it didn't bring speed, safety or integration I didn't have before it. Moreover, they brought secure-boot-related shenanigans in house and integrated them into everything systemd touches. Before that, the line was drawn at the bootloader.
Here's the chasm I want to see Rust cross:
Dynamic linking with a safe ABI, where if you change and recompile one library then the outcome has to obey some definition of safety, and ABI stability is about as good as C or Objective-C or Swift.
Until that happens, it'll be hard to adopt Rust in a lot of C/C++ strongholds where C's ABI and dynamic linking are the thing that enables the software to get huge.
> Until that happens, it'll be hard to adopt Rust in a lot of C/C++ strongholds where C's ABI and dynamic linking are the thing that enables the software to get huge.
Wait, Rust can already communicate using the C ABI. In fact, it offers exactly the same capabilities as C++ in this regard (dynamic linking).
That's an unsafe ABI.
As unsafe as C or C++. In fact, safer, because only the ABI surface is unsafe, the rust code behind it can be as safe or unsafe as you want it to be.
I was addressing this portion of your comment: "C's ABI and dynamic linking are the thing that enables the software to get huge". If the C ABI is what enables software to get huge then Rust is already there.
There is a second claim in your comment about a "safe ABI", but that is something that neither C or C++ offers right now.
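To make that concrete, here is a minimal sketch of what the comment above describes, with a hypothetical function name: a Rust shared library (crate-type `cdylib`) whose exported surface is the C ABI, unsafe only at the boundary, with ordinary safe Rust behind it.

```rust
// Exported from a Rust shared library (crate-type = "cdylib"): the boundary is
// the C ABI and is unsafe by nature, but everything behind it is ordinary Rust.

/// Sum `len` u32 values starting at `ptr`, via a plain C-style signature.
#[no_mangle]
pub unsafe extern "C" fn sum_u32(ptr: *const u32, len: usize) -> u64 {
    if ptr.is_null() {
        return 0;
    }
    // The only trust we extend is that the caller's pointer/length pair is
    // valid; everything after this line is safe Rust.
    let data = std::slice::from_raw_parts(ptr, len);
    data.iter().map(|&x| u64::from(x)).sum()
}
```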
Here's the problem. If you told me that you rebuilt the Linux userland with Rust but you used C ABI at all of the boundaries, then I would be pretty convinced that you did not create a meaningful improvement to security because of how many dynamic linking boundaries there are. So many of the libraries involved are small, and big or small they expose ABIs that involve pointers to buffers and manual memory management.
> There is a second claim in your comment about a "safe ABI", but that is something that neither C or C++ offers right now.
Of course C and C++ are no safer in this regard. (Well, with Fil-C they are safer, but like whatever.)
But that misses the point, which is that:
- It would be a big deal if Rust did have a safe dynamic linking ABI. Someone should do it. That's the main point I'm making. I don't think deflecting by saying "but C is no safer" is super interesting.
- So long as this problem isn't fixed, the upside of using Rust to replace a lot of the load bearing stuff in an OS is much lower than it should be to justify the effort. This point is debatable for sure, but your arguments don't address it.
> - It would be a big deal if Rust did have a safe dynamic linking ABI. Someone should do it. That's the main point I'm making. I don't think deflecting by saying "but C is no safer" is super interesting.
I think we all agree that it would be a huge deal.
> - So long as this problem isn't fixed, the upside of using Rust to replace a lot of the load bearing stuff in an OS is much lower than it should be to justify the effort. This point is debatable for sure, but your arguments don't address it.
As you point out, this is the debatable part, and I'm not sure I get your justification here.
This might end up being the forcing function (quoting myself from another reply in this discussion):
> It can't be that replacing 20 C/C++ shared objects with 20 Rust shared objects results in 20 copies of the Rust standard library and other dependencies that those Rust libraries pull in. But, today, that is what happens. For some situations, this is too much of a memory usage regression to be tolerable.
If memory was cheap, then maybe you could say, "who cares".
Unfortunately memory isn't cheap these days
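For what it's worth, rustc already has a knob pointing in this direction: the `-C prefer-dynamic` codegen flag links the standard library as a shared object, so many binaries can share one copy of libstd. The catch is that this ABI is only valid across artifacts built by the exact same compiler version, which may be workable inside one distro release but is not a stable, C-style ABI. A minimal sketch:

```rust
// hello.rs -- build with:
//   rustc -C prefer-dynamic hello.rs
// Running `ldd ./hello` then shows a dependency on the shared libstd
// (libstd-<hash>.so) instead of a statically linked copy.
// Caveat: every artifact must be produced by the same compiler version.
fn main() {
    println!("hello from a dynamically linked libstd");
}
```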
Can you even make the standard library dynamically linked in the C way??
In C, a function definition usually corresponds 1-to-1 to a function in object code. In Rust, plenty of things in the stdlib are generic functions that effectively get a separate implementation for each type you use them with.
If there's a library that defines Foo but doesn't use Vec<Foo>, and there are 3 other libraries in your program that do use that type, where should the Vec functions specialized for Foo reside? How do languages like Swift (which is notoriously dynamically-linked) solve this?
You can have an intermediate dynamic object that just exports Vec<Foo> specialized functions, and the three consumers that need it just link to that object. If the common need for Vec<Foo> is foreseeable by the dynamic object that provides Foo, it can export the Vec<Foo> functions itself.
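A rough sketch of that idea, with all names hypothetical: the library that defines `Foo` also exports non-generic wrappers around `Vec<Foo>`, so consumers link against one pre-monomorphized copy instead of each instantiating `Vec<Foo>`'s methods themselves. Note that the exported surface here is still the C ABI, not a checked Rust ABI, which is exactly the gap the thread is pointing at.

```rust
// Hypothetical "intermediate object": the crate that defines Foo also exports a
// small non-generic wrapper API around Vec<Foo>, built as a shared library, so
// every consumer shares one monomorphized copy of these functions.

#[repr(C)]
pub struct Foo {
    pub id: u64,
}

/// Allocate a Vec<Foo> inside this library and hand back an opaque pointer.
#[no_mangle]
pub extern "C" fn foo_vec_new() -> *mut Vec<Foo> {
    Box::into_raw(Box::new(Vec::new()))
}

/// Push an element into a Vec<Foo> previously returned by foo_vec_new().
#[no_mangle]
pub unsafe extern "C" fn foo_vec_push(v: *mut Vec<Foo>, item: Foo) {
    (*v).push(item);
}

/// Number of elements currently stored.
#[no_mangle]
pub unsafe extern "C" fn foo_vec_len(v: *const Vec<Foo>) -> usize {
    (*v).len()
}

/// Free the Vec<Foo>; the pointer must not be used afterwards.
#[no_mangle]
pub unsafe extern "C" fn foo_vec_free(v: *mut Vec<Foo>) {
    drop(Box::from_raw(v));
}
```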
How much overhead is that? Also, why would that have much overhead? Things deduplicate in memory.
They dedup at the page level.
This isn’t that kind of duplication.
I thought they were suggesting the stdlib be dynamically linked or something, at which point it would be. But for static linking, no.
Your apt update would still be huge though. When the dependency changes (eg. a security update) you’d be downloading rebuilds of 20 apps. For the update of a key library, you’d be downloading your entire distribution again. Every time.
NixOS "suffers" from this. It's really not that bad if you have solid bandwidth. For me it's more than worth the trade off. With a solid connection a major upgrade is still just a couple minutes.
A couple of minutes at the moment that is, with dynamic linking everywhere. What will it become when everything is statically linked?
I think you misunderstand my point. Nix basically forces dynamic linking to be more like static linking. So changing a low level library causes ~everything to redownload.
Oh, well yeah, statically linked binaries have that downside. I guess I don't think that's a big deal, but I could maybe imagine on some devices that are heavily constrained that it could be? IDK. Compression is insanely effective.
You are forgetting about the elephant in the room - if every bug requires rebuilding everything downstream, then it is not only a question of constrained devices, it is also a question of SSD write cycles - you are effectively wearing out someone's drive faster. And btrfs actually worsens this problem - because instead of one copy-on-write copy of a library, you now have 2n copies of it spread across two versions of n different apps. Reverting an update will then cost you even more writes. It is just waste for no apparent reason - less free memory, less free disk space.
"compression is insanely effective" - And what about energy? compression will increase CPU use. It will also make everything slower - slower than just plain deduplication. Also, your reason for using worse for user tech is: the user can mitigate in other ways? This strikes me as the same logic as "we don't need to optimize our program/game, users will just buy better hardware" or just plain throwing cost to user - this is not valid solution just downplaying of the argument.
I basically think that's all much ado about nothing.
If Rust and static linking were to become much more popular, Linux distros could adopt some rsync/zsync like binary diff protocol for updates instead of pulling entire packages from scratch.
Even then, they would still need to rebuild massive amounts on updates. That is nice in theory, but look at the number of bugs reported in Debian because upstream projects fail to rebuild as expected. "I don't have the exact micro version of this dependency I'm expecting" is one common reason, but there are many others. It's a pretty regular thing, and so it would be burdensome for distro maintainers.
Static linking used to be popular, as it was the only way of linking in most computer systems, outside expensive hardware like Xerox workstations, Lisp machines, ETHZ, or what have you.
One of the very first consumer hardware to support dynamic linking was the Amiga, with its Libraries and DataTypes.
We moved away from having a full-blown OS built with static linking, with the exception of embedded deployments and firmware, for many reasons.
Yeah I'm not really convinced that this matters at all tbh
What you are asking for is a replacement for .h files - a library interface definition that contains sufficient information to keep Rust safe across the boundary. That is a big, big step, and it would be fantastic not only for Rust but for any other language trying to break out of the C tar pit.
So you're calling for dynamic linking for rust native code? Because rust's safety doesn't come from runtime, it comes from the compiler and the generated code. An object file generated from a bit of rust source isn't some "safe" object file, it's just generated in a safe set of patterns. That safety can cross the C ABI perfectly fine if both things on either side came from rust to begin with. Which means rust dynamic linking.
Would a safe ABI work with sandboxing the C code? I'm a bit unsure how one would construct a safe C ABI from Rust's side.
The argument for unsafe ABI not being that big of a deal is that ABI boundaries often reflect organizational boundaries as well.
E.g. the kernel wouldn't really benefit from a "safe ABI" because users calling into the kernel need to be considered malicious by default.
How could a safe dynamic linking API ever work?
I think you're moving the goalposts significantly here.
I don’t think GP is moving the goalposts at all, rather I think a lot of people are willfully misrepresenting GP’s point.
Rust-to-rust code should be able to be dynamically linked with an ABI that has better safety guarantees than the C ABI. That’s the point. You can’t even express an Option<T> via the C ABI, let alone the myriad of other things rust has that are put together to make it a safe language.
You can look to Swift for prior art on how this can be done: https://faultlore.com/blah/swift-abi/
It would be very hard to accomplish. Apple was extremely motivated to make Swift have a resilient/stable ABI, because they wanted to author system frameworks in swift and have third parties use them in swift code (including globally updating said frameworks without any apps needing to recompile.) They wanted these frameworks to feel like idiomatic swift code too, not just be a bunch of pointers and manual allocation. There’s a good argument that (1) Rust doesn’t consider this an important enough feature and (2) they don’t have enough resources to accomplish it even if they did. But if you could wave a magic wand and make it “done”, it would be huge for rust adoption.
> You can look to Swift for prior art on how this can be done: https://faultlore.com/blah/swift-abi/
> It would be very hard to accomplish.
Since Rust cares very much about zero-overhead abstractions and performance, I would guess if something like this were to be implemented, it would have to be via some optional (crate/module/function?) attributes, and the default would remain the existing monomorphization style of code generation.
Swift’s approach still monomorphizes within a binary, and only has runtime costs when calling code across a dylib boundary. I think rust could do something like this as well.
> You can’t even express an Option<T> via the C ABI
But you can express Option<Foo> for a concrete Foo. Do you really need any more than that?
> But you can express Option<Foo> for a concrete Foo
I don’t think that’s true?
https://users.rust-lang.org/t/option-is-ffi-safe-or-not/2982...
A pointer can be transmuted to an Option<&T> because of the guaranteed null-pointer optimization: Option<&T> uses null as the None value and is documented as FFI-compatible with a plain pointer. But that guarantee doesn't extend to arbitrary payloads - there is no documented layout for something like Option<bool>, so you can't rely on how None is represented across a boundary. You could get lucky if you launder your Option<T> through repr(C) and the compiler versions match and don't mangle the internal representation, but there are no guarantees here, since the ABI isn't stable. (You even get a warning if you put a type without a stable repr(C) layout in an extern function signature.)
You're right that there isn't a single standard convention for representing e.g. Option<bool>, but that's just as true of C. You'd just define a repr(C) compatible object that can be converted to or from Option<Foo>, and pass that through the ABI interface, while the conversion step would happen internally and transparently on both sides. That kind of marshaling is ubiquitous when using FFI.
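For anyone who hasn't done this kind of marshaling, here is a minimal sketch of the pattern being described, assuming a hypothetical Foo payload: a repr(C) "maybe" type with a known layout crosses the boundary, and Option<Foo> only exists on the Rust side of it.

```rust
// Illustrative only: Foo, MaybeFoo, and take_foo are made-up names.

#[repr(C)]
pub struct Foo {
    pub value: i32,
}

#[repr(C)]
pub struct MaybeFoo {
    pub present: bool,
    pub foo: Foo, // only meaningful when `present` is true
}

impl From<Option<Foo>> for MaybeFoo {
    fn from(opt: Option<Foo>) -> Self {
        match opt {
            Some(foo) => MaybeFoo { present: true, foo },
            None => MaybeFoo { present: false, foo: Foo { value: 0 } },
        }
    }
}

impl From<MaybeFoo> for Option<Foo> {
    fn from(m: MaybeFoo) -> Self {
        if m.present { Some(m.foo) } else { None }
    }
}

// Only the repr(C) type ever appears in the exported signature.
#[no_mangle]
pub extern "C" fn take_foo(arg: MaybeFoo) -> MaybeFoo {
    let opt: Option<Foo> = arg.into();
    opt.map(|f| Foo { value: f.value + 1 }).into()
}
```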
> but that's just as true of C
Right, that's the whole point of this thread. The only stable ABI rust has is one where you can only use C's features at the boundaries. It would be really nice if that wasn't the case (ie. if you could express "real" rust types at a stable ABI boundary.)
As OP said, "I don't think deflecting by saying "but C is no safer" is super interesting". People seem intent on steering that conversation that way anyway, I guess.
> I don’t think GP is moving the goalposts at all
Thank you :-)
> It would be very hard to accomplish.
Yeah it's a super hard problem especially when you provide safety using the type system!
The work the Swift team did here is hella impressive.
> But if you could wave a magic wand and make it “done”, it would be huge for rust adoption.
Yeah!
> How could a safe dynamic linking API ever work?
Fil-C solves it. I think Swift solves it, too.
So it's solvable.
No fundamental reason, that I know of, why Rust or any other safe language can't also have some kind of story here.
> I think you're moving the goalposts significantly here.
No. I'm describing a problem worth solving.
Also, I think a major chasm for Rust to cross is how defensive the community gets. It's important to talk about problems so that the problems can be solved. That's how stuff gets better.
Swift and Fil-C are only pseudo-safe. Once you deal with the actual world and need to pass around data in memory, things are always unsafe, since there is no safe way of sharing memory - at least not in our current operating systems. Swift and Fil-C can at least guard the API to some extent.
A safe ABI would be cool, for sure, but in the market (specifically addressing your prediction) I don't know if it's really that big a priority for adoption. The market is obviously fine with an unsafe ABI, seeing how C/C++ is already dominant. Rust with an unsafe ABI might then not be as big an improvement as we would like, but it's still an improvement, and I feel like you're underestimating the benefits of safe Rust code as an application-level frontline of security, even linked to unsafe C code.
What is a safe ABI? An ABI can't control whether the parties on either end of the interface are honest.
You can't have safe dynamic linking, dynamic linking requires you to trust the library you load with no ability to verify.
> An ABI can't control whether the parties on either end of the interface are honest.
You are aware that Rust already fails that without dynamic linking? The wrappers around the C getenv/setenv functionality were originally considered safe, despite every bit of documentation on setenv calling out thread-safety issues.
Yes? That's called a bug? The standard library incorrectly labelled something as safe, and then changed it. The root was an unsafe FFI call which was incorrectly marked as safe.
It's no different than a bug in an unsafe pure Rust function.
I'm choosing to ignore that libc is typically dynamically linked, but linking in foreign code and marking it safe is a choice to trust the code. Under dynamic linking anything could get linked in, unlike static linking. At least a static link only includes the code you (theoretically) audited and decided is safe.
A "safe" ABI is just a C ABI plus a "safe" Rust crate (the moral equivalent to a C/C++ header file) that wraps it to provide safety guarantees. All bare-metal "safe" FFI's are ultimately implemented on top of completely "unsafe" assembly, and Rust is not really any different.
C++ ABI stability is the main reason improvements to the language get rejected.
You cannot change anything that would affect the class layout of something in the STL. For templated functions where the implementation is in the header, ODR means you can't add optimizations later on.
Maybe this was OK in the 90s when companies deleted the source code and laid off the programmers once the software was done, but it's not a feature Rust should ever support or guarantee.
The "stable ABI" is C functions and nothing else for a very good reason.
I think if Rust wants to evolve even more aggressively than C++ evolves, then that is a chasm that needs to be crossed.
In lots of domains, having a language that doesn't change very much, or that only changes very carefully with backcompat being taken super seriously, is more important than the memory safety guarantees Rust offers.
In my view, this is a good thing.
As a C++ developer, I regularly deal with people that think creating a compiled object file and throwing away the source code is acceptable, or decide to hide source code for "security" while distributing object files. This makes my life hell.
Rust preventing this makes my life so much better.
Rust does not prevent you from creating a library that exports a C/C++ interface. It's indistinguishable from a C or C++ library, except that it's written in Rust. cbindgen will even generate proper C header files out of the box, that Rust can then consume via bindgen.
> As a C++ developer, I regularly deal with people that think creating a compiled object file and throwing away the source code is acceptable, or decide to hide source code for "security" while distributing object files. This makes my life hell.
I mean yeah that's bad.
> Rust preventing this makes my life so much better.
I'm talking about a different issue, which is: how do you create software that's in the billions of lines of code in scale. That's the scale of desktop OSes. Probably also the scale of some other things too.
At that scale, you can't just give everyone the source and tell them to do a world compile. Stable ABIs fix that. Also, you can't coordinate between all of the people involved other than via stable ABIs. So stable ABIs save both individual build time and reduce cognitive load.
This is true even and especially if everyone has access to everyone else's source code
> At that scale, you can't just give everyone the source and tell them to do a world compile. Stable ABIs fix that. Also, you can't coordinate between all of the people involved other than via stable ABIs. So stable ABIs save both individual build time and reduce cognitive load.
Rust supports ABI compatibility if everyone is on the same compiler version.
That means you can have a distributed caching architecture for your billion line monorepo where everyone can compile world at all times because they share artifacts. Google pioneered this for C++ and doesn't need to care about ABI as a result.
What Rust does not support is a team deciding they don't want to upgrade their toolchains and still interoperate with those that do. Or random copy and pasting of `.so` files you don't know the provenance of. Everyone must be in sync.
In my opinion, this is a reasonable constraint. It allows Rust to swap out HashMap implementations. In contrast, C++ map types are terrible for performance because they cannot be updated for stability reasons.
My understanding: even if everyone uses the same toolchain, if someone changes the code for a module and recompiles, then you're in UB land unless everything that depends on it is recompiled too.
Am I wrong?
If your key is a hash of the code and its dependencies, for a given toolchain and target, then any change to the code, its dependencies, the toolchain or target will result in a new key unique to that configuration. Though I am not familiar with these distributed caching systems so I could be overlooking something.
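A toy sketch of that content-hash scheme, purely illustrative (real systems such as sccache or Bazel's remote cache are far more careful about what goes into the key):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Key = hash(source, keys of dependencies, toolchain, target).
// Any change to any input yields a different key.
fn artifact_key(source: &str, dep_keys: &[u64], toolchain: &str, target: &str) -> u64 {
    let mut h = DefaultHasher::new();
    source.hash(&mut h);
    dep_keys.hash(&mut h);
    toolchain.hash(&mut h);
    target.hash(&mut h);
    h.finish()
}

fn main() {
    let tc = "rustc 1.80.0";
    let tgt = "x86_64-unknown-linux-gnu";
    let dep = artifact_key("pub fn helper() {}", &[], tc, tgt);
    let app = artifact_key("fn main() { helper(); }", &[dep], tc, tgt);
    // Changing the dependency changes `dep`, which in turn changes `app`,
    // so stale artifacts can never be mixed with fresh ones.
    println!("dep={dep:x} app={app:x}");
}
```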
That's not the issue I'm worried about
> At that scale, you can't just give everyone the source and tell them to do a world compile.
Firstly, of course you could.
Secondly, you don't even need to, as NixOS shows.
C++ is still changing quite a lot though, just not in ways that fix the existing issues (often because doing so would break ABI stability).
That is a reason why a lot of folks stick with C.
In some sense, the chasm I'm describing hasn't been crossed by C++ yet
I'm not sure I'm following, are you claiming that C++ is still not widely used enough? That doesn't seem to be the case.
Except as you well know, C might not change as fast, but it does change, including the OS ABI.
Those folks think it doesn't.
> Except as you well know, C might not change as fast, but it does change, including the OS ABI.
I don't know that.
Here's what I know: the most successful OSes have stable OS ABIs. And their market share is positively correlated with the stability of their ABIs.
Most widely used: Windows, which has a famously stable OS ABI. (If you wanted to be contrarian you could say that it doesn't because the kernel ABI is not stable, but that misses the point - on Windows you program against userland ABIs provided by DLLs, which are remarkably stable.)
Second place: macOS, which maintains ABI stability with some sunsetting of old CPU targets. But release to release the ABI provides solid stability at the framework level, and used to also provide stability at the kernel ABI level (not sure if that's still true - but see above, the important thing is userland framework ABI stability at the end of the day).
Third place: Linux, which maintains excellent kernel ABI stability. Linux has the stablest kernel ABI right now AFAIK. And in userland, glibc has been investing heavily in ABI stability; it's stable enough now that in practice you could ship a binary that dynlinks to glibc and expect it to work on many different Linuxes today and in the future.
So it would seem that OS ABIs are stable in those OSes that are successful.
Speaking of Windows alone, there are the various calling conventions (pascal, stdcall, cdecl), 16, 32 and 64 bits, x86, ARM, ARM64EC, DLLs, COM in-proc and out-of-proc, WinRT within Win32 and UWP.
Leaving aside the platforms it no longer supports.
So there are some changes to account for depending on the deployment scenario.
The most stable would be FreeBSD, with its compatNx libraries/modules for old binaries, where N = FreeBSD version number.
Isn't this problem solved by just compiling your libraries with your main app code? Computers are fast enough that this shouldn't be a huge issue.
This assumes a lot:
- the same entity has access to the source of both the library and the main app
- library and main app share the same build tooling
And even if that’s the case, you have the problem of end users accidentally using different versions of the main app and the library and getting unexpected UB.
I think it's the domains that need to evolve, because the effects of that approach have been very bad for a very long time already
In what way does Rust need to evolve? It seems pretty evolved to me already, but I'm no language expert.
What's the state of a single-compiler-version ABI? I mean - if the compiler guaranteed that the ABI is compatible across builds with the same compiler version, we could potentially use dynamic linking for a lot of things (speeding up iterative development) without committing to any long-term stable ABI or going through the C ABI for everything.
I think the way to fix this is:
1. Have the stable ABI be opt-in similarly to how the C ABI is opt-in in Rust (`#[repr(stable)]` or similar)
2. Have the stable ABI be versioned. So it would actually be `#[repr(stable_2026)]` or whatever
The big question is whether Rust wants to play at being adopted by those vendors, or whether it would rather leave them to languages that embrace native libraries.
> Here's the chasm I want to see Rust cross:
That's not important. What I want to see is the Rewrite-it-in-Rust movement move towards GPL.
GPL is pro-user. MIT is pro-business.
In their zeal to convert, they are happily replacing pro-user software with pro-business software. Their primary goal is to convert, not to safeguard.
If they shifted their goal from spreading Rust to protecting users, I'd be a lot happier about the community.
> In their zeal to convert, they are happily replacing pro-user software with pro-business software.
This is one of the two main reasons I'm not using Rust. Second reason is being addressed by gccrs team, so I have no big gripes there, since they are progressing well.
By this same metric, do you refuse to use C because the vast majority of OSS C codebases are permissively licensed? Surely you see that this makes no sense, yes? Neither Rust-the-language nor Rust-the-ecosystem are any more hostile to GPL than any other language and ecosystem.
> By this same metric, do you refuse to use C because the vast majority of OSS C codebases are permissively licensed?
It's not comparable - the Rewrite-it-in-Rust community is aiming to replace the existing pro-user products, with new pro-business products.
The last significant online C community was the one that gave us the pro-user products in the first place.
> Surely you see that this makes no sense, yes? Neither Rust-the-language nor Rust-the-ecosystem are any more hostile to GPL than any other language and ecosystem.
I don't care whether or not they are hostile, that is not relevant. What is relevant to the complaints you are reading is that their primary goal is the spread of Rust, not the interests of the users.
It is totally reasonable to be against a community who are working very hard to replace pro-user software with pro-business software.
> The last significant online C community was the one that gave us the pro-user products in the first place.
You mean the OSI, headed by famous C hacker Eric S. Raymond, the permissive-license rebellion against the GPL? Pretending that the MIT/BSD licenses aren't a legacy of the C ecosystem is revisionist history.
> It's not comparable - the Rewrite-it-in-Rust community is aiming to replace the existing pro-user products, with new pro-business products.
It's clear that you have no idea what you're talking about. There is no "rewrite-it-in-Rust community", there are just people using Rust and writing what they want. That copyleft licenses have lost mindshare to permissive licenses in the decades since the rise of the OSI is a broader movement in OSS that long predates Rust, and has nothing to do with Rust itself.
> You mean the OSI, headed by famous C hacker Eric S. Raymond, the permissive-license rebellion against the GPL? Pretending that the MIT/BSD licenses aren't a legacy of the C ecosystem is revisionist history.
Sure, C played a great part there too, but you are ignoring the present.
What we are seeing now is a concerted effort to replace pro-user products with pro-business products.
Even if you're right that the start of copyleft, with gcc, is revisionist history, that has no relevance to what is happening now, which is a large effort by a specific community to replace pro-user products with pro-business products.
> Neither Rust-the-language nor Rust-the-ecosystem are any more hostile to GPL than any other language and ecosystem.
Acta, non verba.
Couching a non-sequitur in Latin does not an argument make. By all means, have the courage to make an actual statement.
> have the courage to make an actual statement.
Well, that's funny. Considering all the comments I have written for this submission.
First of all, most of the arguments I'd make are already addressed by lelanthran. Do I need to write the same things over and over? It's bad etiquette to repeat things already said by someone else. This is why we have the voting mechanism here.
So, since you insist, let me reiterate the same thing.
No, I don't refuse to use C, because most of the GPL software enabling everything we do today is written in C or a C-descendant language. However, as I write everywhere, I refuse to use Rust for two reasons:
1. It is LLVM-only for now (I don't use any language which doesn't have a compiler in GCC).
2. Rust's apparent "rewrite it in Rust, license it MIT, replace the thing and beat it with a club if it refuses to die" attitude.
For reference, uutils and sister projects use "drop-in-replacement" and "completely replace" leisurely, signaling their clear intentions to forcefully replace GPL code with more permissive, business-friendly bits.
I tend to reluctantly accept Rust in the Kernel since gccrs is in the works and progressing steadily, and Rust guys are somewhat forced to write a proper reference for their language and back it with proper PLT, since it's a hard requirement if you want your programming language to be a long-living, dependable one.
Similarly, you use words like courage and non-sequitur leisurely. I'm not sure it's fitting in this instance.
I think this implicitly makes you a part of this trend, because even less pro-user software exists in Rust because of your decision.
Seriously, that's a good point. I'll seriously consider my position when gccrs becomes a bit more mature.
Thanks for your reply.
There is absolutely nothing "pro-business" about permissive licenses. People choose permissive licenses for all kinds of reasons. For example, I personally use them because I believe they are more free and thus more in line with my values. You shouldn't project unsubstantiated statements onto people's motives like this.
With permissive licenses you often run into the following situation:
You buy something physical from a company - say a humanoid Unitree robot, a robot actuator, or an Arm SBC. These pieces of hardware come with their own proprietary SDK that they sell for a significant fee, or a proprietary GPU driver without any hope of updates. The SDK heavily uses MIT-licensed code and there is no possibility of modifying or inspecting the code for debugging.
From the perspective of the user, the system might as well be 100% proprietary and his freedoms are maximally restricted. You could say that this is fine since it doesn't detract from the original open source project, but you have to remember that these companies would ordinarily have to pay significant development fees to build the same level of functionality and they have no obligation to help or support your project financially. You as the open source developer will then have to beg them to hire you, so you can do paid work that is unrelated to the original project to finally work on your project in your spare time, purely because it is possible to charge for hardware but not the software that the hardware depends on.
What I'm trying to get at here is that this means full vertical integration is the only way. The problem is that most hardware companies are hardware companies first and they don't care about software. They concentrate on making hardware, because each sale brings in money. They don't spend money on software, because it appears to be optional. You can just tell the customer or an open source community to bring their own software. The money that is needed to pay for open source projects flows through the very companies that refuse to spend money on software.
If you want to write open source software, you must be a hardware company so you are customer facing and have access to customer money that can be diverted to the development of the software.
> You shouldn't project unsubstantiated statements onto people's motives like this.
I am not criticising their motives, I am criticising the result!
Also, definitions are hard. It's why we have pro-choice/pro-life and not anti-choice/anti-life - using the positive spin is a good faith characterisation of a position.
In much the same way, I am using pro-user/pro-business; if my intention was to vilify one of those positions I would have used pro-user/anti-user or pro-business/anti-business to label those positions.
No reasonable interpretation of pro-user/pro-business can make the audience think that I am unfairly characterising either of two positions.
I say this to address the use of the word "unsubstantiated" in your assertion about my characterisations.
Yup. Work hand in hand with the FSF. Use GPLv3. No, it is about fat binaries that are just blobs without any introspection or ownership.
That would be great, but Rust relies on compile-time monomorphization for efficiency (very much like C++, if you consider templates polymorphic functions/classes).
This means that any Rust ABI would have to cater for link-time specialization. I think this should be doable, but it would require a solution that's better than just moving code generation into the linker. Instead, one would need to carefully consider the use of the "shape" of all parameters of a function.
I wonder if we are looking at this from too narrow a perspective. We use the C ABI because it's the only game in town. We should be aiming for a safe cross-language ABI. I'd love to make Rust, C, PHP, Swift, Java and Python easily talk to each other inside one process.
It should extend the C ABI with things like strings, arrays, objects with a way to destruct them, and provide some safety guarantees.
As an example, the windows world has COM, which is at the core pretty reasonable for its design constraints, even if gnarly sometimes.
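To make "extend the C ABI with strings ... and a way to destruct them" a bit more concrete, here is a purely illustrative sketch - no such cross-language standard exists today, and every name is made up. The idea is a repr(C) string that carries its length plus a destructor callback, so whichever side allocated it is always the side that frees it.

```rust
#[repr(C)]
pub struct AbiString {
    pub ptr: *const u8,
    pub len: usize,
    /// Called by the consumer when it is done with the string.
    pub destroy: unsafe extern "C" fn(*const u8, usize),
}

unsafe extern "C" fn destroy_rust_string(ptr: *const u8, len: usize) {
    // SAFETY: reconstruct exactly the buffer that from_rust_string leaked
    // (length == capacity because of into_boxed_str below).
    drop(unsafe { String::from_raw_parts(ptr as *mut u8, len, len) });
}

/// Hand a Rust-owned String across the boundary, bundled with its destructor.
pub fn from_rust_string(s: String) -> AbiString {
    let boxed = s.into_boxed_str(); // shed excess capacity so len == capacity
    let len = boxed.len();
    let ptr = Box::into_raw(boxed) as *const u8;
    AbiString { ptr, len, destroy: destroy_rust_string }
}
```

A real cross-language ABI would also need conventions for encodings, error handling, and object lifetimes, which is where COM-style designs come in.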
> It should extend the C ABI with things like strings, arrays, objects with a way to destruct them, and provide some safety guarantees.
> As an example, the windows world has COM, which is at the core pretty reasonable for its design constraints, even if gnarly sometimes.
Yeah, and we had CORBA. GNOME was originally not a DE - the acronym stood for GNU Network Object Model Environment, or similar.
I programmed in CORBA in the 90s. Other than being slower than a snail on weed, I liked it just fine. Maybe it's time for a resurgence of something similar, but without requiring that calls work across networks.
That is why platforms like Common Language Runtime exist, not only COM.
CLR was going to be the COM Runtime+, and the idea was reborn again when the Windows team, with their anti-.NET bias, decided to redo Longhorn in C++, with WinRT.
"Turning to the past to power Windows’ future: An in-depth look at WinRT"
https://arstechnica.com/features/2012/10/windows-8-and-winrt...
It is also how Android IPC and Apple's XPC kind of get into the picture.
The elephant in the room is that FOSS OSes hardly embrace such solutions.
You'll find that all of these languages ultimately build FFI on top of C ABI conventions, though Swift's own internally stable ABI uses a lot of alloca() to place dynamically sized objects on the stack, in a way that's somewhat unidiomatic (the Rust folks are trying to back out of their alloca() equivalent). You can even interface to COM from pure C.
Just in case someone gets funny ideas: GObject is pretty bad. Don't use it for FFI.
> We should be aiming for a safe cross language ABI.
Now to simply get everyone to stop what they're doing so they can rewrite their C code into this new language; shouldn't be too hard, I imagine.
Dynamic linking is also great for compile time of debug builds. If a large library or application is split up into smaller shared libraries, ones unaffected by changes don't need to be touched at all. Runtime dynamic linking has a small overhead, but it's several orders of magnitude faster than compile-time linking, so not a problem in debug builds.
For developer turnaround time, it is huge. We explicitly do not statically link Ardour because as developers we are in the edit-compile-debug cycle all day every day, and speeding up the link step (which dynamic linking does dramatically, especially with parallel linkers like lld) is a gigantic improvement to our quality of life and productivity.
A common pattern is dynamic linking for development and static linking for production-ready releases.
We considered doing both, but it turned out that the GUI toolkit we use was really, really not designed to be statically linked, so we stopped trying.
Yes, that's a good way to do it.
The C ABI can already be used; it comes with all the safety guarantees that C provides. Isn't this as good as C?
It is as good as C.
It's also as bad as C.
I'm saying that the chasm to cross is a safe ABI.
There is no existing safe ABI, so this cannot be an adoption barrier.
Lots of reasons why it is. I'll give you two.
1) It can't be that replacing 20 C/C++ shared objects with 20 Rust shared objects results in 20 copies of the Rust standard library and other dependencies that those Rust libraries pull in. But, today, that is what happens. For some situations, this is too much of a memory usage regression to be tolerable.
2) If you really have 20 libraries calling into one another using the C ABI, then you end up with manual memory management and manual buffer offset management everywhere, even if you rewrite the innards in Rust. So long as Rust doesn't have a safe ABI, the upside of a Rust rewrite might be too low in terms of safety/security gained to be worth doing.
Many Rust core/standard library functions are trivial and inlining them is not really a concern. For those that do involve a significant amount of code, C ABI-compatible code could be exported from some .so dynamic object, with only a small safe wrapper being statically linked.
I found the C ABI a bit too difficult in Rust compared to C or Zig, mainly because of destructors. I am guessing C++ would be difficult in a similar way.
Also, unsafe Rust has strict aliasing always on, which makes writing code difficult unless you do it in certain ways.
Having glue libraries like PyO3 makes this workable in Rust, but that introduces bloat and other issues. This has been the biggest issue I've had with Rust: it is too hard to write something yourself, so you pull in a dependency. And before you know it, you are bloating out of control.
Not really. The foreign ABI requires a foreign API, which adds friction that you don't have with C exporting a C API / ABI. I've never tried, but I would guess that it adds a lot of friction.
Indeed, Victor Ciura from Microsoft DevDiv has several talks on how this is currently an adoption problem at Microsoft.
They have been working around it with DLLs, and COM/WinRT, but still the tooling isn't ideal.
COM is interesting as it implements interfaces using the C++ vtable layout, which can be done in C. Dynamic COM (DCOM) is used to provide interoperability with Visual Basic.
You can also access .NET/C# objects/interfaces via COM. It has an interface to allow you to get the type metadata but that isn't necessary. This makes it possible to e.g. get the C#/.NET exception stack trace from a C/C++ application.
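For readers who haven't seen it, here is a rough sketch of the vtable layout being described, expressed in Rust. The field names are illustrative and this is not a binary-exact IUnknown definition; it only shows the shape: an interface pointer is a pointer to a struct whose first field points at a table of function pointers.

```rust
use std::os::raw::c_void;

#[repr(C)]
pub struct IUnknownVtbl {
    pub query_interface:
        unsafe extern "system" fn(this: *mut IUnknown, iid: *const u8, out: *mut *mut c_void) -> i32,
    pub add_ref: unsafe extern "system" fn(this: *mut IUnknown) -> u32,
    pub release: unsafe extern "system" fn(this: *mut IUnknown) -> u32,
}

#[repr(C)]
pub struct IUnknown {
    pub vtbl: *const IUnknownVtbl,
}

/// Calling through the interface, C-style: load the vtable, then the slot.
pub unsafe fn release(obj: *mut IUnknown) -> u32 {
    unsafe { ((*(*obj).vtbl).release)(obj) }
}
```

Because the layout is just repr(C) structs and function pointers, C, C++, Rust, and .NET can all implement or consume the same interface, which is what makes COM-style interop work across languages.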
>Dynamic COM (DCOM) is used to provide interoperability with Visual Basic.
DCOM is Distributed COM not Dynamic COM[1].
COM does have an interface for dynamic dispatch called IDispatch[2] which is used for scripting languages like VBScript or JScript. It isn't required for Visual Basic though. VB is compiled and supports early binding to interfaces.
[1] https://en.wikipedia.org/wiki/Distributed_Component_Object_M...
Ah yes, that's what I was thinking of. It's been a while since I've worked with COM.
Eh, some people can work on moving to Rust, while others work on adding dynamic linking to Rust.
Or maybe we can somehow get used to living with static linking. (I don't think so, but many seem to think so, in spite of my advice to the contrary!)
Another possibility is to use IPC as the dynamic linking boundary of sorts, but this will consume lots more memory, and as is stated elsewhere in this thread, memory ain't cheap no more.
One particular chasm to keep an eye on, possibly even more relevant than Ubuntu using Rust: When it comes to building important stuff, Ubuntu sticks to curl|YOLO|bash instead of trusting trust in their own distributions.
https://github.com/canonical/firefox-snap/blob/90fa83e60ffef...
When people say "curl|bash", this usually means secondary fetches, random system config changes, likely adding stuff to user's .bashrc
But it's not quite that bad in this particular case - they are fetching a pre-built static toolchain and running an old-school install script, just like in the 1990s. The social conventions around those are quite a bit safer.
(Although I agree, it is pretty ironic that they prefer this to using ppa or binary packaged into deb...)
I don't get it. What's the chasm here?
The "issue" isn't that these new tools from Ubuntu is in Rust, that's almost irrelevant. The issue is that they are not the "standard" tools.
If Ubuntu's Rust replacements aren't adopted by other distributions, or only by some of them, we get an even more fragmented Linux ecosystem. We've already seen this with sudo-rs (which really should be called something else). It's a sudo replacement, ideally a one-to-one replacement, but it's not 100%, and for how long? You can also think of the curl provided by Microsoft PowerShell, which isn't actually curl and only partially provides curl's functionality, but squats the command name.
Ubuntu might accidentally, or deliberately, create a semi-incompatible parallel Linux environment, like Alpine, but worse.
Aren't the versions of Rust in stable Linux distributions like, a century old? Or at least they were last I checked what Debian and Ubuntu LTS were distributing. I think it's because they don't like static linking.
Hasn't the right way to install Rust always been using rustup? I am an Ubuntu user and have never once tried apt for Rust.
I believe Rust is typically only used through `apt` as a dependency for system packages written in Rust, or for building system packages that are written in Rust, so that they can link against a single shared instance of the Rust Standard Library.
Debian had a new stable release 45 days ago. For now I would imagine things aren't too old there. Although a friend of mine recently ran into some ancient packages on Mint, so maybe Mint/Ubuntu are oddly behind Debian Stable right now for some things.
[flagged]
Should we trust someone whose HN account is just as shiny?
[flagged]
In theory, yes.
In practice, very rarely. Lots of 'curl | sh' scripts do secondary fetches, and those don't come with hash checks. And even if they come with hash checks _today_, there is no guarantee the next version won't quietly remove them.
> And even if they come with hash checks _today_, there is no guarantee next version won't quietly remove them.
...But you could say this about literally every security measure in literally every codebase. At any point, anyone could quietly remove anything that enhances security, or quietly add anything that reduces security. So what's your point?
Yes, technically it's all Turing-complete, but conventions matter, a lot. And Rust, being a mature project, is very likely to follow the conventions.
"static toolchain .tar.gz" means bunch of files you download and manually extract. There may be an install.sh script, but it'll just copy files around, not download extra files. And sometimes install.sh is optional, and tools can be run directly from extraction location.
"curl | bash" means "do whatever developers think gives best experience with minimal prompts", which absolutely means download extra files, but also install system packages, update ~/.bashrc, change system settings and so on.
".run installer" mean interactive installer, Windows-style, often with actual GUI. Often goes into /opt.
"deb file" means "all installed files are managed by apt, and can be examined. /etc conflicts are managed by apt. pre/post install scripts are minimal, and there is a clean uninstall command you can trust to actually work".
You can have deviations - like curl|bash used to pull a deb file, or something - but no one likes surprises, so people usually stick to their lanes. If you ship .deb files, they might grow an officially-specified dependency, more files, and maybe a post-inst script, but they won't suddenly start rewriting your .bashrc. Having a static toolchain suddenly download files would make many people unhappy, so that likely won't happen either.
(One exception to this rule is enterprise software being packaged into .deb files - Google Chrome surprised everyone when they started to install an apt source in their postinst, but many enterprise packages (cough NoMachine cough) do much worse things, like only using apt to unpack their installer file, and then running their proprietary install script in postinst.)