I guess using Go + Godot to build native & installable Android & iOS binaries (without any proprietary SDKs) was too easy. So it's time for a real challenge... Linux Binary Compatibility.
(some background reading: https://jangafx.com/insights/linux-binary-compatibility)
For a while now, it's been very easy to reliably ship command-line software & servers for Linux: just run go build and out pops a single static binary that will run on any Linux distribution with kernel 3.2 or later (released in 2012, so there's plenty of room for backwards compatibility).
The problems begin to creep in when you want access to hardware-accelerated graphics. All the GPU drivers on Linux require accessing dynamic libraries via the C ABI. These C libraries are built against a particular libc, most commonly glibc, but there is also a selection of musl-based distributions. If you compile a library or executable against glibc, it won't run on a musl system, and vice versa. That's a big incompatibility right there!
In fact, I've experienced this directly, as I recently replaced the OS on my personal computer with the musl edition of Void Linux. Compiling the Zed editor with musl, for example, was quite the challenge. It turns out that building graphics.gd projects on musl was also very broken: Go doesn't properly support c-shared or c-archive builds against musl.
That's a problem, firstly because this is my distro now, I need to be able to build graphics.gd projects! Secondly, in theory, musl has better support for static linking than glibc; so if there's any solution to this Linux Binary Compatibility mess, it's probably going to have something to do with musl.
To work around these musl issues with Go, I had to patch the runtime with a build overlay that applies when building for GOOS=musl. This is a new GOOS that I've introduced to graphics.gd specifically to make musl builds possible.
Next up, I decided to ditch c-shared builds for musl; these were only convenient because you could easily plug Go into the official Godot binaries. The Godot Foundation doesn't provide official musl builds, so instead I'm linking the Go code directly with a Godot c-archive to end up with a single binary. Amazing, graphics.gd supports musl now!
There's just one issue: this means whenever somebody wants to release their project for Linux, they would have to create two builds (a glibc build and a musl build) and somehow communicate to their users which binary to pick. Hell, before I installed Void Linux I didn't even fully comprehend the differences between musl and glibc; this feels like I'm simply contributing to the problem!
Hold up! Earlier I noted that a key benefit of musl was better static linking support, so there should be a way to build a graphics.gd project into a single static binary. Well, here's the thing: yes, you can totally do this. Godot includes all of its dependencies on Linux, and everything else is dynamically loaded at runtime, so just add the -static flag and...
ERROR Dynamic loading not supported
Ouch, Godot wants to use dlopen to interface with X11, Wayland, OpenGL, Vulkan, etc. As it turns out, musl refuses to implement dlopen for static binaries: they don't want anyone loading a glibc library from musl, because there are fundamental incompatibilities in how the two implement TLS (thread-local storage).
Don't worry though! Since dlopen is compiled as a weak symbol, as long as graphics.gd implements it, there's still a chance to get a single static binary that can execute on any Linux system from kernel 3.2 onwards.
There's some precedent for this: the detour technique in C lets you dlopen SDL and show graphics when running without a standard library, and Cosmopolitan's dlopen uses a similar technique. So the solution here is to extend this approach to musl.
The way this works is by including (or compiling) a small C program for the target machine. We load that program and execute into it from the same process. It brings in the host's dynamic linker so that we can steal the system's dlopen and longjmp back into graphics.gd. We then wrap any dynamically loaded functions with an assembly trampoline that switches to the system's libc TLS for the duration of the call. It all starts to look a lot like cgo.
So after much hair pulling and LLM wrangling, it turns out that musl + dlopen is all we need to produce single static binaries + graphics for Linux. Everyone can now enjoy the Go single-static-binary experience on Linux with full support for hardware accelerated graphics.
Here's a build of the graphics.gd Dodge the Creeps sample project that should execute (and hopefully render graphics) on any Linux system with gcc installed (we don't embed the helper binaries yet).
https://release.graphics.gd/dodge_the_creeps.static
You can also cross-compile your own project (on any supported platform):
GOOS=musl GOARCH=amd64 gd build
Note: you may need to delete your export_presets.cfg so that the new musl export preset is added to your project.
So what we need is essentially a "libc virtualization".
But musl is only available on Linux, isn't it? Cosmopolitan (https://github.com/jart/cosmopolitan) goes further: it is also available on Mac and Windows, and it uses SIMD and other performance-related improvements. Unfortunately, one has to cut through the marketing "magic" to find the main engineering value. Stripping away the "polyglot" shell-script hacks and the "Actually Portable Executable" container (which are undoubtedly innovative), the core value proposition of Cosmopolitan is indeed a platform-agnostic, statically linked C standard (plus some POSIX) library that performs runtime system-call translation; so to speak, "the musl we have been waiting for".
I find it amazing how much the mess that building C/C++ code has been for so many decades seems to have influenced the direction that technology, the economy, and even politics have taken.
Really, what would the world look like if this problem had been properly solved? Would the centralization and monetization of the Internet have followed the same path? Would Windows be so dominant? Would social media have evolved to the current status? Would we have had a chance to fight against the technofeudalism we're headed for?
What I find amazing is why people continuously claim glibc is the problem here. I have a commercial software binary from 1996 that _still works_ to this day. It even links with X11, and works under Xwayland.
The trick? It's not statically linked, but dynamically linked. And it doesn't link with anything other than glibc, X11... and bdb.
At this point I think people just do not know how binary compatibility works at all. Or they refer to a different problem that I am not familiar with.
We (small HPC system) just upgraded our OS from RHEL 7 to RHEL 9. Most user apps are dynamically linked, too.
You wouldn't believe how many old binaries broke. Lots of ABI upgrades: libpng, ncurses, heck, even stuff like readline and libtiff all changed just enough for linker errors to occur.
Ironically, all the statically compiled stuff was fine. Some small things like you mention, only linking to glibc and X11, were fine too. Funnily enough grabbing some old .so files from the RHEL 7 install and dumping them into LD_LIBRARY_PATH also worked better than expected.
But yeah, now that I'm writing this out, glibc was never the problem in terms of forwards compatibility. Now running stuff compiled on modern Ubuntu or RHEL 10 on the older OS, now that's a whole different story...
> Funnily enough grabbing some old .so files from the RHEL 7 install and dumping them into LD_LIBRARY_PATH also worked better than expected.
Why "better than expected"? I can run the entire userspace from Debian Etch on a kernel built two days ago... some kernel settings need to be changed (because of the old glibc! but it's not glibc's fault: it's the kernel who broke things), but it works.
> Now running stuff compiled on modern Ubuntu or RHEL 10 on the older OS, now that's a whole different story...
But this is a different problem, and no one makes promises here (not the kernel, not musl). So all the talk of statically linking with musl to get this type of compatibility is bullshit (at some point you're going to hit a syscall/instruction/whatever that the newer musl uses and the older kernel/hardware does not support).
Better than expected because it's mixing userlands. We didn't put the entire /usr/lib of the old system in LD_LIBRARY_PATH, just some stuff like the old libpng, libjpeg and the like. Taking an image of an old compute node still on RHEL 7 and then dumping it in a container naturally worked, but at that point it's only the kernel interface you have to worry about, not different glibc, gtk, qt, and that kind of stuff.
> it's the kernel who broke things
I remember this from a heated LKML exchange 13 years ago; look how the tables have turned:
>
> Are you saying that pulseaudio is entering on some weird loop if the
> returned value is not -EINVAL? That seems a bug at pulseaudio.
Mauro, SHUT THE FUCK UP!
It's a bug alright - in the kernel. How long have you been a maintainer? And you still haven't learnt the first rule of kernel maintenance?
If a change results in user programs breaking, it's a bug in the kernel. We never EVER blame the user programs. How hard can this be to understand?
To make matters worse, commit f0ed2ce840b3 is clearly total and utter CRAP even if it didn't break applications. ENOENT is not a valid error return from an ioctl. Never has been, never will be. ENOENT means "No such file and directory", and is for path operations. ioctl's are done on files that have already been opened, there's no way in hell that ENOENT would ever be valid.
> So, on a first glance, this doesn't sound like a regression,
> but, instead, it looks tha pulseaudio/tumbleweed has some serious
> bugs and/or regressions.
Shut up, Mauro. And I don't _ever_ want to hear that kind of obvious garbage and idiocy from a kernel maintainer again. Seriously.
I'd wait for Rafael's patch to go through you, but I have another error report in my mailbox of all KDE media applications being broken by v3.8-rc1, and I bet it's the same kernel bug. And you've shown yourself to not be competent in this issue, so I'll apply it directly and immediately myself.
WE DO NOT BREAK USERSPACE!
Seriously. How hard is this rule to understand? We particularly don't break user space with TOTAL CRAP. I'm angry, because your whole email was so _horribly_ wrong, and the patch that broke things was so obviously crap. The whole patch is incredibly broken shit. It adds an insane error code (ENOENT), and then because it's so insane, it adds a few places to fix it up ("ret == -ENOENT ? -EINVAL : ret").
The fact that you then try to make excuses for breaking user space, and blaming some external program that used to work, is just shameful. It's not how we work.
Fix your f*cking "compliance tool", because it is obviously broken. And fix your approach to kernel programming.
Linus

The problem of modern libc (newer than ~2004; I have no idea what that 1996 one is doing) isn't that old software stops working. It's that you can't compile software on your up-to-date desktop and have it run on your "security updates only" server. Or your clients' "couple of years out of date" computers.
And that doesn't require using newer functionality.
But this is not "backwards compatibility". No one promises this type of "forward compatibility" that you are asking for. Even win32 only does it exceptionally... maybe today you can still build a win10 binary with a win11 toolchain, but you cannot build a win98 binary with it for sure.
And this has nothing to do with 1996, or 2004 glibc, at all. In fact, glibc makes this otherwise impossible task actually possible: you can force linking with older symbols, but that solves only a fraction of what you're trying to achieve. Statically linking / musl does not solve this either: at some point musl is going to use a newer syscall, or some other newer feature, and you're broken again.
Also, what is so hard about building your software in your "security updates only" server? Or a chroot of it at least? As I was saying below, I have a Debian 2006-ish chroot for this purpose....
> maybe today you can still build a win10 binary with a win11 toolchain, but you cannot build a win98 binary with it for sure.
In my experience, that's not quite accurate. I'm working on a GUI program that targets Windows NT 4.0, built using a Win11 toolchain. With a few tweaks here and there, it works flawlessly. Microsoft goes to great lengths to keep system DLLs and the CRT forward- and backward-compatible. It's even possible to get libc++ working: https://building.enlyze.com/posts/targeting-25-years-of-wind...
What does "a Win11 toolchain" mean here? In the article you link, the guy is filling in missing functions, rewriting the runtime, and overall doing even more work than what I need to do to build binaries on a Linux system from 2026 that would work on a Linux from the 90s: a simple chroot. Even building gcc is a walk in the park compared to reimplementing OS threading functions...
Windows dlls are forward compatible in that sense. If you use the Linux kernel directly, it is forward compatible in that sense. And, of course, there is no issue at all with statically linked code.
The problem is with Linux dynamic linking, and the idea that you must not statically link the glibc code. And you can circumvent it by freezing your glibc abstraction interface, so that if you need to add another function, you do so by making another library entirely. But I don't know if musl does that.
> Windows dlls are forward compatible in that sense.
If you want to go to such level, ELF is also forward compatible in that sense.
This is completely irrelevant, because what the developer is going to see is that the binaries he builds on XP SP3 no longer work on XP SP2 because of a link error: the _statically linked_ runtime is going to call symbols that are not in XP SP2's DLLs (e.g. the DecodePointer debacle).
> If you use the Linux kernel directly, it is forward compatible in that sense.
Or not, because there will be a note in the ELF headers with the minimum kernel version required, which is going to be set to a recent version even if you do not use any newer feature (unless you play with the toolchain). (PE has a similar field too, leading to the "not a valid win32 executable" messages.)
> And, of course, there is no issue at all with statically linked code.
I would say statically linked code is precisely the root of all these problems.
In addition to bringing problems of its own. E.g. games that dynamically link with SDL can be patched to use any other SDL version, including one with bugfixes for X support, audio, etc. Games that statically link with SDL? Sorry..
> And you can circumvent it by freezing your glibc abstraction interface, so that if you need to add another function, you do so by making another library entirely. But I don't know if musl does that.
Funnily, I think that is exactly the same as the solution I'm proposing for this conundrum: just (dynamically) link with the older glibc! Voila: your binary now works with glibc from 1996 and glibc from 2026.
Frankly, glibc is already the project with the best binary compatibility of the entire Linux desktop, if not the only one with a binary compatibility story at all. The kernel is _not_ better in this regard (e.g. /dev/dsp).
If you use only features available on the older version, for sure, you can compile your software in Win-7 and have it run in Win-2000. Without following any special procedure.
I know, I've done that.
> just (dynamically) link with the older glibc!
Except that the older glibc is unmaintained and very hard to get a hold of and use. If you solve that, yeah, it's the same.
> If you use only features available on the older version, for sure, you can compile your software in Win-7 and have it run in Win-2000. Without following any special procedure.
No, you can't. When you use 7-era toolchain (e.g. VS 2012) it sets the minimum client version in PE header to Vista, not XP much less 2k.
If you use VC++6 in 7, then yes, you can; but that's not really that different from me using a Debian Etch chroot to build.
Even within the XP era this happens, since there are VS versions that target XP _SP2_ and produce binaries that are not compatible with XP _SP1_. That's the "DecodePointer" debacle I was mentioning. _Even_ if you do not use any "SP2" feature (as few as they are), the runtime (the statically linked part, not MSVCRT) is going to call DecodePointer, so even the smallest hello world will catastrophically fail on older win32 versions.
Just Google around for hundreds of confused developers.
> Except that the older glibc is unmaintained and very hard to get a hold of and use.
"Unmaintained" is another way of saying "frozen" or "security updates only", I guess. But... hard to get a hold of? You are literally running it on the "security updates only" server that you wanted to target in the first place!
> No, you can't. When you use 7-era toolchain (e.g. VS 2012) it sets the minimum client version in PE header to Vista, not XP much less 2k.
Yes, you can! There are even multiple Windows 10 era toolchains that officially support XP. VS 2017 was the last release that could build XP binaries.
"Without following any special procedure". I know you can install older toolchains and then build using those, but I can do as much on any platform (e.g. by using a chroot). The default on VS2012 is Vista-only binaries.
This kind of compatibility is available on, of all systems, macOS.
> The trick? It's not statically linked, but dynamically linked. And it doesn't link with anything other than glibc, X11 ... and bdb.
How would that work, given that glibc has gone through a soname change since then? If it's from 1996, are you sure the secret isn't that it uses the non-g libc?
It has libc5 and glibc versions. It even has a version shipped as an rpm, which I guess makes it from 97. The rpm, by the way, also installs.
> It has libc5 and glibc versions
That suggests someone went to significantly more effort than "just dynamically link it".
What effort exactly does it suggest? It is literally dynamically linked with glibc.
It suggests someone went into the details of how it was linked and was careful about what it was and wasn't linked to, and perhaps even intervened directly in the low-level parts of the linking process.
Or rather it simply suggests they built for the two major distributions of the time, or even for two versions of them...
Why is my entire argument so hard to understand? To build for a different glibc you do not have to do _any_ type of arcane magic or whatever you claim. You just build in a different system... or chroot... I have been doing that _myself_ for at least 15 years, and I know of other Linux desktop commercial shops that have been doing it for much, much longer. Chroots are _trivial_.
Can you write up a blog post about how this works? Because both as a publisher and as a user, broken binaries are much more the norm.
Broken binaries because of glibc? You'd need to give an example, because my point is that I'm yet to see any.
If you are talking about _any_ other library, yes, that is a problem. My point is that glibc is the only one that even has a compatibility story.
got it thanks for clarifying.
How does this technical issue affect the economy and politics? In what way would the world be different just because we used a better linker?
Well, you could just look at things from an interoperability and standards viewpoint.
Lots of tech companies and organizations have created artificial barriers to entry.
For example, most people own a computer (their phone) that they cannot control. It will play media under the control of other organizations.
The whole top-to-bottom infrastructure of DRM was put into place by hollywood, and then is used by every other program to control/restrict what people do.
That was the point of the successive questions in the second paragraph...
existential crisis: so hot right now
If the APE concept isn't appealing to you, you may be interested in the work on LLVM libc. My friend recently delivered an under-appreciated lecture on the vision:
tl;dw: Google recognizes the need for a statically linked, modular, latency-sensitive, portable POSIX runtime, and they are building it.
At the rate things are going we'll need a container virtualization layer as well, a docker for docker if you know what I mean
I'm building in this space, I take a docker inside a microvm (vm-lite) approach.
I wonder if inside the docker container we can run a sandboxed WASM runtime?
It's just fun ;)
Do you mean something like gVisor?
"All problems in computer science can be solved by another level of indirection"
I desperately want to write C/C++ code that has a web server and can talk websockets, and that I can compile with Cosmopolitan.
I don't want Lua. Using Lua is crazy clever, but it's not what I want.
I should just vibe code the dang thing.
You should, it’s fun.
I have a devcontainer running the Cosmopolitan toolchain and stuck the cosmocc README.md in a file referenced from my AGENTS.md.
Claude does a decent job. You have to stay on top of it when it’s writing C, easy to turn to spaghetti.
Also the fat binary concept trips up agents - just have it read the actual cosmocc file itself to figure any issues out.
I've got it working. Used Mongoose. Unfortunately Actually Portable Executables seem to not play well with WSL, and the suggested fixes didn't work. I'm able to play with it in a VM. Not as portable as I'd hoped, but I'll see how it goes.
Is there a tool that takes an executable, collects all the required .so files and produces either a static executable, or a package that runs everywhere?
There are things like this.
The things I know of and can think of off the top of my head are:
1. appimage https://appimage.org/
2. nix-bundle https://github.com/nix-community/nix-bundle
3. guix via guix pack
4. A small collection of random small projects hardly anyone uses for docker to do this (i.e. https://github.com/NilsIrl/dockerc )
5. A docker image (a package that runs everywhere, assuming a docker runtime is available)
6. https://en.wikipedia.org/wiki/Snap_(software)
AppImage is the closest to what you want I think.
It should be noted that AppImages tend to be noticeably slower at runtime than other packaging methods, and also very big, since a typical AppImage includes most of its libraries. They're good as a "compile once, run everywhere" approach, but you're really accommodating edge cases here.
A "works in most cases" build should also be available for those it would benefit. And if you can, why not provide specialized packages for the edge cases?
Of course, don't take my advice as-is, you should always thoroughly benchmark your software on real systems and choose the tradeoffs you're willing to make.
IMO one of the best features of AppImage is that it makes it easy to extract without needing external tools. It's usually pretty easy for me to look at an AppImage and write a PKGBUILD to make a native Arch package; the format already encodes what things need to be installed where, so it's only a question of whether the libraries it contains are the same versions of what I can pull in as dependencies (either from the main repos or the AUR). If they are, my job is basically already done, and if they aren't, I can either choose to include them in the package itself assuming I don't have anything conflicting (which is fine for local use even if it's not something that's usually tolerated when publishing a package) or stick with using the AppImage.
I agree. I've seen quite a few AUR packages built that way and I'm using a few myself too. The end user shouldn't be expected to do this though! :D
I agree with you as well! I don't think AppImage is perfect by any means. I do prefer it overall to the other commonly mentioned "universal" package tools like snap and flatpak, but I think my ideal system would essentially be a middleware protocol between build tool "frontends" and package format "backends" and then an ecosystem of tooling around that. LLVM and LSP haven't magically solved every issue with compilers and editor support for languages by any means, but they have significantly moved the needle on the average experience for quite a large number of end users even if they never directly touch any of the underlying protocols.
> It should be noted that AppImages tend to be noticeably slower at runtime than other packaging methods
'Noticeably slower' at what? I've run, e.g. xemu (original xbox emulator) as both manually built from source and via AppImage-based released and i never noticed any difference in performance. Same with other AppImage-based apps i've been using.
Do you refer to launching the app or something like that? TBH i cannot think of any other way an AppImage would be "slower".
Also, from my experience, applications released as AppImages have been by far the most consistent at "just working" on my distro.
I wish AppImage was slightly more user friendly and did not require the user to specifically make it executable.
We fix this issue by distributing ours in a tar file with the executable bit set. Linux novices can just double-click on the tar to extract it and double-click again on the actual AppImage.
Been doing it this way for years now, so it's well battle tested.
That kind of defeats the point of an AppImage though; you could just as well have a tar archive with a classic collection of binaries + optional launcher script.
A single file is much easier to manage than a whole bunch of them, plus AppImages can be installed into the desktop using integration tools.
AppImage looks like what I need, thanks.
I wonder though, if I package say a .so file from nVidia, is that allowed by the license?
AppImage is not what you need. It's just an executable wrapper for the archive. To make the software cross-distro, you need to compile it manually on an old distro with old glibc, make sure all the dependencies are there, and so on.
https://docs.appimage.org/reference/best-practices.html#bina...
There are several automation tools for making AppImages, but they won't magically allow you to compile on the latest Fedora and expect your executable to work on Debian Stable. It still requires quite a lot of manual labor.
Yeah, a lot of AppImage developers make assumptions about what their users' systems have as well (i.e. "if I depend on something that is installed by default on Ubuntu desktop then it's fine to leave out"). For example, a while ago I installed an AppImage GUI program on a headless server that I wanted to use via X11 forwarding. I ended up having to manually install a bunch of random packages (GTK stuff, fonts, etc.) to get it to run. I see AppImage as basically the same as distributing Linux binaries via .tar.gz archives, except everything's in a single file.
Don't forget - AppImage won't work if you package something with glibc, but run on musl/uclibc.
>I wonder though, if I package say a .so file from nVidia, is that allowed by the license?
It won't work: drivers usually require an exact (or more or less the same) kernel module version. That's why you need to explicitly exclude graphics libraries from being packaged into an AppImage. This also makes an AppImage packaged on glibc non-runnable on musl.
https://github.com/Zaraka/pkg2appimage/blob/master/excludeli...
No, that's a copyright violation, and it won't run on AMD or Intel GPUs, or kernels with a different Nvidia driver version.
But this ruins the entire idea of packaging software in a self-contained way, at least for a large class of programs.
It makes me wonder, does the OS still take its job of hardware abstraction seriously these days?
The OS does. Nvidia doesn't.
Does Nvidia not support OpenGL?
Not really. Nvidia's OpenGL is incompatible with all existing OS OpenGL interfaces, so you need to ship a separate libGL.so if you want to run on Nvidia. In some cases you even need separate binaries, because if you dynamically link against Nvidia's libGL.so, it won't run with any other libGL.so. Sometimes also vice versa.
Does AMD use a statically linked OpenGL?
AMD uses the dynamically linked system libGL.so, usually Mesa.
So you still need dynamic linking to load the right driver for your graphics card.
Most stuff like that uses some kind of "icd" mechanism that does 'dlopen' on the vendor-specific parts of the library. Afaik neither OpenGL nor Vulkan nor OpenCL are usable without at least dlopen, if not full dynamic linking.
It does, and one way it does that is by dynamically loading the right driver code for your hardware.
That’s a licensing problem, not a packaging problem. A DLL is a DLL; the only thing that changes is whether you’re allowed to redistribute it.
Typically appimage packaging excludes the .so files that are expected to be provided by the base distro.
Any .so from Nvidia is supposed to be one of those things, because it also depends on the drivers etc. provided by Nvidia.
Also, on a side note, a lot of .so files also depend on other files in /usr/share, /etc, etc...
I recommend using an AppImage only for the happy path application frameworks they support (eg. Qt, Electron etc...). Otherwise you'd have to manually verify all the libraries you're bundling will work on your user's distros.
Depends on the license and the specific piece of software. Redistribution of commercial software may be restricted or require explicit approval.
You generally still have to abide by license obligations for OSS too, e.g. the GPL.
To be specific for the example: Nvidia has historically been quite restrictive here (redistribution only on approval). Firmware has only recently been opened up a bit, and drivers continue to be an issue, IIRC.
I don't think you can link shared objects into a static binary, because you'd have to patch every instance where the code reads the PLT/GOT (which can be arbitrarily mangled by the optimizer) and turn them back into relocations for the linker to resolve.
You can change the rpath though, which is sort of like an LD_LIBRARY_PATH baked into the object, which makes it relatively easy to bundle everything but libc with your binary.
edit: Mild correction, there is this: https://sourceforge.net/projects/statifier/ But the way this works is that it has the dynamic linker load everything (without ASLR / in a compact layout, presumably) and then dumps an image of the process. Everything else is just increasingly fancy ways of copying shared objects around and making ld.so prefer the bundled libraries.
15-30 years ago I managed a lot of commercial chip design EDA software that ran on Solaris and Linux. We had wrapper shell scripts for so many programs that used LD_LIBRARY_PATH and LD_PRELOAD to point to the specific versions of various libraries that each program needed. I used "ldd" which prints out the shared libraries a program uses.
There is this project, "Actually Portable Executable"/Cosmopolitan libc (https://github.com/jart/cosmopolitan), that allows a compile-once, run-anywhere style of C/C++ binary.
Ermine: https://www.magicermine.com/
It works surprisingly well but their pricing is hidden and last time I contacted them as a student it was upwards of $350/year
Someone already mentioned AppImage, but I'd like to draw attention to an alternate implementation that executes as a POSIX shell script, making it possible to dynamically dispatch different programs on different architectures, e.g. a fat binary for ARM and x64.
So autotools but for execution instead of compilation?
I don't think it's as simple as "run this one thing to package it", so if the process rather than the format is what you're looking for, this won't work, but that sounds a lot like how AppImages work from the user perspective. My understanding is that an AppImage is basically a static binary paired with a small filesystem image containing the "root" for the application (including the expected libraries under /usr/lib or wherever they belong). I don't like everything about the format, but overall it feels a lot less prescriptive than other "universal" packages like flatpak or snap, and the fact that you can easily extract it and pick out the pieces you want to repackage without needing any external tools (there are built-in flags on the binary like --appimage-extract) helps a lot.
AppImage comes close to fulfilling this need:
https://appimage.github.io/appimagetool/
Myself, I've committed to using Lua for all my cross-platform development needs, and in that regard I find luastatic very, very useful ..
You can "package" all .so files you need into one file, there are many tools which do this (like a zip file).
But you can't take .so files and make one "static" binary out of them.
> But you can't take .so files and make one "static" binary out of them.
Yes you can!
This is more-or-less what unexec does
- https://news.ycombinator.com/item?id=21394916
For some reason nobody seems to like this sorcery, probably because it combines the worst of all worlds.
But there's almost[1] nothing special about what the dynamic linker is doing to get those .so files into memory that it can't arrange them in one big file ahead of time!
[1]: ASLR would be one of those things...
What if the library you use calls dlopen later? That’ll fail.
There is no universal, working way to do it. Only some hacks which work in some special cases.
> What if the library you use calls dlopen later? That’ll fail.
Nonsense. xemacs could absolutely call dlopen.
> There is no universal, working way to do it. Only some hacks which work in some special cases.
So you say, but I remember not too long ago you weren't even aware it was possible, and you clearly didn't check one of the most prominent users of this technique, so maybe you should also explain why I or anyone else should give a fuck about what you think is a "hack"?
Well not a static binary in the sense that's commonly meant when speaking about static linking. But you can pack .so files into the executable as binary data and then dlopen the relevant memory ranges.
No you can't. dlopen's signature takes a file path, not a memory range. And if you start saving the libraries to the filesystem before opening them, there's no difference from shipping an archive directly, and you can skip the trouble of your own archive code.
Yes, that's true.
But I'm always a bit sceptical about such approaches. They are not universal: you still need glibc/musl to be the same on the target system. Also, if you compile against a new glibc version but try to run on an old glibc version, it might not work.
These are just strange and confusing from the end users' perspective.
> But I'm always a bit sceptical about such approaches. They are not universal: you still need glibc/musl to be the same on the target system. Also, if you compile against a new glibc version but try to run on an old glibc version, it might not work.
Why would you include most of your dynamic libraries but not your libc?
You could still run into problems if you (or your libraries) want to use syscalls that weren't available on older kernels or whatever.
You can include it, but
- either you use chroot, proot or similar to make /lib path contain your executable’s loader
- or you hardcode different loader path into your executable
Both are difficult for an end user.
This isn't that hard (that's not to say it's easy; it is tricky). Your executable should be a statically linked stub loader with an awful lot of data; the stub loader dynamically links your real executable (and libraries, including libc) from the data and runs it.
To add to this, in case of any remaining confusion: you can implement your own execve in userspace. [0] And since the kernel's execve is just a piece of machinery that invokes the loader, it follows that you're free to make any changes you'd like to the overall process.
Bonus points if you add compression or encryption and manage to trip a virus scanner or three. [1]
[0] https://grugq.github.io/docs/ul_exec.txt
[1] https://blackhat.com/presentations/bh-usa-07/Yason/Whitepape...
mkdir chroot
cd chroot
for lib in $(ldd "${executable}" | grep -oE '/\S+'); do
    tgt="$(dirname "${lib}")"
    mkdir -p ".${tgt}"
    cp "${lib}" ".${tgt}"
done
mkdir -p ".$(dirname "${executable}")"
cp "${executable}" ".${executable}"
tar czf ../chroot-run-anywhere.tgz .

You're supposed to do this recursively for all the libs, no?
E.g. your app might only depend on libqt5gui.so, but that libqt5gui.so might in turn depend on some libxml, etc.
Not to mention all the files from /usr/share etc. that your application might indirectly depend on.
> You're supposed to do this recursively
ldd works recursively.
> Not to mention all the files from /usr/share
Well yeah, there obviously cannot be a generic way to enumerate all the files a program might open...
Exodus (https://github.com/intoli/exodus) used to be good for this but is giving me a Python error these days.
A couple similar projects:
https://github.com/sigurd-dev/mkblob https://github.com/tweag/clodl
https://github.com/gokrazy/freeze is a minimal take on this
(not an endorsement, I do not use it, but I know of it)
[dead]
Binary compatibility extends beyond the code that runs in your process. These days a lot of functionality happens by way of IPC, with a variety of wire protocols depending on the interface: for instance D-Bus, the Wayland protocols, varlink, etc. Both the wire protocol and the APIs built on top of it need to retain backwards compatibility to ensure binary compatibility; otherwise you're not going to be able to run on arbitrary Linux-based platforms. And unlike the kernel, these userspace surfaces don't treat backwards compatibility as nearly as important. It's also much more difficult to target a subset of these APIs that is available on systems only five years old. I would argue API endpoints on the web carry less risk here (although those break all the time as well).