I think the storage optimization aspect is secondary; it is more about keeping control over your distribution. You need processes to replace all occurrences of xz with an uncompromised version when necessary. When all packages in the distribution link against one and the same copy, that's easy.
Nix and Guix sort of move this into the source layer. Within their respective distributions you would update the package definition of xz, and all packages depending on it would be rebuilt to use the new version.
Using shared dependencies is mostly an irrelevant detail that falls out of this in the end. Nix can also dedupe at the filesystem layer, e.g. to reduce duplication between different versions of the same package.
You can of course ship all dependencies for all packages separately, but you have to have a solution for security updates.
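In Nix terms, that replacement step could be sketched as a single overlay (the version, URL, and hash here are purely illustrative, not a real advisory):

```nix
# Hypothetical overlay: pin xz to a known-good release.
# Every package in the closure that depends on xz gets rebuilt against it.
self: super: {
  xz = super.xz.overrideAttrs (old: {
    version = "5.4.6";
    src = super.fetchurl {
      url = "mirror://sourceforge/lzmautils/xz-5.4.6.tar.xz";
      sha256 = "…"; # real hash of the trusted tarball goes here
    };
  });
}
```

One definition changes, and the rebuild of all dependents follows mechanically from the dependency graph.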
> But they’re roughly the same paradigm as docker, right?
Absolutely not. Nix and Guix are package managers that (very simplified) model the build process of software as pure functions, mapping dependencies and source code (the inputs) to a resulting build (the output). Docker is something entirely different.
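A minimal sketch of that model in Nix (names and builder are illustrative; real builds run sandboxed with a pinned builder):

```nix
# A raw derivation: the output is a pure function of everything
# listed here — name, system, builder, args. Change any input
# and you get a different store path for the result.
derivation {
  name = "hello-1.0";
  system = "x86_64-linux";
  builder = "/bin/sh";
  args = [ "-c" "echo hello > $out" ];
}
```

Nothing outside the declared inputs may influence the result, which is exactly what makes the model reproducible.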
> they’re both still throwing in the towel on deploying directly on the underlying OS’s userland
The existence of an underlying OS userland _is_ the disaster. You can't build a robust package management system on a shaky foundation; if Nix or Guix were to use anything from the host OS, their packaging model would fundamentally break.
> unless you go all the way to nixOS
NixOS does not have a "traditional/standard/global" OS userland on which anything could be deployed (excluding /bin/sh for simplicity). A package installed with nix on NixOS is identical to the same package being installed on a non-NixOS system (modulo system architecture).
> shipping what amounts to a filesystem in a box
No. Docker ships a "filesystem in a box", i.e. an opaque blob, an image. Nix and Guix ship the package definitions from which they derive what they need to have populated in their respective stores, and either build those required packages or download pre-built ones from somewhere else, depending on configuration and availability.
With Docker, two independent images share nothing, except maybe some base layers if they happen to use the same ones. With Nix or Guix, packages automatically share their dependencies iff they are the same dependency. The thing is: if one package depends on lib foo compiled with -O2 and the other depends on lib foo compiled with -O3, then those are two different dependencies. This nuance is something that only the Nix model started to capture at all.
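The -O2/-O3 case can be sketched directly (hypothetical package; the flag becomes part of the hashed inputs, so the two builds land at distinct store paths and count as distinct dependencies):

```nix
# Hypothetical: libfoo built twice, differing only in optimization flags.
with import <nixpkgs> {};
let
  libfoo = flags: stdenv.mkDerivation {
    pname = "libfoo";
    version = "1.0";
    src = ./libfoo-1.0.tar.gz;   # placeholder source
    NIX_CFLAGS_COMPILE = flags;  # part of the derivation's inputs
  };
in {
  withO2 = libfoo "-O2";  # /nix/store/<hash-a>-libfoo-1.0
  withO3 = libfoo "-O3";  # /nix/store/<hash-b>-libfoo-1.0, hash-a != hash-b
}
```

Two packages share a dependency only when every input of that dependency is identical; otherwise they get separate copies, automatically.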
You have to differentiate between container images and "runtime" containers. You can have the former without the latter, and vice versa. They are entirely orthogonal things.
E.g. systemd exposes a lot of resource control as well as sandboxing options, to the point that I would argue that systemd services can be very similar to "traditional" runtime containers, without any image involved.
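For a sense of what that looks like, a service can opt into isolation and resource limits directly in its unit file, no image anywhere (hypothetical daemon path; all directives are standard systemd options):

```ini
# Container-like isolation for a plain service, without any image.
[Service]
ExecStart=/usr/bin/mydaemon
DynamicUser=yes
PrivateTmp=yes
ProtectSystem=strict
ProtectHome=yes
NoNewPrivileges=yes
MemoryMax=512M
CPUQuota=50%
```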
> Most people don't want a mental model just to type a sentence.
"Just typing a sentence" is what I was referring to with "basic linear text writing", for which modal editing indeed does not bring much of a benefit. That's not text editing though.
> Instead of the snark, you could just admit that your preference doesn't align with the median user.
? I explicitly wrote that people work differently and have different preferences. What was snarky about that?
Besides, the median user does not edit configuration files via ssh, so they are hardly relevant here. The median user does not even know what a terminal is. If this were about the median user, then we would be discussing Word vs. Notepad, or whatever.