
On Tuesday 30 November 2010 15:58:00 David Collier wrote:
> I see that busybox spreads its links over these 4 directories.
>
> Is there a simple rule which decides which directory each link lives
> in.....
>
> For instance I see kill is in /bin and killall in /usr/bin.... I don't
> have a grip on what might be the logic for that.

You know how Ken Thompson and Dennis Ritchie created Unix on a PDP-7 in 1969? Well around 1971 they upgraded to a PDP-11 with a pair of RK05 disk packs (1.5 megabytes each) for storage.

When the operating system grew too big to fit on the first RK05 disk pack (their root filesystem) they let it leak into the second one, which is where all the user home directories lived (which is why the mount was called /usr). They replicated all the OS directories under there (/bin, /sbin, /lib, /tmp...) and wrote files to those new directories because their original disk was out of space. When they got a third disk, they mounted it on /home and relocated all the user directories to there so the OS could consume all the space on both disks and grow to THREE WHOLE MEGABYTES (ooooh!).

Of course they made rules about "when the system first boots, it has to come up enough to be able to mount the second disk on /usr, so don't put things like the mount command in /usr/bin or we'll have a chicken and egg problem bringing the system up." Fairly straightforward. Also fairly specific to v6 unix of 35 years ago.

The /bin vs /usr/bin split (and all the others) is an artifact of this, a 1970's implementation detail that got carried forward for decades by bureaucrats who never question _why_ they're doing things. It stopped making any sense before Linux was ever invented, for multiple reasons:

1) Early system bringup is the province of initrd and initramfs, which deals with the "this file is needed before that file" issues. We've already _got_ a temporary system that boots the main system.
2) shared libraries (introduced by the Berkeley guys) prevent you from independently upgrading the /lib and /usr/bin parts. The two partitions have to _match_ or they won't work. This wasn't the case in 1974; back then they had a certain level of independence because everything was statically linked.

3) Cheap retail hard drives passed the 100 megabyte mark around 1990, and partition resizing software showed up somewhere around there (PartitionMagic 3.0 shipped in 1997).

Of course once the split existed, some people made other rules to justify it. Root was for the OS stuff you got from upstream and /usr was for your site-local files. Then / was for the stuff you got from AT&T and /usr was for the stuff that your distro like IBM AIX or DEC Ultrix or SGI Irix added to it, and /usr/local was for your specific installation's files. Then somebody decided /usr/local wasn't a good place to install new packages, so let's add /opt! I'm still waiting for /opt/local to show up...

Of course given 30 years to fester, this split made some interesting distro-specific rules show up and go away again, such as "/tmp is cleared between reboots but /usr/tmp isn't". (Of course on Ubuntu /usr/tmp doesn't exist and on Gentoo /usr/tmp is a symlink to /var/tmp which now has the "not cleared between reboots" rule. Yes all this predated tmpfs. It has to do with read-only root filesystems: /usr is always going to be read only in that case and /var is where your writable space is; / is _mostly_ read only except for bits of /etc which they tried to move to /var, but really symlinking /etc to /var/etc happens more often than not...)

Standards bureaucracies like the Linux Foundation (which consumed the Free Standards Group in its ever-growing accretion disk years ago) happily document and add to this sort of complexity without ever trying to understand why it was there in the first place.
"Ken and Dennis leaked their OS into the equivalent of home because an RK05 disk pack on the PDP-11 was too small" goes whoosh over their heads.

I'm pretty sure the busybox install just puts binaries wherever other versions of those binaries have historically gone. There's no actual REASON for any of it anymore. Personally, I symlink /bin /sbin and /lib to their /usr equivalents on systems I put together. Embedded guys try to understand and simplify...

Rob

--
GPLv3: as worthy a successor as The Phantom Menace, as timely as Duke Nukem Forever, and as welcome as New Coke.
This is what happens when a system is designed by multiple people and companies over a long period of time. An amalgam of ideas which are there just because. There's no reason Linux should be like this. e.g., see https://gobolinux.org/ which has more sane dirs.
Linux does not use this split any more. Many of these dirs were merged back together. The "/usr merge" was adopted by Debian, Ubuntu, Fedora, Red Hat, Arch Linux, openSUSE and other major distros:
https://itsfoss.gitlab.io/post/understanding-the-linux--usr-...
`man file-hierarchy` defines the modern Linux filesystem layout.
https://www.man7.org/linux/man-pages/man7/file-hierarchy.7.h...
Question: why did they decide to make /usr/bin the "primary" and /bin the symlink? Methinks it should have been the other way around as was the original Unix design before the split.
Also the first URL is serving me scam popup ads that do a crap job at pretending to be android system alerts. Next time please try to choose a more reputable source.
Using /usr/bin instead of /bin comes down to it being much easier to mount one /usr than to do a bunch of bind mounts for /bin, /sbin, /lib, /lib32 and /lib64.
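The merged layout is just three symlinks pointing into /usr. A minimal sketch of that arrangement, built in a scratch directory so it runs anywhere (the paths under $root are stand-ins for the real /):

```shell
root=$(mktemp -d)   # stand-in for /
mkdir -p "$root/usr/bin" "$root/usr/sbin" "$root/usr/lib"
# On a usr-merged distro, /bin, /sbin and /lib are relative symlinks into /usr:
ln -s usr/bin  "$root/bin"
ln -s usr/sbin "$root/sbin"
ln -s usr/lib  "$root/lib"
# Anything installed under /usr/bin is then reachable via /bin as well:
printf '#!/bin/sh\necho ok\n' > "$root/usr/bin/hello"
chmod +x "$root/usr/bin/hello"
"$root/bin/hello"   # prints "ok"
```

So a single mount (or snapshot, or read-only image) of /usr carries all the executables and libraries, which is the point of the merge.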
Also some more background info https://systemd.io/THE_CASE_FOR_THE_USR_MERGE/
There is some logical grouping: everything under /usr is "executables + libraries + docs, mostly immutable".
Whereas /etc is for configuration and /var is for mutable data.
Oh, that's an awesome idea to get rid of those awful splits and focus on apps! The Scoop package manager on Windows works the same way. Though it has a few issues: some security apps ignore the "current" symlinks (and don't support regexes for versioned paths), and then versioned dirs bite you when versions change. I wonder whether this distro has similar issues, and whether it'd be better to have the current version be a regular dir and the versioned dir a symlink.
> Standards bureaucracies like the Linux Foundation (which consumed the Free Standards Group in its' ever-growing accretion disk years ago) happily document and add to this sort of complexity without ever trying to understand why it was there in the first place.
this is the reason in my opinion and experience
as a lead dev in a rather complicated environment I tended to solve the problem many times where some identifier was used. short deadlines and no specification made us solve the problem quickly, so some shortcuts and quick actions were done. this identifier gets asked about later and super overcomplicated explanations given as a reason by people that don't know the history.
...and the history is often like 'they mounted stuff to /usr because they got a third drive'. and now, people even in this thread keep giving explanations like it's something more.
> There's no reason Linux should be like this. e.g., see https://gobolinux.org/ which has more sane dirs.
And I thought we just got over the systemd drama…
gobo's a neat idea. I for one really like that its package management can have multiple packages without conflicts etc.
I think the only others I can think of like this are probably nix or spark, and nix really wants you to learn a new language so it has some friction, but nix is a neat idea too.
I think not many people know this, but how tinycore packages work is really fascinating as well. I think it's possible to get this just by downloading the .tcz and manually running it, since it actually loop-mounts the code from a squashfs image. I'm not familiar with the tech, but removing and adding applications can be just about as easy as deleting and adding files when one thinks about it.
Does anybody know some more reference pointers to a more smooth/easy way of not having to deal with dependency management etc.
I think that mise for programming languages is another good one. AppImages/zapps are nice too for what they're worth. Flatpak's a little too focused on GUI apps for my liking, though. It's great that we have Flatpak, but I don't think it's quite the right primitive for CLI applications.
Not really, back then disks were very expensive and you had no choice but to split. And disk sizes were very small.
But, in a way it kind of makes sense.
/bin and /sbin, needed for system boot. /usr/bin and /usr/sbin for normal runtime.
's' is for items regular users do not need to run. Remember, UN*X is a multi-user system, not a one-person system like Macs, Windows and, in most cases, Linux.
> /bin and /sbin, needed for system boot. /usr/bin and /usr/sbin for normal runtime.
Nowadays most Linux systems boot with an initramfs, that is, a compressed image that includes everything the system needs to boot, so you're basically saying /bin and /sbin are useless.
> initramfs, that is a compressed image that includes everything the system needs to boot
Not always (raise your hand if you've had an unbootable system due to a broken or insufficient initrd).
In retrospect, the whole concept of the initrd seems like an enormous kludge that was thrown together temporarily and became the permanent solution.
Yes of course it can break. The point is that the stuff needs to be in initramfs. "includes everything" has an implicit "when working".
What seems bad about it to you? Initrd means you only need /boot (or equivalent) to be working at boot time, which seems nice to me. And looking at mine, the image is smaller than the kernel, so it's not wasting a ton of space.
More than once I've run into weird issues with missing filesystem drivers and other important things that caused me major grief during an emergency.
Sure, it could be blamed on shitty distro maintenance and development, but a better architecture would be putting essential things like filesystem drivers in /boot, without this extra kludge of rebuilding an initrd (that you hopefully didn't forget to do before typing reboot) which depends on a pile of config files set just right (and, oh by the way, different in literally every distro).
A folder in /boot could still be missing drivers, though.
Rebuilding an image isn't a big factor there, it's a tradeoff between making setup a bit more annoying versus making it a bit easier to manage your boot files.
I rather like it for embedded systems because I can pop a simple installer into it and bundle that with the kernel.
> initrd seems like an enormous kludge that was thrown together temporarily and became the permanent solution.
Eh, kinda. That's where "essential" .ko modules get packed - those that the system would fail to boot without.
Alternative is to compile them into kernel as built-ins, but from distro maintainers' perspective, that means including way too many modules, most of which will remain unused.
If you're compiling your own kernel, that's a different story, often you can do without initrd just fine.
I really should write that "Yes, Virginia; executables once went in /etc." Frequently Given Answer.
Because it was /etc (and of course the root directory) where the files for system boot and system administration went in some of the Unices of yesteryear. In AT&T Unix System 5 Release 3, for example, /etc was the location of /etc/init, /etc/telinit, and /etc/login .
sbin is actually quite complex, historically, because there were a whole lot of other directories as well.
I think it's less about saving space and more that / and /usr can be two separate disks
This post gets some of the details wrong. /usr/local is for site-local software - e.g. things you compile yourself, i.e in the case of the BSDs the ports collection - things outside the base system. (They may be compiled for you).
Since Linux has no concept of a base system, it's a stand-alone kernel with a hodgepodge of crap around it - this distinction makes no sense on Linux.
/opt is generally for software distros for which you don't have source; only binaries. Like commercial software packages. More common on Real UNIX(R) because most Linux users outside enterprise aren't running commercial software. You're putting your $500k EDA software under /opt.
I normally wouldn’t be this pedantic, but given that this is a conversation about pedantry it only seems right: you’re using i.e. and e.g. backwards.
> Since Linux has no concept of a base system, it's a stand-alone kernel with a hodgepodge of crap around it - this distinction makes no sense on Linux.
The Linux base system is managed by the package manager, leaving local for the sysadmin to `make install` into
> The Linux base system
There is no such thing as a Linux base system.
Separate components, separate people.
Hence the term Ganoo plus Leenox...
Well, no, my exact argument is that there is a base system, even if it is composed of assorted components. If you install Debian (or whatever) on a machine, the software installed by the package manager ships as a unified release that has been adapted to work together. I think it's reasonable to call that the base OS. And then, separate from that base system that is managed by the package manager, the local admin may install things into /usr/local.
They're talking about Linux, the kernel. The kernel has no concept of a base system. There is initramfs and init.
Okay, that's true but other than the slight semantic point of "Linux" vs a "Linux distro" or "GNU/Linux" I don't think it matters. Whatever words you use to describe it, there is a base OS which is composed of a variety of components from different sources but which ultimately amounts to a single thing.
> there is a base OS
In most distributions yes, there is Linux and then there is userspace on top of it. What you call "base system" is actually part of userspace, which has nothing to do with Linux itself.
No, what I call the "base system" is the result of running debootstrap, and encompasses all the packages that make a complete operating system. The kernel is just one part of the OS.
If you can remove GNU coreutils and replace them with something else (like that Rust garbage) then you don't have a base system. You have a loose collection of packages around a kernel.
> Linux has no concept of a base system, it's a stand-alone kernel with a hodgepodge of crap around it
Good grief. How does this end up as the top comment on HN of all places? I'll bet anything that this author also thinks that systemd is way too opinionated and unified and that the system needs a less coupled set of init code.
Edit to be at least a tiny bit more productive: the Linux Filesystem Hierarchy Standard is about to pop the cork on its thirty second birthday. It's likely older than most of the people upvoting the post I responded to. https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard
To wit: that's outrageous nonsense, and anyone who knows anything about how a Linux distro is put together (which I thought would have included most of the readers here, but alas) would know that.
> /opt is generally for software distros for which you don't have source; only binaries. Like commercial software packages. More common on Real UNIX(R) because most Linux users outside enterprise aren't running commercial software
Steam says hi.
On Windows, a common Steam library exists in Program Files directory, therefore not user specific. On Linux, each user has a separate Steam installation and library. I'm not sure why there isn't a common Steam library on Linux, but /opt would be a good place for it.
By default, Program Files is not writable by non-Administrators. This is likely done by some background service. Or they loosened the default file permissions (which would be dumb).
No reason this can't be done on Linux but since NT's security model is more flexible it's a lot easier to do so on Windows. You'd need to add dedicated users. (Running a Steam daemon as root would probably cause an uproar.)
They loosen the permissions on the steam folder on windows. I would have expected just the library folder but apparently it's the whole thing.
Oof. The correct location for this is C:\ProgramData
Developers who knowingly reduce or disable default Windows security settings should be censured. Because in 99% of cases it is due to ignorance or plain laziness.
Well ProgramData didn't exist when they designed it, and the crime of putting their folder in the wrong place is a pretty minor one. They don't change the permissions of anything outside Steam.
It doesn't "reduce or disable default Windows security settings" in a meaningful way if you say to yourself "that folder effectively is in ProgramData, but spelled wrong".
CSIDL_COMMON_APPDATA is the API call to get this special folder which has been around since <checks notes> Windows 2000, 26 years ago.
You should never hardcode the path, since it can and has moved around, though MS has implemented hard links to legacy paths because most developers are stupid and, against persistent better advice, do it anyway. I've seen multi-million dollar software packages whose vendor requires it to be writable by "Everyone".
Steam was first released in 2003, three years later.
For 80% of grievances about Windows, there is likely a solution in place that no one knows about because they didn't read the documentation.
Steam's original system requirements in the 2002 beta included Windows 98. [1]
They didn't stop advertising Win98 support until sometime in early 2007.
Granted, Steam back then was a different creature than Steam now.
[1] https://web.archive.org/web/20020605222619/http://www.steamp...
So you're saying they've had 18+ years to remove legacy cruft put in there to support a nearly 28 year old legacy OS that had no real multi-user support and basically zero security?
Moving away from Program Files would cost far more than it's worth - it'd cause lots of issues for a massive number of users and be of very little value for others, when the only practical issue with the Steam folder being in Program Files right now is people going "oh, I didn't expect that directory to be writable, I guess", which is not something worth orchestrating a massive transition over.
It's literally in the name: PROGRAM files. It was never meant to store variable data.
It's also assumed that its contents can be safely restored from original sources, so Program Files is often not backed up - because it's wasteful and not needed.
Rogue developers thinking they know better than the people who actually designed the system and ignoring the rules put in place is the source of an untold number of problems in the software world. It's absolutely stupid and I have no empathy for the problems caused as a result of their laziness. This attitude is why modern Linux is a complete clusterfuck, a free-for-all with components duct taped together every which way. Do it right or don't do it at all.
How are the games not programs?
The save files don't go in the steam folder, they go into per-user Documents or AppData.
There is something oddly satisfying that in 2026, there are people out there complaining that Steam installs programs into Program Files.
And Steam was originally released to be compatible with Windows 98. Windows 2000 wasn't widely used as a consumer-installed OS.
> windows 2000 wasn't widely used as a consumer installed OS
But Windows XP, which came out in 2001, inherited everything from Windows 2000 and more, and was used extensively for gaming.
Absolutely, and the first iterations of the Steam hardware survey showed mostly XP users, but still a 5-7% Win 98 install base, which they maintained compatibility with for quite a while. That's just to say that I can see why they might not have used those specific Windows APIs at the start.
Back when it was actually AppData in the user documents folder, that doesn't seem like the right place to install many gigabytes of games.
And it's the same permissions either way. This isn't about permissions, it's about where they put the folder.
Really? Programs installed by non-administrators should go in ProgramData?
The actual solution, which remains both compatible and consistent with the security model, is that you should have to be administrator and pass UAC to install a game, just like you do to install anything else.
I seem to recall Solaris put packages in /opt. Each package got its own prefix under /opt.
Now I get what folks using FreeBSD typically point to as a reason they prefer FreeBSD over Linux: there is a clear distinction between the base system and userland.
Linux has more of a clear distinction between kernel and userspace. But the base system in *BSD includes a lot of userspace, so the API boundary is more the libc and some core libraries (TLS) instead of the kernel ABI.
FreeBSD is moving to a scheme where the base system is managed with pkg. In the release notes for last month's 15.0 release, they suggest that this will be mandatory in 16.0.
The ports tree will still be very different from base, but I feel this may erode some of the difference between FreeBSD and a typical Linux distro in terms of user experience, with respect to base vs ports. You'll update both with pkg.
The problems with what you say are that:
1. The history of /usr subdirectories is a lot more complex than that. There was a /usr/lbin once, for example.
2. /usr/local is not where third party software from packages/ports goes on "the BSDs". On NetBSD, it goes in /usr/pkg instead, again exemplifying that this is quite complex through history and across operating systems.
While practically useless in reality, /usr/local is `site-local software`, e.g. software that, if you NFS-mounted /usr, would be local to the `site`, not the machine.
The BSD ports explanation is a bit revisionist I hate to say, this all predates ports.
It was a location in a second stage mount you knew the upstream wouldn’t overwrite with tar or cpio. Later ports used it to avoid the same conflict.
So, in Debian, where should I be placing a Firefox tarball I download from Mozilla’s site?
It is open-source, and I can get source files, but it’s precompiled…
Anywhere in your `$PATH` that isn't managed by `apt`/`dpkg`. E.g. add `~/bin` to your `$PATH`, and install it in there. No risk of overwriting files the system package manager manages & having manually-installed software break next time it updates them.
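A minimal sketch of that pattern: unpack the vendor tarball under a user-owned prefix and symlink the binary into a per-user bin dir. (The tarball here is a stand-in built inside the script so the example is self-contained; with Mozilla's real tarball you'd just skip that part, and the paths are placeholders for ~/opt and ~/bin.)

```shell
workdir=$(mktemp -d)                 # stand-in for $HOME
cd "$workdir"
# Stand-in for the downloaded vendor tarball:
mkdir -p firefox
printf '#!/bin/sh\necho firefox\n' > firefox/firefox
chmod +x firefox/firefox
tar -czf firefox.tar.gz firefox

# The actual pattern: unpack outside dpkg's territory, symlink into a bin dir
mkdir -p "$workdir/opt" "$workdir/bin"
tar -xf firefox.tar.gz -C "$workdir/opt"
ln -s "$workdir/opt/firefox/firefox" "$workdir/bin/firefox"
"$workdir/bin/firefox"               # prints "firefox"
```

Then ensure the bin dir is on $PATH, e.g. `export PATH="$HOME/bin:$PATH"` in ~/.profile. Updates are just re-extracting over the old directory; apt never touches it.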
I understand /usr/local to be for anything not managed by your distribution but following the standard system layout (e.g. Python that you compiled yourself) while /opt is used for things that are (relatively) self-contained and don't integrate with the system, similar to Program Files on Windows (e.g. a lot of Java software).
Regarding "that's a Linux-ism" - well yeah? Linux is the main OS this is about. FreeBSD can do what it wants, too.
> anything not managed by your distribution
That's a Linux-ism. Other *nix there is a lot more in /usr/local.
In reality /usr is similar to Windows' System32 directory on most Unices.
/opt is really the only good place for Java and where I've been putting it for decades (old habits die hard).
> /usr/local is for site-local software - e.g. things you compile yourself
See, you assume here that /usr/local/ makes any sense.
I use a versioned appdir prefix approach similar to GoboLinux. So for me, /usr/local never made any sense at all. Why should I adhere to it? I have ruby under e.g. /Programs/Ruby/4.0.0/. It would not matter in the slightest WHO compiled it, but IF I needed to store that information, I would put it under that directory too, perhaps in a file such as environment.md, and perhaps additionally into a global database if it were important to distinguish (but it is not). The problem here is that you do not challenge the notion of whether /usr/local/ makes any sense to begin with.
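A sketch of that versioned-appdir layout, GoboLinux style, built in a scratch directory ($prefix stands in for /Programs, and the Ruby paths and "Current" link name are illustrative, not mandated by any standard):

```shell
prefix=$(mktemp -d)                      # stand-in for /Programs
mkdir -p "$prefix/Ruby/4.0.0/bin"
printf '#!/bin/sh\necho ruby 4.0.0\n' > "$prefix/Ruby/4.0.0/bin/ruby"
chmod +x "$prefix/Ruby/4.0.0/bin/ruby"
# Version switching is just repointing one symlink:
ln -sfn 4.0.0 "$prefix/Ruby/Current"
"$prefix/Ruby/Current/bin/ruby"          # prints "ruby 4.0.0"
```

Installing a new version means adding a sibling directory like Ruby/4.1.0/ and repointing Current; uninstalling is deleting the directory, which is the appeal of the scheme.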
> /opt is generally for software distros for which you don't have source; only binaries.
Makes no sense. It seems to be about as logical as the FHS "standard". Why would I need to use /opt/? If I install libreoffice or google chrome under /opt, I could just as well install it under e.g. /Programs/ or whatever hierarchy I use for versioned appdirs. Which I actually do. So why would I need /opt/ again?
> See, you assume here that /usr/local/ makes any sense.
You’re presenting your comment as a rebuttal but you’re actually arguing something completely different to the OP.
They’re talking about UNIX convention from a historic perspective. Whereas you’re talking about your own opinions about what would make sense if we were to design the file system hierarchy today.
I don’t disagree with your general points, but it also doesn’t mean that the OP is incorrect either.
> This post gets some of the details wrong
"some" is an understatement.
You've entirely missed the point of the article.
Here [1] is a related trick in old Unix to run either `foo`, `/bin/foo` or `/usr/bin/foo` (apparently from before the `PATH` convention existed):
    char string[10000];
    char *strp, *p;
    int i;

    strp = string;
    for (i = 0; i < 9; i++)
        *strp++ = "/usr/bin/"[i];   /* copy the "/usr/bin/" prefix */
    p = *argv++;
    while (*strp++ = *p++)          /* append the command name and its NUL */
        ;
    /* string now holds "/usr/bin/foo"; args is the argument vector
       from the surrounding source */
    execv(string + 9, args);        /* "foo" -- execv only returns on
                                       error, i.e. when foo does not exist */
    execv(string + 4, args);        /* "/bin/foo" */
    execv(string, args);            /* "/usr/bin/foo" */
[1] https://github.com/dspinellis/unix-history-repo/blob/Researc...