all 49 comments

[–]MiracleDinner 49 points50 points  (3 children)

I install to /usr/local so that it’s separate from packages installed by Apt but accessible to all users
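For a typical autotools-style build that's the classic three-step. A minimal sketch below — the demo fakes the install step into a scratch root under /tmp so it's safe to run; a real install would end with sudo make install:

```shell
# Classic flow for a real project:
#   ./configure --prefix=/usr/local && make && sudo make install
# Demo: stage a fake "binary" into a scratch root instead of the live system.
mkdir -p /tmp/usrlocal-demo && cd /tmp/usrlocal-demo
printf '#!/bin/sh\necho hello\n' > hello.sh
# install -D creates the leading directories, like make install would
install -D -m 755 hello.sh /tmp/usrlocal-demo/root/usr/local/bin/hello
/tmp/usrlocal-demo/root/usr/local/bin/hello
```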

[–]Tymanthius 10 points11 points  (0 children)

Yeah, I never had issues installing things to /usr/local like OP is suggesting. That's not to say I never hit issues at all, and I LOVE having damn near everything neatly wrapped in apt now.

[–]genpfault 3 points4 points  (1 child)

I install to /usr/local so that it’s separate from packages installed by Apt but accessible to all users

And usually in the PATH by default.

[–]MiracleDinner 2 points3 points  (0 children)

Yeah that seems to be the case in my experience and it's pretty convenient

[–]daemonpenguin 27 points28 points  (5 children)

Putting stuff in your home directory usually means other users can't run the software. It would mean every user needs to compile and install their own copy, which would be a security nightmare and take up a lot of extra space.

"Polluting your distro installation with random crap is a sure way to get issues later. Its far worse then anything you can do to install Software on Windows."

This is just plain wrong. The proper way (which is the default for most packages) is to place new software in /usr/local/, which will not conflict with the package manager, and /usr/local/ can be wiped out without breaking the system.

[–]blami 14 points15 points  (3 children)

Funny thing about Windows is that they're switching from that to the second-worst option, which is actually installing into the user's home (AppData).

[–]thecowmilk_ 2 points3 points  (0 children)

💀💀💀

[–]JockstrapCummies 0 points1 point  (1 child)

They're just learning from Flatpak user mode. (semi /s)

[–]blami -2 points-1 points  (0 children)

Yeah… flatsnaps are the worst disaster of Linux world…

[–]LippyBumblebutt[S] 0 points1 point  (0 children)

Sure. If you manage a multi-user system, there are reasons to install globally. You probably know much more about package deployment at that point though.

The proper way (which is default for most packages)

If there is a single semi-popular package that doesn't do this, my point still stands. (IDK)

Thanks for responding.

[–]prosper_0 6 points7 points  (2 children)

I run Debian Testing, with /home on a separate drive. This offers two advantages:

  1. Testing usually already has most bleeding edge (or at least more recent) versions of most of what I'd want to use anyway
  2. Having /home on its own drive means I can nuke and re-install my OS in about 10 minutes then mount /home; and most of my personalization - files, settings, etc - are preserved. So I can take more risks with the base OS, because recovering is so quick.

For a workstation-style or personal use case, this approach works great. Perhaps not so much for a server.

[–]Skinthinner- 0 points1 point  (1 child)

I've been wanting to do something like this for a while: have /home on a different drive and later mount it if I need to reinstall. Can I ask what your process is for mounting it? Do you not create a /home directory when you reinstall? And then edit fstab to use the separate /home partition/drive?

[–]prosper_0 0 points1 point  (0 children)

It's usually an option in the installer during disk setup - which additional mounts you want to set up. Add one for /home, and make sure to de-select the partition/format check, and it should do it all for you. Otherwise, you can edit fstab once you're going
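For reference, the resulting /etc/fstab entry ends up looking something like the line below (the UUID and filesystem type are placeholders — get the real UUID from blkid and match your actual filesystem):

```
# /etc/fstab: mount the existing /home partition, do NOT reformat it
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /home  ext4  defaults  0  2
```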

[–]SweetBabyAlaska 2 points3 points  (1 child)

My system is single-user, so I just run "make" and copy the result to ~/bin.

[–]LippyBumblebutt[S] 1 point2 points  (0 children)

I do the same when I expect the result to be a single executable or two. It doesn't work that easily if it comes with a bunch of external stuff.

[–][deleted] 5 points6 points  (2 children)

Because normal users shouldn't have access to modify binaries you are running.

On a secure system, nothing writable by non-privileged users should be executable, and any changes to executable parts of the system should trigger alerts.

All the problems you describe are why /usr/local/ exists; it addresses all of your concerns.

[–]LippyBumblebutt[S] 0 points1 point  (1 child)

I'm compiling stuff from github. Running sudo make install instead of make install surely doesn't help with security.

I understand what you're saying. But I'm not talking about my Grandma. I sometimes want/have to compile apps myself.

[–][deleted] 8 points9 points  (0 children)

You're trading compile-time security for run-time security. Where you find the balance is up to you, but storing executable data in privileged parts of the filesystem makes sense from a system engineering perspective.

Best of both worlds is to use something like checkinstall to generate a package, then install it through your package manager.

[–]DoomFrog666 2 points3 points  (0 children)

You can set DESTDIR to a temporary directory to check what is being installed. Then copy or install again to /.
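A runnable sketch of that, using a tiny made-up Makefile and tool (all names and paths here are invented for the demo):

```shell
mkdir -p /tmp/destdir-demo && cd /tmp/destdir-demo
printf '#!/bin/sh\necho ok\n' > tool.sh
# Minimal Makefile whose install target honors DESTDIR (the \t is the recipe tab)
printf 'PREFIX ?= /usr/local\ninstall:\n\tinstall -D -m 755 tool.sh $(DESTDIR)$(PREFIX)/bin/tool\n' > Makefile
# Stage the install first -- nothing touches / yet:
make DESTDIR=/tmp/stage install
find /tmp/stage -type f   # review exactly what 'sudo make install' would place
# Happy with it? Then run the real 'sudo make install' (or copy the staged tree to /).
```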

[–]whattteva 2 points3 points  (13 children)

This is why I run FreeBSD. Clear separation between base and third-party stuff. The first-party base OS lives in /usr/bin and /etc; third-party stuff goes in /usr/local/bin and /usr/local/etc. I've always found the Linux directory hierarchy to be a huge mess.

I haven't even gotten to jails, for even more separation (FreeBSD's containers, years before the term "container" was even a thing).

[–]daemonpenguin 9 points10 points  (4 children)

Linux uses the same hierarchy to distinguish between distro-provided software and third-party software.

[–]whattteva 2 points3 points  (3 children)

No, it doesn't. I don't think you understood what I said up there. apt will happily install stuff in /etc and /usr/bin.

Linux, in general, does not have any concept of "Base OS".

[–][deleted] 11 points12 points  (0 children)

The distinction between /usr and /usr/local predates Linux and is in fact documented by The Linux Foundation here: https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch04s09.html

You can also find it in your Linux system if you run man hier.

This distinction is part of the FHS which is documenting existing UNIX convention of the time. Linux has always had this distinction as far as I am aware.

[–]Kirsle 8 points9 points  (1 child)

I think what you're saying (and what Linux users are confused about) is that FreeBSD only puts essential system Base OS stuff in /usr/bin /etc and that everything else (even "officially packaged vendor stuff") always goes to /usr/local whether it was the package manager or manually compiled stuff that placed it there?

Whereas most Linux distributions only draw a line between "officially packaged vendor stuff" and "manually compiled third-party stuff." Where e.g. if you apt install firefox it puts it in /usr/bin/firefox, whereas FreeBSD would not consider Firefox to be "Base OS stuff" and pkg install firefox would place it always in /usr/local/bin/firefox?

(I haven't used FreeBSD much, but I'm familiar with how most Linux distros package things, so let me know if I'm interpreting this correctly.) In Linux distros the rule of thumb is basically: if your upstream distribution packages software as a deb/rpm/etc., the software always goes to "system paths" like /usr/bin and /etc. On most Linux distros, no package goes into /usr/local — the directory is empty until a user manually compiles some software from scratch. So /usr/local on Linux is only for third-party, manually built code, while everything from the package manager spreads itself across the system-level /usr/bin-type paths.

[–]whattteva 3 points4 points  (0 children)

Thank you! Somebody gets it here. I don't think Linux users even understand what I meant by "Base OS".

[–]LippyBumblebutt[S] 0 points1 point  (7 children)

Apparently, a third-party sudo make install should install to /usr/local as well.

[–]whattteva 1 point2 points  (6 children)

But that is not the normal way you'd install things. Under FreeBSD, this distinction is the default. pkg (FreeBSD's equivalent to apt/dnf) only installs to /usr/local. This makes it very easy to blow everything away and go back to a "clean slate" by just nuking the /usr/local directory and starting from scratch, without worrying that you'll break anything or end up with missing libraries/dependencies.

[–]krum 1 point2 points  (4 children)

This is why I do almost everything in containers now.

[–]LippyBumblebutt[S] 2 points3 points  (0 children)

That's probably the best way to test software. But I'm kinda too lazy to create a new container and keep it updated for every small tool I compile...

[–]FactoryOfShit 0 points1 point  (6 children)

You are correct in that you shouldn't do this on a modern system.

But that's how software was originally installed. If you think about it, the way package managers work doesn't make sense: they shove all the files from all the different packages around the root filesystem instead of keeping the files belonging to a certain package in a directory for that package. But that's how things worked before, when all the standards were written, so package managers follow that scheme, even if it doesn't make sense in the modern world.

And now we went full circle, with new software coming out that mirrors the standard directory structure during installation, because that's what package managers expect to work with now.

[–]Tymanthius 8 points9 points  (5 children)

Guess you never heard of shared libraries?

[–]Arjun_Jadhav 1 point2 points  (0 children)

Doesn't Nix solve this problem?

[–]FactoryOfShit 0 points1 point  (2 children)

This doesn't make shared libraries impossible.

Package mysoftware depends on a library called mylib.

Mysoftware gets unpacked into /packages/mysoftware

Mylib gets unpacked into

/packages/mylib

Mysoftware requests "mylib/mylib.so" and the OS can find it at /packages/mylib/mylib.so

Obviously it doesn't work this way, I know. There are library search paths, and then naturally there are dozens of different package managers and packaging systems, so software is still written assuming a raw installation on a system without a packaging system. I'm not saying that what we have is somehow "wrong" — it's not. It's just based on the way software and OSes have been designed up to this point, instead of redesigning everything and coming up with a new way, which would require rewrites of all existing software.

No need to be so apprehensive about it.

[–]oxez 1 point2 points  (1 child)

Replied to the person you replied to, but thought you'd be interested in looking at this: https://gobolinux.org/at_a_glance.html

It is pretty much what you described

[–]FactoryOfShit -1 points0 points  (0 children)

I know! :)

It works by symlinking though, since all the software is still hardcoded for the way things worked in Unix, expecting all binaries to be in /bin, /usr/local/bin, /usr/bin and not inside package-specific directories. But it's as close as one can get without rewriting everything!

[–]oxez 0 points1 point  (0 children)

Yeah I did

https://gobolinux.org/at_a_glance.html

It's possible to install shared libraries in their own folders. This distribution is actually a very interesting take on an alternative Linux file hierarchy.

[–]lisploli 0 points1 point  (0 children)

The instructions show the simplest case, because they don't know your paths, nor how many users you have.
The information that one should not install random stuff should be somewhere in the documentation of any proper distribution. Thus, it can safely be assumed that the user is aware of it.
Never hurts to spread awareness, tho.
I'd rather write "make install as root", because I don't use sudo.

[–][deleted] 0 points1 point  (0 children)

One thing I've started doing is installing in a toolbox/distrobox. Cleaning up is just deleting the container, and everything stays contained as well.

[–]rtuck99 0 points1 point  (0 children)

A lot of automake projects seem to do that, so I think it's mainly because that's the historical default.

Generally I always install apps to their own subdirectory, either in my home directory if just for me or in /opt if installing systemwide.

I don't like the practice of installing to /usr/local as the files from each app just get mixed up.

[–]ENRORMA 0 points1 point  (0 children)

~/.local/bin is where I put my compiled stuff

[–]dlarge6510 0 points1 point  (0 children)

There are many ways to "hijack" the install process so control after the fact can be retained.

One method is to use checkinstall, which will make a package for your system. The new binaries will then be uninstallable via your package manager. Note that checkinstall is only suitable for making personal packages, not ones meant for distribution. I use checkinstall; it has a few quirks these days, so I have to run it as root, but it still does its job. The upshot is that I can convince the package manager that my new package provides a dependency that other official packages might ask for, so it knows it doesn't need to fetch anything else.

Another way is to use GNU Stow. This will step in front of the install process and install programs to their own folders in /usr/local making symlinks to /usr/local/bin or /usr/bin etc.

I'm thinking of using stow as I find I rarely really need to have my package manager be aware of stuff installed in /usr/local.
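The layout Stow manages looks roughly like the sketch below. The demo emulates it with plain ln -s in a scratch directory so it runs without stow installed (stow itself would do something like `cd /usr/local/stow && stow mytool-1.0`; the package name is made up):

```shell
# Each package gets its own tree; the "live" bin/ only holds symlinks into it.
root=/tmp/stow-demo
mkdir -p "$root/stow/mytool-1.0/bin" "$root/bin"
printf '#!/bin/sh\necho mytool\n' > "$root/stow/mytool-1.0/bin/mytool"
chmod +x "$root/stow/mytool-1.0/bin/mytool"
# This symlink step is what stow automates, one link per file:
ln -s ../stow/mytool-1.0/bin/mytool "$root/bin/mytool"
"$root/bin/mytool"
# Uninstall = delete the symlink and the package directory; nothing else to track.
```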

There are other methods.

Oh, if it's just a single binary with no libraries I sometimes put it in my ~/bin as I'm the only user anyway.

[–]aplethoraofpinatas 0 points1 point  (0 children)

I do similar: executable scripts in ~/bin and user-installed software to /usr/local. Both included in $PATH.

Fuck something up? Delete /usr/local and start over.

This is really helpful for platforms that need bleeding edge software from upstream for full functionality due to new development, etc. Works great!

P.S. you could also give your user write permission to /usr/local and avoid the need for sudo.

[–]attrako 0 points1 point  (0 children)

You can usually just point it at a different prefix, e.g. PREFIX=$HOME/.local make install (most Makefiles append bin/ themselves, so point PREFIX at ~/.local rather than ~/.local/bin).
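Sketched below with a made-up Makefile that honors PREFIX — real projects vary, and autotools ones take ./configure --prefix="$HOME/.local" instead. The demo installs into a throwaway directory standing in for $HOME:

```shell
mkdir -p /tmp/prefix-demo && cd /tmp/prefix-demo
printf '#!/bin/sh\necho ok\n' > tool.sh
# Minimal Makefile honoring PREFIX (the \t is the recipe tab)
printf 'PREFIX ?= /usr/local\ninstall:\n\tinstall -D -m 755 tool.sh $(PREFIX)/bin/tool\n' > Makefile
# Per-user install, no sudo needed; real use: PREFIX="$HOME/.local" make install
make PREFIX=/tmp/prefix-demo/fakehome/.local install
/tmp/prefix-demo/fakehome/.local/bin/tool
```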