You assume there’s a philosophy or coherent reasoning behind it, rather than “This is the way we did it with static libraries, so when we adopted shared/dynamic libraries we didn’t change anything else.” Because, as near as I can tell, that’s exactly what happened when BSD and Linux implemented Sun-style .so support in the early 1990s, and there hasn’t been any attempt to rethink anything since then.
Probably because the dynamic linker serves the typical OS layout, where there's only one copy of each dynamic lib and everything is linked against it, and packages installed by package managers are authoritative for the things they ship. Distro maintainers want this, and lots of system admins expect packages to behave like this.
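You can see that model on any stock distro install; roughly this (paths and output illustrative):

    $ ldd /usr/bin/curl
        libcurl.so.4   => /usr/lib/x86_64-linux-gnu/libcurl.so.4
        libssl.so.3    => /usr/lib/x86_64-linux-gnu/libssl.so.3
        libcrypto.so.3 => /usr/lib/x86_64-linux-gnu/libcrypto.so.3

Every dynamically linked binary on the box resolves to that single copy of libssl/libcrypto, so one package update patches them all.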
There's an alternative universe somewhere in which containerization took a different path and Unix distros supported installing blobbier things into /opt, but without (or optionally) the hard container around it. Then fat apps could ship their own deps.
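Roughly the layout I mean, sketched with made-up paths ($ORIGIN expands to the directory containing the binary):

    /opt/myapp/bin/myapp
    /opt/myapp/lib/libssl.so.3        # the app's private copy, not the distro's

    # link the binary so the loader searches the app's own lib dir before the system ones
    gcc -o myapp main.c -L/opt/myapp/lib -lssl -Wl,-rpath,'$ORIGIN/../lib'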
The problem is that there's a lot of pushback from people who want e.g. only one openssl package on the system to manage, and it legitimately opens up a security-tracking issue: the fat apps have their own security vulns, and updates need to get pushed through those channels. It was more important to us, though, to be able to push a modern Ruby out to e.g. CentOS 5, so that work was more than an acceptable tradeoff.
Containerization of course has exactly the same problem, and static compilation probably just hides the problem unless security scanners these days can identify statically compiled vulnerable versions of libraries.
I need to look at NixOS and see if it supports stuff like multiple different versions of interpreted languages like ruby/python/etc linking against multiple different installed versions of e.g. openssl 1.x/3.x properly. That would be even better than just fat apps shoved into /opt, but it requires a complete rethink of package management to properly support N different versions of packages installed into properly versioned paths (where `alternatives` is a hugely insufficient hack).
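As I understand it (haven't dug in yet), Nix gets there by keying every package on a hash so arbitrarily many versions coexist in the store, and each binary's rpath is baked to the exact store paths it was built against; something like this (hashes and versions made up):

    /nix/store/abc123...-openssl-1.1.1w/lib/libssl.so.1.1
    /nix/store/def456...-openssl-3.0.13/lib/libssl.so.3

    $ ldd $(which ruby)
        libssl.so.3 => /nix/store/def456...-openssl-3.0.13/lib/libssl.so.3

So two rubies built against different openssls could sit side by side without any `alternatives`-style symlink juggling.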
> and static compilation probably just hides the problem unless security scanners these days can identify statically compiled vulnerable versions of libraries
Some scanners, like trivy [1], can scan statically compiled binaries, provided they include dependency version information (I think Go does this on its own; for Rust there's [2]; not sure about other languages).
It also looks into your containers.
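For Go at least, you can dump the embedded dependency info yourself, and trivy reads the same data when pointed at a directory or an image (binary and image names are just placeholders):

    # Go embeds its module list and versions into every binary it builds
    go version -m ./myapp

    # trivy can scan a filesystem path or a container image and match those versions against CVEs
    trivy fs ./bin/
    trivy image myregistry/myapp:latest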
The problem is what to do when it finds a vulnerability. In a fat app with dynamic linking you could swap out the offending library, check that this doesn't break anything for your use case, and be on your way. But with static linking you need to compile a new version, or get whoever can build it to compile a new version, which seems to be a major drawback of discouraging fat apps.
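That is, for a dynamically linked fat app the fix can be as mechanical as this (hypothetical app, assuming the patched build keeps the same soname):

    # check which copy the app actually loads
    $ ldd /opt/myapp/bin/myapp | grep libcrypto
        libcrypto.so.3 => /opt/myapp/lib/libcrypto.so.3

    # drop in the patched build and re-run your tests
    $ cp libcrypto.so.3 /opt/myapp/lib/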
Indeed, containers and static linking are just hiding the problem.
> each software is installed in it's own folder, and the search path for dynamic linking starts in the binary's folder.
I think the benefit of this is that an app can be fat but doesn't have to be. And an app can be made fat afterwards if need be. The app folder is just the starting point for searching; if the library isn't there, the search falls back to the shared system copies.
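On today's Linux you can approximate that per app, e.g. with patchelf (illustrative paths): copy the libs next to the binary and point its rpath at them.

    # ship the dependency alongside the binary
    cp /usr/lib/x86_64-linux-gnu/libfoo.so.1 /opt/myapp/lib/

    # rewrite the rpath so the loader searches the app's own folder first
    patchelf --set-rpath '$ORIGIN/../lib' /opt/myapp/bin/myapp

But that's per-binary tweaking rather than loader policy.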
Wrappers and a build system that tweaks everything feel like a hack, not a system-wide solution.
There is also GoboLinux. I wonder if they solved this.
It is done on Linux for most large, popular software; Blender, Firefox, and VSCode are all distributed this way. The reason it's not done more is probably some combination of culture and tooling.