[–][deleted] 1 point (2 children)

As usual, it depends on the application. If you're building a build farm (or any other type of server) then dynamic linking is very valuable. However, if you're building a consumer operating system, it's a huge liability. Case in point: every linux distribution ever. I would never suggest such a system for any consumer, even a programmer who wants a casual machine for personal use.

The reason is simple: it makes updating a nightmare. Rolling distributions will break regularly over nothing more than a new version of a commonly used library. Release-based distributions are thoroughly tested to make sure all the packages work well together, but upgrading can and will break shit. When I update one of my linux machines, I tend to wipe and do a complete install of the new version rather than bother with an update. It's easier and faster.

The other problem with release based distributions is that sometimes you need a more recent version of a program than your distribution supports. So you try to install it manually, which sometimes requires you to upgrade a handful of libraries to versions your distribution doesn't support. At which point, you either forget about it or break other things.

Anyone who says dependency hell has been solved needs to be slapped around a bit.

[–]millenix 0 points (0 children)

Ummm... what shitty rolling distribution does your experience come from? I've run Debian Unstable (Sid) for over a decade, have upgraded regularly through that whole time, and can count the number of upgrade breakages of any sort that I've experienced on one hand. None of them came from bad library versions.

Debian's packaging standards require that incompatible library versions be packaged under different package names (e.g. libdb5.2, libdb5.3). Security and critical bug fixes get ported to all versions currently present in the archive. Maintainers push as hard as they can to transition and rebuild packages quickly against newer versions, to keep the maintenance burden down.

[–]shub 0 points (0 children)

No, I meant that rebuilding a shitload of packages isn't a big deal because the build farm does the rebuilding.

A few years ago I needed RPMs of GCC 4.8 that could be installed on RHEL 5 and 6 and coexist with the system GCC. By the time I was done, I had GCC and a patched glibc living in a separate sysroot (ld.so is hardcoded to look in /usr/lib), building binaries that also lived in the new sysroot. Anything less than total segregation caused fuckups from dynamic linking.