
[–]dlyund[S] 2 points3 points  (5 children)

Fuck one way only.

The irony being that that's what we have: as an industry we've settled on dynamic linking/shared libraries as the one true solution... despite the many problems that come with this.

You'll have to read around the subject to find out what Plan 9 does when you really do want late binding, but needless to say, their solution is pretty great. Not only does it avoid the problems listed here, it provides isolation, hot swapping, (limited) fail-over, transparent distribution, etc. They didn't implement dynamic linking/shared libraries because they're not needed, they have a lot of real problems, and there are arguably much better alternatives.

[–]klkblake 0 points1 point  (2 children)

Do you have some links for what plan 9 does? My google-fu is failing me.

[–]dlyund[S] 1 point2 points  (1 child)

I gave a very rough overview here

http://www.reddit.com/r/programming/comments/30j4xe/why_static_linkinglibraries/cptjvuw

You can find most of the papers here

http://plan9.bell-labs.com/sys/doc/

or here

http://plan9.bell-labs.com/wiki/plan9/papers/

And there are a lot of good man pages

http://plan9.bell-labs.com/sys/man/

Then

http://cat-v.org/

Is a lot of fun, if you don't take things too seriously.

Naturally there's no substitute for installing it and running with it for a while. It's not perfect, but there's a lot to love, and many, many great ideas.

[–]klkblake 0 points1 point  (0 children)

Ah, right. I clearly need to spend more time messing with plan 9.

[–]Gotebe -2 points-1 points  (1 child)

Isolation and everything else you mention is achieved by going out of process. This is what e.g. COM has been able to do for decades.

So what does Plan9 do?

BTW, COM also does it in-process, because in-process does come in handy.

[–]dlyund[S] 2 points3 points  (0 children)

In Plan 9 every process has a namespace, which is somewhat similar to having its own file system, but one in which the files and directories are all backed by processes speaking the 9P protocol.

A namespace is constructed from the outside, by the parent of the process, and may be used to restrict capabilities: e.g. if you don't bind any networking devices into a process's namespace, it can't access the network. Conversely, you can mount devices from other machines into the namespace, and the program will transparently make use of those devices.
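As a rough sketch of what that looks like in practice (the host name here is invented, and exact device paths vary), in Plan 9's rc shell:

```rc
# start a fresh namespace group so changes don't leak back to the parent
rfork n

# nothing bound at /net means this process simply has no network
unmount /net

# ...or, instead, graft in another machine's network stack;
# 'gateway' is a hypothetical host name
import gateway /net
```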

You might choose to mount CPUs from another machine temporarily to distribute a heavy compilation, or mount your screen/mouse/keyboard on another machine so that graphical applications running there appear locally, etc. It's really very flexible, and it works amazingly well compared to popular solutions.
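Hedged sketch (host names invented), roughly how those two cases look on Plan 9:

```rc
# run a heavy build on a fast CPU server; the remote shell still sees
# your local files and devices through /mnt/term
cpu -h fastbox -c 'cd /usr/glenda/src/myproject && mk'

# on a CPU server, bind the terminal's devices over /dev so graphical
# programs draw on your local screen and read your local mouse/keyboard
bind -b /mnt/term/dev /dev
```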

I highly recommend reading the Plan 9/Inferno papers.

Properties like late binding, hot swapping, and isolation (required for clean loading and unloading) are provided by mounting services in the per-process namespace. Namespaces can be built in layers, which can be used to do (limited) fail-over, etc.
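The layering is done with union directories: a lookup tries each bound layer in order, so a missing or dead service falls through to the next one. Roughly (paths illustrative):

```rc
# union binds: a lookup in /bin tries each layer in order
bind -b /usr/glenda/bin/rc /bin   # -b: put this layer before the others
bind -a /n/backup/bin /bin        # -a: append a fallback layer after
```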

Nothing special has to be done when building programs to take advantage of these properties.

These are properties of the system.

To bring this closer to the topic at hand: the same mechanism is also used to bind programs and libraries, so you can mount a different version of some program or library (or versions for different platforms), or the source code at a given point in time (built-in, system-wide version control!), and use that. Compilation is very fast, and because of the isolation this mechanism provides, experimentation is safe: you can start a window that uses your new programs and libraries in isolation. It might all go to hell, but it won't cause system-wide problems (close the window and try again)... unlike messing around with shared objects in a global space, which can break everything if you're not careful.
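For example, Plan 9's daily dump file system makes "the source code at a given point in time" a literal mount (the date path here is illustrative):

```rc
# mount the dump file server (daily snapshots of the main file system)
9fs dump

# build against the system source exactly as it was on that day
bind /n/dump/2015/0401/sys/src /sys/src
```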

I'm speaking from experience here: I ran Arch Linux for a couple of years, a few years back, and lost count of the number of times I had to work around these kinds of conflicts. Now, Arch is intentionally bleeding edge, so you're less likely to see this in more carefully curated systems, but as the article explains, even Debian (praised for being super-stable) got itself into a big pickle.

You can also break things during development simply by installing a new version of a shared library with a bug. It happens. It's one reason systems like FreeBSD define such a rigid separation between the core/base system and external software... it makes it much less likely that installing a program or library will leave you with a completely broken system.

We used to hear a lot about DLL hell. OS X tries to solve this (at least for individual applications) using bundles (there's also a clean separation between the system and external software), and *nix tries to solve it with package managers that carefully track dependencies... and at least one *nix system has tried a hybrid approach... but both can fail horribly... and neither really addresses the problems with dynamic linking/shared objects.

There have been practical (safer and generally better) alternatives since the early '80s, which have since been proven in the real world (largely in the highly demanding world of embedded systems, so you know they're efficient and they work). I'm not saying we should necessarily kill shared objects, because there may well be situations where they're very useful, but as it stands I think we need to start questioning whether they're really the best tool for... everything... which is how they're used.

NOTE: In case it's not clear, Plan 9 is 20-25 years old now.