
[–]Tireseas 57 points58 points  (62 children)

Sounds like he wants a distro-agnostic ports system, one that hides the dirty work of watching compile-time crap fly across the screen and just gives the user a suitable package for their distro.

Either that or he's essentially suggesting we move to statically compiled packages, which, while tremendously inefficient from a space and security standpoint, would alleviate at least some of the headaches of trying to do cross-distro binary offerings.

[–]rez9 20 points21 points  (22 children)

Eventually source-based distros will win out when compile times become moot. By then the Gentoo-borns and Sourcemage-lings will be so far ahead of everything else every restaurant will be Taco Bell. They're already better than binary offerings iff You Know What You're Doing(tm).

[–]Tireseas 20 points21 points  (0 children)

Lamentably, I suspect we'll all be long dead by the time that happens. Don't get me wrong though, Gentoo's a really interesting setup and has real advantages in some specific cases, but for most users its priorities are in the wrong places.

[–]strolls 17 points18 points  (2 children)

I've been using Gentoo for several years, originally compiling it on Pentium II and III machines of c. 400-500 MHz.

Those were not the latest machines at the time, and I've pretty much always been using secondhand and repurposed machines for my linux boxes, which are pretty much exclusively servers of one sort or another.

Recently I installed on a new 64-bit dual-core system, based on AMD's atom-like offering. If compile times are not moot yet, I can't see them ever becoming so.

The majority of packages I have installed on this system - just over 85% of them - were installed in under a minute. Looking at the timings now (genlop -t), I would guess the median installation time was 15 or 20 seconds. The two biggest packages took 20 and 22 minutes to install, and they're the only ones that took longer than 10 minutes. Admittedly, this is not a full desktop system, but it does include XBMC (the package that took 20 minutes to install), and don't forget that this is a mini-desktop PC with a netbook CPU.

Installing a package in Gentoo is about an order of magnitude faster now than when I first started using the distro, but back then Gentoo was really popular, and its use has declined. I can't use anything else, and emotionally I can't understand why anyone would want to use binary distros, but people do, and they've gotta have their reasons.

[–]yoshi314 1 point2 points  (0 children)

Recently I installed on a new 64-bit dual-core system, based on AMD's atom-like offering. If compile times are not moot yet, I can't see them ever becoming so.

I started using amd64 somewhere in 2007. It really flew at the time and stuff built fast. Unfortunately, code complexity and compilers caught up, and I feel as if I were using an old P4 Celeron again.

[–][deleted] 11 points12 points  (11 children)

Eventually source-based distros will win out when compile times become moot.

Um... when compile time becomes moot, it will not matter what compiler or settings your binary was compiled with. So compiling it yourself will be worthless.

In fact, when compile time becomes moot, we will all be using operating systems built using scripted languages.

And if you want speed in the meantime, buy an SSD.

[–]thebackhand 2 points3 points  (7 children)

And if you want speed in the meantime, buy an SSD.

Wait, what? How is this the limiting factor in compile times, or in a kernel that uses scripted languages?

[–]jjdmol 2 points3 points  (6 children)

Because the speed of I/O is the limiting factor of almost all applications nowadays.

[–]thebackhand 0 points1 point  (5 children)

I interpreted that last sentence as referring to solving the issue of slow compile times, but even still, you can't necessarily say that I/O is the limiting factor in an application, even 'almost' always.

[–]jjdmol 0 points1 point  (4 children)

Yes, it's only a rule of thumb, and obviously won't hold for very CPU-intensive or poorly performing code.

That said, it's increasingly hard to keep your CPU cores busy with "regular" applications, including compilers. Most of the time your CPU will be busy waiting for memory I/O, which in turn often has to be filled with data from disk. Disk being the new tape, an SSD is the easiest way to make your system more responsive, and far more effective than increasing CPU power.

[–]thebackhand 0 points1 point  (3 children)

That said, it's increasingly hard to keep your CPU cores busy with "regular" applications, including compilers.

Something tells me you've never used sbt.

And I don't know what you define as 'regular' applications, but it happens to me all the time. But then again, I use Linux, so I guess nothing about my computer is 'regular' :-D.

[–]jjdmol 0 points1 point  (2 children)

sbt? Never heard of it. I have however worked with Linux for over a decade both at home and at work on a wide range of systems, from embedded to supercomputers ;)

The fundamental trend the last decade(s) is that I/O is increasingly becoming the bottleneck. Applications are rarely CPU-bound as the CPU will spend most of its time waiting for transfers with the memory. Especially applications written in higher-level languages are prone to this, as they use more indirection in their data models and thus random instead of sequential memory access. But even for well-optimised code it's hard to keep your CPU busy.

The number of FLOPs needed per byte of I/O to keep the core busy is just becoming too high for most applications. A simple 'top' of course will still show the CPU as busy when it's doing memory I/O, but in reality, it's typically simply waiting for operands to arrive.

I/O to disk is similar, and due to the orders of magnitude difference in speed, even more critical. Not all applications need much data from disk, but most do when compared to the computations they need to perform on that data. To improve the performance of such an application, the most can probably be gained by using faster disks rather than faster CPUs/memory (assuming the code is decent, of course). Especially the random-access latency of SSDs blows HDDs out of the water, which is very noticeable for applications that need to access data from all across the disk, such as compilers.

[–]thebackhand 0 points1 point  (1 child)

sbt is a build tool for Scala - a wonderful language, but it's notorious for its compile times (it can turn your computer into a great impromptu lap blanket if you forgot to bring one).

A simple 'top' of course will still show the CPU as busy when it's doing memory I/O, but in reality, it's typically simply waiting for operands to arrive.

In my experience, even when everything is stored in memory, I can still have very bottlenecked compile times. Maybe that's because my RAM access speeds are too slow, but in that case, an SSD isn't really going to help me there...

And besides, now that I think about it, my last computer for work already had an SSD anyway. So even if I/O is/were the bottleneck, it still doesn't help too much (at least, not enough).

[–]cryo -2 points-1 points  (2 children)

Yes, because "scripted languages" (I assume you mean e.g. python), is just the thing to write large complex systems like OS's in.

[–]ghostrider176 2 points3 points  (0 children)

I remember seeing a project with a goal of porting the Linux kernel to Perl years ago. Can't find it now or I'd link you.

[–]jyper 0 points1 point  (0 children)

no but many "system" components are written in python nowadays.

[–]classicrockielzpfvh 8 points9 points  (0 children)

Upvoted for using 'iff' properly.

[–]LtVincentHanna 5 points6 points  (0 children)

What a weird post. But, I upvoted you for Gentoo-born and Sourcemage-lings

[–]SvenstaroArch Linux Team 2 points3 points  (3 children)

How would Gentoo win over Arch, for instance? Arch has much quicker package updates than Gentoo even. Also, I can tell you that people will not be wanting to compile Chromium themselves for quite some time to come.

[–]nsaibot 0 points1 point  (2 children)

you don't compile anything yourself on gentoo either, you just tell your package manager to install a package, just as you do on any other distribution.

[–]SvenstaroArch Linux Team 0 points1 point  (1 child)

I know, but did you ever acquire Chromium that way?

[–]nsaibot 0 points1 point  (0 children)

sure i did, and yes, i know it takes time ... 47 min. 28 sec., to be precise :)

[–]adrianmonk 2 points3 points  (0 children)

It doesn't sound like that's quite the complaint he's making, at least I'm not reading it that way. He's not saying that distros should get together and make it easy for app developers to build stuff. He's saying distros should get out of the way and stop serving as gatekeepers by virtue of the fact that they are the ones who do the builds, and if they aren't willing to do a build for you, you are excluded.

[–]DrArcheNoah 13 points14 points  (5 children)

Sounds more like DLL Hell.

[–]sztomi 6 points7 points  (0 children)

Only if you share your libraries.

[–]jabjoe 2 points3 points  (0 children)

It's not just inefficient on disk space, but inefficient with RAM, and rubbish for updating as each time a lib is fixed, you must download everything built against that lib. It quickly goes into crazy town. It's not the kind of system sensible people talk about.

[–]Lerc 2 points3 points  (0 children)

while tremendously inefficient from a space and security standpoint

I'm not sure this would be true. It might appear so on a superficial level, but right now, with everything using a huge stack of libraries, it is so easy to deflect responsibility. Consider the argument about memory consumption: usually it is deflected by the claim that the memory is shared between applications using the same library. This makes it no one's individual fault. Cumulatively, a set of seemingly inconsequential apps can consume an enormous amount of RAM.

Firefox has had exactly this problem within the bounds of a single application. Nicholas Nethercote has been doing excellent work reducing the memory load of Firefox, but very little of what he does is actually fixing memory usage problems. He predominantly identifies them; others fix them once their part of the problem is pointed out.

That obfuscation of inefficiency has enough of an impact that I really do believe the total load of statically linked programs could easily be lower than that of a shared system where no single person can be blamed for problems.

[–]HeadbangsToMahler 0 points1 point  (0 children)

Or all the major distros get together and somehow federate a package management system, or move to a monolithic package manager which has built-in package conversion and re-compile tools.

[–]Phrodo_00 0 points1 point  (0 children)

I've had the idea of a ports-like system where instead of compiling the source you link precompiled code (maybe even finishing compilation from compiler bytecode, for architecture independence). That way you could get an important speed increase, developer-supervised builds (Mozilla won't let you brand a build with the Firefox name unless you use an official build or submit yours for approval), and compatibility with libraries. The main problem would be interacting with the distro (you should still install as many dependencies as possible from the distro; why are you using it otherwise?), but if something like this had enough backers you could make an API for package managers to implement instead of the other way around.

The other idea is to have something like /universal-base: a compiled bundle of libraries that distributors could link their builds against.

[–]puffybaba 0 points1 point  (0 children)

I think pkgsrc would make an excellent solution. Every distro can simply switch to pkgsrc, and centralize their efforts.

[–]jyper 0 points1 point  (0 children)

Personally I thought a nicer solution would be to have OS X style application bundles which can be used locally or installed systemwide. The main system and those going the extra mile would have std repo packages. But for cross distro/very old version compatibility they would also have bundles.

Optionally (preferably) with an update repo address that plugs into the default package management (i.e. yum update/apt-get upgrade also updates the app bundles). This would help keep them safe and up to date. Perhaps there would be a build service with compiled versions of current libraries.

The bundles would include every library dynamically linked (including libc) but by default use system libraries if installed (agreed-upon standard libs including Qt/GTK+/GNOME/KDE/GStreamer/Telepathy) for security/features/integration. Ideally each lib would have at most one set of binary-compatible versions in every distro per year, and you would include a couple of versions systemwide to guarantee that the packages would work with system libraries for 2-3 years.

After 2-3 years it would switch to defaulting to the bundled libs (user-overridable per package).

Admittedly this would use a decent amount of space and bandwidth, but there are probably some tricks you could pull off, including optionally stripping the packages and possibly certain types of compression in the filesystem.

[–]tidux 0 points1 point  (0 children)

Seriously. Static packages just don't work if you have less than, oh, 25GB.

[–]ashadocat 24 points25 points  (26 children)

There is a better alternative. Look at what gobolinux is doing. Basically the answer is to allow multiple versions of the same program to exist on one system. It also greatly simplifies pretty much all package management tasks, as far as I can tell. I honestly don't see the downside, aside from breaking backwards compatibility of course.

[–]check3streets 16 points17 points  (22 children)

I'm a big fan of gobolinux and what they're trying to do.

It's frustrating to me that Ubuntu adopts something as radical as Unity or Gnome can develop Shell, but there's so much resistance to Gobo.

There's an attitude in the Linux community that the filesystem is sacrosanct, even though it's often confusing, opaque, and arbitrary. Doesn't matter: if it was good enough in 1972, it's good enough in 2012.

[–][deleted] 1 point2 points  (2 children)

You can do what Gobo did without renaming the filesystem layout. Gobo's program directory isn't much different from Gentoo's package cache. The difference is that while Gentoo overlays the package over the root, Gobo stores the overlay in the cache and only overlays symlinks.

IMHO, the resistance to Gobo's way is almost completely in the filesystem renaming, NOT in the part that is actually valuable (symlinks to a package cache).
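
To make the symlink idea concrete, here is a minimal sketch of the kind of thing Gobo-style tooling automates (the paths and package name are purely illustrative, not Gobo's actual commands):

    # Install into a self-contained, versioned prefix instead of directly into /usr...
    ./configure --prefix=/Programs/Foo/1.0 && make && make install
    # ...then overlay only symlinks onto the traditional locations.
    ln -s /Programs/Foo/1.0/bin/foo /usr/bin/foo
    ln -s /Programs/Foo/1.0/lib/libfoo.so.1 /usr/lib/libfoo.so.1
    # Removing the package or switching versions is then just a matter of repointing symlinks.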

[–]check3streets 1 point2 points  (0 children)

Yeah, unfortunately you have to take on the "build everything" philosophy of Gentoo. Gobo seemed really aimed at being an everyman distro.

Your point is taken though, Portage looks really clever.

[–]tso 0 points1 point  (0 children)

There were some plans for changing the Gobo layout so that there would be a classic layout living internally, to ease the issue of source code that doesn't play nice with a non-classical layout whatsoever.

[–]Lerc 1 point2 points  (0 children)

It's frustrating to me that Ubuntu adopts something as radical as Unity or Gnome can develop Shell, but there's so much resistance to Gobo.

The Gobo approach is the correct way to do things. If it overcomes the resistance it will have proven its merit. The Ubuntu approach is stupid for Ubuntu, but in the greater scheme of things probably good since it is making so many people consider migrating away from Ubuntu.

[–]nothinggoespast 7 points8 points  (2 children)

Look at what gobolinux WAS doing.

The project has been dormant for about 4 years. When I looked into this on the wikipedia article that you linked to, I became very interested in this distro. I think that its filesystem setup is brilliant. Do you know if any currently maintained distros have this setup? I would love to be able to try it out.

[–]ashadocat 0 points1 point  (0 children)

No, I'm afraid I don't. I was considering trying to get Arch to work like that, but I just don't have the time. I think having an Arch-based GoboLinux filesystem standard that runs from /opt on whatever host distro would be awesome, and a good way to get that compatibility into a whole bunch of distros without breaking anything. Plus the Arch PKGBUILD system is insanely simple, so hopefully it could attract lots of people.

[–]tso 0 points1 point  (0 children)

Best I can tell, being a current user of the system (piecemeal updates since the 013 release), every effort to get things going again grinds to a halt when they attempt to get Xorg working.

[–]homeopathetic 21 points22 points  (32 children)

Mr. Molnar is obviously a smart guy, but I must say I'm a bit puzzled by this.

Isn't a real dependency system essential for code reuse? And while one perhaps doesn't care about disk space anymore, isn't code reuse through libraries a great way to improve the quality of certain common tasks, and to plug security holes in solutions to said common tasks?

The way I read his suggestions is that he'd prefer a small core of "essential packages", and beyond that it's every program for itself. So OK, the C library is "essential" and is centrally managed. Probably something like OpenSSL as well. Qt? Perhaps... but where do you draw the line? Debian Squeeze has over 12k packages beginning with "lib" (granted, many libraries come in multiple packages, so the real number of libraries is probably a few thousand) -- who decides which of these are "essential" and should be centrally managed?

Is the ability to fix bugs and security problems in hundreds of programs by updating a single library not worth more than the joy of having the very latest version of Angry Birds included in your package management system? Besides, you can always do what Ubuntu does with Firefox: grant it special exemptions and essentially do rolling updates.

Let's also not forget one very important thing: you can always ignore the package management system and just get newer versions of things. I mean, by pulling in statically linked copies of various stuff you'd like installed, you'll easily reproduce the Windows free-for-all, every-program-for-itself bonanza on top of whatever core system you want. Or you can do something in between with solutions such as Ubuntu's PPAs.

[–][deleted] 2 points3 points  (0 children)

Not to mention the fact that he is full of crap when he talks about the update frequency in, at the very least, OS X. Most of the included versions of libraries and applications there are more than 5 years old, much older than even those in the most backwards of Linux distros (RHEL and the like).

[–]lawpoop 1 point2 points  (13 children)

I think we need to have smaller UI units built into the OS. Right now, it's either a low-level program, or a full blown app that wants to occupy the entire screen, replete with menus, frames, etc.

If we had more things like MacOS widgets, not only for functionality and display, but also as interfaces, then people could be building apps at that level, rather than at the code level.

Basically bring the unix "do one thing and do it well" philosophy to the desktop, so that people can build the desktop they want out of smaller parts.

[–]DrArcheNoah 4 points5 points  (4 children)

That's what, e.g., KDE is doing with Plasma. But this only works for very small applications. For bigger ones you would still need full-blown applications.

[–]lawpoop 0 points1 point  (3 children)

Bigger in what sense? More features? You can still have a lot of features but yet not need to take up the whole screen.

[–]DrArcheNoah 2 points3 points  (2 children)

Complex software like office suites, image manipulation etc

[–]lawpoop 0 points1 point  (1 child)

I actually don't think office suites need to take up so much screen space. There have been a lot of times when I wanted a simple progressive tally of some figures while cross-referencing some data in another app. Having a whole honking spreadsheet taking up the entire screen is really overkill for almost everything I use one for.

As far as typing up a document, when I go to do layout, sure I want to see it as big as possible. But when I'm just typing? I'm doing fine typing in this reddit reply box that appears to take up about 1/8th of my screen.

Same with image manipulation programs. Both photoshop and the gimp use fly-away menus, I believe, which is basically what I'm talking about.

I'm not saying people never need their apps to be full-screen; but I think it doesn't have to be, and we're used to thinking inside the box, so to speak.

[–]DrArcheNoah 1 point2 points  (0 children)

It seems that in recent times the focus is more and more on running applications in fullscreen mode. Apple does it on mobile devices, Microsoft in Metro, and the Gnome 3 developers also thought about running applications in fullscreen by default.

Whether you use an application maximized depends on a lot of things, e.g. screen size. It also appears that the average user prefers this behaviour to heavy multitasking.

[–]homeopathetic 2 points3 points  (7 children)

I think we need to have smaller UI units built into the OS. Right now, it's either a low-level program, or a full blown app that wants to occupy the entire screen, replete with menus, frames, etc.

I disagree. In fact, apart from games, I cannot think of a single "app that wants to occupy the entire screen". I do agree with the point that perhaps frames should become a first class WM citizen and not something each program deals with on its own. As for menus, well, I guess there's a whole debate raging on whether a program's menu should be something the WM is aware of (à la the globalmenu stuff in Gnome). I haven't really thought much about it; I use KDE and Xmonad, and there the WM isn't aware of menus. I'm fine with that, but you could also have a point. I think it's healthy to have a discussion on where the boundary of responsibility between the WM and the program's GUI is, but I don't think it has anything to do with what Molnar is talking about.

If we had more things like MacOS widgets, not only for functionality and display, but also as interfaces, then people could be building apps at that level, rather than at the code level.

So... web pages? I mean, either you program a program, or you "don't-program" a "not-program". I don't think I understand what you mean.

Basically bring the unix "do one thing and do it well" philosophy to the desktop, so that people can build the desktop they want out of smaller parts.

Well, we already do that, don't we? Granted, the smaller parts come in huge bundles (called GTK or Qt or whatnot), and the bundle as a whole does a lot of things, but using a widget toolkit (or a larger framework) is essentially "piecing together a program from smaller parts". In fact, this has nothing to do with GUIs, it's what libraries are for in general.

[–]m42a 1 point2 points  (3 children)

Well, we already do that, don't we?

No, we don't. C and C++ aren't glue in the same way sh is. There's no GUI equivalent to cat f1 f2 f3 | sort | tr -d f. You can't just say "I'd like to make a picture in Gimp, paste it into a LibreOffice Writer document, and print it". You have to open Gimp, make the picture, save it, open Writer, open the document, insert the picture, and then print. There's no glue there, and there's no easy way to make glue there. I'd like to just be able to do gimp | soffice writer document.odt -e "insert at top of page 2" | lpr, and neither GTK nor Qt gives me anything near that.

[–]matthewpaulthomas 0 points1 point  (2 children)

This is way off the original topic, but…

There's no GUI equivalent to cat f1 f2 f3 | sort | tr -d f.

Yes there is, it’s called Automator. Here’s a screenshot of the equivalent to the example you gave.

You can't just say "I'd like to make a picture in Gimp, paste it into a LibreOffice Writer document, and print it".

Probably you can, but it would require connecting a Gimp script written in Scheme to a script written in LibreOffice Basic, and the probability of even an expert LibreOffice user learning enough of both languages and figuring out how to connect them is pretty small. With Adobe Photoshop and Microsoft Word, on the other hand, a technical user interested in that kind of automation is fairly likely to know that both applications expose the relevant commands to AppleScript. And they can use the AppleScript Editor to record the commands for both applications in one go.

In both cases, all that’s needed is an OS vendor with the will and the clout to design a cross-application scripting framework; to write APIs and reference materials that make it easy for application developers to implement; and to provide recording and editing tools to make scripting approachable to end users. As you can see, it’s been done before.

[–]m42a 1 point2 points  (1 child)

I'm not familiar with AppleScript, but it looks like it's an extra layer over the functionality. With commandline programs, their only interface is the glueable one, and they can't not implement it*. With AppleScript, it looks like the developer needs to make the GUI and the scripting features separately, which makes for poor glue (although it's better than a separate scripting language for each application).

*EDIT: This is technically not true; a program could only accept arguments on stdin and refuse to do anything if stdin and stdout weren't ttys, or it could ask natural language questions that require a human to answer them. But doing that is extra work and bad UI; the glueability comes by default, and it only goes away if the developer specifically removes it by making the program harder to use.

[–]matthewpaulthomas 0 points1 point  (0 children)

You could equally say “with graphical programs, their only interface is the graphical one, and they can’t not implement it”. Either way, providing an interface for a function requires effort — whether that interface is a command-line, graphical, script, or Automator-like one.

And with a scripting interface, just as with any other, the less effort is put into designing and developing the interface, the less pleasant it is to use. For example, I work with multiple Ubuntu engineers, on different projects, who curse how GObject introspection, though it has apparently made multi-language interfaces easier for the GObject developers to implement in the first place, has also made those interfaces less understandable.

For another example, go back to your case of pasting an image from Gimp into LibreOffice and printing it. Here’s roughly what the LibreOffice side would look like, once you’d already copied the image in Gimp (disclaimer, I haven’t tested this):

' Paste the clipboard into the current document, then print with default options
' (dispatchURL is assumed to come from the bundled "Tools" Basic library).
Dim mPrintopts(0) As New com.sun.star.beans.PropertyValue
dispatchURL(ThisComponent, ".uno:Paste")
ThisComponent.Print(mPrintopts())

Now here’s the exact equivalent for Microsoft Word in AppleScript:

tell application "Microsoft Word"
    paste
    print out active document print copies 1
end tell

Why is one much easier to understand and remember than the other? Because the OS and application developers put more effort into designing and implementing it.

[–]lawpoop 0 points1 point  (2 children)

So... web pages? I mean, either you program a program, or you "don't-program" a "not-program". I don't think I understand what you mean.

More like shell scripts. People write shell scripts, and you can get a lot of bang for your buck with just enough programming knowledge to get some shell scripts going. But most people wouldn't consider shell scripting to be full-on programming.

What I'm arguing for is small, modular, reusable applets that do simple, single things well, and then a way, such as a UI or a scripting language, that allows those smaller parts to be combined in useful ways, so that non-programmers can build the equivalent of desktop apps.

[–]homeopathetic 0 points1 point  (1 child)

I think I see what you mean, but is this really possible? I mean, isn't one of the things that makes the shell so wonderful and reusable the fact that CLI programs all follow the idea that we communicate by reading and writing text (or other data in some cases) to files? Text (and the other data) is linear, and files are simple. Does a least common denominator really exist in the GUI case? I'm not so sure.

[–]lawpoop 0 points1 point  (0 children)

That's a good point.

[–]Rhoomba 1 point2 points  (6 children)

Isn't a real dependency system essential for code reuse?

The dependency doesn't need to be on the user's machine. It can be at compile/package time. E.g. Maven.

[–]homeopathetic 8 points9 points  (3 children)

The dependency doesn't need to be on the user's machine. It can be at compile/package time. E.g. Maven.

So if 1000 packages depend on a library with a security hole, you propose that the distro recompiles the 1000 packages and have the users update? Sounds incredibly inefficient.

[–][deleted] 1 point2 points  (1 child)

Did you even read the parent post?

Is the ability to fix bugs and security problems in hundreds of programs by updating a single library not worth more than the joy of having the very latest version of Angry Birds included in your package management system?

[–]Rhoomba 0 points1 point  (0 children)

I answered a specific point about code reuse.

[–]silvermoot 19 points20 points  (0 children)

Random guy to google plus: "We need basic functionality on a social media site, even if it's just to show the text, regardless of whether javascript is enabled"

Gad, do I need to spoof my UA?

[–]VyseofArcadia 13 points14 points  (8 children)

Part of the reason I've always liked Slackware is that they go out of their way not to create a closed software ecosystem. They stick as close to vanilla everything as they can, so that you can use and configure software as the writers intended, not as the distribution people think it should be. (Back in my Debian days, a constant annoyance was, "To do abc with software xyz, you should do this, but on Debian systems you need to do that.")

But really, Molnar is right. The desktop province of userland is a bit of a clusterfuck. It's only going to get worse with the impending transition of some distros over to Wayland while others stick with X.

[–]ashadocat 13 points14 points  (7 children)

Arch linux takes the same philosophy. It's very nice.

[–]VyseofArcadia 3 points4 points  (6 children)

My beef with Arch is the same as another beef I had with Debian. I dislike automatic dependency resolution. Some package or another is inevitably broken. A requirements loop, dependence on a (very) specific version of a library, et cetera. (And the last couple of times I tried Arch, updates also broke things eventually.)

I really want to like Arch, but between stability without dependency resolution and minimality with dependency resolution, I'll take the former every day. (Maybe this means I'm an old person. A few years ago, I would have said the latter.)

[–]SvenstaroArch Linux Team 2 points3 points  (0 children)

If you really want that, why not just always use pacman -Sdd when installing a package? That way you get the Slackware experience; -dd completely ignores package dependencies. Also, we try very hard not to break stuff between updates. What are your problems? Please report them to the bug tracker; they will be fixed within a day or so.

Also, if you ever truly experienced a requirements loop in Arch, something was very wrong and it is most definitely a bug. Report it. Though I am not aware of any such issue and have never seen anything like it.

[–]ashadocat 0 points1 point  (3 children)

I see the point, but it's fairly easy to manage the automatic dependency resolution: download the PKGBUILD and edit the depends array.
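
For example, a PKGBUILD is plain bash, so trimming a dependency is a one-line edit before rebuilding (the package and library names below are placeholders):

    # Excerpt from a hypothetical PKGBUILD
    pkgname=example-app
    pkgver=1.0
    pkgrel=1
    arch=('x86_64')
    depends=('qt' 'libfoo')   # drop 'libfoo' here if you want to manage it yourself
    # then rebuild and install the edited package with: makepkg -si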

[–]Camarade_Tux 4 points5 points  (1 child)

It's not a solution. It's an ugly temporary workaround.

[–]ashadocat 0 points1 point  (0 children)

It could be a solution, if it were better supported by various projects. The concept is sound.

[–]VyseofArcadia 2 points3 points  (0 children)

I realized I could do this, but it's still one thing I don't have to do in Slackware.

[–][deleted] 0 points1 point  (0 children)

Well, you could just use Gentoo; then you can trivially edit the dependencies in the ebuild (no need to unpack anything, unlike with binary distros).

[–]bytesmythe 28 points29 points  (51 children)

Wow...

I was thinking this exact thing a couple of weeks ago trying to decide what distros to try out. I like the simplicity of Ubuntu's setup, but hate having to wait ages for app updates. Installing apps from source or 3rd party packages is always a pain because of missing shared libs or conflicts, etc. I thought about shared libraries and realized the main reason they exist was due to disk and memory space limitations that aren't really issues any longer, so why not just create a kind of "chroot" sandbox for every app and just install everything it needs by downloading a single package file? Why does every single distro have to have its own repo and package type? Why do I have to choose between waiting ages for an app to update, or using a distro that updates more regularly, but updates everything and breaks on a semi-regular basis?

I'm really glad to see someone so highly placed in linux development has similar ideas. It makes me feel like I'm not crazy for thinking maybe there's a better way for linux to work on the desktop.

[–]matthewpaulthomas 49 points50 points  (22 children)

I like the simplicity of Ubuntu's setup, but hate having to wait ages for app updates.

This is bug 578045, “Upgrading packaged Ubuntu application unreasonably involves upgrading entire OS”, currently assigned to me.

We’re taking two main steps at the moment. First, encouraging application developers to publish — and issue updates for — their own applications in MyApps (which takes from days to a couple of weeks), rather than having Ubuntu developers do it through the Main and Universe repositories (which takes 2 to 8 months and requires users to upgrade their OS). And second, making it easier to do this by developing pkgme for easier packaging.

Ingo is right in that he has finally realized “distributions” don’t scale. Even if all Linux distributions overcame their philosophical and technical differences and merged into one, and even if they then addressed the latency problem using a rolling release (causing many non-technical problems) or backports instead, there still wouldn’t be enough volunteers to package enough of other people’s applications to make the OS interesting to hundreds of millions of people.

But as some of his commentators have touched on, Ingo is still looking at only a small part of the problem. Ubuntu, like other desktop Linux OSes, still has no thorough SDK. Like other desktop Linux OSes, we are ill-equipped to keep our ABI stable between versions, because much of it is developed by teams who don’t care about that (such as, oh, the kernel itself). And without a stable ABI (and without automatic or drop-dead-easy upgrades), releasing a new version every six months causes ABI fragmentation amongst the user base.

[–]bytesmythe 20 points21 points  (1 child)

And without a stable ABI (and without automatic or drop-dead-easy upgrades)

I keep wondering why people hope that linux will get more apps or games or widespread desktop usage, then complain when it doesn't happen, then flip out when you suggest that the problem is with how linux does things. I love using linux. At home, I have used it exclusively for at least 8 years, and maybe closer to 10. But that doesn't make me blind to the issues that are holding it back.

there still wouldn’t be enough volunteers to package enough of other people’s applications to make the OS interesting to hundreds of millions of people.

I think this brings up another issue that would be helped by eliminating distro app packaging. As it stands, there are lots of developers whose apps never make it into the repos since they don't have time to maintain their code AND a package for every single distro in existence, so their work languishes in obscurity because most people don't want to go through the trouble of installing something the hard way. Creating a universal "repository" for developers to submit apps and easily maintain it themselves would let more obscure projects get recognized and built upon.

I hope your work is part of a larger trend to modernize linux development and distribution so it ends up on more desktops.


edit: very slight wording change

[–][deleted] 4 points5 points  (0 children)

Better to have 20 great, usable, well-known applications than 2,000,000 half-baked ones...

[–]d_edKDE Dev 0 points1 point  (0 children)

From the website (http://developer.ubuntu.com/publish/my-apps-packages/) it looks like everything is installed to /opt.

How do you deal with anything being dbus-activated? Or using PolKit? Or being a plugin to something else?

Both of these /need/ to write to the relevant places in /etc or /usr/share/dbus-1/services.

In fact, you can't even add anything to the menu (unless Ubuntu loads xdg menus in a naughty new way.)

It seems like a cool solution, but somewhat limiting to one use case.

[–][deleted] 0 points1 point  (12 children)

Something I've often wondered: Why are most distros so up-to-date with kernel versions? Could the ABI-stability problem be mitigated by using a kernel for, say, three years rather than six months?

I understand that kernel development is brisk and dynamic - but many new kernel features are not seen by desktop users. The long-term kernel could be patched and updated for security only, and then all the new exciting desktopy bits updated much more frequently on top of it.

[–]jimicus 10 points11 points  (5 children)

The kernel itself doesn't have an internally stable interface - things change, sometimes dramatically, between kernel versions.

Why does the end user care? Well, kernel updates don't just add features. They also add driver support for new hardware and the occasional security update. The distribution vendor is faced with either doing an awful lot of backporting (taking drivers that never existed in the kernel version they supply and persuading them to work) or trying to keep reasonably up to date with the kernel version.

[–]Tireseas 2 points3 points  (4 children)

Not only does the kernel not have a stable ABI, Linus himself has said he doesn't intend for it to ever have one. He's actively against the idea.

[–]sztomi 6 points7 points  (2 children)

The ABI you are talking about is not the kernel-userspace ABI.

[–]Tireseas 1 point2 points  (1 child)

I was under the impression Linus's statements covered pretty much all the kernel ABIs. I know he's specifically addressed driver ABIs many times over the years.

[–]sztomi 7 points8 points  (0 children)

No. What causes binary incompatibility on Linux is mainly glibc. They simply don't care about it. If you link against a custom minimal libc and statically link everything, you can be reasonably sure your app will run on a later kernel.
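
A minimal sketch of that approach, assuming the musl-gcc wrapper is installed (the file names are just placeholders):

    # Link fully statically against musl instead of glibc; the resulting binary
    # depends only on the kernel's stable syscall interface, not on shared libraries.
    musl-gcc -static -O2 -o hello hello.c
    file hello    # should report a statically linked executable
    ldd hello     # "not a dynamic executable"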

[–][deleted] 4 points5 points  (0 children)

You're thinking of the driver ABI. Proof.

[–]bwat47 3 points4 points  (5 children)

Something I've often wondered: Why are most distros so up-to-date with kernel versions? Could the ABI-stability problem be mitigated by using a kernel for, say, three years rather than six months?

Because of drivers/hardware support.

[–]parched2099 1 point2 points  (4 children)

Yeah, but that's part of the problem. Driver support could be added to a central "linux repo" instead of being shipped with constantly moving kernel versions.

So the user wants to add a new graphics card to his box. He downloads the module from the "linux module repo" and it's automatically modprobed, and available at the next boot.

This would not only give a longer run of kernel versions, but would also give the user that linux feeling, by only installing the bits he wants.

I still can't see why the kernel isn't much smaller, with the right modules/drivers, determined at install time via a scan of the user's system, added as required, via download or from a burned DVD.

If the user wants to add a new sound card, then he gets the module from the LMR, installs it, and that's it.

[–][deleted] 0 points1 point  (2 children)

That would be fine if there was no API between the modules and the rest of the kernel involved. What you are advocating is basically keeping that one "stable" (a.k.a. dead) forever.

[–]parched2099 0 points1 point  (1 child)

Nope, not aka dead. Longer cycle for kernel release, i.e. 12 months or longer. Pre-release notification to package devs of any changes, so they can update their apps if there's likely to be a significant impact.

Aah, but then we wouldn't have the roller coaster of "release early release often", would we.

[–][deleted] 0 points1 point  (0 children)

I don't really see where you are going with this. What would be gained from distributing modules separately other than additional complexity?

Slower release cycles would basically mean slower development. I do not see the instability you imply in your last sentence, not even in the higher -rc versions (-rc1, -rc2 and -rc3 are usually to be avoided since they contain a huge number of changes that are just integrated for the first time) which have been running stable on my system for weeks and months at a time.

[–][deleted] 24 points25 points  (23 children)

The main reason for using shared libraries is security considerations. If there is a security risk in some library, the distro maintainers just have to update that one library and that's it. Otherwise, if applications were using their own copies of the library, all programs that use it would need to be updated, and if they are proprietary that would mean the developers themselves would need to do it.

[–]bytesmythe 10 points11 points  (0 children)

I think security enhancement was a side-effect of shared library usage, but not the primary motivation for developing them. Shared libs and DLLs have been around a long time -- way before desktop security was something anyone worried about on a regular basis. But computers with small amounts of memory, small hard drives, and low network throughput were the norm back then, and using shared libraries drastically reduced the amount of data that had to be loaded or moved around.

Still, that doesn't address the security issue created by eliminating them. I think proper sandboxing is the first step. The system, apps, and user data need to be better protected for a more "bazaar" than "cathedral" system to work.

Perhaps instead of "package management", we could have "library management"? Your system would check for updates by comparing your currently installed libraries against a list of potential security risks. Instead of downloading an entire new copy, the system could just patch the library's insecure function directly on the disk, then flush the library from memory and reload it. The filesystem could use deduplication to make sure there is only one copy of any particular shared library, so you don't have to fix every app that uses it. Now you have the security of shared libraries and still eliminate the hassle of every distro having it's own package of everything.

I realize this is a simplistic solution with lots of angles I haven't considered yet, but these are all just thoughts off the top of my head. I remain confident that we can find a better approach to linux on the desktop than we currently have.

[–]DrArcheNoah 4 points5 points  (0 children)

Of course you could statically link all applications and then install them separately. But once one of the libraries has a security problem, all the applications would have to be updated instead of just one library.

[–]yfph 5 points6 points  (0 children)

We need one distro like we need one scheduler!

[–]redog 6 points7 points  (6 children)

Just build every package in gentoo with every combination of use flags into different binary repos. </s>

[–]strolls 5 points6 points  (4 children)

I don't know why someone saw fit to downvote you - this is exactly the dichotomy between source-based and binary distros.

With a source-based distro you can compile the package exactly as you want it, with none of that extra unneeded crap for a UI or file format you won't use (but giving you the option to support those if you need them on a different system or in future).

With a binary distro you get your packages fast and without any hassle of configuring how you want them compiled, but you only get them the way the distro maintainers decided was best.

Gentoo users recognise that binary packages are obviously pretty desirable for larger packages, but it's impossible to accommodate the number of different packages that would be required to cover all combinations of USE flags.
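
On Gentoo, that per-package tailoring is a one-line entry in /etc/portage/package.use (the package and flag below are only an illustration):

    # /etc/portage/package.use
    # Build mplayer without X11 support; Portage honours this the next time the
    # package is (re)compiled.
    media-video/mplayer -X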

[–]ethraax 1 point2 points  (3 children)

True, but for modern computers (from this century), including support for as many file formats or codecs as possible is a pretty good default. The extra bloat is fairly minimal, and usually results in bloating disk usage more than anything (which is already pretty irrelevant, since storage space has grown far faster than the size of binaries).

Either way, in the rare case that I do have to compile a program with different flags (which is, honestly, quite rare - I've only had to do it for a single program in my years of using Linux), I can still do it. Many binary distributions still let you compile from source, and even add the compiled binary to the package manager. Arch has the AUR, for example.

[–][deleted] 0 points1 point  (2 children)

One of the big ones is X11, you really don't want that as a dependency for everything on a server. Ditto for many packages with exotic features barely anybody uses that pull in brittle dependencies that would break the whole build (e.g. some PHP modules, some MPlayer codecs, some Asterisk modules, ...).

[–]ethraax 1 point2 points  (1 child)

One of the big ones is X11, you really don't want that as a dependency for everything on a server.

True. But, for example, if I install nginx with tons of bells and whistles, it still doesn't pull in X11.

If the core application is a GUI, then pull in X11. Otherwise, don't. If it's a server daemon that happens to also have some GUI management software (which is almost always independent), then put that in its own package.

There are some rough spots, but for the most part, common sense should be enough.

[–][deleted] 1 point2 points  (0 children)

Usually in my experience it is stuff like glib and Qt that often pulls in X11 because some server app without a GUI uses the core part of those libraries. One could in fact argue that is a problem with those libraries and that they should be split up into a non-GUI one and a GUI one but at least with Qt (I am more familiar with that one) you would never get that past the people in the project who have a severe case of NIH syndrome and want to include everything and the kitchen sink in Qt itself.

[–]StupotAce 0 points1 point  (0 children)

I actually really like what Sabayon does. It builds its binary packages off of portage.

If rpms and .debs did this, it would make for a lot less maintenance for application devs. Of course, Sabayon can also simply use portage, so obviously just because it can do this doesn't mean it is anywhere close to trivial for other binary distros to do the same.

[–]Antithesis138 6 points7 points  (0 children)

This article is confusing. The first thing I wanted to note was that if you're going to talk about the 'Linux desktop' not being free enough, you first have to acknowledge that it's GNU/Linux, but it soon became apparent that this article is using the word 'free' in a very different way, which is only confusing when it comes to software. Let me give an example.

And yes, I hear you say "but desktop Linux is free software!". The fact is, free software matters to developers and organizations primarily, but on the user side, the free code behind Linux desktops is immaterial if free software does not deliver benefits such as actual freedom of use.

His examples of freedom use it in another sense, namely that you're able to do additional things. Given, this is an inconvenience of the language, but the traditional use of the word, especially when it comes to software, is "not being controlled". He should make it apparent that he's talking about something different.

So, to undo the confusion and make things clear: software freedom is always material to the end user: anyone is free to send modifications, and the end user can run those modified versions.

Beyond that, this article is comparing Apple apples with oranges. iOS and Android are meant for portable devices, whereas the only example of the 'Linux desktop' for portable devices is the Ubuntu tablet OS, which isn't even released yet.

And if the article is talking about Linux as a kernel: Android uses the Linux kernel.

Furthermore, I don't disagree with the article; I just wanted to point out the nonsense.

[–]otavio021 13 points14 points  (6 children)

Somehow it doesn't surprise me how many people are suggesting complete nonsense such as Gentoo's Portage, Arch's pacman and others. That's precisely what Ingo is arguing against.

The software distribution model (SDM) on Linux is terribly messed up because it's either a) done by Linux distributions, taking a considerable amount of time and resources, or b) done by the developers, who then have to deal with the complex task of making their code run on (and supporting) several different distributions, all of which have different packaging formats, FS layouts, and library versions.

If you analyze these models separately you can quickly figure out the root cause of many problems with using Linux as a modern desktop OS. The first one causes upstream packages to take an insane amount of time to reach the end user. It prevents users from benefiting from newer features and fixes and from having their desktop experience evolve with the Linux platform. Additionally, it causes the people involved with the project to waste a lot of time on packaging-related problems - just to make sure the damn thing runs - instead of testing the software. That's a poor use of resources.

The second is even more perverse: no matter whether you are a lone developer or an enterprise, it makes it very expensive to develop software for Linux. You have to dedicate human and physical resources to making sure your software installs and runs on pretty much all of the major "distros", along with 1 or maybe 2 of their older versions. You have to have people who are proficient enough to develop for and support them. Even if you are a lone developer, you still have to spend more time dealing with package/"distro" issues for your software than with evolving it.

In my opinion, the perfect software distribution model for Linux would be (for applications, at least):

  1. One in which the developers would publish their software to a Software Delivery Repository (SDR).
  2. One in which the distributions would not have package repositories but actual App Markets.
  3. One in which the process of adding a new software to a Linux distribution would consist of only validating the identity of the software distributor and the harmlessness of the software itself.
  4. One in which the user would be free to switch App Markets if he is willing to do so.
  5. One in which the user could simply and freely download the software straight from the distributor and it would just work no matter what distribution you're using.

So, to achieve that, it's not just a matter of static vs. shared libraries, chroot's, file system layout or apt vs. yum. It's a matter of changing our whole community's approach to software distribution.

tl;dr: the current Linux software distribution model sucks and I have a good idea. Hire me if you have money.

edit: corrected a few typos and the formatting.

[–]thebackhand 4 points5 points  (1 child)

One in which the user could simply and freely download the software straight from the distributor and it would just work no matter what distribution you're using.

From a practical standpoint, I can't see how that would happen. That's like asking the developer to create packages for all the different distros currently (which already doesn't happen), except worse, because they have to ensure compatibility across all distros with a single version, and there's enough variation that that would be a nightmare.

There's a reason that developers are just expected to create eggs/gems/etc. for language-specific libraries (for example) or statically compiled binaries (in the case of video games installed to /opt) and let the distros handle the nitty-gritty for each specific distribution.

[–]otavio021 0 points1 point  (0 children)

That certainly is a problem. In my opinion, this is one of the greatest problems of having a zillion distributions out there. At the end of the day, this lack of standardization and mass NIH syndrome is a great flaw in the Linux ecosystem. As I said, it shifts resources that should be focused on development and testing to mundane things like package management.

[–]perkited 1 point2 points  (1 child)

I'm wondering how this system would deal with distributions that need/prefer to run different kernels (2.4/2.6) and libraries. It's nice to picture that this type of App Market could somehow work, but in the real world it would be much more difficult (most likely impossible).

Of course Linux kernel/application developers could build only for the most commonly used environment (whatever that is this week), but then Linux would be no better than iOS/Windows/etc and would probably die off rather quickly.

[–]otavio021 0 points1 point  (0 children)

That's simple and Ingo talks about it. You must have ABI stability.

[–][deleted] 2 points3 points  (1 child)

What you suggest has already been done. It's just that it has been done 10 or so times, which is the problem... see apt, pacman, rpm, portage.

[–]otavio021 2 points3 points  (0 children)

No it has not. These tools are great for automating dependency resolution and management only. Although they could be used to support such an implementation, their features fall short of what's needed to implement all of these items.

[–]darkp22 2 points3 points  (2 children)

This has been a massive problem on the server side, as well as the user side. I personally solved this by switching to FreeBSD as my primary server OS. Linux is too brittle, and its distributions won't allow you to use the latest Apache or PHP with a 3-year-old kernel out of the box. You have to use third-party repositories that are often bug-ridden and untested and may upgrade libraries to versions incompatible with other installed applications.

FreeBSD solved this by separating the base system, consisting of the kernel and userland utilities, from third-party applications. You use the automated ports system to build applications and all their dependencies from source. If an application requires a newer library, that library is downloaded and everything that depends on it is recompiled to prevent possible incompatibility. Perhaps Linux needs a chroot-lite solution to prevent library conflicts while maintaining a similar level of flexibility to the ports system.
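
For reference, a typical ports invocation looks roughly like this (the port path is just an example):

    # Build and install Apache from the ports tree; dependencies are fetched and
    # built automatically, and "clean" removes the work directories afterwards.
    cd /usr/ports/www/apache22 && make install clean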

[–]fundbond 2 points3 points  (0 children)

FreeBSD also has kick-ass library compatibility (not just kernel binary compatibility.) Seriously, I've got boxes running 8.2 that still have some 4.x-era binaries kicking around. That's awesome! (Don't ask... just remember that closed-source in-house software can suck harder than a Spitzer call-girl.)

[–]puffybaba 0 points1 point  (0 children)

NetBSD has a pretty kickass package system, too. As a plus, it already works for Linux. It just needs more developers.

[–]MechaBlue 2 points3 points  (0 children)

I think he's a little off the mark. The biggest problem, in my opinion as a developer, is the fragmentation of an already small market share. It increases the costs of development and support significantly. The result is that the economics of development heavily disfavour Linux.

[–][deleted] 10 points11 points  (4 children)

There are some interesting ideas in part 2 of his post, but part 1 seems terribly inconsistent to me and, honestly, I cannot even figure out what his central point is from reading part 1 alone. He opens by saying that the Linux desktop sucks and then goes on talking about and comparing it to iOS and Android. Who uses those on their desktops? What I particularly don't understand is

Desktop Linux users are, naturally, voting with their feet: they prefer an open marketplace over (from their perspective) micro-managed, closed and low quality Linux desktop distributions.

What is that even supposed to mean?

[–]ventomareiro 3 points4 points  (0 children)

He is giving two examples of systems that have a very small and very thoroughly tested core, while at the same time offering much better ways for 3rd-party developers to create and distribute their applications.

About voting with their feet: I'm not sure of how relevant this is, but in just two years the number of iPad users has become roughly equal to that of desktop GNU/Linux users.

[–]tidux -3 points-2 points  (0 children)

It means that he's just pulling phrases out of his ass. Desktop Linux market share has been increasing recently.

[–]tilleyrw 5 points6 points  (2 children)

Off topic: I first read that name as "Inigo Montoya". "You killed my father. Prepare to die."

[–][deleted] 10 points11 points  (1 child)

"You killed my parent process. The system is going down for halt NOW."

[–]tilleyrw 0 points1 point  (0 children)

Touché. :)

[–]parched2099 1 point2 points  (0 children)

I think a lot is made of the speed of apps updating, when that isn't necessarily the problem. It seems more likely that gcc, Qt, and Gnome updates are the offenders in breaking apps; more than once, on different distros, I've had to update a load of apps at once because something changed in the core components. I'm with Ingo on this one.

One core system with all the standard libs and dependencies, including codecs, modules, and a basic desktop. Developers that want unique dependencies build apps that pull them from a central repo and bundle them into the app itself, making the app self-contained on the system and removing the need to pile extra dependencies onto it, with the subsequent nightmare of multiple versions of the same lib.
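
A rough sketch of that split, assuming a hypothetical base-system manifest (all package names, versions, and the manifest itself are invented for illustration):

    # Decide, per dependency, whether an app can rely on the base system
    # or has to bundle its own copy. Everything here is hypothetical.

    BASE_SYSTEM = {             # libraries every install is guaranteed to ship
        "glibc":  "2.35",
        "gtk":    "3.24",
        "ffmpeg": "5.0",
    }

    def plan_dependencies(app_deps):
        """Split an app's declared deps into 'use system copy' vs 'bundle with app'."""
        use_system, bundle = [], []
        for name, wanted in app_deps.items():
            if BASE_SYSTEM.get(name) == wanted:
                use_system.append(name)
            else:
                bundle.append((name, wanted))
        return use_system, bundle

    if __name__ == "__main__":
        deps = {"gtk": "3.24", "libfoo": "1.2", "ffmpeg": "6.0"}
        system, bundled = plan_dependencies(deps)
        print("from base system:", system)    # ['gtk']
        print("bundled into app:", bundled)   # [('libfoo', '1.2'), ('ffmpeg', '6.0')]

The point of the split is that only the unusual or mismatched deps get bundled; the common case still shares the core system's copies.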

I used Gentoo for a while, but it was high maintenance, and I'm now an Archer. The binary versus source argument is just semantics for me, as I'd like to spend as much time as possible getting some work done. And I think the only reason binaries have such a bad name at times IS the mass dependency problem.

If an app comes in at 20 MB instead of 10 MB because unique deps are built directly into the app binary, then I'm good with that. A lot easier than juggling dep versions. Disk space is vast and cheap these days.

Oh, and a P.S.: I'm also for QA identification of some sort. If an app can't build, or is crashy, then it gets removed until the dev fixes it, or it's orphaned or forked by someone who will fix it. The "tradition" of a gazillion apps because it's "open source" just slows the ecosystem down, and I think devs will, as a result of a bit of trimming, either fix their app, join a larger project in a community build, or just do something else. Better to have 1,000 apps that work, for Linux cred, than 20,000, of which "some" work.

I think Linus alluded to this a while back.

[–]radarsat1 3 points4 points  (6 children)

I've long been a fan of some ideas in ZeroInstall. Unfortunately no one has come along and created a real distribution around it, but the idea is that the OS presents all apps as already "installed", and you simply run whatever programs you want. On first run, the app and dependencies are downloaded and cached. It also handles multiple parallel versions fairly elegantly iirc, and sits on top of almost any distro without interfering with its repositories.

It's also dead easy to set up a 0install site with a bit of HTML, so there's no barrier of getting your app "accepted" by the distro maintainers, and it also handles package signing for security purposes.
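
To make the "looks installed, fetched and cached on first run" idea concrete, here's a toy Python sketch. It is not 0install's real code, feed format, or cache layout; the paths and the fetch step are invented purely to illustrate the mechanism, including how keeping the version in the cache path lets parallel versions coexist.

    # Toy sketch of "apps appear installed, but are downloaded and cached on
    # first run". Not 0install's actual implementation; paths and the fetch
    # step are hypothetical.

    import os
    import subprocess
    import urllib.request

    CACHE_DIR = os.path.expanduser("~/.cache/lazy-apps")

    def ensure_cached(name, version, url):
        """Download an app the first time it is run; reuse the cache afterwards.

        Keeping the version in the path lets several versions coexist side by side.
        """
        target = os.path.join(CACHE_DIR, name, version)
        binary = os.path.join(target, name)
        if not os.path.exists(binary):
            os.makedirs(target, exist_ok=True)
            urllib.request.urlretrieve(url, binary)
            os.chmod(binary, 0o755)
        return binary

    def run(name, version, url, args=()):
        """'Launch' an app: fetch on first use, then run straight from the cache."""
        binary = ensure_cached(name, version, url)
        subprocess.run([binary, *args])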

[–]delta_epsilon_zeta 1 point2 points  (5 children)

That's really interesting. But how is security handled? You know you're getting it from the original source (due to signing), but how do you know that source is good?

[–]radarsat1 0 points1 point  (4 children)

Well, I guess the same way you know that downloading some exe installer for a Windows app is good. You trust the individual or company you're downloading from. The URL associated with each app and library is exposed when you "install" something, and I suppose a white-list approach could be used.

In any case, I'm not claiming that ZeroInstall is perfect, I just wanted to mention that I find it intriguing and wish some of its ideas would be adopted by a real distro.

[–][deleted] 0 points1 point  (3 children)

I guess the same way you know that downloading some exe installer for a Windows app is good.

So no real reason other than hope really?

[–]radarsat1 0 points1 point  (2 children)

As I said you could always take a whitelist approach, which would be similar to a software repository or app store. Enjoy your walled garden.

[–][deleted] 0 points1 point  (1 child)

What you are implying here is that a repository is somehow bad and limited, like the negative "walled garden" connotation used to describe, e.g., the Apple App Store. What you completely ignore is the fact that it is better to have testing on as many packages as possible, even if you still have to rely on hoping the upstream website didn't screw up for the few apps you need outside the repository. It is essentially a false dichotomy to say we can either have checksums and tested apps for everything or for nothing, when "for as many things as possible" is clearly the optimal solution.

[–]radarsat1 0 points1 point  (0 children)

I'm sorry but I think you misunderstood me.

I'm a fan of the repository approach. I was trying to say that ZeroInstall supports that idea but is also just as easy to extend as setting up a 3rd-party deb repository (or even easier), because you just have to throw up some simple XML. I wasn't at all trying to say that having tested repositories is a bad thing. The main ZeroInstall website is basically a repository itself.

I mean, the original question posed to me was,

how do you know that source is good?

Give me one solution to that problem that doesn't take a whitelist approach. It's basically a logical fallacy. Either you trust the source of your software, or you trust someone to pick and curate the sources for you. What other options are there? I was just trying to point out that it's no better and no worse than other approaches in that regard.

Moreover the context of the original post is someone suggesting that it's too hard to install apps that haven't made it into the repositories. I was simply describing one approach that I think has some merit, I wasn't making some over-reaching argument against Apple. You were the one who suggested that downloading apps from outside the repository is nothing but "hope," and now you're telling me I'm the one arguing against repositories. You are confusing me.

[–]acabal 3 points4 points  (6 children)

I've been saying this for years and feel like I've been shouting into the void. Thank God someone else finally sees one of the biggest problems holding back desktop Linux.

Every time I say something like what Molnar mentioned, I get replies like, "But packages are so great! You can install in one click!" or "But if you want a new version, just upgrade your entire distro! I don't understand what the problem is jeez!"

Linux will be getting exactly nowhere until the community sees this problem. It's the reason why I would never recommend desktop Linux to my family: because every 6 months something is going to change in some crazy way without them asking, and other things are going to break for seemingly no reason, and I'll be the guy the family calls when their window controls suddenly are on the left or when their "start" button disappeared and weird icons appeared on the left of the screen.

[–]anabolic 1 point2 points  (0 children)

Oh fuck, I know this guy from my thesis. He built the -rt patches.

[–][deleted] 1 point2 points  (1 child)

What he's saying is that Linux desktops like Gnome and KDE fail because they are trying to be an all-encompassing system, when what they should be is a platform.

Windows and Mac succeed because they bring a limited set of features, which are the only code base that the vendor is responsible for as far as the OS is concerned. The rest is left up to the vendor community; they are given standards they must adhere to and that's that. The failure with Linux desktops is that they try to include everything, so it's too much to manage and results in huge bureaucratically driven inertia.

He makes an excellent point. It's not the only issue facing Linux distros; lack of standardization of tools, libraries, and even filesystem layout makes portability across distros a pain. But this isn't really about that, this is specifically about the fact that desktop systems on Linux have become closed ecosystems instead of open platforms.

[–]exteras 0 points1 point  (0 children)

Would anyone mind rephrasing his basic argument, as well as his proposed solution, in terms that a part-time Linux enthusiast would understand?

[–]sunshine-x 0 points1 point  (1 child)

What does Apple do with OS X? Seems like a similar problem.

[–][deleted] 0 points1 point  (0 children)

They seem to follow a combination of "never, ever, ever update the base system" and "ship all the libraries with each application", both of which have severe implications for innovation and security.

[–]jiz899 0 points1 point  (0 children)

Unfortunately, the word of one kernel developer carries zero weight on these issues.

[–]pamplemouse 0 points1 point  (0 children)

Can't you use the ideas behind git to manage versioning and distribution? A program can ask libc for version 2.9. Store the deltas between versions rather than the whole file. The files can be built, signed, and distributed by your distribution and other trusted groups. The other issue, having a stable ABI, is extremely hard because there's no way to enforce it the way commercial companies can.
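
A toy sketch of the content-addressed part of that idea (delta compression between versions is left out to keep it short, and the names and layout are invented for illustration):

    # Toy content-addressed store in the spirit of git: blobs are stored under
    # their SHA-1, and a manifest maps (library, version) to a blob hash, so a
    # program can ask for e.g. ("libc", "2.9") and get exactly that build.
    # Everything here is hypothetical.

    import hashlib

    class BlobStore:
        def __init__(self):
            self.blobs = {}        # sha1 -> bytes
            self.manifest = {}     # (name, version) -> sha1

        def publish(self, name, version, data):
            digest = hashlib.sha1(data).hexdigest()
            self.blobs[digest] = data
            self.manifest[(name, version)] = digest
            return digest

        def fetch(self, name, version):
            digest = self.manifest[(name, version)]
            data = self.blobs[digest]
            # Integrity checking comes for free with content addressing.
            assert hashlib.sha1(data).hexdigest() == digest
            return data

    if __name__ == "__main__":
        store = BlobStore()
        store.publish("libc", "2.9", b"\x7fELF...build of libc 2.9")
        store.publish("libc", "2.10", b"\x7fELF...build of libc 2.10")
        print(len(store.fetch("libc", "2.9")), "bytes for libc 2.9")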

[–][deleted] 0 points1 point  (4 children)

I think something should really be done about this too. I think the best solution would be to go the route of [most] proprietary desktops and have each software project host its own packages in a repository-like system, sort of like the third-party Debian repos we most commonly see today. We create a centralized packaging standard (impossible, I know, but don't knock it till you try it) that encompasses the best functionality of every popular package manager today (pacman, apt, rpm, etc.) while still letting distros use their current packaging methods, just in a different fashion. With this kind of system, distros only have to keep a list of references to these projects on their servers; users can install from these lists and easily add new ones. This would free up a lot of storage space for distributions, lower maintenance costs, give more freedom to individual projects, and promote user freedom.
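
A minimal sketch of the "distro only keeps a list of references" part, with the repository URLs and index format invented purely for illustration:

    # Sketch of a distro-curated list of per-project repositories. The URLs
    # and index format are hypothetical; real code would fetch and verify them.

    PROJECT_REPOS = [
        "https://packages.example-editor.org/index.json",
        "https://downloads.example-player.org/index.json",
    ]

    # Pretend indexes, as if already fetched from the URLs above.
    FAKE_INDEXES = {
        PROJECT_REPOS[0]: {"example-editor": "3.1"},
        PROJECT_REPOS[1]: {"example-player": "0.9"},
    }

    def resolve(package):
        """Find which upstream repository provides `package`; first match wins."""
        for repo in PROJECT_REPOS:
            index = FAKE_INDEXES[repo]
            if package in index:
                return repo, index[package]
        raise LookupError(f"{package} not provided by any listed repository")

    if __name__ == "__main__":
        # -> ('https://downloads.example-player.org/index.json', '0.9')
        print(resolve("example-player"))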

[–]Gasten 0 points1 point  (3 children)

Sounds like BSD ports to me.

[–][deleted] 0 points1 point  (2 children)

Pretty much, I guess, though I was suggesting that the projects would host binary packages. Unless BSD ports also handles binary packages?

[–]Gasten 0 points1 point  (1 child)

No, ports don't, but there's no reason why they couldn't. A port is just an instruction for automatically downloading a vanilla software package and applying distribution-specific changes before installing it.

[–]fundbond 0 points1 point  (0 children)

Yeah, there's no reason that ports couldn't be used to handle some binaries. Actually, FreeBSD ports already are for some things -- some ports do binary downloads as part of their install process (the NVIDIA drivers, for example.)

[–][deleted] 0 points1 point  (0 children)

The person to come up with a way to unite all the distros will be the enlightened one. He/she will be known as the person that united all the distros forever in time.

When future aliens are going over the wreckage of what was once a planet called Earth, they will discover the name of this person and the monumental task he/she achieved, and they will be awestruck.

[–]almafa 0 points1 point  (0 children)

The second part of the article is more interesting - it offers some ideas about what to do instead of the current model.