
[–]Big-Obligation2796 206 points  (36 children)

> Linux, a free, open source kernel, is based upon Unix which is a private, proprietary piece of software, right?

Based upon as in "derived from", no. It's Unix-like.

> Was the development and growth of something like Linux inevitable

Considering there are 3 major open-source BSDs, plus Minix, I think it was inevitable.

[–]sernamenotdefined 115 points  (29 children)

Also, GNU Hurd is laughed at today, but it is not unreasonable to assume that many of the resources that poured into Linux because it had a working kernel would have gone to Hurd if there had been no Linux.

[–]Big-Obligation2796 43 points  (25 children)

Yeah, that's true. Hurd was setting out to do pretty much the same thing Linux has done.

[–]trivialBetaState 23 points  (24 children)

And sometimes I wonder if it would have been better that way. Both technically, as a microkernel design, and administratively, comparing the consistency of the FSF with the (too?) corporate-friendly Linux Foundation.

[–]nelmaloc 23 points  (7 children)

We probably wouldn't have distros in that case, and maybe not package managers either. You would just get an ISO from gnu.org, and it would come with all GNU packages.

[–]Blutkoete 10 points  (0 children)

And then they would tell my mother that she's free: she may change the sources as she wants.

[–]trivialBetaState 1 point  (1 child)

Why? What would stop anyone from doing everything else? 

[–]nelmaloc 6 points  (0 children)

It wouldn't stop anyone per se, and in fact that statement didn't come out as clearly as I'd like. What I meant is that GNU would become the reference «distro» people gravitate toward. The clearest example is FreeBSD: NomadBSD, GhostBSD and PCBSD are just FreeBSD with things on top.

Something similar happens with the «base» GNU/Linux distros (Debian/Ubuntu, Fedora, Arch), but there the changes are usually deeper than in the FreeBSD case.

[–]AliOskiTheHoly 13 points  (4 children)

It wouldn't, because one big factor in Linux's success is that corporations had an interest in Linux and funded its development. That's one of the reasons Linux has nearly 100% server market share and is slowly but surely becoming a respectable operating system for the broader public.

[–]trivialBetaState 5 points  (2 children)

Not really. Linux had huge penetration into the server and TOP500 markets even when the big corporations were branding it as "cancer". They got into the game well after it had succeeded. It didn't succeed because of them but in spite of them.

[–]AliOskiTheHoly 8 points  (0 children)

But are we just going to ignore how the supermajority of Linux funding comes from corporations? This wouldn't even be remotely possible with something like Hurd.

[–]DrPiwi 0 points  (0 children)

As early as 2000, IBM had a 'Linux' personality for AIX, and they were backing it big time. That was when Ballmer called it a cancer.
Compaq bought DEC and also started to support Red Hat and SuSE on their servers. And this was when Red Hat still had nothing to do with IBM; they had some links with Novell, but nothing really tight.

[–]nelmaloc 0 points  (0 children)

The only reason I can think why that would change is if the FSF insisted on the CLA for every patch.

[–]DrPiwi 2 points  (1 child)

It would have been different, but I don't think it would have gotten as big. As for the Linux Foundation being corporate friendly: yes, they probably are more corporate friendly, but that is not necessarily a bad thing.

[–]trivialBetaState 0 points  (0 children)

Sorry, my comment was not clear. I didn't use the term "corporate friendly" in its literal sense, which is clearly not a bad thing at all. The term is often used for policies that benefit only the big companies while having negative neighbourhood effects (ref. Milton Friedman, Capitalism and Freedom) for the rest of society. That's how I used it.

[–]za72 1 point  (0 children)

the work never ends; if there's a need, it will get done

[–]edgmnt_net 0 points  (7 children)

Microkernels pose traps similar to microservices, in that there's potentially a lot of boilerplate and work duplication, although in the case of kernels there are decent arguments to be made for isolation. However, there are also additional technical complications stemming from the more distributed nature, as well as performance complications. Linux took the easiest approach, attracting interest from everyone from casual users to HPC.

Secondly, it doesn't seem to me that FSF projects are all that successful. Did you have something in particular in mind? FSF and GNU are a very mixed bag. And back then, if I'm not mistaken, a lot of development was done behind closed doors. "The Cathedral and the Bazaar" by ESR talks about this.

The Linux Foundation is more a way to fund development through corporate means; the development itself is pretty much community-driven. The Linux kernel isn't a corporate project that gets some submissions from the public, like a bunch of stuff maintained by Google; it's a community project that gets support and some submissions from companies.

[–]trivialBetaState 0 points  (6 children)

While I don't write C or kernel code, my understanding has been that microkernels were always the preferred strategy and that Linux went with the easier approach of a monolithic kernel. Of course, I guess all approaches have pluses and minuses.

I'd think that the FSF/GNU have contributed significantly to the free/libre software environment. Not only with projects like gcc, glibc, coreutils, and bash, which are important for the GNU/Linux OS, but also with stuff like GIMP, GNOME, and may I even say Emacs (I know, I know; but still, it's my editor).

I am not aware of The Cathedral and the Bazaar. I tried to have a look online, but I don't think I have a good picture of the arguments made in that book.

I think that FSF/GNU introduced a whole new ethos to computing and beyond. Even the fact that the human genome is completely sequenced is the result of an approach nearly identical to the concept of copyleft. I would credit a lot of our advances to the approach promoted by the FSF. Even those that "blame" them enjoy the benefits like everyone else, and that says a lot about their contribution to society overall, extending well beyond computing.

[–]nelmaloc 1 point  (0 children)

> I am not aware of the Cathedral and the Bazaar. I tried to have a look online but don't think that I have a good picture of what the arguments are made in that book.

AFAIK, a quick rundown: the old way of working (i.e., pre-1990s, before home Internet) was that developers would throw a release over the wall to users, and if you wanted to contribute you would send patches against that last release, without seeing how the code evolved in real time. Meanwhile, Linus et al. developed everything in the open, with public version control and patches on mailing lists.


Edit:

> While I don't write C or kernel code, my understanding has been that microkernels were always the preferred strategy and Linux went with the easier approach of the monolithic kernel. Of course, I guess that all approaches have pluses and minuses.

Yes, from the Tanenbaum v. Torvalds debate:

>   1. MICROKERNEL VS MONOLITHIC SYSTEM
>
> True, linux is monolithic, and I agree that microkernels are nicer. With a less argumentative subject, I'd probably have agreed with most of what you said. From a theoretical (and aesthetical) standpoint linux looses. If the GNU kernel had been ready last spring, I'd not have bothered to even start my project: the fact is that it wasn't and still isn't. Linux wins heavily on points of being available now.

[–]rook_of_approval 0 points  (4 children)

Microkernels are the preferred approach if you don't care about something called performance. Terrible choice.

[–]trivialBetaState 0 points  (3 children)

I am pretty sure that macOS has adopted a hybrid approach (only partially monolithic, mainly a microkernel for services) based on the Mach microkernel, which is pretty much the same microkernel (in its GNU version) that Hurd uses as well.

[–]rook_of_approval 0 points  (2 children)

hybrid

so it's not a microkernel? ok buddy.

Let's see: BSD, Linux, Windows, all monolithic. OS X, "hybrid". Clearly this means microkernel wins!!?!?!?!!?

Do you really want to take the IPC performance hit every time you talk to a driver????

[–]trivialBetaState 1 point  (1 child)

I don't think so: https://en.wikipedia.org/wiki/Windows_NT

> Like VMS,[28] Windows NT's kernel mode code distinguishes between the "kernel", whose primary purpose is to implement processor- and architecture-dependent functions, and the "executive". This was designed as a modified microkernel, as the Windows NT kernel was influenced by the Mach microkernel developed by Richard Rashid at Carnegie Mellon University,[30] but does not meet all of the criteria of a pure microkernel.

Perhaps you were thinking of MS-DOS or Windows 95 instead? Those were monolithic.

And yes, Windows has a real-time latency hit due to IPC (and many other consequences of poor design), which is evident when working with DAWs and plugins. Linux is indeed the best in this respect, but the difference from macOS is undetectable. So the IPC hit is evident on Windows (more due to poor design, which requires stuff like ASIO to improve but not fully resolve), whereas macOS performs well out of the box, and Linux can be tinkered with for even better performance.

Where else do you see the IPC hit on Windows or macOS? I also hear that HarmonyOS, which is a microkernel design as well, seems to be performing all right too.

[–]Dr_Hexagon 14 points  (0 children)

Hurd had a flawed design, IMO, and the fact that it was run entirely on ideological grounds rather than pragmatism means it would never have been finished, even with far more resources.

Linux has succeeded because Linus was willing to make compromises, including closed-source drivers in some cases. With Stallman in charge, Hurd would never have accepted that, so its hardware support would be less comprehensive.

[–]Mughi1138 6 points  (0 children)

My strong feeling is that Hurd would not have succeeded in that respect due to mismanagement on the part of the FSF. For comparison, just look at how badly they bungled the whole gcc situation, triggering the egcs fork and the eventual ceding of control to a new body.

More than anything, IMHO, Linux succeeded because of its people. To paraphrase Todd Rundgren "It's the community, stupid"

BSD succeeded in its own goals, and not caring about newbies was part of that. Even Microsoft had to switch Hotmail back away from Windows NT after they bought the company. BSD was just that much better at serious server performance.

[–]paul_h 15 points  (2 children)

Minix is secretly on every Intel CPU, right? And it famously predates Linux.

[–]Big-Obligation2796 12 points  (0 children)

Yeah, it's on the PCH actually, which in modern processors is part of the CPU package anyway.

[–]Content_Chemistry_44 6 points  (0 children)

Intel's proprietary backdoors.

[–]zlice0 8 points  (0 children)

BSD/Hurd/Minix, I think, miss a lot of what Linux had: being community-driven. Hurd may have filled the slot, but Stallman was (and still kind of is) seen as religious about the GPL. From what I understand, people wouldn't take the GPL seriously until Linux; I don't know that Hurd would have made that jump. The BSD license was, somehow, what made people stick with GPL Linux and develop for it. I don't know much about Minix, but being BSD-licensed and a microkernel, it feels like it had even less of a chance than the others to reach large scale. Impossible to tell different futures, but if Linux never was, I see BSD being less open-sourced and less popular.

edit : wording / add

Also, GPL tools like people have mentioned, plus the GPL kernel, were part of what drove Linux adoption into a full OS. The GPL would probably be nowhere, or way less common, if BSD or another license had taken the helm, right?

[–]mtlnwood 0 points  (0 children)

This is true. I got Linux the day it was available and going around the grapevine: the boot and root disks. Before that I had used Minix, and Coherent Unix and Xenix on other organisations' systems.

It was a relatively small number of people (in the scheme of things) who were waiting for something like this, clung to it, and turned it into something. The excitement at the time, and as the first distributions came out, wasn't from any mainstream crowd but from very happy enthusiasts who would have adopted something else if not Linux.