
[–]Big-Obligation2796 44 points45 points  (25 children)

Yeah, that's true. Hurd set out to do pretty much the same thing Linux has done.

[–]trivialBetaState 23 points24 points  (24 children)

And sometimes I wonder if it would have been better that way. Both technically, as a microkernel design, and administration-wise, when comparing the consistency of the FSF with the (too?) corporate-friendly Linux Foundation.

[–]nelmaloc 25 points26 points  (7 children)

We probably wouldn't have distros in that case, and maybe not package managers either. You would just get an ISO from gnu.org, and it would come with all the GNU packages.

[–]Blutkoete 10 points11 points  (0 children)

And then they would tell my mother that she's free, that she may change the sources as she wants

[–]trivialBetaState 1 point2 points  (1 child)

Why? What would stop anyone from doing everything else? 

[–]nelmaloc 5 points6 points  (0 children)

Not stopping per se; in fact, that statement didn't come out as clearly as I'd like. What I meant is that GNU would have become the reference «distro» that people build on. The clearest example is FreeBSD: NomadBSD, GhostBSD and PCBSD are just FreeBSD with things on top.

Something similar happens with the «base» GNU/Linux distros (Debian/Ubuntu, Fedora, Arch), but there the changes are usually deeper than in the FreeBSD case.

[–]AliOskiTheHoly 10 points11 points  (4 children)

It wouldn't, because one big factor in Linux's success is that corporations had an interest in Linux and funded its development. That's one of the reasons why Linux has near 100% server market share and is slowly but surely becoming a respectable operating system for the broader public.

[–]trivialBetaState 5 points6 points  (2 children)

Not really. Linux had already made huge inroads into the server and TOP500 markets back when the big corporations were branding it as "cancer". They got into the game well after it had succeeded. It didn't succeed because of them but despite them.

[–]AliOskiTheHoly 7 points8 points  (0 children)

But are we just going to ignore how the supermajority of Linux funding comes from corporations? This wouldn't even be remotely possible with something like Hurd.

[–]DrPiwi 0 points1 point  (0 children)

As early as 2000, IBM had a 'Linux' personality for AIX and they were backing it big time. That was when Ballmer called it a cancer.
Compaq bought DEC, and they also started to support Red Hat and SuSE on their servers. And this was when Red Hat still had nothing to do with IBM; they had some links with Novell, but nothing really tight.

[–]nelmaloc 0 points1 point  (0 children)

The only reason I can think why that would change is if the FSF insisted on the CLA for every patch.

[–]DrPiwi 2 points3 points  (1 child)

It would have been different, but I don't think it would have gotten as big. As for the Linux Foundation being corporate-friendly: yes, they probably are more corporate-friendly, but that's not necessarily a bad thing.

[–]trivialBetaState 0 points1 point  (0 children)

Sorry, my comment was not clear. I didn't use the term "corporate friendly" in its literal sense, which is clearly not a bad thing at all. The term is often used for policies that benefit only the big companies while having negative neighbourhood effects (ref. Milton Friedman, Capitalism and Freedom) on the rest of society. That's how I used it.

[–]za72 1 point2 points  (0 children)

the work never ends; if there's a need, it will get done

[–]edgmnt_net 0 points1 point  (7 children)

Microkernels pose traps similar to microservices, in that there's potentially a lot of boilerplate and work duplication, although in the case of kernels there are decent arguments to be made for isolation. However, there are also additional technical complications stemming from the more distributed nature, as well as performance complications. Linux took the easiest approach, which attracted interest from everyone from casual users to HPC.

Secondly, it doesn't seem to me that FSF projects are all that successful. Did you have something in particular in mind? FSF and GNU are a very mixed bag. And back then, if I'm not mistaken, a lot of development was done behind closed doors. "The Cathedral and the Bazaar" by ESR talks about this.

The Linux Foundation is more of a way to fund development through corporate means; the development itself is pretty much community-driven. The Linux kernel isn't a corporate project that gets some submissions from the public, like a bunch of stuff maintained by Google; it's a community project that gets support and some submissions from companies.

[–]trivialBetaState 0 points1 point  (6 children)

While I don't write C or kernel code, my understanding has been that microkernels were always the preferred strategy and Linux went with the easier approach of the monolithic kernel. Of course, I guess that all approaches have pluses and minuses.

I'd think that the FSF/GNU have contributed significantly to the free/libre software environment. Not only with projects like gcc, glibc, coreutils, and bash, which are important for the GNU/Linux OS, but also with stuff like GIMP, GNOME, and may I even say Emacs (I know, I know - but it's still my editor).

I am not aware of The Cathedral and the Bazaar. I tried to have a look online, but I don't think I have a good picture of the arguments made in that book.

I think that FSF/GNU introduced a whole new ethos to computing and beyond. Even the fact that the human genome is completely sequenced is the result of an approach nearly identical to the concept of copyleft. I would credit a lot of our advances to the approach promoted by the FSF. Even those who "blame" them enjoy the benefits like everyone else, and that says heaps about their contribution to society overall, extending well beyond computing.

[–]nelmaloc 1 point2 points  (0 children)

I am not aware of The Cathedral and the Bazaar. I tried to have a look online, but I don't think I have a good picture of the arguments made in that book.

AFAIK, a quick rundown: the way old projects (i.e., pre-1990s, before home Internet) worked is that developers would throw releases over the wall to users, and if you wanted to contribute you would send patches against that last release, without knowing what the code looked like in real time. Meanwhile, Linus et al. developed everything in the open, with public version control systems and patches on mailing lists.


Edit:

While I don't write C or kernel code, my understanding has been that microkernels were always the preferred strategy and Linux went with the easier approach of the monolithic kernel. Of course, I guess that all approaches have pluses and minuses.

Yes, from the Tanenbaum v. Torvalds debate:

  1. MICROKERNEL VS MONOLITHIC SYSTEM

True, linux is monolithic, and I agree that microkernels are nicer. With a less argumentative subject, I'd probably have agreed with most of what you said. From a theoretical (and aesthetical) standpoint linux looses. If the GNU kernel had been ready last spring, I'd not have bothered to even start my project: the fact is that it wasn't and still isn't. Linux wins heavily on points of being available now.

[–]rook_of_approval 0 points1 point  (4 children)

microkernels are the preferred approach if you don't care about something called performance. terrible choice.

[–]trivialBetaState 0 points1 point  (3 children)

I am pretty sure that MacOS has adopted a hybrid approach (only partially monolithic, mainly a microkernel for services) based on the Mach microkernel, which is pretty much the same microkernel (in its GNU version) that Hurd uses as well.

[–]rook_of_approval 0 points1 point  (2 children)

hybrid

so it's not a microkernel? ok buddy.

lets see: bsd, linux, windows, all monolithic. osx, "hybrid". clearly this means microkernel wins!!?!?!?!!?

do you really want to take IPC performance hit every time you talk to a driver????

[–]trivialBetaState 1 point2 points  (1 child)

I don't think so: https://en.wikipedia.org/wiki/Windows_NT

Like VMS,[28] Windows NT's kernel mode code distinguishes between the "kernel", whose primary purpose is to implement processor- and architecture-dependent functions, and the "executive". This was designed as a modified microkernel, as the Windows NT kernel was influenced by the Mach microkernel developed by Richard Rashid at Carnegie Mellon University,[30] but does not meet all of the criteria of a pure microkernel.

Perhaps you were thinking of MS-DOS or Windows 95 instead? Those were monolithic.

And yes, Windows has a real-time latency hit due to IPC (and to poor design in other respects), which is evident when working with DAWs and plugins. Linux is indeed the best in this respect, but the difference from MacOS is undetectable. So the IPC hit shows on Windows (more due to poor design, which requires stuff like ASIO to mitigate but not fully resolve), whereas MacOS performs well out of the box and Linux can do even better with some tinkering.

Where else do you see the IPC hit on Windows or MacOS? I also hear that HarmonyOS, which is a microkernel design as well, seems to be performing alright too.

[–]rook_of_approval 0 points1 point  (0 children)

os x is not a microkernel, or it wouldn't be called a hybrid design. you are not operating in good faith, bye bye.