Better to revisit the monolithic vs. microkernel debate
https://en.wikipedia.org/wiki/...
"The Tanenbaum-Torvalds debate was a written debate between Andrew S. Tanenbaum and Linus Torvalds, regarding the Linux kernel and kernel architecture in general. Tanenbaum, the creator of Minix, began the debate in 1992 on the Usenet discussion group comp.os.minix, arguing that microkernels are superior to monolithic kernels and therefore Linux was, even in 1992, obsolete.[1] The debate has sometimes been considered a flame war.[2]"
This implementation-language conundrum facing the Linux kernel is just one more reason why microkernels (or other layered approaches like VMs) make sense for reliability and flexibility. With a microkernel, the implementation language of 99% of the core underlying system (e.g. device drivers) is a non-issue as far as the kernel maintainers are concerned, because those components run as ordinary user-space processes that talk to the kernel over a message-passing interface rather than being linked into the kernel itself.
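To make that concrete, here is a minimal sketch in Rust of the idea (the message types and driver loop are hypothetical illustrations, not any real microkernel's API; an in-process channel stands in for the kernel's IPC boundary): the driver is just a process serving requests over a channel, so the kernel never cares what language it was written in.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical IPC messages a microkernel might route to a user-space driver.
// In a real system (Minix 3, seL4, QNX) these would cross a kernel-mediated
// IPC boundary; std::sync::mpsc merely simulates that here.
enum Request {
    Read { block: u64, reply: mpsc::Sender<Vec<u8>> },
    Shutdown,
}

// The "driver": an ordinary user-space server loop. It could be written in
// C, Rust, or anything that can exchange messages, which is exactly why its
// implementation language is a non-issue for the kernel maintainers.
fn run_driver(rx: mpsc::Receiver<Request>) {
    for msg in rx {
        match msg {
            Request::Read { block, reply } => {
                // Fake device read: a small buffer tagged with the block number.
                let data = vec![block as u8; 4];
                let _ = reply.send(data);
            }
            Request::Shutdown => break,
        }
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let driver = thread::spawn(move || run_driver(rx));

    // A "client" asks the driver for block 7 via the IPC channel.
    let (reply_tx, reply_rx) = mpsc::channel();
    tx.send(Request::Read { block: 7, reply: reply_tx }).unwrap();
    println!("{:?}", reply_rx.recv().unwrap()); // prints [7, 7, 7, 7]

    tx.send(Request::Shutdown).unwrap();
    driver.join().unwrap();
}
```

A crash or bug in this driver kills one process, not the whole system, and swapping it for a rewrite in another language changes nothing on the kernel side.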
See also "Simple Made Easy":
https://www.infoq.com/presenta...
"Rich Hickey emphasizes simplicity's virtues over easiness', showing that while many choose easiness they may end up with complexity, and the better way is to choose easiness along the simplicity path."
The Linux kernel is ultimately a dead end because it is too complex as a single entity. That makes it far more likely to fail catastrophically for society (e.g. via a widespread computer virus) than it would be if the core were simpler and most functionality (e.g. drivers) lived outside the kernel. The monolithic design prioritizes speed, which effectively deprioritizes reliability and security. Given hardware speed advances, we are decades past the point where that tradeoff makes sense for most use cases (and where speed really matters, dedicated hardware or GPUs running outside the kernel are the answer anyway). That old priority also has an opportunity cost: all the growing time and effort spent managing Linux security risks (e.g. constant updates) is time not spent optimizing the performance of a microkernel and its drivers and applications, or advancing hardware design.
Example:
"Understanding Linux kernel vulnerabilities"
https://link.springer.com/arti...
"Protecting the Linux kernel from malicious activities is of paramount importance. Several approaches have been proposed to analyze kernel-level vulnerabilities. Existing studies, however, have a strong focus on the attack type (e.g., buffer overflow). In this paper, we report on our analysis of 1,858 Linux kernel vulnerabilities covering a period of Jan 2010-Jan 2020. We classify these vulnerabilities from the attacker's view using various criteria such as the attacker's objective, the targeted subsystems of the kernel, the location from which vulnerabilities can be exploited (i.e., locally or remotely), the impact of the attack on confidentiality, system integrity and availability, and the complexity level associated with exploiting vulnerabilities. Our findings indicate the presence of a large number of low-complexity vulnerabilities. Most of them can be exploited from the local system, leading to attacks that can severely compromise the kernel quality of service, and allow attackers to gain privileged access"
Almost 2,000 vulnerabilities over the past decade -- and because the design is monolithic, a flaw in any driver or subsystem runs with full kernel privileges, so each one is a potential compromise of the entire system rather than of a single confined user-space server.
tl;dr This debate should be about kernel architecture, not kernel implementation language.