That is the other thing. Some physicist did an estimation of the most efficient way to do massively parallel computation, taking into account node speed, communication speed, interconnect length, etc. Turns out the human brain is pretty much optimal in this universe; anything larger or with faster nodes or the like will perform worse. So it is entirely possible that human intelligence (such as it is in the average case) is really the best possible.
Exactly. There is not even any credible theory that explains how intelligence could be created. "No theory" typically means >> 100 years in the future and may well be infeasible. It is not a question of computing power or memory size, or it would have long since been solved.
Well, maybe he just realizes that it is unlikely we will get AI like that any time soon, and probably never. If you follow the research in that area for a few decades, that is the conclusion you come to. AI research over-promises and under-delivers like no other field. (Apologies to the honest folks in there, but you are not the ones visible to the general public.)
Seriously, stop redefining my statement. You have no clue whether the threads in my example were needed or bad design. And no, it was not the OS.
It may also turn out that when systemd-created problems begin to mount (as they are likely to do), this situation will change.
It is also a good model for numerous other things. Of course, sometimes you have to deviate. An excellent rule is that unless you have a very good reason, you must not deviate. "Faster boot" is not such a reason.
Well said. One problem here is that systemd significantly increases complexity while decreasing the flexibility and control of the system administrator. That can still result in a good outcome, but not with these people in control, and it already shows in numerous places.
At this time, that is doubtful. Initially, yes, but remember that he does the kernel, not the user-space, and it shows. (The user-space was and is mostly GNU, not Linux.) True, he does the kernel rather well, and I have no doubt systemd would never have had any chance as a kernel module or the like, but really, once you reach user-space, Linus becomes an advanced user who is, for example, lacking experience administering large installations.
Good to know. I have been contemplating that for a while for my headless systems. The network-stack is actually much better than what Linux uses and so is the firewall code.
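For anyone who has not looked at the BSD side, the firewall in question is pf, which is configured through a single declarative file. A minimal sketch for a headless server, in the style of OpenBSD's pf.conf (the interface name and open port here are assumptions, adjust to your system):

```
# /etc/pf.conf -- minimal sketch for a headless server
# "em0" is an assumed external interface name
ext_if = "em0"

set skip on lo                  # do not filter loopback traffic
block in all                    # default deny for inbound traffic
pass out all keep state         # allow all outbound, statefully

# allow inbound SSH on the external interface
pass in on $ext_if proto tcp to port 22 keep state
```

The default-deny-then-permit structure is the usual way pf rulesets are written; last matching rule wins unless `quick` is used.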
Or stated differently: Most people are morons. Do not make majority-decisions if you want quality, reliability, etc.
People have reported corrupt log files. The result is that all the data in them is unrecoverable. The complaints have been answered with 'works as designed'.
That may be intentional. In fact they confirm it is. What better way for an attacker to cover his tracks after a successful break-in than logs that can credibly end up corrupted on their own?
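One mitigation that comes up in these discussions is keeping a plain-text copy of everything next to the binary journal. journald's documented `ForwardToSyslog=` option hands every message to a traditional syslog daemon as well; a sketch of the relevant part of /etc/systemd/journald.conf, assuming a classic syslogd is installed alongside:

```
# /etc/systemd/journald.conf -- sketch: keep the binary journal,
# but also forward every message to a traditional syslog daemon
[Journal]
Storage=persistent        # keep the binary journal on disk
ForwardToSyslog=yes       # additionally pass messages to syslog
```

That way a corrupted journal file does not take the only copy of the logs with it.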
When things are right, it works as intended. When things are bad, it can go far off the rails.
The hallmark of a system that has gotten far more complex than it has any business being. I foresee that the standard answer to Linux system problems will be to "reinstall". I think I have heard that utterly primitive and anti-intellectual advice somewhere else before...
There is some suspicion that the main goal in systemd is to make the init-system vulnerable to well-funded attackers. Especially the binary logs are a huge red flag.
Sendmail is solved? Not from what I can see. I took a look at it again a few years back and decided that Postfix was a much more sane option.
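Part of what "more sane" means in practice: a working Postfix setup needs only a handful of documented parameters in main.cf, versus Sendmail's m4-generated configuration. A sketch of a send-only host (the hostnames are placeholders, not a recommendation):

```
# /etc/postfix/main.cf -- sketch: send-only configuration
myhostname = host.example.org        # placeholder hostname
mydestination =                      # accept no local mail domains
inet_interfaces = loopback-only      # listen on localhost only
relayhost = [smtp.example.org]       # placeholder upstream relay
mynetworks = 127.0.0.0/8             # trust only local clients
```

Every one of these is a standard parameter described in postconf(5); there is no macro preprocessing step.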