Comment Re:And again (Score 1) 928
There's NOTHING wrong with script files.
Using script files to set up and manage daemons, because the init system is as primitive as SysVinit, is simply a bad idea.
I'm sorry, but just saying "X is 'primitive'" isn't actually very useful. There is nothing primitive about scripts.
Also, this isn't about SysVinit; this is about systemd. Picking at the one isn't an argument for the other. In fact SysVinit is pretty sophisticated in its own way, but it's fine if we improve it or replace it with something BETTER. Again, just pointing out its problems isn't automatically enough to make systemd the correct alternative. I've pointed out negatives about the systemd approach; you need to address those.
Mixing executable code and declarative config statements, like script-based init systems do, is simply indefensible; it makes the result hard to parse for both humans and machines.
There are two problems with this statement. First of all, properly written init scripts, à la Red Hat, put all their config in separate declarative files (sourced from /etc/sysconfig), not in the script itself.
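A minimal sketch of that Red Hat convention (all names and paths here are illustrative, and a fake config file keeps the demo self-contained): the script holds the logic, while a separate sourced file holds only declarative settings.

```shell
# Hypothetical sketch of the convention: tunables live in a purely
# declarative file that the init script sources; no code mixed in.
cat > /tmp/demo-sysconfig <<'EOF'
# stand-in for /etc/sysconfig/mydaemon -- declarative settings only
MYDAEMON_PORT=8080
MYDAEMON_OPTS="--verbose"
EOF

# The init-script side: source the settings, then act on them.
. /tmp/demo-sysconfig
echo "would start mydaemon on port ${MYDAEMON_PORT} with ${MYDAEMON_OPTS}"
# prints: would start mydaemon on port 8080 with --verbose
```

A tool (or a human) that only needs the settings can read the sourced file without parsing any script logic.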
No one would ever design an init-system these days that didn't use pure text files for daemon config. SysVinit and similar are relics from a time when computing was done in a completely different way than today.
I am not trying to be disrespectful here toward the pioneers who made various OSes back in the 1960s, 1970s and 1980s. It is just that some of the design choices they made reflected the problems of their time.
They made some simple but very flexible init systems based on shell scripts. But that simplicity just pushed all kinds of problems over to the user-space developer side, like dropping privileges when a daemon needed a low-numbered port, etc. And many people, including me, have long been of the opinion that such init systems have been obsolete for years (if not decades). Most if not all certified Unix vendors have replaced their script-based init systems (SMF and launchd were major inspirations for systemd).
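To illustrate the low-numbered-port example above: in a systemd-style unit the privilege handling can be expressed declaratively rather than scripted. This is a hypothetical sketch (daemon name and paths are made up; AmbientCapabilities= needs a reasonably recent systemd):

```ini
[Unit]
Description=Example daemon (hypothetical)

[Service]
ExecStart=/usr/sbin/mydaemon --port 80
# Run unprivileged, but keep just the capability needed to bind port 80:
User=mydaemon
AmbientCapabilities=CAP_NET_BIND_SERVICE

[Install]
WantedBy=multi-user.target
```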
Horse Petunias. Computing hasn't fundamentally changed, and the same use cases that existed back in the mid-1980s, when the init systems in use today were birthed, exist today. Believe me, I was there, and I know. A machine room today is solving the same problems it was back then. Some of the technology is different and there are obviously some new things, but my 1980s Sun Unix machines ran pretty much like my Linux servers do today. If you want to tell me that a mobile phone is quite different from a mini-computer, well, yes. However, nobody is asserting that a mobile phone should be running the same init system as a web server, except the people making everything depend on systemd....
As I've said elsewhere in this thread, you are failing to understand good overall system design and factoring if you think a large, complex program that incorporates every behavior into itself and puts only configuration options into files is 'superior'.
I do think I understand enough about how OSes work to have a qualified opinion. We just happen to disagree about some things and are having a civilized exchange of arguments.
The core of systemd is certainly a lot more complex than SysVinit and similar, and the complexity isn't avoided by using SysVinit; it is just moved into other external programs.
systemd isn't large, and all its core daemons are really lightweight when it comes to memory, CPU and other resources.
A better solution would be individual scripts which can perform all the functions required for each service and coordinate with each other, with all the commonalities between them pulled out into shared library code. This allows any level of flexibility and any initialization strategy a packager or developer requires, without forcing anyone to do anything and while avoiding large, disruptive changes in key subsystems.
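The shared-library proposal above can be sketched like this. All names here are hypothetical (real Red Hat systems used /etc/init.d/functions for the same idea), and a temp file stands in for the library so the demo is self-contained:

```shell
# Common behavior lives in one shared library file; each per-service
# script just sources it and calls the helpers.
cat > /tmp/rc-functions <<'EOF'
log_action() { echo "[init] $1 $2"; }
do_start()   { log_action starting "$1"; }  # a real version would daemonize
do_stop()    { log_action stopping "$1"; }
EOF

# A per-service "script" stays tiny, while keeping full shell flexibility:
. /tmp/rc-functions
do_start mydaemon   # prints: [init] starting mydaemon
do_stop mydaemon    # prints: [init] stopping mydaemon
```

A service that needs something unusual can override or extend any helper locally, which is the flexibility argument being made here.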
I am really not interested in whether or not you understand the Unix way of doing things, but it is a real and highly beneficial approach which you might do well to actually understand. I don't have a big issue with a lot of the functionality we're talking about here, though I think you are very much oversold on binary logging. But it doesn't all need to be bundled together in one package from which it is, in any practical sense, impossible to take what you want without being forced to swallow the whole thing.
Sure, an improved "Super" SysVinit would have been easier for many to deal with. Upstart was one such attempt. But making such an improved script-based init system is harder than it appears: why should upstream support it if it breaks compatibility or offers only small improvements? Changing how everything works for a small gain will mean little traction for such a project, which in turn means little support from upstream projects.
What makes systemd so attractive for upstream projects is that, while it changes things a lot, it also really helps them in many ways: by providing a cross-distro compatibility layer, by providing needed low-level functions like logind, and by simplifying daemon development and daemon configs. I am aware that you think systemd provides no benefit for your use case; fair enough, but it is a mistake to think other people don't have other use cases where systemd is a huge improvement over existing solutions.
I think many of the design decisions made in systemd are really sensible; if someone wanted to design a modern, general-purpose init system that scaled from embedded to supercomputers and had the same features as systemd, including backward compatibility, they would end up with a design very much like systemd.
I think it is armchair OS design to think one could have systemd's features by making many small, totally independent programs; performance would be killed by inter-process communication, and the design would be really, really fragile and complicated.
Obviously we will have to agree to disagree. As a highly experienced system architect I'll take my instincts over those of people who seem to be going up the wrong path every time. IPC isn't a big deal, and if your init system is doing enough work that 'killing performance' is even a remote possibility, then something is VERY wrong. At the most basic level there is a whole series of different problems here: starting and stopping services at system initialization/termination, managing hotplugging, general event management, and a "compatibility layer". These need not in any sense be lumped together into one solution. Respectfully, you are wrong on this. I've been doing my system engineering trade for a LONG, LONG time, since an early age, and when I know I'm right about something, I'm just right. It's not that common, but this is one of those cases, so you're going to have to do detailed exposition of every single technical point if you expect guys like me to be AT ALL convinced.
And no, I'm not really being given a choice when the only option I have is something like Arch Linux, which, no offense to its maintainers, is not something I'm going to run my bank on top of. Neither am I going to willingly convert said operation to using systemd. I've already had to deal with a lot of fallout from this stupid idea. Frankly, I didn't have any problems before that needed this solution, and the changes ARE disruptive while we've noted zero practical benefit.
There is Slackware too. But I understand what you mean; at the moment there simply isn't a stable, long-term-release Linux distro without systemd besides Slackware and its derivatives. If you run a traditional server setup, Gentoo and other rolling-release distros aren't so attractive.
It would be strange if such a distro didn't materialize at some point, however.
Yeah, I'm actually not pissed off at the people writing systemd. I'm pissed off at the people who maintain RHEL, because they showed incredibly poor judgement, and making them pay for it is going to be quite a lot of work that should never have needed to be done.