It absolutely is true for system utilities (like init). Vim is a text editor. It edits text. It does not edit graphics, init the system, act as a login daemon, or multiplex your shell. But then, it's not a system utility. Look at awk, sed, grep, less, etc. Look at getty and login. Look at screen.
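The contract these tools follow (do one thing, stay quiet on success, signal via exit code, emit plain text that pipes) can be seen in a tiny sketch; file path and contents here are made up for illustration:

```shell
# grep does one thing: filter lines. With -q it is silent and its
# exit code alone says whether a match was found, so it composes
# cleanly with if / && / || and with pipes.
printf 'getty\nlogin\nscreen\n' > /tmp/tools.txt

if grep -q '^login$' /tmp/tools.txt; then
    echo "found"            # exit code 0: match
fi

grep -q '^vim$' /tmp/tools.txt || echo "not found"   # exit code 1: no match

# Plain-text output feeds straight into the next tool:
grep 'e' /tmp/tools.txt | sort | head -n 1
```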
Exactly. But look at all the systemd "system utilities" like systemctl, journalctl and machinectl: they all work exactly like any other first-class Linux system tool. Each does one thing and does it well; they can be piped, they aren't chatty on success (and actually care a lot about exit codes), they aren't interactive, and they care about text output formatting (legends can be turned off, etc.), so they are perfectly scriptable.
The point is that all the systemd tools do everything expected of system tools according to the "Unix philosophy".
Did you know that in Debian Wheezy there are TWO init systems that work at the same time? They weren't designed to do that but because they do things the Unix way, they don't 'mind' either.
I must say I can't see a reasonable use case for this. Sounds racy in all circumstances.
But here's a "funny" where systemd is most definitely wrong (yes, I have actually been giving it a chance, I just don't like what I see). I have a VM where I have yanked a virtual disk out from under btrfs. My fstab states that I want it to mount in degraded state if necessary (such as if a disk is missing). systemd *REFUSES*, even though I explicitly commanded the action. How is that the Unix way? How is that supposed to help uptime? Thank the gods it's not a production box! Then I google why that might be, and the first post I find is someone claiming IT'S A FEATURE! So there we are: the admin and owner of the box says just do it and damn the consequences, and it refuses like a Windows box.
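For context, the intent described above is expressed in fstab with btrfs's `degraded` mount option; a sketch of such an entry might look like this (UUID, mount point and other options are placeholders, not the poster's actual config):

```
# /etc/fstab -- illustrative entry only.
# "degraded" tells btrfs to mount even if a member device is missing.
UUID=0123abcd-0000-0000-0000-000000000000  /mnt/data  btrfs  degraded,noatime  0  2
```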
Take a look at these discussions:
http://www.spinics.net/lists/l...
http://lists.freedesktop.org/a...
Basically, systemd requires manual intervention before it will boot btrfs arrays that are both /missing a disk/ and mounted in /degraded mode/.
Not a bad default really.
Anyway, in order to allow btrfs to automatically boot in degraded mode with missing disks, and to do it /correctly/, you need some extra module/script/daemon to handle it, since neither the kernel nor systemd (udev) has any knowledge of the internal state of btrfs. Nothing new there; RAID etc. have always been handled by such a daemon. I think that if you use mdadm with btrfs raid, you can automatically mount degraded arrays. The critical point is the timeout timer: a crude mechanism that needs to be set according to the particular array in question.
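As for that timeout timer: with systemd the wait can at least be tuned per mount via the `x-systemd.device-timeout=` fstab option from systemd.mount(5). A sketch, with placeholder UUID, mount point and timeout value:

```
# /etc/fstab -- illustrative; controls how long systemd waits for the
# device to appear before declaring the mount failed.
UUID=0123abcd-0000-0000-0000-000000000000  /mnt/data  btrfs  degraded,x-systemd.device-timeout=30s  0  2
```

This is still the crude-timer approach; it just makes the crudeness explicit and configurable per array.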
Bringing up a degraded array as RW risks killing the whole array, so it is not something to be done just because a drive is late in appearing.
http://git.neil.brown.name/git...
Now, just to complete the picture, do you know what journalctl told me about what was failing and why? It said the mount timed out. THAT IS ALL. Is this the system I am supposed to trust in production? The one designed by people who KNOW what they're doing?
Isn't that all you need to know to find the error?
Also, use "-x" with journalctl; it may add further info to generic error messages and even link to more information.
Anyway, systemd has excellent debugging facilities; try turning on debugging, either with "kill -56 1" from the CLI, or by setting "MaxLevelKMsg=debug" and "MaxLevelConsole=debug" in "/etc/systemd/journald.conf" and restarting (journald or the VM).
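Spelled out, those journald.conf settings go under the `[Journal]` section:

```
# /etc/systemd/journald.conf
[Journal]
MaxLevelKMsg=debug
MaxLevelConsole=debug
```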
Digging into it, I find the really sad part. It knows enough about btrfs to dig into it and discover what physical drives go with the volume label. It wasn't even attempting the mount command fstab suggested (if it had, it would have succeeded). Surely after sitting in the penalty box for a minute and a half staring at the cylon, it could have given it a try?!? Or known a bit more about btrfs and seen that I intended a degraded mount? Or known less about btrfs and just done what fstab said to do?!?
It's a sick joke.
If you read the above discussions, you will find that there is no right solution for all cases; brute-force attempts to mount with missing drives, or to bring up RAID arrays even though they may not be complete yet (a late drive will make the array degraded), have their own set of problems, and the trade-offs differ between a two-drive and a 1024-drive array. Also, it is not up to either systemd or udev to know about complicated RAID states; that should be handled by the RAID software/daemon, which can probe the internal logic and then tell init what to do (or rely on crude timers).
So this seems more like an RFE than any serious bug. Yes, I can see a use case for automatically booting degraded arrays, but it shouldn't be the default; it should be an explicit setting made by the admin, since only the admin can evaluate the risks and knows how much redundancy there is.