Once upon a time, if something failed, you booted in single-user mode.
...which means yet another reboot, which may or may not replicate the problem and allow debugging.
And you got a shell, not the "One True and Non-Replaceable Shell". Systemd takes away the flexibility to configure things optimally for your specific needs.
Per the documentation, the "login shell" that everyone complains about is just a controller command that runs /bin/sh by default, or any other command you specify.
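For reference, the recovery paths are plain kernel command-line options; this is a sketch based on the systemd man pages, and availability of each option can vary by version:

```
# Added to the kernel command line in the bootloader:

# Boot into rescue mode (roughly the old single-user mode):
systemd.unit=rescue.target

# Drop to a minimal emergency shell before most of the system comes up:
systemd.unit=emergency.target

# Spawn a debug shell on a spare VT in parallel with normal boot,
# so a hung startup can be inspected without yet another reboot:
systemd.debug_shell=1
```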
If systemd offered plug-in loggers and one of them happened to be a binary log database, that would be OK. But systemd's designers apparently lack the skills to make a simple and flexible system.
Also per the documentation, the default journal is not the only option. You can also send output to syslog, kmsg, the console, or a socket.
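Concretely, those are a few lines in journald's config file; a sketch using options named in journald.conf(5) (the values shown are illustrative, not recommendations):

```ini
# /etc/systemd/journald.conf
[Journal]
# Keep no binary journal on disk at all, if you don't want one:
Storage=none
# Hand every message to a traditional syslog daemon:
ForwardToSyslog=yes
# Optionally also copy messages to the kernel log buffer or the console:
ForwardToKMsg=no
ForwardToConsole=no
```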
Can't comment. I haven't had much to do with anything beyond the man pages.
Clearly you haven't bothered to read those much, either.
Well, in this case, it's that there was no "trial mode" for people to gradually evaluate, find bugs in, and accept or reject. Instead, all of a sudden, the familiar, functional (if imperfect) systems were gone and systemd ruled everything. Since systemd isn't as flexible as what it replaced, you couldn't fall back on the old stuff when it failed to satisfy, or as an emergency measure.
Systemd was apparently around for a year before the first distro adopted it. There's been plenty of time to review and comment, but so far all of the discussion seems to be just criticism by folks who clearly haven't bothered to read the documentation for the things they complain about.
OK. But the rate at which you "close bugs" is a meaningless metric. Were the bugs closed because repairs had been made or were they simply marked "WONTFIX"?
Again, that's no different from any other software project.
If your system is so fragile that a single server being down is that critical, maybe you need to re-evaluate your architecture.
I never said it was fragile. I said that it must be 100% operational. I work on a very particular kind of high-end processing system, running a few dozen specialized servers. There's an array of video processors, a separate array of audio processors, a number of systems just for I/O, and a small (by the vendor's standards) HPC system. Some of it runs Linux, some runs Windows, and the whole thing has to be able to come online within 15 minutes.
Again, you don't get to assume what my requirements are.
Those of us to whom such things are essential have clusters, failovers, and other HA constructs, so that the loss of a single machine doesn't hold the whole operation prisoner.
All of those options are expensive. They require additional infrastructure and upkeep, plus additional engineering to make it all work in the first place, and they raise the price tag on every system we build.
Yes, faster boot times are nice, but even at its worst, a Linux system boots significantly faster than Windows. You don't have the machine being thrashed by massive software updates and disk-burning virus checks on reboot.
Instead the system waits for fsck every month, refuses to move until it has tried to initialize every NIC on the system, won't start X until the sound system is ready, and let's not even discuss the wait if an NFS mount is unavailable. And yes, you can also install a virus scanner or configure your system to look for updates at every boot.
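For what it's worth, the NFS wait is tunable under systemd via standard fstab options; a sketch (the server and mount point here are made up), using options documented in systemd.mount(5):

```ini
# /etc/fstab
# nofail: don't block boot if the server is unreachable;
# x-systemd.automount: mount on first access instead of at boot;
# x-systemd.device-timeout: stop waiting for the device after 10 seconds.
server:/export  /mnt/data  nfs  nofail,x-systemd.automount,x-systemd.device-timeout=10s  0  0
```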
The real problem is serialized startup. If systemd makes it easier to run things in parallel, that's a good thing. Upstart handled that well, too.
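To make the parallelism point concrete: under systemd, ordering is declared per unit, and any two units without an ordering edge between them start concurrently. A made-up unit file as a sketch (the service name and binary path are hypothetical):

```ini
# /etc/systemd/system/myapp.service (hypothetical)
[Unit]
Description=Example app that only orders against the network
# Declare the one dependency that matters; everything else
# (sound, X, other daemons) starts in parallel with this unit.
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/local/bin/myapp

[Install]
WantedBy=multi-user.target
```

The design choice is that the dependency graph, not a fixed script order, determines what must wait for what; anything you don't order explicitly is fair game to run at the same time.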