I was more thinking about starting postgres before the server that uses that DB to store its stuff.
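For what it's worth, that kind of ordering maps directly onto systemd's dependency directives. A minimal sketch (the unit and service names here are hypothetical, not from any real deployment):

```ini
# /etc/systemd/system/mywebapp.service (hypothetical unit)
[Unit]
Description=Web app that stores its data in PostgreSQL
# Requires= pulls postgresql.service in whenever this unit starts;
# After= delays our start until postgres has come up. Without the
# After=, both would be started in parallel.
Requires=postgresql.service
After=postgresql.service

[Service]
ExecStart=/usr/local/bin/mywebapp

[Install]
WantedBy=multi-user.target
```

Note that Requires=/After= only order startup; they say nothing about what happens when the database dies at 3am, which is the failure-handling part discussed below.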
Allow me to go off on a tangent here, but most of the setups I deal with keep databases and webservers on separate machines. Today we've got virtual machines coming out of our collective asses, so even developers are starting to separate their services from each other. What you're talking about is dependencies and how to deal with the failure of a service. That goes far beyond the scope of what systemd can provide, since it's so cheap and easy to run stuff on different (virtual) machines these days. However, I realize that you're using this as an example, so I won't press the matter further, other than to say this is about the poorest example you could choose.
But it does speak to part of the problem. The greatest advantage I continuously hear claimed for systemd is that it solves the "complex boot order" dependency problem, parallelizes boot scripts and decreases boot time. To me, as a sysadmin: I DON'T care. I'm pretty sure that 90% of the servers I run spend longer in POST than they do booting. If I reboot something, I'm taking it OFFLINE. I'm not sitting there crossing my fingers going "please come up quickly"; I've got a failover setup, and another machine has taken over long before the rebooted server is back up. And "complex boot order", really? Is that really that big of a concern? I really hope you don't have to manage 500+ servers then, because you'll be in for a surprise on complexity.
This brings me to interesting questions that nobody seems to talk about. If you're doing failover, you're bound to end up with heartbeat/corosync/... plus pacemaker & co. As a sysadmin, those things are extremely important to me. Last time I checked (and granted, that was a while ago), none of them had a proper way to deal with the impact of systemd. I'm sure the people behind the various failover solutions are working on it, but last I checked I saw very little in the official documentation, and only the occasional head-scratching on mailing lists.
To me, systemd presents quite the challenge, with consequences even outside the technical side of things. For a while I'm going to end up in a mixed environment, forced to write two sets of operational procedures, disaster recovery scenarios, etc. I'm going to have to retrain people on how to read system logs, deal with systemd's quirks (no offence, all software has its peculiarities, and so will systemd), and will probably have to completely rethink how we do failover. And all I gain, all I'm really interested in, is... cgroups...
Projectors, USB disks and sticks, whatever... those things have no place in a server room, and I don't care about them. And let's be frank, nobody in a corporate environment gives a shit about Linux on the desktop. The few companies I've been at that run Linux on their desktops are small businesses with a nearly all-technical workforce.
Why would you want to convert rich information into a string and shove it down a pipe before you make use of it?
I think I know the answer to that question. Some environments need to archive their logs for a long, long time. Some setups are more complex than a "simple" local syslog, and with that complexity comes a set of tools that often grows organically over the years or follows a company's administrative procedures. In such environments, change is a difficult process, not because of the people but because of corporate inertia and red-tape bullshit. And it's great that systemd provides a syslog fallback for us text junkies; at the very least it buys time...
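If I remember the knob correctly, that fallback is a one-line setting in journald's configuration, which keeps the existing archival tooling fed with plain text:

```ini
# /etc/systemd/journald.conf
[Journal]
# Hand every journal message to the local syslog daemon as well,
# so the organically grown text-based tooling keeps working.
ForwardToSyslog=yes
```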
The other side of the coin, however, is that nobody really has a problem with text logfiles. The examples I continuously see are so contrived, or so focused on "the Linux newbie", that it just reeks of a developer looking for something to do. My biggest frustration with logging on Unix isn't "I don't know where to look" or "I don't know what to look for", but "this log message is meaningless without several Google queries". Anyone who knows about /var/log can find the errors they're looking for with relatively little effort, but an error message like "Connection reset : read() returned 5 bytes" says very little about what is wrong unless you're a developer of that particular daemon. No matter what abstraction you put in place for logging, it doesn't really solve the issues I'm dealing with.
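To make that concrete: the hard part was never pulling fields out of a line, it's that the interesting field is cryptic either way. A contrived sketch (the log line and the structured field names are made up for illustration, loosely modeled on syslog and journald conventions):

```python
import re

# The same event, once as a classic text log line and once as a
# structured record of the kind a binary journal might store.
text_line = ("Oct 12 04:23:01 web1 mydaemon[4242]: "
             "Connection reset : read() returned 5 bytes")
structured = {
    "_HOSTNAME": "web1",
    "SYSLOG_IDENTIFIER": "mydaemon",
    "_PID": 4242,
    "MESSAGE": "Connection reset : read() returned 5 bytes",
}

# Recovering the fields from the text form is a one-liner...
match = re.match(r"^(\S+ \d+ [\d:]+) (\S+) (\S+)\[(\d+)\]: (.*)$", text_line)
host, message = match.group(2), match.group(5)

# ...and either way you end up staring at the same opaque MESSAGE.
print(host == structured["_HOSTNAME"])    # True
print(message == structured["MESSAGE"])   # True
```

Whichever container the message arrives in, "read() returned 5 bytes" still needs the daemon's developer (or Google) to interpret.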
So please enlighten me: How do you kill apache with all the php/ruby/whatnot crap it directly or indirectly spawned? With systemd it is just one convenient systemctl stop apache
service apache stop. And when all else fails: kill -KILL <pid>.
Don't get me wrong, but if you've got Rails, Pyramid or Django stuff, you should either be using the correct apache modules or documenting how the two communicate with each other (say via FastCGI, some proxy solution, or whatever). If I stop apache, I expect apache to stop. If apache is spawning stuff it's not supposed to, I get off my chair, walk over to the developer who wrote the crappy code or poorly documented how to start and stop his stuff, and find a way to fix it. If that becomes the default behaviour of your apache deployment, clearly it's nowhere NEAR production ready.
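For completeness, the "kill everything it spawned" case has had a plain-POSIX answer for decades: process groups. A small sketch (not apache, just a stand-in shell that forks a child):

```python
import os
import signal
import subprocess
import time

# Stand-in for a service that spawns children: a shell that forks a
# sleeping subprocess. start_new_session=True puts the shell (and every
# descendant that doesn't detach) into a fresh session/process group.
proc = subprocess.Popen(["sh", "-c", "sleep 60 & wait"],
                        start_new_session=True)
time.sleep(0.2)  # give the shell a moment to fork its child

# One signal to the *group* takes out the shell and the sleep together.
os.killpg(os.getpgid(proc.pid), signal.SIGKILL)
proc.wait()
print(proc.returncode)  # -9: terminated by SIGKILL
```

The honest caveat, and the one real point in systemd's favour here, is that a daemon which calls setsid() itself escapes the process group; cgroups close exactly that gap, which is why they're the one part I actually care about.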
Having said that, sure, stuff goes wrong. I'm well aware of that, on a daily basis. But I've never been unable to solve these kinds of small issues before. There are bigger issues that haunt me than process management, and systemd isn't going to help me with those. If process management on a single machine is a sysadmin's worst concern, I fear for his career in a real environment.
Most are a real improvement over the existing tools to manage hostname/date/time/timezone/locale/service/network/efi boot loader/virtual machine/whatnot. At the very least they are way more consistent in how they work and they work on all modern Linux distros in the same way. That was never possible before.
Give it some time. Let the distributions do fun and neat things by wrapping systemd's tools in their own scripts and you'll have the same fragmentation you have now. This is like that story where there are 10 competing standards, and a guy says "I'm making a standard that unifies them all!" Result: you now have 11 standards. Again, don't get me wrong, I like the idea of having a standard way of doing things, but there have been so many efforts in that regard, without much success, or relatively quickly heading off in different directions.
Linux in the end doesn't have a single set of developers, but hundreds of distributions, each with their own developers, each with their own collective ideas, so it's guaranteed to happen.
everybody will be using systemd (including gentoo and slackware)
I haven't used Slackware in ages, but I think you're underestimating its core audience and developers, and their resistance to change they deem unnecessary.
Finally, I'd like to make a couple of general remarks about this whole topic. I hardly ever comment on systemd, but since I've written all of this, I might as well.
First of all, I'll admit I'm actually really interested in it. I haven't had the time to play with it, break it, customize it and do my thing with it, but I definitely aim to do so relatively soon. To say the least, it is an ambitious project and it is certainly making waves. However, I am somewhat concerned with the scope of systemd far outgrowing what seems to me reasonable for an init system. These discussions have been going on for a while now, and the arguments in favor do little to address my concerns, simply because I have no need for these features, or find them to be non-issues in my use of Linux.
For me professionally, I'll have to deal with it at some point, so it's one of those things where I'll (albeit somewhat grudgingly) adapt. I do think, however, that once the major distributions have released their systemd-based versions, what you're seeing today will be only a fraction of the criticism the project will have to deal with. People are averse to change, especially change to something they've been used to for years. If systemd breaks in unexpected and interesting/fun ways, I expect quite a bit of backlash. If systemd should fail hard, I rather fear the consequences. The feedback I'm seeing from a lot of Fedora users isn't the most flattering either, although I suppose some of it is blatant trolling.
I've seen sysvinit replacements come and go, including such wonderful examples as DJB's attempt at reinventing the wheel, which had the wonderful side effect of giving sysadmins an unexplainable urge to bang their heads against a wall at random intervals. Each had its own set of problems, imperfections, and fun and interesting new ways to break. While past experience shouldn't be an excuse not to try new things, I do tend to be guided by it. Systemd seems far more complex and encompassing than any of the other sysvinit replacements I've used in the past, and considering that complexity goes hand in hand with bugs, allow me to be somewhat skeptical and not the first to take the plunge in an operational environment.