Yeah, I've put in a fair bit of time as a sysadmin of several networks, AND I enjoy the cool stuff that comes with change and improvement in hardware and software over time.
Systemd will no doubt have growing pains associated with it, but I still remember the "growing pains" associated with kernel 2.0 (the first SMP-capable Linux kernel) and the issues with resource locking and ever so much more. Does anybody want to assert that this wasn't worth it, that "single-core/single-processor systems were good enough for my granddad, so they're good enough for me"? Server environment or not?
Decisions like this are always about cost/benefit, risk, and long-term ROI. And the risks are highly exaggerated. I'm pretty certain that one will be able to squeeze a system down to a slowly varying or unvarying configuration that is very conservative and stable as a rock, even with systemd. I -- mostly -- managed it with kernels that "could" decide to deadlock on some resource, and when the few mission-critical exceptions to this appeared, they were aggressively resolved on the kernel lists and the fixes rapidly appeared in the community. The main thing is locking down the server configurations to avoid the higher-risk stuff, and aggressively pursuing the problems that arise anyway, which is really no different than it is with init, or with Microsoft, or with Apple, or with BSD, or...
But look at the far-side benefits! Never having to give up a favorite app as long as some stable version of it once existed? That is awesome. Dynamic provisioning, possibly even across completely different operating systems? The death of the virtual machine as a standalone, resource-wasteful appliance? Sure, there may well be a world of pain between here and there, although I doubt it -- humans will almost certainly keep the pain within "tolerable" thresholds as the idea is developed, just as they did with all of the other major advances in all of the other major releases of all of the major operating systems. Change is pain, but changes that "wreck everything" are actually rare. That's what alpha/beta/early implementations are for, and we know how to use them to confine this level of pain to a select group of hacker masochists who thrive on it.
On that day, maybe, just maybe, systemd will save their ass, keeping them from having to replace some treasured piece of software while still being able to run on the latest hardware with up-to-date kernels and so on.
I've been doing Unix (with init) for a very long time at this point. I have multiple books on the Unix architecture and how to use its system calls to write fairly complex software, and I have written a fair pile of software using this interface. It had the advantage of simplicity and scalability. It had the disadvantage of simplicity and scalability, as the systems it ran on grew ever more complex.
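For anyone who hasn't programmed against that interface, here is a minimal sketch of the kind of primitive it is built on: fork a child, exec a program, wait for the result. The specific program launched here (/bin/ls) is only an illustrative placeholder, not anything from my own pile of software.

    /* Minimal sketch of the classic Unix fork/exec/wait interface.
     * The command being run (/bin/ls) is an arbitrary example. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();                    /* create a child process */
        if (pid < 0) {
            perror("fork");
            return EXIT_FAILURE;
        }
        if (pid == 0) {
            /* child: replace this process image with another program */
            execl("/bin/ls", "ls", "-l", (char *)NULL);
            perror("execl");                   /* reached only if exec fails */
            _exit(127);
        }

        int status;                            /* parent: wait for the child */
        if (waitpid(pid, &status, 0) < 0) {
            perror("waitpid");
            return EXIT_FAILURE;
        }
        if (WIFEXITED(status))
            printf("child exited with status %d\n", WEXITSTATUS(status));
        return EXIT_SUCCESS;
    }

That whole model -- processes, exit statuses, shell scripts gluing them together -- is what made init scripts so easy to write, and it is also why they creak as the systems underneath them grow ever more complex.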
Everybody is worried about "too much complexity", but Unix in general and Linux in particular long, long ago passed the threshold of "insanely complex". Linux (collectively) is arguably one of the most complex things ever built by the human species. The real question is whether the integrated intelligence of the Linux community is up to the task of taming the idea of systemd to where it is a benefit, not a cost -- to where it (eventually) enables the transparent execution of any binary from any system on a systemd-based system, with fully automated provisioning of the needed libraries in real time, as long as they are not legally encumbered and are securely available from the net.
We deal with that now, of course, and it is so bloody complex and limiting that it totally sucks. People are constantly forced to choose between upgrading the OS/release/whatever and losing a favorite app or (shudder) figuring out how to rebuild it, in place, on the new release -- if that is even possible.
I'll suffer a bit now -- differently, of course -- in the mere hope that in five years I can run "anything" on whatever system I happen to be using and have it -- just work.