Most people will agree that systemd adds a number of important features to GNU/Linux that the old alternatives didn't offer.
This is very true. Most people will also agree that it accomplishes this at the cost of significant downsides inherent to systemd's design, and of important features that the old alternatives do offer. The controversy is about whether the upsides are worth the downsides.
Adopting systemd will over time lead to a better system.
Depending on your position regarding the aforementioned tradeoff.
Well, division by zero should never happen, but you want it to be handled gracefully in case it does.
You are aware that segfaults exist specifically as a graceful handling of error conditions, right? We could just as easily have every invalid memory access return 17 if we preferred. You seem to be underestimating just how non-graceful not aborting would be. The alternative to a segfault is a program that could go off and do absolutely anything, unpredictably.
Nobody wants the autopilot in charge of a barge train to segfault.
I would much prefer that over the autopilot deciding that its current speed is [broken computation... division by zero... "zero"] and the desired speed is 50 km/h, so it hits the accelerator until the division-by-zero situation resolves itself.
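The tradeoff above can be made concrete. Here is a toy Python sketch (all names hypothetical, not real autopilot code) contrasting the two failure modes: silently substituting a "reasonable" default for a broken computation versus aborting loudly at the fault.

```python
# Toy illustration: why aborting on a broken computation beats
# substituting a default value and carrying on. All names hypothetical.

def current_speed_kmh(distance_m: float, elapsed_s: float) -> float:
    # "Graceful" handling that masks the error: a division by zero
    # silently becomes 0 km/h, so the autopilot thinks it is stopped.
    if elapsed_s == 0:
        return 0.0
    return distance_m / elapsed_s * 3.6

def throttle(target_kmh: float, measured_kmh: float) -> str:
    return "accelerate" if measured_kmh < target_kmh else "hold"

# Broken sensor: zero elapsed time means the speed is unknowable, not zero.
print(throttle(50.0, current_speed_kmh(120.0, 0.0)))  # -> accelerate (wrong!)

def current_speed_strict(distance_m: float, elapsed_s: float) -> float:
    # Aborting instead: let the division fail loudly, like a segfault does.
    return distance_m / elapsed_s * 3.6  # raises ZeroDivisionError on fault

try:
    throttle(50.0, current_speed_strict(120.0, 0.0))
except ZeroDivisionError:
    print("abort: sensor fault, disengage autopilot")
```

The strict version stops before the bogus value ever reaches the throttle, which is exactly what aborting on an invalid operation buys you.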
While C++ happens to be useful for cross-platform mobile development, that's not because C++ itself is better at cross-platform development.
Yes it is. Well-written C++ code will run on any platform, whereas even the best Java code only runs on the Java platform. This makes C++ much more suitable for cross-platform development than Java.
Is this sophistry? I don't think so. Java is not a cross-platform system; Java *is* a platform. And I think that no matter what the initial intentions may have been, time has shown that languages that compile to any platform, while less convenient than languages that bring their own platform, are actually the more flexible and practical of the two designs for cross-platform development.
In my mind, this comes down to whether we want a better-functioning OS or an OS that adheres to the mindset that I think attracted many of us to Linux in the first place.
In my mind, it comes down to streamlining the common use cases for a given system, while throwing under the bus everyone who wants to do something with their system that Lennart didn't think of or doesn't care to support.
What we really need is some kind of standardized identity management system--like you know how you can sign onto various sites using either your Facebook or Google+ sign-on? Like that, but standardized. We need a true single sign-on solution that is easy to manage, hard to screw up and lose your identity permanently, and usable everywhere.
Is there any particular reason why we shouldn't just use public key authentication as the standard authentication method absolutely everywhere, optionally delegated to some remote single sign-on service of your choice which is not in any way visible to the service you're authenticating against? This seems like the obviously correct solution to me, but for some reason I never see it mentioned in threads about replacing passwords as an authentication scheme.
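The scheme described above amounts to a challenge-response exchange: the service sends a fresh random challenge, the client signs it with a private key, and the service verifies the signature against the stored public key. The sketch below is a toy illustration using textbook RSA with tiny primes--it is deliberately insecure, every name in it is hypothetical, and a real deployment would use a vetted signature scheme such as Ed25519:

```python
import hashlib
import secrets

# Toy challenge-response sketch: textbook RSA with tiny primes.
# Purely illustrative, NOT secure. All names hypothetical.

# Client key pair: modulus n = p*q, public exponent e, private exponent d.
p, q = 61, 53
n = p * q                              # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))      # modular inverse (Python 3.8+)

def sign(challenge: bytes) -> int:
    # Only the holder of the private exponent d can produce this.
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(h, d, n)

def verify(challenge: bytes, signature: int) -> bool:
    # The service needs only the public key (e, n) to check it.
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(signature, e, n) == h

# The service never sees a password: just a fresh challenge and a signature.
challenge = secrets.token_bytes(16)
print(verify(challenge, sign(challenge)))  # -> True

# A signature for one challenge does not verify against a different one,
# so a captured signature cannot be replayed.
print(verify(secrets.token_bytes(16), sign(challenge)))
```

The point of the design is that the service's database stores only public keys, so a breach leaks nothing that lets an attacker sign future challenges, and the signing key (or the delegated sign-on service holding it) never has to be revealed to the service being authenticated against.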
If an activity is safe for a hobbyist to perform, why is it suddenly dangerous and in need of regulation when a professional does it?
Because "commercial" is really code for "on a large scale", and "hobbyist" is code for "on a small scale". What's safe on a small scale need not be safe on a large scale.
Of course, "commercial" is only a poor approximation of "on a large scale", but it's measurable and hard to game and does a pretty good job as an approximation in practice, so that's what the law will say.
The world is coming to an end--save your buffers!