Oh sure, I get that. You are absolutely correct. Proper monitoring requires making sure people can use the system for what it was intended for, not just publishing artificial uptime numbers. The only point I was trying to make here is that systemd lets you obtain status about running (or not-running) processes, memory/CPU usage, log events, D-Bus events, hardware events, forking, open ports, etc. All of that could be obtained before, but only in roundabout ways with specialized daemons. Systemd now provides a standardized, centralized infrastructure for doing all of it. It does not replace the need for monitoring tools; it just helps them do their job. And it makes containerization and automatic provisioning much easier.
True, systemd doesn't do monitoring per se, but it provides the infrastructure to do monitoring easily. Both Ganglia and Nagios rely on either a daemon installed that can collect data and report it, or on polling ports and such. Neither is really integrated into the system the way systemd is. I'm sure both projects will benefit greatly by their ability to now use systemd features for much of their work.
Well, it does kind of both, in that you can join the public NTP pool, or maintain a private NTP server for your network with ntpd. Bottom line, though, is that it's way overkill for what most people need. Your run-of-the-mill server/desktop just needs a simple NTP client, which systemd-timesyncd is.
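For the curious, a minimal systemd-timesyncd configuration really is just a couple of lines. The pool hostnames below are the public NTP pool; swap in your own servers for a private setup:

```ini
# /etc/systemd/timesyncd.conf -- minimal SNTP client configuration.
# These are the public pool hosts; substitute internal servers if
# you run a private NTP infrastructure.
[Time]
NTP=0.pool.ntp.org 1.pool.ntp.org
FallbackNTP=2.pool.ntp.org 3.pool.ntp.org
```

Enable it with `timedatectl set-ntp true` and you're done. Compare that with a full ntpd deployment.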
News to me. Ubuntu Gnome is working just fine without systemd on my desktop right now. They do plan on switching to systemd in the next release, but that is a separate issue.
If you don't have a setup system that establishes monitoring automatically and without manual intervention on all new systems
You do understand why systemd was created, right? To do exactly that! You may be proud of your collection of hacked together bash scripts, or maybe you use a third-party tool to do it, I don't know, but some of us think this capability should be a part of the OS itself. And now it is, thanks to systemd.
It's not just about auto-restart after crash. When the system knows something about its state, it can manage that state. So you can have a rule set that defines what to do when a particular service crashes. Or you can automatically start and stop services in response to system load. Tools like Puppet and Chef will have an actual infrastructure to use instead of needing to resort to a million polling hacks to do their job. There are a ton of reasons why it may be advantageous for the system to know something about its state (hey, I managed to do that without saying "cloud").
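To make the "rule set" point concrete, here is a sketch of what such a policy looks like in unit form. The names `myapp` and `notify-admin.service` are made up for illustration, not from any real package:

```ini
# example.service -- a sketch of a crash-handling policy as a unit file.
[Unit]
Description=Example daemon with a crash-handling policy
# Activate another unit (say, one that pages an admin) when this one fails.
OnFailure=notify-admin.service

[Service]
ExecStart=/usr/bin/myapp
# Restart only on unclean exits, waiting 5 seconds between attempts.
Restart=on-failure
RestartSec=5
```

All declarative, all queryable by other tools, no polling hacks required.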
Even things as far removed as NTP functionality are now rolling into systemd (did you know systemd is trying to replace ntpd?).
You do realize that most distributions do not enable ntpd by default, right? And that the primary use of ntpd is running an NTP server, not synchronizing time to one, right? Most distributions simply run ntpdate in a startup script to do a quick sync of the time/date during boot. So all systemd is doing is daemonizing the same functionality.
Reinventing APIs radically - The big case here is the basic OS interfaces for networked daemons.
It really isn't. It is providing additional functionality that applications can choose to use, or not (hence why some applications have dependencies on systemd). Nothing about POSIX has been deprecated with systemd. It just happens that the methods systemd provides are often more efficient or have other advantages (ex: monitoring), that you can't get with a bunch of random init scripts. So people want to use those features (surprise!).
For example, a traditional *nix daemon might be capable of managing its daemonization in an advanced way with the flexibility of the POSIX APIs (e.g. allowing a controlled 'restart' behavior for reducing downtime that starts a new daemon overlapped with the old one and uses some IPC to coordinate the handoff of live sockets, etc).
What you're basically saying here is that you can hack the POSIX socket implementation, but with systemd you have to do it differently. That is not really an argument about anything. If what you meant to say is that with systemd there is no way to manage service restarts with minimal downtime, that is completely false. The fact that you have to learn how to use systemd does not negate its usefulness.
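For the record, systemd's answer here is socket activation: systemd itself owns the listening socket and passes it to the service, so across a restart the socket never closes and pending connections simply queue in the kernel backlog. A minimal sketch (unit names and port are made up):

```ini
# myapp.socket -- systemd owns and holds open the listening socket.
[Socket]
ListenStream=8080

[Install]
WantedBy=sockets.target

# myapp.service -- receives the already-open socket at startup
# (see sd_listen_fds(3) for how the daemon picks it up).
[Service]
ExecStart=/usr/bin/myapp
```

With this arrangement, `systemctl restart myapp.service` never refuses a connection; clients just wait in the backlog until the new instance picks up the socket. No custom IPC handoff dance required.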
Also, a full conversion of a system to systemd doesn't work well with just leaving some daemons as traditional sysv-init style
Define "doesn't work well." Every distribution that implements systemd already does this, and probably will for the indefinite future, because there is a lot of software out there that doesn't (and may never) use systemd services.
It introduces a new latency in exposing new APIs.
The different ways that you keep using the term API makes me think you don't know what an API actually is. The ways in which APIs will be available to applications will not change with systemd. If systemd provides an API that an application wants to use, then it will of course depend on systemd, but that is really it. What you might be referring to is that systemd provides a bunch of new APIs that parallel kernel and glibc APIs, but doesn't really change the way APIs are developed or exposed to applications.
In general, while they minimally accommodate server-side daemon software, most of the development focus of systemd is for the desktop user's use-case.
This just keeps getting repeated. It's like a self-referencing Wikipedia article. Just because somebody said it doesn't mean it's true. Can you point to a whitepaper or design document somewhere that says "systemd is primarily developed to support desktops"? It's complete BS. Look, just take a minute to determine what Red Hat's primary market is. Hint: it is not the desktop. If it isn't obvious to you why systemd is great for servers, and in particular large systems of servers that regularly communicate with each other, need to be monitored remotely, and need to be completely auditable, then you have never really worked seriously with servers. You may be the kind of guy that likes to script your own toolkit to provide the functionality that SysV lacks. That's fine, but don't pretend systemd doesn't solve problems for servers. It does, a lot of them.
systemd, in spite of seeming to want to completely encapsulate or replace large swaths of well-regulated APIs from POSIX, doesn't seem to have any real version control, changelogging, or version/feature-querying capabilities to manage compatibility of this new pseudo-API.
Well, I agree that that would be nice to have. But I think they are really aiming for two states: stable interfaces and interfaces under development. The stable interfaces are stable and won't change, so you don't need to check a version. The interfaces under development can change, but they won't stay in flux forever. So you can either wait until they stabilize, or stay on the mailing list and keep an eye on them. They probably don't think a "capabilities" call is necessary because they don't anticipate having a lot of different versions with different features.
Total disregard for everything outside of Linux,
You might be right about that. But then again, the BSDs and other Unixen have plenty of their own features that are unique to them. It doesn't have to be a portable standard, but it does add another consideration for applications that may or may not care about portability.
FYI, the scientists who did the work did not report it as "communication." As usual, the popular science writers were a bit over zealous in their choice of words.
My guess though is that you "hate vi" because it's strange and foreign to you and if you humbled yourself, took the time to learn some simple, easily memorizable things you'd probably change your tune.
No, it's really that I already have to know and be familiar with a number of things. I've used vi, very frustratingly, maybe three times. I don't care to spend more time learning it because I see its complexity as just completely unnecessary. When I already have to know how to configure a dozen different services off the top of my head, manage cross-distribution complexities, script in half a dozen different languages, and keep up with new stuff coming out every month, the last thing I need is to keep a bunch of completely non-intuitive random letters and symbols in my head to do very basic everyday things. I'll stick with nano, thanks.
Simple 3x5 card with the commands on it is all you need to be proficient enough to get most things done. Hell, a Post-It note would do.
That's just the point. I shouldn't need to refer to a reference sheet, notecard or otherwise, to edit a bloody text file! I shouldn't need to spend two hours learning the difference between !#*$ and ?!$& just to go between a bunch of nonsensical modes (view, edit with insert, edit with overwrite, edit the end of a line, edit the middle of a line, blah blah blah) in vi just to edit a text file. It is absurd.
With nano, you have a basic, intuitive text editor. Navigate with the cursor keys (amazing, imagine that!), edit with backspace and delete, and just type letters to insert them (more shocking revelations). When you need to save and quit, there is help text at the bottom of the screen: Ctrl-X. That's it, and that's why I like nano and hate vi. On top of that, nano is small and efficient and easily fits in a minimal environment. There is really no reason not to have it as a default text editor in any distribution.
However, as an admin, I have long ago standardized on VI for the simple reason that it's included by default on every single *nix variant out there. (At least, in my experience.)
While true, in my experience there is no reason why nano could not be included (and should be).
(though Linux does have non-stock application deployment packages available, like Puppet, that partially fill that last point).
You're kidding right? In addition to Puppet, which is a relative newcomer, there has been Satellite (http://www.redhat.com/products/enterprise-linux/satellite/) and Landscape (http://www.ubuntu.com/management/landscape-features) among others (Suse has one too). Where do you think the distros make their money? Now you may have meant there is no free application deployment and management software, but last time I checked Windows Server was definitely not free. If you need free, though, you can roll some scripts fairly easily, wrapping things like Kickstart with custom repositories (yum or apt) and services like Cobbler or Spacewalk (which Satellite is based off of), rsync, cron jobs, and ssh (for remote execution).
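Since I mentioned rolling your own: the core of that approach is embarrassingly small. Here's a bare-bones sketch of a push-style deployment loop over ssh/rsync. The hostnames and paths are made up, and the `DRY_RUN` guard (on by default here) just prints what would run instead of executing it:

```shell
#!/bin/sh
# deploy.sh -- minimal push-style deployment sketch.
# Hostnames, payload path, and service name are hypothetical.
# DRY_RUN=1 (the default) prints commands instead of running them.
DRY_RUN="${DRY_RUN:-1}"
HOSTS="web1.example.com web2.example.com"
PAYLOAD="./build/"
DEST="/opt/myapp/"

# Either execute a command or just echo it, depending on DRY_RUN.
run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# Push the payload to each host, then restart the service remotely.
deploy() {
    for host in $HOSTS; do
        run rsync -az --delete "$PAYLOAD" "$host:$DEST"
        run ssh "$host" "systemctl restart myapp.service"
    done
}
```

Wrap that in a cron job or trigger it from a git hook and you have the skeleton of what the commercial tools do, minus the reporting dashboards.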
Linux AD-via-Samba quite simply doesn't even come close for the convenience of centralized GP maintenance,
I don't know what you are trying to say here. Why would you manage linux machines with a Samba domain? If you want the same functionality as AD on linux, FreeIPA is the most mature project, and it can integrate with AD via cross-realm trusts in the latest version. So you can manage a mixed Windows/Linux environment with the same core infrastructure. If instead you meant Samba as an AD domain controller for Windows, Samba4 is (mostly, 95%) a drop-in replacement for Windows Server. There are a few features missing, but you can provision and manage an AD domain via Samba with ease.
Well, if it's linux, FreeIPA is better because then you can take advantage of group policies that are designed to work with linux. If you use AD, you will get authentication and that's about it. Now if you have windows+linux it's a bigger problem. In our lab we went with AD, forsaking the advantages of FreeIPA for our linux users, but you could also set up both servers with a shared trust. It's a bit more complicated, but this is something Red Hat is trying to develop into a turnkey solution.
I agree it is difficult and challenging. It is not happening to me, but it recently happened to some friends of mine. What did they do? They tightened the belt, looked for temporary opportunities where they could, went back to school, and it is starting to turn around. I think they will be fine. They won't live a lucrative suburban life, but they didn't really want that anyway. They will survive at above the median wage, living in a modest apartment, driving old cars, and raising two children.
It would be ideal if it didn't happen at all, but really ask yourself, what's the alternative? Change, chance, and shifting jobs is a reality of life. We can't stop it. We can blame companies, but unless we are prepared to stop economic growth and development, we are fooling ourselves. Protectionism will not make the reality any easier to bear. They could have had government put a stop to the development of computers and robots that were taking jobs away from Americans in the 1970s. And then where would we be now? Still working shitty factory jobs for some other first world country that moved ahead and developed their technological sector.
I would argue that if the government is to do anything, it is to establish a solid safety net that will catch people as they fall and help them get back on their feet. Such a safety net used to exist, but it has become far less effective than it used to be. Part of this is due to changing times, and part due to underfunding. So let's get it working again. The second thing, I would say, is helping to ensure that employees benefit from the growth of the companies they work for. I don't know exactly how to do this. It is not as simple as "wealth redistribution," but I think it needs to happen so that workers do not feel increasingly disconnected from their employers. Cultivate better relationships, and better ideas and a more productive work force will emerge.
Uh, this is not just scientific curiosity. There are some deep practical applications of such technology. Newsflash: malaria is still a big problem in the world, and many other efforts to combat it are failing. If we can target the mosquito population in ways that don't involve copious amounts of DDT, or inhibit the ability of mosquitoes to act as a vector for the disease, we may finally make some significant inroads.
While the 12 Monkeys doomsday scenario is popular amongst techies, I don't think we should discount a useful tool just because of a possibility for misuse. The authors themselves recognize the need to use it responsibly and develop an appropriate regulatory framework. From the article,
Ecological changes caused by gene drives will be overwhelmingly due to the particular alteration and species, not by the CRISPR drive components. That means it doesn’t really make sense to ask whether we should use gene drives. Rather, we’ll need to ask whether it’s a good idea to consider driving this particular change through this particular population. While gene drives could tremendously benefit humans and the environment if used responsibly, the potentially accessible nature of the technology raises concerns about the risks of accidental effects or even intentional mismanagement. In a new paper published in Science, we specifically address the regulation and risk governance of gene drive applications to promote responsible use.