However, that all started out as "can Windows run without Explorer?". It turned out that it probably couldn't, and Microsoft was found guilty of using one Microsoft product to unfairly increase the use of another. This is different and rather interesting, though, because now Microsoft and Cyanogen are going to prove that Android can run perfectly happily without Google apps. That should suit Google just fine when the EU comes knocking.
You have plainly been around a while. Even longer than I in fact, at least on Slashdot.
I am slightly disconcerted by the unseemly personal attacks on the developer of a controversial new system component. I am slightly more concerned by the aggressive tone adopted by those who believe this component is a step forward. The rather distasteful suggestion that those who hold a different opinion are simply disposable is not very attractive, especially since many who are not convinced by systemd have many, many years of watching Unix and Unix-like systems break.
I am on the fence personally, willing to be convinced, but bear in mind I have many, many years of having my arse saved by following the trail the very transparent init gives us. I also have many years of experience of pain when required to follow less transparent approaches such as SMF.
Perhaps you have a convincing argument for me.
This is an improvement and makes it possible to use standard monitoring tools. If one of those fields is a unique identifier, that's even handy.
What makes this even better is that it allows me to easily create a script to re-format the logs into something everything and everybody can easily read, just as they used to.
The pidfile approach is guaranteed to fail, or to require operator intervention, from time to time. It can even be dangerous and result in the wrong thing being killed just as often as, if not more often than, not using a pidfile at all.
Your description of the workflow using a pidfile already requires accessing the process table, so why bother with all the other stuff? Just look for another instance of your daemon. If there is one, send it your commands; if there isn't, start up.
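A minimal sketch of that pidfile-free check, assuming a hypothetical daemon name "mydaemon" and that pgrep (from procps or the BSD tools) is available:

```shell
# Look for another running instance by exact process name instead of
# trusting a possibly stale pidfile. "mydaemon" is a made-up name here.
if pgrep -x mydaemon >/dev/null 2>&1; then
    running=yes    # another instance exists: send it commands instead
else
    running=no     # no instance found: safe to start up
fi
echo "already running: $running"
```

In real use you would replace the echo with either the command-sending path or the daemon start-up path.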
OK thanks for the clarification.
However, I have never, ever needed a pid file. If I have written the daemon myself, it ensures that "ps" only produces the correct "hit", or it can be readily found in the process table.
OK, that might be a bit annoying for most, it's true. However, as all daemons are children of init, init will be informed by the kernel if a child dies, and wait() will get the pid, so all that difficulty could be fixed with a few extra lines in init.
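The mechanism is visible from any shell: the parent gets the child's pid from $! when it starts it, and the exit status from wait when the kernel reports the death, which is all init would need (sleep stands in for a daemon here):

```shell
# A parent that starts a child directly needs no pidfile: $! gives the
# child's pid, and wait blocks until the kernel reports its death.
sleep 1 &          # stand-in for a daemon kept in the foreground
child=$!
wait "$child"
status=$?
echo "child $child exited with status $status"
```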
The grouping of everything together is achieved by cgroups, not systemd, so there's no reason why you can't arrange that using standard sysvinit. To be honest, as cgroup-like technology has been around in other systems before, you'd wonder why nobody ever bothered to implement such a solution; maybe nobody saw the need.
Actually, the binary logs bother me the most.
I can see your point, but in the cases that come to mind where I have had to parse binary logs, i.e. utmp files and BSM audit logs, it was significantly more annoying than parsing something like syslog with grep/awk/sed/cut/expr etc.
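For contrast, here is the kind of one-liner text-log work the comment means, run over a few sample lines in the classic "Mon DD HH:MM:SS host prog[pid]: msg" layout (the sample data is mine):

```shell
# Text logs yield to grep and friends: count the sshd lines in a sample.
log='Oct  1 10:00:01 host sshd[123]: Accepted password for alice
Oct  1 10:05:42 host cron[456]: (root) CMD (run-parts)
Oct  2 09:12:07 host sshd[789]: Failed password for bob'
sshd_lines=$(printf '%s\n' "$log" | grep -c 'sshd')
echo "sshd lines: $sshd_lines"
```

The same pipeline against a binary utmp or BSM file needs a dedicated decoder before any of these tools can touch it.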
It occurs to me that the problem you are trying to address is only a problem because maybe you haven't found the right tools, and maybe haven't split your logs up into logical files rather than just using syslog.
The tool you want to parse your logs is so good it seems like magic. It is an unbelievable tool. It indexes log files, extracts reports, draws graphs, alerts and keeps your coffee warm. It is http://www.splunk.com/ and you can use it for free if you don't index too much information.
Like so many enterprise tools, including all monitoring software, it can't read binary logs.
I was there too.
cron does care about exit codes. Any cron job returning a non-zero exit code will have "rc=X" in the cron log, and cron can even mail you the stderr.
Admittedly it will also log "rc=1" if it couldn't run the job at all, e.g. if the user account is locked, but mostly the cron log doesn't lie.
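The behaviour is easy to mimic in a wrapper that records the job's exit code the way cron's "rc=X" log entry does (here /bin/false stands in for a failing job):

```shell
# Run a job and record its exit code cron-style; /bin/false stands in
# for a cron job that fails.
/bin/false
rc=$?
echo "cron-style record: rc=$rc"
```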
I just need to get clear what you are saying.
Are you saying that a "daemonized" process needs to keep track of the pid it had itself, and that it has to keep it in a pidfile? I just wondered, because nothing ever, ever, ever needs a pidfile, especially as any process can get its own pid by calling getpid().
And then you are saying that one advantage of systemd is that you get the pid in syslog, and perhaps some other data about the process itself. I had never thought of that. I mean, I never thought when I read something like "can not write to
Maybe it is sometimes for some people. Seems a terrible upheaval for a very small gain.
I wish you hadn't posted this as Anonymous Coward.
It is extremely good and currently has a score of 0.
What Sys V does isn't dependency checking; it is simply an order of execution. True, you could put something in one that hangs the whole thing, as I did myself by accident once, but it's a very easy fix and it takes two minutes to come up with a timeout strategy if you want one. Here, I'll try. This one took me two minutes to come up with and test. I'm sure there are many other ways of doing it that are simpler. The shell is such a wonderful tool, isn't it, and that makes it great for initialising your system.
PARENT=$$
(sleep 10 && kill $PARENT) &
WATCHDOG=$!
# ... commands that could hang ...
kill $WATCHDOG
You can put a "trap" in there if you want to do more than just exit, of course.
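Put together as a self-contained script, the same watchdog idea looks like this (the 10-second limit and the sleep placeholder are arbitrary choices of mine):

```shell
# Kill ourselves if the guarded commands take too long, and disarm the
# watchdog when they finish in time.
PARENT=$$
(sleep 10 && kill "$PARENT") &   # watchdog: fires after 10 seconds
WATCHDOG=$!
sleep 1                          # stand-in for commands that could hang
kill "$WATCHDOG" 2>/dev/null     # finished in time: disarm the watchdog
result=survived
echo "$result"
```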
Doing your own dependency checking IS trivial, and I am informed BSD even provides a tool to do it. I admit it might be more difficult for Red Hat to do it, and my advice is that they shouldn't.
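On the "trivial" point: POSIX already ships a topological sort, tsort(1), which is essentially what BSD's rcorder does for rc scripts. A sketch with made-up service names:

```shell
# Each input line is "prerequisite dependent"; tsort emits a start order
# in which every prerequisite comes before the services that need it.
order=$(printf '%s\n' \
    'filesystems syslogd' \
    'filesystems network' \
    'network sshd' | tsort)
echo "$order"
```

With these pairs, "filesystems" must come out first (it is the only service with no prerequisite), and "sshd" always appears after "network".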
You do not understand a high security system even though you think you do.
Think steel-plated walled rooms and double-locked steel doors requiring two security-certified people with two keys. Then tell me I'm complaining when I have to arrange for people to go down there because a useless tool like utmp needs to be safeguarded.
What you fail to understand, and I don't blame you, is that vendor-driven dependency checking as a concept is actually broken. The Solaris dependency is correct if you look at it from one perspective and wrong if you look at it from another. SMF should not decide whether utmp is more important than sshd, especially as that decision may be different under different circumstances. In this particular case it mattered not whether the utmp filesystem was mounted; sshd would still have worked, even though the utmp records might have got a bit screwed, and in this instance uptime was more important than audit logs.
By the way, please do not extol the virtues of utmp for user logging in a highly secure environment. We log and monitor every system call, every exec, every connect, every bind and everything typed at a keyboard.
I can guarantee it because Sys V init doesn't enforce dependencies, so sshd would have been started, it would have created a new utmp file in the root filesystem, and I could have logged in and fixed everything.
It wasn't that the vendor (this is Solaris, if you remember) got the dependencies wrong. The dependencies are actually right if you look at them from one perspective (user audit logs should be preserved) and wrong if you look at them from another (I am not interested in standard Unix audit logs and need sshd up). With systemd-type systems the vendor has to choose one and cannot know which is right, for you might feel differently on different occasions.
With Sys V init, no option is chosen; everything attempts to start and, if something fails, it tells me why in a standard log file that anybody can read, even my monitoring software.
If you need dependency checking, it is trivial for an administrator to set that up using the Sys V model.
Point 1. No. I guarantee that on a Sys V init system I would have been able to log in, easily see that a filesystem that should have been mounted wasn't, and then diagnose and fix it. ssh will cope just fine if there is no utmp file, but you might end up with another utmp file than your usual one, which is presumably why the dependency exists (those records might be important to you). However, I don't depend on utmp for my auditing records, but SMF (or systemd) cannot know that, and there you have the essence of the problem. It didn't let me log in because it thought I needed something which I don't.
Point 2. Indeed but
Please stop. I've been at this game a very long time. It took very little time to determine what was wrong and to fix it.
My point was that I couldn't ssh in because a filesystem was corrupt and I had to use the console. That is stupid as well as very time-consuming and expensive in the high-security environment in which these systems live.
I see the logic, of course. utmp is updated when you log in with ssh, so sshd depends on utmp, and having utmp requires having a filesystem to put it on, so there is a dependency on the mounts. What concerns me is that the init system was trying to be clever rather than realising that if the filesystem with utmp didn't mount, a utmp file would still be created by ssh and be usable, albeit probably on the root filesystem, and, fundamentally, I could log in.
I'm not even concerned that the mounts are marked as failed if one completely unrelated filesystem fails to mount, as that is merely a problem of implementation and can be fixed.
The fundamental problem here is that systemd (or SMF in this case) is not clever enough to understand the intricacies of all the things along the dependency chain, can leave you in a very bad state, and is far more difficult to debug than a single file whose last few lines contain the words "could not mount