No, what I'm saying is that your complaint about sid breaking is misplaced. The problems with systemd in sid, if they exist, may be systemd's bugs or Debian's; I wouldn't know, since you and the other AC only posted a general, unspecific complaint, so it is difficult to say. But even so: yes, software has bugs, and that is why sid exists.
One Debian unstable breakage due to systemd is understandable.
Two Debian unstable breakages due to systemd is disgraceful.
A Debian unstable installation that will likely not boot properly after each update due to systemd, month after month, is unacceptable.
Unacceptable according to whom? The description says:
'"sid" is subject to massive changes and in-place library updates. This can result in a very "unstable" system which contains packages that cannot be installed due to missing libraries, dependencies that cannot be fulfilled etc. Use it at your own risk!'
Debian unstable is a misnomer. Before systemd was introduced, Debian unstable was very stable. Ubuntu's packages are based on the Debian unstable packages, as far as I know.
Before systemd, "stable" in the Debian lexicon referred to an extraordinarily high degree of stability, unmatched by other Linux distros. Even extreme stability appears to be "unstable" when compared to Debian's (former?) overachieving definition of "stable".
Somebody like yourself, who obviously has never used a truly stable Linux distro, probably couldn't understand this.
I ran Debian 15 years ago; you don't need to explain the fundamentals. The point stands that a development branch can, by definition, break at any time, and boot failures here and there from the introduction of a new init system come as part of your decision to run unstable. It's your fault if you upgrade without checking first, not systemd's. I've lost X, or been unable to boot, after an upgrade more than once.
I had been running Debian unstable for years, which contrary to its name was very stable (more stable than the stable releases of many other distros I'd tried, even). But once systemd was installed during an update, it was one broken thing after another.
Development branch of a distro, which is called unstable, sees breakage when switching the init system. A TOTAL SHOCKER.
What I am saying is that what systemd calls "log database" is no such thing and massively inferior to the real thing.
Please don't confuse the term database with an RDBMS or similar. A database can be as simple as a text file containing structured data.
systemd's journal is by any definition a database file, and since it contains logging info, I fail to see the problem in calling it a "log database".
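To see why it qualifies, look at what a single journal entry actually is: a set of key/value fields. A minimal sketch, using a hand-written sample line standing in for real `journalctl -o json -n 1` output (the field values here are invented for illustration):

```shell
# Hand-written sample of one journal entry exported as JSON;
# on a real system such a line would come from: journalctl -o json -n 1
entry='{"PRIORITY":"3","_EXE":"/usr/lib/systemd/systemd-logind","MESSAGE":"Failed to start session"}'

# Any standard text tool can pull a named field out of it, e.g. sed:
prio=$(printf '%s' "$entry" | sed -n 's/.*"PRIORITY":"\([^"]*\)".*/\1/p')
echo "$prio"   # prints: 3
```

Each record is just named fields with values, which is exactly what "database" means in this loose but perfectly normal sense.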
I also _know_ that people who put logs into databases use real databases, you know, the ones that come with ACID.
Some use ACID RDBMSs, some use non-ACID NoSQL databases, etc. It all depends on their needs and on why they collect the logs in the first place.
What you do _not_ do is change the on-machine logs into a wannabe-database. That is about the worst thing you can do.
You are of course wrong about this. Enterprise log analysers like Splunk use simple files and indexes to store events, pretty much the same way systemd does.
There is considerable overhead in using an ACID RDBMS, so when things need to be really fast you do what systemd and Splunk do. Logstash (probably one of the more popular log analysers) doesn't use an ACID-compliant RDBMS as a backend either (though it can if you want it to).
The point is that the world of logging and databases has moved on considerably in the last decade, the primary reason being more and more data that needs to be analysed (fast) for one purpose or another (business intelligence, real-time security etc.).
systemd's journal is a pretty significant upgrade to the otherwise rather fossilised world of Linux logging. Don't get me wrong, I have tremendous respect for the Rsyslog team, but they have been struggling for over a decade to solve just some of the problems systemd's journal has now solved.
And I have never really seen any good arguments against using binary, structured and indexed log files:
They can be read by all standard Linux text tools with piping.
There exist multiple independent readers for them.
The logs can be programmatically accessed through a myriad of languages.
They provide functionality that can't be matched with legacy syslog files.
There is no non-contrived scenario where they can't be read one way or another.
They can be exported in any format and have default export options for all relevant industry standards.
Unlike syslog output, they have a stable and documented API.
So there isn't any real downside to using binary journal log files, while there are considerable advantages.
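The first point in the list, piping into standard text tools, can be sketched like this; the here-doc stands in for real `journalctl -b -p err -o cat` output, and the messages are invented for illustration:

```shell
# Stand-in for `journalctl -b -p err -o cat`; the messages are made up.
journal_output() {
cat <<'EOF'
Failed to start session scope
Unit entered failed state
Failed to mount /boot
EOF
}

# Ordinary grep-style processing works on the piped output:
count=$(journal_output | grep -c '^Failed')
echo "$count"   # prints: 2
```

On a real system you would simply replace the stand-in function with the journalctl invocation itself.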
So now you are backing away from your claim that people do not use DBs for logging unless they are clueless (let me quote you on this):
And yes, I have extensive experience with log analysis. You do not turn the logs into a database unless you are fully clueless.
And please remember that ACID doesn't give any protection against file corruption, nor does it prevent user-space programs from corrupting the logs by generating obviously impossible field values. But then again, neither does the journal, nor plain text logs.
Again, for those who need to run a full DB for their logs, systemd's binary file format is great, because it allows the journal to be exported in industry-standard formats like JSON. That way the remote log-sink database can receive and store rich metadata in a totally stable and structured way. Changing hostnames, IPs, or even different and changing wordings or languages in the daemon log output aren't a problem for the journal, since it is based on field values, not on complex regexing of unstructured, undocumented, unstable, language-specific words.
Oh, and tamper proof, cryptographically "sealed" logs too (FFS) if you want that.
If your remote log-sink solution isn't a full DB, you still get all the benefits of receiving structured log entries with e.g. full microsecond-precision timestamps _and_ monotonic timestamps. It is trivial to convert e.g. JSON output with defined field names to any other structured format, while converting and aggregating unstructured text files is a pain.
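As a sketch of how trivial that conversion is, here are two hand-written lines standing in for `journalctl -o json` output (timestamps and messages are invented), turned into "timestamp,message" CSV with plain sed:

```shell
# Invented sample of `journalctl -o json` output, two entries:
lines='{"__REALTIME_TIMESTAMP":"1700000000000000","MESSAGE":"a"}
{"__REALTIME_TIMESTAMP":"1700000001000000","MESSAGE":"b"}'

# Defined field names make the conversion a one-liner:
csv=$(printf '%s\n' "$lines" | sed 's/.*"__REALTIME_TIMESTAMP":"\([0-9]*\)".*"MESSAGE":"\([^"]*\)".*/\1,\2/')
printf '%s\n' "$csv"
```

Doing the equivalent on free-form syslog lines would mean per-daemon, per-locale regexes.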
It is also great for those who only need local logging: the journal has many of the advantages of a full DB, but without the complexity and overhead. Its append-based file format is also much more robust against file-level corruption than databases are. And since the log files are structured and indexed with field values, it is easy to run powerful yet simple queries on them.
How do you find all syslog entries with the priority level "error" generated by the previous boot only?
With the journal it is: "journalctl -b -1 -p err"
And how do you generate a full list of every executable that has logged, including its path?
Since you can "tab" through the values in the journal, this is easily done: "journalctl -F _EXE"
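A couple more query patterns in the same spirit; these are plain journalctl invocations, guarded so the snippet is a no-op on systems without journalctl (the unit name and date range are illustrative examples, not from the thread):

```shell
# Field-based queries against the indexed journal; illustrative examples.
if command -v journalctl >/dev/null 2>&1; then
    # All error-or-worse entries from one unit:
    journalctl -p err -u systemd-logind.service --no-pager
    # Raw FIELD=VALUE matches combine as a logical AND:
    journalctl _EXE=/usr/lib/systemd/systemd-logind PRIORITY=3 --since today --no-pager
fi
ok=done
```

Try expressing either of these against rotated plain-text syslog files with grep alone.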
And with the -x switch, the help database is activated, giving a further explanation of what the log entry means, with direct links to upstream support, perhaps even linking directly to a page that explains the error code:
# journalctl -b -x -u systemd-logind.service
mar 07 16:46:16 localhost systemd-logind: New session 1 of user Peter H.S.
(log entry above, help database output below:)
-- Subject: A new session 1 has been created for user Peter H.S
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/m...
-- Documentation: http://www.freedesktop.org/wik...
There are seriously many people who just trawl through long log files (even using vi, apparently), because it is hard to grep for anything unless you already know what to grep for, and because regexes are, well, regexes: difficult to use, understand, and remember in all their many variations. Basically, newbies can't read or filter Linux syslog files by any means other than trawling through them.
They can't have a useful GUI either, because it is impossible to make a distro-agnostic syslog GUI. Again, a problem that systemd's journal solves.
The bottom line is that the only virtue syslog files have, namely that they are human-readable, is also a serious hindrance to their use: they can't add monotonic timestamps, microsecond-precision timestamps and other metadata without becoming excessively difficult for humans to read.
With the journal you can give machine parsers exactly the structured log info they need and benefit from, while still allowing easily readable log output: a simple cmd-line switch determines the format. So you get all the benefits of legacy text logs without their severe limitations. It is a win-win situation.
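The cmd-line switch in question is -o; a guarded sketch (the -n 3 entry count is arbitrary):

```shell
# One log store, several presentations, chosen per consumer;
# no-op where journalctl is unavailable.
if command -v journalctl >/dev/null 2>&1; then
    journalctl -n 3 -o short --no-pager    # classic syslog-style lines for humans
    journalctl -n 3 -o verbose --no-pager  # every stored field of each entry
    journalctl -n 3 -o json --no-pager     # one JSON object per entry for machines
fi
ok=done
```

The underlying entries are identical in all three cases; only the rendering changes.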
These days, using simple, unstructured text log files is simply obsolete. The market and various industries have already decided this issue. Logs are meant to be analysed, and their sheer volume means they need to be structured, indexed, and carry rich logging info, just like systemd's journal. That way they can be aggregated and machine-parsed with minimal effort.
I am not twisting your words. Let me quote you verbatim:
That does not make systemd better. It makes it worse as it is so complex is not had dependencies on tools. Of course this also means external contributors can easily be fended off, as nobody can afford the licenses on their private budget.
Isn't it straight up obvious that you claim that external contributors can be fended off because they can't afford licenses? Isn't this exactly what you are claiming?
Since this claim is totally untrue, isn't it then obvious that you are utterly confused about how Coverity and Jenkins work, and have no clue whatsoever about how systemd development is done?
No, I'm not misunderstanding. You will need a repair option to make the logs readable.
Oh yes you were, let me quote you verbatim on this:
They aren't meant to be corrupted in this manner either, but apparently journald has a fsck option in an attempt to fix it. Crazy.
You thought journald apparently had a fsck option; it doesn't, which just goes to show how little you know about systemd.
As a sys admin I'm not interested in what it is doing.
As a sysadmin you should know how your logging system works and what limitations it has. This is just basic SA stuff.
In this case you should be able to distinguish between malformed log entries and actual file corruption of the logs. This is apparently something that is really hard for you to understand.
Neither you or the tin pot systemd crew have any idea what logging actually means.
Yeah right, like you were the smart one. I am sure you are a hot-shot Linux developer with a CS degree who has to fend off job offers from leading firms every week, because that pretty much describes the leading systemd developers. They include, among many others, Greg KH, the kernel developer who maintains the stable Linux kernels (probably number two after Linus T. in the kernel group), and Poettering, who has over a decade of Linux development experience.
I could go on, but the CVs of the systemd developers are really impressive, and many of them work for leading Linux distros like Red Hat, Canonical, SUSE, Debian etc. If these Linux distros know nothing about logging, please write a peer-reviewed paper that sets them all straight.
Good luck with your BSD adventure. Be sure to tell them that BSD is all about choice, so they should support several init systems simultaneously, and by the way, they should break up their source repos into many smaller independent groups, or else they are "bloated" and "monolithic".
And don't cry when your BSD fork makes a systemd clone and throws away its old, obsolete, legacy script-based init system, because this is exactly what is going to happen; and yes, the BSDs will get binary logging too, it is only a matter of time.
All these changes will be "forced down your throat" no matter how much you whine about it.
BSD is for people who hate Linux, so you will feel right at home there. It is just a matter of time before you kowtow to the party line about how the GPL is bad because it doesn't allow BSD sponsors to close-source the code, etc.
Yes, you are confused about several things, including licensing payments.
Your claim that people are discouraged from contributing because Coverity/Jenkins may discover that their patch has security issues or breaks the build is laughable. Of course such patches should be rewritten, and the submitter should be glad such problems were discovered. Asking the submitter to rewrite a patch, or having a developer with commit access fix it, is everyday work in FOSS land. It happens all the time and discourages nobody; it is something the submitter learns from.
Really, should the alternative be to accept every broken patch just so as not to hurt the submitter's feelings? I don't think anybody wants that.
Regarding your claim that the systemd developers don't play nice with others and suffer from a "God complex" (talk about hyperbole), this is plain wrong:
You can find lots of statements from people who actually contributed, even small patches, saying the systemd developers were really friendly and helpful. The proof is also in the numbers: there are hundreds of such minor contributors to systemd, which strongly indicates a good, inclusive developer community. Compare that to the almost non-existent non-systemd developer ecosystems.
Sure, the systemd developers sometimes say no to certain patches or features, but again, that isn't a "God complex"; it is project leadership and the technical know-how to reject things that are bad for the project. This happens in all FOSS projects.
Fedora - was working then broke again.
Ah, so you were trying to use ZFS on Fedora, even though that isn't supported. Problems may occur in such cases, but they have nothing to do with systemd, since systemd works fine with ZFS.
Seriously? Try again. I am not confused at all. But you are blind to what is happening.
But you really do seem to be very confused, since you erroneously talk about systemd contributors needing to pay licensing fees: "Of course this also means external contributors can easily be fended off, as nobody can afford the licenses on their private budget."
That is just a total misunderstanding of how things work, so it really seems you are confused about even the basic facts of how systemd is developed.
Disagree or not, logging directly to a full DB is very common, which is why it is a selling point for every commercial syslog implementation I know of.
Regarding the indexing and structuring of log files, this is exactly what journald does. It dramatically reduces the work needed to analyse them, even if you re-index them or convert them to another structured format like JSON.
Working with indexed, structured files with defined field values is always better than working with unstructured, un-indexed text files.
systemd's journal log files will always have an advantage over the poorly defined syslogd text logs, no matter how you spin it.