So now you are going from saying that people do not use DBs for logging unless they are clueless (let me quote you on this):
"And yes, I have extensive experience with log analysis. You do not turn the logs into a database unless you are fully clueless."
...to saying that DBs can be a good thing?
And please remember that ACID doesn't give any protection against file corruption, nor does it prevent user-space programs from corrupting the logs by generating obviously impossible field values. But then again, neither does the journal nor plain-text logs.
Again, for those who need to run a full DB for their logs, systemd's binary file format is great, because it allows the journal to be exported in industry-standard formats like JSON. That way the remote log-sink database can receive and store rich metadata in a totally stable and structured way; changing hostnames, IPs, different wordings, or even different and changing languages in the daemon log output aren't a problem with the journal, since it is based on field values, not complex regexing of unstructured, undocumented, unstable, language-specific words.
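To illustrate (with an invented record; a real journal export carries far more fields, and the values here are made up), this is the kind of thing a log sink can do with one JSON-exported entry: route and filter on stable field names like _HOSTNAME and PRIORITY instead of regexing the message text.

```python
import json

# A hypothetical entry roughly as "journalctl -o json" might emit it
# (fields abridged, values invented for illustration).
record_json = ('{"_HOSTNAME": "web01", "PRIORITY": "3", '
               '"_EXE": "/usr/sbin/sshd", '
               '"MESSAGE": "Failed password for root"}')

record = json.loads(record_json)

# A log sink can route on stable field names, no message-text regexing needed.
assert record["_HOSTNAME"] == "web01"
assert int(record["PRIORITY"]) == 3   # syslog priority 3 = "err"
print(record["_EXE"])                 # -> /usr/sbin/sshd
```

The point is that the field names stay stable even when the human-readable MESSAGE changes wording or language.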
Oh, and tamper-evident, cryptographically "sealed" logs too (FSS, Forward Secure Sealing) if you want that.
If your remote log-sink solution isn't a full DB, you still get all the benefits of receiving structured log entries with e.g. full microsecond-precision timestamps _and_ monotonic timestamps. It is trivial to convert e.g. JSON output with defined field names to any other structured format, while converting and aggregating unstructured text files is a pain.
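As a sketch of how trivial that conversion is: here are two invented entries in the line-oriented JSON shape that "journalctl -o json" produces (one object per line, fields abridged), turned into CSV with nothing but the standard library. __REALTIME_TIMESTAMP is the journal's microseconds-since-epoch field.

```python
import csv
import io
import json

# Two invented journal entries, one JSON object per line (fields abridged).
export = "\n".join([
    '{"__REALTIME_TIMESTAMP": "1425745576123456", '
    '"_HOSTNAME": "web01", "MESSAGE": "session opened"}',
    '{"__REALTIME_TIMESTAMP": "1425745580654321", '
    '"_HOSTNAME": "db01", "MESSAGE": "checkpoint complete"}',
])

# Convert the defined field names straight into another structured format.
out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["timestamp_us", "host", "message"])
for line in export.splitlines():
    entry = json.loads(line)
    writer.writerow([entry["__REALTIME_TIMESTAMP"],
                     entry["_HOSTNAME"], entry["MESSAGE"]])

print(out.getvalue())
```

Try doing that reliably with free-form syslog lines from two daemons that format their timestamps differently.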
It is also great for those who only need local logging; the journal has many of the advantages of a full DB, but without the complexity and overhead. Its append-based file format is also much more robust against file-level corruption than databases. Since the log files are structured and indexed with field values, it is easy to perform powerful yet simple queries on them.
How do you find all syslog entries with the priority level "error" generated by the previous boot only?
With the journal it is: "journalctl -b -1 -p err"
And how do you generate a full list of every executable, including their path?
Since you can "tab" through the values in the journal, this can easily be done: "journalctl -F _EXE"
And with the -x switch, the help database is activated, giving a further explanation of what the log entry means, along with direct links to upstream support, perhaps linking directly to a page that explains the error code, etc.:
Example:
# journalctl -b -x -u systemd-logind.service
Mar 07 16:46:16 localhost systemd-logind[546]: New session 1 of user Peter H.S.
(log entry above, help database output below:)
-- Subject: A new session 1 has been created for user Peter H.S
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/m...
-- Documentation: http://www.freedesktop.org/wik...
There are seriously many people who just trawl through long log files (even using vi, apparently), because it is hard to grep for anything unless you already know what to grep for, and because regexes are, well, regexes: difficult to use, to understand, and to remember in all their many variations. Basically, newbies can't read or filter Linux syslog files by any means other than trawling through them.
They can't have a useful GUI either, because it is impossible to make a distro-agnostic syslog GUI. Again, a problem that systemd's journal solves.
The bottom line is that the only virtue syslog files have, namely that they are human-readable, is a serious hindrance to their use too: they can't add monotonic timestamps, microsecond-precision timestamps, and other metadata without becoming excessively difficult for humans to read.
With the journal you can give machine parsers exactly the structured log info they need and can benefit from, while still allowing easily readable log output: a simple command-line switch determines the format. So you get all the benefits of legacy text logs without their severe limitations. It is a win-win situation.
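A rough sketch of the idea (the entry and the rendering code are invented for illustration, not journalctl's actual implementation): the same structured record can be printed as a classic human-readable syslog-style line or as JSON, and only the requested view differs, never the underlying data.

```python
import json
from datetime import datetime, timezone

# One invented structured entry; the output "format" is just a view over
# the fields, much like journalctl's -o switch selects an output format.
entry = {"__REALTIME_TIMESTAMP": "1425745576123456",
         "_HOSTNAME": "localhost",
         "SYSLOG_IDENTIFIER": "systemd-logind",
         "MESSAGE": "New session 1 of user Peter H.S."}

def render(e, fmt):
    if fmt == "json":  # machine-friendly view: the fields themselves
        return json.dumps(e, sort_keys=True)
    # human-friendly view: a classic syslog-style line (UTC for simplicity)
    ts = datetime.fromtimestamp(int(e["__REALTIME_TIMESTAMP"]) / 1e6,
                                tz=timezone.utc)
    return (f'{ts:%b %d %H:%M:%S} {e["_HOSTNAME"]} '
            f'{e["SYSLOG_IDENTIFIER"]}: {e["MESSAGE"]}')

print(render(entry, "short"))
print(render(entry, "json"))
```

One source of truth, two presentations: that is the whole trick behind "human-readable when you want it, machine-parseable when you need it".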
These days, using simple, unstructured text log files is simply obsolete. The market and various industries have already decided this issue. Logs are meant to be analysed, and their sheer volume means they need to be structured, indexed, and carry rich logging info, just like systemd's journal. That way they can be aggregated and machine-parsed with minimal effort.