I do find it annoying to deal with binary formats for some non-executable files like configs and small logs. That said, the argument you make about the fragility of binary is false and has little merit.
There is no "mostly ACID"--a database either is or it isn't, and the human-readability of a file has no bearing on how corruptible it is. The underlying file system and the implementation have far more to do with it.
PostgreSQL, for example, is a very robust, highly concurrent, ACID-compliant data store--so much so that it is often used as the back end for logging in large, important systems. Failed transactions roll back cleanly, and single-byte errors most certainly do NOT render all data thereafter inaccessible! And that is despite the data being binary-formatted, even if every column is a VARCHAR.
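That rollback behaviour is easy to see for yourself. Here is a minimal sketch using Python's built-in sqlite3 as a stand-in for a server-grade database like PostgreSQL (the table name and messages are made up for illustration):

```python
# Demonstrate ACID rollback: a transaction that fails mid-way leaves
# no trace, while previously committed data stays intact.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (id INTEGER PRIMARY KEY, msg TEXT)")
conn.execute("INSERT INTO log (msg) VALUES ('boot ok')")
conn.commit()

try:
    with conn:  # opens a transaction; rolls back automatically on error
        conn.execute("INSERT INTO log (msg) VALUES ('half-written entry')")
        raise RuntimeError("simulated crash mid-transaction")
except RuntimeError:
    pass

# The failed transaction is gone; the earlier committed row survives.
rows = [r[0] for r in conn.execute("SELECT msg FROM log")]
print(rows)  # -> ['boot ok']
```

The same guarantee is what makes a database-backed log trustworthy: a crash mid-write costs you the in-flight entry, not the store.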
It is all in the design and implementation. Binary formats and protocols generally have field and record delimiters, as well as error detection and correction features like checksums. If a byte is corrupt you lose one record at most, and usually just a single field. Delimited text is just a very primitive binary format in that sense, and one without checksums or error correction at that. I've never seen a truly robust data store built atop a text format!
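To make the framing point concrete, here is a toy sketch of a length-prefixed record format with a per-record CRC32. The format is my own invention for illustration, not any real log format, and it doesn't handle a corrupted length field (a real format would add a sync marker for resynchronisation):

```python
# Toy binary record format: a corrupt byte flags one record's checksum,
# while every other record still reads back cleanly.
import struct
import zlib

def pack_record(payload: bytes) -> bytes:
    # frame: [4-byte big-endian length][4-byte CRC32][payload]
    return struct.pack(">II", len(payload), zlib.crc32(payload)) + payload

def unpack_records(blob: bytes):
    """Yield (checksum_ok, payload) pairs; a bad CRC flags only that record."""
    off = 0
    while off + 8 <= len(blob):
        length, crc = struct.unpack_from(">II", blob, off)
        payload = blob[off + 8 : off + 8 + length]
        yield (zlib.crc32(payload) == crc, payload)
        off += 8 + length

blob = bytearray(b"".join(pack_record(m) for m in
                          [b"rec one", b"rec two", b"rec three"]))
blob[9] ^= 0xFF  # flip one byte inside the first record's payload

results = list(unpack_records(bytes(blob)))
print([ok for ok, _ in results])  # -> [False, True, True]
```

With plain newline-delimited text, by contrast, the reader has no way to even detect that a line was mangled--it just parses garbage.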
On more than one occasion I have seen log files in text format become corrupt, and in most cases the missing or unreadable lines were exactly at the point I was interested in seeing. It is quite possible for a text log to go haywire and stay that way until a process is killed and restarted. Text does not help here. Similarly, a binary database store can be useless if poorly implemented, such as by not using transaction statements in SQL or by using MyISAM storage for your MySQL database.
My criticism of systemd in this particular instance is not that binary formats are more fragile than text...indeed I don't know enough to say either way. It is really just a minor annoyance to me, because it creates a need to use an unfamiliar, less generalised tool to view and analyse the data than cat and grep and so on.
In any case, for a REAL high-volume critical system I would push all my data via syslog to a robust storage system underpinned by a database like PostgreSQL or another ACID-compliant system. There are times when a system crashes on boot before such facilities are online, but boot-ups happen only occasionally on servers, and if I have to view a binary log out of a failed system I will just deal with the annoyance and use the provided log viewer on a functioning system or a rescue boot environment.
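For what it's worth, the forwarding half of that setup is only a couple of lines of classic syslog configuration. A rough rsyslog sketch (the host name, database name and credentials are placeholders, and the ompgsql syntax here is from memory--check it against your rsyslog version's docs):

```
# /etc/rsyslog.conf fragment (hypothetical host/db names)
# On each server: forward everything to a central loghost over TCP.
*.*  @@loghost.example.com:514

# On the loghost: write entries into PostgreSQL via the ompgsql module.
module(load="ompgsql")
*.*  action(type="ompgsql" server="127.0.0.1" db="Syslog"
            uid="rsyslog" pwd="secret")
```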
Yes, it seems to ignore the Unix ideal, but Linux, macOS and other contemporary platforms and applications with Unix roots abandoned the Unix way a long time ago as tech moved on and pragmatism set in. It gets tedious to manage text files; they do not scale. I guess the decision was made to use binary for compactness, and to forgo the need to rotate logs and use general text-processing tools for log analysis--tools which I am comfortable using, but which could be thought of as cumbersome.
I am still getting used to systemd. I am a bit disoriented, and I find the scope of what they are trying to tackle rather wide to put under the management of a single project--but hey, the Linux kernel is a huge monolith with thousands of tightly coupled binary modules, and it works well enough. And in my experience, once I figure out the systemd way of doing something, I find it a big improvement over the old crufty init way. Making your own init scripts, even from the LSB template, and using things like monit or other kludgy ways to monitor and restart processes is not a status quo I miss anymore.
I hope the petty bickering in the Debian community, which boils down to politics over technical merits, does not stifle innovation. Debian is not known as an innovation leader, but it has embraced change and progress before and will have to continue to do so when that change meets its stability standards.