The time when people who had no idea started to believe they could create something better than UNIX.
This created the mess we had in the 1990s, when it was OK to keep log files in binary formats that you could only view through a small non-resizable window. A time when you could display the owners of the open files on your network share... but you couldn't do anything with that information except write it down and act on it manually. Of course, that data was also displayed in a small non-resizable window.
There is a reason why normal init is based on shell scripts, and that reason is simply that there is no reason against it. Shell scripts are perfectly adequate for the job. Binaries on Unix, however, are much harder to deal with. If you want to change a binary you have to get the source code, edit it, get it to compile, and then hope it runs. It's even worse with dynamically linked binaries, since they depend on other files, which is particularly bad for init.
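As a sketch of why shell scripts are adequate here: a classic rc-style init script, trimmed to its essentials, is just a case statement you can read and edit with any text editor. The daemon name and pidfile path below are hypothetical.

```shell
#!/bin/sh
# Minimal sketch of a classic rc-style init script.
# DAEMON and PIDFILE are made-up examples.
DAEMON=/usr/sbin/exampled
PIDFILE=/var/run/exampled.pid

rc_main() {
    case "$1" in
      start)
        echo "Starting exampled"
        "$DAEMON" &                  # launch in the background
        echo $! > "$PIDFILE"         # remember its PID
        ;;
      stop)
        echo "Stopping exampled"
        kill "$(cat "$PIDFILE")" 2>/dev/null
        rm -f "$PIDFILE"
        ;;
      restart)
        rc_main stop
        rc_main start
        ;;
      *)
        echo "Usage: $0 {start|stop|restart}" >&2
        return 1
        ;;
    esac
}

rc_main "$@"
```

Changing what gets started, or in which order, is a one-line edit with no compiler involved.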
Binaries are just a botched attempt to somehow get faster execution. Nothing in the design of Unix requires them. Unlike on Windows, you shouldn't need to link in a library to get some API; you should instead have a little program you can call. (Actually, Windows now has a little wrapper allowing you to make arbitrary calls to DLLs, since even Microsoft has recognized the problem.)
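A trivial sketch of that approach: instead of linking against a library binding for something like stat(), you shell out to a standard utility. filesize here is a made-up helper name; wc(1) is the real program doing the work.

```shell
#!/bin/sh
# Sketch: functionality as a small program you call,
# not an API you link. "filesize" is a hypothetical helper.
filesize() {
    # wc -c prints the byte count of its input
    wc -c < "$1"
}
```

Any language that can spawn a process gets this "API" for free, with no linking at all.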
Dynamically linked libraries are just a botched solution to the problem of library bloat. You shouldn't need them: if you want some feature, you should just call the program that implements it. That's how bc worked originally. All the calculation functionality was in dc, and bc just reformatted its input into what dc expects. The reason this no longer works well is the long startup times caused by library bloat.
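The bc-over-dc split can be sketched with a toy translator. to_dc is a made-up name, and unlike the real bc front end it handles only a single "A op B" expression, but it shows the idea: rewrite infix input into the reverse-Polish form dc expects and let dc do all the arithmetic.

```shell
#!/bin/sh
# Toy sketch of the original bc front end: translate a simple
# infix expression "A op B" into dc's postfix input.
# "to_dc" is a hypothetical name for illustration.
to_dc() {
    set -- $1                       # word-split "2 + 3" into 2, +, 3
    printf '%s %s %s p\n' "$1" "$3" "$2"   # emit "2 3 + p" (p = print)
}
```

Then `to_dc '2 + 3' | dc` prints 5, with the calculation done entirely in dc.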