Once it's shown that you can still use scripts, you have to find another spurious angle.
You can call script snippets, but they aren't allowed to do the things init scripts could, such as calling each other, detaching, or prompting. You're shoehorned into the limited execution context of systemd.
Why have loads of duplicated code in all those scripts? They all do basically the same thing.
The answer is in your one word "basically". There is a word for "95% the same", and that's "different". There's another word for not dealing correctly with the 5%, and that's "broken".
So what do you do to monitor those services started by the scripts? Manually watch them or add another binary to watch and restart them?
When something needs to know the status, it calls the start/stop script with a "status" parameter. And for most jobs, you don't need to monitor anything. Some tasks are one-time jobs, and others are monitored from the outside.
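The status convention described above can be sketched as a minimal init-script skeleton. This is a hedged illustration, not any particular distribution's script: the service name "mydaemon", the pidfile path, and the elided start/stop bodies are all hypothetical; the exit codes follow the LSB convention (0 = running, 3 = not running).

```shell
#!/bin/sh
# Sketch of the start/stop/status convention: callers invoke the script
# with a "status" argument and get an LSB-style exit code back.
# "mydaemon" and its pidfile path are hypothetical placeholders.

PIDFILE="${PIDFILE:-/var/run/mydaemon.pid}"

status() {
    # "Running" means: the pidfile exists AND the recorded PID
    # still answers signal 0.
    if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
        echo "mydaemon is running"
        return 0              # LSB: program is running
    fi
    echo "mydaemon is not running"
    return 3                  # LSB: program is not running
}

case "${1:-}" in
    start)  echo "starting mydaemon..." ;;   # start logic elided
    stop)   echo "stopping mydaemon..." ;;   # stop logic elided
    status) status; exit $? ;;
    "")     : ;;                             # no argument: no-op
    *)      echo "Usage: $0 {start|stop|status}" >&2; exit 2 ;;
esac
```

Anything that needs the service state, a monitoring system included, just runs the script with "status" and reads the exit code.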
And if there needs to be a watchdog, you use an actual watchdog. One that fits the task at hand, not one that can't handle special requirements. One that's capable of asking "is the master up?", not just "am I up?". One that's capable of negotiating with its neighbors on who should be promoted to master. One that isn't interested in whether a process is present, but whether a service is present. One that allows systems to have different runtime configurations, including dynamic changes, like ensuring services are not running, but present and configured to start when needed. Like adding/removing services without requiring a reboot.
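The distinction made above, a watchdog that reasons about the service and its intended state rather than about a PID, can be sketched like this. Everything here is illustrative: the function names, the "enabled" flag file, and the init-script path are assumptions, not any real watchdog's API.

```shell
#!/bin/sh
# Hypothetical sketch of a service-level watchdog pass. It asks
# "should this service be up, and is it answering?" rather than
# "does a process exist?". The flag file stands in for per-host
# runtime configuration: a service can be present-but-stopped on
# purpose, and can be added/removed without a reboot.

is_enabled()    { [ -e "${ENABLED_FLAG:-/etc/mydaemon/enabled}" ]; }
probe_service() { /etc/init.d/mydaemon status >/dev/null 2>&1; }
start_service() { /etc/init.d/mydaemon start; }
stop_service()  { /etc/init.d/mydaemon stop; }

watchdog_pass() {
    if is_enabled; then
        # Configured to run: probe the service itself, heal on failure.
        probe_service || start_service
    else
        # Configured NOT to run: make sure it stays down.
        ! probe_service || stop_service
    fi
}
```

Because the probe is "does the service answer?" and the policy is "what should this host be running?", the same loop handles masters, standbys, and deliberately stopped services, which a bare process-restart loop cannot.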
The great thing about init scripts is that they give you that freedom.
Computing is all about automation; there's nothing wrong with making OS service management more automated.
You got that wrong too. What matters most to businesses is reliability and cost. Whether that is achieved through minions or hardware or licenses is irrelevant.
Automation is only good if it can be relied on, and if, when the unexpected happens, as it is wont to, things can be troubleshot and repaired with a minimum of impact.
If it means a week of production downtime when things go wrong, because nobody can troubleshoot and fix what's wrong, it's not a benefit.
A good sysadmin plans for the unexpected. Not just for sunshine days, or things that can be anticipated. Sure, plan for that too, but don't rely on it. Things will go pear shaped, and that's when you need a human to be able to troubleshoot and fix things, with a minimum of impact to the customer. Who, quite frankly, doesn't give a damn about automation or any other method, only about whether the product is available, and how long it will take to fix when it isn't. Not how. That's the domain of the sysadmin.
And the experienced sysadmin says loud and clear that systemd is utter shite, that puts all the eggs in one basket, adds restrictions, and is too abstracted to troubleshoot in any meaningful way when things go wrong.
I see systems set up by other admins that have important jobs started through at, cron or even remote runners, to avoid systemd, and regain control. Even a /dev2 system that avoids systemd-udev trampling all over the place like an elephant in a china factory. I cannot blame them one bit.