I mostly agree with your remarks, but I think there is a question of scale. I've done several startups, and in a startup one person is often the CTO (that would have been me, twice), the primary developer of the core software (ditto), and the systems manager for the entire network of computers owned by the company, which on occasion has been pretty much my cluster of computers plus, eventually, a "company owned" server or two once we had enough capital or cash flow to afford them.
From this level of "my basement plus your basement" startup, there is a set of scaling steps that leads through VC incubators, to getting actual VC, to hiring a skeleton staff (cheap, quite possibly fresh out of school, and paid in part with options or the prospect of options), to making money but not as fast as you burn it, to making money (one hopes) faster than you burn it, and on to fame and fortune and early retirement.
At intermediate steps in this process, IT is not the polished gem that it might be for a fully developed, capitalized, and profitable company. Backups might have been set up by the original founders (and done correctly), but all it takes is one new hire who doesn't completely understand backups -- and who, in a tiny startup, is necessarily given serious responsibility with little oversight -- to make a small change that fucks them up without even realizing it. I've been around a long time, and trust me, it happens; if you are LUCKY, you discover it when some tiny unimportant file is overwritten and you try to restore it and can't.
If this were a startup at the stage just past profitability -- small but important database of actual customers and/or their data; a team of maybe three or four IT people still wearing many hats, so the database person is also the systems manager, the primary software developer is also the web manager, and the original CTO/developer/general factotum is distracted by the demands of sales and corporate politics, with a board made up partly of the VC people who want rapid growth to a liquidity event -- this scenario isn't that unlikely. The DB person says "I'm going nuts adding all these new workstations to sales desks," the CTO says "let's hire a DB person," the CEO/COO says "we can't afford a $150-200K position right now" (duh!), so the CTO says "we'll hire some kid straight out of college with DB chops." The kid comes in; they show the kid the DB and sit them down; the kid tries a few tentative SQL commands, but has no experience with whatever actual DB they are using and credentials that are more talk than substance, and in trying to make a copy of the DB to play with without breaking the original, fucks it up. In the meantime, they have been backing up to a RAID, and the RAID started throwing errors, but the same DB person who left the kid to play and orient themselves had JUST STARTED the rebuild, or was trying to fix the problem and rerun the failed backup, when...
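As an aside, the "copy of the DB to play with" step is exactly where a little ritual saves you. A minimal sketch of the safe version, assuming a MySQL shop for concreteness -- prod_db and scratch_db are hypothetical names, not anybody's real setup:

    # Take a consistent, read-only snapshot of production;
    # never point experimental tools at the live database.
    mysqldump --single-transaction prod_db > prod_db_snapshot.sql

    # Build a completely separate scratch database and load the snapshot THERE.
    mysql -e "CREATE DATABASE scratch_db;"
    mysql scratch_db < prod_db_snapshot.sql

    # The kid now experiments on scratch_db; prod_db is never touched.

The point isn't the particular tool; it's that the copy lives under a different name in a different database, so a fumbled command can't land on the original.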
Life is a comedy of errors like that more often than one might think. Most times they are non-fatal, but every now and then the ENT surgeon slashes the carotid on the first day of his/her first surgery and the patient dies on the table, or the kid working on steel 200 feet above the ground loses their balance on their first day, trying to emulate people who have been doing it for years and walking a narrow beam over nothingness. Sometimes people can get back up on the horse that threw them, and sometimes they end up living under an overpass or dead at their own hand.
Humans are highly error-prone information processing systems. We deliberately design critical systems -- as much as we can -- with multiple levels of mutual auditing to catch and prevent errors before they occur, but it simply isn't possible to idiot-proof every process, and accidents can happen even to those who are not idiots. Few are the people with root privileges who have NEVER EVER entered rm *.junk in some directory but accidentally typed an extra space before the ".junk" (blush, been there, done that, done WORSE than that). If you are the only systems manager for a network, there IS no one to audit you for errors. If backup fails, and it does, it is YOUR ass on the line, but that doesn't mean backup never fails no matter how hard you work; there are always windows of vulnerability, always chances of mistyping a character and not catching the mistake right away. Root people learn to work as meticulously as the ENT surgeon -- never ever slash a patient's primary vessels by accident, never ever enter a command as root without checking it twice or even three times, especially commands like rm or cp.
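For anyone who hasn't been burned by that particular one yet, the failure mode is worth spelling out, along with the defensive habit. A minimal sketch (the filenames are made up):

    # Intended: remove only the junk files.
    rm *.junk

    # Typed with one stray space. The bare * now matches EVERYTHING in the
    # directory, and ".junk" becomes a second, probably nonexistent, argument:
    rm * .junk

    # The meticulous-root-person ritual: make the shell show you what the
    # glob expands to BEFORE anything destructive runs.
    echo rm *.junk
    ls *.junk && rm -i *.junk

One extra keystroke of paranoia versus one flattened directory; the arithmetic is not close.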
For a small startup, erasing the primary database while backups were screwed up might well be enough to destroy the company on the spot. This might cost the VC people all of their $500K or $1M investment. It would cost all of the employees their options and their jobs. It might leave the founders holding a giant liability in unpaid LOC expenses with no hope of income to pay them. It could cost a lot of people a LOT of money. So it is understandable that there would be a certain amount of anger and blamethrowing and ass covering and, sure, even lawsuits. It isn't really the kid's fault, but then, it may not have really been ANYBODY's "fault" -- merely something that adds up to incredibly bad luck on a few calculated risks.
Will "the kid" get back up on the horse that threw him/her? Dunno. Probably. Sad tale either way.