A month or so after I was "promoted" from lowly developer to "Systems Infrastructure Manager" during a wholesale move from an old green-screen AIX-based system to a brand new in-house custom rewrite in modern tech, we had some of the new replacement hardware onsite and being built up (the replacement applications weren't ready to go yet, but that's not important to this story).
One Friday, the UPS support contractor came in to do his servicing of the UPS - that went well, he finished up and switched it back from "bypass" to "protected". That triggered a surge on the electrical supply to both server rooms, which took the AIX box offline. Due to the nature of the green-screen application, there was no way for it to be highly available - the data couldn't be replicated in real time, and it didn't even talk to anything other than its own binary database files...
A few hours later, the corrupted AIX box was restored and ready to go - by this time, the company (a busy call centre) had been on manual processes for the entire afternoon. On the advice of the UPS contractor, who said the surge was probably the result of too much load on the UPS at the time, we decided to do a full shutdown of the entire system, switch the UPS back over into "protected" and bring everything back up - so we waited until 6pm and did just that...
At 6pm, I threw the switch - and promptly looked over my shoulder at the comms racks behind me in the server room. The comms racks were billowing smoke. The comms equipment was burning. Before I could react, I heard loads of loud pops and bangs - both inside the server room and outside it.
Another surge. This one did real damage - a dozen network switches dead, over 40 PSUs in the servers dead, one server dead outright, and loads of call centre desktops went (loudly) pop.
Panic time. The UPS contractor was called back in - they gave the UPS a clean bill of health and promptly left, disavowing any responsibility.
The board of directors shat themselves - at that point we didn't know the ultimate damage count, but suffice it to say the company was dead in the water to any observer.
Cue a desperate night of testing servers, pulling dead PSUs and swapping redundant PSUs between servers so that each server had at least one good PSU. The comms equipment was harder to solve - we had to get some expensive switches from our local shop to tide us over. Desktops were bought from the local consumer PC store to give us enough machines to run the company.
Ultimately, we were back up and running by 8am Saturday - it wasn't pretty, but it worked. Three of us in the IT tech team worked through the night scraping the bare minimum together.
My predecessor's DR plan was fleshed out only to the point of "we have a DR site" (a commercial site a town over that we had a contract to use - no equipment there, no plans for how to fail over to it, etc.).
So, on to the management failure....
It just so happens that one of my tasks for the following Monday was to submit my DR plan for the "new world infrastructure" to the board, who were having their quarterly board meeting the following week (10 days after the company almost died). It was a modest plan, but it required some equipment outlay to make any DR event as smooth as possible - it kept the same contract with the off-site unit, etc.
They turned it down, said it wasn't needed.
I quit the following week.