Well, I have a few issues with the cloud hype, starting with the scarcity of evidence to support claims that cloud services are cheaper and/or more secure and/or more reliable than doing things yourself. Every major cloud provider has had serious downtime, and there is only so much you can attribute to being more visible at greater scale or to users not configuring HA tools properly. Far too many online services also run into significant security/privacy problems. And on cost, going with the cloud rather than your own systems tends to be favourable at certain scales (other things being equal), but it can be outrageously expensive in others.
These myths aren't really the point here anyway. The point in this case is that no matter how fast your recovery time may be, whatever was happening on your hardware at the time it failed is lost, and in some cases you simply can't make that transparent to your users. Not everything in the world of programming is a distributed map-reduce where losing a hardware node means you just redistribute the 0.0001% of the job it was doing to another node and no-one notices. Not everything in the world of networking can tolerate a multi-second failover process without an observable blip in connectivity. As for redundant/HA storage, the CAP theorem called and asked to speak with you about your database, but I think you were on with physics at the time so I just took a message.
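That CAP trade-off can be made concrete with a toy sketch (entirely my own illustration, not anyone's real system): during a network partition, a replicated store must either refuse writes it can't replicate to a majority (staying consistent, sacrificing availability) or accept writes on whichever side it can reach (staying available, letting replicas diverge).

```python
# Toy model of the CAP trade-off: two replicas, one network partition.
# All names here (Replica, write, "CP"/"AP" modes) are hypothetical.

class Replica:
    def __init__(self, name):
        self.name = name
        self.data = {}

def write(replicas, reachable, key, value, mode):
    """mode='CP': refuse the write unless a majority of replicas is
    reachable; mode='AP': write to whatever is reachable and accept
    that replicas may diverge."""
    if mode == "CP" and len(reachable) <= len(replicas) // 2:
        return False              # sacrifice availability, stay consistent
    for r in reachable:
        r.data[key] = value       # sacrifice consistency, stay available
    return True

a, b = Replica("a"), Replica("b")
# Partition: a client on each side can only reach its local replica.
write([a, b], [a], "x", 1, mode="AP")
write([a, b], [b], "x", 2, mode="AP")
assert a.data["x"] != b.data["x"]                      # AP: diverged
assert write([a, b], [a], "y", 1, mode="CP") is False  # CP: refused
```

Neither branch is "transparent to your users": in one they see errors during the partition, in the other they see conflicting data afterwards.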
It's not just about whether the wastage due to more frequent failures works out cheaper economically than paying a premium for better hardware. It's also about how much downtime you (or your customers) are willing to tolerate and what proportion of overall system time is spent just recovering from failures. If you've ever had the joy of watching the (N+1)-th drive fail in your RAID with N-way redundancy while it's still rebuilding after you replaced the earlier failures, you'll know what I mean.
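A rough back-of-the-envelope sketch of why that rebuild window is scary (my own illustrative numbers, assuming independent drive failures with a constant annualized failure rate, which real drives don't strictly obey):

```python
# Probability that at least one more drive fails while an array is
# rebuilding, modelling each surviving drive as an exponential failure
# process with a given annualized failure rate (AFR). The 2% AFR and
# 24-hour rebuild are illustrative assumptions, not measured figures.
import math

def p_failure_during_rebuild(surviving_drives, rebuild_hours, afr=0.02):
    """P(>= 1 of the surviving drives fails during the rebuild window)."""
    hourly_rate = -math.log(1 - afr) / (365 * 24)
    p_one_survives = math.exp(-hourly_rate * rebuild_hours)
    return 1 - p_one_survives ** surviving_drives

# e.g. 11 surviving drives in a 12-drive array, 24-hour rebuild, 2% AFR:
print(round(p_failure_during_rebuild(11, 24), 4))
```

The per-rebuild probability is small, but it grows with array size and rebuild time, and bigger drives mean longer rebuilds; across many arrays and many rebuilds, "the (N+1)-th failure mid-rebuild" stops being a freak event.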