Meltdown allowed you to dump the memory contents of other VMs on the same node as you...
I know of at least one other critical Azure vulnerability that would have let tenants in separate VMs on the same hypervisor futz with each other's memory. That one never made it public, though, because the researcher disclosed it responsibly. The only reason I know about it is that a guy I grew up with was in the incident response chain at Microsoft and helped coordinate the patching.
I got Azure patches for the Meltdown flaw a good couple of days before Cisco had the UCS patches available. MS even initially allowed us to schedule the VM patches in batches to mitigate the application impact. Though when we were about halfway through, MS forced the reboots, because the vulnerabilities had been publicly disclosed at that point.
What's your plan for preventing that in the future and for dealing with it if it's happened?
The future plan is the same as the current plan. Detect the breach. Verify the accessibility and integrity of the data. Notify the client.
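The "verify integrity" step usually boils down to comparing current data against known-good checksums. A minimal sketch of that idea in Python, assuming you keep a manifest of SHA-256 hashes from before the incident (the manifest format and paths here are made up for illustration):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large files don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(manifest: dict[str, str], root: Path) -> list[str]:
    """Return the names of files that are missing or whose hash has changed."""
    bad = []
    for name, expected in manifest.items():
        p = root / name
        if not p.exists() or sha256_of(p) != expected:
            bad.append(name)
    return bad
```

Anything `verify` returns is data you can't vouch for anymore, which is exactly what the notification to the client needs to cover.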
Similarly, people are more likely to try to DDoS Azure or AWS than they are your in-house server.
Nobody can DDoS all of Azure. Have you seen how many regions they have? Plus, lots of luck DDoSing those ExpressRoute circuits. Totally different infrastructure and paths into the data center than the stuff they front out via Azure to clients. Besides, haven't you heard of local caching? How long is the DDoS going to last? The business impact will be minimal. The drives in our laptops are 250 GB SSDs. They can cache plenty of recently accessed files, and thanks to OneDrive, do so just fine. (Of course, there are plenty of other places to get file services in the cloud: Box, etc.)
MS also offers geo-redundant storage for ridiculously affordable rates. All you need is a like set of VMs in another region and you can be back up and running in minutes, if that. Besides, the web tier for all of the major apps is already redundantly load balanced across regions. How long do you think it takes to replay the transaction logs into a recovery database? That's assuming you aren't already replicating the changes at the DB layer.
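The reason log replay is fast is that recovery just reapplies an ordered list of committed changes on top of the last snapshot, so recovery time scales with the log since the snapshot, not with the size of the database. A toy sketch of the idea, assuming a simple key-value model rather than any particular database engine:

```python
from typing import Any

def recover(snapshot: dict[str, Any], log: list[tuple[str, str, Any]]) -> dict[str, Any]:
    """Rebuild database state by replaying a transaction log over a base snapshot.

    Each log record is (op, key, value); replay is one ordered pass over the log.
    """
    state = dict(snapshot)  # start from the last full backup
    for op, key, value in log:
        if op == "set":
            state[key] = value
        elif op == "delete":
            state.pop(key, None)
    return state
```

Frequent snapshots keep the log short, which is what puts "back up in minutes" within reach.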
The danger of the cloud is that it's a single point of failure
A cloud failure is no more scary than an on-prem failure. Downtime is downtime. At the end of the day, who is going to give you the most resources to get your job done? The cloud is just another stack of hardware in a building somewhere. Or multiple stacks of similar hardware all over the globe, depending on how much you want to pay for redundancy. Who is going to recover from the failure faster?