It seems to me that the issue lies in whether the data are on the cloud, or just the programs. If I lose the ability to edit a Word document from Office-For-Cloud but I have the file stored locally, I grumble that 'the idiots who run the thing' broke the program, and wait for the 'smart guy white knights' to come fix it for me. But in this case I'm holding those bits (exclusively, or as a copy), so I know the data are safe. Nuke the server from orbit, for all I care - I'm annoyed that I've lost the ability to continue working, but I've only lost time (bad enough, I know...). Downtime length and frequency become the only factors in my unhappiness.
If the whole thing is on the cloud without a user-held copy, my SuperImportantLifeWork.doc can turn into vapor if the worst case happens. Now we add a new factor - which files I lost, and what's involved in regenerating them. This is the predominant factor in my unhappiness as a user - phone numbers are hard enough to pull together again for many of us, but when we expand that to everything else on our phones (or extrapolate to what may eventually live on the cloud - pictures, documents, schedules, patient data, etc.), these losses become catastrophic.
In the end, we usually hear about the same set of factors being important for 'good' backups - different physical hardware, offsite storage, a different power system, geographic separation, etc., in roughly that order (depending on the data, usage, and so on). These companies really need to make sure the user has the opportunity to apply those factors themselves, by letting them maintain a complete (or optionally partial) copy of the data local to the user.