As what I would consider a medium-weight AWS user (our account is about 4 grand a month), I am still quite happy with AWS. We built our system across multiple availability zones, all in us-east, and had zero downtime today as a result. We had a couple of issues where we tried to scale up to meet load and couldn't spin up anything in us-east-1a (or, if we could, we couldn't attach it successfully to a load balancer because of internal connectivity issues), but we spun up a new instance in us-east-1b, attached it without a hitch, and handled the load just fine. The load balancers worked as expected (and hoped for), and the segregation of issues between availability zones was fairly successful.
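For anyone curious what that failover move looks like, here's a minimal sketch. It uses boto3 as a modern stand-in for the tooling we actually script against, and the AMI id, instance type, and load balancer name are placeholders, so treat it as the shape of the thing rather than our exact code:

    # Sketch: launch a replacement instance in a healthy AZ and attach it
    # to the existing Classic Load Balancer. boto3 is assumed here; the
    # AMI id, instance type, and LB name are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    elb = boto3.client("elb", region_name="us-east-1")

    # Launch from the prebuilt AMI, pinning the healthy availability zone.
    resp = ec2.run_instances(
        ImageId="ami-12345678",            # placeholder AMI id
        InstanceType="m1.large",           # placeholder instance type
        MinCount=1,
        MaxCount=1,
        Placement={"AvailabilityZone": "us-east-1b"},
    )
    instance_id = resp["Instances"][0]["InstanceId"]

    # Wait until the instance is running before attaching it.
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])

    # Register the new instance with the existing (Classic) load balancer.
    elb.register_instances_with_load_balancer(
        LoadBalancerName="web-cluster-lb",   # placeholder LB name
        Instances=[{"InstanceId": instance_id}],
    )

The point is that nothing in there cares which availability zone you pick, which is what made routing around us-east-1a painless.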
I think fixing these issues is just as high a priority for Amazon as it would be for any internal IT team, so I don't give much credence to the argument that having your own servers and your own internal IT staff would solve the problem any more effectively: I think it just gives you more of an illusion of control, because you can see that you're working on it instead of trusting that Amazon is working on it.
If there are any AWS lessons to be taken away from this, they are:
1) EBS may not be ready for prime time. Most of our servers are instance-store anyway, both for performance reasons and because of reliability problems we have had with EBS in the past.
2) You should keep your server templates maintained as up-to-date AMIs so you can deploy to any availability zone at any time. We also have our load balancer attachment scripted, so spinning up new instances to feed a cluster is a single CLI execution where we specify the availability zones (sketched below).
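That single-command script is roughly the following shape. Again, this is a sketch wrapping the same launch-and-register calls as above, using boto3 rather than our actual tooling, and all of the defaults (AMI id, load balancer name, instance type) are placeholders:

    # Sketch: one-shot "spin up N instances in a given AZ and attach them
    # to the cluster's ELB" command. boto3 and all defaults are assumptions.
    import argparse
    import boto3

    def main():
        parser = argparse.ArgumentParser(
            description="Launch instances from the current AMI and attach them to the cluster ELB.")
        parser.add_argument("--az", required=True, help="availability zone, e.g. us-east-1b")
        parser.add_argument("--count", type=int, default=1)
        parser.add_argument("--ami", default="ami-12345678")     # placeholder template AMI
        parser.add_argument("--elb", default="web-cluster-lb")   # placeholder LB name
        args = parser.parse_args()

        ec2 = boto3.client("ec2", region_name="us-east-1")
        elb = boto3.client("elb", region_name="us-east-1")

        # Launch the requested number of instances in the chosen AZ.
        resp = ec2.run_instances(
            ImageId=args.ami,
            InstanceType="m1.large",                              # placeholder instance type
            MinCount=args.count,
            MaxCount=args.count,
            Placement={"AvailabilityZone": args.az},
        )
        ids = [i["InstanceId"] for i in resp["Instances"]]

        # Wait for them to come up, then register with the load balancer.
        ec2.get_waiter("instance_running").wait(InstanceIds=ids)
        elb.register_instances_with_load_balancer(
            LoadBalancerName=args.elb,
            Instances=[{"InstanceId": i} for i in ids],
        )
        print("attached:", ", ".join(ids))

    if __name__ == "__main__":
        main()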
Check out http://perfcap.blogspot.com/2011/03/understanding-and-using-amazon-ebs.html for a nice explanation of some of the issues you may come across with EBS and the internals behind them.
Overall, I still give Amazon a good rating. This was a major outage and we felt barely a hiccup.