A few months back, we completed initial development of a new persistence layer for a demanding application. We'd put it all into PostgreSQL and were enjoying the easy JSON support and other features. It worked great.
So we got it up and running on high-end hardware in our five data centers, then we turned on the pipes for all the writes. But our systems team members were going insane trying to get high availability working right. It turns out there is just no good way to accomplish this in PostgreSQL. It could fail over to the slave if the master stopped responding, but fail-back was basically impossible: it required an rsync at the file-system level, which was prone to failure. When it failed, the docs said, just run it again. Each run took almost a full day.
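To give a sense of what the team was babysitting, here's a minimal sketch, not our actual tooling, of the kind of check you end up writing around a streaming-replication pair. It assumes PostgreSQL 9.1 or later, the psycopg2 driver, and placeholder hostnames and credentials:

```python
# A minimal monitoring sketch, not production code: assumes PostgreSQL 9.1+
# streaming replication, psycopg2, and placeholder DSNs.
import psycopg2

NODES = {
    "master":  "host=db-master dbname=app user=monitor",   # placeholder
    "standby": "host=db-standby dbname=app user=monitor",  # placeholder
}

def node_status(dsn):
    """Report whether a node is in recovery (standby) and, if so, a rough lag estimate."""
    conn = psycopg2.connect(dsn)
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT pg_is_in_recovery()")
            in_recovery = cur.fetchone()[0]
            lag_seconds = None
            if in_recovery:
                # Seconds since the last replayed transaction on the standby.
                cur.execute(
                    "SELECT EXTRACT(EPOCH FROM now() - pg_last_xact_replay_timestamp())"
                )
                lag_seconds = cur.fetchone()[0]
            return in_recovery, lag_seconds
    finally:
        conn.close()

if __name__ == "__main__":
    for name, dsn in NODES.items():
        in_recovery, lag = node_status(dsn)
        role = "standby" if in_recovery else "master"
        lag_text = "n/a" if lag is None else "%.0fs behind" % lag
        print("%s: %s (%s)" % (name, role, lag_text))
```

Knowing which box is the master and how far the standby has drifted is the easy part; it was the fail-back, that day-long rsync, that kept biting us.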
And it failed with alarming regularity! Under load, every couple of days the database would just freeze for ten to fifteen minutes, choking on some non-scary query. It would just sit there, stuck. Calls to it would block and eventually time out. When this happened, it would fail over to the slave, and we'd be days away from getting back to a sane state.
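For anyone wondering what "stuck" looks like from the outside, here's a minimal sketch, not from our code base, of how one might list the queries that have been running suspiciously long. It assumes PostgreSQL 9.2 or later (for pg_stat_activity.state), psycopg2, and a placeholder connection string:

```python
# A minimal diagnostic sketch: list non-idle backends running longer than a threshold.
# Assumes PostgreSQL 9.2+, psycopg2, and a placeholder DSN.
import psycopg2

DSN = "host=db-master dbname=app user=monitor"  # placeholder

def long_running_queries(min_seconds=60):
    """Return (pid, state, seconds, query) for backends busy longer than min_seconds."""
    conn = psycopg2.connect(DSN)
    try:
        with conn.cursor() as cur:
            cur.execute(
                """
                SELECT pid,
                       state,
                       EXTRACT(EPOCH FROM now() - query_start) AS seconds,
                       query
                  FROM pg_stat_activity
                 WHERE state <> 'idle'
                   AND now() - query_start > %s * interval '1 second'
                 ORDER BY seconds DESC
                """,
                (min_seconds,),
            )
            return cur.fetchall()
    finally:
        conn.close()

if __name__ == "__main__":
    for pid, state, seconds, query in long_running_queries():
        print("pid=%s state=%s %.0fs: %s" % (pid, state, seconds, query[:80]))
```

In our case the offending queries were never anything scary, which made the freezes all the more maddening.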
Don't think we didn't do our best to solve this. We spent many thousands of dollars on two different highly recommended consulting companies that specialized in PostgreSQL. They came on-site, looked at everything, and recommended a number of configuration adjustments, but nothing helped.
In desperation, with the project now seriously behind schedule, we worked over Christmas and branched the code to use MySQL as the database instead of PostgreSQL. Then we set up two parallel systems on identical high-end hardware ($50,000 machines), one for each database, and turned on all the pipes.
The result? MySQL answered queries in 50% less time than PostgreSQL. Plus, we already knew it handled HA well, and it never froze up the way PostgreSQL did.
We have since completely obliterated all traces of PostgreSQL from our code base.