One thing we did was upgrade from Postgres 8.1 to 8.3. From what I read, 8.1's performance degrades rapidly under multiple concurrent long-running queries. 8.3 also stores data more compactly (smaller variable-length headers, plus HOT updates), which helps with the main bottleneck - hard drive throughput. IIRC, we got about a 10% improvement in query times with 8.3.
We also had two databases on one server, so the other thing that helped a lot was moving them onto two separate servers. The largest table was clustered on one of its fields, which made queries on that field fast. We didn't use autovacuum; instead we vacuumed overnight. A hardware upgrade also helped. Finally, we profiled our queries and made sure everything was indexed appropriately. None of this is rocket science, of course - it just shows that as your database grows, you have to get more and more involved in ensuring good performance.
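For the curious, here's roughly what that maintenance looks like in SQL. The table and column names (`events`, `created_at`) are hypothetical stand-ins, not our actual schema; the commands themselves are standard Postgres (the `CLUSTER ... USING` form needs 8.3+):

```sql
-- Index the hot column, then physically reorder the table by that index
-- so range scans on it touch far fewer disk pages.
CREATE INDEX events_created_at_idx ON events (created_at);
CLUSTER events USING events_created_at_idx;

-- With autovacuum off, reclaim dead rows and refresh planner statistics
-- during the overnight window:
VACUUM ANALYZE events;

-- Query profiling: check that the planner actually uses the index.
EXPLAIN ANALYZE SELECT * FROM events WHERE created_at >= '2009-01-01';
```

Note that CLUSTER is a one-time physical sort - Postgres doesn't maintain the ordering as new rows arrive, so it has to be re-run periodically.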
We investigated vertical scaling - a better, more expensive server - and that would have helped for a while, but the database was projected to double in size within 1-2 years, so it would have been no more than a stopgap. The conclusion I came to was that we had to move away from a standard relational setup. One option was sharding (though I think sharding is a workaround for RDBMS limitations, so I don't like it much); the other was something like a key-value store that scales horizontally. Unfortunately, I didn't stay at the company long enough to implement either, so I can't tell you which would have been the successful solution.
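To make the sharding idea concrete (we never actually built this): the core of it is just a deterministic rule mapping a row's key to one of N databases. A minimal sketch, assuming a `bigint` user id and an arbitrary choice of 4 shards:

```sql
-- Hypothetical routing rule: shard number = user_id modulo shard count.
-- Every query for a given user then goes to that one shard's server.
CREATE OR REPLACE FUNCTION shard_for(user_id bigint) RETURNS int AS $$
  SELECT (user_id % 4)::int;
$$ LANGUAGE sql IMMUTABLE;
```

The simplicity is deceptive, which is part of why I call sharding a workaround: cross-shard joins, transactions, and rebalancing when you change the shard count all land back on the application to solve.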