A few hundred million rows is no trouble for PostgreSQL if it's configured right (a rough sketch of the knobs people usually tune first is below).
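By way of illustration, here is a minimal sketch of what "configured right" tends to involve. The values are assumptions for a dedicated box with roughly 32 GB of RAM, not recommendations for your workload:

```sql
-- Illustrative starting points only: tune to your own hardware and workload.
ALTER SYSTEM SET shared_buffers = '8GB';          -- ~25% of RAM is a common starting point (requires a restart)
ALTER SYSTEM SET effective_cache_size = '24GB';   -- planner's estimate of how much data the OS cache can hold
ALTER SYSTEM SET work_mem = '64MB';               -- per sort/hash operation, so keep it modest
ALTER SYSTEM SET maintenance_work_mem = '1GB';    -- speeds up VACUUM and index builds
SELECT pg_reload_conf();                          -- picks up everything except restart-only settings
```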
And if you go beyond that, there are some great ways to deal with the problem:
1. Partitioning: make one large logical table out of smaller child tables. This is a great fit for primarily historical data, since you can partition by month, quarter, or whatever time period makes sense for your application. When it comes time to archive or delete old data, all you have to do is detach that month's table and move it to the archive location, or just drop it. MUCH less expensive than a DELETE with a WHERE clause (see the sketch after this list).
2. BigSQL: if you want the scalability of NoSQL with the querying power of PostgreSQL, check out this package.
3. If you are starting to accumulate serious data, hopefully you are making serious money too. There are scores of commercial entities that can help you get a lot more performance out of PostgreSQL; some offer performance add-ons, others simply have a lot of experience and good ideas about how to design a solution.
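For the partitioning point, here is a minimal sketch using declarative partitioning (PostgreSQL 10 and later; older versions need inheritance plus triggers). The `events` table and its columns are hypothetical, just to show the shape of it:

```sql
-- Hypothetical events table, range-partitioned by month.
CREATE TABLE events (
    event_id   bigint      NOT NULL,
    created_at timestamptz NOT NULL,
    payload    jsonb
) PARTITION BY RANGE (created_at);

-- One child table per month; rows inserted into "events" are routed automatically.
CREATE TABLE events_2024_01 PARTITION OF events
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
CREATE TABLE events_2024_02 PARTITION OF events
    FOR VALUES FROM ('2024-02-01') TO ('2024-03-01');

-- Archiving or deleting a month is a metadata operation, not a row-by-row DELETE:
ALTER TABLE events DETACH PARTITION events_2024_01;  -- keep it around as an archive table
-- or simply:
DROP TABLE events_2024_01;
```

Queries against `events` that filter on `created_at` only touch the relevant partitions, and detaching or dropping a month is effectively instant no matter how many rows it holds.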
These steps may sound like a pain, but NoSQL brings plenty of pain of its own: limited querying ability, extra work to maintain data integrity, stability issues, and bizarre limitations in some areas. Think it through carefully, and don't fall for anyone's hype.