Comment Re:"it depends" (Score 2, Informative) 95
Real-workload performance testing is easy with MySQL: dump your database in a clean state (mysqldump, or a real filesystem snapshot if you have the hardware), turn on the general query log and wait some hours or days. (Don't use binary logging for this - the binary log only contains statements that modify your db's contents, such as UPDATEs and INSERTs, so it can't reproduce a read-heavy workload.) Then deploy the dumped database to a testing system, pick an appropriate starting point in the query log (the first statement that arrived after you dumped your db) and feed every statement after it to a mysql shell. Measure how long the run takes to complete with different MySQL settings.
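The replay step above can be sketched as a small script. This is a minimal sketch, not a full harness: the log format is the MySQL 5.7+ general query log, multi-line statements are ignored, and the `execute` hook (which in real use would feed statements to a `mysql` client or a connector against the test server) is an assumption - here it's a stub.

```python
import time

def replay(log_lines, start_ts, execute):
    """Replay every general-log Query entry at or after start_ts.

    Each log line looks like: "<timestamp>\t<thread id> <command>\t<statement>".
    Multi-line statements are not handled in this sketch.
    """
    t0 = time.monotonic()
    replayed = 0
    for line in log_lines:
        parts = line.split(None, 3)          # timestamp, thread id, command, statement
        if len(parts) < 4 or parts[2] != "Query":
            continue                         # skip Connect, Quit, etc.
        if parts[0] < start_ts:              # ISO-8601 timestamps compare lexically
            continue
        execute(parts[3])                    # real use: send to the test server here
        replayed += 1
    return replayed, time.monotonic() - t0

# Demo with a fake log excerpt and a stub executor.
log = [
    "2024-05-01T12:00:00.000001Z\t 7 Connect\troot@localhost",
    "2024-05-01T12:00:01.000001Z\t 7 Query\tSELECT * FROM user_blog WHERE uid=1",
    "2024-05-01T12:00:02.000001Z\t 8 Query\tINSERT INTO user_blog VALUES (2, 'hi')",
]
executed = []
n, elapsed = replay(log, "2024-05-01T12:00:01", executed.append)
print(n)            # -> 2
print(executed[0])  # -> SELECT * FROM user_blog WHERE uid=1
```

Comparing `elapsed` across runs with different MySQL settings gives you the rough benchmark described above.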
This will give you a rough approximation of real-life performance. The downside: it doesn't reflect that queries are normally processed in parallel, because they come from different clients and are handled by different MySQL server threads.
(Hm, a perl script parsing the query log - it contains server thread ids - and simulating different client threads should be easy to code for someone who knows perl well enough.)
However, when MySQL's usual knobs aren't enough, you may use MySQL's master-slave replication. While it's absolutely simple to set up and run, it can be problematic on the client side.
Using master-slave replication leads to the scenario where you direct ALL data-modifying statements (INSERTs, UPDATEs) to a single MySQL master process while executing read-only statements (SELECTs) on multiple slave processes. The master as well as the multiple slave machines maintain a full copy of your data; every modifying statement executed on the master is implicitly executed on the slaves as well, because MySQL replication simply means that the master forwards all UPDATEs and INSERTs to the slaves (so their copies of the data get updated too).
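The client-side routing this implies can be sketched as a tiny statement router: writes go to the master, reads round-robin across the slaves. The `Router` class, the verb list, and the plain strings standing in for real connections are all illustrative assumptions.

```python
import itertools

# Statements starting with these verbs must go to the master (illustrative list).
WRITE_VERBS = ("INSERT", "UPDATE", "DELETE", "REPLACE", "CREATE", "ALTER", "DROP")

class Router:
    """Route data-modifying statements to the master, everything else
    round-robin across the slaves."""
    def __init__(self, master, slaves):
        self.master = master
        self.slaves = itertools.cycle(slaves)

    def connection_for(self, sql):
        verb = sql.lstrip().split(None, 1)[0].upper()
        return self.master if verb in WRITE_VERBS else next(self.slaves)

# Demo with strings standing in for real connections.
r = Router("master", ["slave1", "slave2"])
print(r.connection_for("INSERT INTO user_blog VALUES (1, 'x')"))  # -> master
print(r.connection_for("SELECT * FROM user_blog"))                # -> slave1
print(r.connection_for("SELECT COUNT(*) FROM user_blog"))         # -> slave2
```

Real setups usually push this into a connection-pool layer or a proxy rather than inspecting SQL text per call, but the split is the same.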
Sounds nice, but it can lead to quite serious problems. In such a setup, the slaves' data will always lag behind the master's data.
Sometimes that doesn't matter. Your users won't mind if a new blog entry shows up on your site 2 seconds after it was posted. But they will mind if they post a blog entry themselves and it doesn't show up on the index page (to which they are redirected immediately after posting the new stuff) until they hit F5 two seconds later.
This can happen because the INSERT of the blog entry runs on the master server while the SELECT * FROM user_blog WHERE uid=foo from the client runs on one of the slave servers, which hasn't yet caught up with the master server's view of the data.
(Fancy software developers will see lots of possible race conditions popping up here, which can lead to more serious malfunctions in otherwise well-working, secure software.)
All of these problems can be cured, but that requires careful design of your application - or a serious piece of code auditing and refactoring.
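One common application-side cure for the stale-read problem (sketched here under assumed names - none of this is a MySQL feature) is read-your-writes session pinning: after a session writes, route its reads to the master for a grace period longer than your typical replication lag, so users always see their own just-posted data.

```python
import time

class PinningRouter:
    """After a write, pin that session's reads to the master for `grace`
    seconds so the user is guaranteed to see their own fresh data."""
    def __init__(self, master, slave, grace=2.0, clock=time.monotonic):
        self.master, self.slave = master, slave
        self.grace, self.clock = grace, clock
        self.last_write = {}              # session id -> time of last write

    def for_write(self, session):
        self.last_write[session] = self.clock()
        return self.master

    def for_read(self, session):
        t = self.last_write.get(session)
        if t is not None and self.clock() - t < self.grace:
            return self.master            # still within the lag window
        return self.slave

# Demo with a fake clock so the behaviour is deterministic.
now = [0.0]
r = PinningRouter("master", "slave", grace=2.0, clock=lambda: now[0])
print(r.for_read("alice"))    # -> slave (no write yet)
r.for_write("alice")          # alice posts a blog entry
print(r.for_read("alice"))    # -> master (pinned: within 2 s of the write)
now[0] = 3.0
print(r.for_read("alice"))    # -> slave (lag window has passed)
```

A fixed grace period is a heuristic; lag can spike past it, so critical read-after-write paths sometimes skip the slaves entirely.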
That's the downside: MySQL replication is absolutely easy to set up, but you have to do more on the client side. Anyway, it's worth a look.