Instead of "sharding" (splitting customers across multiple copies of the database), you could try a NoSQL solution as the first layer to handle the flood of writes. Then a recurring process can query the data in your NoSQL object store (by timestamp) and aggregate it into an SQL database for reporting. You could archive those processed entries (or wait until they age out) to another object store as your "data warehouse" -- basically just an archive in case you need to do different aggregate reporting in the future (depending on storage size, of course).
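The recurring aggregation step could look something like this sketch (the event shape, field names, and the in-memory stand-in for the NoSQL store are all hypothetical; in practice the `raw_events` list would be a timestamp-filtered query against your actual store):

```python
import sqlite3
from collections import defaultdict

# Hypothetical stand-in for the NoSQL store: raw write events with timestamps.
# In a real system this would be a query against the store, filtered by timestamp.
raw_events = [
    {"ts": 1000, "customer": "a", "bytes": 120},
    {"ts": 1005, "customer": "a", "bytes": 80},
    {"ts": 1010, "customer": "b", "bytes": 200},
]

def aggregate_since(events, since_ts):
    """Roll up raw events newer than since_ts into per-customer totals."""
    totals = defaultdict(lambda: {"count": 0, "bytes": 0})
    for e in events:
        if e["ts"] > since_ts:
            t = totals[e["customer"]]
            t["count"] += 1
            t["bytes"] += e["bytes"]
    return totals

# Write the aggregates into the SQL reporting database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE report (customer TEXT PRIMARY KEY, count INT, bytes INT)")
for customer, t in aggregate_since(raw_events, since_ts=999).items():
    db.execute("INSERT INTO report VALUES (?, ?, ?)",
               (customer, t["count"], t["bytes"]))
db.commit()
```

The key point is that the NoSQL layer only has to absorb fast, dumb writes; all the expensive grouping happens later, in batch, against data queried by timestamp.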
I must ask, do you really need to store each full piece of information written by these clients at such a high volume?
Depending on how you use the data, you could even just hold the results in memory for X hours/minutes, then aggregate-process them and write the results to your SQL DB. A single DB with many application servers would be fine in this scenario, since writes only happen every X hours/minutes. (You are probably already flat-file logging the incoming requests; that is an archive if you *really* need to go back.) If you cannot afford to lose in-memory data when an app server dies, solutions like EhCache (Java) will persist the memory to disk in case of hardware/software failure.
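A minimal sketch of that buffer-and-flush pattern (class and field names are hypothetical; a real app server would flush from a background thread rather than inline, and would need disk persistence such as EhCache's to survive a crash):

```python
import sqlite3
import time
from collections import Counter

class AggregatingBuffer:
    """In-memory rollup of incoming requests, flushed to SQL every `interval` seconds."""

    def __init__(self, db, interval):
        self.db = db
        self.interval = interval
        self.counts = Counter()          # per-customer request counts held in memory
        self.last_flush = time.monotonic()

    def record(self, customer):
        self.counts[customer] += 1
        if time.monotonic() - self.last_flush >= self.interval:
            self.flush()

    def flush(self):
        # Upsert the accumulated counts into the reporting table, then reset.
        for customer, n in self.counts.items():
            self.db.execute(
                "INSERT INTO report(customer, count) VALUES(?, ?) "
                "ON CONFLICT(customer) DO UPDATE SET count = count + excluded.count",
                (customer, n),
            )
        self.db.commit()
        self.counts.clear()
        self.last_flush = time.monotonic()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE report (customer TEXT PRIMARY KEY, count INT)")
buf = AggregatingBuffer(db, interval=3600)
for c in ["a", "a", "b"]:
    buf.record(c)
buf.flush()  # forced here for the demo; normally triggered by the interval
```

The write amplification drops dramatically: thousands of requests become one SQL write per customer per interval, which is why a single reporting DB can keep up.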