Actually, on any DB it's better to create the index after the fact if possible, for a simple reason. The most common index is a B-tree, and building it after the data is loaded lets the engine sort the keys once and write out a compact, well-filled tree. Creating it before loading means a B-tree insertion (and the lookup that goes with it) for every row, which takes much longer, and the resulting page splits leave the tree fragmented and sparsely filled, so your queries will not be as efficient either. In my initial attempt using MySQL, I actually did create the index ahead of time, but the time required to load the data was much too long. I researched this issue quite a bit and found this article, which echoed the sentiments of many: it's much more efficient to create the index after the fact.
http://www.devshed.com/c/a/MySQL/MySQL-Optimization-part-1/6/

This seemed like it would work, but led me to the previously described problem. I would also add that the company I work for does use MySQL in certain instances with tables over 100 million rows; however, those databases are maintained by a third-party company specializing in the application, and even they wrestle with corruption quite frequently. I've used MySQL quite a bit in the past, and I'm not saying it's impossible to use with large amounts of data. I'm saying it's a PAIN, and out of the box Postgres is much easier to work with and much easier to maintain. Anyway, this is just my experience, so take it FWIW. BTW, it's pretty obvious that you're trolling at this point, so I'm only responding for the benefit of those who might actually be interested in doing this for a living in the "real world".
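For anyone who wants to see the load-then-index ordering I'm describing, here's a minimal sketch. It uses SQLite through Python's stdlib `sqlite3` purely for illustration (the table and index names are made up); the same ordering applies to a MySQL or Postgres bulk load, where the time difference is far more dramatic:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE events (id INTEGER, payload TEXT)")

# Bulk-load first, with no index in place: every insert is a plain
# append to the table, with no per-row index maintenance.
rows = [(i, f"row-{i}") for i in range(100_000)]
cur.executemany("INSERT INTO events VALUES (?, ?)", rows)
conn.commit()

# Only now build the index: one pass that sorts the keys and writes
# out densely packed B-tree pages.
cur.execute("CREATE INDEX idx_events_id ON events (id)")
conn.commit()

# The planner can now use the index for point lookups.
plan = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE id = 42"
).fetchall()
print(plan)
```

Doing it the other way around (CREATE INDEX before the INSERTs) runs, but pays the index-maintenance cost on every single row, which is exactly the slowdown I hit on my MySQL load.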