An anonymous reader writes "I work for a small company that does vehicle tracking using RFID. All of the RFID data is sent to our servers, where it is processed into useful reports for our users. The company is growing, and as we collect more and more data, our data processing engine is having trouble keeping up with the load. Currently all of the data is stored in a single MySQL database instance. I have identified some areas in the engine where we could make things more efficient, but as the data grows we will need a solution that can scale. If you had to implement a scalable data processing solution, would you choose a MapReduce-type solution, e.g. a Hadoop cluster, or an RDBMS, e.g. Oracle with parallel queries? Is there a point at which the dataset size warrants a MapReduce approach over a traditional RDBMS approach? If you chose an RDBMS approach, which RDBMS would you use? What are the tradeoffs between the two approaches?"
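To make the comparison concrete, here is a minimal sketch of the same aggregation expressed both ways; the record shape and field names are hypothetical, not from the submitter's system. A MapReduce-style job spells out explicit map, shuffle, and reduce phases (which is what lets a framework like Hadoop spread the work across a cluster), while an RDBMS expresses the identical result as one GROUP BY query.

```python
from collections import defaultdict

# Hypothetical RFID read records: (vehicle_id, timestamp)
reads = [
    ("V1", "2010-01-01T08:00"),
    ("V2", "2010-01-01T08:05"),
    ("V1", "2010-01-01T09:00"),
]

# Map phase: emit a (key, value) pair per input record.
mapped = [(vehicle_id, 1) for vehicle_id, _ in reads]

# Shuffle phase: group emitted values by key.
grouped = defaultdict(list)
for key, value in mapped:
    grouped[key].append(value)

# Reduce phase: combine each key's values into a final result.
counts = {key: sum(values) for key, values in grouped.items()}

print(counts)  # {'V1': 2, 'V2': 1}

# The RDBMS equivalent is a single declarative query:
#   SELECT vehicle_id, COUNT(*) FROM reads GROUP BY vehicle_id;
```

In a real Hadoop job the map and reduce functions run on many machines with the shuffle handled by the framework; in an RDBMS the planner decides how (and whether) to parallelize the same GROUP BY.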