The company I consulted for uses SAS (on the mainframe, on AIX boxes, and on PCs) for almost all of its data-processing needs, including ETL work. Now they're looking at "Big Data" and have discovered they need parallel processing to make it cost-effective: it outperforms the mainframe, there are no per-CPU-second charges, analysts can work from AIX boxes or PCs, etc.
I was able to show significant cost and performance savings in SQL queries compared with the mainframe (and the AIX boxes). Interestingly, substantial (50%-100%) speedups were also possible by accessing the Teradata machine in its native SQL, bypassing the SAS "in-database" Teradata support.
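For concreteness, the native-SQL route I mean is SAS's explicit pass-through, where you hand Teradata a raw SQL string instead of letting SAS generate the SQL for you. A minimal sketch (the server name, credentials, and table names below are placeholders, not the client's real ones):

```sas
proc sql;
   /* Open a direct connection to the Teradata machine
      (server and credentials here are hypothetical). */
   connect to teradata (server="tdprod" user=tduser password=XXXXXX);

   /* Everything inside the inner parentheses is native Teradata
      SQL, executed entirely on the Teradata side; only the final
      result set comes back to SAS. */
   create table work.acct_totals as
   select * from connection to teradata
      ( select acct_id, sum(txn_amt) as total_amt
        from dw.transactions
        group by acct_id );

   disconnect from teradata;
quit;
```

Because the aggregation runs inside Teradata's parallel engine and only the (much smaller) grouped result crosses the wire, this is presumably where much of the speedup came from.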
The interesting thing about Teradata is that they offer genuine parallel processing (like Hadoop), but as an end-user-ready SQL interface to a database engine (you still need sysadmins, though). Contrast this with Hadoop, where the Hadoop layer is basically the start of the road: you usually have to worry about hardware issues and software-architecture issues (such as which database engine to choose) as well. Sometimes the custom-made route is the right one (e.g. for Wall Street firms doing automated trading), but sometimes it's an outright liability in a DIY-hostile environment (e.g. in large corporations).
The Teradata machine I worked with supports SQL, SAS, and R (which competes with SAS, of course, and usually out-competes it at advanced statistics if you know what you're doing, though we had to use SAS exclusively, by order) and could easily handle terabytes of data.
So my suggestion is to take a look at it.
It's not open source (although it does support R), it's less fun for tinkerers, and it's harder to custom-parallelise your own algorithms on it (so I hear; I never tried). On the other hand, it does provide a ready-to-run parallelised SQL database and lots of storage. It's not cheap, but in a corporate environment that's usually not the first consideration.