There is a classic trade-off here. R is great for getting trained and productive VERY quickly. It has over 4,600 packages that will do almost anything you need, and it handles some very sophisticated statistical methods right out of the box. What can't be done out of the box (or rather from the core download, since it's not really a boxed product) has likely been coded up in a package, even very complex biostatistical and bioinformatics methods. R also has a lot of graphical data visualization functionality built in, extended by some awesome packages like ggplot2. Additionally, R does a great job with documentation: it can inject data, visualizations, and code into markdown documents, which makes publication a whole lot easier. R's functional/imperative/quasi-object-oriented approach has its quirks (but then what language doesn't?). One thing to note, however, is that R is not in itself multithreaded, and it requires that all the data it is working on reside in memory. For very large, very complex datasets that could be a bit of a problem.
So where R is great from a quick-ramp-up perspective, Python will probably scale better to huge datasets in the terabyte and petabyte range. It has come a long way, especially with scipy, numpy, and the other packages listed above. If you anticipate having to scale like that, Python may be the better long-term toolset.
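To make the scaling point concrete, here is a minimal sketch of the kind of out-of-core pattern Python makes easy: pandas can stream a file in fixed-size chunks, so memory use stays flat no matter how big the file gets, in contrast to base R's everything-in-memory model. The file name and column name below are hypothetical, just for illustration.

```python
# Compute a mean over a CSV too large to fit in memory by streaming it
# in chunks. "events.csv" and the "value" column are hypothetical names.
import pandas as pd

total = 0.0
count = 0
# read_csv with chunksize yields DataFrames of at most 1,000,000 rows each,
# so only one chunk is ever resident in memory at a time.
for chunk in pd.read_csv("events.csv", chunksize=1_000_000):
    total += chunk["value"].sum()
    count += len(chunk)

print(f"mean over {count} rows: {total / count}")
```

For truly enormous data you would reach for memory-mapped arrays or a distributed framework, but this simple chunked pattern already covers a lot of ground.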
I like them both and use them both. I choose which one to use for a project (and stick with that toolset for the whole project) based on dataset size, statistical/visualization complexity, and documentation requirements. R tends to win out a bit more often for me.