I'm faced with a dilemma here: I'm an algorithmist, and believe most questions can be more accurately answered, in the long run at least, by a well-developed algorithm than by even the most skilled human being.
I agree, but there are a number of things about this case that are problematic for algorithmic analysis as such:
1) Ten years from now the primary growth industry in tech is going to be rasting, which won't be invented for another three years. How do you predict who is going to be successful in the highly competitive counter-rast and ablatives industries?
2) Related to that, Google is a stable corporate environment in which some kind of prediction has been possible for the past few years. The world is not stable. Never has been, never will be.
3) There is a huge industry of metrics-based success prediction, and it sucks. The entire SAT/GRE/MCAT thing is a lousy predictor of success, with only weak correlations between *AT scores and academic achievement, much less career achievement in the first ten years.
4) To validate such an algorithmic approach you would need to test it, likely by applying your metrics to people from ten to thirty years ago and seeing how well they did in the ensuing ten years. If you only apply the metric to people from ten years ago you have no idea if it is robust over time, and robustness over time (because: rast) is absolutely necessary.
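To make that backtesting point concrete, here is a minimal sketch of out-of-time validation on synthetic data. Everything in it is an illustrative assumption, not anyone's real methodology: the "metric score," the cohort years, and the drift parameter (standing in for world changes like the arrival of rast) are all invented for the example. The idea is simply that a metric tuned on one era can look predictive there and still decay on later cohorts.

```python
# Sketch of out-of-time validation: score cohorts that started 30, 20, and
# 10 years ago, then check whether the metric's predictive power held up.
# All data is synthetic; "drift" models the world changing under the metric.
import random

random.seed(0)

def make_cohort(n=200, drift=0.0):
    """Synthetic people: a metric score plus a 10-year outcome whose
    relationship to the score weakens as drift increases."""
    people = []
    for _ in range(n):
        score = random.gauss(0, 1)
        # Outcome tracks the score, but the link decays with drift.
        outcome = ((1.0 - drift) * score
                   + drift * random.gauss(0, 1)
                   + random.gauss(0, 0.5))
        people.append((score, outcome))
    return people

def pearson(xs, ys):
    """Plain Pearson correlation, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Cohorts by (hypothetical) career start year; drift grows over time.
cohorts = {
    1995: make_cohort(drift=0.0),   # era the metric was tuned on
    2005: make_cohort(drift=0.3),   # moderate drift
    2015: make_cohort(drift=0.7),   # heavy drift ("post-rast")
}

for year, people in sorted(cohorts.items()):
    scores = [s for s, _ in people]
    outcomes = [o for _, o in people]
    r = pearson(scores, outcomes)
    print(f"{year}: metric vs. 10-year outcome, r = {r:.2f}")
```

Run it and the correlation falls off across cohorts: a metric validated only on the oldest era would have looked fine while quietly losing its predictive power. That's the robustness-over-time check the validation needs, and it's exactly the check you can't run forward in time.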
So the odds of this working well are small. Making algorithms that work in the real world is hard, and these guys seem to have set themselves one of the hardest problems going.