The naive CRR (Cox, Ross, and Rubinstein) method for pricing options is O(n^2), where n is the number of levels in a recombining binomial pricing lattice. That is, a lattice like a binary tree, but with cross links connecting nodes so that an up-down move and a down-up move land on the same node. The naive approach requires visiting every one of these nodes, hence O(n^2), and the error of the computed option price goes down only in proportion to the node spacing, i.e. like 1/n.
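For concreteness, here is a minimal sketch of that naive O(n^2) backward-induction pricer, for a European call only. The function name `crr_price` and its parameter names are mine, chosen for illustration, not taken from any of the papers discussed:

```python
import math

def crr_price(S0, K, r, sigma, T, n):
    """Naive CRR lattice price of a European call.

    O(n^2): backward induction visits every node of the
    recombining lattice exactly once.
    """
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))   # up factor
    d = 1.0 / u                           # down factor (makes the tree recombine)
    disc = math.exp(-r * dt)              # one-step discount factor
    p = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability

    # payoffs at the n+1 terminal nodes
    values = [max(S0 * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]

    # step back through the lattice one level at a time
    for level in range(n - 1, -1, -1):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(level + 1)]
    return values[0]
```

With S0 = K = 100, r = 0.05, sigma = 0.2, T = 1, the result converges toward the Black-Scholes value (about 10.45) only as fast as 1/n, which is exactly the slow convergence the comment describes.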
For at least 15 years this cost has been reduced to "linear time" (really the important relation is between the price error and the CPU time) by means of a variety of extrapolation methods (this began with Richardson extrapolation) that use evaluations on two trees of different sizes to get a much smaller error. There are in fact numerical methods that, for special options, can do slightly better than this; Broadie 1996 is one reference. While extrapolated trees are pretty fast and very easy to understand, there are yet faster methods: adaptive-mesh Crank-Nicolson PDE solvers do a bit better.
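A two-point Richardson extrapolation along these lines can be sketched as follows. This is a minimal illustration assuming the leading error term of the tree price decays like 1/n; for some strikes the CRR error oscillates with n, so practical implementations typically extrapolate a smoothed tree instead. The hypothetical `crr_price` below is just a plain backward-induction CRR pricer, included so the snippet stands alone:

```python
import math

def crr_price(S0, K, r, sigma, T, n):
    """Naive O(n^2) CRR lattice price of a European call."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    disc = math.exp(-r * dt)
    p = (math.exp(r * dt) - d) / (u - d)
    values = [max(S0 * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]
    for level in range(n - 1, -1, -1):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(level + 1)]
    return values[0]

def richardson_price(S0, K, r, sigma, T, n):
    """Two-point Richardson extrapolation.

    If the n-step tree price has error c/n + O(1/n^2), the
    combination 2*P(2n) - P(n) cancels the c/n term.
    """
    return (2.0 * crr_price(S0, K, r, sigma, T, 2 * n)
            - crr_price(S0, K, r, sigma, T, n))
```

This is the error-vs-CPU-time trade the comment refers to: the 2n-step tree costs roughly 4x the n-step tree, but the combined estimate can be far more accurate than either input, so hitting a given error target takes much less work than running one enormous tree.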
Just a couple of years ago, Dai et al. published a paper showing how to get linear time by an entirely different approach involving combinatorial sums. This may have improved performance bounds for some exotic options, but it did NOT do much to improve the real-world, implemented algorithmic performance of pricing the European and American options that are so commonly traded on exchanges in the US and worldwide. So, at least for the most important class of options, Dai et al. was kind of a snoozer.
The paper referenced in the summary above is entirely a follow-up to Dai et al. 2008. This new paper merely shows that there is no "shortcut" for evaluating the relevant sums via hypergeometric functions, a kind of special function common in mathematical physics. So, in short, all this says is that an already-not-fastest method cannot be made faster by one particular numerical-methods approach. It is certainly deserving of publication and dissemination, but it changes the world not at all.