To describe it as "getting in the way" is a gross misrepresentation of the theory of computation and the theory of algorithms.
Yes, it does not really get in the way, in much the same way that OO does not get in the way of algorithm analysis any more than imperative programming does. The real question is whether it is always a good idea to replace one abstraction with another, especially when the replacement has greater problems with fidelity to the actual machine. More on that after I zoom in on your relevant statement:
nor is it generally the case that algorithms expressed in imperative languages are more efficient or easier to analyze than those expressed in functional languages.
It was never a question of whether an algorithm is more efficient or whether its running time is easier to analyze. Again, it was a question of fidelity--does the abstraction faithfully model the actual implementation, and does discarding it in favor of a simpler-to-understand abstraction offer any real benefit to understanding? The problem, as the GGGP is pointing out, is that most systems in actual use are imperative in nature, and along with that come assumptions that do not hold in functional languages--assumptions that can *hinder* understanding if students are not aware of them.
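To make that concrete, here is a minimal Haskell sketch of the kind of assumption I mean (the helper names are mine, for illustration only): the textbook imperative analysis charges O(1) for an in-place store, while a purely functional "update" has to build a new structure, costing O(i) for a plain list or O(log n) for something like Data.Sequence.

```haskell
import Data.IORef
import qualified Data.Sequence as Seq

-- Imperative assumption baked into most textbook analyses: a store is O(1).
imperativeBump :: IORef Int -> IO ()
imperativeBump r = modifyIORef' r (+ 1)   -- one in-place write, unit cost

-- Purely functional counterpart: no mutation, so "updating" index i means
-- building a new structure. A plain list pays O(i) to copy the prefix;
-- Data.Sequence pays O(log n). Neither is the O(1) the imperative
-- analysis quietly assumed.
setNthList :: Int -> a -> [a] -> [a]
setNthList i x xs = take i xs ++ [x] ++ drop (i + 1) xs

setNthSeq :: Int -> a -> Seq.Seq a -> Seq.Seq a
setNthSeq = Seq.update

main :: IO ()
main = do
  r <- newIORef (0 :: Int)
  imperativeBump r
  print =<< readIORef r                               -- 1
  print (setNthList 2 99 [0 .. 5 :: Int])             -- [0,1,99,3,4,5]
  print (setNthSeq 2 99 (Seq.fromList [0 .. 5 :: Int]))
```

None of this makes the functional versions wrong--it just means the constant-time-update assumption a student carries over from imperative code silently stops holding.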
Also, as a side note, the Wikipedia article's assertion about false lower bounds is misleading. There are a variety of different delays associated with memory access, which may be address dependent (e.g. seek times on hard drives if you are using swap, cache misses, etc.). If these costs are polynomial in the amount of memory used by the program (assuming some sort of locality effects), then the algorithm will still be a polynomial time algorithm.
Actually it's not at all misleading--the article was talking about a specific machine model, and the assumption holds true for *that* model. On the other hand, your assumption that there are different delays in memory access (due to seek times and whatnot) *was* misleading--you are now effectively trading the RASP model for a more general one that does not assume constant-time access to memory. Of *course* the lower bound on running time will differ from the RASP model's, and depending on the actual delays you propose, the model may or may not end up matching the Turing machine's memory access performance. But that *still* doesn't make this alternate model equivalent to the Turing machine--not in terms of running time complexity, because the running times depend on mechanisms that are *not* common to both. (For that matter, have you ever seen a real system that has *infinite* memory? The Turing machine assumes that.) The Church-Turing thesis, after all, only states that such machines CAN compute the same decidable problems, not that they can compute them with the *same* running times.
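If a toy example helps, here is a rough Haskell sketch of why the two models hand you different numbers. The cost models are my own illustrative assumptions, not anything from the article: charge a RASP one step per memory access, and charge a single-tape Turing machine the head travel between consecutive accesses. Run the probe positions of a binary search over a million cells through both and the RASP charges about 20 steps while the tape charges on the order of the whole array.

```haskell
-- Illustrative cost models only: RASP/RAM pays one step per access;
-- a single-tape machine pays the head travel between consecutive accesses.

raspCost :: [Int] -> Int
raspCost probes = length probes

tapeCost :: [Int] -> Int
tapeCost probes = sum (zipWith (\a b -> abs (b - a)) (0 : probes) probes)

-- Cell positions touched by a binary search over n cells (one branch
-- followed throughout, which is enough to show the shape of the cost).
binaryProbes :: Int -> [Int]
binaryProbes n = go 0 (n - 1)
  where
    go lo hi
      | lo > hi   = []
      | otherwise = let mid = (lo + hi) `div` 2
                    in mid : go lo (mid - 1)

main :: IO ()
main = do
  let probes = binaryProbes 1000000
  putStrLn ("RASP charges: " ++ show (raspCost probes))   -- about 20
  putStrLn ("Tape charges: " ++ show (tapeCost probes))   -- about 1,000,000
```

Same algorithm, same input, wildly different totals--which is exactly why you can't swap machine models and expect the complexity claims to carry over unchanged.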
Some abstraction must be fixed before an algorithm can be analyzed; a Turing machine is a convenient abstraction, but it is not the only such abstraction.
But is the actual proposed abstraction--functional instead of imperative--an adequate (not just "convenient") abstraction? I agree with the GGGP here--in terms of algorithm complexity analysis, it is not, just as the Turing machine is not an adequate replacement for the RASP machine in terms of theoretical performance.