Sorry if it appears that I'm conflating the two; I didn't mean to imply that accuracy and complexity are necessarily related. I meant two separate things: sometimes one can reduce the complexity of a correct implementation without affecting accuracy, and sometimes one finds that a numerical method has been implemented incorrectly, or that the chosen method isn't applicable to the problem at hand, so the results it produces may simply be wrong.
The general case I have in mind for reducing complexity is one where the original coder chose a sub-optimal way to implement something. For example, some step in the algorithm needs the value of an integral inside a loop, so they make O(n) calls to an O(n) integration routine, rather than computing the integral once up front and reusing the stored values inside the loop.* (A sketch of the two approaches follows after the footnote.)
* And, no, we're not talking about small values of n, or limited memory, or any other reason somebody might have for doing this. It's just that they didn't realize they were doing something in an O(n^2) fashion when it could be done just as accurately in O(n).
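To make the pattern concrete, here's a minimal sketch of that kind of rewrite. The setup (a function f sampled at n points, with the loop needing the running integral of f up to each point) is my own illustrative assumption, not anything from a specific codebase; the point is only that the naive version redoes O(n) work on every iteration, while the precomputed version gets the same trapezoidal values in a single O(n) pass.

```python
import numpy as np

# Hypothetical setup: f sampled at n points; the loop body needs the
# integral of f from x[0] up to x[i] on every iteration.
n = 10_000
x = np.linspace(0.0, 1.0, n)
f = np.sin(2 * np.pi * x)
dx = x[1] - x[0]

# O(n^2) overall: an O(n) trapezoidal sum recomputed from scratch per call.
def integral_up_to(i):
    return np.sum((f[1:i + 1] + f[:i]) * dx / 2.0)

slow = [integral_up_to(i) for i in range(n)]       # n calls, each O(n)

# O(n) overall: accumulate the running integral once, then just index it.
cumulative = np.concatenate(([0.0],
                             np.cumsum((f[1:] + f[:-1]) * dx / 2.0)))
fast = [cumulative[i] for i in range(n)]

# Same trapezoidal-rule values either way, to floating-point round-off,
# so accuracy is untouched; only the asymptotic cost changes.
assert np.allclose(slow, fast)
```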