Integer, floating and interval arithmetic (Score 1)
I remember a quote, attributed (likely incorrectly) to Seymour Cray: "Do you want it fast, or do you want it accurate?"
If you want absolutely exact arithmetic, code it entirely with arbitrary-precision exact integer arithmetic. Every rational number can be expressed as a ratio of integers, and you can directly control the precision of approximation for irrational numbers. Indeed, if your rationals get unwieldy, you can even control how they are approximated. And complex numbers, of course, are just pairs of real numbers in practice. (Especially if you stick to rectangular representations.) If you stick to exact, arbitrary-precision integer arithmetic, and to representations built on top of it that you control, then you can build a bit-exact, reproducible mathematics environment: integer arithmetic is exact, and you have full control of everything layered on top of it. Such an environment is very expensive, and not necessarily helpful. You can even relax the order of operations, if you defer losses of precision. (For example, you can add a series of values in any order in integer arithmetic, as long as you defer any truncation of the representation until after the summation.)
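A short sketch of this idea in Python, whose built-in int and fractions.Fraction types are exactly this kind of arbitrary-precision exact arithmetic (a pair of integers per value), with truncation deferred until after the exact computation:

```python
from fractions import Fraction

# Exact rational arithmetic: each value is a pair of arbitrary-precision
# integers, so results are bit-exact and reproducible.
a = Fraction(1, 10) + Fraction(2, 10)
assert a == Fraction(3, 10)  # exact, unlike 0.1 + 0.2 in binary floating point

# Order of operations can be relaxed: no rounding happens mid-stream,
# so summation in any order gives the identical exact result.
values = [Fraction(1, 3), Fraction(1, 7), Fraction(1, 11)]
forward = sum(values)
backward = sum(reversed(values))
assert forward == backward == Fraction(131, 231)

# Any truncation of the representation is deferred until after the sum:
approx = float(forward)  # a single, final approximation step
```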
If you venture into floating point, IEEE-754 gives you a lot of guarantees, but you need to specify the precision of each operation, the exact order of operations, and the rounding mode applied to each operation. You also need to check the compliance of the implementation, such as whether subnormals flush to zero (a subtle and easy-to-overlook non-conformance). Floating-point arithmetic rounds at every step, because of its exponent-plus-mantissa representation, so the order of operations matters. Vectorization and algebraic simplification both change the results of floating-point computations. (Vectorization is less likely to if you can prove that all the computations are independent. Algebraic simplification, however, can really change the results of a series of adds and subtracts; it is less likely to seriously affect a series of multiplies, although it can affect that too.)
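A concrete illustration of why order matters, using Python floats (IEEE-754 binary64): addition is not associative, because each add rounds to 53 bits of mantissa before the next one runs.

```python
import math

# Floating point rounds after every operation, so addition is not associative.
xs = [1e16, 1.0, -1e16]
left_to_right = (xs[0] + xs[1]) + xs[2]  # the 1.0 is absorbed: result is 0.0
reordered = (xs[0] + xs[2]) + xs[1]      # cancel the big terms first: 1.0
assert left_to_right == 0.0
assert reordered == 1.0

# math.fsum defers the rounding (tracking exact partial sums internally)
# and recovers the correctly rounded result regardless of order:
assert math.fsum(xs) == 1.0
```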
And behind curtain number three is interval arithmetic. That one is especially interesting, because it tracks at every step the range of possible outcomes, based on the intervals associated with the inputs. For most calculations, this just yields reasonably accurate error bars. For calculations with sensitive dependence on initial conditions (i.e., so-called "chaotic" computations), you stand a chance of discovering fairly early on that the results are unstable.
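A toy sketch of the idea, using nothing beyond the Python standard library. (Real interval libraries also round each endpoint outward so the bounds remain rigorous; this minimal version omits that.) Iterating a chaotic map blows the interval up quickly, flagging the instability:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """A closed interval [lo, hi] tracked through each arithmetic step."""
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # The extremes of the product lie at the endpoint combinations.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def width(self):
        return self.hi - self.lo

# Iterate the logistic map x -> 4x(1-x), a standard chaotic example,
# starting from an interval only 2e-9 wide around 0.3.
x = Interval(0.3 - 1e-9, 0.3 + 1e-9)
one, four = Interval(1.0, 1.0), Interval(4.0, 4.0)
for _ in range(20):
    x = four * x * (one - x)

# After 20 steps the bounds have exploded past width 1: the computation
# is telling you its own result is unstable.
assert x.width() > 1.0
```

Note the error bars here are pessimistic: x appears twice in x(1-x), and naive interval arithmetic cannot see that the two occurrences are correlated, so the bounds grow even faster than the true sensitivity. That pessimism is the usual price of the guarantee.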