Institute of Electrical and Electronics Engineers
Floating-point arithmetic is notoriously non-associative because the limited-precision representation requires intermediate values to be rounded to fit in the available precision. The resulting cyclic dependency in floating-point accumulation inhibits parallelization of the computation, including efficient use of pipelining. In practice, however, the authors observe that floating-point operations are "mostly" associative. This observation can be exploited to parallelize floating-point accumulation using a form of optimistic concurrency: they compute an optimistic associative approximation to the sum and then relax the computation by iteratively propagating errors until the correct sum is obtained.
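The non-associativity and the error-propagation idea can both be illustrated in a few lines. The sketch below is illustrative only, not the paper's hardware scheme: it demonstrates that grouping changes the result of a double-precision sum, and it uses Knuth's error-free two-sum transformation (one software analogue of capturing per-addition rounding error) to realize a "sum optimistically, then re-inject the errors until they vanish" loop. The names `two_sum` and `relaxed_sum` are assumptions introduced here for illustration.

```python
def two_sum(a, b):
    # Knuth's error-free transformation: s is the rounded sum fl(a + b)
    # and e is the exact rounding error, so s + e == a + b exactly.
    s = a + b
    bb = s - a
    e = (a - (s - bb)) + (b - bb)
    return s, e

def relaxed_sum(values):
    # Illustrative sketch (not the paper's circuit): accumulate
    # optimistically, collect each addition's rounding error, and keep
    # re-summing the errors until none remain.
    pending = list(values)
    total = 0.0
    while pending:
        leftover = []
        for x in pending:
            total, e = two_sum(total, x)
            if e != 0.0:
                leftover.append(e)  # propagate the error to the next pass
        pending = leftover
    return total

# Non-associativity: the same three addends, two different groupings.
left = (1e16 + -1e16) + 1.0   # -> 1.0 (exact)
right = 1e16 + (-1e16 + 1.0)  # -> 0.0 (the 1.0 is lost to rounding)

# Error propagation recovers the correct sum regardless of order.
print(left, right, relaxed_sum([1e16, 1.0, -1e16]))
```

Running it shows `left` and `right` disagree, while `relaxed_sum` returns the correct value 1.0 for either ordering.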