Deterministic and Probabilistic Error Bounds for Floating Point Summation Algorithms
Abstract: We analyse the forward error in the floating point summation of real numbers, for algorithms that do not require recourse to higher precision or better hardware. We derive informative explicit expressions, and new deterministic and probabilistic bounds, for the errors in three classes of algorithms: general summation, shifted general summation, and compensated (sequential) summation. Our probabilistic bounds for general and shifted general summation hold to all orders. For compensated summation, we also present deterministic and probabilistic first and second order bounds, with a first order bound that differs from existing ones. Numerical experiments illustrate that the bounds are informative and that, among the three algorithm classes, compensated summation is generally the most accurate method.
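As a concrete illustration of the third algorithm class, the classical compensated (Kahan) sequential summation carries a correction term that recovers low-order bits lost in each addition. The sketch below is not taken from the paper; it shows the standard recurrence, compared against a naive running sum on inputs whose small terms a naive sum drops entirely.

```python
import math

def naive_sum(xs):
    """Plain sequential summation: each addition may lose low-order bits."""
    s = 0.0
    for x in xs:
        s += x
    return s

def kahan_sum(xs):
    """Compensated (Kahan) sequential summation."""
    s = 0.0  # running sum
    c = 0.0  # running compensation for lost low-order bits
    for x in xs:
        y = x - c        # subtract the previously lost part
        t = s + y        # add; low-order bits of y may be lost here
        c = (t - s) - y  # recover exactly what was lost
        s = t
    return s

# Terms of size ~1e-16 are below half an ulp of 1.0, so the naive
# sum absorbs none of them; the compensated sum accumulates them.
xs = [1.0] + [1e-16] * 4
print(naive_sum(xs))            # stays at 1.0
print(kahan_sum(xs))            # matches the correctly rounded sum
print(math.fsum(xs))            # exact rounded sum, for reference
```

Here `math.fsum` serves as a correctly rounded reference; the compensated sum agrees with it on this example, while the naive sum incurs the full forward error of the dropped terms.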