Interval Floating-Point Arithmetic
- Interval floating-point arithmetic is a computational paradigm that represents each value as an interval, rigorously enclosing the exact real number and accounting for rounding errors.
- It underpins reliable error analysis, automated verification, and robust software/hardware design by leveraging IEEE 754 reinterpretations and directed rounding techniques.
- Advanced methods such as abstract domains, probabilistic extensions, and hardware-supported instructions enhance precision, mitigate overestimation, and support formal verification.
Interval floating-point arithmetic is a computational paradigm in which each value is represented as an interval that is guaranteed to enclose the mathematically exact real value, capturing not only the result of the intended arithmetic operation but also any uncertainty arising from rounding, finite precision, or under-specified computation. Interval arithmetic forms the foundation for rigorous error analysis, automated verification of numerical algorithms, and the development of robust software and hardware systems subject to floating-point imprecision. This article surveys core concepts, key methodologies, theoretical frameworks, practical enhancements, and important recent advancements in interval floating-point arithmetic.
1. Principles of Interval Semantics and IEEE 754 Reinterpretation
Interval semantics provides a rigorous set-theoretic reinterpretation of IEEE 754 floating-point arithmetic. Each nonzero, finite floating-point number $x$ is treated as the "point interval" $[x, x]$. Special values are widened to cover all mathematically plausible reals: $+\infty$ maps to $[\Omega, +\infty)$, $-\infty$ to $(-\infty, -\Omega]$ (where $\Omega$ is the largest finite float), and zeros gain "fuzziness," e.g., $+0$ as $[0, \eta]$, $-0$ as $[-\eta, 0]$ for $\eta$ the smallest positive representable float (0810.4196).
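A minimal Python sketch of this reinterpretation (the function name `to_interval` and the choice of symbols $\Omega$ and $\eta$ are illustrative; the formalization in 0810.4196 may differ in detail):

```python
import math
import sys

OMEGA = sys.float_info.max   # largest finite double
ETA = 5e-324                 # smallest positive (subnormal) double

def to_interval(x: float) -> tuple:
    """Set-theoretic reading of a single IEEE 754 double."""
    if math.isinf(x):
        # An infinity stands for "overflowed past the finite range".
        return (OMEGA, math.inf) if x > 0 else (-math.inf, -OMEGA)
    if x == 0.0:
        # Signed zeros gain "fuzziness": anything that underflowed to zero.
        return (0.0, ETA) if math.copysign(1.0, x) > 0 else (-ETA, 0.0)
    return (x, x)  # every nonzero finite float is a point interval

print(to_interval(1.5))           # (1.5, 1.5)
print(to_interval(float("inf")))  # (1.7976931348623157e+308, inf)
print(to_interval(-0.0))          # (-5e-324, 0.0)
```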
Interval arithmetic then defines core binary operations $\diamond \in \{+, -, \times, /\}$ as
$$[a, b] \diamond [c, d] = \operatorname{hull}\{\, x \diamond y : x \in [a, b],\ y \in [c, d] \,\},$$
where $\operatorname{hull}$ denotes the interval hull, i.e., the smallest interval containing the set.
Crucially, hardware rounding modes (e.g., toward $+\infty$ for upper bounds and toward $-\infty$ for lower bounds) enable extracting exact interval bounds for an operation. Undefined IEEE 754 operations, such as $0 \times \infty$, $\infty - \infty$, $\infty/\infty$, and $0/0$, are thus assigned mathematically meaningful intervals ($[0, +\infty)$, $(-\infty, +\infty)$, $[0, +\infty)$, and $[0, +\infty)$, respectively), converting all arithmetic into total operations with set-valued results. This approach underpins formal verification by guaranteeing that each floating-point result encloses the underlying exact real value (0810.4196).
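The sketch below shows outward-rounded interval addition and multiplication. Portable Python exposes no directed-rounding control, so it widens each computed bound by one ulp with `math.nextafter` (Python 3.9+); this is strictly coarser than hardware rounding toward $\pm\infty$ but preserves the enclosure guarantee:

```python
import math

def iadd(a, b):
    """Interval addition; each bound is widened by one ulp outward in lieu
    of hardware directed rounding (conservative but portable)."""
    return (math.nextafter(a[0] + b[0], -math.inf),
            math.nextafter(a[1] + b[1], math.inf))

def imul(a, b):
    p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (math.nextafter(min(p), -math.inf),
            math.nextafter(max(p), math.inf))

x = (0.1, 0.1)     # point interval of the float nearest to 0.1
print(iadd(x, x))  # an enclosure of the exact sum, with nonzero width
```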
2. Dependency, Overestimation, and Advanced Abstract Domains
Naïve interval arithmetic, which treats each variable occurrence as independent, suffers from the dependency problem: repeated variables in a function expression can result in significant interval overestimation, undermining utility for nontrivial algorithms. To address this, partially relational abstract domains such as Floating-Point Slopes (FPS) expand the functional range using a first-order expansion:
$$f([x]) \subseteq f(x_0) + [S] \cdot ([x] - x_0),$$
where $[S]$ is an interval of slopes computed over $[x]$ (Chapoutot, 2010). When adapted to floating-point semantics, this expansion incorporates roundoff error by multiplying terms by $(1 + \varepsilon)$ and adding explicit additive errors $\delta$.
Within FPS, each abstract value comprises a main interval $[x]$ and a vector of slope intervals $([S_1], \dots, [S_n])$:
- $[x]$: interval enclosure for the variable,
- $[S_i]$: slope interval carrying the sensitivity to the $i$-th independent input.
This structure accurately models absorptions (domination of one operand by another), cancellations (partial or complete loss of significant digits), and the propagation of both relative and absolute roundoff errors. Theoretical soundness and practical sharpness are demonstrated via validation on embedded control system models and classic numerical kernels (Chapoutot, 2010).
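To make the slope idea concrete, here is a toy Python sketch over exact coefficients (all names are illustrative; a faithful FPS domain would additionally widen every operation by the relative and absolute roundoff terms above):

```python
from dataclasses import dataclass

def iadd(a, b): return (a[0] + b[0], a[1] + b[1])
def isub(a, b): return (a[0] - b[1], a[1] - b[0])
def imul(a, b):
    p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(p), max(p))

DEV = {}  # deviation interval [x] - x0 of each independent input

@dataclass
class SlopeVal:
    c: float   # value at the input midpoints x0
    s: dict    # slope interval per independent input

    @staticmethod
    def var(name, lo, hi):
        x0 = (lo + hi) / 2
        DEV[name] = (lo - x0, hi - x0)
        return SlopeVal(x0, {name: (1.0, 1.0)})

    def range(self):
        # Recover an enclosure: c + sum_k [S_k] * ([x_k] - x0_k).
        r = (self.c, self.c)
        for k, sl in self.s.items():
            r = iadd(r, imul(sl, DEV[k]))
        return r

    def __add__(self, o):
        z = (0.0, 0.0)
        keys = self.s.keys() | o.s.keys()
        return SlopeVal(self.c + o.c,
                        {k: iadd(self.s.get(k, z), o.s.get(k, z)) for k in keys})

    def __sub__(self, o):
        z = (0.0, 0.0)
        keys = self.s.keys() | o.s.keys()
        return SlopeVal(self.c - o.c,
                        {k: isub(self.s.get(k, z), o.s.get(k, z)) for k in keys})

x = SlopeVal.var("x", 1.0, 3.0)
print((x - x).range())  # (0.0, 0.0): the slope domain cancels the dependency
# Naive interval arithmetic evaluates [1,3] - [1,3] = [-2, 2] for the same query.
```

The slope vector tracks that both occurrences of `x` vary together, which is exactly the relational information naive intervals discard.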
3. Robust Extensions: Error Intervals, Movability Flags, and Input Search
Error intervals, movability flags, and input search are three orthogonal extensions that enhance practical interval arithmetic for floating-point verification (Flatt et al., 2021):
- Error intervals augment each numeric interval with a Boolean pair indicating whether all (guaranteed) or some (possible) elements of the interval violate preconditions or yield errors (e.g., domain violations like $\sqrt{x}$ with $x < 0$). This enables robust propagation and early detection of potential or definite failure.
- Movability flags annotate whether endpoints can be refined by subsequent increases in computational precision—formally, an interval $[a, b]$ computed at precision $p$ refines to a subinterval $[a', b'] \subseteq [a, b]$ at precision $p' > p$, unless an endpoint is marked "immovable" (e.g., via overflow to infinity).
- Input search partitions the input domain into axis-aligned hyperrectangles, allowing for efficient rejection of unsamplable or infeasible regions by exploiting computed error intervals and movability flags, thus focusing computational effort only on valid, informative regions.
These techniques significantly reduce indeterminate results and prevent infinite or futile recomputation loops. Comparative experiments demonstrate that substantially more challenging inputs are resolved, with far fewer indeterminate outcomes, relative to existing tools such as Mathematica (Flatt et al., 2021).
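A hedged sketch of the error-interval idea for a partial function such as square root (the class and field names are hypothetical, and rigorous outward rounding of the bounds is omitted for brevity):

```python
import math
from dataclasses import dataclass

@dataclass
class ErrInterval:
    lo: float
    hi: float
    err_possible: bool = False    # some inputs in the interval error
    err_guaranteed: bool = False  # every input in the interval errors

def isqrt(x: ErrInterval) -> ErrInterval:
    if x.hi < 0:  # the entire interval violates the sqrt precondition
        return ErrInterval(math.nan, math.nan, True, True)
    if x.lo < 0:  # only part of the interval violates it
        return ErrInterval(0.0, math.sqrt(x.hi),
                           err_possible=True, err_guaranteed=False)
    return ErrInterval(math.sqrt(x.lo), math.sqrt(x.hi),
                       x.err_possible, x.err_guaranteed)

print(isqrt(ErrInterval(-1.0, 4.0)))   # valid part [0, 2], error *possible*
print(isqrt(ErrInterval(-3.0, -1.0)))  # error *guaranteed*
```

Propagating both flags lets downstream operations distinguish "this path definitely fails" from "this path may fail for some inputs," which is what drives the input-search pruning described above.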
4. Rounding, Hardware, and Algorithmic Implementations
Interval arithmetic for floating-point values fundamentally relies on directed rounding, especially in IEEE 754-compliant hardware. Modern processors provide multiple rounding modes—toward $+\infty$, toward $-\infty$, and toward zero—allowing software or hardware libraries to compute lower and upper interval bounds efficiently (0810.4196).
Emerging research proposals for new instructions (e.g., FPADDRE) target direct computation of round-off errors in addition, markedly reducing double-double addition latency and boosting throughput (Dukhan et al., 2016). Designs employing such instructions can execute compensated, high-precision arithmetic (including compensated summation and polynomial evaluation) at substantially improved performance, enabling broader adoption of interval methods in scientific and engineering applications.
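FPADDRE is proposed hardware; absent such an instruction, the round-off error of a floating-point addition is recoverable in software with the classic TwoSum error-free transformation, shown below driving a compensated summation (a software analogue of what the instruction would accelerate):

```python
def two_sum(a: float, b: float):
    """Error-free transformation: a + b == s + err exactly (Knuth's TwoSum).
    An FPADDRE-style instruction would deliver err in a single operation."""
    s = a + b
    bb = s - a
    err = (a - (s - bb)) + (b - bb)
    return s, err

def compensated_sum(xs):
    s = c = 0.0
    for x in xs:
        s, e = two_sum(s, x)
        c += e  # accumulate the exact per-step round-off
    return s + c

data = [1e16, 1.0, -1e16, 1.0]
print(sum(data))              # 1.0: plain summation absorbs one of the 1.0s
print(compensated_sum(data))  # 2.0: the exact result
```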
On the software side, libraries such as Arb implement arbitrary-precision interval arithmetic using a midpoint-radius ("ball arithmetic") representation: each value is a ball $[m \pm r]$, where $m$ is the high-precision midpoint and $r$ is a rapidly computable upper bound on the error (Johansson, 2016). This representation supports advanced features including polynomial and special function evaluation, adaptive error tracking, and asymptotically large dynamic range at performance competitive with non-interval floating-point libraries.
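A toy midpoint-radius sketch in double precision conveys the flavor (Arb itself stores an arbitrary-precision midpoint with a low-precision radius and rounds the radius upward; the one-ulp inflation below is only an illustrative stand-in for that bookkeeping):

```python
import math
from dataclasses import dataclass

@dataclass
class Ball:
    mid: float  # midpoint (arbitrary precision in Arb; a double here)
    rad: float  # upper bound on the error

    def __add__(self, o):
        m = self.mid + o.mid
        # Propagated radii, plus one ulp for rounding the new midpoint
        # (rounding of the radius itself is ignored in this toy).
        return Ball(m, self.rad + o.rad + math.ulp(m))

    def __mul__(self, o):
        m = self.mid * o.mid
        r = (abs(self.mid) * o.rad + abs(o.mid) * self.rad
             + self.rad * o.rad + math.ulp(m))
        return Ball(m, r)

x = Ball(0.1, math.ulp(0.1))  # "0.1" together with its representation error
print(x + x)                  # mid = 0.2 with a radius of a few ulps
```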
5. Probabilistic and Stochastic Extensions
While classical interval arithmetic provides rigorous deterministic enclosures, stochastic and probabilistic frameworks offer complementary approaches:
- Stochastic arithmetic, as exemplified by MCA and CESTAC, treats outputs as random variables subject to perturbations modeling floating-point error. The number of "significant" digits can be estimated statistically, with frameworks providing formulas for confidence intervals under assumptions such as normality, or via Bernoulli modeling in the general case (Sohier et al., 2018); a minimal Monte Carlo sketch follows this list. These methods scale practically to large scientific codes, avoiding the gross overestimation typical of purely interval-based error bounds.
- Probabilistic analysis propagates explicit error distributions through program execution, supporting compositional calculation of the probability that the cumulative rounding error remains within a fixed interval. The density function of the error can be computed exactly at low precision or approximated at high precision via a universal "typical" distribution (Dahlqvist et al., 2019). This enables far tighter bounds than worst-case deterministic intervals, quantifying rare events such as overflow probability or unreachably wide error bands.
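A minimal Monte Carlo sketch in the spirit of MCA/CESTAC, assuming the CESTAC-style significant-digit estimate $\log_{10}(|\mu|/\sigma)$ (the perturbation model and constants are simplified for illustration):

```python
import math
import random
import statistics

def perturb(x: float, t: int = 53) -> float:
    """Inject a random relative perturbation of magnitude ~2**-t after an
    operation -- a crude stand-in for MCA's randomized rounding."""
    return x * (1.0 + random.uniform(-1.0, 1.0) * 2.0 ** -t)

def unstable(x: float) -> float:
    # (1 + x) - 1 for tiny x: catastrophic cancellation
    return perturb(perturb(1.0 + x) - 1.0)

samples = [unstable(1e-15) for _ in range(1000)]
mu = statistics.mean(samples)
sigma = statistics.stdev(samples)
# CESTAC-style estimate of the number of significant decimal digits:
digits = math.log10(abs(mu) / sigma) if sigma > 0 else float("inf")
print(f"mean={mu:.3e}  std={sigma:.3e}  ~{digits:.1f} significant digits")
```

On this example the estimate comes out near one significant digit, flagging the cancellation that a single deterministic run would silently hide.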
6. Theoretical Frameworks and Alternative Number Formats
Research into alternative number formats—such as Unums—grounds interval arithmetic directly in the machine number format itself rather than as an auxiliary layer (Hunhold, 2017). In the Unum system, each representable value is an interval ("Flake") over the projectively extended reals, with basic operations formulated as the "blurred" set-theoretic sum or product over a predefined lattice. This framework guarantees interval enclosures for each operation and circumvents undefined or exceptional values by construction. However, practical challenges include combinatorially scaling lookup tables, the dependency problem, "sticking" and "creeping" phenomena in numerically sensitive computations, as well as complexity in hardware implementation for high bit counts.
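A toy "blurred" lattice addition conveys the mechanism (the lattice and all names are illustrative, not the actual unum encoding): every value is an interval with endpoints on a fixed lattice, and each operation returns the smallest lattice interval containing the exact set result.

```python
import bisect

LATTICE = [0.0, 0.5, 1.0, 2.0, 4.0, float("inf")]

def outward(lo, hi):
    """Round an exact interval outward to lattice endpoints."""
    i = bisect.bisect_right(LATTICE, lo) - 1  # largest lattice point <= lo
    j = bisect.bisect_left(LATTICE, hi)       # smallest lattice point >= hi
    return LATTICE[max(i, 0)], LATTICE[min(j, len(LATTICE) - 1)]

def blurred_add(a, b):
    return outward(a[0] + b[0], a[1] + b[1])

x = (0.5, 1.0)            # a Flake-like value: "somewhere between 0.5 and 1"
print(blurred_add(x, x))  # (1.0, 2.0): the lattice hull of the exact sum
```

Iterating such operations shows the "creeping" the text mentions: results snap outward to ever-coarser lattice intervals.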
7. Applications and Case Studies
Interval floating-point arithmetic enables foundational advances in formal verification, robust numerical software, and privacy-preserving computation:
- Formally verified model-checking for MDPs: Recent work formalizes and verifies interval iteration algorithms in interactive theorem provers such as Isabelle/HOL, refining real-number semantics to rigorous IEEE 754 floating-point implementations with directed rounding. The synthesized LLVM code provides both competitive performance and formal guarantees, supporting soundness and convergence in probabilistic model checking for quantitative verification tasks (Kohlen et al., 17 Jan 2025); a schematic sketch of interval iteration follows this list.
- Privacy-preserving mechanisms: Interval refining is employed to synthesize differentially private mechanisms (e.g., the Laplace mechanism) robust to precision-based attacks. By iteratively narrowing the sampling interval until every value it contains maps to the same floating-point representation, this approach guarantees that the output's distribution is arbitrarily close (in total variation) to that of sampling and then rounding, thus preserving privacy guarantees and avoiding floating-point artifacts (such as "holes" in the output distribution) (Haney et al., 2022).
- Constraint-based testing and hybrid real/interval static analysis: Modern tools combine static analysis (interval domains, abstract interpretation) with constraint programming and interval solvers to search for concrete test cases that witness suspicious or catastrophic floating-point behaviors—e.g., unsafe program paths triggered by over-approximation in Heron's formula or critical system states in embedded controllers (Collavizza et al., 2015); a Heron cancellation example is sketched after this list.
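For the first item, here is a schematic interval (value) iteration on a toy Markov chain, maintaining converging lower and upper bounds on a reachability probability. The verified implementation of Kohlen et al. additionally applies directed rounding so the floating-point iterates still bracket the exact real-valued fixpoint; this sketch omits that step.

```python
# Toy Markov chain: state 0 is the goal, state 2 a sink; from state 1 we
# reach the goal w.p. 0.4, stay w.p. 0.5, and fall into the sink w.p. 0.1.
P = {1: [(0, 0.4), (1, 0.5), (2, 0.1)]}

def interval_iteration(eps=1e-6):
    lo = {0: 1.0, 1: 0.0, 2: 0.0}  # under-approximation of reach probability
    hi = {0: 1.0, 1: 1.0, 2: 0.0}  # over-approximation
    while max(hi[s] - lo[s] for s in lo) > eps:
        lo = {s: sum(p * lo[t] for t, p in P[s]) if s in P else lo[s] for s in lo}
        hi = {s: sum(p * hi[t] for t, p in P[s]) if s in P else hi[s] for s in hi}
    return lo, hi

lo, hi = interval_iteration()
print(lo[1], hi[1])  # both ~0.8, bracketing the exact solution of x = 0.4 + 0.5x
```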
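And for the last item, inputs of the near-degenerate kind such searches hunt for: a "needle" triangle on which the naive Heron formula loses accuracy to cancellation, contrasted with Kahan's stable rearrangement (both formulas are classical; the specific inputs below are an illustrative witness, not one from the cited paper):

```python
import math

def heron_naive(a, b, c):
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

def heron_kahan(a, b, c):
    # Kahan's stable rearrangement; the parentheses are load-bearing.
    a, b, c = sorted((a, b, c), reverse=True)  # ensure a >= b >= c
    return 0.25 * math.sqrt((a + (b + c)) * (c - (a - b))
                            * (c + (a - b)) * (a + (b - c)))

# Near-degenerate "needle" triangle: b + c barely exceeds a, so the factor
# (s - a) in the naive formula suffers catastrophic cancellation.
a, b, c = 1.0, 0.7, 0.3 + 1e-12
print(heron_naive(a, b, c))  # typically disagrees after a few digits...
print(heron_kahan(a, b, c))  # ...with the stable result
```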
These applications illustrate the dual role of interval arithmetic in both providing guarantees (enclosures, safety) and supporting the practical engineering of reliable, reproducible, and verifiable numerical software.
Interval floating-point arithmetic continues to evolve through advances in mathematical modeling, hardware support, abstract domains, probabilistic frameworks, and robust algorithmic enhancements. Its rigor is essential in contexts demanding reliable numerical results under finite-precision constraints and is foundational to modern techniques for error analysis, verification, and the synthesis of trustworthy computational methods.