Finite Precision Residual Behavior in CG/Steepest Descent

Determine explicit conditions on a symmetric positive definite matrix A under which, in finite precision arithmetic, the recursively computed residual vectors r_k in conjugate gradient or steepest descent drop below machine precision; or construct counterexamples where they do not.

Background

In practical finite precision implementations of short-recurrence methods, the recursively maintained residuals can behave quite differently from the true residuals b - Ax_k, sometimes appearing to drop well below machine precision even when the true residuals stagnate.

While this phenomenon has been used in analyses to infer true residual accuracy, it lacks a rigorous proof characterizing when it occurs.

References

It is pointed out that these vectors often shrink well below machine precision, and this is used to prove results about the size of the true residuals. But this is not proved. Taking the above recurrences to represent the CG or steepest descent algorithm, determine conditions on $A$ that will ensure that the norms of the vectors $r_k$ drop below machine precision in finite precision arithmetic, or show some examples where they do not.

Linear Systems and Eigenvalue Problems: Open Questions from a Simons Workshop  (2602.05394 - Amsel et al., 5 Feb 2026) in Subsection "Symmetric Krylov methods in finite precision arithmetic" (Section 2)