Fixed-Point Iterations: Theory & Acceleration
- Fixed-point iterations are iterative methods that find a point y* satisfying y* = T(y*) by successively applying the operator T.
- They are fundamental in numerical analysis, optimization, and nonlinear equation solving, with techniques including Picard, Krasnosel'skiĭ–Mann, and Halpern iterations.
- Recent advances have established optimal acceleration mechanisms with matching lower bounds for various operator classes, improving both theoretical and practical convergence rates.
A fixed-point iteration (FPI) is an iterative computational scheme designed to find a point y* satisfying y* = T(y*) for a given operator T. FPI methodologies are fundamental to numerical analysis, optimization, nonlinear equation solving, and operator splitting methods. Recent work has rigorously characterized the exact optimal acceleration rates achievable in this general framework, delivering both tight algorithmic mechanisms and matching lower bounds for contractive, nonexpansive, and generalized operator classes (Park et al., 2022). These advances unify and extend both the theoretical underpinnings and the practical methodology of FPI across a wide range of modern computational applications.
1. Operator Classes and Standard Fixed-Point Schemes
Fixed-point iteration seeks a point y* = T(y*) by generating a sequence y^0, y^1, y^2, …, typically via y^{k+1} = T(y^k) or a variant thereof. The convergence behavior and attainable complexity rates depend on the properties of T:
- Contractive Mapping: T is γ-contractive for γ ∈ [0, 1) iff ‖T(x) − T(y)‖ ≤ γ‖x − y‖ for all x, y.
- Nonexpansive Mapping: T is nonexpansive if ‖T(x) − T(y)‖ ≤ ‖x − y‖ for all x, y.
- Averaged Operator: T is θ-averaged (θ ∈ (0, 1)) if T = (1 − θ)I + θR for some nonexpansive R.
Connected to this framework is the class of maximal monotone operator inclusions 0 ∈ A(x), as many optimization and saddle-point problems reduce to fixed-point form through the resolvent J_A = (I + A)^{-1}: x solves 0 ∈ A(x) iff x = J_A(x).
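To make this reduction concrete, here is a minimal sketch (our own illustration, not from the cited work): a scalar monotone operator A, its resolvent J_A evaluated by bisection, and a plain proximal-point loop, i.e. Picard iteration on J_A, converging to the zero of A.

```python
def resolvent(A, x, lo=-100.0, hi=100.0, tol=1e-12):
    """Evaluate u = J_A(x) = (I + A)^(-1)(x), i.e. solve u + A(u) = x,
    by bisection; valid for continuous monotone A on the real line."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid + A(mid) < x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

A = lambda u: u ** 3  # monotone on R; the inclusion 0 in A(x) has solution x = 0

# x solves 0 in A(x) iff x = J_A(x), so Picard iteration on the resolvent
# (the proximal point method) drives x toward the zero of A.
x = 2.0
for _ in range(50):
    x = resolvent(A, x)
```

The bisection stands in for whatever resolvent evaluation (often a prox operator in closed form) a real application provides.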
Classical Iteration Schemes:
- Picard Iteration: y^{k+1} = T(y^k).
- Krasnosel'skiĭ–Mann (KM): y^{k+1} = (1 − λ_k) y^k + λ_k T(y^k), with λ_k ∈ (0, 1).
- Halpern Iteration: y^{k+1} = λ_{k+1} y^0 + (1 − λ_{k+1}) T(y^k), with anchor coefficients λ_k → 0, e.g. λ_k = 1/(k + 1).
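The three classical schemes fit in a few lines each; the sketch below is our own illustration on a toy 1D contractive map T(y) = 0.5y + 1 with fixed point y* = 2 (function names and the example map are not from the cited work):

```python
def picard(T, y0, n):
    """Picard: y_{k+1} = T(y_k)."""
    y = y0
    for _ in range(n):
        y = T(y)
    return y

def km(T, y0, n, lam=0.5):
    """Krasnosel'skii-Mann: y_{k+1} = (1 - lam) y_k + lam T(y_k)."""
    y = y0
    for _ in range(n):
        y = (1 - lam) * y + lam * T(y)
    return y

def halpern(T, y0, n):
    """Halpern: y_{k+1} = lam y_0 + (1 - lam) T(y_k) with lam = 1/(k + 2),
    anchoring every iterate back toward the starting point y_0."""
    y = y0
    for k in range(n):
        lam = 1.0 / (k + 2)
        y = lam * y0 + (1 - lam) * T(y)
    return y

T = lambda y: 0.5 * y + 1.0  # 0.5-contractive, fixed point y* = 2
```

On contractive maps all three converge; the anchoring in Halpern trades some speed here for robustness in the nonexpansive case, where plain Picard can fail to converge at all.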
2. Optimal Acceleration Mechanisms
The exact optimal acceleration scheme for FPI depends on operator properties (Park et al., 2022):
- Optimal Contractive Halpern (OC-Halpern): For γ-contractive T (0 ≤ γ < 1), y^{k+1} = λ_{k+1} y^0 + (1 − λ_{k+1}) T(y^k), with anchor coefficients λ_k = (∑_{j=0}^{k} γ^{-2j})^{-1}, which recover λ_k = 1/(k + 1) as γ → 1.
- Optimal Strongly-Monotone Proximal Point (OS-PPM): For maximal μ-strongly monotone A (μ ≥ 0), an anchored proximal-point iteration built on the resolvent J_A = (I + A)^{-1}, with anchoring coefficients determined by μ through the correspondence γ = 1/(1 + 2μ); the case μ = 0 yields an optimal accelerated proximal point method.
The two are equivalent via a change of variable mapping fixed-point residuals to resolvent-based residuals (see Lemma 2.2 of the cited work).
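In code, the anchoring mechanism is a one-line change to the classical Halpern loop. The sketch below assumes the coefficient choice λ_k = (∑_{j=0}^{k} γ^{-2j})^{-1}; the toy contractive map is our own illustration:

```python
import math

def oc_halpern(T, y0, n, gamma):
    """OC-Halpern for a gamma-contractive T:
    y_{k+1} = lam_{k+1} y_0 + (1 - lam_{k+1}) T(y_k),
    with anchor coefficients lam_k = 1 / sum_{j=0}^{k} gamma^(-2j)."""
    y, s = y0, 1.0  # s tracks sum_{j=0}^{k} gamma^(-2j), starting at k = 0
    for k in range(n):
        s += gamma ** (-2 * (k + 1))
        y = (1.0 / s) * y0 + (1 - 1.0 / s) * T(y)
    return y

gamma = 0.9
T = lambda y: gamma * math.sin(y)  # gamma-contractive on R, fixed point y* = 0
y = oc_halpern(T, 3.0, 100, gamma)  # fixed-point residual |y - T(y)| is tiny
```

Since λ_k decays geometrically for γ < 1, the anchor's influence vanishes quickly and the iteration behaves like Picard with an optimally tuned transient.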
3. Complexity Bounds and Exact Optimality
The main theoretical advancement is the establishment of both upper and lower bounds, proving that the introduced acceleration mechanisms for FPI are exactly optimal:
- Contractive Case (γ ∈ [0, 1)): OC-Halpern attains ‖y^N − T(y^N)‖ ≤ (1 + γ) (∑_{k=0}^{N} γ^{-k})^{-1} ‖y^0 − y*‖ = O(γ^N), which improves the Picard residual bound (1 + γ) γ^N ‖y^0 − y*‖ and degrades gracefully as γ → 1.
- Strongly Monotone Case (μ > 0): OS-PPM attains ‖Ã(y^N)‖ ≤ (∑_{k=0}^{N} (1 + 2μ)^{k})^{-1} ‖y^0 − y*‖ on the resolvent residual Ã = I − J_A, i.e. geometric convergence with ratio (1 + 2μ)^{-1}.
- Nonexpansive/Monotone Case: For γ = 1 (equivalently μ = 0), the anchor coefficients reduce to λ_k = 1/(k + 1), recovering the Halpern scheme, which achieves the optimal rate ‖y^N − T(y^N)‖² ≤ 4‖y^0 − y*‖²/(N + 1)² = O(1/N²) for squared residuals.
The matching lower bounds are constructed via the "span condition" together with resisting-oracle techniques: for any deterministic (possibly adaptive) algorithm, there exists a worst-case operator instance on which these rates cannot be improved.
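The nonexpansive rate is easy to probe numerically. The sketch below (our own example) uses a 90° rotation, a nonexpansive map on which Picard iteration cycles forever, and shows that anchoring drives the residual down at an O(1/N) rate while Picard's residual never decreases:

```python
import math

def T(y):  # 90-degree rotation about the origin: nonexpansive, fixed point (0, 0)
    return (-y[1], y[0])

def residual(y):
    ty = T(y)
    return math.hypot(y[0] - ty[0], y[1] - ty[1])

y0 = (1.0, 0.0)
N = 200

# Picard cycles with period 4, so its residual is stuck at sqrt(2).
y = y0
for _ in range(N):
    y = T(y)
picard_res = residual(y)

# Halpern anchoring suppresses the cycling: residual ~ 2 ||y0|| / (N + 1).
y = y0
for k in range(N):
    lam = 1.0 / (k + 2)
    ty = T(y)
    y = (lam * y0[0] + (1 - lam) * ty[0], lam * y0[1] + (1 - lam) * ty[1])
halpern_res = residual(y)
```

This rotation is exactly the kind of cycling behavior the toy 2D experiments in Section 6 are about.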
4. Acceleration under Hölder-Type Growth and Restarting
For operators that are not strictly contractive but satisfy a uniform monotonicity condition of Hölder type, e.g. ⟨A(x) − A(y), x − y⟩ ≥ μ ‖x − y‖^ρ for all x, y, with μ > 0 and exponent ρ ≥ 2, accelerated rates improving over non-accelerated (e.g., Mann) rates can still be achieved. The mechanism is a restart schedule: the accelerated method is run in epochs of exponentially increasing length, each epoch re-anchored at the output of the previous one (see Theorem 5.3).
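The restart mechanism itself is a thin wrapper. Below is a minimal sketch (our own illustration, with arbitrary epoch constants, and a plain Halpern loop standing in for the accelerated inner method) in which every epoch is re-anchored at the previous epoch's output:

```python
def halpern_epoch(T, y0, n):
    """One anchored epoch: the anchor is this epoch's own starting point y0."""
    y = y0
    for k in range(n):
        lam = 1.0 / (k + 2)
        y = lam * y0 + (1 - lam) * T(y)
    return y

def restarted(T, y0, n_epochs, base_len=8, growth=2):
    """Restart schedule: epochs of exponentially increasing length,
    each warm-started (and re-anchored) at the previous epoch's output."""
    y, length = y0, base_len
    for _ in range(n_epochs):
        y = halpern_epoch(T, y, length)
        length *= growth
    return y

T = lambda y: 0.5 * y  # toy contractive map with fixed point 0
y = restarted(T, 1.0, 4)  # epochs of length 8, 16, 32, 64
```

Re-anchoring matters because the anchored method's error is measured from its anchor; restarting repeatedly converts residual progress into a fresh, smaller anchor distance.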
5. Algorithmic Formulation and Mathematical Summary
The table below organizes main algorithmic components and rates:
| Operator | Iteration | Optimal Rate on Residual |
|---|---|---|
| γ-contractive (γ < 1) | OC-Halpern | ‖y^N − T(y^N)‖ ≤ (1 + γ)(∑_{k=0}^{N} γ^{-k})^{-1}‖y^0 − y*‖ = O(γ^N) |
| μ-strongly monotone | OS-PPM | ‖Ã(y^N)‖ = O((1 + 2μ)^{-N}) |
| Nonexpansive (γ = 1) | Halpern (λ_k = 1/(k + 1)) | ‖y^N − T(y^N)‖ ≤ 2‖y^0 − y*‖/(N + 1) |
| Hölder-type monotone | Restarted OS-PPM (exponentially growing epochs) | accelerated rate of Theorem 5.3 |
6. Empirical Validation and Applications
Comprehensive experiments validate the theoretical rates and demonstrate strong practical improvements in a range of application domains:
- CT Imaging (TV-regularized, PDHG): Restarted OC-Halpern outperforms both standard PDHG and PDHG with basic Halpern acceleration, with faster reduction of fixed-point residuals and objective value.
- Optimal Transport (Wasserstein-1): When the problem is cast as a primal-dual FPI, OC-Halpern acceleration yields consistently faster functional and residual convergence.
- Decentralized Compressed Sensing (PG-EXTRA): In distributed signal recovery, both the restarted and non-restarted optimal schemes outperform baselines in convergence of solution distance and residuals.
Toy 2D examples confirm that the cycling behavior exhibited by traditional FPI methods is suppressed by the newly proposed anchored algorithms.
7. Impact and Broader Implications
This body of work closes the complexity gap for general fixed-point problems, providing both a practical, implementable optimal algorithm (OC-Halpern anchoring) and a fundamental theory (matching complexity lower bounds) for accelerated FPI. The analyses are robust across operator classes and directly applicable to problems in convex optimization, imaging science, signal processing, and large-scale distributed systems.
Beyond theory, the methods are simple, explicit, and can serve as universally optimal "drop-in" accelerations for a wide class of operator-splitting and primal-dual algorithms, immediately improving computational efficiency in real-world setups.
References: Key results and the full account of the acceleration mechanisms, convergence proofs, lower bound constructions, and experimental validations appear in (Park et al., 2022), with algorithms detailed in Section 3, rates and lower bounds in Theorems and Corollaries of Sections 3 and 4, and application evidence in Section 6.