Krasnoselskii-Mann Iteration
- Krasnoselskii-Mann Iteration is an iterative method that computes fixed points of nonexpansive operators via a weighted blend of the current iterate and its operator evaluation.
- The algorithm achieves convergence under divergent-series step-size conditions and has been extended to inertial, stochastic, and accelerated schemes that retain convergence guarantees under perturbation.
- Its practical applications span convex optimization, signal processing, and reinforcement learning, offering reliable solutions even amid perturbations and adaptive updates.
The Krasnoselskii-Mann (KM) iteration is a foundational iterative scheme for finding fixed points of nonexpansive operators in linear and nonlinear settings. It underpins a wide range of algorithms in monotone operator theory, convex optimization, game theory, stochastic approximation, and reinforcement learning. KM-type schemes blend the operator evaluation with the current iterate through relaxation, admit inertial variants, and are robust to perturbations, variable step sizes, and stochastic noise. Their convergence properties, rate bounds, and generalizations have been the subject of intensive analysis, leading to sharp characterizations in Hilbert, Banach, CAT(0), and even general normed spaces.
1. Definition, Origin, and Operator-Theoretic Formulation
The Krasnoselskii-Mann iteration is defined by
$$x_{n+1} = (1-\alpha_n)\,x_n + \alpha_n\,T x_n,$$
where $T: C \to C$ is a nonexpansive operator ($\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in C$), $C$ is a closed convex subset of what is typically a Hilbert or Banach space, and $(\alpha_n)_{n \ge 0} \subset [0,1]$ is a sequence of relaxation parameters.
Classically, convergence is guaranteed if $\sum_n \alpha_n(1-\alpha_n) = \infty$, a condition ensuring the scheme does not stagnate (Cominetti et al., 2012). Equivalently, the KM iteration can be interpreted in terms of averaged operators: $S$ is $\alpha$-averaged if $S = (1-\alpha)\,\mathrm{Id} + \alpha R$ for a nonexpansive $R$ and $\alpha \in (0,1)$. The iteration generalizes the Banach-Picard, Krasnoselskiĭ, and Mann schemes, and subsumes the Halpern iteration as a limiting case (Bot et al., 2022).
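A minimal sketch of the scheme, assuming the operator is supplied as a Python callable on $\mathbb{R}^d$; the rotation example and the constant relaxation schedule are illustrative choices, not taken from the cited works:

```python
import numpy as np

# Sketch of the Krasnoselskii-Mann iteration for a nonexpansive map T on R^d.
def km_iteration(T, x0, alphas, tol=1e-10):
    """x_{n+1} = (1 - a_n) x_n + a_n T(x_n); returns last iterate and residuals."""
    x = np.asarray(x0, dtype=float)
    residuals = []
    for a in alphas:
        Tx = T(x)
        residuals.append(np.linalg.norm(x - Tx))   # fixed-point residual ||x_n - T x_n||
        x = (1.0 - a) * x + a * Tx                 # relaxed (averaged) update
        if residuals[-1] < tol:
            break
    return x, residuals

# Example: a plane rotation, a linear nonexpansive (in fact isometric) map whose
# unique fixed point is the origin; constant relaxation a_n = 1/2 (an assumption).
theta = 0.5
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
x_star, res = km_iteration(lambda x: R @ x, x0=[1.0, 0.0], alphas=[0.5] * 2000)
print(x_star, res[-1])   # expected: close to (0, 0) with a tiny residual
```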
For linear operators $T$, and in multi-agent systems and game theory, the KM iteration connects fundamentally to the concept of strict pseudocontractiveness, which underpins necessary and sufficient conditions for convergence (Belgioioso et al., 2018).
2. Convergence Theorems and Rate Analysis
Weak and Strong Convergence
KM iterations converge weakly to a fixed point when $T$ is nonexpansive with $\mathrm{Fix}(T) \neq \emptyset$ and the divergent-series step-size condition $\sum_n \alpha_n(1-\alpha_n) = \infty$ holds (Cominetti et al., 2012). For linear nonexpansive $T$, this iteration converges \emph{strongly} (in norm) to the metric projection of the initial point $x_0$ onto $\mathrm{Fix}(T)$, requiring only the divergent-series condition, without uniform positive lower/upper bounds on $\alpha_n$ (Bartz et al., 28 Dec 2025). This generalizes the Baillon–Bruck–Reich theorem and later results by Bauschke–Combettes.
In uniformly convex Banach spaces, asymptotic regularity and convergence rates can be quantified using "proof mining" techniques; for constant relaxation parameter $\alpha_n \equiv \alpha$, quadratic (in $1/\varepsilon$) rates of asymptotic regularity are attainable in Hilbert spaces and under suitable regularity conditions (Firmino et al., 16 Jan 2025).
Rate Bounds and Optimality
The sharp universal upper bound for the fixed-point residual is
$$\|x_n - T x_n\| \;\le\; \frac{\operatorname{diam}(C)}{\sqrt{\pi \sum_{k=1}^{n} \alpha_k (1-\alpha_k)}},$$
and this $O(1/\sqrt{n})$ rate is optimal for general normed spaces (Cominetti et al., 2012, Contreras et al., 2021), with Halpern-type iterations attaining $O(1/n)$, which KM cannot generally match. The residual decay rates for various schemes are summarized below.
| Scheme | Typical Rate | Source(s) |
|---|---|---|
| Classical KM | $O(1/\sqrt{n})$ | (Cominetti et al., 2012, Contreras et al., 2021) |
| Halpern (linear/firm) | $O(1/n)$ | (Contreras et al., 2021) |
| Generalized stochastic | Linear & quadratic (regime-dependent) | (Pischke et al., 2024) |
| Fast/Nesterov-KM | $o(1/n)$ (momentum, accelerated) | (Bot et al., 2022, He et al., 28 Oct 2025) |
| AdaGrad-regret-KM | Data-adaptive (regret-based) | (Kwon, 25 Sep 2025) |
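As a hedged numerical illustration of the table, the sketch below compares the residual decay of classical KM and the Halpern iteration on a linear nonexpansive operator; the block-diagonal stack of plane rotations and the schedules are assumptions chosen only for the demo:

```python
import numpy as np

# Build a linear nonexpansive operator as a block-diagonal stack of plane rotations.
def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

angles = [np.pi / (j + 1) for j in range(200)]           # many distinct rotation angles
T_mat = np.zeros((2 * len(angles), 2 * len(angles)))
for j, t in enumerate(angles):
    T_mat[2*j:2*j+2, 2*j:2*j+2] = rotation(t)
T = lambda x: T_mat @ x
x0 = np.ones(T_mat.shape[0]) / np.sqrt(T_mat.shape[0])   # unit initial point

def residual_after(update, n):
    x = x0.copy()
    for k in range(n):
        x = update(x, k)
    return np.linalg.norm(x - T(x))

km_step      = lambda x, k: 0.5 * x + 0.5 * T(x)                       # a_n = 1/2
halpern_step = lambda x, k: x0 / (k + 2) + (1 - 1.0 / (k + 2)) * T(x)  # anchor to x0

for n in (10, 100, 1000):
    # Worst-case guarantees: O(1/sqrt(n)) for KM, O(1/n) for Halpern;
    # actual decay on this particular instance may be faster.
    print(n, residual_after(km_step, n), residual_after(halpern_step, n))
```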
3. Extensions: Inertia, Perturbation, and Generalized Schemes
Inertial Krasnoselskii-Mann (IKM)
IKM iterations inject momentum: $y_n = x_n + \beta_n (x_n - x_{n-1})$, $x_{n+1} = (1-\alpha_n)\,y_n + \alpha_n T y_n$, with $\beta_n$ controlling inertia (Cui et al., 2019, Maulén et al., 2022). Provided the inertial and relaxation sequences are suitably controlled and error terms are summable, weak (and, under quasi-contractive maps, strong/linear) convergence is established, along with nonasymptotic bounds on the best residual (Cui et al., 2019).
The practical advantage is acceleration, observable in primal-dual splitting and multi-operator monotone inclusions. Inertia allows faster empirical convergence at modest risk of divergence, necessitating careful parameter selection (Maulén et al., 2022).
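A hedged sketch of one common IKM update (the extrapolation-then-relaxation form above); the constant inertia and relaxation values are assumptions for illustration only:

```python
import numpy as np

# Inertial KM: y_n = x_n + beta (x_n - x_{n-1}), x_{n+1} = (1 - alpha) y_n + alpha T(y_n).
def inertial_km(T, x0, n_iter, alpha=0.5, beta=0.3):
    x_prev = np.asarray(x0, dtype=float)
    x = x_prev.copy()
    residuals = []
    for _ in range(n_iter):
        y = x + beta * (x - x_prev)                    # inertial extrapolation
        Ty = T(y)
        x_prev, x = x, (1 - alpha) * y + alpha * Ty    # relaxed KM step at y
        residuals.append(np.linalg.norm(y - Ty))       # residual at the extrapolated point
    return x, residuals
```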
Stochastic and Inexact Iterations
Noise-resilient versions (with martingale-difference or arbitrary perturbations) admit almost sure convergence and explicit nonasymptotic residual bounds. Under bounded variance or summable errors, rates mirror deterministic decay, with minor log or step-size corrections (Bravo et al., 2017, Bravo et al., 2022, Sababe et al., 2 Jun 2025).
The general proof architecture for stochastic variants relies on the Robbins–Siegmund supermartingale lemma and Fejér-type monotonicity, and it extends to adaptive Bregman geometries and heavy-tailed noise models (Sababe et al., 2 Jun 2025).
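A minimal sketch of an inexact/stochastic KM step under an assumed zero-mean Gaussian oracle-noise model and a diminishing relaxation schedule; both are illustrative assumptions, not the exact settings of the cited works:

```python
import numpy as np

# Stochastic KM: the operator is only available through a noisy oracle T(x) + xi_n.
def stochastic_km(T, x0, n_iter, noise_std=0.01, rng=None):
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    for n in range(1, n_iter + 1):
        alpha = (n + 1) ** (-0.75)        # sum(alpha_n) diverges, sum(alpha_n^2) converges
        noisy_Tx = T(x) + noise_std * rng.standard_normal(x.shape)  # inexact evaluation
        x = (1 - alpha) * x + alpha * noisy_Tx                      # relaxed update
    return x
```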
4. Advanced Schemes and Algorithmic Accelerations
Nesterov-Type and Adaptive-Momentum Variants
Momentum-accelerated KM algorithms, such as Fast KM and TKMA, utilize Nesterov's extrapolation or adaptive local geometry, blending information from operator (Picard) steps and momentum steps, with analytically derived or geometrically motivated momentum parameters (Bot et al., 2022, He et al., 28 Oct 2025). The resulting schemes achieve $o(1/n)$ or $O(1/n)$ rates on iterate differences and often outperform classical and Halpern algorithms in image denoising and matrix completion applications (He et al., 28 Oct 2025).
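A hedged sketch of the general idea, applying a KM step at a Nesterov-style extrapolated point; the momentum schedule $n/(n+3)$ and the constant relaxation are assumptions, and this is not the exact Fast KM or TKMA update of the cited papers:

```python
import numpy as np

# Nesterov-style momentum KM: extrapolate with an increasing momentum coefficient,
# then take a relaxed KM step at the extrapolated point.
def fast_km(T, x0, n_iter, alpha=0.5):
    x_prev = np.asarray(x0, dtype=float)
    x = x_prev.copy()
    for n in range(1, n_iter + 1):
        gamma = n / (n + 3.0)                             # Nesterov-like momentum schedule
        y = x + gamma * (x - x_prev)                      # extrapolation (momentum) step
        x_prev, x = x, (1 - alpha) * y + alpha * T(y)     # relaxed KM step at y
    return x
```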
Tikhonov Regularization and Forward-Backward Splitting
Tikhonov regularization augments KM with shrinking steps: $x_{n+1} = (1-\lambda_n)\,\beta_n x_n + \lambda_n T(\beta_n x_n)$, where $\beta_n$ and $\lambda_n$ control regularization and relaxation, respectively (Bot et al., 2019). This yields strong convergence to the minimal-norm solution for countable families of operators and in monotone inclusion settings, especially when coupled with variable step sizes (forward-backward algorithms).
The framework robustly accommodates errors and variable steps, directly translating into accelerated splits for convex optimization and signal processing (Bot et al., 2019).
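A hedged sketch of a Tikhonov-regularized KM step matching the schematic update above; the particular schedules $\beta_n = 1 - 1/(n+1)$ and constant $\lambda_n$ are assumptions chosen only for illustration, and the cited work gives precise conditions on the two sequences:

```python
import numpy as np

# Tikhonov-regularized KM: shrink the iterate toward the origin by beta_n -> 1,
# then apply a relaxed KM step at the regularized point.
def tikhonov_km(T, x0, n_iter, lam=0.5):
    x = np.asarray(x0, dtype=float)
    for n in range(1, n_iter + 1):
        beta = 1.0 - 1.0 / (n + 1)          # Tikhonov (regularization) parameter
        z = beta * x                        # shrink toward 0 (the regularization anchor)
        x = (1 - lam) * z + lam * T(z)      # relaxed KM step at the regularized point
    return x
```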
5. Geometric, Nonlinear, and Game-Theoretic Generalizations
Nonlinear (CAT(0), Hyperbolic) KM Iterations
KM extends to metric and geodesic spaces (CAT(0)), traditionally formulated as $x_{n+1} = (1-\alpha_n)\,x_n \oplus \alpha_n\,T x_n$, where $\oplus$ denotes the geodesic convex combination (Foglia et al., 29 Oct 2025). Asymptotic regularity is preserved, and the same $O(1/\sqrt{n})$ rate applies. Convergence to a fixed point (in the sense of $\Delta$-convergence) is proved under mild control on step sizes.
Halpern iteration in metric settings further accelerates the residual rate to $O(1/n)$ and motivates hyperbolic variants for deep learning optimizers (Foglia et al., 29 Oct 2025).
Consensus, Equilibrium, and Relative KM Iteration in Games
In multi-agent consensus problems, KM iteration finds equilibria even when the network topology is only partially known. Convergence is guaranteed iff the underlying operator is strictly pseudocontractive, a property that can be verified via spectral and LMI criteria (Belgioioso et al., 2018).
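For a linear operator on $\mathbb{R}^d$, the Browder–Petryshyn notion of a $k$-strict pseudocontraction reduces to a matrix inequality that can be checked numerically; the sketch below (function names and the test matrix are hypothetical) verifies $A^\top A \preceq I + k\,(I-A)^\top(I-A)$ through the largest eigenvalue of the difference:

```python
import numpy as np

# k-strict pseudocontraction check for a linear map A:
# ||Ax - Ay||^2 <= ||x - y||^2 + k ||(I - A)x - (I - A)y||^2 for all x, y,
# i.e. A^T A - I - k (I - A)^T (I - A) is negative semidefinite.
def is_k_strict_pseudocontraction(A, k, tol=1e-10):
    I = np.eye(A.shape[0])
    M = A.T @ A - I - k * (I - A).T @ (I - A)
    return np.max(np.linalg.eigvalsh(M)) <= tol   # M is symmetric, so eigvalsh applies

# Hypothetical test matrix: expansive in norm (||A|| = 2), yet a k-strict
# pseudocontraction for every k >= 1/3.
A = np.diag([0.5, -2.0])
print([round(k, 2) for k in np.linspace(0, 0.99, 12)
       if is_k_strict_pseudocontraction(A, k)])   # admissible k values
```

With such a $k$ in hand, relaxation parameters restricted to $(0, 1-k)$ recover the classical Browder–Petryshyn regime for Mann-type convergence.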
In stochastic mean-payoff and entropy games, "relative" KM schemes leverage normalized operators under Hilbert seminorms, exploiting the additive homogeneity of Shapley operators to obtain explicit iteration-complexity bounds for $\varepsilon$-approximation, significantly improving upon prior bounds (Akian et al., 2023).
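A hedged toy sketch of this idea on a one-player (deterministic mean-payoff) instance: the max-plus Bellman operator below is additively homogeneous and sup-norm nonexpansive, and the normalized KM iterates are monitored through the span (Hilbert seminorm) of their increments. The payoff matrix and the normalization are assumptions for illustration, not the algorithm of Akian et al.:

```python
import numpy as np

# Toy max-plus Bellman operator T(x)_i = max_j (r[i, j] + x_j): additively
# homogeneous (T(x + c) = T(x) + c) and nonexpansive in the sup-norm.
r = np.array([[1.0, 4.0],
              [3.0, 2.0]])                      # hypothetical payoffs for moves i -> j
T = lambda x: np.max(r + x[None, :], axis=1)

x, alpha = np.zeros(2), 0.5
for _ in range(200):
    x = (1 - alpha) * x + alpha * T(x)          # KM (relaxed value-iteration) step
    x -= x[0]                                   # normalize: quotient out additive constants

inc = T(x) - x
span = inc.max() - inc.min()                    # Hilbert seminorm of the increments
# When the span is small, each increment approximates the mean payoff
# (here the maximal cycle mean 3.5, attained by the cycle 0 -> 1 -> 0).
print(inc, span)
```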
6. Practical Implementations and Application Domains
KM-type iterations and their variants (stochastic, inertial, adaptive, Tikhonov-regularized) underpin algorithms in:
- Convex optimization (proximal point, forward-backward, Douglas-Rachford)
- Signal and image processing (deblurring, inpainting, denoising) (He et al., 28 Oct 2025)
- Matrix completion (low-rank recovery) (He et al., 28 Oct 2025)
- Reinforcement learning (Q-learning with monotone updates, policy iteration; see the sketch after this list) (Bot et al., 2019, Bravo et al., 2022, Pischke et al., 2024)
- Distributed consensus (Belgioioso et al., 2018)
- Zero-sum games and variational inequalities (Akian et al., 2023)
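As one concrete instance from the reinforcement-learning bullet above, damped (relaxed) value iteration is a KM scheme applied to the Bellman optimality operator, which is a $\gamma$-contraction and hence nonexpansive; the toy MDP below (transition kernel, rewards, and schedules) is a hypothetical example:

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP; the Bellman optimality operator
# (T V)(s) = max_a [ R(s, a) + gamma * sum_s' P(s' | s, a) V(s') ]
# is a gamma-contraction, so relaxed value iteration is a KM iteration.
gamma = 0.9
P = np.array([[[0.8, 0.2], [0.1, 0.9]],        # P[s, a, s']
              [[0.5, 0.5], [0.9, 0.1]]])
R = np.array([[1.0, 0.0],                      # R[s, a]
              [0.5, 2.0]])

def bellman(V):
    Q = R + gamma * np.einsum('sat,t->sa', P, V)   # Q[s, a]
    return Q.max(axis=1)                            # greedy backup

V, alpha = np.zeros(2), 0.5
for _ in range(500):
    V = (1 - alpha) * V + alpha * bellman(V)        # KM (damped value-iteration) step

print(V, np.linalg.norm(V - bellman(V), ord=np.inf))   # residual near 0 at the fixed point
```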
Empirical comparisons show that momentum-enhanced and adaptive-KM variants consistently outperform classical schemes in both computational speed and convergence rate, but can exhibit oscillatory or spiraling behavior typical of momentum methods (Bot et al., 2022, He et al., 28 Oct 2025).
7. Parameter Tuning, Rate Explicitness, and Theoretical Implications
Quantitative bounds on asymptotic regularity and convergence rates are now explicit due to advances in proof mining and optimal transport analysis (Firmino et al., 16 Jan 2025). This enables detailed complexity planning for practical implementations, for instance, explicit oracle complexity for minibatch KM in stochastic environments (Pischke et al., 2024). Parameter dependencies on the relaxation and inertia sequences, convexity moduli, and problem geometry guide optimal algorithm design.
KM-type schemes are robust to perturbations, approximation, and stochastic deviations, provided error terms are appropriately controlled (summable or diminishing), and inertia/momentum parameters are carefully chosen to avoid instability (Maulén et al., 2022, Cui et al., 2019).
References
- (Cominetti et al., 2012) Cominetti, Soto, Vaisman: Rate of convergence, Bernoulli sum connection, explicit universal bounds.
- (Bartz et al., 28 Dec 2025) Bartz, Bauschke, Gao: Strong convergence in linear case, Baillon–Bruck–Reich revisited.
- (Belgioioso et al., 2018) Belgioioso et al.: Strict pseudocontractiveness, operator-theoretic characterizations.
- (Firmino et al., 16 Jan 2025) Firmino, Leuștean: Proof mining, quadratic rates, explicit complexity.
- (Bot et al., 2022) Bot, Nguyen: Fast KM (Nesterov), residual decay.
- (He et al., 28 Oct 2025) Bot et al.: Two-step KM with adaptive momentum, image/matrix experiments.
- (Bot et al., 2019) Bot, Csetnek, Meier: Tikhonov-KM, strong convergence with variable steps.
- (Maulén et al., 2022) Combettes, Salzo: Inertial KM, weak/strong/linear variants.
- (Bravo et al., 2017) Bravo, Cominetti, Pavez: Inexact KM, error bounds, continuous time.
- (Cui et al., 2019) Cui, Yang, Tang, Zhu: Inexact inertial KM, residual rates.
- (Contreras et al., 2021) Bravo, Cominetti: Optimal residual bounds, Halpern iteration.
- (Kwon, 25 Sep 2025) Hendrickx et al.: Regret minimization, AdaGrad-KM extension.
- (Sababe et al., 2 Jun 2025) Erdinc, Salzo: Bregman SKM, adaptive geometries, stochastic stability.
- (Akian et al., 2023) Akian, Gaubert, Naepels, Terver: Relative KM for games, complexity bounds.
- (Pischke et al., 2024) Pischke, Powell: Generalized stochastic Halpern-KM, oracle complexity.
- (Foglia et al., 29 Oct 2025) Pinto, Pischke: CAT(0) KM iterations, hyperbolic optimization.
Summary Table: Scheme Variants and Convergence Properties
| Iteration Variant | Rate & Convergence | Noise/Stability | Parameter Control |
|---|---|---|---|
| Classical KM | $O(1/\sqrt{n})$, weak (linear: strong) | Robust to summable errors | $\sum_n \alpha_n(1-\alpha_n) = \infty$ |
| Inertial KM | $O(1/\sqrt{n})$ best residual | Sensitive to inertia choice | Bounded/increasing inertia $\beta_n$ |
| Tikhonov-KM | Strong convergence, min-norm | Variable step size, small Tikhonov | Regularization $\beta_n$, relaxation $\lambda_n$ with controlled decay |
| Fast, Adaptive KM | $o(1/n)$, $O(1/n)$ | Mildly oscillatory | Momentum schedule, geometric |
| Stochastic/Bregman SKM | $O(1/\sqrt{n})$ residual averages | Martingale/noise trimming | Diminishing steps, summable variance |
| Proof-mined Generalized | Explicit rate, quadratic possible | Error modulus imposed | Fejér, convexity modulus |
The Krasnoselskii-Mann iteration remains a central tool in nonlinear analysis, monotone operator theory, and optimization. Its generalizations, inertial and regularized variants, stochastic extensions, and precise rate theory constitute a mature and versatile algorithmic arsenal with rigorous theoretical guarantees and broad practical impact across mathematical and computational sciences.