Dampened Mann Iteration
- Dampened Mann iteration is a generalized fixed-point scheme that employs damping and relaxation parameters to enhance convergence robustness in nonexpansive operator settings.
- It achieves convergence under conditions of operator perturbations, stochastic noise, and asynchronous coordinate updates in both normed and Hilbert spaces.
- This method underpins advanced applications in convex optimization, reinforcement learning, stochastic games, and regularization for ill-posed problems.
The dampened Mann iteration is a generalization of classical fixed-point iteration schemes designed for nonexpansive operators in normed spaces and Hilbert spaces. It incorporates relaxation or damping parameters to increase robustness and enable convergence under more flexible and realistic conditions, including operator approximation, stochastic perturbations, and asynchronous (chaotic) coordinate updates. Dampened Mann methods underpin advanced algorithms in nonlinear analysis, convex optimization, reinforcement learning, stochastic games, and regularization of ill-posed problems.
1. Mathematical Definition and Standard Formulations
The classical Mann iteration, for a nonexpansive operator $T : H \to H$ (with $H$ a real Hilbert space), generates a sequence $(x_n)$ by
$x_{n+1} = (1 - \alpha_n) x_n + \alpha_n T x_n$,
where $\alpha_n \in [0, 1]$ are relaxation coefficients (Ouyang, 2022). The dampened Mann (also: relaxed, Krasnosel'skiǐ–Mann) iteration generalizes this by (i) allowing additional shrinkage via a dampening factor $\delta_n \in (0, 1]$, (ii) permitting step-sizes or relaxation weights to vary more broadly, and (iii) including perturbations $e_n$:
$x_{n+1} = (1 - \alpha_n) x_n + \alpha_n \delta_n (T x_n + e_n)$,
or more generally, for coordinate-wise updates over active sets $S_n$ and operators $T_n$ varying with $n$:
$x_{n+1}^{(i)} = (1 - \alpha_n^{(i)}) x_n^{(i)} + \alpha_n^{(i)} \delta_n (T_n x_n + e_n)^{(i)}$ for $i \in S_n$, with $x_{n+1}^{(i)} = x_n^{(i)}$ otherwise
(Baldan et al., 22 Jan 2026, Baldan et al., 15 Jan 2025).
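A minimal numerical sketch of the dampened update, assuming a simple linear operator and constant parameter schedules (all choices below are illustrative, not taken from the cited papers):

```python
import numpy as np

def dampened_mann(T, x0, alphas, deltas, errors=None, n_iter=100):
    """Dampened Mann iteration (sketch):
        x_{k+1} = (1 - a_k) x_k + a_k * d_k * (T(x_k) + e_k)
    with relaxation weights a_k, dampening factors d_k in (0, 1],
    and optional perturbations e_k."""
    x = np.asarray(x0, dtype=float)
    for k in range(n_iter):
        e = errors[k] if errors is not None else 0.0
        x = (1 - alphas[k]) * x + alphas[k] * deltas[k] * (T(x) + e)
    return x

# Illustrative operator: T(x) = A x with ||A|| < 1, so the unique fixed point is 0.
A = np.array([[0.5, 0.2],
              [0.1, 0.6]])
T = lambda x: A @ x
x_star = dampened_mann(T, x0=[1.0, -1.0],
                       alphas=[0.5] * 200, deltas=[1.0] * 200, n_iter=200)
```

With these constant schedules the step reduces to the averaged map $\tfrac12(I + A)$, whose spectral radius is below one, so the iterates shrink geometrically toward the fixed point.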
A critical aspect is the choice and evolution of damping and relaxation sequences, which can converge to zero, remain bounded away from one, or satisfy summed or product divergence conditions depending on the analytical context.
2. Convergence Theory: Deterministic, Perturbed, and Stochastic Schemes
Fejér Monotonicity and Boundedness
A central concept in analyzing dampened Mann schemes is Fejér monotonicity: a sequence $(x_n)$ is Fejér monotone with respect to a closed convex set $C$ if
$\|x_{n+1} - z\| \le \|x_n - z\|$ for all $z \in C$ and all $n$
(Ouyang, 2022). This property guarantees boundedness and often asymptotic regularity.
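A quick numerical illustration of Fejér monotonicity, using the projection onto the unit ball as a (firmly) nonexpansive operator whose fixed-point set is the ball itself (the operator and starting point are illustrative):

```python
import numpy as np

def proj_ball(x):
    """Projection onto the closed unit ball: nonexpansive, Fix = the ball."""
    n = np.linalg.norm(x)
    return x if n <= 1 else x / n

z = np.array([0.3, -0.4])          # a fixed point (lies inside the unit ball)
x = np.array([5.0, 2.0])           # starting point outside the ball
dists = []
for _ in range(50):
    dists.append(np.linalg.norm(x - z))
    x = 0.5 * x + 0.5 * proj_ball(x)   # Mann step with alpha = 1/2

# Fejér monotonicity: distances to the fixed point never increase.
monotone = all(dists[k + 1] <= dists[k] + 1e-12 for k in range(len(dists) - 1))
```

The recorded distances to $z$ form a nonincreasing sequence, exactly as the definition predicts.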
General Convergence Results
For nonexpansive ($\alpha$-averaged) operators, weak convergence of the classical or dampened Mann iteration is ensured under conditions such as:
- Divergence of the relaxation series, $\sum_n \alpha_n (1 - \alpha_n) = \infty$;
- Summability of the error norms, $\sum_n \|e_n\| < \infty$ (Ouyang, 2022, May, 22 Apr 2025).
For strong convergence and linear rates, additional regularity such as metric subregularity of $\mathrm{Id} - T$ may be required (Ouyang, 2022).
In stochastic settings with martingale-difference noise $U_n$, i.e., updates of the form $x_{n+1} = (1 - \alpha_n) x_n + \alpha_n (T x_n + U_n)$, the following conditions assure almost sure convergence:
- $\alpha_n \in (0, 1)$ with $\sum_n \alpha_n = \infty$ and $\sum_n \alpha_n^2 < \infty$ (Robbins–Monro-type stepsizes), together with uniformly bounded conditional noise variance $\mathbb{E}[\|U_n\|^2 \mid \mathcal{F}_n] \le \sigma^2$ (Bravo et al., 2022).
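A sketch of such a stochastic Mann iteration on a one-dimensional contraction, with zero-mean Gaussian noise and the illustrative Robbins–Monro choice $\alpha_k = 1/(k+1)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stochastic Mann iteration x_{k+1} = (1 - a_k) x_k + a_k (T(x_k) + U_k),
# with martingale-difference noise U_k (zero mean, unit variance) and
# stepsizes a_k = 1/(k+1): sum a_k diverges, sum a_k^2 converges.
T = lambda x: 0.5 * x              # contraction; unique fixed point 0
x = np.array([10.0])
for k in range(20000):
    a = 1.0 / (k + 1)
    U = rng.normal(0.0, 1.0, size=1)   # martingale-difference noise
    x = (1 - a) * x + a * (T(x) + U)
residual = abs(float(x[0]))
```

Despite unit-variance noise at every step, the decaying stepsizes average the noise out and the iterate settles near the fixed point.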
Chaotic and Asynchronous Updates
Recent advances allow for asynchronous (chaotic) Mann updates, with only subsets of coordinates updated per iteration. Under a progressing parameter scheme—where dampening eventually becomes negligible compared to update sizes—convergence to the least fixed-point is preserved even in high-dimensional, partially updated systems (Baldan et al., 22 Jan 2026).
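A toy sketch of chaotic coordinate updates, here on a Euclidean coordinate-wise contraction rather than the lattice-theoretic setting of the cited work (the operator, update probability, and relaxation weight are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Asynchronous (chaotic) Mann updates: each iteration refreshes only a random
# subset S_k of coordinates, leaving the others unchanged.
A = np.array([[0.4, 0.1, 0.0],
              [0.0, 0.3, 0.2],
              [0.1, 0.0, 0.5]])          # sup-norm contraction (row sums < 1)
b = np.array([1.0, 2.0, 3.0])
T = lambda x: A @ x + b
x_star = np.linalg.solve(np.eye(3) - A, b)   # the unique fixed point

x = np.zeros(3)
alpha = 0.7
for _ in range(400):
    S = rng.random(3) < 0.5            # random active coordinate set
    if not S.any():
        continue                       # nothing updated this round
    t = T(x)
    x[S] = (1 - alpha) * x[S] + alpha * t[S]
```

Each coordinate is updated only about half the time, yet the sup-norm error still contracts once every coordinate has been touched, so the partial updates converge to the same fixed point as the synchronous scheme.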
3. Rate Bounds, Optimality, and Regularization
Error Bounds and Rates
For the classical Krasnosel'skiǐ–Mann iteration, the optimal rate for the fixed-point residual is
$\|x_n - T x_n\| \le \dfrac{\mathrm{diam}(C)}{\sqrt{\pi \sum_{k=1}^{n} \alpha_k (1 - \alpha_k)}}$,
where $\mathrm{diam}(C)$ is the diameter of the convex set $C$ (Contreras et al., 2021). For constant relaxation this yields an unavoidable $O(1/\sqrt{n})$ decay and is proven tight in general normed spaces.
Improved rates of $O(1/n)$ can be achieved in Halpern-type schemes that reference an anchor point, or in dampened variants with specific parameter choices, but not in the self-referential KM setup (Contreras et al., 2021, Cheval et al., 2022).
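The gap between the two schemes can be observed numerically. The sketch below runs KM and Halpern on a planar rotation, a standard nonexpansive test operator with unique fixed point $0$ (the angle, anchor, and schedules are illustrative):

```python
import numpy as np

theta = 0.5
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # nonexpansive, Fix = {0}
T = lambda x: R @ x

n = 2000
x = np.array([1.0, 0.0])
for k in range(n):                  # KM step with constant alpha = 1/2
    x = 0.5 * x + 0.5 * T(x)
res_km = np.linalg.norm(x - T(x))   # fixed-point residual ||x_n - T x_n||

u = np.array([1.0, 0.0])            # Halpern anchor point
y = u.copy()
for k in range(n):                  # y_{k+1} = b_k u + (1 - b_k) T y_k
    b = 1.0 / (k + 2)
    y = b * u + (1 - b) * T(y)
res_hal = np.linalg.norm(y - T(y))
```

On this rotation the averaged KM map is a strict contraction, so its residual decays geometrically; the anchored Halpern residual decays at the anchor-driven $O(1/n)$ rate. The $O(1/\sqrt{n})$ lower bound for KM is attained only on worst-case instances, not on every operator.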
For stochastic variants under bounded variance, explicit computable bounds on the expected residual are available: constant stepsizes can be optimized against a fixed horizon length, while power-law stepsizes $\alpha_n \propto n^{-a}$ trade the deterministic decay against noise accumulation, with the worst-case rate attained at the endpoints of the admissible range of $a$ (Bravo et al., 2022).
Regularization in Ill-posed Problems
The segmenting/dampened Mann scheme is applied as a regularizing procedure for inverse problems, such as elliptic Cauchy problems. The core iteration in function space, $u_{n+1} = (1 - \alpha_n) u_n + \alpha_n T u_n$, ensures regularization, with convergence characterized by divergence of the series $\sum_n \alpha_n$ and strong contraction properties of the underlying operator (Engl et al., 2020).
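A finite-dimensional sketch of this regularizing behaviour, using a Landweber-flavoured Mann step on an ill-conditioned linear system (the forward map, stepsize, and iteration count are illustrative, not those of Engl et al.):

```python
import numpy as np

rng = np.random.default_rng(2)

# Ill-conditioned forward operator A with singular values from 1 down to 1e-3.
n = 20
Q1, _, Q2 = np.linalg.svd(rng.standard_normal((n, n)))
A = Q1 @ np.diag(np.linspace(1.0, 1e-3, n)) @ Q2
u_true = np.ones(n)
f = A @ u_true                     # consistent (noise-free) data

# Mann-type (Landweber) iteration u <- u + a A^T (f - A u); the constant
# stepsize a = 0.9 satisfies a < 2 / ||A||^2 = 2 here, and a constant
# stepsize trivially keeps the series sum(a_k) divergent.
a = 0.9
u = np.zeros(n)
for _ in range(5000):
    u = u + a * A.T @ (f - A @ u)
```

The well-conditioned spectral components are recovered quickly while the smallest singular directions are barely touched after 5000 steps, which is precisely the regularizing effect: the data residual is driven down, while the iterate stays a damped (hence stable) approximation of the true solution.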
4. Model-based and Data-driven Applications
Markov Decision Processes and Reinforcement Learning
Dampened Mann iteration is a key ingredient in average-reward Q-learning for MDPs. Under unichain conditions, the stochastic relaxation variant
$q_{n+1} = (1 - \alpha_n) q_n + \alpha_n (\widehat{T} q_n + U_n)$,
with $\widehat{T}$ a (relative) Bellman optimality operator and $U_n$ sampling noise,
guarantees almost sure convergence to the optimal value function, with explicit rates depending on the decay of the stepsizes and boundedness of the noise (Bravo et al., 2022, Baldan et al., 15 Jan 2025, Baldan et al., 22 Jan 2026).
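A deterministic sketch of the damped relative update on a toy two-state average-reward MDP (exact expectations instead of samples; the MDP, reference state, and damping weight are illustrative):

```python
import numpy as np

# Toy MDP: deterministic transitions nxt[s][a] and rewards rew[s][a].
# In state 0: action 0 stays (reward 1), action 1 moves to state 1 (reward 0).
# In state 1: action 0 moves to state 0 (reward 3), action 1 stays (reward 0).
# The optimal policy cycles 0 -> 1 -> 0 with average reward (0 + 3) / 2 = 1.5.
nxt = [[0, 1], [0, 1]]
rew = [[1.0, 0.0], [3.0, 0.0]]

h = np.zeros(2)                    # relative value (bias) estimate
alpha = 0.5                        # damping / relaxation weight
for _ in range(200):
    Th = np.array([max(rew[s][a] + h[nxt[s][a]] for a in (0, 1))
                   for s in (0, 1)])
    rho = Th[0] - h[0]             # gain estimate at reference state 0
    h = (1 - alpha) * h + alpha * (Th - Th[0])   # damped relative update
```

The damping plays the role of an aperiodicity transform: the relative values settle at a solution of the average-reward Bellman equation, and the gain estimate `rho` converges to the optimal average reward 1.5.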
Stochastic Games and Bellman Equations
Enhanced convergence rates, of order $O(\log(1/\varepsilon))$ iterations to reach accuracy $\varepsilon$, for mean-payoff stochastic games are achieved via relative dampened Mann schemes in Hilbert's semi-norm, exploiting operator homogeneity and ergodicity properties. The analysis yields explicit complexity bounds dependent on minimal transition probabilities and game structure (Akian et al., 2023).
5. Operator Theoretic Characterizations and Structural Requirements
Strict Pseudocontractiveness and Linear Systems
For linear operators, KM iteration converges if and only if strict pseudocontractiveness holds. A linear operator is strictly pseudocontractive when its spectrum lies in a suitable closed disk in the complex plane and no non-trivial Jordan blocks are present at eigenvalue 1. This is equivalent to the existence of a positive definite matrix solving a specific linear matrix inequality, which can be checked numerically to certify convergence (Belgioioso et al., 2018).
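A spectral version of this certificate is easy to sketch: for $T(x) = Ax$, the KM iterate is $x_{k+1} = Mx_k$ with $M = (1-\alpha)I + \alpha A$, and convergence requires every eigenvalue of $M$ to lie strictly inside the unit circle except a semisimple eigenvalue 1 (the eigen-based check below is a simplified stand-in for the LMI test of the cited work):

```python
import numpy as np

def km_converges(A, alpha=0.5, tol=1e-9):
    """Check KM convergence for the linear map T(x) = A x via the spectrum
    of the averaged matrix M = (1 - alpha) I + alpha A."""
    n = A.shape[0]
    M = (1 - alpha) * np.eye(n) + alpha * A
    mu = np.linalg.eigvals(M)
    # Reject eigenvalues outside the closed unit disk, and unit-modulus
    # eigenvalues other than 1.
    if np.any((np.abs(mu) > 1 + tol) |
              ((np.abs(mu) > 1 - tol) & (np.abs(mu - 1) > tol))):
        return False
    # Eigenvalue 1 must be semisimple: rank(M - I) == rank((M - I)^2).
    B = M - np.eye(n)
    return np.linalg.matrix_rank(B, tol=1e-8) == np.linalg.matrix_rank(B @ B, tol=1e-8)

rot = np.array([[0.0, -1.0], [1.0, 0.0]])     # 90-degree rotation: KM converges
jordan = np.array([[1.0, 1.0], [0.0, 1.0]])   # Jordan block at 1: KM diverges
```

The rotation passes the check (its averaged map is a strict contraction), while the Jordan block at eigenvalue 1 fails the semisimplicity test, matching the characterization above.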
Robustness to Approximations and Perturbations
Dampened Mann iterations can accommodate both time-varying operators (approximating sequences $T_n \to T$) and inexact evaluations (perturbation terms $e_n$). Under mild summability and monotonicity conditions, convergence to the least fixed point is retained (Baldan et al., 22 Jan 2026, Baldan et al., 15 Jan 2025, Ouyang, 2022).
Anchored and Multivariate Variants
Tikhonov–Mann and modified Halpern iterations introduce anchor points and dual control sequences, supporting strong convergence and metastability with explicit rates in geodesic contexts including CAT(0) spaces (Cheval et al., 2022). Multivariate generalizations enable selective coordinate updates and relaxed sweeping strategies.
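A sketch of an anchored Tikhonov–Mann-type step in the Euclidean setting (the two-stage form and the parameter schedules below are illustrative): each iteration first pulls the iterate toward an anchor $u$ with a vanishing weight, then applies an ordinary Mann step.

```python
import numpy as np

# Anchored (Tikhonov-Mann-style) iteration, sketched as:
#   v_k = (1 - b_k) u + b_k x_k,   x_{k+1} = (1 - l_k) v_k + l_k T(v_k),
# with anchor weight 1 - b_k -> 0. The anchoring term enforces strong
# convergence toward the fixed point nearest the anchor.
theta = 1.0
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
T = lambda x: R @ x                 # nonexpansive; unique fixed point 0

u = np.array([2.0, 0.0])            # anchor point
x = u.copy()
lam = 0.5                           # Mann relaxation weight
for k in range(5000):
    b = k / (k + 1.0)               # b_k -> 1, so the anchor pull vanishes
    v = (1 - b) * u + b * x
    x = (1 - lam) * v + lam * T(v)
```

The anchor contribution decays like $1/k$, so the iterate converges (in norm, not merely weakly) to the unique fixed point at the origin.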
6. Practical Algorithmic Considerations and Parameter Selection
Parameter choices significantly affect convergence. Canonical guidelines include:
- Dampening factors $\delta_n \in (0, 1]$ chosen so that dampening eventually becomes negligible relative to the update sizes (e.g., $\delta_n \to 1$)
- Relaxation/stepsizes $\alpha_n$ satisfying $\sum_n \alpha_n (1 - \alpha_n) = \infty$, or bounded away from zero and one
- For stochastic or sampled approximation, ensure the sum $\sum_n \alpha_n$ diverges while $\sum_n \alpha_n^2 < \infty$, and control noise via tail bounds (Bravo et al., 2022, Baldan et al., 15 Jan 2025)
In practice, block or asynchronous updates leveraging vector-valued (coordinate-wise) relaxation parameters are effective for high-dimensional systems (Baldan et al., 22 Jan 2026). Restart or adaptivity techniques can accelerate convergence in regularization contexts, though step-size decay must remain sufficiently slow to preserve divergence of the required series (Engl et al., 2020, May, 22 Apr 2025).
7. Broader Context and Comparative Analysis
Dampened Mann iteration generalizes and extends the classical Kleene, Mann, and Picard fixed-point schemes, enabling robust performance under operator uncertainty, stochasticity, and partial updates. It is rate-optimal ($O(1/\sqrt{n})$ for the fixed-point residual) in general for nonexpansive mappings, but does not surpass Halpern iteration in settings where anchoring is possible and desired (Contreras et al., 2021, Cheval et al., 2022). Extensions to games, MDPs, and nonlinear analysis exploit these features to solve high-dimensional, structurally complex, and sampled systems with theoretical guarantees on convergence, regularization, and statistical stability.
Key References:
- (Ouyang, 2022) On the Stability of Krasnosel'skiǐ-Mann Iterations
- (Bravo et al., 2022) Stochastic fixed-point iterations for nonexpansive maps
- (Baldan et al., 22 Jan 2026) Computing Fixpoints of Learned Functions: Chaotic Iteration
- (Akian et al., 2023) Solving irreducible stochastic mean-payoff games
- (May, 22 Apr 2025) On the convergence of a perturbed one dimensional Mann's process
- (Contreras et al., 2021) Optimal error bounds for nonexpansive fixed-point iterations
- (Baldan et al., 15 Jan 2025) Approximating Fixpoints of Approximated Functions
- (Belgioioso et al., 2018) On the convergence of discrete-time linear systems
- (Cheval et al., 2022) On modified Halpern and Tikhonov-Mann iterations
- (Engl et al., 2020) A Mann iterative regularization method for elliptic Cauchy problems