
Dampened Mann Iteration

Updated 29 January 2026
  • Dampened Mann iteration is a generalized fixed-point scheme that employs damping and relaxation parameters to enhance convergence robustness in nonexpansive operator settings.
  • It achieves convergence under conditions of operator perturbations, stochastic noise, and asynchronous coordinate updates in both normed and Hilbert spaces.
  • This method underpins advanced applications in convex optimization, reinforcement learning, stochastic games, and regularization for ill-posed problems.

The dampened Mann iteration is a generalization of classical fixed-point iteration schemes designed for nonexpansive operators in normed spaces and Hilbert spaces. It incorporates relaxation or damping parameters to increase robustness and enable convergence under more flexible and realistic conditions, including operator approximation, stochastic perturbations, and asynchronous (chaotic) coordinate updates. Dampened Mann methods underpin advanced algorithms in nonlinear analysis, convex optimization, reinforcement learning, stochastic games, and regularization of ill-posed problems.

1. Mathematical Definition and Standard Formulations

The classical Mann iteration, for a nonexpansive operator $T: H \to H$ (with $H$ a real Hilbert space), generates a sequence by

$$x_{n+1} = (1-\lambda_n)x_n + \lambda_n T(x_n)$$

where $0 < \lambda_n < 1$ are relaxation coefficients (Ouyang, 2022). The dampened Mann (also: relaxed, Krasnosel'skiǐ–Mann) iteration generalizes this by (i) allowing additional shrinkage via a dampening factor $\beta_n$, (ii) permitting step-sizes or relaxation weights to vary more broadly, and (iii) including perturbations $e_n$:

$$x_{n+1} = (1-\beta_n)\left[(1-\lambda_n)x_n + \lambda_n T(x_n)\right] + e_n$$

or, more generally, for coordinate-wise updates with operators $T_n$ varying with $n$:

$$x_{n+1}(i) = \begin{cases} (1-\beta_n(i))\left[x_n(i) + \lambda_n(i)\,(T_n(x_n)(i) - x_n(i))\right] & \text{if coordinate } i \text{ is updated} \\ x_n(i) & \text{otherwise} \end{cases}$$

(Baldan et al., 22 Jan 2026; Baldan et al., 15 Jan 2025).
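As an illustration (not taken from the cited papers), the general scheme can be sketched in a few lines; the projection operator and the parameter schedules below are arbitrary choices for demonstration:

```python
import numpy as np

def dampened_mann(T, x0, lam, beta, n_iter=200, errors=None):
    """Dampened Mann iteration:
    x_{n+1} = (1 - beta_n) * [(1 - lam_n) x_n + lam_n T(x_n)] + e_n."""
    x = np.asarray(x0, dtype=float)
    for n in range(n_iter):
        relaxed = (1.0 - lam(n)) * x + lam(n) * T(x)
        e = errors(n) if errors is not None else 0.0
        x = (1.0 - beta(n)) * relaxed + e
    return x

# Illustrative operator: metric projection onto the unit ball (nonexpansive;
# its fixed points are exactly the points of the ball).
def proj_unit_ball(x):
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

x_star = dampened_mann(
    T=proj_unit_ball,
    x0=np.array([3.0, 4.0]),
    lam=lambda n: 0.5,                  # constant relaxation in (0, 1)
    beta=lambda n: 1.0 / (n + 2) ** 2,  # summable dampening: shrinkage vanishes
)
print(np.linalg.norm(x_star))           # iterates land inside the unit ball
```

With the summable dampening schedule, the cumulative shrinkage factor stays bounded away from zero, so the iterates settle inside the fixed-point set rather than collapsing to the origin.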

A critical aspect is the choice and evolution of damping and relaxation sequences, which can converge to zero, remain bounded away from one, or satisfy summed or product divergence conditions depending on the analytical context.

2. Convergence Theory: Deterministic, Perturbed, and Stochastic Schemes

Fejér Monotonicity and Boundedness

A central concept in analyzing dampened Mann schemes is Fejér monotonicity: a sequence $\{x_n\}$ is Fejér monotone with respect to a closed convex set $C$ if

$$\|x_{n+1} - z\| \leq \|x_n - z\| \quad \forall z \in C,\ \forall n$$

(Ouyang, 2022). This property guarantees boundedness and often asymptotic regularity.

General Convergence Results

For nonexpansive ($\alpha$-averaged) operators, weak convergence of the classical or dampened Mann iteration is ensured under conditions such as:

  • $0 < \inf \lambda_n \leq \sup \lambda_n < 1$
  • $\sum_n \lambda_n (1 - \lambda_n) = \infty$
  • Summability of error norms, $\sum_n \|e_n\| < \infty$ (Ouyang, 2022; May, 22 Apr 2025).
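These conditions can be checked on a toy operator (not from the cited works): rotation by 90° in the plane is nonexpansive with unique fixed point $0$, plain Picard iteration cycles forever, yet the relaxed iteration with constant $\lambda_n = 1/2$ drives the residual to zero:

```python
import numpy as np

R = np.array([[0.0, -1.0], [1.0, 0.0]])   # rotation by 90 degrees
T = lambda x: R @ x                        # nonexpansive; Fix(T) = {0}

x = np.array([1.0, 0.0])
lam = 0.5                                  # satisfies 0 < inf lam_n <= sup lam_n < 1
for n in range(200):
    x = (1 - lam) * x + lam * T(x)

print(np.linalg.norm(x - T(x)))            # residual -> 0; Picard iteration would cycle
```

The averaged map $(1-\lambda)I + \lambda R$ has spectral radius $\sqrt{2}/2 < 1$, which is exactly why the relaxation succeeds where Picard fails.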

For strong convergence and linear rates, additional regularity such as metric subregularity of $\mathrm{Id} - T$ may be required (Ouyang, 2022).

In stochastic settings with martingale-difference noise,

$$x_k = (1-\alpha_k)x_{k-1} + \alpha_k\left(T(x_{k-1}) + U_k\right)$$

the following conditions assure almost sure convergence:

  • $\sum_k \alpha_k (1 - \alpha_k) = \infty$
  • $\sum_k \alpha_k \theta_{k-1} < \infty$ and $\sum_k \alpha_k^2 \theta_k^2 < \infty$, where $\theta_k^2 = \mathbb{E}\|U_k\|^2$ (Bravo et al., 2022).
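A minimal stochastic sketch (the operator, noise model, and stepsizes are illustrative assumptions): with a contraction $T(x) = x/2$ and noise whose variance $\theta_k^2 = 1/k$ decays, the power-law stepsizes below satisfy the summability conditions and the iterates settle near the fixed point $0$:

```python
import numpy as np

rng = np.random.default_rng(0)
T = lambda x: 0.5 * x                       # a contraction; unique fixed point 0

x = 10.0
for k in range(1, 20001):
    alpha = (k + 1) ** (-2.0 / 3.0)         # sum alpha_k (1 - alpha_k) = infinity
    U = rng.normal(scale=k ** -0.5)         # decaying noise: sum alpha_k theta_{k-1} < inf
    x = (1 - alpha) * x + alpha * (T(x) + U)

print(abs(x))                               # iterates concentrate near the fixed point
```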

Chaotic and Asynchronous Updates

Recent advances allow for asynchronous (chaotic) Mann updates, with only subsets of coordinates updated per iteration. Under a progressing parameter scheme—where dampening eventually becomes negligible compared to update sizes—convergence to the least fixed-point is preserved even in high-dimensional, partially updated systems (Baldan et al., 22 Jan 2026).

3. Rate Bounds, Optimality, and Regularization

Error Bounds and Rates

For the classical Krasnosel'skiǐ–Mann iteration, the optimal rate for the fixed-point residual $\|x_n - T x_n\|$ is

$$\|x_n - T x_n\| \leq \frac{D}{\sqrt{\pi \sum_{i=1}^n \lambda_i (1 - \lambda_i)}}$$

where $D$ is the diameter of the convex set (Contreras et al., 2021). This yields an unavoidable $O(n^{-1/2})$ decay and is proven tight in general normed spaces.
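The bound can be checked numerically; the 90° rotation below is an arbitrary nonexpansive test operator on the unit ball, whose diameter is $D = 2$:

```python
import numpy as np

R = np.array([[0.0, -1.0], [1.0, 0.0]])     # nonexpansive on the unit ball
T = lambda x: R @ x

lam, D = 0.5, 2.0
x = np.array([1.0, 0.0])
s = 0.0                                     # running sum of lam_i * (1 - lam_i)
for n in range(50):
    x = (1 - lam) * x + lam * T(x)
    s += lam * (1 - lam)
    bound = D / np.sqrt(np.pi * s)
    assert np.linalg.norm(x - T(x)) <= bound  # rate bound holds at every step
print("residual bound verified over 50 iterations")
```

Here the residual actually decays geometrically, well inside the worst-case $O(n^{-1/2})$ envelope; the bound is tight only for adversarial operators.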

Improved rates ($O(1/n)$) can be achieved in Halpern-type schemes that reference an anchor point, or in dampened variants with specific parameter choices, but not in the self-referential KM setup (Contreras et al., 2021; Cheval et al., 2022).

For stochastic variants under bounded variance, explicit computable bounds can be given, e.g.,

$$\mathbb{E}\|x_n - T x_n\| \leq C n^{-1/6}$$

for constant stepsizes optimized over the horizon length; for power-law stepsizes $\alpha_n = 1/(n+1)^a$, the rates are $O(n^{-a+1/2})$, with the worst case optimized at $a = 2/3$ (Bravo et al., 2022).

Regularization in Ill-posed Problems

The segmenting/dampened Mann scheme is applied as a regularizing procedure for inverse problems, such as elliptic Cauchy problems. The core iteration in function space,

$$\varphi_{k+1} = (1 - \alpha_k) \varphi_k + \alpha_k T(\varphi_k),$$

ensures regularization, with convergence characterized by divergence of the series $\sum_{k}\alpha_k(1-\alpha_k)$ and strong contraction properties of the underlying operator (Engl et al., 2020).
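As a hypothetical discretized sketch (not the function-space setting of the cited work), $T$ can be a Landweber-type step $T(\varphi) = \varphi - \tau A^\top(A\varphi - y)$ for a mildly ill-conditioned matrix $A$, with the Mann relaxation playing the dampening role; noise-free data are assumed for simplicity:

```python
import numpy as np

A = np.vander(np.linspace(0.0, 1.0, 20), 5, increasing=True)  # ill-conditioned
phi_true = np.array([1.0, -2.0, 0.5, 0.0, 1.0])
y = A @ phi_true                            # noise-free data for simplicity

tau = 1.0 / np.linalg.norm(A, 2) ** 2       # makes T firmly nonexpansive
T = lambda p: p - tau * A.T @ (A @ p - y)   # fixed points solve A^T A phi = A^T y

phi = np.zeros(5)
for k in range(5000):
    alpha = 0.5                             # sum_k alpha_k (1 - alpha_k) diverges
    phi = (1 - alpha) * phi + alpha * T(phi)

print(np.linalg.norm(A @ phi - y))          # residual decreases monotonically
```

With noisy data, the iteration count itself acts as the regularization parameter: stopping early suppresses the amplification of noise along the small singular directions.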

4. Model-based and Data-driven Applications

Markov Decision Processes and Reinforcement Learning

Dampened Mann iteration is a key ingredient in average-reward Q-learning for MDPs. Under unichain conditions, the stochastic relaxation variant

$$Q_k = (1-\alpha_k)Q_{k-1} + \alpha_k\left[g + \max_{u'} Q_{k-1} - f(Q_{k-1})\right]$$

guarantees almost sure convergence to the optimal value function, with explicit rates depending on the decay of the stepsizes and boundedness of the noise (Bravo et al., 2022, Baldan et al., 15 Jan 2025, Baldan et al., 22 Jan 2026).
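A deterministic, model-based sketch of this relaxation (the two-state MDP, the normalization $f(Q) = Q(s_0, a_0)$ as in relative value iteration, and the synchronous updates are illustrative assumptions, not the exact stochastic algorithm of the cited papers):

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP: action a moves deterministically to state a.
r = np.array([[0.0, 1.0],    # r[s][a]
              [2.0, 1.0]])
# Optimal behavior cycles 0 -> 1 -> 0 with rewards 1, 2: optimal gain g* = 1.5.

Q = np.zeros((2, 2))
alpha = 0.5                                  # constant relaxation weight
for _ in range(500):
    f = Q[0, 0]                              # reference value f(Q), relative-VI style
    TQ = np.empty_like(Q)
    for s in range(2):
        for a in range(2):
            s_next = a                       # deterministic transition
            TQ[s, a] = r[s, a] + Q[s_next].max() - f
    Q = (1 - alpha) * Q + alpha * TQ         # dampened Mann / relaxation step

print(Q[0, 0])                               # converges to the optimal gain g* = 1.5
```

At the fixed point, the reference entry $Q(s_0, a_0)$ equals the optimal average reward, and the remaining entries encode relative values.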

Stochastic Games and Bellman Equations

Accelerated convergence, with iteration complexity logarithmic in the target accuracy, is achieved for mean-payoff stochastic games via relative dampened Mann schemes in Hilbert's seminorm, exploiting operator homogeneity and ergodicity properties. The analysis yields explicit complexity bounds depending on minimal transition probabilities and game structure (Akian et al., 2023).

5. Operator Theoretic Characterizations and Structural Requirements

Strict Pseudocontractiveness and Linear Systems

KM iteration converges for linear operators iff strict pseudocontractiveness holds. The operator AA is strictly pseudocontractive if its spectrum lies in a closed disk in the complex plane and no non-trivial Jordan blocks are present at eigenvalue 1. This is equivalent to the existence of a positive definite matrix solution to a specific linear matrix inequality, which can be checked to certify convergence (Belgioioso et al., 2018).
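The spectral criterion can be illustrated on a small linear map (a hypothetical example; the LMI certificate from the cited work is not reproduced here). The matrix below is expansive, so Picard iteration diverges, yet the spectrum of the averaged KM map lies strictly inside the unit disk:

```python
import numpy as np

A = np.array([[0.2, -1.0],
              [1.0,  0.2]])                        # eigenvalues 0.2 +/- 1i
assert np.all(np.abs(np.linalg.eigvals(A)) > 1.0)  # A is expansive: Picard diverges

lam = 0.5
M = (1 - lam) * np.eye(2) + lam * A                # KM update matrix
assert np.all(np.abs(np.linalg.eigvals(M)) < 1.0)  # averaged map contracts

x = np.array([1.0, 1.0])
for _ in range(100):
    x = M @ x
print(np.linalg.norm(x))                           # -> 0, the unique fixed point of A
```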

Robustness to Approximations and Perturbations

Dampened Mann iterations can accommodate both time-varying operators (approximations $f_n \to f$) and inexact evaluations (perturbation terms $e_n$). Under mild summability and monotonicity conditions, convergence to the least fixed point is retained (Baldan et al., 22 Jan 2026; Baldan et al., 15 Jan 2025; Ouyang, 2022).

Anchored and Multivariate Variants

Tikhonov–Mann and modified Halpern iterations introduce anchor points and dual control sequences, supporting strong convergence and metastability with explicit rates in geodesic contexts including CAT(0) spaces (Cheval et al., 2022). Multivariate generalizations enable selective coordinate updates and relaxed sweeping strategies.
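A toy sketch of the anchored update (again with the 90° rotation as test operator; the anchor and parameter schedules are illustrative choices, not prescriptions from the cited work):

```python
import numpy as np

R = np.array([[0.0, -1.0], [1.0, 0.0]])   # nonexpansive; Fix(T) = {0}
T = lambda x: R @ x

u = np.array([2.0, 0.0])                  # anchor point (illustrative choice)
x = u.copy()
lam = 0.5
for n in range(2000):
    beta = 1.0 / (n + 2)                  # beta_n -> 0 with sum beta_n = infinity
    mann_step = (1 - lam) * x + lam * T(x)
    x = beta * u + (1 - beta) * mann_step # anchored (Tikhonov-Mann) update
print(np.linalg.norm(x))                  # strong convergence toward 0
```

The vanishing anchor weight $\beta_n$ is what upgrades weak to strong convergence: the iterates are continually pulled toward $u$, and the limit is the fixed point nearest the anchor.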

6. Practical Algorithmic Considerations and Parameter Selection

Parameter choices significantly affect convergence. Canonical guidelines include:

  • Dampening factors $\beta_n \to 0$ with $\sum_n \beta_n = \infty$
  • Relaxation weights/stepsizes $\alpha_n \to 0$, or bounded away from zero, depending on the regime
  • For stochastic or sampled approximation, ensure $\sum_n \alpha_n (1 - \alpha_n) = \infty$ and control noise via tail bounds (Bravo et al., 2022; Baldan et al., 15 Jan 2025)

In practice, block or asynchronous updates leveraging vector-valued $\alpha_n, \beta_n$ are effective for high-dimensional systems (Baldan et al., 22 Jan 2026). Restart or adaptivity techniques can accelerate convergence in regularization contexts, though step-size decay must remain sufficiently slow to preserve divergence of the required series (Engl et al., 2020; May, 22 Apr 2025).
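A minimal sketch of asynchronous updates with vector-valued weights (the affine contraction, the per-coordinate weights, and the uniform sampling are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
b = np.array([1.0, 2.0, 3.0])
T = lambda x: 0.5 * x + b               # contraction; fixed point x* = 2b = (2, 4, 6)

x = np.zeros(3)
lam = np.array([0.6, 0.5, 0.4])         # vector-valued relaxation weights
for n in range(400):
    i = rng.integers(3)                 # asynchronous: only coordinate i is updated
    x[i] = x[i] + lam[i] * (T(x)[i] - x[i])

print(x)                                # approaches (2, 4, 6) despite partial updates
```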

7. Broader Context and Comparative Analysis

Dampened Mann iteration generalizes and extends the classical Mann–Kleene and Picard fixed-point schemes, enabling robust performance under operator uncertainty, stochasticity, and partial updates. It is proven rate-optimal, at $O(n^{-1/2})$, for general nonexpansive mappings, but does not surpass Halpern iteration in settings where anchoring is possible and desired (Contreras et al., 2021; Cheval et al., 2022). Extensions to games, MDPs, and nonlinear analysis exploit these features to solve high-dimensional, structurally complex, and sampled systems with theoretical guarantees on convergence, regularization, and statistical stability.


