
Almost-Supermartingale Processes

Updated 25 November 2025
  • Almost-supermartingale processes are recursive stochastic sequences that relax strict martingale conditions to achieve explicit convergence rates.
  • They provide a unifying framework for iterative schemes like stochastic gradient descent, Oja’s PCA, and the Robbins–Monro algorithm under minimal assumptions.
  • The methodology leverages normalized supermartingale techniques, auxiliary slowdown functions, and concentration inequalities to secure quantitative, time-uniform convergence guarantees.

Almost-supermartingale processes generalize classical supermartingale sequences, providing a unifying analytical framework for the study of stochastic iterative algorithms and convergence phenomena encountered in modern probability and optimization theory. These processes are defined by recursive inequalities that relax the strict contraction properties of martingales, enabling sharp quantitative and time-uniform convergence rates with minimal requirements on the underlying structure. Central instances include the Robbins–Siegmund convergence lemma, Dvoretzky’s theorem for noisy Hilbert-space recursions, and stochastic quasi-Fejér monotonicity in metric spaces, with direct implications for stochastic approximation schemes such as stochastic gradient descent, Oja’s PCA algorithm, and the Robbins–Monro procedure (Neri et al., 17 Apr 2025, Pham et al., 23 Nov 2025).

1. Formal Definitions and Relaxed Supermartingale Conditions

The almost-supermartingale condition is formulated as follows: Let $(X_n)$, $(A_n)$, $(C_n)$ be nonnegative, integrable, $\mathcal{F}_n$-adapted processes on a filtered probability space $(\Omega,\mathcal{F},P)$. The “relaxed supermartingale” or almost-supermartingale condition is

$$E\bigl[X_{n+1}\mid\mathcal{F}_n\bigr] \le (1+A_n) X_n + C_n \quad \text{a.s., for all } n.$$

This is complemented by:

  • Bounded perturbations: $\prod_{i=0}^\infty (1+A_i) < K$ a.s. for some finite $K$,
  • Summable error terms: there exists $\chi: (0,\infty)\to\mathbb{N}$ such that for all $\varepsilon>0$,

$$\sum_{i=\chi(\varepsilon)}^\infty E[C_i] < \varepsilon.$$

A canonical instance is the process $\{L_t\}_{t\ge 0}$ with noise process $\{U_t\}_{t\ge 1}$ and stepsizes $\{\eta_t\}_{t\ge 1}$, satisfying for deterministic constants $C_1\in(0,1)$, $C_2,C_3>0$, and exponents $a_i,b_i,c_i,d_i>0$:
$$L_t \le (1 - C_1 \eta_t) L_{t-1} + U_t,$$
with suitably bounded conditional mean and magnitude of $U_t$ (Pham et al., 23 Nov 2025).
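As an illustration only (not taken from the cited papers), the following sketch simulates a recursion of this form with $\eta_t = 1/t$ and a noise model whose magnitude shrinks with the stepsize; the constants and noise distribution are arbitrary choices:

```python
import random

def simulate(T=20000, C1=0.5, L0=1.0, seed=0):
    """Simulate L_t = (1 - C1*eta_t) * L_{t-1} + U_t with eta_t = 1/t.

    The noise U_t has conditional magnitude of order
    eta_t * sqrt(L_{t-1}) + eta_t^2, mimicking the bounded-noise
    assumption; max(0, .) keeps the process nonnegative.
    """
    rng = random.Random(seed)
    L = L0
    for t in range(1, T + 1):
        eta = 1.0 / t
        U = eta * (L ** 0.5) * rng.gauss(0, 0.1) + eta ** 2
        L = max(0.0, (1 - C1 * eta) * L + U)
    return L

final = simulate()
```

With $C_1 = 1/2$ the drift term alone forces decay of order $t^{-C_1}$, so the returned value is small; the cited analysis turns this heuristic into explicit, time-uniform rates.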

2. General Convergence Theorems and Quantitative Rates

Almost-supermartingale recursions admit explicit convergence rates in mean and almost surely via auxiliary “slowdown” functions $f$, which are required to be super-multiplicative, increasing, concave, and continuous (s.i.c.c.). Precisely, if $f$ is s.i.c.c. with moduli $\psi$, $\kappa$, and $\varphi$ is a $\liminf$-modulus for $(E[f(X_n)])$, then:

  • $E[f(X_n)] \to 0$ at a rate

$$\rho(\varepsilon) = \varphi\left(\frac{\varepsilon\,\psi(K^{-1})}{2},\, \chi\left(\kappa\left(\frac{\varepsilon\,\psi(K^{-1})}{2}\right)\right)\right),$$

  • $X_n\to 0$ almost surely with rate

$$\rho'(\lambda,\varepsilon) = \rho(\lambda f(\varepsilon)),$$

meaning $P(\exists n\ge \rho'(\lambda,\varepsilon): X_n\ge \varepsilon) < \lambda$ (Neri et al., 17 Apr 2025).

The proof strategy involves normalizing the process to a true supermartingale, applying Jensen’s inequality to $f$, using Ville’s inequality for high-probability bounds, and leveraging the tail-sum bound on $C_n$. These rates depend only on perturbation and error moduli, not on additional process structure.
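The normalization step can be written out explicitly. With the product of perturbation factors as a discount, subtracting the discounted error tail turns the process into a true supermartingale (a standard construction; the quantitative bookkeeping of the moduli in the cited paper is more refined):

```latex
P_n := \prod_{i=0}^{n-1}(1+A_i), \qquad
Y_n := \frac{X_n}{P_n} - \sum_{i=0}^{n-1} \frac{C_i}{P_{i+1}} .
% Dividing the almost-supermartingale inequality by P_{n+1} gives
E\Bigl[\tfrac{X_{n+1}}{P_{n+1}} \Bigm| \mathcal{F}_n\Bigr]
  \le \frac{X_n}{P_n} + \frac{C_n}{P_{n+1}},
\quad\text{hence}\quad
E[\,Y_{n+1} \mid \mathcal{F}_n\,] \le Y_n .
```

Ville’s inequality then applies directly to $(Y_n)$, from which high-probability statements about $(X_n)$ are recovered via the bound $P_n < K$.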

3. Key Theoretical Instantiations

Specific instantiations of the almost-supermartingale framework include:

  • Quantitative Robbins–Siegmund Theorem: Given

$$E[X_{n+1}\mid \mathcal{F}_n] \le (1+a_n) X_n - u_n V_n + C_n$$

with $\sum a_n<\infty$, $\sum C_n<\infty$, and $\sum u_n=\infty$ (with divergence rate $\theta$), explicit convergence rates for $E[f(X_n)] \to 0$ and $X_n\to 0$ a.s. are obtained via explicit functionals of the summability moduli and regularity of the auxiliary process (Neri et al., 17 Apr 2025).
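A purely illustrative, expectation-level check of this statement (the summability and divergence conditions below are ad hoc choices, not taken from the paper): with $a_n = C_n = 1/n^2$, $u_n = 1/n$, and $V_n = X_n$, the recursion drives $X_n \to 0$ while $\sum u_n V_n$ stays finite:

```python
def robbins_siegmund(N=200000, X0=1.0):
    """Deterministic Robbins-Siegmund recursion
    X_{n+1} = (1 + a_n) X_n - u_n V_n + C_n with V_n = X_n,
    a_n = C_n = 1/n^2 (summable) and u_n = 1/n (divergent sum)."""
    X, uv_sum = X0, 0.0
    for n in range(1, N + 1):
        a, C, u = 1.0 / n**2, 1.0 / n**2, 1.0 / n
        uv_sum += u * X           # accumulate sum of u_n * V_n
        X = max(0.0, (1 + a) * X - u * X + C)
    return X, uv_sum

x_final, uv = robbins_siegmund()
```

Here $X_n$ decays roughly like $\log n / n$ while the accumulated sum $\sum u_n V_n$ remains bounded, matching the qualitative conclusion of the theorem.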

  • Quantitative Dvoretzky’s Theorem: For Hilbert-space-valued recursions $x_{n+1} = T_{n+1}(x_0,\dots,x_n) + y_n$ with $E[y_n\mid\mathcal{F}_n]=0$, a.s. convergence and high-probability concentration rates are derived, relying solely on process summability and rate moduli (Neri et al., 17 Apr 2025).
  • Stochastic quasi-Fejér Monotonicity: For sequences $(x_n)$ in a metric space with the quasi-Fejér property

$$E[\phi(x_{n+1},z)\mid\mathcal{F}_n] \le (1+\zeta_n)\,\phi(x_n,z) + \xi_n,$$

rates for $E[\phi(x_n,z)]\to 0$ and almost sure convergence are given in terms of rate moduli for $\zeta_n$ and the error process $\xi_n$ (Neri et al., 17 Apr 2025).

  • Robbins–Monro Algorithm: For $x_{n+1} = x_n - a_n y_n$ under moment, monotonicity, and regularity constraints, convergence is established with explicit rates in the strongly monotone and general cases (Neri et al., 17 Apr 2025).
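A minimal sketch of the strongly monotone Robbins–Monro case (illustrative only: the target function $h(x) = x$, the noise level, and the stepsize $a_n = 1/n$ are arbitrary choices, not the paper's construction):

```python
import random

def robbins_monro(T=50000, x0=5.0, seed=1):
    """Robbins-Monro iteration x_{n+1} = x_n - a_n * y_n, where y_n is
    a noisy observation of h(x_n) for h(x) = x (unique root at 0)."""
    rng = random.Random(seed)
    x = x0
    for n in range(1, T + 1):
        y = x + rng.gauss(0, 1.0)  # h(x_n) plus zero-mean noise
        x -= (1.0 / n) * y         # classical stepsize a_n = 1/n
    return x

root_est = robbins_monro()
```

Since $h$ is strongly monotone, the iterate concentrates around the root at the classical $O(1/\sqrt{n})$ fluctuation scale.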

4. Time-Uniform Bounds and Concentration Sequences

A major development is the derivation of time-uniform or any-time high-probability bounds. Under strengthened almost-supermartingale recursions,

$$L_t \le (1-C_1\eta_t) L_{t-1} + U_t,$$

with noise control and $\min_{i}\{a_i + b_i, c_i + d_i\} > 1$, one obtains

$$P\left(\forall t\ge 0:\ L_t \le M\,\frac{\log(1/\delta) + \log\log(t+10)}{t+10}\right) \ge 1-2\delta$$

for appropriate $M$, matching law-of-iterated-logarithm lower bounds (Pham et al., 23 Nov 2025). The proof employs interval stopping, drift-dominated concentration inequalities (Azuma/Freedman type), and stitching arguments.

Compared to exponential supermartingale approaches—where martingale transforms of the form $M_t(\lambda) = \exp[\lambda S_t - \psi(\lambda)V_t]$ are constructed—almost-supermartingale methods bypass the need for tractable exponential martingales and apply directly in settings such as Oja's algorithm or stochastic approximation where classical approaches are not feasible.

5. Applications in Stochastic Approximation and Beyond

Almost-supermartingale frameworks yield comprehensive, quantitative guarantees for a wide array of stochastic iterative algorithms:

  • Stochastic Gradient Descent (SGD): In the strongly convex case, the squared-error process of an SGD recursion satisfies an almost-supermartingale inequality. The result is

$$P\Bigl(\forall t \ge 1:\ \|x_t-x^*\|^2 \le O\left(\frac{\log(1/\delta)+\log\log t}{t}\right)\Bigr) \ge 1-\delta$$

with explicit prefactors depending on noise and curvature parameters (Pham et al., 23 Nov 2025).
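The flavor of this time-uniform statement can be checked numerically on a toy problem (a one-dimensional quadratic; all constants below are illustrative choices). The running supremum of $t\,\|x_t - x^*\|^2$ over the tail stays bounded, consistent with the $O(\log\log t/t)$ bound up to the iterated-logarithm factor:

```python
import random

def sgd_quadratic(T=100000, seed=2):
    """SGD on f(x) = (x - 1)^2 / 2 (strongly convex, minimizer x* = 1)
    with noisy gradients g_t = (x_t - 1) + noise and stepsize 1/t."""
    rng = random.Random(seed)
    x, worst = 0.0, 0.0
    for t in range(1, T + 1):
        g = (x - 1.0) + rng.gauss(0, 0.1)
        x -= (1.0 / t) * g
        if t >= 100:
            # crude proxy for the time-uniform bound: sup_t t * error^2
            worst = max(worst, t * (x - 1.0) ** 2)
    return (x - 1.0) ** 2, worst

final_err, worst = sgd_quadratic()
```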

  • Polyak–Łojasiewicz Processes: For objectives satisfying the PL condition, time-uniform bounds for the suboptimality gap $F(x_t) - F(x^*)$ match the same rate (Pham et al., 23 Nov 2025).
  • Oja's Streaming PCA: After an initial “warm-up” to ensure $L_t \le 1/4$ with high probability, the squared-sine angle error sequence for top-eigenvector estimation satisfies the almost-supermartingale property, yielding

$$P\left(\forall t:\ \sin^2\angle(v_t, v_1) \le \max\left\{O\left(\frac{L}{\log(1/\delta)}\right),\ O\left(\frac{\log(1/\delta) + \log\log t}{t}\right)\right\}\right) \ge 1-2(e+1)\delta$$

(Pham et al., 23 Nov 2025).
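A two-dimensional illustration of this behavior (the covariance, eigengap, and stepsize $\eta_t = 1/t$ are arbitrary choices; this is not the paper's construction):

```python
import math
import random

def oja_top_eigvec(T=20000, seed=3):
    """Oja's streaming iteration w <- normalize(w + eta_t * (x.w) * x)
    on 2-D samples with covariance diag(1.0, 0.1); the top eigenvector
    is e1, and 1 - w[0]^2 equals sin^2 of the angle to e1."""
    rng = random.Random(seed)
    w = [1 / math.sqrt(2), 1 / math.sqrt(2)]  # start at 45 degrees
    for t in range(1, T + 1):
        x = [rng.gauss(0, 1.0), rng.gauss(0, math.sqrt(0.1))]
        dot = x[0] * w[0] + x[1] * w[1]
        eta = 1.0 / t
        w = [w[0] + eta * dot * x[0], w[1] + eta * dot * x[1]]
        norm = math.hypot(w[0], w[1])        # renormalize each step
        w = [w[0] / norm, w[1] / norm]
    return 1.0 - w[0] ** 2

sin2_err = oja_top_eigvec()
```

The squared-sine error contracts in expectation at a rate governed by the eigengap, mirroring the almost-supermartingale inequality satisfied by the error sequence.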

In the Robbins–Monro context, $O(1/n)$ convergence rates are recovered under linear regularity, and $O(1/\sqrt{n})$ rates for subgradient or more general settings. Applications extend to stochastic subgradient methods, proximal-point splitting, metric Fréchet means estimation, and Hadamard-space splitting with minimal additional assumptions (Neri et al., 17 Apr 2025).

6. Role of Moduli and Minimal-Data Dependence

A salient feature of almost-supermartingale convergence rates is their uniformity and mild data dependence: all rates are explicit in terms of

  • Product bounds on step perturbations ($K$),
  • Tail-sum moduli for error terms ($\chi$),
  • Lim-inf or divergence moduli ($\varphi$, $\theta$),
  • Regularity moduli linking auxiliary and main processes ($\tau$).

No additional structural or geometric assumptions are required, and the methodology adapts to classical and modern iterative schemes with diverse stochastic perturbations. This minimal-data dependency underpins the wide applicability of the theory (Neri et al., 17 Apr 2025).

7. Comparative Methodologies and Significance

Classical exponential-supermartingale constructions (empirical Bernstein bounds, mixture martingales, self-normalized martingales) are powerful when exact exponential martingale structures are accessible. However, almost-supermartingale methods:

  • Require only a recursive contraction plus bounded noise,
  • Apply to matrix-product and other intractable update structures,
  • Yield the optimal $O\left(\frac{\log\log t}{t}\right)$ rate, as proven for a wide spectrum of algorithms (Pham et al., 23 Nov 2025).

A plausible implication is that as optimization and learning algorithms grow in architectural complexity and nonlinearity, almost-supermartingale process theory supplies a flexible and robust analytical platform for precise convergence and concentration analysis.
