Proximal Fixed-Point Scheme

Updated 2 December 2025
  • A proximal fixed-point scheme is an iterative method that employs resolvent mappings to solve convex minimization, monotone inclusion, or equilibrium problems on diverse geometric spaces.
  • The method leverages the Moreau–Yosida resolvent and Fejér monotonicity to ensure convergence, linking discrete iterations with continuous gradient flows.
  • This unified framework generalizes classical proximal algorithms and splitting methods, offering robust and predictable convergence in both linear Hilbert and nonlinear CAT(0) spaces.

A proximal fixed-point scheme refers to any iterative process in which the current iterate is mapped (by a resolvent or proximal mapping) to a new point, aiming to reach a fixed point that solves a convex minimization problem, a monotone inclusion, or a generalized equilibrium problem. The fixed-point operator is constructed from the proximal mapping associated to the object of interest—convex function, monotone operator, or bifunction—and the iterates are designed to converge (in a suitable sense) to a solution. This framework unifies classical proximal point algorithms, generalized fixed-point methods, and various splitting and preconditioned schemes across Hilbert spaces, CAT(0) spaces, and broader nonlinear geometric settings (Bacak, 2012).

1. Ambient Geometric Structures and Convexity

A major generalization of the proximal fixed-point paradigm arises in the context of geodesic metric spaces of nonpositive curvature, specifically CAT(0) spaces. In such settings, convexity is defined along geodesics: a function $f: C \to (-\infty, +\infty]$ on a convex subset $C \subset X$ is geodesically (metric) convex if its pullback along any geodesic is convex. The CAT(0) condition provides a quadratic metric inequality fundamental for Fejér monotonicity and convergence analysis:

$$d\bigl(x, \gamma(t)\bigr)^2 \le (1-t)\, d\bigl(x, \gamma(0)\bigr)^2 + t\, d\bigl(x, \gamma(1)\bigr)^2 - t(1-t)\, d\bigl(\gamma(0), \gamma(1)\bigr)^2.$$

This setting strictly extends linear Hilbert space geometry and allows proximal schemes in spaces lacking linear structure, while retaining convexity, nonexpansive mappings, and projection operations (Bacak, 2012).

2. The Moreau–Yosida Resolvent and Proximal Mapping

For a proper, lower semicontinuous, geodesically convex function $f$, the Moreau–Yosida resolvent (proximal mapping) is defined for any $x \in X$ and $\lambda > 0$ as

$$J_\lambda(x) = \argmin_{y \in X} \Bigl\{ f(y) + \frac{1}{2\lambda} d^2(x, y) \Bigr\}.$$

In metric spaces with the CAT(0) property, this proximal mapping is well-defined, single-valued, and nonexpansive:

$$d(J_\lambda(x), J_\lambda(y)) \le d(x, y).$$

This mapping serves both as the implicit Euler operator for gradient flows and as a nonexpansive fixed-point map whose fixed points are exactly the minimizers of $f$ (Bacak, 2012).
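As a concrete illustration (not taken from the source), the resolvent can be computed in closed form on the real line, which is a CAT(0) space, for $f(x) = |x|$: the proximal map is the familiar soft-thresholding operator. The names `prox_abs` and `lam` are illustrative choices.

```python
def prox_abs(x, lam):
    """J_lam(x) = argmin_y { |y| + (x - y)**2 / (2*lam) }  (soft-thresholding)."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

# Numeric spot-check of nonexpansivity: |J(x) - J(y)| <= |x - y|.
for x, y in [(3.0, -2.0), (0.5, 0.7), (-4.0, 4.0)]:
    assert abs(prox_abs(x, 1.0) - prox_abs(y, 1.0)) <= abs(x - y) + 1e-12
```

The piecewise formula is exactly the minimizer of the strongly convex objective inside the argmin, so no numerical solver is needed for this model function.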

3. The Proximal Fixed-point Iteration Scheme

Given an initial point $x_0 \in X$ and a sequence of step sizes $\{\lambda_n\}$ satisfying the divergence condition $\sum_n \lambda_n = \infty$, the proximal fixed-point iteration is defined by

$$x_{n+1} = J_{\lambda_n}(x_n), \quad n = 0, 1, 2, \dots$$

This scheme generalizes the classical Picard iteration. Provided $f$ has minimizers, the sequence $\{x_n\}$ is bounded and Fejér monotone with respect to the minimizer set $C^*$:

$$d(x_{n+1}, c) \le d(x_n, c), \quad \forall c \in C^*.$$

A crucial convergence property is established: under the divergence condition on $\{\lambda_n\}$, the sequence $\{x_n\}$ weakly ($\Delta$-)converges to a minimizer of $f$, i.e., to a fixed point of the resolvent (Bacak, 2012).
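A minimal sketch of this iteration (an illustration, not the paper's setting) for the one-dimensional model $f(x) = (x - 5)^2/2$, whose resolvent has the closed form $J_\lambda(x) = (x + 5\lambda)/(1 + \lambda)$; the target value 5 and the constant step size are assumptions made for the example.

```python
def resolvent(x, lam, target=5.0):
    # Closed-form argmin_y { (y - target)**2 / 2 + (x - y)**2 / (2*lam) }.
    return (x + lam * target) / (1.0 + lam)

x = 0.0
for n in range(200):
    lam_n = 1.0           # constant steps trivially satisfy sum(lam_n) = infinity
    x = resolvent(x, lam_n)

print(abs(x - 5.0) < 1e-6)  # the iterates approach the unique minimizer
```

With constant step size the error contracts by a factor $1/(1+\lambda)$ per step, so convergence here is geometric; the general theorem only requires the much weaker divergence condition on the step sizes.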

4. Fejér Monotonicity and Convergence Analysis

The convergence proof employs the CAT(0) geometry and a Fejér-monotonicity argument. The key steps include:

  • Fejér monotonicity of iterates toward the minimizer set:

$$d(x_{n+1}, c) \le d(x_n, c), \quad \forall c \in C^*$$

  • A quadratic estimate yields a rate-of-convergence result:

$$f(x_n) - \inf f \le \frac{d^2(x_0, c)}{2\sum_{k=1}^{n}\lambda_k}, \quad c \in C^*$$

which implies $f(x_n) \to \inf f$ as $n \to \infty$.

  • Any weak cluster point belongs to the minimizer set by lower semicontinuity of $f$; nonexpansivity and Fejér monotonicity then upgrade this to weak convergence of the entire sequence (Bacak, 2012).
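The $O(1/\sum_k \lambda_k)$ rate above can be sanity-checked numerically. The sketch below uses the illustrative model $f(x) = (x-5)^2/2$ with its closed-form resolvent and divergent, shrinking step sizes $\lambda_k = 1/\sqrt{k}$; all concrete numbers are assumptions of the example, not values from the source.

```python
def f(x):
    return (x - 5.0) ** 2 / 2.0

def resolvent(x, lam):
    # Closed-form proximal step for f above.
    return (x + lam * 5.0) / (1.0 + lam)

x0, c = 0.0, 5.0          # start point and the unique minimizer
x, lam_sum = x0, 0.0
for k in range(1, 51):
    lam = 1.0 / k ** 0.5  # divergent series of step sizes
    x = resolvent(x, lam)
    lam_sum += lam
    # Check the bound f(x_n) - inf f <= d(x0, c)**2 / (2 * sum(lam_k));
    # here inf f = 0.
    assert f(x) <= (x0 - c) ** 2 / (2.0 * lam_sum)
```

For this strongly convex model the iterates converge far faster than the worst-case bound, so the inequality holds with ample slack at every step.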

5. Discrete vs. Continuous-Time Schemes

There is a deep link between discrete proximal fixed-point schemes and their continuous-time analogues. The gradient flow semigroup is defined as

$$T_\lambda x = \lim_{m \to \infty} (J_{\lambda/m})^m(x),$$

which is nonexpansive in $x$ and strongly continuous in $\lambda$. As $\lambda \to \infty$, $T_\lambda x$ converges weakly to a minimizer of $f$. The discrete proximal point iteration corresponds to an implicit Euler discretization of this gradient flow, and both share the same Fejér monotonicity and lower semicontinuity arguments for weak convergence (Bacak, 2012).
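The semigroup limit can be made concrete (again as an illustrative assumption, not the source's example) for $f(x) = x^2/2$, where $J_\mu(x) = x/(1+\mu)$ and the exact gradient flow of $x' = -x$ is $x_0 e^{-\lambda}$:

```python
import math

def repeated_resolvent(x, lam, m):
    """Apply the resolvent J_{lam/m} of f(x) = x**2/2 a total of m times."""
    mu = lam / m
    for _ in range(m):
        x = x / (1.0 + mu)   # one implicit Euler step of x' = -x
    return x

x0, lam = 1.0, 2.0
exact = x0 * math.exp(-lam)   # continuous-time gradient flow at time lam
for m in (1, 10, 100, 1000):
    approx = repeated_resolvent(x0, lam, m)
    print(m, abs(approx - exact))  # error shrinks as m grows
```

Since $(1 + \lambda/m)^{-m} \to e^{-\lambda}$, the repeated small resolvent steps recover the continuous flow, which is exactly the implicit-Euler correspondence described above.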

6. Extensions and Generalizations

The proximal fixed-point framework is robust and extensible:

  • Applicable to geodesically convex minimization in CAT(0) spaces, Hilbert spaces, or other nonlinear metric spaces.
  • Unified approach for monotone inclusions, variational inequalities, or equilibrium problems, where the fixed points of suitably defined resolvent/proximal maps correspond to problem solutions.
  • The framework extends to general nonexpansive and firmly nonexpansive operators, often enabling convergence rates and stability results in settings lacking an inner product structure.

This suggests that by translating regularization and splitting methods from Hilbert space optimization to arbitrary CAT(0) geometries, one can solve an extensive class of nonlinear optimization and inclusion problems without reference to linear (vector space) structure, relying instead on metric convexity and nonexpansive mapping theory (Bacak, 2012).

7. Implications and Applications

The scheme provides a theoretical foundation for both classical and modern proximal algorithms in convex optimization, monotone operator theory, and algorithmic fixed-point theory. Key practical implications include:

  • Stability and robustness owing to nonexpansivity.
  • Broad geometric flexibility via CAT(0) metric structure.
  • Predictable convergence under minimal assumptions (divergence of stepsizes).
  • Direct link between discrete iterative optimization and continuous dynamical systems.

A plausible implication is that algorithmic design, complexity analysis, and convergence proofs for proximal schemes in nonlinear or infinite-dimensional settings should leverage the metric/convex/geodesic principles formalized in the proximal fixed-point framework (Bacak, 2012).

References (1)