
Zero of Finite Sum of Maximally Monotone Operators

Updated 17 December 2025
  • The central problem is to find a zero of A₁(x)+…+Aₙ(x) on a Hilbert space, a foundation for structured convex optimization and monotone inclusions.
  • Minimal-lifting resolvent splittings use recursive resolvent evaluations and fixed-point iterations to guarantee convergence.
  • The approach has practical applications in decentralized optimization, multi-block ADMM, imaging, and networked control, demonstrating scalable algorithm design.

A zero of a finite sum of maximally monotone operators concerns solutions to operator inclusions of the form $0 \in A_1(x) + \cdots + A_n(x)$ on a real Hilbert space $H$, where $A_1,\ldots,A_n : H \rightrightarrows H$ are maximally monotone. This class of problems subsumes a wide spectrum of structured convex optimization and monotone inclusion models central to modern analysis and large-scale computational algorithms. The intricate structure of maximally monotone summands, the nontrivial difficulty of evaluating the resolvent of the sum, and the essential role of splitting algorithms in applications ranging from distributed optimization to imaging motivate a comprehensive study of this inclusion.

1. Problem Formulation and Foundational Concepts

Given a real Hilbert space $(H, \langle\cdot,\cdot\rangle)$ with induced norm $\|\cdot\|$, and $n \geq 2$ maximally monotone operators $A_1,\ldots,A_n : H \to 2^H$, the central problem is to find $x \in H$ such that

$$0 \in \sum_{i=1}^n A_i(x).$$

Rewriting, the zero set is $\operatorname{zer}\bigl(\sum_{i=1}^n A_i\bigr)$, which is a closed convex set under standard monotonicity and maximality assumptions.

The resolvent of $A_i$ with stepsize $\lambda > 0$ is $J_{\lambda A_i} = (I + \lambda A_i)^{-1} : H \to H$, a single-valued, firmly nonexpansive operator whenever $A_i$ is maximally monotone. If $\lambda = 1$, $J_{A_i}$ denotes the unscaled resolvent.
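As a concrete illustration (a hedged sketch with operators chosen for this example, not drawn from the sources): for $A = \partial f$ with $f(x) = |x|$, the resolvent $J_{\lambda A}$ is the soft-thresholding map, while for the linear monotone operator $A(x) = x$ it is the scaling $z \mapsto z/(1+\lambda)$.

```python
# Illustrative resolvents on R (example operators, not from the cited papers).

def resolvent_abs(z, lam):
    """J_{lam*A} for A = subdifferential of f(x) = |x|: soft-thresholding."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def resolvent_identity(z, lam):
    """J_{lam*A} for A(x) = x: solve x + lam*x = z for x."""
    return z / (1.0 + lam)

print(resolvent_abs(3.0, 1.0))       # 2.0
print(resolvent_identity(3.0, 2.0))  # 1.0
```

Both maps are single-valued and 1-Lipschitz, consistent with firm nonexpansiveness of resolvents of maximally monotone operators.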

The sum operator $A_1 + \cdots + A_n$ is again maximally monotone (by Rockafellar's theorem, under standard constraint qualifications), yet crucially, explicit computation of the resolvent $J_{\sum_{i=1}^n A_i}$ is intractable save for limited cases, motivating so-called splitting schemes: decompositions utilizing only the resolvents (or proximal operators) of the individual $A_i$.
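A minimal numeric sketch of why splitting is needed (example operators chosen for illustration): even for the simplest case $A = B = \operatorname{Id}$, the resolvent of the sum is not the composition of the individual resolvents.

```python
# Resolvent of the linear maximally monotone operator x -> alpha*x on R.
def J(alpha, z):
    return z / (1.0 + alpha)

z = 6.0
r_sum = J(2.0, z)            # J_{A+B} with A = B = Id, so (A+B)(x) = 2x: gives z/3
r_comp = J(1.0, J(1.0, z))   # J_A composed with J_B: gives z/4
print(r_sum, r_comp)         # 2.0 1.5 -- not equal
```

So even when each $J_{A_i}$ is trivial, $J_{\sum_i A_i}$ must be obtained by other means, which is precisely what splitting schemes provide.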

2. Minimal-Lifting Resolvent Splitting and Lower Bound Theory

Traditional operator-splitting methods, such as Douglas–Rachford, naturally address $n = 2$; for $n > 2$, splitting each $J_{A_i}$ only once per iteration without auxiliary "lifting" is impossible except in special cases. The minimal-lifting framework asserts:

  • Any frugal resolvent splitting for the sum of $n$ maximally monotone operators (one which uses each $J_{A_i}$ once per iteration) requires a Cartesian product space $H^d$ with $d \geq n-1$.
  • This lower bound is unimprovable: for every $n \geq 2$ there exist explicit frugal splittings acting on $H^{n-1}$ (and no fewer), with each iterate constructed recursively via a sequence of $n$ resolvent evaluations and $n-1$ auxiliary variables.

A canonical recursion for $\gamma \in (0,1)$ employs $z = (z_1,\ldots,z_{n-1}) \in H^{n-1}$ and auxiliary $x = (x_1,\ldots,x_n)$:
$$\begin{aligned} x_1^k &= J_{A_1}(z_1^k),\\ x_i^k &= J_{A_i}(z_i^k - z_{i-1}^k + x_{i-1}^k),\quad i=2,\ldots,n-1,\\ x_n^k &= J_{A_n}(x_1^k + x_{n-1}^k - z_{n-1}^k), \end{aligned}$$
then update

$$z^{k+1} = z^k + \gamma\,\bigl(x_2^k - x_1^k,\; x_3^k - x_2^k,\; \ldots,\; x_n^k - x_{n-1}^k\bigr)$$

and output $S(z^k) = x_1^k$. This iteration is $\gamma$-averaged and converges weakly to a fixed point encoding a solution $x^*$ of the original inclusion. When $n = 2$, this specialization reduces to the classical Douglas–Rachford splitting on $H$ (Malitsky et al., 2021).
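The recursion above can be sketched numerically. Assume, purely for illustration, three affine operators $A_i(x) = x - a_i$ on $\mathbb{R}$, whose resolvents are $J_{A_i}(u) = (u + a_i)/2$; the unique zero of the sum is the mean of the $a_i$.

```python
# Sketch of the minimal-lifting recursion for n = 3, using illustrative
# affine operators A_i(x) = x - a_i, so zer(A_1 + A_2 + A_3) = {mean(a)}.

a = [1.0, 2.0, 6.0]                                  # example parameters
J = [lambda u, ai=ai: (u + ai) / 2.0 for ai in a]    # resolvents (I + A_i)^{-1}

gamma = 0.5
z1, z2 = 0.0, 0.0                                    # lifted variables in H^{n-1} = R^2
for _ in range(500):
    x1 = J[0](z1)                                    # x_1 = J_{A_1}(z_1)
    x2 = J[1](z2 - z1 + x1)                          # x_2 = J_{A_2}(z_2 - z_1 + x_1)
    x3 = J[2](x1 + x2 - z2)                          # x_3 = J_{A_3}(x_1 + x_2 - z_2)
    z1 += gamma * (x2 - x1)                          # z update, first coordinate
    z2 += gamma * (x3 - x2)                          # z update, second coordinate

print(x1)  # converges to mean(a) = 3.0
```

For these strongly monotone operators the iteration contracts, and $x_1^k, x_2^k, x_3^k$ all approach the solution $x^* = 3$.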

3. Fixed-Point Theory and Convergence Analysis

The minimal-lifting resolvent splitting operator $T : H^{n-1} \to H^{n-1}$ defined above satisfies:

  • $T$ is $\gamma$-averaged for $\gamma \in (0,1)$,
  • $\operatorname{Fix} T \neq \emptyset$ if and only if $\operatorname{zer}\bigl(\sum_{i=1}^n A_i\bigr) \neq \emptyset$,
  • any $z^* \in \operatorname{Fix} T$ maps to a common value $x^* = S(z^*)$ with $0 \in \sum_i A_i(x^*)$.

The sequence $(z^k)$ converges weakly to $z^*$, and the associated $(x_i^k)$ all converge to the same $x^*$. If $A_2,\ldots,A_n$ are uniformly monotone, strong convergence can be asserted in the limiting regime $\gamma \to 1$, as in the Peaceman–Rachford variant.

The dimension bound (lifting degree $d \geq n-1$) is established by formalizing the dependency of each $x_i^{k+1}$ on current and preceding $z_j$ and $x_j$, then arguing via a structural matrix form and a rank calculation that $n-1$ degrees of freedom are essential for unrestricted maximally monotone inputs (Malitsky et al., 2021).

4. Operator Splitting in the Finite Sum Regime

Several extensions and alternative schemes exist for addressing finite sums of maximally monotone operators:

Product-Space Reformulation: Cast $0 \in \sum_{i=1}^n A_i(x)$ as $0 \in N_V(x_1,\ldots,x_n) + (A_1 \times \cdots \times A_n)(x_1,\ldots,x_n)$ on $H^n$, where $V = \{(x,\ldots,x) : x \in H\}$ is the diagonal. The normal cone $N_V$ enforces consensus among the copies. Douglas–Rachford and related two-operator splittings then apply in this higher-dimensional setting (Bot et al., 2012, Chen et al., 2022).
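A sketch of this reduction (again with the illustrative affine operators $A_i(x) = x - a_i$, not from the sources): the resolvent of $N_V$ is the projection onto the diagonal, i.e., componentwise averaging, and the resolvent of $A_1 \times \cdots \times A_n$ acts componentwise, so the two-operator Douglas–Rachford iteration applies directly.

```python
# Product-space Douglas-Rachford sketch for 0 in N_V(x) + (A_1 x A_2 x A_3)(x),
# with illustrative operators A_i(x) = x - a_i; the solution is mean(a).

a = [1.0, 2.0, 6.0]

def proj_diag(w):
    """J_{N_V}: projection onto the diagonal V (replace entries by their mean)."""
    m = sum(w) / len(w)
    return [m] * len(w)

def J_prod(w):
    """Resolvent of A_1 x A_2 x A_3, applied componentwise."""
    return [(wi + ai) / 2.0 for wi, ai in zip(w, a)]

w = [0.0, 0.0, 0.0]
for _ in range(500):
    p = proj_diag(w)                                   # first resolvent step
    q = J_prod([2 * pi - wi for pi, wi in zip(p, w)])  # reflected second step
    w = [wi + qi - pi for wi, qi, pi in zip(w, q, p)]  # governing DR update

x_star = proj_diag(w)[0]   # shadow sequence J_{N_V}(w^k) converges to the zero
print(x_star)              # ~3.0
```

The lifted variable lives in $H^n$ here (lifting dimension $n$), one more than the $n-1$ achieved by the minimal-lifting splittings above.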

Primal-Dual and Forward-Backward(-Forward) Splitting: Extensions to problems with both maximally monotone and Lipschitzian (possibly single-valued) summands employ composite or hybrid forward-backward, reflected-backward, or forward-reflected-backward schemes, potentially using variable metrics. These approaches enable splitting algorithms to handle composite, sum-of-composite, or distributed settings with efficient per-iteration complexity and broad convergence guarantees (Bot et al., 2012, Vũ, 2012, Tam et al., 14 Dec 2025).

Adaptive and Relaxed Variants: In the presence of monotonicity constants (e.g., strong/weak monotonicity, Lipschitz continuity), adaptive Douglas–Rachford designs modulate reflection and averaging parameters to restore nonexpansivity (or contraction) and guarantee global or even linear convergence, particularly in two-operator settings but with partial extensions to sums (Dao et al., 2018).

5. Application Domains and Distributed/Decentralised Optimization

Frugal, minimal-lifting splittings have direct applications to:

  • Decentralized optimization over networked agents: each operator $A_i$ may correspond to an agent's local objective or constraint. Minimal-lifting schemes assign local variables ($z_i$) to agents, require only communication among neighbors (e.g., on cycle graphs), and evaluate each $J_{A_i}$ exactly once per iteration without a central coordinator. Convergence and low per-iteration complexity are provable, and network-step-size independence can be attained in specifically designed forward-backward-type algorithms (Malitsky et al., 2021, Tam et al., 14 Dec 2025).
  • Multi-block ADMM: for multi-block linearly constrained convex programs, the dual inclusion $0 \in \sum_i F_i(x)$ with $F_i$ maximally monotone admits resolution via $n$-operator splitting as above, yielding convergent multi-block extensions of ADMM. In the case $n = 2$, this recovers the (convergent) two-block ADMM/Douglas–Rachford on the dual; for $n > 2$, it provides new guarantees not enjoyed by the standard direct multi-block ADMM (Malitsky et al., 2021).
  • Composite monotone inclusions: inclusions with linear compositions and block structure are amenable to minimal-lifting resolvent splittings in which each operator, and each application of the linear map $L$ and its adjoint, is invoked only once per iteration. This is particularly vital when $L$ has large norm or is expensive to apply (Briceño-Arias, 2021).
  • Structured convex minimization, monotone games, networked Nash equilibria: Product-space and minimal-lifting splitting techniques underlie efficient solvers for (possibly non-differentiable) convex programs in imaging, regression, location theory, and distributed control (Bot et al., 2012, Tam et al., 14 Dec 2025).

6. Duality Frameworks, Extended Solution Concepts, and Generalization

Attouch–Théra duality reveals deep connections between solutions of $0 \in (A+B)(x)$ and of the dual inclusion $0 \in (A^{-1} + B^\otimes)(k)$, with the dual operator $B^\otimes := (-\operatorname{Id}) \circ B^{-1} \circ (-\operatorname{Id})$. Existence of primal and dual solutions, paramonotonicity, and the structure of extended solution sets (as in the graph of $K(z) := A(z) \cap (-B(z))$) enable recovery of all primal solutions from a single dual element and vice versa (Bauschke et al., 2011). These dual interpretations carry over to the finite-sum regime through appropriate product-space constructs.
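A worked numeric check of this duality, under the assumption of the simple single-valued operators $A(x) = x - 2$ and $B(x) = x$ (chosen here for illustration): the primal solution is $x^* = 1$, the dual solution is $k^* = -1$, and $k^*$ lies in $K(x^*) = A(x^*) \cap (-B(x^*))$.

```python
# Attouch-Thera duality check for illustrative affine operators
# A(x) = x - 2 and B(x) = x (single-valued, maximally monotone on R).

A = lambda x: x - 2.0
B = lambda x: x
A_inv = lambda y: y + 2.0          # A^{-1}(y): solve x - 2 = y
B_inv = lambda y: y                # B^{-1}(y): solve x = y
B_otimes = lambda y: -B_inv(-y)    # B^{(x)} = (-Id) o B^{-1} o (-Id)

x_star = 1.0                       # primal: (A + B)(x) = 2x - 2 = 0
k_star = -1.0                      # dual:   A^{-1}(k) + B^{(x)}(k) = 0

print(A(x_star) + B(x_star))               # 0.0, primal inclusion holds
print(A_inv(k_star) + B_otimes(k_star))    # 0.0, dual inclusion holds
print(A(x_star), -B(x_star))               # both -1.0: k* in K(x*) = A(x*) n (-B(x*))
```

Note also that $A^{-1}(k^*) = 1 = x^*$: the primal solution is recovered directly from the single dual element, as the duality theory predicts.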

Normal problem and generalized zeros: for potentially inconsistent inclusions ($\operatorname{zer}(\sum_i A_i) = \emptyset$), the normal problem framework introduces a systematic perturbation, based on the infimal displacement vector of the associated Douglas–Rachford splitting operator, to define a "least-distance" generalized solution set that reduces to the original if solutions exist, but is always nonempty by construction (Bauschke et al., 2013). This unifies classical least squares, best approximation problems, and infeasibility regularization under a common normal equations paradigm.

7. Methodological Table: Minimal-Lifting Frugal Splittings and Operator Splitting Strategies

| Splitting Method | Operator Class | Lifting Dimension | Main Iteration |
|---|---|---|---|
| Minimal-lifting resolvent splitting | sum of $n$ maximally monotone operators | $d = n-1$ | recursive $x_i^k = J_{A_i}(\cdot)$, then $z^{k+1} = z^k + \gamma(\cdot)$ |
| Douglas–Rachford | $n = 2$ | $d = 1$ | $T(z) = z + J_{A_2}(2J_{A_1}(z) - z) - J_{A_1}(z)$ |
| Primal-dual product-space | sum of $n$ maximally monotone operators | $d = n$ | product-space splitting with consensus constraint |
| Decentralized FBS/FBF | sum plus Lipschitzian operator | $N$ or $2N$ (agents) | forward-backward (possibly extragradient) |
| Multi-block ADMM (dual) | sum of $n$ subdifferentials | $d = n-1$ or as above | dual splitting; primal-dual updates |

The breadth of monotone inclusion splitting techniques for the zeros of finite sums of maximally monotone operators reflects the deep interplay between operator-theoretic structural properties, fixed-point and duality frameworks, and the design of scalable, distributed, and provably convergent algorithms in applied computation (Malitsky et al., 2021, Tam et al., 14 Dec 2025, Bot et al., 2012, Briceño-Arias, 2021, Vũ, 2012, Dao et al., 2018, Bauschke et al., 2013, Bauschke et al., 2011, Chen et al., 2022).
