
Extended Centralized Circumcentered Reflection (ecCRM)

Updated 10 December 2025
  • Extended Centralized Circumcentered Reflection Method (ecCRM) is an accelerated feasibility algorithm framework that unifies and extends classical projection–reflection schemes using a modular centralization operator.
  • It incorporates a tunable kernel operator and relaxation parameter to balance per-iteration complexity with convergence dynamics, achieving both linear and superlinear rates under appropriate conditions.
  • Its versatility is proven in high-dimensional convex settings like matrix completion and image reconstruction, reducing iterations and runtime compared to traditional methods.

The Extended Centralized Circumcentered Reflection Method (ecCRM) is a general framework for accelerated feasibility algorithms in convex and affine settings that unifies and extends several earlier projection–reflection schemes. It replaces the fixed centralization step of the classical centralized CRM (cCRM) with a modular centralization operator and a relaxation parameter, providing tunable control over per-iteration complexity and convergence dynamics. EcCRM retains global convergence, achieves linear rates under error bound regularity, and under smoothness or vanishing step sizes exhibits provably superlinear acceleration. The method's versatility and performance are demonstrated both theoretically and through extensive large-scale numerical experimentation (Barros, 5 Dec 2025).

1. Mathematical Foundation and Problem Setting

EcCRM operates primarily on two-set convex feasibility problems:

$$\text{find } z \in X \cap Y, \qquad X, Y \subset \mathbb{R}^n \text{ closed, convex}, \ X \cap Y \neq \emptyset.$$

Classical projection–reflection schemes such as alternating projections, Douglas–Rachford, and Cimmino are generally limited to linear convergence under regularity assumptions. The circumcentered-reflection method (CRM) improved on this by achieving superlinear convergence in certain cases, specifically under smooth boundary conditions.

The generalization to ecCRM introduces an admissible centralization operator $T:\mathbb{R}^n \to Y$ (with $\operatorname{Im} T \subset Y$ and $\|Tz - s\| \le \|z - s\|$ for all $s \in X \cap Y$) and a relaxation parameter $\alpha \in (0,1)$:

$$N^\alpha z = \alpha\, T(z) + (1-\alpha)\, P_X(T(z)).$$

The ecCRM update step is then defined as

$$z_{k+1} = \operatorname{circ}\bigl(w,\; 2v - w,\; 2u - w\bigr),$$

where $w = N^\alpha(z_k)$, $v = P_X(T(z_k))$, and $u = P_Y(w)$.
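The circumcenter operator $\operatorname{circ}$ returns the point in the affine hull of three points that is equidistant from all of them, which reduces to a small Gram linear system. A minimal NumPy sketch (the function name and interface are illustrative, not from the paper):

```python
import numpy as np

def circumcenter(w, p1, p2):
    """Point in the affine hull of {w, p1, p2} equidistant from all three.

    Writing z = w + c1*d1 + c2*d2 with d_i = p_i - w, equidistance from w
    and p_i reduces to the Gram system  G c = b, where G_ij = <d_i, d_j>
    and b_i = ||d_i||^2 / 2.  lstsq handles degenerate (collinear) inputs.
    """
    D = np.stack([p1 - w, p2 - w])           # direction vectors d_i from w
    G = D @ D.T                              # 2x2 Gram matrix
    b = 0.5 * np.einsum("ij,ij->i", D, D)    # ||d_i||^2 / 2
    c, *_ = np.linalg.lstsq(G, b, rcond=None)
    return w + c @ D
```

For instance, the circumcenter of $(0,0)$, $(1,0)$, $(0,1)$ is $(0.5, 0.5)$, equidistant from all three points.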

Common choices for $T$ include:

  • $T = P_Y$ (three projections per step)
  • $T = P_Y P_X$ (four projections, coinciding with cCRM)
  • $T = P_Y P_X P_Y$ (five projections, "deep" kernel)

This modularity enables trade-offs between contraction strength and computational expense per iteration (Barros, 5 Dec 2025).
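The modular kernel can be realized as a composition of projection operators. The sketch below (all names illustrative, not from the paper) builds the three kernels listed above from projections onto a half-space $X$ and the unit ball $Y$:

```python
import numpy as np

def P_ball(z):
    """Projection onto Y = closed unit ball."""
    n = np.linalg.norm(z)
    return z if n <= 1.0 else z / n

def make_P_halfspace(a, b):
    """Projection onto X = {z : <a, z> <= b}."""
    def P(z):
        viol = a @ z - b
        return z if viol <= 0 else z - (viol / (a @ a)) * a
    return P

def compose(*ops):
    """Compose operators in math order: compose(f, g)(z) = f(g(z))."""
    def T(z):
        for op in reversed(ops):
            z = op(z)
        return z
    return T

P_X = make_P_halfspace(np.array([1.0, 1.0]), 1.0)
P_Y = P_ball

T_shallow = P_Y                       # three projections per ecCRM step
T_cCRM    = compose(P_Y, P_X)         # coincides with cCRM
T_deep    = compose(P_Y, P_X, P_Y)    # "deep" kernel
```

Note that every kernel ends with an application of $P_Y$, so $\operatorname{Im} T \subset Y$ holds by construction, as the admissibility condition requires.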

2. Algorithmic Structure and Implementation

A typical iteration of ecCRM consists of:

  • Application of the kernel $T$ to the current iterate.
  • Projection onto $X$ from $T(z_k)$.
  • Formation of a centralized point via convex combination governed by $\alpha$.
  • Projection/reflection onto $Y$ and $X$.
  • Circumcenter computation for the three points derived above.

ecCRM Pseudocode (Two-set Case)

z = z0                                       # initial iterate
k = 0
while max(dist(z, X), dist(z, Y)) > eps:     # feasibility-gap stopping test
    t = T(z)                                 # apply the centralization kernel
    x = P_X(t)                               # v in the update formula
    w = alpha(k) * t + (1 - alpha(k)) * x    # centralized point N^alpha(z)
    u = P_Y(w)                               # projection onto Y
    z = circumcenter(w, 2*x - w, 2*u - w)    # circ of w and its two reflections
    k += 1
return z
Each iteration comprises one $T$-application, two projections, two reflections, and a three-point affine circumcenter computation.
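As an end-to-end illustration, the following self-contained sketch runs the pseudocode above with the shallow kernel $T = P_Y$ and fixed $\alpha = 0.5$ on a toy problem: a half-space $X$ and the unit ball $Y$ in $\mathbb{R}^2$. All names and the toy sets are illustrative, not from the paper:

```python
import numpy as np

def circumcenter(w, p1, p2):
    # Point in aff{w, p1, p2} equidistant from all three (Gram system).
    D = np.stack([p1 - w, p2 - w])
    G = D @ D.T
    b = 0.5 * np.einsum("ij,ij->i", D, D)
    c, *_ = np.linalg.lstsq(G, b, rcond=None)   # robust to collinear points
    return w + c @ D

# Toy sets: X = {z : z1 + z2 >= 1} (half-space), Y = closed unit ball.
a, rhs = np.array([1.0, 1.0]), 1.0

def P_X(z):
    gap = rhs - a @ z
    return z if gap <= 0 else z + (gap / (a @ a)) * a

def P_Y(z):
    n = np.linalg.norm(z)
    return z if n <= 1.0 else z / n

def dist_X(z): return np.linalg.norm(z - P_X(z))
def dist_Y(z): return np.linalg.norm(z - P_Y(z))

def eccrm(z, T, alpha=0.5, eps=1e-10, max_iter=200):
    """Two-set ecCRM iteration with kernel T and fixed relaxation alpha."""
    for _ in range(max_iter):
        if max(dist_X(z), dist_Y(z)) <= eps:
            break
        t = T(z)                              # centralization kernel
        x = P_X(t)                            # v in the update formula
        w = alpha * t + (1 - alpha) * x       # centralized point N^alpha(z)
        u = P_Y(w)
        z = circumcenter(w, 2 * x - w, 2 * u - w)
    return z

z_star = eccrm(np.array([3.0, -2.0]), T=P_Y)
```

Since both boundaries here are smooth and intersect transversally, this is exactly the regime where ecCRM's acceleration results apply, and the iterates reach a feasible point in a handful of steps.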

Extensions exist for multi-set feasibility problems, firm nonexpansive operator intersections, and affine subspace contexts, adapting the centralization and circumcenter computation accordingly (Behling et al., 2017, Arefidamghani et al., 2022, Bauschke et al., 2019).

3. Convergence Theory

Global Convergence

Assuming $S = X \cap Y \neq \emptyset$, the ecCRM sequence $(z_k)$ is Fejér monotone with respect to $S$ and converges to a point in $S$ for any admissible $T$ and any sequence $(\alpha_k) \subset (0,1)$. The algorithm does not require strict regularity or Slater-type assumptions for convergence (Barros, 5 Dec 2025).

Linear Convergence Rate

Suppose a local error bound holds in the form

$$\omega\, \operatorname{dist}(z, S) \leq \max \{ \operatorname{dist}(z, X), \operatorname{dist}(z, Y) \}, \qquad \omega \in (0, 1).$$

For $\bar\alpha = \limsup_k \alpha_k$, ecCRM achieves Q-linear convergence with rate

$$\rho = \beta\bigl(\bar\alpha + (1 - \bar\alpha)\beta\bigr) < 1, \qquad \beta = \sqrt{1 - \omega^2},$$

and the per-step improvement is quantified by

$$\operatorname{dist}(z_{k+1}, S) \leq \rho\, \operatorname{dist}(z_k, S).$$

This holds for convex sets, finite intersections of affine subspaces, and products of firmly nonexpansive operators (Arefidamghani et al., 2022, Behling et al., 2017).
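As a numeric sanity check of the rate formula, with an illustrative error-bound constant $\omega = 0.6$ (so $\beta = \sqrt{1 - \omega^2} = 0.8$), the rate $\rho$ shrinks toward $\beta^2 = 0.64$ as $\bar\alpha \to 0$:

```python
import math

omega = 0.6                          # illustrative error-bound constant
beta = math.sqrt(1 - omega**2)       # beta = 0.8

def rate(abar):
    """Q-linear rate rho = beta * (abar + (1 - abar) * beta)."""
    return beta * (abar + (1 - abar) * beta)

# rho decreases monotonically in abar, approaching beta**2 as abar -> 0
rates = {abar: rate(abar) for abar in (0.9, 0.5, 0.1)}
```

For example, $\bar\alpha = 0.5$ gives $\rho = 0.8 \cdot (0.5 + 0.5 \cdot 0.8) = 0.72$, consistent with the observation in Section 4 that smaller relaxation parameters strengthen centralization.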

Superlinear Convergence

If $\partial X$ and $\partial Y$ are $C^1$-smooth and intersect transversally, and either every centralized point $w_k$ is strictly centralized or $\alpha_k \to 0$, ecCRM achieves superlinear convergence:

$$\frac{\operatorname{dist}(z_{k+1}, S)}{\operatorname{dist}(z_k, S)} \to 0 \quad \text{as } k \to \infty.$$

A vanishing schedule $\alpha_k \to 0$ ensures superlinearity even if strict centralization fails (Barros, 5 Dec 2025, Behling et al., 2022).

4. Centralization, Kernel Choice, and Step Size Effects

Selecting the kernel $T$ and relaxation parameter $\alpha$ determines the balance:

  • "Deeper" kernels ($T$ involving more projections/reflections) yield stronger contractions and accelerate convergence, at higher computational cost per iteration.
  • Shallower kernels (e.g., $T = P_Y$) give cheaper steps but slower per-iteration shrinkage of the feasibility gap.
  • A fixed $\alpha \approx 0.5$ typically balances centralization and step length.
  • Smaller $\alpha$ increases centralization, supporting superlinear convergence, but may reduce movement per iteration.
  • Vanishing schedules $\alpha_k \to 0$ accelerate convergence in nearly tangent or smooth-manifold settings (Barros, 5 Dec 2025, Behling et al., 2022).
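A vanishing schedule such as $\alpha_k = 1/(k+2)$, the choice used in the numerical experiments of Section 5, can be dropped into the iteration directly; a trivial sketch:

```python
def alpha_schedule(k):
    """Vanishing relaxation schedule alpha_k = 1/(k+2), so alpha_0 = 0.5."""
    return 1.0 / (k + 2)

# first few values: 0.5, 1/3, 0.25, 0.2, 1/6, ...
alphas = [alpha_schedule(k) for k in range(5)]
```

Starting at $\alpha_0 = 0.5$ matches the balanced fixed choice above, then drives $\alpha_k \to 0$ to trigger the superlinear regime of Section 3.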

5. Applications and Comparative Numerical Performance

EcCRM is applicable wherever two-set feasibility or intersection problems appear:

  • Matrix and tensor completion (e.g., PSD completion, rank constraints)
  • Image reconstruction
  • Signal recovery
  • Intersection of high-dimensional geometric sets (ellipsoids, subspaces)
  • Fixed-point problems for firmly nonexpansive operators
  • Primal–dual and ADMM-type splitting in optimization contexts (Barros, 5 Dec 2025, Lindstrom, 2020, Arefidamghani et al., 2022)

In matrix completion with $n = 100$ and rank 5:

  • Deep-kernel ecCRM ($T = P_Y P_X P_Y$) at $\alpha \approx 0.5$ reduced total runtime and iteration count by roughly 9% compared to cCRM, despite the extra projections.

In intersections of high-dimensional ellipsoids ($\mathbb{R}^{2000}$):

  • Vanishing step sizes ($\alpha_k = 1/(k+2)$) in ecCRM reduced iterations by ~15% and runtime by ~20% versus fixed-$\alpha$ cCRM at tight tolerances ($10^{-12}$).

Tests on random intersections of multiple sets demonstrated ecCRM can use orders of magnitude fewer projections than sequential or product-space methods, with efficiency gains increasing in higher dimensions and larger set cardinalities (Behling et al., 2022, Arefidamghani et al., 2022).

6. Relationship to Prior and Alternative Feasibility Schemes

EcCRM strictly generalizes cCRM and is compatible with various operator contexts, including affine isometries (Bauschke et al., 2019), convex combinations of projections, classical alternated projections (MAP), Douglas–Rachford, and even primal–dual schemes via iterated operator application (Barros, 5 Dec 2025, Lindstrom, 2020).

The circumcenter operator provides the closest point to the intersection among affine combinations of reflection trajectories, yielding contraction factors that can match or improve upon those for classical methods. The centralization operator—either as a fixed projection or via kernel composition—suppresses zig-zagging and accelerates convergence in practice.

EcCRM inherits or improves upon the convergence rate of the underlying kernel, and its modular structure allows practitioners to tune projections and contractions to computational resources and problem geometry. Convex combinations and centralization steps are shown formally to preserve nonexpansiveness and monotonicity required for convergence (Arefidamghani et al., 2022).

7. Summary Table: Key Algorithmic and Theoretical Elements

| Component | ecCRM Feature | Implications |
|---|---|---|
| Centralization | Modular operator $T$ and parameter $\alpha$ | Tunable contraction/cost |
| Kernel choices | $T = P_Y$, $P_Y P_X$, $P_Y P_X P_Y$ | Depth–rate trade-off |
| Convergence | Global (Fejér monotonicity), linear (error bound), superlinear (smoothness/vanishing $\alpha$) | Robust across regimes |
| Complexity | $(|T| + 2)$ projections, 2 reflections, one circumcenter per iteration | Scalable |
| Applicability | Convex/affine sets, fixed-point problems, primal–dual | Versatile algorithm |

The extended centralized circumcentered reflection method constitutes a modular, accelerated baseline for projection–reflection schemes applicable to a variety of feasibility, fixed-point, and optimization problems. Its analysis unifies and extends earlier convergence guarantees and demonstrates robust empirical efficacy on high-dimensional and large-scale instances (Barros, 5 Dec 2025, Behling et al., 2017, Arefidamghani et al., 2022, Behling et al., 2022).
