Forward-Reflected-Backward Splitting Algorithm
- Forward-reflected-backward splitting is an operator splitting method that decomposes monotone inclusions by combining explicit forward steps, implicit backward steps, and reflection techniques.
- It uses a reflection or extrapolation step to integrate past evaluations, enabling convergence under milder assumptions than standard forward-backward methods.
- Extensions such as inertial, Bregman, and primal-dual variants expand its application to optimization, imaging, and control, offering scalable and efficient performance.
The forward-reflected-backward splitting algorithm is a class of operator splitting methods for solving monotone inclusions and related structured convex or nonconvex optimization problems. Its design centers on decomposing a problem of finding a zero of a sum of monotone (in some cases generalized monotone or even non-monotone) operators by alternating explicit “forward” steps tied to single-valued operators and implicit “backward” steps via resolvents or proximal operators of set-valued (typically maximally monotone) operators. The distinctive feature is a “reflection” or extrapolation step that incorporates history via past operator evaluations or iterates, permitting provably convergent methods under weaker assumptions than standard schemes and often with enhanced computational efficiency. This family now encompasses a wide array of specialized instances—including the Malitsky-Tam forward-reflected-backward scheme, inertial and Bregman extensions, primal-dual reflected approaches, fast (accelerated) reflected variants, and strong-convergence schemes for structured inclusions—along with multiple application scenarios in optimization, imaging, control, and variational inequalities.
1. Fundamental Algorithmic Structure and Recurrences
At its core, the forward-reflected-backward (FRB) algorithm solves monotone inclusions of the form

$$\text{find } x \text{ such that } 0 \in Ax + Bx,$$

where $A$ is maximally monotone (possibly set-valued) and $B$ is single-valued, monotone, and Lipschitz continuous. The standard iteration as introduced in the Malitsky-Tam scheme is

$$x_{k+1} = J_{\lambda A}\big(x_k - 2\lambda Bx_k + \lambda Bx_{k-1}\big),$$

where $J_{\lambda A} = (\mathrm{Id} + \lambda A)^{-1}$ denotes the resolvent of $A$ and $\lambda > 0$ is a stepsize parameter (Cevher et al., 2019, Dung et al., 2021).
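As a concrete illustration, here is a minimal NumPy sketch of this recurrence applied to the toy inclusion $0 \in \partial\|x\|_1 + (Mx - b)$, where $M$ has a dominant skew-symmetric part, so $B$ is monotone and Lipschitz but far from cocoercive; the test problem and all names are illustrative:

```python
import numpy as np

def frb(resolvent, B, x0, lam, iters=2000):
    """Malitsky-Tam forward-reflected-backward:
    x_{k+1} = J_{lam*A}(x_k - 2*lam*B(x_k) + lam*B(x_{k-1}))."""
    x_prev, x = x0.copy(), x0.copy()
    B_prev = B(x_prev)
    for _ in range(iters):
        B_cur = B(x)
        x_next = resolvent(x - 2.0 * lam * B_cur + lam * B_prev, lam)
        x_prev, x, B_prev = x, x_next, B_cur
    return x

# Toy problem: 0 in d||x||_1 + (Mx - b), with M monotone and Lipschitz
# but far from cocoercive (dominant skew-symmetric part).
rng = np.random.default_rng(0)
n = 50
S = rng.standard_normal((n, n))
M = S - S.T + 0.1 * np.eye(n)        # <Mx, x> = 0.1*||x||^2 >= 0
b = rng.standard_normal(n)

B = lambda x: M @ x - b
# Resolvent of lam*d||.||_1 is soft-thresholding (prox of lam*||.||_1).
resolvent = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

L = np.linalg.norm(M, 2)             # Lipschitz constant of B
lam = 0.4 / L                        # respects the bound lam < 1/(2L)
x_star = frb(resolvent, B, np.zeros(n), lam)
```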
A generalization, called the forward-(half)-reflected-backward (FHRB) scheme, considers structured inclusions $0 \in Ax + Bx + Cx$, in which only the monotone Lipschitz operator $B$ is reflected while a cocoercive operator $C$ enters through a plain forward evaluation:

$$x_{k+1} = J_{\lambda A}\big(x_k - \lambda\,(Bx_k + Cx_k) - \lambda\,(Bx_k - Bx_{k-1})\big),$$

with the “reflection” occurring through the extrapolated difference $Bx_k - Bx_{k-1}$, and $J_{\lambda A}$ the resolvent of a possibly set-valued monotone operator $A$ (Rieger et al., 2020). Many recent schemes further admit variable step sizes, momentum/inertial corrections, and projections onto hyperplanes or more general constraint sets. Certain high-performance variants also integrate explicit nonlinear momentum terms or Bregman distances (Morin et al., 2021, Wang et al., 2022).
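Continuing the same conventions, a sketch of a single FHRB step, assuming the three-operator form above; the function names are illustrative:

```python
def fhrb_step(x, B_prev, resolvent, B, C, lam):
    """One forward-(half)-reflected-backward step: only the Lipschitz
    operator B is reflected; the cocoercive C enters through a plain
    forward evaluation. Returns the new iterate and B(x) for reuse."""
    B_cur = B(x)
    x_next = resolvent(x - lam * (B_cur + C(x)) - lam * (B_cur - B_prev), lam)
    return x_next, B_cur
```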
2. Reflection and Momentum: Analytical and Dynamical Justification
The reflection step, introduced via prior operator evaluations, serves to stabilize or accelerate the method, particularly in cases where the forward operator is not cocoercive. Instead of relying on cocoercivity, which is required in standard forward-backward splitting for convergence, the reflection mechanism exploits information from two successive iterates to compensate for the resulting “nonexpansiveness” deficiency of the forward step $\mathrm{Id} - \lambda B$ (Rieger et al., 2020, Dung et al., 2021, Cevher et al., 2019). This idea can be interpreted as a discretization of the continuous-time proximal point method with a linearization of the forward term: the implicit proximal-point relation

$$x_k - x_{k+1} \in \lambda A x_{k+1} + \lambda B x_{k+1}$$

is made explicit by replacing $Bx_{k+1}$ with the extrapolation $2Bx_k - Bx_{k-1} = Bx_k + (Bx_k - Bx_{k-1})$, with the discrete step $Bx_k - Bx_{k-1}$ mimicking a forward-difference approximation of the time derivative (Rieger et al., 2020).
Momentum corrections can be systematically incorporated through auxiliary variables evolved, schematically, as

$$x_{k+1} = (M + A)^{-1}\big(Mx_k - Bx_k + \mu\,P(x_k - x_{k-1})\big),$$

where $M$ is a (possibly nonlinear) kernel, $P$ is a positive operator (often acting as a preconditioner), and $\mu$ is a tunable parameter. This approach subsumes classical FRB and permits further acceleration by embedding inertial or “double inertial” effects, and generalizes the framework to cover Chambolle-Pock and Vũ–Condat primal-dual methods (Morin et al., 2021).
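As a minimal sketch, the special case $M = P = \mathrm{Id}$ of the update above (with a stepsize $\lambda$) reduces to an inertially corrected forward-backward step; this reduction is illustrative rather than the general nonlinear-kernel algorithm of Morin et al.:

```python
def momentum_frb_step(x, x_prev, resolvent, B, lam, mu):
    """Momentum-corrected forward-backward step in the special case
    M = P = Id: the correction mu*(x - x_prev) is added to the usual
    forward step before applying the resolvent J_{lam*A}."""
    return resolvent(x - lam * B(x) + mu * (x - x_prev), lam)
```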
3. Convergence Analysis and Rate Guarantees
FRB algorithms have been rigorously analyzed for weak and strong convergence under a variety of operator-theoretic settings. For the basic FRB scheme with a monotone, Lipschitz $B$, weak convergence of the iterates to a solution is obtained under stepsize restrictions such as $\lambda \in \big(0, \tfrac{1}{2L}\big)$, where $L$ is the Lipschitz constant of $B$ (Cevher et al., 2019, Dung et al., 2021). Strong convergence can be achieved using Halpern-type anchoring/varying parameters or via a viscosity mapping, where auxiliary sequences with vanishing weights ($\beta_k \to 0$, $\sum_k \beta_k = \infty$) ensure that the iterates approach the best approximation point (Izuchukwu et al., 2022).
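A sketch of Halpern-type anchoring wrapped around a generic one-step FRB map; the anchor weights $\beta_k = 1/(k+2)$ satisfy the vanishing-weight conditions and are one standard illustrative choice:

```python
def halpern_anchored(frb_step, x0, iters=1000):
    """Anchored iteration x_{k+1} = beta_k*x0 + (1 - beta_k)*T(x_k),
    where T is one FRB update (frb_step may carry its own operator-
    evaluation history in a closure). The anchor x0 steers the iterates
    toward the solution nearest to it under standard conditions."""
    x = x0.copy()
    for k in range(iters):
        beta = 1.0 / (k + 2)          # beta_k -> 0, sum_k beta_k = inf
        x = beta * x0 + (1.0 - beta) * frb_step(x)
    return x
```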
Under strong monotonicity or uniform convexity of one or more underlying operators, the rate can be quantified. For example, for convex-concave saddle point problems or variational inequalities, last-iterate (pointwise) convergence rates have been established for the velocity $\|x_{k+1} - x_k\|$ (difference between successive iterates) and for the tangent residual

$$\|Bx_k + a_k\|, \qquad a_k \in Ax_k,$$

both decaying as $o(1/k)$ (Bot et al., 16 Dec 2024). In deterministic convex settings, the (ergodic) primal-dual gap diminishes as $O(1/k)$, while in strongly monotone settings the expected squared error decays as $O(1/k)$ (Dung et al., 2021). For stochastic and nonconvex regimes, convergence to stationary points is ensured, with rates depending on the Kurdyka-Łojasiewicz (KL) property and on the subgradient sharpness of envelope or merit functions (Wang et al., 2021, Wang et al., 2022).
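In practice the tangent residual is computable from quantities the iteration already produces: if $x_{k+1} = J_{\lambda A}(y_k)$, then $(y_k - x_{k+1})/\lambda \in Ax_{k+1}$. A small monitoring helper based on this identity (names illustrative):

```python
import numpy as np

def tangent_residual(x_next, y, B, lam):
    """Given x_next = J_{lam*A}(y), the element a = (y - x_next)/lam
    lies in A(x_next), so ||B(x_next) + a|| upper-bounds the distance
    from 0 to (A + B)(x_next) and serves as a stopping criterion."""
    a = (y - x_next) / lam
    return np.linalg.norm(B(x_next) + a)
```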
4. Extensions: Inertial, Bregman, and Primal-Dual Variants
Modern developments have extended FRB splitting to a suite of generalized schemes:
- Inertial and Double Inertial Schemes: Acceleration mechanisms, sometimes called “heavy ball” inertial effects, introduce extrapolation terms like $\alpha_k(x_k - x_{k-1})$ (double-inertial variants add a second such term built from earlier iterates), providing faster convergence under appropriate parameter tuning (Tran, 11 Mar 2025); a minimal step is sketched after this list.
- Bregman and Mirror FRB: In problems lacking a natural Euclidean geometry or Lipschitz gradient, Bregman distances enable working with Legendre kernels $\phi$, and inertial corrections are performed in the dual space; with $A = \partial g$ for a proper convex function $g$, the major update is
$$x_{k+1} = \operatorname*{arg\,min}_{x}\,\Big\{\lambda\,g(x) + \lambda\,\langle 2Bx_k - Bx_{k-1},\, x\rangle + D_\phi(x, x_k)\Big\},$$
where $D_\phi$ is the Bregman divergence associated with $\phi$ (Wang et al., 2022).
- Primal-Dual and Three-Operator Extensions: By separating monotone inclusions with more than two operators, or with primal-dual structure, the reflected terms are generalized: for both the primal and dual iterates, one includes a reflected step for each single-valued operator, often admitting inexact computations of the Lipschitz or cocoercive terms (Bang et al., 10 Jan 2024, Rieger et al., 2020).
- Deviation Vector and History-Augmented Methods: Some recent accelerated schemes inject past iterate combinations and free deviation vectors under norm constraints, interpolating smoothly between classical and O(1/n²)-rate accelerated FB methods and including FRB as a limiting case (Sadeghi et al., 2022).
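As referenced in the first bullet above, a minimal sketch of a single-inertia FRB step; the placement of the heavy-ball term and the parameter $\alpha$ are illustrative, and concrete papers differ in the exact coupling:

```python
def inertial_frb_step(x, x_prev, B_prev, resolvent, B, lam, alpha):
    """Inertial FRB sketch: the heavy-ball term alpha*(x - x_prev) is
    added on top of the reflected forward evaluation of B."""
    B_cur = B(x)
    y = x + alpha * (x - x_prev)             # inertial extrapolation
    x_next = resolvent(y - 2.0 * lam * B_cur + lam * B_prev, lam)
    return x_next, B_cur
```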
5. Special Cases and Unified Operator-Splitting Frameworks
The FRB paradigm unifies and extends many classical and contemporary splitting techniques:
- Specialization to Known Schemes:
- Malitsky-Tam FRB: recovered when the forward operator is monotone and Lipschitz and the backward operator is set-valued maximally monotone.
- Forward-Reflected-Douglas-Rachford: With appropriate kernel/scaling choices, the method encompasses Douglas-Rachford-type splittings with additional reflection steps (Morin et al., 2021).
- Vũ–Condat, Chambolle–Pock: Adjusting the nonlinear kernel and momentum parameters recovers primal-dual splitting algorithms common in imaging and saddle-point optimization.
- Accelerated and Nesterov-Type Variants: The Fast Reflected Forward-Backward (Fast RFB) algorithm adds explicit Nesterov momentum and a correction term, achieving last-iterate o(1/k) rates for both velocity and optimality residuals (Bot et al., 16 Dec 2024); a schematic step is sketched after this list.
- Operator-Theoretic Characterization: Recent work gives a complete description of all “frugal” splitting algorithms with minimal memory (lifting), showing that each is captured by a block-structured resolvent operator with specific parameter constraints; a wide range of algorithms, including FRB and Davis–Yin three-operator splitting, are subsumed (Åkerman et al., 15 Apr 2025, Wang et al., 26 Sep 2025).
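As referenced above, a schematic accelerated reflected step; the momentum law $k/(k+3)$ and the placement of the correction term are illustrative placeholders rather than the exact parameterization of Bot et al.:

```python
def fast_rfb_step(x, x_prev, By_prev, resolvent, B, lam, k):
    """Schematic accelerated reflected step: Nesterov extrapolation to y,
    then a reflected (extrapolated) forward evaluation of B at y."""
    t = k / (k + 3.0)                        # illustrative momentum law
    y = x + t * (x - x_prev)                 # Nesterov extrapolation
    By = B(y)
    g = By + t * (By - By_prev)              # reflected correction
    x_next = resolvent(y - lam * g, lam)
    return x_next, By
```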
6. Applications, Implications, and Limitations
FRB and its variants are utilized in convex and nonconvex optimization, composite monotone inclusions, structured regularization in signal and image processing, variational inequalities, saddle-point models, and optimal control. The reduction in per-iteration complexity—due to requiring only a single (explicit) evaluation of Lipschitzian operators per iteration—is particularly advantageous in high-dimensional or large-scale settings. The reflected (history-augmented) structure allows for less restrictive operator assumptions and can accommodate nonsmoothness, constraints, or weaker generalized (non-monotone) operator settings (Tran, 11 Mar 2025).
The algorithms admit adaptations to stochastic settings, block-coordinate variants, inertial and Bregman frameworks, and are robust to inexact operator evaluations (with weak or strong convergence under summable error assumptions) (Dung et al., 2021, Bang et al., 10 Jan 2024). Notably, strong convergence and rates are available under mild conditions, with anchoring and viscosity-type relaxations supporting convergence to the best approximation (Izuchukwu et al., 2022).
Some limitations remain: for certain parameter regimes, stepsize restrictions can be more conservative compared with classic cocoercive settings; in highly non-monotone, ill-conditioned, or nonconvex cases, convergence may depend on the KL property or problem-specific structural properties (Wang et al., 2021, Wang et al., 2022). The choice of momentum and reflection parameters, kernel operators, and deviation vectors can significantly affect empirical performance and require tuning.
7. Recent Advances and Future Research Directions
Current research is active in:
- Extending reflection-based splitting to settings with general (weakly) monotone, non-monotone, or structured nonlinear operators.
- Integrating adaptive and linesearch strategies to vary step sizes and inertial parameters, as well as leveraging KL-type analysis for sharper nonconvex convergence rates (Wang et al., 2022).
- Designing distributed and graph-based splitting methods via parameterized operator decomposition and minimal-lifting frameworks, which enable robust large-scale distributed optimization (Åkerman et al., 15 Apr 2025).
- Further connecting reflected and anchored backward-forward, forward-backward-backward, and multi-operator variants for more general composite inclusion problems (Wang et al., 26 Sep 2025, Tran, 11 Mar 2025).
A plausible implication is that, given ongoing analytical and algorithmic developments, the class of reflected, inertial, and history-augmented splitting methods will continue to play a central role in scalable, structure-exploiting monotone inclusion and modern optimization problems, particularly where operator evaluations or proximal mappings are computationally expensive.