FBHF Splitting Algorithm
- The FBHF algorithm is a splitting method for solving monotone inclusion and variational inequality problems, combining backward (resolvent) steps with forward and half-forward correction steps.
- It leverages a composite framework of maximally monotone, Lipschitz monotone, and cocoercive operators to effectively handle structured optimization tasks.
- Extensions include stochastic variance reduction, momentum acceleration, and four-operator/primal-dual variants, enhancing convergence rates and practical performance.
The forward-backward-half forward splitting algorithm (FBHF) denotes a family of operator splitting algorithms for structured monotone inclusion and variational inequality problems. FBHF exploits the composite nature of problems involving a maximally monotone operator, a monotone (possibly Lipschitz) operator, and a cocoercive operator by interleaving resolvent-based (backward) steps with explicit (forward) evaluations and additional correction steps that leverage Lipschitz or block structure. Recent advances extend the algorithm to nonlinear settings, stochastic/variance-reduced regimes, multivariate/primal-dual forms, and momentum- and inertia-augmented variants.
1. Mathematical Framework and Operator Structure
FBHF is concerned with monotone inclusions in a real Hilbert space $\mathcal{H}$, targeting the structured root-finding problem
$$\text{find } x \in \mathcal{H} \ \text{ such that } \ 0 \in Ax + Bx + Cx,$$
where:
- $A \colon \mathcal{H} \rightrightarrows \mathcal{H}$ is maximally monotone (e.g., a subdifferential or normal cone),
- $B \colon \mathcal{H} \to \mathcal{H}$ is monotone, typically $L$-Lipschitz, often decomposed as a finite sum $B = \frac{1}{m}\sum_{i=1}^{m} B_i$ for block or finite-sum structure,
- $C \colon \mathcal{H} \to \mathcal{H}$ is cocoercive, i.e., there exists $\beta > 0$ such that $\langle Cx - Cy,\, x - y\rangle \ge \beta\,\|Cx - Cy\|^2$ for all $x, y \in \mathcal{H}$.
This triple-splitting model generalizes classical scenarios such as convex minimization with non-smooth regularization/constraints, composite variational inequalities, and primal-dual saddle-point problems.
FBHF naturally extends to four-operator settings with additional Lipschitz and linear-composite terms, as demonstrated in (Roldán, 2023), and allows nonlinear preconditioning as in (Giselsson, 2019, Qin et al., 28 Oct 2025).
2. Core Algorithmic Structure
The canonical FBHF algorithm performs, per iteration, a maximal monotone resolvent (backward) step, a forward step with the Lipschitz and cocoercive operators, and a half-forward correction with the Lipschitz operator to offset the lack of cocoercivity of $B$. For deterministic settings, one typical iteration has the update (see (Roldán, 2023, Qin et al., 28 Oct 2025))
$$\bar x_n = J_{\gamma A}\big(x_n - \gamma\,(B x_n + C x_n)\big), \qquad x_{n+1} = \bar x_n + \gamma\,(B x_n - B \bar x_n),$$
where $J_{\gamma A} = (\mathrm{Id} + \gamma A)^{-1}$ denotes the resolvent and $\gamma > 0$ is the step size. In algorithms with nonlinear or block preconditioning (see (Giselsson, 2019, Qin et al., 28 Oct 2025)), the kernel for the resolvent may be nonlinear or nonsymmetric, replacing $J_{\gamma A}$ by a generalized resolvent of the form $(M + A)^{-1}$ for a suitable kernel $M$, enabling further flexibility. Momentum or inertia terms, i.e., extrapolations of the form $x_n + \theta_n(x_n - x_{n-1})$, can be explicitly included for acceleration and improved empirical performance.
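As a concrete illustration, the following minimal NumPy sketch runs the deterministic update above on a toy monotone inclusion. The problem data, the operator choices, and the helper `J_gamma_A` are illustrative assumptions, not taken from the cited papers; the step size is chosen conservatively inside the commonly stated admissible range for deterministic FBHF.

```python
import numpy as np

# Toy instance of 0 in Ax + Bx + Cx (illustrative data):
#   A = normal cone of the nonnegative orthant (resolvent = projection),
#   B = a skew-symmetric linear map (monotone and Lipschitz, not cocoercive),
#   C = gradient of a strongly convex quadratic (cocoercive).
rng = np.random.default_rng(0)
n = 50
S = rng.standard_normal((n, n))
B_mat = S - S.T                                # skew-symmetric => monotone
Q = rng.standard_normal((n, n))
Q = Q @ Q.T / n + np.eye(n)                    # symmetric positive definite
b = rng.standard_normal(n)

J_gamma_A = lambda z: np.maximum(z, 0.0)       # resolvent of A = normal cone of R^n_+
B = lambda x: B_mat @ x                        # Lipschitz monotone part
C = lambda x: Q @ x + b                        # cocoercive part

L = np.linalg.norm(B_mat, 2)                   # Lipschitz constant of B
beta = 1.0 / np.linalg.norm(Q, 2)              # cocoercivity constant of C
# Step size inside (0, 4*beta / (1 + sqrt(1 + 16*beta^2*L^2))), the usual bound
gamma = 0.99 * 4.0 * beta / (1.0 + np.sqrt(1.0 + 16.0 * beta**2 * L**2))

x = np.zeros(n)
for _ in range(5000):
    x_bar = J_gamma_A(x - gamma * (B(x) + C(x)))   # backward + full forward step
    x_new = x_bar + gamma * (B(x) - B(x_bar))      # half-forward correction (B only)
    if np.linalg.norm(x_new - x) < 1e-12:
        break
    x = x_new

# A solution is a fixed point of the forward-backward map, so this should be ~0.
print("fixed-point residual:",
      np.linalg.norm(x - J_gamma_A(x - gamma * (B(x) + C(x)))))
```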
For four-operator and primal-dual variants (see (Roldán, 2023)), FBHF can be structured around inclusions of the form
$$\text{find } x \in \mathcal{H} \ \text{ such that } \ 0 \in Ax + Bx + Cx + L^{*} D(Lx),$$
where $C$ is cocoercive, $D$ is maximally monotone, and $L$ is a bounded linear operator.
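In primal-dual form this is typically rewritten via a standard product-space reformulation (the notation below is a common convention, assumed here rather than taken from the source): find $(x, u) \in \mathcal{H} \times \mathcal{G}$ such that
$$0 \in \begin{pmatrix} Ax + Bx + Cx + L^{*}u \\ D^{-1}u - Lx \end{pmatrix},$$
where the skew coupling $(x, u) \mapsto (L^{*}u, -Lx)$ is monotone and Lipschitz, so it can be folded into the Lipschitz part handled by the half-forward correction.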
3. Stochastic and Variance-Reduced FBHF
Recent advances incorporate stochastic and variance-reduced computations into the FBHF framework, critical for large-scale and finite-sum problems (Qin et al., 2023, Qin et al., 28 Oct 2025). In these regimes, the operator $B$ is accessed only through a stochastic oracle $\hat B(\cdot, \xi)$ satisfying unbiasedness and bounded variance:
$$\mathbb{E}_\xi\big[\hat B(x,\xi)\big] = Bx, \qquad \mathbb{E}_\xi\,\big\|\hat B(x,\xi) - Bx\big\|^2 \le \sigma^2.$$
The variance-reduced FBHF (VRFBHF) methodology uses Polyak-like averaging or momentum on a reference point $w_n$ and employs a control-variate decomposed update of the form
$$\tilde B_n(x_n) = \hat B(x_n, \xi_n) - \hat B(w_n, \xi_n) + B(w_n),$$
with the reference refreshed by averaging, $w_{n+1} = (1-\alpha)\,w_n + \alpha\,x_{n+1}$, at a prescribed frequency. Here $\alpha$ controls the averaging and $p$ the frequency of reference updates. This update is robust to high variance and allows for much smaller $p$ (the reference is seldom updated) in large-scale settings, substantially reducing the total number of oracle calls compared to standard FBHF.
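A schematic of the control variate above for a finite-sum $B = \frac{1}{m}\sum_i B_i$: the helper `svrg_estimator`, the problem data, and the reference-update values below are illustrative assumptions, not the exact VRFBHF routine of the cited papers.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 100, 20
# Finite-sum Lipschitz monotone part: B = (1/m) * sum_i B_i, each B_i skew-linear.
mats = [rng.standard_normal((n, n)) for _ in range(m)]
B_i = [M - M.T for M in mats]
B_full = lambda x: sum(Bi @ x for Bi in B_i) / m     # one full (expensive) pass

def svrg_estimator(x, w, B_w, i):
    """Control-variate estimate of B(x): unbiased by construction, and its
    variance shrinks as the iterate x approaches the reference point w."""
    return B_i[i] @ x - B_i[i] @ w + B_w

x = rng.standard_normal(n)
w = x + 0.05 * rng.standard_normal(n)    # reference near the current iterate
B_w = B_full(w)                          # stored full evaluation at w
est = np.mean([svrg_estimator(x, w, B_w, rng.integers(m)) for _ in range(20000)],
              axis=0)
print("estimator bias ~", np.linalg.norm(est - B_full(x)))   # close to zero

# Reference update (Polyak-like averaging), performed only with probability p,
# so full passes over the sum are rare (illustrative parameter values):
alpha, p = 0.5, 0.05
if rng.random() < p:
    w = (1 - alpha) * w + alpha * x
    B_w = B_full(w)
```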
Stochastic quasi-Fejér monotonicity and Lyapunov function methodologies underpin the convergence proofs and rate guarantees (Qin et al., 2023, Qin et al., 28 Oct 2025, Vũ, 2015).
4. Convergence Properties
Weak and Almost Sure Convergence
Under general monotonicity, cocoercivity, and step-size constraints, almost sure weak convergence to a solution is obtained for both deterministic and stochastic FBHF variants (Qin et al., 2023, Qin et al., 28 Oct 2025, Vũ, 2015). Specifically, with step sizes $\gamma_n$ satisfying
$$0 < \inf_n \gamma_n \le \sup_n \gamma_n < \chi,$$
where $\chi$ is a threshold determined by $L$ and $\beta$ (in the deterministic case, $\chi = 4\beta/(1 + \sqrt{1 + 16\beta^2 L^2})$), the VRFBHF iterates satisfy $x_n \rightharpoonup x^\star$ almost surely for some $x^\star \in \operatorname{zer}(A + B + C)$. Lyapunov functions of the type
$$V_n = \|x_n - x^\star\|^2 + c\,\|w_n - x^\star\|^2, \qquad c > 0,$$
are shown to contract in expectation, enabling invocation of supermartingale convergence theorems.
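For context, the supermartingale result typically invoked at this step is the Robbins–Siegmund lemma: if nonnegative adapted sequences $V_n$, $\theta_n$, $\zeta_n$, $\varepsilon_n$ satisfy
$$\mathbb{E}\big[V_{n+1} \mid \mathcal{F}_n\big] \le (1 + \zeta_n)\,V_n - \theta_n + \varepsilon_n, \qquad \sum_n \zeta_n < \infty, \quad \sum_n \varepsilon_n < \infty \ \text{a.s.},$$
then $V_n$ converges almost surely and $\sum_n \theta_n < \infty$ almost surely; a contraction in expectation of the Lyapunov function supplies exactly such an inequality.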
Linear Convergence under Strong Monotonicity
When one operator is strongly monotone, explicit linear (geometric) rates are derived. For example, when an operator, say $C$, is $\mu$-strongly monotone, then for suitable parameter choices and step size $\gamma$ the expected error decays as
$$\mathbb{E}\,\|x_n - x^\star\|^2 \le \rho^n\,\|x_0 - x^\star\|^2 \quad \text{for some } \rho \in (0, 1),$$
with explicit dependence of $\rho$ on $\mu$, $L$, and $\beta$ (Qin et al., 2023, Qin et al., 28 Oct 2025). This result extends to nonlinear preconditioning and momentum-augmented settings and is novel for variance-reduced and stochastic operator-splitting algorithms.
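A quick numerical sanity check of geometric decay under strong monotonicity, reusing the toy construction from the sketch in Section 2; the data and the measured factor are illustrative, not the explicit rates of the cited papers.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
S = rng.standard_normal((n, n)); B_mat = S - S.T              # monotone, Lipschitz
Q = rng.standard_normal((n, n)); Q = Q @ Q.T / n + np.eye(n)  # strongly monotone C
b = rng.standard_normal(n)
B = lambda x: B_mat @ x
C = lambda x: Q @ x + b
J_gamma_A = lambda z: np.maximum(z, 0.0)                      # projection resolvent

L = np.linalg.norm(B_mat, 2)
beta = 1.0 / np.linalg.norm(Q, 2)
gamma = 0.99 * 4.0 * beta / (1.0 + np.sqrt(1.0 + 16.0 * beta**2 * L**2))

xs, x = [], np.zeros(n)
for _ in range(300):
    x_bar = J_gamma_A(x - gamma * (B(x) + C(x)))
    x = x_bar + gamma * (B(x) - B(x_bar))
    xs.append(x.copy())

# Per-iteration contraction toward the (numerically converged) limit point:
x_star = xs[-1]
ratios = [np.linalg.norm(xs[k + 1] - x_star) / np.linalg.norm(xs[k] - x_star)
          for k in range(20, 40)]
print("empirical contraction factor ~", np.mean(ratios))     # strictly below 1
```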
5. Extensions and Generalizations
Nonlinear/Kernelized and Projection-Corrected FBHF
Recent frameworks generalize FBHF to nonlinear and nonsymmetric kernels (a nonlinear kernel $M$ in the resolvent), allowing the backward step to be performed in a non-Euclidean or Bregman geometry, or with blockwise and coordinatewise preconditioning. The NOFOB algorithm (Giselsson, 2019) achieves this via a nonlinear forward-backward resolvent and an additional relaxed projection onto a separating hyperplane, which ensures Fejér monotonicity and enables potentially larger step sizes and faster convergence.
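The correction step can be sketched generically as a relaxed projection onto a separating halfspace. The helper below and its parameters are illustrative; the construction of the cutting hyperplane $(g, h)$ from the nonlinear forward-backward step is detailed in (Giselsson, 2019) and omitted here.

```python
import numpy as np

def relaxed_halfspace_projection(x, g, h, theta=1.0):
    """Relaxed projection of x onto the halfspace {z : <g, z> <= h}.

    theta in (0, 2) is the relaxation parameter (theta = 1 gives the exact
    projection). When the hyperplane separates x from the solution set,
    this step is Fejer monotone with respect to that set.
    """
    violation = np.dot(g, x) - h
    if violation <= 0.0:              # x already satisfies the inequality
        return x
    return x - theta * (violation / np.dot(g, g)) * g
```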
Four-Operator and Primal-Dual Extensions
FBHF can be embedded as a special case of generalized four-operator splitting algorithms combining maximally monotone, Lipschitz, cocoercive, and composite linear terms (Roldán, 2023). These generalizations recover algorithms such as Condat–Vũ and AFBA and facilitate multivariate splitting, saddle-point problems, and applications with complex regularization.
Momentum and Acceleration
Augmenting FBHF with inertia/momentum, whether linear or nonlinear, is shown, both theoretically and empirically, to accelerate convergence and to mitigate ill-conditioning. These momentum terms interact with the splitting framework in nontrivial ways and require a tailored Lyapunov-based analysis (Qin et al., 28 Oct 2025).
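A schematic of how an inertial term composes with the splitting; the wrapper below is a generic sketch, and the inertia weight `theta` and its admissible range are assumptions that require the tailored analysis cited above.

```python
import numpy as np

def inertial_fbhf(x0, resolvent, B, C, gamma, theta, iters=1000):
    """Generic inertial FBHF sketch: extrapolate with momentum, then apply
    one FBHF update at the extrapolated point."""
    x_prev, x = np.copy(x0), np.copy(x0)
    for _ in range(iters):
        y = x + theta * (x - x_prev)                      # inertial extrapolation
        y_bar = resolvent(y - gamma * (B(y) + C(y)))      # backward + full forward
        x_prev, x = x, y_bar + gamma * (B(y) - B(y_bar))  # half-forward correction
    return x
```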
6. Practical Applications and Numerical Performance
FBHF and its stochastic/variance-reduced variants are applied to broad classes of structured problems:
- Sparse and regularized convex optimization (e.g., LASSO, TV regularization),
- Composite saddle-point and constrained minimax learning,
- Stochastic finite-sum and large-scale machine learning problems,
- Imaging and signal processing (e.g., deblurring, denoising) with primal-dual composite regularizers (Roldán, 2023),
- Portfolio optimization and quadratic programming with blockwise and sparse constraints (Qin et al., 28 Oct 2025).
Empirical studies demonstrate that VRFBHF and momentum-based variants achieve lower iteration counts and wall-clock times, particularly for large or expensive operator evaluations, and outperform standard forward-backward or extragradient-type schemes. This advantage is pronounced for problems with composite operators, high variance, or ill-conditioned constraint structure.
7. Relation to Other Operator Splitting and Optimization Methods
FBHF encompasses and extends several classic splitting schemes; the first two specializations are written out after this list:
- Forward-backward splitting: recovered by setting $B = 0$, in which case the half-forward correction vanishes.
- Forward-backward-forward (FBF, Tseng's method): recovered by setting $C = 0$, reducing FBHF to a two-operator scheme.
- Extragradient and Mirror-Prox methods: related via the structure of evaluation points and correction steps.
- Nonlinear forward-backward splitting with projection correction: generalizes FBHF to allow for nonlinear, nonsymmetric kernels with relaxed projections (Giselsson, 2019).
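To make the first two specializations concrete (with $J_{\gamma A} = (\mathrm{Id} + \gamma A)^{-1}$ as before):
$$\text{FB}\ (B = 0): \quad x_{n+1} = J_{\gamma A}\big(x_n - \gamma\,C x_n\big),$$
$$\text{FBF}\ (C = 0): \quad \bar x_n = J_{\gamma A}\big(x_n - \gamma\,B x_n\big), \qquad x_{n+1} = \bar x_n + \gamma\,(B x_n - B \bar x_n).$$
Setting $B = 0$ in the FBHF update makes the half-forward correction vanish, recovering forward-backward; setting $C = 0$ removes the cocoercive forward component, recovering Tseng's FBF.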
FBHF admits integration with stochastic and variance-reduced oracles (SVRG, SAGA, SARAH), allowing tuning for specific data structure and evaluation cost (Qin et al., 2023, Qin et al., 28 Oct 2025). The theoretical foundations leverage stochastic quasi-Fejér convergence and Lyapunov descent arguments, supporting robust convergence—even in the presence of randomness, nonlinearity, or inexactness.
In summary, the forward-backward-half forward splitting algorithm and its recent generalizations define a flexible, theoretically principled, and computationally efficient family of methods for large-scale and structured monotone inclusions. They offer significant advantages in convergence rate, oracle complexity, and practical implementability over classical splitting schemes, particularly in stochastic, variance-reduced, and momentum-accelerated regimes (Giselsson, 2019, Roldán, 2023, Qin et al., 2023, Qin et al., 28 Oct 2025).