
Proximal-Gradient and Sparsity-Driven Schemes

Updated 15 February 2026
  • Proximal-gradient and sparsity-driven schemes are iterative optimization methods that solve high-dimensional composite problems by combining smooth data-fitting components with sparsity constraints.
  • They leverage efficient proximal operators to handle nonsmooth regularizers, ensuring convergence rates and precise active-set identification under rigorous theoretical analysis.
  • Advanced variants incorporate acceleration, adaptive step-sizing, and stochastic techniques to enhance performance in diverse applications such as signal recovery and machine learning.

Proximal-gradient and sparsity-driven schemes form the foundation of modern optimization frameworks for high-dimensional estimation, signal recovery, and statistical learning with explicit or structured sparsity. These methods leverage composite convex (and sometimes nonconvex) formulations, combining smooth data-fitting terms with sparsity-inducing regularizers or constraints, typically handled via efficient proximal operators. Rigorous theoretical analysis, together with practical algorithmic innovations and diverse applications, has resulted in a rich ecosystem of schemes balancing computational tractability, convergence guarantees, and exact support identification.

1. Formulation and Theoretical Principles

Consider the archetypal composite problem

$$\min_{x\in\mathbb{R}^n}\; F(x) = f(x) + g(x)$$

where $f$ is convex and $L$-smooth (often $\mu$-strongly convex), and $g$ is proper, convex, and separable or structured (possibly nonsmooth, or extended-valued to encode constraints). Canonical instances include regularized regression with the $\ell_1$-norm (Lasso), group Lasso, the indicator of nonnegativity, and structured penalties inducing block, graph, or fused sparsity (Nikolovski et al., 2024, Nutini et al., 2017, Chen et al., 2012, Chen et al., 2010, Deleu et al., 2021, Argyriou et al., 2011).

The proximal-gradient iteration with step size $\alpha = 1/L$ is

$$x^{k+1} = \operatorname{prox}_{g/L}\bigl(x^k - \tfrac{1}{L}\nabla f(x^k)\bigr)$$

where

$$\operatorname{prox}_{\tau g}(v) = \arg\min_x \left\{ \frac{1}{2\tau}\|x-v\|_2^2 + g(x) \right\}$$

For $g(x)=\lambda\|x\|_1$, the proximal map is soft-thresholding; for the group Lasso it is blockwise shrinkage.
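For concreteness, a minimal sketch of this iteration for the Lasso, $f(x)=\tfrac12\|Ax-b\|_2^2$ with $g(x)=\lambda\|x\|_1$ (the function names here are illustrative, not from the cited papers):

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau*||.||_1: entrywise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam, n_iter=500):
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of the smooth term
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

Each iteration costs two matrix-vector products plus an $O(n)$ shrinkage, which is what makes the scheme attractive in high dimensions.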

Convergence rates are directly dictated by the structure of $f$ and $g$:

  • In the convex case, the objective residual decays as $O(1/k)$ (Nikolovski et al., 2024).
  • With additional $\mu$-strong convexity of $f$, linear (geometric) convergence: $F(x^k)-F(x^*) \leq O(\exp(-k/\kappa))$, with condition number $\kappa = L/\mu$ (Nutini et al., 2017).

Extensions include variable step-size schemes leveraging local curvature, achieving substantial practical acceleration without loss of theoretical guarantees (Nikolovski et al., 2024, Gu et al., 2015).

2. Active-Set Identification and Complexity

A pivotal aspect of sparsity-driven optimization is identification of the optimal sparsity pattern (active set). For separable convex regularizers (e.g., ℓ₁, nonnegativity), the proximal-gradient iterates are guaranteed to lock in the active set in finitely many steps under nondegeneracy (strict complementarity) conditions. The "active-set complexity," defined as the minimal number of iterations needed to recover the correct sparsity pattern, satisfies the bound (Nutini et al., 2017)

$$K = \left\lceil \kappa \, \log \frac{2L\|x^0 - x^*\|}{\delta} \right\rceil$$

where $\kappa = L/\mu$ and $\delta$ encodes the minimal gap in the subdifferential at optimality. After $K$ iterations, every coordinate in the active set satisfies $x^k_i = x^*_i$, i.e., exact support recovery (Nutini et al., 2017).

Notably, this logarithmic dependence on the "distance to the boundary" $\delta$ sharply improves on earlier $O(1/\delta^2)$ bounds, yet is essentially tight: recovery of the support requires the iterates to get sufficiently close to $x^*$.
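This finite-step identification is easy to observe numerically. The following self-contained sketch (function name illustrative) runs the proximal-gradient iteration on a small Lasso instance and records the support of every iterate; after finitely many steps the recorded support stops changing:

```python
import numpy as np

def prox_grad_supports(A, b, lam, n_iter=400):
    """Run ISTA and return the support (set of nonzero indices) of each iterate."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    supports = []
    for _ in range(n_iter):
        v = x - A.T @ (A @ x - b) / L                          # forward (gradient) step
        x = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)  # backward (prox) step
        supports.append(frozenset(np.flatnonzero(x)))
    return supports
```

On well-conditioned instances the support typically locks in long before the objective itself has converged, consistent with the logarithmic bound above.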

In primal or dual settings with decomposable norms, parallel results hold: if the sampling operator satisfies suitable restricted isometry or eigenvalue conditions, proximal-gradient and its homotopy variants achieve linear convergence in the objective and support (Xiao et al., 2012, Eghbali et al., 2015).

3. Acceleration, Adaptivity, and Advanced Schemes

Acceleration strategies—most notably momentum-based (FISTA/Nesterov) and multi-secant (Anderson Acceleration)—offer provable speedups especially in early or local convergence phases. Two-phase frameworks leverage rapid Nesterov sequencing to globally descend, then switch to Anderson acceleration for superlinear local convergence, with subsetted subproblems further exploiting sparsity to reduce computational overhead (Henderson et al., 16 Aug 2025, Gu et al., 2015, Schmidt et al., 2011). Adaptive step-sizing, whether via Barzilai–Borwein, local Lipschitz estimates, or model-based quadratic regularization, eliminates the need for imprecise global constants and empirically halves wall-clock time (Nikolovski et al., 2024, Gu et al., 2015, Lakhmiri et al., 2022).
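For the ℓ₁ case, the momentum variant is a two-line change to the basic iteration. A sketch of FISTA with soft-thresholding as the proximal map (names illustrative):

```python
import numpy as np

def fista(A, b, lam, n_iter=500):
    """Accelerated proximal gradient (FISTA) for 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        v = y - A.T @ (A @ y - b) / L                          # gradient step at y
        x_new = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)          # momentum extrapolation
        x, t = x_new, t_new
    return x
```

The extrapolation sequence improves the worst-case rate from $O(1/k)$ to $O(1/k^2)$ at essentially no extra per-iteration cost.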

Smoothing-based proximal-gradient methods address nonseparable structured penalties, e.g., the overlapping-group lasso, graph-guided fusion, and graph-based constraints, by replacing the nonsmooth penalty with a smooth surrogate parameterized by a smoothing parameter $\mu$, for which FISTA yields $O(1/\epsilon)$ or $O(1/k^2)$ convergence (Chen et al., 2012, Chen et al., 2010). Inexact evaluation of the proximal operator (i.e., inner iterative errors) does not destroy these rates, provided the errors decay suitably per iteration (Schmidt et al., 2011).

Sparsity constraints can also be handled via hard-thresholding or combinatorial projections (e.g., iterative hard thresholding, block-coordinate descent), with smoothing of the nonsmooth loss and Moreau envelope facilitating convergence guarantees to strong stationarity notions (Yuan, 2021).
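For an explicit cardinality constraint $\|x\|_0 \le s$, the combinatorial projection keeps the $s$ largest-magnitude entries. A minimal iterative hard thresholding sketch (names illustrative):

```python
import numpy as np

def hard_threshold(v, s):
    """Project onto {x : ||x||_0 <= s} by keeping the s largest-magnitude entries."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -s)[-s:]
    out[idx] = v[idx]
    return out

def iht(A, b, s, n_iter=300):
    """Iterative hard thresholding for min ||Ax - b||^2 subject to ||x||_0 <= s."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = hard_threshold(x - A.T @ (A @ x - b) / L, s)
    return x
```

Unlike the convex prox, this projection is set-valued in degenerate cases and the overall problem is nonconvex, which is why the guarantees in this setting are stated in terms of stationarity rather than global optimality.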

4. Stochastic, Nonconvex, and High-Dimensional Settings

Proximal-gradient schemes in finite-sum, high-dimensional, or nonconvex regimes require adaptive sample splitting and bespoke stochastic variants. Modern algorithms utilize proximal stochastic gradient (Prox-SG), variance-reduced variants (Prox-SVRG, Prox-SPIDER, Prox-SARAH), and dynamic support-pruning steps (orthant or half-space projections) to aggressively reduce solution sparsity while retaining convergence (Chen et al., 2020, Chen et al., 2020, Liang et al., 2020, Lakhmiri et al., 2022). A recurring design pattern is a two-phase process: standard prox-SG for preliminary support prediction, followed by targeted manifold projection to shrink the active set more aggressively (Chen et al., 2020, Chen et al., 2020).
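A minimal Prox-SG sketch for a finite-sum least-squares loss with ℓ₁ regularization; the batch size and fixed step are illustrative choices, not tuned settings from the cited papers:

```python
import numpy as np

def prox_sgd_l1(A, b, lam, batch=8, n_epochs=60, step=0.01, seed=0):
    """Proximal stochastic gradient for min_x (1/2m)*||Ax - b||^2 + lam*||x||_1:
    a minibatch gradient step followed by soft-thresholding."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_epochs):
        for idx in np.array_split(rng.permutation(m), max(m // batch, 1)):
            g = A[idx].T @ (A[idx] @ x - b[idx]) / len(idx)   # minibatch gradient
            v = x - step * g
            x = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)
    return x
```

Variance-reduced variants (Prox-SVRG, Prox-SARAH) replace the plain minibatch gradient `g` with a control-variate estimator to recover deterministic-style rates.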

For nonconvex, nonsmooth regularizers (e.g., ℓ₀, MCP, SCAD, ℓ_p, p<1), proximal steps remain computable, and convergence proofs rely on generalized subdifferential calculus and local regularity properties; in the stochastic setting, robust performance under arbitrary sampling is achieved through carefully controlled variance and batch-size schemes (Liang et al., 2020, Shimmura et al., 2022, Lakhmiri et al., 2022).
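For instance, the proximal map of the scalar MCP with unit step size has a standard closed form, the firm-thresholding operator, which zeroes small entries, shrinks moderate ones, and leaves large ones untouched. A sketch assuming $\gamma > 1$ (function name illustrative):

```python
import numpy as np

def mcp_prox(v, lam, gamma):
    """Entrywise prox of the MCP penalty with unit step size (requires gamma > 1):
    0 for |v| <= lam; sign(v)*(|v|-lam)*gamma/(gamma-1) for lam < |v| <= gamma*lam;
    v unchanged for |v| > gamma*lam (firm thresholding)."""
    v = np.asarray(v, dtype=float)
    a = np.abs(v)
    return np.where(
        a <= lam,
        0.0,
        np.where(a <= gamma * lam, np.sign(v) * (a - lam) * gamma / (gamma - 1.0), v),
    )
```

The lack of shrinkage on large entries is precisely the reduced-bias property that motivates MCP and SCAD over the ℓ₁ penalty.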

Adaptive stochastic regularization allows the step size ($\sigma_t$) to be tuned without any prior knowledge of the Lipschitz constant and delivers $O(\epsilon^{-2})$ complexity for reaching first-order stationarity, matching classic methods tuned with precise Lipschitz information (Lakhmiri et al., 2022).

5. Structured, Grouped, and Constrained Sparsity

Beyond entry-wise sparsity, structured-sparsity frameworks encode groups, hierarchies, connected components, or constraints via more general regularizers. Proximal-gradient methods accommodate these forms provided the proximal operator is efficiently solvable, either in closed form or via efficient subroutines (blockwise shrinkage, network flows, conic projections, or dual smoothing) (Chen et al., 2012, Chen et al., 2010, Argyriou et al., 2011, Eghbali et al., 2015, Deleu et al., 2021). For example:

  • Overlapping group lasso and graph-fused lasso penalties are represented as maximizations over dual balls or via incidence matrices, allowing smoothing and fast gradient updates (Chen et al., 2012, Chen et al., 2010).
  • Hierarchical or contiguous-region support is modeled via conic or norm constraints in auxiliary variables (λ), with the composite norm minimized via a coupled proximal-gradient/fixed-point iteration (Argyriou et al., 2011).
  • Blockwise or group-wise nonconvex penalties (e.g., group MCP) are handled via numerically tractable root-finding for their associated weighted prox-operators (Deleu et al., 2021).

Homotopy and continuation methods further enhance performance by tracking the solution path for a decreasing sequence of regularization parameters, yielding global geometric convergence rates even when the original loss is not globally strongly convex (Xiao et al., 2012, Eghbali et al., 2015).
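Continuation can be sketched as warm-started proximal-gradient solves along a geometrically decreasing $\lambda$ schedule; the function and parameter names below are illustrative:

```python
import numpy as np

def lasso_homotopy(A, b, lam_final, n_stages=5, inner_iter=200):
    """Warm-started continuation: solve a sequence of Lasso problems with
    geometrically decreasing regularization, reusing each solution as the
    next starting point."""
    L = np.linalg.norm(A, 2) ** 2
    lam0 = np.max(np.abs(A.T @ b))       # for lam >= lam0 the solution is x = 0
    x = np.zeros(A.shape[1])
    for lam in np.geomspace(lam0, lam_final, n_stages):
        for _ in range(inner_iter):      # ISTA inner loop, warm-started
            v = x - A.T @ (A @ x - b) / L
            x = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)
    return x
```

Because each stage starts near the previous solution and keeps its iterates sparse, the inner solves behave as if restricted to a low-dimensional subproblem, which is the mechanism behind the geometric rates cited above.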

6. Applications, Empirical Performance, and Parameter Selection

Proximal-gradient and sparsity-driven algorithms are ubiquitous in compressed sensing, regression, high-dimensional statistics, machine learning, neural network pruning, convex clustering, trend filtering, PDE control, and network estimation (Nikolovski et al., 2024, Xiao et al., 2012, Shimmura et al., 2021, Zheng et al., 2022, Deleu et al., 2021, Chen et al., 2020). Closed-form or numerically efficient proximal mappings for canonical regularizers ($\ell_1$, group $\ell_2$, nuclear, fused, ℓ₀, MCP, SCAD) ensure broad applicability, and empirical evidence across these domains consistently supports their effectiveness.

Parameter choices, including the regularization weight ($\lambda$), step-size scheduling, and switch epochs for aggressive sparsity promotion, are typically set via cross-validation, adaptive schedules, or empirical tuning. For variable step sizes, practical schemes avoid computing or estimating global Lipschitz constants altogether (Nikolovski et al., 2024, Gu et al., 2015, Lakhmiri et al., 2022). In inexact settings, inner-loop errors must decay appropriately (e.g., $O(1/k^3)$ for basic PG, $O(1/k^4)$ for accelerated PG) to preserve the iteration complexity (Schmidt et al., 2011, Gu et al., 2015).

7. Connections, Limitations, and Open Directions

The proximal-gradient paradigm unifies disparate threads in convex, nonconvex, and structured sparsity—simultaneously supporting splitting, smoothing, homotopy, and acceleration. Its flexibility in handling broad regularization families (including nonconvex) and highly composite constraints makes it foundational across domains.

Nevertheless, several challenges and active areas remain.

The integration of data-driven and trainable proximal mappings, as in unfolded network architectures that learn both gradient and proximity updates, opens promising directions for adapting sparsity-driven optimization to complex signal, estimation, or inverse problems (Zheng et al., 2022).
