
Stochastic Coordinate Descent

Updated 15 April 2026
  • Stochastic Coordinate Descent is a first-order optimization method that randomly updates individual or block coordinates to efficiently tackle high-dimensional problems.
  • It employs techniques such as adaptive sampling, variance reduction, and parallel/asynchronous updates to enhance convergence rates and scalability.
  • Recent advancements include block and primal-dual variants, spectral methods, and asynchronous implementations that improve practical performance in large-scale machine learning applications.

Stochastic coordinate descent (SCD) refers to a family of first-order optimization algorithms in which, at each iteration, a randomly selected coordinate (or block of coordinates) is updated while the others are held fixed. This approach is motivated by the need to efficiently solve large-scale, high-dimensional optimization problems—especially when the computation of the full gradient is prohibitive. SCD generalizes classical coordinate descent by randomizing the selection of coordinates and allows further algorithmic extensions enabling parallelism, adaptive sampling, variance reduction, and primal-dual formulations.

1. Algorithmic Foundations and Variants

At its core, SCD applies a partial-gradient update to a randomly chosen coordinate or block:

x_{k+1} = x_k - h \, e_i \, \nabla_i F(x_k)

where e_i is the i-th standard basis vector and i is sampled, typically uniformly at random, from {1, ..., d} for d-dimensional x (Giovacchino et al., 2023). In more general forms, subsets of coordinates are updated in parallel (as in the NSync algorithm (Richtárik et al., 2013)) or according to adaptive distributions, allowing for non-uniform and importance sampling to optimize convergence rates and empirical performance.
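
To make the basic iteration concrete, the following is a minimal NumPy sketch of the update rule above, assuming a simple separable quadratic objective; the function names, stepsize, and test objective are illustrative assumptions, not drawn from the cited works:

```python
import numpy as np

def scd(grad_i, x0, h=0.1, iters=2000, rng=None):
    """Minimal stochastic coordinate descent: at each iteration, pick a
    coordinate i uniformly at random and take a partial-gradient step,
    leaving all other coordinates fixed."""
    rng = rng or np.random.default_rng(0)
    x = x0.astype(float).copy()
    d = x.size
    for _ in range(iters):
        i = rng.integers(d)              # uniform coordinate sampling
        x[i] -= h * grad_i(x, i)         # x_{k+1} = x_k - h * e_i * grad_i F(x_k)
    return x
```

For instance, minimizing F(x) = 0.5 * ||x - c||^2 (whose i-th partial derivative is x[i] - c[i]) drives the iterate to c, since each coordinate touch contracts that coordinate's error by the factor (1 - h).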

Block and minibatch extensions—such as asynchronous stochastic block coordinate descent—improve algorithmic scalability, especially in distributed and parallel environments (Gu et al., 2016, Cheung et al., 2018). Primal-dual variants are deployed in composite convex optimization and saddle-point problems, where either blocks of primal or dual variables (or both) are randomly updated per iteration (Wen et al., 2016, Zhu et al., 2015).

2. Convergence Analysis and Rate Optimization

Convergence of SCD depends fundamentally on the strong convexity and smoothness of the objective and on the coordinate sampling scheme. Under γ-strong convexity and the so-called Expected Separable Overapproximation (ESO) condition, SCD achieves a geometric (linear) rate

E[F(x_k) - F(x^*)] <= (1 - ρ)^k [F(x_0) - F(x^*)]

with contraction factor ρ determined by the coordinate Lipschitz constants, strong convexity coefficients, and the sampling policy (Richtárik et al., 2013, Konečný et al., 2014). In the parallel, non-uniform framework, optimal choice of coordinate probabilities (possibly proportional to the corresponding Lipschitz constants) yields the sharpest rates and may outperform full-gradient or all-coordinate update strategies by orders of magnitude (Richtárik et al., 2013).
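
The Lipschitz-proportional sampling just described can be sketched as follows for a quadratic F(x) = 0.5 x^T A x - b^T x, where the i-th coordinate Lipschitz constant is simply A[i, i]; the concrete problem and stepsize choice here are illustrative assumptions:

```python
import numpy as np

def scd_importance(A, b, iters=5000, rng=None):
    """SCD for F(x) = 0.5 x^T A x - b^T x with coordinates sampled with
    probability proportional to their Lipschitz constants L_i = A[i, i],
    and coordinate-wise stepsizes 1 / L_i."""
    rng = rng or np.random.default_rng(0)
    d = b.size
    L = np.diag(A).copy()            # coordinate-wise Lipschitz constants
    p = L / L.sum()                  # importance-sampling distribution
    x = np.zeros(d)
    for _ in range(iters):
        i = rng.choice(d, p=p)
        g_i = A[i] @ x - b[i]        # i-th partial derivative of F
        x[i] -= g_i / L[i]           # exact minimization along coordinate i
    return x
```

Because the stepsize 1/L_i exactly minimizes the quadratic along the chosen coordinate, the iterate converges linearly to the solution of Ax = b.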

Variance reduction (e.g., using SVRG/SAGA-inspired gradient estimators) can further improve convergence, recovering fast sublinear rates even in the absence of strong convexity (Gu et al., 2016). Accelerated SCD variants (e.g., utilizing Nesterov-type momentum) attain accelerated rates for stochastic optimization, as shown for accelerated randomized coordinate descent algorithms (Bhandari et al., 2018).

3. Asynchronous and Parallel Implementations

Asynchronous parallel SCD has emerged as a principal approach for exploiting architectural parallelism in large-scale settings. In asynchronous SCD, multiple threads process coordinate updates in a lock-free fashion, reading and updating a (possibly inconsistent) shared iterate. Such schemes achieve near-linear speedup up to a hardware-determined limit:

  • Near-linear speedup with up to roughly O(d^{1/2}) processors for unconstrained smooth problems (Liu et al., 2013)
  • Near-linear speedup with up to roughly O(d^{1/4}) processors for separable or constrained objectives (Liu et al., 2014)
  • Tight bounds on the permissible overlap (staleness) degree τ in fully asynchronous, non-smooth settings (Cheung et al., 2018)

The key to theoretical guarantees in asynchronous SCD is the restriction on staleness/overlap (the delay parameter τ), governing the number of updates allowed to occur before any given coordinate is overwritten. Provided these bounds hold and stepsizes are chosen to absorb asynchrony-induced variance, the convergence rate essentially matches that of the serial scheme (Liu et al., 2013, Liu et al., 2014, Cheung et al., 2018).
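
The effect of bounded staleness can be illustrated with a serial simulation (not an actual lock-free implementation): each update is computed from an iterate that is up to τ updates old, mimicking inconsistent reads from shared memory. The quadratic test problem, delay model, and stepsize are illustrative assumptions:

```python
import numpy as np
from collections import deque

def async_scd_sim(A, b, tau=4, h=0.1, iters=20000, rng=None):
    """Serial simulation of asynchronous SCD for F(x) = 0.5 x^T A x - b^T x:
    each partial gradient is evaluated at a stale read of the shared
    iterate, delayed by at most `tau` updates (bounded overlap)."""
    rng = rng or np.random.default_rng(0)
    d = b.size
    x = np.zeros(d)
    history = deque([x.copy()], maxlen=tau + 1)  # last tau+1 iterates
    for _ in range(iters):
        x_stale = history[rng.integers(len(history))]  # inconsistent read
        i = rng.integers(d)
        g_i = A[i] @ x_stale - b[i]  # partial gradient at the stale read
        x[i] -= h * g_i              # "atomic" write to the shared iterate
        history.append(x.copy())
    return x
```

With a conservative stepsize, the stale-gradient iteration still converges to the solution of Ax = b, consistent with the bounded-delay analyses cited above.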

4. Extensions: Block, Primal-Dual, and Variance Reduction Schemes

Modern SCD methods encompass a range of extensions, including block-coordinate and primal-dual schemes. Block-cyclic SCD (BCSC) for deep neural networks partitions parameters and data into blocks, cycling through blocks and assigning freshly permuted minibatches to each, yielding better robustness to outliers and faster empirical convergence (Nakamura et al., 2017). Stochastic block coordinate descent (SBCD), often combined with proximal mappings and adaptive step sizes, allows flexible updates and mini-batching, and is effective for nonsmooth or linearly coupled objectives (Zhu et al., 2015, Wen et al., 2016).
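
A standard instance of the proximal-coordinate pattern is stochastic coordinate descent for the Lasso, min_x 0.5 ||Ax - b||^2 + lam * ||x||_1, where each coordinate update is a gradient step followed by soft-thresholding. This is a generic sketch of that well-known scheme, not the specific algorithm of any single cited paper:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * |.| (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def scd_lasso(A, b, lam, iters=1000, rng=None):
    """Stochastic proximal coordinate descent for the Lasso:
    min_x 0.5 ||Ax - b||^2 + lam * ||x||_1."""
    rng = rng or np.random.default_rng(0)
    n, d = A.shape
    L = (A ** 2).sum(axis=0)           # coordinate Lipschitz constants ||A_:i||^2
    x = np.zeros(d)
    r = A @ x - b                      # maintained residual Ax - b
    for _ in range(iters):
        i = rng.integers(d)
        g_i = A[:, i] @ r              # partial gradient of the smooth part
        x_new = soft_threshold(x[i] - g_i / L[i], lam / L[i])
        r += A[:, i] * (x_new - x[i])  # O(n) incremental residual update
        x[i] = x_new
    return x
```

Maintaining the residual r incrementally keeps each iteration at O(n) cost rather than recomputing Ax from scratch.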

Variance-reduced SCD, such as AsySBCDVR, achieves linear rates under strong convexity and sublinear convergence for general convex objectives, while achieving near-linear multicore speedup (Gu et al., 2016). Primal-dual block-coordinate SCD methods enable efficient solutions for large-scale saddle-point problems, robust PCA, Lasso, and group Lasso (Zhu et al., 2015).

5. Adaptive Sampling and Generalized Search Directions

Recent advances introduce generalizations in sampling strategies and search directions:

  • Adaptive coordinate sampling (e.g., the MUSKETEER algorithm) leverages reinforcement-learning ideas to bias updates toward "high-gain" coordinates while maintaining exploration, achieving almost-sure convergence and favorable rates under the Polyak-Łojasiewicz condition (Leluc et al., 2021).
  • Generalized SCD algorithms use random search directions not restricted to basis vectors. For example, stochastic gradient descent with random search directions (SCORS) allows the update direction to be drawn from any distribution whose second moment is the identity, so that the projected gradient remains an unbiased estimator; this interpolates between coordinate descent, full-gradient descent, and random subspace methods. Such schemes maintain convergence guarantees and allow analysis of the asymptotic covariance induced by the search-direction distribution (Gbaguidi, 25 Mar 2025).
  • Stochastic subspace descent (SSD) generalizes SCD by updating along an ℓ-dimensional random subspace at each iteration, with convergence rates depending on the ratio ℓ/d and diminishing dimension dependence as ℓ increases (Kozak et al., 2020).
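
The subspace idea can be sketched as follows: draw a matrix P with ℓ orthonormal random columns and step along the rescaled projected gradient (d/ℓ) P P^T ∇F(x), which is an unbiased estimate of the full gradient. The test objective, stepsize, and function names are illustrative assumptions:

```python
import numpy as np

def ssd(grad, x0, ell=2, h=0.1, iters=3000, rng=None):
    """Stochastic subspace descent: step along an ell-dimensional random
    subspace spanned by a matrix P with orthonormal columns, using the
    rescaled estimate (d/ell) * P @ P.T @ grad(x), which is unbiased
    for grad(x) since E[P P^T] = (ell/d) I."""
    rng = rng or np.random.default_rng(0)
    x = x0.astype(float).copy()
    d = x.size
    for _ in range(iters):
        G = rng.standard_normal((d, ell))
        P, _ = np.linalg.qr(G)                # orthonormal random subspace basis
        x -= h * (d / ell) * P @ (P.T @ grad(x))  # projected, rescaled step
    return x
```

Setting ell = 1 with coordinate vectors recovers plain SCD, while ell = d recovers full gradient descent, matching the interpolation described above.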

6. Spectral and Dual-Coordinate Methods

Augmenting coordinate-search spaces with spectral or conjugate directions further accelerates convergence:

  • Stochastic Spectral Coordinate Descent (SSCD) and stochastic conjugate descent methods sample enriched sets of directions (eigenvectors, conjugate directions) and interpolate between the baseline RCD rate and a rate independent of the condition number in the fully spectral regime (Kovalev et al., 2018).
  • In dual settings (e.g., stochastic dual coordinate descent), adaptive momentum and block sketching strategies yield linearly convergent algorithms covering a host of classical row-action methods such as randomized Kaczmarz and linearized Bregman iterations, all under a unifying stochastic block update viewpoint (Zeng et al., 2023).
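
As a concrete member of the row-action family mentioned above, randomized Kaczmarz for a consistent system Ax = b samples a row with probability proportional to its squared norm and projects the iterate onto that row's hyperplane. This is a generic sketch of the classical method, not the unified block framework of the cited work:

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=5000, rng=None):
    """Randomized Kaczmarz for a consistent system Ax = b: sample row i
    with probability ||a_i||^2 / ||A||_F^2 and project x onto the
    hyperplane {z : a_i^T z = b_i}."""
    rng = rng or np.random.default_rng(0)
    n, d = A.shape
    row_norms = (A ** 2).sum(axis=1)
    p = row_norms / row_norms.sum()   # row-norm importance sampling
    x = np.zeros(d)
    for _ in range(iters):
        i = rng.choice(n, p=p)
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]  # hyperplane projection
    return x
```

Each projection is a coordinate-style update in the dual, which is why such methods fall under the stochastic block-update viewpoint described above.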

7. Theoretical Insights and Practical Recommendations

The analysis of SCD (and its variants) reveals several crucial insights for algorithm design and deployment:

  • Optimal coordinate/block probabilities (often proportional to coordinate Lipschitz constants) are necessary for sharpest convergence, particularly in non-uniform settings and high dimensions (Richtárik et al., 2013, Giovacchino et al., 2023).
  • Stepsizes must be adjusted to compensate for coordinate-wise Lipschitz constants, dimension, system asynchrony, and block size (Giovacchino et al., 2023, Wen et al., 2016).
  • Variance reduction is critical when unbiased gradient estimates are expensive or highly volatile, and primal-dual or block-based algorithms are recommended for linearly coupled objectives and nonsmooth regularization.
  • In large-scale parallel implementations, lock-free, asynchronous SCD with bounded delay is practically essential for scaling to tens or hundreds of cores (Liu et al., 2013, Liu et al., 2014, Cheung et al., 2018, Gu et al., 2016).
  • Recent generalizations to subspace, random direction, and adaptively sampled update rules offer improved trade-offs between per-iteration cost and convergence, and in special applications (e.g., high-dimensional PDE-constrained learning) may dramatically outperform baseline coordinate schemes (Kozak et al., 2020, Gbaguidi, 25 Mar 2025, Leluc et al., 2021).

In summary, stochastic coordinate descent forms a central algorithmic primitive in contemporary large-scale convex and nonconvex optimization. Its theoretical properties, parallelizability, extensibility, and capacity for algorithmic innovation continue to make it a key research focus across machine learning, signal processing, and computational mathematics.
