
Stochastic Quasi-Newton Method

Updated 3 February 2026
  • Stochastic quasi-Newton methods are optimization techniques that approximate Hessian curvature using noisy, first-order information for large-scale and nonconvex problems.
  • They integrate quasi-Newton updates with variance reduction, damping, and regularization techniques to enhance convergence reliability and efficiency.
  • Variants exploit structured curvature, adaptive step sizes, and proximal operations to achieve fast convergence in high-dimensional applications such as deep learning.

A stochastic quasi-Newton method is a class of optimization algorithms that adapt quasi-Newton methodology—approximating or exploiting curvature (Hessian) information—in settings where only noisy first-order information is available, typical of large-scale or online learning tasks. These methods generalize classic quasi-Newton schemes (e.g. BFGS, L-BFGS) for stochastic, high-dimensional, and often nonconvex or nonsmooth objectives, enabling curvature exploitation under stringent computational and oracle constraints. Across contemporary literature, stochastic quasi-Newton methods encompass variants for composite optimization, nonconvex and finite-sum minimization, variance-reduced schemes, line-search-regularized approaches, coordinate and block-structured updates, proximal extensions, and robust stochastic implementations.

1. Formalization and Foundational Principles

Stochastic quasi-Newton methods solve optimization problems of the form

$$\min_{x\in\mathbb R^n}\; \mathbb E_\xi[f(x;\xi)] + R(x)$$

where $f(x;\xi)$ is smooth and possibly nonconvex, and $R(x)$ is a convex regularizer or constraint penalty (possibly nonsmooth). The method accesses only a stochastic first-order oracle delivering unbiased gradient estimates $g(x,\xi)$ with controlled variance. The key idea is to enhance vanilla stochastic (proximal) gradient steps by applying a state-dependent positive-definite preconditioner $H_k \approx [\nabla^2 f(x_k)]^{-1}$, constructed using a quasi-Newton (most commonly L-BFGS) update based on empirical quasi-Newton pairs $(s_k, y_k)$ formed from stochastic gradients and iterates (Wang et al., 2014, Wang et al., 2016, Byrd et al., 2014, Luo et al., 2016, Yang et al., 2019, Chen et al., 2019, Lucchi et al., 2015).
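As a concrete illustration of the pair-construction step, the following sketch forms $(s_k, y_k)$ pairs along a stochastic gradient trajectory on a hypothetical least-squares problem (all constants illustrative). It uses a common safeguard from this literature: both gradients of a pair are evaluated on the same mini-batch, so that $y_k$ reflects curvature along $s_k$ rather than sampling noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: finite-sum least squares,
#   f(x) = (1/2N) * ||A x - b||^2
N, d = 200, 5
A = rng.standard_normal((N, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(N)

def stochastic_grad(x, batch):
    """Unbiased mini-batch gradient estimate g(x, xi)."""
    Ab, bb = A[batch], b[batch]
    return Ab.T @ (Ab @ x - bb) / len(batch)

x = np.zeros(d)
alpha, pairs = 0.05, []
for k in range(50):
    batch = rng.choice(N, size=32, replace=False)
    g = stochastic_grad(x, batch)
    x_new = x - alpha * g
    # Evaluate both gradients of a pair on the SAME mini-batch, so that
    # y_k measures curvature along s_k rather than sampling noise.
    s, y = x_new - x, stochastic_grad(x_new, batch) - g
    if s @ y > 1e-10:   # keep only curvature-consistent pairs
        pairs.append((s, y))
    x = x_new
```

The collected pairs would feed the L-BFGS construction of $H_k$; the filter on $s^\top y$ is the simplest of the safeguards discussed below.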

The necessity of statistical safeguards (damping, regularization, coordinate selection) is widely recognized, with most modern schemes requiring that $H_k$ maintain uniform spectral bounds, often through Powell/Byrd damping or explicit regularization (Wang et al., 2016, Chen et al., 2019, Li et al., 2018, Wills et al., 2019). Integration of variance reduction is pivotal to achieving optimal or near-optimal oracle complexity and linear convergence in strongly convex regimes (Lucchi et al., 2015, Zhang et al., 2020, Song et al., 2024, Sun et al., 2024).

2. Algorithmic Frameworks and Methodological Advances

A representative stochastic quasi-Newton iteration is given by

$$x_{k+1} = x_k - \alpha_k H_k g_k$$

where $H_k$ is constructed via L-BFGS (using, e.g., the last $m$ pairs $(s_j, y_j)$), and $g_k$ is a mini-batch stochastic gradient. Variations exist for composite settings, such as the stochastic extra-step quasi-Newton (SEQN) method (Yang et al., 2019), which handles objectives of the form $f(x) + g(x)$ using
$$d^k = -W_k F^{\Lambda_k}_{v^k}(x^k),\qquad z^k = x^k + \beta_k d^k,\qquad x^{k+1} = \operatorname{prox}_{g}^{\Lambda_{k,+}}\left(x^k + \alpha_k d^k - \Lambda_{k,+}^{-1} v^k_+\right),$$
with $F^\Lambda_v(x)$ encoding a proximal fixed-point condition and $W_k$ the quasi-Newton preconditioner.
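The product $H_k g_k$ is typically computed matrix-free via the standard L-BFGS two-loop recursion. A minimal sketch follows (toy dimensions and names are illustrative), checked against the closed-form one-pair BFGS inverse update $H_1 = (I - \rho s y^\top) H_0 (I - \rho y s^\top) + \rho s s^\top$:

```python
import numpy as np

def two_loop(grad, pairs, gamma=1.0):
    """Compute H_k @ grad via the L-BFGS two-loop recursion.

    pairs: list of (s_j, y_j) curvature pairs, oldest first.
    gamma: scaling of the initial inverse-Hessian guess H0 = gamma * I.
    """
    q = grad.copy()
    alphas = []
    for s, y in reversed(pairs):          # newest pair first
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    r = gamma * q                          # apply H0
    for (s, y), a in zip(pairs, reversed(alphas)):  # oldest pair first
        rho = 1.0 / (y @ s)
        beta = rho * (y @ r)
        r += (a - beta) * s
    return r

# Sanity check against the closed-form one-pair BFGS inverse update.
rng = np.random.default_rng(1)
d = 4
s, y, g = rng.standard_normal(d), rng.standard_normal(d), rng.standard_normal(d)
if s @ y < 0:                  # ensure a curvature-consistent pair
    y = -y
rho = 1.0 / (y @ s)
I = np.eye(d)
H1 = (I - rho * np.outer(s, y)) @ (I - rho * np.outer(y, s)) + rho * np.outer(s, s)
direction = two_loop(g, [(s, y)])
```

With $m$ stored pairs the recursion costs $O(md)$ per call, which is what makes these methods competitive with SGD at scale.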

For general unconstrained nonconvex optimization, the stochastic damped L-BFGS (SdLBFGS) method updates the inverse Hessian estimate via a “damped” secant correction that ensures $s_k^T y_k \gg 0$ in expectation, even under high noise (Wang et al., 2016, Li et al., 2018). Variable sample-size methods extend these ideas to state-dependent or nonstationary oracle variance (Jalilzadeh et al., 2018). Methods with adaptive step lengths use accept/reject Markov proposals and stochastic line search, as in (Wills et al., 2018, Wills et al., 2019), combining L-BFGS secant fits with selection over receding windows of iterates and gradients, often with backtracking and adaptive step-size schedules.
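A minimal sketch of Powell-style damping, the device underlying such corrections, is below. The constant $c = 0.2$ is the classical Powell choice and $B = I$ stands in for the current Hessian approximation; SdLBFGS uses a related correction with its own constants, so treat this as illustrative rather than the exact published rule.

```python
import numpy as np

def powell_damp(s, y, B=None, c=0.2):
    """Replace y by a convex combination y_bar = theta*y + (1-theta)*B s
    chosen so that s^T y_bar >= c * s^T B s > 0, keeping the BFGS update
    well posed even when the raw stochastic pair has s^T y small or
    negative."""
    if B is None:
        B = np.eye(len(s))
    sBs = s @ B @ s
    sy = s @ y
    if sy >= c * sBs:
        theta = 1.0                        # pair already acceptable
    else:
        theta = (1.0 - c) * sBs / (sBs - sy)
    return theta * y + (1.0 - theta) * (B @ s)

# A noisy pair with negative curvature gets repaired:
s = np.array([1.0, 0.0])
y = np.array([-0.5, 0.3])     # s @ y = -0.5 < 0
y_bar = powell_damp(s, y)     # now s @ y_bar = c * s^T B s = 0.2
```

When damping triggers, the construction yields $s^\top \bar y = c\, s^\top B s$ exactly, which is what gives the uniform spectral bounds cited above.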

Structured stochastic quasi-Newton methods exploit block, low-rank, or Kronecker product structure in the Hessian—for example, deep learning settings—to accelerate computation and exploit scale (Yang et al., 2020), and coordinate/active-set selection further reduces per-iteration costs (Yang et al., 2019).

Variance-reduced quasi-Newton methods, such as Vite (Lucchi et al., 2015) and SpiderSQN (Zhang et al., 2020), integrate SVRG- or SPIDER-style gradient estimators with limited-memory BFGS to achieve optimal sample complexity. Single-loop and asynchronous variants have been developed for better parallel efficiency and practical scalability (Tong et al., 2020, Song et al., 2024).
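A hedged sketch of the SVRG-style gradient estimator that such methods combine with L-BFGS preconditioning; the problem, step size, and epoch count are illustrative, and the quasi-Newton preconditioning step is indicated only as a comment (plain scaling is used so the snippet stays self-contained):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical finite-sum least squares with known solution x_star.
N, d = 100, 3
A = rng.standard_normal((N, d))
x_star = np.array([1.0, -2.0, 0.5])
b = A @ x_star

full_grad = lambda x: A.T @ (A @ x - b) / N
comp_grad = lambda x, i: A[i] * (A[i] @ x - b[i])  # gradient of component i

x = np.zeros(d)
for epoch in range(20):
    x_ref = x.copy()
    mu = full_grad(x_ref)              # full gradient at the snapshot
    for _ in range(N):
        i = rng.integers(N)
        # SVRG estimator: unbiased, with variance vanishing as the
        # iterate and snapshot both approach the minimizer.
        g = comp_grad(x, i) - comp_grad(x_ref, i) + mu
        # A stochastic quasi-Newton method would precondition here,
        # e.g. x -= alpha * two_loop(g, pairs); plain scaling instead.
        x -= 0.05 * g
```

The variance-reduced estimator is what upgrades the sublinear SGD rate to the linear rates discussed in Section 3.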

Table: Typical Components in State-of-the-Art S-QN Methods

| Component | Description | Example Papers |
|---|---|---|
| Quasi-Newton update | Damped/regularized L-BFGS with spectral bounds | (Wang et al., 2016, Li et al., 2018) |
| Variance reduction | SVRG, SPIDER, SAGA gradient estimators | (Lucchi et al., 2015, Zhang et al., 2020) |
| Proximal handling | Scaled/semismooth Newton prox subroutines | (Yang et al., 2019, Song et al., 2024) |
| Step-size policy | Fixed, diminishing, adaptive, or clipped | (Wills et al., 2018, Sun et al., 2024) |
| Structured curvature | Block/coordinate/low-rank Hessian correction | (Yang et al., 2020, Yang et al., 2019) |

3. Theoretical Guarantees and Complexity

For strongly convex objectives with bounded Hessians and stochastic gradients, classical stochastic quasi-Newton methods achieve an $O(1/k)$ convergence rate in expected objective gap with properly decaying step sizes, matching SGD but typically with smaller constants due to improved conditioning (Byrd et al., 2014). Variance-reduced stochastic quasi-Newton methods achieve global linear (geometric) rates and optimal oracle complexities $O(n + n^{1/2}\epsilon^{-2})$ for finite-sum problems (Lucchi et al., 2015, Zhang et al., 2020). Recent momentum-accelerated quasi-Newton variants further match the best-known lower bounds on first-order stationarity (Zhang et al., 2020).

General-purpose nonconvex variants guarantee that $\liminf_{k\to\infty} \mathbb E\,\|\nabla f(x_k)\|^2 = 0$ almost surely, with $O(\epsilon^{-2})$ sample complexity for returning an $\epsilon$-stationary point (Wang et al., 2016, Wang et al., 2014). More advanced methods, such as those for $(L_0, L_1)$-smoothness, achieve the best-known $O(\epsilon^{-3})$ sample complexity while leveraging curvature and gradient clipping (Sun et al., 2024).
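The clipping idea under $(L_0, L_1)$-smoothness can be sketched as follows (constants and the helper `H_mv` are illustrative, not the specific rule of Sun et al., 2024): cap the step so its length scales like $1/\|g\|$ when the gradient is large, since curvature may grow with the gradient norm in this smoothness class.

```python
import numpy as np

def clipped_step(x, g, H_mv, alpha=0.5, clip=1.0):
    """One preconditioned step with gradient-norm clipping: the effective
    step size shrinks like 1/||g|| for large gradients, preventing
    blow-up where curvature scales with the gradient norm.
    H_mv(v) applies the preconditioner H_k to a vector v."""
    eta = min(alpha, clip / (np.linalg.norm(g) + 1e-12))
    return x - eta * H_mv(g)

x = np.array([10.0, -10.0])
g = np.array([100.0, -100.0])                 # huge gradient far from optimum
x_new = clipped_step(x, g, H_mv=lambda v: v)  # identity preconditioner
```

The step length is capped at `clip` regardless of how large the stochastic gradient is, while small gradients still receive the full step size `alpha`.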

Composite and nonsmooth methods based on stochastic quasi-Newton support proximal steps with per-iteration $O(d)$ complexity using compact representations and semismooth Newton updates (Song et al., 2024). In large-scale linear systems, stochastic quasi-Newton converges almost surely to the true least-squares solution under mild moment assumptions on sketched gradients (Chung et al., 2017).
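For $R = \lambda\|\cdot\|_1$ under a diagonal metric, the scaled proximal operator separates coordinate-wise, which is one simple way such $O(d)$ per-iteration costs arise; a minimal sketch (function name and constants illustrative):

```python
import numpy as np

def prox_l1_diag(v, lam, D):
    """Prox of lam*||x||_1 in the diagonal metric diag(D):
        argmin_x  lam*||x||_1 + 0.5 * sum_i D_i * (x_i - v_i)^2.
    The problem separates per coordinate into soft-thresholding with
    coordinate-wise thresholds lam / D_i, so the whole step costs O(d)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam / D, 0.0)

v = np.array([3.0, -0.2, 0.6])
D = np.array([1.0, 1.0, 2.0])          # diagonal quasi-Newton metric
out = prox_l1_diag(v, lam=0.5, D=D)    # thresholds 0.5, 0.5, 0.25
```

Non-diagonal metrics do not separate this way, which is why the cited methods resort to compact representations and semismooth Newton inner solvers.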

4. Practical Techniques, Variants, and Computational Aspects

Implementations universally rely on the two-loop recursion for L-BFGS, with typical memory $m = 5$–$20$, yielding $O(md)$ per-iteration cost, competitive with SGD for moderate $m$ and dominating the cost profile in high-dimensional regimes (Li et al., 2018, Byrd et al., 2014). Curvature pairs are updated at intervals to amortize Hessian-vector products. Spectral regularization, damping, and normalization are standard to avoid divergence or stagnation (Wang et al., 2016, Li et al., 2018, Chen et al., 2019). Coordinate selection or blockwise updating can trade per-iteration cost for some accuracy loss, but is effective in ultra-high-dimensional settings (Yang et al., 2019, Yang et al., 2020).
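The interval-based pair update can be sketched as follows (constants illustrative; the averaging-and-larger-batch device follows the style of Byrd et al., 2014, who use a Hessian-vector product on the larger batch, whereas a gradient difference is used here for simplicity):

```python
import numpy as np

rng = np.random.default_rng(3)
N, d = 300, 4
A = rng.standard_normal((N, d))
b = A @ rng.standard_normal(d)   # toy least-squares problem

def batch_grad(x, batch):
    Ab, bb = A[batch], b[batch]
    return Ab.T @ (Ab @ x - bb) / len(batch)

x = np.zeros(d)
L = 10                      # refresh curvature pairs every L steps
window, pairs, x_prev_avg = [], [], None
for k in range(1, 101):
    x -= 0.05 * batch_grad(x, rng.choice(N, size=16, replace=False))
    window.append(x.copy())
    if k % L == 0:
        # Average recent iterates to denoise the displacement s ...
        x_avg = np.mean(window, axis=0)
        if x_prev_avg is not None:
            s = x_avg - x_prev_avg
            # ... and difference gradients on one larger, shared batch,
            # amortizing the extra cost over L cheap steps.
            big = rng.choice(N, size=128, replace=False)
            y = batch_grad(x_avg, big) - batch_grad(x_prev_avg, big)
            if s @ y > 1e-10:
                pairs.append((s, y))
        x_prev_avg, window = x_avg, []
```

The expensive curvature evaluation thus runs once per $L$ cheap stochastic steps, keeping the average per-iteration cost close to SGD's.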

Adaptive and line-search policies, including stochastic line search and step rejection, provide robustness in deep learning and ill-conditioned problems (Wills et al., 2018, Wills et al., 2019). Semismooth Newton proximal methods with efficient inner solvers make S-QN competitive for composite and $\ell_1$-regularized objectives (Yang et al., 2019, Song et al., 2024). Methods have been implemented in production frameworks (e.g. TensorFlow, PyTorch), showing convergence stability and performance competitive with, or superior to, first-order methods on deep networks and classical ML benchmarks (Li et al., 2018, Indrapriyadarsini et al., 2019).

Parallel and asynchronous designs offer significant wall-clock speedups and scalability, provided the memory and curvature update synchronization is managed properly (Tong et al., 2020, Song et al., 2024). Absence of common random numbers (CRNs) in sampling oracles requires explicit eigenvalue control in L-BFGS updates and careful adaptive step size routines (Menickelly et al., 2023).

5. Empirical Performance and Comparative Evaluation

Stochastic quasi-Newton algorithms consistently outperform SGD and basic first-order variance-reduced methods in both convergence speed and final objective gap, especially for ill-conditioned, sparse, or large-scale problems (Lucchi et al., 2015, Byrd et al., 2014, Yang et al., 2019). Variance-reduced quasi-Newton methods such as Vite and SpiderSQN reach lower objective values in fewer epochs and in significantly less wall-clock time on convex and nonconvex losses, including deep learning settings (Lucchi et al., 2015, Zhang et al., 2020). In highly ill-conditioned or large logistic regression tasks, coordinate/structured methods yield 2–10× faster convergence than classical alternatives (Yang et al., 2019, Yang et al., 2020). In both linear least-squares and learning tasks, S-QN achieves robustness (convergence under broad sketching/sampling distributions), algorithmic stability (avoidance of divergence or numerical instability with moderate regularization), and scalability in memory and compute (Chung et al., 2017, Wills et al., 2018, Song et al., 2024). Adaptive and robust step-size schemes further enhance stability in unconstrained stochastic environments (Wills et al., 2018, Wills et al., 2019), and hybrid methods incorporating Nesterov acceleration or momentum exhibit further improvements, especially in deep learning (Indrapriyadarsini et al., 2019, Zhang et al., 2020).

6. Notable Extensions, Limitations, and Future Directions

Recent progress includes adaptation to non-uniform smoothness via $(L_0, L_1)$-smoothness and gradient-clipped quasi-Newton steps (Sun et al., 2024), guaranteed linear convergence in nonsmooth and composite settings via single-loop and semismooth Newton solvers (Song et al., 2024), and advanced block/coordinate structures for deep learning (Yang et al., 2020, Yang et al., 2019). Analytical frameworks now cover settings without common random numbers (Menickelly et al., 2023) and heavy-tailed Hessian approximations (Pinta, 28 Feb 2025), using high-probability and stopping-time analyses.

Ongoing research directions include (i) further scalability for distributed and asynchronous settings, (ii) theory and design for high-noise and limited-memory regimes, (iii) efficient integration of curvature exploitation in federated and decentralized learning, and (iv) principled handling of the interplay between stochasticity, nonsmoothness, and curvature, especially for modern high-dimensional nonconvex deep architectures (Tong et al., 2020, Pinta, 28 Feb 2025, Yang et al., 2020, Song et al., 2024). Open theoretical challenges remain in establishing universally optimal rates for general nonconvex stochastic composite problems and in designing practical schemes under minimal spectral regularity assumptions.
