Stochastic Nested Variance Reduction for Nonconvex Optimization (1806.07811v2)

Published 20 Jun 2018 in cs.LG, math.OC, and stat.ML

Abstract: We study finite-sum nonconvex optimization problems, where the objective function is an average of $n$ nonconvex functions. We propose a new stochastic gradient descent algorithm based on nested variance reduction. Compared with conventional stochastic variance reduced gradient (SVRG) algorithm that uses two reference points to construct a semi-stochastic gradient with diminishing variance in each iteration, our algorithm uses $K+1$ nested reference points to build a semi-stochastic gradient to further reduce its variance in each iteration. For smooth nonconvex functions, the proposed algorithm converges to an $\epsilon$-approximate first-order stationary point (i.e., $\|\nabla F(\mathbf{x})\|_2\leq \epsilon$) within $\tilde{O}(n\land \epsilon^{-2}+\epsilon^{-3}\land n^{1/2}\epsilon^{-2})$ number of stochastic gradient evaluations. This improves the best known gradient complexity of SVRG $O(n+n^{2/3}\epsilon^{-2})$ and that of SCSG $O(n\land \epsilon^{-2}+\epsilon^{-10/3}\land n^{2/3}\epsilon^{-2})$. For gradient dominated functions, our algorithm also achieves better gradient complexity than the state-of-the-art algorithms. Thorough experimental results on different nonconvex optimization problems back up our theory.
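
The core idea of nested variance reduction, as described in the abstract, is that each of the $K+1$ reference points is refreshed on its own schedule, and the semi-stochastic gradient is a sum of one large-batch gradient at the outermost reference point plus small-batch correction terms between consecutive reference points. The sketch below is a minimal, illustrative Python implementation of that structure on a toy least-squares finite sum; the function name `snvrg_like`, the batch sizes, loop lengths, and step size are assumptions for a runnable example, not the schedule prescribed by the paper's theory.

```python
# Illustrative sketch of a (K+1)-level nested variance-reduced gradient
# estimator in the spirit of the paper; toy problem and hyperparameters
# are placeholders, not the authors' reference implementation.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 20
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def minibatch_grad(x, idx):
    # Average gradient of the sampled least-squares components at x.
    Ai = A[idx]
    return Ai.T @ (Ai @ x - b[idx]) / len(idx)

def snvrg_like(x0, K=2, epochs=3, m=5, B=(1000, 100, 10), eta=0.05):
    # refs[0..K] are the nested reference points; level l (and its gradient
    # component g[l]) is refreshed every m**(K - l) inner steps, so level K
    # tracks the current iterate while level 0 is fixed for a whole epoch.
    x = x0.copy()
    for _ in range(epochs):
        refs = [x.copy() for _ in range(K + 1)]
        # Level-0 component: one large-batch gradient at the outer reference point.
        g = [minibatch_grad(refs[0], rng.choice(n, B[0], replace=False))]
        g += [np.zeros(d) for _ in range(K)]   # correction terms start at zero
        for t in range(m ** K):
            for l in range(1, K + 1):
                if t % (m ** (K - l)) == 0:
                    # Refresh reference point l and its small-batch correction
                    # between consecutive reference points l and l-1.
                    refs[l] = x.copy()
                    idx = rng.choice(n, B[l], replace=False)
                    g[l] = minibatch_grad(refs[l], idx) - minibatch_grad(refs[l - 1], idx)
            v = sum(g)          # nested semi-stochastic gradient
            x = x - eta * v
    return x

x_hat = snvrg_like(np.zeros(d))
print("full gradient norm:", np.linalg.norm(A.T @ (A @ x_hat - b) / n))
```

Because the level-$l$ component is recomputed only every $m^{K-l}$ steps, the per-iteration cost is dominated by the smallest batch while the summed estimator keeps its variance controlled, which is the mechanism behind the improved gradient complexity quoted above.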

Authors (3)
  1. Dongruo Zhou (51 papers)
  2. Pan Xu (68 papers)
  3. Quanquan Gu (198 papers)
Citations (136)
