
Finding Local Minima via Stochastic Nested Variance Reduction (1806.08782v1)

Published 22 Jun 2018 in cs.LG and stat.ML

Abstract: We propose two algorithms that can find local minima faster than the state-of-the-art algorithms in both finite-sum and general stochastic nonconvex optimization. At the core of the proposed algorithms is $\text{One-epoch-SNVRG}^+$ using stochastic nested variance reduction (Zhou et al., 2018a), which outperforms the state-of-the-art variance reduction algorithms such as SCSG (Lei et al., 2017). In particular, for finite-sum optimization problems, the proposed $\text{SNVRG}^{+}+\text{Neon2}^{\text{finite}}$ algorithm achieves $\tilde{O}(n^{1/2}\epsilon^{-2}+n\epsilon_H^{-3}+n^{3/4}\epsilon_H^{-7/2})$ gradient complexity to converge to an $(\epsilon, \epsilon_H)$-second-order stationary point, which outperforms $\text{SVRG}+\text{Neon2}^{\text{finite}}$ (Allen-Zhu and Li, 2017), the best existing algorithm, in a wide regime. For general stochastic optimization problems, the proposed $\text{SNVRG}^{+}+\text{Neon2}^{\text{online}}$ achieves $\tilde{O}(\epsilon^{-3}+\epsilon_H^{-5}+\epsilon^{-2}\epsilon_H^{-3})$ gradient complexity, which is better than both $\text{SVRG}+\text{Neon2}^{\text{online}}$ (Allen-Zhu and Li, 2017) and Natasha2 (Allen-Zhu, 2017) in certain regimes. Furthermore, we explore the acceleration brought by third-order smoothness of the objective function.
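
For reference, the $(\epsilon, \epsilon_H)$-second-order stationary point in the abstract follows the standard definition in this line of work: a point $x$ with $\|\nabla f(x)\|_2 \le \epsilon$ and $\lambda_{\min}(\nabla^2 f(x)) \ge -\epsilon_H$, i.e., a small gradient and no strongly negative curvature. A common parameter choice in this literature is $\epsilon_H = \sqrt{\epsilon}$.

The core building block is a nested variance-reduced gradient estimator. The Python sketch below illustrates only the basic one-level semi-stochastic correction that such estimators nest over several reference points; it is an illustrative simplification on a hypothetical toy problem, not the authors' One-epoch-SNVRG$^+$, and it omits the Neon2 negative-curvature search entirely.

import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 5
A = rng.normal(size=(n, d))   # toy finite-sum least-squares data (hypothetical example)
b = rng.normal(size=n)

def grad_i(i, x):
    # gradient of the i-th component f_i(x) = 0.5 * (a_i^T x - b_i)^2
    return A[i] * (A[i] @ x - b[i])

def full_grad(x):
    # full gradient of f(x) = (1/n) * sum_i f_i(x)
    return A.T @ (A @ x - b) / n

def vr_gradient(x, x_ref, g_ref, batch):
    # variance-reduced estimate: g_ref + mean_i [grad_i(x) - grad_i(x_ref)];
    # SNVRG nests this correction over multiple reference points, only one level is shown here
    corr = np.mean([grad_i(i, x) - grad_i(i, x_ref) for i in batch], axis=0)
    return g_ref + corr

# one "epoch": fix a reference point, then take cheap corrected steps
x = np.zeros(d)
x_ref, g_ref = x.copy(), full_grad(x)
eta = 0.05
for _ in range(20):
    batch = rng.choice(n, size=10, replace=False)
    x = x - eta * vr_gradient(x, x_ref, g_ref, batch)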

Authors (3)
  1. Dongruo Zhou (51 papers)
  2. Pan Xu (68 papers)
  3. Quanquan Gu (198 papers)
Citations (23)
