SVRG Meets AdaGrad: Painless Variance Reduction (2102.09645v2)

Published 18 Feb 2021 in cs.LG, math.OC, and stat.ML

Abstract: Variance reduction (VR) methods for finite-sum minimization typically require the knowledge of problem-dependent constants that are often unknown and difficult to estimate. To address this, we use ideas from adaptive gradient methods to propose AdaSVRG, which is a more robust variant of SVRG, a common VR method. AdaSVRG uses AdaGrad in the inner loop of SVRG, making it robust to the choice of step-size. When minimizing a sum of n smooth convex functions, we prove that a variant of AdaSVRG requires $\tilde{O}(n + 1/\epsilon)$ gradient evaluations to achieve an $O(\epsilon)$-suboptimality, matching the typical rate, but without needing to know problem-dependent constants. Next, we leverage the properties of AdaGrad to propose a heuristic that adaptively determines the length of each inner-loop in AdaSVRG. Via experiments on synthetic and real-world datasets, we validate the robustness and effectiveness of AdaSVRG, demonstrating its superior performance over standard and other "tune-free" VR methods.
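
To make the high-level description concrete, below is a minimal sketch of an SVRG outer loop with an AdaGrad inner loop, in the spirit of the AdaSVRG described in the abstract. It is not the authors' exact algorithm: the choice of diagonal AdaGrad, inner-loop averaging, the default inner-loop length, and the `grad_i` interface are all illustrative assumptions.

```python
import numpy as np

def adasvrg_sketch(grad_i, n, w0, outer_iters=20, inner_iters=None,
                   eta=1.0, eps=1e-8):
    """Sketch of SVRG with an AdaGrad inner loop (illustrative, not the paper's exact method).

    grad_i(w, i): gradient of the i-th component function at w (hypothetical interface).
    n: number of component functions; w0: initial point (NumPy array).
    """
    if inner_iters is None:
        inner_iters = n  # a common default for SVRG-style methods
    w_snapshot = w0.copy()
    for _ in range(outer_iters):
        # Full gradient at the snapshot point (the SVRG "anchor").
        full_grad = sum(grad_i(w_snapshot, i) for i in range(n)) / n
        x = w_snapshot.copy()
        accum = np.zeros_like(x)   # AdaGrad accumulator of squared gradients
        x_sum = np.zeros_like(x)
        for _ in range(inner_iters):
            i = np.random.randint(n)
            # Variance-reduced stochastic gradient.
            g = grad_i(x, i) - grad_i(w_snapshot, i) + full_grad
            accum += g * g
            # AdaGrad step: per-coordinate step-sizes make the update
            # robust to the choice of eta.
            x -= eta * g / (np.sqrt(accum) + eps)
            x_sum += x
        # Restart the next outer loop from the averaged inner iterates.
        w_snapshot = x_sum / inner_iters
    return w_snapshot
```

The key design point is that the inner loop's step-sizes are set automatically by the AdaGrad accumulator rather than by a hand-tuned, problem-dependent constant, which is what makes the method robust to the step-size choice.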

Authors (5)
  1. Benjamin Dubois-Taine (5 papers)
  2. Sharan Vaswani (35 papers)
  3. Reza Babanezhad (18 papers)
  4. Mark Schmidt (74 papers)
  5. Simon Lacoste-Julien (95 papers)
Citations (15)