Revisiting Convergence of AdaGrad with Relaxed Assumptions (2402.13794v2)
Abstract: In this study, we revisit the convergence of AdaGrad with momentum (covering AdaGrad as a special case) on non-convex smooth optimization problems. We consider a general noise model in which the noise magnitude is controlled by the function value gap together with the gradient magnitude. This model encompasses a broad range of noise assumptions, including bounded noise, sub-Gaussian noise, affine variance noise, and the expected smoothness condition, and it has been shown to be more realistic in many practical applications. Our analysis yields a probabilistic convergence rate which, under the general noise model, reaches $\tilde{\mathcal{O}}(1/\sqrt{T})$, where $T$ denotes the total number of iterations. This rate does not rely on prior knowledge of problem parameters, and it accelerates to $\tilde{\mathcal{O}}(1/T)$ when the noise parameters related to the function value gap and the noise level are sufficiently small. The convergence rate thus matches the lower bound for stochastic first-order methods on non-convex smooth landscapes up to logarithmic factors [Arjevani et al., 2023]. We further derive a convergence bound for AdaGrad with momentum under generalized smoothness, where the local smoothness is controlled by a first-order function of the gradient norm.
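For orientation, the noise condition described in the abstract is commonly written in the form below, and AdaGrad-Norm with heavy-ball momentum admits a simple recursive update. The notation here (constants $A$, $B$, $C$, momentum parameter $\beta$, base step size $\eta$, stochastic gradient $g_t$ at iterate $x_t$) is an illustrative sketch and is not taken verbatim from the paper:

$$
\mathbb{E}\big[\|g_t - \nabla f(x_t)\|^2 \,\big|\, x_t\big] \;\le\; A\big(f(x_t) - f^\ast\big) \;+\; B\,\|\nabla f(x_t)\|^2 \;+\; C,
$$

$$
v_t = v_{t-1} + \|g_t\|^2, \qquad m_t = \beta\, m_{t-1} + (1-\beta)\, g_t, \qquad x_{t+1} = x_t - \frac{\eta}{\sqrt{v_t}}\, m_t .
$$

Under this sketch, taking $A = B = 0$ recovers the classical bounded-variance assumption, $A = 0$ gives affine variance noise, and $\beta = 0$ reduces the update to plain AdaGrad-Norm.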
- Lower bounds for non-convex stochastic optimization. Mathematical Programming, 199(1-2):165–214, 2023.
- SGD with AdaGrad stepsizes: full adaptivity with high probability to unknown parameters, unbounded gradients and affine variance. In International Conference on Machine Learning, 2023.
- Gradient convergence in gradient methods with errors. SIAM Journal on Optimization, 10(3):627–642, 2000.
- Optimization methods for large-scale machine learning. SIAM Review, 60(2):223–311, 2018.
- Robustness to unbounded smoothness of generalized signSGD. In Advances in Neural Information Processing Systems, 2022.
- A simple convergence proof of Adam and Adagrad. Transactions on Machine Learning Research, 2022.
- Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(7):2121–2159, 2011.
- The power of adaptivity in SGD: self-tuning step sizes with unbounded gradients and affine variance. In Conference on Learning Theory, 2022.
- Beyond uniform smoothness: a stopped analysis of adaptive SGD. In Conference on Learning Theory, 2023.
- Wayne A Fuller. Measurement error models. John Wiley & Sons, 2009.
- Global convergence of the heavy-ball method for convex optimization. In European Control Conference, 2015.
- Stochastic first- and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization, 23(4):2341–2368, 2013.
- Stochastic quasi-gradient methods: variance reduction via Jacobian sketching. Mathematical Programming, 188:135–192, 2021.
- SGD: General analysis and improved rates. In International Conference on Machine Learning, pages 5200–5209. PMLR, 2019.
- Benjamin Grimmer. Convergence rates for deterministic and stochastic subgradient methods without Lipschitz continuity. SIAM Journal on Optimization, 29(2):1350–1365, 2019.
- High probability convergence of Adam under unbounded gradients and affine variance noise. arXiv preprint arXiv:2311.02000, 2023.
- High probability bounds for a class of nonconvex algorithms with AdaGrad stepsize. In International Conference on Learning Representations, 2022.
- Better theory for SGD in the nonconvex world. Transactions on Machine Learning Research, 2023. ISSN 2835-8856.
- Feature noise induces loss discrepancy across groups. In International Conference on Machine Learning, pages 5209–5219. PMLR, 2020.
- Online adaptive methods, universality and acceleration. In Advances in Neural Information Processing Systems, 2018.
- On the convergence of stochastic gradient descent with adaptive stepsizes. In International Conference on Artificial Intelligence and Statistics, 2019.
- A high probability analysis of adaptive SGD with momentum. In Workshop on International Conference on Machine Learning, 2020.
- On the convergence of AdaGrad(Norm) on $\mathbb{R}^d$: beyond convexity, non-asymptotic rate and acceleration. In International Conference on Learning Representations, 2023a.
- High probability convergence of stochastic gradient methods. In International Conference on Machine Learning, 2023b.
- BT Poljak and Ya Z Tsypkin. Pseudogradient adaptation and training algorithms. Automation and Remote Control, 34:45–67, 1973.
- Boris T Polyak. Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics and Mathematical Physics, 4(5):1–17, 1964.
- Understanding gradient clipping in incremental gradient methods. In International Conference on Artificial Intelligence and Statistics, 2021.
- Variance-reduced clipping for non-convex optimization. arXiv preprint arXiv:2303.00883, 2023.
- Stochastic reformulations of linear systems: algorithms and convergence theory. SIAM Journal on Matrix Analysis and Applications, 41(2):487–524, 2020.
- A stochastic approximation method. Annals of Mathematical Statistics, pages 400–407, 1951.
- Sebastian Ruder. An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747, 2016.
- A unified analysis of AdaGrad with weighted aggregation and momentum acceleration. IEEE Transactions on Neural Networks and Learning Systems, 2023.
- Less regret via online conditioning. arXiv preprint arXiv:1002.4862, 2010.
- Fast and faster convergence of SGD for over-parameterized models and an accelerated perceptron. In International Conference on Artificial Intelligence and Statistics, pages 1195–1204. PMLR, 2019.
- Convergence of AdaGrad for non-convex objectives: simple proofs and relaxed assumptions. In Conference on Learning Theory, 2023.
- On the convergence of stochastic gradient descent with bandwidth-based step size. Journal of Machine Learning Research, 24(48):1–49, 2023.
- AdaGrad stepsizes: sharp convergence over nonconvex landscapes. Journal of Machine Learning Research, 21(1):9047–9076, 2020.
- Robust regression and Lasso. In Advances in Neural Information Processing Systems, 2008.
- Unified convergence analysis of stochastic momentum methods for convex and non-convex optimization. arXiv preprint arXiv:1604.03257, 2016.
- Improved analysis of clipping algorithms for non-convex optimization. In Advances in Neural Information Processing Systems, 2020a.
- Why gradient clipping accelerates training: a theoretical justification for adaptivity. In International Conference on Learning Representations, 2020b.
- On the convergence and improvement of stochastic normalized gradient descent. Science China Information Sciences, 64:1–13, 2021.
- On the convergence of adaptive gradient methods for nonconvex optimization. In Annual Workshop on Optimization for Machine Learning, 2020.
- A sufficient condition for convergences of Adam and RMSProp. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019.
Authors: Yusu Hong, Junhong Lin