
Entropic Risk-Averse Generalized Momentum Methods (2204.11292v4)

Published 24 Apr 2022 in math.OC

Abstract: In the context of first-order algorithms subject to random gradient noise, we study the trade-offs between the convergence rate (which quantifies how fast the initial conditions are forgotten) and the "risk" of suboptimality, i.e. deviations from the expected suboptimality. We focus on a general class of momentum methods (GMM) which recover popular methods such as gradient descent (GD), accelerated gradient descent (AGD), and heavy-ball (HB) method as special cases depending on the choice of GMM parameters. We use well-known risk measures "entropic risk" and "entropic value at risk" to quantify the risk of suboptimality. For strongly convex smooth minimization, we first obtain new convergence rate results for GMM with a unified theory that is also applicable to both AGD and HB, improving some of the existing results for HB. We then provide explicit bounds on the entropic risk and entropic value at risk of suboptimality at a given iterate which also provides direct bounds on the probability that the suboptimality exceeds a given threshold based on Chernoff's inequality. Our results unveil fundamental trade-offs between the convergence rate and the risk of suboptimality. We then plug the entropic risk and convergence rate estimates we obtained in a computationally tractable optimization framework and propose entropic risk-averse GMM (RA-GMM) and entropic risk-averse AGD (RA-AGD) methods which can select the GMM parameters to systematically trade-off the entropic value at risk with the convergence rate. We show that RA-AGD and RA-GMM lead to improved performance on quadratic optimization and logistic regression problems compared to the standard choice of parameters. To our knowledge, our work is the first to resort to coherent measures to design the parameters of momentum methods in a systematic manner.
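The abstract's generalized momentum method (GMM) family and the entropic risk measure can be sketched concretely. The three-parameter recursion below is a common parameterization of GMM in the momentum literature (the paper's exact definition may differ), and the entropic risk is estimated by Monte Carlo over noise realizations; the quadratic objective, step sizes, and noise level are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

# GMM on a strongly convex quadratic f(x) = 0.5 * mu * x^2 with additive
# gradient noise. A common parameterization (assumed here):
#   y_k     = x_k + gamma * (x_k - x_{k-1})
#   x_{k+1} = x_k - alpha * grad f(y_k) + beta * (x_k - x_{k-1})
# Special cases: beta = gamma = 0 -> GD; gamma = 0, beta > 0 -> heavy-ball;
# beta = gamma -> accelerated gradient descent (AGD).
def gmm_run(alpha, beta, gamma, mu=1.0, sigma=0.0, x0=10.0, iters=200, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    x_prev, x = x0, x0
    for _ in range(iters):
        y = x + gamma * (x - x_prev)
        grad = mu * y + sigma * rng.standard_normal()  # noisy gradient oracle
        x_prev, x = x, x - alpha * grad + beta * (x - x_prev)
    return x

# Entropic risk of a random suboptimality S:
#   rho_theta(S) = (1/theta) * log E[exp(theta * S)],
# estimated from samples with a log-sum-exp for numerical stability.
def entropic_risk(samples, theta):
    m = theta * np.asarray(samples)
    return (np.max(m) + np.log(np.mean(np.exp(m - np.max(m))))) / theta

# Suboptimality f(x_K) - f(x*) over independent noise realizations
# for one (illustrative) AGD-style parameter choice.
subopt = [0.5 * gmm_run(0.1, 0.3, 0.3, sigma=0.5,
                        rng=np.random.default_rng(s)) ** 2
          for s in range(500)]
risk = entropic_risk(subopt, theta=1.0)
mean = float(np.mean(subopt))
print(f"mean suboptimality {mean:.4f}, entropic risk (theta=1) {risk:.4f}")
```

By Jensen's inequality the entropic risk upper-bounds the expected suboptimality and increases with the risk-aversion parameter theta, which is the quantity the paper's RA-GMM/RA-AGD designs trade off against the convergence rate.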
