First and Second Order Approximations to Stochastic Gradient Descent Methods with Momentum Terms (2504.13992v1)
Abstract: Stochastic Gradient Descent (SGD) methods see many uses in optimization problems. Modifications to the algorithm, such as momentum terms, are known to produce better results in certain cases, but much of this evidence is empirical rather than supported by rigorous proof. While the dynamics of gradient descent methods can be studied through continuous approximations, existing works only cover scenarios with constant learning rates or SGD without momentum terms. We present approximation results under weak assumptions for SGD that allow learning rates and momentum parameters to vary with time.
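To make the setting concrete, the sketch below implements a generic heavy-ball SGD iteration in which both the learning rate and the momentum parameter may depend on the iteration index. This is a standard formulation of momentum SGD under assumed notation, not necessarily the exact scheme analysed in the paper; the function names and the noisy quadratic test problem are illustrative.

```python
import numpy as np

def sgd_momentum(grad, x0, eta, mu, n_steps, rng=None):
    """Heavy-ball SGD with time-varying step size eta(k) and momentum mu(k).

    A generic sketch: v_{k+1} = mu(k) * v_k - eta(k) * g_k,
                      x_{k+1} = x_k + v_{k+1},
    where g_k is a stochastic gradient estimate at x_k.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for k in range(n_steps):
        g = grad(x, rng)            # stochastic gradient estimate at x
        v = mu(k) * v - eta(k) * g  # momentum (velocity) update
        x = x + v                   # parameter update
    return x

# Illustrative use: noisy quadratic f(x) = 0.5 * ||x||^2 with a decaying
# learning rate and a constant momentum parameter (both could vary with k).
if __name__ == "__main__":
    noisy_grad = lambda x, rng: x + 0.1 * rng.standard_normal(x.shape)
    x_final = sgd_momentum(
        noisy_grad,
        x0=np.ones(3),
        eta=lambda k: 0.1 / (1 + 0.01 * k),  # time-varying learning rate
        mu=lambda k: 0.9,                    # momentum parameter
        n_steps=1000,
    )
    print(x_final)
```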