Adaptive control mechanisms in gradient descent algorithms (2508.19100v1)
Abstract: The problem of designing adaptive stepsize sequences for the gradient descent method applied to convex and locally smooth functions is studied. We take an adaptive control perspective and design update rules for the stepsize that make use of both past (measured) and future (predicted) information. We show that Lyapunov analysis can guide the systematic design of adaptive parameters that strike a balance between convergence rates and robustness to computational errors or inexact gradient information. Theoretical and numerical results indicate that closed-loop adaptation guided by system theory is a promising approach for designing new classes of adaptive optimization algorithms with improved convergence properties.
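To make the closed-loop idea concrete, here is a minimal sketch (not the paper's actual update rule) of gradient descent whose stepsize is adapted online from measured past gradients. The secant-style local smoothness estimate `L_est`, the safety factor `beta`, and the function `adaptive_gradient_descent` are illustrative assumptions, not constructs from the paper.

```python
import numpy as np

def adaptive_gradient_descent(grad, x0, alpha0=1.0, beta=0.9, iters=100):
    """Gradient descent with a stepsize adapted from measured gradient changes.

    grad   : callable returning the gradient at a point
    x0     : initial iterate
    alpha0 : initial stepsize (assumed default)
    beta   : safety factor for the stepsize feedback rule (assumed)
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    alpha = alpha0
    for _ in range(iters):
        x_new = x - alpha * g
        g_new = grad(x_new)
        # Estimate the local smoothness constant from measured information:
        #   L_est ~ ||g_new - g|| / ||x_new - x||   (secant-style estimate)
        dx, dg = x_new - x, g_new - g
        denom = np.linalg.norm(dx)
        if denom > 0:
            L_est = np.linalg.norm(dg) / denom
            if L_est > 0:
                # Feedback rule: grow alpha gently, but cap it below 1/L_est.
                alpha = min(alpha / beta, beta / L_est)
        x, g = x_new, g_new
    return x

# Usage on a simple convex quadratic f(x) = 0.5 * x^T A x.
A = np.diag([1.0, 10.0])
x_star = adaptive_gradient_descent(lambda x: A @ x, x0=[5.0, -3.0])
print(x_star)  # approaches the minimizer at the origin
```

The shrink/cap rule above only illustrates the general pattern of using measured information to regulate the stepsize; the paper's Lyapunov-guided rules, which also incorporate predicted information, are not reproduced here.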