Anderson acceleration of gradient methods with energy for optimization problems (2211.08578v1)
Abstract: Anderson acceleration (AA), an efficient technique for speeding up the convergence of fixed-point iterations, can be adapted to accelerate optimization methods. We propose a novel optimization algorithm by applying Anderson acceleration to the energy adaptive gradient method (AEGD) [arXiv:2010.05109]. The feasibility of our algorithm is examined in light of convergence results for AEGD, even though AEGD is not a fixed-point iteration. We also quantify the accelerated convergence rate of AA applied to gradient descent, showing an improvement by a factor of the gain achieved at each Anderson mixing step. Our experimental results show that the proposed algorithm requires little hyperparameter tuning and exhibits fast, superior convergence.
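To make the mechanism concrete, below is a minimal sketch of window-m Anderson acceleration applied to the gradient-descent fixed-point map g(x) = x - lr * grad(x). This illustrates only the generic AA mixing step described in the abstract, not the paper's AEGD-based algorithm; the function name, parameters, and the quadratic test problem are hypothetical choices made for illustration.

```python
import numpy as np

def anderson_gd(grad, x0, lr=0.1, m=5, iters=100):
    """Window-m Anderson acceleration of the gradient-descent map
    g(x) = x - lr * grad(x).  Generic AA sketch (hypothetical names),
    not the paper's AEGD-based scheme."""
    x = np.asarray(x0, dtype=float)
    xs, gs = [], []                            # histories of x_k and g(x_k)
    for _ in range(iters):
        gx = x - lr * grad(x)                  # plain gradient step = fixed-point map
        xs.append(x.copy()); gs.append(gx.copy())
        if len(xs) > m + 1:                    # keep a window of m+1 past points
            xs.pop(0); gs.pop(0)
        # residuals f_j = g(x_j) - x_j for the stored window
        F = np.stack([g - xj for g, xj in zip(gs, xs)], axis=1)
        if F.shape[1] == 1:                    # not enough history yet: ordinary step
            x = gx
            continue
        dF = F[:, 1:] - F[:, :-1]              # residual differences
        # least-squares mixing coefficients: min_gamma || f_k - dF @ gamma ||
        gamma, *_ = np.linalg.lstsq(dF, F[:, -1], rcond=None)
        G = np.stack(gs, axis=1)
        dG = G[:, 1:] - G[:, :-1]
        x = G[:, -1] - dG @ gamma              # Anderson-mixed iterate
    return x

# Example: accelerate gradient descent on a poorly conditioned quadratic.
A = np.diag([1.0, 10.0, 100.0])
grad = lambda x: A @ x
x_star = anderson_gd(grad, x0=np.ones(3), lr=0.01, m=5, iters=200)
print(np.linalg.norm(x_star))                  # should be close to 0
```

The least-squares solve picks mixing coefficients that minimize the combined residual over the window, which is the "gain" at each Anderson mixing referred to in the abstract's convergence-rate discussion.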