Descent Properties of an Anderson Accelerated Gradient Method With Restarting (2206.01372v2)
Abstract: Anderson Acceleration (AA) is a popular acceleration technique to enhance the convergence of fixed-point iterations. The analysis of AA approaches typically focuses on the convergence behavior of a corresponding fixed-point residual, while the behavior of the underlying objective function values along the accelerated iterates is not yet well understood. In this paper, we investigate local properties of AA with restarting applied to a basic gradient scheme in terms of function values. Specifically, we show that AA with restarting is a local descent method and that it can decrease the objective function faster than the gradient method. These new results theoretically support the good numerical performance of AA when heuristic descent conditions are used for globalization, and they provide a novel perspective on the convergence analysis of AA that is more amenable to nonconvex optimization problems. Numerical experiments are conducted to illustrate our theoretical findings.
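To make the setting concrete, the sketch below shows (in a hedged, illustrative form, not the authors' code) how Anderson Acceleration with restarting can be applied to the gradient-descent fixed-point map g(x) = x - alpha * grad_f(x), whose fixed points are stationary points of f. The memory size m, the step size alpha, the simple "clear the history every m steps" restart rule, and the quadratic test problem are all assumptions made for illustration and are not taken from the paper.

```python
# Minimal sketch of Anderson Acceleration (Type-II form) with restarting,
# applied to the gradient-descent fixed-point map g(x) = x - alpha * grad_f(x).
# All parameter choices below are illustrative assumptions, not the paper's.
import numpy as np


def anderson_restart(grad_f, x0, alpha=0.1, m=5, max_iter=200, tol=1e-10):
    """AA with restarting for g(x) = x - alpha * grad_f(x).

    The residual history is cleared every `m` steps (a simple restart rule);
    the combination coefficients come from a small least-squares problem in
    the standard difference (Type-II) formulation.
    """
    g = lambda x: x - alpha * grad_f(x)
    x = x0.copy()
    G_hist, R_hist = [], []                      # g-values and residuals in the current window
    for _ in range(max_iter):
        gx = g(x)
        r = gx - x                               # fixed-point residual
        if np.linalg.norm(r) < tol:
            break
        G_hist.append(gx)
        R_hist.append(r)
        if len(R_hist) == 1:
            x = gx                               # a plain gradient step opens each window
        else:
            # Solve min_gamma || r - dR @ gamma || over residual differences,
            # then combine the stored g-values accordingly.
            dR = np.column_stack([R_hist[i + 1] - R_hist[i] for i in range(len(R_hist) - 1)])
            dG = np.column_stack([G_hist[i + 1] - G_hist[i] for i in range(len(G_hist) - 1)])
            gamma, *_ = np.linalg.lstsq(dR, r, rcond=None)
            x = gx - dG @ gamma                  # Anderson-accelerated iterate
        if len(R_hist) >= m:                     # restart: discard the whole window
            G_hist, R_hist = [], []
    return x


# Hypothetical usage on a small strongly convex quadratic f(x) = 0.5 x^T A x - b^T x.
A = np.diag([1.0, 10.0, 100.0])
b = np.ones(3)
x_star = anderson_restart(lambda x: A @ x - b, x0=np.zeros(3), alpha=1.0 / 100.0)
print("residual norm:", np.linalg.norm(A @ x_star - b))
```

Under the paper's perspective, one would track the objective values f(x_k) along these accelerated iterates and compare their decrease with that of the plain gradient step x - alpha * grad_f(x).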