Accelerating Proximal Gradient-type Algorithms using Damped Anderson Acceleration with Restarts and Nesterov Initialization (2508.12177v1)

Published 16 Aug 2025 in stat.CO

Abstract: Despite their frequently slow convergence, proximal gradient schemes are widely used in large-scale optimization tasks due to their tremendous stability, scalability, and ease of computation. In this paper, we develop and investigate a general two-phase scheme for accelerating the convergence of proximal gradient algorithms. By using Nesterov's momentum method in an initialization phase, our procedure delivers fast initial descent that is robust to the choice of starting value. Once iterates are much closer to the solution after the first phase, we utilize a variation of Anderson acceleration to deliver more rapid local convergence in the second phase. Drawing upon restarting schemes developed for Nesterov acceleration, we can readily identify points where it is advantageous to switch from the first to the second phase, which enables use of the procedure without requiring one to specify the number of iterations used in each phase. For the second phase, we adapt and extend a version of Anderson acceleration with algorithm restarts, and we introduce a subsetted version of this procedure that improves performance in problems with substantial sparsity. Through simulation studies involving four representative optimization problems, we show that our proposed algorithm can generate substantial improvements over competing acceleration methods.
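
To make the two-phase structure concrete, below is a minimal sketch on a LASSO objective, f(x) = 0.5*||Ax - b||^2 + lam*||x||_1. The specific choices here, a gradient-based (O'Donoghue-Candes style) restart test as the phase switch, a fixed Anderson memory size m flushed on restart, and a simple convex-combination damping factor beta, are illustrative assumptions and not necessarily the authors' exact procedure.

```python
# Sketch of a two-phase accelerated proximal gradient method:
# Phase 1 runs FISTA (Nesterov momentum) until a restart test fires;
# Phase 2 runs damped Anderson acceleration, with memory restarts,
# on the proximal gradient fixed-point map G.
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t*||.||_1.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def two_phase(A, b, lam, max_iter=1000, m=5, beta=0.9, tol=1e-10):
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L for f(x) = 0.5||Ax-b||^2
    G = lambda x: soft_threshold(x - step * (A.T @ (A @ x - b)), step * lam)

    # Phase 1: FISTA until the gradient restart test fires, which we take
    # as the signal that iterates are near the solution.
    x = np.zeros(A.shape[1]); y = x.copy(); t = 1.0; k = 0
    while k < max_iter:
        x_new = G(y)
        if k > 0 and np.dot(y - x_new, x_new - x) > 0:   # restart test: switch
            x = x_new; k += 1
            break
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t = x_new, t_new
        k += 1

    # Phase 2: damped Anderson acceleration on G, clearing the history
    # ("algorithm restart") whenever the memory exceeds m iterates.
    G_hist, R_hist = [], []
    while k < max_iter:
        gx = G(x); r = gx - x                    # residual of the fixed point
        if np.linalg.norm(r) < tol:
            break
        G_hist.append(gx); R_hist.append(r)
        if len(R_hist) > m:                      # restart: flush the memory
            G_hist, R_hist = [gx], [r]
        if len(R_hist) == 1:
            x = gx                               # plain proximal gradient step
        else:
            dR = np.diff(np.array(R_hist), axis=0).T  # residual differences
            dG = np.diff(np.array(G_hist), axis=0).T  # map-output differences
            gamma, *_ = np.linalg.lstsq(dR, r, rcond=None)
            x_aa = gx - dG @ gamma               # Anderson extrapolation
            x = beta * x_aa + (1 - beta) * gx    # damping (assumed form)
        k += 1
    return x
```

With beta = 1 the phase-2 update reduces to undamped Anderson acceleration; the paper's subsetted variant for sparse problems is not reproduced here.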
