Nonconvex Proximal Incremental Aggregated Gradient Method with Linear Convergence
Abstract: In this paper, we study the proximal incremental aggregated gradient (PIAG) algorithm for minimizing the sum of L-smooth nonconvex component functions and a proper closed convex function. By exploiting the L-smooth property and with the help of an error bound condition, we show that the PIAG method retains linear convergence properties even for nonconvex minimization. Specifically, we first prove that the generated sequence globally converges to the stationary point set. We then show that there exists a stepsize threshold such that, whenever the stepsize is chosen below this threshold, both the objective value sequence and the iterate sequence converge R-linearly.
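As a rough illustration of the PIAG update described in the abstract, the following Python sketch maintains a table of stored (possibly outdated) component gradients and takes a proximal step on their aggregate. The names piag, component_grads, and prox_g, the prox signature, and the random component-selection order are our assumptions for demonstration, not the paper's notation or precise scheme; the convex least-squares instance at the bottom is likewise only illustrative (the paper's analysis covers nonconvex components).

```python
import numpy as np

def piag(x0, component_grads, prox_g, stepsize, num_iters, seed=0):
    """Minimal PIAG sketch for min_x sum_i f_i(x) + g(x).

    component_grads: list of callables, each returning grad f_i(x)
    prox_g: prox_g(v, t) = argmin_x g(x) + ||x - v||^2 / (2t)
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    # Gradient table: one stored (possibly stale) gradient per component.
    table = [grad(x) for grad in component_grads]
    agg = np.sum(table, axis=0)  # aggregated gradient, kept incrementally
    for _ in range(num_iters):
        i = rng.integers(len(table))    # component refreshed this iteration
        g_new = component_grads[i](x)   # fresh gradient of f_i at current x
        agg += g_new - table[i]         # O(dim) update of the aggregate
        table[i] = g_new
        # Proximal step using the aggregated (delayed) gradient sum.
        x = prox_g(x - stepsize * agg, stepsize)
    return x

# Illustrative use: l1-regularized least squares (soft-thresholding prox).
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A, b, lam = rng.standard_normal((50, 10)), rng.standard_normal(50), 0.1
    grads = [lambda x, a=A[i], bi=b[i]: (a @ x - bi) * a for i in range(50)]
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)
    x_star = piag(np.zeros(10), grads, soft, stepsize=1e-3, num_iters=5000)
    obj = 0.5 * np.sum((A @ x_star - b) ** 2) + lam * np.abs(x_star).sum()
    print("objective:", obj)
```

The key design point mirrored here is that only one component gradient is re-evaluated per iteration, while the aggregate of all stored gradients drives the proximal step; this is what makes the gradient information "incremental" and "aggregated", at the cost of the bounded staleness the paper's stepsize threshold accounts for.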