
Linear convergence of inexact descent method and inexact proximal gradient algorithms for lower-order regularization problems (1708.07010v1)

Published 23 Aug 2017 in math.OC

Abstract: The $\ell_p$ regularization problem with $0 < p < 1$ has been widely studied for finding sparse solutions of linear inverse problems and has found successful applications in various fields of mathematics and applied science. The proximal gradient algorithm is one of the most popular algorithms for solving the $\ell_p$ regularization problem. In the present paper, we investigate the linear convergence of one inexact descent method and two inexact proximal gradient algorithms (PGAs). For this purpose, an optimality condition theorem is established that provides the equivalence among a local minimum, the second-order optimality condition, and the second-order growth property of the $\ell_p$ regularization problem. By virtue of the second-order optimality condition and the second-order growth property, we establish the linear convergence of the inexact descent method and the inexact PGAs under some simple assumptions. Both linear convergence to a local minimal value and linear convergence to a local minimum are provided. Finally, the linear convergence results for the inexact numerical methods are extended to infinite-dimensional Hilbert spaces.
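The abstract refers to the $\ell_p$ regularization problem $\min_x \tfrac{1}{2}\|Ax - b\|^2 + \lambda\|x\|_p^p$ with $0 < p < 1$ and to the proximal gradient algorithm (PGA). As a rough illustration only (not the paper's inexact variants or its convergence analysis), the sketch below implements a basic exact PGA for this problem: a gradient step on the least-squares term followed by the proximal step on $\lambda\|\cdot\|_p^p$. Since the scalar prox has no closed form for general $p$, it is approximated here by a dense grid search; all function names and parameters are illustrative assumptions.

```python
import numpy as np

def prox_lp_scalar(v, tau, p, grid=2000):
    """Approximate prox of tau*|u|^p at v by a grid search on [0, |v|]
    (the minimizer of 0.5*(u - |v|)^2 + tau*|u|^p always lies in that interval)."""
    if v == 0.0:
        return 0.0
    u = np.linspace(0.0, abs(v), grid)
    obj = 0.5 * (u - abs(v)) ** 2 + tau * u ** p
    return np.sign(v) * u[np.argmin(obj)]

def pga_lp(A, b, lam, p=0.5, iters=500):
    """Basic proximal gradient iteration for min 0.5*||Ax - b||^2 + lam*||x||_p^p."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth term's gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)              # gradient of 0.5*||Ax - b||^2
        z = x - g / L                      # forward (gradient) step
        x = np.array([prox_lp_scalar(zi, lam / L, p) for zi in z])  # backward (prox) step
    return x

# Illustrative usage: recover a sparse vector from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = pga_lp(A, b, lam=0.1, p=0.5)
```

The paper's contribution concerns inexact versions of this iteration (the gradient or prox step is computed only approximately) and their linear convergence under second-order conditions; the exact iteration above is shown purely to fix notation.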
