
Sparse Regularization: Convergence Of Iterative Jumping Thresholding Algorithm (1402.5744v3)

Published 24 Feb 2014 in math.OC

Abstract: In recent studies on sparse modeling, non-convex penalties have received considerable attention due to their superiority over convex counterparts in inducing sparsity. Compared with convex optimization approaches, however, non-convex approaches pose a more challenging convergence analysis. In this paper, we study the convergence of a non-convex iterative thresholding algorithm for solving sparse recovery problems with a certain class of non-convex penalties, whose corresponding thresholding functions are discontinuous with jump discontinuities. We therefore call the algorithm the iterative jumping thresholding (IJT) algorithm. The finite support and sign convergence of the IJT algorithm is first verified by exploiting this jump discontinuity. Under the additional assumption of the introduced restricted Kurdyka-{\L}ojasiewicz (rKL) property, the strong convergence of the IJT algorithm can then be proved. Furthermore, we show that the IJT algorithm converges to a local minimizer at an asymptotically linear rate under some additional conditions. Moreover, we derive an a posteriori computable error estimate, which can be used to design practical termination rules for the algorithm. It should be pointed out that the $l_q$ quasi-norm ($0<q<1$) is an important subclass of the class of non-convex penalties studied in this paper. In particular, when applied to $l_q$ regularization, the IJT algorithm can converge to a local minimizer at an asymptotically linear rate under certain concentration conditions. We also provide a set of simulations to support the correctness of the theoretical assertions and to compare the time efficiency of the IJT algorithm for $l_{q}$ regularization ($q=1/2, 2/3$) with other known typical algorithms such as the iterative reweighted least squares (IRLS) algorithm and the iterative reweighted $l_{1}$ minimization (IRL1) algorithm.
