Sparse Regularization: Convergence Of Iterative Jumping Thresholding Algorithm

Published 24 Feb 2014 in math.OC (arXiv:1402.5744v3)

Abstract: In recent studies on sparse modeling, non-convex penalties have received considerable attention due to their superior sparsity-inducing properties compared with their convex counterparts. However, the convergence analysis of non-convex approaches is more challenging than that of convex optimization approaches. In this paper, we study the convergence of a non-convex iterative thresholding algorithm for solving sparse recovery problems with a certain class of non-convex penalties whose corresponding thresholding functions are discontinuous, with jump discontinuities; we therefore call it the iterative jumping thresholding (IJT) algorithm. The finite support and sign convergence of the IJT algorithm is first verified by exploiting this jump discontinuity. Together with the assumption of the introduced restricted Kurdyka-{\L}ojasiewicz (rKL) property, the strong convergence of the IJT algorithm can then be proved. Furthermore, we show that the IJT algorithm converges to a local minimizer at an asymptotically linear rate under some additional conditions. Moreover, we derive an a posteriori computable error estimate, which can be used to design practical termination rules for the algorithm. It should be pointed out that the $l_q$ quasi-norm ($0<q<1$) is an important subclass of the class of non-convex penalties studied in this paper. In particular, when applied to $l_q$ regularization, the IJT algorithm converges to a local minimizer at an asymptotically linear rate under certain concentration conditions. We also provide a set of simulations to support the correctness of the theoretical assertions and to compare the time efficiency of the IJT algorithm for $l_{q}$ regularization ($q=1/2, 2/3$) with other typical algorithms such as the iterative reweighted least squares (IRLS) algorithm and the iterative reweighted $l_{1}$ minimization (IRL1) algorithm.
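
The abstract does not spell out the IJT update, but the scheme it describes follows the familiar pattern of a gradient step on the least-squares data term followed by a componentwise thresholding (proximal) step for the non-convex penalty. The sketch below illustrates that pattern for $l_q$ regularization; the function names, the step size mu, the parameter lam, and the grid-based computation of the scalar proximal map are illustrative assumptions, not the paper's closed-form jumping-thresholding functions.

```python
import numpy as np

def prox_lq_scalar(z, tau, q, grid=1000):
    """Proximal map of t -> tau * |t|^q evaluated at z, computed numerically.

    For 0 < q < 1 this map is a thresholding function with a jump
    discontinuity: inputs below a threshold are sent exactly to 0,
    while larger inputs jump to a strictly non-zero value.  The grid
    search is a coarse numerical stand-in for the closed-form
    thresholding functions analyzed in the paper.
    """
    if z == 0.0:
        return 0.0
    a = abs(z)
    # Non-zero candidates live in (0, a]; compare their objective values
    # 0.5*(t - a)^2 + tau*t^q against the zero candidate's value 0.5*a^2.
    ts = np.linspace(a / grid, a, grid)
    vals = 0.5 * (ts - a) ** 2 + tau * ts ** q
    i = int(np.argmin(vals))
    return np.sign(z) * ts[i] if vals[i] < 0.5 * a ** 2 else 0.0

def ijt_sketch(A, y, lam, q=0.5, mu=None, n_iter=500):
    """Iterative thresholding sketch for min_x 0.5*||Ax - y||^2 + lam*||x||_q^q."""
    if mu is None:
        # Step size below 1/||A||_2^2 keeps the gradient step stable.
        mu = 0.99 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - mu * A.T @ (A @ x - y)  # gradient step on the data term
        x = np.array([prox_lq_scalar(zi, mu * lam, q) for zi in z])  # jumping threshold
    return x

# Toy usage: recover a 3-sparse vector from 50 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200)) / np.sqrt(50)
x_true = np.zeros(200)
x_true[[3, 50, 120]] = [1.5, -2.0, 1.0]
y = A @ x_true
x_hat = ijt_sketch(A, y, lam=0.05, q=0.5)
print("estimated support:", np.nonzero(np.abs(x_hat) > 1e-2)[0])
```

The jump discontinuity the abstract refers to is visible in prox_lq_scalar: as $|z|$ crosses a threshold, the output jumps from exactly 0 to a strictly non-zero value instead of growing continuously, which is the property the paper exploits to establish finite support and sign convergence.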
