
Sparse Recovery using Smoothed $\ell^0$ (SL0): Convergence Analysis (1001.5073v1)

Published 28 Jan 2010 in cs.IT and math.IT

Abstract: Finding the sparse solution of an underdetermined system of linear equations has many applications, notably in Compressed Sensing (CS), Sparse Component Analysis (SCA), and the sparse decomposition of signals on overcomplete dictionaries. We have recently proposed a fast algorithm, called Smoothed $\ell^0$ (SL0), for this task. Contrary to many other sparse recovery algorithms, SL0 is not based on minimizing the $\ell^1$ norm; instead, it tries to directly minimize the $\ell^0$ norm of the solution. The basic idea of SL0 is to optimize a sequence of certain (continuous) cost functions that approximate the $\ell^0$ norm of a vector. However, in previous papers we did not provide a complete convergence proof for SL0. In this paper, we study the convergence properties of SL0 and show that, under a certain sparsity constraint expressed in terms of the Asymmetric Restricted Isometry Property (ARIP) and with a certain choice of parameters, the convergence of SL0 to the sparsest solution is guaranteed. Moreover, we study the complexity of SL0 and show that, as the dimension of the dictionary grows, the complexity of SL0 increases at the same rate as that of Matching Pursuit (MP), one of the fastest existing sparse recovery methods; yet, contrary to MP, the convergence of SL0 to the sparsest solution is guaranteed under certain conditions that are satisfied through the choice of parameters.
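As a concrete illustration of the abstract's "sequence of (continuous) cost functions," the following minimal Python sketch implements the graduated-smoothing idea behind SL0: maximize $F_\sigma(s) = \sum_i \exp(-s_i^2 / 2\sigma^2)$, which tends to $n - \|s\|_0$ as $\sigma \to 0$, over the feasible set $\{s : As = x\}$, while gradually decreasing $\sigma$. This is a hedged reconstruction from the standard published description of SL0, not the authors' reference code; the function name `sl0` and the parameter choices (`mu`, `sigma_decrease`, `sigma_min`, `inner_iters`) are illustrative assumptions.

```python
# Minimal sketch of the SL0 idea (illustrative, not the authors' reference code).
import numpy as np

def sl0(A, x, sigma_min=1e-3, sigma_decrease=0.5, mu=2.0, inner_iters=3):
    """Approximate the sparsest s with A @ s == x by graduated smoothing."""
    A_pinv = np.linalg.pinv(A)
    s = A_pinv @ x                    # start from the minimum-l2-norm solution
    sigma = 2.0 * np.max(np.abs(s))   # assumed initial smoothing width
    while sigma > sigma_min:
        for _ in range(inner_iters):
            # Gradient step on F_sigma(s) = sum_i exp(-s_i^2 / (2 sigma^2)),
            # a smooth surrogate that approaches n - ||s||_0 as sigma -> 0.
            delta = s * np.exp(-s**2 / (2 * sigma**2))
            s = s - mu * delta
            # Project back onto the constraint set {s : A s = x}.
            s = s - A_pinv @ (A @ s - x)
        sigma *= sigma_decrease       # anneal toward a sharper l0 approximation
    return s
```

The projection step uses the pseudoinverse, so each inner iteration stays on the affine solution set; decreasing $\sigma$ slowly is what the paper's parameter conditions govern, since the annealing schedule is what its convergence guarantee depends on.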

Citations (40)
