Recovering Sparse Nonnegative Signals via Non-convex Fraction Function Penalty (1707.06576v2)
Abstract: Many real-world problems can be formulated as $\ell_{0}$-minimization problems with nonnegativity constraints, which seek the sparsest nonnegative solution of an underdetermined linear system. They have been widely applied in signal and image processing, machine learning, pattern recognition and computer vision. Unfortunately, this $\ell_{0}$-minimization problem with a nonnegativity constraint is NP-hard because of the discrete and discontinuous nature of the $\ell_{0}$-norm. In this paper, we replace the $\ell_{0}$-norm with a non-convex fraction function and study the resulting minimization problem for recovering sparse nonnegative signals from an underdetermined linear system. Firstly, we discuss the equivalence between $(P_{0}^{\geq})$ and $(FP_{a}^{\geq})$, and the equivalence between $(FP_{a}^{\geq})$ and $(FP_{a,\lambda}^{\geq})$. It is proved that the optimal solution of the problem $(P_{0}^{\geq})$ can be approximately obtained by solving the regularization problem $(FP_{a,\lambda}^{\geq})$ if certain conditions are satisfied. Secondly, we propose a nonnegative iterative thresholding algorithm to solve the regularization problem $(FP_{a,\lambda}^{\geq})$ for all $a>0$. Finally, numerical experiments on sparse nonnegative signal recovery problems show that our method is effective in finding sparse nonnegative signals compared with linear programming.
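The abstract refers to the problems $(P_{0}^{\geq})$, $(FP_{a}^{\geq})$ and $(FP_{a,\lambda}^{\geq})$ without stating them. The following LaTeX sketch records the formulations as they are usually written in the fraction-function penalty literature; the exact definitions used in the paper are an assumption here, with $\rho_{a}(t)=\frac{a|t|}{1+a|t|}$ denoting the fraction function.

```latex
% Assumed formulations, following standard fraction-function notation.
\begin{align*}
(P_{0}^{\geq}):\quad & \min_{x\in\mathbb{R}^{n}} \|x\|_{0}
  \quad \text{s.t. } Ax=b,\ x\geq 0,\\
(FP_{a}^{\geq}):\quad & \min_{x\in\mathbb{R}^{n}} P_{a}(x)=\sum_{i=1}^{n}\rho_{a}(x_{i})
  \quad \text{s.t. } Ax=b,\ x\geq 0,\\
(FP_{a,\lambda}^{\geq}):\quad & \min_{x\geq 0}\ \|Ax-b\|_{2}^{2}+\lambda P_{a}(x).
\end{align*}
```

As a rough illustration of the kind of nonnegative iterative thresholding scheme the abstract mentions, the sketch below alternates a gradient step on the data-fidelity term with a thresholding step projected onto the nonnegative orthant. The soft-thresholding operator used here is a stand-in for illustration only; the paper derives its own thresholding operator for the fraction function penalty, which is not reproduced here.

```python
import numpy as np

def nonnegative_iterative_thresholding(A, b, lam, mu=None, n_iter=500):
    """Generic nonnegative iterative thresholding sketch.

    Approximately solves min_{x >= 0} 0.5*||Ax - b||^2 + lam * penalty(x)
    by a gradient step on the data-fidelity term followed by a
    thresholding step and projection onto the nonnegative orthant.
    Soft-thresholding is used as a placeholder penalty operator.
    """
    m, n = A.shape
    x = np.zeros(n)
    if mu is None:
        # step size below 1 / ||A||_2^2 keeps the gradient step stable
        mu = 1.0 / (np.linalg.norm(A, 2) ** 2 + 1e-12)
    for _ in range(n_iter):
        # gradient step on 0.5 * ||Ax - b||^2
        z = x - mu * (A.T @ (A @ x - b))
        # thresholding step, then projection onto x >= 0
        x = np.maximum(z - mu * lam, 0.0)
    return x

# toy usage: recover a sparse nonnegative signal from random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = rng.random(5) + 0.5
b = A @ x_true
x_hat = nonnegative_iterative_thresholding(A, b, lam=0.01)
```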