Modified lp-norm regularization minimization for sparse signal recovery (1801.09172v2)
Abstract: Among the numerous surrogate models for the $\ell_{0}$-norm minimization problem $(P_{0})$, the $\ell_{p}$-norm minimization $(P_{p})$ with $0<p<1$ has been considered the most natural choice. However, the non-convex optimization problem $(P_{p})$ is much more computationally challenging, and is also NP-hard. Meanwhile, algorithms based on the proximal mapping of the regularized $\ell_{p}$-norm minimization $(P_{p}^{\lambda})$ are limited to a few specific values of the parameter $p$. In this paper, we replace the $\ell_{p}$-norm $\|x\|_{p}^{p}$ with a modified function $\sum_{i=1}^{n}\frac{|x_{i}|}{(|x_{i}|+\epsilon_{i})^{1-p}}$. By varying the parameter $\epsilon>0$, this modified function interpolates the $\ell_{p}$-norm $\|x\|_{p}^{p}$. Through this transformation, we translate the $\ell_{p}$-norm regularization minimization $(P_{p}^{\lambda})$ into a modified $\ell_{p}$-norm regularization minimization $(P_{p}^{\lambda,\epsilon})$. We then develop the thresholding representation theory of the problem $(P_{p}^{\lambda,\epsilon})$ and, based on it, propose an iterative thresholding (IT) algorithm to solve $(P_{p}^{\lambda,\epsilon})$ for all $0<p<1$. Indeed, much better results can be obtained by choosing a proper $p$, which is one of the advantages of our algorithm over other methods. Numerical results also show that, for a proper $p$, our algorithm performs best on some sparse signal recovery problems compared with several state-of-the-art methods.
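The interpolation property described in the abstract can be illustrated numerically: as $\epsilon \to 0$, each term $|x_i|/(|x_i|+\epsilon)^{1-p}$ tends to $|x_i|^p$ on nonzero entries, recovering $\|x\|_p^p$. The sketch below is an illustration of that limit only (a scalar $\epsilon$ shared across coordinates is assumed here for simplicity; the paper allows per-coordinate $\epsilon_i$), not an implementation of the paper's IT algorithm.

```python
import numpy as np

def modified_lp_penalty(x, p, eps):
    """Modified penalty sum_i |x_i| / (|x_i| + eps)^(1-p) from the abstract.

    `eps` is taken as a scalar here for illustration; the paper's
    formulation uses per-coordinate eps_i.
    """
    a = np.abs(np.asarray(x, dtype=float))
    return float(np.sum(a / (a + eps) ** (1.0 - p)))

def lp_norm_p(x, p):
    """Standard ell_p quasi-norm raised to the p: ||x||_p^p = sum_i |x_i|^p."""
    return float(np.sum(np.abs(np.asarray(x, dtype=float)) ** p))

x = np.array([0.5, -1.2, 0.0, 3.0])
p = 0.5
# As eps shrinks, the modified penalty approaches ||x||_p^p
# (zero entries contribute 0 in both expressions).
for eps in (1.0, 1e-2, 1e-9):
    print(f"eps={eps:g}: modified={modified_lp_penalty(x, p, eps):.6f}, "
          f"lp={lp_norm_p(x, p):.6f}")
```

For larger $\epsilon$ the penalty is smooth near the origin and bounded above by $\|x\|_p^p$, which is what makes the modified problem more tractable than $(P_p)$ itself.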