
A projected gradient method for $\alpha\ell_{1}-\beta\ell_{2}$ sparsity regularization (2007.15263v1)

Published 30 Jul 2020 in math.NA and cs.NA

Abstract: The non-convex $\alpha\|\cdot\|_{\ell_1}-\beta\|\cdot\|_{\ell_2}$ $(\alpha\geq\beta\geq 0)$ regularization has attracted attention in the field of sparse recovery. One way to obtain a minimizer of this regularization is the ST-($\alpha\ell_1-\beta\ell_2$) algorithm, which is similar to the classical iterative soft thresholding algorithm (ISTA). It is known that ISTA converges quite slowly, and a faster alternative to ISTA is the projected gradient (PG) method. However, the conventional PG method is limited to the classical $\ell_1$ sparsity regularization. In this paper, we present two accelerated alternatives to the ST-($\alpha\ell_1-\beta\ell_2$) algorithm by extending the PG method to the non-convex $\alpha\ell_1-\beta\ell_2$ sparsity regularization. Moreover, we discuss a strategy to determine the radius $R$ of the $\ell_1$-ball constraint by Morozov's discrepancy principle. Numerical results are reported to illustrate the efficiency of the proposed approach.
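
For intuition, below is a minimal NumPy sketch of the general projected-gradient idea the abstract describes: a gradient step on the smooth part $\frac{1}{2}\|Ax-y\|^2-\beta\|x\|_2$ followed by Euclidean projection onto the $\ell_1$-ball of radius $R$ (in this constrained reformulation the $\alpha\|x\|_1$ penalty is traded for the ball constraint, so $\alpha$ enters only implicitly through $R$). The function names, the constrained reformulation, and the bisection rule for $R$ are illustrative assumptions; the paper's actual updates, step sizes, and discrepancy strategy may differ.

```python
import numpy as np

def project_l1_ball(v, R):
    """Euclidean projection of v onto {x : ||x||_1 <= R}
    (the standard sorting-based algorithm of Duchi et al., 2008)."""
    if np.abs(v).sum() <= R:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]              # magnitudes, descending
    css = np.cumsum(u)
    k = np.arange(1, v.size + 1)
    rho = np.nonzero(u * k > css - R)[0][-1]  # last index with a positive gap
    theta = (css[rho] - R) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def pg_alpha_l1_beta_l2(A, y, R, beta, step, n_iter=500):
    """Hypothetical projected-gradient loop for
        min 0.5*||A x - y||^2 - beta*||x||_2   s.t.  ||x||_1 <= R,
    one plausible constrained form of the alpha*l1 - beta*l2 penalty;
    the paper's exact iteration may differ."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        nx = np.linalg.norm(x)
        if nx > 0:                            # -beta*||x||_2 is smooth away from 0
            grad -= beta * x / nx
        x = project_l1_ball(x - step * grad, R)
    return x

def choose_radius_by_discrepancy(A, y, beta, step, delta, tau=1.1, n_bisect=20):
    """Generic Morozov-style rule: pick R so the residual matches the
    noise level, ||A x_R - y|| ~ tau * delta. Bisection is reasonable
    because the residual is non-increasing in R. Illustration only."""
    R_lo = 0.0
    # the l1 norm of the least-squares solution bounds the useful radii
    R_hi = np.abs(np.linalg.lstsq(A, y, rcond=None)[0]).sum()
    for _ in range(n_bisect):
        R = 0.5 * (R_lo + R_hi)
        res = np.linalg.norm(A @ pg_alpha_l1_beta_l2(A, y, R, beta, step) - y)
        if res > tau * delta:
            R_lo = R                          # residual too large: enlarge the ball
        else:
            R_hi = R                          # residual small enough: shrink the ball
    return 0.5 * (R_lo + R_hi)
```

A sensible step size is `step = 1.0 / np.linalg.norm(A, 2) ** 2`; since the $-\beta\|x\|_2$ term makes the objective non-convex, only convergence to a stationary point can generally be expected.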

Citations (6)
