Elastic-net regularization versus $\ell^1$-regularization for linear inverse problems with quasi-sparse solutions
Abstract: We consider the ill-posed operator equation $Ax=y$ with an injective and bounded linear operator $A$ mapping between $\ell^2$ and a Hilbert space $Y$, possessing the unique solution $x^\dagger=\{x^\dagger_k\}_{k=1}^\infty$. For the case where sparsity $x^\dagger \in \ell^0$ is expected but often slightly violated in practice, we investigate, in comparison with $\ell^1$-regularization, the elastic-net regularization, where the penalty is a weighted superposition of the $\ell^1$-norm and the square of the $\ell^2$-norm, under the assumption that $x^\dagger \in \ell^1$. Two positive parameters occur in this approach: the weight parameter $\eta$ and the regularization parameter acting as the multiplier of the whole penalty in the Tikhonov functional, whereas only one regularization parameter arises in $\ell^1$-regularization. Based on the variational inequality approach for the description of the solution smoothness with respect to the forward operator $A$, and exploiting the method of approximate source conditions, we present results estimating the rate of convergence for the elastic-net regularization. The occurring rate function contains the decay rate of $x^\dagger_k \to 0$ as $k \to \infty$ and the classical smoothness properties of $x^\dagger$ as an element of $\ell^2$.
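The penalty structure described in the abstract can be sketched as follows. This is the standard elastic-net Tikhonov functional; the notation (noisy data $y^\delta$, half-factor on the misfit, and the exact normalization of the two penalty terms) is an assumption and may differ from the paper's precise formulation:

```latex
% Elastic-net Tikhonov functional (standard form; normalization assumed):
T_{\alpha,\eta}(x) \;=\; \tfrac{1}{2}\,\bigl\|Ax - y^\delta\bigr\|_Y^2
  \;+\; \alpha \Bigl( \|x\|_{\ell^1} \;+\; \eta\,\|x\|_{\ell^2}^2 \Bigr),
\qquad \alpha > 0,\ \eta > 0.
```

Here $\alpha$ multiplies the whole penalty, while $\eta$ weights the quadratic $\ell^2$ term against the $\ell^1$ term; formally letting $\eta \to 0$ recovers pure $\ell^1$-regularization with its single parameter $\alpha$.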