
Elastic-net regularization versus $\ell^1$-regularization for linear inverse problems with quasi-sparse solutions

Published 12 Apr 2016 in math.FA | (1604.03364v2)

Abstract: We consider the ill-posed operator equation $Ax=y$ with an injective and bounded linear operator $A$ mapping between $\ell^2$ and a Hilbert space $Y$, possessing the unique solution $x^\dagger=\{x^\dagger_k\}_{k=1}^\infty$. For the cases where sparsity $x^\dagger \in \ell^0$ is expected but often slightly violated in practice, we investigate, in comparison with $\ell^1$-regularization, the elastic-net regularization, where the penalty is a weighted superposition of the $\ell^1$-norm and the square of the $\ell^2$-norm, under the assumption that $x^\dagger \in \ell^1$. Two positive parameters occur in this approach: the weight parameter $\eta$ and the regularization parameter acting as the multiplier of the whole penalty in the Tikhonov functional, whereas only one regularization parameter arises in $\ell^1$-regularization. Based on the variational inequality approach for describing the solution smoothness with respect to the forward operator $A$, and exploiting the method of approximate source conditions, we present results estimating the rate of convergence for the elastic-net regularization. The resulting rate function involves both the rate of decay $x^\dagger_k \to 0$ as $k \to \infty$ and the classical smoothness properties of $x^\dagger$ as an element of $\ell^2$.
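The elastic-net Tikhonov functional described in the abstract, $\|Ax-y\|^2 + \alpha(\|x\|_{\ell^1} + \eta\|x\|_{\ell^2}^2)$, can be minimized numerically by proximal gradient descent, since the combined penalty has a closed-form proximal map (soft-thresholding followed by a quadratic shrinkage). The sketch below is illustrative only and is not taken from the paper; the function name, the finite-dimensional truncation, and the parameterisation are assumptions for the example.

```python
import numpy as np

def elastic_net_ista(A, y, alpha, eta, n_iter=500):
    """Minimise ||Ax - y||^2 + alpha * (||x||_1 + eta * ||x||_2^2)
    by proximal gradient descent (ISTA). Hypothetical sketch of the
    elastic-net penalty discussed in the abstract, truncated to finite
    dimension; not the paper's own algorithm."""
    # Step size 1/L, where L = 2 * ||A||^2 is the Lipschitz constant
    # of the gradient of ||Ax - y||^2.
    step = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * A.T @ (A @ x - y)
        v = x - step * grad
        # Proximal map of t * alpha * (||.||_1 + eta * ||.||_2^2):
        # soft-threshold at t * alpha, then divide by (1 + 2 * t * alpha * eta).
        t = step * alpha
        x = np.sign(v) * np.maximum(np.abs(v) - t, 0.0) / (1.0 + 2.0 * t * eta)
    return x
```

For $\eta \to 0$ the update reduces to plain ISTA for $\ell^1$-regularization, which mirrors the comparison the paper draws: the additional $\ell^2$-square term only rescales the soft-thresholded iterate.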


Authors (3)
