On $\ell^1$-regularization under continuity of the forward operator in weaker topologies (1711.08642v1)
Abstract: Our focus is on the stable approximate solution of linear operator equations based on noisy data by using $\ell^1$-regularization as a sparsity-enforcing version of Tikhonov regularization. We summarize recent results on situations where the sparsity of the solution slightly fails. In particular, we show how the recently established theory for weak*-to-weak continuous linear forward operators can be extended to the case of weak*-to-weak* continuity. This might be of interest when the image space is non-reflexive. We discuss existence, stability and convergence of regularized solutions. For injective operators, we formulate convergence rates by exploiting variational source conditions. The typical rate function obtained under an ill-posed operator is strictly concave, and the degree of failure of the solution sparsity has an impact on its behavior. Linear convergence rates occur only in the two borderline cases of proper sparsity, where the solutions belong to $\ell^0$, and of well-posedness. For an exemplary operator, we demonstrate that the technical properties used in our theory can be verified in practice. In the last section, we briefly mention the difficult case of oversmoothing regularization, where $x^\dagger$ does not belong to $\ell^1$.
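The paper itself is theoretical, but the object it studies is concrete: a regularized solution of $Ax = y$ from noisy data $y^\delta$, obtained by minimizing $\tfrac{1}{2}\|Ax - y^\delta\|^2 + \alpha\|x\|_1$. As a minimal numerical sketch (not from the paper; the operator, noise level, and parameter $\alpha$ below are all illustrative), the standard iterative soft-thresholding algorithm (ISTA) computes such an $\ell^1$-regularized minimizer for a finite-dimensional forward matrix:

```python
import numpy as np

def ista(A, y, alpha, n_iter=500):
    """ISTA for min_x 0.5*||Ax - y||^2 + alpha*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the data-fit gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)               # gradient of 0.5*||Ax - y||^2
        z = x - grad / L                       # forward (gradient) step
        # backward step: componentwise soft-thresholding, the prox of alpha*||.||_1
        x = np.sign(z) * np.maximum(np.abs(z) - alpha / L, 0.0)
    return x

# Illustrative setup: underdetermined random forward operator, sparse exact
# solution, small additive noise (all choices hypothetical).
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60)) / np.sqrt(30)
x_true = np.zeros(60)
x_true[[3, 17, 41]] = [1.0, -2.0, 1.5]         # properly sparse x in the l^0 sense
y_delta = A @ x_true + 0.01 * rng.standard_normal(30)

x_alpha = ista(A, y_delta, alpha=0.02)         # regularized solution
```

The soft-thresholding step is what enforces sparsity: components whose gradient update stays below $\alpha/L$ in magnitude are set exactly to zero, so `x_alpha` typically has few nonzero entries, mirroring the $\ell^1$-penalty's role in the paper.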