A Gradient-thresholding Algorithm for Sparse Regularization (2006.03437v1)

Published 4 Jun 2020 in math.NA, cs.NA, and math.OC

Abstract: Inverse problems arise in a wide spectrum of applications in fields ranging from engineering to scientific computation. Connected with the rise of interest in inverse problems is the development and analysis of regularization methods, such as Tikhonov-type regularization methods or iterative regularization methods, which are a necessity for most inverse problems. In the last few decades, regularization methods promoting sparsity have been the focus of research, owing to the high dimensionality of real-life data, and $\mathcal{L}^1$-regularization methods (such as LASSO or FISTA) have been at their center due to their computational simplicity. In this paper we propose a new (semi-)iterative regularization method which is not only simpler than the aforementioned algorithms but also yields better results in terms of accuracy and sparsity of the recovered solution. Furthermore, we present a very effective and practical stopping criterion for choosing an appropriate regularization parameter (here, the iteration index) so as to recover a regularized (sparse) solution. To illustrate the computational efficiency of this algorithm we apply it to the numerical solution of the image deblurring problem and compare our results with standard regularization methods such as total variation, FISTA, and LSQR.
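The abstract does not spell out the paper's update rule, but the family of methods it builds on, gradient steps on the data-fit term combined with a thresholding step to promote sparsity, can be illustrated with a minimal ISTA-style sketch for the $\mathcal{L}^1$-regularized least-squares problem. Everything below (function names, the soft-thresholding choice, the step size, and the stopping test) is an illustrative assumption, not the paper's algorithm or stopping criterion.

```python
import numpy as np

def soft_threshold(x, tau):
    """Entry-wise soft-thresholding: S_tau(x) = sign(x) * max(|x| - tau, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def gradient_thresholding_sketch(A, b, tau=0.1, max_iter=500, tol=1e-6):
    """Illustrative ISTA-style iteration for
        min_x 0.5 * ||A x - b||_2^2 + tau * ||x||_1.
    Each step takes a gradient step on the least-squares term and then
    thresholds the result to promote sparsity. The stopping test below is a
    simple relative-change proxy, not the criterion proposed in the paper.
    """
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(max_iter):
        grad = A.T @ (A @ x - b)           # gradient of the data-fit term
        x_new = soft_threshold(x - grad / L, tau / L)
        if np.linalg.norm(x_new - x) <= tol * max(np.linalg.norm(x), 1.0):
            x = x_new
            break
        x = x_new
    return x

# Example usage on a small synthetic sparse-recovery problem.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 200))
    x_true = np.zeros(200)
    x_true[rng.choice(200, size=10, replace=False)] = rng.standard_normal(10)
    b = A @ x_true + 0.01 * rng.standard_normal(100)
    x_rec = gradient_thresholding_sketch(A, b, tau=0.05)
    print("nonzeros recovered:", np.count_nonzero(np.abs(x_rec) > 1e-3))
```

In deblurring experiments like those mentioned in the abstract, A would be the blurring operator and b the observed image; the sketch above only conveys the general gradient-plus-thresholding structure.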
