Adaptive Regularization Parameter Choice Rules for Large-Scale Problems (1907.05666v1)

Published 12 Jul 2019 in math.NA and cs.NA

Abstract: This paper derives a new class of adaptive regularization parameter choice strategies that can be effectively and efficiently applied when regularizing large-scale linear inverse problems by combining standard Tikhonov regularization and projection onto Krylov subspaces of increasing dimension (computed by the Golub-Kahan bidiagonalization algorithm). The success of this regularization approach heavily depends on the accurate tuning of two parameters (namely, the Tikhonov parameter and the dimension of the projection subspace): these are simultaneously set using new strategies that can be regarded as special instances of bilevel optimization methods, which are solved by using a new paradigm that interlaces the iterations performed to project the Tikhonov problem (lower-level problem) with those performed to apply a given parameter choice rule (higher-level problem). The discrepancy principle, the GCV, the quasi-optimality criterion, and the Regińska criterion can all be adapted to work in this framework. The links between Gauss quadrature and Golub-Kahan bidiagonalization are exploited to prove convergence results for the discrepancy principle, and to give insight into the behavior of the other considered regularization parameter choice rules. Several numerical tests modeling inverse problems in imaging show that the new parameter choice strategies lead to regularization methods that are reliable, and intrinsically simpler and cheaper than other strategies already available in the literature.
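To make the hybrid approach concrete, the following is a minimal NumPy sketch of the ingredients the abstract names: Golub-Kahan bidiagonalization, the projected Tikhonov problem, and the discrepancy principle as the parameter choice rule. All function names are illustrative, and the outer loop here restarts the factorization at each subspace dimension for clarity, whereas the paper's contribution is precisely to interlace the two loops so each bidiagonalization step is performed only once; this is a sketch under those simplifications, not the paper's implementation.

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan bidiagonalization: A V_k = U_{k+1} B_k,
    with U_{k+1} (beta1 e1) = b. Full reorthogonalization for stability;
    no breakdown handling in this sketch."""
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    B = np.zeros((k + 1, k))
    beta1 = np.linalg.norm(b)
    U[:, 0] = b / beta1
    for j in range(k):
        v = A.T @ U[:, j]
        if j > 0:
            v -= B[j, j - 1] * V[:, j - 1]
        v -= V[:, :j] @ (V[:, :j].T @ v)            # reorthogonalize vs. V
        alpha = np.linalg.norm(v)
        V[:, j] = v / alpha
        B[j, j] = alpha
        u = A @ V[:, j] - alpha * U[:, j]
        u -= U[:, :j + 1] @ (U[:, :j + 1].T @ u)    # reorthogonalize vs. U
        beta = np.linalg.norm(u)
        U[:, j + 1] = u / beta
        B[j + 1, j] = beta
    return U, B, V, beta1

def projected_tikhonov(B, beta1, lam):
    """Solve the projected problem min ||B y - beta1 e1||^2 + lam^2 ||y||^2
    via the equivalent augmented least-squares system."""
    kp1, k = B.shape
    rhs = np.zeros(kp1 + k)
    rhs[0] = beta1
    aug = np.vstack([B, lam * np.eye(k)])
    y, *_ = np.linalg.lstsq(aug, rhs, rcond=None)
    return y

def projected_residual(B, beta1, y):
    """||beta1 e1 - B y||, which equals ||b - A V y|| in exact arithmetic."""
    rhs = np.zeros(B.shape[0])
    rhs[0] = beta1
    return np.linalg.norm(rhs - B @ y)

def discrepancy_lambda(B, beta1, target, lo=1e-10, hi=1e2, iters=60):
    """Geometric bisection for lam with residual(lam) ~= target;
    the Tikhonov residual is monotonically increasing in lam."""
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        r = projected_residual(B, beta1, projected_tikhonov(B, beta1, mid))
        lo, hi = (mid, hi) if r < target else (lo, mid)
    return lo

def hybrid_solve(A, b, delta, eta=1.01, max_k=30):
    """Grow the Krylov subspace; at each dimension k pick lam by the
    discrepancy principle; stop once ||b - A x_k|| <= eta * delta."""
    for k in range(1, max_k + 1):
        U, Bk, V, beta1 = golub_kahan(A, b, k)
        lam = discrepancy_lambda(Bk, beta1, eta * delta)
        y = projected_tikhonov(Bk, beta1, lam)
        if projected_residual(Bk, beta1, y) <= eta * delta:
            break
    return V @ y, lam, k
```

Because the projected problem involves only the small bidiagonal matrix B_k, each trial value of the Tikhonov parameter costs O(k^2) rather than a solve with the full operator, which is what makes tuning both parameters on the fly affordable for large-scale problems.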

Citations (1)
