Regularization Properties of the Krylov Iterative Solvers CGME and LSMR For Linear Discrete Ill-Posed Problems with an Application to Truncated Randomized SVDs (1812.04762v2)
Abstract: For the large-scale linear discrete ill-posed problem $\min\|Ax-b\|$ or $Ax=b$ with $b$ contaminated by Gaussian white noise, there are four commonly used Krylov solvers: LSQR and its mathematically equivalent CGLS, the Conjugate Gradient (CG) method applied to $A^TAx=A^Tb$; CGME, the CG method applied to $\min\|AA^Ty-b\|$ or $AA^Ty=b$ with $x=A^Ty$; and LSMR, the minimal residual (MINRES) method applied to $A^TAx=A^Tb$. These methods have intrinsic regularizing effects, where the number $k$ of iterations plays the role of the regularization parameter. In this paper, we establish a number of regularization properties of CGME and LSMR, including the filtered SVD expansion of the CGME iterates, and prove that the 2-norm filtering best regularized solutions by CGME and LSMR are, respectively, less accurate than and at least as accurate as those by LSQR. We also prove that the semi-convergence of CGME always occurs no later than that of LSQR, while the semi-convergence of LSMR always occurs no sooner than that of LSQR. As a byproduct, using the analysis approach developed for CGME, we improve a fundamental result on the accuracy of the truncated rank-$k$ approximate SVD of $A$ generated by randomized algorithms, and reveal how the truncation step damages the accuracy. Numerical experiments justify our results on CGME and LSMR.
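The semi-convergence phenomenon described above can be observed numerically. The sketch below is not from the paper: it builds a small synthetic ill-posed problem (a matrix with rapidly decaying singular values, plus Gaussian white noise on $b$) and uses SciPy's `lsmr` as a stand-in for the solvers analyzed here, tracking the relative solution error as the iteration count $k$ grows. The error first decreases and eventually deteriorates, so some intermediate $k$ acts as the regularization parameter; all problem sizes and noise levels are illustrative choices.

```python
import numpy as np
from scipy.sparse.linalg import lsmr

rng = np.random.default_rng(0)
n = 64

# Synthetic ill-posed problem: A = U diag(s) V^T with severely decaying
# singular values, so A is very ill-conditioned (cond(A) = 1e10 here).
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** np.linspace(0, -10, n)
A = U @ np.diag(s) @ V.T

# An exact solution whose SVD coefficients decay mildly, and noisy data b
# contaminated by Gaussian white noise (illustrative noise level).
x_true = V @ np.sqrt(s)
b = A @ x_true + 1e-6 * rng.standard_normal(n)

# Run LSMR for exactly k iterations (tolerances disabled so only maxiter
# stops it) and record the relative error of each iterate x_k.
errs = []
for k in range(1, 61):
    xk = lsmr(A, b, atol=0.0, btol=0.0, conlim=1e16, maxiter=k)[0]
    errs.append(np.linalg.norm(xk - x_true) / np.linalg.norm(x_true))

# Semi-convergence: the error decreases at first, then noise amplification
# takes over; the minimizing k is the "best" 2-norm filtering regularized
# iterate in the sense discussed in the paper.
k_best = int(np.argmin(errs)) + 1
```

Rerunning the loop with `lsqr` in place of `lsmr` lets one compare where the two methods semi-converge; the paper proves LSMR's semi-convergence occurs no sooner than LSQR's.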