Empirical Risk Minimization as Parameter Choice Rule for General Linear Regularization Methods (1703.07809v2)
Abstract: We consider the statistical inverse problem to recover $f$ from noisy measurements $Y = Tf + \sigma \xi$ where $\xi$ is Gaussian white noise and $T$ a compact operator between Hilbert spaces. Considering general reconstruction methods of the form $\hat f_\alpha = q_\alpha\left(T^*T\right)T^*Y$ with an ordered filter $q_\alpha$, we investigate the choice of the regularization parameter $\alpha$ by minimizing an unbiased estimate of the predictive risk $\mathbb E\left[\Vert Tf - T\hat f_\alpha\Vert^2\right]$. The corresponding parameter $\alpha_{\mathrm{pred}}$ and its usage are well-known in the literature, but oracle inequalities and optimality results in this general setting are unknown. We prove a (generalized) oracle inequality, which relates the direct risk $\mathbb E\left[\Vert f - \hat f_{\alpha_{\mathrm{pred}}}\Vert^2\right]$ with the oracle prediction risk $\inf_{\alpha>0}\mathbb E\left[\Vert Tf - T\hat f_{\alpha}\Vert^2\right]$. From this oracle inequality we are then able to conclude that the investigated parameter choice rule is of optimal order. Finally, we also present numerical simulations, which support the order optimality of the method and the quality of the parameter choice in finite sample situations.
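The setting described in the abstract can be illustrated with a small numerical sketch. The following is a minimal, hypothetical example (not the paper's own simulation setup): a discretized operator $T$ with polynomially decaying singular values, the Tikhonov filter $q_\alpha(\lambda) = 1/(\lambda+\alpha)$ as one instance of an ordered filter, and the standard unbiased estimate of the prediction risk $\mathbb E[\Vert Tf - T\hat f_\alpha\Vert^2]$, namely $\Vert Y - T\hat f_\alpha\Vert^2 - n\sigma^2 + 2\sigma^2\,\mathrm{tr}\,(T q_\alpha(T^*T)T^*)$, minimized over a grid of $\alpha$ values. All concrete choices (dimension, decay rates, noise level, grid) are for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discretized inverse problem: T built from random orthogonal
# factors and polynomially decaying singular values (mildly ill-posed).
n = 200
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.arange(1, n + 1) ** -1.0             # singular values of T
f = V @ (np.arange(1, n + 1) ** -1.5)       # smooth ground truth (illustrative)
sigma = 0.01
Y = U @ (s * (V.T @ f)) + sigma * rng.standard_normal(n)

# Tikhonov filter q_alpha(lambda) = 1/(lambda + alpha), applied via the SVD:
# hat f_alpha = q_alpha(T^*T) T^* Y has coefficients s_i/(s_i^2 + alpha) <u_i, Y>.
coeffs = U.T @ Y

def f_hat(alpha):
    return V @ (s / (s**2 + alpha) * coeffs)

def risk_est(alpha):
    """Unbiased estimate of the prediction risk E||Tf - T f_hat_alpha||^2:
    ||Y - T f_hat_alpha||^2 - n*sigma^2 + 2*sigma^2 * trace of the hat matrix."""
    h = s**2 / (s**2 + alpha)               # spectral diagonal of T q_alpha(T^*T) T^*
    resid = (1.0 - h) * coeffs
    return resid @ resid - n * sigma**2 + 2 * sigma**2 * h.sum()

# Parameter choice rule: minimize the risk estimate over a log-spaced grid.
alphas = np.logspace(-8, 0, 200)
alpha_pred = alphas[np.argmin([risk_est(a) for a in alphas])]
rel_err = np.linalg.norm(f - f_hat(alpha_pred)) / np.linalg.norm(f)
```

The paper's oracle inequality then controls the direct risk of $\hat f_{\alpha_{\mathrm{pred}}}$ (here `rel_err` measures one realization of it) in terms of the oracle prediction risk $\inf_{\alpha>0}\mathbb E[\Vert Tf - T\hat f_\alpha\Vert^2]$.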