Convergence analysis in convex regularization depending on the smoothness degree of the penalizer (1406.1227v6)
Abstract: The problem of minimizing the least squares functional with a smooth, lower semi-continuous, convex regularizer $J(\cdot)$ is considered. Over some compact and convex subset $\Omega$ of the Hilbert space $\mathcal{H},$ the regularizer is implicitly defined as $J(\cdot) : \mathcal{C}^{k}(\Omega , \mathcal{H}) \rightarrow \mathbb{R}_{+}$ with $k \in \{1,2\}.$ The cost functional associated with a given linear, compact and injective forward operator $\mathcal{T} : \Omega \subset \mathcal{H} \rightarrow \mathcal{H}$ reads
\begin{align}
F_{\alpha}(\cdot , f^{\delta}) := \frac{1}{2} \Vert \mathcal{T}(\cdot) - f^{\delta} \Vert_{\mathcal{H}}^{2} + \alpha J(\cdot) , \nonumber
\end{align}
where $f^{\delta}$ is the given perturbed data with noise level $\delta.$ Convergence of the regularized optimum solution $\varphi_{\alpha(\delta)} \in \mbox{argmin}\, F_{\alpha}(\varphi , f^{\delta})$ to the true solution $\varphi^{\dagger}$ is analysed depending on the smoothness degree of the regularizer, \textit{i.e.} the cases $k \in \{1,2\}$ in $J(\cdot) : \mathcal{C}^{k}(\Omega , \mathcal{H}) \rightarrow \mathbb{R}_{+}.$ In both cases, we define a regularization parameter satisfying the condition
\begin{align}
\alpha(\delta , f^{\delta}) \in \{ \alpha > 0 \mbox{ } \vert \mbox{ } \Vert \mathcal{T}\varphi_{\alpha}^{\delta} - f^{\delta} \Vert \leq \tau\delta \} , \nonumber
\end{align}
for some fixed $\tau \geq 1.$ In the case $k = 2,$ we are able to estimate the discrepancy $\Vert \mathcal{T}\varphi_{\alpha(\delta)} - f^{\delta} \Vert \leq \tau\delta$ by means of the Hessian Lipschitz constant $L_H$ of the functional $F_{\alpha}(\cdot , f^{\delta}).$
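To make the parameter-choice condition concrete, consider the classical quadratic penalizer $J(\varphi) = \frac{1}{2}\Vert \varphi \Vert_{\mathcal{H}}^{2}$ (an illustrative assumption, not the general penalizer of the paper; it is smooth enough to fall under the case $k = 2$). The first-order optimality condition for $F_{\alpha}(\cdot , f^{\delta})$ then gives the minimizer in closed form, and the condition above becomes a scalar constraint in $\alpha$:
\begin{align}
\varphi_{\alpha}^{\delta} = (\mathcal{T}^{*}\mathcal{T} + \alpha I)^{-1} \mathcal{T}^{*} f^{\delta} , \qquad \Vert \mathcal{T}\varphi_{\alpha}^{\delta} - f^{\delta} \Vert \leq \tau\delta . \nonumber
\end{align}
In this quadratic case the map $\alpha \mapsto \Vert \mathcal{T}\varphi_{\alpha}^{\delta} - f^{\delta} \Vert$ is continuous and non-decreasing, so an admissible $\alpha(\delta , f^{\delta})$ can be located by bisection, in the spirit of Morozov's discrepancy principle.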