A globally convergent and locally quadratically convergent modified B-semismooth Newton method for $\ell_1$-penalized minimization
Abstract: We consider the efficient minimization of a nonlinear, strictly convex functional with an $\ell_1$-penalty term. Such minimization problems arise in a wide range of applications, such as Tikhonov regularization of (non)linear inverse problems with sparsity constraints. In [Inverse Problems 31 (2015) 025005], a globalized Bouligand-semismooth Newton method was presented for $\ell_1$-Tikhonov regularization of linear inverse problems; however, a technical assumption on the accumulation point of the sequence of iterates was needed to prove global convergence. Here, we extend this method to nonlinear problems and present a modified semismooth Newton method for which global convergence is proven without any additional requirements. Moreover, under a technical assumption, full Newton steps are eventually accepted and locally quadratic convergence is achieved. Numerical examples from image deblurring and robust regression demonstrate the performance of the method.
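To illustrate the class of methods the abstract refers to, the following is a minimal, generic sketch (not the authors' modified algorithm) of a B-semismooth Newton iteration for the model problem $\min_x \tfrac12\|Ax-b\|^2 + \lambda\|x\|_1$. It applies a Newton step, built from an element of the B-subdifferential, to the nonsmooth fixed-point equation $F(x) = x - S_{\lambda\tau}(x - \tau\nabla f(x)) = 0$, where $S$ is soft-thresholding, and globalizes with a simple backtracking line search on $\|F\|$; the function names and the choice of globalization are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    """Componentwise soft-thresholding, the prox operator of t*||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ssn_l1(A, b, lam, tau=None, tol=1e-10, max_iter=100):
    """Generic semismooth Newton sketch for min 0.5*||Ax-b||^2 + lam*||x||_1.

    Solves the nonsmooth equation F(x) = x - S_{lam*tau}(x - tau*grad f(x)) = 0
    with B-subdifferential Newton steps and backtracking on ||F||.
    (Illustrative only; not the modified method of the paper.)
    """
    n = A.shape[1]
    H = A.T @ A
    if tau is None:
        tau = 1.0 / np.linalg.norm(H, 2)  # step size scaled by the largest eigenvalue
    x = np.zeros(n)
    for _ in range(max_iter):
        grad = H @ x - A.T @ b
        z = x - tau * grad
        F = x - soft_threshold(z, lam * tau)
        if np.linalg.norm(F) < tol:
            break
        # One element of the B-subdifferential of F at x: the diagonal
        # selector d marks components where soft-thresholding is smooth.
        d = (np.abs(z) > lam * tau).astype(float)
        J = np.eye(n) - d[:, None] * (np.eye(n) - tau * H)
        step = np.linalg.solve(J, -F)
        # Backtracking line search on the residual norm (globalization).
        alpha, f0 = 1.0, np.linalg.norm(F)
        while alpha > 1e-8:
            x_new = x + alpha * step
            z_new = x_new - tau * (H @ x_new - A.T @ b)
            if np.linalg.norm(x_new - soft_threshold(z_new, lam * tau)) < f0:
                break
            alpha *= 0.5
        x = x + alpha * step
    return x
```

For $A = I$ the problem decouples and the minimizer is the soft-thresholded data, which the sketch recovers in one full Newton step; near a solution with the correct active set, full steps are accepted and convergence is locally superlinear, which is the behavior the paper's globalization is designed to preserve.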