Revisiting $L_q(0\leq q<1)$ Norm Regularized Optimization (2306.14394v5)
Abstract: Sparse optimization has advanced considerably in recent decades. For scenarios where the true sparsity level is unknown, regularization proves to be a promising approach. Two popular non-convex regularizers are the $L_0$ norm and the $L_q$ norm with $q\in(0,1)$, which have given rise to extensive research on the optimization problems they induce. However, most of this work assumes a main function that is twice continuously differentiable, and the best known convergence rate for an algorithm solving the problem with $q\in(0,1)$ is superlinear. This paper studies $L_q$ norm regularized optimization in a unified way for any $q\in[0,1)$, where the main function is only required to have a semismooth gradient. In particular, we establish first-order and second-order optimality conditions under mild assumptions and then integrate the proximal operator with the semismooth Newton method to develop a proximal semismooth Newton pursuit algorithm. Under a second-order sufficient condition, the whole sequence generated by the algorithm converges to a unique local minimizer. Moreover, the convergence is superlinear and quadratic if the gradient of the main function is semismooth and strongly semismooth at the local minimizer, respectively. Hence, this paper achieves a quadratic convergence rate for an algorithm designed to solve the $L_q$ norm regularized problem for any $q\in(0,1)$. Finally, numerical experiments demonstrate its favorable performance compared with several existing solvers.
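For orientation, the problem class described in the abstract can be written in the following generic form; this is a sketch using the standard notation of the $L_q$ regularization literature (the symbols $f$, $\lambda$, and the componentwise definition of $\|x\|_q^q$ are assumptions, not taken verbatim from the paper):

```latex
% Generic L_q (0 <= q < 1) norm regularized problem, assuming the standard setting:
%   f : R^n -> R is the main loss whose gradient is (strongly) semismooth,
%   lambda > 0 is the regularization parameter.
\begin{equation*}
  \min_{x \in \mathbb{R}^n} \; f(x) + \lambda \|x\|_q^q,
  \qquad
  \|x\|_q^q :=
  \begin{cases}
    \|x\|_0 = \#\{\, i : x_i \neq 0 \,\}, & q = 0, \\[4pt]
    \displaystyle\sum_{i=1}^{n} |x_i|^q, & q \in (0,1).
  \end{cases}
\end{equation*}
```

Both branches of the regularizer are non-convex and non-smooth, which is why the abstract emphasizes proximal steps (to handle the regularizer) combined with a semismooth Newton method (to obtain fast local convergence under the stated semismoothness assumptions on $\nabla f$).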