Revisiting $L_q(0\leq q<1)$ Norm Regularized Optimization (2306.14394v5)

Published 26 Jun 2023 in math.OC

Abstract: Sparse optimization has advanced considerably in recent decades. For scenarios where the true sparsity level is unknown, regularization is a promising solution. Two popular non-convex regularizers are the so-called $L_0$ norm and the $L_q$ norm with $q\in(0,1)$, and the optimization problems they induce have been studied extensively. However, most of this work assumes a main function that is twice continuously differentiable, and the best convergence rate established for an algorithm solving the problem with $q\in(0,1)$ is superlinear. This paper studies $L_q$ norm regularized optimization in a unified way for any $q\in[0,1)$, where the main function is only required to have a semismooth gradient. In particular, we establish first-order and second-order optimality conditions under mild assumptions, and then integrate the proximal operator with the semismooth Newton method to develop a proximal semismooth Newton pursuit algorithm. Under a second-order sufficient condition, the whole sequence generated by the algorithm converges to a unique local minimizer. Moreover, the convergence is superlinear or quadratic when the gradient of the main function is semismooth or strongly semismooth at the local minimizer, respectively. Hence, this paper establishes a quadratic convergence rate for an algorithm solving the $L_q$ norm regularized problem with any $q\in(0,1)$. Finally, numerical experiments showcase its favorable performance compared with several existing solvers.
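
The key computational primitive behind a proximal method for this problem is the proximal operator of $\lambda\|\cdot\|_q^q$, which separates coordinate-wise into scalar subproblems $\min_x \frac{1}{2}(x-t)^2 + \lambda|x|^q$. The sketch below is an illustration under these standard facts, not the authors' implementation: for $q=0$ the prox reduces to hard thresholding at $\sqrt{2\lambda}$, while for $q\in(0,1)$ the minimizer is either $0$ or the larger root of the stationarity equation $x + \lambda q x^{q-1} = |t|$, located here by bisection. The function names (`prox_lq_scalar`, `prox_lq`) are hypothetical.

```python
import numpy as np

def prox_lq_scalar(t, lam, q, tol=1e-12):
    """Minimize 0.5*(x - t)**2 + lam*|x|**q over scalar x.

    Illustrative sketch: q = 0 gives hard thresholding; for q in (0, 1)
    the minimizer is either 0 or the larger stationary point of the
    smooth branch, found by bisection and compared against 0.
    """
    if q == 0.0:
        # Hard thresholding: keeping t costs lam, zeroing costs 0.5*t**2.
        return t if t * t > 2.0 * lam else 0.0
    s, a = np.sign(t), abs(t)
    # h(x) = x + lam*q*x**(q-1) attains its minimum on (0, inf) at x_min.
    x_min = (lam * q * (1.0 - q)) ** (1.0 / (2.0 - q))
    if x_min + lam * q * x_min ** (q - 1.0) > a:
        return 0.0  # stationarity equation has no root; prox is 0
    # h is increasing on [x_min, a] with h(x_min) <= a <= h(a);
    # bisect for the larger root of h(x) = a.
    lo, hi = x_min, a
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid + lam * q * mid ** (q - 1.0) > a:
            hi = mid
        else:
            lo = mid
    x = 0.5 * (lo + hi)
    # Accept the stationary point only if it beats the value at 0.
    if 0.5 * (x - a) ** 2 + lam * x ** q < 0.5 * a ** 2:
        return s * x
    return 0.0

def prox_lq(v, lam, q):
    """Apply the scalar prox coordinate-wise (the L_q term is separable)."""
    return np.array([prox_lq_scalar(vi, lam, q) for vi in v])

# Example: prox of 0.3*|.|^{1/2}; small entries are thresholded to 0.
print(prox_lq(np.array([2.0, 0.5, -1.5, 0.05]), lam=0.3, q=0.5))
```

For special values such as $q=1/2$ and $q=2/3$, closed-form thresholding formulas are known in the literature and would avoid the bisection; the numeric version above is meant only to work uniformly for any $q\in(0,1)$.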

Citations (1)
