On the complexity of proximal gradient and proximal Newton-CG methods for \(\ell_1\)-regularized Optimization (2504.15752v1)
Abstract: In this paper, we propose two second-order methods for solving the \(\ell_1\)-regularized composite optimization problem, developed from two distinct definitions of approximate second-order stationary points. We introduce a hybrid proximal gradient and negative curvature method to find a strong approximate second-order stationary point, and a proximal Newton-CG method to find a weak approximate second-order stationary point, for \(\ell_1\)-regularized optimization problems. We provide comprehensive analyses of the iteration complexity and computational complexity of both methods, as well as the local superlinear convergence rates of their first phases under specific error bound conditions. In particular, we show that the proximal Newton-CG method achieves the best-known iteration complexity for attaining the proposed weak approximate second-order stationary point, matching the complexity of finding an approximate second-order stationary point in unconstrained optimization. A toy example illustrates that both methods can effectively escape approximate first-order solutions, and numerical experiments on the \(\ell_1\)-regularized Student's t-regression problem validate their effectiveness.
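As background for the methods summarized above, here is a minimal sketch of the standard first-order proximal gradient iteration for \(\min_x f(x) + \lambda\|x\|_1\), whose proximal step is componentwise soft-thresholding. This is a generic reference implementation, not the paper's hybrid negative-curvature method or its proximal Newton-CG method, and the function names and the least-squares test instance are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1 (componentwise soft-thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def proximal_gradient(grad_f, x0, lam, step, n_iters=500):
    # Proximal gradient iteration for min_x f(x) + lam * ||x||_1.
    # grad_f: gradient of the smooth part f; step: fixed step size,
    # e.g. 1/L when grad_f is L-Lipschitz.
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_iters):
        x = soft_threshold(x - step * grad_f(x), step * lam)
    return x

if __name__ == "__main__":
    # Hypothetical toy instance: f(x) = 0.5 * ||A x - b||^2.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 10))
    b = rng.standard_normal(40)
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of grad f
    x = proximal_gradient(lambda x: A.T @ (A @ x - b),
                          np.zeros(10), lam=1.0, step=1.0 / L)
    print("nonzeros in solution:", np.count_nonzero(x))
```

Iterations of this kind only target first-order stationarity; per the abstract, the paper's two methods augment such steps with negative-curvature directions or Newton-CG steps so as to reach (strong or weak) approximate second-order stationary points.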