An inexact $q$-order regularized proximal Newton method for nonconvex composite optimization (2311.06871v4)
Abstract: This paper concerns the composite problem of minimizing the sum of a twice continuously differentiable function $f$ and a nonsmooth convex function. For this class of nonconvex and nonsmooth problems, by leveraging a practical inexactness criterion and a novel selection strategy for iterates, we propose an inexact $q$-order regularized proximal Newton method for $q\in[2,3]$, which becomes an inexact cubic regularization (CR) method for $q=3$. We prove that if the objective function has the Kurdyka-\L{}ojasiewicz (KL) property, the whole iterate sequence converges to a stationary point; and if the objective function has the KL property of exponent $\theta\in(0,\frac{q-1}{q})$, the convergence has a local $Q$-superlinear rate of order $\frac{q-1}{\theta q}$. In particular, under a local H\"{o}lderian error bound of order $\gamma\in(\frac{1}{q-1},1]$ on a second-order stationary point set, we show that the iterate and objective value sequences converge to a second-order stationary point and a second-order stationary value, respectively, with a local $Q$-superlinear rate of order $\gamma(q-1)$, which becomes the $Q$-quadratic rate for $q=3$ and $\gamma=1$. This is the first practical inexact CR method with a $Q$-quadratic convergence rate for nonconvex composite optimization. We validate the efficiency of the CR method, with ZeroFPR as the inner solver, by applying it to composite optimization problems with highly nonlinear $f$.
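
For concreteness, the display below is a minimal sketch of the $q$-order regularized proximal Newton subproblem that such a method typically solves inexactly at the current iterate $x^k$; the symbols $g$ for the nonsmooth convex term and $\sigma_k>0$ for the regularization parameter are illustrative labels assumed here, not taken from the abstract, and the paper's own subproblem may differ in detail.

% Sketch under assumed notation: g is the nonsmooth convex term of the objective,
% sigma_k > 0 a regularization parameter; x^{k+1} is an inexact minimizer of the model.
\begin{equation*}
  x^{k+1} \approx \operatorname*{arg\,min}_{x}\;
    \langle \nabla f(x^k),\, x - x^k \rangle
    + \tfrac{1}{2}\,\langle \nabla^2 f(x^k)(x - x^k),\, x - x^k \rangle
    + \tfrac{\sigma_k}{q}\,\|x - x^k\|^{q}
    + g(x).
\end{equation*}

For $q=3$ this model reduces to the familiar cubic regularization (CR) model, while the practical inexactness criterion mentioned in the abstract governs how accurately each subproblem must be solved (e.g., by an inner solver such as ZeroFPR).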