Complexity of a linearized augmented Lagrangian method for nonconvex minimization with nonlinear equality constraints
Abstract: In this paper, we consider a nonconvex optimization problem with nonlinear equality constraints. We assume that both the objective function and the functional constraints are locally smooth. For solving this problem, we propose a linearized augmented Lagrangian method: we linearize the objective function and the functional constraints in a Gauss-Newton fashion at the current iterate within the augmented Lagrangian function and add a quadratic regularization, yielding a subproblem that is easy to solve and whose solution is the next primal iterate. The update of the dual multipliers is also based on the linearization of the functional constraints. Under a novel dynamic regularization parameter choice, we prove boundedness and global asymptotic convergence of the iterates to a first-order solution of the problem. We also derive convergence guarantees for the iterates of our method to an $\epsilon$-first-order solution in $\mathcal{O}(\sqrt{\rho}\, \epsilon^{-2})$ Jacobian evaluations, where $\rho$ is the penalty parameter. Moreover, when the problem exhibits a benign nonconvexity property, we derive improved convergence results to an $\epsilon$-second-order solution. Finally, we validate the performance of the proposed algorithm by numerically comparing it with existing methods and software from the literature.
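The abstract's description of one iteration can be sketched numerically. Below is a minimal NumPy illustration of the generic scheme on a toy problem (minimize $\frac{1}{2}\|x-c\|^2$ subject to $\|x\|^2 = 1$): the objective gradient and the linearized constraint $h(x_k) + J(x_k) d$ are substituted into the augmented Lagrangian, a quadratic regularization $\frac{\beta}{2}\|d\|^2$ is added, the resulting strongly convex quadratic subproblem is solved in closed form, and the multiplier is updated with the linearized residual. The fixed penalty `rho`, regularization `beta`, and stopping rule are illustrative choices, not the paper's dynamic parameter rule.

```python
import numpy as np

# Toy instance: minimize f(x) = 0.5*||x - c||^2  s.t.  h(x) = ||x||^2 - 1 = 0,
# with c = (2, 0); the solution is x* = (1, 0) with multiplier lambda* = 0.5.
c = np.array([2.0, 0.0])
f_grad = lambda x: x - c                                  # gradient of f
h = lambda x: np.array([x @ x - 1.0])                     # constraint value
J = lambda x: 2.0 * x.reshape(1, -1)                      # constraint Jacobian

def linearized_al(x, lam, rho=10.0, beta=5.0, iters=500):
    """Generic linearized augmented Lagrangian iteration (illustrative sketch)."""
    for _ in range(iters):
        g, hk, Jk = f_grad(x), h(x), J(x)
        # Primal step: minimize g^T d + lam^T (h + J d)
        #   + (rho/2)||h + J d||^2 + (beta/2)||d||^2 over d, i.e. solve
        #   (rho J^T J + beta I) d = -(g + J^T (lam + rho h)).
        A = rho * Jk.T @ Jk + beta * np.eye(x.size)
        d = np.linalg.solve(A, -(g + Jk.T @ (lam + rho * hk)))
        x = x + d
        # Dual step: multiplier update driven by the *linearized* residual.
        lam = lam + rho * (hk + Jk @ d)
    return x, lam

x, lam = linearized_al(np.array([0.8, 0.2]), np.zeros(1))
```

Each primal step only requires one gradient and one Jacobian evaluation plus a small linear solve, which is what makes the per-iteration cost cheap and the Jacobian-evaluation complexity the natural measure in the $\mathcal{O}(\sqrt{\rho}\,\epsilon^{-2})$ bound quoted above.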