Convergence rates for an inexact linearized ADMM for nonsmooth nonconvex optimization with nonlinear equality constraints (2503.01060v1)
Abstract: In this paper, we consider nonconvex optimization problems with a nonsmooth nonconvex objective function and nonlinear equality constraints. We assume that both the objective function and the functional constraints can be separated into two blocks. To solve this problem, we introduce a new inexact linearized alternating direction method of multipliers (ADMM) algorithm. Specifically, at each iteration, we linearize the smooth part of the objective function and the nonlinear part of the functional constraints within the augmented Lagrangian and add a dynamic quadratic regularization. We then compute the new iterate of the block associated with the nonlinear constraints inexactly. This strategy yields subproblems that are easy to solve, and their (inexact) solutions become the next iterates. Using Lyapunov arguments, we establish convergence guarantees for the iterates of our method toward an $\epsilon$-first-order solution within $\mathcal{O}(\epsilon^{-2})$ iterations. Moreover, we show that when the problem data satisfy, e.g., semi-algebraic properties or, more generally, the Kurdyka–Łojasiewicz (KL) condition, the entire sequence generated by our algorithm converges, and we provide convergence rates. To validate both the theory and the performance of our algorithm, we present numerical simulations for several nonlinear model predictive control and matrix factorization problems.
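As a rough illustration of the kind of update the abstract describes (the notation $\varphi$, $f$, $g$, $h$, $A$, $b$, $\rho$, $\beta_k$ below is assumed for exposition and is not taken from the paper), consider a two-block problem
$$\min_{x,y}\ \varphi(x) + f(x) + g(y) \quad \text{s.t.} \quad h(x) + Ay = b,$$
with $f$ smooth, $\varphi$ nonsmooth, and $h$ nonlinear, and augmented Lagrangian
$$\mathcal{L}_\rho(x,y,\lambda) = \varphi(x) + f(x) + g(y) + \langle \lambda,\, h(x)+Ay-b\rangle + \tfrac{\rho}{2}\,\|h(x)+Ay-b\|^2.$$
Linearizing $f$ and $h$ at the current iterate $x^k$ and adding a quadratic regularization with dynamic weight $\beta_k$ yields a subproblem of the schematic form
$$x^{k+1} \approx \arg\min_x\ \varphi(x) + \langle \nabla f(x^k),\, x-x^k\rangle + \langle \lambda^k + \rho\, r^k,\, \nabla h(x^k)(x-x^k)\rangle + \tfrac{\beta_k}{2}\,\|x-x^k\|^2,$$
where $r^k = h(x^k)+Ay^k-b$ is the constraint residual and the $\approx$ indicates that this block is solved only inexactly; the $y$-block is updated analogously and the multiplier via $\lambda^{k+1} = \lambda^k + \rho\, r^{k+1}$. This is only a generic sketch of a linearized ADMM step, not the paper's exact scheme.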