Augmented Lagrangian methods for infeasible convex optimization problems and diverging proximal-point algorithms (2506.22428v1)
Abstract: This work investigates the convergence behavior of augmented Lagrangian methods (ALMs) when applied to convex optimization problems that may be infeasible. ALMs are a popular class of algorithms for solving constrained optimization problems. We establish progressively stronger convergence results, ranging from basic sequence convergence to precise convergence rates, under a hierarchy of assumptions. In particular, we demonstrate that, under mild assumptions, the sequences of iterates generated by ALMs converge to solutions of the "closest feasible problem". This study leverages the classical relationship between ALMs and the proximal-point algorithm applied to the dual problem. A key technical contribution is a set of concise results on the behavior of the proximal-point algorithm when applied to functions that may not have minimizers. These results describe its convergence in terms of the subgradients it generates and of the values of the convex conjugate along the iterates.
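For context, here is a minimal sketch of the classical ALM/proximal-point correspondence the abstract refers to, written for the standard linearly constrained model problem; the problem form and all symbols (f, A, b, rho) are illustrative assumptions, not taken from the paper itself.

\[
  \min_{x}\; f(x) \quad \text{subject to} \quad Ax = b,
  \qquad
  \mathcal{L}_\rho(x,\lambda) \;=\; f(x) \;+\; \langle \lambda,\, Ax - b\rangle \;+\; \tfrac{\rho}{2}\,\|Ax - b\|^2 .
\]
One ALM iteration consists of a primal minimization and a multiplier update:
\[
  x^{k+1} \in \operatorname*{arg\,min}_{x}\; \mathcal{L}_\rho(x,\lambda^k),
  \qquad
  \lambda^{k+1} \;=\; \lambda^k + \rho\,\big(Ax^{k+1} - b\big).
\]
Writing \( g(\lambda) = \inf_x \{ f(x) + \langle \lambda,\, Ax - b\rangle \} \) for the (concave) dual function, the multiplier update is exactly a proximal-point step on the dual:
\[
  \lambda^{k+1}
  \;=\; \operatorname*{arg\,max}_{\lambda}\,\Big\{\, g(\lambda) - \tfrac{1}{2\rho}\,\|\lambda - \lambda^k\|^2 \,\Big\}
  \;=\; \operatorname{prox}_{\rho\,(-g)}\big(\lambda^k\big).
\]
When the primal problem is infeasible, the dual typically has no maximizer, so the proximal-point iterates \( \lambda^k \) diverge; this is precisely the regime of functions without minimizers that the abstract's final sentences address.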