Augmented Lagrangian methods for infeasible convex optimization problems and diverging proximal-point algorithms (2506.22428v1)

Published 27 Jun 2025 in math.OC, cs.NA, and math.NA

Abstract: This work investigates the convergence behavior of augmented Lagrangian methods (ALMs) when applied to convex optimization problems that may be infeasible. ALMs are a popular class of algorithms for solving constrained optimization problems. We establish progressively stronger convergence results, ranging from basic sequence convergence to precise convergence rates, under a hierarchy of assumptions. In particular, we demonstrate that, under mild assumptions, the sequences of iterates generated by ALMs converge to solutions of the "closest feasible problem". This study leverages the classical relationship between ALMs and the proximal-point algorithm applied to the dual problem. A key technical contribution is a set of concise results on the behavior of the proximal-point algorithm when applied to functions that may not have minimizers. These results pertain to its convergence in terms of its subgradients and of the values of the convex conjugate.
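As a rough numerical illustration of the setting the abstract describes (not the authors' method), one can run the proximal-point algorithm on a convex function whose infimum is not attained, such as f(z) = exp(z). The iterates then diverge to negative infinity, yet the gradients exp(z_k) still converge to 0, the minimal-norm element of the closure of the range of the subdifferential. The sketch below assumes a unit prox parameter and solves each prox subproblem by Newton's method:

```python
import math

def prox_step(x, lam, tol=1e-12):
    # Prox subproblem for f(z) = exp(z):
    #   z = argmin_z  exp(z) + (z - x)**2 / (2 * lam)
    # First-order condition: lam * exp(z) + z - x = 0, solved by Newton's method.
    z = min(x, 0.0)  # starting point on the safe side
    for _ in range(100):
        g = lam * math.exp(z) + z - x
        dg = lam * math.exp(z) + 1.0
        step = g / dg
        z -= step
        if abs(step) < tol:
            break
    return z

# Proximal-point iteration on f(z) = exp(z): inf f = 0 is not attained,
# so the iterates z_k diverge (z_k -> -inf), while the gradients
# f'(z_k) = exp(z_k) vanish and f(z_k) tends to the infimum.
lam = 1.0
z = 0.0
for k in range(200):
    z = prox_step(z, lam)

print(z, math.exp(z))  # z is far negative, exp(z) is near 0
```

A back-of-the-envelope estimate (the update gives exp(-z_{k+1}) ≈ exp(-z_k) + lam) predicts exp(z_k) ≈ 1/(1 + k), so after 200 steps the gradient is about 0.005: divergence of the iterates coexists with convergence of the subgradients, which is the phenomenon the paper's proximal-point results quantify.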
