
Indefinite linearized augmented Lagrangian method for convex programming with linear inequality constraints (2105.02425v3)

Published 6 May 2021 in math.OC

Abstract: The augmented Lagrangian method (ALM) is a benchmark for convex programming problems with linear constraints; ALM and its variants for linearly equality-constrained convex minimization models have been well studied in the literature. Much less attention, however, has been paid to ALM for efficiently solving linearly inequality-constrained convex minimization models. In this paper, we exploit an enlightening reformulation of the recently developed indefinite linearized ALM for the equality-constrained convex optimization problem, and present a new indefinite linearized ALM scheme for efficiently solving the convex optimization problem with linear inequality constraints. The proposed method offers two main advantages, especially for large-scale optimization: first, it greatly simplifies the challenging key subproblem of the classic ALM through a linearized reformulation while keeping the computational cost per iteration low; second, we show that a smaller proximal regularization term suffices for provable convergence, which permits a larger step size and hence significantly better performance. Moreover, we establish the global convergence of the proposed scheme via an equivalent compact prediction-correction expression, together with a worst-case $\mathcal{O}(1/N)$ convergence rate. Numerical results on several application problems demonstrate that a smaller regularization term leads to better experimental performance, further confirming the theoretical results presented in this study.
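To make the idea concrete, the following is a minimal sketch (not the paper's exact scheme) of a linearized ALM iteration for an inequality-constrained problem, here the toy model $\min \tfrac{1}{2}\|x-c\|^2$ s.t. $Ax \le b$. The penalty parameter `beta`, the proximal parameter `tau`, and the simple quadratic objective are illustrative assumptions chosen so that the linearized subproblem has a closed form; the paper's contribution is precisely that a smaller proximal term (smaller `tau`) than the classical requirement can be used while retaining convergence.

```python
import numpy as np

# Toy problem: min (1/2)||x - c||^2  s.t.  A x <= b.
c = np.array([2.0, 2.0])          # objective center (assumed data)
A = np.array([[1.0, 1.0]])        # single constraint: x1 + x2 <= 1
b = np.array([1.0])

beta = 1.0                        # penalty parameter
tau = 2.5                         # proximal parameter; classical analyses ask
                                  # tau > beta * ||A^T A|| (= 2 here), while the
                                  # paper argues a smaller value can suffice

x = np.zeros(2)
lam = np.zeros(1)                 # multiplier for the inequality constraint

for _ in range(500):
    # Linearize the augmented penalty (beta/2)||max(Ax - b + lam/beta, 0)||^2
    # at the current iterate, then solve the resulting proximal subproblem.
    s = np.maximum(A @ x - b + lam / beta, 0.0)
    grad = beta * A.T @ s                     # gradient of the penalty at x
    y = x - grad / tau                        # explicit (gradient) step
    x = (c + tau * y) / (1.0 + tau)           # closed-form prox of the objective
    # Multiplier update, projected onto the nonnegative orthant.
    lam = np.maximum(lam + beta * (A @ x - b), 0.0)

# KKT solution: projection of c onto the half-space gives x* = (0.5, 0.5),
# with multiplier lam* = 1.5.
print(x, lam)
```

Because each iteration only needs matrix-vector products with $A$ and $A^{\top}$ plus a proximal step on the objective, this structure scales to large problems, which is the regime the paper targets.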
