Two Innovations in Inexact Augmented Lagrangian Methods for Convex Optimization (2503.11809v1)
Abstract: This paper presents two new techniques relating to inexact solution of subproblems in augmented Lagrangian methods for convex programming. The first involves combining a relative error criterion for solution of the subproblems with over- or under-relaxation of the multiplier update step. In one interpretation of our proposed iterative scheme, a predetermined amount of relaxation affects the criterion for an acceptably accurate solution value. Alternatively, the amount of multiplier step relaxation can be adapted to the accuracy of the subproblem solution, subject to a viability test employing the discriminant of a certain quadratic function. The second innovation involves solution of augmented Lagrangian subproblems for problems posed in standard Fenchel-Rockafellar form. We show that applying alternating minimization to this subproblem, as in the first two steps of the ADMM, is equivalent to executing the classical proximal gradient method on a dual formulation of the subproblem. By substituting more sophisticated variants of the proximal gradient method for the classical one, it is possible to construct new ADMM-like methods with better empirical performance than ordinary alternating minimization within an inexact augmented Lagrangian framework. The paper concludes by describing computational experiments that explore the use of these two innovations, both separately and jointly, to solve LASSO problems.
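To make the setting concrete, below is a minimal sketch of an inexact augmented Lagrangian method for LASSO, min_x (1/2)||Ax - b||^2 + gamma*||x||_1, under the standard consensus splitting x = z. It is not the paper's algorithm: each subproblem is solved only approximately by a fixed number of alternating-minimization passes (the first two steps of ADMM), and the multiplier step is relaxed by a fixed factor theta. The fixed parameters rho, theta, and inner_iters are illustrative stand-ins for the paper's relative-error acceptance criterion and adaptive relaxation.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inexact_alm_lasso(A, b, gamma, rho=1.0, theta=1.5,
                      inner_iters=5, outer_iters=100):
    """Sketch of an inexact ALM for min_x 0.5*||Ax - b||^2 + gamma*||x||_1
    with the splitting x = z.  Each augmented Lagrangian subproblem is
    minimized only approximately, by inner_iters passes of alternating
    minimization over x and z; the (scaled) multiplier update is then
    relaxed by theta in (0, 2).  With theta = 1 and inner_iters = 1 this
    reduces to standard scaled ADMM."""
    m, n = A.shape
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)  # scaled multiplier for the constraint x - z = 0
    # Factor the x-update system (A^T A + rho I) once, outside the loops.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(outer_iters):
        # Approximate subproblem solve: alternate x- and z-minimizations.
        for _ in range(inner_iters):
            rhs = Atb + rho * (z - u)
            x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # ridge solve
            z = soft_threshold(x + u, gamma / rho)             # prox step
        # Relaxed multiplier step (over-relaxation for theta > 1).
        u = u + theta * (x - z)
    return z
```

A quick usage example: for random data `A = np.random.randn(50, 200)`, `b = np.random.randn(50)`, calling `inexact_alm_lasso(A, b, gamma=0.1)` returns a sparse estimate; varying theta and inner_iters illustrates the trade-off the paper studies between subproblem accuracy and multiplier-step relaxation.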