Lagrangian-based methods in convex optimization: prediction-correction frameworks with ergodic convergence rates

Published 10 Jun 2022 in math.OC (arXiv:2206.05088v2)

Abstract: We study the convergence rates of the classical Lagrangian-based methods and their variants for solving convex optimization problems with equality constraints. We present a generalized prediction-correction framework to establish $O(1/K^2)$ ergodic convergence rates. Under a strong convexity assumption, the framework yields several Lagrangian-based methods with $O(1/K^2)$ ergodic convergence rates, including the augmented Lagrangian method with an indefinite proximal term, the alternating direction method of multipliers (ADMM) with a step size as large as $(1+\sqrt{5})/2$, the linearized ADMM with an indefinite proximal term, and a multi-block ADMM-type method (under the alternative assumption that the gradient of one block is Lipschitz continuous).
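
The step size bound $(1+\sqrt{5})/2$ in the abstract refers to the enlarged (Fortin-Glowinski) dual step in the ADMM multiplier update. As a rough illustration only, and not the paper's prediction-correction scheme, the sketch below runs ADMM with such a dual step size on a toy strongly convex consensus problem; the problem, function names, and parameter values are assumptions chosen for illustration.

```python
import numpy as np

# Illustrative ADMM sketch (not from the paper) for the consensus problem
#   min_{x,z} 0.5*||x - a||^2 + 0.5*||z - b||^2   s.t.  x - z = 0,
# whose solution is x = z = (a + b) / 2.  The point of the sketch is the
# dual step size tau, which may be taken up to (1 + sqrt(5)) / 2.

def admm_consensus(a, b, rho=1.0, tau=(1.0 + np.sqrt(5)) / 2 - 1e-3, iters=200):
    """Two-block ADMM with scaled multiplier u = y / rho and dual step tau."""
    x = np.zeros_like(a)
    z = np.zeros_like(b)
    u = np.zeros_like(a)
    for _ in range(iters):
        # x-subproblem: closed form for the quadratic objective
        x = (a + rho * (z - u)) / (1.0 + rho)
        # z-subproblem: closed form for the quadratic objective
        z = (b + rho * (x + u)) / (1.0 + rho)
        # multiplier update with tau strictly below (1 + sqrt(5)) / 2
        u = u + tau * (x - z)
    return x, z

if __name__ == "__main__":
    a = np.array([1.0, 2.0, 3.0])
    b = np.array([3.0, 2.0, 1.0])
    x, z = admm_consensus(a, b)
    print(x, z)  # both iterates approach (a + b) / 2 = [2, 2, 2]
```

The closed-form subproblem updates are specific to this toy quadratic; for general $f$ and $g$ each update would be a proximal or linearized step, as in the variants listed in the abstract.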
