Lagrangian-based methods in convex optimization: prediction-correction frameworks with ergodic convergence rates
Abstract: We study the convergence rates of the classical Lagrangian-based methods and their variants for solving convex optimization problems with equality constraints. We present a generalized prediction-correction framework to establish $O(1/K^2)$ ergodic convergence rates. Under the strong convexity assumption, this framework yields several Lagrangian-based methods with $O(1/K^2)$ ergodic convergence rates, such as the augmented Lagrangian method with an indefinite proximal term, the alternating direction method of multipliers (ADMM) with a dual step size as large as $(1+\sqrt{5})/2$, the linearized ADMM with an indefinite proximal term, and a multi-block ADMM-type method (under the alternative assumption that the gradient of one block is Lipschitz continuous).
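To make the abstract's mention of ADMM with a dual step size up to $(1+\sqrt{5})/2$ concrete, here is a minimal numerical sketch on a two-block toy problem. This is an illustrative assumption, not the paper's prediction-correction framework: the quadratic objectives, penalty `beta`, and closed-form subproblem solutions are chosen purely so the iterations are transparent.

```python
# Minimal sketch (illustrative, not the paper's method): ADMM with an enlarged
# dual step size gamma in (0, (1+sqrt(5))/2) on the toy problem
#   min_{x,z} 0.5*||x - p||^2 + 0.5*||z - q||^2   s.t.  x - z = 0,
# i.e. both blocks strongly convex, A = I, B = -I, b = 0.
import numpy as np

def admm_large_step(p, q, beta=1.0, gamma=(1 + np.sqrt(5)) / 2 - 1e-3, iters=200):
    """ADMM whose multiplier update uses step size gamma * beta."""
    x = np.zeros_like(p)
    z = np.zeros_like(q)
    lam = np.zeros_like(p)  # multiplier for the constraint x - z = 0
    for _ in range(iters):
        # x-update: argmin_x 0.5||x - p||^2 - lam^T(x - z) + (beta/2)||x - z||^2
        x = (p + lam + beta * z) / (1 + beta)
        # z-update: argmin_z 0.5||z - q||^2 - lam^T(x - z) + (beta/2)||x - z||^2
        z = (q - lam + beta * x) / (1 + beta)
        # dual update with the enlarged step size gamma * beta
        lam = lam - gamma * beta * (x - z)
    return x, z, lam

p = np.array([1.0, -2.0])
q = np.array([3.0, 0.0])
x, z, lam = admm_large_step(p, q)
# For this toy problem the optimum is x = z = (p + q) / 2.
print(x, z, np.linalg.norm(x - z))
```

For this separable quadratic instance both subproblems have closed-form minimizers, so the sketch isolates the only nonstandard ingredient: the multiplier step $\lambda^{k+1} = \lambda^k - \gamma\beta(x^{k+1} - z^{k+1})$ with $\gamma$ taken near the upper bound $(1+\sqrt{5})/2$ discussed in the abstract.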