Lagrangian-based methods in convex optimization: prediction-correction frameworks with non-ergodic convergence rates
Abstract: Lagrangian-based methods are classical methods for solving convex optimization problems with equality constraints. We present novel prediction-correction frameworks for such methods and their variants, which can achieve $O(1/k)$ non-ergodic convergence rates for general convex optimization and $O(1/k^2)$ non-ergodic convergence rates under the assumption that the objective function is strongly convex or has a Lipschitz continuous gradient. We give two approaches (updating the multiplier once or twice per iteration) to design algorithms that satisfy the presented prediction-correction frameworks. As applications, we establish non-ergodic convergence rates for several well-known Lagrangian-based methods, in particular the ADMM-type and multi-block ADMM-type methods.
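To make the setting concrete, the following is a minimal NumPy sketch of the classical augmented Lagrangian method (method of multipliers) for a quadratic objective with equality constraints, illustrating the "update the multiplier once per iteration" pattern the abstract refers to. This is not the paper's prediction-correction scheme; the test problem, penalty parameter `rho`, and all variable names are illustrative assumptions.

```python
import numpy as np

# Sketch of the augmented Lagrangian method (method of multipliers) for
#     min_x  (1/2)||x - d||^2   s.t.  A x = b.
# Each iteration takes one primal step and one multiplier update; this is
# an illustrative classical baseline, not the paper's prediction-correction
# framework. Problem data is synthetic.

rng = np.random.default_rng(0)
m, n = 5, 20
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
d = rng.standard_normal(n)
rho = 1.0                      # penalty parameter (assumed fixed)

x = np.zeros(n)
lam = np.zeros(m)              # Lagrange multiplier

# For this quadratic objective the x-subproblem is a linear solve:
#   (I + rho * A^T A) x = d - A^T lam + rho * A^T b
M = np.eye(n) + rho * A.T @ A

for k in range(200):
    x = np.linalg.solve(M, d - A.T @ lam + rho * A.T @ b)  # primal step
    lam = lam + rho * (A @ x - b)                          # multiplier step (updated once)

print("feasibility ||Ax - b|| =", np.linalg.norm(A @ x - b))
```

The non-ergodic rates studied in the paper concern the last iterate $x^k$ of such schemes, as opposed to ergodic rates, which hold only for the running average of the iterates.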