A rank-two relaxed parallel splitting version of the augmented Lagrangian method with step size in (0,2) for separable convex programming (2205.02723v2)

Published 5 May 2022 in math.OC

Abstract: The augmented Lagrangian method (ALM) is a classic approach to canonical convex programming problems with linear constraints, and it finds many applications across scientific computing. A major advantage of the ALM is that the step for updating the dual variable can be further relaxed with a step size in $(0,2)$, and this relaxation readily yields numerical acceleration. When a separable convex programming problem is considered and a corresponding splitting version of the classic ALM is applied, convergence is no longer guaranteed, so it seems impossible to carry a step size in $(0,2)$ over to the relaxation step for updating the dual variable. We show that for a parallel splitting version of the ALM, a step size in $(0,2)$ can be maintained for further relaxing both the primal and dual variables, provided the relaxation step is corrected by a rank-two matrix. Hence, we propose a rank-two relaxed parallel splitting version of the ALM with a step size in $(0,2)$ for separable convex programming problems. Numerical tests on several applications validate that the new algorithm significantly outperforms existing algorithms of the same kind.
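In the classic setting $\min f(x)$ s.t. $Ax=b$, the relaxed dual step the abstract refers to reads $\lambda^{k+1}=\lambda^{k}+\gamma\beta(Ax^{k+1}-b)$ with relaxation factor $\gamma\in(0,2)$ and penalty $\beta>0$. Below is a minimal sketch of this relaxed ALM, assuming a simple quadratic objective $f(x)=\tfrac{1}{2}\|x-c\|^2$ so that the $x$-subproblem has a closed form; the function name and parameter choices are illustrative, not from the paper.

```python
import numpy as np

def alm_relaxed(A, b, c, beta=1.0, gamma=1.6, iters=200):
    """Classic ALM for  min 0.5*||x - c||^2  s.t.  A x = b,
    with the relaxed dual update  lam += gamma * beta * (A x - b),
    gamma in (0, 2). Illustrative sketch: the x-subproblem has a
    closed form only because the objective is a simple quadratic."""
    m, n = A.shape
    lam = np.zeros(m)
    # The x-subproblem reduces to the fixed linear system
    # (I + beta * A^T A) x = c - A^T lam + beta * A^T b
    M = np.eye(n) + beta * A.T @ A
    for _ in range(iters):
        x = np.linalg.solve(M, c - A.T @ lam + beta * A.T @ b)
        lam = lam + gamma * beta * (A @ x - b)  # relaxed step, gamma in (0, 2)
    return x, lam

# Sanity check on a small random feasible instance
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 8))
b = A @ rng.standard_normal(8)
c = rng.standard_normal(8)
x, lam = alm_relaxed(A, b, c)
print("constraint residual:", np.linalg.norm(A @ x - b))
```

Taking $\gamma=1$ recovers the standard ALM; over-relaxed values $\gamma>1$ are what typically produce the acceleration mentioned above.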
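For the separable model $\min f_1(x_1)+f_2(x_2)$ s.t. $A_1x_1+A_2x_2=b$, the parallel (Jacobian) splitting version of the ALM solves both primal subproblems from the same previous iterate, so they can run simultaneously. The sketch below shows this uncorrected scheme with quadratic blocks; as the abstract notes, it need not converge on its own, and the paper's rank-two correction that restores a step size in $(0,2)$ is not spelled out in the abstract, so it is not reproduced here.

```python
import numpy as np

def jacobian_split_alm(A1, A2, b, c1, c2, beta=1.0, iters=500):
    """Uncorrected parallel (Jacobian) splitting of the ALM for
    min 0.5*||x1 - c1||^2 + 0.5*||x2 - c2||^2  s.t.  A1 x1 + A2 x2 = b.
    Both blocks are updated against the *old* other block, so the two
    solves can run in parallel. Convergence is not guaranteed without
    the paper's rank-two correction (not given in the abstract)."""
    n1, n2 = A1.shape[1], A2.shape[1]
    x1, x2, lam = np.zeros(n1), np.zeros(n2), np.zeros(A1.shape[0])
    M1 = np.eye(n1) + beta * A1.T @ A1
    M2 = np.eye(n2) + beta * A2.T @ A2
    for _ in range(iters):
        # Both right-hand sides use the old iterate: Jacobian-style update
        rhs1 = c1 - A1.T @ lam - beta * A1.T @ (A2 @ x2 - b)
        rhs2 = c2 - A2.T @ lam - beta * A2.T @ (A1 @ x1 - b)
        x1, x2 = np.linalg.solve(M1, rhs1), np.linalg.solve(M2, rhs2)
        lam = lam + beta * (A1 @ x1 + A2 @ x2 - b)  # plain dual step
    return x1, x2, lam
```

Unlike ADMM-type (Gauss-Seidel) splitting, neither block sees the other's new value here; this is what makes the scheme parallel, and also what costs the convergence guarantee that the paper's rank-two correction, together with the $(0,2)$ step size, is designed to restore.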
