Improving Rate of Convergence via Gain Adaptation in Multi-Agent Distributed ADMM Framework (2002.10515v1)

Published 24 Feb 2020 in eess.SY and cs.SY

Abstract: In this paper, the alternating direction method of multipliers (ADMM) is investigated for distributed optimization problems in a networked multi-agent system. In particular, a new adaptive-gain ADMM algorithm is derived in closed form under standard convexity assumptions in order to greatly speed up convergence of ADMM-based distributed optimization. Using a Lyapunov direct approach, the proposed solution embeds control gains into the weighted network matrix among the agents and uses those weights as adaptive penalty gains in the augmented Lagrangian. It is shown that the proposed closed-loop gain-adaptation scheme significantly improves the convergence time of the underlying ADMM optimization. Convergence analysis is provided and simulation results are included to demonstrate the effectiveness of the proposed scheme.
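
The abstract does not state the paper's closed-form gain law, so as a rough illustration of the general idea, the sketch below runs consensus ADMM on a toy quadratic problem and adapts the penalty gain with the standard residual-balancing heuristic (a well-known alternative adaptation scheme, not the paper's Lyapunov-based one). The problem, step sizes, and parameter values are all assumptions chosen for the example.

```python
import numpy as np

def consensus_admm(a, iters=100, rho=1.0, mu=10.0, tau=2.0):
    """Consensus ADMM for min_x sum_i 0.5*(x - a_i)^2 with an
    adaptive penalty gain (residual-balancing heuristic)."""
    n = len(a)
    x = np.zeros(n)   # agents' local copies of the decision variable
    z = 0.0           # consensus variable
    u = np.zeros(n)   # scaled dual variables
    for _ in range(iters):
        # Local x-updates: argmin 0.5*(x - a_i)^2 + (rho/2)(x - z + u_i)^2
        x = (a + rho * (z - u)) / (1.0 + rho)
        z_old = z
        z = np.mean(x + u)   # consensus (z-)update
        u = u + x - z        # dual ascent step
        # Adapt the penalty gain: keep primal and dual residuals
        # comparable, rescaling the scaled duals when rho changes.
        r = np.linalg.norm(x - z)              # primal residual
        s = rho * abs(z - z_old) * np.sqrt(n)  # dual residual
        if r > mu * s:
            rho *= tau
            u /= tau
        elif s > mu * r:
            rho /= tau
            u *= tau
    return z

# The minimizer of sum_i 0.5*(x - a_i)^2 is the mean of the a_i.
print(consensus_admm(np.array([1.0, 2.0, 3.0, 4.0])))
```

The adaptation step plays the same role as the paper's embedded gains: a penalty that tracks the iterates' progress typically converges in far fewer iterations than a fixed, poorly tuned rho.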

Citations (4)
