
Subgradient averaging for multi-agent optimisation with different constraint sets (1909.04351v2)

Published 10 Sep 2019 in math.OC

Abstract: We consider a multi-agent setting in which agents exchange information over a possibly time-varying network, aiming to minimise a separable objective function subject to constraints. To achieve this we propose a novel subgradient averaging algorithm that allows for non-differentiable objective functions and a different constraint set per agent. Allowing different constraints per agent simultaneously with a time-varying communication network constitutes a distinctive feature of our approach, extending existing results on distributed subgradient methods. To highlight the necessity of dealing with different constraint sets within a distributed optimisation context, we analyse a problem instance where an existing algorithm fails to converge when adapted to account for different constraint sets. For our proposed iterative scheme we show asymptotic convergence of the iterates to a minimiser of the underlying optimisation problem for step sizes of the form $ \frac{\eta}{k+1} $, $ \eta > 0 $. We also analyse this scheme under a step size choice of $ \frac{\eta}{\sqrt{k+1}} $, $ \eta > 0 $, and establish a convergence rate of $ \mathcal{O}(\frac{\ln k}{\sqrt{k}}) $ in objective value. To demonstrate the efficacy of the proposed method, we investigate a robust regression problem and an $ \ell_2 $ regression problem with regularisation.
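
The abstract describes the scheme only at a high level. As an illustration, the sketch below implements a generic distributed projected subgradient method with consensus averaging, a separate constraint set per agent, and the $ \frac{\eta}{k+1} $ step size; the weight matrix, local objectives, and projection operators are illustrative assumptions, not the paper's exact construction, and the paper additionally handles time-varying networks, which this sketch omits by fixing the weight matrix.

```python
# Minimal sketch (not the paper's exact algorithm): consensus averaging
# followed by a projected subgradient step, with each agent projecting onto
# its own constraint set and using step size eta/(k+1).
import numpy as np

def distributed_subgradient_averaging(subgradients, projections, W, x0,
                                      eta=1.0, iters=1000):
    """subgradients[i](x): a subgradient of agent i's local objective at x.
    projections[i](x): Euclidean projection onto agent i's constraint set.
    W: doubly stochastic consensus weight matrix (n_agents x n_agents).
    x0: (n_agents, dim) array of initial iterates, one row per agent."""
    x = np.array(x0, dtype=float)
    n = x.shape[0]
    for k in range(iters):
        step = eta / (k + 1)              # diminishing step size eta/(k+1)
        mixed = W @ x                     # average neighbours' iterates
        for i in range(n):
            g = subgradients[i](mixed[i])                 # local subgradient
            x[i] = projections[i](mixed[i] - step * g)    # project onto agent i's set
    return x.mean(axis=0)                 # iterates reach consensus asymptotically

# Toy example: two agents, objectives |x - 2| and |x - 3|, interval constraints
# [-1, 1] and [0, 4]; the constrained minimiser of the sum lies at x = 1.
subgrads = [lambda x, a=a: np.sign(x - a) for a in (2.0, 3.0)]
projs = [lambda x: np.clip(x, -1.0, 1.0), lambda x: np.clip(x, 0.0, 4.0)]
W = np.array([[0.5, 0.5], [0.5, 0.5]])
x_star = distributed_subgradient_averaging(subgrads, projs, W, x0=np.zeros((2, 1)))
```

Averaging before the subgradient step mirrors the "subgradient averaging" structure named in the title; for details of the actual update and the convergence analysis, see the paper.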

Citations (15)
