Practical sufficient conditions for convergence of distributed optimisation algorithms over communication networks with interference (2105.04230v1)

Published 10 May 2021 in math.OC, cs.MA, and cs.NI

Abstract: Information exchange over networks can be affected by various forms of delay. This poses challenges for a multi-agent system that uses the network to solve a distributed optimisation problem. Distributed optimisation schemes, however, typically do not assume network models that are representative of real-world communication networks, since communication links are usually abstracted as lossless. Our objective is therefore to formulate a representative network model and provide practically verifiable network conditions that ensure convergence of distributed algorithms in the presence of interference and possibly unbounded delay. Our network is modelled by a sequence of directed graphs, where each network link is associated with a process for the instantaneous signal-to-interference-plus-noise ratio. We then formulate practical conditions that can be verified locally and show that the age of information (AoI) associated with data communicated over the network is in $\mathcal{O}(\sqrt{n})$. Under these conditions we show that a penalty-based gradient descent algorithm can be used to solve a rich class of stochastic, constrained, distributed optimisation problems. The strength of our result lies in the bridge between practically verifiable network conditions and an abstract optimisation theory. We illustrate numerically that our algorithm converges in an extreme scenario where the average AoI diverges.
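
The abstract describes a penalty-based gradient descent in which agents act on possibly outdated neighbour information whose age of information stays in $\mathcal{O}(\sqrt{n})$. The sketch below is a minimal illustration of that idea, not the paper's actual algorithm: it assumes a quadratic local cost per agent, a consensus-style quadratic penalty as a stand-in for the paper's penalty treatment of constraints, and Bernoulli link failures in place of the SINR-based link model. All names and constants (`rho`, the step size, the delivery probability) are illustrative assumptions.

```python
# Illustrative sketch (assumptions, not the paper's method): penalty-based
# gradient descent where each agent only sees possibly outdated copies of its
# neighbours' iterates, mimicking delayed communication over unreliable links.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, n_iters = 5, 2, 2000
targets = rng.normal(size=(n_agents, dim))     # agent i's local cost: ||x_i - targets[i]||^2
x = np.zeros((n_agents, dim))                  # local decision variables
last_rx = np.zeros((n_agents, n_agents, dim))  # last_rx[i, j]: latest value agent i received from agent j
rho = 1.0                                      # penalty weight on disagreement with neighbours (assumed)

for k in range(1, n_iters + 1):
    step = 0.1 / np.sqrt(k)                    # diminishing step size (assumed schedule)
    # Unreliable directed links: each transmission succeeds with probability 0.3,
    # so the copy an agent holds of a neighbour's iterate can be arbitrarily old.
    for i in range(n_agents):
        for j in range(n_agents):
            if i != j and rng.random() < 0.3:
                last_rx[i, j] = x[j]
    for i in range(n_agents):
        grad_local = 2.0 * (x[i] - targets[i])                 # gradient of the local quadratic cost
        others = [j for j in range(n_agents) if j != i]
        grad_penalty = 2.0 * (x[i] - last_rx[i, others].mean(axis=0))  # disagreement with aged neighbour copies
        x[i] = x[i] - step * (grad_local + rho * grad_penalty)

print("spread of local iterates:", np.ptp(x, axis=0))
print("mean iterate vs. average target:", np.linalg.norm(x.mean(axis=0) - targets.mean(axis=0)))
```

Despite every agent working with stale neighbour data, the diminishing step size lets the local iterates settle near a common point; the paper's contribution is to give locally verifiable network conditions under which this kind of behaviour is guaranteed.
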

Citations (2)
