Fault-Tolerant Distributed Optimization (Part IV): Constrained Optimization with Arbitrary Directed Networks (1511.01821v1)

Published 5 Nov 2015 in cs.DC and math.OC

Abstract: We study the problem of constrained distributed optimization in multi-agent networks when some of the computing agents may be faulty. In this problem, the system goal is to have all the non-faulty agents collectively minimize a global objective given by a weighted average of local cost functions, each of which is initially known only to a single non-faulty agent. In particular, we are interested in the scenario where the computing agents are connected by an arbitrary directed communication network, some of the agents may suffer from crash faults or Byzantine faults, and the estimate of each agent is restricted to lie in a common constraint set. This problem finds applications in social computing and distributed large-scale machine learning. The fault-tolerant multi-agent optimization problem was first formulated by Su and Vaidya, and was solved for the case where the local functions are defined over the whole real line and the networks are fully connected. In this report, we consider arbitrary directed communication networks and focus on the scenario where local estimates at the non-faulty agents are constrained, and only local communication and minimal memory carried across iterations are allowed. In particular, we generalize our previous results on fully-connected networks and unconstrained optimization to arbitrary directed networks and constrained optimization. As a byproduct, we provide a matrix representation for iterative approximate crash consensus. The matrix representation allows us to characterize the convergence rate of iterative crash consensus.
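
The abstract describes each non-faulty agent iteratively combining neighbors' estimates, taking a step on its local cost, and keeping its estimate inside a common constraint set. The Python sketch below illustrates one such local iteration in the simplest scalar case, using a generic trimmed-mean consensus rule, a subgradient step, and projection onto an interval. This is an illustrative sketch only, not the algorithm from the paper: the trimming rule, step size, helper names, and interval constraint set [lo, hi] are assumptions chosen for clarity.

# Illustrative sketch: one iteration of a fault-tolerant, constrained,
# consensus-based update for a single non-faulty agent (scalar estimates).
# NOT the paper's algorithm; all names and parameters here are assumptions.

def project(x, lo, hi):
    """Project a scalar estimate onto the interval constraint set [lo, hi]."""
    return min(max(x, lo), hi)

def trimmed_consensus(own_value, neighbor_values, f):
    """Discard the f largest and f smallest received values (a generic
    fault-tolerance heuristic), then average the survivors with the
    agent's own estimate."""
    vals = sorted(neighbor_values)
    kept = vals[f:len(vals) - f] if len(vals) > 2 * f else []
    pool = kept + [own_value]
    return sum(pool) / len(pool)

def local_update(own_value, neighbor_values, subgradient, step, lo, hi, f):
    """One iteration: robust consensus, local subgradient step, projection."""
    z = trimmed_consensus(own_value, neighbor_values, f)
    return project(z - step * subgradient(z), lo, hi)

# Example: local cost h_i(x) = (x - 3)^2, constraint set [0, 2],
# four in-neighbors of which up to f = 1 may be faulty.
if __name__ == "__main__":
    subgrad = lambda x: 2.0 * (x - 3.0)      # gradient of the local cost
    x = 1.0                                   # current local estimate
    received = [0.8, 1.2, 5.0, -4.0]          # values from in-neighbors
    x_next = local_update(x, received, subgrad, step=0.1, lo=0.0, hi=2.0, f=1)
    print(x_next)                             # 1.4

In words, the agent first filters extreme reported values so that a bounded number of faulty in-neighbors cannot drag the average arbitrarily, then moves along its own local subgradient, and finally projects back onto the shared constraint set, matching the constrained, local-communication, minimal-memory setting the abstract emphasizes.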

Citations (27)
