
Accelerating Distributed Optimization via Fixed-time Convergent Flows: Extensions to Non-convex Functions and Consistent Discretization (1905.10472v5)

Published 24 May 2019 in cs.SY and math.OC

Abstract: Distributed optimization has gained significant attention in recent years, fueled primarily by the availability of large amounts of data and by privacy-preserving requirements. This paper presents a fixed-time convergent optimization algorithm for solving a potentially non-convex optimization problem using a first-order multi-agent system. Each agent in the network can access only its private objective function, while local information exchange is permitted between neighbors. The proposed optimization algorithm combines a fixed-time convergent distributed parameter estimation scheme with a fixed-time distributed consensus scheme as its solution methodology. The results are presented under the assumption that the team objective function is strongly convex, as opposed to the common assumption in the literature that each local objective function is strongly convex. The results extend to the class of possibly non-convex team objective functions satisfying only the Polyak-Łojasiewicz (PL) inequality. It is also shown that the proposed continuous-time scheme, when discretized using Euler's method, leads to consistent discretization, i.e., the fixed-time convergence behavior is preserved under discretization. Numerical examples comprising large-scale distributed linear regression and training of neural networks corroborate our theoretical analysis.
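
As a hedged illustration of the mechanism the abstract describes, the sketch below applies forward-Euler discretization to a gradient flow that combines a sub-linear and a super-linear power of the gradient norm, a standard fixed-time construction. The exponents `a` and `b`, the gains `c1` and `c2`, the step size `eta`, and the quadratic test objective are all illustrative assumptions rather than the paper's actual scheme, and the multi-agent consensus layer is omitted.

```python
import numpy as np

# Hedged sketch (illustrative, not the paper's scheme): forward-Euler steps on
# the fixed-time gradient flow
#   x_dot = -c1 * g / ||g||**(1 - a) - c2 * g * ||g||**(b - 1),  0 < a < 1 < b,
# where g = grad f(x). The sub-linear term (exponent a) dominates near the
# minimizer and the super-linear term (exponent b) far from it, which is what
# yields convergence-time bounds that do not depend on the initial condition.

def grad_f(x, A, y):
    """Gradient of f(x) = 0.5 * ||A @ x - y||**2 (strongly convex when A has full column rank)."""
    return A.T @ (A @ x - y)

def euler_step(x, g, eta=1e-3, c1=1.0, c2=1.0, a=0.9, b=1.1):
    n = np.linalg.norm(g)
    v = -c1 * g / n**(1.0 - a) - c2 * g * n**(b - 1.0)
    return x + eta * v  # forward Euler, the discretization the abstract refers to

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))  # toy linear-regression data
y = rng.standard_normal(50)
x = np.zeros(10)

for k in range(5000):
    g = grad_f(x, A, y)
    if np.linalg.norm(g) < 1e-6:  # treat as converged
        break
    x = euler_step(x, g)

print("iterations:", k, "final gradient norm:", np.linalg.norm(grad_f(x, A, y)))
```

In the paper's distributed setting, each agent would run a comparable local update while a fixed-time consensus term drives the agents' estimates together; the sketch collapses that to a single agent for brevity.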
