Distributed Nonconvex Optimization: Gradient-free Iterations and $\epsilon$-Globally Optimal Solution (2008.00252v5)

Published 1 Aug 2020 in math.OC, cs.SY, and eess.SY

Abstract: Distributed optimization uses local computation and communication to optimize the sum of local objective functions held by a network of agents. This article addresses a class of constrained distributed nonconvex optimization problems with univariate objectives, aiming to achieve global optimization without requiring local gradient evaluations at every iteration. We propose a novel algorithm named CPCA, which combines Chebyshev polynomial approximation, average consensus, and polynomial optimization. The proposed algorithm is i) able to obtain $\epsilon$-globally optimal solutions for any arbitrarily small given accuracy $\epsilon$, ii) efficient in both zeroth-order queries (i.e., evaluations of function values) and inter-agent communication, and iii) terminable in a distributed manner once the specified precision requirement is met. The key insight is to substitute polynomial approximations for the general local objectives, disseminate these approximations via average consensus, and solve an easier approximate version of the original problem. Owing to the nice analytic properties of polynomials, this approximation not only facilitates efficient global optimization, but also allows the design of gradient-free iterations that reduce the cumulative cost of queries and achieve geometric convergence for solving nonconvex problems. We provide a comprehensive analysis of the accuracy and complexities of the proposed algorithm.
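As an illustration of the three-stage idea described in the abstract (a minimal sketch, not the authors' implementation), the example below uses NumPy's Chebyshev utilities: each agent builds a Chebyshev interpolant of its local objective from function-value queries only, the agents run average consensus on the coefficient vectors with a hypothetical mixing matrix, and each agent then globally minimizes the resulting polynomial surrogate over the interval. The local objectives, interval, polynomial degree, mixing matrix, and iteration count are all assumptions chosen for demonstration.

```python
# Illustrative sketch of the CPCA idea (approximation + consensus + polynomial
# optimization). All problem data below are hypothetical.
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# Hypothetical univariate local objectives on a common interval [a, b].
a, b = -1.0, 2.0
local_objectives = [
    lambda x: np.sin(3 * x) + 0.1 * x**2,   # agent 0
    lambda x: np.cos(2 * x) * x,            # agent 1
    lambda x: np.exp(-x) * np.sin(5 * x),   # agent 2
]

# Step 1: each agent builds a Chebyshev interpolant of its own objective using
# only zeroth-order (function-value) queries; the degree controls the
# approximation error and hence the achievable accuracy epsilon.
deg = 40
local_polys = [Chebyshev.interpolate(f, deg, domain=[a, b]) for f in local_objectives]
coeffs = np.array([p.coef for p in local_polys])  # one coefficient vector per agent

# Step 2: average consensus on the coefficient vectors with a doubly stochastic
# mixing matrix W (assumed ring-like topology); the iterations are gradient-free
# and converge geometrically to the average of the rows.
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
for _ in range(200):
    coeffs = W @ coeffs
# Each agent now holds (approximately) the average polynomial (1/n) * sum_i p_i,
# whose minimizers coincide with those of the sum of the local objectives.
avg_poly = Chebyshev(coeffs[0], domain=[a, b])

# Step 3: globally minimize the polynomial surrogate over [a, b] by checking
# the stationary points (roots of the derivative) and the interval endpoints.
crit = avg_poly.deriv().roots()
crit = crit[np.isreal(crit)].real
candidates = np.concatenate(([a, b], crit[(crit >= a) & (crit <= b)]))
x_star = candidates[np.argmin(avg_poly(candidates))]
print("approximate global minimizer of the averaged objective:", x_star)
```

The surrogate step is what makes global optimization tractable here: minimizing a univariate polynomial over an interval reduces to evaluating it at finitely many candidate points, whereas the original nonconvex objectives offer no such finite certificate.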
