Network Optimization via Smooth Exact Penalty Functions Enabled by Distributed Gradient Computation

Published 8 Nov 2020 in math.OC and cs.MA (arXiv:2011.04100v2)

Abstract: This paper proposes a distributed algorithm for a network of agents to solve an optimization problem with separable objective function and locally coupled constraints. Our strategy is based on reformulating the original constrained problem as the unconstrained optimization of a smooth (continuously differentiable) exact penalty function. Computing the gradient of this penalty function in a distributed way is challenging even under the separability assumptions on the original optimization problem. Our technical approach shows that the distributed computation problem for the gradient can be formulated as a system of linear algebraic equations defined by separable problem data. To solve it, we design an exponentially fast, input-to-state stable distributed algorithm that does not require the individual agent matrices to be invertible. We employ this strategy to compute the gradient of the penalty function at the current network state. Our distributed algorithmic solver for the original constrained optimization problem interconnects this estimation with the prescription of having the agents follow the resulting direction. Numerical simulations illustrate the convergence and robustness properties of the proposed algorithm.
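The abstract states that the gradient of the penalty function can be obtained by solving a system of linear algebraic equations in a distributed fashion, but does not spell out the solver. As a rough illustration of that subproblem only, here is a classic projection-consensus scheme (in the style of Mou, Liu, and Morse) in which each agent knows a single row of Ax = b and the network converges exponentially to the unique solution; the specific matrices, graph, and iteration count below are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Hypothetical sketch (not the paper's algorithm): projection-consensus
# solver for Ax = b where agent i only knows row a_i and entry b_i.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
n_agents, dim = A.shape

# Each agent starts at the minimum-norm solution of its own equation a_i x = b_i.
x = np.array([A[i] * b[i] / (A[i] @ A[i]) for i in range(n_agents)])

# P_i projects onto the kernel of a_i, so each update preserves a_i x_i = b_i.
P = np.array([np.eye(dim) - np.outer(A[i], A[i]) / (A[i] @ A[i])
              for i in range(n_agents)])

for _ in range(1000):
    avg = x.mean(axis=0)  # complete-graph averaging; any connected graph works
    x = np.array([x[i] - P[i] @ (x[i] - avg) for i in range(n_agents)])

x_star = np.linalg.solve(A, b)  # centralized reference solution
```

Each local estimate stays feasible for the agent's own equation while the consensus term pulls the estimates together, so all agents converge to the common solution x_star; the paper's contribution goes further by not requiring individual agent matrices to be invertible and by establishing input-to-state stability.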

Citations (5)
