
DN-ADMM: Distributed Newton ADMM for Multi-agent Optimization

Published 29 Sep 2021 in math.OC (arXiv:2109.14243v1)

Abstract: In a multi-agent network, we consider the problem of minimizing an objective function expressed as the sum of private convex and smooth functions and a (possibly) non-differentiable convex regularizer. We propose a novel distributed second-order method based on the Alternating Direction Method of Multipliers (ADMM) framework, applying approximate Newton iterations to the primal update associated with the differentiable part. To achieve a distributed implementation, the total Hessian matrix is split into a diagonal component (locally computable) and an off-diagonal component (which requires communication between neighboring agents). The Hessian inverse is then approximated by truncating its Taylor expansion to $K$ terms, yielding fully distributed updates that require $K$ rounds of communication. We establish global linear convergence to the primal-dual optimal solution under the assumption that the private functions are strongly convex and have Lipschitz continuous gradients. Numerical experiments demonstrate the merits of the approach in comparison with state-of-the-art methods.
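The Hessian-splitting and truncated-series idea in the abstract can be illustrated with a small numerical sketch. The Python snippet below is a toy under stated assumptions, not the paper's algorithm: it uses a dense synthetic Hessian rather than the networked multi-agent objective, splits it as $H = D - B$ with $D$ the diagonal part and $B$ the off-diagonal part, and approximates $H^{-1}g$ by the $K$-term series $\sum_{k=0}^{K} (D^{-1}B)^k D^{-1} g$ (valid when the spectral radius of $D^{-1}B$ is below one). In the distributed setting described in the abstract, each multiplication by $B$ would correspond to one round of communication with neighboring agents. The function name and test matrix are illustrative choices, not identifiers from the paper.

import numpy as np

def truncated_hessian_inverse_apply(D, B, g, K):
    """Approximate H^{-1} g by the K-term series sum_{k=0}^{K} (D^{-1} B)^k D^{-1} g,
    where H = D - B. Each multiplication by B stands in for one round of
    neighbor communication in a distributed implementation (illustrative only)."""
    term = np.linalg.solve(D, g)              # D^{-1} g: computable locally
    result = term.copy()
    for _ in range(K):
        term = np.linalg.solve(D, B @ term)   # one additional "communication round"
        result += term
    return result

# Toy demonstration on a synthetic strongly convex quadratic
# (not the multi-agent objective from the paper).
rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
H = A @ A.T + 10.0 * np.eye(n)                # symmetric positive definite "Hessian"
D = np.diag(np.diag(H))                       # diagonal component (locally computable)
B = D - H                                     # off-diagonal component, so H = D - B
g = rng.standard_normal(n)

exact = np.linalg.solve(H, g)
for K in (0, 2, 5, 10):
    approx = truncated_hessian_inverse_apply(D, B, g, K)
    print(f"K={K:2d}  error={np.linalg.norm(approx - exact):.3e}")

The truncation trades accuracy for communication: a larger $K$ gives a better approximation of the Newton direction at the cost of more neighbor exchanges per primal update, which is the trade-off the abstract alludes to with its $K$ communication rounds.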
