
DN-ADMM: Distributed Newton ADMM for Multi-agent Optimization (2109.14243v1)

Published 29 Sep 2021 in math.OC

Abstract: In a multi-agent network, we consider the problem of minimizing an objective function expressed as the sum of private convex, smooth functions and a (possibly) non-differentiable convex regularizer. We propose a novel distributed second-order method within the framework of the Alternating Direction Method of Multipliers (ADMM), applying approximate Newton iterations to the primal update corresponding to the differentiable part. To achieve a distributed implementation, the total Hessian matrix is split into a diagonal component (locally computable) and an off-diagonal component (which requires communication between neighboring agents). The Hessian inverse is then approximated by truncating its Taylor expansion to $K$ terms, which yields fully distributed updates entailing $K$ communication rounds. We establish global linear convergence to the primal-dual optimal solution under the assumption that the private functions are strongly convex with Lipschitz continuous gradients. Numerical experiments demonstrate the merits of the approach in comparison with state-of-the-art methods.
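
The $K$-term truncated Taylor (Neumann-series) expansion of the Hessian inverse is the core of the distributed update described above. Below is a minimal NumPy sketch of that splitting-and-truncation idea, assuming a symmetric positive-definite Hessian whose Jacobi iteration converges (e.g., strict diagonal dominance); the function name, the toy matrices, and the centralized arithmetic are illustrative stand-ins, not the paper's per-agent implementation.

```python
import numpy as np

# Splitting sketch: H = D - B, with D the diagonal part (locally
# computable by each agent) and B the off-diagonal coupling (each
# product with B corresponds to one round of neighbor communication).
# The K-term truncation approximates
#   H^{-1} ~ sum_{k=0}^{K} (D^{-1} B)^k D^{-1}.

def truncated_newton_direction(H, g, K):
    """Approximate d = H^{-1} g using K extra communication rounds."""
    d_diag = np.diag(H)
    D_inv = np.diag(1.0 / d_diag)     # inverting D is purely local
    B = np.diag(d_diag) - H           # off-diagonal (neighbor) coupling
    d = D_inv @ g                     # k = 0 term: no communication
    for _ in range(K):                # one neighbor exchange per pass
        d = D_inv @ (g + B @ d)       # accumulates the next series term
    return d

# Toy check: strict diagonal dominance guarantees the series converges.
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
H = M + M.T
H += np.diag(np.abs(H).sum(axis=1) + 1.0)   # SPD, diagonally dominant
g = rng.standard_normal(6)
d_exact = np.linalg.solve(H, g)
for K in (0, 2, 8):
    err = np.linalg.norm(truncated_newton_direction(H, g, K) - d_exact)
    print(f"K={K}: error {err:.2e}")
```

Each pass through the loop multiplies by $B$ once, which in the multi-agent setting is one round of message exchange with neighbors, so $K$ directly trades approximation accuracy against communication cost.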
