Bregman Parallel Direction Method of Multipliers for Distributed Optimization via Mirror Averaging
Abstract: Distributed optimization aims to optimize a global objective formed by a sum of coupled local convex functions over a graph, using only local computation and communication. In this paper, we propose the Bregman parallel direction method of multipliers (PDMM) based on a generalized averaging step named mirror averaging. We establish the global convergence and $O(1/T)$ convergence rate of the Bregman PDMM, along with its $O(n/\ln n)$ improvement over the existing PDMM, where $T$ denotes the number of iterations and $n$ the dimension of the solution variable. In addition, its performance can be enhanced by optimizing the spectral gap of the averaging matrix. We demonstrate our results via a numerical example.
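As a rough illustration of the mirror-averaging idea (not the paper's exact algorithm), the sketch below uses the negative-entropy mirror map $\phi(x)=\sum_j x_j\ln x_j$ on the probability simplex: each agent's point is mapped to the dual space via $\nabla\phi$, the points are averaged there with the weights of an assumed averaging matrix row, and the result is mapped back via the conjugate map. With this particular mirror map, mirror averaging reduces to a normalized weighted geometric mean, in contrast to the arithmetic mean used in standard consensus averaging.

```python
import math

def mirror_average(points, weights):
    """Mirror averaging under the negative-entropy mirror map (illustrative).

    points:  list of points on the probability simplex, one per agent
    weights: averaging weights (a row of a hypothetical averaging matrix),
             assumed nonnegative and summing to 1
    """
    dim = len(points[0])
    # Map to dual space via grad(phi), where phi(x) = sum_j x_j * log(x_j),
    # and take the weighted arithmetic mean there.
    dual_avg = [0.0] * dim
    for w, x in zip(weights, points):
        for j in range(dim):
            dual_avg[j] += w * (1.0 + math.log(x[j]))  # (grad phi)(x)_j
    # Map back to the simplex via grad(phi*), a softmax-style inverse:
    # this yields the normalized weighted geometric mean of the points.
    exps = [math.exp(v - 1.0) for v in dual_avg]
    total = sum(exps)
    return [e / total for e in exps]
```

For example, averaging $[0.5, 0.5]$ and $[0.9, 0.1]$ with equal weights gives the normalized geometric mean $[0.75, 0.25]$, rather than the arithmetic mean $[0.7, 0.3]$.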