Distributed, scalable and gossip-free consensus optimization with application to data analysis (1705.02469v2)
Abstract: Distributed algorithms for solving additive or consensus optimization problems commonly rely on first-order or proximal splitting methods. These algorithms generally come with restrictive assumptions and at best enjoy a linear convergence rate; hence, they can require many iterations or communications among agents to converge. In many cases, however, we do not seek a highly accurate solution for consensus problems. Based on this, we propose a controlled relaxation of the coupling in the problem, which allows us to compute an approximate solution whose accuracy can be controlled by the level of relaxation. The relaxed problem can be solved efficiently in a distributed way using a combination of primal-dual interior-point methods (PDIPMs) and message passing. The resulting algorithm relies purely on second-order methods and thus requires far fewer iterations and communications to converge, as illustrated in numerical experiments showing its superior performance compared to existing methods.
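As a rough orientation for the relaxation idea (the abstract does not state the exact formulation, so the local costs f_i, the coupling graph \mathcal{E}, and the tolerance \epsilon below are illustrative assumptions rather than the paper's definitions), a standard consensus problem and one plausible \epsilon-relaxation of its coupling constraints read:

\begin{align*}
  \text{(consensus)} \quad
    & \min_{x_1,\dots,x_N} \; \sum_{i=1}^{N} f_i(x_i)
      \quad \text{s.t.} \quad x_i = x_j, \;\; (i,j) \in \mathcal{E}, \\
  \text{(relaxed)} \quad
    & \min_{x_1,\dots,x_N} \; \sum_{i=1}^{N} f_i(x_i)
      \quad \text{s.t.} \quad \lVert x_i - x_j \rVert \le \epsilon, \;\; (i,j) \in \mathcal{E}.
\end{align*}

In such a sketch, shrinking \epsilon tightens the approximation and \epsilon = 0 recovers exact consensus, which matches the abstract's claim that the accuracy of the approximate solution is controlled by the level of relaxation.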