
DCG: Distributed Conjugate Gradient for Efficient Linear Equations Solving (2107.13814v1)

Published 29 Jul 2021 in cs.DC

Abstract: Distributed algorithms for solving linear equations in multi-agent networks have attracted great research attention, and many iteration-based distributed algorithms have been developed. Convergence speed is a key consideration for distributed algorithms and is known to depend on the spectral radius of the iteration matrix. However, the iteration matrix is determined by the network structure and can hardly be pre-tuned, so iteration-based distributed algorithms may converge very slowly when the spectral radius is close to 1. In contrast, in centralized optimization, the Conjugate Gradient (CG) method is widely used to speed up convergence and guarantees convergence in a fixed number of steps. In this paper, we propose a general distributed implementation of CG, called DCG. DCG requires only local communication and local computation while inheriting CG's fast convergence: it is guaranteed to converge in $4Hn$ rounds, where $H$ is the maximum hop count of the network and $n$ is the number of nodes. We present applications of DCG to the least-squares problem and the network localization problem. The results show that the convergence speed of DCG is three orders of magnitude faster than that of the widely used Richardson iteration method.
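The centralized CG method that the paper builds on can be sketched as follows. This is a minimal implementation of classical CG for a symmetric positive-definite system, not the paper's distributed DCG algorithm; it only illustrates the fixed-step convergence guarantee (at most $n$ iterations in exact arithmetic) that DCG inherits. The example system is made up for illustration.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    """Classical centralized CG for a symmetric positive-definite system A x = b.

    In exact arithmetic, CG converges in at most n steps, where n is the
    dimension of the system -- the fixed-step guarantee the paper's
    distributed variant inherits.
    """
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x              # initial residual
    p = r.copy()               # initial search direction
    rs_old = r @ r
    for _ in range(n):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)   # optimal step along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:   # residual small enough: done
            break
        p = r + (rs_new / rs_old) * p  # A-conjugate update of direction
        rs_old = rs_new
    return x

# Example: a small SPD system (illustrative values only)
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

The distributed contribution of DCG is to realize these global inner products and matrix-vector products through local communication only, at the cost of the $4Hn$ round bound stated in the abstract.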

Citations (3)
