On the Discrepancy between the Theoretical Analysis and Practical Implementations of Compressed Communication for Distributed Deep Learning (1911.08250v1)

Published 19 Nov 2019 in cs.DC, cs.LG, and math.OC

Abstract: Compressed communication, in the form of sparsification or quantization of stochastic gradients, is employed to reduce communication costs in distributed data-parallel training of deep neural networks. However, there exists a discrepancy between theory and practice: while theoretical analysis of most existing compression methods assumes compression is applied to the gradients of the entire model, many practical implementations operate individually on the gradients of each layer of the model. In this paper, we prove that layer-wise compression is, in theory, better, because the convergence rate is upper bounded by that of entire-model compression for a wide range of biased and unbiased compression methods. However, despite the theoretical bound, our experimental study of six well-known methods shows that convergence, in practice, may or may not be better, depending on the actual trained model and compression ratio. Our findings suggest that it would be advantageous for deep learning frameworks to include support for both layer-wise and entire-model compression.
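
To make the distinction the abstract draws concrete, the sketch below contrasts entire-model and layer-wise compression using Top-k sparsification as a stand-in compressor. This is a minimal illustration assuming PyTorch; the function names (topk_sparsify, entire_model_compress, layerwise_compress) are hypothetical and do not reflect the paper's actual implementation or experimental setup.

```python
import torch

def topk_sparsify(grad: torch.Tensor, ratio: float) -> torch.Tensor:
    """Keep the `ratio` fraction of largest-magnitude entries; zero the rest."""
    k = max(1, int(ratio * grad.numel()))
    flat = grad.flatten()
    _, idx = torch.topk(flat.abs(), k)
    out = torch.zeros_like(flat)
    out[idx] = flat[idx]
    return out.reshape(grad.shape)

def entire_model_compress(grads, ratio):
    """Entire-model variant: concatenate all layer gradients,
    compress once, then split back per layer."""
    flat = torch.cat([g.flatten() for g in grads])
    compressed = topk_sparsify(flat, ratio)
    out, offset = [], 0
    for g in grads:
        n = g.numel()
        out.append(compressed[offset:offset + n].reshape(g.shape))
        offset += n
    return out

def layerwise_compress(grads, ratio):
    """Layer-wise variant: compress each layer's gradient independently."""
    return [topk_sparsify(g, ratio) for g in grads]

# Usage: grads would be [p.grad for p in model.parameters()] after backward().
grads = [torch.randn(64, 128), torch.randn(128)]
sparse_model = entire_model_compress(grads, ratio=0.01)
sparse_layer = layerwise_compress(grads, ratio=0.01)
```

Note that with entire-model Top-k, layers with small-magnitude gradients may receive no budget at all, whereas the layer-wise variant guarantees each layer keeps its own top entries; this is the kind of behavioral difference the paper's analysis and experiments compare.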

Authors (7)
  1. Aritra Dutta (26 papers)
  2. El Houcine Bergou (24 papers)
  3. Ahmed M. Abdelmoniem (27 papers)
  4. Chen-Yu Ho (4 papers)
  5. Atal Narayan Sahu (5 papers)
  6. Marco Canini (37 papers)
  7. Panos Kalnis (13 papers)
Citations (75)
