
RedSync: Reducing Synchronization Traffic for Distributed Deep Learning (1808.04357v3)

Published 13 Aug 2018 in cs.DC, cs.CV, and cs.LG

Abstract: Data parallelism has become a dominant method to scale Deep Neural Network (DNN) training across multiple nodes. Since synchronizing a large number of gradients of the local model can be a bottleneck for large-scale distributed training, compressing communication data has gained widespread attention recently. Among several recently proposed compression algorithms, Residual Gradient Compression (RGC) is one of the most successful approaches---it can significantly compress the transmitted message size (to 0.1% of the gradient size) of each node while still preserving accuracy and convergence speed. However, the literature on compressing deep networks focuses almost exclusively on achieving a good theoretical compression rate, while the efficiency of RGC in real distributed implementations has been less investigated. In this paper, we develop an RGC-based system that is able to reduce the end-to-end training time on real-world multi-GPU systems. Our proposed design, called RedSync, introduces a set of optimizations that reduce the communication bandwidth requirement while introducing limited overhead. We evaluate the performance of RedSync on two different multi-GPU platforms: 128 GPUs of a supercomputer and an 8-GPU server. Our test cases include image classification tasks on Cifar10 and ImageNet, and language modeling tasks on the Penn Treebank and Wiki2 datasets. For DNNs with a high communication-to-computation ratio, which have long been considered to scale poorly, RedSync brings significant performance improvements.
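
To make the RGC idea mentioned in the abstract concrete, here is a minimal sketch of top-k gradient sparsification with local residual accumulation, assuming a PyTorch-style tensor API. The function name rgc_compress, the compress_ratio parameter, and the exact residual bookkeeping are illustrative assumptions for exposition, not code taken from the paper or from RedSync itself.

```python
import torch

def rgc_compress(grad, residual, compress_ratio=0.001):
    """Illustrative RGC-style compression (assumed, not the paper's code):
    fold the locally accumulated residual into the fresh gradient, keep only
    the largest-magnitude ~0.1% of entries for transmission, and carry the
    untransmitted remainder forward as the new residual."""
    acc = grad + residual                       # fold in leftover gradient mass
    k = max(1, int(acc.numel() * compress_ratio))
    flat = acc.flatten()
    _, idx = torch.topk(flat.abs(), k)          # indices of the largest-magnitude entries
    values = flat[idx]                          # sparse message to transmit
    new_residual = flat.clone()
    new_residual[idx] = 0.0                     # transmitted entries leave the residual
    return values, idx, new_residual.view_as(grad)
```

In a data-parallel setting, only values and idx would be exchanged among workers (for example via an allgather-style collective), which is where the roughly 1000x reduction in synchronization traffic described in the abstract comes from; the dense residual stays local to each node.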

Authors (4)
  1. Jiarui Fang (16 papers)
  2. Haohuan Fu (25 papers)
  3. Guangwen Yang (40 papers)
  4. Cho-Jui Hsieh (211 papers)
Citations (23)
