A Quantitative Survey of Communication Optimizations in Distributed Deep Learning (2005.13247v2)

Published 27 May 2020 in cs.DC, cs.LG, and cs.NI

Abstract: Nowadays, large and complex deep learning (DL) models are increasingly trained in a distributed manner across multiple worker machines, in which extensive communications between workers pose serious scaling problems. In this article, we present a quantitative survey of communication optimization techniques for data parallel distributed DL. We first identify the major communication challenges and classify the existing solutions into three levels, namely the learning algorithm, the system architecture, and the network infrastructure. We present the state-of-the-art communication optimization techniques and conduct a comparative study of seven common lossless distributed DL methods on a 32-GPU cluster with 100Gbps InfiniBand (IB). We show that (1) the DL models with low model intensity (such as BERT and BERT-Large) are difficult to scale out even with the best available lossless algorithm over 100Gbps IB; (2) the system architecture and scheduling algorithms have a critical impact on the scaling property. We conclude the article with discussions on the open issues for further investigations.
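The survey targets data-parallel training, in which each worker computes gradients on its own mini-batch and the gradients are then averaged across all workers before the optimizer step; this per-iteration all-reduce is the communication that the algorithm-, architecture-, and network-level optimizations aim to reduce or overlap with computation. As a rough illustration only (not code from the paper), here is a minimal sketch of that baseline step, assuming PyTorch's `torch.distributed` API with an initialized NCCL process group:

```python
# A minimal sketch (not from the paper) of the gradient all-reduce in
# data-parallel synchronous SGD -- the per-iteration communication step
# whose cost the surveyed optimizations try to reduce or hide.
# Assumes torch.distributed has already been initialized (e.g. via torchrun
# with the NCCL backend over InfiniBand or Ethernet).
import torch
import torch.distributed as dist


def allreduce_gradients(model: torch.nn.Module) -> None:
    """Average the gradients of all parameters across workers."""
    world_size = dist.get_world_size()
    for param in model.parameters():
        if param.grad is not None:
            # Sum each gradient tensor across all workers...
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
            # ...then scale to obtain the mean gradient.
            param.grad /= world_size


# Typical use inside a training loop:
#   loss = criterion(model(inputs), targets)
#   loss.backward()
#   allreduce_gradients(model)   # communication-bound for large models
#   optimizer.step()
```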

Authors (6)
  1. Shaohuai Shi (47 papers)
  2. Zhenheng Tang (38 papers)
  3. Xiaowen Chu (108 papers)
  4. Chengjian Liu (2 papers)
  5. Wei Wang (1793 papers)
  6. Bo Li (1107 papers)
Citations (3)
