
Improving Efficiency in Large-Scale Decentralized Distributed Training (2002.01119v1)

Published 4 Feb 2020 in cs.LG, cs.DC, and stat.ML

Abstract: Decentralized Parallel SGD (D-PSGD) and its asynchronous variant, Asynchronous Decentralized Parallel SGD (AD-PSGD), are a family of distributed learning algorithms that have been demonstrated to perform well for large-scale deep learning tasks. One drawback of (A)D-PSGD is that the spectral gap of the mixing matrix decreases when the number of learners in the system increases, which hampers convergence. In this paper, we investigate techniques to accelerate (A)D-PSGD based training by improving the spectral gap while minimizing the communication cost. We demonstrate the effectiveness of our proposed techniques by running experiments on the 2000-hour Switchboard speech recognition task and the ImageNet computer vision task. On an IBM P9 supercomputer, our system is able to train an LSTM acoustic model in 2.28 hours with 7.5% WER on the Hub5-2000 Switchboard (SWB) test set and 13.3% WER on the CallHome (CH) test set using 64 V100 GPUs, and in 1.98 hours with 7.7% WER on SWB and 13.3% WER on CH using 128 V100 GPUs, the fastest training time reported to date.
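
The spectral-gap bottleneck mentioned in the abstract can be illustrated numerically with a toy mixing matrix. The sketch below is only an illustration, not the paper's construction: it assumes a simple ring topology in which each learner averages equally (weight 1/3) with itself and its two neighbors, and it uses NumPy; `ring_mixing_matrix` and `spectral_gap` are hypothetical helper names introduced here.

```python
import numpy as np

def ring_mixing_matrix(n: int) -> np.ndarray:
    """Doubly stochastic mixing matrix for n learners on a ring:
    each learner averages equally with itself and its two neighbors."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0
    return W

def spectral_gap(W: np.ndarray) -> float:
    """Spectral gap = 1 - |second-largest eigenvalue|.
    A larger gap means information mixes across learners faster."""
    mags = np.sort(np.abs(np.linalg.eigvals(W)))[::-1]
    return 1.0 - mags[1]

# In D-PSGD each learner i updates roughly as
#   x_i <- sum_j W[i, j] * x_j - lr * grad_i
# so the mixing matrix W governs how quickly the models reach consensus.
for n in (8, 16, 32, 64, 128):
    print(f"learners={n:4d}  spectral gap={spectral_gap(ring_mixing_matrix(n)):.4f}")
```

Running this shows the gap shrinking from roughly 0.2 at 8 learners to well under 0.001 at 128, which is the convergence obstacle the paper's techniques aim to counteract by improving the spectral gap without a large communication overhead.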

Authors (12)
  1. Wei Zhang (1489 papers)
  2. Xiaodong Cui (55 papers)
  3. Abdullah Kayi (5 papers)
  4. Mingrui Liu (44 papers)
  5. Ulrich Finkler (10 papers)
  6. Brian Kingsbury (54 papers)
  7. George Saon (39 papers)
  8. Youssef Mroueh (66 papers)
  9. Alper Buyuktosunoglu (6 papers)
  10. Payel Das (104 papers)
  11. David Kung (11 papers)
  12. Michael Picheny (32 papers)
Citations (14)
