Communication Optimization for Distributed Training: Architecture, Advances, and Opportunities (2403.07585v2)

Published 12 Mar 2024 in cs.DC and cs.LG

Abstract: The past few years have witnessed the flourishing of large-scale deep neural network models with ever-growing parameter numbers. Training such large-scale models typically requires massive memory and computing resources, necessitating distributed training. As GPU performance has rapidly evolved in recent years, computation time has shrunk, making communication a larger portion of the overall training time. Consequently, optimizing communication for distributed training has become crucial. In this article, we briefly introduce the general architecture of distributed deep neural network training and analyze relationships among Parallelization Strategy, Collective Communication Library, and Network from the perspective of communication optimization, which forms a three-layer paradigm. We then review current representative research advances within this three-layer paradigm. We find that layers in the current three-layer paradigm are relatively independent and there is a rich design space for cross-layer collaborative optimization in distributed training scenarios. Therefore, we advocate "Vertical" and "Horizontal" co-designs which extend the three-layer paradigm to a five-layer paradigm. We also advocate "Intra-Inter" and "Host-Net" co-designs to further utilize the potential of heterogeneous resources. We hope this article can shed some light on future research on communication optimization for distributed training.
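As an illustrative aside (not taken from the paper): the communication the survey targets is, at its core, the per-step gradient synchronization issued through a collective communication library. Below is a minimal sketch, assuming a PyTorch environment with the NCCL backend and processes launched via torchrun, of where an all-reduce enters data-parallel training; the model, sizes, and helper name are hypothetical stand-ins.

```python
import torch
import torch.distributed as dist
import torch.nn as nn

def average_gradients(model: nn.Module) -> None:
    """Sum each gradient across workers with an all-reduce, then divide by the
    world size -- the collective that makes communication a growing fraction
    of step time as GPU compute gets faster."""
    world_size = dist.get_world_size()
    for param in model.parameters():
        if param.grad is not None:
            # Collective Communication Library layer (e.g., NCCL) runs here.
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
            param.grad /= world_size

if __name__ == "__main__":
    # torchrun sets RANK / WORLD_SIZE / MASTER_ADDR; NCCL is the usual GPU backend.
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

    model = nn.Linear(1024, 1024).cuda()   # stand-in for a large model (shard)
    data = torch.randn(32, 1024).cuda()

    loss = model(data).sum()
    loss.backward()                        # local computation
    average_gradients(model)               # communication: one all-reduce per step

    dist.destroy_process_group()
```

In the paper's terms, the parallelization strategy decides which tensors need such collectives, the collective communication library chooses and executes the algorithm, and the network carries the resulting traffic; the survey's observation is that these three layers are typically optimized in isolation.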

Authors (4)
  1. Yunze Wei
  2. Tianshuo Hu
  3. Cong Liang
  4. Yong Cui