MCR-DL: Mix-and-Match Communication Runtime for Deep Learning (2303.08374v1)

Published 15 Mar 2023 in cs.DC and cs.LG

Abstract: In recent years, the training requirements of many state-of-the-art Deep Learning (DL) models have scaled beyond the compute and memory capabilities of a single processor, necessitating distribution among processors. Training such massive models requires advanced parallelism strategies to maintain efficiency. However, these distributed DL parallelism strategies demand a varied mixture of collective and point-to-point communication operations across a broad range of message sizes and scales; examples of models using such strategies include Deep Learning Recommendation Models (DLRM) and Mixture-of-Experts (MoE). Communication libraries' performance varies widely across different communication operations, scales, and message sizes. We propose MCR-DL: an extensible DL communication framework that supports all point-to-point and collective operations while enabling users to dynamically mix and match communication backends for a given operation without deadlocks. MCR-DL also comes packaged with a tuning suite for dynamically selecting the best communication backend for a given input tensor. We select DeepSpeed-MoE and DLRM as candidate DL models and demonstrate a 31% improvement in DS-MoE throughput on 256 V100 GPUs on the Lassen HPC system. Further, we achieve a 20% throughput improvement in a dense Megatron-DeepSpeed model and a 25% throughput improvement in DLRM on 32 A100 GPUs on the Theta-GPU HPC system.
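The mechanism the abstract describes, dispatching each communication call to whichever backend is tuned as fastest for that operation and message size, can be sketched roughly as follows. This is a minimal PyTorch illustration, not MCR-DL's actual interface: the threshold table, values, and helper names (`ALLREDUCE_TABLE`, `select_backend`, `mixed_allreduce`) are assumptions for illustration only.

```python
import torch
import torch.distributed as dist

# Sketch of MCR-DL-style mix-and-match dispatch (illustrative, not the
# paper's real API). PyTorch allows multiple process groups, each on a
# different backend; the MPI backend requires a PyTorch build with MPI
# support, e.g.:
#   groups = {
#       "mpi": dist.new_group(backend="mpi"),
#       "nccl": dist.new_group(backend="nccl"),
#   }

# Illustrative tuned table: (message-size upper bound in bytes, backend).
# In MCR-DL, this mapping would come from its tuning suite, which
# benchmarks each backend per operation, message size, and scale.
ALLREDUCE_TABLE = [
    (1 << 16, "mpi"),        # small messages (hypothetical crossover)
    (float("inf"), "nccl"),  # everything larger
]

def select_backend(table, nbytes):
    """Return the backend tuned as fastest for an nbytes-sized message."""
    for limit, backend in table:
        if nbytes <= limit:
            return backend
    return table[-1][1]

def mixed_allreduce(tensor, groups):
    """All-reduce `tensor` on the process group whose backend is tuned
    as fastest for this tensor's size. Because every rank sees the same
    tensor size, all ranks deterministically pick the same backend for
    the same call, which is what keeps mixed backends deadlock-free in
    this simplified scheme.
    """
    nbytes = tensor.element_size() * tensor.numel()
    backend = select_backend(ALLREDUCE_TABLE, nbytes)
    dist.all_reduce(tensor, group=groups[backend])
    return tensor
```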

Authors (8)
  1. Quentin Anthony (25 papers)
  2. Ammar Ahmad Awan (15 papers)
  3. Jeff Rasley (10 papers)
  4. Yuxiong He (59 papers)
  5. Aamir Shafi (13 papers)
  6. Mustafa Abduljabbar (13 papers)
  7. Hari Subramoni (16 papers)
  8. Dhabaleswar Panda (3 papers)
Citations (7)
