Optimus-CC: Efficient Large NLP Model Training with 3D Parallelism Aware Communication Compression (2301.09830v1)

Published 24 Jan 2023 in cs.LG and cs.DC

Abstract: In training modern large NLP models, it has become common practice to split models across multiple GPUs using 3D parallelism. This technique, however, suffers from high inter-node communication overhead. Compressing the communication is one way to mitigate the overhead by reducing inter-node traffic volume; however, existing compression techniques have critical limitations when applied to NLP models trained with 3D parallelism: 1) they target only data-parallel traffic, and 2) they already degrade model quality too much. In this paper, we present Optimus-CC, a fast and scalable distributed training framework for large NLP models with aggressive communication compression. Optimus-CC differs from existing communication compression frameworks in the following ways. First, we compress pipeline-parallel (inter-stage) traffic: specifically, we compress the inter-stage backpropagation traffic and the embedding synchronization traffic in addition to applying existing data-parallel compression methods. Second, we propose techniques to avoid the model quality drop caused by compression, and we provide mathematical and empirical analyses showing that these techniques successfully suppress the compression error. Lastly, we analyze the pipeline and opt to selectively compress only the traffic that lies on the critical path, which further reduces the compression error. We demonstrate our solution on a GPU cluster and achieve superior speedup over baseline state-of-the-art distributed training solutions without sacrificing model quality.
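
To make the compression idea concrete, the sketch below shows one common way to compress a gradient tensor before sending it over an inter-node link: top-k sparsification with an error-feedback residual so that dropped values are carried into the next iteration rather than lost. This is an illustrative assumption, not the specific compressor used by Optimus-CC (the abstract does not name one), and the function names and the residual buffer are hypothetical. In a pipeline-parallel setting, such a compressor would sit just before the inter-stage backpropagation send and the corresponding decompressor just after the receive.

    import torch

    def compress_topk(grad: torch.Tensor, ratio: float = 0.01):
        """Keep only the largest-magnitude `ratio` fraction of entries.
        Returns (values, indices, shape): the payload actually sent
        over the inter-node link instead of the dense tensor."""
        flat = grad.flatten()
        k = max(1, int(flat.numel() * ratio))
        _, indices = torch.topk(flat.abs(), k)
        return flat[indices], indices, grad.shape

    def decompress_topk(values, indices, shape):
        """Rebuild a dense tensor on the receiving pipeline stage."""
        flat = torch.zeros(torch.Size(shape).numel(),
                           dtype=values.dtype, device=values.device)
        flat[indices] = values
        return flat.view(shape)

    # Error feedback: accumulate what was dropped so the compression
    # error does not compound unboundedly across training iterations.
    _residual = None

    def compress_with_feedback(grad: torch.Tensor, ratio: float = 0.01):
        global _residual
        if _residual is None:
            _residual = torch.zeros_like(grad)
        corrected = grad + _residual
        values, indices, shape = compress_topk(corrected, ratio)
        _residual = corrected - decompress_topk(values, indices, shape)
        return values, indices, shape

The same pattern generalizes to the other traffic classes the paper targets (data-parallel gradient all-reduce and embedding synchronization); the "selective compression" idea in the abstract corresponds to applying the compressor only to transfers that sit on the pipeline's critical path.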

Authors (7)
  1. Jaeyong Song (11 papers)
  2. Jinkyu Yim (3 papers)
  3. Jaewon Jung (13 papers)
  4. Hongsun Jang (8 papers)
  5. Hyung-Jin Kim (27 papers)
  6. Youngsok Kim (13 papers)
  7. Jinho Lee (44 papers)
Citations (20)
