
AMSP: Reducing Communication Overhead of ZeRO for Efficient LLM Training (2311.00257v2)

Published 1 Nov 2023 in cs.DC

Abstract: Training LLMs encounters challenges in GPU memory consumption due to the high memory requirements of model states. The widely used Zero Redundancy Optimizer (ZeRO) addresses this issue through strategic sharding but introduces communication challenges at scale. To tackle this problem, we propose AMSP, a system designed to optimize ZeRO for scalable LLM training. AMSP incorporates three flexible sharding strategies: Full-Replica, Full-Sharding, and Partial-Sharding, and allows each component within the model states (Parameters, Gradients, Optimizer States) to independently choose a sharding strategy as well as the device mesh. We conduct a thorough analysis of communication costs, formulating an optimization problem to discover the optimal sharding strategy. Additionally, AMSP optimizes distributed LLM training by efficiently overlapping communication with computation. Evaluations demonstrate up to 52% Model FLOPs Utilization (MFU) when training the LLaMA-based model on 1024 GPUs, resulting in a 1.56 times improvement in training throughput compared to newly proposed systems like MiCS and ZeRO++.
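
To make the per-component flexibility described in the abstract concrete, the sketch below shows how such a configuration might be expressed: each model-state component (Parameters, Gradients, Optimizer States) independently picks one of the three sharding strategies and a device mesh. This is a minimal illustrative sketch; the class and field names (ShardingStrategy, ComponentPlan, ModelStatePlan) are assumptions for exposition, not AMSP's actual API.

```python
# Hypothetical sketch of an AMSP-style per-component sharding configuration.
# All names below are illustrative assumptions, not the paper's implementation.
from dataclasses import dataclass
from enum import Enum
from typing import Tuple


class ShardingStrategy(Enum):
    FULL_REPLICA = "full-replica"       # every rank keeps a full copy
    FULL_SHARDING = "full-sharding"     # sharded across the entire cluster
    PARTIAL_SHARDING = "partial"        # sharded within a sub-group, replicated across groups


@dataclass
class ComponentPlan:
    strategy: ShardingStrategy
    device_mesh: Tuple[int, ...]        # e.g. (num_groups, gpus_per_group)


@dataclass
class ModelStatePlan:
    # Each model-state component independently chooses a strategy and mesh,
    # which is the flexibility AMSP adds on top of vanilla ZeRO sharding.
    parameters: ComponentPlan
    gradients: ComponentPlan
    optimizer_states: ComponentPlan


# Example on a hypothetical 1024-GPU cluster (128 nodes x 8 GPUs):
# parameters and gradients are partially sharded within a node (cheaper collectives),
# while the large optimizer states are fully sharded across all GPUs to save memory.
plan = ModelStatePlan(
    parameters=ComponentPlan(ShardingStrategy.PARTIAL_SHARDING, (128, 8)),
    gradients=ComponentPlan(ShardingStrategy.PARTIAL_SHARDING, (128, 8)),
    optimizer_states=ComponentPlan(ShardingStrategy.FULL_SHARDING, (1024,)),
)
```

Under this kind of formulation, the paper's optimization problem amounts to searching over such plans to minimize communication cost subject to per-GPU memory limits.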

Authors (11)
  1. Qiaoling Chen (14 papers)
  2. Qinghao Hu (31 papers)
  3. Guoteng Wang (6 papers)
  4. Peng Sun (210 papers)
  5. Yonggang Wen (84 papers)
  6. Tianwei Zhang (199 papers)
  7. Ting Huang (26 papers)
  8. Xun Chen (166 papers)
  9. Yang Gao (761 papers)
  10. Hang Yan (86 papers)
  11. YingTong Xiong (5 papers)
Citations (5)