ProTrain: Efficient LLM Training via Memory-Aware Techniques (2406.08334v1)

Published 12 Jun 2024 in cs.DC, cs.AI, cs.LG, and cs.PF

Abstract: Training large language models (LLMs) is extremely memory-hungry. To address this problem, existing work combines CPU and GPU memory for the training process, as in ZeRO-Offload. Such techniques largely democratize billion-scale model training, making it possible to train with a few consumer graphics cards. However, based on our observations, existing frameworks often provide coarse-grained memory management and require experienced experts for configuration tuning, leading to suboptimal hardware utilization and performance. This paper proposes ProTrain, a novel training system that intelligently balances memory usage and performance by coordinating memory, computation, and IO. ProTrain achieves adaptive memory management through Chunk-Based Model State Management and Block-Wise Activation Management, guided by a Memory-Aware Runtime Profiler, without user intervention. ProTrain does not change the training algorithm and thus does not compromise accuracy. Experiments show that ProTrain improves training throughput by 1.43$\times$ to 2.71$\times$ compared to state-of-the-art training systems.
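
The abstract only names the techniques, so the following is a minimal, hypothetical PyTorch sketch of what chunk-based CPU<->GPU swapping of model states might look like in a ZeRO-Offload-style system. The `Chunk` class and its `swap_in`/`swap_out` methods are assumptions made for illustration; they are not taken from the ProTrain implementation, whose chunk sizing and scheduling are driven by its Memory-Aware Runtime Profiler.

```python
# Illustrative sketch only (not ProTrain's code): group parameters into
# contiguous chunks so each group moves between CPU and GPU in one transfer.
import torch


class Chunk:
    """Packs a group of parameters into one pinned CPU buffer so the whole
    group can be swapped to/from the GPU with a single copy."""

    def __init__(self, params):
        self.params = list(params)
        self.numels = [p.numel() for p in self.params]
        total = sum(self.numels)
        # Pinned CPU memory enables asynchronous host<->device copies.
        self.cpu_buf = torch.empty(total, dtype=self.params[0].dtype,
                                   pin_memory=torch.cuda.is_available())
        offset = 0
        for p, n in zip(self.params, self.numels):
            self.cpu_buf[offset:offset + n].copy_(p.data.view(-1))
            # Parameters now alias the chunk buffer: one CPU copy per chunk.
            p.data = self.cpu_buf[offset:offset + n].view(p.shape)
            offset += n
        self.gpu_buf = None

    def swap_in(self, device="cuda", non_blocking=True):
        """Copy the chunk to the GPU and point its parameters at the copy."""
        self.gpu_buf = self.cpu_buf.to(device, non_blocking=non_blocking)
        offset = 0
        for p, n in zip(self.params, self.numels):
            p.data = self.gpu_buf[offset:offset + n].view(p.shape)
            offset += n

    def swap_out(self):
        """Copy the (possibly updated) chunk back to CPU and drop the GPU copy."""
        self.cpu_buf.copy_(self.gpu_buf)
        offset = 0
        for p, n in zip(self.params, self.numels):
            p.data = self.cpu_buf[offset:offset + n].view(p.shape)
            offset += n
        self.gpu_buf = None


# Usage sketch: keep only the chunk needed by the current layer on the GPU.
if __name__ == "__main__" and torch.cuda.is_available():
    model = torch.nn.Sequential(torch.nn.Linear(1024, 1024),
                                torch.nn.Linear(1024, 1024))
    chunks = [Chunk(layer.parameters()) for layer in model]
    for chunk in chunks:
        chunk.swap_in()
        # ... run this layer's forward/backward here ...
        chunk.swap_out()
```

Grouping parameters into pinned, contiguous chunks trades GPU residency for fewer, larger transfers; choosing chunk granularity adaptively rather than by hand-tuned configuration is the kind of decision the paper attributes to its profiler-guided management.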

Authors (7)
  1. Hanmei Yang (2 papers)
  2. Jin Zhou (45 papers)
  3. Yao Fu (83 papers)
  4. Xiaoqun Wang (94 papers)
  5. Ramine Roane (1 paper)
  6. Hui Guan (34 papers)
  7. Tongping Liu (10 papers)
