ProTrain: Efficient LLM Training via Memory-Aware Techniques (2406.08334v1)
Abstract: Training large language models (LLMs) is extremely memory-intensive. To address this problem, existing work such as ZeRO-Offload exploits the combination of CPU and GPU memory for training. Such techniques have largely democratized billion-scale model training, making it possible to train with only a few consumer-grade graphics cards. However, based on our observation, existing frameworks often provide coarse-grained memory management and require experienced experts for configuration tuning, leading to suboptimal hardware utilization and performance. This paper proposes ProTrain, a novel training system that intelligently balances memory usage and performance by coordinating memory, computation, and IO. ProTrain achieves adaptive memory management through Chunk-Based Model State Management and Block-Wise Activation Management, guided by a Memory-Aware Runtime Profiler without user intervention. ProTrain does not change the training algorithm and thus does not compromise accuracy. Experiments show that ProTrain improves training throughput by 1.43$\times$ to 2.71$\times$ compared to the SOTA training systems.
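To make the idea of chunk-based model state management concrete, the sketch below packs model parameters into fixed-size chunks pinned in CPU memory and moves whole chunks to and from the GPU on demand. This is a minimal illustration of the general technique the abstract names, not ProTrain's implementation; the class name `ChunkManager`, the chunk size, and the `fetch`/`evict` methods are hypothetical.

```python
# Illustrative sketch of chunk-based model state management with CPU offloading.
# NOTE: this is a hypothetical example, not code from ProTrain.
import torch
import torch.nn as nn


class ChunkManager:
    """Groups parameters into fixed-size chunks and moves whole chunks
    between CPU and GPU, instead of managing each tensor individually."""

    def __init__(self, model: nn.Module, chunk_numel: int = 1 << 20):
        self.chunks = []   # flat CPU buffers, one per chunk
        self.layout = []   # (chunk_idx, offset, shape) for each parameter
        current, used = [], 0
        for p in model.parameters():
            if used + p.numel() > chunk_numel and current:
                self._seal(current)
                current, used = [], 0
            current.append(p)
            used += p.numel()
        if current:
            self._seal(current)

    def _seal(self, params):
        # Pack a group of parameters into one contiguous (pinned) CPU buffer.
        numel = sum(p.numel() for p in params)
        buf = torch.empty(numel, dtype=params[0].dtype, device="cpu",
                          pin_memory=torch.cuda.is_available())
        offset = 0
        for p in params:
            buf[offset:offset + p.numel()].copy_(p.detach().flatten())
            self.layout.append((len(self.chunks), offset, p.shape))
            offset += p.numel()
        self.chunks.append(buf)

    def fetch(self, chunk_idx: int, device: str = "cuda") -> torch.Tensor:
        # Copy one chunk to the GPU just before the layers that need it run.
        return self.chunks[chunk_idx].to(device, non_blocking=True)

    def evict(self, chunk_idx: int, gpu_buf: torch.Tensor) -> None:
        # Write an updated chunk back to CPU memory, freeing GPU space.
        self.chunks[chunk_idx].copy_(gpu_buf.to("cpu", non_blocking=True))


if __name__ == "__main__":
    model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 1024))
    mgr = ChunkManager(model, chunk_numel=512 * 1024)
    total = sum(p.numel() for p in model.parameters())
    print(f"{total} parameters packed into {len(mgr.chunks)} chunks")
```

Managing chunks rather than individual tensors amortizes transfer overhead and gives the runtime a uniform unit for deciding what to keep on the GPU, which is the kind of fine-grained, automatic control the abstract contrasts with coarse-grained, manually tuned offloading.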
- Hanmei Yang (2 papers)
- Jin Zhou (45 papers)
- Yao Fu (83 papers)
- Xiaoqun Wang (94 papers)
- Ramine Roane (1 paper)
- Hui Guan (34 papers)
- Tongping Liu (10 papers)