COAT: Compressing Optimizer states and Activation for Memory-Efficient FP8 Training (2410.19313v3)

Published 25 Oct 2024 in cs.LG and cs.AI

Abstract: FP8 training has emerged as a promising method for improving training efficiency. Existing frameworks accelerate training by applying FP8 computation to linear layers while leaving optimizer states and activations in higher precision, which fails to fully optimize memory usage. This paper introduces COAT (Compressing Optimizer States and Activations for FP8 Training), a novel FP8 training framework designed to significantly reduce memory footprint when training large models. COAT addresses current limitations through two key innovations: (1) Dynamic Range Expansion, which aligns optimizer state distributions more closely with the FP8 representation range, thereby reducing quantization error, and (2) Mixed-Granularity Activation Quantization, which optimizes activation memory using a combination of per-tensor and per-group quantization strategies. Experiments demonstrate that COAT effectively reduces end-to-end training memory footprint by 1.54x compared to BF16 while achieving nearly lossless performance across various tasks, such as LLM pretraining and fine-tuning and Vision LLM training. COAT also achieves a 1.43x end-to-end training speedup compared to BF16, performing on par with or surpassing TransformerEngine's speedup. COAT enables efficient full-parameter training of large models on fewer GPUs, and facilitates doubling the batch size in distributed training settings, providing a practical solution for scaling large-scale model training. The code is available at https://github.com/NVlabs/COAT.


Summary

  • The paper presents COAT, which uses dynamic range expansion to cut optimizer-state quantization error by approximately 1.63×.
  • COAT employs mixed-granularity activation quantization to reduce activation memory by up to 1.65×, with speedups of up to 1.57× reported in some settings.
  • Experimental validation confirms that COAT enables larger batch sizes and full-parameter training on fewer GPUs while maintaining near-lossless model performance.

Memory-Efficient Model Training with FP8 Precision

The paper presents COAT (Compressing Optimizer States and Activations for FP8 Training), a framework that improves the memory efficiency of FP8 training for large-scale models. It combines two techniques, Dynamic Range Expansion (DRE) and Mixed-Granularity Activation Quantization (MGAQ), to address a gap in existing FP8 training frameworks: optimizer states and activations are typically kept in higher precision, so much of the potential memory saving is left unrealized.

Key Contributions

  1. Dynamic Range Expansion:
    • The authors introduce DRE to align the distribution of optimizer states with the FP8 representation range before quantization. Because optimizer states typically occupy only a narrow slice of that range, expanding them first substantially reduces quantization error.
    • DRE cuts optimizer-state quantization error by approximately 1.63×; a minimal sketch of the idea appears after this list.
  2. Mixed-Granularity Activation Quantization:
    • COAT applies MGAQ to activation memory, combining coarse per-tensor quantization with fine-grained per-group quantization, reserving the finer granularity for where it matters most for accuracy, particularly around non-linear layers. A per-group quantization sketch also follows below.
    • MGAQ reduces activation memory by 1.65× compared to BF16, with a reported speedup of up to 1.57× in specific scenarios.
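
To make the DRE idea concrete, here is a minimal sketch rather than the paper's implementation. It assumes E4M3 as the FP8 format, a per-group power expansion of the form sign(x)·|x|^k with k chosen so the expanded group roughly fills the E4M3 dynamic range, and PyTorch >= 2.1 for the torch.float8_e4m3fn dtype; the group size and the clamp on k are illustrative choices.

```python
import torch

E4M3_MAX = 448.0             # largest representable E4M3 magnitude
E4M3_MIN_NORMAL = 2.0 ** -6  # smallest normal E4M3 magnitude

def expand_and_quantize(state: torch.Tensor, group_size: int = 128):
    """Quantize a flat optimizer-state tensor to FP8, one scale and exponent per group.

    Assumes state.numel() is divisible by group_size.
    """
    x = state.reshape(-1, group_size).float()
    absx = x.abs().clamp_min(1e-30)

    # Pick k per group so that (max/min) ** k roughly matches the E4M3 dynamic range.
    ratio = absx.amax(dim=1, keepdim=True) / absx.amin(dim=1, keepdim=True)
    target = torch.tensor(E4M3_MAX / E4M3_MIN_NORMAL)
    k = torch.log(target) / torch.log(ratio).clamp_min(1e-6)
    k = k.clamp(min=1.0, max=4.0)  # keep the expansion tame for near-uniform groups

    expanded = x.sign() * absx.pow(k)
    scale = expanded.abs().amax(dim=1, keepdim=True) / E4M3_MAX
    q = (expanded / scale).to(torch.float8_e4m3fn)  # 1 byte per value
    return q, scale, k                              # q plus per-group metadata

def dequantize(q: torch.Tensor, scale: torch.Tensor, k: torch.Tensor) -> torch.Tensor:
    expanded = q.float() * scale
    return expanded.sign() * expanded.abs().pow(1.0 / k)  # undo the expansion
```

In a training step, the optimizer would dequantize, apply the Adam update, and re-quantize; the per-group scale and exponent add only a small metadata overhead compared with keeping the moments in 32-bit.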

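For the activation side, the sketch below contrasts the two granularities a mixed scheme can combine: a single per-tensor scale, which is what FP8 matrix-multiply kernels typically consume, and per-group scales along the last dimension, which bound the error caused by local outliers. The group size and the E4M3 format are assumptions for illustration, not the paper's exact settings.

```python
import torch

E4M3_MAX = 448.0  # largest representable E4M3 magnitude

def quantize_per_tensor(x: torch.Tensor):
    """One scale for the whole tensor: cheap, and what FP8 GEMMs usually expect."""
    scale = x.abs().amax() / E4M3_MAX
    return (x / scale).to(torch.float8_e4m3fn), scale

def quantize_per_group(x: torch.Tensor, group_size: int = 128):
    """One scale per contiguous group along the last dim: more metadata, less error.

    Assumes the last dimension is divisible by group_size.
    """
    g = x.reshape(*x.shape[:-1], -1, group_size)
    scale = g.abs().amax(dim=-1, keepdim=True) / E4M3_MAX
    return (g / scale).to(torch.float8_e4m3fn), scale

# Example: the same activation tensor quantized coarsely and finely.
act = torch.randn(4, 1024)
q_coarse, s_coarse = quantize_per_tensor(act)  # coarse, fast path
q_fine, s_fine = quantize_per_group(act, 128)  # fine-grained, accuracy path
```

The trade-off is explicit: the per-group path stores one scale per group (a fraction of a bit of extra metadata per element) in exchange for much tighter quantization error where activation distributions are spiky.
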
Experimental Validation

The paper rigorously validates COAT across various tasks, such as LLM pretraining, fine-tuning, and VLM training. Key findings include:

  • Efficiency Gains: COAT reduces end-to-end training memory by 1.54× compared to BF16, enabling full-parameter training of large models on fewer GPUs and a doubling of the batch size in distributed training; a rough byte-level sketch of where such savings come from follows below.
  • Performance Stability: Despite the aggressive compression, models trained with COAT remain near-lossless across the evaluated datasets, indicating that the quantization schemes are robust in practice.
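
As a rough orientation only, and not the paper's accounting, the snippet below tallies per-parameter training-state bytes for a common mixed-precision Adam recipe and for a variant whose two Adam moments are stored in FP8. The precision split is an assumption for illustration; the paper's 1.54× end-to-end figure also covers activation memory, which this toy sum ignores.

```python
# Illustrative per-parameter byte accounting for training state (activations excluded).
# The baseline split (BF16 weights/grads, FP32 master weights and Adam moments) is a
# common mixed-precision recipe; the FP8 variant is an assumed configuration, not
# necessarily the paper's exact one.

def state_bytes_per_param(weight=2, grad=2, master=4, moment1=4, moment2=4):
    return weight + grad + master + moment1 + moment2

baseline = state_bytes_per_param()                         # 16 bytes per parameter
fp8_moments = state_bytes_per_param(moment1=1, moment2=1)  # 10 bytes (+ small per-group scales)

print(f"baseline: {baseline} B/param, FP8 moments: {fp8_moments} B/param")
```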

Implications

The practical implications of COAT are significant, especially for researchers and practitioners working under hardware constraints. By reducing memory consumption and enabling larger batch sizes, COAT makes full-parameter training of large models feasible on more modest GPU budgets. More broadly, the approach may encourage further exploration of low-precision training techniques and their application to other neural network components.

Future Directions

Future research could explore:

  • Integration with Communication-Efficient Techniques: Combining COAT with existing gradient compression methodologies might further optimize overall training efficiency.
  • Adaptation to Different Architectures: Investigating the adaptability of COAT to different model architectures beyond those tested in the paper could widen its applicability.

Overall, COAT presents a promising approach to large-scale neural network training, addressing critical memory bottlenecks while preserving computational efficiency and model accuracy.
