BitPipe: Bidirectional Interleaved Pipeline Parallelism for Accelerating Large Models Training (2410.19367v1)

Published 25 Oct 2024 in cs.LG, cs.AI, and cs.DC

Abstract: As models grow in scale, the need for efficient distributed training has become increasingly urgent. Recently, many synchronous pipeline parallelism approaches have been proposed to improve training throughput. However, these approaches still suffer from two major issues: pipeline bubbles caused by periodic flushing, and extra communication due to the growing number of pipeline stages. To this end, we propose BitPipe, a bidirectional interleaved pipeline-parallel approach for accelerating large-model training. Specifically, a hybrid scheme that fuses interleaved pipelines with bidirectional pipelines is proposed to reduce the computation time of each single micro-batch and multiply the number of devices executing simultaneously. A V-shaped schedule with eager gradient synchronization is introduced to reduce and overlap inter-device communication. Experiments conducted on up to 32 GPUs show that BitPipe improves the training throughput of GPT-style and BERT-style models by 1.05x-1.28x compared to state-of-the-art synchronous approaches. The code of our implementation is available at https://github.com/wuhouming/BitPipe.
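
The intuition behind the bidirectional part of the scheme, as described in the abstract, is that micro-batches are fed into the pipeline from both ends of the device chain, so more devices are busy during ramp-up and drain. The toy simulation below is only a sketch of that intuition, not BitPipe's actual schedule: it is forward-only and ignores backward passes, interleaved model chunks, the V-shaped schedule, and communication costs. The device count, micro-batch count, and greedy scheduling policy are illustrative assumptions.

# Toy, forward-only schedule simulation (illustrative assumption, not the BitPipe
# implementation): compare a unidirectional pipeline with a bidirectional one in
# which odd-numbered micro-batches traverse the devices in reverse order.

def simulate(num_devices, num_microbatches, bidirectional):
    """Return (makespan_in_steps, busy_device_steps) under a greedy unit-time schedule."""
    # Traversal order of devices for each micro-batch.
    routes = []
    for mb in range(num_microbatches):
        if bidirectional and mb % 2 == 1:
            routes.append(list(range(num_devices - 1, -1, -1)))  # enter at the last device
        else:
            routes.append(list(range(num_devices)))              # enter at the first device
    progress = [0] * num_microbatches  # next stage index of each micro-batch
    steps, busy = 0, 0
    while any(p < num_devices for p in progress):
        # Each device runs at most one ready stage per step; lower micro-batch index wins.
        chosen = {}
        for mb in range(num_microbatches):
            if progress[mb] < num_devices:
                dev = routes[mb][progress[mb]]
                if dev not in chosen:
                    chosen[dev] = mb
        for dev, mb in chosen.items():
            progress[mb] += 1
            busy += 1
        steps += 1
    return steps, busy

def bubble_ratio(num_devices, num_microbatches, bidirectional):
    steps, busy = simulate(num_devices, num_microbatches, bidirectional)
    return 1.0 - busy / (steps * num_devices)  # fraction of device-time left idle

if __name__ == "__main__":
    D, M = 4, 8  # illustrative device and micro-batch counts
    print("unidirectional bubble ratio:", round(bubble_ratio(D, M, False), 3))
    print("bidirectional bubble ratio: ", round(bubble_ratio(D, M, True), 3))

With these illustrative settings the bidirectional variant finishes in fewer steps and reports a lower bubble ratio, which is the qualitative effect the abstract attributes to fusing bidirectional with interleaved pipelines; the paper's reported 1.05x-1.28x gains additionally rely on the V-shaped schedule and eager gradient synchronization.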
