BitPipe: Bidirectional Interleaved Pipeline Parallelism for Accelerating Large Models Training (2410.19367v1)
Abstract: With the increasing scale of models, the need for efficient distributed training has become increasingly urgent. Recently, many synchronous pipeline parallelism approaches have been proposed to improve training throughput. However, these approaches still suffer from two major issues: pipeline bubbles caused by periodic flushing, and extra communication overhead as the number of pipeline stages grows. To this end, we propose BitPipe, a bidirectional interleaved pipeline parallelism approach for accelerating the training of large models. Specifically, a hybrid scheme that fuses interleaved pipelines with bidirectional pipelines is proposed to reduce the computation time of each micro-batch and to multiply the number of devices executing simultaneously. A V-shaped schedule with eager gradient synchronization is introduced to reduce and overlap the communication between devices. Experiments conducted on up to 32 GPUs show that BitPipe improves the training throughput of GPT-style and BERT-style models by 1.05x-1.28x compared to state-of-the-art synchronous approaches. The code of our implementation is available at https://github.com/wuhouming/BitPipe.
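As context for the pipeline-bubble issue the abstract highlights, the following is a minimal sketch using the standard bubble-fraction estimate for synchronous pipeline schedules from the general pipeline-parallelism literature (e.g., the Megatron-LM interleaved schedule). It is an assumed illustration of why interleaving helps, not BitPipe's own cost model; the function name and parameters are hypothetical.

```python
# Illustrative back-of-envelope estimate of pipeline "bubble" (idle) time
# for synchronous pipeline schedules. This is the standard formula from the
# pipeline-parallelism literature, NOT BitPipe's specific schedule.

def bubble_fraction(p: int, m: int, v: int = 1) -> float:
    """Pipeline bubble time as a fraction of ideal compute time.

    p: number of pipeline stages (devices)
    m: number of micro-batches per iteration
    v: number of interleaved model chunks per device (v=1: no interleaving)

    With interleaving, each device holds v smaller chunks, so the per-chunk
    fill/drain cost shrinks by a factor of v, reducing the bubble.
    """
    return (p - 1) / (v * m)

# With 8 stages and 32 micro-batches, interleaving 2 chunks per device
# halves the bubble fraction:
print(bubble_fraction(8, 32))        # 0.21875
print(bubble_fraction(8, 32, v=2))   # 0.109375
```

The estimate also shows the trade-off the abstract alludes to: larger `v` shrinks the bubble but increases the number of stage boundaries, and hence inter-device communication, which is what BitPipe's V-shaped schedule with eager gradient synchronization aims to reduce and overlap.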