ISO: Overlap of Computation and Communication within Sequence For LLM Inference (2409.11155v1)
Abstract: In LLM inference, the structure of transformer models combined with multi-GPU tensor parallelism leads to sequential execution of computation and communication, leaving computing resources substantially underutilized during the communication phase. Various techniques have been developed to make better use of compute during communication, primarily by overlapping matrix computation with communication or by interleaving micro-batches across different requests. However, these approaches either fall short of ideal overlap or impose constraints on their applicability. To overcome these challenges, this paper introduces a novel computation-communication overlap strategy that operates at the sequence level. The method both increases the degree of overlap and minimizes the constraints on its applicability. Evaluations with 30B/70B models demonstrate significant efficiency improvements: the proposed technique reduces time consumption by approximately 35% on 4090 GPUs and by roughly 15% on A800 GPUs during the prefill stage of LLM inference.
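The abstract describes overlap at the sequence level but gives no code. As a rough illustration only, the sketch below shows one way such overlap could look for a row-parallel (tensor-parallel) linear layer: the sequence is split into chunks so that the all-reduce of one chunk can proceed while the matmul of the next chunk executes. The function name, chunk count, and use of asynchronous torch.distributed collectives are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the paper's code) of sequence-level compute/communication
# overlap for a tensor-parallel row-parallel linear layer.
# Assumes torch.distributed is already initialized with the NCCL backend and
# that `x` is the local activation shard of shape [seq_len, hidden/tp].
import torch
import torch.distributed as dist


def row_parallel_linear_overlapped(x, weight, num_chunks=4):
    """Row-parallel matmul whose partial results require an all-reduce.

    The sequence dimension is split into `num_chunks` pieces so the
    all-reduce of chunk i can overlap with the matmul of chunk i+1.
    """
    chunks = x.chunk(num_chunks, dim=0)           # split along the sequence axis
    outputs, handles = [], []
    for c in chunks:
        y = c @ weight                            # local partial result (compute)
        h = dist.all_reduce(y, async_op=True)     # launch communication, don't wait
        outputs.append(y)
        handles.append(h)
    for h in handles:                             # wait only after all chunks issued
        h.wait()
    return torch.cat(outputs, dim=0)
```

Because NCCL collectives run on their own stream, the asynchronous all-reduce of an earlier chunk can overlap with the matmul of a later chunk; the final waits ensure all partial sums are reduced before the concatenated output is returned.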