FLUX: Fast Software-based Communication Overlap On GPUs Through Kernel Fusion (2406.06858v5)

Published 11 Jun 2024 in cs.LG and cs.DC

Abstract: Large deep learning models have demonstrated a strong ability to solve many tasks across a wide range of applications. These large models typically require distributed training and inference. Tensor parallelism is a common technique that partitions the computation of an operation or layer across devices, both to overcome the memory capacity limits of a single processor and to accelerate computation to meet latency requirements. However, this form of parallelism introduces additional communication that can account for a significant portion of overall runtime, limiting the scalability of the technique even within a group of devices with high-speed interconnects, such as GPUs linked by NVLink within a node. This paper proposes a novel method, Flux, that significantly hides communication latency behind dependent computation on GPUs. Flux over-decomposes communication and computation operations into much finer-grained operations and fuses them into a larger kernel, effectively hiding communication without compromising kernel efficiency. A fused kernel can overlap up to 96% of communication. Overall, Flux achieves up to 1.24x training speedups over Megatron-LM on a cluster of 128 GPUs spanning various GPU generations and interconnects, and up to 1.66x and 1.30x speedups for prefill and decoding inference, respectively, over vLLM on a cluster of 8 GPUs with various GPU generations and interconnects.
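For intuition, below is a minimal, hypothetical PyTorch sketch of the decompose-and-overlap idea the abstract describes: the all-gather feeding a tensor-parallel GEMM is split into chunks so that NCCL transfers for later chunks run concurrently with matrix multiplies on already-gathered chunks. This illustrates only the coarse-grained overlap principle that Flux pushes further, fusing fine-grained communication into the compute kernel itself; all function and variable names here are invented for illustration, not taken from the paper's code.

```python
# Hypothetical sketch (not Flux's fused-kernel implementation):
# chunk-wise overlap of a tensor-parallel all-gather with GEMM.
# Launch with e.g.:  torchrun --nproc_per_node=2 overlap_sketch.py
import torch
import torch.distributed as dist


def chunked_allgather_matmul(x_local, weight, num_chunks=4):
    """Compute all_gather(x_local) @ weight chunk by chunk, so the
    NCCL gathers of later chunks overlap with GEMMs on earlier ones.
    Row order of the result is chunk-major; a real implementation
    would restore the original layout."""
    world_size = dist.get_world_size()
    chunks = x_local.chunk(num_chunks, dim=0)  # contiguous views

    # Issue every gather asynchronously up front; NCCL runs them on
    # its own internal stream, concurrently with the compute stream.
    pending = []
    for c in chunks:
        bufs = [torch.empty_like(c) for _ in range(world_size)]
        work = dist.all_gather(bufs, c.contiguous(), async_op=True)
        pending.append((work, bufs))

    # Consume chunks in order: while chunk i is being multiplied,
    # the transfers for chunks i+1..n-1 are still in flight.
    outputs = []
    for work, bufs in pending:
        work.wait()  # sync compute stream with this chunk's gather
        outputs.append(torch.cat(bufs, dim=0) @ weight)
    return torch.cat(outputs, dim=0)


if __name__ == "__main__":
    dist.init_process_group("nccl")
    torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())
    x = torch.randn(1024, 512, device="cuda")
    w = torch.randn(512, 256, device="cuda")
    y = chunked_allgather_matmul(x, w)
    print(dist.get_rank(), y.shape)  # (1024 * world_size, 256)
    dist.destroy_process_group()
```

This kind of pipelining still pays kernel-launch and tail overheads at each chunk boundary; Flux instead decomposes communication to tile granularity inside the GEMM kernel itself, which is how the paper reports overlapping up to 96% of communication.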

Authors (13)
  1. Wenlei Bao
  2. Qi Hou
  3. Chengquan Jiang
  4. Ningxin Zheng
  5. Xuanrun Zhang
  6. Zuquan Song
  7. Ziheng Jiang
  8. Haibin Lin
  9. Xin Liu
  10. Yinmin Zhong
  11. Xin Jin
  12. Li-Wen Chang
  13. Chengji Yao
Citations (8)
