LightSeq2: Accelerated Training for Transformer-based Models on GPUs (2110.05722v3)

Published 12 Oct 2021 in cs.CL and cs.MS

Abstract: Transformer-based neural models are used in many AI applications. Training these models is expensive, requiring large amounts of GPU resources and long running times. It is also challenging because typical inputs such as sentences have variable lengths, and the Transformer's computation patterns are more complex than those of convolutional neural networks. Existing systems either focus only on model inference or optimize only BERT-like encoder models. In this paper, we present LightSeq2, a system to accelerate training for a general family of Transformer models on GPUs. We propose a series of GPU optimization techniques tailored to the specific computation flow and memory access patterns of Transformer models. LightSeq2 supports many model architectures, including BERT (encoder-only), GPT (decoder-only), Transformer (encoder-decoder), and vision Transformer. Our experiments on a variety of models and benchmarks show that LightSeq2 is consistently faster (1.4-3.5x) than previous systems on different GPUs. In particular, it gains a 308% training speedup over existing systems on a large public machine translation benchmark (WMT14 English-German).
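The abstract only names the class of optimizations (kernel-level tuning of the Transformer's computation flow and memory accesses). As a rough, hedged illustration of one such technique, kernel fusion of element-wise operations, the CUDA sketch below fuses a bias add, dropout, and residual connection into a single pass over global memory. All function and parameter names here are illustrative assumptions, not LightSeq2's actual kernels or API.

```cuda
// Minimal sketch: fuse bias-add + dropout + residual into one kernel so the
// activation tensor is read and written once instead of three times.
// Illustrative only; not taken from the LightSeq2 codebase.
#include <cuda_runtime.h>
#include <curand_kernel.h>

__global__ void fused_bias_dropout_residual(
    float* out, const float* x, const float* bias, const float* residual,
    int hidden, int n, float keep_prob, unsigned long long seed) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i >= n) return;
  // Per-element Philox RNG state for the dropout mask.
  curandStatePhilox4_32_10_t state;
  curand_init(seed, i, 0, &state);
  float mask = (curand_uniform(&state) < keep_prob) ? 1.0f / keep_prob : 0.0f;
  // Single global-memory pass: (x + bias) -> dropout -> + residual.
  out[i] = (x[i] + bias[i % hidden]) * mask + residual[i];
}

// Host-side launch helper (error checking omitted for brevity).
void launch_fused(float* out, const float* x, const float* bias,
                  const float* residual, int hidden, int n,
                  float keep_prob, unsigned long long seed,
                  cudaStream_t stream) {
  int threads = 256;
  int blocks = (n + threads - 1) / threads;
  fused_bias_dropout_residual<<<blocks, threads, 0, stream>>>(
      out, x, bias, residual, hidden, n, keep_prob, seed);
}
```

Fusing such element-wise steps is a common way training systems reduce kernel-launch overhead and memory traffic; the paper's reported speedups come from a broader set of optimizations than this single pattern.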

Authors (8)
  1. Xiaohui Wang (34 papers)
  2. Yang Wei (18 papers)
  3. Ying Xiong (39 papers)
  4. Guyue Huang (11 papers)
  5. Xian Qian (4 papers)
  6. Yufei Ding (81 papers)
  7. Mingxuan Wang (83 papers)
  8. Lei Li (1293 papers)
Citations (29)