
Blockwise Parallel Transformer for Large Context Models (2305.19370v3)

Published 30 May 2023 in cs.CL and cs.LG

Abstract: Transformers have emerged as the cornerstone of state-of-the-art natural language processing models, showcasing exceptional performance across a wide range of AI applications. However, the memory demands posed by the self-attention mechanism and the large feedforward network in Transformers limit their ability to handle long sequences, thereby creating challenges for tasks involving multiple long sequences or long-term dependencies. We present a distinct approach, Blockwise Parallel Transformer (BPT), that leverages blockwise computation of self-attention and feedforward network fusion to minimize memory costs. By processing longer input sequences while maintaining memory efficiency, BPT enables training sequences 32 times longer than vanilla Transformers and up to 4 times longer than previous memory-efficient methods. Extensive experiments on language modeling and reinforcement learning tasks demonstrate the effectiveness of BPT in reducing memory requirements and improving performance.
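The following is a minimal sketch, not the authors' released implementation, of the core idea described in the abstract: queries are split into blocks, each query block accumulates attention over key/value blocks with an online softmax (so the full attention matrix is never materialized), and the feedforward network is applied to each block's output inside the same blockwise pass rather than over the whole sequence at once. The block sizes, the small GELU feedforward, and names such as `blockwise_parallel_layer` are illustrative assumptions, written here in JAX.

```python
# Sketch of blockwise parallel attention with a fused feedforward network.
# Assumes seq_len is divisible by the block sizes; single head, no masking.

import jax
import jax.numpy as jnp

def blockwise_parallel_layer(q, k, v, w1, w2, q_block=128, kv_block=128):
    """q, k, v: (seq_len, d_model); w1: (d_model, d_ff); w2: (d_ff, d_model)."""
    seq_len, d = q.shape
    scale = 1.0 / jnp.sqrt(d)

    def process_query_block(q_blk):
        # Online-softmax accumulators for this query block.
        acc = jnp.zeros_like(q_blk)                      # running weighted sum of values
        row_max = jnp.full((q_blk.shape[0],), -jnp.inf)  # running max of logits per query
        denom = jnp.zeros((q_blk.shape[0],))             # running softmax denominator

        def scan_kv(carry, kv):
            acc, row_max, denom = carry
            k_blk, v_blk = kv
            logits = (q_blk @ k_blk.T) * scale                  # (q_block, kv_block)
            new_max = jnp.maximum(row_max, logits.max(axis=-1))
            correction = jnp.exp(row_max - new_max)             # rescale old accumulators
            p = jnp.exp(logits - new_max[:, None])
            acc = acc * correction[:, None] + p @ v_blk
            denom = denom * correction + p.sum(axis=-1)
            return (acc, new_max, denom), None

        kv_blocks = (k.reshape(-1, kv_block, d), v.reshape(-1, kv_block, d))
        (acc, _, denom), _ = jax.lax.scan(scan_kv, (acc, row_max, denom), kv_blocks)
        attn_out = acc / denom[:, None]

        # Fused feedforward: apply the FFN to this block's output immediately,
        # so full-sequence FFN activations are never stored at once.
        return attn_out + jax.nn.gelu(attn_out @ w1) @ w2

    q_blocks = q.reshape(-1, q_block, d)
    out = jax.vmap(process_query_block)(q_blocks)  # query blocks processed in parallel
    return out.reshape(seq_len, d)
```

In this sketch the peak activation memory per query block scales with `q_block * kv_block` rather than `seq_len * seq_len`, which is the mechanism behind the longer trainable context lengths reported in the abstract.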

Authors (2)
  1. Hao Liu (497 papers)
  2. Pieter Abbeel (372 papers)
Citations (7)