
Zebra: Extending Context Window with Layerwise Grouped Local-Global Attention (2312.08618v1)

Published 14 Dec 2023 in cs.CL

Abstract: This paper introduces a novel approach to enhance the capabilities of LLMs in processing and understanding extensive text sequences, a critical aspect in applications requiring deep comprehension and synthesis of large volumes of information. Recognizing the inherent challenges in extending the context window for LLMs, primarily built on the Transformer architecture, we propose a new model architecture, referred to as Zebra. This architecture efficiently manages the quadratic time and memory complexity of full attention in the Transformer by employing grouped local-global attention layers. Our model, akin to a zebra's alternating stripes, balances local and global attention layers, significantly reducing computational requirements and memory consumption. Comprehensive experiments, including pretraining from scratch, continuation of long context adaptation training, and long instruction tuning, are conducted to evaluate Zebra's performance. The results show that Zebra achieves comparable or superior performance on both short and long sequence benchmarks, while also enhancing training and inference efficiency.
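The core idea of grouped local-global attention is that only some layers attend over the full sequence, while the remaining layers in each group restrict attention to a local window, cutting the quadratic cost for most of the network. The sketch below illustrates this layerwise pattern with plain masked attention. It is a minimal illustration, not the paper's implementation: the group size, window size, the rule that the first layer of each group is global, and all function names are assumptions chosen for clarity, and a real implementation would use an efficient windowed attention kernel rather than materializing a full mask.

```python
# Illustrative sketch of layerwise grouped local-global attention.
# Assumptions (not taken from the paper): group_size=4, window=256,
# and "first layer of each group is global, the rest are local".
import torch

def attention_mask(seq_len: int, layer_idx: int,
                   group_size: int = 4, window: int = 256) -> torch.Tensor:
    """Boolean causal mask for one layer: True = may attend, False = blocked."""
    i = torch.arange(seq_len).unsqueeze(1)   # query positions, shape (L, 1)
    j = torch.arange(seq_len).unsqueeze(0)   # key positions,   shape (1, L)
    causal = j <= i
    if layer_idx % group_size == 0:
        # Global layer: full causal attention (quadratic in seq_len).
        return causal
    # Local layer: causal attention limited to a sliding window
    # (linear in seq_len for fixed window size).
    return causal & (i - j < window)

def attend(q, k, v, mask):
    """Plain scaled dot-product attention with a boolean mask."""
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
    scores = scores.masked_fill(~mask, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

# Example: with group_size=4, layers 0 and 4 attend globally,
# layers 1-3 and 5-7 attend only within the local window.
seq_len, d = 512, 64
q = k = v = torch.randn(1, seq_len, d)
for layer_idx in range(8):
    out = attend(q, k, v, attention_mask(seq_len, layer_idx))
```

Under this layout, memory and compute for the local layers grow roughly with seq_len × window instead of seq_len², while the periodic global layers let information propagate across the whole context.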

Authors (5)
  1. Kaiqiang Song (32 papers)
  2. Xiaoyang Wang (134 papers)
  3. Sangwoo Cho (22 papers)
  4. Xiaoman Pan (25 papers)
  5. Dong Yu (328 papers)
Citations (6)