KV Cache Compression, But What Must We Give in Return? A Comprehensive Benchmark of Long Context Capable Approaches (2407.01527v2)

Published 1 Jul 2024 in cs.CL

Abstract: Long context capability is a crucial competency for LLMs, as it mitigates the human struggle to digest long-form texts. This capability enables complex task-solving scenarios such as book summarization, code assistance, and many other tasks that are traditionally manpower-intensive. However, transformer-based LLMs face significant challenges with long context input due to the growing size of the KV cache and the intrinsic complexity of attending to extended inputs. Multiple schools of efficiency-driven approaches (such as KV cache quantization, token dropping, prompt compression, linear-time sequence models, and hybrid architectures) have been proposed to produce efficient yet long context-capable models. Despite these advancements, no existing work has comprehensively benchmarked these methods in a reasonably aligned environment. In this work, we fill this gap by providing a taxonomy of current methods and evaluating 10+ state-of-the-art approaches across seven categories of long context tasks. Our work reveals numerous previously unknown phenomena and offers insights - as well as a friendly workbench - for the future development of long context-capable LLMs. The source code is available at https://github.com/henryzhongsc/longctx_bench.
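To give a flavor of the first family of approaches the abstract lists, below is a minimal sketch of per-tensor symmetric int8 KV cache quantization. The function names, tensor shapes, and the simple symmetric scheme are illustrative assumptions for this sketch, not the specific method of any approach benchmarked in the paper.

```python
import numpy as np

def quantize_kv(tensor: np.ndarray):
    """Symmetric int8 quantization: store int8 values plus one fp32 scale."""
    scale = float(np.abs(tensor).max()) / 127.0
    if scale == 0.0:
        scale = 1.0  # avoid division by zero for an all-zero tensor
    q = np.clip(np.round(tensor / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_kv(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate fp32 tensor from the int8 values and scale."""
    return q.astype(np.float32) * scale

# A stand-in cached key tensor of shape (num_heads, seq_len, head_dim).
rng = np.random.default_rng(0)
k_cache = rng.standard_normal((8, 128, 64)).astype(np.float32)

q, scale = quantize_kv(k_cache)
k_restored = dequantize_kv(q, scale)

# int8 storage is 4x smaller than fp32 (ignoring the scalar scale),
# at the cost of a small, bounded reconstruction error.
print(q.nbytes, k_cache.nbytes)
print(float(np.abs(k_cache - k_restored).max()))
```

The trade-off this illustrates is exactly the paper's framing: the cache shrinks 4x, but every attention read now sees slightly perturbed keys and values, and the benchmark measures what that costs across task categories.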

Authors (12)
  1. Jiayi Yuan (25 papers)
  2. Hongyi Liu (26 papers)
  3. Yu-Neng Chuang (28 papers)
  4. Songchen Li (2 papers)
  5. Guanchu Wang (33 papers)
  6. Duy Le (12 papers)
  7. Hongye Jin (15 papers)
  8. Vipin Chaudhary (34 papers)
  9. Zhaozhuo Xu (43 papers)
  10. Zirui Liu (58 papers)
  11. Xia Hu (186 papers)
  12. Shaochen Zhong (15 papers)
Citations (6)