KV Cache Compression, But What Must We Give in Return? A Comprehensive Benchmark of Long Context Capable Approaches (2407.01527v2)
Abstract: Long context capability is a crucial competency for LLMs as it mitigates the human struggle to digest long-form texts. This capability enables complex task-solving scenarios such as book summarization, code assistance, and many more tasks that are traditionally manpower-intensive. However, transformer-based LLMs face significant challenges with long context input due to the growing size of the KV cache and the intrinsic complexity of attending to extended inputs. Multiple schools of efficiency-driven approaches - such as KV cache quantization, token dropping, prompt compression, linear-time sequence models, and hybrid architectures - have been proposed to produce efficient yet long context-capable models. Despite these advancements, no existing work has comprehensively benchmarked these methods in a reasonably aligned environment. In this work, we fill this gap by providing a taxonomy of current methods and evaluating 10+ state-of-the-art approaches across seven categories of long context tasks. Our work reveals numerous previously unknown phenomena and offers insights - as well as a friendly workbench - for the future development of long context-capable LLMs. The source code is available at https://github.com/henryzhongsc/longctx_bench.
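To make the memory pressure behind this line of work concrete, here is a minimal back-of-the-envelope sketch (not from the paper; the model shapes are assumptions loosely matching a Llama-2-7B-style decoder) showing how the KV cache grows linearly with context length and how much a hypothetical 4-bit cache quantization scheme could shrink it.

```python
# Illustrative sketch only: estimate the KV cache footprint of an assumed
# 7B-style decoder at long context, in FP16 vs. a hypothetical INT4 cache.

def kv_cache_bytes(num_layers, num_kv_heads, head_dim, seq_len, bytes_per_value):
    # 2x accounts for keys and values; one vector per layer, per KV head, per position.
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * bytes_per_value

# Assumed shapes: 32 layers, 32 KV heads, head_dim 128 (roughly Llama-2-7B-like).
fp16 = kv_cache_bytes(32, 32, 128, seq_len=128_000, bytes_per_value=2)    # 16-bit values
int4 = kv_cache_bytes(32, 32, 128, seq_len=128_000, bytes_per_value=0.5)  # 4-bit values

print(f"FP16 KV cache @ 128k tokens: {fp16 / 2**30:.1f} GiB")  # ~62.5 GiB
print(f"INT4 KV cache @ 128k tokens: {int4 / 2**30:.1f} GiB")  # ~15.6 GiB
```

The linear dependence on `seq_len` is why the surveyed method families (quantization, token dropping, prompt compression, linear-time models, hybrids) all target the cache in one way or another; the actual savings and accuracy trade-offs of each are what the benchmark measures.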
- Jiayi Yuan
- Hongyi Liu
- Yu-Neng Chuang
- Songchen Li
- Guanchu Wang
- Duy Le
- Hongye Jin
- Vipin Chaudhary
- Zhaozhuo Xu
- Zirui Liu
- Xia Hu
- Shaochen Zhong