CacheBlend: Fast Large Language Model Serving for RAG with Cached Knowledge Fusion (2405.16444v3)

Published 26 May 2024 in cs.LG

Abstract: LLMs often incorporate multiple text chunks in their inputs to provide the necessary contexts. To speed up the prefill of long LLM inputs, one can pre-compute the KV cache of a text and re-use the KV cache when the context is reused as the prefix of another LLM input. However, the reused text chunks are not always the input prefix, which makes precomputed KV caches not directly usable, since they ignore the text's cross-attention with the preceding texts. Thus, the benefits of reusing KV caches remain largely unrealized. This paper tackles just one challenge: when an LLM input contains multiple text chunks, how to quickly combine their precomputed KV caches in order to achieve the same generation quality as the expensive full prefill (i.e., without reusing KV cache)? This challenge naturally arises in retrieval-augmented generation (RAG), where the input is supplemented with multiple retrieved texts as the context. We present CacheBlend, a scheme that reuses the precomputed KV caches, regardless of whether they are the prefix or not, and selectively recomputes the KV values of a small subset of tokens to partially update each reused KV cache. Meanwhile, the small extra delay for recomputing some tokens can be pipelined with the retrieval of KV caches within the same job, allowing CacheBlend to store KV caches on slower devices with more storage capacity while retrieving them without increasing the inference delay. By comparing CacheBlend with state-of-the-art KV cache reusing schemes on three open-source LLMs of various sizes and four popular benchmark datasets of different tasks, we show that CacheBlend reduces time-to-first-token (TTFT) by 2.2-3.3x and increases inference throughput by 2.8-5x compared with full KV recompute, without compromising generation quality. The code is available at https://github.com/LMCache/LMCache.

Citations (3)

Summary

  • The paper introduces a selective key-value recompute method that speeds up LLM inference by efficiently reusing caches in multi-chunk inputs.
  • It achieves a 2.2-3.3x reduction in time-to-first-token and a 2.8-5x increase in throughput compared to full cache recomputation methods.
  • The approach maintains generation quality and supports cost-effective deployment by enabling KV cache storage on slower, less expensive devices.

CacheBlend: Efficient KV Cache Reuse in LLMs

The paper introduces CacheBlend, a framework for accelerating LLM inference when the input consists of multiple text chunks, as is common in Retrieval-Augmented Generation (RAG) tasks. CacheBlend focuses on efficient reuse and selective recomputation of the key-value (KV) caches produced during prefill. The goal is to combine the precomputed KV caches of these text chunks while matching the generation quality of a full KV recompute and significantly reducing time-to-first-token (TTFT).

Technical Overview

KV Cache Reuse Challenges:

Traditionally, LLMs recompute KV caches for each new input, which becomes computationally expensive for the long inputs typical of RAG. Existing approaches such as prefix caching and full KV reuse have not satisfactorily reduced this delay while maintaining generation quality: prefix caching can reuse only the chunk that appears as the input prefix, forgoing reuse of subsequent chunks, while full KV reuse ignores cross-attention between chunks, degrading response quality.
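To make the contrast concrete, here is a toy sketch (hypothetical chunk identifiers; no real serving API is implied) of why prefix caching reuses nothing when the retrieved chunks are not the exact leading prefix of the cached input, and where full KV reuse goes wrong.

```python
# Toy illustration of the prefix-caching limitation in RAG.
# Chunk identifiers and the single cached sequence are hypothetical.

def reusable_under_prefix_caching(input_chunks, cached_chunks):
    """Prefix caching only reuses chunks that form the exact leading
    prefix of the new input; everything after the first miss is recomputed."""
    reused = []
    for new, cached in zip(input_chunks, cached_chunks):
        if new != cached:
            break
        reused.append(new)
    return reused

# A RAG request retrieves chunks B and C, but the cache was built for [A, B, C]:
print(reusable_under_prefix_caching(["B", "C", "query"], ["A", "B", "C"]))  # -> []
# Full KV reuse would splice B's and C's caches anyway, but those caches were
# computed without B or C attending to the text that now precedes them,
# which is what degrades answer quality.
```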

CacheBlend Proposal:

CacheBlend tackles these challenges with a method called selective KV recompute. The approach selects a small subset of tokens, termed High-KV-Deviation (HKVD) tokens, whose KV values are recomputed, while the remaining tokens' KV values are reused as-is. The selection is motivated by the observation that attention matrices in transformers are sparse, so the information that matters is typically concentrated in a small fraction of the input tokens.
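Below is a minimal sketch of the HKVD selection step under simplified assumptions: single-layer, single-head tensors and an explicitly available reference KV. In the paper, deviations measured on one layer guide which tokens to recompute on the next, so a full reference is never actually materialized; the function name and shapes here are illustrative, not CacheBlend's actual API.

```python
# Minimal sketch of selective KV recompute (HKVD token selection).
import numpy as np

def select_hkvd_tokens(kv_reused: np.ndarray,
                       kv_full: np.ndarray,
                       recompute_ratio: float = 0.15) -> np.ndarray:
    """Pick the tokens whose reused KV values deviate most from a freshly
    computed reference (the 'KV deviation') on one layer."""
    # kv_reused, kv_full: [num_tokens, head_dim] for a single layer/head.
    deviation = np.linalg.norm(kv_full - kv_reused, axis=-1)  # per-token L2 gap
    k = max(1, int(recompute_ratio * deviation.shape[0]))
    return np.argsort(deviation)[-k:]                         # indices of HKVD tokens

# Toy example: 8 tokens with 4-dim KV vectors; pretend tokens 2 and 5 drifted
# because their chunk no longer attends to the same preceding text.
rng = np.random.default_rng(0)
reused = rng.normal(size=(8, 4))
full = reused.copy()
full[[2, 5]] += 1.0
print(select_hkvd_tokens(reused, full, recompute_ratio=0.25))  # -> e.g. [2 5]
```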

Key Results and Contributions

Experimentally, CacheBlend reduces TTFT by 2.2-3.3x and increases inference throughput by 2.8-5x compared with full KV recompute, while maintaining comparable generation quality. Because the small recompute delay can be overlapped with cache retrieval, KV caches can be stored on slower, less expensive storage devices without increasing inference delay, a crucial advantage for deploying LLMs in cost-sensitive environments.
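The overlap of retrieval and recompute can be sketched as a simple software pipeline, assuming per-layer granularity and stand-in functions for the storage read and the partial prefill (both names are hypothetical): the next layer's KV cache is fetched from slow storage while the current layer's HKVD tokens are recomputed, so the I/O latency is hidden behind compute.

```python
# Sketch of pipelining KV-cache loading with selective recompute.
from concurrent.futures import ThreadPoolExecutor

def load_kv_from_disk(layer):          # stand-in for a slow storage read
    return f"kv[layer {layer}]"

def recompute_hkvd_tokens(layer, kv):  # stand-in for partial prefill of one layer
    return f"updated {kv}"

def blended_prefill(num_layers):
    with ThreadPoolExecutor(max_workers=1) as io:
        pending = io.submit(load_kv_from_disk, 0)      # prefetch layer 0
        for layer in range(num_layers):
            kv = pending.result()                      # wait for this layer's KV
            if layer + 1 < num_layers:                 # overlap the next fetch
                pending = io.submit(load_kv_from_disk, layer + 1)
            recompute_hkvd_tokens(layer, kv)           # compute hides the I/O

blended_prefill(num_layers=4)
```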

Implications for Future AI Developments

CacheBlend's approach offers a pioneering perspective on optimizing the performance of LLMs in practical applications, notably RAG tasks where context recall speed and quality are critical. By enabling effective KV cache reuse at non-prefix positions, CacheBlend paves the way for more efficient, scalable LLM deployments. This methodology could prompt further research into cache efficiency and the refinement of attention mechanisms in transformers, potentially leading to breakthroughs in how LLMs are trained and served in real-world settings.

Speculative Future Directions

As AI research progresses, it is plausible that methods like CacheBlend will serve as foundational components in settings where inputs require sophisticated, dynamic context handling, improving model responsiveness and practical utility. Further work could extend the CacheBlend framework to integrate seamlessly with emerging models and hardware accelerators.

Overall, CacheBlend is an incremental yet meaningful optimization that targets prefill inefficiencies in LLM inference, a step towards more responsive and economically viable AI systems.