LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context Inference (2406.18139v1)

Published 26 Jun 2024 in cs.CL and cs.CV

Abstract: Long-context Multimodal LLMs (MLLMs) demand substantial computational resources for inference, as the growth of their multimodal Key-Value (KV) cache with increasing input lengths challenges memory and time efficiency. Unlike single-modality LLMs that manage only textual contexts, the KV cache of long-context MLLMs includes representations from multiple images with temporal and spatial relationships, alongside related textual contexts. The predominance of image tokens means traditional optimizations for LLMs' KV caches are unsuitable for multimodal long-context settings, and no prior work has addressed this challenge. In this work, we introduce LOOK-M, a pioneering, fine-tuning-free approach that efficiently reduces the multimodal KV cache size while maintaining performance comparable to a full cache. We observe that during prompt prefill the model attends more strongly to textual features than to image features, and based on this multimodal interaction pattern we propose a text-prior method to compress the KV cache. Furthermore, to mitigate the degradation of image contextual information, we propose several compensatory strategies based on merging KV pairs. LOOK-M demonstrates that with a significant reduction in KV cache memory usage, such as an 80% reduction in some cases, it not only achieves up to 1.5x faster decoding but also maintains or even enhances performance across a variety of long-context multimodal tasks.
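To make the two ideas in the abstract concrete, here is a minimal sketch of text-prior KV cache eviction with merge-based compensation. It is a hypothetical simplification, not the authors' implementation: the function name `compress_kv_cache`, the single-head tensor shapes, the text-bias heuristic, and the average-merging rule are all assumptions made for illustration.

```python
import torch

def compress_kv_cache(keys, values, attn_scores, is_text, budget):
    """Sketch of text-prior KV cache compression with merge compensation.

    keys, values: [num_tokens, head_dim] cached KV pairs from prefill
    attn_scores:  [num_tokens] accumulated attention each token received
    is_text:      [num_tokens] bool mask, True for text tokens
    budget:       number of KV pairs to retain after compression
    """
    num_tokens = keys.shape[0]

    # Text-prior: bias the retention scores so text tokens are kept
    # preferentially, reflecting the observation that prefill attention
    # concentrates on textual features.
    biased = attn_scores + is_text.float() * attn_scores.max()
    keep_idx = torch.topk(biased, k=min(budget, num_tokens)).indices
    keep_mask = torch.zeros(num_tokens, dtype=torch.bool)
    keep_mask[keep_idx] = True

    kept_k, kept_v = keys[keep_mask].clone(), values[keep_mask].clone()
    evicted_k, evicted_v = keys[~keep_mask], values[~keep_mask]

    # Compensation: instead of discarding evicted (mostly image) KV pairs,
    # merge each one into its most similar retained key via a running
    # average, so some image context survives compression.
    counts = torch.ones(kept_k.shape[0])  # pairs merged into each slot so far
    if evicted_k.shape[0] > 0:
        sim = evicted_k @ kept_k.T          # [evicted, kept] similarity
        nearest = sim.argmax(dim=1)         # best retained slot per evictee
        for i, j in enumerate(nearest):
            kept_k[j] = (kept_k[j] * counts[j] + evicted_k[i]) / (counts[j] + 1)
            kept_v[j] = (kept_v[j] * counts[j] + evicted_v[i]) / (counts[j] + 1)
            counts[j] += 1
    return kept_k, kept_v
```

The averaging rule here stands in for the paper's family of compensatory merging strategies; the key design point it illustrates is that eviction decisions are text-biased while evicted image information is folded back into the retained cache rather than dropped.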

Authors (8)
  1. Zhongwei Wan (39 papers)
  2. Ziang Wu (2 papers)
  3. Che Liu (59 papers)
  4. Jinfa Huang (25 papers)
  5. Zhihong Zhu (45 papers)
  6. Peng Jin (91 papers)
  7. Longyue Wang (87 papers)
  8. Li Yuan (141 papers)
Citations (14)