TokenCarve: Information-Preserving Visual Token Compression in Multimodal Large Language Models (2503.10501v1)

Published 13 Mar 2025 in cs.CV

Abstract: Multimodal LLMs (MLLMs) are becoming increasingly popular, while the high computational cost associated with multimodal data input, particularly from visual tokens, poses a significant challenge. Existing training-based token compression methods improve inference efficiency but require costly retraining, while training-free methods struggle to maintain performance when aggressively reducing token counts. In this study, we reveal that the performance degradation of MLLMs closely correlates with the accelerated loss of information in the attention output matrix. This insight introduces a novel information-preserving perspective, making it possible to maintain performance even under extreme token compression. Based on this finding, we propose TokenCarve, a training-free, plug-and-play, two-stage token compression framework. The first stage employs an Information-Preservation-Guided Selection (IPGS) strategy to prune low-information tokens, while the second stage further leverages IPGS to guide token merging, minimizing information loss. Extensive experiments on 11 datasets and 2 model variants demonstrate the effectiveness of TokenCarve. It can even reduce the number of visual tokens to 22.2% of the original count, achieving a 1.23x speedup in inference, a 64% reduction in KV cache storage, and only a 1.54% drop in accuracy. Our code is available at https://github.com/ShawnTan86/TokenCarve.
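The two-stage prune-then-merge idea can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the per-token information score is the L2 norm of that token's row in the attention output matrix (the abstract only says the score is derived from that matrix), and it merges each pruned token into its most similar kept token by a score-weighted average. The function name `carve_tokens` and all parameters are hypothetical.

```python
import numpy as np

def carve_tokens(visual_tokens, attn_output, keep_ratio=0.222):
    """Two-stage compression sketch (hypothetical, not the paper's code):
    stage 1 prunes tokens with low information scores; stage 2 merges each
    pruned token into its most similar kept token to limit information loss."""
    n, d = visual_tokens.shape
    n_keep = max(1, int(round(n * keep_ratio)))

    # Assumed information score: L2 norm of each token's attention-output row.
    scores = np.linalg.norm(attn_output, axis=1)

    # Stage 1: keep the top-scoring tokens, mark the rest for merging.
    order = np.argsort(scores)[::-1]
    kept_idx = np.sort(order[:n_keep])
    pruned_idx = order[n_keep:]
    kept = visual_tokens[kept_idx].astype(float).copy()
    weights = scores[kept_idx].copy()

    # Stage 2: merge each pruned token into its nearest kept token
    # (cosine similarity), averaging weighted by the information scores.
    for i in pruned_idx:
        t = visual_tokens[i]
        sims = kept @ t / (np.linalg.norm(kept, axis=1) * np.linalg.norm(t) + 1e-8)
        j = int(np.argmax(sims))
        kept[j] = (weights[j] * kept[j] + scores[i] * t) / (weights[j] + scores[i])
        weights[j] += scores[i]
    return kept, kept_idx
```

With `keep_ratio=0.222` this reproduces the abstract's headline compression level (22.2% of the original visual tokens) in shape only; the actual IPGS scoring and merging criteria are described in the paper.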

Authors (8)
  1. Xudong Tan
  2. Peng Ye
  3. Chongjun Tu
  4. Jianjian Cao
  5. Yaoxin Yang
  6. Lin Zhang
  7. Dongzhan Zhou
  8. Tao Chen