Task-Aware KV Compression For Cost-Effective Long Video Understanding (2506.21184v1)
Abstract: Long-video understanding (LVU) remains a severe challenge for existing multimodal LLMs (MLLMs), primarily due to the prohibitive computational cost. Recent approaches have explored KV compression to mitigate this issue, but they often suffer from significant information loss at high compression ratios. In this paper, we introduce Video-X2L, which flexibly preserves critical video information for each LVU task. Video-X2L involves two key operations. The first is bi-level KV compression: during the MLLM's pre-filling stage, Video-X2L generates two types of compressed KVs, low-compression KVs (L-KVs) that capture fine-grained video details and high-compression KVs (H-KVs) that offer compact video representations. The second is selective KV re-loading: during the MLLM's decoding stage, Video-X2L re-loads L-KVs for the most critical video chunks while using H-KVs for the less important ones. This allows the MLLM to fully utilize task-specific information while maintaining overall compactness. Video-X2L is simple yet effective: it requires no additional training and is directly compatible with existing KV-compressible MLLMs. We evaluate Video-X2L on a variety of popular LVU benchmarks, including VideoMME, MLVU, LongVideoBench, and VNBench. Our experimental results show that Video-X2L outperforms existing KV-compression methods by a wide margin while substantially reducing computation cost.
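
The two operations described above can be sketched as follows. This is a toy illustration, not the paper's implementation: the token-importance proxy (key-vector norm), the chunk-relevance score (query-key dot product), and the function names are all assumptions made for the example.

```python
import numpy as np

def compress_kv(kv, ratio):
    """Keep the top-scoring fraction of a chunk's KV tokens.
    Importance is approximated here by key-vector L2 norm --
    an illustrative proxy, not the paper's actual criterion."""
    keys, values = kv
    n_keep = max(1, int(len(keys) * ratio))
    scores = np.linalg.norm(keys, axis=1)
    idx = np.sort(np.argsort(scores)[::-1][:n_keep])  # keep token order
    return keys[idx], values[idx]

def select_and_reload(chunks, query, l_ratio=0.5, h_ratio=0.1, top_k=2):
    """Bi-level compression + selective re-loading (toy sketch).
    Each chunk gets an L-KV (low compression) and an H-KV (high
    compression); the top_k chunks most relevant to the query use
    their L-KVs, the rest fall back to their H-KVs."""
    l_kvs = [compress_kv(c, l_ratio) for c in chunks]
    h_kvs = [compress_kv(c, h_ratio) for c in chunks]
    # Chunk relevance: max dot product between the query vector
    # and the chunk's keys (a stand-in for a task-aware score).
    rel = [float(np.max(c[0] @ query)) for c in chunks]
    critical = set(np.argsort(rel)[::-1][:top_k])
    return [l_kvs[i] if i in critical else h_kvs[i]
            for i in range(len(chunks))]
```

With four 10-token chunks, `l_ratio=0.5`, `h_ratio=0.1`, and `top_k=2`, two chunks retain 5 KV tokens each and the other two retain 1, so the decoding-time cache stays far smaller than the uncompressed 40 tokens.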