
A-VL: Adaptive Attention for Large Vision-Language Models (2409.14846v1)

Published 23 Sep 2024 in cs.AI and cs.CV

Abstract: The Large Vision-Language Model (LVLM) integrates computer vision and natural language processing techniques, offering substantial application potential. However, these models demand extensive resources during inference. Adaptive attention techniques can dynamically reduce computational redundancy and thus improve efficiency. Although current adaptive attention methods significantly reduce the memory requirements of Transformer-based language models, they are not tailored for LVLMs. We observe that LVLMs generate responses from both remote image tokens and local text tokens, and different modalities have different attention patterns. This observation inspires us to manage the attention for each modality separately. Specifically, for visual input, we store the cache of potentially useful information but only compute the most critical parts. For language input, we care more about local information. Based on our observation and analysis of vision-language attention patterns, we develop A-VL, a plug-and-play adaptive attention tailored for LVLM inference. Extensive evaluations on three vision-language tasks and five datasets show the effectiveness of our designs. Our approach A-VL outperforms existing adaptive attention methods in reducing memory usage and computational load without compromising performance.
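The core idea in the abstract — handling the attention of each modality separately, selecting only the most critical cached visual tokens while restricting text tokens to a local window — can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, the top-k selection of visual tokens, and the fixed local window for text tokens are assumptions made for illustration only.

```python
import numpy as np

def modality_split_attention(q, k, v, modality, top_k=4, local_window=8):
    """Illustrative sketch of modality-separated adaptive attention.

    q: (d,) query vector for the current decoding step
    k, v: (n, d) cached keys and values
    modality: (n,) array, 0 = vision token, 1 = text token

    Vision tokens: the full cache is retained, but only the top_k
    highest-scoring entries are used in the attention computation.
    Text tokens: only the most recent local_window entries are used.
    """
    d = q.shape[0]
    scores = k @ q / np.sqrt(d)  # scaled dot-product scores over the cache

    vis_idx = np.where(modality == 0)[0]
    txt_idx = np.where(modality == 1)[0]

    # Vision: compute only the most critical cached tokens by score.
    if len(vis_idx) > top_k:
        vis_keep = vis_idx[np.argsort(scores[vis_idx])[-top_k:]]
    else:
        vis_keep = vis_idx

    # Text: attend only to local (most recent) context.
    txt_keep = txt_idx[-local_window:]

    sel = np.sort(np.concatenate([vis_keep, txt_keep]))
    w = np.exp(scores[sel] - scores[sel].max())  # stable softmax over the subset
    w /= w.sum()
    return w @ v[sel], sel
```

Under this sketch, a cache of 10 vision and 12 text tokens with `top_k=4` and `local_window=8` attends over only 12 of the 22 cached entries, which is the kind of reduction in memory access and computation the abstract describes.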

Authors (8)
  1. Junyang Zhang (22 papers)
  2. Mu Yuan (12 papers)
  3. Ruiguang Zhong (2 papers)
  4. Puhan Luo (1 paper)
  5. Huiyou Zhan (2 papers)
  6. Ningkang Zhang (1 paper)
  7. Chengchen Hu (6 papers)
  8. Xiangyang Li (58 papers)