AutoV: Learning to Retrieve Visual Prompt for Large Vision-Language Models (2506.16112v1)
Abstract: Inspired by text prompting in large language models (LLMs), visual prompting has been explored to enhance the reasoning capabilities of large vision-language models (LVLMs). Current methods design heuristic visual prompts, such as overlaying a text-query-guided attention heatmap on the original input image. However, designing effective prompts manually is challenging and time-consuming, and it often fails to exploit the benefits of different visual prompts, leading to sub-optimal performance. To this end, we propose \textbf{AutoV}, which learns to automatically select the optimal visual prompt from a set of candidates conditioned on the textual query and the input image. To train AutoV, we develop an automatic data collection and labeling pipeline that evaluates candidate visual prompts with a pre-trained LVLM: we feed each visual prompt into the LVLM and rank the candidates according to the prediction losses the model produces. Using this ranking as a supervision signal, we train AutoV to automatically choose the optimal visual prompt for LVLMs. Experimental results show that AutoV improves the performance of various LVLMs across multiple popular image understanding tasks. For instance, LLaVA-OV with AutoV achieves a $\textbf{1.7}\%$ accuracy gain on LLaVA$^{\text{Wild}}$, and AutoV boosts Qwen2.5-VL by $\textbf{1.9}\%$ on MMMU, highlighting its potential as an optimal visual prompting method for LVLMs.
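The abstract describes a two-step recipe: label candidate visual prompts by the prediction loss of a frozen LVLM, then train a selector on that ranking. Below is a minimal Python/PyTorch sketch of this idea, not the authors' implementation; `frozen_lvlm_loss` and `PromptSelector` are hypothetical stand-ins, and the listwise KL objective over the softmax of negative losses is one assumed way to turn the loss ranking into a supervision signal.

```python
# Minimal sketch (not the authors' code) of ranking-based labeling and
# selector training for visual prompts. All module and function names here
# are placeholders for illustration only.

import torch
import torch.nn as nn
import torch.nn.functional as F


def frozen_lvlm_loss(prompted_image: torch.Tensor, query_emb: torch.Tensor) -> torch.Tensor:
    """Hypothetical stand-in: return a frozen LVLM's prediction loss for one
    (visually prompted image, text query) pair. In practice this would be the
    answer loss of a pre-trained LVLM such as LLaVA-OV or Qwen2.5-VL."""
    return (prompted_image.mean() - query_emb.mean()).abs()  # placeholder score


class PromptSelector(nn.Module):
    """Hypothetical selector: scores each candidate visual prompt given fused
    image/query features; the best-scored candidate is chosen at inference."""

    def __init__(self, feat_dim: int = 64, num_candidates: int = 4):
        super().__init__()
        self.score_head = nn.Linear(feat_dim, num_candidates)

    def forward(self, fused_features: torch.Tensor) -> torch.Tensor:
        return self.score_head(fused_features)  # (batch, num_candidates)


# --- toy data: 4 candidate visual prompts for one image/query pair ---
num_candidates, feat_dim = 4, 64
candidates = [torch.rand(3, 224, 224) for _ in range(num_candidates)]
query_emb = torch.rand(feat_dim)
fused = torch.rand(1, feat_dim)  # assumed fused image+query feature for the selector

# 1) Label: rank candidates by the frozen LVLM's loss (lower loss = better prompt).
losses = torch.stack([frozen_lvlm_loss(c, query_emb) for c in candidates])
target = F.softmax(-losses, dim=0).unsqueeze(0)  # soft ranking label, shape (1, 4)

# 2) Train the selector to match that ranking (one possible listwise objective).
selector = PromptSelector(feat_dim, num_candidates)
optim = torch.optim.AdamW(selector.parameters(), lr=1e-4)
optim.zero_grad()
pred = F.log_softmax(selector(fused), dim=-1)
loss = F.kl_div(pred, target, reduction="batchmean")
loss.backward()
optim.step()

# 3) Inference: pick the candidate the selector ranks highest.
best = selector(fused).argmax(dim=-1).item()
print(f"selected visual prompt index: {best}")
```

At inference time only the lightweight selector runs before the LVLM, so the expensive per-candidate loss evaluation is needed only during label collection.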
- Yuan Zhang
- Chun-Kai Fan
- Tao Huang
- Ming Lu
- Sicheng Yu
- Junwen Pan
- Kuan Cheng
- Qi She
- Shanghang Zhang