ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models (2407.21534v4)

Published 31 Jul 2024 in cs.CV

Abstract: In this work, we propose a training-free method to inject visual referring into Multimodal LLMs (MLLMs) through learnable visual token optimization. We observe the relationship between text prompt tokens and visual tokens in MLLMs, where attention layers model the connection between them. Our approach adjusts the visual tokens from the MLP output during inference, controlling which text prompt tokens attend to which visual tokens. We optimize a learnable visual token based on an energy function, enhancing the strength of referential regions in the attention map. This enables detailed region description and reasoning without substantial training costs or model retraining. Our method offers a promising direction for integrating referential abilities into MLLMs, and it supports referring with boxes, masks, scribbles, and points. The results demonstrate that our method exhibits controllability and interpretability.
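The core idea, optimizing a learnable perturbation on the visual tokens at inference time so that text-to-visual attention concentrates on the referred region, can be sketched as follows. This is an illustrative simplification, not the paper's exact procedure: the shapes, the single-head attention, and the energy function (attention mass falling outside the region mask) are assumptions for the sketch.

```python
import torch

def optimize_visual_tokens(visual_tokens, text_query, region_mask,
                           steps=20, lr=0.1):
    """Training-free refinement of visual tokens toward a referred region.

    visual_tokens: (N, d) visual tokens from the MLP output (assumed shape)
    text_query:    (T, d) text prompt token queries (assumed shape)
    region_mask:   (N,) bool mask, True for tokens inside the referred region
    """
    # Learnable perturbation on the visual tokens; the base model is frozen.
    delta = torch.zeros_like(visual_tokens, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    scale = visual_tokens.shape[-1] ** 0.5

    for _ in range(steps):
        # Single-head text-to-visual attention (illustrative).
        attn = torch.softmax(
            text_query @ (visual_tokens + delta).T / scale, dim=-1)
        # Energy: attention mass leaking outside the referred region.
        # Minimizing it strengthens the region in the attention map.
        energy = attn[:, ~region_mask].sum()
        opt.zero_grad()
        energy.backward()
        opt.step()

    return (visual_tokens + delta).detach()
```

In this toy setup, attention mass inside the region mask increases after optimization, which mirrors the paper's goal of strengthening the referential region in the attention map without retraining the model.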

Authors (10)
  1. Mingrui Wu (13 papers)
  2. Xinyue Cai (12 papers)
  3. Jiayi Ji (51 papers)
  4. Jiale Li (17 papers)
  5. Oucheng Huang (4 papers)
  6. Gen Luo (32 papers)
  7. Hao Fei (105 papers)
  8. Xiaoshuai Sun (91 papers)
  9. Rongrong Ji (315 papers)
  10. Guannan Jiang (24 papers)
Citations (3)