
Don't Look Only Once: Towards Multimodal Interactive Reasoning with Selective Visual Revisitation (2505.18842v1)

Published 24 May 2025 in cs.CL and cs.CV

Abstract: We present v1, a lightweight extension to Multimodal LLMs (MLLMs) that enables selective visual revisitation during inference. While current MLLMs typically consume visual input only once and reason purely over internal memory, v1 introduces a simple point-and-copy mechanism that allows the model to dynamically retrieve relevant image regions throughout the reasoning process. This mechanism augments existing architectures with minimal modifications, enabling contextual access to visual tokens based on the model's evolving hypotheses. To train this capability, we construct v1g, a dataset of 300K multimodal reasoning traces with interleaved visual grounding annotations. Experiments on three multimodal mathematical reasoning benchmarks -- MathVista, MathVision, and MathVerse -- demonstrate that v1 consistently improves performance over comparable baselines, particularly on tasks requiring fine-grained visual reference and multi-step reasoning. Our results suggest that dynamic visual access is a promising direction for enhancing grounded multimodal reasoning. Code, models, and data will be released to support future research.

Summary

  • The paper introduces v1, a system enabling Multimodal Large Language Models to dynamically reaccess image regions during reasoning through selective visual revisitation.
  • The method employs a point-and-copy mechanism trained on v1g, a 300K-example dataset of multimodal reasoning traces with interleaved visual grounding annotations.
  • Empirically, v1 improves performance on multimodal mathematical reasoning benchmarks, demonstrating the benefit of dynamic visual access for integrating visual context.

Essay: Selective Visual Revisitation in Multimodal Interactive Reasoning

The paper "Don't Look Only Once: Towards Multimodal Interactive Reasoning with Selective Visual Revisitation" introduces v1, an innovative extension designed to optimize Multimodal LLMs (MLLMs) by enabling selective visual revisitation during inference. Traditionally, MLLMs process visual inputs only once at the beginning and rely solely on internal memory thereafter. v1 aerates this approach with a point-and-copy mechanism that allows models to dynamically reaccess and retrieve pertinent image regions throughout the reasoning process.

The central motivation stems from the recognition that human cognition frequently returns to visual inputs during reasoning, a behavior supported by cognitive science findings. Replicating this iterative engagement with images could therefore enhance multimodal reasoning by allowing interpretations to be refined and updated in context. The authors argue that current MLLMs treat visual information as static, which constrains their ability to emulate the recursive visual analysis observed in human reasoning.

A pivotal component of the implementation is the v1g dataset, comprising 300,000 multimodal reasoning traces with interleaved visual grounding annotations, which is used to train the revisitation capability. Dataset construction involves oversampling reasoning paths from a pretrained MLLM, decomposing these traces into visual queries and retrieval steps, and grounding each visual reference with a bounding box. This setup supports the modular point-and-copy mechanism, enabling dynamic visual revisitation during inference without relying on image generation, and thus preserving computational efficiency and stability.
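
As an illustration only, a single v1g-style example might look roughly like the record below: a reasoning trace interleaved with visual queries, each grounded by a bounding box. The field and class names (GroundedStep, V1gExample, bbox) are assumptions for the sketch, not the dataset's actual schema.

```python
# Hedged sketch of a v1g-style training record, based only on the description
# above (reasoning steps interleaved with visual queries and bounding boxes).
from dataclasses import dataclass, field
from typing import List, Optional, Tuple


@dataclass
class GroundedStep:
    text: str                                                 # one reasoning step from the sampled trace
    visual_query: Optional[str] = None                        # what needs to be looked at, if anything
    bbox: Optional[Tuple[float, float, float, float]] = None  # (x1, y1, x2, y2) grounding


@dataclass
class V1gExample:
    image_path: str
    question: str
    steps: List[GroundedStep] = field(default_factory=list)


# Illustrative record: steps that reference the image carry a query and a box.
example = V1gExample(
    image_path="chart_0001.png",
    question="What is the difference between the two tallest bars?",
    steps=[
        GroundedStep("Locate the tallest bar.", "tallest bar", (0.10, 0.05, 0.25, 0.90)),
        GroundedStep("Locate the second-tallest bar.", "second tallest bar", (0.40, 0.15, 0.55, 0.90)),
        GroundedStep("Subtract the two values to obtain the answer."),
    ],
)
print(len(example.steps), "steps,", sum(s.bbox is not None for s in example.steps), "grounded")
```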

Empirically, the paper demonstrates v1's improved performance across three multimodal mathematical reasoning benchmarks: MathVista, MathVision, and MathVerse. These benchmarks are particularly demanding because they require models to integrate visual context into symbolic reasoning chains. v1 consistently outperforms existing models of similar scale and approaches the capabilities of much larger models, indicating that dynamic visual access meaningfully strengthens multimodal reasoning.

The approach also requires only minimal architectural changes, making it compatible with popular MLLM architectures such as Qwen2-VL and InternVL2.5. Through layers equipped with point-and-copy functionality, image embeddings can be selectively revisited during generation, reinforcing step-wise interaction with visual inputs.
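
A hedged sketch of the kind of component this might involve is shown below: a small pointing head that scores cached visual token embeddings against the current decoder state and returns a soft copy to be re-injected into the generation stream. The module name PointerHead and its projections are illustrative assumptions; the actual integration into Qwen2-VL or InternVL2.5 may differ.

```python
# Hedged sketch of a learnable pointing head attached to a decoder,
# assuming the image embeddings are kept in a cache the head can attend over.
import torch
import torch.nn as nn


class PointerHead(nn.Module):
    """Scores cached visual tokens against the current decoder state."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.query_proj = nn.Linear(hidden_dim, hidden_dim)
        self.key_proj = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, decoder_state: torch.Tensor, visual_tokens: torch.Tensor) -> torch.Tensor:
        # decoder_state: (hidden_dim,), visual_tokens: (num_tokens, hidden_dim)
        q = self.query_proj(decoder_state)                  # (hidden_dim,)
        k = self.key_proj(visual_tokens)                    # (num_tokens, hidden_dim)
        scores = k @ q / k.shape[-1] ** 0.5                 # scaled pointer logits
        weights = torch.softmax(scores, dim=-1)             # distribution over visual tokens
        return weights @ visual_tokens                      # soft-copied visual embedding


# Usage: re-inject the selected visual content into the decoding stream.
head = PointerHead(hidden_dim=64)
visual_tokens = torch.randn(32, 64)                         # cached image embeddings
decoder_state = torch.randn(64)
revisited = head(decoder_state, visual_tokens)
print(revisited.shape)                                      # torch.Size([64])
```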

In conclusion, the development of v1 presents promising implications for grounded multimodal reasoning by effectively incorporating dynamic visual revisitation. The method aligns computational processes more closely with human cognitive patterns, suggesting potential for more accurate and efficient reasoning in AI applications. The research opens avenues for future exploration of interactive visual mechanisms in advanced multimodal reasoning systems, moving beyond static representations toward more nuanced, comprehensive analytical capabilities. The path forward includes extending v1 to domains such as scientific diagrams, medical imaging, and visual commonsense, potentially incorporating weak supervision and reinforcement learning to further optimize retrieval strategies.
