PIVOT: Iterative Visual Prompting Elicits Actionable Knowledge for VLMs (2402.07872v1)

Published 12 Feb 2024 in cs.RO, cs.CL, cs.CV, and cs.LG

Abstract: Vision language models (VLMs) have shown impressive capabilities across a variety of tasks, from logical reasoning to visual understanding. This opens the door to richer interaction with the world, for example robotic control. However, VLMs produce only textual outputs, while robotic control and other spatial tasks require outputting continuous coordinates, actions, or trajectories. How can we enable VLMs to handle such settings without fine-tuning on task-specific data? In this paper, we propose a novel visual prompting approach for VLMs that we call Prompting with Iterative Visual Optimization (PIVOT), which casts tasks as iterative visual question answering. In each iteration, the image is annotated with a visual representation of proposals that the VLM can refer to (e.g., candidate robot actions, localizations, or trajectories). The VLM then selects the best ones for the task. These proposals are iteratively refined, allowing the VLM to eventually zero in on the best available answer. We investigate PIVOT on real-world robotic navigation, real-world manipulation from images, instruction following in simulation, and additional spatial inference tasks such as localization. We find, perhaps surprisingly, that our approach enables zero-shot control of robotic systems without any robot training data, navigation in a variety of environments, and other capabilities. Although current performance is far from perfect, our work highlights potentials and limitations of this new regime and shows a promising approach for internet-scale VLMs in robotic and spatial reasoning domains. Website: pivot-prompt.github.io and HuggingFace: https://huggingface.co/spaces/pivot-prompt/pivot-prompt-demo.

PIVOT: Iterative Visual Prompting Elicits Actionable Knowledge for VLMs

The paper presents "PIVOT: Iterative Visual Prompting Elicits Actionable Knowledge for VLMs", introducing a novel approach that leverages vision language models (VLMs) for robotic control and spatial reasoning tasks via iterative visual optimization. The proposed method, PIVOT (Prompting with Iterative Visual Optimization), transforms robotic and spatial tasks into visual question answering (VQA) problems, enabling VLMs to select and iteratively refine visual proposals until the best available solution is identified.

Core Methodology

PIVOT revolves around the concept of annotating images with visual representations of task-specific proposals—such as candidate robotic actions, localizations, or trajectories. These annotations allow VLMs to select the most promising actions. The selected proposals undergo iterative refinement, enhancing precision and ultimately converging on optimal solutions.

Visual Prompt Mapping

A fundamental aspect of PIVOT is its visual prompt mapping, which annotates an image with visual elements (e.g., numbered arrows indicating possible robotic actions) paired with textual labels. This mapping allows VLMs, which natively produce only textual outputs, to express and evaluate spatial decisions by referring to the labeled candidates.
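
The sketch below illustrates one way such an annotation could be produced with the Pillow imaging library. The function name, marker style, and parameters are our own illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a PIVOT-style visual prompt: draw numbered markers at
# candidate 2D points so the VLM can refer to each candidate by its label.
# Uses Pillow; all names and parameters here are illustrative assumptions.
from PIL import Image, ImageDraw


def annotate_candidates(image: Image.Image,
                        points: list[tuple[float, float]],
                        radius: int = 12) -> Image.Image:
    """Return a copy of `image` with each candidate point drawn as a
    numbered circle (labels 1..N) that the VLM can reference in text."""
    annotated = image.copy()
    draw = ImageDraw.Draw(annotated)
    for label, (x, y) in enumerate(points, start=1):
        draw.ellipse([x - radius, y - radius, x + radius, y + radius],
                     fill="white", outline="red", width=3)
        draw.text((x - 4, y - 6), str(label), fill="black")
    return annotated
```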

Iterative Optimization

The iterative procedure at the core of PIVOT, which the authors liken to the cross-entropy method, involves the following steps (a code sketch follows the list):

  1. Annotating the image with initial candidate proposals.
  2. Querying the VLM to rank these candidates.
  3. Refining the candidate pool based on VLM feedback.
  4. Repeating the process until the proposals converge on a satisfactory answer.
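
A minimal sketch of this loop under stated assumptions is given below. It relies on a hypothetical `query_vlm` helper that returns the numbered labels the VLM prefers for an annotated image and instruction, plus an `annotate_candidates` helper like the one sketched above; the Gaussian fit-and-resample step mirrors the cross-entropy-method analogy, but the distributions, hyperparameters, and signatures are our own illustrative choices, not the authors' implementation.

```python
# Hypothetical sketch of PIVOT-style iterative visual optimization.
# `query_vlm` and `annotate_candidates` are assumed helpers, not the authors'
# code; the resampling step is a simple cross-entropy-method-style refit of a
# Gaussian over 2D image-space actions.
import numpy as np


def pivot_loop(image, instruction, query_vlm, annotate_candidates,
               num_candidates=8, num_iters=3, seed=0):
    rng = np.random.default_rng(seed)
    w, h = image.width, image.height
    mean = np.array([w / 2.0, h / 2.0])    # start centered in the image
    std = np.array([w / 4.0, h / 4.0])     # broad initial spread

    for _ in range(num_iters):
        # 1. Sample candidate 2D actions and annotate them on the image.
        points = rng.normal(mean, std, size=(num_candidates, 2))
        points = np.clip(points, 0, [w - 1, h - 1])
        annotated = annotate_candidates(image, [tuple(p) for p in points])

        # 2. Ask the VLM which numbered candidates best satisfy the task.
        selected_labels = query_vlm(annotated, instruction)   # e.g. [3, 7]
        if not selected_labels:
            continue                         # keep the current distribution
        selected = points[[label - 1 for label in selected_labels]]

        # 3. Refit the sampling distribution around the selected candidates.
        mean = selected.mean(axis=0)
        std = np.maximum(selected.std(axis=0) * 0.5, 1.0)  # shrink, keep some exploration

    # 4. The distribution mean is the final action / location estimate.
    return mean
```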

Applications and Evaluation

The authors tested PIVOT across various domains, including:

  • Real-world robotic navigation and manipulation tasks.
  • Simulation-based robotic instruction following.
  • Spatial inference tasks such as keypoint localization.

Notably, PIVOT achieved zero-shot control of robotic systems, performing tasks without any robot-specific training data.

Robotic Control Performance

The PIVOT method was applied to several robotic embodiments: mobile manipulators for both navigation and manipulation, a Franka robotic arm, and the RAVENS simulation environment. For real-world navigation, PIVOT reached success rates of up to 100% on some tasks, notably outperforming non-iterative and non-parallel variants of the approach. Similarly, in manipulation tasks, iterative refinement and parallel querying improved both task success rates and the number of action steps required.

Spatial Reasoning and Visual Grounding

In addition to robotic control, PIVOT’s effectiveness was evaluated on visual grounding tasks using the RefCOCO dataset, achieving strong performance in identifying target objects through iterative annotation selection.
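
In such grounding evaluations, a selected point is typically counted as correct if it falls inside the ground-truth region for the referred object. The sketch below illustrates that style of check; it is our own illustration, not the paper's evaluation code.

```python
# Illustrative point-in-box accuracy for a grounding-style evaluation:
# a prediction counts as correct when the final selected point lands inside
# the ground-truth bounding box. This is a sketch, not the authors' code.
def point_in_box(point, box):
    """point = (x, y); box = (x_min, y_min, x_max, y_max)."""
    x, y = point
    x_min, y_min, x_max, y_max = box
    return x_min <= x <= x_max and y_min <= y <= y_max


def grounding_accuracy(predicted_points, gt_boxes):
    """Fraction of predicted points that fall inside their ground-truth box."""
    hits = sum(point_in_box(p, b) for p, b in zip(predicted_points, gt_boxes))
    return hits / len(predicted_points)
```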

Implications and Future Directions

The implications of this research extend to both practical robotics and broader AI applications involving spatial reasoning:

  • Practical Robotics: PIVOT offers a potential pathway to deploy VLMs for real-world robotics without extensive task-specific data collection or training. This flexibility can significantly reduce the effort and cost associated with deploying robotic systems in dynamic environments.
  • Theoretical AI Development: PIVOT showcases the potential for iterative optimization techniques in enhancing the capabilities of VLMs, pushing the envelope on how these models can be integrated into low-level control tasks and embodied interaction scenarios.

Scaling and Limitations

The paper also investigates how PIVOT scales with the size of the underlying VLM, reporting consistent performance improvements with larger models. This suggests that as VLMs continue to advance, approaches like PIVOT should improve with them.

Despite its promising results, the authors highlight certain limitations:

  • 3D Understanding: VLMs, as observed, struggle with precise depth and 3D spatial understanding, impacting tasks requiring fine-grained depth perception.
  • Interaction and Precision: Tasks involving intricate interactions or close-quarters manipulation exposed weaknesses in dealing with occlusions and precision requirements.
  • Greedy Behavior: Observed myopic decision-making indicates that current VLMs can favor short-term gains over better long-horizon choices.

Conclusion

PIVOT represents a significant step toward harnessing the potential of VLMs for spatial reasoning and robotic control, proposing a robust framework for iterative visual prompting. While current limitations point to avenues for further research, the scalability and flexibility of PIVOT mark substantial theoretical and practical progress in vision-language integration for robotics and spatial AI.

Authors (23)
  1. Soroush Nasiriany (17 papers)
  2. Fei Xia (111 papers)
  3. Wenhao Yu (139 papers)
  4. Ted Xiao (40 papers)
  5. Jacky Liang (21 papers)
  6. Ishita Dasgupta (35 papers)
  7. Annie Xie (21 papers)
  8. Danny Driess (35 papers)
  9. Ayzaan Wahid (21 papers)
  10. Zhuo Xu (82 papers)
  11. Quan Vuong (41 papers)
  12. Tingnan Zhang (53 papers)
  13. Tsang-Wei Edward Lee (12 papers)
  14. Kuang-Huei Lee (23 papers)
  15. Peng Xu (357 papers)
  16. Sean Kirmani (18 papers)
  17. Yuke Zhu (134 papers)
  18. Andy Zeng (54 papers)
  19. Karol Hausman (56 papers)
  20. Nicolas Heess (139 papers)
Citations (58)