ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models (2403.11289v2)

Published 17 Mar 2024 in cs.RO

Abstract: While the integration of Multi-modal Large Language Models (MLLMs) with robotic systems has significantly improved robots' ability to understand and execute natural language instructions, their performance in manipulation tasks remains limited due to a lack of robotics-specific knowledge. Conventional MLLMs are typically trained on generic image-text pairs, leaving them deficient in understanding affordances and physical concepts crucial for manipulation. To address this gap, we propose ManipVQA, a novel framework that infuses MLLMs with manipulation-centric knowledge through a Visual Question-Answering (VQA) format. This approach encompasses tool detection, affordance recognition, and a broader understanding of physical concepts. We curated a diverse dataset of images depicting interactive objects to challenge robotic understanding in tool detection, affordance prediction, and physical concept comprehension. To effectively integrate this robotics-specific knowledge with the inherent vision-reasoning capabilities of MLLMs, we leverage a unified VQA format and devise a fine-tuning strategy that preserves the original vision-reasoning abilities while incorporating the newly acquired robotic insights. Empirical evaluations conducted in robotic simulators and across various vision task benchmarks demonstrate the robust performance of ManipVQA. The code and dataset are publicly available at https://github.com/SiyuanHuang95/ManipVQA.
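The core idea of the unified VQA format is to recast robotics-specific annotations (tools, affordances, physical properties) as image/question/answer triples that an MLLM can be fine-tuned on. As a rough illustration only (the function, field names, and prompt wording here are hypothetical, not taken from the paper; see the ManipVQA repository for the actual data format), an affordance annotation might be packaged as a VQA sample like this:

```python
# Hedged sketch: converting one affordance annotation into a
# VQA-style training sample. All names and prompt templates are
# assumptions for illustration, not the paper's actual schema.

def make_affordance_vqa_sample(image_path, object_name, affordance, bbox):
    """Turn an annotated object into an image/question/answer triple.

    bbox is a normalized (x1, y1, x2, y2) box so the answer grounds
    the affordance in a specific image region.
    """
    question = (
        f"Which part of the {object_name} affords the action '{affordance}'?"
    )
    answer = (
        f"The region {bbox} of the {object_name} affords '{affordance}'."
    )
    return {"image": image_path, "question": question, "answer": answer}

sample = make_affordance_vqa_sample(
    "images/hammer_01.jpg", "hammer", "grasp", (0.12, 0.40, 0.30, 0.95)
)
print(sample["question"])
```

Framing detection and affordance labels as natural-language Q&A lets the same fine-tuning pipeline handle all three task types while keeping the model's original instruction-following interface intact.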

Authors (8)
  1. Siyuan Huang
  2. Iaroslav Ponomarenko
  3. Zhengkai Jiang
  4. Xiaoqi Li
  5. Xiaobin Hu
  6. Peng Gao
  7. Hongsheng Li
  8. Hao Dong
Citations (7)