RoboMP$^2$: A Robotic Multimodal Perception-Planning Framework with Multimodal Large Language Models (2404.04929v2)

Published 7 Apr 2024 in cs.RO

Abstract: Multimodal Large Language Models (MLLMs) have shown impressive reasoning abilities and general intelligence across a variety of domains. This has inspired researchers to train end-to-end MLLMs or to use large models to generate policies with human-selected prompts for embodied agents. However, these methods exhibit limited generalization on unseen tasks or scenarios, and overlook the multimodal environment information that is critical for robots to make decisions. In this paper, we introduce a novel Robotic Multimodal Perception-Planning (RoboMP$^2$) framework for robotic manipulation, which consists of a Goal-Conditioned Multimodal Perceptor (GCMP) and a Retrieval-Augmented Multimodal Planner (RAMP). Specifically, GCMP captures environment states by employing an MLLM tailored for embodied agents with capabilities for semantic reasoning and localization. RAMP uses a coarse-to-fine retrieval method to find the $k$ most-relevant policies as in-context demonstrations to enhance the planner. Extensive experiments demonstrate the superiority of RoboMP$^2$ on both the VIMA benchmark and real-world tasks, with around a 10% improvement over the baselines.
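The abstract names a coarse-to-fine retrieval step in RAMP without detailing it. The sketch below is one minimal way such a retriever could look, assuming a cheap keyword-based coarse filter over a policy library followed by embedding-similarity reranking to pick the $k$ demonstrations; `PolicyEntry`, `retrieve_demonstrations`, and all parameters are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a coarse-to-fine retriever in the spirit of RAMP:
# a cheap coarse filter narrows a policy library, then a finer similarity
# ranking keeps the k most-relevant entries as in-context demonstrations.
# All names and stages are assumptions, not the paper's actual method.
from dataclasses import dataclass

import numpy as np


@dataclass
class PolicyEntry:
    instruction: str        # natural-language task description
    policy: str             # the stored policy (e.g. an executable plan)
    embedding: np.ndarray   # precomputed embedding of the instruction


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))


def retrieve_demonstrations(
    query_emb: np.ndarray,
    query_keywords: set[str],
    library: list[PolicyEntry],
    k: int = 3,
    coarse_size: int = 20,
) -> list[PolicyEntry]:
    # Coarse stage: keep entries sharing at least one keyword with the
    # query, a cheap lexical filter standing in for the coarse matcher.
    coarse = [e for e in library
              if query_keywords & set(e.instruction.lower().split())]
    if not coarse:              # fall back to the whole library
        coarse = list(library)
    coarse = coarse[:coarse_size]

    # Fine stage: rank survivors by embedding similarity and keep the
    # k most-relevant policies as in-context demonstrations.
    coarse.sort(key=lambda e: cosine(query_emb, e.embedding), reverse=True)
    return coarse[:k]
```

Under this reading, the $k$ retrieved entries would then be concatenated into the planner's prompt as in-context demonstrations before it generates a policy for the new task.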

Authors (6)
  1. Qi Lv
  2. Hao Li
  3. Xiang Deng
  4. Rui Shao
  5. Michael Yu Wang
  6. Liqiang Nie