Open-vocabulary Pick and Place via Patch-level Semantic Maps (2406.15677v1)
Abstract: Controlling robots through natural-language instructions in open-vocabulary scenarios is pivotal for enhancing human-robot collaboration and synthesizing complex robot behavior. However, achieving this capability is challenging because the system must generalize from limited data to a wide range of tasks and environments. Existing methods rely on large, costly datasets and still struggle to generalize. This paper introduces Grounded Equivariant Manipulation (GEM), a novel approach that leverages the generative capabilities of pre-trained vision-language models together with geometric symmetries to enable few-shot and zero-shot learning for open-vocabulary robot manipulation. Experiments in both simulation and the real world demonstrate GEM's high sample efficiency and strong generalization across diverse pick-and-place tasks, showing that it adapts to novel instructions and unseen objects with minimal data. GEM marks a significant step forward in language-conditioned robot control, bridging the gap between semantic understanding and action generation in robotic systems.
- Mingxi Jia
- Haojie Huang
- Zhewen Zhang
- Chenghao Wang
- Linfeng Zhao
- Dian Wang
- Jason Xinyu Liu
- Robin Walters
- Robert Platt
- Stefanie Tellex
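To make the "patch-level semantic maps" named in the title concrete, the sketch below shows one common way such a map could be produced from a pre-trained vision-language model: per-patch features from a CLIP vision transformer are projected into CLIP's joint embedding space and scored against the embedding of a language instruction, giving a heatmap over image patches. This is a minimal illustration, not the authors' implementation; the backbone `openai/clip-vit-base-patch32`, the helper name `patch_semantic_map`, and the patch-token projection heuristic are all assumptions made for this example.

```python
# Minimal sketch of a patch-level semantic map from a pre-trained CLIP model.
# NOT the GEM implementation: backbone choice and the patch-token projection
# are assumptions for illustration only.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

MODEL_NAME = "openai/clip-vit-base-patch32"  # assumed backbone
model = CLIPModel.from_pretrained(MODEL_NAME).eval()
processor = CLIPProcessor.from_pretrained(MODEL_NAME)


@torch.no_grad()
def patch_semantic_map(image: Image.Image, instruction: str) -> torch.Tensor:
    """Return an (H_p, W_p) cosine-similarity map between image patches
    and the instruction text (higher = more semantically relevant)."""
    inputs = processor(text=[instruction], images=image,
                       return_tensors="pt", padding=True)

    # Instruction embedding in CLIP's joint space, shape (1, D).
    text_emb = model.get_text_features(
        input_ids=inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
    )

    # Per-patch tokens from the vision transformer (CLS token dropped),
    # projected into the same joint space, shape (N_patches, D).
    vision_out = model.vision_model(pixel_values=inputs["pixel_values"])
    patch_tokens = vision_out.last_hidden_state[0, 1:, :]
    patch_emb = model.visual_projection(patch_tokens)

    # Cosine similarity per patch, reshaped to the square patch grid
    # (7x7 for ViT-B/32 at 224x224 input resolution).
    sim = torch.nn.functional.cosine_similarity(patch_emb, text_emb, dim=-1)
    side = int(sim.numel() ** 0.5)
    return sim.reshape(side, side)


# Example usage: find the patch most relevant to a pick instruction.
# heatmap = patch_semantic_map(Image.open("scene.png"), "pick up the red mug")
# pick_patch = heatmap.argmax()
```

A downstream pick-and-place policy could treat such a heatmap as a language-grounded attention prior over the workspace; how GEM combines it with geometric symmetries (equivariance) for action generation is described in the paper itself.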