Augmented Reality Demonstrations for Scalable Robot Imitation Learning (2403.13910v1)
Abstract: Robot Imitation Learning (IL) is a widely used method for training robots to perform manipulation tasks, in which a robot acquires skills by mimicking human demonstrations. However, its practicality has been limited by the requirement that users be trained to operate real robot arms in order to provide demonstrations. This paper presents an innovative solution: an Augmented Reality (AR)-assisted framework for demonstration collection that empowers non-roboticist users to produce demonstrations for robot IL using devices such as the HoloLens 2. Our framework facilitates scalable and diverse demonstration collection for real-world tasks. We validate our approach with experiments on three classical robotics tasks: reach, push, and pick-and-place. The real robot performs each task successfully while replaying demonstrations collected via AR.
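The replay step described in the abstract (a robot re-executing a trajectory captured in AR) can be illustrated with a minimal sketch. This is a hypothetical illustration, not the paper's actual data format or API: it assumes a demonstration is a timestamped sequence of end-effector positions, which a robot controller would sample at replay time via linear interpolation.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical data format: a demonstration is a timestamped sequence of
# end-effector positions (x, y, z) captured from the AR headset, expressed
# in the robot's base frame. Names here are assumptions for illustration.

@dataclass
class Waypoint:
    t: float                          # seconds since demonstration start
    pos: Tuple[float, float, float]   # end-effector position (x, y, z)

def interpolate(demo: List[Waypoint], t: float) -> Tuple[float, float, float]:
    """Linearly interpolate the demonstrated position at time t for replay."""
    if t <= demo[0].t:
        return demo[0].pos
    if t >= demo[-1].t:
        return demo[-1].pos
    for a, b in zip(demo, demo[1:]):
        if a.t <= t <= b.t:
            alpha = (t - a.t) / (b.t - a.t)
            return tuple(pa + alpha * (pb - pa)
                         for pa, pb in zip(a.pos, b.pos))

# Two waypoints half a workspace apart; query the midpoint in time.
demo = [Waypoint(0.0, (0.0, 0.0, 0.0)), Waypoint(1.0, (0.2, 0.0, 0.1))]
print(interpolate(demo, 0.5))
```

At replay time, a real controller would feed each interpolated position to an inverse-kinematics solver (the paper's pipeline uses RelaxedIK-style real-time IK) to obtain joint targets; that step is omitted here.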
Authors: Yue Yang, Bryce Ikeda, Gedas Bertasius, Daniel Szafir