Learning When to Ask for Help: Efficient Interactive Navigation via Implicit Uncertainty Estimation (2305.16502v3)
Abstract: Robots operating alongside humans often encounter unfamiliar environments that make autonomous task completion challenging. Although improving models and increasing dataset size can enhance a robot's performance in unseen environments, data collection and model refinement may be impractical in every environment. Approaches that use human demonstrations gathered through manual operation can aid refinement and generalization, but they often require significant data collection effort to produce enough demonstrations for satisfactory task performance. Interactive approaches let humans correct robot actions in real time, but their intervention policies are often based on explicit factors tied to state and task understanding that may be difficult to generalize. To address these challenges, we train a lightweight interaction policy that lets robots decide when to proceed autonomously and when to request expert assistance at estimated moments of uncertainty. An implicit estimate of uncertainty is learned by evaluating the feature extraction capabilities of the robot's visual navigation policy. With part-time human interaction, robots recover quickly from their mistakes, significantly improving the odds of task completion. In the Habitat simulation environment with a simulated human expert, part-time interaction increases the success rate by 0.38 with only a 0.3 expert interaction rate. We further show that this approach transfers to a new domain with a real human expert, improving the success rate from below 0.1 for a fully autonomous agent to 0.92 with a 0.23 human interaction rate. This approach provides a practical means for robots to interact with and learn from humans in real-world settings.
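The core mechanism described in the abstract (a small learned head that maps the navigation policy's visual features to an ask-for-help decision) can be sketched in a few lines of PyTorch. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation: the names `InteractionPolicy`, `nav_encoder`, `nav_policy`, and `HELP_THRESHOLD`, the MLP head architecture, and the 0.5 operating point are all assumptions made for the example.

```python
# Minimal sketch (not the paper's released code): a lightweight "ask for help"
# head on top of a frozen visual-navigation encoder. The head's output is
# treated as an implicit uncertainty estimate over the encoder's features.
import torch
import torch.nn as nn


class InteractionPolicy(nn.Module):
    """Predicts the probability that the navigation policy's visual
    features are unreliable, i.e., an implicit uncertainty estimate."""

    def __init__(self, feature_dim: int, hidden_dim: int = 128):
        super().__init__()
        # Hypothetical two-layer MLP head; the paper only specifies that
        # the interaction policy is "lightweight".
        self.head = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, feature_dim) visual features from the frozen
        # navigation policy's encoder.
        return torch.sigmoid(self.head(features))


HELP_THRESHOLD = 0.5  # assumed operating point; tune for the desired interaction rate


@torch.no_grad()
def step(nav_encoder, nav_policy, interaction_policy, observation):
    """One control step: act autonomously when estimated uncertainty is
    low, otherwise hand control to the human expert."""
    features = nav_encoder(observation)          # frozen feature extractor
    p_uncertain = interaction_policy(features)   # implicit uncertainty estimate
    if p_uncertain.item() > HELP_THRESHOLD:
        return "REQUEST_EXPERT"                  # defer to the human expert
    return nav_policy(features)                  # autonomous action
```

Raising or lowering the threshold trades expert interaction rate against task success: the abstract reports operating points of roughly a 0.3 interaction rate in simulation and 0.23 with a real human expert.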
- Ifueko Igbinedion
- Sertac Karaman