Preemptive Motion Planning for Human-to-Robot Indirect Placement Handovers (2203.00156v3)
Abstract: As technology advances, the need for safe, efficient, and collaborative human-robot teams becomes increasingly important. One of the most fundamental collaborative tasks in any setting is the object handover. Human-to-robot handovers can take either of two approaches: (1) direct hand-to-hand transfer or (2) indirect transfer via placement and pick-up. The latter approach minimizes contact between the human and the robot, but it can also increase idle time, since the robot must wait for the object to be placed on a surface before acting. To minimize such idle time, the robot must preemptively predict where the human intends to place the object. Furthermore, for the robot to act preemptively in any productive manner, both prediction and motion planning must run in real time. We introduce a novel prediction-planning pipeline that allows the robot to preemptively move toward the human agent's intended placement location, using gaze and gestures as model inputs. In this paper, we investigate the performance and drawbacks of our early intent predictor-planner, as well as the practical benefits of such a pipeline, through a human-robot case study.
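The abstract describes predicting the intended placement location from gaze and gestures. The paper does not specify its model here, but the core idea can be sketched as scoring discrete candidate placement locations by how well they align with the observed gaze and pointing rays. The following is a minimal illustrative sketch, not the authors' method; the function names, the fixed weights, and the candidate-set formulation are all assumptions for illustration.

```python
import math

def _cos_angle(origin, direction, target):
    # Cosine of the angle between a ray (origin, direction) and the
    # vector from the ray's origin to the target point.
    v = [t - o for t, o in zip(target, origin)]
    dot = sum(d * c for d, c in zip(direction, v))
    norm = (math.sqrt(sum(c * c for c in v))
            * math.sqrt(sum(d * d for d in direction)))
    return dot / norm if norm else -1.0

def predict_placement(candidates, gaze, gesture, w_gaze=0.6, w_gesture=0.4):
    """Pick the candidate placement location best aligned with the
    gaze and pointing-gesture rays.

    candidates: list of 3D points (x, y, z) on the placement surface
    gaze, gesture: (origin, direction) ray pairs, e.g. from head-pose
        and arm-pose tracking
    Weights are hypothetical; a real system would learn or tune them.
    """
    def score(c):
        return (w_gaze * _cos_angle(*gaze, c)
                + w_gesture * _cos_angle(*gesture, c))
    return max(candidates, key=score)
```

In a pipeline like the one described, such a prediction would be re-evaluated as new gaze/gesture observations arrive, and the motion planner would replan toward the current best candidate rather than waiting for the object to be set down.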