
ContactHandover: Contact-Guided Robot-to-Human Object Handover (2404.01402v2)

Published 1 Apr 2024 in cs.RO, cs.AI, and cs.CV

Abstract: Robot-to-human object handover is an important step in many human-robot collaboration tasks. A successful handover requires the robot to maintain a stable grasp on the object while ensuring the human receives the object in a natural and easy-to-use manner. We propose ContactHandover, a robot-to-human handover system that consists of two phases: a contact-guided grasping phase and an object delivery phase. During the grasping phase, ContactHandover predicts both 6-DoF robot grasp poses and a 3D affordance map of human contact points on the object. The robot grasp poses are re-ranked by penalizing those that block human contact points, and the robot executes the highest-ranking grasp. During the delivery phase, the robot end-effector pose is computed by maximizing the number of human contact points close to the human while minimizing the human arm joint torques and displacements. We evaluate our system on 27 diverse household objects and show that it achieves better visibility and reachability of human contacts for the receiver compared to several baselines. More results can be found at https://clairezixiwang.github.io/ContactHandover.github.io
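The grasping-phase re-ranking described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it assumes a candidate set of grasp poses with planner scores, a per-point human-contact probability map over the object point cloud, and a simple spherical occlusion model (`block_radius` is a hypothetical parameter) for which contact points a gripper placement would cover.

```python
import numpy as np

def rerank_grasps(grasp_poses, grasp_scores, contact_map, points,
                  block_radius=0.03):
    """Re-rank candidate grasps, penalizing those that occlude
    predicted human contact points (illustrative sketch only).

    grasp_poses  : (G, 4, 4) array of 6-DoF grasp poses.
    grasp_scores : (G,) base grasp-quality scores from a grasp planner.
    contact_map  : (N,) per-point human contact probabilities.
    points       : (N, 3) object point cloud, same frame as the grasps.
    Returns grasp indices ordered best-first.
    """
    penalties = np.empty(len(grasp_poses))
    for i, pose in enumerate(grasp_poses):
        gripper_center = pose[:3, 3]  # translation part of the grasp pose
        dists = np.linalg.norm(points - gripper_center, axis=1)
        blocked = dists < block_radius  # points the gripper would cover
        # Penalty: total contact probability this grasp would occlude.
        penalties[i] = contact_map[blocked].sum()
    # Higher (score - penalty) is better; sort descending.
    return np.argsort(grasp_scores - penalties)[::-1]
```

With this ranking, a grasp slightly weaker by planner score but clear of the handle region the human is predicted to touch can outrank a stronger grasp that covers it, which is the intuition behind the paper's contact-guided phase.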
