GoalGrasp: Grasping Goals in Partially Occluded Scenarios without Grasp Training (2405.04783v1)

Published 8 May 2024 in cs.RO

Abstract: We present GoalGrasp, a simple yet effective 6-DOF robot grasp pose detection method that relies on neither grasp pose annotations nor grasp training. Our approach enables user-specified object grasping in partially occluded scenes. By combining 3D bounding boxes with simple human grasp priors, our method introduces a novel paradigm for robot grasp pose detection. First, we employ a 3D object detector named RCV, which requires no 3D annotations, to achieve rapid 3D detection in new scenes. Leveraging the 3D bounding box and human grasp priors, our method achieves dense grasp pose detection. The experimental evaluation involves 18 common objects categorized into 7 classes based on shape. Without grasp training, our method generates dense grasp poses for 1000 scenes. We compare our method's grasp poses with those of existing approaches using a novel stability metric, demonstrating significantly higher grasp pose stability. In user-specified robot grasping experiments, our approach achieves a 94% grasp success rate. Moreover, in user-specified grasping experiments under partial occlusion, the success rate reaches 92%.
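
To make the detection-then-prior pipeline concrete, the sketch below shows one way to turn an oriented 3D bounding box into a set of candidate 6-DOF grasp poses using a simple pinch-style prior (close the gripper across the thinnest graspable side, approach a face, and spread grasp centers along the longest side). This is a minimal sketch under assumed conventions, not the authors' implementation: the function name, the box parameterization (center, extents, rotation matrix), and all parameters are illustrative.

```python
# Minimal sketch (not the paper's code): candidate 6-DOF grasp poses from an
# oriented 3D bounding box plus a simple human-style pinch prior.
import numpy as np

def grasp_poses_from_box(center, extents, R, n_samples=5, gripper_max_width=0.08):
    """Return a list of 4x4 grasp poses sampled around an oriented box.

    Prior encoded here: the gripper closes across the box's shortest axis
    (as a human pinches the thin side), approaches one remaining face, and
    grasp centers are spread along the longest axis.
    """
    order = np.argsort(extents)      # local axes, shortest -> longest
    grasp_axis = order[0]            # gripper closing direction
    approach_axis = order[1]         # approach from this face
    spread_axis = order[2]           # sample grasp centers along this axis

    if extents[grasp_axis] > gripper_max_width:
        return []                    # object too wide to pinch

    poses = []
    half = extents[spread_axis] / 2.0
    for t in np.linspace(-0.8 * half, 0.8 * half, n_samples):
        # Grasp frame in the box's local coordinates:
        # x = closing direction, z = approach direction (into the box face).
        x = np.eye(3)[grasp_axis]
        z = -np.eye(3)[approach_axis]
        y = np.cross(z, x)           # right-handed: z cross x = y
        R_local = np.stack([x, y, z], axis=1)

        offset = np.eye(3)[spread_axis] * t
        standoff = np.eye(3)[approach_axis] * (extents[approach_axis] / 2.0)

        T = np.eye(4)
        T[:3, :3] = R @ R_local
        T[:3, 3] = center + R @ (offset + standoff)
        poses.append(T)
    return poses

# Example: a bottle-like box, 6 cm x 6 cm x 20 cm, axis-aligned.
poses = grasp_poses_from_box(np.array([0.4, 0.0, 0.1]),
                             np.array([0.06, 0.06, 0.20]),
                             np.eye(3))
print(len(poses), "candidate grasp poses")
```

Denser pose sets, as in the paper's 1000-scene evaluation, would come from sampling more faces, standoffs, and in-plane rotations per box; the key point is that no grasp annotations or learned grasp model are involved.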

References (27)
  1. K. Xu, S. Zhao, Z. Zhou, Z. Li, H. Pi, Y. Zhu, Y. Wang, and R. Xiong, “A joint modeling of vision-language-action for target-oriented grasping in clutter,” arXiv preprint arXiv:2302.12610, 2023.
  2. M. Tröbinger, C. Jähne, Z. Qu, J. Elsner, A. Reindl, S. Getz, T. Goll, B. Loinger, T. Loibl, C. Kugler et al., “Introducing GARMI - a service robotics platform to support the elderly at home: Design philosophy, system overview and first results,” IEEE Robotics and Automation Letters, vol. 6, no. 3, pp. 5857–5864, 2021.
  3. H.-S. Fang, C. Wang, H. Fang, M. Gou, J. Liu, H. Yan, W. Liu, Y. Xie, and C. Lu, “Anygrasp: Robust and efficient grasp perception in spatial and temporal domains,” IEEE Transactions on Robotics, vol. 39, no. 5, pp. 3929–3945, 2023.
  4. X. Chen, J. Yang, Z. He, H. Yang, Q. Zhao, and Y. Shi, “Qwengrasp: A usage of large vision language model for target-oriented grasping,” arXiv preprint arXiv:2309.16426, 2023.
  5. X. Liu, X. Yuan, Q. Zhu, Y. Wang, M. Feng, J. Zhou, and Z. Zhou, “A depth adaptive feature extraction and dense prediction network for 6-d pose estimation in robotic grasping,” IEEE Transactions on Industrial Informatics, 2023.
  6. A. Cordeiro, L. F. Rocha, C. Costa, P. Costa, and M. F. Silva, “Bin picking approaches based on deep learning techniques: A state-of-the-art survey,” in 2022 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC). IEEE, 2022, pp. 110–117.
  7. K. Chen, R. Cao, S. James, Y. Li, Y.-H. Liu, P. Abbeel, and Q. Dou, “Sim-to-real 6d object pose estimation via iterative self-training for robotic bin picking,” in European Conference on Computer Vision. Springer, 2022, pp. 533–550.
  8. X. Deng, Y. Xiang, A. Mousavian, C. Eppner, T. Bretl, and D. Fox, “Self-supervised 6d object pose estimation for robot manipulation,” in 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020, pp. 3665–3671.
  9. A. Murali, A. Mousavian, C. Eppner, C. Paxton, and D. Fox, “6-dof grasping for target-driven object manipulation in clutter,” in 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020, pp. 6232–6238.
  10. Z. Liu, Z. Wang, S. Huang, J. Zhou, and J. Lu, “Ge-grasp: Efficient target-oriented grasping in dense clutter,” in 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2022, pp. 1388–1395.
  11. C. Xie, Y. Xiang, A. Mousavian, and D. Fox, “The best of both modes: Separately leveraging rgb and depth for unseen object instance segmentation,” in Conference on Robot Learning. PMLR, 2020, pp. 1369–1378.
  12. H. Yu, X. Lou, Y. Yang, and C. Choi, “Iosg: Image-driven object searching and grasping,” arXiv preprint arXiv:2308.05821, 2023.
  13. S. Gui and Y. Luximon, “Recursive cross-view: Use only 2d detectors to achieve 3d object detection without 3d annotations,” IEEE Robotics and Automation Letters, vol. 8, no. 10, pp. 6659–6666, 2023.
  14. S. Levine, P. Pastor, A. Krizhevsky, J. Ibarz, and D. Quillen, “Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection,” The International Journal of Robotics Research, vol. 37, no. 4-5, pp. 421–436, 2018.
  15. M. Sundermeyer, A. Mousavian, R. Triebel, and D. Fox, “Contact-graspnet: Efficient 6-dof grasp generation in cluttered scenes,” in 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021, pp. 13438–13444.
  16. B. Zhao, H. Zhang, X. Lan, H. Wang, Z. Tian, and N. Zheng, “Regnet: Region-based grasp network for end-to-end grasp detection in point clouds,” in 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021, pp. 13474–13480.
  17. J. Mahler, J. Liang, S. Niyaz, M. Laskey, R. Doan, X. Liu, J. A. Ojea, and K. Goldberg, “Dex-net 2.0: Deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics,” arXiv preprint arXiv:1703.09312, 2017.
  18. A. Mousavian, C. Eppner, and D. Fox, “6-dof graspnet: Variational grasp generation for object manipulation,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 2901–2910.
  19. H.-S. Fang, C. Wang, M. Gou, and C. Lu, “Graspnet-1billion: A large-scale benchmark for general object grasping,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 11444–11453.
  20. M. Sun and Y. Gao, “Gater: Learning grasp-action-target embeddings and relations for task-specific grasping,” IEEE Robotics and Automation Letters, vol. 7, no. 1, pp. 618–625, 2021.
  21. T. Li, J. An, K. Yang, G. Chen, and Y. Wang, “An efficient network for target-oriented robot grasping pose generation in clutter,” in 2022 IEEE 17th Conference on Industrial Electronics and Applications (ICIEA). IEEE, 2022, pp. 967–972.
  22. G. Du, K. Wang, S. Lian, and K. Zhao, “Vision-based robotic grasping from object localization, object pose estimation to grasp estimation for parallel grippers: a review,” Artificial Intelligence Review, vol. 54, no. 3, pp. 1677–1734, 2021.
  23. H. Cao, L. Dirnberger, D. Bernardini, C. Piazza, and M. Caccamo, “6impose: bridging the reality gap in 6d pose estimation for robotic grasping,” Frontiers in Robotics and AI, vol. 10, p. 1176492, 2023.
  24. H. Zhang, Z. Liang, C. Li, H. Zhong, L. Liu, C. Zhao, Y. Wang, and Q. J. Wu, “A practical robotic grasping method by using 6-d pose estimation with protective correction,” IEEE Transactions on Industrial Electronics, vol. 69, no. 4, pp. 3876–3886, 2021.
  25. J. Jiang, Z. He, X. Zhao, S. Zhang, C. Wu, and Y. Wang, “Reg-net: Improving 6dof object pose estimation with 2d keypoint long-short-range-aware registration,” IEEE Transactions on Industrial Informatics, vol. 19, no. 1, pp. 328–338, 2022.
  26. Y. Hu, P. Fua, W. Wang, and M. Salzmann, “Single-stage 6d object pose estimation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 2930–2939.
  27. I. Lenz, H. Lee, and A. Saxena, “Deep learning for detecting robotic grasps,” The International Journal of Robotics Research, vol. 34, no. 4-5, pp. 705–724, 2015.
Authors (2)
  1. Shun Gui (2 papers)
  2. Yan Luximon (6 papers)

