AGILE: Approach-based Grasp Inference Learned from Element Decomposition (2402.01303v2)

Published 2 Feb 2024 in cs.RO, cs.CV, cs.SY, and eess.SY

Abstract: Humans, experts in grasp detection, grasp objects by taking hand-object positioning information into account. This work proposes a method that enables a robot manipulator to learn the same skill: grasping objects in the most optimal way according to how the gripper has approached the object. Built on deep learning, the proposed method consists of two main stages. In order to generalize the network to unseen objects, the proposed Approach-based Grasping Inference involves an element decomposition stage that splits an object into its main parts, each with one or more annotated grasps for a particular approach of the gripper. Subsequently, a grasp detection network uses the elements decomposed by Mask R-CNN and the information on the approach of the gripper in order to detect the element the gripper has approached and the most optimal grasp. To train the networks, the study introduces a robotic grasping dataset collected in the CoppeliaSim simulation environment. The dataset comprises 10 different objects with annotated element decomposition masks and grasp rectangles. The proposed method achieves a 90% grasp success rate on seen objects and 78% on unseen objects in the CoppeliaSim simulation environment. Lastly, simulation-to-reality domain adaptation is performed by applying transformations to the training set collected in simulation and augmenting the dataset, which results in a 70% physical grasp success rate using a Delta parallel robot and a two-fingered gripper.
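
The two-stage pipeline described above can be illustrated with a short sketch: a Mask R-CNN decomposes the object into elements, and the element matching the gripper's approach is selected together with an annotated grasp rectangle. This is a minimal sketch, not the authors' implementation: a nearest-centroid heuristic stands in for the paper's learned grasp detection network, COCO-pretrained weights stand in for a model fine-tuned on the paper's simulated dataset, and names such as decompose, select_grasp, and element_grasps are illustrative assumptions.

```python
# Sketch of an approach-based grasp inference pipeline (assumptions noted
# in the text above; this is not the authors' released code).
import torch
import torchvision

# Stage 1: element decomposition. COCO weights are a stand-in for a
# Mask R-CNN fine-tuned on the paper's simulated element-mask dataset.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def decompose(image: torch.Tensor, score_thresh: float = 0.5):
    """Split one RGB image (C, H, W, floats in [0, 1]) into element masks."""
    with torch.no_grad():
        out = model([image])[0]
    keep = out["scores"] > score_thresh
    return out["masks"][keep], out["labels"][keep]

# Stage 2: approach-conditioned selection. Each element class maps to an
# annotated grasp rectangle (x, y, w, h, theta); a nearest-centroid
# heuristic stands in for the paper's learned grasp detection network.
def select_grasp(masks, labels, approach_xy, element_grasps):
    """Return the grasp of the element closest to the gripper's approach point."""
    best_label, best_dist = None, float("inf")
    for mask, label in zip(masks, labels):
        ys, xs = torch.nonzero(mask[0] > 0.5, as_tuple=True)
        if xs.numel() == 0:
            continue
        centroid = torch.stack([xs.float().mean(), ys.float().mean()])
        dist = torch.linalg.norm(centroid - approach_xy)
        if dist < best_dist:
            best_label, best_dist = int(label), dist
    return element_grasps.get(best_label)

# Usage: a dummy image and approach point; element_grasps would come from
# the dataset's per-element grasp annotations.
image = torch.rand(3, 480, 640)
masks, labels = decompose(image)
grasp = select_grasp(masks, labels, torch.tensor([320.0, 240.0]),
                     element_grasps={1: (320, 240, 60, 30, 0.0)})
```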
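
For the sim-to-real step, the abstract only states that transformations are applied to the simulation-collected training set to augment it. A plausible sketch using the Albumentations library follows; the specific transforms and parameters are assumptions, not the paper's published recipe.

```python
# Hypothetical sim-to-real augmentation recipe (the exact transforms used
# in the paper are not given in the abstract).
import albumentations as A
import numpy as np

sim2real = A.Compose([
    A.GaussNoise(p=0.5),                # sensor noise absent in simulation
    A.RandomBrightnessContrast(p=0.5),  # real-world lighting variation
    A.MotionBlur(blur_limit=5, p=0.3),  # camera/arm motion blur
    A.HueSaturationValue(p=0.3),        # color shift between domains
])

# Dummy stand-in for a rendered CoppeliaSim frame (H, W, C, uint8).
sim_image = np.zeros((480, 640, 3), dtype=np.uint8)
real_like = sim2real(image=sim_image)["image"]
```

Note that grasp rectangle annotations would need any geometric transforms applied consistently; with purely photometric transforms such as the ones above, the labels can be reused unchanged.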
