A Modern Take on Visual Relationship Reasoning for Grasp Planning

Published 3 Sep 2024 in cs.RO and cs.CV | arXiv:2409.02035v2

Abstract: Interacting with real-world cluttered scenes poses several challenges to robotic agents, which need to understand complex spatial dependencies among the observed objects to determine optimal pick sequences or efficient object retrieval strategies. Existing solutions typically address simplified scenarios and focus on predicting pairwise object relationships following an initial object detection phase, but often overlook the global context or struggle to handle redundant and missing object relations. In this work, we present a modern take on visual relational reasoning for grasp planning. We introduce D3GD, a novel testbed that includes bin picking scenes with up to 35 objects from 97 distinct categories. Additionally, we propose D3G, a new end-to-end transformer-based dependency graph generation model that simultaneously detects objects and produces an adjacency matrix representing their spatial relationships. Recognizing the limitations of standard metrics, we employ the Average Precision of Relationships for the first time to evaluate model performance, conducting an extensive experimental benchmark. The obtained results establish our approach as the new state-of-the-art for this task, laying the foundation for future research in robotic manipulation. We publicly release the code and dataset at https://paolotron.github.io/d3g.github.io.
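
The abstract's two technical contributions lend themselves to short illustrations. First, a minimal sketch of the core idea as stated above: a transformer decoder yields one embedding per object query, and shared heads predict per-object class logits and boxes together with a dense adjacency matrix scoring every ordered pair of queries for a spatial dependency. All module names, layer sizes, and the pairwise scoring scheme below are illustrative assumptions, not the authors' actual D3G architecture.

import torch
import torch.nn as nn

class DependencyGraphHead(nn.Module):
    """Hypothetical joint detection + dependency-graph head (not the paper's code)."""

    def __init__(self, d_model: int = 256, num_classes: int = 97):
        super().__init__()
        self.class_head = nn.Linear(d_model, num_classes + 1)  # +1 "no object" slot
        self.box_head = nn.Sequential(
            nn.Linear(d_model, d_model), nn.ReLU(),
            nn.Linear(d_model, 4),  # normalized (cx, cy, w, h)
        )
        # Project queries into "source" and "destination" roles so that
        # adjacency[i, j] scores the ordered relation "object i depends on object j".
        self.rel_src = nn.Linear(d_model, d_model)
        self.rel_dst = nn.Linear(d_model, d_model)

    def forward(self, queries: torch.Tensor):
        # queries: (batch, num_queries, d_model) from a DETR-style decoder.
        logits = self.class_head(queries)
        boxes = self.box_head(queries).sigmoid()
        src = self.rel_src(queries)  # (B, N, D)
        dst = self.rel_dst(queries)  # (B, N, D)
        # Scaled dot products over all ordered query pairs -> (B, N, N).
        adjacency = torch.einsum("bid,bjd->bij", src, dst) / src.shape[-1] ** 0.5
        return logits, boxes, adjacency.sigmoid()

head = DependencyGraphHead()
q = torch.randn(2, 35, 256)  # e.g. a bin-picking scene with up to 35 objects
logits, boxes, adj = head(q)  # shapes: (2, 35, 98), (2, 35, 4), (2, 35, 35)

Second, the Average Precision of Relationships is named but not defined in the abstract; the sketch below assumes the common visual-relationship-detection convention: rank predicted relations by confidence, count a prediction as a true positive when it matches a distinct ground-truth relation (e.g. both endpoint boxes at IoU >= 0.5 and the pair actually related), and take the area under the resulting precision-recall curve. The matching step is assumed done upstream; this may differ from the paper's exact protocol.

import numpy as np

def relationship_ap(scores: np.ndarray, is_tp: np.ndarray, num_gt: int) -> float:
    """AP over ranked relation predictions (illustrative, not the paper's code)."""
    order = np.argsort(-scores)  # rank by descending confidence
    tp = is_tp[order].astype(float)
    precision = np.cumsum(tp) / (np.arange(len(tp)) + 1)
    # Sum precision at each recall step (one step per matched ground truth).
    return float(np.sum(precision * tp) / max(num_gt, 1))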

Authors (2)