
Task-Oriented Grasping with Point Cloud Representation of Objects (2309.11689v1)

Published 20 Sep 2023 in cs.RO

Abstract: In this paper, we study the problem of task-oriented grasp synthesis from partial point cloud data using an eye-in-hand camera configuration. In task-oriented grasp synthesis, a grasp has to be selected so that the object is not lost during manipulation, and it is also ensured that adequate force/moment can be applied to perform the task. We formalize the notion of a gross manipulation task as a constant screw motion (or a sequence of constant screw motions) to be applied to the object after grasping. Using this notion of task, and a corresponding grasp quality metric developed in our prior work, we use a neural network to approximate a function for predicting the grasp quality metric on a cuboid shape. We show that by using a bounding box obtained from the partial point cloud of an object, and the grasp quality metric mentioned above, we can generate a good grasping region on the bounding box that can be used to compute an antipodal grasp on the actual object. Our algorithm does not use any manually labeled data or grasping simulator, thus making it very efficient to implement and integrate with screw linear interpolation-based motion planners. We present simulation as well as experimental results that show the effectiveness of our approach.
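The core geometric step described in the abstract (fit a bounding box to a partial point cloud, then pick an antipodal contact pair on opposite box faces) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses a simple PCA-based oriented bounding box rather than the paper's learned grasp-quality metric, and the function names are hypothetical.

```python
import numpy as np

def oriented_bounding_box(points):
    """PCA-based oriented bounding box of a (possibly partial) point cloud.

    Returns (center, axes, half_extents): `axes` is a 3x3 matrix whose
    columns are the box axes; `half_extents` are half-sizes along them.
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Principal directions from the SVD of the centered cloud.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axes = vt.T                       # columns = box axes
    proj = centered @ axes            # coordinates in the box frame
    mins, maxs = proj.min(axis=0), proj.max(axis=0)
    center = centroid + axes @ ((mins + maxs) / 2.0)
    half_extents = (maxs - mins) / 2.0
    return center, axes, half_extents

def antipodal_grasp_on_box(center, axes, half_extents, axis_idx=0):
    """Antipodal contact pair on two opposite faces of the box.

    The face normals along the chosen axis are exactly opposed, so the
    pair trivially satisfies the antipodal condition for a parallel-jaw
    gripper; the paper instead selects the face region using its learned
    task-dependent grasp metric.
    """
    n = axes[:, axis_idx]
    d = half_extents[axis_idx]
    return center + d * n, center - d * n

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-in for a partial cloud: noisy points inside a thin cuboid.
    pts = rng.uniform(-0.5, 0.5, size=(500, 3)) * np.array([0.2, 0.1, 0.05])
    c, R, h = oriented_bounding_box(pts)
    p1, p2 = antipodal_grasp_on_box(c, R, h)
    # Jaw opening equals the full box extent along the chosen axis.
    print(np.linalg.norm(p1 - p2))
```

In the paper, the contact region on the box is additionally scored by the neural approximation of the task-dependent grasp metric before projecting the grasp onto the actual object surface.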

Authors (5)
  1. Aditya Patankar (8 papers)
  2. Khiem Phi (3 papers)
  3. Dasharadhan Mahalingam (6 papers)
  4. Nilanjan Chakraborty (24 papers)
  5. IV Ramakrishnan (5 papers)
