Towards Feasible Dynamic Grasping: Leveraging Gaussian Process Distance Field, SE(3) Equivariance and Riemannian Mixture Models (2311.02576v3)
Abstract: This paper introduces a novel approach to improve robotic grasping in dynamic environments by integrating Gaussian Process Distance Fields (GPDF), SE(3) equivariant networks, and Riemannian Mixture Models. The aim is to enable robots to grasp moving objects effectively. Our approach comprises three main components: object shape reconstruction, grasp sampling, and implicit grasp pose selection. GPDF accurately models the shape of objects, which is essential for precise grasp planning. SE(3) equivariance ensures that the sampled grasp poses are equivariant to the object's pose changes, enhancing robustness in dynamic scenarios. Riemannian Gaussian Mixture Models are employed to assess reachability, providing feasible and adaptable grasping strategies. Feasible grasp poses are targeted by novel task- or joint-space reactive controllers formulated using Gaussian Mixture Models and Gaussian Processes. This method resolves the challenge of discrete grasp pose selection, enabling smoother grasping execution. Experimental validation confirms the effectiveness of our approach in generating feasible grasp poses and achieving successful grasps in dynamic environments. By integrating these advanced techniques, we present a promising solution for enhancing robotic grasping capabilities in real-world scenarios.