Learning Fine Pinch-Grasp Skills using Tactile Sensing from A Few Real-world Demonstrations (2307.04619v2)
Abstract: Imitation learning for dexterous robot manipulation, especially on a real robot setup, typically requires a large number of demonstrations. In this paper, we present a data-efficient learning-from-demonstration framework that exploits rich tactile sensing data to achieve fine bimanual pinch grasping. Specifically, we employ a convolutional autoencoder network to extract and encode high-dimensional tactile information effectively. We further develop a framework for efficient multi-sensor fusion in imitation learning, allowing the robot to learn contact-aware sensorimotor skills from demonstrations. A comparison study against the same framework without encoded tactile features highlights the effectiveness of incorporating rich contact information, which enables dexterous bimanual grasping with active contact searching. Extensive experiments demonstrate the robustness of the fine pinch-grasp policy learned directly from a few demonstrations: grasping the same object in different initial poses, generalizing to ten unseen objects, maintaining robust and firm grasps against external pushes, and contact-aware, reactive re-grasping when objects are dropped under very large perturbations. Furthermore, saliency map analysis is used to characterize the weight distribution across the sensing modalities during pinch grasping, confirming the effectiveness of our framework at leveraging multimodal information.
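To make the tactile-encoding step concrete, below is a minimal sketch of the kind of convolutional autoencoder the abstract describes for compressing tactile images into low-dimensional features. It is not the authors' released code: the input resolution (single-channel 64x64), latent size (32), and layer widths are illustrative assumptions, written in PyTorch.

```python
# Minimal convolutional autoencoder sketch for tactile images (assumed
# shapes; not the paper's actual architecture).
import torch
import torch.nn as nn

class TactileAutoencoder(nn.Module):
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        # Encoder: strided convolutions downsample the tactile image
        # to a compact latent code used as the policy's tactile feature.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),  # 16 -> 8
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, latent_dim),
        )
        # Decoder: mirror of the encoder; only needed while training
        # the reconstruction objective.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 8 * 8),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.encoder(x)     # encoded tactile feature
        return self.decoder(z)  # reconstruction for the training loss

# One reconstruction-loss training step on placeholder tactile frames.
model = TactileAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
batch = torch.rand(8, 1, 64, 64)  # placeholder tactile images in [0, 1]
loss = nn.functional.mse_loss(model(batch), batch)
opt.zero_grad()
loss.backward()
opt.step()
```

After training, a setup like this would discard the decoder and feed the frozen encoder's latent code, fused with the other sensor streams, into the imitation-learning policy.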
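The abstract also mentions saliency map analysis for describing how the learned policy weights its input modalities. A hedged sketch of one common gradient-based variant (in the style of Simonyan et al.) is shown below; the `policy` signature taking separate tactile and proprioceptive features is a hypothetical stand-in for the paper's actual fusion network.

```python
# Input-gradient saliency sketch (assumed interface, for illustration only):
# the magnitude of the action's gradient with respect to each modality
# indicates how much that modality contributes to the predicted action.
import torch

def modality_saliency(policy, tactile_feat, proprio_feat):
    """Return the mean absolute input gradient per modality."""
    tactile_feat = tactile_feat.clone().requires_grad_(True)
    proprio_feat = proprio_feat.clone().requires_grad_(True)
    action = policy(tactile_feat, proprio_feat)
    # Sum over action dimensions so a single backward pass suffices.
    action.sum().backward()
    return {
        "tactile": tactile_feat.grad.abs().mean().item(),
        "proprioception": proprio_feat.grad.abs().mean().item(),
    }

# Usage with a dummy fusion policy (hypothetical feature sizes).
def dummy_policy(t, p):
    return (t.mean() + p.mean()).unsqueeze(0)

print(modality_saliency(dummy_policy, torch.rand(1, 32), torch.rand(1, 7)))
```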