Augmenting Tactile Simulators with Real-like and Zero-Shot Capabilities (2309.10409v1)
Abstract: Simulating tactile perception could leverage the learning capabilities of robotic systems in manipulation tasks. However, the reality gap of simulators for high-resolution tactile sensors remains large. Models trained on simulated data often fail in zero-shot inference and require fine-tuning with real data. In addition, work on high-resolution sensors commonly focuses on flat-surfaced sensors, whereas 3D round sensors are essential for dexterous manipulation. In this paper, we propose SightGAN, a bi-directional Generative Adversarial Network (GAN). SightGAN builds on the early CycleGAN and adds two loss components aimed at accurately reconstructing the background and contact patterns, including small contact traces. SightGAN learns real-to-sim and sim-to-real mappings over difference images and is shown to generate real-like synthetic images while maintaining accurate contact positioning. The generated images can be used to train zero-shot models for newly fabricated sensors. Consequently, the resulting sim-to-real generator can be built on top of the tactile simulator to provide a real-world framework. The framework can potentially be used to train, for instance, reinforcement learning policies for manipulation tasks. The proposed model is verified in extensive experiments with test data collected from real sensors and is shown to maintain the force information embedded in the tactile images.
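The abstract describes the objective only at a high level: a CycleGAN-style bi-directional loss augmented with two extra terms (background reconstruction and contact-pattern fidelity), computed over difference images. The following is a minimal PyTorch-style sketch of what such an objective could look like. The generator/discriminator modules, the contact mask, the least-squares adversarial form, the L1 penalties, and all weights are illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def difference_image(frame, no_contact_reference):
    """Difference image: tactile frame minus the sensor's no-contact reference."""
    return frame - no_contact_reference

def sightgan_style_loss(G_s2r, G_r2s, D_real, D_sim,
                        sim, real, contact_mask,
                        w_cyc=10.0, w_bg=5.0, w_contact=5.0):
    """CycleGAN-style generator objective with two assumed auxiliary terms.

    `sim` and `real` are batches of difference images; `contact_mask` is a
    binary map of the (known, simulated) contact region. All weights and the
    exact auxiliary-loss forms are assumptions for illustration.
    """
    fake_real = G_s2r(sim)    # sim -> real-like translation
    fake_sim = G_r2s(real)    # real -> sim-like translation

    # Adversarial terms (least-squares GAN form, one common choice).
    d_fr = D_real(fake_real)
    d_fs = D_sim(fake_sim)
    adv = F.mse_loss(d_fr, torch.ones_like(d_fr)) \
        + F.mse_loss(d_fs, torch.ones_like(d_fs))

    # Cycle consistency: sim -> real -> sim and real -> sim -> real.
    cyc = F.l1_loss(G_r2s(fake_real), sim) + F.l1_loss(G_s2r(fake_sim), real)

    # Assumed background term: outside the contact region a difference image
    # should stay near zero, so translation must not add spurious traces.
    bg = (fake_real * (1.0 - contact_mask)).abs().mean()

    # Assumed contact term: preserve the contact pattern, including small
    # traces, between the simulated input and its real-like translation.
    contact = F.l1_loss(fake_real * contact_mask, sim * contact_mask)

    return adv + w_cyc * cyc + w_bg * bg + w_contact * contact
```

One plausible reading of the design: operating on difference images factors out the sensor-specific static background, so the translation only has to model contact-induced changes, which is consistent with the abstract's claim of zero-shot transfer to newly fabricated sensors.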