
Grasping, Part Identification, and Pose Refinement in One Shot with a Tactile Gripper (2312.17650v1)

Published 29 Dec 2023 in cs.RO

Abstract: The rise of additive manufacturing brings unique opportunities and challenges. Rapid changes to part design and massive part customization, both distinctive to 3D printing (3DP), can be achieved easily. Customized parts that are unique yet share similar features, such as dental moulds, shoe insoles, or engine vanes, could be manufactured industrially with 3DP. However, massive part customization challenges the existing production paradigm of robotics applications, since the current approaches to part identification and pose refinement are repetitive, data-driven, and object-dependent. A bottleneck therefore exists in robotics applications for massively customized 3DP parts, as feature-based deep learning approaches struggle to distinguish between similar parts such as shoe insoles belonging to different people. We propose a method that augments patterns on 3DP parts so that grasping, part identification, and pose refinement can be executed in one shot with a tactile gripper. We evaluate our approach experimentally from three perspectives, including real insertion tasks that mimic robotic sorting and packing, achieving excellent classification results, a 95% insertion success rate, and sub-millimeter pose refinement accuracy.
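The pipeline described in the abstract pairs a single tactile imprint with two computations: classifying the augmented pattern to identify the part, and registering the imprint against the identified part's template to refine the in-hand pose. The Python sketch below illustrates one plausible realization with OpenCV; the Hu-moment descriptor, the ORB-plus-RANSAC rigid fit, and every function name and threshold here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a one-shot tactile pipeline: classify an augmented
# surface pattern from a tactile image, then refine the in-hand pose by
# rigid 2D registration. Assumes grayscale uint8 tactile images; all names
# and parameters are illustrative.
import cv2
import numpy as np


def pattern_descriptor(tactile_img: np.ndarray) -> np.ndarray:
    """Binarize the tactile imprint and summarize it with Hu moments."""
    _, binary = cv2.threshold(tactile_img, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    hu = cv2.HuMoments(cv2.moments(binary)).flatten()
    # Log-scale the moments for numerical stability across pattern sizes.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)


def identify_part(tactile_img: np.ndarray, templates: dict) -> str:
    """Nearest-neighbour match against per-part template descriptors,
    where `templates` maps part id -> precomputed descriptor."""
    d = pattern_descriptor(tactile_img)
    part_ids = list(templates)
    dists = [np.linalg.norm(d - templates[p]) for p in part_ids]
    return part_ids[int(np.argmin(dists))]


def refine_pose(tactile_img: np.ndarray, template_img: np.ndarray):
    """Estimate the planar offset (dx, dy, dtheta in degrees) between the
    observed imprint and the identified part's pattern template."""
    orb = cv2.ORB_create()
    k1, d1 = orb.detectAndCompute(template_img, None)
    k2, d2 = orb.detectAndCompute(tactile_img, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches])
    dst = np.float32([k2[m.trainIdx].pt for m in matches])
    # Rigid (rotation + translation + uniform scale) fit with outlier
    # rejection; the rotation angle is read off the 2x3 transform matrix.
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    dtheta = np.degrees(np.arctan2(M[1, 0], M[0, 0]))
    return M[0, 2], M[1, 2], dtheta
```

Because both steps consume the same tactile image captured at grasp time, identification and pose refinement come essentially for free with the grasp, which is what makes the one-shot formulation attractive for sorting and packing visually similar customized parts.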
