Automatic Robot Path Planning for Visual Inspection from Object Shape (2312.02603v1)
Abstract: Visual inspection is a crucial yet time-consuming task across many industries. Numerous established methods apply machine learning to inspection tasks and therefore require task-specific training data, including predefined inspection poses and the training images captured from them. Acquiring such data and integrating it into an inspection framework is challenging because of the variety of objects and scenes involved, and because manual collection of training data by humans creates additional bottlenecks; together, these factors hinder the automation of visual inspection across diverse domains. This work proposes a solution for automatic path planning using a single depth camera mounted on a robot manipulator. Point clouds obtained from the depth images are processed and filtered to extract object profiles, which are then transformed into inspection target paths for the robot end-effector. The approach relies only on the geometry of the object and generates an inspection path that follows its shape, normal to the surface. Depending on object size and shape, inspection paths can be defined as single- or multi-path plans. Results are demonstrated in both simulated and real-world environments, yielding promising inspection paths for objects of varying sizes and shapes. Code and video are available open-source at: https://github.com/CuriousLad1000/Auto-Path-Planner
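The core idea of the abstract — estimating surface normals from the captured point cloud and offsetting points along those normals to obtain end-effector inspection targets — can be sketched as follows. This is a minimal NumPy illustration under assumptions, not the authors' implementation: the k-nearest-neighbour PCA normal estimation, the camera-facing orientation heuristic, and the `standoff` parameter are all choices made here for illustration.

```python
import numpy as np

def estimate_normals(points, k=8):
    """Estimate a unit surface normal per point via local PCA over
    its k nearest neighbours (a common heuristic; brute-force k-NN
    is used here for simplicity)."""
    normals = np.zeros_like(points)
    for i, p in enumerate(points):
        dists = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(dists)[:k]]
        cov = np.cov(nbrs.T)
        # Eigenvector of the smallest eigenvalue spans the direction
        # of least variance, i.e. the surface normal.
        _, vecs = np.linalg.eigh(cov)
        n = vecs[:, 0]
        # Orient normals consistently toward the sensor, assumed
        # here to look down the +z axis.
        if n[2] < 0:
            n = -n
        normals[i] = n
    return normals

def inspection_targets(points, normals, standoff=0.1):
    """Offset each surface point along its normal by a standoff
    distance to get candidate camera/end-effector positions."""
    return points + standoff * normals
```

On a flat horizontal patch, for example, all estimated normals point along +z and the targets hover `standoff` metres above the surface; a real pipeline would additionally filter and cluster the cloud and order the targets into one or more continuous paths, as the paper describes.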