
PointCore: Efficient Unsupervised Point Cloud Anomaly Detector Using Local-Global Features (2403.01804v1)

Published 4 Mar 2024 in cs.CV

Abstract: Three-dimensional point cloud anomaly detection, which aims to detect anomalous data points given a training set, serves as the foundation for a variety of applications, including industrial inspection and autonomous driving. However, existing point cloud anomaly detection methods often incorporate multiple feature memory banks to fully preserve local and global representations, which comes at the high cost of computational complexity and mismatches between features. To address this, we propose an unsupervised point cloud anomaly detection framework based on joint local-global features, termed PointCore. Specifically, PointCore requires only a single memory bank to store local (coordinate) and global (PointMAE) representations, and it assigns different priorities to these local-global features, thereby reducing the computational cost and mismatch disturbance during inference. Furthermore, to be robust against outliers, a normalization ranking method is introduced that not only adjusts values of different scales to a notionally common scale, but also transforms densely distributed data into a uniform distribution. Extensive experiments on the Real3D-AD dataset demonstrate that PointCore achieves competitive inference time and the best performance in both detection and localization as compared to the state-of-the-art Reg3D-AD approach and several competitors.

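The core idea in the abstract (a single memory bank holding paired coordinate and PointMAE features, rank normalization to tame outliers, and priority-weighted fusion of the local and global distances) can be illustrated with a short sketch. The following is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the feature dimensions, the nearest-neighbour pairing of coordinate and global features, and the weights `w_local`/`w_global` are values chosen only for the example.

```python
import numpy as np


def rank_normalize(scores: np.ndarray) -> np.ndarray:
    # Replace each raw score by its rank, scaled into (0, 1]; the result is
    # roughly uniformly distributed and insensitive to outlier magnitudes.
    ranks = scores.argsort().argsort()
    return (ranks + 1) / len(scores)


def anomaly_scores(test_local, test_global, bank_local, bank_global,
                   w_local=0.7, w_global=0.3):
    # Illustrative fusion weights, not the paper's values.
    # Distance matrix between test coordinates and the coordinate part of
    # the single memory bank.
    dist = np.linalg.norm(test_local[:, None, :] - bank_local[None, :, :], axis=-1)
    nn_idx = dist.argmin(axis=1)                        # nearest bank entry per point
    d_local = dist[np.arange(len(test_local)), nn_idx]  # local (coordinate) distance
    # Global distance measured against the global feature paired with the
    # same bank entry, so local and global scores stay aligned.
    d_global = np.linalg.norm(test_global - bank_global[nn_idx], axis=-1)
    # Rank normalization brings both distance scales onto a common footing
    # before the priority-weighted fusion.
    return w_local * rank_normalize(d_local) + w_global * rank_normalize(d_global)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    bank_local = rng.normal(size=(512, 3))    # stored coordinates
    bank_global = rng.normal(size=(512, 64))  # stored global features
    test_local = rng.normal(size=(100, 3))
    test_global = rng.normal(size=(100, 64))
    print(anomaly_scores(test_local, test_global, bank_local, bank_global)[:5])
```

Because ranks rather than raw distances are fused, a single extreme distance cannot dominate the final anomaly score, which is the role the abstract assigns to the normalization ranking step.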
References (21)
  1. D. Carrera, F. Manganini, G. Boracchi, and E. Lanzarone, “Defect detection in SEM images of nanofibrous materials,” IEEE Transactions on Industrial Informatics, pp. 551–561, Apr 2017.
  2. K. Song and Y. Yan, “A noise robust method based on completed local binary patterns for hot-rolled steel strip surface defects,” Applied Surface Science, pp. 858–864, Nov 2013.
  3. D. Hendrycks, S. Basart, M. Mazeika, M. Mostajabi, J. Steinhardt, and D. Song, “A benchmark for anomaly segmentation,” arXiv preprint arXiv:1911.11132, vol. 1, no. 2, p. 5, 2019.
  4. Y. Xu, W. Hu, S. Wang, X. Zhang, S. Wang, S. Ma, Z. Guo, and W. Gao, “Predictive generalized graph fourier transform for attribute compression of dynamic point clouds,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 31, no. 5, pp. 1968–1982, 2020.
  5. V. Zavrtanik, M. Kristan, and D. Skočaj, “DRAEM: A discriminatively trained reconstruction embedding for surface anomaly detection,” in International Conference on Computer Vision, Aug 2021.
  6. J. T. Zhou, L. Zhang, Z. Fang, J. Du, X. Peng, and Y. Xiao, “Attention-driven loss for anomaly detection in video surveillance,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 30, no. 12, pp. 4639–4647, 2019.
  7. Y. Wang, Q. Liu, and Y. Lei, “Ted-net: Dispersal attention for perceiving interaction region in indirectly-contact HOI detection,” IEEE Transactions on Circuits and Systems for Video Technology, pp. 1–1, 2024.
  8. J. Liu, G. Xie, R. Chen, X. Li, J. Wang, Y. Liu, C. Wang, and F. Zheng, “Real3d-ad: A dataset of point cloud anomaly detection,” Advances in Neural Information Processing Systems, vol. 36, 2024.
  9. K. Roth, L. Pemula, J. Zepeda, B. Scholkopf, T. Brox, and P. Gehler, “Towards total recall in industrial anomaly detection,” in 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Jun 2022.
  10. W. Li and X. Xu, “Towards scalable 3d anomaly detection and localization: A benchmark via 3d anomaly synthesis and a self-supervised learning network,” arXiv preprint arXiv:2311.14897, 2023.
  11. W. Zhu, Z. Ma, Y. Xu, L. Li, and Z. Li, “View-dependent dynamic point cloud compression,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 31, no. 2, pp. 765–781, 2020.
  12. H. Peng and G. Tong, “Class-aware 3d detector from point clouds with partial knowledge diffusion and center-weighted iou,” IEEE Transactions on Circuits and Systems for Video Technology, 2023.
  13. H. Xiao, Y. Li, W. Kang, and Q. Wu, “Distinguishing and matching-aware unsupervised point cloud completion,” IEEE Transactions on Circuits and Systems for Video Technology, 2023.
  14. E. Horwitz and Y. Hoshen, “Back to the feature: classical 3d features are (almost) all you need for 3d anomaly detection,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 2967–2976.
  15. R. B. Rusu, N. Blodow, and M. Beetz, “Fast point feature histograms (FPFH) for 3D registration,” in 2009 IEEE International Conference on Robotics and Automation, 2009, pp. 3212–3217.
  16. S. Rusinkiewicz and M. Levoy, “Efficient variants of the ICP algorithm,” in Proceedings Third International Conference on 3-D Digital Imaging and Modeling, 2001, pp. 145–152.
  17. L. Yang and W. Guo, “Greedy local-set based sampling and reconstruction for band-limited graph signals,” in 2016 23rd International Conference on Telecommunications (ICT), 2016, pp. 1–5.
  18. Y. Pang, W. Wang, F. E. Tay, W. Liu, Y. Tian, and L. Yuan, “Masked autoencoders for point cloud self-supervised learning,” in European Conference on Computer Vision. Springer, 2022, pp. 604–621.
  19. H. Zhao, L. Jiang, J. Jia, P. Torr, and V. Koltun, “Point transformer,” in 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Oct 2021.
  20. A. Chang, T. Funkhouser, L. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva, S. Song, H. Su, J. Xiao, L. Yi, and F. Yu, “ShapeNet: An information-rich 3D model repository,” arXiv preprint, Dec 2015.
  21. Y. Wang, J. Peng, J. Zhang, R. Yi, Y. Wang, and C. Wang, “Multimodal industrial anomaly detection via hybrid fusion,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 8032–8041.