COFT-AD: COntrastive Fine-Tuning for Few-Shot Anomaly Detection (2402.18998v1)

Published 29 Feb 2024 in cs.CV

Abstract: Existing approaches to anomaly detection (AD) often rely on a substantial amount of anomaly-free data to train representation and density models. However, large anomaly-free datasets may not always be available before the inference stage; in that case, an anomaly detection model must be trained with only a handful of normal samples, a.k.a. few-shot anomaly detection (FSAD). In this paper, we propose a novel methodology to address the challenge of FSAD which incorporates two important techniques. Firstly, we employ a model pre-trained on a large source dataset to initialize model weights. Secondly, to ameliorate the covariate shift between source and target domains, we adopt contrastive training to fine-tune on the few-shot target domain data. To learn suitable representations for the downstream AD task, we additionally incorporate cross-instance positive pairs to encourage a tight cluster of the normal samples, and negative pairs for better separation between normal and synthesized negative samples. We evaluate few-shot anomaly detection on 3 controlled AD tasks and 4 real-world AD tasks to demonstrate the effectiveness of the proposed method.
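
The contrastive fine-tuning objective described above can be sketched as follows. This is an illustrative simplification, not the paper's exact loss: it assumes L2-normalized embeddings, treats all other normal samples in the batch as cross-instance positives, and treats synthesized defective samples (e.g., CutPaste-style augmentations) as negatives in an InfoNCE-style objective. The function name `coft_style_loss` and the temperature value are hypothetical.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit hypersphere."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def coft_style_loss(normal_emb, negative_emb, temperature=0.1):
    """Illustrative contrastive objective for few-shot AD fine-tuning.

    Cross-instance positive pairs among the normal embeddings pull them
    into a tight cluster; embeddings of synthesized negatives are pushed
    away. A hypothetical simplification of the paper's training loss.
    """
    z_pos = l2_normalize(np.asarray(normal_emb, dtype=float))
    z_neg = l2_normalize(np.asarray(negative_emb, dtype=float))

    # Cosine similarities, scaled by temperature.
    pos_sim = z_pos @ z_pos.T / temperature  # normal vs. normal
    neg_sim = z_pos @ z_neg.T / temperature  # normal vs. synthesized negative

    losses = []
    for i in range(len(z_pos)):
        # Positives: every other normal instance (cross-instance pairs).
        pos = np.delete(pos_sim[i], i)
        # Denominator pools positives and synthesized negatives.
        logits = np.concatenate([pos, neg_sim[i]])
        log_den = np.log(np.exp(logits).sum())
        # InfoNCE-style term, averaged over the positive set.
        losses.append(-(pos - log_den).mean())
    return float(np.mean(losses))
```

As a sanity check, a tightly clustered set of normal embeddings with well-separated negatives should yield a lower loss than the reverse assignment, reflecting the objective's intent.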

Authors (5)
  1. Jingyi Liao (18 papers)
  2. Xun Xu (62 papers)
  3. Manh Cuong Nguyen (21 papers)
  4. Adam Goodge (6 papers)
  5. Chuan Sheng Foo (15 papers)
Citations (3)