
Image Outlier Detection Without Training using RANSAC (2307.12301v3)

Published 23 Jul 2023 in cs.CV, cs.IR, and cs.LG

Abstract: Image outlier detection (OD) is an essential tool for ensuring the quality of images used in computer vision tasks. Existing algorithms often involve training a model to represent the inlier distribution, with outliers identified by some deviation measure. Although existing methods have proved effective when trained on strictly inlier samples, their performance remains questionable when undesired outliers are included during training. As a result of this limitation, it is necessary to carefully examine the data when developing OD models for new domains. In this work, we present a novel image OD algorithm called RANSAC-NN that eliminates the need for data examination and model training altogether. Unlike existing approaches, RANSAC-NN can be applied directly to datasets containing outliers by sampling and comparing subsets of the data. Our algorithm performs favorably compared to existing methods across a range of benchmarks. Furthermore, we show that RANSAC-NN can enhance the robustness of existing methods when incorporated into the data preparation process.
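As a rough illustration of the sampling-and-comparison idea described in the abstract, the sketch below scores images by repeatedly drawing random subsets of pre-extracted feature vectors and measuring each image's nearest-neighbour similarity to the sampled subset; images that are consistently dissimilar from their sampled peers receive high outlier scores. This is a minimal sketch of the general idea, not the authors' implementation: the function name `ransac_nn_scores`, the parameters `n_iters` and `sample_frac`, and the choice of cosine similarity over L2-normalised features from some pretrained backbone are all illustrative assumptions.

```python
import numpy as np

def ransac_nn_scores(features, n_iters=20, sample_frac=0.1, rng=None):
    """Illustrative RANSAC-style outlier scoring by subset sampling and
    nearest-neighbour comparison (a sketch, not the paper's exact procedure).

    features : (N, D) array of pre-extracted image feature vectors,
               assumed L2-normalised so dot products are cosine similarities.
    Returns an (N,) array of outlier scores; higher means more outlier-like.
    """
    rng = np.random.default_rng(rng)
    n = features.shape[0]
    k = max(2, int(sample_frac * n))
    sims = np.zeros(n)

    for _ in range(n_iters):
        # Draw a random subset of the data (RANSAC-style hypothesis set).
        idx = rng.choice(n, size=k, replace=False)
        sub = features[idx]                        # (k, D)
        sim = features @ sub.T                     # cosine similarity of every image to the subset
        sim[idx, np.arange(k)] = -np.inf           # ignore each sampled image's match with itself
        sims += sim.max(axis=1)                    # nearest-neighbour similarity within the subset

    sims /= n_iters
    # Images with consistently low similarity to their sampled peers get high scores.
    return 1.0 - np.clip(sims, 0.0, 1.0)
```

For instance, `features` could be row-normalised embeddings from a frozen ImageNet-pretrained backbone; averaging over many random subsets makes the score less sensitive to any single contaminated sample, which is the intuition behind applying a RANSAC-style procedure directly to unlabelled, possibly outlier-containing data.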

