UBnormal: New Benchmark for Supervised Open-Set Video Anomaly Detection (2111.08644v3)

Published 16 Nov 2021 in cs.CV and cs.LG

Abstract: Detecting abnormal events in video is commonly framed as a one-class classification task, where training videos contain only normal events, while test videos encompass both normal and abnormal events. In this scenario, anomaly detection is an open-set problem. However, some studies assimilate anomaly detection to action recognition. This is a closed-set scenario that fails to test the capability of systems at detecting new anomaly types. To this end, we propose UBnormal, a new supervised open-set benchmark composed of multiple virtual scenes for video anomaly detection. Unlike existing data sets, we introduce abnormal events annotated at the pixel level at training time, for the first time enabling the use of fully-supervised learning methods for abnormal event detection. To preserve the typical open-set formulation, we make sure to include disjoint sets of anomaly types in our training and test collections of videos. To our knowledge, UBnormal is the first video anomaly detection benchmark to allow a fair head-to-head comparison between one-class open-set models and supervised closed-set models, as shown in our experiments. Moreover, we provide empirical evidence showing that UBnormal can enhance the performance of a state-of-the-art anomaly detection framework on two prominent data sets, Avenue and ShanghaiTech. Our benchmark is freely available at https://github.com/lilygeorgescu/UBnormal.

Citations (73)

Summary

  • The paper introduces UBnormal as a new benchmark for supervised open-set video anomaly detection with pixel-level annotations distinguishing training from test anomaly sets.
  • The paper demonstrates that UBnormal challenges state-of-the-art models by revealing performance gaps through AUC and localization metrics on unseen anomaly types.
  • The paper shows that UBnormal enhances real-world performance by bridging the anomaly data gap and enabling effective domain adaptation from virtual to natural scenes.

An Overview of UBnormal: A Benchmark for Supervised Open-Set Video Anomaly Detection

The UBnormal paper presents a novel framework and data set designed to address the complexities of video anomaly detection. The research frames anomaly detection as a supervised open-set problem, departing from the existing paradigms of one-class classification and weakly supervised action recognition.

The core contribution of this paper is the UBnormal benchmark, which is distinct in enabling a direct comparison between one-class open-set models and supervised closed-set models. The benchmark spans 29 carefully crafted virtual scenes with a total of 236,902 frames. This diversity offers varied contexts for normal and abnormal activities and allows evaluating how well models generalize to unseen anomaly types.
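
To give a concrete sense of how such split statistics can be reproduced, the sketch below enumerates scene folders and counts extracted frames. The directory layout (a root folder containing Scene* subfolders of .png frames) is an assumption for illustration, not the official data loader.

```python
# Minimal sketch (assumed layout, not the official loader): enumerate the
# benchmark's scene folders and count the extracted frames per scene.
from pathlib import Path

root = Path("UBnormal")  # assumed root folder after downloading the data
frame_counts = {}
for scene_dir in sorted(root.glob("Scene*")):
    # assumption: each scene directory contains per-video .png frames
    frame_counts[scene_dir.name] = sum(1 for _ in scene_dir.rglob("*.png"))

print(f"{len(frame_counts)} scenes, {sum(frame_counts.values())} frames in total")
```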

The data set provides pixel-level anomaly annotations at training time, enabling the use of fully supervised learning methods. The UBnormal benchmark keeps the sets of anomaly types disjoint between the training and test splits, adhering to the open-set detection principle. This setup contrasts with frameworks based on action recognition, where the same anomaly categories appear in both training and testing, which reduces the challenge of detecting novel anomaly types.
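
As an illustration of the open-set protocol, the following sketch checks that the anomaly categories of the training and test splits do not overlap. The file names and the JSON metadata format are hypothetical; the benchmark's own annotation files may be organized differently.

```python
# Minimal sketch (not from the paper's codebase): verify that anomaly
# categories in the training and test splits are disjoint, as required
# by the open-set protocol. File names and JSON schema are assumptions.
import json

def load_anomaly_types(label_file):
    """Collect the set of anomaly category names listed in a split's metadata."""
    with open(label_file) as f:
        metadata = json.load(f)  # assumed: {"videos": [{"anomaly_types": [...]}, ...]}
    types = set()
    for video in metadata["videos"]:
        types.update(video.get("anomaly_types", []))
    return types

train_types = load_anomaly_types("ubnormal_train_labels.json")
test_types = load_anomaly_types("ubnormal_test_labels.json")

overlap = train_types & test_types
assert not overlap, f"Open-set assumption violated, shared categories: {overlap}"
print(f"{len(train_types)} train anomaly types, {len(test_types)} test anomaly types")
```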

Experimentally, the paper evaluates UBnormal using three state-of-the-art models. The results reflect the benchmark's demanding nature: performance indicators such as AUC and localization metrics illustrate the difficulty that UBnormal poses even for sophisticated models. Notably, the paper shows how UBnormal can enhance the performance of current models on real-world data sets such as Avenue and ShanghaiTech by bridging the anomaly data gap through data-centric enhancements.
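
For reference, a frame-level AUC of the kind cited above can be computed by concatenating per-frame anomaly scores and ground-truth labels across all test videos. The sketch below uses scikit-learn and toy inputs; it is a generic illustration, not the authors' evaluation code.

```python
# Minimal sketch of frame-level (micro-averaged) AUC using scikit-learn.
import numpy as np
from sklearn.metrics import roc_auc_score

def micro_auc(scores_per_video, labels_per_video):
    """Concatenate per-frame anomaly scores and binary labels across all
    test videos, then compute a single ROC AUC over all frames."""
    scores = np.concatenate(scores_per_video)
    labels = np.concatenate(labels_per_video)
    return roc_auc_score(labels, scores)

# Toy example: two videos with per-frame scores and ground-truth labels.
scores = [np.array([0.1, 0.2, 0.9, 0.8]), np.array([0.05, 0.1, 0.2])]
labels = [np.array([0, 0, 1, 1]), np.array([0, 0, 0])]
print(f"micro AUC: {micro_auc(scores, labels):.3f}")
```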

The implications of this paper for the anomaly detection domain are multifaceted. Practically, UBnormal provides a solid ground for developing systems that can operate under adverse conditions, identifying novel anomalies that were not part of the training set. Theoretically, it offers a framework to explore the generalization capabilities of supervised models under open-set conditions, which are common in real-world applications. Furthermore, it opens directions for exploring domain adaptation techniques that transfer knowledge from virtual to natural scenes, an approach the paper demonstrates to benefit real-world benchmarks.

Future directions may involve refining domain adaptation processes, such as incorporating advanced generative models, to reduce distributional discrepancies between synthetic and real-world scenes. Moreover, expanding the diversity of scenes and anomaly types in UBnormal could further enhance the testing of model capabilities and adaptability.

In conclusion, UBnormal represents a significant benchmark for advancing video anomaly detection under supervised open-set settings. It pushes the boundaries of the field by providing a new testing ground that challenges models not only to learn from observed anomalies but also to robustly detect unforeseen ones, a step toward building truly intelligent and adaptive systems.
