
Robust Unsupervised Video Anomaly Detection by Multi-Path Frame Prediction (2011.02763v2)

Published 5 Nov 2020 in cs.CV and cs.LG

Abstract: Video anomaly detection is widely used in applications such as security surveillance and remains very challenging. Most recent video anomaly detection approaches rely on deep reconstruction models, but their performance is often suboptimal because, in practice, the reconstruction errors of normal and abnormal video frames differ too little. Meanwhile, frame prediction-based anomaly detection methods have shown promising performance. In this paper, we propose a novel and robust unsupervised video anomaly detection method based on frame prediction, designed to match the characteristics of surveillance videos. The proposed method is equipped with a multi-path ConvGRU-based frame prediction network that better handles semantically informative objects and areas of different scales and captures spatial-temporal dependencies in normal videos. A noise tolerance loss is introduced during training to mitigate interference caused by background noise. Extensive experiments on the CUHK Avenue, ShanghaiTech Campus, and UCSD Pedestrian datasets show that our proposed method outperforms existing state-of-the-art approaches. Remarkably, our proposed method achieves a frame-level AUROC of 88.3% on the CUHK Avenue dataset.
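The two ideas the abstract highlights can be sketched in a few lines. Below is a minimal, hypothetical illustration (not the paper's exact formulation): a noise-tolerance loss that ignores per-pixel prediction errors below a small threshold `eps` (attributing them to background noise), and a PSNR-style frame-level anomaly score of the kind commonly used in prediction-based anomaly detection, where lower PSNR indicates a more anomalous frame. The threshold value and function names are assumptions for illustration only.

```python
import numpy as np

def noise_tolerant_l1(pred, target, eps=0.05):
    """Hypothetical noise-tolerance loss: absolute per-pixel errors
    below eps are treated as background noise and contribute nothing."""
    err = np.abs(pred - target)
    return np.maximum(err - eps, 0.0).mean()

def frame_anomaly_score(pred, target):
    """PSNR between predicted and actual frame (pixels in [0, 1]);
    a lower PSNR marks the frame as more anomalous."""
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(1.0 / max(mse, 1e-12))
```

At test time, frames whose PSNR falls below a chosen threshold would be flagged as anomalous; sweeping that threshold yields the frame-level AUROC reported in the abstract.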

Authors (9)
  1. Xuanzhao Wang (1 paper)
  2. Zhengping Che (41 papers)
  3. Bo Jiang (235 papers)
  4. Ning Xiao (3 papers)
  5. Ke Yang (152 papers)
  6. Jian Tang (327 papers)
  7. Jieping Ye (169 papers)
  8. Jingyu Wang (60 papers)
  9. Qi Qi (66 papers)
Citations (130)
