IEEE Access | 2021

Generative Adversarial Networks for Abnormal Event Detection in Videos Based on Self-Attention Mechanism


Abstract


Unsupervised anomaly detection defines an abnormal event as an event that does not conform to expected behavior. In this field, a pioneering work detects abnormal events by leveraging the difference between a future frame predicted by a generative adversarial network and its ground truth. Building on that work, we improve the ability of the video prediction framework to detect abnormal events by enlarging the gap between the prediction results for normal and abnormal events. We design a generative adversarial network that incorporates super-resolution and a self-attention mechanism. The generator is an auto-encoder that combines dense residual networks with self-attention, and we propose a new discriminator that adds self-attention to a relativistic discriminator. To predict higher-quality future frames for normal events, we constrain motion in video prediction by fusing optical flow with the gradient difference between frames, and we introduce a perceptual constraint to enrich the texture details of a frame. The AUC of our method reaches 89.2% on CUHK Avenue and 75.7% on ShanghaiTech, better than most existing methods. In addition, we propose a processing pipeline that enables real-time anomaly detection in videos; our video prediction framework runs at an average of 37 frames per second. Among real-time methods for abnormal event detection in videos, our method is competitive with the state of the art.
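
As a rough illustration of two of the components named above, the sketch below shows a SAGAN-style self-attention block, such as could be inserted into the generator, and a gradient-difference term for the motion constraint. This is a minimal sketch assuming PyTorch; the paper's exact layer configuration, dense residual blocks, optical-flow term, and loss weights are not reproduced here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    # SAGAN-style self-attention over the spatial positions of a feature map.
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C//8)
        k = self.key(x).flatten(2)                      # (B, C//8, HW)
        attn = F.softmax(torch.bmm(q, k), dim=-1)       # (B, HW, HW) attention map
        v = self.value(x).flatten(2)                    # (B, C, HW)
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                     # residual connection

def gradient_difference_loss(pred, target, alpha=1):
    # Penalizes differences between the image gradients of the predicted frame
    # and those of its ground truth (the gradient part of the motion constraint).
    pred_dx = (pred[..., :, 1:] - pred[..., :, :-1]).abs()
    pred_dy = (pred[..., 1:, :] - pred[..., :-1, :]).abs()
    tgt_dx = (target[..., :, 1:] - target[..., :, :-1]).abs()
    tgt_dy = (target[..., 1:, :] - target[..., :-1, :]).abs()
    return ((pred_dx - tgt_dx).abs() ** alpha).mean() + ((pred_dy - tgt_dy).abs() ** alpha).mean()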

Volume 9
Pages 124847-124860
DOI 10.1109/ACCESS.2021.3110798
Language English
Journal IEEE Access
