Archive | 2021

Deep Visual Anomaly Detection with Negative Learning


Abstract


With the increase in the learning capability of deep convolution-based architectures, various applications of such models have been proposed over time. In the field of anomaly detection, improvements in deep learning opened new prospects of exploration for researchers who sought to automate the labor-intensive aspects of data collection. First, it is impossible to anticipate all the anomalies that might exist in a given environment. Second, even if we limit the possibilities of anomalies, it is still hard to record all of these scenarios for the sake of training a model. Third, even if we manage to record a significant amount of abnormal data, it is laborious to annotate this data at the pixel or even frame level. Various approaches address the problem by proposing one-class classification using generative models trained on only normal data, which is abundantly available and does not require significant human input. However, such approaches have two drawbacks. First, because these models are trained only on normal data, they can still generate normal-looking output at test time when given abnormal input. This happens due to the hallucination characteristic of generative models, which is undesirable in anomaly detection systems that need to be accurate and reliable. Second, these systems are not capable of utilizing abnormal examples, however small in number, during training. In this paper, we propose anomaly detection with negative learning (ADNL), which employs the negative learning concept to enhance anomaly detection by utilizing a very small number of labeled anomaly examples, as compared with the normal data, during training. The idea, which is fairly simple yet effective, is to limit the reconstruction capability of a generative model using the given anomaly examples.
During training, normal data is learned as it would be in a conventional method, while the abnormal data is used to maximize the network's loss on the abnormality distribution. With this simple tweak, the network not only learns to reconstruct normal data but also keeps the learned normal distribution far from the possible distribution of anomalies. In order to evaluate the efficiency of our proposed method, we
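The combined objective described above, minimizing reconstruction error on normal data while maximizing it on the few labeled anomalies, can be sketched as a single loss term. The function name `adnl_loss`, the weighting factor `lam`, and the toy data below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def adnl_loss(x_normal, recon_normal, x_abnormal, recon_abnormal, lam=0.1):
    """Negative-learning objective: minimize reconstruction error on
    normal data while pushing reconstruction error on the (much rarer)
    labeled abnormal examples up, via a subtracted penalty term."""
    err_normal = np.mean((x_normal - recon_normal) ** 2)
    err_abnormal = np.mean((x_abnormal - recon_abnormal) ** 2)
    # Maximizing err_abnormal == minimizing its negative.
    return err_normal - lam * err_abnormal

# Toy check: a network that reconstructs normal data well but fails on
# anomalies scores lower (better) than one that "hallucinates" anomalies
# back perfectly, which is exactly the failure mode ADNL penalizes.
rng = np.random.default_rng(0)
x_n = rng.normal(size=(8, 4))
x_a = rng.normal(size=(2, 4)) + 5.0  # pretend anomalies, far from normal

good = adnl_loss(x_n, x_n, x_a, np.zeros_like(x_a))  # poor abnormal recon
hallucinating = adnl_loss(x_n, x_n, x_a, x_a)        # perfect abnormal recon
assert good < hallucinating
```

The small `lam` reflects the abstract's premise that anomaly examples are scarce relative to normal data, so the negative term acts as a regularizer rather than dominating training.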

Pages 218-232
DOI 10.1007/978-3-030-81638-4_18
Language English
