Neurocomputing | 2021

Efficient Detection of Adversarial, Out-of-distribution and Other Misclassified Samples

Abstract


Deep Neural Networks (DNNs) are increasingly being considered for safety-critical applications in which it is crucial to detect misclassified samples. Typically, detection methods are geared towards either out-of-distribution or adversarial data. Moreover, most detection methods require a significant number of parameters and substantial runtime. In this contribution we present a novel approach for detecting misclassified samples that covers out-of-distribution data, adversarial examples and, additionally, real-world error-causing corruptions. It is based on the Gradient's Norm (GraN) of the DNN and is parameter- and runtime-efficient. We evaluate GraN on two different classification DNNs (DenseNet, ResNet) trained on different datasets (CIFAR-10, CIFAR-100, SVHN). In addition to the detection of different adversarial example types (FGSM, BIM, Deepfool, CWL2) and out-of-distribution data (TinyImageNet, LSUN, CIFAR-10, SVHN), we evaluate GraN on novel corruption setups (Gaussian, Shot and Impulse noise). Our experiments show that GraN performs comparably to state-of-the-art methods for adversarial and out-of-distribution detection and is superior for real-world corruptions, while being parameter- and runtime-efficient.
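The abstract only states that GraN scores a sample via the norm of a gradient of the DNN; the exact loss, target and gradient aggregation used by the authors are not given here. Below is a minimal PyTorch sketch of a gradient-norm detection score, assuming a cross-entropy loss at the predicted label and an L2 norm over all parameter gradients; the function name gran_score and these particular choices are illustrative assumptions, not the paper's definitive method.

```python
import torch
import torch.nn.functional as F

def gran_score(model, x):
    """Hypothetical gradient-norm score for a batch x (higher = more suspect).

    Sketch only: GraN's actual loss choice, gradient target and any
    subsequent calibration/regression step are not specified in this
    abstract. Here we backpropagate the cross-entropy loss at the
    predicted class and return the L2 norm of all parameter gradients.
    """
    model.eval()
    model.zero_grad()
    logits = model(x)
    pred = logits.argmax(dim=1)            # use the predicted class as the target
    loss = F.cross_entropy(logits, pred)
    loss.backward()
    # Aggregate per-parameter gradient norms into one scalar score.
    sq_sum = sum(p.grad.pow(2).sum() for p in model.parameters()
                 if p.grad is not None)
    return sq_sum.sqrt().item()
```

In this kind of setup, a threshold on the score would separate samples the network is likely to misclassify (large gradient norm) from confidently and correctly handled in-distribution samples (small gradient norm); the score requires only one backward pass and no extra model parameters, which matches the efficiency claim of the abstract.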

DOI 10.1016/j.neucom.2021.05.102
Language English
Journal Neurocomputing
