2021 International Joint Conference on Neural Networks (IJCNN) | 2021

OPA2D: One-Pixel Attack, Detection, and Defense in Deep Neural Networks


Abstract


Adversarial images deceive deep neural networks (DNNs) by adding perturbations to their pixels. Unlike existing attacks, Su et al. [1] analyzed an attack under an extremely limited constraint in which only one pixel is modified. However, their one-pixel attack is easy for humans to recognize. In this paper, we improve the attack so that it deceives both DNNs and humans. We conducted a human recognition analysis to demonstrate our attack's effect. We then propose detection and defense methods against the attack by re-attacking the adversarial images. Our experimental results on the six most recent convolutional neural networks show that while our attack achieves approximately the same success rates and confidence scores as the existing attack, it achieves a higher success rate in deceiving humans: only 49.41% of participants recognized our attack, whereas 81.04% recognized the existing attack. OPA2D detects 99.33% of the existing attack and 100% of our attack, and defends against 92.00% of the existing attack and 95.33% of our attack.
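To make the setting concrete, the one-pixel attack of Su et al. [1] searches for a single pixel position and value that flips (or weakens) a classifier's prediction, typically via differential evolution. Below is a minimal, hedged sketch of that idea: the classifier is a toy softmax-over-linear-logits stand-in (not any model from the paper), and `one_pixel_attack` is a hypothetical helper name; the paper's actual models, image sizes, and hyperparameters differ.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)

# Toy stand-in classifier: softmax over linear logits of a flattened
# 8x8 grayscale image. Real experiments would use a trained CNN.
W = rng.normal(size=(64, 3))

def predict_proba(img):
    logits = img.ravel() @ W
    e = np.exp(logits - logits.max())
    return e / e.sum()

def one_pixel_attack(img, true_label, maxiter=30):
    """Untargeted one-pixel attack sketch: use differential evolution to
    find a (row, col, value) triple that minimises the classifier's
    confidence in the true label while changing at most one pixel."""
    h, w = img.shape

    def apply(p, base):
        out = base.copy()
        r, c, v = int(p[0]), int(p[1]), p[2]
        out[r, c] = v  # modify exactly one pixel
        return out

    def loss(p):
        return predict_proba(apply(p, img))[true_label]

    # Bounds: integer-valued position (via truncation) and pixel intensity.
    bounds = [(0, h - 1e-9), (0, w - 1e-9), (0.0, 1.0)]
    res = differential_evolution(loss, bounds, maxiter=maxiter,
                                 seed=0, polish=False)
    return apply(res.x, img)

img = rng.random((8, 8))
label = int(np.argmax(predict_proba(img)))
adv = one_pixel_attack(img, label)
changed = int((adv != img).sum())
print(changed)  # at most one pixel differs from the original
```

The same search loop is the natural building block for the re-attack detection idea the abstract describes: an image that is already adversarial tends to respond differently to a second one-pixel attack than a clean image does.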

Pages 1-10
DOI 10.1109/IJCNN52387.2021.9534332
Language English
