2021 7th International Conference on Computing and Artificial Intelligence | 2021

Robust Deep Facial Attribute Prediction against Adversarial Attacks

 
 

Abstract


Face recognition has long been a hot research topic and is widely applied in industry and daily life. Today, face recognition models with excellent performance are mostly based on deep neural networks (DNNs). However, researchers have recently found that images with imperceptible perturbations added to them can successfully fool neural networks, a phenomenon known as the adversarial attack. The perturbed images, known as adversarial examples, are almost identical to the original images, yet the network gives different, wrong predictions on them with high confidence. This phenomenon reveals the fragile robustness of neural networks and thus casts a shadow on the security of DNN-based face recognition models. In this paper, we therefore focus on the facial attribute prediction task in face recognition: we investigate the influence of adversarial attacks on facial attribute prediction and propose a solution for improving the robustness of facial attribute prediction models. Extensive experimental results show that the solution indeed produces much more robust facial attribute predictions against adversarial attacks.
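To make the attack mechanism described above concrete, the following is a minimal sketch of a one-step gradient-sign attack (FGSM-style) on a toy logistic-regression "attribute classifier". This is an illustrative assumption, not the attack or model used in the paper: the weights, inputs, and the `fgsm_attack` helper are all hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, w, b, y, eps):
    """One-step gradient-sign attack on a toy logistic classifier.

    For binary cross-entropy loss, the gradient w.r.t. the input x
    is (p - y) * w, where p is the predicted probability. Stepping
    eps in the sign of that gradient increases the loss while keeping
    the perturbation bounded by eps per pixel.
    """
    p = sigmoid(w @ x + b)
    grad = (p - y) * w                      # d(loss)/dx for logistic regression
    x_adv = x + eps * np.sign(grad)         # bounded, "invisible" perturbation
    return np.clip(x_adv, 0.0, 1.0)        # keep pixels in valid range

# Hypothetical toy "attribute" classifier and a clean input with label 1.
w = np.array([1.5, -2.0, 0.8])
b = 0.1
x = np.array([0.6, 0.2, 0.7])
y = 1.0
eps = 0.1

x_adv = fgsm_attack(x, w, b, y, eps)
p_clean = sigmoid(w @ x + b)                # confidence on the clean input
p_adv = sigmoid(w @ x_adv + b)              # confidence drops after the attack
```

Even with a perturbation no larger than `eps` in any coordinate, the classifier's confidence in the true label decreases, which is the behavior the abstract describes on real DNNs and real face images.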

DOI 10.1145/3467707.3467737
Language English
Journal 2021 7th International Conference on Computing and Artificial Intelligence
