2019 IEEE Second International Conference on Artificial Intelligence and Knowledge Engineering (AIKE)

Selective Poisoning Attack on Deep Neural Network to Induce Fine-Grained Recognition Error


Abstract


Deep neural networks (DNNs) achieve good performance in image recognition, speech recognition, and pattern recognition. However, a poisoning attack is a serious threat to DNN security. A poisoning attack reduces the accuracy of a DNN by injecting malicious training data during the training process. In some situations, such as military applications, it may be necessary to degrade the accuracy of only a chosen class in the model. For example, an attacker who wants only nuclear facilities to go unrecognized may need to selectively prevent a UAV from correctly recognizing nuclear-related facilities. In this paper, we propose a selective poisoning attack that reduces the accuracy of only the chosen class in the model. The proposed method trains the model on malicious training data corresponding to the chosen class, degrading that class's accuracy while maintaining the accuracy of the remaining classes. For the experiments, we used TensorFlow as the machine learning library and MNIST and CIFAR10 as the datasets. Experimental results show that the proposed method can reduce the accuracy of the chosen class to 43.2% on MNIST and 55.3% on CIFAR10, while maintaining the accuracy of the remaining classes.
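The abstract describes the attack only at a high level, so the sketch below illustrates one plausible realization rather than the paper's exact procedure: it assumes the "malicious training data" are chosen-class samples whose labels are flipped to random wrong classes. The chosen class, the poisoning ratio POISON_RATE, and the small Keras classifier are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of a selective poisoning attack on MNIST.
# Assumption: poisoning = label flipping restricted to the chosen class.
import numpy as np
import tensorflow as tf

CHOSEN_CLASS = 7    # class whose accuracy the attacker wants to degrade (assumed)
POISON_RATE = 0.5   # fraction of chosen-class samples to mislabel (assumed)

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Flip labels of a fraction of chosen-class samples to random wrong classes,
# leaving every other class untouched.
rng = np.random.default_rng(0)
idx = np.where(y_train == CHOSEN_CLASS)[0]
poison_idx = rng.choice(idx, size=int(POISON_RATE * len(idx)), replace=False)
wrong = rng.integers(0, 9, size=len(poison_idx))          # values in 0..8
y_train = y_train.copy()
y_train[poison_idx] = np.where(wrong >= CHOSEN_CLASS, wrong + 1, wrong)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, verbose=0)

# Per-class test accuracy: the chosen class should drop while the rest stay high.
pred = model.predict(x_test, verbose=0).argmax(axis=1)
for c in range(10):
    mask = y_test == c
    print(f"class {c}: accuracy {(pred[mask] == c).mean():.3f}")
```

Restricting the label flips to the chosen class means the gradients contributed by all other classes are unchanged, which is presumably why the remaining classes' accuracy is largely preserved in the reported results.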

Pages 136-139
DOI 10.1109/AIKE.2019.00033
Language English
Published in 2019 IEEE Second International Conference on Artificial Intelligence and Knowledge Engineering (AIKE)
