Neural Processing Letters | 2021

Poisonous Label Attack: Black-Box Data Poisoning Attack with Enhanced Conditional DCGAN


Abstract


Data poisoning is a recognized security threat to machine learning models. This paper explores poisoning attacks against convolutional neural networks under black-box conditions. The proposed attack is "black-box" in that the attacker has no knowledge of the targeted model's structure or parameters when attacking it, and it uses "poisonous-label" images, fake images with crafted wrong labels, as poisons. We present a method for generating "poisonous-label" images that uses an Enhanced Conditional DCGAN (EC-DCGAN) to synthesize fake images and asymmetric poisoning vectors to mislabel them. We evaluate our method by generating "poisonous-label" images from the MNIST and FashionMNIST datasets and using them to manipulate image classifiers. Our experiments demonstrate that, like white-box data poisoning attacks, the poisonous-label attack can dramatically increase classification error.
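The abstract describes pairing GAN-generated fake images with crafted wrong labels via an "asymmetric poisoning vector." The paper's exact vector is not given in the abstract; the sketch below assumes a simple asymmetric class mapping (each class c relabeled as (c + 1) mod 10, which is not its own inverse) purely for illustration. The function names and the NumPy-array interface are hypothetical.

```python
import numpy as np

NUM_CLASSES = 10  # MNIST / FashionMNIST both have 10 classes


def asymmetric_poison_labels(true_labels: np.ndarray) -> np.ndarray:
    """Relabel each class c as the wrong class (c + 1) % NUM_CLASSES.

    The mapping is asymmetric: applying it twice does not return the
    original label, unlike a symmetric pairwise label swap.
    """
    return (true_labels + 1) % NUM_CLASSES


def build_poisoned_batch(fake_images: np.ndarray, gen_labels: np.ndarray):
    """Pair generator-conditioned fake images with poisoned labels."""
    return fake_images, asymmetric_poison_labels(gen_labels)


# Example: 5 stand-in "fake" 28x28 images conditioned on classes 0..4.
fakes = np.random.rand(5, 28, 28).astype(np.float32)
labels = np.arange(5)
_, poisoned = build_poisoned_batch(fakes, labels)
print(poisoned.tolist())  # -> [1, 2, 3, 4, 5]
```

In a full black-box attack, the poisoned batch would be injected into the victim's training data; the random arrays here merely stand in for EC-DCGAN outputs.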

DOI 10.1007/s11063-021-10584-w
Language English
Journal Neural Processing Letters
