2021 51st Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)

Poisoning Attacks via Generative Adversarial Text to Image Synthesis

 
 
 

Abstract


A poisoning attack is one in which an adversary injects a small fraction of poisoned instances into the training data used to train a machine learning model in order to compromise its performance. Poisoning attacks can significantly degrade both the learning process and the resulting model, since the model is trained on corrupted data. Many works on data poisoning have appeared over the years, but most are limited to a few deep learning architectures. In this work, we introduce a novel approach that leverages Generative Adversarial Text to Image Synthesis to mount poisoning attacks against machine learning classifiers. Our approach has three components: the generator, the discriminator, and the target classifier. We performed an extensive experimental evaluation that demonstrates our attack's effectiveness at compromising machine learning classifiers, including deep networks.
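To make the threat model concrete, the following is a minimal, self-contained sketch of a data-poisoning attack, not the paper's GAN-based implementation. It attacks a toy nearest-centroid "target classifier"; the paper's generator and discriminator are stood in for by samples drawn near the wrong class and then mislabelled. All names and the classifier choice are assumptions for illustration.

```python
import numpy as np

# Hypothetical toy setup: two Gaussian classes in 2D.
rng = np.random.default_rng(0)
clean_a = rng.normal(loc=-2.0, size=(200, 2))  # class A training data
clean_b = rng.normal(loc=+2.0, size=(200, 2))  # class B training data

def train_centroids(a, b):
    """The stand-in target classifier: predict the class of the nearest centroid."""
    return a.mean(axis=0), b.mean(axis=0)

def accuracy(ca, cb, a, b):
    ok_a = np.linalg.norm(a - ca, axis=1) < np.linalg.norm(a - cb, axis=1)
    ok_b = np.linalg.norm(b - cb, axis=1) < np.linalg.norm(b - ca, axis=1)
    return (ok_a.sum() + ok_b.sum()) / (len(a) + len(b))

# "Generated" poison: instances that resemble class B but are injected
# into the training set with class A labels (a GAN would synthesize these).
poison = rng.normal(loc=+2.0, size=(60, 2))

clean_centroid_a, centroid_b = train_centroids(clean_a, clean_b)
poisoned_centroid_a, _ = train_centroids(np.vstack([clean_a, poison]), clean_b)

# The poison pulls class A's learned centroid toward class B,
# distorting the decision boundary the victim model ends up with.
acc_clean = accuracy(clean_centroid_a, centroid_b, clean_a, clean_b)
acc_poisoned = accuracy(poisoned_centroid_a, centroid_b, clean_a, clean_b)
print(acc_clean, acc_poisoned)
```

In the paper's setting, the generator would be trained (against the discriminator) to produce poison images that are both plausible and damaging to the target classifier; the centroid shift above illustrates the same mechanism in its simplest form.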

Pages 158-165
DOI 10.1109/DSN-W52860.2021.00035
Language English
Journal 2021 51st Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)
