Journal of Physics: Conference Series | 2021

Attacking Sequential Learning Models with Style Transfer Based Adversarial Examples


Abstract


In the field of deep neural network security, it has recently been found that non-sequential networks are vulnerable to adversarial examples. However, few studies have investigated adversarial attacks on sequential tasks. To this end, in this paper we propose a novel method for generating adversarial examples for sequential tasks. Specifically, an image style transfer method is used to generate adversarial examples for a Scene Text Recognition (STR) network; these examples differ from the original image only in style. While they do not interfere with a human's ability to recognize the image content, the adversarial examples significantly mislead the recognition results of sequential networks. Moreover, using a black-box attack in both digital and physical environments, we show that the proposed method can exploit cross-text shape information and successfully attack the TPS-ResNet-BiLSTM-Attention (TRBA) and Convolutional Recurrent Neural Network (CRNN) models. Finally, we further demonstrate that physical adversarial examples can easily mislead commercial recognition algorithms, e.g. iFLYTEK and Youdao, suggesting that STR models are also highly vulnerable to attacks from adversarial examples.
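The abstract describes restyling a text image so that it remains legible to humans but misleads an STR model queried as a black box. As a rough illustration only, the following is a minimal sketch of that idea using a standard Gatys-style transfer loop in PyTorch; it is not the authors' pipeline, and the file names and the recognize() query are hypothetical placeholders for a TRBA/CRNN or commercial OCR model.

import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Feature extractor: early VGG-19 layers, used only to define style/content losses.
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features[:21].to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def features(x, layers=(0, 5, 10, 19)):
    """Collect activations at a few VGG layers."""
    feats = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            feats.append(x)
    return feats

def gram(f):
    """Gram matrix used for the style loss."""
    b, c, h, w = f.shape
    f = f.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def load(path, size=256):
    img = Image.open(path).convert("RGB").resize((size, size))
    return transforms.ToTensor()(img).unsqueeze(0).to(device)

content = load("text_image.png")   # original scene-text image (hypothetical path)
style = load("style_image.png")    # style source image (hypothetical path)

content_feats = features(content)
style_grams = [gram(f) for f in features(style)]

x = content.clone().requires_grad_(True)
opt = torch.optim.Adam([x], lr=0.02)

for step in range(300):
    opt.zero_grad()
    feats = features(x)
    # Content loss keeps the text legible to humans; style loss alters texture/style.
    c_loss = F.mse_loss(feats[-1], content_feats[-1])
    s_loss = sum(F.mse_loss(gram(f), g) for f, g in zip(feats, style_grams))
    loss = c_loss + 1e4 * s_loss
    loss.backward()
    opt.step()
    with torch.no_grad():
        x.clamp_(0, 1)

# Black-box check (hypothetical API): does the restyled image change the prediction?
# pred = recognize(x)  # e.g. query a TRBA/CRNN model or a commercial OCR service

In such a black-box setting, the restyled candidate would simply be accepted when the recognizer's output no longer matches the ground-truth text; the loss weights above are illustrative values, not reported results.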

Volume 1880
DOI 10.1088/1742-6596/1880/1/012021
Language English
Journal Journal of Physics: Conference Series
