
Transferable Adversarial Attacks for Deep Scene Text Detection

Abstract


Scene text detection (STD) aims to locate text in images and plays an important role in many computer vision tasks, including autonomous driving and text recognition systems. Recently, deep neural networks (DNNs) have been widely and successfully applied to scene text detection, yielding a variety of DNN-based STD methods that fall into two broad categories: regression-based and segmentation-based. However, recent studies have also shown that DNNs are vulnerable to adversarial attacks, which can significantly degrade a model's performance. In this paper, we investigate the robustness of DNN-based STD methods against such attacks. To this end, we propose a generic and efficient attack method that generates adversarial examples by adding small but imperceptible perturbations to the input images. Experiments attacking four different models and the real-world STD engine of Google's optical character recognition (OCR) service show that state-of-the-art DNN-based STD methods, both regression-based and segmentation-based, are vulnerable to adversarial attacks.
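For readers unfamiliar with gradient-based adversarial perturbations, the following sketch illustrates the general idea only, not the authors' specific attack: a one-step FGSM-style perturbation against a differentiable detection loss, where model, loss_fn, and target are placeholders for any STD network, its training loss, and the ground-truth text annotations.

    import torch

    def fgsm_perturb(model, image, loss_fn, target, epsilon=8 / 255):
        """One-step FGSM-style perturbation (illustrative sketch only).

        model, loss_fn, and target are hypothetical placeholders: any
        differentiable STD detector, its loss, and ground-truth regions.
        """
        image = image.clone().detach().requires_grad_(True)
        loss = loss_fn(model(image), target)  # detection loss on clean input
        loss.backward()
        # Ascend the loss gradient, bounded in L-infinity norm by epsilon
        adv = image + epsilon * image.grad.sign()
        return adv.clamp(0, 1).detach()  # keep pixel values in valid range

The L-infinity bound epsilon controls how imperceptible the perturbation is; the paper's attack additionally targets transferability across different detectors, which this single-model sketch does not capture.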

Pages 8945-8951
DOI 10.1109/ICPR48806.2021.9412319
Language English
Journal 2020 25th International Conference on Pattern Recognition (ICPR)
