
Defending Against Adversarial Attacks on Medical Imaging AI System, Classification or Detection?


Abstract


Medical imaging AI systems for tasks such as disease classification and segmentation are increasingly adapted from computer-vision-based AI systems. Although an array of defense techniques has been developed and proven effective in computer vision, defending against adversarial attacks on medical images remains largely uncharted territory due to their unique challenges: 1) label scarcity limits adversarial generalizability; 2) vastly similar and dominant fore- and backgrounds make it difficult to learn discriminative features; and 3) crafted adversarial noise added to a highly standardized medical image can turn it into a hard sample for the model to predict. In this paper, we propose a novel robust medical imaging AI framework based on Semi-Supervised Adversarial Training (SSAT) and Unsupervised Adversarial Detection (UAD), together with a new measure for assessing the system's adversarial risk. We systematically demonstrate the advantages of our robust medical imaging AI system over existing adversarial defense techniques under diverse real-world adversarial attack settings using a benchmark OCT imaging dataset.
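To make the SSAT idea concrete, below is a minimal PyTorch-style sketch of a semi-supervised adversarial training objective: a supervised cross-entropy term on the scarce labeled images plus an adversarial consistency term on unlabeled images. The model, loss weight `lam`, and PGD hyperparameters (`eps`, `alpha`, `steps`) are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch of Semi-Supervised Adversarial Training (SSAT):
# supervised loss on labeled data + adversarial consistency on unlabeled data.
# All hyperparameters are illustrative, not the paper's settings.
import torch
import torch.nn.functional as F


def pgd_perturbation(model, x, target_probs, eps=0.01, alpha=0.003, steps=5):
    """Find a small L-inf perturbation that pushes predictions away from target_probs."""
    delta = torch.zeros_like(x)
    for _ in range(steps):
        delta.requires_grad_(True)
        adv_log_probs = F.log_softmax(model(x + delta), dim=1)
        divergence = F.kl_div(adv_log_probs, target_probs, reduction="batchmean")
        (grad,) = torch.autograd.grad(divergence, delta)
        # Gradient-ascent step on the divergence, projected back onto the eps-ball.
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
    return delta


def ssat_loss(model, x_labeled, y_labeled, x_unlabeled, lam=1.0):
    """Supervised cross-entropy plus adversarial consistency on unlabeled images."""
    supervised = F.cross_entropy(model(x_labeled), y_labeled)

    # Pseudo-predictions on unlabeled images act as soft targets.
    with torch.no_grad():
        pseudo_probs = F.softmax(model(x_unlabeled), dim=1)

    # Enforce that predictions stay consistent under adversarial perturbation.
    delta = pgd_perturbation(model, x_unlabeled, pseudo_probs)
    adv_log_probs = F.log_softmax(model(x_unlabeled + delta), dim=1)
    consistency = F.kl_div(adv_log_probs, pseudo_probs, reduction="batchmean")

    return supervised + lam * consistency
```

In a full training loop this loss would be minimized over both labeled and unlabeled batches; the UAD component described in the abstract would be a separate unsupervised detector that flags suspicious inputs before classification, and is not part of this sketch.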

Pages 1677-1681
DOI 10.1109/ISBI48211.2021.9433761
Language English
Journal 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)
