IEEE Transactions on Multimedia | 2021

A Mutually Attentive Co-Training Framework for Semi-Supervised Recognition


Abstract


Self-training plays an important role in practical recognition applications where sufficient clean labels are unavailable. Existing methods focus on generating reliable pseudo labels to retrain a model, while ignoring the importance of making the model robust to the inevitably mislabeled data. In this paper, we propose a novel Mutually Attentive Co-training Framework (MACF) that effectively alleviates the negative impact of incorrect labels on model retraining by exploiting disagreements between deep models. Specifically, MACF trains two symmetrical sub-networks that share the same input and are connected by attention modules at several layers. Each attention module compares the features inferred by the two sub-networks for the same input and feeds back attention maps that flag noisy gradients. The attention modules are designed by analyzing how incorrect labels back-propagate through different layers. Through this multi-layer interception, the noisy gradients caused by incorrect labels are effectively suppressed in both sub-networks, making training robust to potentially incorrect labels. In addition, a hierarchical distillation strategy is developed to improve the pseudo labels by aggregating predictions from multiple models and data transformations. Experiments on six general benchmarks, covering both classification and biomedical segmentation, demonstrate that MACF is much more robust to noisy labels than previous methods.
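To make the mechanism concrete, below is a minimal PyTorch sketch of the two ideas the abstract summarizes: symmetrical sub-networks gated by mutual attention modules, and pseudo labels formed by averaging predictions. All names (AttentionModule, MACFSketch, distill_pseudo_labels) and design choices here are illustrative assumptions; the actual MACF architecture, losses, and distillation hierarchy are specified in the full paper.

```python
import torch
import torch.nn as nn

class AttentionModule(nn.Module):
    """Hypothetical module: compares features from the two branches and
    returns a sigmoid gate for each. Multiplying a feature map by a gate
    in [0, 1] also attenuates the gradient flowing back through that
    location, which is one way noisy-label gradients can be intercepted."""
    def __init__(self, channels):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, 2 * channels, kernel_size=1)

    def forward(self, f_a, f_b):
        joint = self.fuse(torch.cat([f_a, f_b], dim=1))
        g_a, g_b = joint.chunk(2, dim=1)
        return torch.sigmoid(g_a), torch.sigmoid(g_b)

class MACFSketch(nn.Module):
    """Two symmetrical sub-networks over the same input, connected by an
    attention module after each stage (multi-layer interception)."""
    def __init__(self, stages_a, stages_b, channels_per_stage):
        super().__init__()
        self.stages_a = nn.ModuleList(stages_a)
        self.stages_b = nn.ModuleList(stages_b)
        self.attn = nn.ModuleList(AttentionModule(c) for c in channels_per_stage)

    def forward(self, x):
        f_a = f_b = x  # both sub-networks see the same input
        for stage_a, stage_b, attn in zip(self.stages_a, self.stages_b, self.attn):
            f_a, f_b = stage_a(f_a), stage_b(f_b)
            g_a, g_b = attn(f_a, f_b)
            f_a, f_b = f_a * g_a, f_b * g_b  # gate features (and gradients)
        return f_a, f_b

def distill_pseudo_labels(models, transforms, x):
    """Sketch of the distillation idea: average class probabilities over
    several models and data transformations to obtain softer, more
    reliable pseudo labels (the paper's strategy is hierarchical)."""
    with torch.no_grad():
        probs = [m(t(x)).softmax(dim=1) for m in models for t in transforms]
    return torch.stack(probs).mean(dim=0)
```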

Volume 23
Pages 899-910
DOI 10.1109/TMM.2020.2990063
Language English
Journal IEEE Transactions on Multimedia
