Inf. Sci. | 2021

Noisy label tolerance: A new perspective of Partial Multi-Label Learning


Abstract


Partial Multi-Label Learning (PML) aims to learn from training data in which each example is associated with a set of candidate labels, only a subset of which is correct. The major challenge of PML is that the training procedure is prone to being misled by label noise. To address this problem, nearly all existing PML methods focus solely on label disambiguation, i.e., dislodging the noisy labels from the candidate label set and then utilizing the remaining credible labels for model induction. However, these remaining “credible” labels may be incorrectly identified, which would severely harm the subsequent model induction. In this paper, in contrast to the above label disambiguation strategy, we propose a simple yet effective Noisy lAbel Tolerated pArtial multi-label Learning (NATAL) method, in which the labeling information is considered precise while the feature information is assumed to be missing. Under this view, the task of PML can be re-interpreted as a feature completion problem, and the desired prediction model can be directly induced from the completed features together with all candidate labels. Extensive experimental results on various data sets clearly demonstrate the effectiveness of our proposed approach.
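The abstract only sketches the idea, so the following is a minimal, hypothetical illustration of the feature-completion view rather than the actual NATAL algorithm: the toy data, the masking pattern, the rank-k SVD imputation, and the one-vs-rest logistic regression are all assumptions made for the example. The candidate label matrix is left untouched throughout; only the feature matrix is completed before model induction.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(0)

# Toy data: n examples, d features, q candidate labels (1 = label is in the candidate set).
n, d, q = 100, 20, 5
X_true = rng.normal(size=(n, d))
Y_candidate = (rng.random((n, q)) < 0.3).astype(int)

# The "feature information is missing" view: hide some feature entries.
mask = rng.random((n, d)) < 0.2            # True where a feature value is unobserved
X_obs = X_true.copy()
X_obs[mask] = np.nan

# Feature completion step (assumed placeholder): iterative rank-k SVD imputation.
X_hat = np.where(mask, np.nanmean(X_obs, axis=0), X_obs)   # mean-initialise the gaps
k = 5                                                      # assumed target rank
for _ in range(50):
    U, s, Vt = np.linalg.svd(X_hat, full_matrices=False)
    X_low = (U[:, :k] * s[:k]) @ Vt[:k]
    X_hat[mask] = X_low[mask]                              # refill only the missing cells

# Model induction: train directly on the completed features and ALL candidate labels.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X_hat, Y_candidate)
print(clf.predict(X_hat[:3]))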

Volume 543
Pages 454-466
DOI 10.1016/J.INS.2020.09.019
Language English
Journal Inf. Sci.
