Neural Processing Letters | 2019

Alignment Based Feature Selection for Multi-label Learning


Abstract


Multi-label learning deals with data sets in which each example is associated with a set of labels, and the goal is to construct a learning model that predicts the label set of unseen examples. Like single-label data sets, multi-label data sets are often high-dimensional and may contain redundant features, which degrades the performance of learning algorithms; feature selection is therefore necessary in multi-label learning. Meanwhile, information among labels plays an important role in multi-label learning, so measuring such information is significant for improving the performance of learning algorithms. In this paper, we introduce kernel alignment into multi-label learning to measure the consistency between the feature space and the label space, by which features are ranked and selected. First, we define an ideal kernel in the label space as a convex combination of the ideal kernels defined by each label, and a combined kernel in the feature space as a linear combination of kernels, each corresponding to a single feature. Second, by maximizing the kernel alignment between the linearly combined kernel and the ideal kernel, the weights of both kernels are learned simultaneously, and the learned label weights can be employed as degrees of labeling importance, a kind of information among labels. Finally, features are ranked according to their weights in the linearly combined kernel, and a feature subset consisting of the top-ranked features is selected. The result is a novel feature selection method for multi-label learning that learns the importance degrees of labels automatically; its effectiveness is demonstrated by experimental comparisons.
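The following is a minimal sketch of the idea described above, not the authors' exact formulation: it assumes linear per-feature kernels, per-label ideal kernels of the form y_l y_l^T, toy random data, and an off-the-shelf bound-constrained optimizer (SciPy L-BFGS-B) in place of the optimization procedure used in the paper. It only illustrates how joint weights for features and labels can be obtained by maximizing kernel alignment and then used to rank features.

```python
# Illustrative sketch only: kernel choices, optimizer, and toy data are assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, d, q = 60, 8, 3                                  # samples, features, labels (toy sizes)
X = rng.standard_normal((n, d))                     # feature matrix
Y = (rng.random((n, q)) < 0.3).astype(float)        # binary label matrix

# One linear kernel per feature: K_i = x_i x_i^T
feature_kernels = [np.outer(X[:, i], X[:, i]) for i in range(d)]
# One "ideal" kernel per label: Y_l = y_l y_l^T (assumed form)
label_kernels = [np.outer(Y[:, l], Y[:, l]) for l in range(q)]

def frob(A, B):
    """Frobenius inner product <A, B>_F."""
    return float(np.sum(A * B))

def neg_alignment(params):
    """Negative kernel alignment between the feature-combined kernel
    K(w) = sum_i w_i K_i and the label-combined ideal kernel
    K*(beta) = sum_l beta_l Y_l."""
    w, beta = params[:d], params[d:]
    K = sum(wi * Ki for wi, Ki in zip(w, feature_kernels))
    Kstar = sum(bl * Yl for bl, Yl in zip(beta, label_kernels))
    denom = np.sqrt(frob(K, K) * frob(Kstar, Kstar)) + 1e-12
    return -frob(K, Kstar) / denom

# Maximize alignment over non-negative weights; the box constraints stand in
# for the convex-combination constraint on the label weights.
x0 = np.ones(d + q) / (d + q)
res = minimize(neg_alignment, x0, method="L-BFGS-B",
               bounds=[(0.0, None)] * (d + q))

w_opt = res.x[:d] / (res.x[:d].sum() + 1e-12)       # feature weights -> feature ranking
beta_opt = res.x[d:] / (res.x[d:].sum() + 1e-12)    # label weights -> labeling importance
ranking = np.argsort(-w_opt)                        # top-ranked features first
print("feature ranking:", ranking)
print("label importance:", np.round(beta_opt, 3))
```

A feature subset would then be formed from the top entries of `ranking`; how many features to keep is a separate choice not fixed by this sketch.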

Pages 1-22
DOI 10.1007/s11063-019-10009-9
Language English
Journal Neural Processing Letters
