
Learning Effective Discriminative Features with Differentiable Magnet Loss

Abstract


Neural network optimization relies on the ability of the loss function to learn highly discriminative features. In recent years, the Softmax loss has been widely used to train neural network models across a variety of tasks. To further enhance the discriminative power of the learned features, the Center loss was introduced as an auxiliary function that, jointly with the Softmax loss, reduces intra-class variance. In this paper, we propose a novel loss, the Differentiable Magnet Loss (DML), which can optimize neural networks on its own, without joint supervision by the Softmax loss. DML provides a more definite convergence target for each class: it not only pulls each sample toward its homogeneous (intra-class) center but also pushes it away from all heterogeneous (inter-class) centers in the feature embedding space. Extensive experimental results demonstrate the superiority of DML in a variety of classification and clustering tasks. In particular, 2-D t-SNE visualizations of the learned embedding features confirm that the proposed loss learns more discriminative representations.
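The abstract does not give the exact formulation of DML, but the stated idea (attract each sample to its own class center, repel it from every other class center) can be sketched as follows. This is a minimal, hypothetical PyTorch illustration; the class name, the learnable-centers design, the squared-Euclidean distance, the margin, and the log-sum-exp repulsion term are all assumptions, not the authors' published method.

```python
# Hypothetical sketch of a magnet-style center loss; NOT the paper's exact DML.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MagnetStyleLoss(nn.Module):
    """Attract embeddings to their intra-class center and repel them from
    all inter-class centers (formulation assumed for illustration)."""

    def __init__(self, num_classes: int, embed_dim: int, margin: float = 1.0):
        super().__init__()
        # Learnable per-class centers, trained by backprop with the network.
        self.centers = nn.Parameter(torch.randn(num_classes, embed_dim))
        self.margin = margin

    def forward(self, embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Squared Euclidean distance from each sample to every class center: (B, C).
        dists = torch.cdist(embeddings, self.centers).pow(2)
        # Attraction term: distance to the sample's own (homogeneous) center.
        intra = dists.gather(1, labels.unsqueeze(1)).squeeze(1)  # (B,)
        # Repulsion term: mask out the own-class column, then take a smooth
        # minimum over the remaining (heterogeneous) centers via log-sum-exp.
        mask = F.one_hot(labels, num_classes=self.centers.size(0)).bool()
        inter = dists.masked_fill(mask, float("inf"))
        repel = torch.logsumexp(-inter, dim=1)  # ~ -(distance to nearest rival)
        # Hinge keeps the objective bounded; the whole expression is differentiable.
        return F.relu(intra + self.margin + repel).mean()


# Usage with embeddings from any backbone network:
loss_fn = MagnetStyleLoss(num_classes=10, embed_dim=64)
z = torch.randn(32, 64)           # batch of 64-D embeddings
y = torch.randint(0, 10, (32,))   # class labels
loss = loss_fn(z, y)
loss.backward()
```

The log-sum-exp here acts as a smooth stand-in for the distance to the nearest rival center, which keeps the repulsion term differentiable; since this loss carries its own class-label signal, it can in principle supervise the network without an accompanying Softmax loss, matching the abstract's claim.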

Pages 1-8
DOI 10.1109/IJCNN52387.2021.9534158
Language English
Journal 2021 International Joint Conference on Neural Networks (IJCNN)
