IEEE Signal Processing Letters | 2019

Distance-Dependent Metric Learning


Abstract


In most existing metric learning methods, data pairs are treated equally, without regard to their diversity. In fact, the primary role of the metric differs for pairs at different distances. For pairs at smaller distances, the metric should focus more on expanding negative pairs, which are more easily misjudged, whereas for pairs at larger distances, it should focus more on shrinking positive pairs. Most metric learning methods neglect these differences. In this letter, we propose a distance-dependent metric learning (D$^2$ML) method. It partitions data pairs into clusters according to the $\ell_2$ distance between them, and each cluster is associated with a Mahalanobis metric that learns its pairs' distances. This not only makes each metric more targeted and adapts flexibly to the diversity of the data, but also avoids the problem of computing distances between points assigned to different clusters, which arises in some local metric learning methods. D$^2$ML is further extended to D$^3$ML to exploit the nonlinear capacity of neural networks. Experiments on UCI datasets and the speaker recognition i-vector machine learning challenge show that the proposed methods are superior to other metric learning methods.
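To make the idea concrete, below is a minimal Python sketch of the distance-dependent structure described in the abstract: pairs are bucketed by their $\ell_2$ distance, and each bucket is scored with its own Mahalanobis metric. The quantile-based bucketing rule, the helper names (partition_pairs_by_distance, mahalanobis_sq), and the identity placeholder metrics are illustrative assumptions; the letter learns the per-cluster metrics, which this sketch does not attempt.

```python
import numpy as np

def partition_pairs_by_distance(X, pairs, n_clusters=3):
    # l2 distance between the two points of each pair.
    d = np.linalg.norm(X[pairs[:, 0]] - X[pairs[:, 1]], axis=1)
    # Cluster boundaries at distance quantiles -- an illustrative
    # choice, not necessarily the paper's partitioning rule.
    edges = np.quantile(d, np.linspace(0.0, 1.0, n_clusters + 1)[1:-1])
    return np.digitize(d, edges)  # cluster index in {0, ..., n_clusters-1}

def mahalanobis_sq(x, y, M):
    # Squared Mahalanobis distance (x - y)^T M (x - y), M assumed PSD.
    diff = x - y
    return float(diff @ M @ diff)

# Toy usage: identity matrices stand in for the per-cluster
# metrics M_k that D^2ML would learn from training pairs.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
pairs = rng.integers(0, 100, size=(200, 2))
cluster_ids = partition_pairs_by_distance(X, pairs, n_clusters=3)
metrics = [np.eye(X.shape[1]) for _ in range(3)]
scores = [mahalanobis_sq(X[i], X[j], metrics[k])
          for (i, j), k in zip(pairs, cluster_ids)]
```

Note that the metric is indexed by the pair rather than by individual points, which is why, as the abstract observes, no distance between points assigned to different clusters ever needs to be computed.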

Volume 26
Pages 357-361
DOI 10.1109/LSP.2019.2891913
Language English
Journal IEEE Signal Processing Letters
