Digit. Signal Process. | 2021

Latent sparse transfer subspace learning for cross-corpus facial expression recognition


Abstract


Facial expression recognition has become an increasingly important research topic in pattern recognition and affective computing. Most facial expression recognition methods assume that the training and testing data come from the same corpus. In practical situations, however, this assumption does not hold: the data are often collected from different scenarios, e.g., different races, environments, or devices, and recognition performance drops significantly. To tackle this problem, we propose a novel transfer learning method, called latent sparse transfer subspace learning (LSTSL), for cross-corpus facial expression recognition. Specifically, we aim to learn a common subspace in which the target samples can be linearly represented by a few source samples. By imposing an ℓ2,1-norm on the reconstructive transformation matrix, the most discriminative features can be effectively selected. To guide the new representation learning, we design a novel graph that preserves local structure information. Furthermore, we introduce a popular distance metric, the maximum mean discrepancy (MMD), to boost transfer ability. We conduct extensive cross-corpus experiments on four popular facial expression datasets. The results show that the proposed method outperforms several state-of-the-art transfer learning methods for cross-corpus facial expression recognition.
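The ℓ2,1-norm regularizer and the MMD metric mentioned in the abstract are standard building blocks. The sketch below is a minimal NumPy illustration of how these two quantities are typically computed, under the assumption of a linear-kernel MMD; the matrices Xs, Xt, and W are hypothetical toy inputs and this is not the paper's LSTSL formulation.

```python
import numpy as np

def l21_norm(W):
    """l2,1-norm of a matrix: sum of the l2 norms of its rows.
    As a regularizer it encourages row sparsity, i.e. feature selection."""
    return np.sum(np.linalg.norm(W, axis=1))

def linear_mmd(Xs, Xt):
    """Empirical maximum mean discrepancy with a linear kernel:
    squared Euclidean distance between source and target feature means."""
    return np.sum((Xs.mean(axis=0) - Xt.mean(axis=0)) ** 2)

# Toy usage with random "source" and "target" feature matrices (n_samples x d).
rng = np.random.default_rng(0)
Xs = rng.normal(size=(100, 20))          # source-corpus features
Xt = rng.normal(loc=0.5, size=(80, 20))  # target-corpus features (shifted)
W = rng.normal(size=(20, 10))            # a candidate transformation matrix

print("l2,1-norm of W:", l21_norm(W))
print("linear MMD between projected corpora:", linear_mmd(Xs @ W, Xt @ W))
```

In a transfer subspace method of this kind, a smaller MMD between the projected source and target features indicates a smaller distribution gap in the learned subspace.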

Volume 116
Pages 103121
DOI 10.1016/J.DSP.2021.103121
Language English
Journal Digit. Signal Process.
