Minghai Xin
Southeast University
Publication
Featured research published by Minghai Xin.
IEEE Signal Processing Letters | 2014
Wenming Zheng; Minghai Xin; Xiaolan Wang; Bei Wang
In this letter, we propose a novel speech emotion recognition method based on a least squares regression (LSR) model, in which an incomplete sparse LSR (ISLSR) model is proposed and used to characterize the linear relationship between speech features and the corresponding emotion labels. Both labeled and unlabeled speech data sets are used to train the ISLSR model, where the unlabeled data enhance the compatibility of the model so that it generalizes well to out-of-sample speech data. Another novelty of ISLSR lies in its capability to perform feature selection. To evaluate the performance of the proposed method, we conduct experiments on two emotional speech databases. The experimental results on both databases demonstrate that the proposed method achieves better recognition performance compared with several state-of-the-art methods.
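To make the sparse-regression idea concrete, the sketch below maps speech feature vectors to one-hot emotion labels with an ℓ1-penalized least-squares fit, using scikit-learn's MultiTaskLasso as a stand-in for the ISLSR model. The feature dimensions, penalty weight, and synthetic data are assumptions for illustration, not details from the paper.

# Minimal sketch: sparse least-squares regression from speech features to
# emotion labels; MultiTaskLasso is a stand-in for the ISLSR model
# (synthetic data, illustrative only).
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(0)
n_samples, n_features, n_classes = 200, 50, 6            # hypothetical sizes

X = rng.standard_normal((n_samples, n_features))         # speech feature vectors
y = rng.integers(0, n_classes, size=n_samples)           # emotion class indices
Y = np.eye(n_classes)[y]                                 # one-hot label matrix

# The l1-type penalty drives whole rows of the projection matrix to zero,
# which is what gives the regression its feature-selection behavior.
model = MultiTaskLasso(alpha=0.05).fit(X, Y)

selected = np.flatnonzero(np.linalg.norm(model.coef_, axis=0) > 1e-8)
pred = model.predict(X).argmax(axis=1)                   # predicted emotion class
print(f"{selected.size} of {n_features} features kept, "
      f"train accuracy {np.mean(pred == y):.2f}")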
international symposium on circuits and systems | 2012
Xiaoyan Zhou; Wenming Zheng; Minghai Xin
In this paper, we propose a novel canonical correlation analysis (CCA) algorithm for facial expression recognition. In contrast to the traditional CCA algorithm, the proposed method is capable of selecting the optimal spectral components of the training data matrix in modelling the linear correlation between the facial feature vectors and the corresponding expression class membership vectors. We formulate this spectral selection problem as a sparse optimization problem, where the ℓ1-norm penalty is adopted to achieve this goal. To recognize the emotion category of each facial image, we present a linear regression formula to predict its emotion class membership. Experiments on the JAFFE facial expression database confirm the improved recognition performance of the proposed method.
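The following sketch illustrates the spectral-selection idea under assumed synthetic data: the training matrix is decomposed by SVD, an ℓ1-penalized regression keeps a sparse set of spectral components, and a linear-regression-style prediction assigns each sample to the class with the largest fitted membership. It is not the paper's exact CCA formulation.

# Minimal sketch of spectral selection with an l1 penalty (illustrative only).
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(0)
n_samples, n_features, n_classes = 150, 64, 7     # hypothetical sizes

X = rng.standard_normal((n_samples, n_features))  # facial feature vectors
y = rng.integers(0, n_classes, size=n_samples)
Y = np.eye(n_classes)[y]                          # class membership vectors

# Spectral components of the training data matrix.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Z = X @ Vt.T                                      # coordinates in the spectral basis

# The l1-type penalty selects a sparse subset of spectral components.
reg = MultiTaskLasso(alpha=0.05).fit(Z, Y)
kept = np.flatnonzero(np.linalg.norm(reg.coef_, axis=0) > 1e-8)

# Linear-regression prediction: class with the largest fitted membership.
pred = reg.predict(Z).argmax(axis=1)
print(f"kept {kept.size} of {len(s)} spectral components, "
      f"train accuracy {np.mean(pred == y):.2f}")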
international conference on acoustics, speech, and signal processing | 2015
Wenming Zheng; Xiaoyan Zhou; Minghai Xin
In this paper, color facial expression recognition based on color local features is investigated, in which each color facial image is decomposed into three color component images. For each color component image, we extract a set of color local features to represent it, where the color local features may be either color local binary patterns (LBP) or color scale-invariant feature transform (SIFT) features. To cope with the facial expression recognition problem, we use a group sparse least squares regression (GSLSR) model to describe the relationship between the color local feature vectors and the associated emotion label vectors and then perform expression recognition based on it. Finally, experiments on the Multi-PIE color facial expression database are conducted to validate the proposed method and compare the results with those of state-of-the-art methods.
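As an illustration of group-sparse least-squares regression over color-channel feature blocks, the sketch below solves a group-lasso problem with a simple proximal-gradient loop. The group layout (one group per color channel), penalty weight, step size, and synthetic features are assumptions; this is a generic group-lasso stand-in, not the paper's GSLSR formulation.

# Minimal sketch: group-sparse least-squares regression solved by
# proximal gradient (ISTA) with group soft-thresholding (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n, d_per_channel, n_classes = 120, 30, 6           # hypothetical sizes
channels = 3                                       # e.g. one block per color channel
d = channels * d_per_channel

X = rng.standard_normal((n, d))                    # concatenated color local features
y = rng.integers(0, n_classes, size=n)
Y = np.eye(n_classes)[y]                           # emotion label vectors

groups = [np.arange(c * d_per_channel, (c + 1) * d_per_channel)
          for c in range(channels)]                # one group per color channel

W = np.zeros((d, n_classes))
lam = 5.0                                          # group-sparsity penalty weight
step = 1.0 / np.linalg.norm(X, 2) ** 2             # ISTA step size (1 / Lipschitz constant)

for _ in range(500):
    G = X.T @ (X @ W - Y)                          # gradient of the least-squares term
    W = W - step * G
    for g in groups:                               # group soft-thresholding
        norm_g = np.linalg.norm(W[g])
        W[g] = 0.0 if norm_g <= step * lam else W[g] * (1 - step * lam / norm_g)

pred = (X @ W).argmax(axis=1)
active = [i for i, g in enumerate(groups) if np.linalg.norm(W[g]) > 0]
print(f"active color-channel groups: {active}, train accuracy {np.mean(pred == y):.2f}")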
IEEE Transactions on Affective Computing | 2018
Wenming Zheng; Yuan Zong; Xiaoyan Zhou; Minghai Xin
Facial expression recognition across domains, e.g., where training and testing facial images come from different facial poses, is very challenging due to the different marginal distributions of the training and testing facial feature vectors. To deal with this challenging cross-domain facial expression recognition problem, a novel transductive transfer subspace learning method is proposed in this paper. In this method, a labeled facial image set from the source domain is combined with an unlabeled auxiliary facial image set from the target domain to jointly learn a discriminative subspace and predict the class labels of the unlabeled facial images, for which a transductive transfer regularized least-squares regression (TTRLSR) model is proposed. Then, based on the auxiliary facial image set, we train an SVM classifier for classifying the expressions of the other facial images in the target domain. Moreover, we also investigate the use of color facial features to evaluate the recognition performance of the proposed facial expression recognition method, where color scale-invariant feature transform (CSIFT) features associated with 49 landmark facial points are extracted to describe each color facial image. Finally, extensive experiments on the BU-3DFE and Multi-PIE multiview color facial expression databases are conducted to evaluate the cross-database and cross-view facial expression recognition performance of the proposed method. Comparisons with state-of-the-art domain adaptation methods are also included in the experiments. The experimental results demonstrate that the proposed method achieves much better recognition performance compared with the state-of-the-art methods.
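The two-stage pipeline shape described above can be sketched as follows, with PCA and a ridge classifier standing in for the TTRLSR subspace learning and label prediction, and an SVM trained on the pseudo-labeled auxiliary set classifying the remaining target images. All data, dimensions, and the choice of PCA/ridge are assumptions; only the pipeline structure follows the abstract.

# Minimal sketch of the two-stage cross-domain pipeline (illustrative only):
# (1) learn a shared subspace from labeled source data plus unlabeled target
#     auxiliary data and predict labels for the auxiliary set,
# (2) train an SVM on the labeled auxiliary set and classify the remaining target images.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import RidgeClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_src, n_aux, n_tgt, d, n_classes = 300, 150, 150, 100, 6   # hypothetical sizes

X_src = rng.standard_normal((n_src, d))          # labeled source features (e.g. CSIFT-like)
y_src = rng.integers(0, n_classes, size=n_src)
X_aux = rng.standard_normal((n_aux, d)) + 0.5    # unlabeled target auxiliary set
X_tgt = rng.standard_normal((n_tgt, d)) + 0.5    # remaining target images

# Stage 1: joint subspace over source + auxiliary data, then label the auxiliary set.
pca = PCA(n_components=30).fit(np.vstack([X_src, X_aux]))
clf = RidgeClassifier().fit(pca.transform(X_src), y_src)
y_aux_pred = clf.predict(pca.transform(X_aux))

# Stage 2: SVM trained on the labeled auxiliary set classifies the target images.
svm = SVC(kernel="linear").fit(pca.transform(X_aux), y_aux_pred)
y_tgt_pred = svm.predict(pca.transform(X_tgt))
print("predicted target label distribution:", np.bincount(y_tgt_pred, minlength=n_classes))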
IEICE Transactions on Information and Systems | 2014
Peng Song; Yun Jin; Li Zhao; Minghai Xin
IEICE Transactions on Information and Systems | 2013
Yun Jin; Peng Song; Wenming Zheng; Li Zhao; Minghai Xin
IEICE Transactions on Information and Systems | 2016
Ping Lu; Wenming Zheng; Ziyan Wang; Qiang Li; Yuan Zong; Minghai Xin; Lenan Wu
IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences | 2013
Hao Wang; Li Zhao; Wenjiang Pei; Jiakuo Zuo; Qingyun Wang; Minghai Xin
IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences | 2015
Peng Song; Wenming Zheng; Xinran Zhang; Yun Jin; Cheng Zha; Minghai Xin
IEICE Transactions on Information and Systems | 2014
Jingjie Yan; Wenming Zheng; Minghai Xin; Jingwei Yan