Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Shangfei Wang is active.

Publication


Featured research published by Shangfei Wang.


IEEE Transactions on Multimedia | 2010

A Natural Visible and Infrared Facial Expression Database for Expression Recognition and Emotion Inference

Shangfei Wang; Zhilei Liu; Siliang Lv; Yanpeng Lv; Guobing Wu; Peng Peng; Fei Chen; Xufa Wang

To date, most facial expression analysis has been based on visible and posed expression databases. Visible images, however, are easily affected by illumination variations, while posed expressions differ in appearance and timing from natural ones. In this paper, we propose and establish a natural visible and infrared facial expression database, which contains both spontaneous and posed expressions of more than 100 subjects, recorded simultaneously by a visible and an infrared thermal camera, with illumination provided from three different directions. The posed database includes apex expressional images with and without glasses. As a preliminary assessment of the usability of our spontaneous database for expression recognition and emotion inference, we conduct visible facial expression recognition using four typical methods: the eigenface approach [principal component analysis (PCA)], the fisherface approach [PCA + linear discriminant analysis (LDA)], the Active Appearance Model (AAM), and the AAM-based approach combined with LDA. We also use PCA and PCA+LDA to recognize expressions from infrared thermal images. In addition, we analyze the relationship between facial temperature and emotion through statistical analysis. Our database is available for research purposes.
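
The fisherface-style baseline mentioned in the abstract (PCA followed by LDA) can be sketched in a few lines. Below is a minimal, illustrative pipeline assuming vectorized face crops in X and six expression labels in y; the array shapes, component count, and nearest-neighbor classifier are placeholder choices, not the paper's settings.

    # Minimal fisherface-style sketch (PCA + LDA) for expression recognition.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.random((300, 64 * 64))        # placeholder for vectorized face crops
    y = rng.integers(0, 6, size=300)      # placeholder for six expression labels

    # Eigenface step: PCA keeps most of the variance; fisherface step: LDA
    # projects onto at most (n_classes - 1) discriminative directions.
    fisherface = make_pipeline(
        PCA(n_components=50, whiten=True),
        LinearDiscriminantAnalysis(),
        KNeighborsClassifier(n_neighbors=1),
    )
    print(cross_val_score(fisherface, X, y, cv=5).mean())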


Computer Vision and Pattern Recognition | 2013

Capturing Complex Spatio-temporal Relations among Facial Muscles for Facial Expression Recognition

Ziheng Wang; Shangfei Wang; Qiang Ji

Spatio-temporal relations among facial muscles carry crucial information about facial expressions, yet they have not been thoroughly exploited. One contributing factor is the limited ability of current dynamic models to capture complex spatial and temporal relations: existing dynamic models can only capture simple, local temporal relations among sequential events, or lack the ability to incorporate uncertainties. To overcome these limitations and take full advantage of the spatio-temporal information, we propose to model a facial expression as a complex activity that consists of temporally overlapping or sequential primitive facial events. We further propose the Interval Temporal Bayesian Network (ITBN) to capture these complex temporal relations among primitive facial events for facial expression modeling and recognition. Experimental results on benchmark databases demonstrate the feasibility of the proposed approach in recognizing facial expressions based purely on spatio-temporal relations among facial muscles, as well as its advantage over existing methods.
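
As a rough illustration of the interval-based view of primitive facial events, the toy helper below labels the temporal relation between two (start, end) event intervals with a coarse subset of Allen-style interval relations; the event intervals and relation names are illustrative assumptions, not the ITBN's exact formulation.

    # Toy helper: label the temporal relation between two primitive facial
    # events given as (start, end) frame intervals.
    def interval_relation(a, b):
        a_start, a_end = a
        b_start, b_end = b
        if a_end < b_start:
            return "before"
        if a_end == b_start:
            return "meets"
        if a_start == b_start and a_end == b_end:
            return "equals"
        if a_start >= b_start and a_end <= b_end:
            return "during"
        if a_start < b_start < a_end < b_end:
            return "overlaps"
        return "other"

    # e.g. a brow-raise event overlapping a mouth-opening event
    print(interval_relation((3, 10), (7, 15)))   # -> "overlaps"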


IEEE Transactions on Image Processing | 2013

Simultaneous Facial Feature Tracking and Facial Expression Recognition

Yongqiang Li; Shangfei Wang; Yongping Zhao; Qiang Ji

The tracking and recognition of facial activities from images or videos have attracted great attention in the computer vision field. Facial activities are characterized at three levels. First, at the bottom level, facial feature points around each facial component (e.g., eyebrow, mouth) capture the detailed face shape information. Second, at the middle level, facial action units, defined in the Facial Action Coding System, represent the contraction of a specific set of facial muscles (e.g., lid tightener, eyebrow raiser). Finally, at the top level, six prototypical facial expressions represent the global facial muscle movement and are commonly used to describe human emotional states. In contrast to mainstream approaches, which usually focus on only one or two levels of facial activities and track (or recognize) them separately, this paper introduces a unified probabilistic framework based on the dynamic Bayesian network to simultaneously and coherently represent the evolution of facial activity at the different levels, the interactions among levels, and their observations. Advanced machine learning methods are introduced to learn the model from both training data and subjective prior knowledge. Given the model and the measurements of facial motion, all three levels of facial activities are recognized simultaneously through probabilistic inference. Extensive experiments illustrate the feasibility and effectiveness of the proposed model on all three levels of facial activities.
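
To give a flavor of the probabilistic fusion across levels, the toy snippet below combines a prior over six prototypical expressions with a hypothetical AU-level likelihood via Bayes' rule; the real model is a dynamic Bayesian network over feature points, AUs, and expressions, and every number here is a placeholder.

    import numpy as np

    # Toy bottom-up fusion step: combine a prior over six expressions with a
    # hypothetical likelihood of the observed AU pattern under each expression.
    expressions = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]
    prior = np.full(6, 1 / 6)                       # uniform prior over expressions
    likelihood = np.array([0.05, 0.02, 0.03, 0.70, 0.05, 0.15])  # placeholder values
    posterior = prior * likelihood
    posterior /= posterior.sum()
    print(dict(zip(expressions, posterior.round(3))))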


International Conference on Computer Vision | 2013

Capturing Global Semantic Relationships for Facial Action Unit Recognition

Ziheng Wang; Yongqiang Li; Shangfei Wang; Qiang Ji

In this paper we tackle the problem of facial action unit (AU) recognition by exploiting the complex semantic relationships among AUs, which carry crucial top-down information yet have not been thoroughly exploited. Towards this goal, we build a hierarchical model that combines the bottom-level image features and the top-level AU relationships to jointly recognize AUs in a principled manner. The proposed model has two major advantages over existing methods. 1) Unlike methods that can only capture local pair-wise AU dependencies, our model is developed upon the restricted Boltzmann machine and therefore can exploit the global relationships among AUs. 2) Although AU relationships are influenced by many related factors such as facial expressions, these factors are generally ignored by the current methods. Our model, however, can successfully capture them to more accurately characterize the AU relationships. Efficient learning and inference algorithms of the proposed model are also developed. Experimental results on benchmark databases demonstrate the effectiveness of the proposed approach in modelling complex AU relationships as well as its superior AU recognition performance over existing approaches.
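
Since the model is built on a restricted Boltzmann machine, a minimal binary RBM trained with one-step contrastive divergence on AU label vectors illustrates how global co-occurrence structure among AUs can be captured; the layer sizes, learning rate, and synthetic labels below are assumptions, and this is not the paper's full hierarchical model.

    import numpy as np

    # Minimal binary RBM trained with CD-1 on AU label vectors (each row:
    # presence/absence of several AUs), sketching how an RBM can capture
    # global co-occurrence structure among AUs.
    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    n_visible, n_hidden, lr = 8, 4, 0.05          # 8 AUs, 4 hidden units
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)

    V = (rng.random((500, n_visible)) < 0.3).astype(float)   # placeholder AU labels

    for _ in range(200):
        ph = sigmoid(V @ W + b_h)                 # positive phase
        h = (rng.random(ph.shape) < ph).astype(float)
        pv = sigmoid(h @ W.T + b_v)               # reconstruction
        ph2 = sigmoid(pv @ W + b_h)               # negative phase
        W += lr * (V.T @ ph - pv.T @ ph2) / len(V)
        b_v += lr * (V - pv).mean(axis=0)
        b_h += lr * (ph - ph2).mean(axis=0)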


Multimedia Tools and Applications | 2014

Hybrid video emotional tagging using users' EEG and video content

Shangfei Wang; Yachen Zhu; Guobing Wu; Qiang Ji

In this paper, we propose novel hybrid approaches to annotate videos in valence and arousal spaces by using users' electroencephalogram (EEG) signals and video content. First, several audio and visual features are extracted from video clips, and five frequency features are extracted from each channel of the EEG signals. Second, statistical analyses are conducted to explore the relationships among emotional tags, EEG features, and video features. Third, three Bayesian networks are constructed to annotate videos by combining the video and EEG features through independent feature-level fusion, decision-level fusion, and dependent feature-level fusion. To evaluate the effectiveness of our approaches, we designed and conducted a psychophysiological experiment to collect data, including emotion-inducing video clips, users' EEG responses while watching the selected video clips, and emotional video tags collected through participants' self-reports after watching each clip. The experimental results show that the proposed fusion methods outperform conventional emotional tagging methods that use either video or EEG features alone, in both valence and arousal spaces. Moreover, EEG features help narrow the semantic gap between the low-level video features and the users' high-level emotional tags.
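
The two simplest fusion schemes mentioned above can be contrasted in a short sketch, assuming per-clip video features Xv, EEG features Xe, and binary valence labels y; logistic regression stands in for the paper's Bayesian networks purely for illustration.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Toy contrast of feature-level vs decision-level fusion on synthetic data.
    rng = np.random.default_rng(0)
    Xv, Xe = rng.random((200, 10)), rng.random((200, 5))   # video / EEG features
    y = rng.integers(0, 2, size=200)                       # binary valence labels

    # Feature-level fusion: concatenate modalities, train one classifier.
    feat_fusion = LogisticRegression(max_iter=1000).fit(np.hstack([Xv, Xe]), y)

    # Decision-level fusion: train one classifier per modality, average posteriors.
    clf_v = LogisticRegression(max_iter=1000).fit(Xv, y)
    clf_e = LogisticRegression(max_iter=1000).fit(Xe, y)
    p = 0.5 * clf_v.predict_proba(Xv)[:, 1] + 0.5 * clf_e.predict_proba(Xe)[:, 1]
    decision_fusion_pred = (p > 0.5).astype(int)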


Pattern Recognition | 2014

Enhancing multi-label classification by modeling dependencies among labels

Shangfei Wang; Jun Wang; Zhaoyu Wang; Qiang Ji

In this paper, we propose a novel framework for multi-label classification that directly models the dependencies among labels using a Bayesian network. Each node of the Bayesian network represents a label, and the links and conditional probabilities capture the probabilistic dependencies among multiple labels. We employ our Bayesian network structure learning method, which is guaranteed to find the globally optimal structure regardless of the initial structure. After structure learning, maximum likelihood estimation is used to learn the conditional probabilities among nodes. Any existing multi-label classifier can be employed to obtain measurements of the labels. Then, using the learned Bayesian network, the true labels are inferred by combining the relationships among labels with the label estimates obtained from the existing multi-label method. We further extend the proposed method to handle incomplete label assignments, adopting the structural Expectation-Maximization algorithm for both structure and parameter learning. Experimental results on two benchmark multi-label databases show that our approach can effectively capture the co-occurrence and mutual-exclusion relations among labels. The relations modeled by our approach are more flexible than the pairwise relations or fixed label subsets captured by current multi-label learning methods, so our approach improves performance over current multi-label classifiers. Furthermore, our approach is robust to incomplete label assignments.
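
A toy two-label example shows the core idea of correcting independent per-label estimates with a learned joint model: here a simple co-occurrence table stands in for the learned Bayesian network, and the posteriors are made-up numbers.

    import numpy as np

    # Independent classifiers give noisy per-label posteriors; a joint prior over
    # label combinations (stand-in for the learned Bayesian network) re-weights
    # them so that dependent labels are inferred jointly.
    joint_prior = np.array([[0.50, 0.05],    # P(l1, l2): rows l1 in {0,1}, cols l2
                            [0.05, 0.40]])   # labels tend to co-occur or be co-absent
    p1, p2 = 0.45, 0.80                      # per-label posteriors P(l1=1), P(l2=1)

    meas = np.array([[(1 - p1) * (1 - p2), (1 - p1) * p2],
                     [p1 * (1 - p2),       p1 * p2]])
    posterior = joint_prior * meas
    posterior /= posterior.sum()
    l1, l2 = np.unravel_index(posterior.argmax(), posterior.shape)
    print(l1, l2)    # the co-occurrence prior flips the weak l1 estimate to 1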


Affective Computing and Intelligent Interaction | 2005

Emotion semantics image retrieval: a brief overview

Shangfei Wang; Xufa Wang

Emotion is the most abstract level of image semantics. This paper overviews recent research on emotion semantics image retrieval. First, it introduces the general framework of emotion semantics image retrieval and identifies the four main research issues: extracting emotion-sensitive features from images, defining users' emotion information, building an emotion user model, and individualizing that model. Several algorithms addressing these four issues are then analyzed in detail. Finally, some future research topics, including the construction of an emotion database and the evaluation and computation of the user model, are discussed, and some preliminary strategies for addressing them are presented.


Affective Computing and Intelligent Interaction | 2011

Emotion recognition using hidden Markov models from facial temperature sequence

Zhilei Liu; Shangfei Wang

In this paper, an emotion recognition method based on facial temperature sequences is proposed. First, temperature-difference histogram features and five statistical features are extracted from the facial temperature difference matrix of each difference frame in the data sequences. Discrete hidden Markov models are then used as the classifier for each feature, with a feature selection strategy based on recognition results on the training set. Finally, experiments on samples from the USTC-NVIE database demonstrate the effectiveness of our method. The experimental results also show that the temperature information of the forehead is more useful for emotion recognition and understanding than that of other facial regions, which is consistent with related research results.
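
For reference, classification with per-emotion discrete HMMs reduces to scoring each observation sequence with the forward algorithm and picking the highest-likelihood model; the sketch below implements that scoring for quantized features, with placeholder parameters rather than values from the paper.

    import numpy as np

    # Scaled forward algorithm for a discrete HMM: returns the log-likelihood of
    # a sequence of quantized temperature-difference symbols under one model.
    def forward_loglik(obs, pi, A, B):
        """obs: symbol indices; pi: initial probs (S,); A: transitions (S, S);
        B: emission probabilities (S, K)."""
        alpha = pi * B[:, obs[0]]
        loglik = 0.0
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]
            s = alpha.sum()
            loglik += np.log(s)
            alpha /= s
        return loglik + np.log(alpha.sum())

    pi = np.array([0.6, 0.4])                          # placeholder parameters
    A = np.array([[0.7, 0.3], [0.2, 0.8]])
    B = np.array([[0.5, 0.3, 0.2], [0.1, 0.3, 0.6]])
    print(forward_loglik([0, 1, 2, 2, 1], pi, A, B))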


Pattern Recognition | 2013

Eye localization from thermal infrared images

Shangfei Wang; Zhilei Liu; Peijia Shen; Qiang Ji

By using knowledge of facial structure and temperature distribution, this paper proposes an automatic eye localization method for long-wave infrared thermal images, both with and without eyeglasses. First, with the help of a support vector machine classifier, three gray-projection features are defined to determine whether a subject is wearing eyeglasses. For subjects with eyeglasses, the locations of valleys in the projection curve are used to localize the eyes. For subjects without eyeglasses, a facial structure consisting of 15 sub-regions is proposed to extract Haar-like features, and eight classifiers are learned from features selected by the AdaBoost algorithm for the left and right eyes, respectively. A voting strategy is employed to find the most likely eye locations. To evaluate the effectiveness of our approach, experiments are performed on the NVIE and Equinox databases. The eyeglass detection accuracies on the NVIE and Equinox databases are 99.36% and 95%, respectively, demonstrating the effectiveness and robustness of our eyeglass detection method. Eye localization results in within-database and cross-database experiments on these two databases are comparable with previous results in this field, verifying the effectiveness and generalization ability of our approach.
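
The eyeglass-detection stage can be sketched as gray-projection features fed to an SVM; the three projection statistics below are illustrative stand-ins for the paper's features (eyeglass lenses appear as cold, dark regions in thermal images), and the face crops and labels are synthetic placeholders.

    import numpy as np
    from sklearn.svm import SVC

    # Sketch of eyeglass detection from normalized thermal face crops.
    def projection_features(face):
        row_proj = face.mean(axis=1)                  # horizontal gray projection
        eye_band = row_proj[len(row_proj) // 4 : len(row_proj) // 2]
        return np.array([eye_band.min(),              # depth of the coldest row
                         eye_band.mean(),             # average intensity in eye band
                         row_proj.mean() - eye_band.mean()])  # contrast vs whole face

    rng = np.random.default_rng(0)
    faces = rng.random((100, 64, 64))                 # placeholder thermal crops
    has_glasses = rng.integers(0, 2, size=100)        # placeholder labels

    X = np.array([projection_features(f) for f in faces])
    clf = SVC(kernel="rbf").fit(X, has_glasses)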


International Conference on Pattern Recognition | 2014

Multi-label Learning with Missing Labels

Baoyuan Wu; Zhilei Liu; Shangfei Wang; Bao-Gang Hu; Qiang Ji

In multi-label learning, each sample can be assigned to multiple class labels simultaneously. In this work, we focus on the problem of multi-label learning with missing labels (MLML), where, instead of assuming that a complete label assignment is provided for each sample, only some labels are assigned values while the rest are missing or not provided. The positive (presence), negative (absence), and missing labels are explicitly distinguished in MLML. We formulate MLML as a transductive learning problem, where the goal is to recover the full label assignment for each sample by enforcing consistency with the available label assignments and smoothness of the label assignments. Along with an exact solution, we also provide an effective and efficient approximate solution. Our method shows much better performance than state-of-the-art methods on several benchmark data sets.
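
In the spirit of the consistency-plus-smoothness formulation, the toy sketch below propagates +1/-1/0 (present/absent/missing) labels over a sample-similarity graph, clamping the observed entries each iteration; this is a simplified stand-in, not the paper's exact optimization.

    import numpy as np

    # Toy label propagation for MLML-style label recovery on synthetic data.
    rng = np.random.default_rng(0)
    X = rng.random((6, 4))                         # sample features
    Y = np.array([[ 1, -1], [ 0,  0], [ 1,  0],
                  [-1,  1], [ 0,  1], [ 0,  0]], dtype=float)

    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    S = np.exp(-d)                                 # sample-similarity matrix
    np.fill_diagonal(S, 0.0)
    S /= S.sum(axis=1, keepdims=True)              # row-normalize

    F, observed = Y.copy(), Y != 0
    for _ in range(50):
        F = S @ F                                  # smoothness over the graph
        F[observed] = Y[observed]                  # consistency with known labels
    pred = np.where(F >= 0, 1, -1)                 # recovered full label matrix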

Collaboration


Dive into Shangfei Wang's collaborations.

Top Co-Authors

Qiang Ji (Rensselaer Polytechnic Institute)
Zhilei Liu (University of Science and Technology of China)
Zhen Gao (University of Science and Technology of China)
Yachen Zhu (University of Science and Technology of China)
Chongliang Wu (University of Science and Technology of China)
Jun Wang (University of Science and Technology of China)
Menghua He (University of Science and Technology of China)
Shan Wu (University of Science and Technology of China)
Zhaoyu Wang (University of Science and Technology of China)
Peijia Shen (University of Science and Technology of China)