Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Xiaoyi Feng is active.

Publication


Featured research published by Xiaoyi Feng.


Pattern Recognition and Image Analysis | 2007

Facial expression recognition based on local binary patterns

Xiaoyi Feng; Matti Pietikäinen; Abdenour Hadid

In this paper, a novel approach to automatic facial expression recognition from static images is proposed. The face area is first divided automatically into small regions, from which local binary pattern (LBP) histograms are extracted and concatenated into a single feature histogram that efficiently represents the facial expression: anger, disgust, fear, happiness, sadness, surprise, or neutral. A linear programming (LP) technique is then used to classify the seven facial expressions. Experimental results demonstrate an average expression recognition accuracy of 93.8% on the JAFFE database, outperforming all other methods reported on the same database.
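The regional LBP feature described above can be sketched in a few lines of NumPy. This is an illustrative reimplementation, not the authors' code: it computes basic 8-neighbour LBP codes, splits the image into a grid of regions (the 3x3 grid and 256-bin histograms are arbitrary choices for the sketch), and concatenates the per-region histograms into one feature vector.

```python
import numpy as np

def lbp_codes(img):
    """Basic 8-neighbour LBP: threshold the 8 neighbours of each interior pixel
    against the centre pixel and pack the results into an 8-bit code."""
    c = img[1:-1, 1:-1]
    neigh = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:], img[1:-1, 2:],
             img[2:, 2:], img[2:, 1:-1], img[2:, :-2], img[1:-1, :-2]]
    codes = np.zeros(c.shape, dtype=int)
    for bit, n in enumerate(neigh):
        codes |= (n >= c).astype(int) << bit
    return codes

def regional_lbp_histogram(img, grid=(3, 3)):
    """Split the LBP code map into grid cells and concatenate the normalized
    256-bin histogram of every cell, as in the paper's regional scheme."""
    codes = lbp_codes(img)
    feats = []
    for rows in np.array_split(codes, grid[0], axis=0):
        for cell in np.array_split(rows, grid[1], axis=1):
            hist, _ = np.histogram(cell, bins=256, range=(0, 256))
            feats.append(hist / cell.size)
    return np.concatenate(feats)

face = np.random.default_rng(0).integers(0, 256, size=(64, 64))
feat = regional_lbp_histogram(face)
print(feat.shape)  # (2304,) = 9 regions x 256 bins
```

The concatenated histogram would then go to the LP classifier, which is not shown here.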


international conference on image analysis and recognition | 2004

A Coarse-to-Fine Classification Scheme for Facial Expression Recognition

Xiaoyi Feng; Abdenour Hadid; Matti Pietikäinen

In this paper, a coarse-to-fine classification scheme is used to recognize the facial expressions (anger, disgust, fear, happiness, neutral, sadness, and surprise) of novel expressers from static images. In the coarse stage, the seven-class problem is reduced to a two-class one as follows: first, seven model vectors are produced, corresponding to the seven basic facial expressions; then, the distances from each model vector to the feature vector of a test sample are calculated; finally, two of the seven basic expression classes are selected as the test sample's expression candidates (candidate pair). In the fine stage, a K-nearest-neighbor classifier performs the final classification. Experimental results on the JAFFE database demonstrate an average recognition rate of 77% for novel expressers, which outperforms the results reported on the same database.
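The coarse-to-fine scheme can be sketched as follows. Everything below is illustrative: the data is synthetic, the model vectors are simply taken to be class means, and Euclidean distance stands in for whatever distance measure the paper actually uses.

```python
import numpy as np

EXPRESSIONS = ["anger", "disgust", "fear", "happiness", "neutral", "sadness", "surprise"]
rng = np.random.default_rng(1)

# Hypothetical training data: ten 16-dimensional feature vectors per class,
# with class i centred around the value i in every dimension.
train = {e: rng.normal(loc=i, scale=0.5, size=(10, 16))
         for i, e in enumerate(EXPRESSIONS)}

# Coarse stage: one model vector per class (here simply the class mean).
models = {e: X.mean(axis=0) for e, X in train.items()}

def coarse_candidates(x, k=2):
    """Return the candidate pair: the two classes whose model vectors are closest."""
    dist = {e: np.linalg.norm(x - m) for e, m in models.items()}
    return sorted(dist, key=dist.get)[:k]

def fine_knn(x, candidates, k=3):
    """Fine stage: k-NN vote restricted to samples of the two candidate classes."""
    labels, dists = [], []
    for e in candidates:
        for v in train[e]:
            labels.append(e)
            dists.append(np.linalg.norm(x - v))
    votes = [labels[i] for i in np.argsort(dists)[:k]]
    return max(set(votes), key=votes.count)

x = train["fear"][0] + rng.normal(scale=0.1, size=16)  # a noisy "fear" sample
pair = coarse_candidates(x)
print(pair, fine_knn(x, pair))
```

The two-stage design keeps the final k-NN vote inside the candidate pair, so the fine classifier never has to separate all seven classes at once.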


Journal of Electronic Imaging | 2013

Image fusion with nonsubsampled contourlet transform and sparse representation

Jun Wang; Jinye Peng; Xiaoyi Feng; Guiqing He; Jun Wu; Kun Yan

Abstract. Image fusion combines several images of the same scene into a single fused image that retains all of the important information. Multiscale transforms and sparse representation can solve this problem effectively. However, due to the limited number of dictionary atoms, sparse-representation-based fusion methods struggle to describe image details accurately and are computationally expensive. In addition, multiscale-transform-based methods have difficulty representing the low-pass subband coefficients sparsely, so they cannot extract significant features from images. In this paper, a fusion method combining the nonsubsampled contourlet transform (NSCT) and sparse representation (NSCTSR) is proposed. NSCT performs a multiscale decomposition of the source images to express image details, and a dictionary-learning scheme in the NSCT domain allows the low-frequency information of the image to be represented sparsely, so that salient features can be extracted. Furthermore, nonoverlapping blocking reduces the computational cost of the sparse-representation-based fusion step. Experimental results show that the proposed method outperforms fusion methods based on either sparse representation or multiscale decomposition alone.
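A heavily simplified toy version of the fusion idea: a box-filter low-pass split stands in for the NSCT decomposition (which is far more involved), the low-pass bands are averaged, and the detail bands are fused with a max-absolute rule. The dictionary-learning step from the paper is omitted entirely.

```python
import numpy as np

def decompose(img, k=5):
    """Toy two-band split standing in for the NSCT: a k-by-k box-filter
    low-pass band plus the residual detail band."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    low = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            low += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    low /= k * k
    return low, img - low

def fuse(img_a, img_b):
    """Average the low-pass bands; keep the larger-magnitude detail coefficient."""
    low_a, high_a = decompose(img_a)
    low_b, high_b = decompose(img_b)
    low = (low_a + low_b) / 2
    high = np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b)
    return low + high

rng = np.random.default_rng(0)
img_a, img_b = rng.random((32, 32)), rng.random((32, 32))
fused = fuse(img_a, img_b)
print(fused.shape)
```

The max-absolute rule favours whichever source image has the stronger detail at each pixel, which is the basic intuition behind transform-domain fusion rules.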


Neurocomputing | 2015

A regularized optimization framework for tag completion and image retrieval

Zhaoqiang Xia; Xiaoyi Feng; Jinye Peng; Jun Wu; Jianping Fan

Abstract. With the fast expansion of social image-sharing websites, tag-based image retrieval (TBIR) has become an important and prevalent way for Internet users to search social images. However, the user-provided tags of social images are often too incomplete and ambiguous to facilitate retrieval. In this paper, we propose a regularized optimization framework to complete the missing tags of social images (tag completion). The relationships between images and tags are represented as a tag-image matrix, and within the framework, non-negative matrix factorization (NMF) and holistic visual-diversity minimization are used jointly to complete this matrix. The NMF casts the tag-image matrix into a latent low-rank space and exploits the semantic relevance of tags to partially complete the insufficient tags. To take the visual content of images into account, a second objective term representing holistic visual diversity is added to the NMF to leverage content-similar images. Moreover, two regularization factors are included in the framework to ensure proper corrections and the sparseness of the tag-image matrix. Experiments on a benchmark image set with adequate ground truth verify the effectiveness of the proposed approach.
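The NMF component of such a framework can be illustrated with plain Lee-Seung multiplicative updates on a tiny, hypothetical tag-image matrix; the visual-diversity term and the two regularizers from the paper are omitted in this sketch.

```python
import numpy as np

def nmf(T, rank, iters=200, eps=1e-9):
    """Plain Lee-Seung multiplicative updates: T ~= W @ H with W, H >= 0."""
    rng = np.random.default_rng(0)
    W = rng.random((T.shape[0], rank))
    H = rng.random((rank, T.shape[1]))
    for _ in range(iters):
        H *= (W.T @ T) / (W.T @ W @ H + eps)
        W *= (T @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Hypothetical binary tag-image matrix (rows: images, columns: tags).
# Image 3 plausibly shares the last tag with image 2, but the entry is missing.
T = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 0.0]])
W, H = nmf(T, rank=2)
completed = W @ H
print(completed[3, 3])  # the missing entry is pulled above zero
```

Because the factorization is low-rank, images that share most of their tags end up on the same latent component, which is what lifts the missing entry toward its plausible value.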


joint international conference on information sciences | 2006

A Novel Feature Extraction Method for Facial Expression Recognition

Xiaoyi Feng; Baohua Lv; Zhen Li; Jiling Zhang

In this work, a novel facial feature extraction method is proposed for automatic facial expression recognition, which detects the local texture information, global texture information, and shape information of the face automatically to form the facial features. First, an Active Appearance Model (AAM) is used to locate facial feature points automatically. Then, the local texture information at these feature points and the global texture information of the whole face area are extracted with Local Binary Pattern (LBP) techniques, and the shape information of the face is detected as well. Finally, all of this information is combined into a single feature vector. The proposed feature extraction method is tested on the JAFFE database, and experimental results show that it is promising.


Computer Vision and Image Understanding | 2016

Spontaneous micro-expression spotting via geometric deformation modeling

Zhaoqiang Xia; Xiaoyi Feng; Jinye Peng; Xianlin Peng; Guoying Zhao

Highlights: A probabilistic framework is proposed to detect spontaneous micro-expression clips. The geometric deformation captured by an ASM model is utilized as the feature, and these features are robust to subtle head movement and illumination variation. The AdaBoost algorithm estimates the initial probability for each frame, and a random-walk algorithm computes the transition probabilities from deformation similarity. Extensive experiments are performed on two spontaneous datasets.

Facial micro-expressions are important and prevalent as they reveal the actual emotions of humans. In particular, automated micro-expression analysis as a substitute for human observers has recently begun to gain attention. However, largely unsolved problems in detecting micro-expressions for subsequent analysis need to be addressed, such as subtle head movements and unconstrained lighting conditions. To face these challenges, we propose a probabilistic framework to detect spontaneous micro-expression clips temporally in a video sequence (micro-expression spotting). In this framework, a random-walk model calculates the probability of each frame containing a micro-expression: an AdaBoost model estimates the initial probability for each frame, and the correlation between frames is incorporated into the random-walk model. The active shape model and Procrustes analysis, which are robust to head movement and lighting variation, are used to describe the geometric shape of the human face; the geometric deformation is then modeled and used for AdaBoost training. Experiments on two spontaneous micro-expression datasets verify the effectiveness of the proposed micro-expression spotting approach.
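The probability-propagation idea can be sketched as a random walk over a frame-similarity graph. All numbers below are synthetic: a one-hot vector stands in for the AdaBoost per-frame probabilities, random vectors stand in for the ASM deformation features, and the Gaussian similarity and fixed-point iteration are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def spot_probabilities(p0, feats, alpha=0.6, iters=50, sigma=1.0):
    """Propagate initial per-frame probabilities over a deformation-similarity
    graph: p <- alpha * A @ p + (1 - alpha) * p0, with row-stochastic A."""
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    A = np.exp(-d ** 2 / (2 * sigma ** 2))   # Gaussian frame-to-frame similarity
    A /= A.sum(axis=1, keepdims=True)        # normalize rows to transition probs
    p = p0.copy()
    for _ in range(iters):                   # fixed-point iteration
        p = alpha * (A @ p) + (1 - alpha) * p0
    return p

rng = np.random.default_rng(2)
feats = rng.normal(size=(20, 4))   # stand-in for ASM deformation features
feats[8:12] += 3.0                 # frames 8-11 share a distinct deformation
p0 = np.zeros(20)
p0[9] = 1.0                        # the per-frame "detector" fires on frame 9 only
p = spot_probabilities(p0, feats)
print(np.argmax(p))
```

Frames whose deformation resembles that of the detected frame inherit probability mass from it, which is how the correlation between frames sharpens a noisy per-frame detector into a clip-level spotting result.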


ieee international conference on automatic face gesture recognition | 2017

OULU-NPU: A Mobile Face Presentation Attack Database with Real-World Variations

Zinelabinde Boulkenafet; Jukka Komulainen; Lei Li; Xiaoyi Feng; Abdenour Hadid

The vulnerability of face-based biometric systems to presentation attacks has finally been recognized, yet we still lack generalized software-based face presentation attack detection (PAD) methods that perform robustly in practical mobile authentication scenarios. This is mainly because, although existing public face PAD datasets are beginning to cover a variety of attack scenarios and acquisition conditions, their standard evaluation protocols do not encourage researchers to assess the generalization capabilities of their methods across these variations. In the present work, we introduce a new public face PAD database, OULU-NPU, aimed at evaluating the generalization of PAD methods in more realistic mobile authentication scenarios across three covariates: unknown environmental conditions (namely illumination and background scene), acquisition devices, and presentation attack instruments (PAI). This publicly available database consists of 5940 videos corresponding to 55 subjects recorded in three different environments using the high-resolution frontal cameras of six different smartphones. The high-quality print and video-replay attacks were created using two different printers and two different display devices. Each of the four unambiguously defined evaluation protocols introduces at least one previously unseen condition into the test set, which enables a fair comparison of the generalization capabilities of new and existing approaches. Baseline results using a color-texture-analysis-based face PAD method demonstrate the challenging nature of the database.


international conference on image processing | 2016

An original face anti-spoofing approach using partial convolutional neural network

Lei Li; Xiaoyi Feng; Zinelabidine Boulkenafet; Zhaoqiang Xia; Mingming Li; Abdenour Hadid

Recently, deep convolutional neural networks have been successfully applied to many computer vision tasks and have achieved promising results, and some works have introduced deep learning into face anti-spoofing. However, most approaches use only the final fully-connected layer to distinguish real from fake faces. Inspired by the idea that each convolutional kernel can be regarded as a part filter, we extract deep partial features from the convolutional neural network (CNN) to distinguish real and fake faces. In the proposed approach, the CNN is first fine-tuned on the face-spoofing datasets. Then, a block principal component analysis (PCA) method is utilized to reduce the dimensionality of the features, which helps to avoid over-fitting. Lastly, a support vector machine (SVM) is employed to distinguish real and fake faces. Experiments on two publicly available databases, Replay-Attack and CASIA, show that the proposed method obtains satisfactory results compared with state-of-the-art methods.
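The block-PCA reduction step can be sketched with NumPy's SVD. Random activations stand in for the conv-layer features, and the block size and component count are arbitrary illustrative choices; the CNN fine-tuning and SVM stages are omitted.

```python
import numpy as np

def block_pca(features, n_blocks, k):
    """Split each feature vector into equal blocks and project every block
    onto its top-k principal axes (computed via SVD of the centred block)."""
    reduced = []
    for block in np.split(features, n_blocks, axis=1):
        centred = block - block.mean(axis=0)
        _, _, vt = np.linalg.svd(centred, full_matrices=False)
        reduced.append(centred @ vt[:k].T)   # rows of vt are principal axes
    return np.concatenate(reduced, axis=1)

rng = np.random.default_rng(3)
deep_feats = rng.normal(size=(40, 64))  # stand-in for conv-layer activations
reduced = block_pca(deep_feats, n_blocks=4, k=4)
print(reduced.shape)  # (40, 16): 4 blocks x 4 components each
```

Running PCA per block rather than on the full vector keeps each covariance estimate small relative to the sample count, which is the over-fitting argument made in the abstract.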


mexican international conference on artificial intelligence | 2005

Real time facial expression recognition using local binary patterns and linear programming

Xiaoyi Feng; Jie Cui; Matti Pietikäinen; Abdenour Hadid

In this paper, a fully automatic, real-time system is proposed to recognize the seven basic facial expressions (anger, disgust, fear, happiness, neutral, sadness, and surprise). First, faces are located and normalized based on an illumination-insensitive skin model and face segmentation; then, Local Binary Pattern (LBP) techniques, which are invariant to monotonic grey-level changes, are used for facial feature extraction; finally, the Linear Programming (LP) technique is employed to classify the seven facial expressions. Theoretical analysis and experimental results show that the proposed system performs well under moderate illumination changes and head rotations.


computer vision and pattern recognition | 2016

Towards Facial Expression Recognition in the Wild: A New Database and Deep Recognition System

Xianlin Peng; Zhaoqiang Xia; Lei Li; Xiaoyi Feng

Automatic facial expression recognition (FER) plays an important role in many fields. However, most existing FER techniques are devoted to tasks under constrained conditions, which differ from the spontaneous expression of actual emotions. Because acted databases designed to simulate spontaneous expression usually contain few samples, the ability to classify facial expressions is limited. In this paper, a novel database of natural facial expressions is constructed by leveraging social images, and a deep model is then trained on this naturalistic dataset. A large number of labeled social images are obtained from image search engines using specific keywords, and junk-image cleansing algorithms are then applied to remove mislabeled images. Based on the collected images, deep convolutional neural networks are trained to recognize these spontaneous expressions. Experiments show the advantages of the constructed dataset and the deep approach.

Collaboration


Dive into Xiaoyi Feng's collaborations.

Top Co-Authors

Jinye Peng, Northwestern Polytechnical University
Zhaoqiang Xia, Northwestern Polytechnical University
Xiaoyue Jiang, Northwestern Polytechnical University
Jianping Fan, Northwestern Polytechnical University
Jun Wu, Northwestern Polytechnical University
Lei Li, Northwestern Polytechnical University
Guiqing He, Northwestern Polytechnical University