Publication


Featured research published by Yuli Xue.


Digital Signal Processing | 2012

Speech emotion recognition: Features and classification models

Lijiang Chen; Xia Mao; Yuli Xue; L. L. Cheng

To address speaker-independent emotion recognition, a three-level speech emotion recognition model is proposed to classify six speech emotions, sadness, anger, surprise, fear, happiness and disgust, from coarse to fine. For each level, appropriate features are selected from 288 candidates using the Fisher rate, which also serves as an input parameter for the Support Vector Machine (SVM). To evaluate the proposed system, principal component analysis (PCA) for dimension reduction and an artificial neural network (ANN) for classification are adopted to design four comparative experiments: Fisher+SVM, PCA+SVM, Fisher+ANN and PCA+ANN. The experimental results show that Fisher is better than PCA for dimension reduction, and that SVM generalizes better than ANN for speaker-independent speech emotion recognition. The average recognition rates for the three levels are 86.5%, 68.5% and 50.2%, respectively.
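
A minimal sketch of the feature-selection stage described above: a per-feature Fisher ratio ranks the candidate acoustic features and the top-ranked subset feeds an SVM. The toy data stands in for the 288 candidate features; the `fisher_ratio` helper and all sizes are illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.svm import SVC

def fisher_ratio(X, y):
    """Per-feature Fisher ratio: between-class variance / within-class variance."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / (within + 1e-12)

# Toy data standing in for the 288 candidate acoustic features (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 288))
y = rng.integers(0, 2, size=120)      # one coarse binary split of the three-level model

scores = fisher_ratio(X, y)
top = np.argsort(scores)[::-1][:30]   # keep the 30 highest-ranked features
clf = SVC(kernel="rbf").fit(X[:, top], y)
print(clf.score(X[:, top], y))
```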


Quantum Information Processing | 2014

SQR: a simple quantum representation of infrared images

Suzhen Yuan; Xia Mao; Yuli Xue; Lijiang Chen; Qingxu Xiong; Angelo Compare

A simple quantum representation (SQR) of infrared images is proposed, based on the fact that infrared images reflect the infrared radiation energy of objects. The SQR model is inspired by the Qubit Lattice representation for color images. Instead of using the angle parameter of a qubit to store a color, as in the Qubit Lattice representation, the probability of a projective measurement is used, for the first time, to store the radiation energy value of each pixel. Since the relationship between radiation energy values and probability values can be quantified for a limited set of radiation energy values, the proposed model is clearer. Only simple quantum gates are used in image preparation, and a performance comparison with the latest flexible representation of quantum images shows that SQR achieves a quadratic speedup in quantum image preparation. Meanwhile, quantum infrared image operations, both global and local, can be performed conveniently on SQR. This paper provides a basic way to express infrared images on a quantum computer.
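
The core encoding idea can be illustrated with a classical single-qubit simulation: a normalized radiation-energy value becomes the probability of measuring |1>. This is only a sketch of that mapping in NumPy; the function names, the 256-level quantization and the shot count are assumptions for illustration, not the paper's circuits.

```python
import numpy as np

def encode_pixel(energy, levels=256):
    """Map a quantized radiation-energy value to a single-qubit state
    a|0> + b|1> whose |1> measurement probability equals energy/(levels-1)."""
    p = energy / (levels - 1)
    return np.array([np.sqrt(1.0 - p), np.sqrt(p)])

def measure_pixel(state, shots=10000, rng=None):
    """Estimate the stored energy by repeated projective measurement."""
    rng = rng or np.random.default_rng()
    p1 = state[1] ** 2
    ones = rng.binomial(shots, p1)
    return ones / shots

state = encode_pixel(200)    # pixel with radiation-energy value 200 of 255
print(measure_pixel(state))  # ~200/255 = 0.784
```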


Applied Intelligence | 2012

Mandarin emotion recognition combining acoustic and emotional point information

Lijiang Chen; Xia Mao; Pengfei Wei; Yuli Xue; Mitsuru Ishizuka

In this contribution, we introduce a novel approach that combines acoustic information and emotional point information for robust automatic recognition of a speaker's emotion. Six discrete emotional states are recognized. First, a multi-level model for emotion recognition from acoustic features is presented; the derived features are selected by Fisher rate to distinguish the different emotions. Second, a novel emotional point model for Mandarin is established with a Support Vector Machine and a Hidden Markov Model; this model contains 28 emotional syllables that carry rich emotional information. Finally, the acoustic information and the emotional point information are integrated by a soft decision strategy. Experimental results show that using emotional point information in speech emotion recognition is effective.
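
A minimal sketch of a soft-decision fusion step of the kind described above: posteriors over the six emotions from the acoustic model and the emotional-point model are combined by a weighted sum. The weight, the example posteriors and the function name are illustrative assumptions, not the paper's fusion rule.

```python
import numpy as np

EMOTIONS = ["sadness", "anger", "surprise", "fear", "happiness", "disgust"]

def soft_fuse(p_acoustic, p_point, w=0.6):
    """Weighted soft-decision fusion of two posterior distributions over
    the six emotions; w is the weight given to the acoustic stream."""
    fused = w * np.asarray(p_acoustic) + (1.0 - w) * np.asarray(p_point)
    fused /= fused.sum()
    return EMOTIONS[int(np.argmax(fused))], fused

# Illustrative posteriors from the two recognizers for one utterance.
p_acoustic = [0.05, 0.40, 0.10, 0.05, 0.30, 0.10]
p_point    = [0.02, 0.25, 0.05, 0.03, 0.60, 0.05]
print(soft_fuse(p_acoustic, p_point))
```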


IET Computer Vision | 2014

Facial expression recognition considering individual differences in facial structure and texture

Jizheng Yi; Xia Mao; Lijiang Chen; Yuli Xue; Angelo Compare

Facial expression recognition (FER) plays an important role in human-computer interaction. Recent years have witnessed a growing range of approaches to FER, but these approaches usually do not consider the effect of individual differences on the recognition result. When a face image changes from neutral to a certain expression, the changing information, constituted of structural characteristics and texture information, provides rich clues not seen in either face image alone, and is therefore believed to be of great importance for machine vision. This study proposes a novel FER algorithm that exploits the structural characteristics and the texture information hidden in the image space. First, the feature points are marked by an active appearance model. Second, three facial features, the feature point distance ratio coefficient, the connection angle ratio coefficient and the skin deformation energy parameter, are proposed to eliminate differences among individuals. Finally, a radial basis function neural network is used as the classifier for FER. Extensive experimental results on the Cohn-Kanade database and the Beihang University (BHU) facial expression database show significant advantages of the proposed method over existing ones.
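
To make the idea of ratio-based features concrete, here is a hedged sketch of one plausible distance-ratio computation: distances between corresponding feature points in an expression frame are divided by the same distances in the neutral frame, which cancels per-subject face scale. The exact definition of the paper's coefficients is not reproduced; the helper name and toy landmarks are assumptions.

```python
import numpy as np

def distance_ratio_coefficients(neutral_pts, expr_pts):
    """Ratio of corresponding feature-point distances between an expression
    frame and the neutral frame, cancelling the per-subject face scale."""
    def pairwise(pts):
        diff = pts[:, None, :] - pts[None, :, :]
        d = np.linalg.norm(diff, axis=-1)
        iu = np.triu_indices(len(pts), k=1)
        return d[iu]
    return pairwise(expr_pts) / (pairwise(neutral_pts) + 1e-12)

# Toy landmark sets standing in for AAM feature points (illustrative only).
rng = np.random.default_rng(1)
neutral = rng.uniform(0, 100, size=(10, 2))
expression = neutral + rng.normal(scale=2.0, size=(10, 2))  # small deformation
print(distance_ratio_coefficients(neutral, expression)[:5])
```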


Quantum Information Processing | 2015

Quantum morphology operations based on quantum representation model

Suzhen Yuan; Xia Mao; Tian Li; Yuli Xue; Lijiang Chen; Qingxu Xiong

Quantum morphology operations are proposed based on the novel enhanced quantum representation model. Two kinds are included: quantum binary and quantum grayscale morphology operations. Because dilation and erosion are fundamental to morphology, we focus on quantum binary and flat grayscale dilation and erosion and their corresponding circuits. As the basis for designing the binary operations, three quantum logic operations, AND, OR and NOT, involving two binary images are presented; quantum binary dilation and erosion can then be realized from these logic operations supplemented by quantum measurement operations. The flat grayscale dilation and erosion operations require searching for the maximum or minimum over a neighbourhood; here Grover's search algorithm is used to find these extrema, and since the gray level is represented by a quantum bit string, a quantum bit string comparator serves as the oracle in Grover's search. These quantum morphology operations exploit quantum parallelism, and the time complexity analysis shows that their time complexity is lower than or equal to that of the classical morphology operations.
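
For reference, this is a classical NumPy sketch of flat grayscale dilation and erosion, i.e. the neighbourhood maximum and minimum that the quantum circuits locate with Grover's search. It is not the quantum construction; the structuring element, image size and function names are illustrative assumptions.

```python
import numpy as np

def flat_dilate(img, se):
    """Flat grayscale dilation: maximum over the structuring-element
    neighbourhood of each pixel (the quantity Grover's search locates)."""
    h, w = img.shape
    k = se.shape[0] // 2
    padded = np.pad(img, k, mode="edge")
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            window = padded[i:i + se.shape[0], j:j + se.shape[1]]
            out[i, j] = window[se > 0].max()
    return out

def flat_erode(img, se):
    """Flat grayscale erosion via duality (valid for a symmetric structuring element)."""
    return 255 - flat_dilate(255 - img, se)

img = np.random.default_rng(2).integers(0, 256, size=(8, 8)).astype(np.uint8)
se = np.ones((3, 3), dtype=np.uint8)   # 3x3 square structuring element
print(flat_dilate(img, se))
print(flat_erode(img, se))
```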


IET Computer Vision | 2015

Trajectory-based view-invariant hand gesture recognition by fusing shape and orientation

Xingyu Wu; Xia Mao; Lijiang Chen; Yuli Xue

Traditional studies in vision-based hand gesture recognition rely on view-dependent representations, so users are forced to remain fronto-parallel to the camera. View-invariant gesture recognition aims to make the recognition result independent of viewpoint changes. However, in current work view-invariance is achieved at the price of confusing gesture patterns that have similar trajectory shapes but different semantic meanings; for example, the gesture 'push' can be mistaken for 'drag' from another viewpoint. To address this shortcoming, the authors use a shape descriptor to extract view-invariant features of a three-dimensional (3D) trajectory. Because the shape features are invariant to omnidirectional viewpoint changes, orientation features are then added to weight different rotation angles so that similar trajectory shapes are better separated. The proposed method was evaluated on two databases, the popular Australian Sign Language database and the challenging Kinect Hand Trajectory database. Experimental results show that the proposed algorithm achieves a higher average recognition rate than state-of-the-art approaches and can better distinguish confusing gestures while remaining view-invariant.
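
A hedged sketch of the shape-plus-orientation idea: a rotation-invariant shape part (normalized pairwise distances of the 3D trajectory) is concatenated with a weighted orientation part (here, a histogram of the x-y azimuth of successive displacements). The particular descriptors, the weight and the helper names are assumptions for illustration, not the paper's exact features.

```python
import numpy as np

def shape_features(traj):
    """View-invariant shape part: normalized pairwise distances of the 3D trajectory."""
    d = np.linalg.norm(traj[:, None, :] - traj[None, :, :], axis=-1)
    v = d[np.triu_indices(len(traj), k=1)]
    return v / (v.max() + 1e-12)

def orientation_features(traj, bins=8):
    """Orientation part: histogram of the x-y azimuth of successive displacement
    vectors, used to separate shapes that differ only in movement direction."""
    seg = np.diff(traj, axis=0)
    az = np.arctan2(seg[:, 1], seg[:, 0])
    hist, _ = np.histogram(az, bins=bins, range=(-np.pi, np.pi))
    return hist / (hist.sum() + 1e-12)

def fused_descriptor(traj, w_orient=0.5):
    """Concatenate shape and weighted orientation features into one vector."""
    return np.concatenate([shape_features(traj), w_orient * orientation_features(traj)])

traj = np.cumsum(np.random.default_rng(3).normal(size=(20, 3)), axis=0)  # toy 3D gesture
print(fused_descriptor(traj).shape)
```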


Journal of Mathematical Imaging and Vision | 2016

Point Context: An Effective Shape Descriptor for RST-Invariant Trajectory Recognition

Xingyu Wu; Xia Mao; Lijiang Chen; Yuli Xue; Alberto Rovetta

Motion trajectory recognition is important for characterizing the movement of an object. The speed and accuracy of trajectory recognition rely on a compact and discriminative feature representation, and variations in rotation, scaling and translation have to be handled explicitly. In this paper, we propose a novel feature extraction method for trajectories. First, a trajectory is represented by the proposed point context, a rotation-, scale- and translation-invariant shape descriptor with a flexible trade-off between complexity and discrimination, which we prove to be a complete shape descriptor. Second, the point context is nonlinearly mapped to a subspace by kernel nonparametric discriminant analysis to obtain a compact feature representation, so that a trajectory is projected to a low-dimensional feature space. Experimental results show that the proposed trajectory feature delivers encouraging improvements over state-of-the-art methods.
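
The paper's point context is not reproduced here; as a stand-in with the same invariances, the sketch below describes each trajectory point by its sorted distances to the other points, normalized by the mean pairwise distance, which is invariant to rotation, scaling and translation of the whole trajectory. The descriptor definition and the final check are illustrative assumptions.

```python
import numpy as np

def point_context(traj):
    """Per-point descriptor: sorted distances to every other point, normalized
    by the mean pairwise distance (RST-invariant stand-in, not the paper's exact form)."""
    d = np.linalg.norm(traj[:, None, :] - traj[None, :, :], axis=-1)
    scale = d[np.triu_indices(len(traj), k=1)].mean()
    d_norm = d / (scale + 1e-12)
    return np.sort(d_norm, axis=1)[:, 1:]   # drop the zero self-distance

traj = np.cumsum(np.random.default_rng(4).normal(size=(30, 2)), axis=0)  # toy 2D trajectory
pc = point_context(traj)

# RST check: the descriptor is unchanged under rotation, scaling and translation.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
traj2 = 2.5 * traj @ R.T + np.array([10.0, -3.0])
print(np.allclose(pc, point_context(traj2)))   # True
```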


International Conference on Pattern Recognition | 2014

View-Invariant Gesture Recognition Using Nonparametric Shape Descriptor

Xingyu Wu; Xia Mao; Lijiang Chen; Yuli Xue; Angelo Compare

In this paper we propose a new method for view-invariant gesture recognition based on what we call a nonparametric shape descriptor. We represent gestures as 3D motion trajectories and prove that the shape of a trajectory is equivalent to the set of Euclidean distances between all of its points. This point-to-point distance description is mapped to a high-dimensional kernel space by kernel principal component analysis (KPCA), and nonparametric discriminant analysis (NDA) is then used to extract the view-invariant shape features that are fed to the pattern classifier. The algorithm is evaluated on a public dataset and shows better view-invariant performance than other state-of-the-art methods.
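
A minimal sketch of the pipeline shape: the pairwise-distance description of each trajectory is passed through kernel PCA and then a discriminant step. scikit-learn has no NDA implementation, so plain linear discriminant analysis stands in for it here; the toy trajectories, labels and sizes are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def pairwise_distance_descriptor(traj):
    """Gesture trajectory -> vector of all point-to-point Euclidean distances."""
    d = np.linalg.norm(traj[:, None, :] - traj[None, :, :], axis=-1)
    return d[np.triu_indices(len(traj), k=1)]

# Toy set of 3D gesture trajectories with two classes (illustrative only).
rng = np.random.default_rng(5)
trajs = [np.cumsum(rng.normal(size=(15, 3)), axis=0) for _ in range(40)]
X = np.stack([pairwise_distance_descriptor(t) for t in trajs])
y = rng.integers(0, 2, size=40)

Z = KernelPCA(n_components=10, kernel="rbf").fit_transform(X)  # kernel mapping
clf = LinearDiscriminantAnalysis().fit(Z, y)                   # LDA stands in for NDA
print(clf.score(Z, y))
```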


PLOS ONE | 2015

Illumination Normalization of Face Image Based on Illuminant Direction Estimation and Improved Retinex

Jizheng Yi; Xia Mao; Lijiang Chen; Yuli Xue; Alberto Rovetta; Catalin-Daniel Caleanu

Illumination normalization of face images for face recognition and facial expression recognition is one of the most common and difficult problems in image processing. To obtain a face image with normal illumination, our method first divides the input face image into sixteen local regions and calculates the edge level percentage in each of them. Second, three local regions that have low complexity and a large average gray value are selected, and the final illuminant direction is computed from them using an error function between the measured and calculated intensities and a constraint function for an infinite light source model. Given this illuminant direction, the Retinex algorithm is improved in two ways: (1) the surround function is optimized; (2) the values at both ends of the face-image histogram are clipped, the remaining range of gray levels is determined, and that range is stretched to the dynamic range of the display device. Finally, illumination normalization is applied to obtain the final face image. Unlike previous illumination normalization approaches, the proposed method requires neither a training step nor knowledge of a 3D face or reflective surface model. Experimental results on the extended Yale face database B and CMU-PIE show that our method achieves better normalization than existing techniques.
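
A hedged sketch of step (2) above, histogram clipping and gray-level stretching: the extreme tails of the histogram are cut and the remaining range is stretched to the display's dynamic range. The clipping fractions, function name and toy image are assumptions, not the paper's exact parameters.

```python
import numpy as np

def intercept_and_stretch(img, low_frac=0.01, high_frac=0.01, out_max=255):
    """Clip the extreme tails of the gray-level histogram and stretch the
    remaining gray levels to the full dynamic range of the display."""
    lo = np.percentile(img, 100 * low_frac)
    hi = np.percentile(img, 100 * (1 - high_frac))
    clipped = np.clip(img.astype(np.float64), lo, hi)
    stretched = (clipped - lo) / (hi - lo + 1e-12) * out_max
    return stretched.astype(np.uint8)

# Toy low-contrast face image (illustrative only).
face = np.random.default_rng(6).integers(40, 180, size=(64, 64)).astype(np.uint8)
out = intercept_and_stretch(face)
print(out.min(), out.max())
```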


Applied Optics | 2014

Illuminant direction estimation for a single image based on local region complexity analysis and average gray value

Jizheng Yi; Xia Mao; Lijiang Chen; Yuli Xue; Angelo Compare

Illuminant direction estimation is an important research issue in image processing. Because texture information can be obtained from a single image at low cost, it is worthwhile to estimate the illuminant direction from scene texture. This paper proposes a novel method to estimate the illuminant direction on both color outdoor images and the extended Yale face database B. The luminance component is separated from the resized YCbCr image and its edges are detected with the Canny edge detector. The binary edge image is then divided into 16 local regions and the edge level percentage is calculated in each of them; this percentage is used to measure the complexity of each local region of the luminance component. Finally, using an error function between the measured and calculated intensities and a constraint function for an infinite light source model, we compute the illuminant directions of the luminance component's three local regions that have low complexity and a large average gray value, and synthesize them into the final illuminant direction. Unlike previous works, the proposed method requires neither the whole image nor textures from a training set. Experimental results show that the proposed method outperforms existing ones in both accuracy and execution time.
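
A minimal sketch of the edge level percentage used as the complexity measure above: the binary edge map is split into a 4x4 grid of local regions and the fraction of edge pixels is computed per region. The toy edge map stands in for a Canny result; the function name and grid handling are illustrative assumptions.

```python
import numpy as np

def edge_level_percentage(edge_img, grid=4):
    """Split a binary edge map into grid x grid local regions and return the
    fraction of edge pixels in each region (a simple complexity measure)."""
    h, w = edge_img.shape
    hs, ws = h // grid, w // grid
    pct = np.empty((grid, grid))
    for i in range(grid):
        for j in range(grid):
            block = edge_img[i * hs:(i + 1) * hs, j * ws:(j + 1) * ws]
            pct[i, j] = block.mean()
    return pct

# Toy binary edge map standing in for a Canny result (illustrative only).
edges = (np.random.default_rng(7).random((128, 128)) > 0.9).astype(np.float64)
print(edge_level_percentage(edges))
```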

