Publication


Featured research published by Enqing Chen.


International Symposium on Multimedia | 2012

Discriminative Multiple Canonical Correlation Analysis for Multi-feature Information Fusion

Lei Gao; Lin Qi; Enqing Chen; Ling Guan

This paper presents a novel approach for multi-feature information fusion. The proposed method is based on the Discriminative Multiple Canonical Correlation Analysis (DMCCA), which can extract more discriminative characteristics for recognition from multi-feature information representation. It represents the different patterns among multiple subsets of features identified by minimizing the Frobenius norm. We will demonstrate that the Canonical Correlation Analysis (CCA), the Multiple Canonical Correlation Analysis (MCCA), and the Discriminative Canonical Correlation Analysis (DCCA) are special cases of the DMCCA. The effectiveness of the DMCCA is demonstrated through experimentation in speaker recognition and speech-based emotion recognition. Experimental results show that the proposed approach outperforms the traditional methods of serial fusion, CCA, MCCA and DCCA.
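
A minimal two-view CCA sketch in NumPy is given below for reference, since the paper treats CCA as the base case that DMCCA generalizes; the function name, the regularization term and the fusion-by-concatenation usage note are illustrative assumptions, not the paper's notation.

    import numpy as np

    def cca(X, Y, reg=1e-6):
        """X: (n, dx), Y: (n, dy); one sample per row."""
        Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
        n = X.shape[0]
        Cxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])
        Cyy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
        Cxy = Xc.T @ Yc / n
        # Whiten each view, then take the SVD of the whitened cross-covariance.
        Kx = np.linalg.inv(np.linalg.cholesky(Cxx)).T
        Ky = np.linalg.inv(np.linalg.cholesky(Cyy)).T
        U, s, Vt = np.linalg.svd(Kx.T @ Cxy @ Ky)
        return Kx @ U, Ky @ Vt.T, s   # projection directions and canonical correlations

Projected features from the two views can then be concatenated (serial fusion of the canonical variates) before a classifier; DMCCA replaces the plain cross-covariance with class-aware correlations over more than two feature sets.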


IET Communications | 2009

Decision-directed channel estimation based on iterative linear minimum mean square error for orthogonal frequency division multiplexing systems

Jiankang Zhang; Xiaomin Mu; Enqing Chen; Shouyi Yang

A decision-directed (DD) channel estimation scheme based on iterative linear minimum mean square error (LMMSE) is proposed for orthogonal frequency division multiplexing systems. Existing DD channel estimation is well known to suffer from error propagation because of symbol-by-symbol detection. The proposed algorithm estimates the correction term for the current channel state information (CSI) from the error vector of the previous CSI by applying the orthogonality principle, and corrects the current CSI with this term. Analysis and simulation results show that the method avoids the error-propagation problem. Its performance is much better than that of conventional DD channel estimation and close to that of the optimal LMMSE estimator, at much lower computational complexity.
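
As a point of reference, a minimal NumPy sketch of LMMSE smoothing of a least-squares channel estimate is shown below; it assumes the channel frequency-correlation matrix and the noise variance are known, and it does not reproduce the paper's iterative decision-directed correction step.

    import numpy as np

    def lmmse_estimate(h_ls, R_hh, noise_var):
        """h_ls: (N,) least-squares estimate on pilot or decision symbols,
        R_hh: (N, N) channel correlation matrix, noise_var: noise power
        normalized by the (assumed unit) symbol energy."""
        N = R_hh.shape[0]
        W = R_hh @ np.linalg.inv(R_hh + noise_var * np.eye(N))  # LMMSE filtering matrix
        return W @ h_ls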


IEEE Signal Processing Letters | 2016

Improving Action Recognition Using Collaborative Representation of Local Depth Map Feature

Chengwu Liang; Enqing Chen; Lin Qi; Ling Guan

Based on depth information, this letter introduces a new local depth map feature describing the local spatiotemporal details of human motion, together with a collaborative representation classifier based on regularized least squares. By extracting a multilayered depth motion feature and applying a multiscale Histograms of Oriented Gradients (HOG) descriptor to it, the proposed feature characterizes the local temporal change of human motion and the local spatial structure (appearance) of an action. Instead of using class-specific dictionaries, the test action sample is represented collaboratively over a common, shared dictionary. Moreover, we present an analytical solution of the collaborative representation, which is independent of the query and can be precalculated as a projection matrix, leading to low computational cost in recognition. Evaluations on the MSRAction3D and MSRGesture3D datasets demonstrate its effectiveness.
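
The analytical, query-independent projection matrix mentioned above can be sketched as follows; this is the common collaborative-representation-with-regularized-least-squares formulation, with illustrative names and an illustrative lambda, and is not claimed to be the letter's exact decision rule.

    import numpy as np

    def crc_fit(D, lam=1e-2):
        """D: (d, n) dictionary, one training sample per column."""
        # The projection matrix depends only on D and lam, so it can be precomputed.
        return np.linalg.inv(D.T @ D + lam * np.eye(D.shape[1])) @ D.T

    def crc_predict(y, D, P, labels):
        """y: (d,) query feature; labels: (n,) class label of each column of D."""
        alpha = P @ y                                   # collaborative coding coefficients
        best_cls, best_score = None, np.inf
        for c in np.unique(labels):
            idx = labels == c
            resid = np.linalg.norm(y - D[:, idx] @ alpha[idx])
            score = resid / (np.linalg.norm(alpha[idx]) + 1e-12)  # class-wise regularized residual
            if score < best_score:
                best_cls, best_score = c, score
        return best_cls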


Multimedia Signal Processing | 2015

Spatio-Temporal Pyramid Model based on depth maps for action recognition

Haining Xu; Enqing Chen; Chengwu Liang; Lin Qi; Ling Guan

This paper presents a novel human action recognition method using depth maps. Each depth frame in a depth video sequence is projected onto three orthogonal Cartesian planes. Under each projection view, we divide the entire sequence of depth maps into several sub-actions. The absolute difference between two consecutive projected maps is accumulated through the depth video (several sub-actions) to form a Depth Motion Map (DMM) describing the dynamic feature of an action. The differences between consecutive projected maps that fall within a threshold are likewise accumulated over the entire depth video to form a Depth Static Map (DSM) describing the static feature. Collectively, we call them the Temporal Pyramid of Depth Model (TPDM). Spatial Pyramid Histograms of Oriented Gradients (SPHOG) are then computed from the TPDM to represent an action. For classification, we apply a support vector machine (SVM) to the proposed descriptors on the MSR Action3D dataset. Experimental results demonstrate the effectiveness of the proposed method.
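
One plausible reading of the DMM/DSM accumulation for a single projection view is sketched below in NumPy; the threshold value and function name are illustrative assumptions, not parameters taken from the paper.

    import numpy as np

    def dmm_dsm(projected_frames, thresh=10.0):
        """projected_frames: (T, H, W) projected depth maps of one view."""
        diffs = np.abs(np.diff(projected_frames.astype(np.float32), axis=0))
        dmm = diffs.sum(axis=0)                                  # Depth Motion Map: accumulated change
        dsm = np.where(diffs < thresh, diffs, 0.0).sum(axis=0)   # Depth Static Map: small changes only
        return dmm, dsm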


IEEE Transactions on Image Processing | 2018

Discriminative Multiple Canonical Correlation Analysis for Information Fusion

Lei Gao; Lin Qi; Enqing Chen; Ling Guan

In this paper, we propose the discriminative multiple canonical correlation analysis (DMCCA) for multimodal information analysis and fusion. DMCCA is capable of extracting more discriminative characteristics from multimodal information representations. Specifically, it finds the projected directions, which simultaneously maximize the within-class correlation and minimize the between-class correlation, leading to better utilization of the multimodal information. In the process, we analytically demonstrate that the optimally projected dimension by DMCCA can be quite accurately predicted, leading to both superior performance and substantial reduction in computational cost. We further verify that canonical correlation analysis (CCA), multiple canonical correlation analysis (MCCA) and discriminative canonical correlation analysis (DCCA) are special cases of DMCCA, thus establishing a unified framework for canonical correlation analysis. We implement a prototype of DMCCA to demonstrate its performance in handwritten digit recognition and human emotion recognition. Extensive experiments show that DMCCA outperforms the traditional methods of serial fusion, CCA, MCCA, and DCCA.
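
For the two-view case, the class-aware correlation matrices behind the "maximize within-class, minimize between-class correlation" objective can be sketched as below; projection directions then follow from the same whitening-and-SVD machinery as in the plain CCA sketch above, applied to Cw - eta * Cb instead of the plain cross-covariance. The weight eta and the variable names are illustrative, not the paper's notation.

    import numpy as np

    def class_correlations(X, Y, labels):
        """X: (n, dx), Y: (n, dy), labels: (n,) class labels."""
        Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
        same = (labels[:, None] == labels[None, :]).astype(float)
        Cw = Xc.T @ (same @ Yc)           # correlation accumulated over same-class sample pairs
        Cb = Xc.T @ ((1.0 - same) @ Yc)   # correlation accumulated over cross-class sample pairs
        return Cw, Cb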


Multimedia Signal Processing | 2015

Action recognition using multi-layer Depth Motion maps and Sparse Dictionary Learning

Chengwu Liang; Enqing Chen; Lin Qi; Ling Guan

In this paper, we propose a new spatio-temporal feature based method for human action recognition using depth image sequences. First, Layered Depth Motion maps (LDM) are utilized to capture the temporal motion feature. Next, multi-scale HOG descriptors are computed on the LDM to characterize the structural information of actions. Sparse coding is then applied for feature representation. An extended Sparse Fisher Discriminative Dictionary Learning (SDDL) model and its corresponding classification scheme are also introduced. In the SDDL model, the sub-dictionary is updated class by class, leading to class-specific, compact, discriminative dictionaries. The proposed method is evaluated on the public MSR Action3D dataset and demonstrates strong performance, especially in the cross-subject test.
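
Sparse coding of a feature over a fixed dictionary can be sketched with ISTA as below; the class-by-class dictionary update of the SDDL model itself is not reproduced, and lam and the iteration count are illustrative.

    import numpy as np

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def sparse_code(D, y, lam=0.1, n_iter=200):
        """D: (d, k) dictionary with unit-norm columns; y: (d,) feature vector."""
        step = 1.0 / (np.linalg.norm(D, ord=2) ** 2)   # 1 / Lipschitz constant of the gradient
        x = np.zeros(D.shape[1])
        for _ in range(n_iter):
            x = soft_threshold(x - step * (D.T @ (D @ x - y)), lam * step)
        return x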


International Conference on Multimedia and Expo | 2014

A fisher discriminant framework based on Kernel Entropy Component Analysis for feature extraction and emotion recognition

Lei Gao; Lin Qi; Enqing Chen; Ling Guan

This paper aims at providing a general method for feature extraction and recognition. The most essential issues for pattern recognition are extracting discriminant features and improving recognition accuracy. Kernel Entropy Component Analysis (KECA), a recent method for data transformation and dimensionality reduction, has attracted increasing attention. However, because KECA only reveals structure relating to the Renyi entropy of the input data set, it cannot effectively extract discriminant classification information for recognition. In this paper, we propose combining KECA with Fisher's linear discriminant analysis (LDA), utilizing the information-entropy descriptor together with the class scatter information to improve recognition performance. The proposed method is applied to speech-based emotion recognition and evaluated through experiments on the RML audiovisual emotion database. The results clearly demonstrate the effectiveness of the proposed solution.
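
A minimal sketch of the KECA-then-LDA pipeline is shown below, assuming an RBF kernel and scikit-learn's LDA; the kernel width, the number of retained components and the out-of-sample handling (omitted here) are illustrative assumptions.

    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def keca_features(X, n_components=10, gamma=1e-3):
        """Rank kernel eigenpairs by their contribution to the Renyi entropy
        estimate (the KECA criterion) rather than by eigenvalue alone."""
        K = rbf_kernel(X, gamma=gamma)
        eigvals, eigvecs = np.linalg.eigh(K)
        contrib = (np.sqrt(np.abs(eigvals)) * eigvecs.sum(axis=0)) ** 2
        top = np.argsort(contrib)[::-1][:n_components]
        return eigvecs[:, top] * np.sqrt(np.abs(eigvals[top]))   # entropy-preserving embedding

    # Usage sketch (X_train, y_train are hypothetical placeholders):
    # Z = keca_features(X_train)
    # clf = LinearDiscriminantAnalysis().fit(Z, y_train)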


Signal Processing Systems | 2013

Rotation Invariance in 2D-FRFT with Application to Digital Image Watermarking

Lei Gao; Lin Qi; Yongjin Wang; Enqing Chen; Shouyi Yang; Ling Guan

The extraction of rotation invariant representations is important for many signal processing tasks such as image analysis, computer vision and pattern recognition. In this paper, we show through mathematical analysis and extensive computer simulations that, under certain conditions, the Two-Dimensional Fractional Fourier Transform (2D-FRFT) possesses this attractive property. Based on this property, we propose a novel digital image watermarking method which combines a 2D chirp signal with the addition and rotation invariance properties of the 2D-FRFT in order to improve the robustness and security of digital image watermarks in the 2D-FRFT domain. Experimental results show that the proposed method is robust against numerous watermarking attacks, including rotational geometric transforms, JPEG compression, cropping, median filtering, histogram equalization, salt-and-pepper noise, Gaussian low-pass filtering, shifting, dithering and Gaussian noise, and is secure against unauthorized information copying and redistribution.
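
The additive structure of the embedding can be illustrated as below; NumPy's ordinary 2-D FFT is used only as a stand-in because no 2D-FRFT routine ships with NumPy or SciPy, so a 2D-FRFT implementation (with its fractional orders acting as a key) would have to be substituted to follow the paper's scheme. alpha and the non-blind extraction are illustrative.

    import numpy as np

    def embed(image, watermark, alpha=0.05):
        """image, watermark: real 2-D arrays of the same shape."""
        spectrum = np.fft.fft2(image)                 # stand-in for the 2D-FRFT
        marked = spectrum + alpha * watermark         # additive embedding in the transform domain
        return np.real(np.fft.ifft2(marked))

    def extract(marked_image, original_image, alpha=0.05):
        diff = np.fft.fft2(marked_image) - np.fft.fft2(original_image)
        return np.real(diff) / alpha                  # non-blind extraction, for illustration only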


International Conference on Biometrics: Theory, Applications and Systems | 2016

Depth-based action recognition using multiscale sub-actions depth motion maps and local auto-correlation of space-time gradients

Chengwu Liang; Lin Qi; Enqing Chen; Ling Guan

This paper presents a method for human action recognition from depth sequences. First, we subdivide the normalized motion energy vector into a set of segments, whose corresponding frame indices are used to partition a video. Each sub-action is then represented by three Depth Motion Maps (DMMs) to capture motion cues in three orthogonal projection views. Multi-scale Histogram of Oriented Gradients (HOG) descriptors are then computed from the DMMs to capture the appearance cues. To cope with the temporal information lost in DMM generation, a complementary 3D motion feature descriptor is extracted from the depth video using the local space-time auto-correlation of gradients (STACOG). Discriminative Multiple Canonical Correlation Analysis (DM-CCA) is then adopted to analyze the DMM-based features and the STACOG features, fusing the two sets of features into a more complete and discriminative representation of the information embedded in the data. l2-regularized Collaborative Representation Classification (l2-CRC) is applied to classify the proposed descriptors. Evaluations on the MSR Action3D and MSRGesture3D datasets demonstrate the effectiveness of the proposed method.
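
One plausible reading of the motion-energy based temporal partitioning is sketched below in NumPy: the normalized cumulative motion energy is split into equal-energy segments whose frame indices delimit the sub-actions. The number of segments and the energy definition are illustrative assumptions.

    import numpy as np

    def subaction_boundaries(depth_frames, n_segments=4):
        """depth_frames: (T, H, W) depth sequence."""
        diffs = np.abs(np.diff(depth_frames.astype(np.float32), axis=0))
        energy = diffs.reshape(diffs.shape[0], -1).sum(axis=1)   # per-frame motion energy
        cum = np.cumsum(energy) / energy.sum()                   # normalized cumulative energy
        cuts = [int(np.searchsorted(cum, k / n_segments)) + 1 for k in range(1, n_segments)]
        return [0] + cuts + [depth_frames.shape[0]]              # segment boundary frame indices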


Eighth International Conference on Graphic and Image Processing (ICGIP 2016) | 2017

Face recognition based on the band fusion of generalized phase spectrum of 2D-FrFT

Xu Wang; Lin Qi; Yun Tie; Enqing Chen; Huijing Sun

In this paper, we propose a novel feature extraction method for face recognition based on the two-dimensional fractional Fourier transform (2D-FrFT). First, we extract the phase information of the facial image in the 2D-FrFT domain, which we call the generalized phase spectrum (GPS). Then, we present an improved two-dimensional separability judgment (I2DSJ) to select appropriate order parameters for the discrete fractional Fourier transform. Finally, multiple orders' generalized phase spectrum bands (MGPSB) fusion is proposed. In order to make full use of the discriminative information from different orders for face recognition, the proposed approach merges the generalized phase spectra (GPS) of different orders of the 2D-FrFT. The proposed method does not need to construct a subspace through feature extraction methods and has lower computational cost. Experimental results on public face databases demonstrate that our method outperforms representative methods.

Collaboration


Dive into Enqing Chen's collaboration.

Top Co-Authors


Lin Qi

Zhengzhou University


Lei Gao

Zhengzhou University
