Tian Chunna
Xidian University
Publication
Featured research published by Tian Chunna.
International Conference on Signal Processing | 2008
Jiang Shan; Shuang Kai; Fan Guoliang; Tian Chunna; Wang Yu
We propose an improved TensorFaces algorithm for multi-view face recognition that integrates multi-linear analysis, manifold learning and statistical clustering in one framework. The training face images from different views are first mapped into a 2-D space by the Locality Preserving Projections (LPP) method, where statistical clustering is used to capture the view variability. A test image of an unknown view can then be projected into this 2-D space, and the two closest views can be identified. We develop a modified tensor decomposition method that incorporates the two closest views in the calculation of the identity coefficients. The proposed method is evaluated on a large set of multi-view face images drawn from the CMU PIE and Weizmann databases. Experimental results show that this method outperforms the original TensorFaces method.
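A minimal sketch of the view-selection step described above: given 2-D LPP embeddings of the training views (assumed precomputed) and one cluster center per view from the statistical clustering, a test embedding is compared against the centers and the two closest views are returned. The names `two_closest_views` and `view_centers` are illustrative assumptions, not the authors' code.

```python
import numpy as np

def two_closest_views(test_embedding, view_centers):
    """Return indices of the two view clusters nearest to the test point.

    test_embedding : (2,) array, the test image mapped into the 2-D LPP space.
    view_centers   : (V, 2) array, one cluster center per training view.
    """
    dists = np.linalg.norm(view_centers - test_embedding, axis=1)
    order = np.argsort(dists)
    return int(order[0]), int(order[1])

# Example with five synthetic view centers.
centers = np.array([[0.0, 0.0], [1.0, 0.2], [2.1, 0.1], [3.0, -0.1], [4.2, 0.0]])
test = np.array([1.4, 0.15])
print(two_closest_views(test, centers))  # -> (1, 2)
```

The two returned view indices would then feed the modified tensor decomposition when computing the identity coefficients.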
International Conference on Signal Processing | 2016
Wang Mengmeng; Tian Chunna; Gao Xinbo; Li Minglangjun
In this paper, we propose an effective visual tracking method based on candidates selected by deep features and CNSGM. Higher-level deep features carry semantic information, which enables the tracker to focus more on the target than on the background. A given convolutional layer contains many neurons, and different neurons respond to different things in the image. We pick the neuron with the largest response on the target to select object-related candidates for CNSGM, which reduces the computational load of the local sparse representation in CNSGM and improves tracking speed by around 6 times. The deep features also improve tracking accuracy by selecting object-related candidates. Experimental results on the TB-76 video database show that the proposed method achieves better overall performance.
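A rough sketch, under stated assumptions, of the candidate-selection idea: find the convolutional channel (neuron) with the strongest activation inside the target box, then keep only the candidate boxes whose mean activation in that channel is high, so the sparse-representation model scores far fewer candidates. The feature-map layout and box format here are hypothetical, not the paper's implementation.

```python
import numpy as np

def select_candidates(feat, target_box, candidate_boxes, keep=20):
    """feat: (C, H, W) conv activations; boxes are (x, y, w, h) on the feature map."""
    x, y, w, h = target_box
    # Mean activation of each channel inside the target region.
    target_resp = feat[:, y:y + h, x:x + w].reshape(feat.shape[0], -1).mean(axis=1)
    best_channel = int(np.argmax(target_resp))  # most target-related neuron

    scores = []
    for (cx, cy, cw, ch) in candidate_boxes:
        patch = feat[best_channel, cy:cy + ch, cx:cx + cw]
        scores.append(patch.mean() if patch.size else 0.0)
    # Keep only the highest-scoring, object-related candidates.
    order = np.argsort(scores)[::-1][:keep]
    return [candidate_boxes[i] for i in order]
```

Only the surviving candidates would be passed to the local sparse representation, which is where the reported speed-up comes from.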
International Conference on Signal Processing | 2016
Li Haiyang; Tian Chunna; Yuan Bo; Wei Wei
Facial expression details are important visual cues for interpreting expressions, yet they are hard to synthesize. We propose a novel facial expression transfer method that synthesizes realistic colorful facial expressions with source-specific expression details. First, we represent the expression images in the YUV color space. In the Y channel, the expression image of the source subject is transformed into the frequency domain through the discrete cosine transform. Then, the expression details of the source subject, including micro-expressions, are extracted by analyzing their frequency distribution. Finally, these expression details are stitched smoothly onto the target subject with the Poisson image editing method. The transferred result is combined with the U and V channels of the warped target expression to synthesize colorful, realistic expressions of the target subject, which reduces the effect of illumination and skin-color variations. Experiments on the Multi-PIE database show that our method achieves higher perceptual quality than state-of-the-art methods.
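A minimal sketch, assuming OpenCV and SciPy, of the detail-extraction and blending steps: work in YUV, take the DCT of the source Y channel, suppress the low-frequency block so only high-frequency expression details remain, then stitch the detail layer onto the warped target with Poisson (seamless) cloning. The low-frequency cutoff, the inset mask, and blending the details as a 3-channel image are illustrative assumptions, not the paper's exact procedure.

```python
import cv2
import numpy as np
from scipy.fft import dctn, idctn

def expression_details_y(src_bgr, low_freq_size=16):
    """Extract high-frequency expression details from the Y channel of the source."""
    yuv = cv2.cvtColor(src_bgr, cv2.COLOR_BGR2YUV)
    y = yuv[:, :, 0].astype(np.float64)
    coeffs = dctn(y, norm='ortho')
    coeffs[:low_freq_size, :low_freq_size] = 0.0   # drop coarse shape and illumination
    details = idctn(coeffs, norm='ortho')          # high-frequency expression details
    return np.clip(details + 128.0, 0, 255).astype(np.uint8)

def transfer_details(src_bgr, warped_target_bgr):
    """Blend source expression details onto the warped target (same image size assumed)."""
    details = expression_details_y(src_bgr)
    detail_bgr = cv2.cvtColor(details, cv2.COLOR_GRAY2BGR)  # seamlessClone needs 3 channels
    h, w = details.shape
    mask = np.zeros((h, w), np.uint8)
    mask[10:h - 10, 10:w - 10] = 255               # slightly inset whole-face mask
    center = (w // 2, h // 2)
    return cv2.seamlessClone(detail_bgr, warped_target_bgr, mask, center, cv2.NORMAL_CLONE)
```

In the method described above, the blended Y-channel result is then recombined with the U and V channels of the warped target expression to obtain the colorful output.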
Archive | 2013
Tian Chunna; Pu Qian; Gao Xinbo; Yuan Bo; Wang Daifu; Li Dongyang; Li Ying; Zhao Lin; Zheng Hong; Lu Yang
Archive | 2013
Wang Xiumei; Gao Xinbo; Ji Xiuyun; Tian Chunna; Li Jie; Han Bing; Deng Cheng; Wang Ying; Wang Bin
Archive | 2013
Tian Chunna; Gao Xinbo; Lu Yang; Wang Huaqing; Pu Qian; Li Dongyang; Wang Daifu; Zheng Hong; Zhang Xiangnan; Yang Erkun
Archive | 2015
Wang Xiumei; Ding Lijie; Gao Xinbo; Tian Chunna; Deng Cheng; Han Bing; Niu Zhenxing; Ji Xiuyun
Archive | 2012
Gao Xinbo; Tian Chunna; Li Liang; Li Ying; Yan Jianqiang; Wang Xiumei; Sun Libin; Yuan Bo; Zhao Lin; Yang Xi
Journal of Systems Engineering and Electronics | 2012
Tian Chunna; Gao Xinbo
Archive | 2017
Wang Xiumei; Wang Xinxin; Gao Xinbo; Zhang Tianzhen; Li Jie; Tian Chunna; Deng Cheng