Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Chun Qi is active.

Publication


Featured research published by Chun Qi.


Pattern Recognition | 2010

Hallucinating face by position-patch

Xiang Ma; Junping Zhang; Chun Qi

A novel face hallucination method is proposed in this paper for reconstructing a high-resolution face image from a low-resolution observation based on a set of high- and low-resolution training image pairs. Different from most established methods based on probabilistic or manifold learning models, the proposed method hallucinates each high-resolution image patch from the image patches at the same position in every training image. The optimal weights of the training position-patches are estimated, and the hallucinated patches are reconstructed using the same weights. The final high-resolution facial image is formed by integrating the hallucinated patches. The necessity of a two-step framework or residue compensation, as well as the differences between patch-based and global-image hallucination, are discussed. Experiments show that the proposed method, without residue compensation, generates higher-quality images and requires less computation time than several recent face image super-resolution (hallucination) techniques.
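
The core of the position-patch idea can be illustrated with a few lines of linear algebra. The sketch below is a minimal, hypothetical implementation: it estimates combination weights for one patch position with unconstrained least squares and reuses them on the high-resolution training patches; the paper's exact formulation (e.g., any constraint or regularization on the weights) may differ, and all array shapes and names are assumptions.

```python
# Minimal sketch of position-patch hallucination for a single patch position.
# All shapes and names are assumptions; the paper's exact weight estimation
# (e.g., constrained least squares) may differ from the plain lstsq used here.
import numpy as np

def hallucinate_patch(lr_patch, lr_train_patches, hr_train_patches):
    """lr_patch: (d_lr,); lr_train_patches: (d_lr, M); hr_train_patches: (d_hr, M)."""
    # Weights over the M training images that best reconstruct the LR input patch
    # from the LR training patches at the same position.
    w, *_ = np.linalg.lstsq(lr_train_patches, lr_patch, rcond=None)
    # Reuse the same weights on the HR training patches at that position.
    return hr_train_patches @ w

# Toy usage with random data standing in for aligned face patches.
rng = np.random.default_rng(0)
M, d_lr, d_hr = 50, 8 * 8, 32 * 32
lr_train = rng.standard_normal((d_lr, M))
hr_train = rng.standard_normal((d_hr, M))
lr_obs = rng.standard_normal(d_lr)
print(hallucinate_patch(lr_obs, lr_train, hr_train).shape)  # (1024,)
```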


IEEE Transactions on Image Processing | 2015

Background Subtraction Based on Low-Rank and Structured Sparse Decomposition

Xin Liu; Guoying Zhao; Jiawen Yao; Chun Qi

Low-rank and sparse representation based methods, which make few specific assumptions about the background, have recently attracted wide attention in background modeling. With these methods, moving objects in the scene are modeled as pixel-wise sparse outliers. However, in many practical scenarios, the distributions of these moving parts are not truly pixel-wise sparse but structurally sparse. Meanwhile, a robust analysis mechanism is required to handle background regions or foreground movements with varying scales. Based on these two observations, we first introduce a class of structured sparsity-inducing norms to model moving objects in videos. In our approach, we regard the observed sequence as being constituted of two terms: a low-rank matrix (background) and a structured sparse outlier matrix (foreground). Next, by virtue of adaptive parameters for dynamic videos, we propose a saliency measurement to dynamically estimate the support of the foreground. Experiments on challenging, well-known datasets demonstrate that the proposed approach outperforms state-of-the-art methods and works effectively on a wide range of complex videos.
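
To make the decomposition concrete, here is a heavily simplified sketch of the low-rank plus structured-sparse idea: the frame matrix is split by alternating a singular-value-thresholding step (low-rank background) with a group soft-thresholding step over small pixel blocks (structured-sparse foreground). It is only a stand-in under assumed names and parameters; the paper's actual norms, solver, and saliency-driven parameter adaptation are not reproduced.

```python
# Simplified stand-in for the low-rank + structured-sparse decomposition D = L + S:
# alternate singular-value thresholding (low-rank background L) with group
# soft-thresholding over small pixel blocks (structured-sparse foreground S).
# Parameter values and the grouping are illustrative only.
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def group_soft_threshold(M, lam, group_size=4):
    """Shrink blocks of `group_size` pixels jointly within each frame (column)."""
    S = np.zeros_like(M)
    for g in range(0, M.shape[0], group_size):
        block = M[g:g + group_size]
        norms = np.linalg.norm(block, axis=0, keepdims=True)
        S[g:g + group_size] = block * np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
    return S

def decompose(D, tau=1.0, lam=0.5, iters=30):
    L, S = np.zeros_like(D), np.zeros_like(D)
    for _ in range(iters):
        L = svt(D - S, tau)                   # background: low-rank part
        S = group_soft_threshold(D - L, lam)  # foreground: structured-sparse part
    return L, S

# Toy usage: a "video" of 20 frames, each with 100 pixels, stacked as columns.
D = np.random.default_rng(0).standard_normal((100, 20))
L, S = decompose(D)
```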


International Conference on Multimedia and Expo | 2009

Position-based face hallucination method

Xiang Ma; Junping Zhang; Chun Qi

In this paper, we propose a novel face hallucination method to reconstruct a high-resolution face image from a low-resolution observation based on a set of high- and low-resolution local training image pairs. Instead of relying on probabilistic or manifold learning models, the proposed method synthesizes each high-resolution image patch from the image patches at the same position in the training image pairs. A cost function is formulated to obtain the optimal weights of the training position-patches, and the high-resolution patches are reconstructed using the same weights. The final high-resolution facial image is formed by integrating the hallucinated patches. Experiments show that the proposed method, without residue compensation, generates higher-quality images than several existing methods.
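
A schematic form of such a per-position cost is sketched below; the notation (input LR patch X_p, LR training patches Y_p^m, weights w_p^m) is assumed here for illustration, and the exact constraint or regularization used in the paper may differ.

```latex
% Schematic per-position cost (assumed notation, not taken verbatim from the paper):
% estimate the weights in the LR space, then reuse them on the HR training patches.
\min_{w_p}\; \Big\| X_p - \sum_{m=1}^{M} w_p^{m}\, Y_p^{m} \Big\|_2^2
\quad \text{s.t.} \quad \sum_{m=1}^{M} w_p^{m} = 1,
\qquad
\hat{X}_p^{\mathrm{HR}} = \sum_{m=1}^{M} w_p^{m}\, Y_p^{\mathrm{HR},\,m}
```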


Neurocomputing | 2013

Future-data driven modeling of complex backgrounds using mixture of Gaussians

Xin Liu; Chun Qi

Mixture of Gaussians (MoG) is well known for effectively handling background variations and has been widely adopted for background subtraction. However, in complex backgrounds, MoG often struggles to balance model convergence speed and stability. The main difficulty is the selection of learning rates. In this paper, an effective learning strategy is proposed to provide better regularization of background adaptation for MoG. First, the video data are split into future data and history data, and a set of background distributions (MoG) is computed for each. To distinguish between dynamic and static background, the equality of the two sets is tested with a hypothesis test. Next, a two-layer LBP-based method is proposed for foreground classification. Finally, the global, static learning rate is replaced by adaptive learning rates assigned to pixels with distinct properties in each frame. By means of the proposed learning strategy, a novel background model for detecting foreground objects in complex environments is established. We compare our procedure against state-of-the-art alternatives; the experimental results show that the proposed learning-rate control strategy achieves better learning speed and accuracy than existing MoG approaches.
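
As a rough illustration of where an adaptive learning rate enters a MoG background model, the per-pixel update below follows a classic Stauffer-Grimson-style recipe with the learning rate passed in per pixel. The future/history split, the hypothesis test, and the two-layer LBP classification from the paper are not reproduced, and the simplifications are noted in the comments.

```python
# Per-pixel MoG update with a caller-supplied (adaptive) learning rate `alpha`.
# Simplified sketch: rho is set to alpha, and "no matched component" is taken as
# foreground, which omits the usual weight/variance ranking of background modes.
import numpy as np

def update_pixel_mog(x, means, variances, weights, alpha):
    """x: pixel intensity; means/variances/weights: (K,) arrays for K Gaussians."""
    d2 = (x - means) ** 2 / variances
    matched = d2 < 2.5 ** 2                              # 2.5-sigma match test
    weights = (1.0 - alpha) * weights + alpha * matched  # grow matched, decay others
    if matched.any():
        k = np.argmin(np.where(matched, d2, np.inf))     # closest matched component
        rho = alpha                                      # simplified second-order rate
        means[k] = (1 - rho) * means[k] + rho * x
        variances[k] = (1 - rho) * variances[k] + rho * (x - means[k]) ** 2
        foreground = False
    else:
        k = np.argmin(weights)                           # replace the weakest component
        means[k], variances[k], weights[k] = x, 15.0 ** 2, alpha
        foreground = True
    return means, variances, weights / weights.sum(), foreground

# Toy usage: a slow rate for static pixels, a faster one for dynamic-background pixels.
alpha_static, alpha_dynamic = 0.005, 0.05
means = np.array([100.0, 150.0, 200.0])
variances = np.full(3, 20.0 ** 2)
weights = np.full(3, 1.0 / 3.0)
means, variances, weights, fg = update_pixel_mog(105.0, means, variances, weights, alpha_static)
```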


Pattern Recognition | 2014

Global consistency, local sparsity and pixel correlation: A unified framework for face hallucination

Jingang Shi; Xin Liu; Chun Qi

In this paper, a novel two-phase framework is presented to deal with the face hallucination problem. In the first phase, an initial high-resolution (HR) face image is produced patch-wise. Each input low-resolution (LR) patch is represented as a linear combination of training patches, and the corresponding HR patch is estimated with the same combination coefficients. Since training patches similar to the input tend to provide more appropriate textures in the reconstruction, we regularize the combination coefficients with a weighted ℓ2-norm minimization term which enlarges the coefficients of relevant patches. The HR face image is then initialized by integrating all the HR patches. In the second phase, three regularization models are introduced to produce the final HR face image. Different from most previous approaches, which consider global and local priors separately, the proposed algorithm incorporates the global reconstruction model, the local sparsity model, and the pixel correlation model into a unified regularization framework. Initializing the regularization problem with the HR image obtained in the first phase, the final output HR image is optimized through an iterative procedure. Experimental results show that the proposed algorithm achieves better performance in both reconstruction error and visual quality.
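
One possible reading of the first-phase coefficient estimation is a ridge-type solve in which the penalty on each coefficient grows with the distance between its training patch and the input patch, so that similar patches receive larger coefficients. The sketch below is a hypothetical illustration of that idea; the paper's actual weighting scheme and parameters may differ.

```python
# Weighted l2-regularized coefficients for one LR patch; the same coefficients
# are then reused on the HR training patches. The distance-based penalty weights
# are an assumption made for illustration.
import numpy as np

def weighted_l2_coefficients(x, Y, lam=0.01):
    """x: (d,) input LR patch; Y: (d, M) LR training patches; returns (M,) coefficients."""
    dist = np.linalg.norm(Y - x[:, None], axis=0)    # distance of each training patch to x
    D = np.diag(dist / (dist.max() + 1e-12))         # dissimilar patches get a larger penalty
    return np.linalg.solve(Y.T @ Y + lam * (D.T @ D), Y.T @ x)

# Toy usage: reuse the coefficients in the HR space.
rng = np.random.default_rng(0)
Y_lr, Y_hr = rng.standard_normal((64, 30)), rng.standard_normal((256, 30))
x_lr = rng.standard_normal(64)
c = weighted_l2_coefficients(x_lr, Y_lr)
x_hr = Y_hr @ c
```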


Neurocomputing | 2016

Saliency detection based on global and local short-term sparse representation

Qiang Fan; Chun Qi

Saliency detection is an important issue in many computer vision tasks. In this paper, we propose a novel bottom-up saliency detection method based on sparse representation. Saliency detection involves two elements: image representation and saliency measurement. For an input image, the ICA algorithm is first employed to learn a set of basis functions, and the image is then represented with this set of basis functions. Next, a global and local saliency framework is employed to measure the saliency: the global saliency is obtained through Low-Rank Representation (LRR), and the local saliency is obtained through a sparse coding scheme. The proposed method is compared with six state-of-the-art methods on two popular human eye fixation datasets; the experimental results demonstrate the accuracy of the proposed method in predicting human eye fixations.
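
The sketch below only mirrors the overall shape of the pipeline: patches are coded on an ICA-learned basis, a local score is taken from the residual of a truncated (sparse-like) reconstruction, and a global score is taken from the rarity of each code. The paper's LRR-based global measure and its exact sparse coding scheme are replaced by these simple stand-ins, and the random "patches" are placeholders.

```python
# Global + local saliency skeleton: ICA basis (scikit-learn FastICA), local term =
# residual of a top-k truncated code, global term = code rarity (a crude stand-in
# for the paper's Low-Rank Representation term).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
patches = rng.standard_normal((500, 49))        # toy data: 500 patches of 7x7 pixels

ica = FastICA(n_components=25, random_state=0, max_iter=500)
codes = ica.fit_transform(patches)              # responses of each patch on the ICA basis

k = 5                                           # keep only the k strongest coefficients
ranks = np.argsort(np.argsort(np.abs(codes), axis=1), axis=1)
sparse_codes = np.where(ranks >= codes.shape[1] - k, codes, 0.0)
recon = ica.inverse_transform(sparse_codes)

local_saliency = np.linalg.norm(patches - recon, axis=1)              # poorly coded -> locally salient
global_saliency = np.linalg.norm(codes - codes.mean(axis=0), axis=1)  # rare code -> globally salient
saliency = local_saliency * global_saliency     # simple combination of the two cues
```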


Neurocomputing | 2014

Letters: Quasi-Newton Iterative Projection Algorithm for Sparse Recovery

Mingli Jing; Xueqin Zhou; Chun Qi

A computationally simple and efficient algorithm for compressed sensing is proposed. The algorithm, a simple combination of the orthogonal projection algorithm and a novel quasi-Newton optimization scheme, is termed Quasi-Newton Iterative Projection (QNIP). The proposed algorithm has two main advantages. First, its computation is very simple, involving only the application of the sampling matrix and its transpose at each iteration. Second, it requires fewer iterations to converge while providing a higher rate of perfect recovery than the reference algorithms. The performance of the proposed algorithm is validated through theoretical analysis as well as numerical examples.
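
The exact quasi-Newton update is specific to the paper, but the family it belongs to is easy to sketch: each iteration applies only the sampling matrix and its transpose, then projects onto the set of k-sparse vectors. The code below is an iterative-hard-thresholding-style stand-in under assumed names, not QNIP itself.

```python
# Gradient/projection sketch for sparse recovery from y = A x with x k-sparse.
# Each iteration uses only A and A.T plus a hard-thresholding projection; the
# step size and iteration count are illustrative, not the paper's quasi-Newton rule.
import numpy as np

def sparse_recover(y, A, k, iters=200):
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # conservative step from the spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + step * A.T @ (y - A @ x)        # gradient (Landweber) step
        x[np.argsort(np.abs(x))[:-k]] = 0.0     # keep only the k largest entries
    return x

# Toy usage: recover a k-sparse vector from compressive Gaussian measurements.
rng = np.random.default_rng(0)
n, m, k = 256, 128, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
x_hat = sparse_recover(A @ x_true, A, k)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # relative recovery error
```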


International Conference on Multimedia and Expo | 2014

Foreground detection using low rank and structured sparsity

Jiawen Yao; Xin Liu; Chun Qi

In this paper, a novel foreground detection method based on a two-stage framework is presented. In the first stage, a class of structured sparsity-inducing norms is introduced to model moving objects in videos, so that the observed sequence is regarded as the sum of a low-rank matrix and a structured sparse outlier matrix. In the second stage, by virtue of adaptive parameters, the proposed method includes a motion saliency measurement to dynamically estimate the support of the foreground. Experiments on challenging datasets demonstrate that the proposed approach outperforms state-of-the-art methods and works effectively on a wide range of complex videos.


International Conference on Image Analysis and Recognition | 2009

An Example-Based Two-Step Face Hallucination Method through Coefficient Learning

Xiang Ma; Junping Zhang; Chun Qi

Face hallucination reconstructs a high-resolution face image from a low-resolution one based on a set of high- and low-resolution training image pairs. This paper proposes an example-based two-step face hallucination method through coefficient learning. First, the low-resolution input image and the low-resolution training images are interpolated to the same high-resolution space. By minimizing the squared distance between the interpolated low-resolution input and a linear combination of the interpolated training images, the optimal coefficients of the interpolated training images are estimated. Then, replacing the interpolated training images with the corresponding high-resolution training images in the linear combination, the result of the first step is obtained. Furthermore, a local residue compensation scheme based on position is proposed to better recover the high-frequency information of the face. Experiments demonstrate that our method can synthesize distinct high-resolution faces.
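
The first step can be pictured as plain coefficient learning in a common interpolated space. The sketch below upsamples with scipy.ndimage.zoom as a bicubic-style stand-in, solves a least-squares problem for the global combination coefficients, and reuses them on the true high-resolution training images; the position-based residue compensation of the second step is not reproduced, and all sizes and names are assumptions.

```python
# Step 1 of the two-step scheme (coefficient learning in the interpolated space).
# scipy.ndimage.zoom stands in for the interpolation used in the paper.
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(0)
M, lr_size, hr_size = 40, (16, 16), (64, 64)
lr_train = rng.standard_normal((M, *lr_size))   # toy LR training faces
hr_train = rng.standard_normal((M, *hr_size))   # toy HR training faces
lr_input = rng.standard_normal(lr_size)

scale = hr_size[0] / lr_size[0]
interp = lambda img: zoom(img, scale, order=3).ravel()        # cubic upsampling to HR size

X = np.stack([interp(im) for im in lr_train], axis=1)         # interpolated LR training images
c, *_ = np.linalg.lstsq(X, interp(lr_input), rcond=None)      # optimal combination coefficients
hr_step1 = (hr_train.reshape(M, -1).T @ c).reshape(hr_size)   # same coefficients on HR images
```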


International Conference on Image Analysis and Processing | 2009

Hallucinating Faces: Global Linear Model Based Super-Resolution and Position Based Residue Compensation

Xiang Ma; Junping Zhang; Chun Qi

A learning-based face hallucination method is proposed in this paper for reconstructing a high-resolution face image from a low-resolution observation based on a set of high- and low-resolution training image pairs. The proposed global linear model based super-resolution estimates the optimal weights of all the low-resolution training images, and a high-resolution image is obtained by applying the estimated weights in the high-resolution space. Then, a position-based local residue compensation algorithm is proposed to better recover subtle details of the face. Experiments demonstrate that our method has advantages over several established methods.

Collaboration


Dive into Chun Qi's collaborations.

Top Co-Authors

Xiang Ma
Xi'an Jiaotong University

Jiawen Yao
University of Texas at Arlington

Jie Li
Xi'an Jiaotong University