Publication


Featured research published by Qinping Zhao.


International Conference on Computer Graphics and Interactive Techniques | 2012

Manifold preserving edit propagation

Xiaowu Chen; Dongqing Zou; Qinping Zhao; Ping Tan

We propose a novel edit propagation algorithm for interactive image and video manipulation. Our approach uses locally linear embedding (LLE) to represent each pixel as a linear combination of its neighbors in a feature space. While previous methods require similar pixels to have similar results, we instead seek to maintain the manifold structure formed by all pixels in the feature space. Specifically, we require each pixel to remain the same linear combination of its neighbors in the result. Compared with previous methods, the proposed algorithm is more robust to color blending in the input data. Furthermore, since every pixel is related to only a few nearest neighbors, the algorithm readily achieves good runtime efficiency. We demonstrate manifold preserving edit propagation on a variety of applications.
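The two ingredients of the abstract — LLE reconstruction weights and a propagation energy that keeps each pixel the same linear combination of its neighbors — can be sketched as follows. This is a minimal illustration, assuming scalar edit values and dense linear algebra; the names and parameters (`lle_weights`, `propagate`, `lam`) are hypothetical, not from the paper.

```python
import numpy as np

def lle_weights(features, k=4, reg=1e-3):
    # For each pixel, solve for the weights that reconstruct its feature
    # vector from its k nearest neighbors (weights sum to one), as in
    # locally linear embedding.
    n = len(features)
    W = np.zeros((n, n))
    for i in range(n):
        d = np.linalg.norm(features - features[i], axis=1)
        d[i] = np.inf
        nbrs = np.argsort(d)[:k]
        Z = features[nbrs] - features[i]               # centered neighbors
        C = Z @ Z.T                                    # local Gram matrix
        C += (reg * np.trace(C) + 1e-12) * np.eye(k)   # regularize
        w = np.linalg.solve(C, np.ones(k))
        W[i, nbrs] = w / w.sum()                       # enforce sum-to-one
    return W

def propagate(W, edits, mask, lam=10.0):
    # Minimize |r - W r|^2 + lam * sum_i mask_i (r_i - edits_i)^2:
    # every pixel stays the same linear combination of its neighbors
    # while user-edited pixels (mask == 1) are pulled toward the edit.
    n = W.shape[0]
    I = np.eye(n)
    A = (I - W).T @ (I - W) + lam * np.diag(mask)
    return np.linalg.solve(A, lam * mask * edits)
```

Because each row of `W` sums to one, a constant edit propagates unchanged; in practice `W` is sparse (k nonzeros per row) and the system is solved with a sparse solver.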


Computer Vision and Pattern Recognition | 2013

Image Matting with Local and Nonlocal Smooth Priors

Xiaowu Chen; Dongqing Zou; Steven Zhiying Zhou; Qinping Zhao; Ping Tan

In this paper we propose a novel alpha matting method with local and nonlocal smooth priors. We observe that manifold preserving edit propagation [4] essentially introduces a nonlocal smooth prior on the alpha matte. This nonlocal smooth prior and the well-known local smooth prior from the matting Laplacian complement each other, so we combine them with a simple data term from color sampling in a graph model for natural image matting. Our method has a closed-form solution and can be solved efficiently. Compared with state-of-the-art methods, it produces more accurate results in evaluations on standard benchmark datasets.
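The closed-form solution reduces to one linear system combining both smoothness priors with the data term. A dense toy version, with hypothetical names (`alpha_hat` and `conf` standing in for the color-sampling estimates and confidences), might look like:

```python
import numpy as np

def solve_matte(L_local, L_nonlocal, alpha_hat, conf, w_nl=1.0, lam=1.0):
    # Minimize a^T (L_local + w_nl * L_nonlocal) a
    #        + lam * sum_i conf_i * (a_i - alpha_hat_i)^2,
    # i.e. local + nonlocal smoothness plus a color-sampling data term.
    A = L_local + w_nl * L_nonlocal + lam * np.diag(conf)
    return np.linalg.solve(A, lam * conf * alpha_hat)

# Toy example: a 4-pixel "image" on a chain, with confident estimates
# only at the two ends (alpha = 0 and alpha = 1); both priors are the
# chain's graph Laplacian here for illustration.
L = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
alpha = solve_matte(L, L, np.array([0., 0., 0., 1.]), np.array([1., 0., 0., 1.]))
```

The solution interpolates smoothly between the two confident pixels, which is exactly the behavior the combined priors are meant to produce.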


Computer Vision and Pattern Recognition | 2010

Rectilinear parsing of architecture in urban environment

Peng Zhao; Tian Fang; Jiangxiong Xiao; Honghui Zhang; Qinping Zhao; Long Quan

We propose an approach that parses registered images captured at ground level into architectural units for large-scale city modeling. Each parsed unit has a regularized shape that can be used for further modeling. In our approach, we first parse the environment into buildings, the ground, and the sky using a joint 2D-3D segmentation method. Then we partition the buildings into individual façades; the partition problem is formulated as a dynamic programming optimization over a sequence of natural vertical separating lines. Each façade is regularized by a floor line and a roof line: the floor line is the intersection of the vertical plane of the building and the horizontal plane of the ground, and the roof line links the edge points of the roof region. The parsed results provide a first geometric approximation of the city environment and can be further analyzed if necessary. The approach is demonstrated and validated on several large-scale city datasets.
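The dynamic programming step — choosing a sequence of vertical separating lines subject to a minimum façade width — can be illustrated with a 1-D toy version. The per-column separator `scores` and the `min_gap` constraint are hypothetical stand-ins for the paper's actual cost terms.

```python
def best_cuts(scores, min_gap):
    # dp[i] = best total separator score using only columns < i,
    # with any two chosen cuts at least min_gap columns apart.
    n = len(scores)
    dp = [0.0] * (n + 1)
    take = [False] * (n + 1)
    for i in range(1, n + 1):
        skip = dp[i - 1]                               # no cut at column i-1
        cut = scores[i - 1] + dp[max(i - min_gap, 0)]  # cut at column i-1
        if cut > skip:
            dp[i], take[i] = cut, True
        else:
            dp[i] = skip
    cuts, i = [], n                                    # backtrack the choices
    while i > 0:
        if take[i]:
            cuts.append(i - 1)
            i -= min_gap
        else:
            i -= 1
    return sorted(cuts)
```

With `scores = [5, 1, 1, 4, 1, 1, 3]` and `min_gap = 3`, the optimum keeps the three strong separators at columns 0, 3, and 6.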


Computer Vision and Pattern Recognition | 2011

Face illumination transfer through edge-preserving filters

Xiaowu Chen; Mengmeng Chen; Xin Jin; Qinping Zhao

This article proposes a novel image-based method to transfer illumination from a reference face image to a target face image through edge-preserving filters. Only a single reference image is needed, without any knowledge of the 3D geometry or material properties of the target face. After face alignment, we first decompose the lightness layers of the reference and target images into large-scale and detail layers using a weighted least squares (WLS) filter. The large-scale layer of the reference image is then filtered with the guidance of the target image. Adaptive parameter selection schemes for the edge-preserving filters are proposed for these two filtering steps. The final relit result is obtained by replacing the large-scale layer of the target image with that of the reference image. We obtain convincing relit results on numerous target and reference face images with different lighting effects and genders. Comparisons with previous work show that our method is less affected by geometry differences and better preserves the identity structure and skin color of the target face.
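The decompose-and-swap pipeline can be sketched in 1-D. This is a minimal sketch, assuming a 1-D lightness signal and fixed (non-adaptive) smoothing parameters; the 2-D image version solves the same kind of system per axis, and the guided-filter refinement step is omitted.

```python
import numpy as np

def wls_smooth(g, lam=1.0, alpha=1.2, eps=1e-4):
    # 1-D sketch of the WLS edge-preserving filter: solve
    # (I + lam * D^T diag(a) D) u = g, where the smoothness weights a
    # shrink near strong gradients so edges survive the smoothing.
    n = len(g)
    a = 1.0 / (np.abs(np.diff(g)) ** alpha + eps)
    A = np.eye(n)
    for i in range(n - 1):
        A[i, i] += lam * a[i];     A[i + 1, i + 1] += lam * a[i]
        A[i, i + 1] -= lam * a[i]; A[i + 1, i] -= lam * a[i]
    return np.linalg.solve(A, g)

def transfer_illumination(target, reference, lam=1.0):
    # Decompose both lightness signals into large-scale + detail, then
    # recombine the target's detail layer with the reference's
    # large-scale (illumination) layer.
    detail = target - wls_smooth(target, lam)
    return wls_smooth(reference, lam) + detail
```

The key property is that the large-scale layer carries the illumination while the detail layer carries identity-specific texture, so the swap relights without erasing the target's features.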


Computer Vision and Pattern Recognition | 2014

Sparse Dictionary Learning for Edit Propagation of High-Resolution Images

Xiaowu Chen; Dongqing Zou; Jianwei Li; Xiaochun Cao; Qinping Zhao; Hao Zhang

We introduce a method of sparse dictionary learning for edit propagation of high-resolution images or video. Previous approaches to edit propagation typically employ a global optimization over the whole set of image pixels, incurring prohibitively high memory and time consumption for high-resolution images. Rather than propagating an edit pixel by pixel, we follow the principle of sparse representation to obtain a compact set of representative samples (or features) and perform edit propagation on the samples instead. The sparse set of samples provides an intrinsic basis for the input image, and the coding coefficients capture the linear relationship between all pixels and the samples. The representative sample set is then optimized by a novel scheme that maximizes the KL-divergence between each sample pair to remove redundant samples. We show several applications of sparsity-based edit propagation, including video recoloring, theme editing, and seamless cloning, operating on both color and texture features. We demonstrate that with a sample-to-pixel ratio on the order of 0.01%, a significant reduction in memory consumption, our method still maintains a high degree of visual fidelity.
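The sample-then-propagate idea can be sketched as follows. Farthest-point sampling and Gaussian coding coefficients are deliberate simplifications standing in for the paper's learned dictionary and KL-divergence-based redundancy removal; all names here are hypothetical.

```python
import numpy as np

def pick_samples(features, m, rng):
    # Greedy farthest-point sampling: a simple stand-in for learning a
    # compact, non-redundant set of representative samples.
    idx = [int(rng.integers(len(features)))]
    for _ in range(m - 1):
        d = np.linalg.norm(features[:, None] - features[idx], axis=2).min(axis=1)
        idx.append(int(d.argmax()))
    return np.array(idx)

def propagate_via_samples(features, sample_idx, sample_edits, sigma=0.5):
    # Code every pixel over the samples with soft coefficients and blend
    # the per-sample edits: the propagation cost now scales with the
    # number of samples rather than the number of pixels.
    d = np.linalg.norm(features[:, None] - features[sample_idx], axis=2)
    w = np.exp(-(d / sigma) ** 2)
    w /= w.sum(axis=1, keepdims=True)
    return w @ sample_edits
```

Only the final coefficient-weighted blend touches every pixel; the optimization itself runs on the tiny sample set, which is what makes the 0.01% sample-to-pixel ratio workable.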


Computer Vision and Pattern Recognition | 2011

Partial similarity based nonparametric scene parsing in certain environment

Honghui Zhang; Tian Fang; Xiaowu Chen; Qinping Zhao; Long Quan

In this paper we propose a novel nonparametric image parsing method for scene parsing in a specific environment. A novel and efficient nearest-neighbor matching scheme, the ANN bilateral matching scheme, is proposed. Based on this matching scheme, we first retrieve partially similar images for each given test image from the training image database, so that the test image can be well explained by the retrieved images, with similar regions in the retrieved images for each region of the test image. We then match the test image to the retrieved training images with the ANN bilateral matching scheme and parse the test image by integrating multiple cues in a Markov random field. Experiments on three datasets show that our method achieves promising parsing accuracy and outperforms two state-of-the-art nonparametric image parsing methods.
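The bilateral (mutual) nearest-neighbour check at the heart of the matching scheme can be sketched with brute-force distances; the paper uses approximate nearest neighbours (ANN) for efficiency, and the descriptor arrays here are hypothetical.

```python
import numpy as np

def bilateral_matches(A, B):
    # Keep a match (i, j) only when B[j] is the nearest neighbour of
    # A[i] AND A[i] is the nearest neighbour of B[j]; one-sided matches
    # (e.g. to outliers in B) are rejected.
    d = np.linalg.norm(A[:, None] - B[None, :], axis=2)
    a2b = d.argmin(axis=1)   # best B match for each A descriptor
    b2a = d.argmin(axis=0)   # best A match for each B descriptor
    return [(i, j) for i, j in enumerate(a2b) if b2a[j] == i]
```

The mutual check is what makes the scheme robust: a region in the test image is matched only to training regions that reciprocally prefer it.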


European Conference on Computer Vision | 2010

Learning artistic lighting template from portrait photographs

Xin Jin; Mingtian Zhao; Xiaowu Chen; Qinping Zhao; Song-Chun Zhu

This paper presents a method for learning an artistic portrait lighting template from a dataset of artistic and daily portrait photographs. The learned template can be used for (1) classification of artistic and daily portrait photographs, and (2) numerical assessment of the aesthetic quality of these photographs in terms of lighting usage. To learn the template, we adopt Haar-like local lighting contrast features, which are extracted from pre-defined areas of frontal faces and selected to form a log-linear model using a stepwise feature pursuit algorithm. The learned template corresponds well to several typical studio styles of portrait photography. With the template, the classification and assessment tasks are carried out under probability ratio test formulations. On our dataset of 350 artistic and 500 daily photographs, we achieve an 89.5% classification accuracy in cross-validated tests, and the assessment model assigns reasonable numerical scores based on the portraits' aesthetic quality in lighting.


IEEE Transactions on Image Processing | 2013

Face Illumination Manipulation Using a Single Reference Image by Adaptive Layer Decomposition

Xiaowu Chen; Hongyu Wu; Xin Jin; Qinping Zhao

This paper proposes a novel image-based framework to manipulate the illumination of a human face through adaptive layer decomposition. Only a single reference image is needed, without any knowledge of the 3D geometry or material information of the input face. To transfer the illumination effects of a reference face image to a face under normal lighting, we first decompose the lightness layers of the reference and input images into large-scale and detail layers with a weighted least squares (WLS) filter whose smoothing parameters adapt to the gradient values of the face images. The large-scale layer of the reference image is filtered with the guidance of the input image by a guided filter whose smoothing parameters adapt to the face structures. The relit result is obtained by replacing the large-scale layer of the input image with that of the reference image. To normalize the illumination effects of a non-normally lit face (i.e., face delighting), we introduce a similar reflectance prior into the WLS layer decomposition stage, which makes the normalized result less affected by the high-contrast light and shadow effects of the input face. Through these two procedures, we can change the illumination effects of a non-normally lit face by first normalizing its illumination and then transferring the illumination of another reference face to it. We obtain convincing results for both face relighting and delighting on numerous input and reference face images with various illumination effects and genders. Comparisons with previous work show that our framework is less affected by geometry differences and better preserves the identity structure and skin color of the input face.


Computer Graphics Forum | 2012

Artistic Illumination Transfer for Portraits

Xiaowu Chen; Xin Jin; Qinping Zhao; Hongyu Wu

Relighting a portrait in a single image is still a challenging problem, particularly when only a single artistic reference photograph or painting is provided. In this paper, we propose an artistic illumination transfer system for portraits based on a database of portrait images (photographs and paintings) associated with 276 hand-drawn illumination templates created by artists. Users can select a reference portrait image from the database, and the corresponding illumination template is transferred to an input portrait using image warping. Users can also provide reference portrait images that are not in the database: based on the Face Illumination Descriptor (FID), the system selects from the database the reference image whose illumination is closest to that of the user-provided reference image and adjusts the corresponding illumination template to match the contrast of the user-provided reference. Experiments on paintings as well as photographs, paper-cuts, and sketches demonstrate that our system renders convincing illumination transfer results.


European Conference on Computer Vision | 2012

Supervised geodesic propagation for semantic label transfer

Xiaowu Chen; Qing Li; Yafei Song; Xin Jin; Qinping Zhao

In this paper we propose a novel semantic label transfer method using supervised geodesic propagation (SGP), in which supervised learning guides both seed selection and label propagation. Given an input image, we first retrieve a set of similar images from annotated databases. A JointBoost model is learned on this similar image set, and the recognition proposal map of the input image is inferred by the learned model. The initial distance map is defined by the proposal map: the higher the probability, the smaller the distance. In each iteration of the geodesic propagation, the seed is selected as the undetermined superpixel with the smallest distance. We learn a classifier as an indicator of whether to propagate labels between two neighboring superpixels; its training samples are annotated neighboring pairs from the similar image set. The geodesic distances of the seed's neighbors are updated according to a combination of texture and boundary features and the indicator value. Experiments on three datasets show that our method outperforms traditional learning-based methods and a previous label transfer method for semantic segmentation.
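The propagation loop described above is essentially Dijkstra's algorithm on the superpixel graph, with the learned indicator gating which boundaries labels may cross. A minimal sketch, with the `allow` callback standing in for the learned indicator and the edge costs for the texture/boundary terms (all names hypothetical):

```python
import heapq

def geodesic_propagate(n_nodes, edges, seeds, allow):
    # edges: {(u, v): geodesic cost}; seeds: {node: (label, distance)};
    # allow(u, v): indicator deciding whether labels may propagate
    # across the (u, v) superpixel boundary.
    adj = {i: [] for i in range(n_nodes)}
    for (u, v), c in edges.items():
        adj[u].append((v, c))
        adj[v].append((u, c))
    dist = {i: float("inf") for i in range(n_nodes)}
    label = {i: None for i in range(n_nodes)}
    pq = []
    for s, (lab, d0) in seeds.items():
        dist[s], label[s] = d0, lab
        heapq.heappush(pq, (d0, s))
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale queue entry
        for v, c in adj[u]:
            if allow(u, v) and d + c < dist[v]:
                dist[v], label[v] = d + c, label[u]
                heapq.heappush(pq, (d + c, v))
    return label
```

Seeds with smaller initial distance (higher proposal probability) win more territory, and a blocked boundary forces the far side to take its label from another seed.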

Collaboration


An overview of Qinping Zhao's collaborations.

Top Co-Authors

Xin Jin

Beijing Electronic Science and Technology Institute


Long Quan

Hong Kong University of Science and Technology
