Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Qing Wang is active.

Publication


Featured research published by Qing Wang.


International Conference on Image Processing | 2009

Joint image registration and super-resolution reconstruction based on regularized total least norm

Qing Wang; Xiaoli Song

Accurate registration of the low resolution (LR) images is a critical step in image super-resolution reconstruction (SRR). Conventional algorithms use fixed motion parameters derived from a separate registration step and perform SRR without accounting for registration errors in this disjointed pipeline. In this paper we propose a new method that performs joint image registration and SRR based on the regularized total least norm (RTLN), updating the motion parameters and the HR image simultaneously. Not only translation but also rotation is considered, which makes the motion model more general. Experimental results show that our approach is more effective and efficient than traditional ones.
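The joint estimation builds on the standard SRR observation model, in which each LR frame is a warped, blurred, and decimated version of the HR image. Below is a minimal sketch of that forward model; the integer-shift warp and box blur are illustrative stand-ins for the paper's rotation-plus-translation motion and point spread function, and the function names are hypothetical:

```python
import numpy as np

def warp_translate(x, dx, dy):
    """Integer-shift warp (a stand-in for the motion operator M_k,
    which in the paper also includes rotation)."""
    return np.roll(np.roll(x, dy, axis=0), dx, axis=1)

def observe(x, dx, dy, factor=2):
    """LR observation model y_k = D B M_k x: warp the HR image,
    then blur and decimate in one step via block averaging."""
    w = warp_translate(x, dx, dy)
    h, wd = w.shape
    # box blur B and decimation D folded into a block mean
    return w.reshape(h // factor, factor, wd // factor, factor).mean(axis=(1, 3))
```

Joint registration-and-SRR methods fit both the motion parameters (dx, dy, rotation) and the HR image x to the observed LR frames under this model, instead of freezing the motion estimates first.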


Computational Visual Media | 2016

Decoding and calibration method on focused plenoptic camera

Chunping Zhang; Zhe Ji; Qing Wang

The light-gathering ability of the plenoptic camera opens up new opportunities for a wide range of computer vision applications, and an efficient and accurate calibration method is crucial for its development. This paper describes a 10-intrinsic-parameter model for the focused plenoptic camera with misalignment. By exploiting the relationship between raw image features and the depth-scale information in the scene, we propose to estimate the intrinsic parameters directly from raw images, using a parallel biplanar board that provides a depth prior. The proposed method enables accurate decoding of the light field in both angular and positional information, and guarantees a geometrically unique solution for the 10 intrinsic parameters. Experiments on both simulated and real scene data validate the performance of the proposed calibration method.


International Conference on Image Processing | 2016

LFHOG: A discriminative descriptor for live face detection from light field image

Zhe Ji; Hao Zhu; Qing Wang

Preventing attacks on biometric systems, such as spoofing with 2D printed photos, has gradually become an important research topic. In this paper, we present a novel light field descriptor to tackle the issue. Based on the angular and spatial information in the light field, the proposed light field histogram of gradients (LFHoG) descriptor is derived from three directions: vertical, horizontal, and depth. Unlike the traditional HoG in 2D images, the gradient in the depth direction is distinctive to the light field. To validate the effectiveness of the proposed LFHoG descriptor, experiments were carried out on light field datasets captured with a Lytro camera. The descriptor achieves 99.75% accuracy on the user-collected dataset, which demonstrates its correctness and effectiveness.
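The three-direction gradient idea can be sketched as follows. This is an illustrative simplification, not the authors' exact implementation: spatial gradients are taken in the y-x image plane, while the "depth" cue comes from gradients on epipolar-plane slices of the 4D light field, where line slope encodes depth. The function name and binning are assumptions:

```python
import numpy as np

def lfhog(lf, nbins=9):
    """Simplified LFHoG-style sketch: gradient-orientation histograms from a
    4D light field lf[u, v, y, x] over three planes (the spatial y-x plane
    and two epipolar planes carrying depth cues), concatenated."""
    feats = []
    for gx, gy in [
        (np.gradient(lf, axis=3), np.gradient(lf, axis=2)),  # spatial y-x plane
        (np.gradient(lf, axis=3), np.gradient(lf, axis=1)),  # v-x epipolar plane
        (np.gradient(lf, axis=2), np.gradient(lf, axis=0)),  # u-y epipolar plane
    ]:
        mag = np.hypot(gx, gy)
        ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation in [0, pi)
        hist, _ = np.histogram(ang, bins=nbins, range=(0, np.pi), weights=mag)
        feats.append(hist / (hist.sum() + 1e-9))  # L1-normalize each plane's histogram
    return np.concatenate(feats)
```

A live face produces depth variation, hence structured epipolar-plane gradients, while a flat printed photo does not; that contrast is what a classifier on such a descriptor can exploit.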


IEEE Journal of Selected Topics in Signal Processing | 2017

Occlusion-Model Guided Antiocclusion Depth Estimation in Light Field

Hao Zhu; Qing Wang; Jingyi Yu

Occlusion is one of the most challenging problems in depth estimation. Previous work has modeled single-occluder occlusion in the light field and achieved good performance; however, it is still difficult to obtain accurate depth under multi-occluder occlusion. In this paper, we explore the complete occlusion model in the light field and derive the occluder consistency between the spatial and angular spaces, which is used as guidance to select unoccluded views for each candidate occlusion point. Then, an anti-occlusion energy function is built to regularize the depth map. Experimental results on both synthetic and real light-field datasets demonstrate the advantages of the proposed algorithm over state-of-the-art light-field depth estimation algorithms, especially in multi-occluder cases.


International Conference on Image Processing | 2016

Rectifying projective distortion in 4D light field

Chunping Zhang; Zhe Ji; Qing Wang

The accuracy of calibration significantly affects the post-processing capability of light field imaging. The geometry of the reconstructed scene is closely related to the light field parameters, involving the accuracy of the decoded rays and the ambiguities arising from ray correspondences. By exploring the ray correspondence, we derive a transformation matrix that describes the projective distortion of the reconstructed scene in the 4D light field. Based on this derivation, we simplify the light field camera geometry to a 4-parameter model and calibrate its intrinsic parameters with a linear initialization followed by a nonlinear refinement. The proposed light field calibration can be implemented simply with a parallel bi-planar board. Experiments on both simulated and real scene data validate the performance of the calibration.


International Conference on Image Processing | 2014

Reconstructing scene depth and appearance behind foreground occlusion using camera array

Zhaolin Xiao; Qing Wang; Lipeng Si; Guoqing Zhou

Foreground occlusion is a significant challenge in 3D reconstruction. In this paper, we first characterize the differences between multiview reconstruction with and without foreground occlusion. Considering that both scene depth and appearance are unknown, we propose a generalized model for scene reconstruction. We then propose an iterative reconstruction approach within a global optimization framework that performs well on a camera array system. Even when all views are partially occluded, our approach can recover an accurate depth map as well as the scene appearance. Experimental results indicate that our approach is more robust to foreground occlusion and outperforms state-of-the-art approaches.


IEEE Transactions on Image Processing | 2017

Aliasing Detection and Reduction Scheme on Angularly Undersampled Light Fields

Zhaolin Xiao; Qing Wang; Guoqing Zhou; Jingyi Yu

When using a plenoptic camera for digital refocusing, angular undersampling can cause severe (angular) aliasing artifacts. Previous approaches have focused on avoiding aliasing by pre-processing the acquired light field via prefiltering, demosaicing, reparameterization, and so on. In this paper, we present a different solution that first detects and then removes angular aliasing at the light field refocusing stage. In contrast to previous frequency-domain aliasing analyses, we carry out a spatial-domain analysis to reveal whether angular aliasing will occur and where in the image it will occur. The spatial analysis also facilitates easy separation of the aliasing and non-aliasing regions, and hence angular aliasing removal. Experiments on both synthetic scenes and real light field data sets (camera array and Lytro camera) demonstrate that our approach has a number of advantages over the classical prefiltering and depth-dependent light field rendering techniques.
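Angular aliasing is easiest to see in the basic shift-and-add refocusing that the analysis targets. The sketch below uses integer shifts for simplicity and is not the paper's code; with too few angular samples, the out-of-focus regions of such an average show the ghosting artifacts the paper detects and removes:

```python
import numpy as np

def refocus(lf, alpha):
    """Shift-and-add refocusing of a 4D light field lf[u, v, y, x]:
    each sub-aperture view is shifted proportionally to its angular
    offset (disparity scaled by alpha) and the views are averaged."""
    U, V, H, W = lf.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            out += np.roll(np.roll(lf[u, v], dy, axis=0), dx, axis=1)
    return out / (U * V)
```

When the shift between adjacent views exceeds about a pixel, the average no longer approximates a continuous integral over the aperture, which is exactly the angular undersampling regime.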


Neurocomputing | 2017

Robust outlier removal using penalized linear regression in multiview geometry

Guoqing Zhou; Qing Wang; Zhaolin Xiao

In multiview geometry, it is crucial to remove outliers before optimization, since they are adverse factors for parameter estimation. Efficient and popular methods for this task include RANSAC, MLESAC, and their improved variants. However, Olsson et al. have pointed out that mismatches in longer point tracks may go undetected by RANSAC or MLESAC. Although robust and efficient algorithms have been proposed for outlier removal, little attention has been paid in the community to the masking (an outlier going undetected) and swamping (an inlier misclassified as an outlier) effects, which can bias the fitted model. In this paper, we first cast some typical parameter estimation problems in multiview geometry, such as triangulation, homography estimation, and structure from motion (SfM), as a linear regression model. Then, a non-convex penalized regression approach is proposed to effectively remove outliers for robust parameter estimation. Finally, we analyze the robustness of non-convex penalized regression theoretically. We have validated our method on three representative estimation problems in multiview geometry: triangulation, homography estimation, and SfM with known camera orientation. Experiments on both synthetic data and real scene objects demonstrate that the proposed method outperforms state-of-the-art methods. The approach can also be extended to more generic problems in which within-profile correlations exist.
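The linear-regression view of outlier removal can be illustrated with a small sketch: model the observations as b = A x + g + n, where the sparse vector g absorbs outliers, and alternate a least-squares fit with a hard threshold on the residual (an L0-style non-convex penalty; the paper's actual penalty and solver differ). The function name and the threshold `lam` are illustrative:

```python
import numpy as np

def penalized_outlier_regression(A, b, lam=2.0, iters=50):
    """Outlier removal via penalized regression on b = A x + g + n.
    Alternates: (1) least-squares fit of x with the current outlier
    estimate g removed, (2) hard-thresholding of the residual, so only
    large residuals are kept as outliers in g."""
    g = np.zeros_like(b)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x, *_ = np.linalg.lstsq(A, b - g, rcond=None)
        r = b - A @ x
        g = np.where(np.abs(r) > lam, r, 0.0)  # keep only big residuals as outliers
    inliers = g == 0
    return x, inliers
```

Unlike RANSAC's random sampling, this jointly fits the model and the outlier support, which is what lets the analysis address masking and swamping directly.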


Optoelectronic Imaging and Multimedia Technology IV | 2016

Light field camera self-calibration and registration

Zhe Ji; Chunping Zhang; Qing Wang

Multi-view light fields (MVLF) provide new solutions to existing problems in the monocular light field, such as the limited field of view. However, calibration and registration, key steps in MVLF, have received limited study. In this paper, we propose a method that calibrates the camera and registers different LFs at the same time without a checkerboard, which we call the self-calibrating method. We model the LF structure as a 5-parameter two-parallel-plane (2PP) model, and represent the associations between rays and reconstructed points as a 3D projective transformation. With the constraints of ray-ray correspondences across different LFs, the parameters can be solved with a linear initialization and a nonlinear refinement. Results on real scenes and the 3D point-cloud registration error of MVLF on simulated data verify the high performance of the proposed model.
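The 2PP parameterization underlying the 5-parameter model can be sketched as follows: a ray is the line through its intersections with two parallel reference planes, and reconstructed points are where such rays meet the scene. The plane separation `sep` and the function name are illustrative assumptions, not the paper's notation:

```python
import numpy as np

def ray_point(u, v, s, t, z, sep=1.0):
    """Two-parallel-plane (2PP) ray: the ray passes through (u, v) on the
    plane at depth 0 and (s, t) on the parallel plane at depth sep.
    Returns the 3D point where the ray reaches depth z."""
    p0 = np.array([u, v, 0.0])
    p1 = np.array([s, t, sep])
    d = p1 - p0                      # ray direction between the two planes
    return p0 + (z / sep) * d        # linear extrapolation to depth z
```

Because points reconstructed from such rays transform linearly in homogeneous coordinates, ray-ray correspondences between two light fields constrain a 3D projective transformation, which is what the linear initialization solves.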


arXiv: Computer Vision and Pattern Recognition | 2018

Dense Light Field Reconstruction From Sparse Sampling Using Residual Network

Mantang Guo; Hao Zhu; Guoqing Zhou; Qing Wang

Collaboration


Dive into Qing Wang's collaborations.

Top Co-Authors

Guoqing Zhou, Northwestern Polytechnical University

Hao Zhu, Northwestern Polytechnical University

Zhe Ji, Northwestern Polytechnical University

Chunping Zhang, Northwestern Polytechnical University

Jingyi Yu, University of Delaware

Zhaolin Xiao, Northwestern Polytechnical University

Feng Wu, University of Science and Technology of China

Lipeng Si, Northwestern Polytechnical University