
Publications

Featured research published by Xinqing Guo.


International Conference on Computer Vision (ICCV) | 2013

Line Assisted Light Field Triangulation and Stereo Matching

Zhan Yu; Xinqing Guo; Haibin Ling; Andrew Lumsdaine; Jingyi Yu

Light fields are image-based representations that use densely sampled rays as a scene description. In this paper, we explore geometric structures of 3D lines in ray space for improving light field triangulation and stereo matching. The triangulation problem aims to fill in the ray space with continuous and non-overlapping simplices anchored at sampled points (rays). Such a triangulation provides a piecewise-linear interpolant useful for light field super-resolution. We show that the light field space is largely bilinear due to 3D line segments in the scene, and direct triangulation of these bilinear subspaces leads to large errors. We instead present a simple but effective algorithm to first map bilinear subspaces to line constraints and then apply Constrained Delaunay Triangulation (CDT). Based on our analysis, we further develop a novel line-assisted graph-cut (LAGC) algorithm that effectively encodes 3D line constraints into light field stereo matching. Experiments on synthetic and real data show that both our triangulation and LAGC algorithms outperform state-of-the-art solutions in accuracy and visual quality.
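The line structure the paper exploits can be illustrated in the standard two-plane light field parameterization: a Lambertian scene point traces a straight line in an epipolar-plane image (EPI), and the line's slope encodes its depth. A toy sketch of that relationship (the camera model and numbers here are illustrative, not taken from the paper):

```python
import numpy as np

# Two-plane light field: a pinhole camera at position u on the camera plane
# images a scene point (x, z) at coordinate s = x - u * f / z. Sweeping u,
# the point traces a straight line in the (u, s) EPI whose slope is -f / z,
# which is why line constraints carry depth information for triangulation
# and stereo matching.
f = 1.0                      # focal length (arbitrary units)
x, z = 0.5, 2.0              # scene point: lateral position and depth
u = np.linspace(-1, 1, 9)    # camera positions along the baseline
s = x - u * f / z            # image coordinates: a line in the (u, s) EPI

slope = np.polyfit(u, s, 1)[0]   # fit the EPI line
z_est = -f / slope               # recover depth from the slope
print(z_est)                     # ≈ 2.0
```

For a 3D line segment (rather than a point), the corresponding ray-space subset is bilinear rather than linear, which is the structure the paper's constrained triangulation is built around.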


IEEE Transactions on Visualization and Computer Graphics | 2016

Enhancing Light Fields through Ray-Space Stitching

Xinqing Guo; Zhan Yu; Sing Bing Kang; Haiting Lin; Jingyi Yu

Light fields (LFs) have been shown to enable photorealistic visualization of complex scenes. In practice, however, an LF tends to have a relatively small angular range or spatial resolution, which limits the scope of virtual navigation. In this paper, we show how seamless virtual navigation can be enhanced by stitching multiple LFs. Our technique consists of two key components: LF registration and LF stitching. To register LFs, we use what we call the ray-space motion matrix (RSMM) to establish pairwise ray-ray correspondences. Using Plücker coordinates, we show that the RSMM is a 5 x 6 matrix, which reduces to a 5 x 5 matrix under pure translation and/or in-plane rotation. The final LF stitching is done using multi-resolution, high-dimensional graph-cut in order to account for possible scene motion, imperfect RSMM estimation, and/or undersampling. We show how our technique allows us to create LFs with various enhanced features: extended horizontal and/or vertical field-of-view, larger synthetic aperture and defocus blur, and larger parallax.
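The RSMM registers rays between light fields, and the representation underneath it is the Plücker coordinates of a ray. A minimal sketch of that representation and of how a rigid motion acts on it (this shows only the standard Plücker machinery, not the 5 x 6 RSMM itself):

```python
import numpy as np

def plucker(p, d):
    """Plücker coordinates (d, m) of the line through point p with direction d."""
    return d, np.cross(p, d)

def transform(d, m, R, t):
    """Rigid motion x -> R @ x + t acting on a Plücker line (d, m)."""
    d2 = R @ d
    return d2, R @ m + np.cross(t, d2)

p = np.array([1.0, 0.0, 2.0])
d = np.array([0.0, 1.0, 0.5])
d0, m0 = plucker(p, d)
assert abs(d0 @ m0) < 1e-12          # Plücker constraint: d . m = 0

theta = 0.3                           # rotation about the z-axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.2, -0.1, 0.4])
d1, m1 = transform(d0, m0, R, t)

# Sanity check: the transformed point R @ p + t must lie on the
# transformed line, i.e. (R @ p + t) x d1 == m1.
assert np.allclose(np.cross(R @ p + t, d1), m1)
```

Because the transform is linear in (d, m), ray-ray correspondences between two light fields constrain a single matrix, which is what makes the RSMM estimation in the paper well posed.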


European Conference on Computer Vision (ECCV) | 2014

Barcode Imaging Using a Light Field Camera

Xinqing Guo; Haiting Lin; Zhan Yu; Scott McCloskey

We present a method to capture sharp barcode images, using a microlens-based light field camera. Relative to standard barcode readers, which typically use fixed-focus cameras in order to reduce mechanical complexity and shutter lag, employing a light field camera significantly increases the scanner’s depth of field. However, the increased computational complexity that comes with software-based focusing is a major limitation on these approaches. Whereas traditional light field rendering involves time-consuming steps intended to produce a focus stack in which all objects appear sharply-focused, a scanner only needs to produce an image of the barcode region that falls within the decoder’s inherent robustness to defocus. With this in mind, we speed up image processing by segmenting the barcode region before refocus is applied. We then estimate the barcode’s depth directly from the raw sensor image, using a lookup table characterizing a relationship between depth and the code’s spatial frequency. Real image experiments with the Lytro camera illustrate that our system can produce a decodable image with a fraction of the computational complexity.
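The depth-from-spatial-frequency step can be sketched as: measure the dominant frequency of the (1-D) barcode signal, then interpolate depth from a calibration table. The table values and signal below are made up for illustration; the paper builds its lookup table from the raw plenoptic sensor image, not from a refocused one.

```python
import numpy as np

# Hypothetical calibration table: dominant spatial frequency (cycles/pixel)
# of a reference barcode measured at known depths (cm). Real calibration
# would come from captures of a known code at controlled distances.
cal_freq  = np.array([0.02, 0.04, 0.06, 0.08, 0.10])
cal_depth = np.array([50.0, 35.0, 25.0, 18.0, 12.0])

# Synthetic 1-D "barcode": a square wave with 32 periods in 512 samples.
n = 512
true_freq = 0.0625                              # cycles per pixel
xs = np.arange(n)
signal = np.sign(np.sin(2 * np.pi * true_freq * xs))

# Dominant spatial frequency via the FFT magnitude peak (DC removed).
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(n)
dominant = freqs[np.argmax(spectrum)]

# Depth lookup by linear interpolation in the calibration table.
depth = np.interp(dominant, cal_freq, cal_depth)
print(dominant, depth)
```

The point of doing this on the raw sensor image, as the paper does, is that it skips full light field rendering: a single FFT and a table lookup are far cheaper than computing a focal stack.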


Proceedings of SPIE | 2014

Mobile multi-flash photography

Xinqing Guo; Jin Sun; Zhan Yu; Haibin Ling; Jingyi Yu

Multi-flash (MF) photography offers a number of advantages over regular photography including removing the effects of illumination, color and texture as well as highlighting occlusion contours. Implementing MF photography on mobile devices, however, is challenging due to their restricted form factors, limited synchronization capabilities, low computational power and limited interface connectivity. In this paper, we present a novel mobile MF technique that overcomes these limitations and achieves comparable performance as conventional MF. We first construct a mobile flash ring using four LED lights and design a special mobile flash-camera synchronization unit. The mobile device’s own flash first triggers the flash ring via an auxiliary photocell. The mobile flashes are then triggered consecutively in sync with the mobile camera’s frame rate, to guarantee that each image is captured with only one LED flash on. To process the acquired MF images, we further develop a class of fast mobile image processing techniques for image registration, depth edge extraction, and edge-preserving smoothing. We demonstrate our mobile MF on a number of mobile imaging applications, including occlusion detection, image thumbnailing, image abstraction and object category classification.
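The depth edge extraction step follows the classic multi-flash idea: each flash casts a thin shadow on the far side of a depth discontinuity, so a sharp drop in the ratio image I_flash / I_max marks a depth edge on the side nearest that flash. A 1-D toy sketch of that test (the synthetic scanlines and threshold are illustrative, not the paper's mobile pipeline):

```python
import numpy as np

def depth_edges_1d(flash_imgs, eps=1e-6, thresh=0.5):
    """Depth edges on a scanline from multi-flash ratio images.

    flash_imgs maps a flash direction (+1: flash left of the lens, so its
    shadow falls toward increasing index; -1: flash on the right) to a
    1-D intensity row taken under that flash alone.
    """
    i_max = np.maximum.reduce(list(flash_imgs.values()))
    edges = set()
    for direction, img in flash_imgs.items():
        ratio = img / (i_max + eps)          # ~1 when lit, small in shadow
        step = direction * np.diff(ratio)    # shadow entry along flash dir
        edges.update(np.flatnonzero(step < -thresh).tolist())
    return sorted(edges)

# Synthetic scanline with a depth edge between pixels 9 and 10: the left
# flash shadows pixels 10..14, the right flash shadows pixels 5..9.
left = np.ones(20);  left[10:15] = 0.2
right = np.ones(20); right[5:10] = 0.2
print(depth_edges_1d({+1: left, -1: right}))   # [9]
```

Because both flashes vote for the same transition, texture edges (which darken under every flash alike) are suppressed, which is the property that makes MF occlusion contours robust.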


Archive | 2016

3-D Light Field Camera and Photography Method

Jingyi Yu; Xinqing Guo; Zhan Yu


Studies in health technology and informatics | 2014

An Immersive Surgery Training System with Live Streaming Capability

Yang Yang; Xinqing Guo; Zhan Yu; Karl V. Steiner; Kenneth E. Barner; Thomas Bauer; Jingyi Yu


Studies in health technology and informatics | 2013

A Portable Immersive Surgery Training System Using RGB-D Sensors

Xinqing Guo; Luis D. Lopez; Zhan Yu; Karl V. Steiner; Kenneth E. Barner; Thomas Bauer; Jingyi Yu


arXiv: Computer Vision and Pattern Recognition | 2017

A Learning-based Framework for Hybrid Depth-from-Defocus and Stereo Matching

Zhang Chen; Xinqing Guo; Siyuan Li; Xuan Cao; Jingyi Yu


arXiv: Computer Vision and Pattern Recognition | 2017

Deep Depth Inference using Binocular and Monocular Cues

Xinqing Guo; Zhang Chen; Siyuan Li; Yang Yang; Jingyi Yu


Archive | 2017

Deep Eyes: Binocular Depth-from-Focus on Focal Stack Pairs

Xinqing Guo; Zhang Chen; Siyuan Li; Yang Yang; Jingyi Yu

Collaboration

Xinqing Guo's top co-authors:

Jingyi Yu, University of Delaware
Zhan Yu, University of Delaware
Haiting Lin, University of Delaware
Thomas Bauer, Christiana Care Health System
Andrew Lumsdaine, Indiana University Bloomington