
Publications


Featured research published by Qi Shan.


International Conference on Computer Graphics and Interactive Techniques | 2008

High-quality motion deblurring from a single image

Qi Shan; Jiaya Jia; Aseem Agarwala

We present a new algorithm for removing motion blur from a single image. Our method computes a deblurred image using a unified probabilistic model of both blur kernel estimation and unblurred image restoration. We present an analysis of the causes of common artifacts found in current deblurring methods, and then introduce several novel terms within this probabilistic model that are inspired by our analysis. These terms include a model of the spatial randomness of noise in the blurred image, as well as a new local smoothness prior that reduces ringing artifacts by constraining contrast in the unblurred image wherever the blurred image exhibits low contrast. Finally, we describe an efficient optimization scheme that alternates between blur kernel estimation and unblurred image restoration until convergence. As a result of these steps, we are able to produce high-quality deblurred results in low computation time. We are even able to produce results of comparable quality to techniques that require additional input images beyond a single blurry photograph, and to methods that require additional hardware.
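To make the alternating structure concrete, here is a minimal numpy sketch. It is not the paper's model: the paper's noise and local-smoothness priors are replaced by plain Tikhonov regularizers, and all function names and parameter values below are our own; only the overall loop (an image step, then a kernel step, with the kernel projected back to a small, non-negative, normalized support) follows the description above.

import numpy as np

def latent_image_step(B_f, K_f, lam):
    # Tikhonov-regularized update of the unblurred image in the frequency
    # domain: argmin_I ||k * I - B||^2 + lam * ||I||^2
    return np.conj(K_f) * B_f / (np.abs(K_f) ** 2 + lam)

def alternate_deblur(B, kernel_size=15, iters=20, lam_img=1e-2, lam_ker=1e-2):
    # B: grayscale blurred image as a 2-D float array.
    H, W = B.shape
    B_f = np.fft.fft2(B)
    k = np.zeros((H, W)); k[0, 0] = 1.0           # start from a delta kernel
    K_f = np.fft.fft2(k)
    for _ in range(iters):
        I_f = latent_image_step(B_f, K_f, lam_img)                 # image step
        K_f = np.conj(I_f) * B_f / (np.abs(I_f) ** 2 + lam_ker)   # kernel step
        # project the kernel back to a small, non-negative, normalized support
        k = np.fft.fftshift(np.real(np.fft.ifft2(K_f)))
        cy, cx, r = H // 2, W // 2, kernel_size // 2
        mask = np.zeros_like(k); mask[cy - r:cy + r + 1, cx - r:cx + r + 1] = 1
        k = np.clip(k * mask, 0, None); k /= max(k.sum(), 1e-12)
        K_f = np.fft.fft2(np.fft.ifftshift(k))
    return np.real(np.fft.ifft2(I_f)), k

In practice these simple frequency-domain updates ring badly near strong edges, which is exactly the failure mode the paper's additional prior terms are designed to suppress.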


International Conference on Computer Graphics and Interactive Techniques | 2008

Fast image/video upsampling

Qi Shan; Zhaorong Li; Jiaya Jia; Chi-Keung Tang

We propose a simple but effective upsampling method for automatically enhancing image/video resolution while preserving the essential structural information. The main advantage of our method lies in a feedback-control framework which faithfully recovers the high-resolution image information from the input data, without imposing additional local structure constraints learned from other examples. This makes our method independent of the quality and number of the selected examples, which are issues typical of learning-based algorithms, while producing high-quality results without observable artifacts. Another advantage is that our method naturally extends to video upsampling, where the temporal coherence is maintained automatically. Finally, our method runs very fast. We demonstrate the effectiveness of our algorithm by experimenting with different image/video data.
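One way to picture a feedback-control loop for upsampling is a back-projection style iteration: repeatedly simulate the imaging process on the current high-resolution estimate and feed the low-resolution residual back. The sketch below assumes a hypothetical imaging model (Gaussian blur followed by decimation); the paper's operators and solver differ, so read it only as an illustration of the feedback idea.

import numpy as np
from scipy import ndimage

def upsample_feedback(lr, scale=2, iters=30, step=0.8, sigma=1.0):
    # lr: low-resolution grayscale image as a 2-D array.
    hr = ndimage.zoom(lr.astype(float), scale, order=3)  # bicubic initial guess
    for _ in range(iters):
        # simulate the (assumed) imaging process: blur, then decimate
        simulated = ndimage.gaussian_filter(hr, sigma)[::scale, ::scale]
        residual = lr - simulated                        # reconstruction error
        hr += step * ndimage.zoom(residual, scale, order=1)  # feed it back
    return hr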


International Conference on Computer Vision | 2007

Rotational Motion Deblurring of a Rigid Object from a Single Image

Qi Shan; Wei Xiong; Jiaya Jia

Most previous motion deblurring methods restore the degraded image assuming a shift-invariant linear blur filter. These methods are not applicable if the blur is caused by spatially variant motions. In this paper, we model the physical properties of a 2-D rigid body movement and propose a practical framework to deblur rotational motions from a single image. Our main observation is that the transparency cue of a blurred object, which represents the motion blur formation from an imaging perspective, provides sufficient information in determining the object movements. Comparatively, single-image motion deblurring using pixel color/gradient information has large uncertainties in motion representation and computation. Our results are produced by minimizing a new energy function combining rotation, possible translations, and the transparency map using an iterative optimization process. The effectiveness of our method is demonstrated using challenging image examples.
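To make the energy concrete, here is a hedged sketch in our own notation (the paper's exact terms may differ): let $\alpha$ be the transparency map of the sharp object, $\alpha_B$ the matte extracted from the blurred input, and $\mathcal{B}_{\omega,\mathbf{t}}$ the operator that blurs $\alpha$ along the trajectory induced by angular velocity $\omega$ and translation $\mathbf{t}$. The optimization then has the shape

\[
E(\omega, \mathbf{t}, \alpha) = \left\| \mathcal{B}_{\omega,\mathbf{t}}(\alpha) - \alpha_B \right\|^2 + \lambda\, \rho(\alpha),
\]

with $\rho$ a regularizer on the matte; the iterative process alternates between updating the motion parameters $(\omega, \mathbf{t})$ and the transparency map $\alpha$.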


IEEE Transactions on Visualization and Computer Graphics | 2010

Globally Optimized Linear Windowed Tone Mapping

Qi Shan; Jiaya Jia; Michael S. Brown

This paper introduces a new tone mapping operator that performs local linear adjustments on small overlapping windows over the entire input image. While each window applies a local linear adjustment that preserves the monotonicity of the radiance values, the problem is implicitly cast as one of global optimization that satisfies the local constraints defined on each of the overlapping windows. Local constraints take the form of a guidance map that can be used to effectively suppress local high contrast while preserving details. Using this method, image structures can be preserved even in challenging high dynamic range (HDR) images that contain either abrupt radiance change, or relatively smooth but salient transitions. Another benefit of our formulation is that it can be used to synthesize HDR images from low dynamic range (LDR) images.
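For a rough feel of "local linear adjustments on overlapping windows", the closed-form stand-in below fits a linear model per window from normalized log radiance to a globally compressed target, then averages the overlapping per-window models at each pixel (guided-filter style). The paper instead solves a single global optimization with monotonicity-preserving constraints and a guidance map, so every function and parameter here is illustrative, not the paper's operator.

import numpy as np
from scipy import ndimage

def windowed_tonemap(radiance, radius=7, eps=1e-4, gamma=0.45):
    x = np.log1p(radiance)                       # work in log radiance
    x = (x - x.min()) / (x.max() - x.min() + 1e-12)
    t = x ** gamma                               # globally compressed target
    size = 2 * radius + 1
    box = lambda im: ndimage.uniform_filter(im, size)
    mx, mt = box(x), box(t)
    a = (box(x * t) - mx * mt) / (box(x * x) - mx * mx + eps)  # window gain
    b = mt - a * mx                              # window offset
    return box(a) * x + box(b)                   # average overlapping windows

The eps term damps the per-window gain in flat regions, which is the closed-form analogue of suppressing local high contrast while preserving details.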


International Conference on 3D Vision | 2013

The Visual Turing Test for Scene Reconstruction

Qi Shan; Riley Adams; Brian Curless; Yasutaka Furukawa; Steven M. Seitz

We present the first large-scale system for capturing and rendering relightable scene reconstructions from massive unstructured photo collections taken under different illumination conditions and viewpoints. We combine photos taken from many sources (Flickr-based ground-level imagery, oblique aerial views, and street view) to recover models that are significantly more complete and detailed than previously demonstrated. We demonstrate the ability to match both the viewpoint and illumination of arbitrary input photos, enabling a Visual Turing Test in which photo and rendering are viewed side by side and the observer has to guess which is which. While we cannot yet fool human perception, the gap is closing.


International Conference on 3D Vision | 2014

Accurate Geo-Registration by Ground-to-Aerial Image Matching

Qi Shan; Changchang Wu; Brian Curless; Yasutaka Furukawa; Carlos Hernández; Steven M. Seitz

We address the problem of geo-registering ground-based multi-view stereo models by ground-to-aerial image matching. The main contribution is a fully automated geo-registration pipeline with a novel viewpoint-dependent matching method that handles ground-to-aerial viewpoint variation. We conduct large-scale experiments on many popular outdoor landmarks in Rome. The proposed approach demonstrates a high success rate for the task and dramatically outperforms state-of-the-art techniques, yielding geo-registration at pixel-level accuracy.


Computer Vision and Pattern Recognition | 2012

Refractive height fields from single and multiple images

Qi Shan; Sameer Agarwal; Brian Curless

We propose a novel framework for reconstructing homogeneous, transparent, refractive height fields from a single viewpoint. The height field is imaged against a known planar background, or sequence of backgrounds. Unlike existing approaches that perform a point-by-point reconstruction, which is known to have intractable ambiguities, our method estimates and optimizes over the entire height field at the same time. The formulation supports shape recovery from measured distortions (deflections) or directly from the images themselves, including from a single image. We report results for a variety of refractive height fields showing significant improvement over prior art.
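For intuition on why the distortions determine the surface (a standard first-order model, not necessarily the paper's exact formulation): for a thin refractive layer with small slopes, the apparent displacement $\mathbf{d}$ of a background point is approximately proportional to the surface gradient,

\[
\mathbf{d}(x, y) \approx c\, \nabla h(x, y),
\]

where $c$ depends on the refractive index and the surface-to-background distance. Recovering the height field from all measurements at once is then a global least-squares (Poisson-type) problem, $\min_h \sum \| c\, \nabla h - \mathbf{d} \|^2$, which is precisely the whole-field optimization, rather than point-by-point triangulation, that the abstract advocates.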


Computer Vision and Pattern Recognition | 2010

Using optical defocus to denoise

Qi Shan; Jiaya Jia; Sing Bing Kang; Zenglu Qin

Effective reduction of noise is generally difficult because of the possible tight coupling of noise with high-frequency image structure. The problem is worse under low-light conditions. In this paper, we propose slightly optically defocusing the image in order to loosen this noise-image structure coupling. This allows us to more effectively reduce noise and subsequently restore the small defocus. We analytically show how this is possible, and demonstrate our technique on a number of examples that include low-light images.
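The pipeline the abstract implies can be sketched in three steps: capture slightly defocused, denoise (the defocus has separated image structure from the noise), then invert the small, known defocus. In the hedged sketch below, a median filter stands in for the denoiser and Wiener deconvolution for the defocus inversion; the point-spread function psf is assumed calibrated, and all names and parameters are illustrative.

import numpy as np
from scipy import ndimage

def defocus_then_restore(captured, psf, noise_var=1e-3):
    # captured: slightly defocused, noisy grayscale image (2-D array).
    denoised = ndimage.median_filter(captured, size=3)  # stand-in denoiser
    H = np.fft.fft2(psf, s=captured.shape)          # psf anchored at the origin
    W = np.conj(H) / (np.abs(H) ** 2 + noise_var)   # Wiener inverse of defocus
    return np.real(np.fft.ifft2(W * np.fft.fft2(denoised)))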


European Conference on Computer Vision | 2018

RIDI: Robust IMU Double Integration

Hang Yan; Qi Shan; Yasutaka Furukawa

This paper proposes a novel data-driven approach for inertial navigation, which learns to estimate trajectories of natural human motions just from an inertial measurement unit (IMU) in every smartphone. The key observation is that human motions are repetitive and consist of a few major modes (e.g., standing, walking, or turning). Our algorithm regresses a velocity vector from the history of linear accelerations and angular velocities, then corrects low-frequency bias in the linear accelerations, which are integrated twice to estimate positions. We have acquired training data with ground-truth motions across multiple human subjects and multiple phone placements (e.g., in a bag or a hand). Qualitative and quantitative evaluations demonstrate that our algorithm yields results surprisingly comparable to full visual-inertial navigation. To our knowledge, this paper is the first to integrate sophisticated machine learning techniques with inertial navigation, potentially opening up a new line of research in the domain of data-driven inertial navigation. We will publicly share our code and data to facilitate further research.
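The core loop the abstract describes, regressing a velocity from recent IMU history and using it to correct the drifting double integration, can be sketched as follows. regress_velocity is a placeholder for the trained model, and the window length and correction gain are illustrative values, not the paper's.

import numpy as np

def ridi_positions(accel, gyro, dt, regress_velocity, gain=0.005, hist=200):
    # accel, gyro: (n, 3) arrays of linear acceleration / angular velocity.
    # regress_velocity: placeholder for the learned model mapping a window
    # of IMU history to a velocity estimate (the paper's trained component).
    n = len(accel)
    v = np.zeros((n, 3)); p = np.zeros((n, 3))
    for i in range(1, n):
        v[i] = v[i - 1] + accel[i] * dt                   # first integration
        window = np.hstack([accel[max(0, i - hist):i].ravel(),
                            gyro[max(0, i - hist):i].ravel()])
        v_hat = regress_velocity(window)                  # learned velocity
        v[i] += gain * (v_hat - v[i])        # low-frequency drift correction
        p[i] = p[i - 1] + v[i] * dt                       # second integration
    return p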


Computer Vision and Pattern Recognition | 2014

Occluding Contours for Multi-view Stereo

Qi Shan; Brian Curless; Yasutaka Furukawa; Carlos Hernández; Steven M. Seitz

Collaboration


Dive into Qi Shan's collaborations.

Top Co-Authors

Brian Curless, University of Washington
Jiaya Jia, The Chinese University of Hong Kong
Changchang Wu, University of North Carolina at Chapel Hill
Hang Yan, Washington University in St. Louis
Riley Adams, University of Washington