Publication


Featured research published by Nelson Liang An Chang.


Computer Vision and Pattern Recognition | 2009

A novel feature descriptor invariant to complex brightness changes

Feng Tang; Suk Hwan Lim; Nelson Liang An Chang; Hai Tao

We describe a novel and robust feature descriptor called ordinal spatial intensity distribution (OSID) which is invariant to any monotonically increasing brightness changes. Many traditional features are invariant to intensity shifts or affine brightness changes but cannot handle more complex nonlinear brightness changes, which often occur due to the nonlinear camera response, variations in capture device parameters, temporal changes in the illumination, and viewpoint-dependent illumination and shadowing. A configuration of spatial patch sub-divisions is defined, and the descriptor is obtained by computing a 2-D histogram in the intensity ordering and spatial sub-division spaces. Extensive experiments show that the proposed descriptor significantly outperforms many state-of-the-art descriptors such as SIFT, GLOH, and PCA-SIFT under complex brightness changes. Moreover, the experiments demonstrate the proposed descriptor's superior performance even in the presence of image blur, viewpoint changes, and JPEG compression. The proposed descriptor has far-reaching implications for many applications in computer vision including motion estimation, object tracking/recognition, image classification/retrieval, 3D reconstruction, and stereo.
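
As a rough illustration, the sketch below computes an OSID-style descriptor: pixels are ranked by intensity, assigned to equal-population ordinal bins, and histogrammed jointly with a regular grid of spatial cells. The grid layout, bin count, and normalization are assumptions for illustration; the paper's configuration of patch sub-divisions may differ.

```python
# A minimal sketch of an OSID-style descriptor (grid cells and equal-
# population ordinal bins are illustrative assumptions).
import numpy as np

def osid_descriptor(patch: np.ndarray, n_bins: int = 8, grid: int = 4) -> np.ndarray:
    """2-D histogram over (intensity-rank bin, spatial cell), flattened."""
    h, w = patch.shape
    # Rank pixels by intensity; ranks survive any monotonically increasing
    # brightness transform, which is the source of the invariance.
    ranks = patch.ravel().argsort().argsort()
    ordinal = (ranks * n_bins) // ranks.size          # equal-population bins

    # Assign each pixel to one cell of a grid x grid spatial sub-division.
    ys, xs = np.mgrid[0:h, 0:w]
    cell = (ys * grid // h) * grid + (xs * grid // w)

    hist = np.zeros((n_bins, grid * grid))
    np.add.at(hist, (ordinal, cell.ravel()), 1.0)
    return (hist / hist.sum()).ravel()

# A gamma curve is nonlinear but monotonic, so the descriptor is unchanged:
patch = np.random.rand(32, 32)
assert np.allclose(osid_descriptor(patch), osid_descriptor(patch ** 2.2))
```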


IEEE Transactions on Image Processing | 1997

View generation for three-dimensional scenes from video sequences

Nelson Liang An Chang; Avideh Zakhor

This paper focuses on the representation and view generation of three-dimensional (3-D) scenes. In contrast to existing methods that construct a full 3-D model or those that exploit geometric invariants, our representation consists of dense depth maps at several preselected viewpoints from an image sequence. Furthermore, instead of using multiple calibrated stationary cameras or range scanners, we derive our depth maps from image sequences captured by an uncalibrated camera with only approximately known motion. We propose an adaptive matching algorithm that assigns various confidence levels to different regions in the depth maps. Nonuniform bicubic spline interpolation is then used to fill in low confidence regions in the depth maps. Once the depth maps are computed at preselected viewpoints, the intensity and depth at these locations are used to reconstruct arbitrary views of the 3-D scene. Specifically, the depth maps are regarded as vertices of a deformable 2-D mesh, which are transformed in 3-D, projected to 2-D, and rendered to generate the desired view. Experimental results are presented to verify our approach.
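
A minimal sketch of the view-generation step, assuming a pinhole camera with intrinsics K and relative pose (R, t); a simple forward point-splat (no z-buffer or hole filling) stands in for the paper's deformable-mesh rendering:

```python
# Warp a reference image to a new viewpoint using its dense depth map.
# Assumptions: pinhole model with intrinsics K; point-splat rendering.
import numpy as np

def render_view(depth, intensity, K, R, t):
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # homogeneous pixels
    pts = np.linalg.inv(K) @ pix * depth.ravel()              # backproject to 3-D
    prj = K @ (R @ pts + t.reshape(3, 1))                     # into the new camera
    u = np.round(prj[0] / prj[2]).astype(int)
    v = np.round(prj[1] / prj[2]).astype(int)

    out = np.zeros_like(intensity)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (prj[2] > 0)
    out[v[ok], u[ok]] = intensity.ravel()[ok]                 # splat, no z-buffer
    return out
```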


ACM Transactions on Graphics | 2009

Display supersampling

Niranjan Damera-Venkata; Nelson Liang An Chang

Supersampling is widely used by graphics hardware to render anti-aliased images. In conventional supersampling, multiple scene samples are computationally combined to produce a single screen pixel. We consider a novel imaging paradigm that we call display supersampling, where multiple display samples are physically combined via the superimposition of multiple image subframes. Conventional anti-aliasing and texture mapping techniques are shown to be inadequate for the task of rendering high-quality images on supersampled displays. Instead of requiring anti-aliasing filters, supersampled displays actually require alias generation filters to cancel the aliasing introduced by nonuniform sampling. We present fundamental theory and efficient algorithms for the real-time rendering of high-resolution anti-aliased images on supersampled displays. We show that significant image quality gains are achievable by taking advantage of display supersampling. We prove that alias-free resolution beyond the Nyquist limits of a single subframe may be achieved by designing a bank of alias-canceling rendering filters. In addition, we derive a practical noniterative filter bank approach to real-time rendering and discuss implementations on commodity graphics hardware.
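
The forward model is easy to state in 1-D: each subframe's samples are spread by the display's pixel reconstruction function, shifted by that subframe's offset, and the projected light adds. The box-shaped reconstruction and half-pixel offset below are illustrative assumptions:

```python
# A minimal 1-D sketch of the display-supersampling forward model.
import numpy as np

def superimpose(subframes, offsets, upsample=8, pixel_width=1.0):
    """Physically superimpose subframes on a fine screen grid."""
    n = subframes[0].size
    grid = np.linspace(0, n, n * upsample, endpoint=False)
    out = np.zeros_like(grid)
    for frame, off in zip(subframes, offsets):
        for k, value in enumerate(frame):
            # Box reconstruction: pixel k covers [k + off, k + off + pixel_width).
            out += value * ((grid >= k + off) & (grid < k + off + pixel_width))
    return out

# Two subframes offset by half a pixel: their box responses overlap on the
# screen, which is why naive rendering aliases and the subframe values must
# instead come from alias-canceling filters.
screen = superimpose([np.array([1., 0., 1., 0.]),
                      np.array([0., 1., 0., 1.])], offsets=[0.0, 0.5])
```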


Computer Vision and Pattern Recognition | 2007

Realizing Super-Resolution with Superimposed Projection

Niranjan Damera-Venkata; Nelson Liang An Chang

We consider the problem of rendering high-resolution images on a display composed of multiple superimposed lower-resolution projectors. A theoretical analysis of this problem in the literature previously concluded that the multi-projector superimposition of low-resolution projectors cannot produce high-resolution images. In our recent work, we showed to the contrary that super-resolution via multiple superimposed projectors is indeed theoretically achievable. This paper derives practical algorithms for real multi-projector systems that account for the intra- and inter-projector variations and that render high-quality, high-resolution content at real-time interactive frame rates. A camera is used to estimate the geometric, photometric, and color properties of each component projector in a calibration step. Given this parameter information, we demonstrate novel methods for efficiently generating optimal sub-frames so that the resulting projected image is as close as possible to the given high-resolution image.
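
A minimal sketch of the subframe optimization under the usual linear model, where each calibrated projector k is a known operator A_k from its subframe to the screen grid; the dense matrices, projected-gradient solver, and step size are illustrative assumptions (the paper targets real-time rendering):

```python
# Minimize || target - sum_k A_k x_k ||^2 over subframes x_k in [0, 1].
import numpy as np

def optimal_subframes(A_list, target, iters=200, lr=0.05):
    xs = [np.zeros(A.shape[1]) for A in A_list]
    for _ in range(iters):
        residual = target - sum(A @ x for A, x in zip(A_list, xs))
        for A, x in zip(A_list, xs):
            x += lr * (A.T @ residual)    # gradient step for projector k
            np.clip(x, 0.0, 1.0, out=x)   # light is non-negative and bounded
    return xs
```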


IEEE Transactions on Visualization and Computer Graphics | 2007

A Unified Paradigm for Scalable Multi-Projector Displays

Niranjan Damera-Venkata; Nelson Liang An Chang; Jeffrey M. Dicarlo

We present a general framework for the modeling and optimization of scalable multi-projector displays. Based on this framework, we derive algorithms that can robustly optimize the visual quality of an arbitrary combination of projectors without manual adjustment. When the projectors are tiled, we show that our framework automatically produces blending maps that outperform state-of-the-art projector blending methods. When all the projectors are superimposed, the framework can produce high-resolution images beyond the Nyquist resolution limits of the component projectors. When a combination of tiled and superimposed projectors is deployed, the same framework harnesses the best features of both projection paradigms. The framework creates, for the first time, a unified paradigm that is agnostic to the particular configuration of projectors yet robustly optimizes the brightness, contrast, and resolution of that configuration. In addition, we demonstrate that our algorithms support high-resolution video at real-time interactive frame rates on commodity graphics platforms. This work allows inexpensive, compelling, flexible, and robust large-scale visualization systems to be built and deployed very efficiently.
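
For comparison, the sketch below constructs the kind of conventional blend ramp that the framework's automatically derived blending maps are evaluated against; the cosine ramp shape and simple horizontal overlap are assumptions:

```python
# Per-projector attenuation maps for two tiled projectors whose images
# overlap in a vertical strip; the maps sum to one across the overlap.
import numpy as np

def blend_maps(width, overlap):
    ramp = 0.5 * (1 + np.cos(np.linspace(0, np.pi, overlap)))  # 1 -> 0
    left, right = np.ones(width), np.ones(width)
    left[-overlap:] = ramp
    right[:overlap] = ramp[::-1]
    return left, right

left, right = blend_maps(width=1024, overlap=128)
assert np.allclose(left[-128:] + right[:128], 1.0)
```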


International Conference on Multimedia and Expo | 2004

PathMarker: systems for capturing trips

Ramin Samadani; Debargha Mukherjee; Ullas Gargi; Nelson Liang An Chang; Dan Tretter; Michael Harville

Central to capturing a trip is knowing where you were, and when you were there. Combining continuous path data with media (path-enhanced media, or PEM) offers substantial advantages over the previous approach of tagging individual media with time and location. Prototype systems, collectively called PathMarker, are used for gathering, editing, presenting, and browsing PEM. We have developed: (1) a methodology for gathering PEM with off-the-shelf hardware; (2) software for automatic conversion of the raw path data and media into an application-independent XML representation; (3) two example PEM applications. The first application provides map-overlaid trip editing, presentation, and browsing. The second application provides a 3D immersive environment with digital elevation maps for automatic trip flybys and for browsing. Experience with a number of recorded trips suggests that PathMarker systems capture the essence of a trip.
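
The paper's XML schema is not reproduced here, but a path-enhanced-media file might look like the sketch below; every element and attribute name is an illustrative assumption:

```python
# Build a hypothetical PEM document: a continuous, timestamped path plus
# media that carry only timestamps (positions come from the path).
import xml.etree.ElementTree as ET

trip = ET.Element("trip", name="Yosemite weekend")
path = ET.SubElement(trip, "path")
for t, lat, lon in [("2004-06-12T09:00:00Z", "37.7459", "-119.5332"),
                    ("2004-06-12T09:05:00Z", "37.7466", "-119.5341")]:
    ET.SubElement(path, "point", time=t, lat=lat, lon=lon)

media = ET.SubElement(trip, "media")
ET.SubElement(media, "photo", file="falls.jpg", time="2004-06-12T09:03:30Z")

# A photo's location is interpolated from the path at its timestamp, which is
# the advantage over tagging each item with its own location.
print(ET.tostring(trip, encoding="unicode"))
```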


International Journal of Computer Vision | 2001

Constructing a Multivalued Representation for View Synthesis

Nelson Liang An Chang; Avideh Zakhor

A fundamental problem in computer vision and graphics is that of arbitrary view synthesis for static 3-D scenes, whereby a user-specified viewpoint of the given scene may be created directly from a representation. We propose a novel compact representation for this purpose called the multivalued representation (MVR). Starting with an image sequence captured by a moving camera undergoing either unknown planar translation or orbital motion, an MVR is derived for each preselected reference frame, and may then be used to synthesize arbitrary views of the scene. The representation itself comprises multiple depth and intensity levels in which the k-th level consists of points occluded by exactly k surfaces. To build an MVR with respect to a particular reference frame, dense depth maps are first computed for all the neighboring frames of the reference frame. The depth maps are then combined together into a single map, where points are organized by occlusions rather than by coherent affine motions. This grouping facilitates an automatic process to determine the number of levels and helps to reduce the artifacts caused by occlusions in the scene. An iterative multiframe algorithm is presented for dense depth estimation that both handles low-contrast regions and produces piecewise smooth depth maps. Reconstructed views as well as arbitrary flyarounds of real scenes are presented to demonstrate the effectiveness of the approach.
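
A minimal sketch of the data structure: each reference-frame pixel stores its ray's surfaces front to back, so level k is the surface occluded by exactly k nearer ones. Clustering depth samples by a fixed gap is an illustrative stand-in for the paper's occlusion-based grouping:

```python
# Collapse per-frame depth/intensity estimates (reprojected into the
# reference view; NaN where unseen) into a multivalued array.
import numpy as np

def build_mvr(depths, intensities, gap=0.5):
    """depths, intensities: (n_frames, h, w); returns per-pixel level lists."""
    n, h, w = depths.shape
    mvr = [[[] for _ in range(w)] for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d, i = depths[:, y, x], intensities[:, y, x]
            levels = []
            for k in np.argsort(d):              # near to far; NaNs sort last
                if np.isnan(d[k]):
                    break
                if levels and d[k] - levels[-1][0] < gap:
                    continue                     # same surface as current level
                levels.append((float(d[k]), float(i[k])))
            mvr[y][x] = levels                   # levels[k]: behind k surfaces
    return mvr
```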


International Conference on Image Processing | 2007

On the Resolution Limits of Superimposed Projection

Niranjan Damera-Venkata; Nelson Liang An Chang

Multi-projector super-resolution is the dual of multi-camera super-resolution. The goal of projector super-resolution is to produce a high-resolution frame via the superimposition of multiple low-resolution subframes. Prior work claims that it is impossible to improve resolution via superimposed projection except in specialized circumstances. Rigorous analysis has previously been restricted to the special case of uniform display sampling, which reduces the problem to a simple shift-invariant deblurring. To understand the true behavior of superimposed projection as the dual of classical camera super-resolution, one must consider the effects of nonuniform displacements between the component subframes. In this paper, we resolve two fundamental theoretical questions concerning resolution enhancement via superimposed projection. First, we show that it is possible to reproduce frequencies well beyond the Nyquist limit of any of the component subframes. Second, we show that nonuniform sampling and pixel reconstruction functions impose fundamental limits on the achievable resolution.
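
The first result is easy to see in 1-D with ideal impulse pixels: two subframes, each sampled at unit rate, jointly pin down a 0.7 cycles/unit sinusoid, above the 0.5 Nyquist limit of either subframe alone. The half-pixel offset used below is the uniform special case; the paper's analysis covers general nonuniform displacements, and its second result shows how real pixel reconstruction functions cap the gain:

```python
# Two half-pixel-offset subframes interleave into a double-rate grid on which
# a beyond-Nyquist frequency is unambiguous.
import numpy as np

f = 0.7                                    # cycles/unit: above each subframe's
n = np.arange(64)                          # Nyquist (0.5), below combined (1.0)
sub_a = np.cos(2 * np.pi * f * n)          # subframe A samples at integers
sub_b = np.cos(2 * np.pi * f * (n + 0.5))  # subframe B offset by half a pixel

combined = np.empty(128)
combined[0::2], combined[1::2] = sub_a, sub_b
fine = np.cos(2 * np.pi * f * np.arange(128) / 2.0)  # true signal at rate 2
assert np.allclose(combined, fine)
```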


International Conference on Image Processing | 1999

A multivalued representation for view synthesis

Nelson Liang An Chang; Avideh Zakhor

We propose a new depth-based representation of 3-D scenes primarily for the problem of arbitrary view synthesis. The information contained in a given image sequence is compacted into a single multivalued array, where points are organized by occlusions rather than by coherent affine motions. This grouping facilitates an automatic process to determine the number of layers and helps to reduce the artifacts caused by occlusions in the scene. In addition, an iterative multiframe dynamic programming algorithm is described to produce piecewise smooth depth maps. A novel multiframe segmentation, tracking, and plane fitting algorithm is also proposed to handle the traditionally difficult low-contrast regions. Reconstructed views as well as arbitrary flyarounds of real scenes are presented to demonstrate the effectiveness of the approach.
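
A minimal sketch of the dynamic-programming step on one scanline, with two frames, an absolute-difference match cost, and a linear smoothness penalty standing in for the paper's iterative multiframe formulation:

```python
# Choose per-pixel disparities minimizing match cost plus |d[i] - d[i-1]|,
# which yields the piecewise-smooth solutions DP is used for here.
import numpy as np

def dp_disparity(left, right, max_d=8, smooth=0.1):
    n, m = len(left), max_d + 1
    match = lambda i, d: abs(left[i] - right[i - d]) if i >= d else 1e9
    cost = np.zeros((n, m))
    back = np.zeros((n, m), dtype=int)
    cost[0] = [match(0, d) for d in range(m)]
    for i in range(1, n):
        for d in range(m):
            trans = cost[i - 1] + smooth * np.abs(np.arange(m) - d)
            back[i, d] = np.argmin(trans)
            cost[i, d] = match(i, d) + trans[back[i, d]]
    d = int(np.argmin(cost[-1]))             # backtrack the cheapest path
    disp = [d]
    for i in range(n - 1, 0, -1):
        d = int(back[i, d])
        disp.append(d)
    return disp[::-1]
```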


International Conference on Acoustics, Speech, and Signal Processing | 1995

Arbitrary view generation for three-dimensional scenes from uncalibrated video cameras

Nelson Liang An Chang; Avideh Zakhor

This paper focuses on the representation and arbitrary view generation of three dimensional (3-D) scenes. In contrast to existing methods that construct a full 3-D model or those that exploit geometric invariants, our representation consists of dense depth maps at several preselected viewpoints from an image sequence. Furthermore, instead of using multiple calibrated stationary cameras or range data, we derive our depth maps from image sequences captured by an uncalibrated camera. We propose an adaptive matching algorithm which assigns various confidence levels to different regions. Nonuniform bicubic spline interpolation is then used to fill in low confidence regions in the depth maps. Once the depth maps are computed at preselected viewpoints, the intensity and depth at these locations are used to reconstruct arbitrary views of the 3-D scene. Experimental results are presented to verify our approach.
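
A minimal sketch of the fill-in step, using SciPy's scattered-data cubic (Clough-Tocher) interpolant as a stand-in for nonuniform bicubic splines; the confidence threshold and nearest-neighbor fallback are assumptions:

```python
# Keep high-confidence depth estimates and interpolate the rest from them.
import numpy as np
from scipy.interpolate import griddata

def fill_low_confidence(depth, confidence, threshold=0.5):
    ys, xs = np.mgrid[0:depth.shape[0], 0:depth.shape[1]]
    good = confidence >= threshold
    pts, vals = np.column_stack([ys[good], xs[good]]), depth[good]
    filled = griddata(pts, vals, (ys, xs), method="cubic")
    # Cubic interpolation is undefined outside the convex hull of the
    # high-confidence points; fall back to nearest neighbor there.
    holes = np.isnan(filled)
    filled[holes] = griddata(pts, vals, (ys[holes], xs[holes]), method="nearest")
    return filled
```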

Collaboration


Dive into Nelson Liang An Chang's collaborations.

Top Co-Authors

Avideh Zakhor

University of California
