Publication


Featured research published by Charles Lawrence Zitnick.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2008

Automatic Estimation and Removal of Noise from a Single Image

Ce Liu; Richard Szeliski; Sing Bing Kang; Charles Lawrence Zitnick; William T. Freeman

Image denoising algorithms often assume an additive white Gaussian noise (AWGN) process that is independent of the actual RGB values. Such approaches cannot effectively remove the color noise produced by today's CCD digital cameras. In this paper, we propose a unified framework for two tasks: automatic estimation and removal of color noise from a single image using piecewise smooth image models. We introduce the noise level function (NLF), which is a continuous function describing the noise level as a function of image brightness. We then estimate an upper bound of the real NLF by fitting a lower envelope to the standard deviations of per-segment image variances. For denoising, the chrominance of color noise is significantly removed by projecting pixel values onto a line fit to the RGB values in each segment. Then, a Gaussian conditional random field (GCRF) is constructed to obtain the underlying clean image from the noisy input. Extensive experiments are conducted to test the proposed algorithm, which is shown to outperform state-of-the-art denoising algorithms.
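The envelope-fitting step is concrete enough for a small sketch: collect per-segment (brightness, standard deviation) pairs and take a lower envelope as the noise-level estimate. The sketch below assumes a single grayscale channel and an arbitrary precomputed over-segmentation; the paper works per RGB channel and fits a smooth parametric lower envelope rather than the binned minimum used here.

```python
import numpy as np

def estimate_nlf(brightness, labels, n_bins=32):
    """Sketch of the noise level function (NLF) estimate: the lower
    envelope of per-segment standard deviation as a function of mean
    segment brightness. `brightness` and `labels` are flat arrays of
    equal length; `labels` holds per-pixel segment ids."""
    seg_ids = np.unique(labels)
    means = np.array([brightness[labels == s].mean() for s in seg_ids])
    stds = np.array([brightness[labels == s].std() for s in seg_ids])

    # Within each brightness bin, the smallest per-segment std is the
    # least contaminated by scene texture, so it serves as a rough
    # bound on the noise level at that brightness.
    edges = np.linspace(means.min(), means.max(), n_bins + 1)
    which = np.clip(np.digitize(means, edges) - 1, 0, n_bins - 1)
    envelope = np.full(n_bins, np.nan)
    for b in range(n_bins):
        in_bin = stds[which == b]
        if in_bin.size:
            envelope[b] = in_bin.min()
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, envelope  # NLF sampled at bin centers (NaN = empty bin)
```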


international conference on computer vision | 2005

Consistent segmentation for optical flow estimation

Charles Lawrence Zitnick; Nebojsa Jojic; Sing Bing Kang

In this paper, we propose a method for jointly computing optical flow and segmenting video while accounting for mixed pixels (matting). Our method is based on statistical modeling of an image pair using constraints on appearance and motion. Segments are viewed as overlapping regions with fractional (alpha) contributions. Bidirectional motion is estimated based on spatial coherence and similarity of segment colors. Our model is extended to video by chaining the pairwise models to produce a joint probability distribution to be maximized. To make the problem more tractable, we factorize the posterior distribution and iteratively minimize its parts. We demonstrate our method on frame interpolation.
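The mixed-pixel model at segment boundaries reduces to standard alpha compositing, which is compact enough to write down. A minimal sketch with two overlapping layers and hypothetical `fg`/`bg`/`alpha` arrays; the paper's full generative model additionally couples the alphas and segment colors to the bidirectional motion estimates.

```python
import numpy as np

def composite(fg, bg, alpha):
    """Fractional-contribution (matting) model at a segment boundary:
    the observed color is an alpha blend of the two overlapping
    segments. fg/bg are HxWx3 floats, alpha is HxW in [0, 1]; the
    array names are hypothetical."""
    a = alpha[..., None]
    return a * fg + (1.0 - a) * bg
```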


international conference on computer vision | 1995

A multibaseline stereo system with active illumination and real-time image acquisition

Sing Bing Kang; Jon A. Webb; Charles Lawrence Zitnick; Takeo Kanade

We describe our implementation of a parallel depth recovery scheme for a four-camera multibaseline stereo system in a convergent configuration. Our system is capable of image capture at video rate, which is critical in applications that require three-dimensional tracking. We obtain dense stereo depth data by projecting a light pattern of frequency-modulated, sinusoidally varying intensity onto the scene, thus increasing the local discriminability at each pixel and facilitating matches. In addition, we make the most of the camera view areas by converging them on a volume of interest. Results show that we are able to extract stereo depth data that are, on average, less than 1 mm in error at distances between 1.5 and 3.5 m from the cameras.
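The multibaseline matching step can be illustrated with the classic sum-of-SSD-over-inverse-depth scheme. A minimal sketch under simplifying assumptions (rectified, horizontally displaced grayscale cameras and integer pixel shifts); the paper's convergent four-camera rig requires a full projective warp per camera, and the projected sinusoidal pattern only enriches the image content, not the matching code.

```python
import numpy as np

def sssd_inverse_depth(ref, others, baselines, focal, inv_depths):
    """Sum-of-SSD over inverse depth: for each candidate inverse depth,
    shift every non-reference image by its baseline-scaled disparity and
    accumulate squared differences against the reference. All inputs are
    hypothetical: grayscale 2-D float arrays, baselines in the same
    units as 1/inv_depths, focal length in pixels."""
    inv_depths = np.asarray(inv_depths, dtype=float)
    cost = np.zeros((len(inv_depths),) + ref.shape)
    for k, iz in enumerate(inv_depths):
        for img, b in zip(others, baselines):
            d = int(round(b * focal * iz))      # disparity in pixels
            shifted = np.roll(img, d, axis=1)   # crude shift (wraps at border)
            cost[k] += (ref - shifted) ** 2
    best = np.argmin(cost, axis=0)              # per-pixel winning hypothesis
    return inv_depths[best]
```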


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2012

Image Restoration by Matching Gradient Distributions

Taeg Sang Cho; Charles Lawrence Zitnick; Neel Joshi; Sing Bing Kang; Richard Szeliski; William T. Freeman

The restoration of a blurry or noisy image is commonly performed with a MAP estimator, which maximizes a posterior probability to reconstruct a clean image from a degraded image. A MAP estimator, when used with a sparse gradient image prior, reconstructs piecewise smooth images and typically removes textures that are important for visual realism. We present an alternative deconvolution method called iterative distribution reweighting (IDR), which imposes a global constraint on gradients so that a reconstructed image should have a gradient distribution similar to a reference distribution. In natural images, a reference distribution not only varies from one image to another, but also within an image depending on texture. We estimate a reference distribution directly from an input image for each texture segment. Our algorithm is able to restore rich mid-frequency textures. A large-scale user study supports the conclusion that our algorithm improves the visual realism of reconstructed images compared to those of MAP estimators.
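The global gradient-distribution constraint can be made concrete with a small sketch: compare the gradient histogram of the current estimate against a reference histogram and derive per-bin weights. This shows only the reweighting idea, with hypothetical inputs; the actual IDR algorithm estimates the reference per texture segment from the input image and folds the weights back into an iterative MAP deconvolution.

```python
import numpy as np

def gradient_hist(img, edges):
    """Histogram of horizontal and vertical gradients, as a density."""
    gx = np.diff(img, axis=1).ravel()
    gy = np.diff(img, axis=0).ravel()
    h, _ = np.histogram(np.concatenate([gx, gy]), bins=edges, density=True)
    return h + 1e-8  # avoid division by zero downstream

def idr_bin_weights(current, reference, edges):
    """Per-bin weights for the reweighting idea: ratio > 1 where the
    current estimate over-represents a gradient magnitude relative to
    the reference distribution, < 1 where it under-represents it."""
    return gradient_hist(current, edges) / gradient_hist(reference, edges)
```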


computer vision and pattern recognition | 2008

Stereo reconstruction with mixed pixels using adaptive over-segmentation

Y. Taguchi; Bennett Wilburn; Charles Lawrence Zitnick

We present an over-segmentation based, dense stereo algorithm that jointly estimates segmentation and depth. For mixed pixels on segment boundaries, the algorithm computes foreground opacity (alpha), as well as color and depth for the foreground and background. We model the scene as a collection of fronto-parallel planar segments in a reference view, and use a generative model for image formation that handles mixed pixels at segment boundaries. Our method iteratively updates the segmentation based on color, depth and shape constraints using MAP estimation. Given a segmentation, the depth estimates are updated using belief propagation. We show that our method is competitive with the state-of-the-art based on the new Middlebury stereo evaluation, and that it overcomes limitations of traditional segmentation based methods while properly handling mixed pixels. Z-keying results show the advantages of combining opacity and depth estimation.
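The z-keying application mentioned at the end combines the estimated depth and opacity in a way that is easy to sketch. A toy composite, assuming per-pixel foreground depth and alpha as the stereo algorithm would produce them; all array names are hypothetical.

```python
import numpy as np

def z_key(fg, fg_depth, alpha, new_bg, depth_thresh):
    """Toy z-keying composite: keep foreground pixels whose estimated
    depth is in front of a threshold, blended by the estimated opacity
    so mixed boundary pixels remain soft. fg/new_bg are HxWx3 floats,
    fg_depth and alpha are HxW."""
    in_front = (fg_depth < depth_thresh).astype(fg.dtype)
    a = (alpha * in_front)[..., None]
    return a * fg + (1.0 - a) * new_bg
```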


computer vision and pattern recognition | 2008

From appearance to context-based recognition: Dense labeling in small images

Devi Parikh; Charles Lawrence Zitnick; Tsuhan Chen

Traditionally, object recognition is performed based solely on the appearance of the object. However, relevant information also exists in the scene surrounding the object. As supported by our human studies, this contextual information is necessary for accurate recognition in low resolution images. This scenario with impoverished appearance information, as opposed to using images of higher resolution, provides an appropriate venue for studying the role of context in recognition. In this paper, we explore the role of context for dense scene labeling in small images. Given a segmentation of an image, our algorithm assigns each segment to an object category based on the segment's appearance and contextual information. We explicitly model context between object categories through the use of relative location and relative scale, in addition to co-occurrence. We perform recognition tests on low and high resolution images, which vary significantly in the amount of appearance information present, using just the object appearance information, the combination of appearance and context, as well as just context without object appearance information (blind recognition). We also perform these tests in human studies and analyze our findings to reveal interesting patterns. With the use of our context model, our algorithm achieves state-of-the-art performance on the MSRC and Corel datasets.
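The co-occurrence part of the context model can be sketched as an iterative blending of appearance scores with class-compatibility messages. This is a crude stand-in, not the paper's formulation: relative location and relative scale are omitted, and the "neighborhood" is collapsed to the mean belief over all segments.

```python
import numpy as np

def contextual_scores(appearance, cooccur, n_iter=5):
    """Blend per-segment appearance likelihoods with co-occurrence
    context. `appearance` is (n_segments, n_classes), `cooccur` is a
    (n_classes, n_classes) compatibility matrix; both hypothetical."""
    beliefs = appearance / appearance.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # Context message: compatibility of each class with the mean
        # belief over segments (a crude stand-in for true neighborhoods).
        ctx = beliefs.mean(axis=0) @ cooccur.T
        beliefs = appearance * ctx
        beliefs /= beliefs.sum(axis=1, keepdims=True)
    return beliefs  # context-adjusted class scores per segment
```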


IEEE Computer Graphics and Applications | 2011

A Viewer-Centric Editor for 3D Movies

Sanjeev J. Koppal; Charles Lawrence Zitnick; Michael F. Cohen; Sing Bing Kang; Bryan Ressler; Alex Colburn

A proposed mathematical framework is the basis for a viewer-centric digital editor for 3D movies that's driven by the audience's perception of the scene. The editing tool allows both shot planning and after-the-fact digital manipulation of the perceived scene shape.


british machine vision conference | 2009

Clustering videos by location

Simon Baker; Charles Lawrence Zitnick; Gerhard Florian Schroff

We propose an algorithm to cluster video shots by the location in which they were captured. Each shot is represented as a set of keyframes and each keyframe is represented by a histogram of textons. Clustering is performed using an energy-based formulation. We propose an energy function for the clusters that matches the expected distribution of viewpoints in any one location and use the chi-squared distance to measure the similarity of two shots. We also add a temporal prior to model the fact that temporally neighboring shots are more likely to have been captured in the same location. We test our algorithm on both home videos and professionally edited footage (sitcoms). Quantitative results are presented to justify each choice made in the design of our algorithm, as well as comparisons with k-means, connected components, and spectral clustering.
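The pairwise shot comparison is simple enough to sketch: chi-squared distance between texton histograms, with a fixed bonus for temporal adjacency standing in for the paper's temporal prior. The histogram layout and the bonus value are assumptions.

```python
import numpy as np

def chi2(h1, h2, eps=1e-10):
    """Chi-squared distance between two texton histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def shot_distances(hists, temporal_bonus=0.5):
    """Pairwise shot distances: chi-squared on per-shot texton
    histograms, minus a bonus for temporally adjacent shots as a crude
    stand-in for the temporal prior. `hists` is (n_shots, n_bins)."""
    n = len(hists)
    d = np.array([[chi2(hists[i], hists[j]) for j in range(n)]
                  for i in range(n)])
    for i in range(n - 1):
        d[i, i + 1] -= temporal_bonus
        d[i + 1, i] -= temporal_bonus
    return d
```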


computer vision and pattern recognition | 2006

Reconstructing Occluded Surfaces Using Synthetic Apertures: Stereo, Focus and Robust Measures

Vaibhav Vaish; Marc Levoy; Richard Szeliski; Charles Lawrence Zitnick; Sing Bing Kang


Archive | 2004

Interactive viewpoint video system and process

Sing Bing Kang; Charles Lawrence Zitnick; Matthew Uyttendaele; Simon Winder; Richard Szeliski
