
Publications

Featured research published by Matthew Uyttendaele.


International Conference on Computer Graphics and Interactive Techniques | 2004

High-quality video view interpolation using a layered representation

C. Lawrence Zitnick; Sing Bing Kang; Matthew Uyttendaele; Simon Winder; Richard Szeliski

The ability to interactively control viewpoint while watching a video is an exciting application of image-based rendering. The goal of our work is to render dynamic scenes with interactive viewpoint control using a relatively small number of video cameras. In this paper, we show how high-quality video-based rendering of dynamic scenes can be accomplished using multiple synchronized video streams combined with novel image-based modeling and rendering algorithms. Once these video streams have been processed, we can synthesize any intermediate view between cameras at any time, with the potential for space-time manipulation. In our approach, we first use a novel color segmentation-based stereo algorithm to generate high-quality photoconsistent correspondences across all camera views. Mattes for areas near depth discontinuities are then automatically extracted to reduce artifacts during view synthesis. Finally, a novel temporal two-layer compressed representation that handles matting is developed for rendering at interactive rates.
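The two-layer representation lends itself to a compact compositing step: a matted boundary layer is laid over the main layer, and intermediate views blend the results from the two nearest cameras. Below is a minimal, hypothetical NumPy sketch of those two operations; it illustrates the idea rather than the authors' renderer, and all array names and shapes are assumptions.

```python
import numpy as np

def composite_two_layer(main_rgb, boundary_rgb, boundary_alpha):
    """Composite a matted boundary layer over a main layer.

    main_rgb, boundary_rgb: (H, W, 3) float arrays in [0, 1]
    boundary_alpha:         (H, W) matte, near 1 at depth discontinuities
    Hypothetical sketch of the two-layer idea, not the authors' code.
    """
    a = boundary_alpha[..., None]            # broadcast matte over channels
    return a * boundary_rgb + (1.0 - a) * main_rgb

def blend_views(view_left, view_right, t):
    """Cross-fade two projected camera views; t in [0, 1] tracks how far
    the virtual viewpoint sits between the two source cameras."""
    return (1.0 - t) * view_left + t * view_right
```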


International Conference on Computer Graphics and Interactive Techniques | 2007

Joint bilateral upsampling

Johannes Kopf; Michael F. Cohen; Dani Lischinski; Matthew Uyttendaele

Image analysis and enhancement tasks such as tone mapping, colorization, stereo depth, and photomontage often require computing a solution (e.g., for exposure, chromaticity, disparity, labels) over the pixel grid. Computational and memory costs often require that a smaller solution be run over a downsampled image. Although general-purpose upsampling methods can be used to interpolate the low-resolution solution to the full resolution, these methods generally assume a smoothness prior for the interpolation. We demonstrate that in cases such as those above, the available high-resolution input image may be leveraged as a prior in the context of a joint bilateral upsampling procedure to produce a better high-resolution solution. We show results for each of the applications above and compare them to traditional upsampling methods.
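The upsampling rule itself is compact: each full-resolution output value is a weighted average of nearby low-resolution solution samples, with a spatial Gaussian evaluated in low-resolution coordinates and a range Gaussian evaluated on the full-resolution guide image. A slow but direct NumPy sketch follows; the function and parameter names are assumptions, and a practical implementation would vectorize the loops.

```python
import numpy as np

def joint_bilateral_upsample(solution_lo, guide_hi, factor,
                             sigma_spatial=1.0, sigma_range=0.1, radius=2):
    """Joint bilateral upsampling sketch (single-channel solution).

    solution_lo: (h, w) low-res solution (e.g., exposure, disparity)
    guide_hi:    (H, W) full-res grayscale guide image, H = h * factor
    """
    h, w = solution_lo.shape
    H, W = guide_hi.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            yl, xl = y / factor, x / factor      # position in low-res grid
            num, den = 0.0, 0.0
            for j in range(int(yl) - radius, int(yl) + radius + 1):
                for i in range(int(xl) - radius, int(xl) + radius + 1):
                    if 0 <= j < h and 0 <= i < w:
                        # spatial weight, measured in low-res coordinates
                        ws = np.exp(-((j - yl) ** 2 + (i - xl) ** 2)
                                    / (2 * sigma_spatial ** 2))
                        # range weight from the high-res guide image
                        gq = guide_hi[min(j * factor, H - 1),
                                      min(i * factor, W - 1)]
                        wr = np.exp(-(guide_hi[y, x] - gq) ** 2
                                    / (2 * sigma_range ** 2))
                        num += ws * wr * solution_lo[j, i]
                        den += ws * wr
            out[y, x] = num / max(den, 1e-12)
    return out
```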


International Conference on Computer Graphics and Interactive Techniques | 2008

Deep photo: model-based photograph enhancement and viewing

Johannes Kopf; Boris Neubert; Billy Chen; Michael F. Cohen; Daniel Cohen-Or; Oliver Deussen; Matthew Uyttendaele; Dani Lischinski

In this paper, we introduce a novel system for browsing, enhancing, and manipulating casual outdoor photographs by combining them with already existing georeferenced digital terrain and urban models. A simple interactive registration process is used to align a photograph with such a model. Once the photograph and the model have been registered, an abundance of information, such as depth, texture, and GIS data, becomes immediately available to our system. This information, in turn, enables a variety of operations, ranging from dehazing and relighting the photograph, to novel view synthesis, and overlaying with geographic information. We describe the implementation of a number of these applications and discuss possible extensions. Our results show that augmenting photographs with already available 3D models of the world supports a wide variety of new ways for us to experience and interact with our everyday snapshots.
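One concrete payoff of a registered depth map is dehazing. Under the standard single-scattering haze model, I = J·t + A·(1 − t) with transmission t = exp(−β·d), a known per-pixel distance d from the terrain model makes the inversion direct. The sketch below assumes that model with known haze parameters; the paper's actual estimation of the haze curve is more careful than this.

```python
import numpy as np

def dehaze_with_depth(image, depth, airlight, beta=0.001):
    """Invert the haze model I = J*t + A*(1-t), t = exp(-beta*depth).

    image:    (H, W, 3) hazy photograph in [0, 1]
    depth:    (H, W) per-pixel distance from the registered 3D model
    airlight: (3,) estimated airlight color A
    Illustrative only; beta and the airlight are assumed known here.
    """
    t = np.exp(-beta * depth)[..., None]      # per-pixel transmission
    t = np.clip(t, 0.05, 1.0)                 # avoid amplifying noise far away
    return np.clip((image - airlight) / t + airlight, 0.0, 1.0)
```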


International Conference on Computer Graphics and Interactive Techniques | 2006

Interactive local adjustment of tonal values

Dani Lischinski; Zeev Farbman; Matthew Uyttendaele; Richard Szeliski

This paper presents a new interactive tool for making local adjustments of tonal values and other visual parameters in an image. Rather than carefully selecting regions or hand-painting layer masks, the user quickly indicates regions of interest by drawing a few simple brush strokes and then uses sliders to adjust the brightness, contrast, and other parameters in these regions. The effects of the user's sparse set of constraints are interpolated to the entire image using an edge-preserving energy minimization method designed to prevent the propagation of tonal adjustments to regions of significantly different luminance. The resulting system is suitable for adjusting ordinary and high dynamic range images, and provides the user with much more creative control than existing tone mapping algorithms. Our tool is also able to produce a tone mapping automatically, which may serve as a basis for further local adjustments, if so desired. The constraint propagation approach developed in this paper is a general one, and may also be used to interactively control a variety of other adjustments commonly performed in the digital darkroom.
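The propagation step amounts to solving one sparse linear system: a data term pins the solution to the stroke targets, and a smoothness term whose weights shrink across strong luminance edges carries the adjustment everywhere else. A small SciPy sketch of such a system is below; it follows the spirit of the paper's energy, but the parameter names and exact weighting are assumptions.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def propagate_adjustment(log_lum, target, weight, lam=0.2, alpha=1.0, eps=1e-4):
    """Propagate sparse tonal constraints over an image, edge-aware.

    log_lum: (H, W) log-luminance; smoothing is damped across its edges
    target:  (H, W) desired adjustment where constrained, else 0
    weight:  (H, W) 1 under the user's strokes, 0 elsewhere
    Minimizes sum w*(f-target)^2 + lam * smoothness / (|grad L|^alpha + eps).
    Hypothetical sketch of the paper's energy, not the authors' code.
    """
    H, W = log_lum.shape
    n = H * W
    idx = np.arange(n).reshape(H, W)
    rows, cols, vals = [], [], []
    diag = weight.ravel().astype(float).copy()

    def add_pair(p, q, g):
        s = lam / (abs(g) ** alpha + eps)   # weak coupling across strong edges
        rows.extend([p, q]); cols.extend([q, p]); vals.extend([-s, -s])
        diag[p] += s; diag[q] += s

    for y in range(H):
        for x in range(W):
            if x + 1 < W:
                add_pair(idx[y, x], idx[y, x+1], log_lum[y, x+1] - log_lum[y, x])
            if y + 1 < H:
                add_pair(idx[y, x], idx[y+1, x], log_lum[y+1, x] - log_lum[y, x])

    A = sp.csr_matrix((vals + list(diag),
                       (rows + list(range(n)), cols + list(range(n)))),
                      shape=(n, n))
    b = (weight * target).ravel()           # data term right-hand side
    return spla.spsolve(A, b).reshape(H, W)
```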


IEEE Computer Graphics and Applications | 2004

Image-based interactive exploration of real-world environments

Matthew Uyttendaele; Antonio Criminisi; Sing Bing Kang; Simon Winder; Richard Szeliski; Richard I. Hartley

Interactive scene walkthroughs have long been an important computer graphics application area. More recently, researchers have developed techniques for constructing photorealistic 3D architectural models from real-world images. We present an image-based rendering system that brings us a step closer to a compelling sense of being there. Whereas many previous systems have used still photography and 3D scene modeling, we avoid explicit 3D reconstruction because it tends to be brittle. Our system is not the first to propose interactive video-based tours. We believe, however, that our system is the first to deliver fully interactive, photorealistic image-based tours on a personal computer at or above broadcast video resolutions and frame rates. Moreover, to our knowledge, no other tour provides the same rich set of interactions or visually complex environments.


Computer Vision and Pattern Recognition | 2006

Seamless Image Stitching of Scenes with Large Motions and Exposure Differences

Ashley M. Eden; Matthew Uyttendaele; Richard Szeliski

This paper presents a technique to automatically stitch multiple images at varying orientations and exposures to create a composite panorama that preserves the angular extent and dynamic range of the inputs. The main contribution of our method is that it allows for large exposure differences, large scene motion or other misregistrations between frames and requires no extra camera hardware. To do this, we introduce a two-step graph cut approach. The purpose of the first step is to fix the positions of moving objects in the scene. In the second step, we fill in the entire available dynamic range. We introduce data costs that encourage consistency and higher signal-to-noise ratios, and seam costs that encourage smooth transitions. Our method is simple to implement and effective. We demonstrate the effectiveness of our approach on several input sets with varying exposures and camera orientations.
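The seam costs described here penalize placing a transition where the overlapping frames disagree. As a toy stand-in for the paper's graph cut, the sketch below finds a single low-cost vertical seam through an overlap region with dynamic programming; a true min-cut handles arbitrary seam topology, so this is a simplification, and the names are assumptions.

```python
import numpy as np

def best_vertical_seam(overlap_a, overlap_b):
    """Find a low-cost vertical seam through the overlap of two images.

    overlap_a, overlap_b: (H, W, 3) float views of the same overlap region.
    Per-pixel cost is the color disagreement between frames, so the seam
    runs where the images agree. Toy DP stand-in for a graph cut.
    """
    cost = np.sum((overlap_a - overlap_b) ** 2, axis=2)   # (H, W)
    H, W = cost.shape
    acc = cost.copy()
    for y in range(1, H):
        left  = np.roll(acc[y - 1],  1); left[0]   = np.inf
        right = np.roll(acc[y - 1], -1); right[-1] = np.inf
        acc[y] += np.minimum(acc[y - 1], np.minimum(left, right))
    # Backtrack the cheapest path from bottom to top.
    seam = np.zeros(H, dtype=int)
    seam[-1] = int(np.argmin(acc[-1]))
    for y in range(H - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(W, x + 2)
        seam[y] = lo + int(np.argmin(acc[y, lo:hi]))
    return seam   # seam[y] = column where the transition crosses row y
```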


Computer Vision and Pattern Recognition | 2012

Real-time image-based 6-DOF localization in large-scale environments

Hyon Lim; Sudipta N. Sinha; Michael F. Cohen; Matthew Uyttendaele

We present a real-time approach for image-based localization within large scenes that have been reconstructed offline using structure from motion (SfM). From monocular video, our method continuously computes a precise 6-DOF camera pose, by efficiently tracking natural features and matching them to 3D points in the SfM point cloud. Our main contribution lies in efficiently interleaving a fast keypoint tracker that uses inexpensive binary feature descriptors with a new approach for direct 2D-to-3D matching. The 2D-to-3D matching avoids the need for online extraction of scale-invariant features. Instead, offline we construct an indexed database containing multiple DAISY descriptors per 3D point extracted at multiple scales. The key to the efficiency of our method lies in invoking DAISY descriptor extraction and matching sparingly during localization, and in distributing this computation over a window of successive frames. This enables the algorithm to run in real-time, without fluctuations in the latency over long durations. We evaluate the method in large indoor and outdoor scenes. Our algorithm runs at over 30 Hz on a laptop and at 12 Hz on a low-power, mobile computer suitable for onboard computation on a quadrotor micro aerial vehicle.
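The final step, recovering a pose from 2D-to-3D matches, is a standard perspective-n-point problem. A hedged sketch using OpenCV's RANSAC PnP solver is below; the keypoint tracking and DAISY matching that would produce these correspondences are assumed to exist upstream, and this is not the authors' code.

```python
import numpy as np
import cv2

def pose_from_matches(pts3d, pts2d, K):
    """Recover a 6-DOF camera pose from 2D-to-3D correspondences.

    pts3d: (N, 3) SfM points matched to the current frame
    pts2d: (N, 2) tracked keypoint locations in pixels
    K:     (3, 3) camera intrinsics
    Sketch only; the correspondences are assumed given.
    """
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float32), pts2d.astype(np.float32),
        K.astype(np.float32), None,
        iterationsCount=100, reprojectionError=3.0)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)    # rotation matrix from axis-angle vector
    return R, tvec                # world-to-camera rotation and translation
```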


International Conference on Computational Photography | 2011

Fast Poisson blending using multi-splines

Richard Szeliski; Matthew Uyttendaele; Drew Steedly

We present a technique for fast Poisson blending and gradient domain compositing. Instead of using a single piecewise-smooth offset map to perform the blending, we associate a separate map with each input source image. Each individual offset map is itself smoothly varying and can therefore be represented using a low-dimensional spline. The resulting linear system is much smaller than either the original Poisson system or the quadtree spline approximation of a single (unified) offset map. We demonstrate the speed and memory improvements available with our system and apply it to large panoramas. We also show how robustly modeling the multiplicative gain rather than the offset between overlapping images leads to improved results, and how adding a small amount of Laplacian pyramid blending improves the results in areas of inconsistent texture.
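The representational trick is that each per-image offset map is smooth enough to live on a coarse control grid, so the unknowns shrink from one per pixel to one per control point. The sketch below shows only that representation, expanding a control grid to full resolution with bilinear interpolation as a stand-in for the paper's spline basis; solving for the control values against the gradient constraints is omitted, and the names are assumptions.

```python
import numpy as np

def offset_from_control_grid(control, full_shape):
    """Expand a coarse control grid into a full-resolution offset map.

    control:    (ch, cw) per-image offset values at control points
    full_shape: (H, W) output resolution
    Bilinear interpolation stands in for the spline basis here.
    """
    ch, cw = control.shape
    H, W = full_shape
    ys = np.linspace(0, ch - 1, H)
    xs = np.linspace(0, cw - 1, W)
    y0 = np.clip(ys.astype(int), 0, ch - 2); fy = (ys - y0)[:, None]
    x0 = np.clip(xs.astype(int), 0, cw - 2); fx = (xs - x0)[None, :]
    c00 = control[np.ix_(y0,     x0)]
    c01 = control[np.ix_(y0,     x0 + 1)]
    c10 = control[np.ix_(y0 + 1, x0)]
    c11 = control[np.ix_(y0 + 1, x0 + 1)]
    return ((1 - fy) * (1 - fx) * c00 + (1 - fy) * fx * c01
            + fy * (1 - fx) * c10 + fy * fx * c11)

# The composite then adds each source's interpolated offset to its pixels:
# result[p] = source_k[p] + offset_k[p], for the source k selected at p.
```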


Computer Vision and Pattern Recognition | 2004

Probability models for high dynamic range imaging

Chris Pal; Richard Szeliski; Matthew Uyttendaele; Nebojsa Jojic

Methods for expanding the dynamic range of digital photographs by combining images taken at different exposures have recently received a lot of attention. Current techniques assume that the photometric transfer function of a given camera is the same (modulo an overall exposure change) for all the input images. Unfortunately, this is rarely the case with today's cameras, which may perform complex nonlinear color and intensity transforms on each picture. In this paper, we show how the use of probability models for the imaging system and weak prior models for the response functions enable us to estimate a different function for each image using only pixel intensity values. Our approach also allows us to characterize the uncertainty inherent in each pixel measurement. We can therefore produce statistically optimal estimates for the hidden variables in our model representing scene irradiance. We present results using this method to statistically characterize camera imaging functions and construct high-quality high dynamic range (HDR) images using only image pixel information.
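For context, the classical weighted merge that this work generalizes combines exposures through a known inverse response into a single irradiance estimate. The sketch below shows that baseline under one shared, known response; the paper's contribution is precisely to relax this, estimating a separate response per image with per-pixel uncertainty.

```python
import numpy as np

def merge_exposures(images, times, g, weight):
    """Classical weighted HDR merge, shown for context.

    images: list of (H, W) integer-valued exposures, pixel values 0..255
    times:  matching exposure times in seconds
    g:      (256,) inverse response, g[z] = log(irradiance * time) for value z
    weight: (256,) confidence per pixel value (low near 0 and 255)
    This baseline assumes one known response for all images.
    """
    num = np.zeros(images[0].shape)
    den = np.zeros(images[0].shape)
    for img, t in zip(images, times):
        w = weight[img]                       # per-pixel confidence
        num += w * (g[img] - np.log(t))       # per-pixel log irradiance
        den += w
    return np.exp(num / np.maximum(den, 1e-8))   # estimated scene irradiance
```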


European Conference on Computer Vision | 2006

Video and image bayesian demosaicing with a two color image prior

Eric P. Bennett; Matthew Uyttendaele; C. Lawrence Zitnick; Richard Szeliski; Sing Bing Kang

The demosaicing process converts single-CCD color representations of one color channel per pixel into full per-pixel RGB. We introduce a Bayesian technique for demosaicing Bayer color filter array patterns that is based on a statistically obtained two-color per-pixel image prior. By modeling all local color behavior as a linear combination of two fully specified RGB triples, we avoid color fringing artifacts while preserving sharp edges. Our grid-less, floating-point pixel location architecture can process both single images and multiple images from video within the same framework, with multiple images providing denser color samples and therefore better color reproduction with reduced aliasing. An initial clustering is performed to determine the underlying local two-color model surrounding each pixel. Using a product of Gaussians statistical model, the underlying linear blending ratio of the two representative colors at each pixel is estimated, while simultaneously providing noise reduction. Finally, we show that by sampling the image model at a finer resolution than the source images during reconstruction, our continuous demosaicing technique can super-resolve in a single step.
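The two-color prior reduces each pixel to a single unknown: the blend fraction between the two representative RGB triples found by clustering. Given the raw CFA samples available at and around a pixel, a closed-form least-squares estimate of that fraction falls out directly, as in the hypothetical sketch below; the paper's product-of-Gaussians formulation additionally models noise, which is omitted here.

```python
import numpy as np

def two_color_blend(samples, channels, c1, c2):
    """Estimate the blend fraction between two representative colors.

    samples:  observed raw values at a pixel (one per available CFA sample)
    channels: channel index (0=R, 1=G, 2=B) of each sample
    c1, c2:   (3,) representative RGB triples from local clustering
    Models each observation as alpha*c1[ch] + (1-alpha)*c2[ch] and solves
    for alpha in closed-form least squares; noise modeling is omitted.
    """
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    channels = np.asarray(channels)
    d = c1[channels] - c2[channels]                # per-sample slope
    r = np.asarray(samples, float) - c2[channels]  # per-sample residual
    alpha = float(d @ r) / max(float(d @ d), 1e-12)
    alpha = np.clip(alpha, 0.0, 1.0)
    return alpha * c1 + (1.0 - alpha) * c2         # reconstructed RGB triple
```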

Collaboration


Dive into Matthew Uyttendaele's collaborations.

Top Co-Authors

Chris Pal

École Polytechnique de Montréal
