Publication


Featured research published by Paul E. Debevec.


International Conference on Computer Graphics and Interactive Techniques | 1997

Recovering high dynamic range radiance maps from photographs

Paul E. Debevec; Jitendra Malik

We present a method of recovering high dynamic range radiance maps from photographs taken with conventional imaging equipment. In our method, multiple photographs of the scene are taken with different amounts of exposure. Our algorithm uses these differently exposed photographs to recover the response function of the imaging process, up to a factor of scale, using the assumption of reciprocity. With the known response function, the algorithm can fuse the multiple photographs into a single, high dynamic range radiance map whose pixel values are proportional to the true radiance values in the scene. We demonstrate our method on images acquired with both photochemical and digital imaging processes. We discuss how this work is applicable in many areas of computer graphics involving digitized photographs, including image-based modeling, image compositing, and image processing. Lastly, we demonstrate a few applications of having high dynamic range radiance maps, such as synthesizing realistic motion blur and simulating the response of the human visual system.
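
The fusion step described above reduces, per pixel, to a weighted average of log exposures. Below is a minimal NumPy sketch of that step, assuming the response curve g (mapping an 8-bit pixel value to log exposure) has already been recovered by the paper's least-squares procedure; the tent weighting and array layout are illustrative choices, not a reference implementation.

```python
import numpy as np

def fuse_exposures(images, exposure_times, g):
    """Fuse differently exposed 8-bit photographs into a log radiance map.

    images:         list of uint8 arrays with identical shape, one per exposure
    exposure_times: exposure durations (seconds), same order as images
    g:              length-256 array, g[z] = log exposure for pixel value z
    """
    z = np.stack(images).astype(np.int64)                 # (num_exposures, H, W)
    log_t = np.log(np.asarray(exposure_times, dtype=np.float64))[:, None, None]

    # Tent weighting: trust mid-range values, down-weight under/over-exposed pixels.
    w = np.minimum(z, 255 - z).astype(np.float64)

    num = np.sum(w * (g[z] - log_t), axis=0)
    den = np.sum(w, axis=0)
    return num / np.maximum(den, 1e-8)                    # ln E, up to a constant offset
```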


International Conference on Computer Graphics and Interactive Techniques | 1996

Modeling and rendering architecture from photographs: a hybrid geometry- and image-based approach

Paul E. Debevec; Camillo J. Taylor; Jitendra Malik

We present a new approach for modeling and rendering existing architectural scenes from a sparse set of still photographs. Our modeling approach, which combines both geometry-based and image-based techniques, has two components. The first component is a photogrammetric modeling method which facilitates the recovery of the basic geometry of the photographed scene. Our photogrammetric modeling approach is effective, convenient, and robust because it exploits the constraints that are characteristic of architectural scenes. The second component is a model-based stereo algorithm, which recovers how the real scene deviates from the basic model. By making use of the model, our stereo technique robustly recovers accurate depth from widely-spaced image pairs. Consequently, our approach can model large architectural environments with far fewer photographs than current image-based modeling approaches. For producing renderings, we present view-dependent texture mapping, a method of compositing multiple views of a scene that better simulates geometric detail on basic models. Our approach can be used to recover models for use in either geometry-based or image-based rendering systems. We present results that demonstrate our approach's ability to create realistic renderings of architectural scenes from viewpoints far from the original photographs. CR Descriptors: I.2.10 [Artificial Intelligence]: Vision and Scene Understanding - Modeling and recovery of physical attributes; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism - Color, shading, shadowing, and texture; I.4.8 [Image Processing]: Scene Analysis - Stereo; J.6 [Computer-Aided Engineering]: Computer-aided design (CAD).
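
As a rough illustration of the view-dependent texture mapping idea (not the paper's exact weighting scheme), the sketch below blends the colors a surface point receives from several photographs, favoring photographs whose viewing direction is closest to the novel one; the inverse-angle weighting is an assumed choice.

```python
import numpy as np

def blend_views(novel_dir, photo_dirs, photo_colors):
    """Blend per-point colors sampled from several source photographs.

    novel_dir:    (3,) unit vector from the surface point toward the novel camera
    photo_dirs:   (N, 3) unit vectors from the surface point toward each original camera
    photo_colors: (N, 3) color the surface point has in each photograph
    """
    cos_angles = np.clip(photo_dirs @ novel_dir, -1.0, 1.0)
    angles = np.arccos(cos_angles)
    weights = 1.0 / (angles + 1e-6)      # favor photographs taken from nearby viewpoints
    weights /= weights.sum()
    return weights @ photo_colors
```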


International Conference on Computer Graphics and Interactive Techniques | 1998

Rendering synthetic objects into real scenes: bridging traditional and image-based graphics with global illumination and high dynamic range photography

Paul E. Debevec

We present a method that uses measured scene radiance and global illumination in order to add new objects to light-based models with correct lighting. The method uses a high dynamic range image-based model of the scene, rather than synthetic light sources, to illuminate the new objects. To compute the illumination, the scene is considered as three components: the distant scene, the local scene, and the synthetic objects. The distant scene is assumed to be photometrically unaffected by the objects, obviating the need for reflectance model information. The local scene is endowed with estimated reflectance model information so that it can catch shadows and receive reflected light from the new objects. Renderings are created with a standard global illumination method by simulating the interaction of light amongst the three components. A differential rendering technique allows for good results to be obtained when only an estimate of the local scene reflectance properties is known. We apply the general method to the problem of rendering synthetic objects into real scenes. The light-based model is constructed from an approximate geometric model of the scene and by using a light probe to measure the incident illumination at the location of the synthetic objects. The global illumination solution is then composited into a photograph of the scene using the differential rendering technique. We conclude by discussing the relevance of the technique to recovering surface reflectance properties in uncontrolled lighting situations. Applications of the method include visual effects, interior design, and architectural visualization.
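
The differential rendering step can be summarized in a few lines. The sketch below follows the description above, rendering the local scene with and without the synthetic objects and adding the difference to the photograph; the mask-based handling of object pixels is a simplification of the compositing stage.

```python
import numpy as np

def differential_render(photo, render_with, render_without, object_mask):
    """Composite synthetic objects into a photograph by differential rendering.

    photo:          original photograph of the scene (linear radiance values)
    render_with:    global-illumination rendering of the local scene plus objects
    render_without: the same rendering without the synthetic objects
    object_mask:    boolean mask of pixels covered by the synthetic objects
    """
    # The difference captures shadows and interreflections cast by the objects,
    # so errors in the estimated local-scene reflectance largely cancel out.
    composite = photo + (render_with - render_without)
    # Where the objects themselves are visible, take the rendering directly.
    composite[object_mask] = render_with[object_mask]
    return np.clip(composite, 0.0, None)
```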


International Conference on Computer Graphics and Interactive Techniques | 2000

Acquiring the reflectance field of a human face

Paul E. Debevec; Tim Hawkins; Chris Tchou; Haarm-Pieter Duiker; Westley Sarokin; Mark Sagar

We present a method to acquire the reflectance field of a human face and use these measurements to render the face under arbitrary changes in lighting and viewpoint. We first acquire images of the face from a small set of viewpoints under a dense sampling of incident illumination directions using a light stage. We then construct a reflectance function image for each observed image pixel from its values over the space of illumination directions. From the reflectance functions, we can directly generate images of the face from the original viewpoints in any form of sampled or computed illumination. To change the viewpoint, we use a model of skin reflectance to estimate the appearance of the reflectance functions for novel viewpoints. We demonstrate the technique with synthetic renderings of a person's face under novel illumination and viewpoints.
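
Because light transport is linear, relighting from the captured reflectance field reduces to a weighted sum of the light-stage basis images. A small sketch, with assumed array shapes:

```python
import numpy as np

def relight(basis_images, novel_lighting):
    """Relight one viewpoint of the face from its sampled reflectance field.

    basis_images:   (num_lights, H, W, 3) images, one per light stage direction
    novel_lighting: (num_lights, 3) RGB intensity of the desired illumination,
                    sampled at the same directions
    """
    # Weighted sum over lighting directions, per color channel.
    return np.einsum('lhwc,lc->hwc', basis_images, novel_lighting)
```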


International Conference on Computer Graphics and Interactive Techniques | 1999

Inverse global illumination: recovering reflectance models of real scenes from photographs

Yizhou Yu; Paul E. Debevec; Jitendra Malik; Tim Hawkins

In this paper we present a method for recovering the reflectance properties of all surfaces in a real scene from a sparse set of photographs, taking into account both direct and indirect illumination. The result is a lighting-independent model of the scene's geometry and reflectance properties, which can be rendered with arbitrary modifications to structure and lighting via traditional rendering methods. Our technique models reflectance with a low-parameter reflectance model, and allows diffuse albedo to vary arbitrarily over surfaces while assuming that non-diffuse characteristics remain constant across particular regions. The method's input is a geometric model of the scene and a set of calibrated high dynamic range photographs taken with known direct illumination. The algorithm hierarchically partitions the scene into a polygonal mesh, and uses image-based rendering to construct estimates of both the radiance and irradiance of each patch from the photographic data. The algorithm computes the expected location of specular highlights, and then analyzes the highlight areas in the images by running a novel iterative optimization procedure to recover the diffuse and specular reflectance parameters for each region. Lastly, these parameters are used in constructing high-resolution diffuse albedo maps for each surface. The algorithm has been applied to both real and synthetic data, including a synthetic cubical room and a real meeting room. Re-renderings are produced using a global illumination system under both original and novel lighting, and with the addition of synthetic objects. Side-by-side comparisons show success at predicting the appearance of the scene under novel lighting conditions. CR Categories: I.2.10 [Artificial Intelligence]: Vision and Scene Understanding - Modeling and recovery of physical attributes; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism - Color, shading, shadowing, and texture; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism - Radiosity; I.4.8 [Image Processing]: Scene Analysis - Color, photometry, shading.
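
As a simplified illustration of the radiance/irradiance balance the method builds on (the full algorithm also recovers specular parameters through the iterative highlight optimization), a Lambertian patch's diffuse albedo follows directly from its estimated radiance and irradiance:

```python
import numpy as np

def diffuse_albedo(patch_radiance, patch_irradiance):
    """Rough per-patch diffuse albedo estimate for a Lambertian surface.

    For a purely diffuse patch, outgoing radiance L relates to incident
    irradiance E as L = (rho / pi) * E, so rho = pi * L / E. This is only
    the diffuse-only special case of the balance the paper's optimization uses.
    """
    return np.pi * patch_radiance / np.maximum(patch_irradiance, 1e-8)
```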


International Conference on Computer Graphics and Interactive Techniques | 2005

Performance relighting and reflectance transformation with time-multiplexed illumination

Andreas Wenger; Andrew Gardner; Chris Tchou; Jonas Unger; Tim Hawkins; Paul E. Debevec

We present a technique for capturing an actor's live-action performance in such a way that the lighting and reflectance of the actor can be designed and modified in postproduction. Our approach is to illuminate the subject with a sequence of time-multiplexed basis lighting conditions, and to record these conditions with a high-speed video camera so that many conditions are recorded in the span of the desired output frame interval. We investigate several lighting bases for representing the sphere of incident illumination using a set of discrete LED light sources, and we estimate and compensate for subject motion using optical flow and image warping based on a set of tracking frames inserted into the lighting basis. To composite the illuminated performance into a new background, we include a time-multiplexed matte within the basis. We also show that the acquired data enables time-varying surface normals, albedo, and ambient occlusion to be estimated, which can be used to transform the actor's reflectance to produce both subtle and stylistic effects.
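
Once the basis frames have been motion-compensated, producing a relit output frame is again a linear combination. A minimal sketch with assumed shapes (the warping and matting steps are left out):

```python
import numpy as np

def relight_performance(basis_frames, basis_weights):
    """Relight one output frame from time-multiplexed basis lighting.

    basis_frames:  (num_basis, H, W, 3) high-speed camera frames, each lit by one
                   basis condition and already warped toward the output time using
                   optical flow computed from the tracking frames
    basis_weights: (num_basis,) coefficients expressing the desired lighting
                   environment in the chosen basis
    """
    return np.tensordot(basis_weights, basis_frames, axes=1)
```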


International Conference on Computer Graphics and Interactive Techniques | 2007

Rendering for an interactive 360° light field display

Andrew Jones; Ian E. McDowall; Hideshi Yamada; Mark T. Bolas; Paul E. Debevec

We describe a set of rendering techniques for an autostereoscopic light field display able to present interactive 3D graphics to multiple simultaneous viewers 360 degrees around the display. The display consists of a high-speed video projector, a spinning mirror covered by a holographic diffuser, and FPGA circuitry to decode specially rendered DVI video signals. The display uses a standard programmable graphics card to render over 5,000 images per second of interactive 3D graphics, projecting 360-degree views with 1.25 degree separation up to 20 updates per second. We describe the system's projection geometry and its calibration process, and we present a multiple-center-of-projection rendering technique for creating perspective-correct images from arbitrary viewpoints around the display. Our projection technique allows correct vertical perspective and parallax to be rendered for any height and distance when these parameters are known, and we demonstrate this effect with interactive raster graphics using a tracking system to measure the viewer's height and distance. We further apply our projection technique to the display of photographed light fields with accurate horizontal and vertical parallax. We conclude with a discussion of the display's visual accommodation performance and discuss techniques for displaying color imagery.
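
A quick back-of-the-envelope check of the figures quoted above (view separation, update rate, and rendered images per second); the numbers, not the code, come from the abstract:

```python
# 1.25 degree view separation -> 360 / 1.25 = 288 distinct views per revolution.
# Updating all views 20 times per second requires 288 * 20 = 5,760 rendered
# images per second, consistent with "over 5,000 images per second" above.
view_separation_deg = 1.25
updates_per_second = 20

views_per_revolution = round(360 / view_separation_deg)        # 288
images_per_second = views_per_revolution * updates_per_second  # 5760

view_angles_deg = [i * view_separation_deg for i in range(views_per_revolution)]
print(views_per_revolution, images_per_second)
```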


International Conference on Computer Graphics and Interactive Techniques | 2004

High dynamic range imaging

Paul E. Debevec; Erik Reinhard; Greg Ward; Sumanta N. Pattanaik

Current display devices can display only a limited range of contrast and colors, which is one of the main reasons that most image acquisition, processing, and display techniques use no more than eight bits per color channel. This course outlines recent advances in high-dynamic-range imaging, from capture to display, that remove this restriction, thereby enabling images to represent the color gamut and dynamic range of the original scene rather than the limited subspace imposed by current monitor technology. This hands-on course teaches how high-dynamic-range images can be captured, the file formats available to store them, and the algorithms required to prepare them for display on low-dynamic-range display devices. The trade-offs at each stage, from capture to display, are assessed, allowing attendees to make informed choices about data-capture techniques, file formats, and tone-reproduction operators. The course also covers recent advances in image-based lighting, in which HDR images can be used to illuminate CG objects and realistically integrate them into real-world scenes. Through practical examples taken from photography and the film industry, it shows the vast improvements in image fidelity afforded by high-dynamic-range imaging.
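
As one concrete example of the tone-reproduction operators the course surveys (this particular global operator, due to Reinhard et al., is a standard choice rather than something specific to the course), an HDR luminance channel can be prepared for a low-dynamic-range display in a few lines:

```python
import numpy as np

def reinhard_global(luminance, key=0.18):
    """Reinhard et al. global tone-mapping operator for an HDR luminance channel.

    luminance: array of positive HDR luminance values
    key:       target average brightness of the tone-mapped image
    """
    log_avg = np.exp(np.mean(np.log(luminance + 1e-8)))   # log-average luminance
    scaled = key * luminance / log_avg                     # scale to the chosen key
    return scaled / (1.0 + scaled)                         # compress into [0, 1)
```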


Eurographics Symposium on Rendering Techniques | 2007

Rapid acquisition of specular and diffuse normal maps from polarized spherical gradient illumination

Wan-Chun Ma; Tim Hawkins; Pieter Peers; Charles-Félix Chabert; Malte Weiss; Paul E. Debevec

We estimate surface normal maps of an object from either its diffuse or specular reflectance using four spherical gradient illumination patterns. In contrast to traditional photometric stereo, the spherical patterns allow normals to be estimated simultaneously from any number of viewpoints. We present two polarized lighting techniques that allow the diffuse and specular normal maps of an object to be measured independently. For scattering materials, we show that the specular normal maps yield the best record of detailed surface shape while the diffuse normals deviate from the true surface normal due to subsurface scattering, and that this effect is dependent on wavelength. We show several applications of this acquisition technique. First, we capture normal maps of a facial performance simultaneously from several viewing positions using time-multiplexed illumination. Second, we show that high-resolution normal maps based on the specular component can be used with structured light 3D scanning to quickly acquire high-resolution facial surface geometry using off-the-shelf digital still cameras. Finally, we present a real-time shading model that uses independently estimated normal maps for the specular and diffuse color channels to reproduce some of the perceptually important effects of subsurface scattering.
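
A rough sketch of how per-pixel normals follow from the four illumination conditions (three gradients plus a constant, full-on pattern); the normalization details and the conversion of specular reflection vectors into surface normals are simplified here.

```python
import numpy as np

def normals_from_gradients(img_x, img_y, img_z, img_full):
    """Estimate per-pixel surface normals from spherical gradient illumination.

    img_x, img_y, img_z: single-channel (H, W) images lit by gradient patterns
                         that ramp intensity along the x, y, and z axes
    img_full:            single-channel (H, W) image lit by the constant pattern
    The ratio of each gradient image to the full-on image gives the centroid of
    the reflectance lobe along that axis; normalizing the centroid vector yields
    the normal for the diffuse case (for the specular case it gives the
    reflection vector, which must then be halved back toward the view direction).
    """
    eps = 1e-8
    cx = 2.0 * img_x / (img_full + eps) - 1.0
    cy = 2.0 * img_y / (img_full + eps) - 1.0
    cz = 2.0 * img_z / (img_full + eps) - 1.0
    n = np.stack([cx, cy, cz], axis=-1)
    return n / (np.linalg.norm(n, axis=-1, keepdims=True) + eps)
```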


International Conference on Computer Graphics and Interactive Techniques | 2009

Dynamic shape capture using multi-view photometric stereo

Daniel Vlasic; Pieter Peers; Ilya Baran; Paul E. Debevec; Jovan Popović; Szymon Rusinkiewicz; Wojciech Matusik

We describe a system for high-resolution capture of moving 3D geometry, beginning with dynamic normal maps from multiple views. The normal maps are captured using active shape-from-shading (photometric stereo), with a large lighting dome providing a series of novel spherical lighting configurations. To compensate for low-frequency deformation, we perform multi-view matching and thin-plate spline deformation on the initial surfaces obtained by integrating the normal maps. Next, the corrected meshes are merged into a single mesh using a volumetric method. The final output is a set of meshes, which were impossible to produce with previous methods. The meshes exhibit details on the order of a few millimeters, and represent the performance over human-size working volumes at a temporal resolution of 60Hz.
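
The per-pixel core of photometric stereo that this capture pipeline builds on can be sketched in a few lines; note that the actual system uses spherical lighting configurations produced by a lighting dome rather than the idealized distant point lights assumed below.

```python
import numpy as np

def photometric_stereo(intensities, light_dirs):
    """Classic Lambertian photometric stereo for a single pixel.

    intensities: (num_lights,) observed intensities under each lighting condition
    light_dirs:  (num_lights, 3) unit vectors toward the (distant) light sources
    Solves intensities = light_dirs @ (albedo * normal) in the least-squares
    sense, then separates the albedo (magnitude) from the unit normal (direction).
    """
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)
    albedo = np.linalg.norm(g)
    normal = g / (albedo + 1e-8)
    return normal, albedo
```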

Collaboration


Dive into Paul E. Debevec's collaborations.

Top Co-Authors

Andrew Jones (University of Colorado Boulder)
Tim Hawkins (University of Southern California)
Graham Fyffe (University of Southern California)
Jay Busch (University of Southern California)
Xueming Yu (University of Southern California)
Mark T. Bolas (University of Southern California)
Oleg Alexander (University of Southern California)
Chris Tchou (University of Southern California)