
Publications


Featured research published by Chris Tchou.


International Conference on Computer Graphics and Interactive Techniques | 2000

Acquiring the reflectance field of a human face

Paul E. Debevec; Tim Hawkins; Chris Tchou; Haarm-Pieter Duiker; Westley Sarokin; Mark Sagar

We present a method to acquire the reflectance field of a human face and use these measurements to render the face under arbitrary changes in lighting and viewpoint. We first acquire images of the face from a small set of viewpoints under a dense sampling of incident illumination directions using a light stage. We then construct a reflectance function image for each observed image pixel from its values over the space of illumination directions. From the reflectance functions, we can directly generate images of the face from the original viewpoints in any form of sampled or computed illumination. To change the viewpoint, we use a model of skin reflectance to estimate the appearance of the reflectance functions for novel viewpoints. We demonstrate the technique with synthetic renderings of a person's face under novel illumination and viewpoints.
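The relighting operation this abstract describes is linear: once a reflectance field is captured as one basis image per illumination direction, an image under any novel lighting is just a weighted sum of those basis images. A minimal NumPy sketch of that recombination step (array names and shapes are illustrative, not from the paper):

```python
import numpy as np

def relight(basis_images, light_weights):
    """Relight a scene from a captured reflectance field.

    basis_images: (L, H, W, 3) array, one image per illumination direction
    light_weights: (L,) intensities of a novel lighting environment,
                   sampled at the same L directions
    Returns the (H, W, 3) image under the novel illumination.
    """
    return np.tensordot(light_weights, basis_images, axes=1)

# Toy example: two light directions, both turned fully on
basis = np.zeros((2, 4, 4, 3))
basis[0] += 0.2   # image as lit from direction 0
basis[1] += 0.3   # image as lit from direction 1
img = relight(basis, np.array([1.0, 1.0]))
```

In practice the weights would come from sampling a light probe of the target environment at the light-stage directions.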


International Conference on Computer Graphics and Interactive Techniques | 2005

Performance relighting and reflectance transformation with time-multiplexed illumination

Andreas Wenger; Andrew Gardner; Chris Tchou; Jonas Unger; Tim Hawkins; Paul E. Debevec

We present a technique for capturing an actor's live-action performance in such a way that the lighting and reflectance of the actor can be designed and modified in postproduction. Our approach is to illuminate the subject with a sequence of time-multiplexed basis lighting conditions, and to record these conditions with a high-speed video camera so that many conditions are recorded in the span of the desired output frame interval. We investigate several lighting bases for representing the sphere of incident illumination using a set of discrete LED light sources, and we estimate and compensate for subject motion using optical flow and image warping based on a set of tracking frames inserted into the lighting basis. To composite the illuminated performance into a new background, we include a time-multiplexed matte within the basis. We also show that the acquired data enables time-varying surface normals, albedo, and ambient occlusion to be estimated, which can be used to transform the actor's reflectance to produce both subtle and stylistic effects.
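Because each output frame interval contains one high-speed frame per basis condition, postproduction relighting starts by regrouping the frame stream by basis index and then recombining each group with new light weights. A rough sketch of just that bookkeeping step, ignoring the optical-flow motion compensation and tracking frames the abstract mentions (all names are illustrative):

```python
import numpy as np

def demultiplex(frames, num_basis):
    """Group a high-speed frame stream by basis lighting condition.

    frames: (T, H, W) stream recorded at num_basis x the output frame rate
    Returns (T // num_basis, num_basis, H, W): for each output frame
    interval, one frame per basis condition.
    """
    t = (len(frames) // num_basis) * num_basis   # drop any partial interval
    return frames[:t].reshape(-1, num_basis, *frames.shape[1:])

def relight_performance(grouped, weights):
    """Recombine the basis frames of each interval with new light weights."""
    return np.tensordot(grouped, weights, axes=([1], [0]))

frames = np.random.rand(12, 8, 8)              # 12 high-speed frames
grouped = demultiplex(frames, num_basis=4)     # 3 output intervals x 4 bases
out = relight_performance(grouped, np.array([0.25, 0.25, 0.25, 0.25]))
```

With equal weights this simply averages the four basis conditions per interval; a designed lighting setup would use unequal, possibly per-channel weights.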


International Conference on Computer Graphics and Interactive Techniques | 2002

A lighting reproduction approach to live-action compositing

Paul E. Debevec; Andreas Wenger; Chris Tchou; Andrew Gardner; Jamie Waese; Tim Hawkins

We describe a process for compositing a live performance of an actor into a virtual set wherein the actor is consistently illuminated by the virtual environment. The Light Stage used in this work is a two-meter sphere of inward-pointing RGB light-emitting diodes focused on the actor, where each light can be set to an arbitrary color and intensity to replicate a real-world or virtual lighting environment. We implement a digital two-camera infrared matting system to composite the actor into the background plate of the environment without affecting the visible-spectrum illumination on the actor. The color response of the system is calibrated to produce correct color renditions of the actor as illuminated by the environment. We demonstrate moving-camera composites of actors into real-world environments and virtual sets such that the actor is properly illuminated by the environment into which they are composited.


Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa | 2004

Direct HDR capture of the sun and sky

Jessi Stumpfel; Chris Tchou; Andrew Jones; Tim Hawkins; Andreas Wenger; Paul E. Debevec

We present a technique for capturing the extreme dynamic range of natural illumination environments that include the sun and sky, which has presented a challenge for traditional high dynamic range photography processes. We find that, through careful selection of exposure times, aperture, and neutral density filters, this full range can be covered in seven exposures with a standard digital camera. We discuss the particular calibration issues, such as lens vignetting, infrared sensitivity, and spectral transmission of neutral density filters, which must be addressed. We present an adaptive exposure range adjustment technique for minimizing the number of exposures necessary. We demonstrate our results by showing time-lapse renderings of a complex scene illuminated by high-resolution, high dynamic range natural illumination environments.
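The merging step behind such bracketed captures is a weighted average of per-exposure radiance estimates, discounting under- and over-exposed pixels. A minimal sketch in the spirit of standard HDR recovery (not the paper's exact pipeline), assuming a linear camera response and folding aperture and ND-filter attenuation into the exposure times:

```python
import numpy as np

def merge_hdr(exposures, times):
    """Merge bracketed exposures into a radiance map.

    exposures: (N, H, W) linear pixel values in [0, 1]
    times: (N,) effective exposure times (shutter x aperture x ND factor)
    Uses a hat weight that discounts near-black and near-saturated pixels.
    """
    exposures = np.asarray(exposures, dtype=float)
    w = 1.0 - np.abs(2.0 * exposures - 1.0)          # hat weighting
    w = np.maximum(w, 1e-6)                          # avoid divide-by-zero
    radiance = exposures / np.asarray(times)[:, None, None]
    return (w * radiance).sum(axis=0) / w.sum(axis=0)

# Two exposures of a scene with true radiance 0.5 (arbitrary units)
e1 = np.full((4, 4), 0.5)    # 1.0 s effective exposure
e2 = np.full((4, 4), 0.25)   # 0.5 s effective exposure
hdr = merge_hdr([e1, e2], [1.0, 0.5])
```

Real cameras need the response-curve calibration, vignetting correction, and IR leakage handling the abstract describes before the pixel values can be treated as linear.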


Eurographics Symposium on Rendering Techniques | 2004

Animatable facial reflectance fields

Tim Hawkins; Andreas Wenger; Chris Tchou; Andrew Gardner; Fredrik Göransson; Paul E. Debevec

We present a technique for creating an animatable image-based appearance model of a human face, able to capture appearance variation over changing facial expression, head pose, view direction, and lighting condition. Our capture process makes use of a specialized lighting apparatus designed to rapidly illuminate the subject sequentially from many different directions in just a few seconds. For each pose, the subject remains still while six video cameras capture their appearance under each of the directions of lighting. We repeat this process for approximately 60 different poses, capturing different expressions, visemes, head poses, and eye positions. The images for each of the poses and camera views are registered to each other semi-automatically with the help of fiducial markers. The result is a model which can be rendered realistically under any linear blend of the captured poses and under any desired lighting condition by warping, scaling, and blending data from the original images. Finally, we show how to drive the model with performance capture data, where the pose is not necessarily a linear combination of the original captured poses.


Eurographics Symposium on Rendering Techniques | 2001

Real-Time High Dynamic Range Texture Mapping

Jonathan Cohen; Chris Tchou; Tim Hawkins; Paul E. Debevec

This paper presents a technique for representing and displaying high-dynamic-range texture maps (HDRTMs) using current graphics hardware. Dynamic range in real-world environments often far exceeds the range representable in 8-bit per-channel texture maps. The increased realism afforded by a high-dynamic-range representation provides improved fidelity and expressiveness for interactive visualization of image-based models. Our technique allows for real-time rendering of scenes with arbitrary dynamic range, limited only by available texture memory. In our technique, high-dynamic-range textures are decomposed into sets of 8-bit textures. These 8-bit textures are dynamically reassembled by the graphics hardware's programmable multitexturing system or using multipass techniques and framebuffer image processing. These operations allow the exposure level of the texture to be adjusted continuously and arbitrarily at the time of rendering, correctly accounting for the gamma curve and dynamic range restrictions of the display device. Further, for any given exposure only two 8-bit textures must be resident in texture memory simultaneously. We present implementation details of this technique on various 3D graphics hardware architectures. We demonstrate several applications, including high-dynamic-range panoramic viewing with simulated auto-exposure, real-time radiance environment mapping, and simulated Fresnel reflection.
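The decomposition idea can be illustrated offline: quantize each HDR texel to 16 bits, store the high and low bytes as two 8-bit textures, and reassemble them at display time as high*256 + low, scaled by the current exposure. This CPU-side sketch only demonstrates the split/recombine arithmetic; the paper performs the reassembly in graphics hardware, and the specific 16-bit scheme here is one illustrative choice:

```python
import numpy as np

def decompose(hdr, scale=65535.0):
    """Split an HDR texture into two 8-bit textures (high and low bytes)."""
    q = np.clip(np.round(hdr / hdr.max() * scale), 0, scale).astype(np.uint16)
    return (q >> 8).astype(np.uint8), (q & 0xFF).astype(np.uint8), hdr.max()

def reassemble(hi, lo, peak, exposure=1.0, scale=65535.0):
    """Recombine the 8-bit textures at a given exposure, clamped for display."""
    value = (hi.astype(float) * 256 + lo.astype(float)) / scale * peak
    return np.clip(value * exposure, 0.0, 1.0)

hdr = np.array([[0.001, 1.0], [10.0, 100.0]])     # ~5 orders of magnitude
hi, lo, peak = decompose(hdr)
dim = reassemble(hi, lo, peak, exposure=0.01)     # expose for the bright end
bright = reassemble(hi, lo, peak, exposure=100.0) # expose for the dark end
```

Only the two byte planes need to be resident at once, matching the paper's observation that a given exposure requires just two 8-bit textures in memory.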


International Conference on Computer Graphics and Interactive Techniques | 2004

Unlighting the Parthenon

Chris Tchou; Jessi Stumpfel; Per Einarsson; Marcos Fajardo; Paul E. Debevec

We present a method that extends techniques in [Yu and Malik 1998] and [Debevec 1998] to estimate the surface colors of a complex scene with diffuse surfaces lit by natural outdoor illumination. Given a model of the scene’s geometry, a set of photographs of the scene taken under natural illumination, and corresponding measurements of the illumination, we can calculate the spatially-varying diffuse surface reflectance. The process employs a simple iterative inverse global illumination technique to compute the surface colors for the scene which, when rendered under the recorded illumination, best reproduce the appearance in the photographs. The results can then be used to render the scene under novel illumination.
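The iterative inverse-rendering loop this abstract describes can be summarized as a fixed-point update: render the scene with the current albedo estimate under the recorded illumination, then scale each albedo by the ratio of the photographed value to the rendered value. A schematic sketch with a trivial stand-in for the real global-illumination renderer (all names and the toy forward model are illustrative):

```python
import numpy as np

def solve_albedo(photo, render_fn, iters=20):
    """Iteratively estimate diffuse albedo so the rendering matches the photo.

    photo: (N,) observed pixel values under the recorded illumination
    render_fn: maps an (N,) albedo estimate to an (N,) rendered image
    """
    albedo = np.full_like(photo, 0.5)
    for _ in range(iters):
        rendered = render_fn(albedo)
        albedo *= photo / np.maximum(rendered, 1e-8)  # multiplicative update
        albedo = np.clip(albedo, 0.0, 1.0)
    return albedo

# Toy forward model: direct light plus a little interreflected light
def toy_render(albedo, direct=1.0):
    return albedo * (direct + 0.2 * albedo.mean())

true_albedo = np.array([0.2, 0.5, 0.8])
photo = toy_render(true_albedo)
est = solve_albedo(photo, toy_render)
```

The multiplicative form keeps the estimate consistent with the photographs even though each surface's rendered value depends on every other surface's albedo through interreflection.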


International Conference on Computer Graphics and Interactive Techniques | 2003

Assembling the sculptures of the Parthenon

Jessi Stumpfel; Chris Tchou; Tim Hawkins; Paul E. Debevec; Jonathan Cohen; Andrew Jones; Brian Emerson; Philippe Martinez; Tomas Lochman

Although the Parthenon has stood on the Athenian Acropolis for nearly 2,500 years, its sculptural decorations have been scattered to museums around the world. Many of its sculptures have been damaged or lost. Fortunately, most of the decoration survives through drawings, descriptions, and casts. A component of our Parthenon Project has been to assemble digital models of the sculptures and virtually reunite them with the Parthenon. This sketch details our effort to digitally record the Parthenon sculpture collection in the Basel Skulpturhalle museum, which exhibits plaster casts of almost all of the existing pediments, metopes, and frieze. Our techniques have been designed to work as quickly as possible and at low cost.


International Conference on Computer Graphics and Interactive Techniques | 2004

Postproduction re-illumination of live action using interleaved lighting

Andrew Gardner; Chris Tchou; Andreas Wenger; Paul E. Debevec; Tim Hawkins

In this work, we present a technique for capturing a time-varying human performance in such a way that it can be re-illuminated in postproduction. The key idea is to illuminate the subject with a variety of rapidly changing time-multiplexed basis lighting conditions, and to record these lighting conditions with a fast enough video camera so that several or many different basis lighting conditions are recorded during the span of the final video’s desired frame rate. In this poster we present two versions of such a system and propose plans for creating a complete, production-ready device.


International Conference on Computer Graphics and Interactive Techniques | 2003

Linear light source reflectometry

Andrew Gardner; Chris Tchou; Tim Hawkins; Paul E. Debevec

Collaborations


Dive into Chris Tchou's collaborations.

Top Co-Authors

Paul E. Debevec, University of Southern California
Tim Hawkins, University of Southern California
Andrew Gardner, University of Southern California
Andreas Wenger, University of Southern California
Jessi Stumpfel, University of Southern California
Andrew Jones, University of Colorado Boulder
Marcos Fajardo, University of Southern California
Per Einarsson, University of Southern California
Philippe Martinez, École Normale Supérieure