Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Tim Hawkins is active.

Publication


Featured research published by Tim Hawkins.


International Conference on Computer Graphics and Interactive Techniques | 2005

Performance relighting and reflectance transformation with time-multiplexed illumination

Andreas Wenger; Andrew Gardner; Chris Tchou; Jonas Unger; Tim Hawkins; Paul E. Debevec

We present a technique for capturing an actor's live-action performance in such a way that the lighting and reflectance of the actor can be designed and modified in postproduction. Our approach is to illuminate the subject with a sequence of time-multiplexed basis lighting conditions, and to record these conditions with a high-speed video camera so that many conditions are recorded in the span of the desired output frame interval. We investigate several lighting bases for representing the sphere of incident illumination using a set of discrete LED light sources, and we estimate and compensate for subject motion using optical flow and image warping based on a set of tracking frames inserted into the lighting basis. To composite the illuminated performance into a new background, we include a time-multiplexed matte within the basis. We also show that the acquired data enables time-varying surface normals, albedo, and ambient occlusion to be estimated, which can be used to transform the actor's reflectance to produce both subtle and stylistic effects.
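
The relighting step this abstract describes reduces to a weighted sum of the basis-lit frames. Below is a minimal numpy sketch of that step, assuming the basis frames are already motion-compensated and linearized; the function name and array layout are illustrative, and computing the weights from a target lighting environment is not shown.

```python
import numpy as np

def relight(basis_frames: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Relight one output frame as a weighted sum of basis-lit frames.

    basis_frames : (N, H, W, 3) linear-radiance images, one per basis
                   lighting condition, already motion-compensated.
    weights      : (N,) coefficients of the target illumination expressed
                   in the same lighting basis.
    """
    return np.tensordot(weights, basis_frames, axes=1)  # (H, W, 3)

# Toy usage: 4 basis conditions on a tiny 2x2 image.
rng = np.random.default_rng(0)
frames = rng.random((4, 2, 2, 3))
target_weights = np.array([0.5, 0.2, 0.2, 0.1])
out = relight(frames, target_weights)
print(out.shape)  # (2, 2, 3)
```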


Eurographics Symposium on Rendering Techniques | 2007

Rapid acquisition of specular and diffuse normal maps from polarized spherical gradient illumination

Wan-Chun Ma; Tim Hawkins; Pieter Peers; Charles-Félix Chabert; Malte Weiss; Paul E. Debevec

We estimate surface normal maps of an object from either its diffuse or specular reflectance using four spherical gradient illumination patterns. In contrast to traditional photometric stereo, the spherical patterns allow normals to be estimated simultaneously from any number of viewpoints. We present two polarized lighting techniques that allow the diffuse and specular normal maps of an object to be measured independently. For scattering materials, we show that the specular normal maps yield the best record of detailed surface shape while the diffuse normals deviate from the true surface normal due to subsurface scattering, and that this effect is dependent on wavelength. We show several applications of this acquisition technique. First, we capture normal maps of a facial performance simultaneously from several viewing positions using time-multiplexed illumination. Second, we show that high-resolution normal maps based on the specular component can be used with structured light 3D scanning to quickly acquire high-resolution facial surface geometry using off-the-shelf digital still cameras. Finally, we present a real-time shading model that uses independently estimated normal maps for the specular and diffuse color channels to reproduce some of the perceptually important effects of subsurface scattering.
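
The normal estimate from the four spherical patterns can be written as a simple per-pixel ratio. The sketch below shows that ratio-based estimate under assumed pattern definitions (gradients of the form (w_i + 1)/2 plus a constant pattern); the polarization handling and the conversion from a specular reflection vector to a normal are only noted in comments.

```python
import numpy as np

def gradient_normals(Lx, Ly, Lz, Lc, eps=1e-6):
    """Per-pixel direction from four spherical gradient images.

    Lx, Ly, Lz : images lit by gradient patterns (w_i + 1)/2 along x, y, z.
    Lc         : image lit by the constant (full-on) pattern.
    Returns unit vectors of shape (H, W, 3).  For the diffuse component this
    approximates the surface normal; for the specular component it is the
    reflection vector, which must be combined with the view direction to
    recover the normal.
    """
    v = np.stack([2.0 * Lx / (Lc + eps) - 1.0,
                  2.0 * Ly / (Lc + eps) - 1.0,
                  2.0 * Lz / (Lc + eps) - 1.0], axis=-1)
    return v / (np.linalg.norm(v, axis=-1, keepdims=True) + eps)
```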


International Conference on Computer Graphics and Interactive Techniques | 2002

A lighting reproduction approach to live-action compositing

Paul E. Debevec; Andreas Wenger; Chris Tchou; Andrew Gardner; Jamie Waese; Tim Hawkins

We describe a process for compositing a live performance of an actor into a virtual set wherein the actor is consistently illuminated by the virtual environment. The Light Stage used in this work is a two-meter sphere of inward-pointing RGB light emitting diodes focused on the actor, where each light can be set to an arbitrary color and intensity to replicate a real-world or virtual lighting environment. We implement a digital two-camera infrared matting system to composite the actor into the background plate of the environment without affecting the visible-spectrum illumination on the actor. The color response of the system is calibrated to produce correct color renditions of the actor as illuminated by the environment. We demonstrate moving-camera composites of actors into real-world environments and virtual sets such that the actor is properly illuminated by the environment into which they are composited.
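
One way to drive such a light sphere from a captured environment map is to bin the map's pixels by their nearest light direction and accumulate radiance weighted by solid angle. The sketch below illustrates this idea; it is an assumed nearest-light binning scheme, not necessarily the calibration procedure used in the paper.

```python
import numpy as np

def led_colors(env_latlong: np.ndarray, led_dirs: np.ndarray) -> np.ndarray:
    """Assign RGB intensities to discrete lights so they approximate an HDR
    environment map: each environment-map pixel contributes its radiance
    (weighted by solid angle) to the nearest light direction.

    env_latlong : (H, W, 3) HDR latitude-longitude environment map.
    led_dirs    : (N, 3) unit direction vectors of the lights.
    Returns (N, 3) summed radiance per light (unnormalized).
    """
    H, W, _ = env_latlong.shape
    theta = (np.arange(H) + 0.5) / H * np.pi            # polar angle
    phi = (np.arange(W) + 0.5) / W * 2.0 * np.pi        # azimuth
    t, p = np.meshgrid(theta, phi, indexing="ij")
    dirs = np.stack([np.sin(t) * np.cos(p),
                     np.sin(t) * np.sin(p),
                     np.cos(t)], axis=-1)                # (H, W, 3)
    solid_angle = (np.pi / H) * (2.0 * np.pi / W) * np.sin(t)  # per pixel

    nearest = np.argmax(dirs.reshape(-1, 3) @ led_dirs.T, axis=1)
    out = np.zeros((led_dirs.shape[0], 3))
    np.add.at(out, nearest,
              env_latlong.reshape(-1, 3) * solid_angle.reshape(-1, 1))
    return out
```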


International Conference on Computer Graphics and Interactive Techniques | 2008

Facial performance synthesis using deformation-driven polynomial displacement maps

Wan-Chun Ma; Andrew Jones; Jen-Yuan Chiang; Tim Hawkins; Sune Frederiksen; Pieter Peers; Marko Vukovic; Ming Ouhyoung; Paul E. Debevec

We present a novel method for acquisition, modeling, compression, and synthesis of realistic facial deformations using polynomial displacement maps. Our method consists of an analysis phase where the relationship between motion capture markers and detailed facial geometry is inferred, and a synthesis phase where novel detailed animated facial geometry is driven solely by a sparse set of motion capture markers. For analysis, we record the actor wearing facial markers while performing a set of training expression clips. We capture real-time high-resolution facial deformations, including dynamic wrinkle and pore detail, using interleaved structured light 3D scanning and photometric stereo. Next, we compute displacements between a neutral mesh driven by the motion capture markers and the high-resolution captured expressions. These geometric displacements are stored in a polynomial displacement map which is parameterized according to the local deformations of the motion capture dots. For synthesis, we drive the polynomial displacement map with new motion capture data. This allows the recreation of large-scale muscle deformation, medium and fine wrinkles, and dynamic skin pore detail. Applications include the compression of existing performance data and the synthesis of new performances. Our technique is independent of the underlying geometry capture system and can be used to automatically generate high-frequency wrinkle and pore details on top of many existing facial animation systems.
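
A polynomial displacement map can be pictured as a small set of per-texel coefficient images combined with polynomial terms of the local coarse deformation. The sketch below evaluates such a map; the biquadratic basis and the two-parameter deformation input are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def eval_pdm(coeff_maps: np.ndarray, d1: np.ndarray, d2: np.ndarray) -> np.ndarray:
    """Evaluate a polynomial displacement map.

    coeff_maps : (6, H, W) per-texel coefficients for the assumed biquadratic
                 basis [1, d1, d2, d1^2, d1*d2, d2^2].
    d1, d2     : (H, W) local coarse-deformation parameters derived from the
                 motion-capture markers (e.g. in-plane strains; illustrative).
    Returns (H, W) fine-scale displacement (e.g. along the surface normal).
    """
    basis = np.stack([np.ones_like(d1), d1, d2, d1 * d1, d1 * d2, d2 * d2])
    return np.einsum("khw,khw->hw", coeff_maps, basis)
```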


Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa | 2004

Direct HDR capture of the sun and sky

Jessi Stumpfel; Chris Tchou; Andrew Jones; Tim Hawkins; Andreas Wenger; Paul E. Debevec

We present a technique for capturing the extreme dynamic range of natural illumination environments that include the sun and sky, which has presented a challenge for traditional high dynamic range photography processes. We find that through careful selection of exposure times, aperture, and neutral density filters, this full range can be covered in seven exposures with a standard digital camera. We discuss the particular calibration issues such as lens vignetting, infrared sensitivity, and spectral transmission of neutral density filters which must be addressed. We present an adaptive exposure range adjustment technique for minimizing the number of exposures necessary. We demonstrate our results by showing time-lapse renderings of a complex scene illuminated by high-resolution, high dynamic range natural illumination environments.
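
Once the bracketed exposures are linearized, assembling them into a single radiance image is a weighted average over exposures. The sketch below shows a generic merge with a hat weighting function and saturation rejection; the weighting choice and the saturation threshold are assumptions, not the paper's calibration pipeline.

```python
import numpy as np

def merge_exposures(images, exposure_times, saturation=0.98):
    """Merge linearized exposures into a single HDR radiance image.

    images         : list of (H, W, 3) float arrays in [0, 1], already
                     linearized (inverse camera response applied).
    exposure_times : matching list of effective exposure times in seconds
                     (with any neutral-density attenuation folded in).
    A simple hat weighting de-emphasizes under- and over-exposed pixels.
    """
    num = 0.0
    den = 0.0
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)        # hat weight, peak at mid-grey
        w = np.where(img >= saturation, 0.0, w)  # drop clipped pixels
        num = num + w * (img / t)
        den = den + w
    return num / np.maximum(den, 1e-8)
```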


International Conference on Computer Graphics and Interactive Techniques | 2008

Practical modeling and acquisition of layered facial reflectance

Abhijeet Ghosh; Tim Hawkins; Pieter Peers; Sune Frederiksen; Paul E. Debevec

We present a practical method for modeling layered facial reflectance consisting of specular reflectance, single scattering, and shallow and deep subsurface scattering. We estimate parameters of appropriate reflectance models for each of these layers from just 20 photographs recorded in a few seconds from a single viewpoint. We extract spatially-varying specular reflectance and single-scattering parameters from polarization-difference images under spherical and point source illumination. Next, we employ direct-indirect separation to decompose the remaining multiple scattering observed under cross-polarization into shallow and deep scattering components to model the light transport through multiple layers of skin. Finally, we match appropriate diffusion models to the extracted shallow and deep scattering components for different regions on the face. We validate our technique by comparing renderings of subjects to reference photographs recorded from novel viewpoints and under novel illumination conditions.
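
The specular/subsurface split from polarization-difference imaging can be summarized in two lines. The sketch below assumes a polarized source with parallel- and cross-polarized camera images; the factor of two for the depolarized component is the usual approximation and is stated here as an assumption.

```python
import numpy as np

def separate_specular(parallel: np.ndarray, cross: np.ndarray):
    """Split reflectance into specular and subsurface components using a
    polarized source and a polarizer on the camera.

    parallel : (H, W, 3) image with the camera polarizer parallel to the
               source (specular plus roughly half of the depolarized
               subsurface light).
    cross    : (H, W, 3) image with the polarizer crossed (subsurface only,
               again roughly half of the depolarized light).
    """
    specular = np.clip(parallel - cross, 0.0, None)  # polarization difference
    subsurface = 2.0 * cross                         # both halves of the
                                                     # depolarized component
    return specular, subsurface
```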


IEEE Virtual Reality Conference | 2003

Digital reunification of the parthenon and its sculptures

Jessi Stumpfel; Christopher Tchou; Nathan Yun; Philippe Martinez; Tim Hawkins; Andrew Jones; Brian Emerson; Paul E. Debevec

The location, condition, and number of the Parthenon sculptures present a considerable challenge to archeologists and researchers studying this monument. Although the Parthenon proudly stands on the Athenian Acropolis after nearly 2,500 years, many of its sculptures have been damaged or lost. Since the end of the 18th century, its surviving sculptural decorations have been scattered to museums around the world. We propose a strategy for digitally capturing a large number of sculptures while minimizing impact on site and working under time and resource constraints. Our system employs a custom structured light scanner and adapted techniques for organizing, aligning and merging the data. In particular this paper details our effort to digitally record the Parthenon sculpture collection in the Basel Skulpturhalle museum, which exhibits plaster casts of most of the known existing pediments, metopes, and frieze. We demonstrate our results by virtually placing the scanned sculptures on the Parthenon.


Eurographics | 2003

Capturing and rendering with incident light fields

Jonas Unger; Andreas Wenger; Tim Hawkins; Andrew Gardner; Paul E. Debevec

This paper presents a process for capturing spatially and directionally varying illumination from a real-world scene and using this lighting to illuminate computer-generated objects. We use two devices for capturing such illumination. In the first we photograph an array of mirrored spheres in high dynamic range to capture the spatially varying illumination. In the second, we obtain higher resolution data by capturing images with a high dynamic range omnidirectional camera as it traverses across a plane. For both methods we apply the light field technique to extrapolate the incident illumination to a volume. We render computer-generated objects as illuminated by this captured illumination using a custom shader within an existing global illumination rendering system. To demonstrate our technique we capture several spatially-varying lighting environments with spotlights, shadows, and dappled lighting and use them to illuminate synthetic scenes. We also show comparisons to real objects under the same illumination.
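
Querying a captured incident light field at a point on the capture plane amounts to interpolating between the nearest captured environment maps. The sketch below does a bilinear lookup over a regular capture grid; the data layout and function interface are illustrative assumptions.

```python
import numpy as np

def sample_incident_light(light_field, grid_x, grid_y, x, y):
    """Query a captured incident light field at a point on the capture plane.

    light_field    : (NX, NY, H, W, 3) environment maps captured on a regular
                     grid of positions in the plane.
    grid_x, grid_y : 1D arrays of the grid sample coordinates (ascending).
    x, y           : query position; bilinearly interpolates the four nearest
                     captured environment maps.
    """
    ix = np.clip(np.searchsorted(grid_x, x) - 1, 0, len(grid_x) - 2)
    iy = np.clip(np.searchsorted(grid_y, y) - 1, 0, len(grid_y) - 2)
    tx = (x - grid_x[ix]) / (grid_x[ix + 1] - grid_x[ix])
    ty = (y - grid_y[iy]) / (grid_y[iy + 1] - grid_y[iy])
    return ((1 - tx) * (1 - ty) * light_field[ix, iy]
            + tx * (1 - ty) * light_field[ix + 1, iy]
            + (1 - tx) * ty * light_field[ix, iy + 1]
            + tx * ty * light_field[ix + 1, iy + 1])
```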


Virtual Reality, Archaeology and Cultural Heritage (VAST) | 2001

A photometric approach to digitizing cultural artifacts

Tim Hawkins; Jonathan Cohen; Paul E. Debevec

In this paper we present a photometry-based approach to the digital documentation of cultural artifacts. Rather than representing an artifact as a geometric model with spatially varying reflectance properties, we instead propose directly representing the artifact in terms of its reflectance field, the manner in which it transforms light into images. The principal device employed in our technique is a computer-controlled lighting apparatus which quickly illuminates an artifact from an exhaustive set of incident illumination directions and a set of digital video cameras which record the artifact's appearance under these forms of illumination. From this database of recorded images, we compute linear combinations of the captured images to synthetically illuminate the object under arbitrary forms of complex incident illumination, correctly capturing the effects of specular reflection, subsurface scattering, self-shadowing, mutual illumination, and complex BRDFs often present in cultural artifacts. We also describe a computer application that allows users to realistically and interactively relight digitized artifacts.
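
Relighting from such a reflectance field is a linear combination of the captured images, each weighted by the new environment's radiance toward the corresponding light direction and by that direction's solid angle. The sketch below illustrates this; the callable environment interface and the array shapes are assumptions for the example.

```python
import numpy as np

def relight_reflectance_field(images, light_dirs, solid_angles, env):
    """Relight a captured reflectance field under a new environment.

    images       : (N, H, W, 3) photographs of the artifact, one per incident
                   lighting direction.
    light_dirs   : (N, 3) unit direction of each light.
    solid_angles : (N,) solid angle covered by each light direction.
    env          : callable mapping a (3,) direction to (3,) RGB radiance of
                   the new illumination environment (illustrative interface).
    """
    weights = np.array([env(d) for d in light_dirs]) * solid_angles[:, None]
    return np.einsum("nc,nhwc->hwc", weights, images)
```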


Eurographics Symposium on Rendering Techniques | 2004

Animatable facial reflectance fields

Tim Hawkins; Andreas Wenger; Chris Tchou; Andrew Gardner; Fredrik Göransson; Paul E. Debevec

We present a technique for creating an animatable image-based appearance model of a human face, able to capture appearance variation over changing facial expression, head pose, view direction, and lighting condition. Our capture process makes use of a specialized lighting apparatus designed to rapidly illuminate the subject sequentially from many different directions in just a few seconds. For each pose, the subject remains still while six video cameras capture their appearance under each of the directions of lighting. We repeat this process for approximately 60 different poses, capturing different expressions, visemes, head poses, and eye positions. The images for each of the poses and camera views are registered to each other semi-automatically with the help of fiducial markers. The result is a model which can be rendered realistically under any linear blend of the captured poses and under any desired lighting condition by warping, scaling, and blending data from the original images. Finally, we show how to drive the model with performance capture data, where the pose is not necessarily a linear combination of the original captured poses.
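
Rendering from the captured data, once the images are registered, can be viewed as a blend over both the pose weights and the lighting coefficients. The sketch below shows that double blend; it omits the warping and the handling of novel poses described in the abstract, and the array layout is an assumption.

```python
import numpy as np

def render_pose_and_lighting(data, pose_weights, light_weights):
    """Blend registered reflectance-field data across poses and lighting.

    data          : (P, L, H, W, 3) images: P captured poses x L lighting
                    directions, already warped into a common parameterization.
    pose_weights  : (P,) blend weights over the captured poses.
    light_weights : (L,) coefficients of the target illumination.
    """
    return np.einsum("p,l,plhwc->hwc", pose_weights, light_weights, data)
```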

Collaboration


Dive into Tim Hawkins's collaborations.

Top Co-Authors

Paul E. Debevec (University of Southern California)
Chris Tchou (University of Southern California)
Andrew Gardner (University of Southern California)
Andreas Wenger (University of Southern California)
Andrew Jones (University of Colorado Boulder)
Wan-Chun Ma (University of Southern California)
Per Einarsson (University of Southern California)
Jessi Stumpfel (University of Southern California)
Charles-Félix Chabert (University of Southern California)