Publication


Featured research published by Graham Fyffe.


International Conference on Computer Graphics and Interactive Techniques | 2009

Achieving eye contact in a one-to-many 3D video teleconferencing system

Andrew Jones; Magnus Lang; Graham Fyffe; Xueming Yu; Jay Busch; Ian E. McDowall; Mark T. Bolas; Paul E. Debevec

We present a set of algorithms and an associated display system capable of producing correctly rendered eye contact between a three-dimensionally transmitted remote participant and a group of observers in a 3D teleconferencing system. The participant's face is scanned in 3D at 30Hz and transmitted in real time to an autostereoscopic horizontal-parallax 3D display, displaying him or her over more than a 180° field of view observable to multiple observers. To render the geometry with correct perspective, we create a fast vertex shader based on a 6D lookup table for projecting 3D scene vertices to a range of subject angles, heights, and distances. We generalize the projection mathematics to arbitrarily shaped display surfaces, which allows us to employ a curved concave display surface to focus the high-speed imagery to individual observers. To achieve two-way eye contact, we capture 2D video from a cross-polarized camera reflected to the position of the virtual participant's eyes, and display this 2D video feed on a large screen in front of the real participant, replicating the viewpoint of their virtual self. To achieve correct vertical perspective, we further leverage this image to track the position of each audience member's eyes, allowing the 3D display to render correct vertical perspective for each of the viewers around the device. The result is a one-to-many 3D teleconferencing system able to reproduce the effects of gaze, attention, and eye contact generally missing in traditional teleconferencing systems.
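
As a rough illustration of the lookup-table projection idea, the numpy sketch below quantizes a vertex position and a viewer's (angle, height, distance) into a precomputed 6D table of projected display coordinates. The bin counts, value ranges, and nearest-neighbor indexing are placeholder assumptions for illustration, not the paper's actual parameterization or data.

```python
import numpy as np

# Illustrative 6D lookup table: for a quantized vertex position (x, y, z) and
# viewer (angle, height, distance), store a precomputed 2D projected position.
# Bin counts, ranges, and table contents are placeholders, not the paper's data.
BINS = (8, 8, 8, 16, 4, 4)                          # x, y, z, angle, height, distance
RANGES = [(-1.0, 1.0), (-1.0, 1.0), (-1.0, 1.0),    # scene-space vertex bounds
          (-90.0, 90.0), (0.5, 2.0), (0.5, 3.0)]    # viewer angle/height/distance
table = np.zeros(BINS + (2,), dtype=np.float32)     # -> projected (u, v)

def project(vertex, viewer):
    """Nearest-neighbor query of the precomputed projection table."""
    coords = np.concatenate([vertex, viewer])
    idx = []
    for value, bins, (lo, hi) in zip(coords, BINS, RANGES):
        t = (value - lo) / (hi - lo)                # normalize to [0, 1]
        idx.append(int(np.clip(round(t * (bins - 1)), 0, bins - 1)))
    return table[tuple(idx)]                        # (u, v) on the display surface

uv = project(vertex=np.array([0.1, 0.0, -0.3]),
             viewer=np.array([15.0, 1.6, 1.2]))
```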


International Conference on Computer Graphics and Interactive Techniques | 2011

Multiview face capture using polarized spherical gradient illumination

Abhijeet Ghosh; Graham Fyffe; Borom Tunwattanapong; Jay Busch; Xueming Yu; Paul E. Debevec

We present a novel process for acquiring detailed facial geometry with high resolution diffuse and specular photometric information from multiple viewpoints using polarized spherical gradient illumination. Key to our method is a new pair of linearly polarized lighting patterns which enables multiview diffuse-specular separation under a given spherical illumination condition from just two photographs. The patterns -- one following lines of latitude and one following lines of longitude -- allow the use of fixed linear polarizers in front of the cameras, enabling more efficient acquisition of diffuse and specular albedo and normal maps from multiple viewpoints. In a second step, we employ these albedo and normal maps as input to a novel multi-resolution adaptive domain message passing stereo reconstruction algorithm to create high resolution facial geometry. To do this, we formulate the stereo reconstruction from multiple cameras in a commonly parameterized domain for multiview reconstruction. We show competitive results consisting of high-resolution facial geometry with relightable reflectance maps using five DSLR cameras. Our technique scales well for multiview acquisition without requiring specialized camera systems for sensing multiple polarization states.
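
For illustration, the following numpy sketch shows the standard polarization-based separation and photometric-normal computation this line of work builds on: cross-polarized images are treated as diffuse-only, the specular component is the parallel-minus-cross difference, and normals come from ratios of the gradient-lit images to the fully lit image. The dictionary layout and image keys are assumptions for the sketch, not the paper's data format.

```python
import numpy as np

def separate_and_normals(cross, parallel):
    """Sketch of diffuse/specular separation and photometric normals from
    polarized spherical gradient illumination.

    cross, parallel : dicts of float images keyed by 'full', 'x', 'y', 'z'
                      (cross- and parallel-polarized photos under the full-on
                      and the three gradient illumination conditions).
    """
    # Cross-polarized images see only diffuse reflection; the specular lobe
    # is the difference between the two polarization states.
    diffuse  = {k: cross[k] for k in cross}
    specular = {k: np.clip(parallel[k] - cross[k], 0, None) for k in cross}

    def normals(imgs):
        # Ratio of gradient-lit to fully lit image maps to [-1, 1] per axis.
        eps = 1e-6
        n = np.stack([2.0 * imgs[a] / (imgs['full'] + eps) - 1.0
                      for a in ('x', 'y', 'z')], axis=-1)
        return n / (np.linalg.norm(n, axis=-1, keepdims=True) + eps)

    return diffuse['full'], specular['full'], normals(diffuse), normals(specular)
```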


International Conference on Computer Graphics and Interactive Techniques | 2013

Acquiring reflectance and shape from continuous spherical harmonic illumination

Borom Tunwattanapong; Graham Fyffe; Paul Graham; Jay Busch; Xueming Yu; Abhijeet Ghosh; Paul E. Debevec

We present a novel technique for acquiring the geometry and spatially-varying reflectance properties of 3D objects by observing them under continuous spherical harmonic illumination conditions. The technique is general enough to characterize either entirely specular or entirely diffuse materials, or any varying combination across the surface of the object. We employ a novel computational illumination setup consisting of a rotating arc of controllable LEDs which sweep out programmable spheres of incident illumination during 1-second exposures. We illuminate the object with a succession of spherical harmonic illumination conditions, as well as photographed environmental lighting for validation. From the response of the object to the harmonics, we can separate diffuse and specular reflections, estimate world-space diffuse and specular normals, and compute anisotropic roughness parameters for each view of the object. We then use the maps of both diffuse and specular reflectance to form correspondences in a multiview stereo algorithm, which allows even highly specular surfaces to be corresponded across views. The algorithm yields a complete 3D model and a set of merged reflectance maps. We use this technique to digitize the shape and reflectance of a variety of objects difficult to acquire with other techniques and present validation renderings which match well to photographs in similar lighting.
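
Since the illumination conditions are spherical harmonics, a small helper for evaluating the real SH basis (bands 0 to 2) over lighting directions conveys the flavor of the programmable lighting patterns. This is textbook SH evaluation, not the paper's calibration or its higher-order conditions.

```python
import numpy as np

def real_sh_bands_0_2(dirs):
    """Real spherical harmonics (bands 0-2) for unit direction vectors.

    dirs : (N, 3) array of unit vectors (x, y, z).
    Returns an (N, 9) array; each column is one SH illumination pattern that
    could be driven onto the LED arc as it sweeps out a sphere of lighting.
    """
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    return np.stack([
        0.282095 * np.ones_like(x),        # l=0
        0.488603 * y,                      # l=1, m=-1
        0.488603 * z,                      # l=1, m= 0
        0.488603 * x,                      # l=1, m=+1
        1.092548 * x * y,                  # l=2, m=-2
        1.092548 * y * z,                  # l=2, m=-1
        0.315392 * (3.0 * z**2 - 1.0),     # l=2, m= 0
        1.092548 * x * z,                  # l=2, m=+1
        0.546274 * (x**2 - y**2),          # l=2, m=+2
    ], axis=1)
```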


ACM Transactions on Graphics | 2014

Driving High-Resolution Facial Scans with Video Performance Capture

Graham Fyffe; Andrew Jones; Oleg Alexander; Ryosuke Ichikari; Paul E. Debevec

We present a process for rendering a realistic facial performance with control of viewpoint and illumination. The performance is based on one or more high-quality geometry and reflectance scans of an actor in static poses, driven by one or more video streams of a performance. We compute optical flow correspondences between neighboring video frames, and a sparse set of correspondences between static scans and video frames. The latter are made possible by leveraging the relightability of the static 3D scans to match the viewpoint(s) and appearance of the actor in videos taken in arbitrary environments. As optical flow tends to compute proper correspondence for some areas but not others, we also compute a smoothed, per-pixel confidence map for every computed flow, based on normalized cross-correlation. These flows and their confidences yield a set of weighted triangulation constraints among the static poses and the frames of a performance. Given a single artist-prepared face mesh for one static pose, we optimally combine the weighted triangulation constraints, along with a shape regularization term, into a consistent 3D geometry solution over the entire performance that is drift free by construction. In contrast to previous work, even partial correspondences contribute to drift minimization, for example, where a successful match is found in the eye region but not the mouth. Our shape regularization employs a differential shape term based on a spatially varying blend of the differential shapes of the static poses and neighboring dynamic poses, weighted by the associated flow confidences. These weights also permit dynamic reflectance maps to be produced for the performance by blending the static scan maps. Finally, as the geometry and maps are represented on a consistent artist-friendly mesh, we render the resulting high-quality animated face geometry and animated reflectance maps using standard rendering tools.
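
A minimal sketch of the confidence idea is given below, using a local-window normalized cross-correlation between a frame and its flow-warped neighbor as a per-pixel confidence. The window size, clipping of negative correlation, and the fast windowed formulation are illustrative choices rather than the paper's exact definition.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ncc_confidence(img_a, img_b_warped, radius=3, eps=1e-6):
    """Per-pixel normalized cross-correlation between a float grayscale frame
    and a flow-warped neighboring frame, usable as a flow confidence map."""
    size = 2 * radius + 1
    mean = lambda x: uniform_filter(x, size)     # local mean over the window
    a = img_a - mean(img_a)
    b = img_b_warped - mean(img_b_warped)
    ncc = mean(a * b) / np.sqrt(mean(a * a) * mean(b * b) + eps)
    return np.clip(ncc, 0.0, 1.0)                # negative correlation -> zero
```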


Eurographics | 2011

Comprehensive Facial Performance Capture

Graham Fyffe; Tim Hawkins; Chris Watts; Wan-Chun Ma; Paul E. Debevec

We present a system for recording a live dynamic facial performance, capturing highly detailed geometry and spatially varying diffuse and specular reflectance information for each frame of the performance. The result is a reproduction of the performance that can be rendered from novel viewpoints and novel lighting conditions, achieving photorealistic integration into any virtual environment. Dynamic performances are captured directly, without the need for any template geometry or static geometry scans, and processing is completely automatic, requiring no human input or guidance. Our key contributions are a heuristic for estimating facial reflectance information from gradient illumination photographs, and a geometry optimization framework that maximizes a principled likelihood function combining multi‐view stereo correspondence and photometric stereo, using multi‐resolution belief propagation. The output of our system is a sequence of geometries and reflectance maps, suitable for rendering in off‐the‐shelf software. We show results from our system rendered under novel viewpoints and lighting conditions, and validate our results by demonstrating a close match to ground truth photographs.


International Conference on Computational Photography | 2011

Single-shot photometric stereo by spectral multiplexing

Graham Fyffe; Xueming Yu; Paul E. Debevec

We propose a novel method for single-shot photometric stereo by spectral multiplexing. The output of our method is a simultaneous per-pixel estimate of the surface normal and full-color reflectance. Our method is well suited to materials with varying color and texture, requires no time-varying illumination, and no high-speed cameras. Being a single-shot method, it may be applied to dynamic scenes without any need for optical flow. Our key contributions are a generalization of three-color photometric stereo to more than three color channels, and the design of a practical six-color-channel system using off-the-shelf parts.
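
The per-pixel estimation can be sketched as a small least-squares problem: with a calibrated 6-by-3 matrix mapping a scaled surface normal to the six channel responses, a normal and an albedo magnitude follow directly. The monochrome-albedo simplification and the variable names below are assumptions for the sketch; the paper additionally recovers full-color reflectance.

```python
import numpy as np

def scaled_normals_from_six_channels(pixels, M):
    """Per-pixel Lambertian estimate from six spectrally multiplexed channels.

    pixels : (..., 6) measurements per pixel.
    M      : (6, 3) calibration matrix mapping a scaled normal (albedo * n)
             to the six channel responses (combines light directions with the
             camera/illuminant spectral mixing); values not shown here.
    """
    flat = pixels.reshape(-1, 6).T                    # (6, P)
    g, *_ = np.linalg.lstsq(M, flat, rcond=None)      # (3, P) scaled normals
    g = g.T.reshape(pixels.shape[:-1] + (3,))
    albedo = np.linalg.norm(g, axis=-1)
    normals = g / (albedo[..., None] + 1e-6)
    return normals, albedo
```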


International Conference on Computer Graphics and Interactive Techniques | 2010

Head-mounted photometric stereo for performance capture

Andrew Jones; Graham Fyffe; Xueming Yu; Alex Ma; Jay Busch; Mark T. Bolas; Paul E. Debevec

Head-mounted cameras are an increasingly important tool for capturing an actor's facial performance. Such cameras provide a fixed, unoccluded view of the face. The resulting imagery is useful for observing motion capture dots or as input to existing video analysis techniques. Unfortunately, current systems are typically affected by ambient light and generally fail to record subtle 3D shape changes between expressions. Artistic intervention is often required to clean up and map the captured performance onto a virtual character. We have developed a system that augments a head-mounted camera with LED-based photometric stereo. The system allows observation of the face independent of the ambient light and records per-pixel surface normals. Our data can be used to generate dynamic 3D geometry, for facial relighting, or as input to machine learning algorithms to accurately control an animated face.
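
For context, a textbook Lambertian photometric-stereo solve from several LED-lit frames is sketched below; it is the generic formulation, not the paper's calibrated head-mounted pipeline.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Classic Lambertian photometric stereo: per-pixel normals and albedo
    from K images lit by K known LED directions.

    images     : (K, H, W) grayscale float frames, one per LED.
    light_dirs : (K, 3) unit vectors toward each LED.
    """
    K, H, W = images.shape
    I = images.reshape(K, -1)                              # (K, H*W)
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)     # (3, H*W) scaled normals
    albedo = np.linalg.norm(G, axis=0)
    normals = (G / (albedo + 1e-6)).T.reshape(H, W, 3)
    return normals, albedo.reshape(H, W)
```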


International Conference on Computer Graphics and Interactive Techniques | 2015

Skin microstructure deformation with displacement map convolution

Koki Nagano; Graham Fyffe; Oleg Alexander; Jernej Barbič; Hao Li; Abhijeet Ghosh; Paul E. Debevec

We present a technique for synthesizing the effects of skin microstructure deformation by anisotropically convolving a high-resolution displacement map to match normal distribution changes in measured skin samples. We use a 10-micron resolution scanning technique to measure several in vivo skin samples as they are stretched and compressed in different directions, quantifying how stretching smooths the skin and compression makes it rougher. We tabulate the resulting surface normal distributions, and show that convolving a neutral skin microstructure displacement map with blurring and sharpening filters can mimic normal distribution changes and microstructure deformations. We implement the spatially-varying displacement map filtering on the GPU to interactively render the effects of dynamic microgeometry on animated faces obtained from high-resolution facial scans.
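
A minimal sketch of the filtering idea, assuming a single scalar strain per patch: blur the neutral displacement map along the stretched axis, and apply an unsharp mask under compression. The strain-to-filter mapping and gains are placeholders, not the measured relationship reported in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def deform_microstructure(disp, stretch, axis=0, base_sigma=2.0):
    """Illustrative anisotropic filtering of a skin microstructure displacement map.

    disp    : 2D neutral displacement map (float).
    stretch : scalar strain along `axis` (> 0 stretching, < 0 compression).
    """
    sigma = base_sigma * abs(stretch)
    if sigma < 1e-3:
        return disp.copy()
    blurred = gaussian_filter1d(disp, sigma, axis=axis)
    if stretch > 0:                        # stretching smooths the skin
        return blurred
    return disp + (disp - blurred)         # compression roughens: unsharp mask
```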


Computer Graphics Forum | 2017

Multi-View Stereo on Consistent Face Topology

Graham Fyffe; Koki Nagano; L. Huynh; Shunsuke Saito; Jay Busch; Andrew Jones; Hao Li; Paul E. Debevec

We present a multi‐view stereo reconstruction technique that directly produces a complete high‐fidelity head model with consistent facial mesh topology. While existing techniques decouple shape estimation and facial tracking, our framework jointly optimizes for stereo constraints and consistent mesh parameterization. Our method is therefore free from drift and fully parallelizable for dynamic facial performance capture. We produce highly detailed facial geometries with artist‐quality UV parameterization, including secondary elements such as eyeballs, mouth pockets, nostrils, and the back of the head. Our approach consists of deforming a common template model to match multi‐view input images of the subject, while satisfying cross‐view, cross‐subject, and cross‐pose consistencies using a combination of 2D landmark detection, optical flow, and surface and volumetric Laplacian regularization. Since the flow is never computed between frames, our method is trivially parallelized by processing each frame independently. Accurate rigid head pose is extracted using a PCA‐based dimension reduction and denoising scheme. We demonstrate high‐fidelity performance capture results with challenging head motion and complex facial expressions around eye and mouth regions. While the quality of our results is on par with the current state‐of‐the‐art, our approach can be fully parallelized, does not suffer from drift, and produces face models with production‐quality mesh topologies.
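
The PCA-based pose denoising step can be illustrated with a short numpy sketch that projects a per-frame rigid-pose trajectory onto its top principal components and reconstructs it; the pose parameterization and the number of retained components are assumptions for illustration.

```python
import numpy as np

def pca_denoise(pose_params, k=3):
    """Denoise a pose trajectory by projecting onto its top-k principal components.

    pose_params : (T, D) array, one D-dimensional rigid-pose vector per frame.
    """
    mean = pose_params.mean(axis=0)
    X = pose_params - mean
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return mean + (X @ Vt[:k].T) @ Vt[:k]      # reconstruct from top-k components
```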


Conference on Visual Media Production | 2011

Head-Mounted Photometric Stereo for Performance Capture

Andrew Jones; Graham Fyffe; Xueming Yu; Wan-Chun Ma; Jay Busch; Ryosuke Ichikari; Mark T. Bolas; Paul E. Debevec

Head-mounted cameras are an increasingly important tool for capturing facial performances to drive virtual characters. They provide a fixed, unoccluded view of the face, useful for observing motion capture dots or as input to video analysis. However, the 2D imagery captured with these systems is typically affected by ambient light and generally fails to record subtle 3D shape changes as the face performs. We have developed a system that augments a head-mounted camera with LED-based photometric stereo. The system allows observation of the face independent of the ambient light and generates per-pixel surface normals so that the performance is recorded dynamically in 3D. The resulting data can be used for facial relighting or as better input to machine learning algorithms for driving an animated face.

Collaboration


Dive into Graham Fyffe's collaborations.

Top Co-Authors

Paul E. Debevec (University of Southern California)
Xueming Yu (University of Southern California)
Jay Busch (University of Southern California)
Andrew Jones (University of Colorado Boulder)
Oleg Alexander (University of Southern California)
Mark T. Bolas (University of Southern California)
Paul Graham (University of Southern California)
Wan-Chun Ma (University of Southern California)