Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jay Busch is active.

Publication


Featured research published by Jay Busch.


International Conference on Computer Graphics and Interactive Techniques | 2009

Achieving eye contact in a one-to-many 3D video teleconferencing system

Andrew Jones; Magnus Lang; Graham Fyffe; Xueming Yu; Jay Busch; Ian E. McDowall; Mark T. Bolas; Paul E. Debevec

We present a set of algorithms and an associated display system capable of producing correctly rendered eye contact between a three-dimensionally transmitted remote participant and a group of observers in a 3D teleconferencing system. The participant's face is scanned in 3D at 30Hz and transmitted in real time to an autostereoscopic horizontal-parallax 3D display, displaying him or her over more than a 180° field of view observable to multiple observers. To render the geometry with correct perspective, we create a fast vertex shader based on a 6D lookup table for projecting 3D scene vertices to a range of subject angles, heights, and distances. We generalize the projection mathematics to arbitrarily shaped display surfaces, which allows us to employ a curved concave display surface to focus the high speed imagery to individual observers. To achieve two-way eye contact, we capture 2D video from a cross-polarized camera reflected to the position of the virtual participant's eyes, and display this 2D video feed on a large screen in front of the real participant, replicating the viewpoint of their virtual self. To achieve correct vertical perspective, we further leverage this image to track the position of each audience member's eyes, allowing the 3D display to render correct vertical perspective for each of the viewers around the device. The result is a one-to-many 3D teleconferencing system able to reproduce the effects of gaze, attention, and eye contact generally missing in traditional teleconferencing systems.
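
The 6D lookup table amounts to precomputing the display-specific projection over a grid of vertex positions and viewer parameters and interpolating it at render time. Below is a minimal CPU-side sketch of that idea using multilinear interpolation; the grid ranges, resolutions, and the placeholder project_vertex() function are illustrative assumptions, not the paper's actual projection math.

```python
# Minimal sketch of the 6D lookup-table idea: precompute projected vertex
# positions over a grid of (vertex x, y, z, viewer angle, height, distance)
# and interpolate at query time as a CPU stand-in for the vertex shader.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def project_vertex(x, y, z, angle, height, dist):
    # Hypothetical placeholder for the display-specific projection math:
    # rotate the vertex by the viewer angle, then apply a trivial
    # perspective-style divide; only here to populate the demo table.
    a = np.radians(angle)
    xr = np.cos(a) * x + np.sin(a) * z
    zr = -np.sin(a) * x + np.cos(a) * z
    denom = dist + zr + 1e-6
    return np.stack([xr / denom, (y - height) / denom], axis=-1)

# Coarse grid over the six parameters (ranges and resolutions chosen arbitrarily).
axes = [np.linspace(-0.3, 0.3, 9),   # vertex x (m)
        np.linspace(-0.3, 0.3, 9),   # vertex y (m)
        np.linspace(-0.3, 0.3, 9),   # vertex z (m)
        np.linspace(-90, 90, 13),    # viewer angle (deg)
        np.linspace(-0.5, 0.5, 7),   # viewer height (m)
        np.linspace(0.5, 3.0, 7)]    # viewer distance (m)
grid = np.meshgrid(*axes, indexing="ij")
table = project_vertex(*grid)        # shape (9, 9, 9, 13, 7, 7, 2)

lut = RegularGridInterpolator(axes, table)        # multilinear interpolation in 6D
print(lut([[0.1, 0.0, 0.05, 30.0, 0.1, 1.5]]))    # projected (u, v) for one query
```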


International Conference on Computer Graphics and Interactive Techniques | 2011

Multiview face capture using polarized spherical gradient illumination

Abhijeet Ghosh; Graham Fyffe; Borom Tunwattanapong; Jay Busch; Xueming Yu; Paul E. Debevec

We present a novel process for acquiring detailed facial geometry with high resolution diffuse and specular photometric information from multiple viewpoints using polarized spherical gradient illumination. Key to our method is a new pair of linearly polarized lighting patterns which enables multiview diffuse-specular separation under a given spherical illumination condition from just two photographs. The patterns -- one following lines of latitude and one following lines of longitude -- allow the use of fixed linear polarizers in front of the cameras, enabling more efficient acquisition of diffuse and specular albedo and normal maps from multiple viewpoints. In a second step, we employ these albedo and normal maps as input to a novel multi-resolution adaptive domain message passing stereo reconstruction algorithm to create high resolution facial geometry. To do this, we formulate the stereo reconstruction from multiple cameras in a commonly parameterized domain for multiview reconstruction. We show competitive results consisting of high-resolution facial geometry with relightable reflectance maps using five DSLR cameras. Our technique scales well for multiview acquisition without requiring specialized camera systems for sensing multiple polarization states.
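
For context, the sketch below shows the standard single-view recovery of photometric normals from spherical gradient illumination (three gradient-lit images plus a full-on image); it omits the paper's polarized latitude/longitude patterns and multiview diffuse-specular separation, and the file names and image format are assumptions.

```python
# Minimal sketch: recover per-pixel photometric normals from images lit by
# the x/y/z spherical gradient patterns plus a constant (full-on) pattern.
# Assumes radiometrically linear grayscale images with the assumed file names.
import numpy as np
import imageio.v2 as imageio

full = imageio.imread("full_on.exr").astype(np.float64)        # constant illumination
grad = np.stack([imageio.imread(f"grad_{a}.exr").astype(np.float64)
                 for a in ("x", "y", "z")], axis=-1)           # gradient-lit images

# Under gradient patterns P_i(w) = (w_i + 1) / 2, the per-pixel ratios of the
# gradient-lit images to the full-on image are affine in the surface normal
# (or, for specular reflection, the reflection vector), so unshifting and then
# normalizing the resulting 3-vector recovers the direction.
normals = 2.0 * grad / np.maximum(full, 1e-6)[..., None] - 1.0
normals /= np.maximum(np.linalg.norm(normals, axis=-1, keepdims=True), 1e-6)
```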


International Conference on Computer Graphics and Interactive Techniques | 2013

Acquiring reflectance and shape from continuous spherical harmonic illumination

Borom Tunwattanapong; Graham Fyffe; Paul Graham; Jay Busch; Xueming Yu; Abhijeet Ghosh; Paul E. Debevec

We present a novel technique for acquiring the geometry and spatially-varying reflectance properties of 3D objects by observing them under continuous spherical harmonic illumination conditions. The technique is general enough to characterize either entirely specular or entirely diffuse materials, or any varying combination across the surface of the object. We employ a novel computational illumination setup consisting of a rotating arc of controllable LEDs which sweep out programmable spheres of incident illumination during 1-second exposures. We illuminate the object with a succession of spherical harmonic illumination conditions, as well as photographed environmental lighting for validation. From the response of the object to the harmonics, we can separate diffuse and specular reflections, estimate world-space diffuse and specular normals, and compute anisotropic roughness parameters for each view of the object. We then use the maps of both diffuse and specular reflectance to form correspondences in a multiview stereo algorithm, which allows even highly specular surfaces to be corresponded across views. The algorithm yields a complete 3D model and a set of merged reflectance maps. We use this technique to digitize the shape and reflectance of a variety of objects difficult to acquire with other techniques and present validation renderings which match well to photographs in similar lighting.
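
The illumination conditions are programmed by evaluating spherical harmonics at each LED's direction as the arc sweeps the sphere. A minimal sketch, assuming an arbitrary list of LED directions and a simple offset-and-scale mapping into the displayable [0, 1] range (the paper's exact offset convention may differ):

```python
# Minimal sketch: evaluate the first nine real spherical harmonics for each
# LED direction, then shift/scale into [0, 1] to obtain drivable intensities,
# since the harmonics themselves take negative values.
import numpy as np

def real_sh_order2(d):
    """Real SH basis Y_00..Y_22 for unit directions d of shape (N, 3)."""
    x, y, z = d[:, 0], d[:, 1], d[:, 2]
    return np.stack([
        0.282095 * np.ones_like(x),          # Y_0,0
        0.488603 * y,                        # Y_1,-1
        0.488603 * z,                        # Y_1,0
        0.488603 * x,                        # Y_1,1
        1.092548 * x * y,                    # Y_2,-2
        1.092548 * y * z,                    # Y_2,-1
        0.315392 * (3.0 * z * z - 1.0),      # Y_2,0
        1.092548 * x * z,                    # Y_2,1
        0.546274 * (x * x - y * y),          # Y_2,2
    ], axis=1)

# Stand-in LED directions on the unit sphere (the real rig's directions would
# come from the arc geometry and rotation angle).
led_dirs = np.random.default_rng(0).normal(size=(1000, 3))
led_dirs /= np.linalg.norm(led_dirs, axis=1, keepdims=True)

sh = real_sh_order2(led_dirs)                             # (num_leds, 9)
intensities = 0.5 * (sh / np.abs(sh).max(axis=0) + 1.0)   # per-condition, in [0, 1]
```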


ACM Transactions on Graphics | 2010

Temporal upsampling of performance geometry using photometric alignment

Cyrus A. Wilson; Abhijeet Ghosh; Pieter Peers; Jen-Yuan Chiang; Jay Busch; Paul E. Debevec

We present a novel technique for acquiring detailed facial geometry of a dynamic performance using extended spherical gradient illumination. Key to our method is a new algorithm for jointly aligning two photographs, under a gradient illumination condition and its complement, to a full-on tracking frame, providing dense temporal correspondences under changing lighting conditions. We employ a two-step algorithm to reconstruct detailed geometry for every captured frame. In the first step, we coalesce information from the gradient illumination frames to the full-on tracking frame, and form a temporally aligned photometric normal map, which is subsequently combined with dense stereo correspondences yielding a detailed geometry. In a second step, we propagate the detailed geometry back to every captured instance guided by the previously computed dense correspondences. We demonstrate reconstructed dynamic facial geometry, captured at moderate to video rates of acquisition, for every captured frame.
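
A rough sense of the alignment step can be had from a generic dense-optical-flow warp of a gradient-lit frame into the full-on tracking frame's coordinates. The sketch below uses OpenCV's Farneback flow as a stand-in for the paper's joint alignment of a gradient condition and its complement; the input file names are assumptions.

```python
# Minimal sketch: compute dense flow from the tracking frame to a gradient-lit
# frame, then resample the gradient frame on the tracking frame's pixel grid.
import cv2
import numpy as np

tracking = cv2.imread("full_on_frame.png", cv2.IMREAD_GRAYSCALE)
gradient = cv2.imread("gradient_frame.png", cv2.IMREAD_GRAYSCALE)

flow = cv2.calcOpticalFlowFarneback(tracking, gradient, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

# For each tracking-frame pixel, sample the gradient frame at the flowed location.
h, w = tracking.shape
grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
map_x = (grid_x + flow[..., 0]).astype(np.float32)
map_y = (grid_y + flow[..., 1]).astype(np.float32)
aligned = cv2.remap(gradient, map_x, map_y, cv2.INTER_LINEAR)
```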


International Conference on Computer Graphics and Interactive Techniques | 2013

An autostereoscopic projector array optimized for 3D facial display

Koki Nagano; Andrew Jones; Jing Liu; Jay Busch; Xueming Yu; Mark T. Bolas; Paul E. Debevec

Video projectors are rapidly shrinking in size, power consumption, and cost. Such projectors provide unprecedented flexibility to stack, arrange, and aim pixels without the need for moving parts. We present a dense projector display that is optimized in size and resolution to display an autostereoscopic life-sized 3D human face with a wide 110 degree field of view. Applications include 3D teleconferencing and fully synthetic characters for education and interactive entertainment.


Eurographics | 2013

Measurement-based Synthesis of Facial Microgeometry

Paul Graham; Borom Tunwattanapong; Jay Busch; Xueming Yu; Andrew Jones; Paul E. Debevec; Abhijeet Ghosh

We present a technique for generating microstructure-level facial geometry by augmenting a mesostructure-level facial scan with detail synthesized from a set of exemplar skin patches scanned at much higher resolution. Additionally, we make point-source reflectance measurements of the skin patches to characterize the specular reflectance lobes at this smaller scale and analyze facial reflectance variation at both the mesostructure and microstructure scales. We digitize the exemplar patches with a polarization-based computational illumination technique which considers specular reflection and single scattering. The recorded microstructure patches can be used to synthesize full-facial microstructure detail for either the same subject or a different subject. We show that the technique allows for greater realism in facial renderings, including more accurate reproduction of the skin's specular reflection effects.
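
As a deliberately crude illustration of augmenting a mesostructure-level map with microstructure detail, the sketch below high-pass filters an exemplar patch and tiles it over the base displacement map; the paper performs constrained synthesis rather than tiling, and the file names, blur radius, and detail weight are assumptions.

```python
# Crude stand-in: add a tiled, high-pass-filtered exemplar patch to a
# mesostructure-level displacement map to fake a microstructure layer.
import numpy as np
from scipy.ndimage import gaussian_filter

base = np.load("face_displacement.npy")         # mesostructure-level map (H, W)
patch = np.load("skin_patch_displacement.npy")  # high-res exemplar patch (h, w)

detail = patch - gaussian_filter(patch, sigma=4.0)   # keep only fine structure
reps = (int(np.ceil(base.shape[0] / detail.shape[0])),
        int(np.ceil(base.shape[1] / detail.shape[1])))
tiled = np.tile(detail, reps)[:base.shape[0], :base.shape[1]]

augmented = base + 0.5 * tiled                  # weighted microstructure layer
```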


International Conference on Computer Graphics and Interactive Techniques | 2010

Head-mounted photometric stereo for performance capture

Andrew Jones; Graham Fyffe; Xueming Yu; Alex Ma; Jay Busch; Mark T. Bolas; Paul E. Debevec

Head-mounted cameras are an increasingly important tool for capturing an actor's facial performance. Such cameras provide a fixed, unoccluded view of the face. The resulting imagery is useful for observing motion capture dots or as input to existing video analysis techniques. Unfortunately, current systems are typically affected by ambient light and generally fail to record subtle 3D shape changes between expressions. Artistic intervention is often required to clean up and map the captured performance onto a virtual character. We have developed a system that augments a head-mounted camera with LED-based photometric stereo. The system allows observation of the face independent of the ambient light and records per-pixel surface normals. Our data can be used to generate dynamic 3D geometry, for facial relighting, or as input to machine learning algorithms to accurately control an animated face.
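
The per-pixel normal recovery follows classical photometric stereo: with images captured under known LED directions, each pixel's albedo-scaled normal is the least-squares solution of a small linear system. A minimal sketch with synthetic stand-in data (the LED directions and image dimensions are illustrative):

```python
# Minimal sketch of LED-based photometric stereo: images ~= L @ (albedo * normal)
# per pixel, solved jointly for all pixels by least squares.
import numpy as np

# k LED directions (unit vectors); example values only.
L = np.array([[0.0, 0.0, 1.0],
              [0.7, 0.0, 0.714],
              [-0.7, 0.0, 0.714],
              [0.0, 0.7, 0.714]])
L /= np.linalg.norm(L, axis=1, keepdims=True)

# Stand-in for k captured frames (k, H, W); in practice these come from the
# head-mounted camera with one LED lit per frame (or multiplexed).
rng = np.random.default_rng(0)
images = rng.uniform(0.0, 1.0, size=(4, 480, 640))

I = images.reshape(4, -1)                       # (k, H*W) pixel intensities
G, *_ = np.linalg.lstsq(L, I, rcond=None)       # (3, H*W) albedo-scaled normals
albedo = np.linalg.norm(G, axis=0)
normals = (G / np.maximum(albedo, 1e-6)).reshape(3, 480, 640)
```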


Journal of Electronic Imaging | 2014

Interpolating vertical parallax for an autostereoscopic three-dimensional projector array

Andrew V. Jones; Koki Nagano; Jing Liu; Jay Busch; Xueming Yu; Mark T. Bolas; Paul E. Debevec

We present a technique for achieving tracked vertical parallax for multiple users using a variety of autostereoscopic projector array setups, including front- and rear-projection and curved display surfaces. This hybrid parallax approach allows for immediate horizontal parallax as viewers move left and right and tracked parallax as they move up and down, allowing cues such as three-dimensional (3-D) perspective and eye contact to be conveyed faithfully. We use a low-cost RGB-depth sensor to simultaneously track multiple viewer head positions in 3-D space, and we interactively update the imagery sent to the array so that imagery directed to each viewer appears from a consistent and correct vertical perspective. Unlike previous work, we do not assume that the imagery sent to each projector in the array is rendered from a single vertical perspective. This lets us apply hybrid parallax to displays where a single projector forms parts of multiple viewers’ imagery. Thus, each individual projected image is rendered with multiple centers of projection, and might show an object from above on the left and from below on the right. We demonstrate this technique using a dense horizontal array of pico-projectors aimed into an anisotropic vertical diffusion screen, yielding 1.5 deg angular resolution over 110 deg field of view. To create a seamless viewing experience for multiple viewers, we smoothly interpolate the set of viewer heights and distances on a per-vertex basis across the array’s field of view, reducing image distortion, cross talk, and artifacts from tracking errors.
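
The per-vertex interpolation of tracked viewer parameters can be sketched as a piecewise-linear interpolation of eye height and distance over horizontal viewing angle; the tracked viewer values and vertex angles below are illustrative assumptions rather than the system's actual smoothing.

```python
# Minimal sketch: assign each rendered vertex the eye height and distance
# interpolated from the tracked viewers nearest to its horizontal viewing angle.
import numpy as np

# Tracked viewers: horizontal angle (deg), eye height (m), distance (m).
viewers = np.array([[-30.0, 1.55, 1.8],
                    [ 10.0, 1.72, 2.2],
                    [ 45.0, 1.10, 1.5]])
order = np.argsort(viewers[:, 0])
angles, heights, dists = viewers[order].T

# Horizontal viewing angle of each vertex as seen from the display (deg).
vertex_angles = np.linspace(-55.0, 55.0, 10000)

# Piecewise-linear interpolation, clamped to the outermost viewers.
vertex_heights = np.interp(vertex_angles, angles, heights)
vertex_dists = np.interp(vertex_angles, angles, dists)
```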


International Conference on Computer Graphics and Interactive Techniques | 2012

A single-shot light probe

Paul E. Debevec; Paul Graham; Jay Busch; Mark T. Bolas

We demonstrate a novel light probe which can estimate the full dynamic range of a scene with multiple bright light sources. It places diffuse strips between mirrored spherical quadrants, effectively co-locating diffuse and mirrored probes to record the full dynamic range of illumination in a single exposure. From this image, we estimate the intensity of multiple saturated light sources by solving a linear system.
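
The linear system in the last step models each unsaturated diffuse-strip pixel as a non-negative combination of the unknown source intensities. A minimal sketch, with synthetic normals, light directions, and observations standing in for real probe data:

```python
# Minimal sketch: recover saturated light-source intensities from diffuse-strip
# observations via non-negative least squares over clamped-cosine terms.
import numpy as np
from scipy.optimize import nnls

# Unit directions of the m saturated sources (assumed known from the mirror image).
light_dirs = np.array([[0.0, 0.8, 0.6],
                       [0.6, 0.6, 0.53]])
light_dirs /= np.linalg.norm(light_dirs, axis=1, keepdims=True)

# Surface normals of the observed diffuse-strip pixels (synthetic here).
strip_normals = np.random.default_rng(1).normal(size=(200, 3))
strip_normals /= np.linalg.norm(strip_normals, axis=1, keepdims=True)

A = np.maximum(strip_normals @ light_dirs.T, 0.0)   # n x m clamped-cosine matrix
true_I = np.array([12.0, 30.0])                     # ground truth for the demo
b = A @ true_I                                      # observed diffuse values

intensities, residual = nnls(A, b)                  # recovered source intensities
print(intensities)
```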


Computer Graphics Forum | 2017

Multi-View Stereo on Consistent Face Topology

Graham Fyffe; Koki Nagano; L. Huynh; Shunsuke Saito; Jay Busch; Andrew Jones; Hao Li; Paul E. Debevec

We present a multi‐view stereo reconstruction technique that directly produces a complete high‐fidelity head model with consistent facial mesh topology. While existing techniques decouple shape estimation and facial tracking, our framework jointly optimizes for stereo constraints and consistent mesh parameterization. Our method is therefore free from drift and fully parallelizable for dynamic facial performance capture. We produce highly detailed facial geometries with artist‐quality UV parameterization, including secondary elements such as eyeballs, mouth pockets, nostrils, and the back of the head. Our approach consists of deforming a common template model to match multi‐view input images of the subject, while satisfying cross‐view, cross‐subject, and cross‐pose consistencies using a combination of 2D landmark detection, optical flow, and surface and volumetric Laplacian regularization. Since the flow is never computed between frames, our method is trivially parallelized by processing each frame independently. Accurate rigid head pose is extracted using a PCA‐based dimension reduction and denoising scheme. We demonstrate high‐fidelity performance capture results with challenging head motion and complex facial expressions around eye and mouth regions. While the quality of our results is on par with the current state‐of‐the‐art, our approach can be fully parallelized, does not suffer from drift, and produces face models with production‐quality mesh topologies.
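
The surface Laplacian regularization can be illustrated by a small template-fitting solve: vertices are pulled toward per-vertex targets (standing in for landmark and flow constraints) while a uniform graph Laplacian preserves the template's local shape. The tiny mesh, targets, and weights below are illustrative assumptions; the paper also uses volumetric regularization and cross-view, cross-subject, and cross-pose terms.

```python
# Minimal sketch: Laplacian-regularized least-squares fit of template vertices
# to per-vertex target positions.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

V0 = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])          # template vertices
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]   # mesh connectivity
targets = V0 + np.array([[0.05, 0, 0], [0, 0.05, 0],
                         [0, 0, 0.05], [0.02, 0.02, 0.02]])

# Uniform graph Laplacian built from the edge list.
n = len(V0)
Lap = sp.lil_matrix((n, n))
for i, j in edges:
    Lap[i, i] += 1; Lap[j, j] += 1
    Lap[i, j] -= 1; Lap[j, i] -= 1
Lap = Lap.tocsr()

# Minimize  w_data*||V - targets||^2 + w_smooth*||Lap(V - V0)||^2.
w_data, w_smooth = 1.0, 10.0
A = (w_data * sp.eye(n) + w_smooth * (Lap.T @ Lap)).tocsc()
B = w_data * targets + w_smooth * (Lap.T @ (Lap @ V0))     # keep original curvature
V = np.column_stack([spsolve(A, B[:, k]) for k in range(3)])
```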

Collaboration


Dive into Jay Busch's collaborations.

Top Co-Authors

Paul E. Debevec, University of Southern California
Xueming Yu, University of Southern California
Andrew Jones, University of Colorado Boulder
Graham Fyffe, University of Southern California
Mark T. Bolas, University of Southern California
Oleg Alexander, University of Southern California
Borom Tunwattanapong, University of Southern California
Paul Graham, University of Southern California