Xueming Yu
University of Southern California
Publication
Featured research published by Xueming Yu.
international conference on computer graphics and interactive techniques | 2009
Andrew Jones; Magnus Lang; Graham Fyffe; Xueming Yu; Jay Busch; Ian E. McDowall; Mark T. Bolas; Paul E. Debevec
We present a set of algorithms and an associated display system capable of producing correctly rendered eye contact between a three-dimensionally transmitted remote participant and a group of observers in a 3D teleconferencing system. The participant's face is scanned in 3D at 30Hz and transmitted in real time to an autostereoscopic horizontal-parallax 3D display, displaying him or her over more than a 180° field of view observable to multiple observers. To render the geometry with correct perspective, we create a fast vertex shader based on a 6D lookup table for projecting 3D scene vertices to a range of subject angles, heights, and distances. We generalize the projection mathematics to arbitrarily shaped display surfaces, which allows us to employ a curved concave display surface to focus the high-speed imagery to individual observers. To achieve two-way eye contact, we capture 2D video from a cross-polarized camera reflected to the position of the virtual participant's eyes, and display this 2D video feed on a large screen in front of the real participant, replicating the viewpoint of their virtual self. To achieve correct vertical perspective, we further leverage this image to track the position of each audience member's eyes, allowing the 3D display to render correct vertical perspective for each of the viewers around the device. The result is a one-to-many 3D teleconferencing system able to reproduce the effects of gaze, attention, and eye contact generally missing in traditional teleconferencing systems.
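The projection step, a 6D lookup table mapping a scene vertex plus viewer angle, height, and distance to a display coordinate, can be sketched roughly as below. This is an illustrative stand-in, not the paper's shader: project_vertex is a hypothetical toy pinhole projection, whereas the real table encodes the display's mirror and screen geometry.

import numpy as np
from scipy.interpolate import RegularGridInterpolator

def project_vertex(x, y, z, ang, h, d):
    # Hypothetical toy projection toward a viewer on a circle of radius d,
    # at height h and azimuth ang, looking at the origin. Purely
    # illustrative; the paper's mapping depends on the display optics.
    eye = np.array([d * np.cos(ang), h, d * np.sin(ang)])
    fwd = -eye / np.linalg.norm(eye)
    right = np.cross(fwd, np.array([0.0, 1.0, 0.0]))
    right /= np.linalg.norm(right)
    upv = np.cross(right, fwd)
    p = np.array([x, y, z]) - eye
    return np.array([p @ right, p @ upv]) / max(p @ fwd, 1e-6)

# Coarse 6D table: vertex (x, y, z) plus viewer (azimuth, height, distance).
axes = (np.linspace(-0.3, 0.3, 5), np.linspace(-0.3, 0.3, 5),
        np.linspace(-0.3, 0.3, 5), np.linspace(0.1, np.pi - 0.1, 9),
        np.linspace(-0.2, 0.4, 4), np.linspace(0.5, 2.0, 4))
table = np.empty(tuple(len(a) for a in axes) + (2,))
for idx in np.ndindex(*table.shape[:-1]):
    table[idx] = project_vertex(*(axes[k][i] for k, i in enumerate(idx)))

lookup = RegularGridInterpolator(axes, table)   # what a vertex shader would sample
print(lookup([[0.0, 0.1, 0.0, np.pi / 2, 0.1, 1.0]]))   # projected 2D coordinates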
international conference on computer graphics and interactive techniques | 2011
Abhijeet Ghosh; Graham Fyffe; Borom Tunwattanapong; Jay Busch; Xueming Yu; Paul E. Debevec
We present a novel process for acquiring detailed facial geometry with high resolution diffuse and specular photometric information from multiple viewpoints using polarized spherical gradient illumination. Key to our method is a new pair of linearly polarized lighting patterns which enables multiview diffuse-specular separation under a given spherical illumination condition from just two photographs. The patterns -- one following lines of latitude and one following lines of longitude -- allow the use of fixed linear polarizers in front of the cameras, enabling more efficient acquisition of diffuse and specular albedo and normal maps from multiple viewpoints. In a second step, we employ these albedo and normal maps as input to a novel multi-resolution adaptive domain message passing stereo reconstruction algorithm to create high resolution facial geometry. To do this, we formulate the stereo reconstruction from multiple cameras in a commonly parameterized domain for multiview reconstruction. We show competitive results consisting of high-resolution facial geometry with relightable reflectance maps using five DSLR cameras. Our technique scales well for multiview acquisition without requiring specialized camera systems for sensing multiple polarization states.
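As a rough sketch of the two ingredients described above (not the paper's latitude/longitude polarization patterns), the code below shows the classical polarization-difference separation of diffuse and specular reflection and the recovery of photometric normals from spherical gradient illumination; function and variable names are illustrative.

import numpy as np

def separate_diffuse_specular(img_parallel, img_cross):
    # Classical polarization-difference separation (an illustrative stand-in
    # for the paper's pattern pair): the cross-polarized image contains only
    # diffuse reflection, the parallel-polarized image diffuse plus specular.
    diffuse = img_cross
    specular = np.clip(img_parallel - img_cross, 0.0, None)
    return diffuse, specular

def normals_from_gradients(ix, iy, iz, ifull, eps=1e-6):
    # Photometric normals from spherical gradient illumination: each gradient
    # image, divided by the full-sphere image and remapped from [0, 1] to
    # [-1, 1], gives one component of the surface normal.
    n = np.stack([2.0 * ix / (ifull + eps) - 1.0,
                  2.0 * iy / (ifull + eps) - 1.0,
                  2.0 * iz / (ifull + eps) - 1.0], axis=-1)
    return n / (np.linalg.norm(n, axis=-1, keepdims=True) + eps)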
international conference on computer graphics and interactive techniques | 2013
Borom Tunwattanapong; Graham Fyffe; Paul Graham; Jay Busch; Xueming Yu; Abhijeet Ghosh; Paul E. Debevec
We present a novel technique for acquiring the geometry and spatially-varying reflectance properties of 3D objects by observing them under continuous spherical harmonic illumination conditions. The technique is general enough to characterize either entirely specular or entirely diffuse materials, or any varying combination across the surface of the object. We employ a novel computational illumination setup consisting of a rotating arc of controllable LEDs which sweep out programmable spheres of incident illumination during 1-second exposures. We illuminate the object with a succession of spherical harmonic illumination conditions, as well as photographed environmental lighting for validation. From the response of the object to the harmonics, we can separate diffuse and specular reflections, estimate world-space diffuse and specular normals, and compute anisotropic roughness parameters for each view of the object. We then use the maps of both diffuse and specular reflectance to form correspondences in a multiview stereo algorithm, which allows even highly specular surfaces to be corresponded across views. The algorithm yields a complete 3D model and a set of merged reflectance maps. We use this technique to digitize the shape and reflectance of a variety of objects difficult to acquire with other techniques and present validation renderings which match well to photographs in similar lighting.
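The illumination conditions are low-order real spherical harmonics; a minimal sketch of generating per-LED drive values for one arc position is shown below. The arc geometry and the handling of the harmonics' negative lobes are assumptions, not the paper's implementation.

import numpy as np

def real_sh_basis(dirs):
    # Real spherical harmonics up to order 2 for unit directions (N, 3).
    # Returns (N, 9); each column is one illumination condition. In practice
    # negative lobes must be offset or split, since LEDs cannot emit
    # negative light.
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    return np.stack([
        0.282095 * np.ones_like(x),                    # Y_0^0 (full sphere)
        0.488603 * y, 0.488603 * z, 0.488603 * x,      # order 1
        1.092548 * x * y, 1.092548 * y * z,            # order 2
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z, 0.546274 * (x * x - y * y)], axis=1)

# Example: per-LED drive values for one azimuth of the rotating arc.
phi = np.deg2rad(30.0)                                 # arc azimuth this instant
theta = np.deg2rad(np.linspace(5, 175, 32))            # LED elevations along the arc
dirs = np.stack([np.sin(theta) * np.cos(phi),
                 np.sin(theta) * np.sin(phi),
                 np.cos(theta)], axis=1)
drive = real_sh_basis(dirs)                            # one column per harmonic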
international conference on computer graphics and interactive techniques | 2013
Koki Nagano; Andrew Jones; Jing Liu; Jay Busch; Xueming Yu; Mark T. Bolas; Paul E. Debevec
Video projectors are rapidly shrinking in size, power consumption, and cost. Such projectors provide unprecedented flexibility to stack, arrange, and aim pixels without the need for moving parts. We present a dense projector display that is optimized in size and resolution to display an autostereoscopic life-sized 3D human face with a wide 110 degree field of view. Applications include 3D teleconferencing and fully synthetic characters for education and interactive entertainment.
international conference on computational photography | 2011
Graham Fyffe; Xueming Yu; Paul E. Debevec
We propose a novel method for single-shot photometric stereo by spectral multiplexing. The output of our method is a simultaneous per-pixel estimate of the surface normal and full-color reflectance. Our method is well suited to materials with varying color and texture, requires no time-varying illumination, and no high-speed cameras. Being a single-shot method, it may be applied to dynamic scenes without any need for optical flow. Our key contributions are a generalization of three-color photometric stereo to more than three color channels, and the design of a practical six-color-channel system using off-the-shelf parts.
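A minimal sketch of the generalization from three to C color channels, under the simplifying assumption of a single monochrome albedo per pixel (the paper recovers full-color reflectance); the names R, L, and single_shot_normals are illustrative, and obtaining R is the calibration step.

import numpy as np

def single_shot_normals(image, R, L):
    # image: (H, W, C) single photograph with C >= 3 color channels.
    # L: (J, 3) unit directions of the J spectrally distinct lights.
    # R: (C, J) calibrated response of each camera channel to each light.
    # Under a Lambertian model with all lights facing the surface, a pixel
    # measures c ~= R @ L @ (albedo * n), so the albedo-scaled normal is
    # recovered per pixel by least squares.
    C = image.shape[-1]
    M = R @ L                                          # (C, 3) mixing matrix
    g, *_ = np.linalg.lstsq(M, image.reshape(-1, C).T, rcond=None)
    g = g.T.reshape(image.shape[:-1] + (3,))
    albedo = np.linalg.norm(g, axis=-1)
    return g / (albedo[..., None] + 1e-8), albedo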
eurographics | 2013
Paul Graham; Borom Tunwattanapong; Jay Busch; Xueming Yu; Andrew Jones; Paul E. Debevec; Abhijeet Ghosh
We present a technique for generating microstructure-level facial geometry by augmenting a mesostructure-level facial scan with detail synthesized from a set of exemplar skin patches scanned at much higher resolution. Additionally, we make point-source reflectance measurements of the skin patches to characterize the specular reflectance lobes at this smaller scale and analyze facial reflectance variation at both the mesostructure and microstructure scales. We digitize the exemplar patches with a polarization-based computational illumination technique which considers specular reflection and single scattering. The recorded microstructure patches can be used to synthesize full-facial microstructure detail for either the same subject or a different subject. We show that the technique allows for greater realism in facial renderings, including more accurate reproduction of the skin's specular reflection effects.
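As an illustration of the detail-transfer idea (a crude stand-in for the paper's constrained texture synthesis), one can isolate the exemplar's high-frequency displacement residual and add it onto a mesostructure-level displacement map; names and parameters are illustrative.

import numpy as np
from scipy.ndimage import gaussian_filter

def add_microstructure(base_disp, exemplar_disp, sigma=4.0, gain=1.0):
    # base_disp: mesostructure-level facial displacement map (H, W).
    # exemplar_disp: high-resolution skin-patch displacement map (h, w).
    # Isolate the microstructure as the exemplar's high-frequency residual,
    # then tile it over the base map (tiling is a placeholder for proper,
    # seam-free synthesis).
    detail = exemplar_disp - gaussian_filter(exemplar_disp, sigma)
    reps = (-(-base_disp.shape[0] // detail.shape[0]),
            -(-base_disp.shape[1] // detail.shape[1]))
    tiled = np.tile(detail, reps)[:base_disp.shape[0], :base_disp.shape[1]]
    return base_disp + gain * tiled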
international conference on computer graphics and interactive techniques | 2010
Andrew Jones; Graham Fyffe; Xueming Yu; Alex Ma; Jay Busch; Mark T. Bolas; Paul E. Debevec
Head-mounted cameras are an increasingly important tool for capturing an actor's facial performance. Such cameras provide a fixed, unoccluded view of the face. The resulting imagery is useful for observing motion capture dots or as input to existing video analysis techniques. Unfortunately, current systems are typically affected by ambient light and generally fail to record subtle 3D shape changes between expressions. Artistic intervention is often required to clean up and map the captured performance onto a virtual character. We have developed a system that augments a head-mounted camera with LED-based photometric stereo. The system allows observation of the face independent of the ambient light and records per-pixel surface normals. Our data can be used to generate dynamic 3D geometry, for facial relighting, or as input to machine learning algorithms to accurately control an animated face.
Journal of Electronic Imaging | 2014
Andrew V. Jones; Koki Nagano; Jing Liu; Jay Busch; Xueming Yu; Mark T. Bolas; Paul E. Debevec
Abstract. We present a technique for achieving tracked vertical parallax for multiple users using a variety of autostereoscopic projector array setups, including front- and rear-projection and curved display surfaces. This hybrid parallax approach allows for immediate horizontal parallax as viewers move left and right and tracked parallax as they move up and down, allowing cues such as three-dimensional (3-D) perspective and eye contact to be conveyed faithfully. We use a low-cost RGB-depth sensor to simultaneously track multiple viewer head positions in 3-D space, and we interactively update the imagery sent to the array so that imagery directed to each viewer appears from a consistent and correct vertical perspective. Unlike previous work, we do not assume that the imagery sent to each projector in the array is rendered from a single vertical perspective. This lets us apply hybrid parallax to displays where a single projector forms parts of multiple viewers’ imagery. Thus, each individual projected image is rendered with multiple centers of projection, and might show an object from above on the left and from below on the right. We demonstrate this technique using a dense horizontal array of pico-projectors aimed into an anisotropic vertical diffusion screen, yielding 1.5 deg angular resolution over 110 deg field of view. To create a seamless viewing experience for multiple viewers, we smoothly interpolate the set of viewer heights and distances on a per-vertex basis across the array’s field of view, reducing image distortion, cross talk, and artifacts from tracking errors.
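A minimal sketch of the per-vertex interpolation idea, assuming viewers are tracked as (azimuth, height, distance) tuples relative to the display center; the simple piecewise-linear blend and all names are illustrative rather than the paper's exact scheme.

import numpy as np

def interpolated_view_params(vertex_azimuths, viewers):
    # viewers: list of (azimuth, height, distance) tuples from the RGB-depth
    # tracker; vertex_azimuths: horizontal angle at which each vertex's light
    # leaves the display. Heights and distances are blended piecewise-linearly
    # between neighboring viewers so each vertex gets a consistent vertical
    # perspective.
    viewers = np.asarray(sorted(viewers, key=lambda v: v[0]))
    az, h, d = viewers[:, 0], viewers[:, 1], viewers[:, 2]
    return np.interp(vertex_azimuths, az, h), np.interp(vertex_azimuths, az, d)

# Example: two tracked viewers, vertices spanning the 110 deg field of view.
viewers = [(np.deg2rad(-30), 1.55, 1.2), (np.deg2rad(25), 1.80, 1.6)]
vert_az = np.deg2rad(np.linspace(-55, 55, 5))
print(interpolated_view_params(vert_az, viewers))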
conference on visual media production | 2011
Andrew Jones; Graham Fyffe; Xueming Yu; Wan-Chun Ma; Jay Busch; Ryosuke Ichikari; Mark T. Bolas; Paul E. Debevec
Head-mounted cameras are an increasingly important tool for capturing facial performances to drive virtual characters. They provide a fixed, unoccluded view of the face, useful for observing motion capture dots or as input to video analysis. However, the 2D imagery captured with these systems is typically affected by ambient light and generally fails to record subtle 3D shape changes as the face performs. We have developed a system that augments a head-mounted camera with LED-based photometric stereo. The system allows observation of the face independent of the ambient light and generates per-pixel surface normals so that the performance is recorded dynamically in 3D. The resulting data can be used for facial relighting or as better input to machine learning algorithms for driving an animated face.
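A minimal sketch of the underlying per-pixel photometric-stereo solve, assuming the LEDs are time-multiplexed with one frame per LED plus an all-off frame for ambient-light subtraction; the rig's calibration and synchronization are taken as given, and names are illustrative.

import numpy as np

def led_photometric_stereo(frames, dark_frame, light_dirs):
    # frames: (K, H, W) grayscale images, one per LED; dark_frame: (H, W)
    # all-LEDs-off exposure used to subtract ambient light; light_dirs:
    # (K, 3) unit vectors toward the LEDs (calibration assumed).
    K, H, W = frames.shape
    I = np.clip(frames - dark_frame[None], 0.0, None).reshape(K, -1)
    g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # albedo-scaled normals
    g = g.T.reshape(H, W, 3)
    albedo = np.linalg.norm(g, axis=-1)
    return g / (albedo[..., None] + 1e-8), albedo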
international conference on computer graphics and interactive techniques | 2016
Chloe LeGendre; Xueming Yu; Dai Liu; Jay Busch; Andrew Jones; Sumanta N. Pattanaik; Paul E. Debevec
We present a practical framework for reproducing omnidirectional incident illumination conditions with complex spectra using a light stage with multispectral LED lights. For lighting acquisition, we augment standard RGB panoramic photography with one or more observations of a color chart with numerous reflectance spectra. We then solve for how to drive the multispectral light sources so that they best reproduce the appearance of the color charts in the original lighting. Even when solving for non-negative intensities, we show that accurate lighting reproduction is achievable using just four or six distinct LED spectra for a wide range of incident illumination spectra. A significant benefit of our approach is that it does not require the use of specialized equipment (other than the light stage) such as monochromators, spectroradiometers, or explicit knowledge of the LED power spectra, camera spectral response functions, or color chart reflectance spectra. We describe two simple devices for multispectral lighting capture, one for slow measurements of spectral detail with fine angular resolution, and one for fast measurements with coarse angular detail. We validate the approach by realistically compositing real subjects into acquired lighting environments, showing accurate matches to how the subject would actually look within the environments, even for those including complex multispectral illumination. We also demonstrate dynamic lighting capture and playback using the technique.
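The core solve can be sketched as a non-negative least-squares problem: find LED drive levels whose combined effect on the color chart best matches the chart's appearance in the captured lighting. The variable names and shapes below are illustrative, not the paper's notation.

import numpy as np
from scipy.optimize import nnls

def solve_led_weights(chart_target, chart_per_led):
    # chart_target: the color chart's appearance in the captured lighting,
    # flattened to (3 * n_patches,). chart_per_led: (3 * n_patches, n_leds),
    # the chart's appearance under each LED spectrum at unit intensity.
    # Returns non-negative drive levels that best reproduce the target.
    return nnls(chart_per_led, chart_target)

# Example with made-up numbers: a 24-patch chart and six LED spectra.
rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, size=(24 * 3, 6))
x_true = np.array([0.2, 0.0, 0.7, 0.1, 0.4, 0.0])
weights, residual = solve_led_weights(A @ x_true, A)
print(np.round(weights, 3))                      # recovers the non-negative weights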