Publication


Featured research published by John P. Lewis.


International Conference on Computer Graphics and Interactive Techniques | 2000

Pose space deformation: a unified approach to shape interpolation and skeleton-driven deformation

John P. Lewis; Matt Cordner; Nickson Fong

Pose space deformation generalizes and improves upon both shape interpolation and common skeleton-driven deformation techniques. This deformation approach proceeds from the observation that several types of deformation can be uniformly represented as mappings from a pose space, defined by either an underlying skeleton or a more abstract system of parameters, to displacements in the object local coordinate frames. Once this uniform representation is identified, previously disparate deformation types can be accomplished within a single unified approach. The advantages of this algorithm include improved expressive power and direct manipulation of the desired shapes, while retaining the performance associated with traditional shape interpolation. Appropriate applications include animation of facial and body deformation for entertainment, telepresence, and computer gaming, as well as any setting where direct sculpting of deformations is desired or where real-time synthesis of a deforming model is required.
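To make the pose-space idea concrete, here is a minimal numpy sketch of the interpolation step: sculpted per-vertex corrections at sample poses are interpolated to arbitrary poses by scattered interpolation. The Gaussian radial basis kernel, class name, and array shapes are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def gaussian_rbf(r, sigma=1.0):
    """Gaussian radial basis kernel."""
    return np.exp(-(r / sigma) ** 2)

class PoseSpaceDeformer:
    """Interpolates sculpted per-vertex corrections over a pose space.

    poses:  (P, D) sample pose parameters (e.g., joint angles)
    deltas: (P, V, 3) sculpted displacements, in the object local
            frame, for each sample pose
    """
    def __init__(self, poses, deltas, sigma=1.0):
        self.poses, self.sigma = poses, sigma
        # Solve for RBF weights so the interpolant reproduces each sample.
        dists = np.linalg.norm(poses[:, None] - poses[None, :], axis=-1)
        Phi = gaussian_rbf(dists, sigma)                      # (P, P)
        P = len(poses)
        self.w = np.linalg.solve(Phi, deltas.reshape(P, -1))  # (P, V*3)

    def __call__(self, pose):
        """Return interpolated (V, 3) displacements at an arbitrary pose."""
        r = np.linalg.norm(self.poses - pose, axis=-1)        # (P,)
        return (gaussian_rbf(r, self.sigma) @ self.w).reshape(-1, 3)
```

In a full system these displacements would be applied on top of the skeleton-driven vertex positions, so the sculpted shapes are recovered exactly at the sample poses and blended smoothly in between.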


IEEE Computer Graphics and Applications | 2005

Automated eye motion using texture synthesis

Zhigang Deng; John P. Lewis; Ulrich Neumann

Modeling human eyes requires special care. The goal of this paper is to synthesize realistic eye-gaze and blink motion, accounting for any possible correlations between the two. The technique adopts a data-driven texture synthesis approach to the problem of synthesizing realistic eye motion. The basic assumption is that eye gaze probably has some connection with eyelid motion, as well as with head motion and speech. But the connection is not strictly deterministic and would be difficult to characterize explicitly. A major advantage of the data-driven approach is that the investigator does not need to determine whether these apparent correlations actually exist. If the correlations occur in the data, the synthesis (properly applied) will reproduce them.
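A minimal sketch of this style of data-driven synthesis follows, using simple window matching over recorded frames; the function name, window size, and candidate count are illustrative assumptions, not the paper's implementation. Because whole frames are copied, any cross-channel correlations present in the data (e.g., between gaze and eyelid motion) are carried along for free.

```python
import numpy as np

def synthesize_motion(train, n_out, window=10, rng=None):
    """Data-driven synthesis of a multichannel motion signal.

    train: (T, C) recorded frames (e.g., gaze yaw/pitch, eyelid opening)
    Extends the signal frame by frame: find training windows that best
    match the last `window` synthesized frames and copy the frame that
    followed one of the closest matches.
    """
    rng = np.random.default_rng(rng)
    T = len(train)
    start = rng.integers(0, T - window)
    out = list(train[start:start + window])
    while len(out) < n_out:
        recent = np.asarray(out[-window:])            # (window, C)
        idx = np.arange(T - window)
        # Distance from the recent window to every training window.
        d = np.array([np.linalg.norm(train[i:i + window] - recent)
                      for i in idx])
        # Sample among the k best matches to avoid verbatim looping.
        best = idx[np.argsort(d)[:5]]
        out.append(train[rng.choice(best) + window])
    return np.asarray(out[:n_out])
```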


Computer Graphics Forum | 2006

Real-Time Weighted Pose-Space Deformation on the GPU

Taehyun Rhee; John P. Lewis; Ulrich Neumann

WPSD (Weighted Pose Space Deformation) is an example-based skinning method for articulated body animation. The per-vertex computation required in WPSD can be parallelized in a SIMD (Single Instruction Multiple Data) manner and implemented on a GPU. While such vertex-parallel computation is often done on the GPU vertex processors, further parallelism can potentially be obtained by using the fragment processors. In this paper, we develop a parallel deformation method using the GPU fragment processors. Joint weights for each vertex are automatically calculated from sample poses, thereby reducing manual effort and enhancing the quality of WPSD as well as SSD (Skeletal Subspace Deformation). We show sufficient speed-up of SSD, PSD (Pose Space Deformation), and WPSD to make them suitable for real-time applications.
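For reference, the SSD baseline that PSD and WPSD correct is plain linear blend skinning. Below is a minimal numpy sketch with hypothetical array shapes; the per-vertex independence of this computation is exactly what makes the SIMD/fragment-processor parallelization described above possible. WPSD augments this blend with example-based pose-space corrections interpolated per vertex.

```python
import numpy as np

def skin_ssd(rest_verts, weights, bone_mats):
    """Skeletal subspace deformation (linear blend skinning).

    rest_verts: (V, 3) vertices in the rest pose
    weights:    (V, B) per-vertex bone weights, rows summing to 1
    bone_mats:  (B, 4, 4) bone transforms from rest space to posed space
    """
    V = len(rest_verts)
    homog = np.hstack([rest_verts, np.ones((V, 1))])      # (V, 4)
    # Transform every vertex by every bone: (B, V, 4).
    per_bone = np.einsum('bij,vj->bvi', bone_mats, homog)
    # Blend the candidate positions by the per-vertex weights: (V, 4).
    blended = np.einsum('vb,bvi->vi', weights, per_bone)
    return blended[:, :3]
```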


Computer Graphics Forum | 2010

A Survey of Procedural Noise Functions

Ares Lagae; Sylvain Lefebvre; Robert L. Cook; Tony DeRose; George Drettakis; David S. Ebert; John P. Lewis; Ken Perlin; Matthias Zwicker

Procedural noise functions are widely used in computer graphics, from off-line rendering in movie production to interactive video games. The ability to add complex and intricate details at low memory and authoring cost is one of their main attractions. This survey is motivated by the inherent importance of noise in graphics, the widespread use of noise in industry, and the fact that many recent research developments justify the need for an up-to-date survey. Our goal is to provide both a valuable entry point into the field of procedural noise functions and a comprehensive view of the field for the informed reader. In this report, we cover procedural noise functions in all their aspects. We outline recent advances in research on this topic, discussing and comparing recent and well-established methods. We first formally define procedural noise functions based on stochastic processes and then classify and review existing procedural noise functions. We discuss how procedural noise functions are used for modelling and how they are applied to surfaces. We then introduce analysis tools and apply them to evaluate and compare the major approaches to noise generation. We finally identify several directions for future work.
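For flavor, here is a minimal sketch of one simple member of this family: 2D lattice value noise with a fractal (fBm) sum. It illustrates the general shape of a procedural noise function rather than any specific published method; the hash constants are arbitrary, and gradient noises such as Perlin's differ in what is stored at the lattice points.

```python
import numpy as np

def _lattice(ix, iy, seed=0):
    """Deterministic pseudo-random value in [0, 1) per integer lattice point."""
    h = (ix * 374761393 + iy * 668265263 + seed) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return (h ^ (h >> 16)) / 2.0**32

def fade(t):
    """Quintic fade curve 6t^5 - 15t^4 + 10t^3 (as in improved Perlin noise)."""
    return t * t * t * (t * (6.0 * t - 15.0) + 10.0)

def value_noise(x, y, seed=0):
    """Band-limited lattice value noise, roughly in [-1, 1]."""
    ix, iy = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - ix, y - iy
    n00 = _lattice(ix,     iy,     seed)
    n10 = _lattice(ix + 1, iy,     seed)
    n01 = _lattice(ix,     iy + 1, seed)
    n11 = _lattice(ix + 1, iy + 1, seed)
    nx0 = n00 + fade(fx) * (n10 - n00)
    nx1 = n01 + fade(fx) * (n11 - n01)
    return 2.0 * (nx0 + fade(fy) * (nx1 - nx0)) - 1.0

def fbm(x, y, octaves=5, lacunarity=2.0, gain=0.5):
    """Fractal sum of noise octaves, the usual 'turbulence' construction."""
    total, amp, freq = 0.0, 1.0, 1.0
    for _ in range(octaves):
        total += amp * value_noise(x * freq, y * freq)
        amp *= gain
        freq *= lacunarity
    return total
```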


International Conference on Computer Graphics and Interactive Techniques | 2003

Universal capture: image-based facial animation for "The Matrix Reloaded"

George Borshukov; Dan Piponi; Oystein Larsen; John P. Lewis; Christina Tempelaar-Lietz

The VFX R&D stage for The Matrix Reloaded was kicked off in January 2000 with the challenge to create realistic human faces. We believed that traditional facial animation approaches like muscle deformers or blend shapes would simply never work, both because of the richness of facial movement and because of the human viewer's extreme sensitivity to facial nuances. Our task was further complicated as we had to recreate familiar actors such as Keanu Reeves and Laurence Fishburne. Our team had been very successful at applying image-based techniques for photorealistic film set/location rendering, so we decided to approach the problem from the image-based side again. We wanted to produce a 3D recording of the real actor's performance and be able to play it back from different angles and under different lighting conditions. Just as we can extract geometry, texture, or light from images, we are now able to extract movement. Universal Capture combines two powerful computer vision techniques: optical flow and photogrammetry.


Interactive 3D Graphics and Games | 2006

Human hand modeling from surface anatomy

Taehyun Rhee; Ulrich Neumann; John P. Lewis

The human hand is an important interface with complex shape and movement. In virtual reality and gaming applications the use of an individualized rather than generic hand representation can increase the sense of immersion and in some cases may lead to more effortless and accurate interaction with the virtual world. We present a method for constructing a person-specific model from a single, canonically posed palm image of the hand, without human guidance. Tensor voting is employed to extract the principal creases on the palmar surface. Joint locations are estimated using extracted features and analysis of surface anatomy. The skin geometry of a generic 3D hand model is deformed using radial basis functions guided by correspondences to the extracted surface anatomy and hand contours. The result is a 3D model of an individual's hand, with similar joint locations, contours, and skin texture.
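A rough sketch of the final deformation step is below, assuming Gaussian radial basis functions and hypothetical landmark arrays; tensor voting and the anatomical analysis that produce the correspondences are outside its scope, and the paper's exact basis function may differ.

```python
import numpy as np

def rbf_warp(verts, src_marks, dst_marks, sigma=1.0):
    """Deform mesh vertices so source landmarks move to target landmarks.

    verts:     (V, 3) generic hand-model vertices
    src_marks: (L, 3) landmark positions on the generic model
    dst_marks: (L, 3) corresponding positions from the extracted anatomy
    """
    def kernel(a, b):
        d = np.linalg.norm(a[:, None] - b[None, :], axis=-1)
        return np.exp(-(d / sigma) ** 2)

    # Solve for weights that carry each source landmark to its target offset.
    w = np.linalg.solve(kernel(src_marks, src_marks),
                        dst_marks - src_marks)          # (L, 3)
    # Apply the smooth interpolated displacement field to every vertex.
    return verts + kernel(verts, src_marks) @ w
```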


Eurographics | 2014

Practice and Theory of Blendshape Facial Models

John P. Lewis; Ken Anjyo; Taehyun Rhee; Mengjie Zhang; Frédéric H. Pighin; Zhigang Deng

“Blendshapes”, a simple linear model of facial expression, is the prevalent approach to realistic facial animation. It has driven animated characters in Hollywood films, and is a standard feature of commercial animation packages. The blendshape approach originated in industry, and became a subject of academic research relatively recently. This survey describes the published state of the art in this area, covering both literature from the graphics research community, and developments published in industry forums. We show that, despite the simplicity of the blendshape approach, there remain open problems associated with this fundamental technique.
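The underlying model is simple enough to state in a few lines. A minimal sketch of the delta (offset) formulation, with hypothetical array shapes, follows.

```python
import numpy as np

def blendshape_face(neutral, targets, weights):
    """Evaluate the linear blendshape model  f = b0 + sum_k w_k (b_k - b0).

    neutral: (V, 3) neutral face geometry b0
    targets: (K, V, 3) sculpted expression targets b_k
    weights: (K,) slider values, typically in [0, 1]
    """
    deltas = targets - neutral          # offsets from the neutral face
    return neutral + np.tensordot(weights, deltas, axes=1)
```

Implementations differ mainly in whether they store whole target shapes or precomputed deltas; the evaluated model is the same weighted sum either way.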


ACM Transactions on Graphics | 2012

Spacetime expression cloning for blendshapes

Yeongho Seol; John P. Lewis; Jaewoo Seo; Byungkuk Choi; Ken Anjyo; Junyong Noh

The goal of a practical facial animation retargeting system is to reproduce the character of a source animation on a target face while providing room for additional creative control by the animator. This article presents a novel spacetime facial animation retargeting method for blendshape face models. Our approach starts from the basic principle that the source and target movements should be similar. By interpreting movement as the derivative of position with time, and adding suitable boundary conditions, we formulate the retargeting problem as a Poisson equation. Specified (e.g., neutral) expressions at the beginning and end of the animation as well as any user-specified constraints in the middle of the animation serve as boundary conditions. In addition, a model-specific prior is constructed to represent the plausible expression space of the target face during retargeting. A Bayesian formulation is then employed to produce target animation that is consistent with the source movements while satisfying the prior constraints. Since the preservation of temporal derivatives is the primary goal of the optimization, the retargeted motion preserves the rhythm and character of the source movement and is free of temporal jitter. More importantly, our approach provides spacetime editing for the popular blendshape representation of facial models, exhibiting smooth and controlled propagation of user edits across surrounding frames.
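A minimal 1D illustration of the derivative-matching idea, posed as a soft-constrained least-squares problem over a single animation channel, is sketched below. The Bayesian prior over the target's plausible expression space is omitted, and the function name and constraint weight are illustrative assumptions.

```python
import numpy as np

def retarget_curve(source, constraints, w_con=1e3):
    """Retarget one animation channel by matching temporal derivatives.

    Minimizes ||D x - D s||^2 plus soft constraint terms, a 1D discrete
    analogue of the Poisson formulation: keep the source's frame-to-frame
    movement while pinning selected frames to specified target values.

    source:      (T,) source channel values s
    constraints: dict {frame_index: target_value}, e.g. neutral poses at
                 the first and last frames
    """
    T = len(source)
    D = np.eye(T, k=1)[:-1] - np.eye(T)[:-1]       # (T-1, T) forward difference
    A = [D]
    b = [D @ source]
    for f, val in constraints.items():
        row = np.zeros(T)
        row[f] = w_con                              # soft boundary condition
        A.append(row[None, :])
        b.append(np.array([w_con * val]))
    A, b = np.vstack(A), np.concatenate(b)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

For example, retarget_curve(s, {0: 0.0, len(s) - 1: 0.0}) reproduces the source's movement while pinning neutral values at the first and last frames.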


International Conference on Computer Graphics and Interactive Techniques | 2004

Improved automatic caricature by feature normalization and exaggeration

Zhenyao Mo; John P. Lewis; Ulrich Neumann

This sketch presents an improved formalization of automatic caricature that extends a standard approach to account for the population variance of facial features. Caricature is generally considered a rendering that emphasizes the distinctive features of a particular face. A formalization of this idea, which we term “Exaggerating the Difference from the Mean” (EDFM), is widely accepted among caricaturists [Redman 1984] and was first implemented in a groundbreaking computer program by [Brennan 1985]. Brennan’s “Caricature generator” program produced caricatures by manually defining a polyline drawing with topology corresponding to a frontal mean face-shape drawing, and then displacing the vertices by a constant factor away from the mean shape. Many psychological studies have applied the “Caricature Generator” or EDFM idea to investigate caricature-related issues in face perception [Rhodes 1997].
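The EDFM rule itself is nearly a one-liner. Here is a minimal sketch, with a hypothetical per-feature normalization standing in for the population-variance extension the sketch describes; parameter names are illustrative.

```python
import numpy as np

def edfm_caricature(face, mean_face, k=1.5, scale=None):
    """Exaggerating the Difference From the Mean (EDFM).

    face, mean_face: (N, 2) corresponding 2D feature points
    k:     exaggeration factor (k = 1 reproduces the input face)
    scale: optional (N, 2) per-feature normalization, e.g. inverse
           population standard deviations, so features with little
           natural variation are not over-exaggerated
    """
    diff = face - mean_face
    if scale is not None:
        diff = diff * scale                 # variance-aware exaggeration
    return mean_face + k * diff
```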


International Conference on Computer Graphics and Interactive Techniques | 2006

Facial motion retargeting

Frédéric H. Pighin; John P. Lewis

When done correctly, a digitally recorded facial performance is an accurate measurement of the performer’s motions. As such it reflects all the idiosyncrasies of the performer. However, often the digital character that needs to be animated is not a digital replica of the performer. In this case, the decision to use performance capture might be motivated by cost issues, the desire to use a favorite actor regardless of the intended character, or the desire to portray an older, younger, or otherwise altered version of the actor. The many incarnations of Tom Hanks in The Polar Express illustrate several of these scenarios. In each of these cases, the recorded (source) performance has to be adapted to the target character. This process is called motion retargeting or cross-mapping. In this section, we examine different techniques for retargeting a recorded facial performance onto a digital face.

Collaboration


Dive into John P. Lewis's collaborations.

Top Co-Authors

Ulrich Neumann, University of Southern California
Taehyun Rhee, Victoria University of Wellington
Krishna S. Nayak, University of Southern California
Milind Tambe, University of Southern California
Nathan Schurr, University of Southern California
Paul Scerri, Carnegie Mellon University
Zhenyao Mo, University of Southern California