Data-Driven Physical Face Inversion

Yeara Kozlov, Hongyi Xu, Moritz Bächer, Derek Bradley, Markus Gross, Thabo Beeler
DisneyResearch|Studios, ETH Zurich, Disney Research
Figure 1: We present a method to capture the physical material properties of real human faces for use in physical simulation for facial animation. From a sparse set of facial surface deformations obtained from a lightweight capture setup (a), and a simple volumetric face model (b), we automatically infer both rest-shape simulation geometry and spatially-varying material properties (red: high Young's modulus; blue: low Young's modulus) for physical simulation (c).
Abstract
Facial animation is one of the most challenging problems in computer graphics, and it is often solved using linear heuristics like blend-shape rigging. More expressive approaches like physical simulation have emerged, but these methods are very difficult to tune, especially when simulating a real actor's face. We propose to use a simple finite element simulation approach for face animation, and present a novel method for recovering the required simulation parameters in order to best match a real actor's face motion. Our method involves reconstructing a very small number of head poses of the actor in 3D, where the head poses span different configurations of force directions due to gravity. Our algorithm can then automatically recover both the gravity-free rest shape of the face as well as the spatially-varying physical material stiffness such that a forward simulation will match the captured targets as closely as possible. As a result, our system can produce actor-specific physical parameters that can be immediately used in recent physical simulation methods for faces. Furthermore, as the simulation results depend heavily on the chosen spatial layout of material clusters, we analyze and compare different spatial layouts.
1. Introduction
Creating realistic physical effects for human faces using simulation methods in computer graphics is an extremely challenging problem. On the one hand, linear surface rigs, such as blendshapes, offer intuitive control but are not suited for physical simulation, and physical effects have to be added manually. On the other hand, anatomically accurate rigs are hard to create and control, and require high expertise in both human anatomy and CG animation. Recently, simple volumetric rigs have been proposed as a middle ground [IKNDP16, KBB∗17, IKKP17], which maintain the intuitive control of blendshapes but are also suited for physical simulation. Finding suitable material properties for such rigs is, however, a challenging task, which so far is only possible in a manual trial and error process. And since physical properties change significantly between humans, due to tissue distribution, age and body mass index (BMI), they have to be re-specified anew for every character. In this paper we investigate how to directly estimate the physical properties of human faces from captured data.

One possible approach to this problem would be to measure tissue properties in-vitro for the different tissues that constitute a face, and then distribute these properties assuming some knowledge of the internal tissue distributions. Of course, usually such knowledge is not available, and even in a case where the internal structure of the tissue can be observed, i.e. from MRI or CT data, and the anatomical structures are known, recovering material parameters for simulation (e.g. stiffness) is still challenging, since the in-vitro uni-axial stress-strain response measured from thin samples does not directly map to the behaviour of larger tissue structures in-vivo. We feel a more promising approach is to measure the material properties of human soft tissue in-vivo, as has been done at a few sparse locations using a force probe [BBO∗09, PRWH∗18].
2. Related Work
Arguably, the most common approach to animate faces is blendshape-based facial animation [LAR∗14].

One of the first methods for physics-based facial animation was presented by Sifakis et al. [SNF05], who built a detailed face rig consisting of a complete, anatomically accurate muscle structure, created manually from the actor's medical data. Building the muscle structure for an actor is a time consuming process, therefore Cong et al. [CBJ∗15] developed an automatic way to transfer a template anatomy to target input faces. These transferred muscle-based rigs can then be artistically refined by modifying the tracks that muscles follow during activation [CBF16]. Ichim et al. [IKKP17] also fit a template model of bones, muscles and flesh to facial scans. Their method succeeds by solving for the muscle activation parameters that best fit the input scans during forward simulation, and thus produces an actor-specific physical face mesh for animation. A similar approach for full bodies was explored by Kadleček et al. [KIL∗16].

Physical simulation has also been combined with blendshape animation, either through surface-based methods [MWF∗12, BSC16, BS18] or volumetric rigs [LXB17, IKNDP16, KBB∗17]. Ma et al. [MWF∗12] use a mass-spring system to build a blendshape model that incorporates physical interaction. You et al. [YSZ09], as well as Barrielle et al. [BSC16], propose to define forces at the vertices of a face mesh and blend different forces that correspond to different face shapes in order to generate facial animation in a simulation setting, which can even be extended to real-time animation [BS18]. While surface-based simulation methods are typically easier to set up, they cannot create volume-based dynamics such as those of soft tissue, which is one of the benefits of volumetric rigs. On the volumetric side, Li et al. [LXB17] propose a method to enrich triangle mesh animations by fitting a tetrahedral mesh, applying physics, and then transferring the secondary motion and collision resolution to the original mesh. Ichim et al. [IKNDP16], as well as Kozlov et al. [KBB∗17], propose to more intricately couple blendshape-based facial animation with user-specific volumetric rigs, creating blend-volumes that can be used in a finite-element simulation approach. The work of Ichim et al. [IKNDP16] allows for several interesting physical effects but does not account for expression-specific dynamics. Kozlov et al. [KBB∗17] focus on the creation of expression-specific physical effects, but the drawback is that spatially-varying material parameters need to be painted and set manually by an artist for each expression. Our method is complementary to Kozlov et al. [KBB∗17], as we aim to automatically determine the material parameters for a similar volumetric simulation rig.
Our work falls into the category of estimating physical parameters from real world capture data. Several methods have been presented outside the focus area of facial animation, for example on the topic of capturing cloth simulation parameters [BTH∗03, WOR11, MBT∗12] or the material behaviour of general soft objects [WWY∗15, MMO16]. Our work estimates both a nonlinear heterogeneous material distribution and rest shape geometry for human faces, under a unified optimization framework of sensitivity analysis. Similar to recent methods [XLCB15, WWY∗15], we reduce the dimensionality of the material estimation, in our case with spatial material clusters. For human tissue in particular, Bickel et al. [BBO∗09] fit a stress-strain curve to the elastostatic deformation of skin tissue under known forces. More recently, Pai et al. [PRWH∗18] measure the mechanical properties of the human body with a new handheld device. Kim et al. [KPMP∗17] learn an active and passive tissue segmentation and the material parameters for the active tissue for whole-body deformations. Finally, it is worth mentioning that Pons-Moll et al. [PMRMB15] took not a physics-based but rather a learning approach to model the deformation caused by soft-tissue dynamics in an application of full body animation, by scanning over 40,000 poses of real people. A similar approach could be used for more expressive facial animation without physical simulation.
3. Overview
As illustrated in Fig. 1, the input to our approach is a sparse set of aligned facial surface deformations under gravity, captured at different orientations of the head with a fixed expression (Section 6). Our approach outputs the simulation mesh geometry at rest (without gravity) and spatially-varying material stiffness across the deformable volume of the face, such that when we forward simulate the face model under gravity, it will match the target surface deformations.

We model the deformable face with a tetrahedral mesh and physically simulate the deformation using the Finite Element Method (FEM) with a nonlinear material model (Section 4). To obtain the physical parameters for forward simulation, we formulate an inverse optimization to match the captured facial deformations. We optimize the unknown physical parameters by alternating between optimizing the rest shape geometry and the material stiffness distribution using sensitivity analysis (Section 5). We regularize our inverse problem by aggregating the material distribution into spatial clusters (Section 5.2) and asking for minimal rest-shape geometry changes from our initial guess (Section 5.3).

In Section 7, we analyze the optimized results from different spatial material layouts. We validate our optimized physical parameters by simulating to different poses from the capture sequence, which were not part of the optimization targets.
4. Physically-Based Face Simulation
Inspired by [KBB∗17], we do not explicitly model the underlying anatomy but instead treat the entire face as a single volume, with spatially varying material properties that abstract the underlying anatomical complexity. This results in much simpler creation and simulation of a human face, while still achieving physically plausible effects, as recently demonstrated in various papers [IKNDP16, KBB∗17, IKKP17].

To simulate the deforming facial tissue, we use Finite Element Method (FEM) simulation of nonlinear elastic materials. The deformable volume is modeled as a tetrahedral mesh, subject to fixed boundary conditions at the skull and jaw (Fig. 2).

Figure 2: The simulation face model consists of a single deformable volume of tetrahedral elements (right, wireframe), conforming to the surface mesh (left), attached to the skull and jaw bone by hard boundary conditions (right, red dots).

The simulation model is governed by an underlying hyperelastic material law, where the elastic potential energy W is a summation of elemental contributions W_e(X, x, P),

    W(X, x, P) = ∑_e W_e(X, x, P),    (1)

where P are the per-element material parameters and X, x ∈ R^{3n} are the undeformed and deformed nodal positions, respectively, with n denoting the number of nodes. We use a Neo-Hookean material [BW08] to model the nonlinearity in facial deformation, in which case P is a collection of per-element Young's moduli E_e and Poisson's ratios ν_e, with m denoting the number of tetrahedral elements. To improve the simulation robustness to element inversions, we employ invertible FEM simulation using a thresholded deformation gradient [ITF04].

For quasi-static deformation, the deformed state x can be determined by balancing the elastic and external forces as

    ∇_x W(X, x, P) = f_ext,    (2)

where f_ext ∈ R^{3n} can be gravity, contact or any other external loads. In Section 5, we will use this quasi-static simulation for our inverse problem, where the face is statically deformed under gravity.
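As a toy illustration of the quasi-static solve in Equation 2 (not the paper's FEM system), the sketch below finds the equilibrium of a single 1D nonlinear spring with Newton's method; the stiffness constants and load are made-up illustration values.

```python
# Quasi-static equilibrium sketch: find x with dW/dx(x) = f_ext via Newton.
# A 1D nonlinear spring stands in for the 3n-dimensional FEM system;
# k, c, and f_ext are invented for illustration.

def dW(x, X, k=10.0, c=2.0):
    """Elastic force dW/dx of a nonlinear spring with rest position X."""
    s = x - X
    return k * s + c * s**3

def d2W(x, X, k=10.0, c=2.0):
    """Tangent stiffness d^2W/dx^2 (the 1D analogue of the stiffness matrix)."""
    s = x - X
    return k + 3.0 * c * s**2

def quasi_static(X, f_ext, tol=1e-10, max_iter=50):
    """Newton's method on the force balance dW/dx(x) = f_ext."""
    x = X  # start from the rest configuration
    for _ in range(max_iter):
        r = dW(x, X) - f_ext      # force residual
        if abs(r) < tol:
            break
        x -= r / d2W(x, X)        # Newton update with the tangent stiffness
    return x

x_eq = quasi_static(X=1.0, f_ext=3.0)
```

In the full system, x, the residual and the tangent stiffness become the nodal state vector, force vector and stiffness matrix of the tetrahedral mesh.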
5. Static Inversion
Given the face model described in Section 4, the facial tissue will be statically deformed under gravity. In this section, we describe how we invert this effect by estimating simulation-ready rest-shape geometry X and per-element material parameters P based on observations of the static facial deformation under known gravity forces. In Section 6, we outline a capture procedure and processing pipeline to generate such observations.

Given a deformed configuration and the initial simulation mesh topology, we aim at solving for the undeformed vertices X and material parameters P. We formulate our optimization objective as

    g(X, P) = ½ ||S x(X, P) − x̄_s||² + α R(X),    (3)
    subject to  P_l ≤ P ≤ P_u,    (4)

where x̄_s ∈ R^s denotes the subset of deformed and observed vertices on the surface and S ∈ R^{s×3n} is the selection matrix for the corresponding observed vertices. R(X) is a regularization term we will use in our rest shape geometry optimization (Section 5.3). P_l and P_u are the lower and upper bounds for our material parameters, respectively. Note that the relationship x(X, P) is non-trivial: given a new rest configuration or an updated set of material parameters, we first solve a quasi-static forward simulation (Equation 2) to get the deformed configuration x at static equilibrium before we compare the result to the observed vertices x̄_s.

To minimize this objective with a quasi-Newton method (BFGS), an analytical gradient is desirable:

    dg(X, P)/dX = α dR(X)/dX + (S x(X, P) − x̄_s)^T S dx(X, P)/dX,
    dg(X, P)/dP = (S x(X, P) − x̄_s)^T S dx(X, P)/dP.    (5)

However, there is no direct analytical expression for dx(X, P)/dX or dx(X, P)/dP due to the aforementioned implicit relationship x(X, P). Therefore we propose to use sensitivity analysis to compute the analytical gradient.

For simplicity of notation, we assemble all the optimization variables into a vector y = [X, P]. Starting from Equation 2, we take the derivative with respect to the optimization variables y on both sides and get

    −∂f_ext(y)/∂y + ∂²W(x, y)/(∂y ∂x) + ∂²W(x, y)/∂x² · dx(y)/dy = O,    (6)

with O denoting the zero matrix. We note that in our case, f_ext is the gravity force only. We can compute dx(y)/dy by solving the equation system

    ∂²W(x, y)/∂x² · dx(y)/dy = b(x, y),    (7)

with right-hand side

    b(x, y) = ∂f_ext(y)/∂y − ∂²W(x, y)/(∂y ∂x),    (8)

where ∂²W(x, y)/∂x² is the stiffness matrix we use when solving the forward problem with Newton's method.
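The sensitivity system (Equations 6–8) can be illustrated on a 1D toy model: implicitly differentiate the force balance with respect to a material parameter k and check the result against a finite difference of the re-solved equilibrium. All constants are illustrative, not from the paper.

```python
# Sensitivity analysis in miniature: at equilibrium dW/dx(x, k) = f,
# implicit differentiation gives  (d2W/dx2) * dx/dk = -d2W/(dx dk),
# i.e. the derivative of the equilibrium state without re-solving.
# The nonlinear spring and its constants are made up for illustration.

def solve_equilibrium(k, X=1.0, c=2.0, f=3.0):
    """Newton solve of k*(x-X) + c*(x-X)^3 = f."""
    x = X
    for _ in range(60):
        s = x - X
        r = k * s + c * s**3 - f          # residual of the force balance
        x -= r / (k + 3.0 * c * s**2)     # Newton step
    return x

def sensitivity_dx_dk(k, X=1.0, c=2.0, f=3.0):
    """Analytic dx/dk from the linearized equilibrium (Eq. 7 analogue)."""
    x = solve_equilibrium(k, X, c, f)
    s = x - X
    d2W_dx2 = k + 3.0 * c * s**2          # 1D "stiffness matrix"
    d2W_dxdk = s                          # mixed derivative of dW/dx w.r.t. k
    return -d2W_dxdk / d2W_dx2            # rhs b = -d2W/(dx dk), f is independent of k

k0, h = 10.0, 1e-6
analytic = sensitivity_dx_dk(k0)
numeric = (solve_equilibrium(k0 + h) - solve_equilibrium(k0 - h)) / (2 * h)
```

Stiffening the spring reduces the deflection, so dx/dk is negative, and the implicit derivative matches the finite difference.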
The dimension of the right-hand side (Equation 8) is 3n × p, where p is the number of optimization variables. Evaluating the system in Equation 7 directly requires p linear system solves. Even though we can prefactorize the left-hand side matrix, this is still computationally expensive when we have a high-dimensional optimization problem. Here, instead, we use the adjoint method [CLPS03] to reduce this to a single system solve. We plug the explicit expression

    dx(y)/dy = (∂²W(x, y)/∂x²)^{-1} b(x, y)    (9)

into the expression for the gradient,

    dg(y)/dy = α dR(y)/dy + (S x − x̄_s)^T S (∂²W(x, y)/∂x²)^{-1} b(x, y).    (10)

By introducing a column vector λ with

    λ^T = (S x − x̄_s)^T S (∂²W(x, y)/∂x²)^{-1},    (11)

we can avoid solving several linear equation systems and instead solve the single adjoint system

    ∂²W(x, y)/∂x² · λ = S^T (S x − x̄_s),    (12)

resulting in the gradient

    dg(y)/dy = α dR(y)/dy + λ^T b(x, y).    (13)

Instead of optimizing all the variables at the same time, we use block coordinate descent for our inverse problem, alternating between material optimization (Section 5.2) and rest shape optimization (Section 5.3) until convergence.

Material Optimization. For material optimization, we keep the rest shape geometry X constant and optimize the per-element material parameters P. As discussed in Section 4, we use a Neo-Hookean material for our forward simulation, where the material stiffness is parameterized with the Young's modulus E and volume preservation is controlled with the Poisson's ratio ν. We note, however, that different facial tissues mostly differ in their Young's modulus but are close in Poisson's ratio, with high incompressibility. We therefore only optimize the heterogeneous distribution of Young's modulus for our inverse problem, and use a constant, homogeneous, near-incompressible Poisson's ratio ν.
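Both blocks of the coordinate descent need gradients of g, which the adjoint evaluation (Equations 9–13) delivers with a single extra solve. A minimal sketch for a linear equilibrium K(p) x = f with symmetric K and objective ||x − x̄||²; the matrices and target below are made-up illustration data, but the structure — one adjoint solve instead of one solve per parameter — is the point.

```python
import numpy as np

# Adjoint trick in miniature. dK/dp_i = A_i, so dx/dp_i = -K^{-1} A_i x and
# dg/dp_i = -lam^T (A_i x) with a single adjoint solve K lam = dg/dx.
A = [np.eye(3), np.diag([1.0, 2.0, 3.0])]   # per-parameter stiffness blocks (invented)
f = np.array([1.0, 2.0, 3.0])               # external load (invented)
x_bar = np.array([0.5, 0.4, 0.3])           # observed target (invented)

def K(p):
    return p[0] * A[0] + p[1] * A[1]

def objective(p):
    x = np.linalg.solve(K(p), f)
    r = x - x_bar
    return float(r @ r)

def adjoint_gradient(p):
    x = np.linalg.solve(K(p), f)
    lam = np.linalg.solve(K(p), 2.0 * (x - x_bar))   # the single adjoint solve
    return np.array([-(lam @ (Ai @ x)) for Ai in A])

p0 = np.array([2.0, 1.0])
grad = adjoint_gradient(p0)

# Finite-difference check of the adjoint gradient.
eps = 1e-6
fd = np.array([
    (objective(p0 + eps * np.eye(2)[i]) - objective(p0 - eps * np.eye(2)[i])) / (2 * eps)
    for i in range(2)
])
```

With thousands of parameters, the p forward solves of the naive approach are replaced by this one adjoint solve per gradient evaluation.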
The per-element material optimization is m-dimensional, where m is the number of tetrahedral elements and can be very large as the simulation mesh becomes complex. In practice, high-dimensional optimization often suffers from slow convergence, poor local minima and parameter overfitting. Inspired by reduced material optimization [XLCB15, WWY∗15], we therefore define c spatial clusters (c ≪ m) of the material distribution (Section 6.5). The Young's modulus of each tet is then a linear combination of the material cluster values,

    E_e = ∑_{i=1}^{c} w_ie E_c^i,    (14)

where w_ie is the weight of cluster i for element e, and ∑_{i=1}^{c} w_ie = 1. In our case, we assign each interior element exclusively to a single cluster, but for elements at the boundary of clusters, we linearly interpolate between the neighboring clusters. By optimizing the material parameters of the c clusters only, we solve a much lower-dimensional problem, where the analytical gradient with respect to the i-th cluster parameter E_c^i is calculated as

    dg/dE_c^i = ∑_{e=1}^{m} (dg/dE_e) w_ie.    (15)

The initial guess for our material optimization is a homogeneous distribution of Young's modulus.
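The cluster parameterization of Equations 14–15 reduces to simple weighted sums; a sketch with made-up weights, cluster stiffnesses and per-element gradients:

```python
# Material clustering sketch: each element's Young's modulus is a convex
# combination of c cluster values (Eq. 14), and gradients w.r.t. the cluster
# values are aggregated from per-element gradients with the same weights
# (Eq. 15). Weights and numbers below are invented, not from a real mesh.

# w[e][i]: weight of cluster i for element e; each row sums to 1.
w = [
    [1.0, 0.0],   # interior element, fully in cluster 0
    [0.0, 1.0],   # interior element, fully in cluster 1
    [0.5, 0.5],   # boundary element, interpolated between clusters
]

def element_moduli(E_clusters):
    """Eq. (14): E_e = sum_i w_ie * E_c_i."""
    return [sum(we[i] * E_clusters[i] for i in range(len(E_clusters))) for we in w]

def cluster_gradient(dg_dEe):
    """Eq. (15): dg/dE_c_i = sum_e (dg/dE_e) * w_ie."""
    c = len(w[0])
    return [sum(dg_dEe[e] * w[e][i] for e in range(len(w))) for i in range(c)]

E_e = element_moduli([0.01, 0.05])           # MPa, invented cluster stiffnesses
grad_c = cluster_gradient([1.0, 2.0, 4.0])   # invented per-element gradients
```

The chain rule through the fixed weights is all that distinguishes the c-dimensional problem from the full m-dimensional one.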
We bound the Young's modulus with a lower bound of P_l = 0.001 MPa and an upper bound P_u.

Rest Shape Optimization. Although one of our observations is a deformed surface geometry under gravity in a neutral expression (refer to Section 6), the undeformed positions X of the simulation mesh still remain unknown. To solve for the rest shape geometry, we keep the material parameters unchanged and optimize the rest shape X. Note that we need to find undeformed positions for all the nodes, including the interior ones, even though only a subset x̄_s of the surface mesh nodes is observed by our capture system.

Similar to the element-wise material optimization, the geometry optimization is a high-dimensional problem, with 3n variables. However, here we adopt a different strategy than for the material optimization, regularizing the objective with

    R(X) = W(X₀, X, P̄),    (16)

where X₀ is the initial guess for the undeformed geometry and P̄ are constant material parameters. The regularization term R(X) helps maintain a consistent simulation topology by penalizing, with the elastic energy W, large deviations of the solution from the initial guess.

The initial guess X₀ for our rest shape geometry is obtained by forward quasi-static simulation of the volumetric mesh under the inverted gravity force. We note that this initial guess is reasonably close to our solution; we therefore use a gradient descent method with line search for our optimization. With the analytic gradient ΔX = dg(X)/dX evaluated via the adjoint method, we perform a line search along the gradient direction. To avoid simulation mesh topology changes from element inversion, we backtrack the maximum step length β such that no element is inverted [SS15] and the first Wolfe condition [NW06] is satisfied:

    g(X^(k) − β ΔX) ≤ g(X^(k)) − γ β ΔX^T ∇g(X^(k)),    (17)

where γ is a small control parameter.
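The backtracking line search with the sufficient-decrease condition of Equation 17 can be sketched in 1D; the element-inversion cap of [SS15] is omitted, and the objective and constants are illustrative stand-ins.

```python
# Backtracking line search sketch for the rest-shape update: shrink the step
# beta until the first Wolfe (sufficient-decrease) condition holds for the
# gradient-descent direction d = dg(x). Toy 1D objective, invented constants.

def g(x):
    return (x - 2.0) ** 2          # toy objective with minimum at x = 2

def dg(x):
    return 2.0 * (x - 2.0)

def backtracking_step(x, beta=1.0, gamma=1e-4, shrink=0.5, max_tries=30):
    """Return a step length beta with g(x - beta*d) <= g(x) - gamma*beta*d*dg(x)."""
    d = dg(x)                      # descent direction is -d (gradient descent)
    for _ in range(max_tries):
        if g(x - beta * d) <= g(x) - gamma * beta * d * dg(x):
            return beta            # sufficient decrease reached
        beta *= shrink             # back-track: halve the step
    return beta

x0 = 0.0
beta = backtracking_step(x0)
x1 = x0 - beta * dg(x0)
```

In the paper's setting, an additional cap on β ensures that no tetrahedron inverts before the Wolfe test is even evaluated.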
Multiple Poses. In order to robustly estimate the physical properties, we include multiple poses of the same expression under different head rotations and observe the facial deformations under gravity. To compute the rest configuration and material parameters from several observations under varying pose orientations, we extend our objective to

    g(X, P) = ∑_o ½ ||S_o x_o(X, P) − x̄_s^o||²,    (18)

where each o is an observation of the same expression in a different orientation relative to gravity. We note that instead of transforming the rest shape X according to the orientations, we align our observed targets to a single pose, rotate the direction of gravity based on the relative orientation, and perform the optimization in the canonical frame. In practice, we also observe better convergence by starting our optimization from a single pose and using the optimized result as the initial guess for the multiple-pose optimization.
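The multi-pose objective of Equation 18 — one canonical frame, gravity rotated per observation — in miniature, with a simple spring-deflection model standing in for the simulator and synthetic targets; all constants are invented for illustration.

```python
import math

# Multi-pose objective sketch: targets are aligned to one canonical frame and
# only the gravity direction changes per observation. Toy forward model: a
# particle on a linear spring deflects by m*g*cos(theta)/k along the spring
# axis when the head is tilted by theta.

M, G = 1.0, 9.81

def deflection(k, theta):
    """Quasi-static deflection under the rotated gravity component."""
    return M * G * math.cos(theta) / k

# Synthetic observations generated with a "true" stiffness of k = 50.
thetas = [0.0, math.pi / 6, math.pi / 4, math.pi / 3]
targets = [deflection(50.0, t) for t in thetas]

def multi_pose_objective(k):
    """g(k) = sum_o (x_o(k) - x_bar_o)^2 over all head orientations."""
    return sum((deflection(k, t) - x_bar) ** 2 for t, x_bar in zip(thetas, targets))
```

The objective vanishes only at the stiffness consistent with all orientations at once, which is what makes multiple poses more robust than a single one.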
6. Data Acquisition and Preparation
We now describe a procedure to capture the observations required for inversion, as described in Section 5, for a real human face.
Our capture setup consists of four synchronized stereo pairs of machine-vision cameras, capturing at approximately 30 fps (see Fig. 3). The method of Beeler et al. [BHB∗11] is used to reconstruct geometry from performance sequences of facial motion and to generate a set of meshes in dense vertex correspondence. The face meshes are stabilized to remove the rigid motion of the head using an approximated skull position, as proposed by Beeler and Bradley [BB14].

Figure 3: Capture setup with four stereo camera pairs and a tripod standing in for an actor.
The capture is recorded in a standing position. The actor performs a slow rotation of their head by moving their body while keeping the head position constant relative to their neck. To prevent jaw motion relative to the skull, the actor lightly holds a small teeth guard in their mouth. The slow head rotation prevents any inertia-induced deformation; the observed face deformation can therefore be assumed to be quasi-static and due only to the effects of gravity. A subset of the captured and reconstructed data is shown in Fig. 4.
Laplacian deformation is used to deform a closed-surface, generic template face model to one of the target shapes. The underlying skull and jaw bones are estimated using the technique described by Zoss et al. [ZBBB18]. We use TetGen [Si06] to generate a surface-conforming tetrahedralization of the volume between the face surface and the bone meshes. The correspondences between the resulting volumetric model vertices and the surface meshes of the bones are used to generate the simulation boundary constraints. The final simulation model consists of 50k vertices and 300k tetrahedral elements.

Figure 4: A subset of the captured performance. The actor tilts their head to each side, providing different tissue deformation due to the known gravitational force.
From the sequence of tracked meshes, we choose a sparse set of poses to be used as optimization targets (refer to Fig. 4). The selected targets should exhibit a wide span of poses. The effects of the soft tissue deformation exhibited by the face can easily be dominated by any alignment errors; the optimization therefore requires careful treatment of the target poses. In cases where the automatic stabilization of the meshes is visually insufficient, we manually align the head poses. The aligned frames are then used to compute a pose-specific adjusted gravity direction, as the optimization is performed in a canonical reference frame. The deformation of the vertices then serves as the observation x̄_s for our optimization. See Fig. 5 for a visualization of the aligned targets and computed gravity directions. Per target, we generate positional constraints for about 1500 vertices of the simulation mesh, selected using a manually specified mask confined to the center of the face.

Figure 5: Visualization of aligned targets and adjusted gravity directions. The target poses correspond to the neutral, upright pose (third image) and three of the selected poses shown in Fig. 4. The colors encode the vertex displacement from the reference pose, where blue is 0 and red is equivalent to 3.5 mm.

The segmentation of the mesh into material clusters is done by painting colors on the surface of the volumetric mesh and propagating these colors inwards through the tetrahedral mesh's connectivity. We evaluate two different material cluster layouts: Layout 1 is based on the material maps from Kozlov et al. [KBB∗17]. Layout 2 is based on the one suggested by Tena et al. [TDlTM11], but we added clusters for the lips, as we anticipate those to differ from the surrounding tissue, yielding a total of 19 clusters. The layouts are shown in Fig. 7.

    Configuration   Total Error (mm)   Mean Vertex Error (mm)
    Naive           263048             1.31
    Layout 1        210186             1.05
    Layout 2        212624             1.06

Table 1: Validation results. The validation sequence consists of 136 poses, out of which three are used as optimization targets. The error is measured as the total absolute point-to-point vertex error between the simulation result and the tracked mesh.
7. Results
We validate the results quantitatively by forward simulating to a set of frames where we know the gravity vector and have ground truth surface measurements from the performance capture system. We first simulate using a naïve configuration, where a neutral scan is used as rest shape and we manually determine a uniform material that is soft enough to produce secondary motion and other dynamic effects. While this might seem like a trivial setup, it is probably the most common approach for artists, since neither geometry nor material inversion are readily available. Since this treats the entire face as a single material, the results are a compromise and, for example, the nose is too soft, leading to large undesired deformations (Fig. 6, left column). Assigning spatially varying materials remedies this problem, but picking those manually is a challenge in its own right. Instead, the proposed system determines the optimal rest shape in combination with optimal material properties for a chosen layout, which yield a lower error when used in simulation (Fig. 6, center and right columns). The overall error averaged over a total of 136 poses is summarized in Table 1.

Fig. 7 shows the two layouts used in this publication, as well as the resulting Young's modulus automatically determined by the proposed system. As can be seen, both layouts yield similar semantic material assignments, i.e. the cheeks come out soft, whereas nose and lips are stiffer. Starting from an overall stiff material, the optimization successively updates the estimated stiffness per region until convergence, as shown in Fig. 8, which is achieved after about 30 iterations. The optimized Young's moduli stay within the physical range for facial tissues, as reported in the literature [CNJO96]. In an alternating process, we again optimize for the optimal rest shape given the resulting material configuration, which corresponds to the face under zero gravity (Fig. 9).

Figure 6: Visualization of validation results for different head poses not used during optimization. The naive approach of manually assigning a material to the neutral shape yields large errors, in particular for the nose. The chosen material is soft in order to exhibit secondary dynamics during simulation and corresponds to the cheek material of the two other layouts, which have a lower overall error as they can recover spatially varying material properties (Fig. 7) as well as the corresponding rest shape geometry (Fig. 9).
Performance
The simulation mesh for our face model has 50k vertices and 300k tetrahedral elements, with 7–19 material clusters, depending on the layout. An unoptimized implementation using five optimization target poses requires up to two minutes for a single evaluation of the objective function on an Intel i7 3.2 GHz with eight cores, yielding a total runtime of approximately 5 hours. The energies and gradients for the different targets are evaluated in parallel.
8. Conclusion
We present a novel method to automatically estimate the physical properties required to simulate real human faces, given a small number of aligned facial scans. As physical simulation for facial animation is gaining popularity, the challenge of tuning the simulation to match a real human face is becoming an increasing roadblock for using simulation approaches in several real-world scenarios (such as VFX and VR). This work represents a large step in the direction of applying simulation methods to real faces. By requiring only a sparse number of input poses, we show that the capture requirements are minimal, and we further study the effects of different spatial layouts for material clustering on the face.

Figure 7: Comparison of the evaluated layouts and the resulting material stiffness. Please note that even though the layouts differ, the recovered stiffness values for corresponding facial parts are similar.

Figure 8: Visualization of material optimization convergence for Layout 2 (refer to Fig. 7 for the Young's modulus color scale).

Figure 9: Visualization of geometric inversion, which starts from the geometry observed under gravity (left) and optimizes for a different geometry (right) that, when used as rest shape during forward simulation, again yields the geometry observed under gravity. The color encodes the distance from the original rest shape, where blue encodes 0 and red corresponds to 3 mm.

Our method estimates heterogeneous material properties and rest-shape simulation geometry from multiple poses in a neutral expression. In the future, we would like to extend our method to multiple expressions, accounting for material and geometry changes across different expressions [KBB∗17].

References

[BB14] Beeler T., Bradley D.: Rigid stabilization of facial expressions. ACM Transactions on Graphics (TOG) 33, 4 (2014), 44.
[BBO∗09] Bickel B., Bächer M., Otaduy M. A., Matusik W., Pfister H., Gross M.: Capture and modeling of non-linear heterogeneous soft tissue. ACM Trans. Graphics (Proc. SIGGRAPH) 28, 3 (2009), 89.
[BHB∗11] Beeler T., Hahn F., Bradley D., Bickel B., Beardsley P., Gotsman C., Sumner R. W., Gross M.: High-quality passive facial performance capture using anchor frames. ACM Trans. Graph. 30, 4 (July 2011), 75:1–75:10.
[BS18] Barrielle V., Stoiber N.: Realtime performance-driven physical simulation for facial animation. Computer Graphics Forum (Proc. SCA) (2018).
[BSC16] Barrielle V., Stoiber N., Cagniart C.: Blendforces: A dynamic framework for facial animation. In Eurographics (2016).
[BTH∗03] Bhat K. S., Twigg C. D., Hodgins J. K., Khosla P. K., Popović Z., Seitz S. M.: Estimating cloth simulation parameters from video. In Proceedings of the 2003 ACM SIGGRAPH/Eurographics Symposium on Computer Animation (2003), pp. 37–51.
[BW08] Bonet J., Wood R. D.: Nonlinear Continuum Mechanics for Finite Element Analysis. Cambridge University Press, 2008.
[CBF16] Cong M., Bhat K. S., Fedkiw R. P.: Art-directed muscle simulation for high-end facial animation. In Eurographics/ACM SIGGRAPH Symposium on Computer Animation (2016).
[CBJ∗15] Cong M., Bao M., Jane L. E., Bhat K. S., Fedkiw R.: Fully automatic generation of anatomical face simulation models. In Proc. SCA (2015), pp. 175–183.
[CLMK17] Chen D., Levin D. I., Matusik W., Kaufman D. M.: Dynamics-aware numerical coarsening for fabrication design. ACM Trans. Graph. 34, 4 (2017).
[CLPS03] Cao Y., Li S., Petzold L., Serban R.: Adjoint sensitivity analysis for differential-algebraic equations: The adjoint DAE system and its numerical solution. SIAM Journal on Scientific Computing 24, 3 (2003), 1076–1089.
[CNJO96] Chen E. J., Novakofski J., Jenkins W. K., O'Brien W. D.: Young's modulus measurements of soft tissues with application to elasticity imaging. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 43, 1 (1996), 191–194.
[CZXZ14] Chen X., Zheng C., Xu W., Zhou K.: An asymptotic numerical method for inverse elastic shape design. ACM Trans. Graph. 33, 4 (July 2014), 95:1–95:11.
[IKKP17] Ichim A.-E., Kadleček P., Kavan L., Pauly M.: Phace: Physics-based face modeling and animation. ACM Transactions on Graphics (TOG) 36, 4 (2017), 153.
[IKNDP16] Ichim A.-E., Kavan L., Nimier-David M., Pauly M.: Building and animating user-specific volumetric face rigs. In SCA (2016).
[ITF04] Irving G., Teran J., Fedkiw R.: Invertible finite elements for robust simulation of large deformation. In Proceedings of the 2004 ACM SIGGRAPH/Eurographics Symposium on Computer Animation (2004), pp. 131–140.
[KBB∗17] Kozlov Y., Bradley D., Bächer M., Thomaszewski B., Beeler T., Gross M.: Enriching facial blendshape rigs with physical simulation. Comput. Graph. Forum 36, 2 (2017).
[KIL∗16] Kadleček P., Ichim A.-E., Liu T., Křivánek J., Kavan L.: Reconstructing personalized anatomical models for physics-based body animation. ACM Trans. Graph. 35, 6 (2016), 213:1–213:13.
[KPMP∗17] Kim M., Pons-Moll G., Pujades S., Bang S., Kim J., Black M. J., Lee S.-H.: Data-driven physics for human soft tissue animation. ACM Trans. Graph. 36, 4 (2017).
[LAR∗14] Lewis J. P., Anjyo K., Rhee T., Zhang M., Pighin F., Deng Z.: Practice and theory of blendshape facial models. In EG 2014 – State of the Art Reports (2014).
[LXB17] Li Y., Xu H., Barbič J.: Enriching triangle mesh animations with physically based simulation. IEEE Transactions on Visualization and Computer Graphics 23, 10 (2017), 2301–2313.
[MBT∗12] Miguel E., Bradley D., Thomaszewski B., Bickel B., Matusik W., Otaduy M. A., Marschner S.: Data-driven estimation of cloth simulation models, 2012.
[MMO16] Miguel E., Miraut D., Otaduy M. A.: Modeling and estimation of energy-based hyperelastic objects. Computer Graphics Forum 35 (2016), 385–396.
[MWF∗12] Ma W.-C., Wang Y.-H., Fyffe G., Chen B.-Y., Debevec P.: A blendshape model that incorporates physical interaction. Computer Animation and Virtual Worlds 23, 3-4 (2012), 235–243.
[NW06] Nocedal J., Wright S. J.: Numerical Optimization, 2nd ed. Springer, New York, 2006.
[PMRMB15] Pons-Moll G., Romero J., Mahmood N., Black M. J.: Dyna: A model of dynamic human shape in motion. ACM Transactions on Graphics (Proc. SIGGRAPH) 34, 4 (Aug. 2015), 120:1–120:14.
[PRWH∗18] Pai D. K., Rothwell A., Wyder-Hodge P., Wick A., Fan Y., Larionov E., Harrison D., Neog D. R., Shing C.: The human touch: Measuring contact with real human soft tissues. ACM Transactions on Graphics (TOG) 37, 4 (2018), 58.
[Si06] Si H.: TetGen: A quality tetrahedral mesh generator and three-dimensional Delaunay triangulator. Weierstrass Institute for Applied Analysis and Stochastics, Berlin, Germany (2006).
[SNF05] Sifakis E., Neverov I., Fedkiw R.: Automatic determination of facial muscle activations from sparse motion capture marker data. In ACM Trans. Graphics (2005), vol. 24, pp. 417–425.
[SS15] Smith J., Schaefer S.: Bijective parameterization with free boundaries. ACM Trans. Graph. 34, 4 (July 2015), 70:1–70:9.
[TDlTM11] Tena J. R., De la Torre F., Matthews I.: Interactive region-based linear 3D face models. In ACM Transactions on Graphics (TOG) (2011), vol. 30, p. 76.
[WOR11] Wang H., O'Brien J. F., Ramamoorthi R.: Data-driven elastic models for cloth: Modeling and measurement. ACM Transactions on Graphics (SIGGRAPH 2011) 30, 4 (2011), 71:1–71:12.
[WWY∗15] Wang B., Wu L., Yin K., Ascher U., Liu L., Huang H.: Deformation capture and modeling of soft objects. ACM Transactions on Graphics (TOG) 34, 4 (2015), 94.
[XB17] Xu H., Barbič J.: Example-based damping design. ACM Trans. Graph. 36, 4 (July 2017), 53:1–53:14.
[XLCB15] Xu H., Li Y., Chen Y., Barbič J.: Interactive material design using model reduction. ACM Transactions on Graphics (TOG) 34, 2 (2015), 18.
[YSZ09] You L., Southern R., Zhang J. J.: Adaptive physics-inspired facial animation. In Motion in Games (2009), pp. 207–218.
[ZBBB18] Zoss G., Bradley D., Bérard P., Beeler T.: An empirical rig for jaw animation. ACM Trans. Graph. 37, 4 (July 2018), 59:1–59:12.
[ZBK18] Zhu Y., Bridson R., Kaufman D. M.: Blended cured quasi-Newton for distortion optimization. ACM Trans. Graph. 37, 4 (July 2018), 40:1–40:14.