Publication


Featured research published by Daniel Vlasic.


International Conference on Computer Graphics and Interactive Techniques | 2005

Face transfer with multilinear models

Daniel Vlasic; Matthew Brand; Hanspeter Pfister; Jovan Popović

Face Transfer is a method for mapping video-recorded performances of one individual to facial animations of another. It extracts visemes (speech-related mouth articulations), expressions, and three-dimensional (3D) pose from monocular video or film footage. These parameters are then used to generate and drive a detailed 3D textured face mesh for a target identity, which can be seamlessly rendered back into target footage. The underlying face model automatically adjusts for how the target performs facial expressions and visemes. The performance data can be easily edited to change the visemes, expressions, pose, or even the identity of the target; the attributes are separably controllable. This supports a wide variety of video rewrite and puppetry applications.

Face Transfer is based on a multilinear model of 3D face meshes that separably parameterizes the space of geometric variations due to different attributes (e.g., identity, expression, and viseme). Separability means that each of these attributes can be independently varied. A multilinear model can be estimated from a Cartesian product of examples (identities × expressions × visemes) with techniques from statistical analysis, but only after careful preprocessing of the geometric data set to secure one-to-one correspondence, to minimize cross-coupling artifacts, and to fill in any missing examples. Face Transfer offers new solutions to these problems and links the estimated model with a face-tracking algorithm to extract pose, expression, and viseme parameters.
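At its core, such a multilinear model reconstructs a mesh by contracting a core tensor with one weight vector per attribute mode. The Python/NumPy sketch below illustrates that evaluation step only, under assumed tensor shapes and toy data; it is not the paper's code, and the function names are invented for illustration.

import numpy as np

def evaluate_face(core, w_identity, w_expression, w_viseme):
    """Contract the core tensor (vertex coordinates x identities x expressions x visemes)
    with one weight vector per attribute, yielding a flattened mesh."""
    return np.einsum("vies,i,e,s->v", core, w_identity, w_expression, w_viseme)

# Toy example: 100 vertices (300 coordinates), 4 identities, 3 expressions, 5 visemes.
rng = np.random.default_rng(0)
core = rng.standard_normal((300, 4, 3, 5))
mesh = evaluate_face(core, rng.standard_normal(4),
                     rng.standard_normal(3), rng.standard_normal(5))
print(mesh.shape)  # (300,)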


International Conference on Computer Graphics and Interactive Techniques | 2008

Articulated mesh animation from multi-view silhouettes

Daniel Vlasic; Ilya Baran; Wojciech Matusik; Jovan Popović

Details in mesh animations are difficult to generate, but they have a great impact on visual quality. In this work, we demonstrate a practical software system for capturing such details from multi-view video recordings. Given a stream of synchronized video images that record a human performance from multiple viewpoints and an articulated template of the performer, our system captures the motion of both the skeleton and the shape. The output mesh animation is enhanced with the details observed in the image silhouettes. For example, a performance in casual loose-fitting clothes will generate mesh animations with flowing garment motions. We accomplish this with a fast pose tracking method followed by nonrigid deformation of the template to fit the silhouettes. The entire process takes less than sixteen seconds per frame and requires no markers or texture cues. Captured meshes are in full correspondence, making them readily usable for editing operations including texturing, deformation transfer, and deformation model learning.
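Before the nonrigid silhouette fit, the articulated template is posed by skeletal transforms. The sketch below shows standard linear blend skinning, a common way to pose such a template; it is an assumed illustration, not necessarily the paper's exact skeletal deformation model.

import numpy as np

def skin(rest_vertices, weights, bone_transforms):
    """rest_vertices: (V, 3); weights: (V, B) skinning weights (rows sum to 1);
    bone_transforms: (B, 4, 4) rest-to-posed bone transforms.
    Returns posed vertices (V, 3)."""
    V = rest_vertices.shape[0]
    homog = np.concatenate([rest_vertices, np.ones((V, 1))], axis=1)  # (V, 4)
    per_bone = np.einsum("bij,vj->bvi", bone_transforms, homog)       # (B, V, 4)
    posed = np.einsum("vb,bvi->vi", weights, per_bone)                # (V, 4)
    return posed[:, :3]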


International Conference on Computer Graphics and Interactive Techniques | 2007

Practical motion capture in everyday surroundings

Daniel Vlasic; Rolf Adelsberger; Giovanni Vannucci; John C. Barnwell; Markus H. Gross; Wojciech Matusik; Jovan Popović

Commercial motion-capture systems produce excellent in-studio reconstructions, but offer no comparable solution for acquisition in everyday environments. We present a system for acquiring motions almost anywhere. This wearable system gathers ultrasonic time-of-flight and inertial measurements with a set of inexpensive miniature sensors worn on the garment. After recording, the information is combined using an Extended Kalman Filter to reconstruct joint configurations of a body. Experimental results show that even motions that are traditionally difficult to acquire are recorded with ease within their natural settings. Although our prototype does not reliably recover the global transformation, we show that the resulting motions are visually similar to the original ones, and that the combined acoustic and inertial system reduces the drift commonly observed in purely inertial systems. Our final results suggest that this system could become a versatile input device for a variety of augmented-reality applications.
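A generic Extended Kalman Filter cycle of the kind described, with an inertial process model corrected by ultrasonic range measurements, might look like the sketch below; the state, the models f/h and their Jacobians F/H, and the noise covariances are left abstract and are assumptions, not the paper's actual formulation.

import numpy as np

def ekf_step(x, P, f, F, Q, z, h, H, R):
    """One predict/update cycle of an Extended Kalman Filter.
    x, P : state estimate and covariance
    f, F : process model (inertial integration) and its Jacobian
    Q    : process noise covariance
    z    : measured ultrasonic range(s)
    h, H : measurement model (predicted range) and its Jacobian
    R    : measurement noise covariance"""
    # Predict with the inertial process model.
    x_pred = f(x)
    P_pred = F(x) @ P @ F(x).T + Q
    # Correct with the ultrasonic range measurement.
    y = z - h(x_pred)                             # innovation
    S = H(x_pred) @ P_pred @ H(x_pred).T + R      # innovation covariance
    K = P_pred @ H(x_pred).T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H(x_pred)) @ P_pred
    return x_new, P_new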


International Conference on Computer Graphics and Interactive Techniques | 2009

Dynamic shape capture using multi-view photometric stereo

Daniel Vlasic; Pieter Peers; Ilya Baran; Paul E. Debevec; Jovan Popović; Szymon Rusinkiewicz; Wojciech Matusik

We describe a system for high-resolution capture of moving 3D geometry, beginning with dynamic normal maps from multiple views. The normal maps are captured using active shape-from-shading (photometric stereo), with a large lighting dome providing a series of novel spherical lighting configurations. To compensate for low-frequency deformation, we perform multi-view matching and thin-plate spline deformation on the initial surfaces obtained by integrating the normal maps. Next, the corrected meshes are merged into a single mesh using a volumetric method. The final output is a set of meshes that exhibit details on the order of a few millimeters and represent the performance over human-size working volumes at a temporal resolution of 60 Hz, a combination that previous methods could not produce.
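The photometric-stereo step can be illustrated with the classic least-squares normal recovery under a Lambertian assumption. The sketch below is a generic textbook version with assumed array shapes, not the paper's calibrated spherical-lighting pipeline.

import numpy as np

def photometric_stereo(intensities, light_dirs):
    """intensities: (K, H, W) images under K known directional lights;
    light_dirs: (K, 3) unit lighting directions.
    Solves I = albedo * (N . L) per pixel; returns normals (H, W, 3) and albedo (H, W)."""
    K, H, W = intensities.shape
    I = intensities.reshape(K, -1)                       # (K, H*W)
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, H*W), G = albedo * N
    albedo = np.linalg.norm(G, axis=0)
    normals = (G / np.maximum(albedo, 1e-8)).T.reshape(H, W, 3)
    return normals, albedo.reshape(H, W)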


International Conference on Computer Graphics and Interactive Techniques | 2011

Video face replacement

Kevin Dale; Kalyan Sunkavalli; Micah K. Johnson; Daniel Vlasic; Wojciech Matusik; Hanspeter Pfister

We present a method for replacing facial performances in video. Our approach accounts for differences in identity, visual appearance, speech, and timing between source and target videos. Unlike prior work, it does not require substantial manual operation or complex acquisition hardware, only single-camera video. We use a 3D multilinear model to track the facial performance in both videos. Using the corresponding 3D geometry, we warp the source to the target face and retime the source to match the target performance. We then compute an optimal seam through the video volume that maintains temporal consistency in the final composite. We showcase the use of our method on a variety of examples and present the result of a user study that suggests our results are difficult to distinguish from real video footage.
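A heavily simplified stand-in for the seam computation: the sketch below finds a minimum-cost vertical seam through a single per-frame cost image with dynamic programming (seam-carving style). The paper instead computes a temporally consistent seam through the whole video volume, which this sketch does not reproduce.

import numpy as np

def vertical_seam(cost):
    """cost: (H, W) per-pixel compositing cost.
    Returns the column index per row of the minimum-cost top-to-bottom seam."""
    H, W = cost.shape
    acc = cost.astype(float).copy()
    back = np.zeros((H, W), dtype=int)
    for y in range(1, H):
        for x in range(W):
            lo, hi = max(0, x - 1), min(W, x + 2)
            j = int(np.argmin(acc[y - 1, lo:hi])) + lo
            back[y, x] = j
            acc[y, x] = cost[y, x] + acc[y - 1, j]
    seam = np.empty(H, dtype=int)
    seam[-1] = int(np.argmin(acc[-1]))
    for y in range(H - 2, -1, -1):
        seam[y] = back[y + 1, seam[y + 1]]
    return seam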


ACM Transactions on Graphics | 2012

Temporally coherent completion of dynamic shapes

Hao Li; Linjie Luo; Daniel Vlasic; Pieter Peers; Jovan Popović; Mark Pauly; Szymon Rusinkiewicz

We present a novel shape completion technique for creating temporally coherent watertight surfaces from real-time captured dynamic performances. Because of occlusions and low surface albedo, scanned mesh sequences typically exhibit large holes that persist over extended periods of time. Most conventional dynamic shape reconstruction techniques rely on template models or assume slow deformations in the input data. Our framework sidesteps these requirements and directly initializes shape completion with topology derived from the visual hull. To seal the holes with patches that are consistent with the subject's motion, we first minimize surface bending energies in each frame to ensure smooth transitions across hole boundaries. Temporally coherent dynamics of surface patches are obtained by unwarping all frames within a time window using accurate interframe correspondences. Aggregated surface samples are then filtered with a temporal visibility kernel that maximizes the use of nonoccluded surfaces. A key benefit of our shape completion strategy is that it does not rely on long-range correspondences or a template model. Consequently, our method does not suffer from the error accumulation typically introduced by noise, large deformations, and drastic topological changes. We illustrate the effectiveness of our method on several high-resolution scans of human performances captured with a state-of-the-art multiview 3D acquisition system.
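As a rough, assumed illustration of patch fairing across a hole boundary, the sketch below iteratively moves hole vertices to the average of their neighbors while keeping boundary vertices fixed. This solves a Laplace (membrane) problem rather than the thin-plate bending energy the paper minimizes, but it conveys the idea of smooth transitions across hole boundaries.

import numpy as np

def fair_hole(positions, neighbors, is_free, iters=200):
    """positions: (V, 3) vertex positions; neighbors: list of neighbor-index lists;
    is_free: (V,) bool mask marking hole (movable) vertices.
    Returns positions with hole vertices smoothed against the fixed boundary."""
    pos = positions.astype(float).copy()
    for _ in range(iters):
        new = pos.copy()
        for v in np.flatnonzero(is_free):
            if neighbors[v]:
                new[v] = pos[neighbors[v]].mean(axis=0)
        pos = new
    return pos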


International Conference on Computer Graphics and Interactive Techniques | 2009

Semantic deformation transfer

Ilya Baran; Daniel Vlasic; Eitan Grinspun; Jovan Popović

Transferring existing mesh deformation from one character to another is a simple way to accelerate the laborious process of mesh animation. In many cases, it is useful to preserve the semantic characteristics of the motion instead of its literal deformation. For example, when applying the walking motion of a human to a flamingo, the knees should bend in the opposite direction. Semantic deformation transfer accomplishes this task with a shape space that enables interpolation and projection with standard linear algebra. Given several example mesh pairs, semantic deformation transfer infers a correspondence between the shape spaces of the two characters. This enables automatic transfer of new poses and animations.
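A heavily simplified view of the linear-algebra core: with corresponding example poses expressed as coordinate vectors in each character's shape space, a linear map between the two spaces can be fit by least squares and then applied to new poses. The sketch below assumes plain coordinate vectors and omits the paper's rotation-invariant shape-space encoding and decoding.

import numpy as np

def fit_transfer(source_examples, target_examples):
    """source_examples: (n, ds) and target_examples: (n, dt) shape-space
    coordinates of the same n semantically corresponding poses.
    Returns a (ds, dt) linear map fitted by least squares."""
    M, *_ = np.linalg.lstsq(source_examples, target_examples, rcond=None)
    return M

def transfer(source_pose, M):
    """Map a new source pose (ds,) into the target's shape space (dt,);
    decoding back to a target mesh is handled by the shape-space representation."""
    return source_pose @ M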


Interactive 3D Graphics and Games | 2003

Opacity light fields: interactive rendering of surface light fields with view-dependent opacity

Daniel Vlasic; Hanspeter Pfister; Sergey Molinov; Radek Grzeszczuk; Wojciech Matusik

We present new hardware-accelerated techniques for rendering surface light fields with opacity hulls that allow for interactive visualization of objects that have complex reflectance properties and elaborate geometrical details. The opacity hull is a shape enclosing the object with view-dependent opacity parameterized onto that shape. We call the combination of opacity hulls and surface light fields the opacity light field. Opacity light fields are ideally suited for rendering the visually complex objects and scenes obtained with 3D photography. We show how to implement opacity light fields in the framework of three surface light field rendering methods: view-dependent texture mapping, unstructured lumigraph rendering, and light field mapping. The modified algorithms can be effectively supported on modern graphics hardware. Our results show that all three implementations are able to achieve interactive or real-time frame rates.
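View-dependent rendering of this kind blends nearby captured views per surface point. The sketch below shows a simplified, assumed blending scheme in which angular proximity to the novel viewing direction weights both color and opacity samples; the exponent and normalization are illustrative, not the paper's hardware-accelerated formulation.

import numpy as np

def blend_view_dependent(novel_dir, cam_dirs, colors, opacities, sharpness=8.0):
    """novel_dir: (3,) unit viewing direction; cam_dirs: (K, 3) unit camera directions;
    colors: (K, 3) sampled radiance; opacities: (K,) sampled view-dependent opacity.
    Returns a blended RGBA sample."""
    cos = np.clip(cam_dirs @ novel_dir, 0.0, 1.0)  # angular proximity per captured view
    w = cos ** sharpness
    w = w / (w.sum() + 1e-8)                       # normalized blending weights
    rgb = w @ colors
    alpha = w @ opacities
    return np.concatenate([rgb, [alpha]])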


International Conference on Computer Graphics and Interactive Techniques | 2004

Multilinear models for face synthesis

Daniel Vlasic; Matthew Brand; Hanspeter Pfister; Jovan Popović

Multilinear models offer a natural way of modeling heterogeneous sources of variation. We are specifically interested in facial geometry variations due to identity and expression changes. In this setting, the multilinear model is able to capture idiosyncrasies such as style of smiling, i.e., the smile depends on the identity parameters. Two properties of multilinear models are of particular interest to animators: Separability – expression can be varied while identity stays constant, and vice versa; and Consistency – expression parameters encoding a smile for one person will encode a smile for every person spanned by the model, appropriate to their facial geometry and style of smiling. We introduce methods that make multilinear models a practical tool for animating faces, addressing two key obstacles. The key obstacle in constructing a multilinear model is the vast amount of data (in full correspondence) needed to account for every possible combination of attribute settings. The key problem in using a multilinear model is devising an intuitive control interface. For the data-acquisition problem, we show how to estimate a detailed multilinear model from an incomplete set of high-quality face scans. For the control problem, we show how to drive this model with a video performance, extracting identity, expression, and pose parameters.
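Estimating such a model from a complete, registered data tensor can be done with N-mode SVD (a Tucker decomposition), as sketched below. The paper's contribution of handling an incomplete set of scans is not reproduced here, and the function names, tensor layout, and rank choices are assumptions for illustration.

import numpy as np

def mode_unfold(T, mode):
    """Unfold tensor T along the given mode into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def n_mode_svd(data, ranks):
    """data: (V, I, E) tensor of vertex coordinates x identities x expressions;
    ranks: truncation rank per mode. Returns the core tensor and factor matrices."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(mode_unfold(data, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = data
    for mode, U in enumerate(factors):
        # Multiply the core by U^T along this mode.
        moved = np.moveaxis(core, mode, 0)
        core = np.moveaxis(np.tensordot(U.T, moved, axes=1), 0, mode)
    return core, factors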


Archive | 2012

Dynamic three-dimensional imaging of ear canals

Douglas P. Hart; Federico Frigerio; Douglas M. Johnston; Manas C. Menon; Daniel Vlasic

Collaboration


Dive into Daniel Vlasic's collaborations.

Top Co-Authors

Federico Frigerio (Massachusetts Institute of Technology)
Manas C. Menon (Massachusetts Institute of Technology)
Douglas M. Johnston (Massachusetts Institute of Technology)
Douglas P. Hart (Massachusetts Institute of Technology)
Wojciech Matusik (Massachusetts Institute of Technology)
Matthew Brand (Mitsubishi Electric Research Laboratories)