Yann Savoye
French Institute for Research in Computer Science and Automation
Publications
Featured research published by Yann Savoye.
international conference on computer graphics and interactive techniques | 2014
Lin Lu; Andrei Sharf; Haisen Zhao; Yuan Wei; Qingnan Fan; Xuelin Chen; Yann Savoye; Changhe Tu; Daniel Cohen-Or; Baoquan Chen
The emergence of low-cost 3D printers steers the investigation of new geometric problems that control the quality of the fabricated object. In this paper, we present a method to reduce the material cost and weight of a given object while producing a durable printed model that is resistant to impact and external forces. We introduce a hollowing optimization algorithm based on the concept of honeycomb-cell structures. Honeycomb structures are known to minimize material cost while providing strength in tension. We use the Voronoi diagram to compute irregular honeycomb-like volume tessellations that define the inner structure. We formulate our problem as a strength-to-weight optimization and cast it as jointly finding an optimal interior tessellation and its maximal hollowing subject to relieving the interior stress. Our system thus allows users to build-to-last 3D printed objects with fine control over their strength-to-weight ratio, and to easily model various interior structures. We demonstrate our method on a collection of 3D objects from different categories. Furthermore, we evaluate our method by printing our hollowed models and measuring their stress and weight.
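The Voronoi-based interior tessellation above can be sketched in a few lines. The snippet below is a 2D analogue with invented parameters (seed count, grid resolution, wall thickness), not the published 3D method: each sampled interior point is labeled by its nearest seed site, yielding irregular honeycomb-like cells, and material is kept only in thin walls near cell boundaries.

```python
import numpy as np

# Illustrative 2D analogue of a Voronoi interior tessellation (all
# parameters are invented; the actual method works on 3D volumes).
rng = np.random.default_rng(0)
seeds = rng.uniform(0.0, 1.0, size=(12, 2))       # hypothetical seed sites

# Sample the "interior" on a grid and assign each sample to a Voronoi cell.
xs, ys = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
pts = np.stack([xs.ravel(), ys.ravel()], axis=1)
d = np.linalg.norm(pts[:, None, :] - seeds[None, :, :], axis=2)
cell = d.argmin(axis=1)                           # Voronoi cell per sample

# Hollowing keeps material only where two cells are nearly equidistant,
# i.e. in thin walls along the cell boundaries.
sorted_d = np.sort(d, axis=1)
wall = (sorted_d[:, 1] - sorted_d[:, 0]) < 0.05   # thin-wall mask
print(len(np.unique(cell)), bool(wall.any()))
```

Stress relief and the strength-to-weight optimization of the paper would then act on the cell sizes and wall thicknesses; this sketch only shows the geometric labeling step.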
asian conference on computer vision | 2010
Yann Savoye; Jean-Sébastien Franco
Full-body performance capture is a promising emerging technology that has been intensively studied in Computer Graphics and Computer Vision over the last decade. Highly detailed performance animations are easier to obtain using existing multi-view platforms, markerless capture, and 3D laser scanners. In this paper, we investigate the feasibility of extracting optimal reduced animation parameters without requiring an underlying rigid kinematic structure. This paper explores the potential of introducing harmonic cage-based linear estimation and deformation as a post-process for current performance capture techniques used in 3D time-varying scene capture technology. We propose the first algorithm for performing cage-based tracking across time for vision and virtual reality applications. The main advantages of our novel approach are its linear single-pass estimation of the desired surface, easy-to-reuse output cage sequences, and a reduction in the storage size of animations. Our results show that the estimated parameters allow a sufficiently silhouette-consistent generation of the enclosed mesh under sparse frame-to-frame animation constraints and large deformations.
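The "linear single-pass estimation" can be illustrated with a minimal sketch. Note the coordinate matrix below is random barycentric weights, not the paper's harmonic cage coordinates: surface vertices are fixed linear combinations of cage vertices, V = W @ C, so recovering the cage C that best fits an observed surface is a linear least-squares fit.

```python
import numpy as np

# Minimal sketch of linear cage estimation (W is random here, standing in
# for precomputed harmonic/barycentric cage coordinates).
rng = np.random.default_rng(1)
n_surf, n_cage = 200, 8
W = rng.random((n_surf, n_cage))
W /= W.sum(axis=1, keepdims=True)   # rows sum to 1 (partition of unity)

C_true = rng.random((n_cage, 3))    # hypothetical ground-truth cage
V_obs = W @ C_true                  # observed deformed surface vertices

# Single linear pass: estimate the cage from the captured surface.
C_est, *_ = np.linalg.lstsq(W, V_obs, rcond=None)
print(bool(np.allclose(C_est, C_true)))
```

Because W stays fixed over time, tracking reduces to one such solve per frame, which is what makes the output cage sequences compact and reusable.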
conference on visual media production | 2013
Yann Savoye
Dynamic shape capture from casual videos is a fundamental task at the cross-fertilization of Computer Vision and Computer Graphics. Nevertheless, recent advances in low-cost dynamic scanners turn the cross-parametrization of non-rigid animatable surfaces into an ill-posed, vision-oriented problem. In this paper, we propose a cage-based technique to register non-rigid observed shapes using a meaningful, reduced, and animator-friendly embedding. This subspace offers natural silhouette-awareness to encode the deformation complexity already encapsulated in the targets. The estimated time-varying parameters associated with the underlying flexible structure allow potential reuse. In particular, we address the problem of highly non-rigid spacetime registration by employing an elastoplastic coarse cage. Thus, we perform scalable handle-aware biharmonic shape registration, relying on the high level of shape abstraction offered by this space-based paradigm. Finally, we tested the effectiveness of our proposed solution on real-world datasets capturing time-varying multi-view silhouettes.
international conference on computer graphics and interactive techniques | 2012
Yann Savoye
Recent advances in low-cost dynamic scanning turn the cross-parametrization of non-rigid animatable surfaces into a vision-oriented, ill-posed problem. In contrast with [Li et al. 2012], we propose a novel detail-preserving registration approach with resolution-independent control. Furthermore, our skin-detached surface registration avoids patch-based segmentation or affine fitting to maintain local plasticity, as required in [Budd and Hilton 2010]. In particular, we address the problem of highly non-rigid spacetime registration by employing an elasto-plastic coarse cage. Thus, we perform scalable handle-aware harmonic shape registration, relying on the high level of shape abstraction offered by the space-based paradigm. To the best of our knowledge, our technique is the first to investigate handle-aware elastic overlapping rigidities for registering life-like dynamic shapes in full-body clothing.
international conference on computer graphics and interactive techniques | 2011
Yann Savoye
In this work, we describe a new and simple approach to reusing skeletal animation with joint-based Laplacian-type regularization, in the context of exaggerated skeleton-based character animation. Despite decades of research, interactive character animations still lack the flexibility and editability needed to reuse real vertebral motion. In more detail, generating expressive cartoon animation from real data is a challenging key task for non-photorealistic animation. A major problem for artists in production is therefore to enhance the expressiveness of classical motion clips by direct manipulation of the underlying skeletal structure. A relatively small number of researchers have presented approaches for processing cartoon effects on motion data [Kwon and Lee 2007; Davis and Kannappan 2002; Bregler et al. 2002]. However, existing techniques often avoid dealing with the potential of skeleton-based optimization while preserving joint coherence and connectivity. Moreover, the majority of characters in cartoons have the flexibility to stretch to extreme positions and squash into astounding shapes. It can also be noticed that squash-and-stretch is easier to realize in traditional animation than in mocap-based computer-generated animation. For this reason, the Pixar® movie Ratatouille did not use a rigid skeleton, abandoning motion capture to reach its essential non-ultra-realistic appeal.
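The interplay of exaggeration and Laplacian-type regularization can be sketched on a single joint trajectory. The gains, weights, and 1D setting below are illustrative, not the paper's formulation: deviation from the mean is amplified, then a least-squares solve balances the exaggerated targets against temporal Laplacian smoothness to keep the motion coherent.

```python
import numpy as np

# Hedged sketch: exaggerate a hypothetical joint-angle clip, then smooth
# it with a temporal Laplacian (second-difference) regularizer.
t = np.linspace(0.0, 2.0 * np.pi, 50)
traj = np.sin(t)                               # hypothetical joint-angle clip

gain = 1.8                                     # cartoon exaggeration factor
target = traj.mean() + gain * (traj - traj.mean())

n = len(traj)
L = (np.diag(-2.0 * np.ones(n))                # temporal Laplacian operator
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))

lam = 0.5                                      # smoothness weight (illustrative)
A = np.vstack([np.eye(n), lam * L])            # data term + smoothness term
b = np.concatenate([target, np.zeros(n)])
smoothed, *_ = np.linalg.lstsq(A, b, rcond=None)
print(bool(np.abs(smoothed).max() > np.abs(traj).max()))
```

The same data-versus-smoothness structure extends to full joint hierarchies, with connectivity constraints replacing the simple temporal chain used here.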
spring conference on computer graphics | 2017
Yann Savoye
Cage-based structures are reduced subspace deformers enabling non-isometric stretching deformations induced by clothing or muscle bulging. In this paper, we reformulate cage-based rigging as an incompressible Stokes problem in the vorticity space. The key to our approach is a compact stencil allowing the expression of fluid-inspired high-order coordinates. Thus, our cage-based coordinates are obtained by vorticity transport as the numerical solution of the linearized Stokes equations. Then, we turn the incompressible creeping Newtonian flow into Stokes equations, and we devise a second-order compact approximation with central differencing for solving the vorticity-stream function. To the best of our knowledge, our work is the first to devise a vorticity-stream-function formulation as a computational model for cage-based weighting functions. Finally, we demonstrate the effectiveness of our new techniques on a collection of cage-based shapes and applications.
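The numerical core of a vorticity-stream-function formulation can be illustrated with a plain Poisson solve (the grid size and vorticity field below are made up, and this uses standard Jacobi relaxation rather than the paper's compact scheme): the stream function psi is recovered from a vorticity field omega by solving lap(psi) = -omega with second-order central differences.

```python
import numpy as np

# Illustrative stream-function recovery: solve lap(psi) = -omega on a
# unit square with psi = 0 on the boundary, via Jacobi iteration.
n, h = 33, 1.0 / 32
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x)
omega = np.sin(np.pi * X) * np.sin(np.pi * Y)    # hypothetical vorticity

psi = np.zeros((n, n))                           # psi = 0 on the boundary
for _ in range(4000):                            # Jacobi relaxation
    psi[1:-1, 1:-1] = 0.25 * (psi[1:-1, :-2] + psi[1:-1, 2:]
                              + psi[:-2, 1:-1] + psi[2:, 1:-1]
                              + h * h * omega[1:-1, 1:-1])

# For this omega, the exact solution is psi = omega / (2 * pi^2),
# since lap(sin(pi x) sin(pi y)) = -2 pi^2 sin(pi x) sin(pi y).
exact = omega / (2.0 * np.pi ** 2)
print(bool(np.abs(psi - exact).max() < 1e-3))
```

Cage-based weighting functions in the paper are derived from solutions of this type, subject to boundary conditions set by the cage handles.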
international conference on computer graphics and interactive techniques | 2016
Yann Savoye
Nowadays, highly detailed animations of life-like performances are easier to acquire thanks to low-cost sensors, and 4D meshes have attracted considerable attention in visual media production. This course addresses a new paradigm for performance capture using cage-based shapes in motion. We define cage-based performance capture as the non-invasive process of capturing the non-rigid surface of actors from multiple views in the form of sparse control-handle trajectories and a laser-scanned template shape. In this course, we address the hard problem of extracting or acquiring and then reusing non-rigid parametrizations for video-based animation in four steps: (1) cage-based inverse kinematics, (2) conversion of surface performance capture into cage-based deformation, (3) cage-based cartoon surface exaggeration, and (4) cage-based registration of time-varying reconstructed point clouds. The key objective is to attract the interest of game programmers, digital artists, and filmmakers in employing purely geometric, animator-friendly tools to capture and reuse surfaces in motion. Finally, a broad range of advanced animation techniques and promising research-to-production opportunities for the years to come, between the Graphics and Vision fields, is presented. At first sight, a central challenge is to express plausible boneless deformations while preserving global and local properties of dynamically captured surfaces with a limited number of controllable, flexible, and reusable parameters. Abandoning the classical articulated skeleton as the underlying structure, we show that cage-based deformers offer a flexible design-space abstraction for dynamic non-rigid surface motion through learning spatio-temporal shape variability. Registered cage-handle trajectories allow the reconstruction of complex mesh sequences by deforming an enclosed template mesh. Decoupling motion from geometry, cage-based performance capture techniques offer reusable outputs for animation transfer.
interactive 3d graphics and games | 2016
Yann Savoye
Cage-based structures are reduced subspace deformers enabling non-isometric stretching deformations induced by clothing or muscle bulging. In this paper, we reformulate cage-based rigging as an incompressible Stokes problem in the vorticity space. The key to our approach is a compact stencil allowing the expression of fluid-inspired high-order coordinates. Thus, our cage-based coordinates are obtained by vorticity transport as the numerical solution of the linearized Stokes equations. Then, we turn the incompressible creeping Newtonian flow into Stokes equations, and we devise a second-order compact approximation with central differencing for solving the vorticity-stream function. To the best of our knowledge, our work is the first to devise a vorticity-stream-function formulation as a computational model for cage-based weighting functions.
conference on visual media production | 2012
Yann Savoye
Cutting-edge efforts have been invested in the automatic production of breath-taking visual effects involving time-varying data captured from real-actor performances. However, a key challenge for computer-generated imagery is the puppetry of heterogeneous captured data without the heavy use of artistic skills. We therefore focus on achieving the desired exaggerated animations coherently while preserving life-like baked-in visual cues. In this paper, we propose a new method to generate content-aware exaggerated animations by blending motion, shape, and appearance properties from captured data. In particular, our approach explores two closely related tools that serve the common theme of animation cartoonization. The first realizes articulation-based stretchable cartoon editing from marker-based mocap clips. The second generates video-based toon characters from surface performance capture. Finally, we demonstrate the flexibility and stability of our approach on a variety of captured animations as input.
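The squash-and-stretch idea behind stretchable cartoon editing can be sketched with a textbook volume-preserving deformation (this is a generic cartoon transform, not the paper's actual editing tool): stretch along one axis by s and scale the other two by 1/sqrt(s), so the enclosed volume stays constant.

```python
import numpy as np

# Textbook volume-preserving squash-and-stretch on 3D points.
def squash_stretch(points, s):
    """Scale y by s and x, z by 1/sqrt(s); volume-preserving for s > 0."""
    k = 1.0 / np.sqrt(s)
    return points * np.array([k, s, k])

cube = np.array([[x, y, z] for x in (0.0, 1.0)
                           for y in (0.0, 1.0)
                           for z in (0.0, 1.0)])
stretched = squash_stretch(cube, 2.0)             # stretch to 2x height

# Bounding-box volume stays at 1.0 for the unit cube.
vol = float(np.prod(stretched.max(axis=0) - stretched.min(axis=0)))
print(round(vol, 6))
```

In a mocap-driven pipeline, s would vary per bone and per frame with the motion's acceleration, rather than being a fixed constant as here.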
international conference on computer graphics and interactive techniques | 2011
Yann Savoye
Recent advances in dynamic surface capture have made the creation of realistic animation a promising task for modern visual media production such as computer-generated movies or 3D cinematographic video games. Nonetheless, the automatic creation of cartoon mesh animations driven by real-life cues is still a costly and time-consuming process that presents a number of hard technical challenges. To the best of our knowledge, we propose the first attempt at generating as-photorealistic-as-possible cartoon animation from markerless surface performance capture. Synthesizing new puppetry animation that demonstrates fidelity to the spirit of comic-book style, with more exaggerated motion while preserving extreme captured cloth wrinkles, is difficult to achieve. Consequently, the key contribution of our work is a novel cartoon stylization approach for 3D video that efficiently reuses temporally consistent dynamic surface sequences captured from real-world actor performances. In particular, our simple and effective algorithm converts realistic spatiotemporal captured surfaces and multi-view data into exaggerated, life-like squash-and-stretch shape evolution coupled with context-aware cartoon-style expressive rendering.
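A tiny sketch of the cel/toon-shading idea behind "cartoon-style expressive rendering" (a standard technique, not the paper's actual renderer): Lambertian diffuse intensity is quantized into a few flat color bands instead of a smooth ramp.

```python
import numpy as np

# Textbook toon shading: quantize n.l diffuse intensity into flat bands.
def toon_shade(normals, light_dir, bands=3):
    """Quantize diffuse shading into `bands` discrete levels in [0, 1]."""
    l = light_dir / np.linalg.norm(light_dir)
    diffuse = np.clip(normals @ l, 0.0, 1.0)
    levels = np.minimum(np.floor(diffuse * bands), bands - 1)
    return levels / (bands - 1)

normals = np.array([[0.0, 0.0, 1.0],    # facing the light
                    [0.0, 1.0, 0.0],    # perpendicular to the light
                    [0.6, 0.0, 0.8]])   # oblique
shades = toon_shade(normals, np.array([0.0, 0.0, 1.0]))
print(shades.tolist())  # three flat bands instead of a smooth ramp
```

Context-aware stylization as described above would additionally vary the band placement with the captured geometry, which this per-normal sketch omits.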