Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Yannick Remion is active.

Publication


Featured research published by Yannick Remion.


Proceedings of the Eurographics Workshop on Computer Animation and Simulation | 2001

Continuous deformation energy for dynamic material splines subject to finite displacements

Olivier Nocent; Yannick Remion

This paper presents several improvements to a previous continuous parametric model for the dynamic animation of curvilinear objects, called Dynamic Material Splines (DMS). It begins by replacing the previous parametric density function with a true per-unit-length density function. It then shows how continuous deformation energy can be used to model internal strains for DMS according to the classical theory of elasticity. After these theoretical developments, numerical results are given to highlight the advantages of continuous deformation energy over discrete springs.
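The central idea, a stretching energy integrated continuously along the curve rather than discrete springs between control points, can be made concrete with a small numerical sketch. The stiffness parameter `k` and the user-supplied spline evaluation functions below are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def stretching_energy(eval_spline, eval_rest, k=1.0, n_samples=200):
    """Approximate the continuous stretching energy of a material spline.

    eval_spline(u): current 3D position for parameter u in [0, 1]
    eval_rest(u):   rest-state 3D position for parameter u in [0, 1]
    k:              per-unit-length stretching stiffness (assumed constant here)
    """
    u = np.linspace(0.0, 1.0, n_samples)
    cur = np.array([eval_spline(ui) for ui in u])    # sampled current curve
    rest = np.array([eval_rest(ui) for ui in u])     # sampled rest curve

    # Arc-length elements of the current and rest configurations.
    ds_cur = np.linalg.norm(np.diff(cur, axis=0), axis=1)
    ds_rest = np.linalg.norm(np.diff(rest, axis=0), axis=1)

    # Local stretching strain, integrated over the rest length.
    strain = (ds_cur - ds_rest) / ds_rest
    return 0.5 * k * np.sum(strain ** 2 * ds_rest)
```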


International Journal of Digital Multimedia Broadcasting | 2010

An Occlusion Approach with Consistency Constraint for Multiscopic Depth Extraction

Cédric Niquin; Stéphanie Prévost; Yannick Remion

This paper presents a new approach to handling occlusions in multiview stereovision algorithms, using images destined for autostereoscopic displays. It takes advantage of information from all views and ensures the consistency of their disparity maps. We demonstrate its application in a correlation-based method and in a graph-cuts-based method; the latter uses a new energy that merges both dissimilarity and occlusion evaluations. We discuss results on real and virtual images.
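One way to read the consistency constraint is as a cross-view disparity check: a pixel's disparity in one view should be confirmed by the view it projects into, otherwise it is flagged as occluded. The sketch below assumes rectified views on a regular horizontal baseline and integer disparities; it only illustrates the general idea, not the paper's algorithm.

```python
import numpy as np

def occlusion_mask(disp_i, disp_j, baseline_steps=1, tol=1.0):
    """Flag pixels of view i whose disparity is not confirmed by view j.

    disp_i, disp_j : (H, W) disparity maps of two rectified views
    baseline_steps : how many inter-camera baselines separate views i and j
    tol            : allowed disparity difference before flagging an occlusion
    """
    h, w = disp_i.shape
    occluded = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            d = disp_i[y, x]
            x_j = int(round(x - d * baseline_steps))  # where the pixel lands in view j
            if 0 <= x_j < w:
                occluded[y, x] = abs(d - disp_j[y, x_j]) > tol
            else:
                occluded[y, x] = True                 # projects outside view j
    return occluded
```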


International Journal of Digital Multimedia Broadcasting | 2010

Multiview Shooting Geometry for Multiscopic Rendering with Controlled Distortion

Jessica Prévoteau; Sylvia Chalençon-Piotin; Didier Debons; Laurent Lucas; Yannick Remion

A fundamental element of stereoscopic and/or autostereoscopic image production is the geometrical analysis of the shooting and viewing conditions in order to obtain a high-quality 3D perception experience. This paper firstly compares the perceived depth with the shot scene depth, derived from the viewing and shooting geometries, for a given pair of shooting and rendering devices. This yields a depth distortion model whose parameters are expressed from the geometrical characteristics of the shooting and rendering devices. Secondly, these expressions are inverted in order to design convenient shooting layouts yielding chosen distortions on specific rendering devices. Thirdly, this design scheme provides three shooting technologies (3D computer graphics software, photo rail, and camera box system) producing high-quality 3D content for various kinds of scenes (real or virtual, still or animated), complying with any pre-chosen distortion when rendered on any previously specified multiscopic technology or device.
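The kind of geometrical link the paper builds can be illustrated with the standard parallel-camera model: on-screen parallax follows from the shooting geometry, and perceived depth follows from the viewing geometry. The parameter names below (baseline, convergence distance, magnification, eye separation, viewing distance) are textbook stereoscopy quantities and are not necessarily the parameterization used in the paper.

```python
def screen_parallax(z, baseline, focal, convergence, magnification):
    """On-screen parallax (same unit as eye separation) for a scene point at depth z,
    shot with parallel cameras converged (by sensor shift) on the plane at `convergence`."""
    return magnification * focal * baseline * (1.0 / convergence - 1.0 / z)

def perceived_depth(parallax, eye_separation, viewing_distance):
    """Depth perceived by the viewer for a given on-screen parallax.
    Points with zero parallax appear on the screen plane."""
    if parallax >= eye_separation:        # would require the eyes to diverge
        return float("inf")
    return viewing_distance * eye_separation / (eye_separation - parallax)

# Example: a point 4 m away, shot with a 65 mm baseline converged at 3 m,
# viewed from 3 m on a display where the sensor image is magnified 50x.
p = screen_parallax(z=4.0, baseline=0.065, focal=0.035, convergence=3.0, magnification=50.0)
print(perceived_depth(p, eye_separation=0.065, viewing_distance=3.0))
```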


Journal of Visualization and Computer Animation | 2000

A dynamic animation engine for generic spline objects

Yannick Remion; Jean-Michel Nourrit; Didier Gillard

This paper introduces an accurate and efficient engine whose purpose is the dynamic animation of curvilinear objects modelled as successions of splines. At each time step the object shape conforms to its spline definition, thus ensuring that every property implied by the chosen spline models is verified. This is achieved by animating the spline control points. However, these control points are not considered as material points but rather as the degrees of freedom of the continuous object. The chosen dynamic equations (Lagrangian formalism) reflect this modelling scheme and yield an exact and highly efficient linear system. In this formalism, forces are introduced either through their potential energy or through their power in the virtual motions induced by the degrees of freedom. Both methods are carried out for three cases: gravity, viscosity, and a generic force. Suitable classical methods for constraint handling and numerical resolution are briefly discussed. Finally, the animation engine is applied to knitted patterns.
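The engine's central computation, treating control points as generalized degrees of freedom and solving a linear system at each step, can be sketched as follows: the generalized mass matrix is the Gram matrix of the blending functions weighted by linear density, and gravity enters through its generalized force. This is a minimal sketch assuming constant density and caller-supplied blending functions, not the published engine.

```python
import numpy as np

def mass_matrix(basis, density=1.0, n_quad=200):
    """Gram matrix M[i, j] ~ integral of density * b_i(u) * b_j(u) du over [0, 1].
    `basis` is a list of callables b_i(u) (the spline blending functions)."""
    u = np.linspace(0.0, 1.0, n_quad)
    B = np.array([[b(ui) for b in basis] for ui in u])   # (n_quad, n_ctrl)
    return density * (B.T @ B) / n_quad

def gravity_force(basis, density=1.0, g=(0.0, 0.0, -9.81), n_quad=200):
    """Generalized gravity force on each control point: integral of density * b_i(u) du, times g."""
    u = np.linspace(0.0, 1.0, n_quad)
    weights = np.array([[b(ui) for b in basis] for ui in u]).mean(axis=0) * density
    return np.outer(weights, np.asarray(g))              # (n_ctrl, 3)

def step(q, v, M, F, dt):
    """One semi-implicit Euler step of M * a = F on the control points q (n_ctrl, 3)."""
    a = np.linalg.solve(M, F)
    v = v + dt * a
    return q + dt * v, v
```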


The Visual Computer | 2001

d-Dimensional parametric models for dynamic animation of deformable objects

Yannick Remion; Jean-Michel Nourrit; Olivier Nocent

This paper introduces an accurate, efficient, and unified engine dedicated to the dynamic animation of d-dimensional deformable objects. The objects are modelled as d-dimensional manifolds defined as functional combinations of a mesh of 3D control points, weighted by parametric blending functions. This model ensures that, at each time step, the object shape conforms to its manifold definition. The object motion is deduced from the dynamic animation of the control points; in fact, the control points should be viewed as the degrees of freedom of the continuous object. The chosen dynamic equations (Lagrangian formalism) reflect this generic modelling scheme and yield an exact and computationally efficient linear system.
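For the d = 2 case (a parametric surface), "functional combination of a mesh of 3D control points weighted by blending functions" amounts to a tensor-product evaluation. The sketch below assumes generic caller-supplied blending functions and is only meant to make that definition concrete.

```python
import numpy as np

def eval_surface(ctrl, basis_u, basis_v, u, v):
    """Evaluate a tensor-product parametric surface point.

    ctrl    : (n, m, 3) mesh of 3D control points
    basis_u : list of n callables, blending functions in the u direction
    basis_v : list of m callables, blending functions in the v direction
    """
    bu = np.array([b(u) for b in basis_u])   # (n,)
    bv = np.array([b(v) for b in basis_v])   # (m,)
    # Weighted combination: sum_i sum_j bu[i] * bv[j] * ctrl[i, j]
    return np.einsum("i,j,ijk->k", bu, bv, ctrl)
```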


Computer Networks and ISDN Systems | 1997

A new unsupervised cube-based algorithm for iso-surface generation

Laurent Lucas; Didier Gillard; Yannick Remion

This paper introduces a new algorithm which automatically produces polygonal representations of 3D structures within a volume data set built from a stack of parallel cross-sections. Several methods of 3D surface reconstruction have already been proposed, ranging from heuristic approaches that construct 3D surfaces from 2D contours to the Marching Cubes (MC) approach, where the different configurations are checked systematically. Instead, we define a cube-to-cube connection based upon geometrical closeness provided by convex-hull computation. We further evaluate the precision of 3D models reconstructed from synthetic and real data obtained in confocal microscopy and compare it with the conventional MC algorithm. We also discuss improvements that reduce the number of generated surface patches, and the method's suitability for 3D quantitative tasks.
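The convex-hull-based cube connection itself is specific to the paper, but the conventional Marching Cubes baseline it is compared against can be reproduced with an off-the-shelf implementation. The snippet below uses scikit-image (assumed available) on a synthetic volume, purely as a point of reference.

```python
import numpy as np
from skimage import measure

# Synthetic volume: distance to the center of a 64^3 grid.
z, y, x = np.mgrid[0:64, 0:64, 0:64]
volume = np.sqrt((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2)

# Conventional Marching Cubes iso-surface at radius 20 (a sphere).
verts, faces, normals, values = measure.marching_cubes(volume, level=20.0)
print(f"{len(verts)} vertices, {len(faces)} triangles")
```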


4th International Conference on 3D Body Scanning Technologies, Long Beach, CA, USA, 19-20 November 2013 | 2013

RECOVER3D: A Hybrid Multi-View System for 4D Reconstruction of Moving Actors

Laurent Lucas; Philippe Souchet; Muhannad Ismael; Olivier Nocent; Cédric Niquin; Céline Loscos; Ludovic Blache; Stéphanie Prévost; Yannick Remion

4D multi-view reconstruction of moving actors has many applications in the entertainment industry, and although studios providing such services are becoming more accessible, effort is still needed to improve the underlying technology and to produce high-quality 3D content. The RECOVER3D project aims to build an integrated virtual video system for the broadcast and motion-picture markets. In particular, we present a hybrid acquisition system coupling monoscopic and multiscopic video cameras, in which the actors' performance is captured as a 4D data set: a sequence of 3D volumes over time. The visual improvement of the software solutions being implemented relies on "silhouette-based" techniques and (multi-)stereovision, following several hybridization scenarios integrating GPU-based processing. Afterwards, we transform this sequence of independent 3D volumes into a single dynamic mesh. Our approach is based on a motion-estimation procedure: an adaptive signed volume distance function is used as the principal shape descriptor, and an optical-flow algorithm is adapted to the surface setting with a modification that minimizes the interference between unrelated surface regions.
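The "adaptive signed volume distance function" used as shape descriptor is project-specific, but a plain signed distance function over a reconstructed binary occupancy volume, the usual starting point for such descriptors, can be computed as below with SciPy. This is a sketch under that assumption, not the project code.

```python
import numpy as np
from scipy import ndimage

def signed_distance(mask):
    """Signed Euclidean distance to the surface of a binary occupancy volume.
    Positive outside the object, negative inside."""
    mask = mask.astype(bool)
    outside = ndimage.distance_transform_edt(~mask)  # distance of background voxels to the object
    inside = ndimage.distance_transform_edt(mask)    # distance of object voxels to the background
    return outside - inside
```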


3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2009

Accurate multi-view depth reconstruction with occlusions handling

Cédric Niquin; Stéphanie Prévost; Yannick Remion

We present an offline method for stereo matching using a large number of views. Our method is based on occlusion detection and is composed of two steps, one global and one local. In the first step we formulate an energy function that handles data, occlusion, and smoothness terms through a global graph-cuts optimization. In the second step we introduce a local cost that uses the occlusions detected in the first step in order to refine the result. This cost takes advantage of both the multi-view setting and the occlusions. The experimental results show how our algorithm combines the advantages of global and local methods, and how accurate it is at detecting boundaries and recovering details.
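The local refinement step relies on a matching cost that simply ignores the views in which a pixel was found occluded during the global step. A window-based version of such a cost could look like the sketch below (rectified, equidistant views and integer disparities assumed; this is not the paper's exact cost).

```python
import numpy as np

def local_cost(images, occluded, ref, y, x, d, window=2):
    """Sum of squared differences for pixel (y, x) of view `ref` at disparity d,
    accumulated only over the views where that pixel is not marked as occluded.

    images   : list of (H, W) grayscale views on a regular horizontal baseline
    occluded : list of (H, W) boolean masks; occluded[k][y, x] is True when the
               reference pixel is hidden in view k
    """
    h, w = images[ref].shape
    if not (window <= y < h - window and window <= x < w - window):
        return np.inf
    patch = images[ref][y - window:y + window + 1, x - window:x + window + 1].astype(float)
    total, used = 0.0, 0
    for k, img in enumerate(images):
        if k == ref or occluded[k][y, x]:
            continue                               # skip the reference view and occluded views
        xk = int(round(x - d * (k - ref)))         # matching column in view k
        if window <= xk < w - window:
            other = img[y - window:y + window + 1, xk - window:xk + window + 1].astype(float)
            total += float(np.sum((patch - other) ** 2))
            used += 1
    return total / used if used else np.inf
```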


Proceedings of SPIE | 2011

Autostereoscopic visualization of 3D time-varying complex objects in volumetric image sequences

A. Benassarou; Gilles Valette; Didier Debons; Yannick Remion; Laurent Lucas

This paper presents 4dVizMed, a framework for interactive analysis and autostereoscopic visualization of 3D time-varying objects in volumetric image sequences. It combines a deformable surface model which automatically tracks volumetric features, real-time multi-view stereo volume rendering, and interactive tools for manipulation and quantification. Our method is based on a topological feature-tracking process, using a flow-based paradigm and a deformable surface model; it tracks through time the evolution of the components of an isosurface and their interactions with other components. We focus on the difficulties of visualizing 4D volume data, and we report the results of preliminary experiments designed to evaluate the utility of autostereoscopic displays for this purpose.
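Tracking connected components of an isosurface from one time step to the next is often bootstrapped by matching components on voxel overlap. The sketch below does exactly that with SciPy labelling; it illustrates the general idea rather than the 4dVizMed pipeline.

```python
import numpy as np
from scipy import ndimage

def match_components(vol_t, vol_t1, level):
    """Match connected components of the super-level set {volume >= level}
    between two consecutive time steps, by largest voxel overlap."""
    labels_t, n_t = ndimage.label(vol_t >= level)
    labels_t1, _ = ndimage.label(vol_t1 >= level)

    matches = {}
    for comp in range(1, n_t + 1):
        overlap = labels_t1[labels_t == comp]      # labels at t+1 under component `comp`
        overlap = overlap[overlap > 0]
        # A component may vanish (no overlap), split, or merge; keep the dominant successor.
        matches[comp] = int(np.bincount(overlap).argmax()) if overlap.size else None
    return matches
```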


Proceedings of SPIE | 2010

Real 3D video capturing for multiscopic rendering with controlled distortion

Jessica Prévoteau; Sylvia Chalençon-Piotin; Didier Debons; Laurent Lucas; Yannick Remion

Today, 3D viewing devices still need high-quality content; up to now, there has been no real 3D video shooting system specifically designed to ensure a high-quality 3D experience on a pre-chosen 3D display. A fundamental element of multiscopic image production is the geometrical analysis of shooting and viewing conditions in order to obtain a high-quality 3D perception experience. Many autostereoscopic camera systems have been proposed, but none is designed with control of possible depth distortions in mind. This article introduces a patented autostereoscopic camera design scheme based upon this distortion control. Building on the link between the shooting and rendering geometries, the scheme enables control of the distortion of the perceived depth, and thus provides camera systems producing high-quality 3D content complying with any pre-chosen distortion when rendered on any specific autostereoscopic display. We use this design scheme to produce pre-industrial camera systems devoted to live or pre-recorded 3D shooting. These systems are compact, lightweight, easy to deploy, and readily adaptable to other conditions (3D displays, depth distortions). We also introduce the associated software, which allows us to control our 3D cameras and to display their output in real time on the previously specified autostereoscopic display. According to numerous spectators, both naive and expert, the resulting 3D perception is of high quality.
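Inverting the forward model sketched earlier (after the "Multiview Shooting Geometry" abstract) conveys the flavour of distortion-controlled design: choose the on-screen parallax you want for the nearest and farthest scene points, then solve for the camera baseline and convergence distance. Again a textbook parallel-camera sketch, not the patented design scheme.

```python
def design_rig(z_near, z_far, p_near, p_far, focal, magnification):
    """Solve for camera baseline and convergence distance so that scene depths
    z_near and z_far map to on-screen parallaxes p_near and p_far.

    Uses the parallel-camera model p = M * f * b * (1/C - 1/z); the two target
    parallaxes give two equations in the unknowns b (baseline) and C (convergence).
    """
    baseline = (p_far - p_near) / (magnification * focal * (1.0 / z_near - 1.0 / z_far))
    inv_convergence = p_far / (magnification * focal * baseline) + 1.0 / z_far
    return baseline, 1.0 / inv_convergence

# Example: map a 2 m - 10 m scene to -10 mm (in front of) / +20 mm (behind) the screen plane.
b, c = design_rig(z_near=2.0, z_far=10.0, p_near=-0.010, p_far=0.020,
                  focal=0.035, magnification=50.0)
print(f"baseline = {b * 1000:.1f} mm, convergence = {c:.2f} m")
```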

Collaboration


Dive into Yannick Remion's collaborations.

Top Co-Authors

Laurent Lucas, University of Reims Champagne-Ardenne
Céline Loscos, University of Reims Champagne-Ardenne
Olivier Nocent, University of Reims Champagne-Ardenne
Didier Debons, University of Reims Champagne-Ardenne
Stéphanie Prévost, University of Reims Champagne-Ardenne
Cédric Niquin, University of Reims Champagne-Ardenne
Jean-Michel Nourrit, University of Reims Champagne-Ardenne
Muhannad Ismael, University of Reims Champagne-Ardenne
Didier Gillard, University of Reims Champagne-Ardenne
Benjamin Battin, University of Reims Champagne-Ardenne