Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Yohan Payan is active.

Publication


Featured research published by Yohan Payan.


Medical Image Analysis | 2003

Patient specific finite element model of the face soft tissues for computer-assisted maxillofacial surgery

Matthieu Chabanas; Vincent Luboz; Yohan Payan

This paper addresses the prediction of face soft-tissue deformations resulting from bone repositioning in maxillofacial surgery. A generic 3D Finite Element model of the face soft tissues was developed. Face muscles are defined in the mesh as embedded structures with different mechanical properties (transverse isotropy, stiffness depending on muscle contraction), so simulations of face deformations under muscle actions can be performed. In the context of maxillofacial surgery, this generic soft-tissue model is automatically conformed to the patient morphology by elastic registration, using skin and skull surfaces segmented from a CT scan. Some elements of the patient mesh can be geometrically distorted during the registration, which prevents Finite Element analysis. Irregular elements are therefore detected and automatically regularized. This semi-automatic patient model generation is robust, fast and easy to use, and thus seems compatible with clinical use. Six patient models were successfully built, and simulations of soft-tissue deformations resulting from bone displacements were performed on two patient models. Both the adequacy of the models to the patient morphologies and the simulations of post-operative aspects were qualitatively validated by five surgeons. They concluded that the models fit the patients' morphologies and that the predicted soft-tissue modifications are consistent with what they would expect.
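The detection of irregular elements described above comes down to a geometric quality test on each mesh element. A minimal sketch of one such test, using the signed volume of tetrahedral elements (the function names and threshold are illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

def signed_volume(p0, p1, p2, p3):
    """Signed volume of a tetrahedron; a non-positive value means the
    element is inverted or collapsed."""
    return np.linalg.det(np.stack([p1 - p0, p2 - p0, p3 - p0])) / 6.0

def find_irregular_elements(nodes, tets, quality_threshold=1e-9):
    """Return indices of tetrahedra whose signed volume is at or below
    the threshold; such elements would make FE analysis fail and must
    be regularized."""
    bad = []
    for i, (a, b, c, d) in enumerate(tets):
        if signed_volume(nodes[a], nodes[b], nodes[c], nodes[d]) <= quality_threshold:
            bad.append(i)
    return bad
```

In practice a full quality check would also flag badly shaped (but non-inverted) elements, e.g. via aspect ratio or Jacobian conditioning; the sign test is only the minimal criterion.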


Neuroscience | 2008

Can a plantar pressure–based tongue-placed electrotactile biofeedback improve postural control under altered vestibular and neck proprioceptive conditions?

Nicolas Vuillerme; Olivier Chenu; Nicolas Pinsault; Anthony Fleury; Jacques Demongeot; Yohan Payan

We investigated the effects of a plantar pressure-based tongue-placed electrotactile biofeedback on postural control during quiet standing under normal and altered vestibular and neck proprioceptive conditions. To achieve this goal, 14 young healthy adults were asked to stand upright as still as possible with their eyes closed in two head postures (Neutral and Extended) and two conditions (No-biofeedback and Biofeedback). The underlying principle of the biofeedback consisted of providing supplementary information related to foot sole pressure distribution through a wireless embedded tongue-placed tactile output device. Center of foot pressure (CoP) displacements were recorded using a plantar pressure data acquisition system. Results showed that (1) the Extended head posture yielded increased CoP displacements relative to the Neutral head posture in the No-biofeedback condition, with a greater effect along the anteroposterior than the mediolateral axis, whereas (2) no significant difference between the Neutral and Extended head postures was observed in the Biofeedback condition. The present findings suggest that the availability of the plantar pressure-based tongue-placed electrotactile biofeedback allowed the subjects to suppress the destabilizing effect induced by the disruption of vestibular and neck proprioceptive inputs associated with the head extended posture. These results are discussed according to the sensory re-weighting hypothesis, whereby the CNS would dynamically and selectively adjust the relative contributions of sensory inputs (i.e. the sensory weights) to maintain upright stance depending on the sensory contexts and the neuromuscular constraints acting on the subject.


ISBMS'06 Proceedings of the Third international conference on Biomedical Simulation | 2006

Hierarchical multi-resolution finite element model for soft body simulation

Matthieu Nesme; François Faure; Yohan Payan

The complexity of most surgical models has not allowed interactive simulations on standard computers. We propose a new framework to finely control the resolution of the models, which allows us to dynamically concentrate the computational effort where it is most needed.

Given the segmented scan of an object to simulate, we first compute a bounding box and then recursively subdivide it where needed. The cells of this octree structure are labelled with mechanical properties based on material parameters and fill rate. An efficient physical simulation is then performed using hierarchical hexahedral finite elements. The object surface can be used for rendering and to apply boundary conditions.

Compared with traditional finite element approaches, our method dramatically simplifies the task of volume meshing, facilitating the use of patient-specific models, and improves the propagation of deformations.
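The octree construction outlined above can be sketched over a binary voxel mask standing in for the segmented scan. This is a minimal illustration under assumed names and a simple homogeneity criterion (cells that are fully inside or fully outside become leaves), not the authors' code:

```python
import numpy as np

def fill_rate(mask, x0, y0, z0, size):
    """Fraction of occupied voxels inside a cubic cell of the mask."""
    return mask[x0:x0 + size, y0:y0 + size, z0:z0 + size].mean()

def build_octree(mask, x0=0, y0=0, z0=0, size=None, min_size=1):
    """Recursively subdivide a cubic bounding box over a binary voxel mask.
    Homogeneous cells become leaves labelled with their fill rate (which
    would drive the cell's mechanical properties); mixed cells are split
    into 8 children."""
    if size is None:
        size = mask.shape[0]  # assume a cubic, power-of-two mask
    rate = fill_rate(mask, x0, y0, z0, size)
    if rate in (0.0, 1.0) or size <= min_size:
        return {"origin": (x0, y0, z0), "size": size, "fill": float(rate)}
    half = size // 2
    children = [build_octree(mask, x0 + dx, y0 + dy, z0 + dz, half, min_size)
                for dx in (0, half) for dy in (0, half) for dz in (0, half)]
    return {"origin": (x0, y0, z0), "size": size, "children": children}
```

Each leaf cell would then be instantiated as a hexahedral finite element whose stiffness and mass are scaled by the material parameters and the stored fill rate.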


VRIPHYS | 2006

Animating Shapes at Arbitrary Resolution with Non-Uniform Stiffness

Matthieu Nesme; Yohan Payan; François Faure

We present a new method for physically animating deformable shapes using finite element models (FEM). Contrary to commonly used methods based on tetrahedra, our finite elements are the bounding voxels of a given shape at arbitrary resolution. This alleviates the complexities and limitations of tetrahedral volume meshing and results in regular, well-conditioned meshes. We show how to build the voxels and how to set the masses and stiffnesses in order to model the physical properties as accurately as possible at any given resolution. Additionally, we extend a fast and robust tetrahedron-FEM approach to the case of hexahedral elements. This permits simulation of arbitrarily complex shapes at interactive rates in a manner that takes into account the distribution of material within the elements.


Medical Image Analysis | 2017

Brain-shift compensation using intraoperative ultrasound and constraint-based biomechanical simulation

Fanny Morin; Hadrien Courtecuisse; Ingerid Reinertsen; Florian Le Lann; Olivier Palombi; Yohan Payan; Matthieu Chabanas

Highlights:
- A constraint-based biomechanical simulation method is proposed to compensate for brain-shift.
- Intraoperatively, a single ultrasound acquisition is used to account for vessel and cortical-surface deformations.
- Quantitative validation on synthetic data and five clinical cases is provided.
- Improvements over one of the closest existing methods are shown.
- The method is fully compatible with a surgical workflow.

Purpose: During brain tumor surgery, planning and guidance are based on preoperative images, which do not account for brain-shift. However, this deformation is a major source of error in image-guided neurosurgery and affects the accuracy of the procedure. In this paper, we present a constraint-based biomechanical simulation method to compensate for craniotomy-induced brain-shift that integrates the deformations of the blood vessels and cortical surface, using a single intraoperative ultrasound acquisition.

Methods: Prior to surgery, a patient-specific biomechanical model is built from preoperative images, accounting for the vascular tree in the tumor region and brain soft tissues. Intraoperatively, a navigated ultrasound acquisition is performed directly in contact with the organ. Doppler and B-mode images are recorded simultaneously, enabling the extraction of the blood vessels and probe footprint, respectively. A constraint-based simulation is then executed to register the pre- and intraoperative vascular trees as well as the cortical surface with the probe footprint. Finally, preoperative images are updated to provide the surgeon with images corresponding to the current brain shape for navigation.

Results: The robustness of our method is first assessed using sparse and noisy synthetic data. In addition, quantitative results for five clinical cases are provided, first using landmarks set on blood vessels, then based on anatomical structures delineated in medical images. The average distances between paired vessel landmarks ranged from 3.51 to 7.32 mm before compensation. With our method, on average 67% of the brain-shift is corrected (range [1.26; 2.33]) against 57% using one of the closest existing works (range [1.71; 2.84]). Finally, our method is proven to be fully compatible with a surgical workflow in terms of execution times and user interactions.

Conclusion: In this paper, a new constraint-based biomechanical simulation method is proposed to compensate for craniotomy-induced brain-shift. While efficient at correcting this deformation, the method is fully integrable into a clinical workflow.
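The reported figure (67% of brain-shift corrected) corresponds to the relative reduction of landmark distances after compensation. A minimal sketch, assuming distances are in mm and the metric is the mean per-landmark relative reduction (the paper's exact aggregation may differ):

```python
def brain_shift_correction(dist_before_mm, dist_after_mm):
    """Percentage of the initial brain-shift recovered by the compensation:
    100 * (before - after) / before, averaged over landmark pairs."""
    corrections = [100.0 * (b - a) / b
                   for b, a in zip(dist_before_mm, dist_after_mm)]
    return sum(corrections) / len(corrections)
```

For example, landmarks whose misalignment drops from 4.0 mm to 2.0 mm and from 8.0 mm to 2.0 mm would give 50% and 75% corrections, averaging 62.5%.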


Archive | 2002

Evaluation of the exophthalmia reduction with a finite element model

Vincent Luboz; Annaig Pedrono; Frank Boutault; Pascal Swider; Yohan Payan

Exophthalmia is a pathology defined by an excessive forward protrusion of the ocular globe [1]. For dysthyroid exophthalmia, surgery is usually needed once the endocrine situation has been stabilized. A classical surgical technique consists in decompressing the orbit [2] by opening its walls and pushing the ocular globe in order to evacuate some of the fat tissue into the sinuses. This work aims at proposing a biomechanical model of the complete orbit in order to help the clinician define the surgical plan.


eurographics | 2009

Short paper: role of force-cues in path following of 3D trajectories in virtual reality

Jérémy Bluteau; Edouard Gentaz; Yohan Payan; Sabine Coquillart

This paper examines the effect of adding haptic force cues (simulated inertia, compensation of gravity) during 3D path following in large immersive virtual-reality environments. Thirty-four participants were asked to follow a 3D ring-on-wire trajectory. The experiment consisted of one pre-test/control block of twelve trials with no haptic feedback, followed by three randomized blocks of twelve trials each in which the force feedback differed: two levels of inertia and one condition compensating the effect of gravity (No-gravity). In all blocks, participants received real-time visual warning feedback (a color change) related to their spatial performance. Contrary to several psychophysics studies, the haptic force cues did not significantly change task performance in terms of completion time or spatial distance error. However, participants significantly reduced the time spent in the visual warning zone in the presence of haptic cues. Taken together, these results are discussed from a psychophysics and multisensory-integration point of view.


Recent Research Developments in Biomechanics | 2004

Physically realistic interactive simulation for biological soft tissues

Matthieu Nesme; Maud Marchal; Emmanuel Promayon; Matthieu Chabanas; Yohan Payan; François Faure


Journal of Speech Language and Hearing Research | 2013

The Distributed Lambda (λ) Model (DLM): A 3-D, Finite-Element Muscle Model Based on Feldman's λ Model; Assessment of Orofacial Gestures

Mohammad Ali Nazari; Pascal Perrier; Yohan Payan


Archive | 2005

Dispositif de prévention d'escarre [Pressure ulcer prevention device]

Yohan Payan; Jacques Demongeot; Jose-Octavio Vazquez-Buenosaires

Collaboration


Dive into Yohan Payan's collaborations.

Top Co-Authors

Nicolas Vuillerme (Institut Universitaire de France)
Olivier Chenu (Centre national de la recherche scientifique)
Bruno Diot (Centre national de la recherche scientifique)
Francis Cannard (Centre national de la recherche scientifique)
Marek Bucki (Centre national de la recherche scientifique)
Matthieu Chabanas (Grenoble Institute of Technology)
Vincent Luboz (Joseph Fourier University)
Jérémy Bluteau (Centre national de la recherche scientifique)