Network


Latest external collaborations at the country level.

Hotspot


Research topics where Sylvain Bouchigny is active.

Publication


Featured research published by Sylvain Bouchigny.


Robotics and Autonomous Systems | 2013

Haptic systems for training sensorimotor skills: A use case in surgery

Florian Gosselin; Sylvain Bouchigny; Christine Mégard; Farid Taha; Pascal Delcampe; Cédric d'Hauthuille

Learning and training sensorimotor tasks typically involve several phases. The trainee usually has to understand the task first: he can read books, listen to explanations, watch videos or observe other people. He then has to perform the task, either on his own or under the supervision of a mentor, to build his own perception-action loops. With time and practice he will progress towards expertise, more efficiently if he receives appropriate feedback. However, this common approach has limited applicability for safety-critical tasks and for partially blindfolded activities that rely heavily on haptics, both of which are common in surgery. Safety constraints rule out initial trial and error and require good performance on the first attempt from explanations and observation alone, which is precisely what blindfolded activities do not allow and what is difficult for complex movements with force-accuracy requirements. Haptic systems and virtual reality (VR) technologies make it possible to reproduce such situations without risk to the patient, and they offer new opportunities for training. The trainee can practice the task on a multimodal console until he has reached a sufficient proficiency level before moving to the operating room (OR). In this paper, we present how this approach was applied in the context of maxillofacial surgery (MFS). A novel training platform offering high-fidelity haptic interactions was specified, developed and evaluated. The first results tend to prove its efficiency in qualifying expertise and training people.
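The high-fidelity haptic interaction mentioned in this abstract is commonly achieved with impedance-type rendering: the device continuously reads the tool position and pushes back with a virtual spring-damper force when the tool penetrates a virtual surface. The paper does not give its rendering equations, so the Python sketch below is only an illustrative, simplified loop with a hypothetical device stub; the stiffness, damping and surface model are assumptions, not the platform's actual parameters.

import time

# Hypothetical device stub standing in for a real haptic API; all numbers are
# illustrative assumptions, not values from the paper.
class HapticDeviceStub:
    def __init__(self):
        self.z = 0.0003    # tool height above the virtual surface (m)
        self.vz = -0.05    # tool velocity (m/s), moving toward the surface
    def read_position(self):
        self.z += self.vz * 0.001   # simulate free motion for this sketch
        return self.z, self.vz
    def send_force(self, fz):
        print(f"z = {self.z:+.5f} m  ->  force = {fz:+.2f} N")

K = 2000.0    # assumed virtual stiffness (N/m)
B = 5.0       # assumed virtual damping (N*s/m)
F_MAX = 15.0  # assumed device force limit (N)

def render_step(device):
    z, vz = device.read_position()
    penetration = max(0.0, -z)          # depth below the virtual surface
    if penetration > 0.0:
        fz = K * penetration - B * vz   # spring pushes out, damper resists inward motion
        fz = min(max(fz, 0.0), F_MAX)   # never pull, never exceed the device limit
    else:
        fz = 0.0
    device.send_force(fz)

if __name__ == "__main__":
    dev = HapticDeviceStub()
    for _ in range(10):                 # a real rendering loop runs at roughly 1 kHz
        render_step(dev)
        time.sleep(0.001)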


International Conference on Robotics and Automation | 2011

Specification and design of a new haptic interface for maxillo facial surgery

Florian Gosselin; Fabien Ferlay; Sylvain Bouchigny; Christine Mégard; Farid Taha

Multimodal VR training platforms appear to be a very promising complement to traditional learning methods for the transfer of skills. The environment is fully controlled, and the content of the application, as well as the feedback, can be tuned to the performance and progress of the user. The efficiency of this approach depends, however, on the ability to realistically reproduce the situations encountered in the real world. If not, users could develop false perception-action loops or illusory conjunctions, which would be detrimental to the transfer of the training to the real world. Our institute, CEA LIST, the French Atomic Energy Commission's laboratory of applied research on software-intensive technologies, is currently developing such a platform for the training of maxillofacial surgery. As no existing haptic device fits the requirements of this application, we developed a new interface specifically for it. This paper presents its specification, design and performance.


Virtual Reality Software and Technology | 2009

User-centered design of a maxillo-facial surgery training platform

Christine Mégard; Florian Gosselin; Sylvain Bouchigny; Fabien Ferlay; Farid Taha

The paper describes the requirements specification process involved in the design of a Virtual Reality trainer for Epker maxillo-facial surgery. This surgery is considered very delicate and difficult to teach: vision-guided movements are very limited, and the haptic sense is relied on heavily. The user-centered methodology and the first experiments designed to develop the training platform are presented. Finally, the virtual training platform is sketched out.


International Conference on Human Haptic Sensing and Touch Enabled Computer Applications | 2010

Design of a multimodal VR platform for the training of surgery skills

Florian Gosselin; Fabien Ferlay; Sylvain Bouchigny; Christine Mégard; Farid Taha

There are many ways in which we can learn new skills. For sensorimotor skills, repeated practice (often under the supervision and guidance of an expert mentor) is required in order to progressively understand the consequences of our actions, adapt our behavior and develop the optimal perception-action loops needed to perform the task intuitively and efficiently. Adequately designed multimodal VR platforms can therefore offer an alternative to real environments. Indeed, they present interesting features: a controlled environment, measurement of the user's performance, and display of quantitative feedback. This paper presents such a platform, developed for the training of surgical skills. An illustrative sketch of what such a quantitative performance measure could look like is given below.
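As an illustration of the "measurement of the user's performance" and "quantitative feedback" mentioned in the abstract, the hypothetical Python snippet below combines force error and completion time into a simple per-trial score and reports it to the trainee. The metric, target force and weights are assumptions chosen for illustration; the paper does not specify the platform's actual scoring.

# Hypothetical per-trial performance score for a drilling exercise.
# Target force, weights and recorded samples are illustrative assumptions.

TARGET_FORCE_N = 8.0      # assumed desired drilling force (N)
W_FORCE, W_TIME = 0.7, 0.3

def trial_score(force_samples_n, completion_time_s, time_budget_s=30.0):
    """Combine mean force error and completion time into a 0-100 score."""
    mean_abs_error = sum(abs(f - TARGET_FORCE_N) for f in force_samples_n) / len(force_samples_n)
    force_score = max(0.0, 1.0 - mean_abs_error / TARGET_FORCE_N)
    time_score = max(0.0, 1.0 - completion_time_s / time_budget_s)
    return 100.0 * (W_FORCE * force_score + W_TIME * time_score)

if __name__ == "__main__":
    samples = [7.2, 8.5, 9.1, 6.8, 8.0]   # forces (N) recorded during one trial
    print(f"trial score: {trial_score(samples, completion_time_s=22.0):.1f}/100")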


Human Factors | 2013

Training Haptic Stiffness Discrimination: Time Course of Learning With or Without Visual Information and Knowledge of Results

Kinneret Teodorescu; Sylvain Bouchigny; Maria Korman

Objective: In this study, we explored the time course of haptic stiffness discrimination learning and how it was affected by two experimental factors, the addition of visual information and/or knowledge of results (KR) during training. Background: Stiffness perception may integrate both haptic and visual modalities. However, in many tasks the visual field is typically occluded, forcing stiffness perception to depend exclusively on haptic information. No studies to date have addressed the time course of haptic stiffness perceptual learning. Method: Using a virtual environment (VE) haptic interface and a two-alternative forced-choice discrimination task, the haptic stiffness discrimination ability of 48 participants was tested across 2 days. Each day included two haptic test blocks separated by a training block. Additional visual information and/or KR were manipulated between participants during the training blocks. Results: Practice repetitions alone induced significant improvement in haptic stiffness discrimination. Between days, accuracy improved slightly, but decision time deteriorated. The addition of visual information and/or KR had only temporary effects on decision time, without affecting the time course of haptic discrimination learning. Conclusion: Learning in haptic stiffness discrimination appears to evolve through at least two distinct phases: a single training session resulted in both immediate and latent learning. This learning was not affected by the training manipulations inspected. Application: Training skills in VEs in spaced sessions can be beneficial for tasks in which haptic perception is critical, such as surgical procedures in which the visual field is occluded. However, training protocols for such tasks should account for the low impact of multisensory information and KR.
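The two-alternative forced-choice (2AFC) procedure described in the Method section can be pictured as follows: on each trial the participant probes two virtual springs, one at a fixed standard stiffness and one at a comparison stiffness, in random order, and reports which felt stiffer. The Python outline below sketches such a trial loop with a simulated observer; the stiffness values, block sizes and response model are assumptions for illustration, not the study's actual protocol parameters.

import random

STANDARD_K = 300.0                        # assumed standard stiffness (N/m)
COMPARISON_OFFSETS = [-60, -30, 30, 60]   # assumed comparison increments (N/m)
TRIALS_PER_OFFSET = 5

def simulated_response(k_first, k_second, noise=20.0):
    """Noisy observer: reports which interval felt stiffer (1 or 2)."""
    felt_first = k_first + random.gauss(0.0, noise)
    felt_second = k_second + random.gauss(0.0, noise)
    return 1 if felt_first > felt_second else 2

def run_block():
    trials = [off for off in COMPARISON_OFFSETS for _ in range(TRIALS_PER_OFFSET)]
    random.shuffle(trials)
    correct = 0
    for offset in trials:
        comparison_k = STANDARD_K + offset
        # Randomize which interval holds the comparison spring.
        if random.random() < 0.5:
            pair, comparison_interval = (comparison_k, STANDARD_K), 1
        else:
            pair, comparison_interval = (STANDARD_K, comparison_k), 2
        answer = simulated_response(*pair)
        # The response is correct if the stiffer spring was identified.
        stiffer_interval = comparison_interval if offset > 0 else (3 - comparison_interval)
        correct += int(answer == stiffer_interval)
    return correct / len(trials)

if __name__ == "__main__":
    print(f"block accuracy: {run_block():.2%}")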


Proceedings of the 1st International Conference on Digital Tools & Uses Congress - DTUC '18 | 2018

Tangible Sounds: An Audio System for Interactive Tabletops

Lucile Cossou; Céphise Louison; Sylvain Bouchigny; Mehdi Ammi

One of the most interesting features of interactive tabletops is their capacity to provide multimodal interaction. Multimodality makes it possible to create attractive and immersive applications for these devices, which have been used in many different settings (museums, restaurants, music composition, schools, etc.). Multimodality is also the key to accessible digital content, as it allows people with disabilities to use their preferred medium to access information. However, most commercially available interactive tabletops lack one of the modalities that most multimedia devices have: audio output. Previous work has emphasized the importance of designing an audio system dedicated to interactive tabletops in order to provide a satisfying user experience. In this paper, we propose a new audio output system based on the use of tangible objects.
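The abstract does not detail how the tangible objects provide audio output; one plausible reading is that each tangible object placed on the table acts as a local sound emitter and that application sounds are routed to the emitter nearest to the on-screen event. The Python sketch below only illustrates that routing idea under this assumption; the object names, coordinates and functions are hypothetical and not taken from the paper.

import math

# Hypothetical tangible sound emitters detected on the tabletop (x, y in cm).
emitters = {
    "puck_a": (20.0, 15.0),
    "puck_b": (75.0, 40.0),
}

def nearest_emitter(event_xy):
    """Return the id of the tangible emitter closest to a tabletop event."""
    ex, ey = event_xy
    return min(
        emitters,
        key=lambda name: math.hypot(emitters[name][0] - ex, emitters[name][1] - ey),
    )

def play_on(emitter_id, sound_name):
    # Placeholder: a real system would stream audio to the chosen object here.
    print(f"routing '{sound_name}' to {emitter_id}")

if __name__ == "__main__":
    # A touch event at (70, 38) is closest to puck_b, so its feedback plays there.
    play_on(nearest_emitter((70.0, 38.0)), "tap_feedback")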


International Conference on Computer-Human Interaction Research and Applications | 2017

A Tangible Visual Accessibility Tool for Interactive Tabletops

Lucile Cossou; Sylvain Bouchigny; Christine Mégard; Sonia Huguenin; Mehdi Ammi

Thanks to being highly multimodal, interactive tabletops show great potential for edutainment. However, in order to meet the requirements of the inclusive-schooling policies spreading worldwide, they need new, dedicated accessibility tools that take into account both the specificities of the platform and the needs of its users. We present the design methodology we adopted for one such tool and the results we obtained.


Archive | 2010

A VR Training Platform for Maxillo Facial Surgery

Florian Gosselin; Christine Mégard; Sylvain Bouchigny; Fabien Ferlay; Farid Taha; Pascal Delcampe; Cédric d’Hauthuille


The International Conference SKILLS 2011 | 2011

Evaluation of a Multimodal VR training platform for maxillofacial surgery

Sylvain Bouchigny; Christine Mégard; Ludovic Gabet; Pablo F. Hoffmann; Maria Korman


International Conference on Multimodal Interfaces | 2009

Analysis of the drilling sound in maxillo-facial surgery

Pablo F. Hoffmann; Florian Gosselin; Farid Taha; Sylvain Bouchigny; Dorte Hammershøi

Collaboration


Dive into Sylvain Bouchigny's collaborations.

Top Co-Authors

Kinneret Teodorescu

Technion – Israel Institute of Technology

Mehdi Ammi

Centre national de la recherche scientifique
