Publication


Featured research published by Christophe Doignon.


Real-Time Imaging | 2005

Real-time segmentation of surgical instruments inside the abdominal cavity using a joint hue saturation color feature

Christophe Doignon; P. Graebling; M. de Mathelin

In this paper, the real-time segmentation of surgical instruments in color images used in minimally invasive surgery is addressed. This work has been developed in the scope of robotized laparoscopic surgery, specifically for the detection and tracking of gray regions in images of metallic instruments inside the abdominal cavity. In this environment, the moving background due to breathing motion, the non-uniform and time-varying lighting conditions and the presence of specularities are the main difficulties to overcome. To achieve an automatic color segmentation suitable for robot control, we developed a technique based on a discriminant color feature that is robust to intensity variations and specularities. We also designed an adaptive region growing with automatic seed detection and a model-based region classification, both dedicated to laparoscopy. The foreseen application provides a good test case for evaluating the proposed technique, and its effectiveness has been demonstrated through experiments on endoscopic image sequences, efficiently locating the boundaries of a landmark-free needle-holder at half the video rate.
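
As a rough illustration of the joint hue-saturation idea, the following Python/OpenCV sketch segments low-saturation (gray, metallic-looking) regions while ignoring the intensity channel; the thresholds and the connected-component post-processing are illustrative assumptions, not the paper's adaptive region growing.

import cv2
import numpy as np

def segment_gray_instrument(bgr_image, sat_max=60, val_min=40):
    """Segment low-saturation (gray/metallic) regions using hue-saturation only.

    Deciding in the (H, S) plane makes the result largely independent of the
    V (intensity) channel, which is what gives robustness to lighting changes
    and specularities. Thresholds are illustrative, not the paper's values.
    """
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)

    # Gray/metallic pixels have low saturation whatever their hue; a minimum
    # value threshold discards very dark background pixels.
    mask = cv2.inRange(hsv, (0, 0, val_min), (179, sat_max, 255))

    # Morphological cleanup, then keep the largest connected region as a
    # crude stand-in for the paper's seeded region growing.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n <= 1:
        return np.zeros_like(mask)
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return np.where(labels == largest, 255, 0).astype(np.uint8)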


Medical Image Computing and Computer-Assisted Intervention | 2004

A Parallel Robotic System with Force Sensors for Percutaneous Procedures Under CT-Guidance

Benjamin Maurin; Jacques Gangloff; Bernard Bayle; Michel de Mathelin; Olivier Piccin; Philippe Zanne; Christophe Doignon; Luc Soler; Afshin Gangi

This paper presents a new robotic framework for assisted CT-guided percutaneous procedures with force feedback and automatic patient-to-image registration of the needle. The purpose is to help practitioners perform accurate needle insertions while protecting them from harmful intra-operative X-ray imaging. Starting from the medical requirements for needle insertion in the liver under CT guidance, a dedicated parallel robot is described and its geometric and physical properties are explained. The design is mainly driven by accuracy and safety constraints. A prototype, currently under testing, is presented.


Medical Image Analysis | 2017

The status of augmented reality in laparoscopic surgery as of 2016

Sylvain Bernhardt; Stéphane Nicolau; Luc Soler; Christophe Doignon

This article presents a comprehensive review of the methods proposed in the literature concerning augmented reality in intra-abdominal minimally invasive surgery (also known as laparoscopic surgery). A solid background on surgical augmented reality is first provided to support the survey. Then, the various methods of laparoscopic augmented reality and their key tasks are categorized to better grasp the current landscape of the field. Finally, the issues gathered from the reviewed approaches are organized to outline the remaining challenges of augmented reality in laparoscopic surgery.


International Conference on Image Processing | 2007

Design of a Monochromatic Pattern for a Robust Structured Light Coding

Chadi Albitar; Pierre Graebling; Christophe Doignon

In this paper we present a new pattern for robust structured light coding based on a spatial neighborhood scheme and the M-array approach. The pattern is robust in that it tolerates a high error rate, characterized by a Hamming distance greater than 3 between codewords. We tackle the design problem by defining a small set of symbols associated with geometric features of simple shapes. Since the stripe is one of the chosen primitives, we use it to embed the local orientation of the pattern, which helps the search for the relevant neighborhood during the decoding process. The aim of this work is to use this pattern for the 3-D reconstruction of dynamic scenes with fast and reliable detection and decoding stages. Ongoing results are presented to assess both the capabilities of the proposed pattern and the decoding algorithm with projections onto simple 3-D scenes.
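
As a minimal sketch of the kind of check such a design implies, the following Python snippet extracts the 3×3 neighborhood codewords of a symbol array and computes their minimum pairwise Hamming distance; the window size, alphabet and random pattern are assumptions for illustration, not the published M-array construction.

import numpy as np

def neighborhood_codewords(pattern, win=3):
    """Extract every win x win window of a symbol array as a flat codeword."""
    rows, cols = pattern.shape
    return [tuple(pattern[r:r + win, c:c + win].ravel())
            for r in range(rows - win + 1)
            for c in range(cols - win + 1)]

def min_hamming_distance(codewords):
    """Smallest pairwise Hamming distance over all codewords in the pattern."""
    best = len(codewords[0])
    for i in range(len(codewords)):
        for j in range(i + 1, len(codewords)):
            d = sum(a != b for a, b in zip(codewords[i], codewords[j]))
            best = min(best, d)
    return best

# Toy example: a random pattern over 3 symbols; a real design would search
# for a pattern whose minimum distance exceeds the target (here, > 3).
rng = np.random.default_rng(0)
pattern = rng.integers(0, 3, size=(10, 10))
print(min_hamming_distance(neighborhood_codewords(pattern)))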


Computer Vision and Pattern Recognition | 2011

A pattern framework driven by the Hamming distance for structured light-based reconstruction with a single image

Xavier Maurice; Pierre Graebling; Christophe Doignon

Structured-light patterns provide a means to capture the state of an object's shape. However, they may be inefficient when the object is moving freely, when its surface contains high-curvature parts, or in out-of-depth-of-field situations. For image-based robotic guidance in unstructured and dynamic environments, only one shot is available to capture the shape of a moving region of interest; robust patterns and real-time capabilities must therefore be targeted. To this end, we have developed a novel technique for generating coded patterns directly driven by the Hamming distance. The counterpart is the large number of codes the coding/decoding algorithms must handle when a high Hamming distance is required. We show that the mean Hamming distance is a useful criterion for driving the pattern generation process, and we give a way to predict its value. Furthermore, to ensure local uniqueness of codewords while accounting for many incomplete ones, Perfect Map theory is used. We then describe a pseudorandom/exhaustive algorithm that builds patterns with more than 200×200 features in a very short time, thanks to a splitting strategy that performs the Hamming tests in the codeword space instead of the pattern array. This leads to a significant reduction of the computational complexity and may be applied to other purposes. Finally, real-time reconstructions from single images are reported, and the results are compared to the best known approaches, which are outperformed in many cases.
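
The following Python sketch illustrates two of the ideas mentioned above under simplifying assumptions: the predicted mean Hamming distance between independent uniform codewords, and a rejection-based growth of a codebook directly in codeword space. It is an illustrative stand-in, not the paper's splitting algorithm.

import numpy as np

def predicted_mean_hamming(word_len, n_symbols):
    """Expected Hamming distance between two independent uniform codewords:
    each of the word_len positions differs with probability 1 - 1/n_symbols."""
    return word_len * (1.0 - 1.0 / n_symbols)

def grow_codebook(n_words, word_len, n_symbols, min_dist, seed=0):
    """Pseudorandom/rejection generation of a codebook in codeword space:
    a candidate is kept only if its Hamming distance to every accepted
    codeword is at least min_dist."""
    rng = np.random.default_rng(seed)
    book = []
    while len(book) < n_words:
        cand = rng.integers(0, n_symbols, size=word_len)
        if all(np.count_nonzero(cand != w) >= min_dist for w in book):
            book.append(cand)
    return np.array(book)

# 3x3 windows over 3 symbols: predicted mean distance is 9 * (2/3) = 6.
print(predicted_mean_hamming(9, 3))
print(grow_codebook(20, 9, 3, min_dist=4).shape)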


Machine Vision and Applications | 2005

A structured light vision system for out-of-plane vibration frequencies location of a moving web

Christophe Doignon; Dominique Knittel

In this paper, we address the problem of detecting out-of-plane web vibrations by means of a single camera and a laser dot pattern device. We have been motivated by the significant economic impact of web vibration phenomena occurring in winding/unwinding systems. Among many sources of disturbance, out-of-plane vibrations of an elastic moving web are well known to be one of the most limiting factors for velocity in the web transport industry. The main contribution of this work is a new technique for the contact-less estimation of out-of-plane web vibration properties during the winding process. As far as we know, this is the first time a technique has been proposed to evaluate the vibrations of a moving web with a camera. Vibration frequencies are estimated from the distance variations of a web cross-section with respect to the camera. Experiments have been performed on a winding plant for elastic fabric with a web width of 10 cm. Distances from the web surface to the camera have been estimated along an image sequence, and the most significant frequencies have been extracted from the variations of this signal (forced and free vibrations) and compared to those provided by strain gauges and by a simple elastic string model in motion.
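
A minimal Python/NumPy sketch of the last step, extracting the dominant frequencies from the camera-to-web distance signal with an FFT; the frame rate and the synthetic test signal are assumptions for illustration, not the plant's actual data.

import numpy as np

def dominant_frequencies(distance_mm, frame_rate_hz, n_peaks=3):
    """Extract the strongest out-of-plane vibration frequencies from the
    camera-to-web distance signal (one sample per image of the sequence)."""
    x = np.asarray(distance_mm, dtype=float)
    x = x - x.mean()                              # remove the static offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / frame_rate_hz)
    order = np.argsort(spectrum[1:])[::-1] + 1    # rank bins, skipping DC
    return freqs[order[:n_peaks]]

# Synthetic check: a 12 Hz forced vibration plus noise, sampled at 100 fps.
t = np.arange(0, 5, 1.0 / 100.0)
signal = 0.4 * np.sin(2 * np.pi * 12 * t) + 0.05 * np.random.randn(t.size)
print(dominant_frequencies(signal, frame_rate_hz=100.0))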


Medical Imaging 2005: Visualization, Image-Guided Procedures, and Display | 2005

CTBot: A stereotactic-guided robotic assistant for percutaneous procedures of the abdomen

Benjamin Maurin; Christophe Doignon; Jacques Gangloff; Bernard Bayle; Michel de Mathelin; Olivier Piccin; Afshin Gangi

This article presents positioning results of a stereotactic robotic assistant for percutaneous needle insertions in the abdomen. The robotic system, called the CT-Bot, is succinctly described. This mechanically safe device is compatible with medical requirements and offers a novel approach to robotic needle insertion under computed tomography guidance. Our system performs self-registration using only visual information from a fiducial marker. The theoretical developments explain how the pose reconstruction is done using only four fiducial points and how the automatic registration algorithm is achieved. The results concern the automatic positioning of the tip of a needle with respect to a reference point selected in a CT image. The accuracy of the positioning results shows the potential of this system for clinical use.
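
As a loose illustration of fiducial-based registration, the sketch below computes a least-squares rigid transform between fiducial positions expressed in the CT frame and in the robot frame using the SVD-based Kabsch method; the fiducial coordinates are hypothetical, and the paper's own four-point closed-form pose reconstruction is not reproduced here.

import numpy as np

def rigid_registration(fiducials_ct, fiducials_robot):
    """Least-squares rigid transform (R, t) mapping fiducial positions measured
    in the CT image frame onto the same fiducials in the robot frame (Kabsch)."""
    P = np.asarray(fiducials_ct, dtype=float)
    Q = np.asarray(fiducials_robot, dtype=float)
    p0, q0 = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p0).T @ (Q - q0)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # fix reflections
    R = Vt.T @ D @ U.T
    t = q0 - R @ p0
    return R, t

# Hypothetical fiducial coordinates (mm): four points known in the robot frame
# and segmented in the CT volume (here generated synthetically).
robot_pts = np.array([[0, 0, 0], [50, 0, 0], [0, 50, 0], [0, 0, 30]], float)
ct_pts = robot_pts + np.array([10.0, -5.0, 120.0])   # synthetic "measurements"
R, t = rigid_registration(ct_pts, robot_pts)
print(np.allclose(R @ ct_pts.T + t[:, None], robot_pts.T))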


Medical Image Computing and Computer-Assisted Intervention | 2012

Simulation of pneumoperitoneum for laparoscopic surgery planning

Jordan Bano; Alexandre Hostettler; Stéphane Nicolau; Stéphane Cotin; Christophe Doignon; Hurng-Sheng Wu; Min-Ho Huang; Luc Soler; Jacques Marescaux

Laparoscopic surgery planning is usually performed on a preoperative image that does not correspond to the operating room conditions. Indeed, the patient undergoes gas insufflation (pneumoperitoneum) to allow instrument manipulation inside the abdomen. This insufflation moves the skin and the viscera so that their positions no longer correspond to the preoperative image, reducing the benefit of surgical planning, particularly for the trocar positioning step. A simulation of the pneumoperitoneum influence would thus improve the realism and the quality of the surgical planning. We present in this paper a method to simulate the movement of skin and viscera due to the pneumoperitoneum. Our method requires only a segmented preoperative 3D medical image associated with realistic biomechanical parameters. The simulation is performed using the SOFA simulation engine. The results were evaluated using computed tomography (CT) images of two pigs, before and after pneumoperitoneum. Results show that our method provides a very realistic estimation of skin, viscera and artery positions, with an average error within 1 cm.
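
Purely to illustrate the inputs and outputs involved, here is a toy Python sketch that displaces abdominal-wall vertices outward along their normals in proportion to an insufflation pressure and a per-vertex stiffness; the actual method solves a full biomechanical model with the SOFA engine, which this does not attempt to reproduce, and all values are arbitrary.

import numpy as np

def insufflate(vertices, normals, stiffness_kpa_per_mm, pressure_kpa=1.6,
               max_disp_mm=30.0):
    """Toy pneumoperitoneum model: each abdominal-wall vertex moves outward
    along its normal by pressure / stiffness, clamped to max_disp_mm.
    pressure_kpa defaults to about 12 mmHg, a common insufflation pressure."""
    disp = np.clip(pressure_kpa / np.asarray(stiffness_kpa_per_mm, float),
                   0.0, max_disp_mm)
    return np.asarray(vertices, float) + disp[:, None] * np.asarray(normals, float)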


International Symposium on Experimental Robotics | 2000

Towards Semi-autonomy in Laparoscopic Surgery through Vision and Force Feedback Control

Alexandre Krupa; Christophe Doignon; Jacques Gangloff; Michel de Mathelin; Luc Soler; Guillaume Morel

This paper presents ongoing research results on the development of automatic control modes for robotized laparoscopic surgery. We show how both force feedback and visual feedback can be used in a hybrid control scheme to autonomously perform basic surgical subtasks. Preliminary experimental results on an example clamping task are given.
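
A minimal sketch of a hybrid vision/force scheme of this general kind, assuming a diagonal selection matrix that splits the task directions between visual servoing and force regulation; the gains, the 6-D task parameterisation and the selection are illustrative assumptions, not the paper's controller.

import numpy as np

def hybrid_control_step(visual_error, force_error, selection, k_v=0.5, k_f=0.002):
    """One step of a hybrid vision/force scheme: a diagonal selection matrix S
    assigns each task direction either to visual servoing (S_ii = 1) or to
    force regulation (S_ii = 0), and the two proportional laws are combined."""
    S = np.diag(selection)
    I = np.eye(len(selection))
    v_vision = -k_v * (S @ visual_error)         # drives image features to goal
    v_force = -k_f * ((I - S) @ force_error)     # regulates contact force
    return v_vision + v_force                    # commanded Cartesian velocity

# Example: control x, y and rotations with vision; regulate force along z.
selection = np.array([1, 1, 0, 1, 1, 1])
v_cmd = hybrid_control_step(visual_error=np.array([5, -3, 0, 0.1, 0, 0.05]),
                            force_error=np.array([0, 0, 2.0, 0, 0, 0]),
                            selection=selection)
print(v_cmd)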


Archive | 2008

Pose Estimation and Feature Tracking for Robot Assisted Surgery with Medical Imaging

Christophe Doignon; Florent Nageotte; Benjamin Maurin; Alexandre Krupa

The field of vision-based robotics has been growing for more than three decades, and more and more complex 3-D scenes are within the reach of robot vision, thanks to a better understanding of scenes and to improvements in computing power and control theory. Applications such as medical robotics, mobile robotics, micro-robotic manipulation, agricultural automation, and observation by aerial or underwater robots require the integration of several research areas in computer vision and automatic control ([32, 19]). For the past two decades, medical robotics and computer-assisted surgery have gained increasing popularity. They have expanded the capabilities and comfort of both patients and surgeons in many kinds of interventions, such as local therapy, biopsies, and tumor detection and removal, with techniques like multi-modal registration, online visualization, simulators for specific interventions, and tracking. Medical robots provide significant help in surgery, mainly by improving positioning accuracy and particularly through intra-operative image guidance [36]. The main challenge in visual 3-D tracking for medical robotics is to extract the relevant video information from images acquired with endoscopes [5], ultrasound probes [17, 21] or scanners [35, 26] in order to evaluate the position and the velocity of objects of interest, which usually are natural or artificial landmarks attached to a surgical instrument.

Collaboration


Top co-authors of Christophe Doignon:

Luc Soler (University of Strasbourg)
M. de Mathelin (University of Strasbourg)
Philippe Zanne (University of Strasbourg)
Chadi Albitar (University of Strasbourg)