Alexandre Krupa
University of Strasbourg
Publications
Featured research published by Alexandre Krupa.
International Conference on Robotics and Automation | 2003
Alexandre Krupa; J. Gangloff; C. Doignon; M. de Mathelin; Guillaume Morel; Joel Leroy; Luc Soler; Jacques Marescaux
This paper presents a robotic vision system that automatically retrieves and positions surgical instruments during robotized laparoscopic surgical operations. The instrument is mounted on the end-effector of a surgical robot which is controlled by visual servoing. The goal of the automated task is to safely bring the instrument to a desired three-dimensional location from an unknown or hidden position. Light-emitting diodes are attached to the tip of the instrument, and a specific instrument holder fitted with optical fibers is used to project laser dots on the surface of the organs. These optical markers are detected in the endoscopic image and make it possible to localize the instrument with respect to the scene. The instrument is recovered and centered in the image plane by means of a visual servoing algorithm using feature errors in the image. With this system, the surgeon can specify a desired relative position between the instrument and the pointed organ. The relationship between the velocity screw of the surgical instrument and the velocity of the markers in the image is estimated online and, for safety reasons, a multistage servoing scheme is proposed. Our approach has been successfully validated in a real surgical environment by performing experiments on living tissues in the surgical training room of the Institut de Recherche sur les Cancers de l'Appareil Digestif (IRCAD), Strasbourg, France.
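The abstract does not spell out the control law, but image-based visual servoing of the kind described is classically written as v = -λ L⁺(s - s*). The following sketch illustrates that generic law only; the gain, the feature layout and the Jacobian values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def visual_servo_step(s, s_star, L_hat, gain=0.5):
    """One iteration of a generic image-based visual servoing law.

    s      : current image features (e.g. marker coordinates), shape (m,)
    s_star : desired image features, shape (m,)
    L_hat  : estimated interaction matrix mapping the instrument velocity
             screw (6,) to the feature velocities (m,)
    Returns the commanded velocity screw (vx, vy, vz, wx, wy, wz).
    """
    e = s - s_star                          # feature error in the image
    # Proportional law driving the error exponentially toward zero.
    return -gain * np.linalg.pinv(L_hat) @ e

# Illustrative call with two point markers (4 image features).
s      = np.array([120.0, 80.0, 200.0, 90.0])          # measured marker pixels
s_star = np.array([160.0, 120.0, 240.0, 130.0])        # desired marker pixels
L_hat  = np.random.default_rng(0).normal(size=(4, 6))  # stand-in estimate
print(visual_servo_step(s, s_star, L_hat))
```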
IEEE Transactions on Robotics | 2010
Rafik Mebarki; Alexandre Krupa; François Chaumette
This paper presents a visual-servoing method that is based on 2-D ultrasound (US) images. The main goal is to guide a robot actuating a 2-D US probe in order to reach a desired cross-section image of an object of interest. The method we propose allows the control of both in-plane and out-of-plane probe motions. Its feedback visual features are combinations of moments extracted from the observed image. The exact analytical form of the interaction matrix that relates the image-moments time variation to the probe velocity is developed, and six independent visual features are proposed to control the six degrees of freedom of the robot. In order to endow the system with the capability of automatically interacting with objects of unknown shape, a model-free visual servoing scheme is developed. To this end, we propose an efficient online estimation method to identify the parameters involved in the interaction matrix. Results obtained in both simulations and experiments validate the methods presented in this paper and show their robustness to different errors and perturbations, especially those inherent to the noisy US images.
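The abstract mentions an efficient online estimation of the interaction-matrix parameters without detailing it. A generic Broyden-style rank-one correction, sketched below as a hedged placeholder (the gain and the regularization term are assumptions, not the paper's estimator), conveys the idea of refining the estimate from measured feature changes and applied probe velocities.

```python
import numpy as np

def broyden_update(L_hat, ds, v, dt, alpha=0.1):
    """Rank-one (Broyden-style) correction of an estimated interaction matrix.

    L_hat : current estimate of the interaction matrix, shape (m, 6)
    ds    : measured change of the visual features over the last step, (m,)
    v     : probe velocity screw applied during that step, shape (6,)
    dt    : sampling period
    alpha : update gain in (0, 1]
    """
    predicted = L_hat @ v * dt      # feature change predicted by the current model
    innovation = ds - predicted     # discrepancy with the measurement
    denom = (v @ v) * dt + 1e-9     # regularized normalization
    return L_hat + alpha * np.outer(innovation, v) / denom
```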
International Conference on Robotics and Automation | 2002
Alexandre Krupa; Guillaume Morel; M. de Mathelin
In this paper, we present a new solution to laparoscopic manipulation based on force feedback control. This method makes it possible both to explicitly control the forces applied to the patient through the trocar and to precisely control the position of the surgical instrument. It does not require any geometrical model of the operative environment, nor any fine robot base placement prior to the instrument insertion. Different control strategies, involving different kinds of sensory equipment, are proposed. They are experimentally validated on a laboratory apparatus.
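The abstract combines explicit force control at the trocar with position control of the instrument. A common way to express such a combination is through a selection matrix that splits the task axes, as in the minimal sketch below; the gains, the 0/1 selection matrix and the velocity-level formulation are illustrative assumptions, not the authors' controller.

```python
import numpy as np

def hybrid_force_position_command(f_meas, f_des, x_err, S, kf=0.002, kp=1.0):
    """Velocity command mixing force regulation and position control.

    f_meas, f_des : measured / desired wrench at the trocar, shape (6,)
    x_err         : instrument pose error expressed as a 6-vector
    S             : diagonal 0/1 selection matrix, 1 = force-controlled axis
    Returns a commanded Cartesian velocity screw, shape (6,).
    """
    v_force = kf * (f_des - f_meas)   # push the measured wrench toward its target
    v_pos   = kp * x_err              # proportional position correction
    return S @ v_force + (np.eye(6) - S) @ v_pos

# Example: regulate lateral forces (x, y) at the trocar, position-control the rest.
S = np.diag([1.0, 1.0, 0.0, 0.0, 0.0, 0.0])
cmd = hybrid_force_position_command(
    f_meas=np.array([0.8, -0.3, 0.0, 0.0, 0.0, 0.0]),
    f_des=np.zeros(6),
    x_err=np.array([0.0, 0.0, 0.01, 0.0, 0.0, 0.02]),
    S=S,
)
print(cmd)
```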
International Symposium on Experimental Robotics | 2000
Alexandre Krupa; Christophe Doignon; Jacques Gangloff; Michel de Mathelin; Luc Soler; Guillaume Morel
This paper shows ongoing research results on the development of automatic control modes for robotized laparoscopic surgery. We show how both force feedback and visual feedback can be used in a hybrid control scheme to autonomously perform basic surgical subtasks. Preliminary experimental results on an example clamping task are given.
Archive | 2008
Christophe Doignon; Florent Nageotte; Benjamin Maurin; Alexandre Krupa
The field of vision-based robotics has been growing steadily for more than three decades, and more and more complex 3-D scenes are within robot vision capabilities thanks to a better understanding of the scenes and to improvements in computing power and control theory. Applications such as medical robotics, mobile robotics, micro-robotic manipulation, agricultural automation or observation by aerial or underwater robots require the integration of several research areas in computer vision and automatic control ([32, 19]). For the past two decades, medical robots and computer-assisted surgery have gained increasing popularity. They have expanded the capabilities and comfort of both patients and surgeons in many kinds of interventions such as local therapy, biopsies, tumor detection and removal, with techniques like multi-modal registration, online visualization, simulators for specific interventions and tracking. Medical robots provide significant help in surgery, mainly through improved positioning accuracy and particularly through intra-operative image guidance [36]. The main challenge in visual 3-D tracking for medical robotic purposes is to extract the relevant video information from images acquired with endoscopes [5], ultrasound probes [17, 21] or scanners [35, 26] so as to evaluate the position and the velocity of objects of interest, which usually are natural or artificial landmarks attached to a surgical instrument.
Intelligent Robots and Systems | 2002
Alexandre Krupa; C. Doignon; J. Gangloff; M. de Mathelin
In this paper, we address the problem of controlling the motion of a surgical instrument close to an unknown organ surface by visual servoing in the context of robotized laparoscopic surgery. To achieve this goal, a visual servoing algorithm is developed that combines feature errors in the image and errors in depth measurements. The relationship between the velocity screw of the surgical instrument, the depth and the motion field is defined, and a two-stage servoing scheme is proposed. In order to measure the orientation and the depth of the instrument with respect to the organ, a laser dot pattern is projected on the organ surface and optical markers are attached to the instrument. Our work has been successfully validated with a surgical robot by performing experiments on living tissues in the surgical training room of IRCAD.
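The paper combines image feature errors with depth measurements in a single servoing scheme. One generic way of stacking such heterogeneous features is sketched below; the shapes and the final proportional law are assumptions for illustration, not the exact two-stage scheme of the paper.

```python
import numpy as np

def stacked_task(s, s_star, d, d_star, L_img, L_depth, gain=0.5):
    """Stack 2-D image feature errors with a depth error into one servoing task.

    s, s_star  : current / desired image features, shape (m,)
    d, d_star  : current / desired instrument-to-organ depth (scalars)
    L_img      : interaction matrix of the image features, shape (m, 6)
    L_depth    : row relating the velocity screw to the depth rate, shape (1, 6)
    Returns the commanded velocity screw obtained from the stacked task.
    """
    e = np.concatenate([s - s_star, [d - d_star]])   # heterogeneous error vector
    L = np.vstack([L_img, L_depth])                  # corresponding stacked matrix
    return -gain * np.linalg.pinv(L) @ e             # usual proportional law
```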
International Conference on Robotics and Automation | 2002
Alexandre Krupa; J. Gangloff; M. de Mathelin; C. Doignon; Guillaume Morel; Luc Soler; Joel Leroy; Jacques Marescaux
This paper presents a robotic vision system that automatically retrieves and positions surgical instruments in robotized laparoscopic surgery. The surgical instrument is mounted on the end-effector of a surgical robot which can be controlled by automatic visual feedback. The goal of the automated task is to bring the instrument to a desired location from an unknown or hidden position. To achieve this task, a special instrument-holder is designed with optical fibers and collimators. This instrument-holder projects laser dot patterns onto the organ surface, which are seen in the endoscopic images. The instrument is then retrieved and centered in the image plane using a visual servoing algorithm. With this system, the surgeon can also specify a desired position for the instrument in the image. Our approach is successfully validated in a real surgical environment by performing experiments on living animals in the surgical training room of IRCAD.
Advanced Robotics | 2004
Alexandre Krupa; Guillaume Morel; Michel de Mathelin
In this paper, we present a new solution to laparoscopic manipulation based on force-feedback control. This method allows us both to explicitly control the forces applied to the patient through the trocar and to precisely control the position of the surgical instrument. It does not require any geometrical model of the operative environment nor any fine robot base placement prior to the instrument insertion. Different adaptive control strategies involving different kinds of sensory equipment are proposed. These strategies are experimentally validated on a laboratory apparatus. An experiment is also presented where a laparoscope held by the robot's arm tracks a target through visual servoing.
Medical Image Computing and Computer-Assisted Intervention | 2002
Alexandre Krupa; Michel de Mathelin; Christophe Doignon; Jacques Gangloff; Guillaume Morel; Luc Soler; Joel Leroy; Jacques Marescaux
This paper presents a robotic vision system that automatically retrieves and positions surgical instruments in the robotized laparoscopic surgical environment. The goal of the automated task is to bring the instrument from an unknown or hidden position to a desired location specified by the surgeon on the endoscopic image. To achieve this task, a special instrument-holder is designed that projects laser dot patterns onto the organ surface, which are seen in the endoscopic images. The surgical instrument is then positioned using automatic visual servoing based on the endoscopic image. Our approach is successfully validated in a real surgical environment by performing experiments on living pigs in the surgical training room of IRCAD.
Advanced Robotics | 2006
Alexandre Krupa; François Chaumette
A new visual servoing technique based on two-dimensional (2-D) ultrasound (US) images is proposed in order to control the motion of a US probe held by a medical robot. In contrast to a standard camera, which provides a projection of the three-dimensional (3-D) scene onto a 2-D image, US information lies strictly in the observation plane of the probe, and consequently visual servoing techniques have to be adapted. In this paper the coupling between the US probe and a motionless crossed string phantom used for probe calibration is modeled. A robotic task is then developed which consists of positioning the US image on the intersection point of the crossed string phantom while moving the probe to different orientations. The goal of this task is to optimize the procedure of spatial parameter calibration of 3-D US systems.
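The abstract does not detail the visual features used for this calibration task, but one simple way to express a positioning error on a crossed-string phantom is sketched below; this is an illustrative construction under assumed image measurements, not the paper's formulation.

```python
import numpy as np

def phantom_centering_error(p1, p2, target):
    """Positioning error for a 2-D US plane on a crossed-string phantom.

    p1, p2 : image coordinates of the two string cross-sections, shape (2,)
    target : desired image location of the crossing point, shape (2,)
    The error vanishes when both string echoes coincide at the target,
    i.e. when the image plane passes through the physical crossing point.
    """
    separation = p1 - p2                        # zero when the plane contains the crossing
    midpoint_offset = 0.5 * (p1 + p2) - target  # centers the echoes at the target
    return np.concatenate([separation, midpoint_offset])
```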