Holger Mönnich
Karlsruhe Institute of Technology
Publication
Featured research published by Holger Mönnich.
international conference on robotics and automation | 2011
Oliver Weede; Holger Mönnich; Beat Müller; Heinz Wörn
The endoscopic guidance system for minimally invasive surgery presented here autonomously aligns the laparoscopic camera with the end-effectors of the surgeon's instruments. It collects information on the movements of the instruments from previous interventions and can therefore predict them for autonomous guidance of the endoscopic camera. Knowledge is extracted by trajectory clustering, maximum-likelihood classification and a Markov model to predict states. Alternative movements in an ongoing intervention are also modeled. A first prototype of a robotic platform for minimally invasive surgery is described, which has two instrument arms, an autonomous robotic camera assistant and two haptic devices to control the instrument arms. The approach of long-term prediction and optimal camera positioning was tested in a phantom experiment, with a hit rate of over 89% for predicting the movement of the end-effectors. Including this prediction when computing the camera position leads to 29.2% fewer movements and improved visibility of the instruments.
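The Markov-model step of the abstract can be illustrated with a minimal sketch. The state names, transition estimation and the toy sequence below are invented for illustration, not taken from the paper.

```python
import numpy as np

def estimate_transitions(state_sequence, n_states):
    """Estimate a row-stochastic transition matrix from an observed
    sequence of clustered instrument states (Laplace smoothing)."""
    counts = np.ones((n_states, n_states))
    for a, b in zip(state_sequence, state_sequence[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def predict_next(transition, current_state):
    """Most likely next state given the current one."""
    return int(np.argmax(transition[current_state]))

# Toy example: three instrument-pose clusters, cyclic observation sequence.
seq = [0, 1, 2, 0, 1, 2, 0, 1]
T = estimate_transitions(seq, 3)
next_state = predict_next(T, 0)  # state 0 was always followed by state 1
```

In the paper the predicted state would feed the camera-positioning optimization; here the prediction is just the argmax over one transition row.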
robotics and biomimetics | 2011
Holger Mönnich; Philip Nicolai; Tim Beyl; Jörg Raczkowsky; Heinz Wörn
This paper introduces the OP:Sense system, which is able to track objects and humans. To reach this goal, a complete surgical robotic system is built up that can be used for telemanipulation as well as for autonomous tasks, e.g. cutting or needle insertion. Two KUKA lightweight robots, which feature seven DOF and allow variable stiffness and damping due to an integrated impedance controller, are used as actuators. The system includes two haptic input devices that provide haptic feedback in telemanipulation mode as well as virtual fixtures to guide the surgeon during telemanipulation. The supervision system consists of a marker-based optical tracking system, Photonic Mixer Device (PMD) cameras and RGB-D cameras (Microsoft Kinect). A simulation environment is constantly updated with the model of the environment, the model of the robots and tracked objects, the occupied space, and tracked models of humans.
international conference on control and automation | 2009
Daniel Stein; Holger Mönnich; Jörg Raczkowsky; Heinz Wörn
This paper presents a visual servoing application with a fixed-camera configuration for CO2 laser osteotomy. A multi-camera system from ART is used to track the positions of the robot and a skull via marker spheres attached to both. A CT scan of the skull is performed and segmented to acquire a 3D model, inside which the robot position for the laser ablation is planned. A 60 Hz update rate for the tracking system and an 80 Hz update rate for the robot are sufficient to reposition the robot relative to the skull in CO2 laser ablation scenarios, so that ablation can continue during motion, e.g. breathing motion. The accuracy of the lightweight robot is also increased by the additional supervision of the optical tracking system; the accuracy improvement was measured with a FARO measurement arm. A visual servoing control scheme is presented, and the demonstrator shows a working visual servoing application for laser osteotomy.
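One servo cycle of the kind described above can be sketched as a proportional correction toward a target that is re-expressed in tracker coordinates each time the skull moves. The gain, poses and frame conventions are assumptions, not the paper's actual controller.

```python
import numpy as np

def servo_step(target_in_skull, skull_pose, robot_pos, gain=0.5):
    """One proportional visual-servoing step.

    target_in_skull: planned ablation point in the skull frame (3-vector)
    skull_pose:      (R, t) rigid transform of the skull in tracker coordinates
    robot_pos:       current robot TCP position in tracker coordinates
    """
    R, t = skull_pose
    target_world = R @ target_in_skull + t   # moving target follows the skull
    error = target_world - robot_pos
    return robot_pos + gain * error          # commanded position this cycle

# Toy example: skull translated by 10 mm along x, no rotation.
pose = (np.eye(3), np.array([10.0, 0.0, 0.0]))
cmd = servo_step(np.zeros(3), pose, np.zeros(3))
```

Repeating the step at the tracking rate (60 Hz here) drives the error to zero even while the skull keeps moving, which is the essence of tracking-based repositioning during breathing motion.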
international workshop on advanced motion control | 2012
Holger Mönnich; Heinz Wörn; Daniel Stein
OP:Sense is a research platform developed for applications in robot-assisted surgery. The system can be used for automatic positioning tasks, such as CO2 laser cutting or conventional bone cutting, as well as for highly accurate positioning, such as needle placement in biopsies. Due to the flexibility of the lightweight robots used, the system is also suitable for minimally invasive surgery. In addition, new combinations of MIRS techniques and automatic positioning become possible, i.e. a semi-autonomous usage of the surgical robotic system. This paper focuses on the control system developed for OP:Sense, which enables the system to be used in a wide range of surgical applications.
international conference on robotics and automation | 2010
Holger Mönnich; Daniel Stein; Jörg Raczkowsky; Heinz Wörn
This paper describes the content of the video for ICRA 2010. The approach presented in the video is a completely automatic registration of all devices needed for robot-guided CO2 bone processing. The system consists of an optical tracking system, a lightweight robot and a scan head. Because a standard point-to-point transformation between the optical tracking system and the robot is inadequate, a dedicated registration algorithm was developed and used. It allows automatic registration by moving the robot to different positions and storing the position of a marker body attached to the robot TCP. The scan head is registered to the optical tracking system or to the robot using a camera that tracks the pilot laser. Only the patient must be registered manually. As a result, the robot moves to the target position on the bone defined inside the segmented 3D CT dataset. With the known registration between robot and tracking system, the robot can also be controlled via visual servoing.
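The paper's dedicated registration algorithm is not spelled out in the abstract; as a baseline sketch of the general step (fitting a rigid transform between robot TCP positions and the corresponding tracked marker positions), here is the standard Kabsch/SVD solution on synthetic data.

```python
import numpy as np

def fit_rigid(P, Q):
    """Find R, t minimizing ||R @ P_i + t - Q_i|| over corresponding rows
    (Kabsch algorithm, with reflection correction)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

# Synthetic check: rotate and translate sampled TCP positions,
# then recover the transform from the correspondences.
rng = np.random.default_rng(0)
P = rng.normal(size=(10, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true
R_est, t_est = fit_rigid(P, Q)
```

Moving the robot to several poses and storing the tracked marker position at each pose yields exactly such a pair of corresponding point sets; the authors' special algorithm presumably improves on this baseline for their setup.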
robotics and biomimetics | 2011
Philip Nicolai; Tim Beyl; Holger Mönnich; Jörg Raczkowsky; Heinz Wörn
In this video we show the capabilities of the OP:Sense system. OP:Sense is an integrated rapid-application-development environment for robot-assisted surgery. It mainly aims at MIRS and at open-head neurosurgery, as OP:Sense is developed for the EU projects FP7 SAFROS and FP7 ACTIVE, which target these use cases. Besides the framework itself, OP:Sense also integrates applications; it is thus not only a framework but also a system that demonstrates how robots can be used for surgical interventions. The core of the system is the ACE TAO framework [1] [2], which implements real-time CORBA for communication between distributed systems. We built CORBA-based interfaces for use in Matlab and Simulink, and there are also modules for 3D Slicer as well as applications for the control of devices such as robots or surgical tools. As Matlab is a powerful tool for rapid application development, applications can be developed faster than with C++ or similar programming languages. We use Matlab for setting up our environment and for tasks and computations that do not need to run in real time; for real-time tasks such as telemanipulation we use Simulink models.
Archive | 2009
Holger Mönnich; Daniel Stein; Jörg Raczkowsky; Heinz Wörn
In this paper a system for laser osteotomy on the human skull is described. The system consists of a KUKA lightweight robot, a scan head, a laser distance sensor, and a multi-camera tracking system. To measure the accuracy of the robot, a FARO measurement arm is used. The measurement shows that the KUKA robot has very good repeatability, although its absolute accuracy is not as good. To overcome this problem, the multi-camera system is used to control the position of the robot. A CT scan of the skull is taken, segmented, and used to calculate the position for the robot. The cutting is performed with a CO2 laser that is delivered through an articulated mirror arm and controlled by the scan head attached to the robot.
World Congress on Medical Physics and Biomedical Engineering : Surgery, Minimal Invasive Interventions, Endoscopy and Image Guided Therapy, Munich, Germany, 7th - 12th September 2009. Ed.: O. Dössel | 2009
Holger Mönnich; Daniel Stein; Jörg Raczkowsky; Heinz Wörn
In this paper a way to use a KUKA lightweight robot (LWR) for intuitive planning of surgical interventions is presented. The robot can be guided by hand, and this is used to plan the trajectory for bone cutting on the skull: hand-guided movements are used to record the coordinates of the trajectory. The planning system is built for a laser osteotomy system that includes a pilot laser. While the pilot laser shows the position of the point on the real skull, the same position is shown on the model inside the visualization program. In this way the vertex points of the trajectories are selected, and the computed trajectory is shown on the model.
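The recording step above amounts to sampling TCP positions while the robot is guided by hand and thinning them into trajectory waypoints. The distance threshold and sampling below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def thin_waypoints(samples, min_dist=2.0):
    """Keep only samples at least min_dist (mm) away from the
    previously kept point, yielding the planned cutting waypoints."""
    kept = [samples[0]]
    for p in samples[1:]:
        if np.linalg.norm(p - kept[-1]) >= min_dist:
            kept.append(p)
    return np.array(kept)

# Toy example: 11 hand-guided samples along x, 1 mm apart,
# thinned to waypoints 2 mm apart.
track = np.array([[x, 0.0, 0.0] for x in range(11)], dtype=float)
wp = thin_waypoints(track)
```

Each kept waypoint would correspond to a vertex point shown both by the pilot laser on the real skull and on the visualized model.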
Studies in health technology and informatics | 2012
Tim Beyl; Philip Nicolai; Holger Mönnich; Jörg Raczkowsky; Heinz Wörn
In current research, haptic feedback in robot-assisted interventions plays an important role. However, most approaches to haptic feedback only map the current forces at the surgical instrument to the haptic input devices, whereas surgeons demand a combination of medical imaging and telemanipulated robotic setups. In this paper we describe how this feature is integrated into our robotic research platform OP:Sense. The proposed method allows the automatic transfer of segmented imaging data to the haptic renderer and therefore allows enriching the haptic feedback with virtual fixtures based on imaging data. Anatomical structures are extracted from pre-operatively generated medical images, or virtual walls are defined by the surgeon inside the imaging data. Combining real forces with virtual fixtures can guide the surgeon to the regions of interest and helps prevent damage to critical structures inside the patient. We believe that the combination of medical imaging and telemanipulation is a crucial step for the next generation of MIRS systems.
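A common way to blend real forces with a virtual wall is to add a spring-like repulsive term once the tool crosses the wall plane. The stiffness, geometry and function names below are invented for illustration; the paper's actual haptic renderer is not specified in the abstract.

```python
import numpy as np

def rendered_force(measured, tool_pos, wall_point, wall_normal, k=200.0):
    """Measured instrument force plus a virtual-fixture penalty that
    pushes the tool back once it penetrates the forbidden wall."""
    n = wall_normal / np.linalg.norm(wall_normal)
    depth = np.dot(wall_point - tool_pos, n)   # > 0 once the wall is crossed
    fixture = k * max(depth, 0.0) * n          # spring-like repulsion (N)
    return measured + fixture
```

For a wall at z = 0 with normal +z and stiffness 200 N/m, a tool penetrating 1 cm would feel an extra 2 N pushing it back out; outside the wall the measured force passes through unchanged.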
electronic healthcare | 2009
Holger Mönnich; Jörg Raczkowsky; Heinz Wörn
This paper describes a model-checking approach for robot-guided surgical interventions. The execution plan is modeled with a workflow editor as a Petri net. The net is then checked for correct structure and syntax against an XML Schema. Petri nets allow checking specific properties, such as soundness, but the possibility to verify the net with runtime variables is still missing. For this reason, model checking is introduced into the architecture. The Petri net is transformed into the input language of NuSMV2, an open-source model-checking tool. Conditions are modeled in temporal logic, and these specifications are verified with the model checker. This makes it possible to prove the correct initialization of hardware devices and to find possible runtime errors. The workflow editor and model-checking capabilities are developed for a demonstrator consisting of a KUKA lightweight robot, a laser distance sensor and ART tracking for CO2 laser ablation on bone.
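The translation step can be sketched for the simple case of a 1-safe net with one transition: each place becomes a boolean NuSMV variable, and the transition becomes guarded next-state assignments. The net, the naming, and the single-transition restriction (several transitions touching one place would need merged case branches) are all assumptions; real output would be fed to the NuSMV tool.

```python
def petri_to_smv(places, transitions, initial):
    """Encode a 1-safe Petri net (single transition per place assumed)
    as a NuSMV module string: places -> boolean VARs, firing -> ASSIGN."""
    lines = ["MODULE main", "VAR"]
    lines += [f"  {p} : boolean;" for p in places]
    lines.append("ASSIGN")
    for p in places:
        lines.append(f"  init({p}) := {'TRUE' if p in initial else 'FALSE'};")
    for pre, post in transitions:
        guard = " & ".join(pre)          # fire when all input places marked
        for p in pre:                    # token leaves the input places
            lines.append(f"  next({p}) := case {guard} : FALSE; TRUE : {p}; esac;")
        for p in post:                   # token arrives at the output places
            lines.append(f"  next({p}) := case {guard} : TRUE; TRUE : {p}; esac;")
    return "\n".join(lines)

smv = petri_to_smv(["idle", "lasering"],
                   [(["idle"], ["lasering"])],
                   initial={"idle"})
```

A temporal-logic specification such as `SPEC AG(lasering -> AF idle)` could then be appended and checked by NuSMV, which is the runtime verification the workflow editor alone cannot provide.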