Stefanie Speidel
Karlsruhe Institute of Technology
Publications
Featured research published by Stefanie Speidel.
Medical Image Analysis | 2013
Lena Maier-Hein; Peter Mountney; Adrien Bartoli; Haytham Elhawary; Daniel S. Elson; Anja Groch; Andreas Kolb; Marcos A. Rodrigues; Jonathan M. Sorger; Stefanie Speidel; Danail Stoyanov
One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion of technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
IEEE Transactions on Medical Imaging | 2014
Lena Maier-Hein; Anja Groch; A. Bartoli; Sebastian Bodenstedt; G. Boissonnat; Ping-Lin Chang; Neil T. Clancy; Daniel S. Elson; S. Haase; E. Heim; Joachim Hornegger; Pierre Jannin; Hannes Kenngott; Thomas Kilgus; B. Muller-Stich; D. Oladokun; Sebastian Röhl; T. R. Dos Santos; Heinz Peter Schlemmer; Alexander Seitel; Stefanie Speidel; Martin Wagner; Danail Stoyanov
Intra-operative imaging techniques for obtaining the shape and morphology of soft-tissue surfaces in vivo are a key enabling technology for advanced surgical systems. Different optical techniques for 3-D surface reconstruction in laparoscopy have been proposed; however, no quantitative and comparative validation has been performed so far. Furthermore, robustness of the methods to clinically important factors like smoke or bleeding has not yet been assessed. To address these issues, we have formed a joint international initiative with the aim of validating different state-of-the-art passive and active reconstruction methods in a comparative manner. In this comprehensive in vitro study, we investigated reconstruction accuracy using different organs with various shapes and textures and also tested reconstruction robustness with respect to a number of factors such as the pose of the endoscope as well as the amount of blood or smoke present in the scene. The study suggests complementary advantages of the different techniques with respect to accuracy, robustness, point density, hardware complexity, and computation time. While reconstruction accuracy under ideal conditions was generally high, robustness is a remaining issue to be addressed. Future work should include sensor fusion and in vivo validation studies in a specific clinical context. To trigger further research in surface reconstruction, stereoscopic data of the study will be made publicly available at www.open-CAS.com upon publication of the paper.
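As a side note for readers, reconstruction accuracy in studies of this kind is commonly reported as the distance between the reconstructed points and a ground-truth scan; the sketch below illustrates one such metric (an illustrative example with synthetic placeholder data, not the evaluation code used in the study):

```python
# Illustrative sketch: scoring a reconstructed point cloud against a ground-truth
# surface scan via nearest-neighbour distances. Data and function name are placeholders.
import numpy as np
from scipy.spatial import cKDTree

def reconstruction_error(reconstructed_pts, ground_truth_pts):
    """Mean and RMS point-to-point distance (in the units of the input, e.g. mm)."""
    tree = cKDTree(ground_truth_pts)          # spatial index over the reference surface
    dists, _ = tree.query(reconstructed_pts)  # closest ground-truth point for each reconstructed point
    return dists.mean(), np.sqrt((dists ** 2).mean())

if __name__ == "__main__":
    gt = np.random.rand(5000, 3) * 100.0                     # synthetic ground-truth points (mm)
    rec = gt[:2000] + np.random.normal(0, 0.5, (2000, 3))    # synthetic noisy reconstruction
    mean_err, rms_err = reconstruction_error(rec, gt)
    print(f"mean error: {mean_err:.2f} mm, RMS: {rms_err:.2f} mm")
```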
Medical Physics | 2012
Sebastian Röhl; Sebastian Bodenstedt; Stefan Suwelack; Hannes Kenngott; Beat P. Müller-Stich; Rüdiger Dillmann; Stefanie Speidel
PURPOSE In laparoscopic surgery, soft-tissue deformations substantially change the surgical site, thus impeding the use of preoperative planning during intraoperative navigation. Extracting depth information from endoscopic images and building a surface model of the surgical field of view is one way to represent this constantly deforming environment. The information can then be used for intraoperative registration. Stereo reconstruction is a typical problem within computer vision. However, most of the available methods do not fulfill the specific requirements of a minimally invasive setting, such as the need for real-time performance, view-dependent specular reflections, and large curved areas with partly homogeneous or periodic textures and occlusions. METHODS In this paper, the authors present an approach toward intraoperative surface reconstruction based on stereo endoscopic images, comprising correspondence analysis, disparity correction and refinement, 3D reconstruction, point cloud smoothing, and meshing. Real-time performance is achieved by implementing the algorithms on the GPU. The authors also present a new hybrid CPU-GPU algorithm that unifies the advantages of the CPU and GPU versions. RESULTS In a comprehensive evaluation using in vivo data, in silico data from the literature, and virtual data from a newly developed simulation environment, the CPU, GPU, and hybrid CPU-GPU versions of the surface reconstruction are compared to a CPU and a GPU algorithm from the literature. The proposed approach toward intraoperative surface reconstruction can be conducted in real time depending on the image resolution (20 fps for the GPU version and 14 fps for the hybrid CPU-GPU version at a resolution of 640 × 480). It is robust to homogeneous regions without texture, large image changes, noise, and errors from camera calibration, and it reconstructs the surface with submillimeter accuracy. In all experiments within the simulation environment, the mean distance to ground-truth data is between 0.05 and 0.6 mm for the hybrid CPU-GPU version. The hybrid CPU-GPU algorithm clearly outperforms its CPU and GPU counterparts (mean distance reduction of 26% and 45%, respectively, for the experiments in the simulation environment). CONCLUSIONS The proposed approach for surface reconstruction is fast, robust, and accurate. It can represent changes in the intraoperative environment and can be used to adapt a preoperative model to the surgical site by registering the two models.
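The processing chain described in the abstract (correspondence analysis, disparity refinement, 3D reconstruction) can be illustrated with off-the-shelf stereo building blocks; the following is a minimal sketch using standard OpenCV calls, not the authors' GPU or hybrid CPU-GPU implementation, and the refinement step is reduced to a simple median filter:

```python
# Minimal stereo-endoscopic reconstruction sketch (illustrative only).
# Assumes rectified stereo frames and a 4x4 reprojection matrix Q from stereo calibration.
import cv2
import numpy as np

def reconstruct_surface(left_bgr, right_bgr, Q):
    left = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)

    # Correspondence analysis: semi-global block matching yields a disparity map.
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0

    # Disparity refinement: median filtering as a crude stand-in for the paper's
    # disparity correction and refinement steps.
    disparity = cv2.medianBlur(disparity, 5)

    # 3D reconstruction: reproject disparities to metric 3D points via the calibration.
    points = cv2.reprojectImageTo3D(disparity, Q)
    mask = disparity > disparity.min()
    return points[mask]   # point cloud; smoothing and meshing would follow in a full pipeline
```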
international conference on medical imaging and augmented reality | 2006
Stefanie Speidel; Michael Delles; Carsten N. Gutt; Rüdiger Dillmann
Intraoperative assistance systems aim to improve the quality of surgery and enhance the surgeon's capabilities. Preferable would be a system that provides support depending on the surgical context and the surgical skills being performed. Therefore, the automated analysis and recognition of surgical skills during an intervention is necessary. In this paper, a robust tracking of instruments in minimally invasive surgery based on endoscopic image sequences is presented. The instruments were not modified, and the tracking was tested on sequences acquired during a real intervention. The generated trajectories of the instruments provide information which can be further used for surgical gesture interpretation.
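For illustration, marker-less instrument tracking of the kind described here can be sketched as per-frame segmentation followed by centroid extraction; the example below is a generic simplification, not the authors' algorithm, and the HSV thresholds and video path are assumptions:

```python
# Generic sketch of marker-less instrument tracking on an endoscopic sequence:
# per-frame colour-based segmentation followed by centroid extraction.
import cv2
import numpy as np

def track_instrument(video_path="endoscopy.avi"):   # hypothetical file name
    cap = cv2.VideoCapture(video_path)
    trajectory = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # Metallic instruments tend to appear grey: low saturation, medium-to-high brightness.
        mask = cv2.inRange(hsv, (0, 0, 60), (180, 60, 255))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        m = cv2.moments(mask)
        if m["m00"] > 0:
            trajectory.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))  # region centroid
    cap.release()
    return trajectory  # 2D trajectory usable for gesture interpretation
```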
medical image computing and computer assisted intervention | 2014
Lena Maier-Hein; Sven Mersmann; Daniel Kondermann; Sebastian Bodenstedt; Alexandro Sanchez; Christian Stock; Hannes Kenngott; Mathias Eisenmann; Stefanie Speidel
Machine learning algorithms are gaining increasing interest in the context of computer-assisted interventions. One of the bottlenecks so far, however, has been the availability of training data, typically generated by medical experts with very limited resources. Crowdsourcing is a new trend that is based on outsourcing cognitive tasks to many anonymous untrained individuals from an online community. In this work, we investigate the potential of crowdsourcing for segmenting medical instruments in endoscopic image data. Our study suggests that (1) segmentations computed from annotations of multiple anonymous non-experts are comparable to those made by medical experts and (2) training data generated by the crowd is of the same quality as that annotated by medical experts. Given the speed of annotation, scalability and low costs, this implies that the scientific community might no longer need to rely on experts to generate reference or training data for certain applications. To trigger further research in endoscopic image processing, the data used in this study will be made publicly available.
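For illustration, one common way to fuse several crowd annotations of the same frame into a single reference segmentation is pixel-wise majority voting; the snippet below sketches this fusion step (the paper does not necessarily use exactly this scheme):

```python
# Fuse multiple binary instrument masks from different crowd workers by majority vote.
import numpy as np

def majority_vote(annotations):
    """annotations: list of binary masks (H x W, values 0/1) from different workers."""
    stack = np.stack(annotations).astype(np.float32)
    # A pixel is labelled foreground if at least half of the workers marked it.
    return (stack.mean(axis=0) >= 0.5).astype(np.uint8)
```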
Medical Physics | 2014
Stefan Suwelack; Sebastian Röhl; Sebastian Bodenstedt; Daniel Reichard; Rüdiger Dillmann; Thiago R. Dos Santos; Lena Maier-Hein; Martin Wagner; Josephine Wünscher; Hannes Kenngott; Beat Müller; Stefanie Speidel
PURPOSE Soft-tissue deformations can severely degrade the validity of preoperative planning data during computer-assisted interventions. Intraoperative imaging such as stereo endoscopic, time-of-flight, or laser range scanner data can be used to compensate for these movements. In this context, the intraoperative surface has to be matched to the preoperative model. The shape matching is especially challenging in the intraoperative setting due to noisy sensor data, only partially visible surfaces, ambiguous shape descriptors, and real-time requirements. METHODS A novel physics-based shape matching (PBSM) approach to register intraoperatively acquired surface meshes to preoperative planning data is proposed. The key idea of the method is to describe the nonrigid registration process as an electrostatic-elastic problem, in which an electrically charged elastic body (the preoperative model) slides into an oppositely charged rigid shape (the intraoperative surface). It is shown that the corresponding energy functional can be efficiently solved using the finite element (FE) method. It is also demonstrated how PBSM can be combined with rigid registration schemes for robust nonrigid registration of arbitrarily aligned surfaces. Furthermore, it is shown how the approach can be combined with landmark-based methods, and its application to image guidance in laparoscopic interventions is outlined. RESULTS A thorough analysis of the PBSM scheme based on in silico and phantom data is presented. Simulation studies on several liver models show that the approach is robust to the initial rigid registration and to parameter variations. The studies also reveal that the method achieves submillimeter registration accuracy (mean error between 0.32 and 0.46 mm). An unoptimized, single-core implementation of the approach achieves near real-time performance (2 TPS, 7-19 s total registration time). It outperforms established methods in terms of speed and accuracy. Furthermore, it is shown that the method is able to accurately match partial surfaces. Finally, a phantom experiment demonstrates how the method can be combined with stereo endoscopic imaging to provide nonrigid registration during laparoscopic interventions. CONCLUSIONS The PBSM approach for surface matching is fast, robust, and accurate. As the technique is based on a preoperative volumetric FE model, it naturally recovers the position of volumetric structures (e.g., tumors and vessels). It can not only be used to recover soft-tissue deformations from intraoperative surface models but can also be combined with landmark data from volumetric imaging. In addition to applications in laparoscopic surgery, the method might prove useful in other areas that require soft-tissue registration from sparse intraoperative sensor data (e.g., radiation therapy).
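The electrostatic-elastic idea can be summarized by an energy functional of the following general form, where W is an elastic strain-energy density, φ the potential induced by the charged intraoperative surface, and γ a coupling weight; this is an illustrative sketch, not the exact functional derived in the paper:

```latex
% Illustrative general form (not the paper's exact formulation): the deformation u of the
% charged preoperative model minimizes elastic strain energy plus the electrostatic potential
% energy generated by the oppositely charged intraoperative surface.
E(\mathbf{u}) =
\underbrace{\int_{\Omega} W\!\bigl(\boldsymbol{\varepsilon}(\mathbf{u})\bigr)\, \mathrm{d}\Omega}_{\text{elastic strain energy}}
\; + \;
\underbrace{\gamma \int_{\Gamma} \phi\bigl(\mathbf{x} + \mathbf{u}(\mathbf{x})\bigr)\, \mathrm{d}A}_{\text{electrostatic attraction to the target surface}}
```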
Computerized Medical Imaging and Graphics | 2013
Darko Katic; Anna-Laura Wekerle; Jochen Görtler; Patrick Spengler; Sebastian Bodenstedt; Sebastian Röhl; Stefan Suwelack; Hannes Kenngott; Martin Wagner; Beat P. Müller-Stich; Rüdiger Dillmann; Stefanie Speidel
Augmented Reality is a promising paradigm for intraoperative assistance. Yet, apart from technical issues, a major obstacle to its clinical application is the man-machine interaction. Visualization of unnecessary, obsolete, or redundant information may cause confusion and distraction, reducing the usefulness and acceptance of the assistance system. We propose a system capable of automatically filtering the available information based on recognized phases in the operating room. Our system offers a specific selection of available visualizations that best suit the surgeon's needs. The system was implemented for use in laparoscopic liver and gallbladder surgery and evaluated in phantom experiments in conjunction with expert interviews.
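The phase-based filtering can be pictured as a simple mapping from recognized phases to admissible visualizations; the toy sketch below uses hypothetical phase and view names that are not taken from the paper:

```python
# Toy sketch of phase-dependent filtering of AR visualizations (hypothetical names).
PHASE_TO_VIEWS = {
    "dissection": {"risk_structure_overlay", "instrument_distance"},
    "clipping":   {"vessel_highlight"},
    "resection":  {"tumor_margin_overlay", "risk_structure_overlay"},
}

def select_visualizations(recognized_phase, available_views):
    """Return only those available visualizations that suit the recognized surgical phase."""
    return sorted(set(available_views) & PHASE_TO_VIEWS.get(recognized_phase, set()))

# Example: during dissection, an available tumour-margin overlay would be suppressed.
print(select_visualizations("dissection", ["tumor_margin_overlay", "risk_structure_overlay"]))
```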
Medical Imaging 2008: Visualization, Image-Guided Procedures, and Modeling | 2008
Stefanie Speidel; Gunther Sudra; Julien Senemaud; Maximilian Drentschew; Beat P. Müller-Stich; Carsten N. Gutt; Rüdiger Dillmann
Minimally invasive surgery has gained significantly in importance over the last decade due to its numerous advantages for the patient. The surgeon has to adopt special operating techniques and deal with difficulties like the complex hand-eye coordination, limited field of view, and restricted mobility. To alleviate these constraints we propose to enhance the surgeon's capabilities by providing context-aware assistance using augmented reality (AR) techniques. In order to generate context-aware assistance it is necessary to recognize the current state of the intervention using intraoperatively gained sensor data and a model of the surgical intervention. In this paper we present the recognition of risk situations: the system warns the surgeon if an instrument gets too close to a risk structure. The context-aware assistance system starts with an image-based analysis to retrieve information from the endoscopic images. This information is classified and a semantic description is generated. The description is used to recognize the current state and launch an appropriate AR visualization. In detail, we present an automatic vision-based instrument tracking to obtain the positions of the instruments. Situation recognition is performed using a knowledge representation based on a description logic system. Two augmented reality visualization programs are realized to warn the surgeon if a risk situation occurs.
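The risk-situation rule mentioned above boils down to a distance check between the tracked instrument and a risk structure; the snippet below is a simplified stand-in (the paper formulates the rule within a description-logic knowledge base), with a placeholder safety margin:

```python
# Simplified risk-situation check: warn when the tracked instrument tip comes closer to a
# risk structure than a safety margin. Threshold and data are placeholders.
import numpy as np
from scipy.spatial import cKDTree

def check_risk(instrument_tip_mm, risk_structure_points_mm, safety_margin_mm=5.0):
    tree = cKDTree(risk_structure_points_mm)      # points sampled from the risk structure surface
    distance, _ = tree.query(instrument_tip_mm)   # closest distance from tip to structure
    return distance < safety_margin_mm, distance

warn, d = check_risk(np.array([12.0, 3.5, 40.0]), np.random.rand(1000, 3) * 100.0)
if warn:
    print(f"WARNING: instrument {d:.1f} mm from risk structure")
```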
Medical Image Analysis | 2014
Thiago Ramos dos Santos; Alexander Seitel; Thomas Kilgus; Stefan Suwelack; Anna Laura Wekerle; Hannes Kenngott; Stefanie Speidel; Heinz Peter Schlemmer; Hans-Peter Meinzer; Tobias Heimann; Lena Maier-Hein
One of the main challenges in computer-assisted soft-tissue surgery is the registration of multi-modal patient-specific data for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces. A new approach to marker-less guidance involves capturing the intra-operative patient anatomy with a range image device and performing a shape-based registration. However, as the target organ is only partially visible, typically does not provide salient features, and undergoes severe non-rigid deformations, surface matching in this context is extremely challenging. Furthermore, the intra-operatively acquired surface data may be subject to severe systematic errors and noise. To address these issues, we propose a new approach to establishing surface correspondences, which can be used to initialize fine surface matching algorithms in the context of intra-operative shape-based registration. Our method does not require any prior knowledge of the relative poses of the input surfaces to each other, does not rely on the detection of prominent surface features, is robust to noise, and can be used for overlapping surfaces. It takes into account (1) the similarity of feature descriptors, (2) the compatibility of multiple correspondence pairs, as well as (3) the spatial configuration of the entire correspondence set. We evaluate the algorithm on time-of-flight (ToF) data from porcine livers in a respiratory liver motion simulator. In all our experiments, the alignment computed from the established surface correspondences yields a registration error below 1 cm and is thus well suited for initializing fine surface matching algorithms for intra-operative soft-tissue registration.
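Conceptually, criteria (1) and (2) can be sketched as descriptor-based matching followed by a pairwise-compatibility filter; the example below is a simplified stand-in for the proposed method, with placeholder descriptors and tolerances:

```python
# Simplified correspondence sketch: (1) match points by feature-descriptor similarity,
# then (2) keep only pairs that preserve pairwise Euclidean distances on both surfaces.
import numpy as np
from scipy.spatial import cKDTree

def initial_correspondences(src_pts, src_desc, tgt_pts, tgt_desc, tol=5.0):
    # (1) descriptor similarity: nearest neighbour in descriptor space
    nn = cKDTree(tgt_desc).query(src_desc)[1]
    pairs = list(enumerate(nn))

    # (2) pairwise compatibility: distances between matched points must agree on both surfaces
    kept = []
    for (i, j) in pairs:
        votes = sum(
            abs(np.linalg.norm(src_pts[i] - src_pts[k]) - np.linalg.norm(tgt_pts[j] - tgt_pts[l])) < tol
            for k, l in pairs
        )
        if votes > 0.5 * len(pairs):   # pair is consistent with the majority of the set
            kept.append((i, j))
    return kept
```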
Proceedings of SPIE | 2009
Stefanie Speidel; Julia Benzko; Sebastian Krappe; Gunther Sudra; Pedram Azad; Beat P. Müller-Stich; Carsten N. Gutt; Rüdiger Dillmann
Minimally invasive surgery is nowadays a frequently applied technique and can be regarded as a major breakthrough in surgery. The surgeon has to adopt special operating techniques and deal with difficulties like the complex hand-eye coordination and restricted mobility. To alleviate these constraints we propose to enhance the surgeon's capabilities by providing context-aware assistance using augmented reality techniques. To analyze the current situation for context-aware assistance, we need intraoperatively gained sensor data and a model of the intervention. A situation consists of information about the performed activity, the instruments used, the surgical objects, and the anatomical structures, and defines the state of an intervention at a given moment in time. The endoscopic images provide a rich source of information which can be used for an image-based analysis. Different visual cues are observed in order to perform an image-based analysis with the objective of gaining as much information as possible about the current situation. An important visual cue is the automatic recognition of the instruments which appear in the scene. In this paper we present the classification of minimally invasive instruments using the endoscopic images. The instruments are not modified with markers. The system segments the instruments in the current image and recognizes the instrument type based on three-dimensional instrument models.
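As a rough illustration of shape-based instrument recognition, a segmented contour can be compared against reference silhouettes (for instance, rendered from the 3D instrument models) using Hu-moment matching; this toy sketch is not the recognition scheme of the paper, and the class names are hypothetical:

```python
# Toy shape-based instrument classification: compare the segmented contour with reference
# silhouettes using Hu-moment shape matching (conceptual stand-in only).
import cv2

def largest_contour(binary_mask):
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)

def classify_instrument(instrument_mask, reference_masks):
    """reference_masks: dict mapping instrument type -> binary silhouette image."""
    query = largest_contour(instrument_mask)
    scores = {
        name: cv2.matchShapes(query, largest_contour(ref), cv2.CONTOURS_MATCH_I1, 0.0)
        for name, ref in reference_masks.items()
    }
    return min(scores, key=scores.get)  # lowest dissimilarity wins
```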