Lena Maier-Hein
Heidelberg University
Publications
Featured research published by Lena Maier-Hein.
Medical Imaging 2007: Visualization and Image-Guided Procedures | 2007
Lena Maier-Hein; Daniel Maleike; Jochen Neuhaus; Alfred M. Franz; Ivo Wolf; Hans-Peter Meinzer
We evaluate two core modules of a novel soft tissue navigation system. The system estimates the position of a hidden target (e.g. a tumor) during a minimally invasive intervention from the location of a set of optically tracked needle-shaped navigation aids which are placed in the vicinity of the target. The initial position of the target relative to the navigation aids is obtained from a CT scan. The accuracy of the entire system depends on (a) the accuracy of locating a set of navigation aids in a CT image, (b) the accuracy of determining the positions of the navigation aids during the intervention by means of optical tracking, (c) the accuracy of tracking the applicator (e.g. the biopsy needle), and (d) the accuracy of the real-time deformation model which continuously computes the location of the initially determined target point from the current positions of the navigation aids. In this paper, we focus on the first two aspects. We introduce the navigation aids we constructed for our system and show that the needle tips can be tracked with submillimeter accuracy. Furthermore, we present and evaluate three methods for registering a set of navigation aid models with a given CT image. The fully automatic algorithm outperforms both the manual method and the semi-automatic algorithm, yielding an average distance of 0.27 ± 0.08 mm between the estimated needle tip position and the reference position.
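The tip localization accuracy reported above (mean ± standard deviation of the distance between estimated and reference tip positions) amounts to a simple per-point Euclidean error statistic. A minimal NumPy sketch, with hypothetical coordinates in millimetres:

```python
import numpy as np

def tip_localization_error(estimated, reference):
    """Mean and standard deviation of the Euclidean distances between
    estimated and reference needle tip positions (both (N, 3) arrays)."""
    d = np.linalg.norm(np.asarray(estimated) - np.asarray(reference), axis=1)
    return d.mean(), d.std()

# Hypothetical tip positions (mm), for illustration only
est = np.array([[10.1, 20.0, 5.2], [11.0, 19.8, 5.0]])
ref = np.array([[10.0, 20.0, 5.0], [11.0, 20.0, 5.1]])
mean_err, std_err = tip_localization_error(est, ref)
```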
Proceedings of SPIE | 2011
Anja Groch; Alexander Seitel; Susanne Hempel; Stefanie Speidel; Rainer Engelbrecht; J. Penne; Kurt Höller; Sebastian Röhl; Kwong Yung; Sebastian Bodenstedt; Felix Pflaum; T. R. dos Santos; Sven Mersmann; Hans-Peter Meinzer; Joachim Hornegger; Lena Maier-Hein
One of the main challenges related to computer-assisted laparoscopic surgery is the accurate registration of pre-operative planning images with the patient's anatomy. One popular approach for achieving this involves intraoperative 3D reconstruction of the target organ's surface with methods based on multiple view geometry. The latter, however, require robust and fast algorithms for establishing correspondences between multiple images of the same scene. Recently, the first endoscope based on the Time-of-Flight (ToF) camera technique was introduced. It generates dense range images with high update rates by continuously measuring the run-time of intensity-modulated light. While this approach yielded promising results in initial experiments, the endoscopic ToF camera has not yet been evaluated in the context of related work. The aim of this paper was therefore to compare its performance with different state-of-the-art surface reconstruction methods on identical objects. For this purpose, surface data from a set of porcine organs as well as organ phantoms was acquired with four different cameras: a novel ToF endoscope, a standard ToF camera, a stereoscope, and a High Definition Television (HDTV) endoscope. The resulting reconstructed partial organ surfaces were then compared to corresponding ground truth shapes extracted from computed tomography (CT) data using a set of local and global distance metrics. The evaluation suggests that the ToF technique has high potential as a means for intraoperative endoscopic surface registration.
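The local and global distance metrics mentioned can be illustrated by a nearest-neighbour comparison between a reconstructed point cloud and ground-truth points. This is a simplified sketch only: the paper evaluates against CT-derived surface meshes, whereas here the ground truth is reduced to a point set.

```python
import numpy as np

def surface_distances(reconstructed, ground_truth):
    """For each reconstructed point, the distance to its nearest ground-truth
    point (local metric) plus the RMS over all points (global metric).
    Both inputs are (N, 3) / (M, 3) arrays of 3D points."""
    rec = np.asarray(reconstructed, float)[:, None, :]   # (N, 1, 3)
    gt = np.asarray(ground_truth, float)[None, :, :]     # (1, M, 3)
    d = np.linalg.norm(rec - gt, axis=2).min(axis=1)     # per-point distances
    return d, np.sqrt(np.mean(d ** 2))

# Toy example: two reconstructed points versus two ground-truth points
d_local, d_rms = surface_distances([[0, 0, 1], [0, 0, 2]],
                                   [[0, 0, 0], [1, 0, 0]])
```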
Medical Imaging 2008: Visualization, Image-Guided Procedures, and Modeling | 2008
Lena Maier-Hein; Alfred M. Franz; Hans-Peter Meinzer; Ivo Wolf
We compare two optical tracking systems with regard to their suitability for soft tissue navigation with fiducial needles: The Polaris system with passive markers (Northern Digital Inc. (NDI); Waterloo, Ontario, Canada), and the MicronTracker 2, model H40 (Claron Technology, Inc.; Toronto, Ontario, Canada). We introduce appropriate tool designs and assess the tool tip tracking accuracy under typical clinical light conditions in a sufficiently sized measurement volume. To assess the robustness of the tracking systems, we further evaluate their sensitivity to illumination conditions as well as to the velocity and the orientation of a tracked tool. While the Polaris system showed robust tracking accuracy under all conditions, the MicronTracker 2 was highly sensitive to the examined factors.
Medical Imaging 2007: Visualization and Image-Guided Procedures | 2007
Lena Maier-Hein; Sascha A. Müller; Frank Pianka; Alexander Seitel; Beat P. Müller-Stich; Carsten N. Gutt; Urte Rietdorf; G. M. Richter; Hans-Peter Meinzer; Bruno M. Schmied; Ivo Wolf
In this paper, we evaluate the target position estimation accuracy of a novel soft tissue navigation system with a custom-designed respiratory liver motion simulator. The system uses a real-time deformation model to estimate the position of the target (e.g. a tumor) during a minimally invasive intervention from the location of a set of optically tracked needle-shaped navigation aids which are placed in the vicinity of the target. A respiratory liver motion simulator was developed to evaluate the performance of the system in-vitro. It allows the mounting of an explanted liver which can be moved along the longitudinal axis of a corpus model to simulate breathing motion. In order to assess the accuracy of our system we utilized an optically trackable tool as target and estimated its position continuously from the current position of the navigation aids. Four different transformation types were compared as the basis for the real-time deformation model: rigid transformations, thin-plate splines, volume splines, and elastic body splines. The respective root-mean-square target position estimation errors are 2.15 mm, 1.60 mm, 1.88 mm, and 1.92 mm averaged over a set of experiments obtained from a total of six navigation aid configurations in two pig livers. The error is reduced by 76.3%, 82.4%, 79.3%, and 78.8%, respectively, compared to the case when no deformation model is applied, i.e., a constant organ position is assumed throughout the breathing cycle.
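The spline-based deformation models compared above all interpolate a displacement field from the navigation aid positions and evaluate it at the target. A minimal radial-basis-function sketch using the 3D thin-plate kernel U(r) = r; the affine polynomial term and the paper's other spline variants are omitted for brevity:

```python
import numpy as np

def rbf_displacement(aids_rest, aids_moved, target):
    """Estimate the displaced target position by RBF interpolation of the
    navigation aid displacements (3D thin-plate kernel U(r) = r;
    affine terms omitted, so this is a sketch, not the paper's model)."""
    aids_rest = np.asarray(aids_rest, float)
    disp = np.asarray(aids_moved, float) - aids_rest
    # Pairwise kernel matrix between navigation aid rest positions
    K = np.linalg.norm(aids_rest[:, None] - aids_rest[None, :], axis=2)
    w = np.linalg.solve(K + 1e-9 * np.eye(len(K)), disp)   # weights per axis
    # Kernel vector from the target to each navigation aid
    k = np.linalg.norm(np.asarray(target, float) - aids_rest, axis=1)
    return np.asarray(target, float) + k @ w

aids_rest = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
aids_moved = aids_rest + np.array([[0.1, 0, 0], [0, 0.2, 0],
                                   [0, 0, 0.1], [0.05, 0.05, 0]])
# Interpolation reproduces the displacement exactly at an aid position
est = rbf_displacement(aids_rest, aids_moved, aids_rest[1])
```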
Bildverarbeitung für die Medizin | 2014
Thomas Köhler; Sven Haase; Sebastian Bauer; Jakob Wasza; Thomas Kilgus; Lena Maier-Hein; Hubertus Feußner; Joachim Hornegger
In hybrid 3D endoscopy, range data is used to augment photometric information for minimally invasive surgery. As range sensors suffer from a rough spatial resolution and a low signal-to-noise ratio, subpixel motion between multiple range images is used as a cue for super-resolution to obtain reliable range data. Unfortunately, this method is sensitive to outliers in range images and the estimated subpixel displacements. In this paper, we propose an outlier detection scheme for robust super-resolution. First, we derive confidence maps to identify outliers in the displacement fields by correlation analysis of photometric data. Second, we apply an iteratively re-weighted least squares algorithm to obtain the associated range confidence maps. The joint confidence map is used to obtain super-resolved range data. We evaluate our approach on synthetic images and phantom data acquired by a Time-of-Flight/RGB endoscope. Our outlier detection improves the median peak signal-to-noise ratio by 1.1 dB.
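The iteratively re-weighted least squares step can be illustrated on a toy regression problem: large residuals receive low confidence weights, so outliers barely influence the final fit. A generic IRLS sketch (an L1-style weighting, not the paper's super-resolution formulation):

```python
import numpy as np

def irls(A, b, iters=20, eps=1e-6):
    """Iteratively re-weighted least squares: solve A x ≈ b while
    down-weighting observations with large residuals (outliers).
    Returns the estimate and the last confidence weights."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    w = np.ones(len(b))
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(b - A @ x), eps)   # confidence weights
        sw = np.sqrt(w)
        x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
    return x, w

# Fit a constant to data with one gross outlier (10 among the 1s)
A = np.ones((4, 1))
b = np.array([1.0, 1.0, 1.0, 10.0])
x, w = irls(A, b)
```

The plain least-squares solution would be the mean (3.25); IRLS converges toward the robust (median-like) value of 1 and assigns the outlier a much smaller weight than the inliers.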
Bildverarbeitung für die Medizin | 2012
Sven Haase; Christoph Forman; Thomas Kilgus; Roland Bammer; Lena Maier-Hein; Joachim Hornegger
Three-dimensional endoscopy is an evolving field of research and offers great benefits for minimally invasive procedures. Besides the pure topology, color texture is an essential feature for optimal visualization. Therefore, in this paper, we propose a sensor fusion of a Time-of-Flight (ToF) and an RGB sensor. This requires an intrinsic and extrinsic calibration of both cameras. In particular, the low resolution of the ToF camera (64×50 px) and its inhomogeneous illumination preclude the use of standard calibration techniques. By enhancing the image data and using self-encoded markers for automatic checkerboard detection, a re-projection error of less than 0.23 px for the ToF camera was achieved. The relative transformation of both sensors for data fusion was then calculated automatically.
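A re-projection error of the kind reported (< 0.23 px) is the RMS distance between detected checkerboard corners and the corners projected through the calibrated pinhole model. A minimal sketch with hypothetical intrinsics, assuming the 3D corners are already expressed in the camera frame (i.e. the extrinsic transformation has been applied):

```python
import numpy as np

def reprojection_error(points_3d, points_2d, K):
    """RMS reprojection error: project 3D points (camera frame) through
    pinhole intrinsics K and compare against detected 2D corners (px)."""
    P = np.asarray(points_3d, float)
    proj = (K @ (P / P[:, 2:3]).T).T[:, :2]   # normalize by depth, apply K
    diff = proj - np.asarray(points_2d, float)
    return np.sqrt(np.mean(np.sum(diff ** 2, axis=1)))

# Hypothetical intrinsics roughly matching a 64×50 px sensor
K = np.array([[100.0, 0.0, 32.0],
              [0.0, 100.0, 25.0],
              [0.0, 0.0, 1.0]])
err = reprojection_error([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0]],
                         [[32.0, 25.0], [42.0, 26.0]], K)
```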
Proceedings of SPIE | 2011
Lena Maier-Hein; T. R. dos Santos; Alfred M. Franz; Hans-Peter Meinzer; J. M. Fitzpatrick
The Iterative Closest Point (ICP) algorithm is a widely used method for geometric alignment of 3D models. Given two roughly aligned shapes represented by two point sets, the algorithm iteratively establishes point correspondences given the current alignment of the data and computes a rigid transformation accordingly. It can be shown that the method converges to at least a local minimum with respect to a mean-square distance metric. From a statistical point of view, the algorithm implicitly assumes that the points are observed with isotropic Gaussian noise. In this paper, we (1) present the first variant of the ICP that accounts for anisotropic localization uncertainty in both shapes as well as in both steps of the algorithm and (2) show how to apply the method for robust fine registration of surface meshes. According to an evaluation on medical imaging data, the proposed method is better suited for fine surface registration than the original ICP, reducing the target registration error (TRE) for a set of targets located inside or near the mesh by 80% on average.
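For reference, the classic isotropic ICP that the proposed variant generalizes can be sketched compactly: alternate nearest-neighbour correspondence search with a closed-form rigid fit (Kabsch/SVD). The anisotropic version replaces both steps with weighted counterparts; this sketch shows only the standard algorithm:

```python
import numpy as np

def icp_rigid(src, dst, iters=20):
    """Classic isotropic ICP: iterate (1) nearest-neighbour correspondences
    and (2) the closed-form rigid fit minimizing mean-square distance.
    Returns the aligned source points and the accumulated R, t."""
    src = np.asarray(src, float).copy()
    dst = np.asarray(dst, float)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # 1) correspondences: nearest dst point for each src point
        idx = np.argmin(np.linalg.norm(src[:, None] - dst[None], axis=2), axis=1)
        nn = dst[idx]
        # 2) rigid fit via Kabsch/SVD on centered point sets
        cs, cn = src.mean(0), nn.mean(0)
        U, _, Vt = np.linalg.svd((src - cs).T @ (nn - cn))
        R = (U @ Vt).T
        if np.linalg.det(R) < 0:       # avoid reflections
            Vt[-1] *= -1
            R = (U @ Vt).T
        t = cn - R @ cs
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return src, R_total, t_total

# Toy example: a purely translated copy is recovered exactly
dst = np.array([[0., 0, 0], [2, 0, 0], [0, 2, 0], [0, 0, 2]])
aligned, R, t = icp_rigid(dst + np.array([0.5, 0.2, 0.1]), dst)
```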
Bildverarbeitung für die Medizin | 2009
Jochen Neuhaus; Ingmar Wegner; Johannes Käst; Matthias Baumhauer; Alexander Seitel; Ingmar Gergel; Marco Nolden; Daniel Maleike; Ivo Wolf; Hans-Peter Meinzer; Lena Maier-Hein
MITK-IGT is an extension of the Medical Imaging Interaction Toolkit that enables the development of software applications for image-guided therapy. This contribution presents the architecture and design principles of MITK-IGT and compares them with other open-source solutions. Besides interfacing with tracking systems and visualization modules, the focus of MITK-IGT lies on a filter architecture that allows the stepwise processing of tracking data. The first version of MITK-IGT will be released as open source at BVM 2009.
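The filter architecture described (stepwise processing of tracking data) follows a source/filter pipeline pattern. An illustrative Python sketch; MITK-IGT itself is written in C++, and all class and method names below are hypothetical:

```python
class TrackingDataFilter:
    """Base class of a toy filter pipeline: each filter pulls data from its
    source and applies one processing step (hypothetical API, for
    illustration of the pattern only)."""
    def __init__(self, source=None):
        self.source = source

    def update(self):
        data = self.source.update() if self.source else self.produce()
        return self.process(data)

    def process(self, data):
        return data   # identity by default; subclasses override

class VirtualTrackerSource(TrackingDataFilter):
    """Pipeline source emitting a fixed, made-up tool pose."""
    def produce(self):
        return {"position": (1.0, 2.0, 3.0)}

class OffsetFilter(TrackingDataFilter):
    """Example processing step: apply a fixed calibration offset."""
    def __init__(self, source, offset):
        super().__init__(source)
        self.offset = offset

    def process(self, data):
        data["position"] = tuple(p + o for p, o in zip(data["position"],
                                                       self.offset))
        return data

# Chain source -> filter and pull data through the pipeline
pipeline = OffsetFilter(VirtualTrackerSource(), (0.5, 0.0, 0.0))
result = pipeline.update()
```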
Photons Plus Ultrasound: Imaging and Sensing 2018 | 2018
Dominik Waibel; Janek Gröhl; Fabian Isensee; Thomas Kirchner; Klaus Maier-Hein; Lena Maier-Hein
Quantification of tissue properties with photoacoustic (PA) imaging typically requires a highly accurate representation of the initial pressure distribution in tissue. Almost all PA scanners reconstruct the PA image only from a partial scan of the emitted sound waves. Especially handheld devices, which have become increasingly popular due to their versatility and ease of use, only provide limited view data because of their geometry. Owing to such limitations in hardware as well as to the acoustic attenuation in tissue, state-of-the-art reconstruction methods deliver only approximations of the initial pressure distribution. To overcome the limited view problem, we present a machine learning-based approach to the reconstruction of initial pressure from limited view PA data. Our method involves a fully convolutional deep neural network based on a U-Net-like architecture with pixel-wise regression loss on the acquired PA images. It is trained and validated on in silico data generated with Monte Carlo simulations. In an initial study we found an increase in accuracy over the state-of-the-art when reconstructing simulated linear-array scans of blood vessels.
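At a shape level, the U-Net-like architecture combines a downsampling path with skip connections that re-inject encoder resolution into the decoder. A toy NumPy sketch of one such level plus the pixel-wise regression loss; learned convolutions are replaced by plain pooling here, so this only illustrates the data flow, not the trained network:

```python
import numpy as np

def unet_like_forward(img):
    """Shape-level sketch of one U-Net-like level: downsample, upsample,
    and concatenate the encoder skip connection as a second channel.
    (A real network applies learned convolutions at every stage.)"""
    skip = img                                            # encoder feature map
    h, w = img.shape
    down = img.reshape(h // 2, 2, w // 2, 2).mean((1, 3)) # 2x2 average pool
    up = down.repeat(2, axis=0).repeat(2, axis=1)         # nearest-neighbour upsample
    return np.stack([up, skip], axis=0)                   # channel concat

def pixelwise_l2(pred, target):
    """Pixel-wise regression loss against the simulated initial pressure."""
    return np.mean((pred - target) ** 2)

img = np.arange(16, dtype=float).reshape(4, 4)
out = unet_like_forward(img)
```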
Bildverarbeitung für die Medizin | 2016
Esther Wild; Dogu Teber; Daniel Schmid; Tobias Simpfendörfer; Michael Müller; Hannes Kenngott; Lena Maier-Hein
Laparoscopic interventions require the precise navigation of surgical instruments while taking risk structures into account. Although numerous concepts exist for overlaying anatomical details based on intraoperative registration methods, clinical translation has so far failed due to a lack of robustness and complex integration into the clinical workflow. In this contribution, we present a novel approach to robust intraoperative data fusion based on fluorescent markers. In an in vitro pilot study, we show that, in contrast to conventional needle markers, the new markers can be localized and tracked even in the presence of smoke, blood, or tissue fragments in the field of view of the laparoscopic camera. This enables a robust registration of 3D image data with the current patient anatomy. Thanks to its easy integration into the medical workflow, the potential of the new approach is high.