Publications


Featured research published by Anja Groch.


Medical Image Analysis | 2013

Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

Lena Maier-Hein; Peter Mountney; Adrien Bartoli; Haytham Elhawary; Daniel S. Elson; Anja Groch; Andreas Kolb; Marcos A. Rodrigues; Jonathan M. Sorger; Stefanie Speidel; Danail Stoyanov

One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, both for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion of technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.


IEEE Transactions on Medical Imaging | 2014

Comparative Validation of Single-Shot Optical Techniques for Laparoscopic 3-D Surface Reconstruction

Lena Maier-Hein; Anja Groch; A. Bartoli; Sebastian Bodenstedt; G. Boissonnat; Ping-Lin Chang; Neil T. Clancy; Daniel S. Elson; S. Haase; E. Heim; Joachim Hornegger; Pierre Jannin; Hannes Kenngott; Thomas Kilgus; B. Muller-Stich; D. Oladokun; Sebastian Röhl; T. R. Dos Santos; Heinz Peter Schlemmer; Alexander Seitel; Stefanie Speidel; Martin Wagner; Danail Stoyanov

Intra-operative imaging techniques for obtaining the shape and morphology of soft-tissue surfaces in vivo are a key enabling technology for advanced surgical systems. Different optical techniques for 3-D surface reconstruction in laparoscopy have been proposed; however, no quantitative, comparative validation has so far been performed. Furthermore, the robustness of the methods to clinically important factors like smoke or bleeding has not yet been assessed. To address these issues, we have formed a joint international initiative with the aim of validating different state-of-the-art passive and active reconstruction methods in a comparative manner. In this comprehensive in vitro study, we investigated reconstruction accuracy using different organs with various shapes and textures, and also tested reconstruction robustness with respect to a number of factors, such as the pose of the endoscope and the amount of blood or smoke present in the scene. The study suggests complementary advantages of the different techniques with respect to accuracy, robustness, point density, hardware complexity and computation time. While reconstruction accuracy under ideal conditions was generally high, robustness remains an issue to be addressed. Future work should include sensor fusion and in vivo validation studies in a specific clinical context. To trigger further research in surface reconstruction, stereoscopic data of the study will be made publicly available at www.open-CAS.com upon publication of the paper.


Biomedical Optics Express | 2011

Spectrally encoded fiber-based structured lighting probe for intraoperative 3D imaging

Neil T. Clancy; Danail Stoyanov; Lena Maier-Hein; Anja Groch; Guang-Zhong Yang; Daniel S. Elson

Three-dimensional quantification of organ shape and structure during minimally invasive surgery (MIS) could enhance precision by allowing the registration of multi-modal or pre-operative image data (US/MRI/CT) with the live optical image. Structured illumination is one technique for obtaining 3D information through the projection of a known pattern onto the tissue, although currently these systems tend to be used only for macroscopic imaging or open procedures rather than in endoscopy. To account for occlusions, where a projected feature may be hidden from view and/or confused with a neighboring point, a flexible multispectral structured illumination probe has been developed that labels each projected point with a specific wavelength using a supercontinuum laser. When imaged by a standard endoscope camera, the points can then be segmented using their RGB values, and their 3D coordinates calculated after camera calibration. The probe itself is sufficiently small (1.7 mm diameter) to allow it to be used in the biopsy channel of commonly used medical endoscopes. Surgical robots could therefore also employ this technology to solve navigation and visualization problems in MIS, and help to develop advanced surgical procedures such as natural orifice translumenal endoscopic surgery.
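The geometry underlying such a probe reduces to ray-ray triangulation: once a projected spot has been identified by its wavelength and its pixel located, its 3D position follows from intersecting the known projector ray with the camera ray through that pixel. A minimal sketch, assuming both rays are already expressed in a common calibrated coordinate frame (the paper's actual calibration procedure is not reproduced here):

```python
import numpy as np

def triangulate_rays(o1, d1, o2, d2):
    """Midpoint of the common perpendicular between two 3D rays,
    given origins o1, o2 and direction vectors d1, d2."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w = o2 - o1
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    e, f = d1 @ w, d2 @ w
    denom = a * c - b * b            # ~0 for (near-)parallel rays
    t1 = (e * c - b * f) / denom
    t2 = (b * e - a * f) / denom
    p1 = o1 + t1 * d1                # closest point on ray 1
    p2 = o2 + t2 * d2                # closest point on ray 2
    return 0.5 * (p1 + p2)           # best 3D estimate under noise
```

In practice the two rays never intersect exactly because of calibration and detection noise, which is why the midpoint of the closest approach is returned rather than a true intersection.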


Bildverarbeitung für die Medizin (BVM 2011) - Workshop on Image Processing for Medicine: Algorithms, Systems, Applications | 2011

Towards mobile augmented reality for on-patient visualization of medical images

Lena Maier-Hein; Alfred M. Franz; M. Fangerau; M. Schmidt; Alexander Seitel; Sven Mersmann; Thomas Kilgus; Anja Groch; Kwong Yung; T. R. dos Santos; Hans-Peter Meinzer

Despite considerable technical and algorithmic developments related to the fields of medical image acquisition and processing in the past decade, the devices used for visualization of medical images have undergone rather minor changes. As anatomical information is typically shown on monitors provided by a radiological work station, the physician has to mentally transfer internal structures shown on the screen to the patient. In this work, we present a new approach to on-patient visualization of 3D medical images, which combines the concept of augmented reality (AR) with an intuitive interaction scheme. The method requires mounting a Time-of-Flight (ToF) camera to a portable display (e.g., a tablet PC). During the visualization process, the pose of the camera and thus the viewing direction of the user is continuously determined with a surface matching algorithm. By moving the device along the body of the patient, the physician gets the impression of being able to look directly into the human body. The concept can be used for intervention planning, anatomy teaching and various other applications that require intuitive visualization of 3D data.


Proceedings of SPIE | 2011

3D surface reconstruction for laparoscopic computer-assisted interventions: comparison of state-of-the-art methods

Anja Groch; Alexander Seitel; Susanne Hempel; Stefanie Speidel; Rainer Engelbrecht; J. Penne; Kurt Höller; Sebastian Röhl; Kwong Yung; Sebastian Bodenstedt; Felix Pflaum; T. R. dos Santos; Sven Mersmann; Hans-Peter Meinzer; Joachim Hornegger; Lena Maier-Hein

One of the main challenges related to computer-assisted laparoscopic surgery is the accurate registration of pre-operative planning images with the patient's anatomy. One popular approach for achieving this involves intraoperative 3D reconstruction of the target organ's surface with methods based on multiple-view geometry. The latter, however, require robust and fast algorithms for establishing correspondences between multiple images of the same scene. Recently, the first endoscope based on the Time-of-Flight (ToF) camera technique was introduced. It generates dense range images with high update rates by continuously measuring the run-time of intensity-modulated light. While this approach yielded promising results in initial experiments, the endoscopic ToF camera has not yet been evaluated in the context of related work. The aim of this paper was therefore to compare its performance with different state-of-the-art surface reconstruction methods on identical objects. For this purpose, surface data from a set of porcine organs as well as organ phantoms was acquired with four different cameras: a novel ToF endoscope, a standard ToF camera, a stereoscope, and a High Definition Television (HDTV) endoscope. The resulting reconstructed partial organ surfaces were then compared to corresponding ground truth shapes extracted from computed tomography (CT) data using a set of local and global distance metrics. The evaluation suggests that the ToF technique has high potential as a means for intraoperative endoscopic surface registration.
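Distance metrics in comparisons of this kind typically reduce to nearest-neighbour statistics between the reconstructed point cloud and the ground-truth surface. A minimal sketch of an RMS surface-distance metric in that spirit (a generic brute-force formulation, not the paper's exact set of local and global metrics):

```python
import numpy as np

def rms_surface_distance(reconstructed, reference):
    """RMS of nearest-neighbour distances from each reconstructed point
    (N, 3) to the reference (ground-truth) point cloud (M, 3)."""
    # full (N, M) pairwise distance matrix; fine for small clouds,
    # a k-d tree would be used for realistic point counts
    d = np.linalg.norm(reconstructed[:, None, :] - reference[None, :, :], axis=2)
    nearest = d.min(axis=1)          # closest reference point per sample
    return float(np.sqrt(np.mean(nearest ** 2)))
```

Note this measure is asymmetric (reconstructed-to-reference only); evaluations usually also report the opposite direction or a symmetric variant.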


Computer Assisted Radiology and Surgery | 2012

MITK-ToF—Range data within MITK

Alexander Seitel; Kwong Yung; Sven Mersmann; Thomas Kilgus; Anja Groch; Thiago R. Dos Santos; Alfred M. Franz; Marco Nolden; Hans-Peter Meinzer; Lena Maier-Hein

Purpose: The time-of-flight (ToF) technique is an emerging technique for rapidly acquiring distance information and is becoming increasingly popular for intra-operative surface acquisition. Using the ToF technique as an intra-operative imaging modality requires seamless integration into the clinical workflow. We thus aim to integrate ToF support into an existing framework for medical image processing.

Methods: MITK-ToF was implemented as an extension of the open-source C++ Medical Imaging Interaction Toolkit (MITK) and provides the basic functionality needed for rapid prototyping and development of image-guided therapy (IGT) applications that utilize range data for intra-operative surface acquisition. This framework was designed with a module-based architecture separating the hardware-dependent image acquisition task from the processing of the range data.

Results: The first version of MITK-ToF has been released as an open-source toolkit and supports several ToF cameras and basic processing algorithms. The toolkit, a sample application, and a tutorial are available from http://mitk.org.

Conclusions: With the increased popularity of time-of-flight cameras for intra-operative surface acquisition, the integration of range data support into medical image processing toolkits such as MITK is a necessary step. Handling the acquisition of range data from different cameras and the processing of the data requires the establishment and use of software design principles that emphasize flexibility, extendibility, robustness, performance, and portability. The open-source toolkit MITK-ToF satisfies these requirements for the image-guided therapy community and has already been used in several research projects.
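The design principle described above, separating hardware-dependent acquisition from hardware-independent processing, can be sketched independently of MITK's actual C++ API. All class and function names below are hypothetical illustrations, not MITK-ToF identifiers:

```python
from abc import ABC, abstractmethod
import numpy as np

class RangeCamera(ABC):
    """Hardware-dependent side: each ToF device implements only acquisition."""
    @abstractmethod
    def grab(self) -> np.ndarray:
        """Return one range image as a 2D array of distances in metres."""

class SimulatedToF(RangeCamera):
    """Stand-in device returning a constant 0.5 m range image."""
    def grab(self):
        return np.full((4, 4), 0.5)

def process(camera: RangeCamera, filters):
    """Hardware-independent side: filters operate on plain arrays and never
    see the device, so new cameras plug in without touching processing code."""
    frame = camera.grab()
    for f in filters:
        frame = f(frame)
    return frame
```

The point of the split is that a denoising or meshing filter written once works unchanged for every camera that implements the acquisition interface.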


Proceedings of SPIE | 2011

Adaptive bilateral filter for image denoising and its application to in-vitro Time-of-Flight data

Alexander Seitel; Thiago R. Dos Santos; Sven Mersmann; Jochen Penne; Anja Groch; Kwong Yung; Ralf Tetzlaff; Hans-Peter Meinzer; Lena Maier-Hein

Image-guided therapy systems generally require registration of pre-operative planning data with the patient's anatomy. One common approach to achieve this is to acquire intra-operative surface data and match it to surfaces extracted from the planning image. Although increasingly popular for surface generation in general, the novel Time-of-Flight (ToF) technology has not yet been applied in this context. This may be attributed to the fact that ToF range images are subject to considerable noise. The contribution of this study is two-fold. Firstly, we present an adaptation of the well-known bilateral filter for denoising ToF range images based on the noise characteristics of the camera. Secondly, we assess the quality of organ surfaces generated from ToF range data with and without bilateral smoothing, using corresponding high-resolution CT data as ground truth. According to an evaluation on five porcine organs, the root mean squared (RMS) distance between the denoised ToF data points and the reference computed tomography (CT) surfaces ranged from 3.0 mm (lung) to 9.0 mm (kidney). This corresponds to an error reduction of up to 36% compared to the error of the original ToF surfaces.
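The standard bilateral filter that the paper adapts can be sketched as follows. This is the textbook fixed-parameter version with hand-picked `sigma_s`/`sigma_r` values, not the camera-noise-adaptive variant the study proposes:

```python
import numpy as np

def bilateral_filter(depth, sigma_s=1.0, sigma_r=0.05, radius=2):
    """Edge-preserving smoothing of a range image: each pixel becomes a
    weighted mean of its neighbours, with weights falling off with both
    spatial distance (sigma_s) and depth difference (sigma_r)."""
    h, w = depth.shape
    out = np.empty_like(depth)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))  # fixed spatial kernel
    pad = np.pad(depth, radius, mode='edge')
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2*radius + 1, j:j + 2*radius + 1]
            # range kernel: neighbours at a very different depth get ~0 weight,
            # which is what preserves depth edges while averaging out noise
            rng = np.exp(-(patch - depth[i, j])**2 / (2 * sigma_r**2))
            wgt = spatial * rng
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out
```

Making `sigma_r` a function of the per-pixel noise level (as the paper does, based on the camera's noise characteristics) changes only the range-kernel line; the structure of the filter stays the same.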


Proceedings of SPIE | 2012

Registration of partially overlapping surfaces for range image based augmented reality on mobile devices

Thomas Kilgus; Alfred M. Franz; Alexander Seitel; Keno März; Laura Bartha; Markus Fangerau; Sven Mersmann; Anja Groch; Hans-Peter Meinzer; Lena Maier-Hein

Visualization of anatomical data for disease diagnosis, surgical planning, or orientation during interventional therapy is an integral part of modern health care. However, as anatomical information is typically shown on monitors provided by a radiological work station, the physician has to mentally transfer internal structures shown on the screen to the patient. To address this issue, we recently presented a new approach to on-patient visualization of 3D medical images, which combines the concept of augmented reality (AR) with an intuitive interaction scheme. Our method requires mounting a range imaging device, such as a Time-of-Flight (ToF) camera, to a portable display (e.g. a tablet PC). During the visualization process, the pose of the camera and thus the viewing direction of the user is continuously determined with a surface matching algorithm. By moving the device along the body of the patient, the physician is given the impression of looking directly into the human body. In this paper, we present and evaluate a new method for camera pose estimation based on an anisotropic trimmed variant of the well-known iterative closest point (ICP) algorithm. According to in-silico and in-vivo experiments performed with computed tomography (CT) and ToF data of human faces, knees and abdomens, our new method is better suited for surface registration with ToF data than the established trimmed variant of the ICP, reducing the target registration error (TRE) by more than 60%. The TRE obtained (approx. 4-5 mm) is promising for AR visualization, but clinical applications require maximization of robustness and run-time.
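A minimal point-to-point trimmed ICP conveys the core idea of discarding the worst correspondences before each transform estimate. This sketch uses the standard isotropic formulation with brute-force nearest neighbours, not the anisotropic variant evaluated in the paper:

```python
import numpy as np

def trimmed_icp(src, dst, trim=0.8, iters=20):
    """Rigidly align src (N, 3) to dst (M, 3). Each iteration keeps only the
    best `trim` fraction of nearest-neighbour pairs before estimating the
    rigid transform via the Kabsch/SVD method. Returns R, t with
    src @ R.T + t ~ dst."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = src @ R.T + t
        d = np.linalg.norm(moved[:, None] - dst[None, :], axis=2)
        nn = d.argmin(axis=1)                       # nearest dst point per src point
        dist = d[np.arange(len(src)), nn]
        keep = np.argsort(dist)[:max(3, int(trim * len(src)))]  # trim outliers
        a, b = moved[keep], dst[nn[keep]]
        ca, cb = a.mean(0), b.mean(0)
        H = (a - ca).T @ (b - cb)                   # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
        Ri = Vt.T @ D @ U.T
        ti = cb - Ri @ ca
        R, t = Ri @ R, Ri @ t + ti                  # compose incremental transform
    return R, t
```

Trimming makes the estimate robust when the two surfaces only partially overlap, since non-overlapping points produce large-distance pairs that would otherwise bias the least-squares fit.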


Bildverarbeitung für die Medizin (BVM 2011) - Workshop on Image Processing for Medicine: Algorithms, Systems, Applications | 2011

Generation of Triangle Meshes from Time-of-Flight Data for Surface Registration

Thomas Kilgus; Thiago R. Dos Santos; Alexander Seitel; Kwong Yung; Alfred M. Franz; Anja Groch; Ivo Wolf; Hans-Peter Meinzer; Lena Maier-Hein

One approach to intra-operative registration in computer-assisted medical interventions involves matching intra-operatively acquired organ surfaces with pre-operatively generated high-resolution surfaces. The matching is based on so-called curvature descriptors assigned to the vertices of the two meshes. Therefore, high compliance of the input meshes with respect to curvature properties is essential. Time-of-Flight cameras can provide the required surface data during the intervention as a point cloud. Although different methods for the generation of triangle meshes from range data have been proposed in the literature, their effect on the quality of the mesh with respect to curvature properties has not yet been investigated. In this paper, we evaluate six of these methods and derive application-specific recommendations for their usage.
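Because ToF range images are organized on a pixel grid, one simple meshing strategy (illustrative only; the six methods the paper evaluates are not reproduced here) is to split each 2x2 pixel block into two triangles and discard triangles that span depth discontinuities:

```python
import numpy as np

def grid_to_mesh(points, max_edge=0.05):
    """Triangulate an organized (H, W, 3) ToF point grid: each 2x2 pixel
    block yields two triangles; triangles with an edge longer than
    `max_edge` (likely a depth jump) are discarded."""
    h, w, _ = points.shape
    verts = points.reshape(-1, 3)
    tris = []
    for i in range(h - 1):
        for j in range(w - 1):
            a, b = i * w + j, i * w + j + 1          # top-left, top-right
            c, d = (i + 1) * w + j, (i + 1) * w + j + 1  # bottom-left, bottom-right
            for tri in ((a, b, c), (b, d, c)):
                p = verts[list(tri)]
                edges = np.linalg.norm(p - np.roll(p, 1, axis=0), axis=1)
                if edges.max() <= max_edge:          # drop jump edges
                    tris.append(tri)
    return verts, np.array(tris)
```

The choice of diagonal inside each 2x2 block, and how jump edges are handled, are exactly the kinds of decisions that affect the curvature properties the paper is concerned with.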


Proceedings of SPIE | 2012

Robust and efficient fiducial tracking for augmented reality in HD-laparoscopic video streams

Michael Müller; Anja Groch; Matthias Baumhauer; Lena Maier-Hein; Dogu Teber; Jens Rassweiler; Hans-Peter Meinzer; Ingmar Wegner

Augmented Reality (AR) is a convenient way of porting information from medical images into the surgical field of view and can deliver valuable assistance to the surgeon, especially in laparoscopic procedures. In addition, high definition (HD) laparoscopic video devices are a great improvement over the previously used low-resolution equipment. However, in AR applications that rely on real-time detection of fiducials from video streams, the demand for efficient image processing has increased due to the introduction of HD devices. We present an algorithm based on the well-known Conditional Density Propagation (CONDENSATION) algorithm which can satisfy these new demands. By incorporating a prediction around an already existing and robust segmentation algorithm, we can speed up the whole procedure while leaving the robustness of the fiducial segmentation untouched. For evaluation purposes we tested the algorithm on recordings from real interventions, allowing for a meaningful interpretation of the results. Our results show that we can accelerate the segmentation by a factor of 3.5 on average. Moreover, the prediction information can be used to compensate for fiducials that are temporarily occluded or outside the field of view, providing greater stability.
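The core idea, predicting where each fiducial will appear so the expensive segmentation only searches a small region, can be sketched with a CONDENSATION-style prediction step. The constant-velocity motion model, parameter values and bounding-box heuristic below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def predict(particles, velocities, noise=2.0, rng=None):
    """CONDENSATION-style prediction: propagate each 2D particle with a
    constant-velocity motion model plus Gaussian diffusion (in pixels)."""
    rng = rng if rng is not None else np.random.default_rng()
    return particles + velocities + rng.normal(0.0, noise, particles.shape)

def search_window(particles, margin=10):
    """Bounding box around the predicted particle cloud, padded by `margin`
    pixels; the segmentation algorithm only needs to run inside this ROI
    instead of the full HD frame."""
    lo = particles.min(axis=0) - margin
    hi = particles.max(axis=0) + margin
    return lo, hi
```

A full CONDENSATION tracker would also weight particles by a measurement likelihood and resample them each frame; restricting segmentation to the predicted window is where the reported speed-up would come from, and the particle cloud provides a position estimate while a fiducial is occluded.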

Collaboration


Dive into Anja Groch's collaboration network.

Top Co-Authors

Lena Maier-Hein (German Cancer Research Center)
Alexander Seitel (German Cancer Research Center)
Kwong Yung (German Cancer Research Center)
Thomas Kilgus (German Cancer Research Center)
Stefanie Speidel (Karlsruhe Institute of Technology)
Joachim Hornegger (University of Erlangen-Nuremberg)
Sven Mersmann (German Cancer Research Center)