Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Thomas Kilgus is active.

Publication


Featured research published by Thomas Kilgus.


IEEE Transactions on Medical Imaging | 2014

Comparative Validation of Single-Shot Optical Techniques for Laparoscopic 3-D Surface Reconstruction

Lena Maier-Hein; Anja Groch; A. Bartoli; Sebastian Bodenstedt; G. Boissonnat; Ping-Lin Chang; Neil T. Clancy; Daniel S. Elson; S. Haase; E. Heim; Joachim Hornegger; Pierre Jannin; Hannes Kenngott; Thomas Kilgus; B. Muller-Stich; D. Oladokun; Sebastian Röhl; T. R. Dos Santos; Heinz Peter Schlemmer; Alexander Seitel; Stefanie Speidel; Martin Wagner; Danail Stoyanov

Intra-operative imaging techniques for obtaining the shape and morphology of soft-tissue surfaces in vivo are a key enabling technology for advanced surgical systems. Different optical techniques for 3-D surface reconstruction in laparoscopy have been proposed; however, so far no quantitative and comparative validation has been performed. Furthermore, the robustness of the methods to clinically important factors like smoke or bleeding has not yet been assessed. To address these issues, we have formed a joint international initiative with the aim of validating different state-of-the-art passive and active reconstruction methods in a comparative manner. In this comprehensive in vitro study, we investigated reconstruction accuracy using different organs with various shapes and textures and also tested reconstruction robustness with respect to a number of factors, such as the pose of the endoscope and the amount of blood or smoke present in the scene. The study suggests complementary advantages of the different techniques with respect to accuracy, robustness, point density, hardware complexity and computation time. While reconstruction accuracy under ideal conditions was generally high, robustness remains an issue to be addressed. Future work should include sensor fusion and in vivo validation studies in a specific clinical context. To trigger further research in surface reconstruction, stereoscopic data of the study will be made publicly available at www.open-CAS.com upon publication of the paper.


Medical Image Analysis | 2014

Pose-independent surface matching for intra-operative soft-tissue marker-less registration

Thiago Ramos dos Santos; Alexander Seitel; Thomas Kilgus; Stefan Suwelack; Anna Laura Wekerle; Hannes Kenngott; Stefanie Speidel; Heinz Peter Schlemmer; Hans-Peter Meinzer; Tobias Heimann; Lena Maier-Hein

One of the main challenges in computer-assisted soft-tissue surgery is the registration of multi-modal patient-specific data for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces. A new approach to marker-less guidance involves capturing the intra-operative patient anatomy with a range image device and performing a shape-based registration. However, as the target organ is only partially visible, typically does not provide salient features and undergoes severe non-rigid deformations, surface matching in this context is extremely challenging. Furthermore, the intra-operatively acquired surface data may be subject to severe systematic errors and noise. To address these issues, we propose a new approach to establishing surface correspondences, which can be used to initialize fine surface matching algorithms in the context of intra-operative shape-based registration. Our method does not require any prior knowledge of the relative poses of the input surfaces to each other, does not rely on the detection of prominent surface features, is robust to noise and can be used for overlapping surfaces. It takes into account (1) the similarity of feature descriptors, (2) the compatibility of multiple correspondence pairs, and (3) the spatial configuration of the entire correspondence set. We evaluate the algorithm on time-of-flight (ToF) data from porcine livers in a respiratory liver motion simulator. In all our experiments, the alignment computed from the established surface correspondences yields a registration error below 1 cm and is thus well suited for initializing fine surface matching algorithms for intra-operative soft-tissue registration.
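As an illustration of criterion (2), the geometric compatibility of two candidate correspondences can be checked by testing whether they preserve inter-point distances. The sketch below is a hypothetical simplification (function name and tolerance are invented for illustration), not the authors' implementation:

```python
import numpy as np

def pair_compatible(a1, b1, a2, b2, tol=0.05):
    """Check whether correspondences (a1 -> b1) and (a2 -> b2) are
    geometrically compatible: a rigid alignment preserves distances,
    so |a1 - a2| and |b1 - b2| should roughly agree."""
    da = np.linalg.norm(np.asarray(a1, float) - np.asarray(a2, float))
    db = np.linalg.norm(np.asarray(b1, float) - np.asarray(b2, float))
    return abs(da - db) <= tol * max(da, db, 1e-9)

# consistent pair: the inter-point distance is 1 on both surfaces
print(pair_compatible([0, 0, 0], [5, 5, 0], [1, 0, 0], [6, 5, 0]))  # True
# inconsistent pair: distances 1 vs. 4 cannot come from one rigid motion
print(pair_compatible([0, 0, 0], [5, 5, 0], [1, 0, 0], [9, 5, 0]))  # False
```

In practice such pairwise checks are combined over the whole correspondence set (criterion 3) rather than applied in isolation.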


Bildverarbeitung für die Medizin: Algorithmen - Systeme - Anwendungen (BVM 2011) | 2011

Towards mobile augmented reality for on-patient visualization of medical images

Lena Maier-Hein; Alfred M. Franz; M. Fangerau; M. Schmidt; Alexander Seitel; Sven Mersmann; Thomas Kilgus; Anja Groch; Kwong Yung; T. R. dos Santos; Hans-Peter Meinzer

Despite considerable technical and algorithmic developments related to the fields of medical image acquisition and processing in the past decade, the devices used for visualization of medical images have undergone rather minor changes. As anatomical information is typically shown on monitors provided by a radiological work station, the physician has to mentally transfer internal structures shown on the screen to the patient. In this work, we present a new approach to on-patient visualization of 3D medical images, which combines the concept of augmented reality (AR) with an intuitive interaction scheme. The method requires mounting a Time-of-Flight (ToF) camera to a portable display (e.g., a tablet PC). During the visualization process, the pose of the camera and thus the viewing direction of the user is continuously determined with a surface matching algorithm. By moving the device along the body of the patient, the physician gets the impression of being able to look directly into the human body. The concept can be used for intervention planning, anatomy teaching and various other applications that require intuitive visualization of 3D data.


Workshop on Applications of Computer Vision | 2013

Laparoscopic instrument localization using a 3-D Time-of-Flight/RGB endoscope

Sven Haase; Jakob Wasza; Thomas Kilgus; Joachim Hornegger

Minimally invasive procedures are important in modern surgery due to reduced operative trauma and recovery time. To enable robot-assisted interventions, automatic tracking of endoscopic tools is an essential task. State-of-the-art techniques rely on 2-D color information only, which is error-prone under varying illumination and the unpredictable color distribution within the human body. In this paper, we use a novel 3-D Time-of-Flight/RGB endoscope that allows both color and range information to be used to locate laparoscopic instruments in 3-D. The proposed technique calculates a score indicating which of the two cues, color or range, is more reliable and adapts the subsequent steps of the localization procedure based on this reliability. In experiments on real data, the tool tip is located with an average 3-D distance error of less than 4 mm compared to manually labeled ground truth data, at a frame rate of 10 fps.
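The cue selection described above can be sketched with a toy example (function names and the nearest-pixel heuristic are illustrative assumptions; the paper's actual scoring and localization steps are more involved):

```python
import numpy as np

def choose_cue(color_score: float, range_score: float) -> str:
    """Pick the cue judged more reliable for the next localization step."""
    return "color" if color_score >= range_score else "range"

def tip_from_range(range_map: np.ndarray) -> tuple:
    """Toy range-based localization: take the pixel closest to the camera,
    assuming the instrument is the nearest object in the scene."""
    r, c = np.unravel_index(np.argmin(range_map), range_map.shape)
    return (int(r), int(c))

# toy depth map in mm: flat background at 100 mm, a 'tool tip' at 40 mm
depth = np.full((6, 8), 100.0)
depth[2, 3] = 40.0
print(choose_cue(0.3, 0.8))   # 'range' -- range cue scored as more reliable
print(tip_from_range(depth))  # (2, 3)
```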


Medical Image Computing and Computer-Assisted Intervention | 2013

ToF meets RGB: novel multi-sensor super-resolution for hybrid 3-D endoscopy.

Thomas Köhler; Sven Haase; Sebastian Bauer; Jakob Wasza; Thomas Kilgus; Lena Maier-Hein; Hubertus Feußner; Joachim Hornegger

3-D endoscopy is an evolving field of research that aims to improve the safety and efficiency of minimally invasive surgeries. Time-of-Flight (ToF) imaging allows range data to be acquired in real time and has recently been engineered into a 3-D endoscope in combination with an RGB sensor (640×480 px) as a hybrid imaging system. However, the ToF sensor suffers from a low spatial resolution (64×50 px) and a poor signal-to-noise ratio. In this paper, we propose a novel multi-frame super-resolution framework to improve range images in a ToF/RGB multi-sensor setup. Our approach exploits high-resolution RGB data to estimate subpixel motion used as a cue for range super-resolution. The underlying non-parametric motion model based on optical flow makes the method applicable to endoscopic scenes with arbitrary endoscope movements. The proposed method was evaluated on synthetic and real images. Our approach improves the peak signal-to-noise ratio by 1.6 dB and the structural similarity by 0.02 compared to single-sensor super-resolution.
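The core idea of multi-frame super-resolution, fusing low-resolution frames whose subpixel shifts are known, can be sketched in one dimension. This is a naive shift-and-add illustration under the assumption of known shifts, not the paper's optical-flow-based method (there, the shifts are estimated from the RGB stream):

```python
import numpy as np

def shift_and_add_1d(frames, shifts, scale=2):
    """Place each low-res sample onto a finer grid according to its
    (known) subpixel shift and average overlapping contributions."""
    n = len(frames[0])
    acc = np.zeros(n * scale)
    cnt = np.zeros(n * scale)
    for frame, shift in zip(frames, shifts):
        pos = np.arange(n) * scale + int(round(shift * scale))
        pos = pos[pos < n * scale]      # drop samples falling off the grid
        acc[pos] += frame[: len(pos)]
        cnt[pos] += 1
    cnt[cnt == 0] = 1                   # avoid division by zero in gaps
    return acc / cnt

# two half-rate samplings of the fine signal 0..7, offset by half a pixel
fine = np.arange(8, dtype=float)
frames = [fine[0::2], fine[1::2]]       # [0, 2, 4, 6] and [1, 3, 5, 7]
recovered = shift_and_add_1d(frames, shifts=[0.0, 0.5], scale=2)
print(recovered)                        # [0. 1. 2. 3. 4. 5. 6. 7.]
```

With accurate shifts, the interleaved low-resolution samples reconstruct the fine grid exactly; errors in the estimated shifts are what motivates the outlier handling in the group's follow-up work.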


Computer Assisted Radiology and Surgery | 2012

MITK-ToF—Range data within MITK

Alexander Seitel; Kwong Yung; Sven Mersmann; Thomas Kilgus; Anja Groch; Thiago R. Dos Santos; Alfred M. Franz; Marco Nolden; Hans-Peter Meinzer; Lena Maier-Hein

Purpose: The time-of-flight (ToF) technique is an emerging technique for rapidly acquiring distance information and is becoming increasingly popular for intra-operative surface acquisition. Using the ToF technique as an intra-operative imaging modality requires seamless integration into the clinical workflow. We thus aim to integrate ToF support into an existing framework for medical image processing.
Methods: MITK-ToF was implemented as an extension of the open-source C++ Medical Imaging Interaction Toolkit (MITK) and provides the basic functionality needed for rapid prototyping and development of image-guided therapy (IGT) applications that utilize range data for intra-operative surface acquisition. The framework was designed with a module-based architecture separating the hardware-dependent image acquisition task from the processing of the range data.
Results: The first version of MITK-ToF has been released as an open-source toolkit and supports several ToF cameras and basic processing algorithms. The toolkit, a sample application, and a tutorial are available from http://mitk.org.
Conclusions: With the increased popularity of time-of-flight cameras for intra-operative surface acquisition, integration of range data support into medical image processing toolkits such as MITK is a necessary step. Handling the acquisition of range data from different cameras and the processing of these data requires software design principles that emphasize flexibility, extensibility, robustness, performance, and portability. The open-source toolkit MITK-ToF satisfies these requirements for the image-guided therapy community and has already been used in several research projects.


Computer Assisted Radiology and Surgery | 2016

Towards markerless navigation for percutaneous needle insertions.

Alexander Seitel; Nadine Bellemann; Mohammadreza Hafezi; Alfred M. Franz; Mark Servatius; Arash Saffari; Thomas Kilgus; Heinz Peter Schlemmer; Arianeb Mehrabi; Boris Radeleff; Lena Maier-Hein

Purpose: Percutaneous needle insertions are increasingly used for the diagnosis and treatment of abdominal lesions. The challenging part of computed tomography (CT)-guided punctures is the transfer of the insertion trajectory planned in the CT image to the patient. Conventionally, this often results in several needle repositionings and control CT scans. To address this issue, several navigation systems for percutaneous needle insertions have been presented; however, none of them has thus far become widely accepted in clinical routine, as their benefit for the patient has not outweighed the higher costs and the increased complexity in terms of bulky tracking systems and specialized markers for registration and tracking.
Methods: We present the first markerless and trackerless navigation concept for real-time patient localization and instrument guidance. It has specifically been designed to integrate smoothly into the clinical workflow and does not require markers or an external tracking system. The main idea is the utilization of a range imaging device that allows for contactless and radiation-free acquisition of both range and color information used for patient localization and instrument guidance.
Results: A first feasibility study in phantom and porcine models yielded median targeting accuracies of 6.9 and 19.4 mm, respectively.
Conclusions: Although system performance remains to be improved for clinical use, expected advances in camera technology as well as consideration of respiratory motion and automation of the individual steps will make this approach an interesting alternative for guiding percutaneous needle insertions.


Current Medical Imaging Reviews | 2013

ToF/RGB Sensor Fusion for 3-D Endoscopy

Sven Haase; Christoph Forman; Thomas Kilgus; Roland Bammer; Lena Maier-Hein; Joachim Hornegger

Acquisition of 3-D anatomical structure in minimally invasive surgery is an important step towards intra-operative guidance. In this context, the first prototype of a Time-of-Flight/RGB endoscope was engineered for simultaneous range and color data acquisition. Intrinsic and stereo camera calibration are essential to achieve an intuitive visualization of colored surfaces. Due to the early prototype stage, inhomogeneous illumination and low resolution (64×50 px) complicate the calibration significantly. To overcome these challenges, we propose a fully automatic multiscale calibration framework using a self-encoded marker for checkerboard detection. A first application demonstrates the feasibility of intra-operative measurement. Using our calibration scheme, we achieved a reprojection error of less than 0.7 px for the Time-of-Flight camera and 0.5 px for the RGB camera. Our framework eases calibration and enables future applications to use combined range and color data.
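The reprojection error quoted above is, in general terms, the pixel distance between detected calibration points and the projection of their known 3-D positions through the estimated camera model. A minimal sketch for points already expressed in camera coordinates (illustrative only; lens distortion and the extrinsic transform are omitted, and the numbers below are made up):

```python
import numpy as np

def reprojection_error(K, pts3d, pts2d):
    """Mean reprojection error in pixels.

    K     : 3x3 intrinsic matrix
    pts3d : N x 3 points in camera coordinates
    pts2d : N x 2 detected image points
    """
    proj = (K @ pts3d.T).T                # project through the intrinsics
    proj = proj[:, :2] / proj[:, 2:3]     # perspective divide
    return np.linalg.norm(proj - pts2d, axis=1).mean()

K = np.array([[100.0, 0.0, 32.0],
              [0.0, 100.0, 25.0],
              [0.0, 0.0, 1.0]])
pts3d = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0]])
detected = np.array([[32.0, 25.0], [43.0, 25.0]])   # second point 1 px off
print(reprojection_error(K, pts3d, detected))       # 0.5
```

Calibration optimizes the camera parameters to minimize exactly this quantity over all detected checkerboard corners.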


Bildverarbeitung für die Medizin | 2014

Outlier Detection for Multi-Sensor Super-Resolution in Hybrid 3D Endoscopy

Thomas Köhler; Sven Haase; Sebastian Bauer; Jakob Wasza; Thomas Kilgus; Lena Maier-Hein; Hubertus Feußner; Joachim Hornegger

In hybrid 3D endoscopy, range data is used to augment photometric information for minimally invasive surgery. As range sensors suffer from a coarse spatial resolution and a low signal-to-noise ratio, subpixel motion between multiple range images is used as a cue for super-resolution to obtain reliable range data. Unfortunately, this method is sensitive to outliers in the range images and the estimated subpixel displacements. In this paper, we propose an outlier detection scheme for robust super-resolution. First, we derive confidence maps to identify outliers in the displacement fields by correlation analysis of photometric data. Second, we apply an iteratively re-weighted least squares algorithm to obtain the associated range confidence maps. The joint confidence map is used to obtain super-resolved range data. We evaluate our approach on synthetic images and phantom data acquired by a Time-of-Flight/RGB endoscope. Our outlier detection improves the median peak signal-to-noise ratio by 1.1 dB.
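Iteratively re-weighted least squares (IRLS), used above to derive range confidence maps, repeatedly solves a weighted least-squares problem in which samples with large residuals receive small weights. A toy scalar example with an L1-type weighting (illustrative only; the paper's formulation operates on images, and this data is made up):

```python
import numpy as np

def irls_mean(x, iters=20, eps=1e-6):
    """Robust estimate of a signal level via IRLS with 1/|residual|
    weights: outliers are progressively downweighted."""
    mu = x.mean()                                   # least-squares start
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(x - mu), eps)   # small weight, big residual
        mu = (w * x).sum() / w.sum()                # weighted LS update
    return mu

samples = np.array([10.0, 10.1, 9.9, 10.05, 50.0])  # one gross outlier
print(round(samples.mean(), 2))      # 18.01 -- the outlier drags the mean
print(round(irls_mean(samples), 2))  # close to 10 -- outlier suppressed
```

The final weights themselves act as a per-sample confidence score, which is the role the range confidence maps play in the super-resolution above.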


Bildverarbeitung für die Medizin | 2012

ToF/RGB Sensor Fusion for Augmented 3D Endoscopy using a Fully Automatic Calibration Scheme

Sven Haase; Christoph Forman; Thomas Kilgus; Roland Bammer; Lena Maier-Hein; Joachim Hornegger

Three-dimensional endoscopy is an evolving field of research and offers great benefits for minimally invasive procedures. Besides the pure topology, color texture is an indispensable feature for optimal visualization. Therefore, in this paper, we propose a sensor fusion of a Time-of-Flight (ToF) and an RGB sensor. This requires an intrinsic and extrinsic calibration of both cameras. In particular, the low resolution of the ToF camera (64×50 px) and inhomogeneous illumination preclude the use of standard calibration techniques. By enhancing the image data and using self-encoded markers for automatic checkerboard detection, a re-projection error of less than 0.23 px was achieved for the ToF camera. The relative transformation of both sensors for data fusion was calculated in an automatic manner.

Collaboration


Dive into Thomas Kilgus's collaborations.

Top Co-Authors


Joachim Hornegger

University of Erlangen-Nuremberg


Sven Haase

University of Erlangen-Nuremberg


Alexander Seitel

German Cancer Research Center


Alfred M. Franz

German Cancer Research Center


Anja Groch

German Cancer Research Center


Jakob Wasza

University of Erlangen-Nuremberg


Kwong Yung

German Cancer Research Center
