Daniel Kondermann
Heidelberg University
Publications
Featured research published by Daniel Kondermann.
Medical Imaging 2007: Image Processing | 2007
Claudia Kondermann; Daniel Kondermann; Michelle Yan
The prevalence of diabetes is expected to increase dramatically in the coming years; already today it accounts for a major proportion of the health care budget in many countries. Diabetic Retinopathy (DR), a microvascular complication very often seen in diabetes patients, is the most common cause of visual loss in the working-age population of developed countries today. Since the possibility of slowing or even stopping the progress of this disease depends on the early detection of DR, an automatic analysis of fundus images would be of great help to the ophthalmologist, given the small size of the symptoms and the large number of patients. An important symptom of DR is abnormally wide veins, leading to an unusually low ratio of the average diameter of arteries to veins (AVR). Other conditions, such as high blood pressure or diseases of the pancreas, also have an abnormal AVR value as one symptom. To determine the AVR, a classification of vessels as arteries or veins is indispensable. To our knowledge, despite its importance, there have been only two approaches to vessel classification so far. We therefore propose an improved method. We compare two feature extraction methods and two classification methods based on support vector machines and neural networks. Given a hand-segmentation of the vessels, our approach correctly classifies 95.32% of vessel pixels. This value decreases by 10% on average if the result of a segmentation algorithm is used as the basis for classification.
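For illustration, here is a minimal sketch of the two ingredients involved: an SVM classifier on per-pixel vessel features and the AVR computed from classified vessel diameters. The feature layout and SVM parameters are assumptions for this example, not the exact setup of the paper.

```python
# Hypothetical sketch: artery/vein classification of vessel pixels and
# the resulting artery-to-vein ratio (AVR). Feature extraction is assumed
# to happen elsewhere; the SVM parameters are illustrative only.
import numpy as np
from sklearn.svm import SVC

def classify_vessel_pixels(train_feats, train_labels, test_feats):
    """Train an SVM on labeled vessel pixels (0 = artery, 1 = vein)."""
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(train_feats, train_labels)
    return clf.predict(test_feats)

def avr(artery_diameters, vein_diameters):
    """Ratio of average artery diameter to average vein diameter."""
    return np.mean(artery_diameters) / np.mean(vein_diameters)

# Abnormally wide veins push the AVR below its usual range:
print(avr([95.0, 100.0, 105.0], [150.0, 160.0, 155.0]))  # ~0.65
```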
Computer Vision and Pattern Recognition | 2013
Ralf Haeusler; Rahul Nair; Daniel Kondermann
With the aim of improving the accuracy of stereo confidence measures, we apply the random decision forest framework to a large set of diverse stereo confidence measures. Learning and testing sets were drawn from the recently introduced KITTI dataset, which currently poses higher challenges to stereo solvers than other benchmarks with ground truth for stereo evaluation. We experiment with semi-global matching (SGM) and a census data term, the best-performing real-time-capable stereo method known to date. On KITTI images, SGM still produces a significant number of errors. We obtain consistently improved area-under-curve values of sparsification measures in comparison to the best-performing single stereo confidence measures where the number of stereo errors is large. More specifically, our method performs best in all but one of the 194 frames of the KITTI dataset.
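The area-under-curve criterion mentioned above comes from sparsification: pixels are removed in order of increasing confidence, and a good confidence measure drives the error of the remaining pixels down quickly. A minimal sketch of this evaluation, with illustrative step count and toy data:

```python
# Sketch of a sparsification curve and its area under curve (AUC).
# Lower AUC means the confidence measure ranks erroneous pixels earlier.
import numpy as np

def sparsification_auc(confidence, error, steps=100):
    order = np.argsort(confidence)             # least confident first
    err = np.asarray(error, dtype=float)[order]
    n = err.size
    fractions = np.linspace(0.0, 0.99, steps)  # fraction of pixels removed
    curve = np.array([err[int(f * n):].mean() for f in fractions])
    # trapezoidal area under the sparsification curve
    auc = np.sum((curve[1:] + curve[:-1]) / 2 * np.diff(fractions))
    return auc, curve

# Toy usage: an "oracle" confidence (inverse of the error) gives a low AUC.
rng = np.random.default_rng(0)
error = rng.random(10_000)
auc, _ = sparsification_auc(-error, error)
print(auc)
```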
Image Processing On Line | 2013
Enric Meinhardt-Llopis; Javier Sánchez Pérez; Daniel Kondermann
The seminal work of Horn and Schunck is the first variational method for optical flow estimation. It introduced a novel framework in which the optical flow is computed as the solution of a minimization problem. From the assumption that pixel intensities do not change over time, the optical flow constraint equation is derived. This equation relates the optical flow to the derivatives of the image. Since infinitely many vector fields satisfy the optical flow constraint, the problem is ill-posed. To overcome this, Horn and Schunck introduced an additional regularity condition that restricts the possible solutions. Their method minimizes both the optical flow constraint and the magnitude of the variations of the flow field, producing smooth vector fields. One limitation of this method is that, typically, it can only estimate small motions. In the presence of large displacements, the method fails when the gradient of the image is not smooth enough. In this work, we describe an implementation of the original Horn and Schunck method and also introduce a multi-scale strategy to deal with larger displacements. For this multi-scale strategy, we create a pyramidal structure of downsampled images and replace the optical flow constraint equation with a nonlinear formulation. To tackle this nonlinear formulation, we linearize it and solve iteratively at each scale. There are two common approaches here: one computes a motion increment in each iteration; the other, which we follow, computes the full flow in each iteration. The solutions are incrementally refined over the scales. This pyramidal structure is a standard tool in many optical flow methods.
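As a reference point, here is a compact single-scale sketch of the original Horn and Schunck iteration (without the pyramid), using the classic 2x2 derivative stencils and the Jacobi update derived from the Euler-Lagrange equations; alpha is the regularization weight and the iteration count is illustrative:

```python
# Single-scale Horn-Schunck sketch: iterate the coupled update equations
# for the flow components u and v until (approximate) convergence.
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(I1, I2, alpha=15.0, iters=100):
    I1, I2 = I1.astype(float), I2.astype(float)
    kx = np.array([[-1.0, 1.0], [-1.0, 1.0]]) * 0.25
    ky = np.array([[-1.0, -1.0], [1.0, 1.0]]) * 0.25
    Ix = convolve(I1, kx) + convolve(I2, kx)
    Iy = convolve(I1, ky) + convolve(I2, ky)
    It = convolve(I1, np.full((2, 2), -0.25)) + convolve(I2, np.full((2, 2), 0.25))
    avg = np.array([[1/12, 1/6, 1/12], [1/6, 0.0, 1/6], [1/12, 1/6, 1/12]])
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(iters):
        u_bar, v_bar = convolve(u, avg), convolve(v, avg)
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha**2 + Ix**2 + Iy**2
        u = u_bar - Ix * num / den   # Jacobi step for u
        v = v_bar - Iy * num / den   # Jacobi step for v
    return u, v
```

The multi-scale version described in the abstract wraps this iteration in a coarse-to-fine loop, warping one image toward the other with the current flow estimate before refining at the next finer scale.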
Optical Engineering | 2012
Stephan Meister; Bernd Jähne; Daniel Kondermann
We describe a high-performance stereo camera system for capturing image sequences with high temporal and spatial resolution for the evaluation of various image processing tasks. The system was primarily designed for the complex outdoor and traffic scenes that frequently occur in the automotive industry, but it is also suited for other applications. To this end, the system is equipped with a very accurate inertial measurement unit and a global positioning system, which provide exact camera movement and position data. The system is already in active use and has produced several terabytes of challenging image sequences, which are partly available for download.
Medical Image Computing and Computer-Assisted Intervention | 2014
Lena Maier-Hein; Sven Mersmann; Daniel Kondermann; Sebastian Bodenstedt; Alexandro Sanchez; Christian Stock; Hannes Kenngott; Mathias Eisenmann; Stefanie Speidel
Machine learning algorithms are gaining increasing interest in the context of computer-assisted interventions. One of the bottlenecks so far, however, has been the availability of training data, typically generated by medical experts with very limited resources. Crowdsourcing is a new trend that is based on outsourcing cognitive tasks to many anonymous untrained individuals from an online community. In this work, we investigate the potential of crowdsourcing for segmenting medical instruments in endoscopic image data. Our study suggests that (1) segmentations computed from annotations of multiple anonymous non-experts are comparable to those made by medical experts and (2) training data generated by the crowd is of the same quality as that annotated by medical experts. Given the speed of annotation, scalability and low costs, this implies that the scientific community might no longer need to rely on experts to generate reference or training data for certain applications. To trigger further research in endoscopic image processing, the data used in this study will be made publicly available.
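One simple way to fuse multiple non-expert annotations into a single segmentation is a pixel-wise majority vote; the paper's exact fusion scheme may differ, but this sketch illustrates why several anonymous annotations can approach expert quality:

```python
# Hypothetical fusion sketch: combine binary segmentation masks from
# several crowd annotators by pixel-wise majority vote.
import numpy as np

def majority_vote(masks):
    """masks: array of shape (n_annotators, H, W) with binary labels."""
    masks = np.asarray(masks, dtype=float)
    return (masks.mean(axis=0) >= 0.5).astype(np.uint8)

# Toy usage with 7 random annotators on a 4x4 image:
crowd = np.random.default_rng(1).random((7, 4, 4)) > 0.5
print(majority_vote(crowd))
```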
DAGM Conference on Pattern Recognition | 2007
Claudia Kondermann; Daniel Kondermann; Bernd Jähne; Christoph S. Garbe
Confidence measures are important for the validation of optical flow fields, as they estimate the correctness of each displacement vector. There are several frequently used confidence measures, which have been found to be of at best intermediate quality. Hence, we propose a new confidence measure based on linear subspace projections. The results are compared to the best previously proposed confidence measures with respect to an optimal confidence. Using the proposed measure, we are able to improve previous results by up to 31%.
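A hedged sketch of the subspace-projection idea: learn a low-dimensional linear basis of plausible flow patches, then score each patch by how well the basis reconstructs it, with a small residual indicating high confidence. The PCA construction and the residual-to-confidence mapping here are assumptions for illustration; the paper's exact formulation may differ.

```python
# Illustrative subspace-projection confidence for optical flow patches.
import numpy as np

def fit_flow_subspace(train_patches, k=10):
    """train_patches: (n, d) flattened flow patches from reliable flows."""
    mean = train_patches.mean(axis=0)
    _, _, vt = np.linalg.svd(train_patches - mean, full_matrices=False)
    return mean, vt[:k]                       # top-k principal directions

def confidence(patch, mean, basis):
    c = patch - mean
    recon = basis.T @ (basis @ c)             # projection onto the subspace
    return 1.0 / (1.0 + np.linalg.norm(c - recon))  # residual -> (0, 1]
```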
Asian Conference on Computer Vision | 2016
Katrin Honauer; Ole Johannsen; Daniel Kondermann; Bastian Goldluecke
In computer vision communities such as stereo, optical flow, or visual tracking, commonly accepted and widely used benchmarks have enabled objective comparison and boosted scientific progress.
Medical Image Computing and Computer-Assisted Intervention | 2014
Lena Maier-Hein; Sven Mersmann; Daniel Kondermann; Christian Stock; Hannes Kenngott; Alexandro Sanchez; Martin Wagner; Anas Preukschas; Anna-Laura Wekerle; Stefanie Helfert; Sebastian Bodenstedt; Stefanie Speidel
Computer-assisted minimally-invasive surgery (MIS) is often based on algorithms that require establishing correspondences between endoscopic images. However, reference annotations frequently required to train or validate a method are extremely difficult to obtain because they are typically made by a medical expert with very limited resources, and publicly available data sets are still far too small to capture the wide range of anatomical/scene variance. Crowdsourcing is a new trend that is based on outsourcing cognitive tasks to many anonymous untrained individuals from an online community. To our knowledge, this paper is the first to investigate the concept of crowdsourcing in the context of endoscopic video image annotation for computer-assisted MIS. According to our study on publicly available in vivo data with manual reference annotations, anonymous non-experts obtain a median annotation error of 2 px (n = 10,000). By applying cluster analysis to multiple annotations per correspondence, this error can be reduced to about 1 px, which is comparable to that obtained by medical experts (n = 500). We conclude that crowdsourcing is a viable method for generating high quality reference correspondences in endoscopic video images.
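The error reduction from roughly 2 px to 1 px comes from clustering multiple annotations of the same correspondence and keeping only the densest group of clicks. A sketch of this idea, using DBSCAN with illustrative parameters (the paper's clustering setup may differ):

```python
# Hypothetical sketch: fuse several (x, y) crowd clicks for one
# correspondence by keeping the largest dense cluster and returning
# its centroid, so isolated outlier clicks are discarded.
import numpy as np
from sklearn.cluster import DBSCAN

def fuse_clicks(points, eps=3.0, min_samples=2):
    """points: (n, 2) pixel coordinates from n annotators."""
    points = np.asarray(points, dtype=float)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    valid = labels[labels >= 0]
    if valid.size == 0:                        # no dense cluster found
        return points.mean(axis=0)
    best = np.bincount(valid).argmax()         # largest cluster wins
    return points[labels == best].mean(axis=0)

clicks = np.array([[100, 50], [101, 51], [99, 50], [140, 80]])
print(fuse_clicks(clicks))                     # outlier at (140, 80) ignored
```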
Time-of-Flight and Depth Imaging | 2013
Kai Berger; Stephan Meister; Rahul Nair; Daniel Kondermann
In the three years since the launch of the Microsoft Kinect® on the end-consumer market, we have witnessed a small revolution in computer vision research toward the use of a standardized consumer-grade RGBD sensor for scene content retrieval. Besides classical localization and motion-capturing tasks, the Kinect has successfully been employed for the reconstruction of opaque and transparent objects. This report gives a comprehensive overview of the main publications that use the Microsoft Kinect outside its original context as a decision-forest-based motion-capturing tool.
Time-of-Flight and Depth Imaging | 2013
Rahul Nair; Kai Ruhl; Frank Lenzen; Stephan Meister; Henrik Schäfer; Christoph S. Garbe; Martin Eisemann; Marcus A. Magnor; Daniel Kondermann
Due to the demand for depth maps of higher quality than is possible with any single depth imaging technique today, there has been increasing interest in combining different depth sensors to produce a “super-camera” that is more than the sum of its parts. In this survey paper, we give an overview of methods for the fusion of Time-of-Flight (ToF) and passive stereo data, as well as applications of the resulting high-quality depth maps. Additionally, we provide a tutorial-style introduction to the principles behind ToF-stereo fusion and the evaluation criteria used to benchmark these methods.
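At its simplest, the fusion principle amounts to combining two per-pixel depth hypotheses weighted by their confidences. The methods surveyed in the paper are far more elaborate (e.g. variational or graph-based), so treat this toy sketch only as an illustration of the underlying idea:

```python
# Toy confidence-weighted fusion of a ToF depth map and a stereo depth
# map; all inputs are per-pixel arrays of the same shape.
import numpy as np

def fuse_depth(d_tof, c_tof, d_stereo, c_stereo, eps=1e-6):
    """Per-pixel confidence-weighted average of two depth maps."""
    w = c_tof + c_stereo + eps          # eps guards against zero weights
    return (c_tof * d_tof + c_stereo * d_stereo) / w
```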