Publications


Featured research published by Stephan Meister.


Vision, Modeling and Visualization | 2013

Datasets and Benchmarks for Densely Sampled 4D Light Fields

Sven Wanner; Stephan Meister; Bastian Goldluecke

We present a new benchmark database to compare and evaluate existing and upcoming algorithms tailored to light field processing. The data is characterised by a dense sampling of the light fields, which best fits current plenoptic cameras and is a characteristic property not found in current multi-view stereo benchmarks. It allows the disparity space to be treated as a continuous space, and enables algorithms based on epipolar plane image analysis without having to refocus first. All datasets provide ground truth depth for at least the center view, and some have additional segmentation data available. Some of the light fields are computer-generated; the rest were acquired with a gantry, with ground truth depth established by previously scanning the imaged objects with a structured light scanner. In addition, we provide source code for an extensive evaluation of a number of previously published stereo, epipolar plane image analysis, and segmentation algorithms on the database.
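The epipolar plane image (EPI) idea the abstract relies on can be sketched in a few lines: in a densely sampled light field, fixing one spatial and one angular coordinate yields a 2D slice in which each scene point traces a straight line whose slope is its disparity. The toy below (not the paper's code; all names and numbers are illustrative) simulates such a slice and recovers the slope by cross-correlating adjacent angular rows.

```python
import numpy as np

def make_epi(n_views=9, width=200, disparity=2.0, seed=0):
    """Simulate an EPI: each angular row shifts a fixed 1D texture
    by `disparity` pixels, as a constant-depth scene point would."""
    rng = np.random.default_rng(seed)
    texture = rng.standard_normal(width)
    x = np.arange(width)
    # Row s samples the texture at x - disparity * s (edge-clamped).
    return np.stack([np.interp(x - disparity * s, x, texture)
                     for s in range(n_views)])

def estimate_disparity(epi):
    """Recover the line slope by cross-correlating adjacent EPI rows."""
    shifts = []
    for a, b in zip(epi[:-1], epi[1:]):
        corr = np.correlate(b - b.mean(), a - a.mean(), mode="full")
        shifts.append(np.argmax(corr) - (len(a) - 1))
    return float(np.mean(shifts))

epi = make_epi(disparity=2.0)
d = estimate_disparity(epi)   # recovers a slope close to the true disparity
```

Because the sampling is dense, no refocusing step is needed before reading the slope off the EPI, which is the property the benchmark is designed to exercise.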


Optical Engineering | 2012

Outdoor Stereo Camera System for the Generation of Real-World Benchmark Data Sets

Stephan Meister; Bernd Jähne; Daniel Kondermann

We describe a high-performance stereo camera system to capture image sequences with high temporal and spatial resolution for the evaluation of various image processing tasks. The system was primarily designed for complex outdoor and traffic scenes that frequently occur in the automotive industry, but is also suited for other applications. For this task the system is equipped with a very accurate inertial measurement unit and global positioning system, which provides exact camera movement and position data. The system is already in active use and has produced several terabytes of challenging image sequences which are partly available for download.


Time-of-Flight and Depth Imaging | 2013

A State of the Art Report on Kinect Sensor Setups in Computer Vision

Kai Berger; Stephan Meister; Rahul Nair; Daniel Kondermann

In the three years since the launch of the Microsoft Kinect® in the end-consumer market, we have witnessed a small revolution in computer vision research towards the use of a standardized consumer-grade RGBD sensor for scene content retrieval. Besides classical localization and motion-capture tasks, the Kinect has successfully been employed for the reconstruction of opaque and transparent objects. This report gives a comprehensive overview of the main publications using the Microsoft Kinect outside its original context as a decision-forest-based motion-capture tool.


Time-of-Flight and Depth Imaging | 2013

A Survey on Time-of-Flight Stereo Fusion

Rahul Nair; Kai Ruhl; Frank Lenzen; Stephan Meister; Henrik Schäfer; Christoph S. Garbe; Martin Eisemann; Marcus A. Magnor; Daniel Kondermann

Due to the demand for depth maps of higher quality than is possible with any single depth imaging technique today, there has been increasing interest in combining different depth sensors to produce a “super-camera” that is more than the sum of its parts. In this survey paper, we give an overview of methods for the fusion of Time-of-Flight (ToF) and passive stereo data, as well as applications of the resulting high-quality depth maps. Additionally, we provide a tutorial-based introduction to the principles behind ToF stereo fusion and the evaluation criteria used to benchmark these methods.


International Conference on Computer Vision | 2012

High accuracy TOF and stereo sensor fusion at interactive rates

Rahul Nair; Frank Lenzen; Stephan Meister; Henrik Schäfer; Christoph S. Garbe; Daniel Kondermann

We propose two new GPU-based sensor fusion approaches for time of flight (TOF) and stereo depth data. Data fidelity measures are defined to deal with the fundamental limitations of both techniques alone. Our algorithms combine TOF and stereo, yielding megapixel depth maps, enabling our approach to be used in a movie production scenario. Our local model works at interactive rates but yields noisier results, whereas our variational technique is more robust at a higher computational cost. The results show an improvement over each individual method with TOF interreflection remaining an open challenge. To encourage quantitative evaluations, a ground truth dataset is made publicly available.
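The paper's variational and local GPU models are not reproduced here, but the core idea of combining the two sensors via data-fidelity terms can be sketched as the simplest possible instance: a per-pixel fusion weighted by each sensor's confidence (inverse variance). Function names and numbers below are illustrative assumptions.

```python
import numpy as np

def fuse_depth(d_tof, var_tof, d_stereo, var_stereo):
    """Inverse-variance weighted average of two depth hypotheses per pixel:
    the lower-variance (more trusted) sensor dominates the result."""
    w_tof = 1.0 / var_tof
    w_stereo = 1.0 / var_stereo
    return (w_tof * d_tof + w_stereo * d_stereo) / (w_tof + w_stereo)

# ToF: dense but low-resolution; stereo: sharper but unreliable in flat regions.
d_tof = np.full((4, 4), 2.0)       # metres
d_stereo = np.full((4, 4), 2.2)
fused = fuse_depth(d_tof, np.full((4, 4), 0.01),
                   d_stereo, np.full((4, 4), 0.04))
# fused lies between the inputs, closer to the lower-variance ToF estimate
```

A variational method replaces this independent per-pixel average with a spatial regularizer, which is what buys robustness at the higher computational cost the abstract mentions.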


Asian Conference on Computer Vision | 2014

Stereo Ground Truth with Error Bars

Daniel Kondermann; Rahul Nair; Stephan Meister; Wolfgang Mischler; Burkhard Güssefeld; Katrin Honauer; Sabine Hofmann; Claus Brenner; Bernd Jähne

Creating stereo ground truth based on real images is a measurement task. Measurements are never perfectly accurate: the depth at each pixel follows an error distribution. A common way to estimate the quality of measurements is error bars. In this paper we describe a methodology to add error bars to images of previously scanned static scenes. The main challenge for stereo ground truth error estimates based on such data is the nonlinear matching of 2D images to 3D points. Our method uses 2D feature quality, 3D point and calibration accuracy, as well as covariance matrices of bundle adjustments. We sample the reference data error, which is the 3D depth distribution of each point, projected into 3D image space. The disparity distribution at each pixel location is then estimated by projecting samples of the reference data error onto the 2D image plane. An analytical Gaussian error propagation is used to validate the results. As a proof of concept, we created ground truth for an image sequence with 100 frames. Results show that disparity accuracies well below one pixel can be achieved, albeit with much larger errors at depth discontinuities, mainly caused by uncertain estimates of the camera location.
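The sampling idea can be illustrated in miniature: draw 3D points from a measured error distribution, project each sample through a rectified stereo model, and read the disparity error bar off the resulting sample spread. The camera model and all numbers below are illustrative, not taken from the paper.

```python
import numpy as np

def disparity_samples(point, cov, f=1000.0, baseline=0.2, n=10000, seed=0):
    """Monte Carlo error propagation: sample 3D positions from their
    covariance and map each sample to a disparity value."""
    rng = np.random.default_rng(seed)
    pts = rng.multivariate_normal(point, cov, size=n)
    # Rectified stereo pair: disparity = f * baseline / depth (Z).
    return f * baseline / pts[:, 2]

cov = np.diag([1e-4, 1e-4, 1e-4])            # ~1 cm std per axis
d = disparity_samples(np.array([0.0, 0.0, 2.0]), cov)
mean, err = d.mean(), d.std()                # disparity and its error bar
```

Analytical Gaussian propagation of the same model gives std ≈ (f·b/Z²)·σ_Z, which is the kind of cross-check the paper uses to validate its sampled distributions.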


Time-of-Flight and Depth Imaging | 2013

Denoising Strategies for Time-of-Flight Data

Frank Lenzen; Kwang In Kim; Henrik Schäfer; Rahul Nair; Stephan Meister; Florian Becker; Christoph S. Garbe; Christian Theobalt

When considering the task of denoising ToF data, two issues arise concerning the optimal strategy. The first is the choice of an appropriate denoising method and its adaptation to ToF data; the second is the optimal positioning of the denoising step within the processing pipeline, between the acquisition of the sensor's raw data and the final output of the depth map. Concerning the first issue, several denoising approaches specifically for ToF data have been proposed in the literature, and one contribution of this chapter is to provide an overview. To tackle the second issue, we focus on two state-of-the-art methods, bilateral filtering and total variation (TV) denoising, and discuss several alternative positions in the pipeline where these methods can be applied. In our experiments, we compare and evaluate the results of each combination of method and position, both qualitatively and quantitatively. It turns out that for TV denoising the optimal position is at the very end of the pipeline. For the bilateral filter, a quantitative comparison shows that applying it to the raw data, together with subsequent median filtering, yields a low error with respect to ground truth. Qualitatively, it competes with applying the (cross-)bilateral filter to the depth data. In general, the optimal position depends on the considered method. As a consequence, for any newly introduced denoising technique, finding its optimal position within the pipeline remains an open issue.
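A minimal sketch of one of the two filters the chapter compares, applied to a synthetic depth map (a step edge plus noise). This is a textbook bilateral filter, not the chapter's implementation; all parameters are illustrative.

```python
import numpy as np

def bilateral(depth, sigma_s=2.0, sigma_r=0.1, radius=3):
    """Edge-preserving bilateral filter: each output pixel is a weighted
    mean of its neighbourhood, weighted by spatial distance (sigma_s)
    AND depth similarity (sigma_r), so depth discontinuities survive."""
    out = np.zeros_like(depth)
    h, w = depth.shape
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            patch = depth[y0:y1, x0:x1]
            yy, xx = np.mgrid[y0:y1, x0:x1]
            w_s = np.exp(-((yy - y)**2 + (xx - x)**2) / (2 * sigma_s**2))
            w_r = np.exp(-(patch - depth[y, x])**2 / (2 * sigma_r**2))
            wgt = w_s * w_r
            out[y, x] = (wgt * patch).sum() / wgt.sum()
    return out

rng = np.random.default_rng(1)
truth = np.where(np.arange(32) < 16, 1.0, 2.0) * np.ones((32, 1))  # step edge
noisy = truth + 0.05 * rng.standard_normal((32, 32))
denoised = bilateral(noisy)
# Noise is reduced while the depth discontinuity stays sharp, because
# cross-edge neighbours get near-zero range weight.
```

TV denoising, the chapter's other candidate, instead minimizes a global energy with a gradient-sparsity prior; the pipeline-position question applies identically to both.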


Time-of-Flight and Depth Imaging | 2013

Ground Truth for Evaluating Time of Flight Imaging

Rahul Nair; Stephan Meister; Martin Lambers; Michael Balda; Hannes G. Hofmann; Andreas Kolb; Daniel Kondermann; Bernd Jähne

In this work, we systematically analyze what good ground truth (GT) datasets for evaluating methods based on Time-of-Flight (ToF) imaging data should look like. Starting from a high-level characterization of the application domains and the requirements they typically have, we describe what good datasets should look like and discuss how algorithms can be evaluated using them. Furthermore, we discuss the two different ways of obtaining ground truth data: by measurement and by simulation.


International Conference on Image Processing | 2014

Time of flight motion compensation revisited

Jens-Malte Gottfried; Rahul Nair; Stephan Meister; Christoph S. Garbe; Daniel Kondermann

In this paper, we study motion artifacts that arise in Time-of-Flight imaging of dynamic scenes, caused by the sequential nature of the raw image acquisition process used to compute the final depth image. Many methods for the compensation of such errors have been proposed to date, but a proper comparison has been lacking. We bridge this gap not only by evaluating these methods, but also by providing implementations of all of them as a baseline for the community. By exchanging the calibration model these methods require with a model closer to reality, we were able to improve the results of all related methods without any loss of performance.


European Conference on Computer Vision | 2012

Erratum: High Accuracy TOF and Stereo Sensor Fusion at Interactive Rates

Rahul Nair; Frank Lenzen; Stephan Meister; Henrik Schäfer; Christoph S. Garbe; Daniel Kondermann

There was an error in the acknowledgements section of this paper. The correct acknowledgement text is as follows:
