Sven Wanner
Heidelberg University
Publications
Featured research published by Sven Wanner.
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2014
Sven Wanner; Bastian Goldluecke
We develop a continuous framework for the analysis of 4D light fields, and describe novel variational methods for disparity reconstruction as well as spatial and angular super-resolution. Disparity maps are estimated locally using epipolar plane image analysis without the need for expensive matching cost minimization. The method is fast and has inherent subpixel accuracy, since no discretization of the disparity space is necessary. In a variational framework, we employ the disparity maps to generate super-resolved novel views of a scene, which corresponds to increasing the sampling rate of the 4D light field in the spatial as well as the angular direction. In contrast to previous work, we formulate the problem of view synthesis as a continuous inverse problem, which allows us to correctly take into account foreshortening effects caused by scene geometry transformations. All optimization problems are solved with state-of-the-art convex relaxation techniques. We test our algorithms on a number of real-world examples as well as on our new benchmark data set for light fields, and compare the results to a multi-view stereo method. The proposed method is both faster and more accurate. Data sets and source code are provided online for additional evaluation.
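The key idea behind the EPI-based disparity step can be sketched as follows: a scene point traces a line on an epipolar plane image, the line's slope equals the point's disparity, and a local structure tensor recovers that slope densely and with subpixel precision, with no matching cost minimization. A minimal NumPy/SciPy sketch of this idea (smoothing scales and the coherence confidence measure are illustrative choices, not the authors' exact formulation):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def epi_disparity(epi, sigma_grad=0.8, sigma_avg=2.0):
    """Local disparity on an epipolar plane image epi[s, x] via the
    structure tensor. With the convention that a scene point moves as
    x = x0 + d*s, it traces a line of slope d on the EPI, and the
    tensor's dominant orientation recovers d. Sketch only; parameter
    values are illustrative."""
    gs = gaussian_filter(epi, sigma_grad, order=(1, 0))  # d/ds
    gx = gaussian_filter(epi, sigma_grad, order=(0, 1))  # d/dx
    # Component-wise smoothed structure tensor.
    Jxx = gaussian_filter(gx * gx, sigma_avg)
    Jss = gaussian_filter(gs * gs, sigma_avg)
    Jxs = gaussian_filter(gx * gs, sigma_avg)
    # Orientation of the dominant gradient direction; the EPI line
    # direction (d, 1) is perpendicular to it, so d = -tan(phi).
    phi = 0.5 * np.arctan2(2.0 * Jxs, Jxx - Jss)
    disparity = -np.tan(phi)
    # Coherence in [0, 1] serves as a per-pixel confidence measure.
    coherence = np.sqrt((Jxx - Jss) ** 2 + 4.0 * Jxs ** 2) / (Jxx + Jss + 1e-12)
    return disparity, coherence
```

Because the estimate comes from a continuous orientation angle rather than from a discrete set of tested disparities, no discretization of the disparity space is needed.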
Computer Vision and Pattern Recognition | 2012
Sven Wanner; Bastian Goldluecke
We present a novel paradigm to deal with depth reconstruction from 4D light fields in a variational framework. Taking into account the special structure of light field data, we reformulate the problem of stereo matching to a constrained labeling problem on epipolar plane images, which can be thought of as vertical and horizontal 2D cuts through the field. This alternative formulation allows us to estimate accurate depth values even for specular surfaces, while simultaneously taking into account global visibility constraints in order to obtain consistent depth maps for all views. The resulting optimization problems are solved with state-of-the-art convex relaxation techniques. We test our algorithm on a number of synthetic and real-world examples captured with a light field gantry and a plenoptic camera, and compare to ground truth where available. All data sets as well as source code are provided online for additional evaluation.
Vision, Modeling, and Visualization | 2013
Sven Wanner; Stephan Meister; Bastian Goldluecke
We present a new benchmark database to compare and evaluate existing and upcoming algorithms which are tailored to light field processing. The data is characterised by a dense sampling of the light fields, which best fits current plenoptic cameras and is a characteristic property not found in current multi-view stereo benchmarks. It allows the disparity space to be treated as continuous, and enables algorithms based on epipolar plane image analysis without the need to refocus first. All datasets provide ground truth depth for at least the center view, while some have additional segmentation data available. Some of the light fields are computer-generated; the rest were acquired with a gantry, with ground truth depth established by a prior scan of the imaged objects using a structured light scanner. In addition, we provide source code for an extensive evaluation of a number of previously published stereo, epipolar plane image analysis and segmentation algorithms on the database.
European Conference on Computer Vision | 2012
Sven Wanner; Bastian Goldluecke
We present a variational framework to generate super-resolved novel views from 4D light field data sampled at low resolution, for example by a plenoptic camera. In contrast to previous work, we formulate the problem of view synthesis as a continuous inverse problem, which allows us to correctly take into account foreshortening effects caused by scene geometry transformations. High-accuracy depth maps for the input views are locally estimated using epipolar plane image analysis, which yields floating point depth precision without the need for expensive matching cost minimization. The disparity maps are further improved by increasing angular resolution with synthesized intermediate views. Minimization of the super-resolution model energy is performed with state-of-the-art convex optimization algorithms within seconds.
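In symbols, such a continuous inverse problem typically takes the familiar shape of a variational super-resolution energy; the formula below is a generic sketch consistent with the abstract, not necessarily the paper's exact model. One minimizes, over the super-resolved novel view \(u\),

```latex
E(u) \;=\; \lambda \int_\Omega |\nabla u| \,\mathrm{d}x
\;+\; \sum_i \int_{\Omega_i} \big( (b * u)(\tau_i(x)) - v_i(x) \big)^2 \,\mathrm{d}x ,
```

where the \(v_i\) are the low-resolution input views, \(\tau_i\) the depth-induced warps from view \(i\) into the novel view, \(b\) the sensor's point spread function, and the first term a convex regularizer (here total variation). Formulating the data term over the continuous domain is what allows foreshortening under the warps \(\tau_i\) to be accounted for correctly.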
Computer Vision and Pattern Recognition | 2013
Sven Wanner; Christoph N. Straehle; Bastian Goldluecke
We present the first variational framework for multi-label segmentation on the ray space of 4D light fields. For traditional segmentation of single images, features need to be extracted from the 2D projection of a three-dimensional scene. The associated loss of geometry information can cause severe problems, for example if different objects have a very similar visual appearance. In this work, we show that using a light field instead of an image not only makes it possible to train classifiers that overcome many of these problems, but also provides an optimal data structure for label optimization by implicitly providing scene geometry information. It is thus possible to consistently optimize label assignment over all views simultaneously. As a further contribution, we make all light fields available online with complete depth and segmentation ground truth data where available, and thus establish the first benchmark data set for light field analysis to facilitate competitive further development of algorithms.
Computer Vision and Pattern Recognition | 2013
Bastian Goldluecke; Sven Wanner
Unlike traditional images which do not offer information for different directions of incident light, a light field is defined on ray space, and implicitly encodes scene geometry data in a rich structure which becomes visible on its epipolar plane images. In this work, we analyze regularization of light fields in variational frameworks and show that their variational structure is induced by disparity, which is in this context best understood as a vector field on epipolar plane image space. We derive differential constraints on this vector field to enable consistent disparity map regularization. Furthermore, we show how the disparity field is related to the regularization of more general vector-valued functions on the 4D ray space of the light field. This way, we derive an efficient variational framework with convex priors, which can serve as a foundation for a large class of inverse problems on ray space.
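One way to see why disparity induces structure on EPI space (a sketch consistent with the abstract, not necessarily the paper's exact derivation): a scene point with disparity \(d\) traces the direction \(r = (d, 1)^{\mathsf T}\) on the epipolar plane image, and since disparity is a property of the scene point itself, \(d\) must be constant along its own trace,

```latex
\partial_r d \;=\; d\,\partial_x d + \partial_s d \;=\; 0 ,
\qquad r = (d, 1)^{\mathsf T} .
```

This is a differential constraint that any consistent disparity field on EPI space must satisfy, and it can be imposed as a prior when regularizing disparity maps or other functions defined on ray space.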
German Conference on Pattern Recognition | 2013
Sven Wanner; Bastian Goldluecke
While multi-view stereo reconstruction of Lambertian surfaces is nowadays highly robust, reconstruction methods based on correspondence search usually fail in the presence of ambiguous information, as in the case of partially reflecting and transparent surfaces. On the epipolar plane images of a 4D light field, however, surfaces like these give rise to overlaid patterns of oriented lines. We show that these can be identified and analyzed quickly and accurately with higher order structure tensors. The resulting method can reconstruct with high precision both the geometry of the surface and that of the reflected or transmitted object. Accuracy and feasibility are shown on both ray-traced synthetic scenes and real-world data recorded by our gantry.
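The underlying idea of the higher-order analysis can be sketched with the standard double-orientation argument (the authors' exact formulation may differ). If the EPI is a superposition \(u = u_1 + u_2\) of two oriented patterns with disparities \(d_1\) and \(d_2\), for example a semi-transparent surface and the object seen through it, then applying both directional derivatives annihilates \(u\):

```latex
\partial_{r_1}\partial_{r_2} u
\;=\; d_1 d_2\, u_{xx} + (d_1 + d_2)\, u_{xs} + u_{ss} \;=\; 0 ,
\qquad r_i = (d_i, 1)^{\mathsf T} .
```

Fitting the coefficients \((d_1 d_2,\, d_1 + d_2)\) in a local neighborhood, which is the role of the second-order structure tensor, and taking the two roots of \(z^2 - (d_1 + d_2)\,z + d_1 d_2\) recovers both disparities at once.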
computer vision and pattern recognition | 2017
Ole Johannsen; Katrin Honauer; Bastian Goldluecke; Anna Alperovich; Federica Battisti; Yunsu Bok; Michele Brizzi; Marco Carli; Gyeongmin Choe; Maximilian Diebold; Marcel Gutsche; Hae-Gon Jeon; In So Kweon; Jaesik Park; Jinsun Park; Hendrik Schilling; Hao Sheng; Lipeng Si; Michael Strecke; Antonin Sulc; Yu-Wing Tai; Qing Wang; Ting-Chun Wang; Sven Wanner; Zhang Xiong; Jingyi Yu; Shuo Zhang; Hao Zhu
This paper presents the results of the depth estimation challenge for dense light fields, which took place at the second workshop on Light Fields for Computer Vision (LF4CV) in conjunction with CVPR 2017. The challenge consisted of submissions to a recent benchmark [7], which allows a thorough performance analysis. While individual results are readily available on the benchmark web page http://www.lightfield-analysis.net, we take this opportunity to give a detailed overview of the current participants. Based on the algorithms submitted to our challenge, we develop a taxonomy of light field disparity estimation algorithms and give a report on the current state of the art. In addition, we include more comparative metrics, and discuss the relative strengths and weaknesses of the algorithms. Thus, we obtain a snapshot of where light field algorithm development stands at the moment and identify aspects with potential for further improvement.
Videometrics, Range Imaging, and Applications XIII | 2015
Maximilian Diebold; O. Blum; Marcel Gutsche; Sven Wanner; Christoph S. Garbe; H. Baker; Bernd Jähne
Light-field imaging is a research field with applicability in a variety of imaging areas including 3D cinema, entertainment, robotics, and any task requiring range estimation. In contrast to binocular or multi-view stereo approaches, capturing light fields means densely observing a target scene through a window of viewing directions. A principal benefit of light-field imaging for range computation is that one can eliminate the error-prone and computationally expensive process of establishing correspondence. The nearly continuous space of observation makes it possible to compute highly accurate and dense depth maps free of matching. Here, we discuss how to structure the imaging system for optimal ranging over a defined volume, what we term a bounded frustum. We detail the process of designing the light-field setup, including how practical issues such as camera footprint and component size influence the depth of field and the lateral and range resolution. Both synthetic and real captured scenes are used to analyze the depth precision resulting from a design, and to show how unavoidable inaccuracies such as camera position and focal length variation limit depth precision. Finally, some inaccuracies may be sufficiently well compensated through calibration, while others must be eliminated at the outset.
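The central design trade-off in such a setup can be illustrated with the standard triangulation relation (a generic sketch with illustrative numbers, not the paper's exact error model): a given disparity uncertainty maps to a depth uncertainty that grows quadratically with range and shrinks with baseline and focal length.

```python
def depth_resolution(z, baseline_m, focal_px, disp_err_px=0.05):
    """Approximate depth uncertainty at range z (metres) for a camera
    array with total baseline baseline_m (metres), focal length
    focal_px (pixels), and subpixel disparity error disp_err_px:

        dz ~ z**2 * disp_err / (f * B)

    All numbers here are illustrative."""
    return z ** 2 * disp_err_px / (focal_px * baseline_m)

# Example: 10 cm baseline, 1000 px focal length, 2 m working distance.
dz = depth_resolution(2.0, 0.1, 1000.0)  # 0.002 m, i.e. 2 mm
```

The quadratic growth in `z` is what makes the bounded-frustum view useful: the design only has to meet the precision target over a defined volume, not everywhere.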
Vision, Modeling, and Visualization | 2011
Sven Wanner; Christoph Sommer; Roland Rocholz; Michael Jung; Fred A. Hamprecht; Bernd Jähne
A framework for the visualization and classification of multi-channel spatio-temporal data from water wave imaging is presented. Our interactive visualization tool, WaveVis, allows a detailed study of the water surface shape in reference to additional data streams, like thermographic images or classification results. This facilitates an intuitive and effective inspection of huge amounts of data. WaveVis was used to select representative training examples of events for a supervised learning approach and to evaluate the results of the classification. The interactive classification and segmentation software ilastik was used to train a Random Forest classifier. The benefit of the combination of both programs is demonstrated for two applications, the estimation of the rain rate from the segmentation of impact craters, and the detection of small-scale breaking waves. The classification of the impact craters of raindrops on the water surface worked very well, whereas the detection of the breaking waves was satisfactory only under certain experimental conditions. Nevertheless, the combination of WaveVis and ilastik proved to be valuable in both cases.
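The classification step follows the ilastik-style pixel classification pattern: per-pixel filter responses serve as features, a Random Forest is trained on a sparse set of hand-labeled pixels, and the trained forest then predicts a dense segmentation. A minimal sketch of that pattern (feature choices, scales, and forest parameters are illustrative, not the authors' configuration):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace
from sklearn.ensemble import RandomForestClassifier

def pixel_features(img, scales=(1.0, 2.0, 4.0)):
    """Stack simple multi-scale filter responses as per-pixel features."""
    feats = [img]
    for s in scales:
        feats.append(gaussian_filter(img, s))   # smoothed intensity
        feats.append(gaussian_laplace(img, s))  # blob/edge response
    # Shape (H, W, F) -> (H*W, F) feature matrix.
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

def train_pixel_classifier(img, sparse_labels):
    """Train a Random Forest on pixels where sparse_labels > 0
    (0 = unlabeled), then predict a dense label image."""
    X = pixel_features(img)
    y = sparse_labels.ravel()
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X[y > 0], y[y > 0])
    return clf.predict(X).reshape(img.shape)
```

The appeal of this workflow, as in the paper, is that a handful of interactively painted labels suffices to segment large volumes of imagery, with the visualization tool used both to pick training examples and to judge the result.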