Christian Häne
ETH Zurich
Publications
Featured research published by Christian Häne.
IEEE Intelligent Vehicles Symposium | 2013
Paul Timothy Furgale; Ulrich Schwesinger; Martin Rufli; Wojciech Waclaw Derendarz; Hugo Grimmett; Peter Mühlfellner; Stefan Wonneberger; Julian Timpner; Stephan Rottmann; Bo Li; Bastian Schmidt; Thien-Nghia Nguyen; Elena Cardarelli; Stefano Cattani; Stefan Brüning; Sven Horstmann; Martin Stellmacher; Holger Mielenz; Kevin Köser; Markus Beermann; Christian Häne; Lionel Heng; Gim Hee Lee; Friedrich Fraundorfer; Rene Iser; Rudolph Triebel; Ingmar Posner; Paul Newman; Lars C. Wolf; Marc Pollefeys
Future requirements for a drastic reduction of CO2 production and energy consumption will lead to significant changes in the way we see mobility in the years to come. However, the automotive industry has identified significant barriers to the adoption of electric vehicles, including reduced driving range and greatly increased refueling times. Automated cars have the potential to reduce the environmental impact of driving and increase the safety of motor vehicle travel. The current state of the art in vehicle automation requires a suite of expensive sensors. While the cost of these sensors is decreasing, integrating them into electric cars will increase the price and represent another barrier to adoption. The V-Charge Project, funded by the European Commission, seeks to address these problems simultaneously by developing an electric automated car, outfitted with close-to-market sensors, which is able to automate valet parking and recharging for integration into a future transportation system. The final goal is the demonstration of a fully operational system including automated navigation and parking. This paper presents an overview of the V-Charge system, from the platform setup to the mapping, perception, and planning sub-systems.
International Conference on 3D Vision | 2014
Christian Häne; Lionel Heng; Gim Hee Lee; Alexey Sizov; Marc Pollefeys
In this paper, we propose an adaptation of the camera projection models of fisheye cameras to the plane-sweeping stereo matching algorithm. This adaptation allows us to perform plane-sweeping stereo directly on fisheye images. Our approach also works for other non-pinhole cameras, such as omnidirectional and catadioptric cameras, when the unified projection model is used. Despite the simplicity of our proposed approach, we are able to obtain complete, good-quality, high-resolution depth maps from the fisheye images. To verify our approach, we show experimental results based on depth maps generated by our approach and dense models produced from these depth maps.
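For readers unfamiliar with the unified projection model mentioned above, the sketch below shows how projection and back-projection work for fisheye and catadioptric cameras. It is a minimal illustration under standard assumptions, not the authors' implementation; the function names, the mirror parameter xi and the intrinsic matrix K are illustrative.

```python
import numpy as np

def unified_project(X, xi, K):
    """Project 3D points (N, 3) into a fisheye/catadioptric image using the
    unified projection model: normalize onto the unit sphere, shift the
    projection centre by xi along the optical axis, then apply a pinhole
    projection with intrinsics K.  Illustrative sketch, not the authors' code."""
    Xs = X / np.linalg.norm(X, axis=-1, keepdims=True)   # point on the unit sphere
    m = Xs[:, :2] / (Xs[:, 2:3] + xi)                    # normalized image coordinates
    return m @ K[:2, :2].T + K[:2, 2]                    # pixel coordinates (N, 2)

def unified_unproject(uv, xi, K):
    """Back-project pixels (N, 2) to unit-norm viewing rays under the same model."""
    m = (uv - K[:2, 2]) @ np.linalg.inv(K[:2, :2]).T
    r2 = np.sum(m ** 2, axis=-1)
    eta = (xi + np.sqrt(1.0 + (1.0 - xi ** 2) * r2)) / (1.0 + r2)
    rays = np.concatenate([eta[:, None] * m, (eta - xi)[:, None]], axis=-1)
    return rays / np.linalg.norm(rays, axis=-1, keepdims=True)
```

Plane-sweeping on fisheye images then amounts to back-projecting each pixel to a viewing ray, intersecting the ray with a candidate depth plane, and re-projecting the intersection into the other views to compare image similarity.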
Computer Vision and Pattern Recognition | 2015
Nikolay Savinov; Lubor Ladicky; Christian Häne; Marc Pollefeys
Dense semantic 3D reconstruction is typically formulated as a discrete or continuous problem over label assignments in a voxel grid, combining semantic and depth likelihoods in a Markov Random Field framework. The depth and semantic information is incorporated as a unary potential, smoothed by a pairwise regularizer. However, modelling likelihoods as a unary potential does not model the problem correctly, leading to various undesirable visibility artifacts. We propose to formulate an optimization problem that directly optimizes the reprojection error of the 3D model with respect to the image estimates, which corresponds to an optimization over rays, where the cost function depends on the semantic class and depth of the first occupied voxel along the ray. The 2-label formulation is made feasible by transforming it into a graph-representable form under a QPBO relaxation, solvable using graph cuts. The multi-label problem is solved by applying α-expansion using the same relaxation in each expansion move. Our method is shown to be feasible in practice, running comparably fast to the competing methods, while not suffering from ray potential approximation artifacts.
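To illustrate the core idea of a ray potential, as opposed to a per-voxel unary potential, here is a small Python sketch: the cost of a viewing ray is determined solely by the first occupied voxel it hits. The function, its arguments and the exact way depth and semantic likelihoods are combined are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def ray_potential_cost(occupied, labels, ray_voxels, depth_likelihood, class_likelihood):
    """Cost of a single viewing ray under a (simplified) ray potential: only the
    first occupied voxel along the ray matters.  `ray_voxels` is the ordered list
    of voxel indices the ray traverses (camera outward), `depth_likelihood[d]` the
    image-based likelihood of the surface lying in depth bin d, and
    `class_likelihood[c]` the per-pixel likelihood of semantic class c.
    Illustrative sketch only."""
    for depth_bin, v in enumerate(ray_voxels):
        if occupied[v]:
            # pay a cost for how well the depth bin and the semantic label of the
            # first occupied voxel agree with the image observations
            return -np.log(depth_likelihood[depth_bin] + 1e-9) \
                   - np.log(class_likelihood[labels[v]] + 1e-9)
    # the ray leaves the volume without hitting anything; as a crude stand-in,
    # charge the cost of the farthest depth bin ("free space" observation)
    return -np.log(depth_likelihood[-1] + 1e-9)
```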
Intelligent Robots and Systems | 2011
Christian Häne; Christopher Zach; Jongwoo Lim; Ananth Ranganathan; Marc Pollefeys
We present a method to reconstruct indoor environments from stereo image pairs, suitable for robot navigation. To enable a robot to navigate solely using the visual cues it receives from a stereo camera, depth information needs to be extracted from the image pairs and combined into a common representation. The initially determined raw depth maps are fused into a two-level height map representation which contains a floor and a ceiling height level. To reduce the noise in the height maps, we employ a total-variation-regularized energy functional. With this 2.5D representation of the scene, the computational complexity of the energy optimization is reduced by one dimension compared to other fusion techniques that work on the full 3D space, such as volumetric fusion. While we show results only for indoor environments, the approach can be extended to generate height maps for outdoor environments.
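As a rough illustration of total-variation regularization on a height map, the sketch below minimizes a Huber-smoothed TV energy with a quadratic data term by plain gradient descent. The paper uses a more efficient convex optimization scheme, so treat this only as a conceptual stand-in; the parameter values are illustrative.

```python
import numpy as np

def tv_smooth_heightmap(h_obs, lam=1.0, eps=1e-2, step=0.1, iters=500):
    """Smooth a noisy height map by gradient descent on a Huber-smoothed
    total-variation energy with a quadratic data term,
        E(h) = sum_x sqrt(|grad h(x)|^2 + eps^2) + lam/2 * sum_x (h(x) - h_obs(x))^2.
    Conceptual stand-in for the convex TV fusion described in the paper."""
    h = h_obs.astype(np.float64).copy()
    for _ in range(iters):
        # forward differences with replicated border (gradient is zero there)
        gx = np.diff(h, axis=1, append=h[:, -1:])
        gy = np.diff(h, axis=0, append=h[-1:, :])
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
        px, py = gx / mag, gy / mag
        # negative divergence of the normalized gradient field = TV term gradient
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        h -= step * (-div + lam * (h - h_obs))
    return h
```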
Intelligent Robots and Systems | 2015
Christian Häne; Torsten Sattler; Marc Pollefeys
Mapping the environment is crucial to enable path planning and obstacle avoidance for self-driving vehicles and other robots. In this paper, we concentrate on ground-based vehicles and present an approach which extracts static obstacles from depth maps computed from multiple consecutive images. In contrast to existing approaches, our system does not require accurate visual-inertial odometry estimation but relies solely on readily available wheel odometry. To handle the resulting higher pose uncertainty, our system fuses obstacle detections over time and between cameras to estimate the free and occupied space around the vehicle. Using monocular fisheye cameras, we are able to cover a wider field of view and detect obstacles closer to the car, which are often not within the standard field of view of a classical binocular stereo camera setup. Our quantitative analysis shows that our system is accurate enough for the navigation of self-driving cars and runs in real time.
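The fusion of obstacle detections over time and across cameras can be illustrated with a standard log-odds occupancy-grid update. This is a generic sketch, not the authors' system; the increment values, the clamp and the probability threshold are illustrative.

```python
def update_occupancy(log_odds, free_cells, hit_cells,
                     l_free=-0.4, l_occ=0.85, l_max=5.0):
    """Fuse one obstacle detection into a 2D occupancy grid (generic log-odds
    update).  `log_odds` is a 2D numpy array, `free_cells` are the grid cells
    the viewing ray traverses before the obstacle, and `hit_cells` are the
    cells where the obstacle was detected."""
    for r, c in free_cells:
        log_odds[r, c] = max(-l_max, log_odds[r, c] + l_free)  # evidence for free space
    for r, c in hit_cells:
        log_odds[r, c] = min(l_max, log_odds[r, c] + l_occ)    # evidence for an obstacle
    return log_odds

# usage sketch: treat a cell as an obstacle once its probability exceeds a threshold,
# e.g.  p = 1.0 / (1.0 + math.exp(-log_odds[r, c])) > 0.65
```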
International Conference on 3D Imaging, Modeling, Processing, Visualization & Transmission | 2012
Christian Häne; Christopher Zach; Bernhard Zeisl; Marc Pollefeys
Dense 3D reconstruction in man-made environments has to contend with weak and ambiguous observations due to the texture-less surfaces which are predominant in such environments. This challenging task calls for strong, domain-specific priors, which are usually modeled via regularization or smoothness assumptions. Generic smoothness priors, e.g. total variation, are often not sufficient to produce convincing results. Consequently, we propose a more powerful prior that directly models the expected local surface structure, without the need for expensive methods such as higher-order MRFs. Our approach is inspired by patch-based representations used in image processing. In contrast to the over-complete dictionaries used e.g. for sparse representations, our patch dictionary is much smaller. The proposed energy can be optimized with an efficient first-order primal-dual algorithm. Our formulation is particularly natural for modeling priors on the 3D structure of man-made environments. We demonstrate the applicability of our prior on synthetic data and on real data, where we recover dense, piece-wise planar 3D models using stereo and fusion of multiple depth images.
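One schematic way to write an energy of the kind described above, combining a data-fidelity term with a prior built from a small patch dictionary, is the following; f denotes the input data, P_p the patch-extraction operator at location p, and d_k the dictionary atoms. The notation is illustrative, not the paper's exact formulation.

```latex
E(u) \;=\; \lambda \int_{\Omega} \big| u(x) - f(x) \big| \,\mathrm{d}x
\;+\; \sum_{p} \min_{k = 1, \dots, K} \big\| P_p u - d_k \big\| .
```

The inner minimum over the few dictionary atoms makes the prior non-convex as written; formulations of this kind are typically lifted or relaxed so that a first-order primal-dual algorithm applies.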
Computer Vision and Pattern Recognition | 2014
Christian Häne; Nikolay Savinov; Marc Pollefeys
Dense 3D reconstruction of real-world objects containing textureless, reflective and specular parts is a challenging task. Using general smoothness priors such as surface area regularization can lead to defects in the form of disconnected parts or unwanted indentations. We argue that this problem can be solved by exploiting object-class-specific local surface orientations, e.g. the surface of a car is always close to horizontal in the roof area. Therefore, we formulate an object-class-specific shape prior in the form of spatially varying anisotropic smoothness terms. The parameters of the shape prior are extracted from training data. We detail how our shape prior formulation directly fits into recently proposed volumetric multi-label reconstruction approaches. This allows a segmentation between the object and its supporting ground. In our experimental evaluation, we show reconstructions using our trained shape prior on several challenging datasets.
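A spatially varying anisotropic smoothness term of the sort described above can be written schematically as follows, where D(x) is a positive semidefinite tensor learned from the training data; the notation is illustrative.

```latex
R(u) \;=\; \int_{\Omega} \sqrt{\nabla u(x)^{\top} D(x)\, \nabla u(x)} \;\mathrm{d}x .
```

For D(x) = I this reduces to the usual isotropic total variation, i.e. a surface-area penalty; a learned D(x) makes surfaces with the orientations expected for the object class cheaper than surfaces with unexpected orientations.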
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2014
Christopher Zach; Christian Häne; Marc Pollefeys
In this work, we present a unified view on Markov random fields (MRFs) and recently proposed continuous tight convex relaxations for multilabel assignment in the image plane. These relaxations are far less biased toward the grid geometry than Markov random fields on grids. It turns out that the continuous methods are nonlinear extensions of the well-established local polytope MRF relaxation. In view of this result, a better understanding of these tight convex relaxations in the discrete setting is obtained. Further, a wider range of optimization methods is now applicable to find a minimizer of the tight formulation. We propose two methods to improve the efficiency of minimization. One uses a weaker, but more efficient continuously inspired approach as initialization and gradually refines the energy where it is necessary. The other one reformulates the dual energy enabling smooth approximations to be used for efficient optimization. We demonstrate the utility of our proposed minimization schemes in numerical experiments. Finally, we generalize the underlying energy formulation from isotropic metric smoothness costs to arbitrary nonmetric and orientation dependent smoothness terms.
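For reference, the local polytope relaxation mentioned above is the standard linear program over pseudo-marginals μ of a pairwise MRF with unary costs θ_i, pairwise costs θ_ij and edge set E, written here schematically:

```latex
\min_{\mu \ge 0} \;\; \sum_{i} \sum_{x_i} \theta_i(x_i)\, \mu_i(x_i)
\;+\; \sum_{(i,j) \in \mathcal{E}} \sum_{x_i, x_j} \theta_{ij}(x_i, x_j)\, \mu_{ij}(x_i, x_j)
\quad \text{s.t.} \quad
\sum_{x_i} \mu_i(x_i) = 1, \qquad
\sum_{x_j} \mu_{ij}(x_i, x_j) = \mu_i(x_i) .
```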
Computer Vision and Pattern Recognition | 2016
Nikolay Savinov; Christian Häne; Lubor Ladicky; Marc Pollefeys
We propose an approach for dense semantic 3D reconstruction which uses a data term that is defined as potentials over viewing rays, combined with continuous surface area penalization. Our formulation is a convex relaxation which we augment with a crucial non-convex constraint that ensures exact handling of visibility. To tackle the non-convex minimization problem, we propose a majorize-minimize type strategy which converges to a critical point. We demonstrate the benefits of using the non-convex constraint experimentally. For the geometry-only case, we set a new state of the art on two datasets of the commonly used Middlebury multi-view stereo benchmark. Moreover, our general-purpose formulation directly reconstructs thin objects, which are usually treated with specialized algorithms. A qualitative evaluation on the dense semantic 3D reconstruction task shows that we improve significantly over previous methods.
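The majorize-minimize strategy referred to above follows the generic scheme below; the construction of the convex surrogate Q in the paper is specific to the visibility constraint, so this is only the schematic template.

```latex
x^{(t+1)} = \arg\min_{x} Q\big(x \mid x^{(t)}\big),
\qquad
Q\big(x \mid x^{(t)}\big) \ge E(x) \;\;\forall x,
\qquad
Q\big(x^{(t)} \mid x^{(t)}\big) = E\big(x^{(t)}\big).
```

Because the surrogate upper-bounds the true non-convex energy E and touches it at the current iterate, E(x^{(t+1)}) ≤ Q(x^{(t+1)} | x^{(t)}) ≤ Q(x^{(t)} | x^{(t)}) = E(x^{(t)}), so the energy is non-increasing and the iterates converge to a critical point under mild assumptions.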
Computer Vision and Pattern Recognition | 2015
Rabeeh Karimi Mahabadi; Christian Häne; Marc Pollefeys
Dense 3D reconstruction remains a hard task for a broad number of object classes which are not sufficiently textured or contain transparent and reflective parts. Shape priors are the tool of choice when the input data itself is not descriptive enough to obtain a faithful reconstruction. We propose a novel shape prior formulation that splits the object into multiple convex parts. The reconstruction problem is posed as a volumetric multi-label segmentation. Each of the transitions between labels is penalized with its individual anisotropic smoothness term. This powerful formulation allows us to represent a descriptive shape prior. For the object classes used in this paper, the individual segments naturally correspond to different semantic parts of the object. This leads to a semantic segmentation as a by-product of our shape prior formulation. We evaluate our method on several challenging real-world datasets. Our results show that we can resolve issues such as undesired holes and disconnected parts. Taking into account a segmentation of the free space, we show that we are able to reconstruct concavities, such as the interior of a mug.
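Schematically, a volumetric multi-label segmentation energy with per-transition smoothness terms can be written as below; the voxel labels x_s, data costs ρ, transition penalties φ_ij and neighbourhood N are illustrative notation, not the paper's exact formulation.

```latex
E(x) \;=\; \sum_{s \in V} \rho_{x_s}(s)
\;+\; \sum_{(s,t) \in \mathcal{N}} \phi_{x_s x_t}(t - s),
\qquad x_s \in \{0, 1, \dots, L\}, \;\; \phi_{ii} = 0,
```

where label 0 typically denotes free space, each transition between labels i and j is charged its own anisotropic, direction-dependent penalty φ_ij(t − s), and the transitions between the convex object parts yield the semantic part segmentation mentioned in the abstract.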