Publications


Featured research published by Henning Lategahn.


IEEE Intelligent Vehicles Symposium | 2010

Visual odometry based on stereo image sequences with RANSAC-based outlier rejection scheme

Bernd Kitt; Andreas Geiger; Henning Lategahn

A common prerequisite for many vision-based driver assistance systems is knowledge of the vehicle's own movement. In this paper we propose a novel approach for estimating the egomotion of the vehicle from a sequence of stereo images. Our method is directly based on the trifocal geometry between image triples, so no time-expensive recovery of the 3-dimensional scene structure is needed. The only assumption we make is a known camera geometry, where the calibration may also vary over time. We employ an Iterated Sigma Point Kalman Filter in combination with a RANSAC-based outlier rejection scheme, which yields robust frame-to-frame motion estimation even in dynamic environments. A high-accuracy inertial navigation system is used to evaluate our results on challenging real-world video sequences. Experiments show that our approach is clearly superior to other filtering techniques in terms of both accuracy and run-time.
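
The RANSAC component lends itself to a compact illustration. The following Python sketch fits a rigid motion between two sets of stereo-triangulated 3D points and keeps the largest consensus set; the closed-form rigid fit, sample size, and inlier threshold are illustrative assumptions, not the paper's trifocal-geometry/ISPKF implementation.

```python
import numpy as np

def fit_rigid(P, Q):
    """Closed-form least-squares rigid transform with Q ~ R @ P + t (Kabsch)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

def ransac_motion(P, Q, iters=200, thresh=0.05, seed=0):
    """Estimate frame-to-frame motion while rejecting outlier correspondences."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(P), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(P), size=3, replace=False)   # minimal sample
        R, t = fit_rigid(P[idx], Q[idx])
        inliers = np.linalg.norm(P @ R.T + t - Q, axis=1) < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return fit_rigid(P[best], Q[best])   # refit on the consensus set
```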


IEEE Intelligent Transportation Systems Magazine | 2014

Making Bertha Drive – An Autonomous Journey on a Historic Route

Julius Ziegler; Philipp Bender; Markus Schreiber; Henning Lategahn; Tobias Strauss; Christoph Stiller; Thao Dang; Uwe Franke; Nils Appenrodt; Christoph Gustav Keller; Eberhard Kaus; Ralf Guido Herrtwich; Clemens Rabe; David Pfeiffer; Frank Lindner; Fridtjof Stein; Friedrich Erbs; Markus Enzweiler; Carsten Knöppel; Jochen Hipp; Martin Haueis; Maximilian Trepte; Carsten Brenk; Andreas Tamke; Mohammad Ghanaat; Markus Braun; Armin Joos; Hans Fritz; Horst Mock; Martin Hein

125 years after Bertha Benz completed the first overland journey in automotive history, the Mercedes-Benz S-Class S 500 INTELLIGENT DRIVE followed the same route from Mannheim to Pforzheim, Germany, in a fully autonomous manner. The autonomous vehicle was equipped with close-to-production sensor hardware and relied solely on vision and radar sensors in combination with accurate digital maps to obtain a comprehensive understanding of complex traffic situations. The historic Bertha Benz Memorial Route is particularly challenging for autonomous driving. The course taken by the autonomous vehicle had a length of 103 km and covered rural roads, 23 small villages, and major cities (e.g., downtown Mannheim and Heidelberg). The route posed a large variety of difficult traffic scenarios including intersections with and without traffic lights, roundabouts, and narrow passages with oncoming traffic. This paper gives an overview of the autonomous vehicle and presents details on vision- and radar-based perception, digital road maps and video-based self-localization, as well as motion planning in complex urban scenarios.


IEEE Transactions on Image Processing | 2010

Texture Classification by Modeling Joint Distributions of Local Patterns With Gaussian Mixtures

Henning Lategahn; Sebastian Gross; Thomas Stehle; Til Aach

Texture classification generally requires the analysis of patterns in local pixel neighborhoods. Statistically, the underlying processes are comprehensively described by their joint probability density functions (jPDFs). Even for small neighborhoods, however, stable estimation of jPDFs by joint histograms (jHSTs) is often infeasible, since the number of entries in the jHST exceeds by far the number of pixels in a typical texture region. Moreover, evaluation of distance functions between jHSTs is often computationally prohibitive. Practically, the number of entries in a jHST is therefore reduced by considering only two-pixel patterns, leading to 2D-jHSTs known as cooccurrence matrices, or by quantization of the gray levels in local patterns to only two gray levels, yielding local binary patterns (LBPs). Both approaches result in a loss of information. We introduce here a framework for supervised texture classification which reduces or avoids this information loss. Local texture neighborhoods are first filtered by a filter bank. Without further quantization, the jPDF of the filter responses is then described parametrically by Gaussian mixture models (GMMs). We show that the parameters of the GMMs can be reliably estimated from small image regions. Moreover, distances between the thus modelled jPDFs of different texture patterns can be computed efficiently in closed form from their model parameters. We furthermore extend this texture descriptor to achieve full invariance to rotation. We evaluate the framework for different filter banks on the Brodatz texture set. We first show that combining the LBP difference filters with the GMM-based density estimator outperforms the classical LBP approach and its codebook extensions. When replacing these (rather elementary) difference filters by the wavelet frame transform (WFT), the performance of the framework on all 111 Brodatz textures exceeds that obtained more recently with the spin image and RIFT descriptors of Lazebnik et al.
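
A rough sketch of the pipeline's shape (filter bank, then parametric jPDF estimation, then likelihood-based classification) is given below, with scikit-learn's GaussianMixture standing in as the density estimator. The difference filters, component count, and classification rule are assumptions; in particular, the paper's closed-form distance between GMMs is not reproduced here.

```python
import numpy as np
from scipy.ndimage import convolve
from sklearn.mixture import GaussianMixture

# Elementary difference filters standing in for the paper's filter bank (assumed).
FILTERS = [np.array([[-1.0, 0.0, 1.0]]),         # horizontal difference
           np.array([[-1.0], [0.0], [1.0]]),     # vertical difference
           np.array([[0.0, -1.0], [1.0, 0.0]])]  # diagonal difference

def responses(patch):
    """Per-pixel joint filter responses: one row per pixel, one column per filter."""
    patch = patch.astype(float)
    return np.stack([convolve(patch, f).ravel() for f in FILTERS], axis=1)

def train_models(patches_by_class, n_components=4):
    """Fit one GMM per texture class to the joint filter-response samples."""
    models = {}
    for label, patches in patches_by_class.items():
        X = np.vstack([responses(p) for p in patches])
        models[label] = GaussianMixture(n_components=n_components,
                                        covariance_type='full',
                                        random_state=0).fit(X)
    return models

def classify(patch, models):
    """Assign the class whose GMM gives the highest mean log-likelihood."""
    X = responses(patch)
    return max(models, key=lambda lbl: models[lbl].score(X))
```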


IEEE International Conference on Robotics and Automation | 2011

Visual SLAM for autonomous ground vehicles

Henning Lategahn; Andreas Geiger; Bernd Kitt

Simultaneous Localization and Mapping (SLAM), and Visual SLAM (V-SLAM) in particular, has been an active area of research lately. In V-SLAM the main focus is most often placed on the localization part of the problem, allowing for a drift-free motion estimate. To this end, a sparse set of landmarks is tracked and their positions are estimated. However, this set of landmarks (rendering the map) is often too sparse for tasks in autonomous driving such as navigation, path planning, and obstacle avoidance. Some methods keep the raw measurements for past robot poses to address the sparsity problem, often resulting in a pose-only SLAM akin to laser scanner SLAM. For the stereo case, however, this is impractical due to the high noise of stereo-reconstructed point clouds. In this paper we propose a dense stereo V-SLAM algorithm that estimates a dense 3D map representation which is more accurate than raw stereo measurements. For this purpose, we run a sparse V-SLAM system and use the resulting pose estimates to compute a locally dense representation from dense stereo correspondences. This dense representation is expressed in local coordinate systems which are tracked as part of the SLAM estimate. This allows the dense part to be continuously updated. Our system is driven by visual odometry priors to achieve high robustness when tracking landmarks. Moreover, the sparse part of the SLAM system uses recently published submapping techniques to achieve constant runtime complexity most of the time. The improved accuracy over raw stereo measurements is shown in a Monte Carlo simulation. Finally, we demonstrate the feasibility of our method by presenting outdoor experiments with a car-like robot.
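
The key bookkeeping idea, dense geometry anchored to local coordinate systems that the sparse SLAM estimate keeps updating, can be sketched in a few lines. Names and the 4x4 pose convention below are illustrative assumptions rather than the paper's actual data structures.

```python
import numpy as np

class Keyframe:
    """Dense stereo points stored in keyframe-local coordinates."""

    def __init__(self, pose, local_points):
        self.pose = pose                  # 4x4 world-from-keyframe transform
        self.local_points = local_points  # (N, 3) dense stereo reconstruction

    def world_points(self):
        """Dense cloud under the current (possibly re-optimized) pose estimate."""
        R, t = self.pose[:3, :3], self.pose[:3, 3]
        return self.local_points @ R.T + t

def dense_map(keyframes):
    """Assemble the global dense map from all keyframe-local clouds."""
    return np.vstack([kf.world_points() for kf in keyframes])

# When the sparse back end re-optimizes a pose, only `pose` changes; the dense
# points follow without any re-triangulation:
kf = Keyframe(np.eye(4), np.random.default_rng(0).random((100, 3)))
kf.pose[:3, 3] = [1.0, 0.0, 0.0]   # simulated pose correction from sparse SLAM
print(dense_map([kf]).shape)       # (100, 3)
```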


IEEE Intelligent Vehicles Symposium | 2013

Urban localization with camera and inertial measurement unit

Henning Lategahn; Markus Schreiber; Julius Ziegler; Christoph Stiller

Next-generation driver assistance systems require precise self-localization. Common approaches using global navigation satellite systems (GNSSs) suffer from multipath and shadowing effects, often rendering this solution insufficient. In urban environments this problem becomes even more pronounced. Herein we present a system for six-degrees-of-freedom (DOF) ego localization using a mono camera and an inertial measurement unit (IMU). The camera image is processed to yield a rough position estimate using a previously computed landmark map. Thereafter, IMU measurements are fused with the position estimate for a refined localization update. Moreover, we present the mapping pipeline required for the creation of landmark maps. Finally, we present experiments on real-world data. The accuracy of the system is evaluated by computing two independent ego positions of the same trajectory from two distinct cameras and investigating these estimates for consistency. A mean localization accuracy of 10 cm is achieved on a 10 km sequence in an inner-city scenario.
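
A deliberately reduced, one-dimensional analogue of the described fusion can convey the structure: IMU measurements drive the Kalman prediction, and the map-relative camera fix drives the correction. The paper's filter is six-DOF; the noise values and rates below are invented for illustration.

```python
import numpy as np

x = np.zeros(2)                  # state: [position, velocity]
P = np.eye(2)                    # state covariance
F = lambda dt: np.array([[1.0, dt], [0.0, 1.0]])
Q = 0.01 * np.eye(2)             # process noise (assumed)
H = np.array([[1.0, 0.0]])       # camera measures position only
R = np.array([[0.05]])           # camera fix noise (assumed)

def predict(x, P, accel, dt):
    """IMU-driven propagation with a constant-acceleration input."""
    A = F(dt)
    x = A @ x + np.array([0.5 * dt**2, dt]) * accel
    return x, A @ P @ A.T + Q

def update(x, P, z):
    """Correction from a map-relative camera position fix z."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

for k in range(100):                       # e.g. 100 Hz IMU, 10 Hz camera
    x, P = predict(x, P, accel=0.1, dt=0.01)
    if k % 10 == 0:
        x, P = update(x, P, z=np.array([x[0]]))  # stand-in camera fix
```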


IEEE Transactions on Intelligent Transportation Systems | 2014

Vision-Only Localization

Henning Lategahn; Christoph Stiller

Autonomous and intelligent vehicles will undoubtedly depend on an accurate ego localization solution. Global navigation satellite systems suffer from multipath propagation, rendering this solution insufficient. Herein, we present a real-time system for six-degrees-of-freedom ego localization that uses only a single monocular camera. The camera image is harnessed to yield an ego pose relative to a previously computed visual map. We describe a process to automatically extract the ingredients of this map from stereoscopic image sequences. These include a mapping trajectory relative to the first pose, global scene signatures, and local landmark descriptors. The localization algorithm then consists of a topological localization step that completely obviates the need for any global positioning sensors such as GNSS. A metric refinement step that recovers an accurate metric pose is subsequently applied. Metric localization recovers the ego pose in a factor graph optimization process based on local landmarks. We demonstrate centimeter-level accuracy by a set of experiments in an urban environment. To this end, two localization estimates are computed for two independent cameras mounted on the same vehicle. These two independent trajectories are thereafter compared for consistency. Finally, we present qualitative experiments of an augmented reality (AR) system that depends on the aforementioned localization solution. Several screenshots of the AR system are shown, confirming centimeter-level accuracy and subdegree angular precision.
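
As a toy analogue of the metric refinement step, the sketch below recovers a planar pose by least-squares alignment of mapped landmarks with their observations in the vehicle frame. SciPy's generic least_squares optimizer stands in for the paper's factor graph optimization, and all landmark positions and noise values are fabricated for the example.

```python
import numpy as np
from scipy.optimize import least_squares

map_lms = np.array([[5.0, 1.0], [3.0, -2.0], [8.0, 0.5], [6.0, 4.0]])
true_pose = np.array([1.0, 0.5, 0.2])   # (x, y, theta), used to fake observations

def to_vehicle(pose, pts):
    """Express world points in the vehicle frame of the given pose."""
    c, s = np.cos(pose[2]), np.sin(pose[2])
    R = np.array([[c, -s], [s, c]])
    return (pts - pose[:2]) @ R

obs = to_vehicle(true_pose, map_lms) \
      + 0.01 * np.random.default_rng(0).normal(size=(4, 2))

def residuals(pose):
    """Landmark-observation residuals, one factor per mapped landmark."""
    return (to_vehicle(pose, map_lms) - obs).ravel()

est = least_squares(residuals, x0=np.zeros(3)).x
print(est)   # close to [1.0, 0.5, 0.2]
```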


IEEE Intelligent Vehicles Symposium | 2014

Video based localization for Bertha

Julius Ziegler; Henning Lategahn; Markus Schreiber; Christoph Gustav Keller; Carsten Knöppel; Jochen Hipp; Martin Haueis; Christoph Stiller

In August 2013, the modified Mercedes-Benz S-Class S 500 INTELLIGENT DRIVE (“Bertha”) completed the historic Bertha Benz Memorial Route fully autonomously. The self-driving 103 km journey passed through urban and rural areas. The system used detailed geometric maps to supplement its online perception systems. A map-based approach is only feasible if a precise, map-relative localization is provided. The purpose of this paper is to give a survey of this cornerstone of the system architecture. Two complementary vision-based localization methods have been developed. One of them is based on the detection of lane markings and similar road elements; the other exploits descriptors for point-shaped features. A final filter step combines both estimates while handling out-of-sequence measurements correctly.
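
Out-of-sequence measurements arise because the two localization modules deliver their estimates with different latencies. One textbook way to handle this, buffering measurements by timestamp and reprocessing from the insertion point, is sketched below with a toy scalar fusion rule; whether Bertha's filter used exactly this buffer-and-reprocess strategy is not stated in the abstract.

```python
import bisect

class OOSMFusion:
    """Toy buffer-and-reprocess scheme for out-of-sequence measurements."""

    def __init__(self, initial_state=0.0):
        self.initial_state = initial_state
        self.meas = []                       # (timestamp, measurement), kept sorted

    def insert(self, t, z):
        """Accept a measurement even if it is older than ones already fused."""
        bisect.insort(self.meas, (t, z))

    def state(self):
        """Replay all measurements in timestamp order with a toy fusion rule."""
        s = self.initial_state
        for _, z in self.meas:
            s = 0.5 * s + 0.5 * z            # stand-in for the actual filter update
        return s

f = OOSMFusion()
f.insert(2.0, 1.0)       # point-feature pose estimate arrives first
f.insert(1.0, 0.5)       # lane-marking estimate arrives late, out of sequence
print(f.state())         # 0.625: both fused in correct temporal order
```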


IEEE Intelligent Vehicles Symposium | 2013

How to learn an illumination robust image feature for place recognition

Henning Lategahn; Johannes Beck; Bernd Kitt; Christoph Stiller

Place recognition for loop closure detection lies at the heart of every Simultaneous Localization and Mapping (SLAM) method. Recently, methods that use cameras and describe the entire image by one holistic feature vector have experienced a resurgence. Despite the success of these methods, it remains unclear how a descriptor should be constructed for this particular purpose. The problem of choosing the right descriptor becomes even more pronounced in the context of lifelong mapping: the appearance of a place may vary considerably under different illumination conditions and over the course of a day, and none of the handcrafted descriptors published in the literature is particularly designed for this purpose. Herein, we propose to use a set of elementary building blocks from which millions of different descriptors can be constructed automatically. Moreover, we present an evaluation function which assesses the performance of a given image descriptor for place recognition under severe lighting changes. We then present an algorithm to efficiently search the space of descriptors for the best-suited one. Evaluating the trained descriptor on a test set shows clear superiority over handcrafted counterparts such as BRIEF and U-SURF. Finally, we show how loop closures can be reliably detected using the automatically learned descriptor: two overlapping image sequences from two different days and times are merged into one pose graph, and the resulting merged pose graph, after optimization, does not contain a single false link while all true loop closures are detected correctly. The descriptor and place-recognizer source code is published with datasets at http://www.mrt.kit.edu/libDird.php.
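
The search strategy can be caricatured in a few lines: descriptors are compositions of elementary blocks, a task-specific evaluation function scores each candidate on same-place image pairs under different lighting, and a search picks the best composition. The blocks, the toy cosine-similarity score, and the random search below are assumptions; the actual DIRD descriptor and evaluation function are available at the URL above.

```python
import itertools, random
import numpy as np

BLOCKS = {                                   # elementary building blocks (assumed)
    'grad_x':  lambda img: np.diff(img, axis=1, prepend=0),
    'grad_y':  lambda img: np.diff(img, axis=0, prepend=0),
    'abs':     np.abs,
    'l1_norm': lambda img: img / (np.abs(img).sum() + 1e-9),
    'pool':    lambda img: img.reshape(8, 2, 8, 2).mean(axis=(1, 3)),
}

def build(chain):
    """Compose a descriptor function from a sequence of block names."""
    def descriptor(img):
        for name in chain:
            img = BLOCKS[name](img)
        return img.ravel()
    return descriptor

def score(desc, pairs):
    """Toy evaluation: mean cosine similarity of same-place pairs taken under
    different illumination (higher is better)."""
    sims = [np.dot(desc(a), desc(b)) /
            (np.linalg.norm(desc(a)) * np.linalg.norm(desc(b)) + 1e-9)
            for a, b in pairs]
    return np.mean(sims)

rng = np.random.default_rng(0)
day = [rng.random((16, 16)) for _ in range(5)]
night = [0.3 * img + 0.1 for img in day]          # simulated lighting change
pairs = list(zip(day, night))

candidates = [c for n in (2, 3) for c in itertools.permutations(BLOCKS, n)]
best = max(random.Random(0).sample(candidates, 50),
           key=lambda c: score(build(c), pairs))
print(best)
```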


IEEE Intelligent Vehicles Symposium | 2010

Occupancy grid computation from dense stereo and sparse structure and motion points for automotive applications

Henning Lategahn; Wojciech Waclaw Derendarz; Thorsten Graf; Bernd Kitt; Jan Effertz

We present a complete processing chain for computing 2D occupancy grids from image sequences. A multi-layer grid is introduced which serves several purposes. First, the 3D points reconstructed from the images are distributed onto the underlying grid. Thereafter, a virtual measurement is computed for each cell, reducing computational complexity and rejecting potential outliers. Subsequently, a height profile is updated, from which the current measurement is partitioned into ground and obstacle pixels. Different height-profile update strategies are tested and compared, yielding a stable height profile estimation. Lastly, the occupancy layer of the grid is updated. To assess the algorithm we evaluate it quantitatively, comparing its output to ground-truth data to illustrate its accuracy. We show the applicability of the algorithm using both dense stereo-reconstructed points and sparse structure-and-motion points. The algorithm was implemented and run online on one of our test vehicles in real time.
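
A condensed, hypothetical sketch of the multi-layer grid update follows: points are binned into cells, each cell is collapsed to one virtual measurement (here the median height), the measurement is classified as ground or obstacle against the height profile, and the occupancy layer is updated with log-odds. All thresholds and update rules are illustrative assumptions.

```python
import numpy as np

CELL, SIZE = 0.2, 100                 # 0.2 m cells, 100x100 grid (assumed)
height = np.zeros((SIZE, SIZE))       # height-profile layer
logodds = np.zeros((SIZE, SIZE))      # occupancy layer

def update(points, obstacle_margin=0.3, alpha=0.9, hit=0.85):
    """One grid update from a batch of reconstructed 3D points (x, y, z)."""
    ij = np.clip((points[:, :2] / CELL).astype(int) + SIZE // 2, 0, SIZE - 1)
    for cell in np.unique(ij, axis=0):
        mask = (ij == cell).all(axis=1)
        z = np.median(points[mask, 2])          # virtual measurement for the cell
        i, j = cell
        if z < height[i, j] + obstacle_margin:  # consistent with the ground
            height[i, j] = alpha * height[i, j] + (1 - alpha) * z
            logodds[i, j] += np.log((1 - hit) / hit)    # evidence for free
        else:                                   # sticks out above the profile
            logodds[i, j] += np.log(hit / (1 - hit))    # evidence for occupied

update(np.random.default_rng(0).normal(size=(1000, 3)))
print((logodds > 0).sum(), "cells currently believed occupied")
```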


IEEE International Conference on Vehicular Electronics and Safety | 2012

City GPS using stereo vision

Henning Lategahn; Christoph Stiller

Next-generation driver assistance systems require precise localization. However, global navigation satellite systems (GNSS) often lack accuracy due to shadowing effects in street-canyon-like scenarios, rendering this solution insufficient for many tasks. Alternatively, 3D laser scanners can be utilized to localize the vehicle within a previously recorded 3D map. These scanners, however, are expensive and bulky, hampering widespread use. Herein we propose to use stereo cameras to localize the ego vehicle within a previously computed visual 3D map. The proposed localization solution is low cost, precise, and runs in real time. The map is computed once, using cameras as the sole sensors without GPS readings, and kept fixed thereafter. The presented mapping algorithm is largely inspired by current state-of-the-art simultaneous localization and mapping (SLAM) methods. Moreover, the map merely consists of a sparse set of landmark points, keeping map storage requirements manageably low.
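
On the query side, such a system matches the current frame's features against the fixed landmark map and feeds the correspondences to a pose solver (for instance a RANSAC rigid fit like the one sketched for the visual odometry paper above). The brute-force matcher and threshold below are assumptions about one plausible realization.

```python
import numpy as np

def match_to_map(map_desc, cur_desc, max_dist=0.5):
    """Brute-force nearest-neighbour descriptor matching. Since the map is
    kept fixed, a KD-tree over map_desc could be precomputed once in practice."""
    d = np.linalg.norm(map_desc[:, None, :] - cur_desc[None, :, :], axis=2)
    nn = d.argmin(axis=1)
    keep = d[np.arange(len(map_desc)), nn] < max_dist
    return np.flatnonzero(keep), nn[keep]   # map indices, current-frame indices

rng = np.random.default_rng(0)
map_desc = rng.random((500, 32))            # sparse map: 500 landmark descriptors
cur_desc = map_desc[rng.choice(500, 80)] + 0.01 * rng.normal(size=(80, 32))
mi, ci = match_to_map(map_desc, cur_desc)
print(len(mi), "landmark correspondences for the pose solver")
```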

Collaboration


Top co-authors of Henning Lategahn.

Bernd Kitt – Karlsruhe Institute of Technology
Christoph Stiller – Karlsruhe Institute of Technology
Julius Ziegler – Karlsruhe Institute of Technology
Markus Schreiber – Center for Information Technology