Publication


Featured research published by Clemens Rabe.


IEEE Intelligent Transportation Systems Magazine | 2014

Making Bertha Drive—An Autonomous Journey on a Historic Route

Julius Ziegler; Philipp Bender; Markus Schreiber; Henning Lategahn; Tobias Strauss; Christoph Stiller; Thao Dang; Uwe Franke; Nils Appenrodt; Christoph Gustav Keller; Eberhard Kaus; Ralf Guido Herrtwich; Clemens Rabe; David Pfeiffer; Frank Lindner; Fridtjof Stein; Friedrich Erbs; Markus Enzweiler; Carsten Knöppel; Jochen Hipp; Martin Haueis; Maximilian Trepte; Carsten Brenk; Andreas Tamke; Mohammad Ghanaat; Markus Braun; Armin Joos; Hans Fritz; Horst Mock; Martin Hein

125 years after Bertha Benz completed the first overland journey in automotive history, the Mercedes-Benz S-Class S 500 INTELLIGENT DRIVE followed the same route from Mannheim to Pforzheim, Germany, in a fully autonomous manner. The autonomous vehicle was equipped with close-to-production sensor hardware and relied solely on vision and radar sensors in combination with accurate digital maps to obtain a comprehensive understanding of complex traffic situations. The historic Bertha Benz Memorial Route is particularly challenging for autonomous driving. The course taken by the autonomous vehicle had a length of 103 km and covered rural roads, 23 small villages, and major cities (e.g. downtown Mannheim and Heidelberg). The route posed a large variety of difficult traffic scenarios including intersections with and without traffic lights, roundabouts, and narrow passages with oncoming traffic. This paper gives an overview of the autonomous vehicle and presents details on vision and radar-based perception, digital road maps and video-based self-localization, as well as motion planning in complex urban scenarios.


DAGM Conference on Pattern Recognition | 2005

6D-Vision: Fusion of Stereo and Motion for Robust Environment Perception

Uwe Franke; Clemens Rabe; Hernán Badino; Stefan K. Gehrig

Obstacle avoidance is one of the most important challenges for mobile robots as well as for future vision-based driver assistance systems. This task requires a precise extraction of depth and the robust and fast detection of moving objects. In order to reach these goals, this paper considers vision as a process in space and time. It presents a powerful fusion of depth and motion information for image sequences taken from a moving observer. 3D position and 3D motion for a large number of image points are estimated simultaneously by means of Kalman filters, with no need for error-prone prior segmentation. Thus, one obtains a rich 6D representation that allows the detection of moving obstacles even in the presence of partial occlusion of foreground or background.
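The per-point fusion can be illustrated with a minimal sketch (the frame rate and noise covariances below are assumed for illustration, not taken from the paper): a constant-velocity Kalman filter over a six-dimensional state [x, y, z, vx, vy, vz], updated each frame with a noisy 3D position such as one obtained from stereo triangulation of a tracked image feature.

```python
import numpy as np

# One 6D (position + velocity) Kalman filter for a single tracked point.
# State: [x, y, z, vx, vy, vz]; measurements are 3D positions,
# e.g. from stereo triangulation of a tracked image feature.

dt = 0.04  # frame interval [s], hypothetical 25 Hz camera

F = np.eye(6)                # constant-velocity motion model
F[:3, 3:] = dt * np.eye(3)   # position += velocity * dt

H = np.hstack([np.eye(3), np.zeros((3, 3))])  # only position is observed

Q = 1e-3 * np.eye(6)         # process noise covariance (assumed)
R = 1e-2 * np.eye(3)         # measurement noise covariance (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle for measurement z; returns new state and covariance."""
    x = F @ x                          # predict state
    P = F @ P @ F.T + Q                # predict covariance
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ (z - H @ x)            # correct with the 3D measurement
    P = (np.eye(6) - K @ H) @ P
    return x, P

# Simulate a point moving at 1 m/s along x at 10 m depth.
x, P = np.zeros(6), np.eye(6)
for k in range(1, 50):
    z = np.array([k * dt * 1.0, 0.0, 10.0])  # simulated stereo measurement
    x, P = kalman_step(x, P, z)

print(np.round(x[3], 2))  # estimated vx, converges toward 1.0 m/s
```

Running one such filter per tracked image point yields the 6D representation described in the abstract; points whose estimated velocity differs significantly from zero (after ego-motion compensation) are candidates for moving obstacles.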


IEEE Transactions on Intelligent Transportation Systems | 2011

Active Pedestrian Safety by Automatic Braking and Evasive Steering

Christoph Gustav Keller; Thao Dang; Hans Fritz; Armin Joos; Clemens Rabe; Dariu M. Gavrila

Active safety systems hold great potential for reducing accident frequency and severity by warning the driver and/or exerting automatic vehicle control ahead of crashes. This paper presents a novel active pedestrian safety system that combines sensing, situation analysis, decision making, and vehicle control. The sensing component is based on stereo vision, and it fuses the following two complementary approaches for added robustness: 1) motion-based object detection and 2) pedestrian recognition. The highlight of the system is its ability to decide, within a split second, whether it will perform automatic braking or evasive steering and reliably execute this maneuver at relatively high vehicle speed (up to 50 km/h). We performed extensive precrash experiments with the system on the test track (22 scenarios with real pedestrians and a dummy). We obtained a significant benefit in detection performance and improved lateral velocity estimation by the fusion of motion-based object detection and pedestrian recognition. On a fully reproducible scenario subset, involving the dummy that laterally enters into the vehicle path from behind an occlusion, the system executed, in more than 40 trials, the intended vehicle action, i.e., automatic braking (if a full stop is still possible) or automatic evasive steering.


European Conference on Computer Vision | 2008

Efficient Dense Scene Flow from Sparse or Dense Stereo Data

Andreas Wedel; Clemens Rabe; Tobi Vaudrey; Thomas Brox; Uwe Franke; Daniel Cremers

This paper presents a technique for estimating the three-dimensional velocity vector field that describes the motion of each visible scene point (scene flow). The technique uses two consecutive image pairs from a stereo sequence. The main contribution is to decouple the position and velocity estimation steps, and to estimate dense velocities using a variational approach. We enforce the scene flow to yield consistent displacement vectors in the left and right images. The decoupling strategy has two main advantages: first, the disparity estimation technique can be chosen freely and may yield either sparse or dense correspondences; second, frame rates of 5 fps are achieved on standard consumer hardware. The approach provides dense velocity estimates with accurate results at distances up to 50 meters.
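Setting the variational regularization aside, the geometric core of the decoupling can be sketched as follows (the focal length, baseline, principal point, and frame interval are made-up example values): disparity gives each pixel a 3D position, and optical flow plus the next frame's disparity gives the displaced position; their difference over time is the scene-flow vector.

```python
import numpy as np

# Back-project a pixel with known disparity to a 3D point, then difference
# two time steps to obtain the scene-flow (3D velocity) vector.

f, b = 800.0, 0.30      # focal length [px] and stereo baseline [m] (assumed)
cx, cy = 320.0, 240.0   # principal point [px] (assumed)
dt = 0.05               # frame interval [s] (assumed)

def backproject(u, v, d):
    """Pixel (u, v) with disparity d -> 3D point in camera coordinates."""
    Z = f * b / d
    X = (u - cx) * Z / f
    Y = (v - cy) * Z / f
    return np.array([X, Y, Z])

def scene_flow(u, v, d0, flow_uv, d1):
    """3D velocity of the scene point observed at pixel (u, v)."""
    p0 = backproject(u, v, d0)
    p1 = backproject(u + flow_uv[0], v + flow_uv[1], d1)
    return (p1 - p0) / dt

# A pixel whose disparity grows from 16 px to 20 px belongs to an
# approaching point: its depth drops from 15 m to 12 m.
vel = scene_flow(400.0, 250.0, 16.0, (2.0, 0.0), 20.0)
print(np.round(vel, 2))  # negative Z component: motion toward the camera
```

The paper's contribution is precisely to make this per-pixel computation dense and robust by estimating the velocities with a regularized variational formulation instead of differencing raw correspondences.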


International Journal of Computer Vision | 2011

Stereoscopic Scene Flow Computation for 3D Motion Understanding

Andreas Wedel; Thomas Brox; Tobi Vaudrey; Clemens Rabe; Uwe Franke; Daniel Cremers

Building upon recent developments in optical flow and stereo matching estimation, we propose a variational framework for the estimation of stereoscopic scene flow, i.e., the motion of points in the three-dimensional world from stereo image sequences. The proposed algorithm takes into account image pairs from two consecutive times and computes both depth and a 3D motion vector associated with each point in the image. In contrast to previous works, we partially decouple the depth estimation from the motion estimation, which has many practical advantages. The variational formulation is quite flexible and can handle both sparse and dense disparity maps. The method is also very efficient: with the depth map computed on an FPGA and the scene flow on a GPU, the algorithm runs at frame rates of 20 frames per second on QVGA images (320×240 pixels). Furthermore, we present solutions to two important problems in scene flow estimation: violations of intensity consistency between input images, and uncertainty measures for the scene flow result.


Image and Vision Computing New Zealand | 2008

Differences between stereo and motion behaviour on synthetic and real-world stereo sequences

Tobi Vaudrey; Clemens Rabe; Reinhard Klette; James Milburn

Performance evaluation of stereo or motion analysis techniques is commonly done either on synthetic data, where the ground truth can be calculated from ray-tracing principles, or on engineered data, where ground truth is easy to estimate. Furthermore, these scenes are usually only shown in very short image sequences. This paper shows why synthetic scenes should not be the only testing criterion by giving evidence of conflicting results of disparity and optical flow estimation on real-world and synthetic data. The data dealt with in this paper are images taken from a moving vehicle. Each real-world sequence contains 250 image pairs or more; the synthetic driver assistance scenes (with ground truth) comprise 100 or more image pairs. Particular emphasis is placed on the estimation and evaluation of scene flow on the synthetic stereo sequences. All image data used in this paper is made publicly available at http://www.mi.auckland.ac.nz/EISATS.


International Conference on Computer Vision | 2013

Making Bertha See

Uwe Franke; David Pfeiffer; Clemens Rabe; Carsten Knoeppel; Markus Enzweiler; Fridtjof Stein; Ralf Guido Herrtwich

With the market introduction of the 2014 Mercedes-Benz S-Class vehicle equipped with a stereo camera system, autonomous driving has become a reality, at least in low speed highway scenarios. This raises hope for a fast evolution of autonomous driving that also extends to rural and urban traffic situations. In August 2013, an S-Class vehicle with close-to-production sensors drove completely autonomously for about 100 km from Mannheim to Pforzheim, Germany, following the well-known historic Bertha Benz Memorial Route. Next-generation stereo vision was the main sensing component and as such formed the basis for the indispensable comprehensive understanding of complex traffic situations, which are typical for narrow European villages. This successful experiment has proved both the maturity and the significance of machine vision for autonomous driving. This paper presents details of the employed vision algorithms for object recognition and tracking, free-space analysis, traffic light recognition, lane recognition, as well as self-localization.


European Conference on Computer Vision | 2010

Dense, robust, and accurate motion field estimation from stereo image sequences in real-time

Clemens Rabe; Thomas Müller; Andreas Wedel; Uwe Franke

In this paper a novel approach for estimating the three-dimensional motion field of the visible world from stereo image sequences is proposed. This approach combines dense variational optical flow estimation, including spatial regularization, with Kalman filtering for temporal smoothness and robustness. The result is a dense, robust, and accurate reconstruction of the three-dimensional motion field of the current scene that is computed in real time. Parallel implementation on a GPU and an FPGA yields a vision system that is directly applicable in real-world scenarios, such as automotive driver assistance systems or surveillance. Within this paper we systematically show that the proposed algorithm is physically motivated and that it outperforms existing approaches with respect to computation time and accuracy.


IEEE Transactions on Intelligent Transportation Systems | 2009

B-Spline Modeling of Road Surfaces With an Application to Free-Space Estimation

Andreas Wedel; Hernán Badino; Clemens Rabe; Heidi Loose; Uwe Franke; Daniel Cremers

We propose a general technique for modeling the visible road surface in front of a vehicle. The common assumption of a planar road surface is often violated in reality. A workaround proposed in the literature is the use of a piecewise linear or quadratic function to approximate the road surface. Our approach is based on representing the road surface as a general parametric B-spline curve. The surface parameters are tracked over time using a Kalman filter. The surface parameters are estimated from stereo measurements in the free space. To this end, we adopt a recently proposed road-obstacle segmentation algorithm to include disparity measurements and the B-spline road-surface representation. Experimental results in planar and undulating terrain verify the increase in free-space availability and accuracy using a flexible B-spline for road-surface modeling.
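A minimal sketch of the surface model (one longitudinal height profile, synthetic measurements, an assumed knot layout; the paper's Kalman tracking over time is omitted) fits a clamped quadratic B-spline to road heights by linear least squares:

```python
import numpy as np

def basis(i, k, t, knots):
    """Cox-de Boor recursion: i-th B-spline basis of degree k, evaluated at t."""
    if k == 0:
        return ((knots[i] <= t) & (t < knots[i + 1])).astype(float)
    out = np.zeros_like(t)
    d1 = knots[i + k] - knots[i]
    if d1 > 0:
        out += (t - knots[i]) / d1 * basis(i, k - 1, t, knots)
    d2 = knots[i + k + 1] - knots[i + 1]
    if d2 > 0:
        out += (knots[i + k + 1] - t) / d2 * basis(i + 1, k - 1, t, knots)
    return out

deg, n_ctrl = 2, 8  # quadratic spline with 8 control points (assumed layout)
knots = np.concatenate([[0.0] * deg,
                        np.linspace(0.0, 1.0, n_ctrl - deg + 1),
                        [1.0] * deg])

# Synthetic "stereo measurements": heights of an undulating road at depths z.
z = np.linspace(0.0, 49.0, 200)   # longitudinal distance [m]
h = 0.05 * np.sin(z / 8.0)        # ground-truth undulation [m]

t = z / 50.0                      # map depth into the knot span [0, 1)
B = np.column_stack([basis(i, deg, t, knots) for i in range(n_ctrl)])
ctrl, *_ = np.linalg.lstsq(B, h, rcond=None)  # least-squares control points

residual = np.max(np.abs(B @ ctrl - h))
print(round(residual, 3))  # small residual: the smooth profile is well fit
```

In the paper the analogous control-point vector is not re-estimated from scratch like this, but tracked over time with a Kalman filter using the stereo measurements in the free space.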


IEEE Intelligent Vehicles Symposium | 2007

Fast detection of moving objects in complex scenarios

Clemens Rabe; Uwe Franke; Stefan K. Gehrig

More than one third of all traffic accidents with injuries occur in urban areas, especially at intersections. A suitable driver assistance system for such complex situations requires an understanding of the scene, in particular a reliable detection of other moving traffic participants. This contribution shows how a robust and fast detection of relevant moving objects is obtained by a smart combination of stereo vision and motion analysis. This approach, called 6D Vision, estimates location and motion of pixels simultaneously, which enables the detection of moving objects at the pixel level. Using a Kalman filter attached to each tracked pixel, the algorithm propagates the current interpretation to the next image. In addition, a Kalman-filter-based ego-motion compensation is described that takes advantage of the 6D information. This precise information makes it possible to discriminate exactly between static and moving objects and to obtain a better prediction, which speeds up tracking and enables a real-time implementation. Examples of critical situations in urban areas demonstrate the potential of the 6D Vision concept, which can also be extended to robotics applications.
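The ego-motion-compensated moving-object test can be sketched as follows (the per-point Kalman machinery is simplified away; the camera motion R, t and the threshold are illustrative assumptions, not values from the paper): a tracked 3D point is flagged as moving when its observed displacement deviates from what a static point would show under the known camera motion.

```python
import numpy as np

# Sketch of an ego-motion-compensated moving-object test: compare the
# observed 3D position of a tracked point with the position a *static*
# world point would have after the known camera motion (R, t).

def predicted_static(p_prev, R, t):
    """Where a static world point would appear in the new camera frame."""
    return R @ p_prev + t

def is_moving(p_prev, p_curr, R, t, thresh=0.2):
    """Flag the point as moving if the residual exceeds thresh [m] (assumed)."""
    residual = np.linalg.norm(p_curr - predicted_static(p_prev, R, t))
    return bool(residual > thresh)

# Camera drives 0.5 m straight forward between frames (no rotation),
# so the static scene shifts 0.5 m toward the camera along Z.
R = np.eye(3)
t = np.array([0.0, 0.0, -0.5])

static_pt = np.array([1.0, 0.0, 20.0])
print(is_moving(static_pt, static_pt + t, R, t))                          # False
print(is_moving(static_pt, static_pt + t + np.array([0.6, 0.0, 0.0]), R, t))  # True
```

In the full system this comparison is done per tracked pixel inside the Kalman filter, so measurement noise is weighted by the filter covariances rather than by a fixed threshold.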

Collaboration


Dive into Clemens Rabe's collaborations.

Top Co-Authors
Hernán Badino

Goethe University Frankfurt

Reinhard Klette

Auckland University of Technology

Thomas Brox

University of Freiburg
