
Publication


Featured research published by Fridtjof Stein.


IEEE Intelligent Transportation Systems Magazine | 2014

Making Bertha Drive – An Autonomous Journey on a Historic Route

Julius Ziegler; Philipp Bender; Markus Schreiber; Henning Lategahn; Tobias Strauss; Christoph Stiller; Thao Dang; Uwe Franke; Nils Appenrodt; Christoph Gustav Keller; Eberhard Kaus; Ralf Guido Herrtwich; Clemens Rabe; David Pfeiffer; Frank Lindner; Fridtjof Stein; Friedrich Erbs; Markus Enzweiler; Carsten Knöppel; Jochen Hipp; Martin Haueis; Maximilian Trepte; Carsten Brenk; Andreas Tamke; Mohammad Ghanaat; Markus Braun; Armin Joos; Hans Fritz; Horst Mock; Martin Hein

125 years after Bertha Benz completed the first overland journey in automotive history, the Mercedes Benz S-Class S 500 INTELLIGENT DRIVE followed the same route from Mannheim to Pforzheim, Germany, in a fully autonomous manner. The autonomous vehicle was equipped with close-to-production sensor hardware and relied solely on vision and radar sensors in combination with accurate digital maps to obtain a comprehensive understanding of complex traffic situations. The historic Bertha Benz Memorial Route is particularly challenging for autonomous driving. The course taken by the autonomous vehicle had a length of 103 km and covered rural roads, 23 small villages and major cities (e.g. downtown Mannheim and Heidelberg). The route posed a large variety of difficult traffic scenarios including intersections with and without traffic lights, roundabouts, and narrow passages with oncoming traffic. This paper gives an overview of the autonomous vehicle and presents details on vision and radar-based perception, digital road maps and video-based self-localization, as well as motion planning in complex urban scenarios.


Joint Pattern Recognition Symposium | 2004

Efficient Computation of Optical Flow Using the Census Transform

Fridtjof Stein

This paper presents an approach for the estimation of visual motion over an image sequence in real-time. A new algorithm is proposed which solves the correspondence problem between two images in a very efficient way. The method uses the Census Transform as the representation of small image patches. These primitives are matched using a table based indexing scheme. We demonstrate the robustness of this technique on real-world image sequences of a road scenario captured from a vehicle based on-board camera. We focus on the computation of the optical flow. Our method runs in real-time on general purpose platforms and handles large displacements.
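The two building blocks described above, a per-pixel Census signature and a signature-indexed lookup table, can be sketched roughly as follows. This is a minimal illustration assuming a 3×3 Census window and keeping only signatures that occur exactly once in the first frame; the function names and the uniqueness filter are illustrative choices, not the paper's actual implementation.

```python
import numpy as np

def census_transform(img):
    """3x3 Census Transform: encode each pixel as an 8-bit signature,
    one bit per neighbour, set when the neighbour is darker than the
    centre pixel."""
    h, w = img.shape
    sig = np.zeros((h - 2, w - 2), dtype=np.uint16)
    centre = img[1:-1, 1:-1]
    bit = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
            sig |= (neigh < centre).astype(np.uint16) << bit
            bit += 1
    return sig

def match_flow(sig0, sig1):
    """Table-based matching: index frame-0 pixels by signature, then
    look every frame-1 pixel up in that table.  Returns (x0, y0, x1, y1)
    correspondences for signatures that are unique in frame 0."""
    table = {}
    for (y, x), s in np.ndenumerate(sig0):
        table.setdefault(int(s), []).append((x, y))
    flow = []
    for (y, x), s in np.ndenumerate(sig1):
        candidates = table.get(int(s), [])
        if len(candidates) == 1:        # keep unambiguous matches only
            x0, y0 = candidates[0]
            flow.append((x0, y0, x, y))
    return flow
```

Because the table lookup is independent of the search range, matching cost does not grow with the displacement, which is what allows the paper's method to handle large flow vectors in real time.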


International Conference on Computer Vision | 2013

Making Bertha See

Uwe Franke; David Pfeiffer; Clemens Rabe; Carsten Knoeppel; Markus Enzweiler; Fridtjof Stein; Ralf Guido Herrtwich

With the market introduction of the 2014 Mercedes-Benz S-Class vehicle equipped with a stereo camera system, autonomous driving has become a reality, at least in low speed highway scenarios. This raises hope for a fast evolution of autonomous driving that also extends to rural and urban traffic situations. In August 2013, an S-Class vehicle with close-to-production sensors drove completely autonomously for about 100 km from Mannheim to Pforzheim, Germany, following the well-known historic Bertha Benz Memorial Route. Next-generation stereo vision was the main sensing component and as such formed the basis for the indispensable comprehensive understanding of complex traffic situations, which are typical for narrow European villages. This successful experiment has proved both the maturity and the significance of machine vision for autonomous driving. This paper presents details of the employed vision algorithms for object recognition and tracking, free-space analysis, traffic light recognition, lane recognition, as well as self-localization.


IEEE Transactions on Intelligent Transportation Systems | 2007

Collision Avoidance for Vehicle-Following Systems

Stefan K. Gehrig; Fridtjof Stein

The vehicle-following concept has been widely used in several intelligent-vehicle applications. Adaptive cruise control systems, platooning systems, and systems for stop-and-go traffic employ this concept: The ego vehicle follows a leader vehicle at a certain distance. The vehicle-following concept comes to its limitations when obstacles interfere with the path between the ego vehicle and the leader vehicle. We call such situations dynamic driving situations. This paper introduces a planning and decision component to generalize vehicle following to situations with nonautomated interfering vehicles in mixed traffic. As a demonstrator, we employ a car that is able to navigate autonomously through regular traffic that is longitudinally and laterally guided by actuators controlled by a computer. This paper focuses on and limits itself to lateral control for collision avoidance. Previously, this autonomous-driving capability was purely based on the vehicle-following concept using vision. The path of the leader vehicle was tracked. To extend this capability to dynamic driving situations, a dynamic path-planning component is introduced. Several driving situations are identified that necessitate responses to more than the leader vehicle. We borrow an idea from robotics to solve the problem: treat the path of the leader vehicle as an elastic band that is subjected to repelling forces of obstacles in the surroundings. This elastic-band framework offers the necessary features to cover dynamic driving situations. Simulation results show the power of this approach. Real-world results obtained with our demonstrator validate the simulation results.
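The elastic-band idea can be illustrated with a small sketch: way-points of the leader path are relaxed under an internal contraction force and repelling forces from nearby obstacles. The gains and influence radius (`k_int`, `k_ext`, `r_infl`) and the simple fixed-step iteration are illustrative assumptions, not the parameters or solver used in the paper.

```python
import numpy as np

def relax_elastic_band(path, obstacles, r_infl=2.0, k_int=0.5,
                       k_ext=1.5, iters=200, step=0.1):
    """Deform the leader-vehicle path like an elastic band.

    path      : (N, 2) way-points; the end points stay fixed.
    obstacles : (M, 2) obstacle positions.
    Each inner node feels an internal force pulling it toward the
    midpoint of its neighbours, plus a repelling force from every
    obstacle closer than r_infl."""
    band = path.astype(float).copy()
    for _ in range(iters):
        # internal force: contract toward the neighbour midpoint
        f_int = k_int * (0.5 * (band[:-2] + band[2:]) - band[1:-1])
        # external force: push nodes away from nearby obstacles
        f_ext = np.zeros_like(band[1:-1])
        for obs in obstacles:
            diff = band[1:-1] - obs
            dist = np.linalg.norm(diff, axis=1)
            near = (dist < r_infl) & (dist > 1e-9)
            f_ext[near] += (k_ext * (r_infl - dist[near])
                            / dist[near])[:, None] * diff[near]
        band[1:-1] += step * (f_int + f_ext)
    return band
```

A straight path passing close to an obstacle bows smoothly around it while keeping its end points, which is exactly the behaviour needed to generalize vehicle following to interfering traffic.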


Systems, Man, and Cybernetics | 1998

A trajectory-based approach for the lateral control of car following systems

Stefan K. Gehrig; Fridtjof Stein

A crucial task for steering an autonomous vehicle along a safe path in a car following scenario is the lateral control. The sensory input of such a lateral control consists of the position coordinates of the leader vehicle. The following problem occurs: due to the distance between the leader vehicle and the autonomous ego-vehicle, the lateral control has to interpolate a trajectory between the two vehicles. Using as a trajectory either a straight line or a curve of constant curvature causes the ego-vehicle to deviate from the leader vehicle's trajectory. Given a system delivering 3D points of the leader vehicle with time lags, one has a handle to reconstruct the leader vehicle's trajectory. In addition, one has to compensate the motion of the ego-vehicle by using its motion parameters. Once this transformation is performed, the position coordinates of the leader vehicle are available in a coordinate system at rest. Knowing the position of the ego-vehicle in that coordinate system, one can select the trajectory point of the leader vehicle that is closest to the ego-vehicle as input to the lateral controller. This simple approach significantly increases the precision of car following systems. The algorithm is applied successfully to an autonomous car for platooning at small velocities.
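The core of the approach, transforming leader measurements into a coordinate system at rest and then selecting the stored trajectory point closest to the ego-vehicle, can be sketched as follows. This is a simplified 2D version assuming the ego pose (x, y, heading) in the rest frame is known; the function names are illustrative.

```python
import numpy as np

def to_rest_frame(pt_ego, ego_pose):
    """Transform a point measured in the ego frame into the rest
    (world) frame, given the ego pose (x, y, heading)."""
    x, y, th = ego_pose
    c, s = np.cos(th), np.sin(th)
    return np.array([x + c * pt_ego[0] - s * pt_ego[1],
                     y + s * pt_ego[0] + c * pt_ego[1]])

def closest_leader_point(leader_world, ego_pose):
    """Select the stored leader-trajectory point closest to the
    ego-vehicle as the reference for the lateral controller."""
    ego_xy = np.array(ego_pose[:2])
    pts = np.array(leader_world)
    d = np.linalg.norm(pts - ego_xy, axis=1)
    return pts[np.argmin(d)]
```

In use, each new leader measurement would be pushed through `to_rest_frame` and appended to `leader_world`, so the controller always tracks the leader's actual trajectory rather than a straight-line interpolation.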


IEEE Intelligent Transportation Systems | 2001

Elastic bands to enhance vehicle following

Stefan K. Gehrig; Fridtjof Stein

The vehicle-following principle comes to its limitations when obstacles interfere with the path between the ego-vehicle and the leader vehicle. This work introduces a planning and decision component to generalize vehicle following to situations dealing with non-automated interfering vehicles in mixed traffic. We focus on lateral control for collision avoidance. Previously, this autonomous driving capability was purely based on the vehicle-following principle. To extend this capability to dynamic driving situations, a dynamic path planning component is introduced. Several driving situations are identified that necessitate responses to more than the leader vehicle. We borrow an idea from robotics to solve the problem: treat the path of the leader vehicle as an elastic band that is subjected to repelling forces of obstacles in the surroundings. This elastic band framework offers the necessary features to cover dynamic driving situations. Simulation results show the effectiveness of this approach. Real-world results obtained with our demonstrator validate the simulation results.


Optical Metrology in Production Engineering | 2004

In-factory calibration of multiocular camera systems

Lars Krüger; Christian Wöhler; Alexander Würz-Wessel; Fridtjof Stein

A complete framework for automatic calibration of camera systems with an arbitrary number of image sensors is presented. This new approach is superior to other methods in that it obtains both the internal and external parameters of camera systems with arbitrary resolutions, focal lengths, pixel sizes, positions and orientations from calibration rigs printed on paper. The only requirement on the placement of the cameras is an overlapping field of view. Although the basic algorithms are suitable for a very wide range of camera models (including OmniView and fish eye lenses) we concentrate on the camera model by Bouguet (http://www.vision.caltech.edu/bouguetj/). The most important part of the calibration process is the search for the calibration rig, a checkerboard. Our approach is based on the topological analysis of the corner candidates. It is suitable for a wide range of sensors, including OmniView cameras, which is demonstrated by finding the rig in images of such a camera. The internal calibration of each camera is performed as proposed by Bouguet, although this may be replaced with a different model. The calibration of all cameras into a common coordinate system is an optimization process on the spatial coordinates of the calibration rig. This approach shows significant advantages compared to the method of Bouguet, especially for cameras with a large field of view. A comparison of our automatic system with the camera calibration toolbox for MATLAB, which contains an implementation of the Bouguet calibration, shows the increased accuracy of the automatic approach compared to the manual one.
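For reference, the projection step of the Bouguet-style pinhole model mentioned above can be sketched as follows, with radial terms k1, k2 and tangential terms p1, p2. This is a minimal version (no higher-order radial term), intended only to show what the internal parameters being calibrated actually do.

```python
import numpy as np

def project_bouguet(pts_cam, fx, fy, cx, cy,
                    k1=0.0, k2=0.0, p1=0.0, p2=0.0):
    """Project (N, 3) camera-frame points to pixel coordinates using a
    pinhole model with radial (k1, k2) and tangential (p1, p2)
    distortion, as in Bouguet-style calibration."""
    x = pts_cam[:, 0] / pts_cam[:, 2]          # normalize by depth
    y = pts_cam[:, 1] / pts_cam[:, 2]
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2      # radial distortion factor
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return np.stack([fx * x_d + cx, fy * y_d + cy], axis=1)
```

The extrinsic part of the calibration described in the paper then amounts to minimizing, over the rig and camera poses, the difference between projections like these and the detected checkerboard corners.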


Computer Vision and Pattern Recognition | 2012

The challenge of putting vision algorithms into a car

Fridtjof Stein

Current and future driver assistance systems rely heavily on environment perception. Using radar, ultrasound, laser, and camera systems, they address the demand for comfortable and safe transportation. In this talk, we present the special biotope of vehicle-based machine vision algorithms. Developing software for computer vision tasks in the car industry is accompanied by several tough constraints, such as:

- working with limited hardware resources
- running with little energy consumption
- using software economically
- demanding reliable and robust building blocks
- requiring methods that handle adverse environmental conditions, e.g. atmospheric disturbances or low light.

We outline and discuss the steps of the development process, starting with a first idea for an assistance system up to a final embedded system.


IEEE Intelligent Vehicles Symposium | 2006

Monocular Motion Detection Using Spatial Constraints in a Unified Manner

Jens Klappstein; Fridtjof Stein; Uwe Franke

The knowledge about moving objects plays an important role in robot navigation and driver assistance systems. Several motion detection techniques based on the optical flow were developed in the past. To our knowledge, none of them exploits the available constraint envelope of a 3D point. In this paper a two-view algorithm is proposed taking advantage of the epipolar, the positive depth, and the positive height constraint, allowing the detection of independent motion and most collinear motions. The constraints are combined in a unified manner resulting in a scalar error function. This enables a direct weighting of the error with the certainty of the measured optical flow. Furthermore, the detection limit of objects moving collinear with the camera and with identical speed is investigated. Experimental results in traffic and indoor scenarios demonstrate the effectiveness of the proposed algorithm.
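How the three constraints can be combined into one scalar error per flow vector may be sketched as below, assuming calibrated (normalized) image coordinates, known ego-motion (R, t), and a y-down camera frame at known height above the road. The Sampson epipolar distance, the linear triangulation, and the simple additive penalty weighting are illustrative substitutes for the paper's actual error function.

```python
import numpy as np

def skew(v):
    """Cross-product matrix: skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def motion_error(x0, x1, R, t, cam_height=1.2):
    """Scalar error for one flow correspondence x0 -> x1 (homogeneous
    normalized image points).  A static 3D point must satisfy the
    epipolar constraint, have positive depth in both views, and lie
    above the road plane; a large error flags independent motion."""
    x0 = np.asarray(x0, float)
    x1 = np.asarray(x1, float)
    t = np.asarray(t, float)
    E = skew(t) @ R                      # essential matrix
    # epipolar constraint: Sampson distance to the epipolar line
    Ex0, Etx1 = E @ x0, E.T @ x1
    sampson = (x1 @ E @ x0) ** 2 / (Ex0[0] ** 2 + Ex0[1] ** 2
                                    + Etx1[0] ** 2 + Etx1[1] ** 2)
    # triangulate: solve lam1 * x1 = R @ (lam0 * x0) + t for the depths
    A = np.stack([R @ x0, -x1], axis=1)
    sol, _, _, _ = np.linalg.lstsq(A, -t, rcond=None)
    lam0, lam1 = sol
    depth_pen = max(0.0, -lam0) + max(0.0, -lam1)     # positive depth
    # positive height: with y down, the road plane is at y = cam_height
    height_pen = max(0.0, lam0 * x0[1] - cam_height)
    return sampson + depth_pen + height_pen
```

A static point reconstructs consistently and scores near zero, while a laterally moving point violates the epipolar and depth constraints and scores high, which is the basis for the detection described above.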


Intelligent Robots and Systems | 1999

Dead reckoning and cartography using stereo vision for an autonomous car

Stefan K. Gehrig; Fridtjof Stein

Our main objective in this paper is to perform a cartography of a road scene into a reference frame at rest, where 3D measurements delivered by on-board sensors serve as input. The main sensors of our autonomous vehicle are two CCD cameras. Their pictures are combined using stereopsis to generate 3D data. We need dead reckoning to properly associate 3D data among the frames. This requires a precise ego-motion estimate. Dead reckoning using only standard vehicle odometry (velocity and steering angle) can cause non-negligible errors, especially in situations where side slip or skidding occurs. We use stationary points in the scene to support the determination of our ego-motion. Two types of stationary objects are used: Firstly, stationary vertical landmarks such as traffic signs are used to compensate errors in our localization prediction. Secondly, lane markings measured in consecutive frames are used to compensate orientation errors. Preliminary results show that dead reckoning using stationary objects can vastly improve self-localization.
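The correction step, recovering the ego-motion between two frames from stationary landmarks, can be sketched as a least-squares rigid alignment (Kabsch/Procrustes). This is a simplified 2D version assuming point correspondences between frames are already established; the paper additionally uses odometry prediction and lane markings, which are not reproduced here.

```python
import numpy as np

def rigid_2d(p_prev, p_curr):
    """Least-squares 2D rigid transform (R, t) mapping landmark
    positions seen in the previous frame onto the current frame
    (Kabsch algorithm).  For stationary landmarks this transform is
    the inverse of the ego-motion between the two frames."""
    a = p_prev - p_prev.mean(axis=0)      # centre both point sets
    b = p_curr - p_curr.mean(axis=0)
    H = a.T @ b                           # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T    # enforce a proper rotation
    t = p_curr.mean(axis=0) - R @ p_prev.mean(axis=0)
    return R, t
```

In a dead-reckoning loop, the transform recovered from traffic signs or similar landmarks would replace or correct the pose increment predicted from velocity and steering angle, which is exactly where pure odometry fails under side slip.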
