Ralf Kohlhaas
Center for Information Technology
Publications
Featured research published by Ralf Kohlhaas.
ieee intelligent vehicles symposium | 2013
Ralf Kohlhaas; Thomas Schamm; Dominik Lenk; J. Marius Zöllner
For automatic driving, vehicles must be able to recognize their environment and take over control. The vehicle must perceive relevant objects, including other traffic participants as well as infrastructure information, assess the situation and generate appropriate actions. This work is a first step in integrating previous work on environment perception and situation analysis toward automatic driving strategies. We present a method for automatic cruise control of vehicles in urban environments. The longitudinal velocity is influenced by the speed limit, the curvature of the lane, the state of the next traffic light and the most relevant target on the current lane. The necessary acceleration is computed with respect to information estimated by an instrumented vehicle.
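A longitudinal controller of this kind can be sketched as taking the most restrictive of several velocity constraints. The following is a minimal illustration, not the paper's actual controller; all parameter names, limits and formulas are assumptions:

```python
import math

def target_velocity(speed_limit, curvature, light_state, light_dist,
                    lead_dist=None, a_lat_max=2.0, a_brake=2.5,
                    time_gap=1.8, standstill=5.0):
    """Pick the most restrictive velocity among the influences named in the
    abstract: speed limit, lane curvature, next traffic light and the most
    relevant target on the current lane (all numeric limits are illustrative
    assumptions). Units: m, m/s, rad/m."""
    candidates = [speed_limit]
    if abs(curvature) > 1e-6:
        # limit lateral acceleration in curves: v = sqrt(a_lat_max / |kappa|)
        candidates.append(math.sqrt(a_lat_max / abs(curvature)))
    if light_state == "red":
        # constant-deceleration stopping profile in front of the light
        candidates.append(math.sqrt(2.0 * a_brake * max(light_dist, 0.0)))
    if lead_dist is not None:
        # keep a constant time gap behind the most relevant target
        candidates.append(max((lead_dist - standstill) / time_gap, 0.0))
    return min(candidates)
```

The necessary acceleration would then follow from the gap between the current velocity and this target.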
international conference on intelligent transportation systems | 2014
Ralf Kohlhaas; Thomas Bittner; Thomas Schamm; J. Marius Zöllner
Originating from simple cruise control systems that monitor and control the speed of the vehicle, driver assistance systems have evolved into intelligent systems. Future assistance systems will combine information from different sensors and data sources to build a model of the current traffic scene. In this way, they will be able to assist with challenging tasks in complex situations. Toward this goal, we present a semantic scene representation for modeling traffic scenes. Based on a geometric representation, a semantic representation is defined using an ontology to model relevant traffic elements and relations. Considering the potential relations of the ego vehicle, a semantic state space of the ego vehicle is derived, and transitions are defined that model state changes (maneuvers). The model can be used, for example, for situation analysis and for high-level planning in driving-hint generation or automated driving. The method is evaluated in different traffic situations and on real sensor data, and it will be applied to (semi-)automated driving in a real test vehicle.
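A semantic representation of this kind is obtained by discretizing geometric relations between scene elements into symbolic ones. A hedged sketch of that step (the thresholds and relation labels are assumptions, not the paper's ontology):

```python
def semantic_relation(ego, other, lane_width=3.5):
    """Map a metric configuration to a symbolic relation of the ego vehicle
    to another traffic participant. Positions are dicts with x (longitudinal)
    and y (lateral) offsets in metres; labels are illustrative."""
    dx = other["x"] - ego["x"]   # longitudinal offset
    dy = other["y"] - ego["y"]   # lateral offset
    lateral = ("same_lane" if abs(dy) < lane_width / 2.0
               else "left_of" if dy > 0 else "right_of")
    longitudinal = "ahead_of" if dx > 0 else "behind"
    return (longitudinal, lateral)
```

Collecting such relations for every scene element yields the symbolic state on which maneuver transitions can be defined.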
intelligent vehicles symposium | 2014
Marc René Zofka; Ralf Kohlhaas; Thomas Schamm; J. Marius Zöllner
The design and development process of advanced driver assistance systems (ADAS) is divided into different phases, in which the algorithms are implemented first as a model, then as software and finally as hardware. Since it is infeasible to simulate all possible driving situations for environment perception and interpretation algorithms, there is still a need for expensive and time-consuming real test drives of thousands of kilometers. We therefore present a novel approach for testing and evaluating vision-based ADAS in which reliable simulations are fused with recorded data from test drives to provide a task-specific reference model. This approach provides ground truth with much higher reliability and reproducibility than real test drives, and with more authenticity than pure simulations, and can be applied already in early steps of the design process. We illustrate the effectiveness of our approach by testing a vision-based collision mitigation system on recordings of a German highway.
international conference on intelligent transportation systems | 2014
Florian Kuhnt; Ralf Kohlhaas; Rüdiger Jordan; Thomas Gußner; Thomas Gumpp; Thomas Schamm; J. Marius Zöllner
For Advanced Driver Assistance Systems and Autonomous Driving it is of major advantage to know the future trajectories of traffic participants. These are influenced by many factors in the environment; one important factor is the geometry of the intersection a vehicle is approaching. In this paper we describe how a spline-based intersection model can be extracted from low-detail map data such as OpenStreetMap and adjusted over time. A particle-filter-based map-matching algorithm is used to localize the ego vehicle relative to the intersection model. Additionally, objects detected by the ego vehicle's sensors are matched onto the intersection model in order to predict the future trajectories of both the ego vehicle and other traffic participants.
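One predict-weight-resample cycle of such a particle filter can be sketched as follows. This is a deliberately simplified illustration, not the paper's filter: the state is reduced to a 1-D arc length on one lane of the intersection model, and the Gaussian noise models are assumptions. `lane_pos(s)` stands in for evaluating the spline model at arc length `s`:

```python
import math
import random

def particle_filter_step(particles, driven_dist, measurement, lane_pos,
                         motion_noise=0.5, meas_sigma=1.0):
    """Localize the ego vehicle by its arc length s along a lane.
    particles: list of arc lengths; driven_dist: odometry since last step;
    measurement: observed (x, y) position; lane_pos: s -> (x, y) on the model."""
    # predict: advance every particle by the odometry-driven distance
    predicted = [s + driven_dist + random.gauss(0.0, motion_noise)
                 for s in particles]
    # weight: likelihood of the position measurement given each particle
    weights = []
    for s in predicted:
        x, y = lane_pos(s)
        d2 = (x - measurement[0]) ** 2 + (y - measurement[1]) ** 2
        weights.append(math.exp(-d2 / (2.0 * meas_sigma ** 2)))
    # resample proportionally to the weights
    return random.choices(predicted, weights=weights, k=len(particles))
```

Repeating this step as new odometry and position measurements arrive keeps the particle cloud concentrated on the vehicle's position in the intersection model.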
international conference on intelligent transportation systems | 2016
Sebastian Klemm; Marc Essinger; Jan Oberlander; Marc René Zofka; Florian Kuhnt; Michael Weber; Ralf Kohlhaas; Alexander Kohs; Arne Roennau; Thomas Schamm; J. Marius Zöllner
Electric mobility combined with recent advances in autonomous driving offers a solution to the environmental and traffic challenges of the modern metropolis. In this work we present an innovative system that completely changes valet parking and the process of charging electric vehicles. The introduced system tackles the problem of precise and efficient autonomous navigation for vehicles in GPS-denied environments such as 3-D multi-story parking garages. In addition, a robot is employed to autonomously charge the parked electric vehicles. We give insight into the concept and implementation of such a system and evaluate it in real parking garages. We extensively tested the system in a real-world application in which a driver leaves the vehicle at the entry of a parking garage and the vehicle then performs the navigation and parking task on its own. Our test vehicle autonomously navigated more than 50 times from the entry of a parking garage to an assigned parking spot on the 6th floor and docked with the charging robot. The navigation system is precise, efficient and capable of running online in real-world scenarios.
international conference on intelligent transportation systems | 2015
Ralf Kohlhaas; Daniel Hammann; Thomas Schamm; J. Marius Zöllner
Highly automated driving is receiving increasing attention from research as well as from vehicle manufacturers. In the past few years several demonstrations of automated vehicles driving on highways and even in urban scenarios were performed, and several challenges arose in this context. One challenge is understanding complex situations and generating behavior within them, especially in urban areas. Trajectory planning in these scenarios can be complex and expensive; semantic scene modeling and planning can provide vital information for generating reliable and safe trajectories for automated vehicles. In this work we present a novel approach for high-level maneuver planning. It is based on a semantic state space that describes possible actions of a vehicle with respect to other scene elements such as lane segments and traffic participants. The semantic characteristics of this state space allow for generalized planning even in complex situations. Concepts such as heuristics and homotopies are utilized to optimize planning, making it possible to efficiently generate high-level maneuver sequences for automated driving. The approach is tested on synthetic data as well as on sensor data from a real test drive.
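Planning in such a state space amounts to graph search in which nodes are semantic states and edges are maneuvers. A minimal A*-style sketch under that reading (the state encoding, maneuver names and cost model are illustrative assumptions, not the paper's formulation):

```python
import heapq
import itertools

def plan_maneuvers(start, goal, successors, cost, heuristic=lambda s: 0.0):
    """Search a semantic state space whose edges are maneuvers.
    successors(state) yields (maneuver, next_state) pairs; cost(maneuver)
    is its expense; heuristic must not overestimate the remaining cost."""
    tie = itertools.count()  # break ties between equal-cost frontier entries
    frontier = [(heuristic(start), next(tie), 0.0, start, [])]
    closed = set()
    while frontier:
        _, _, g, state, plan = heapq.heappop(frontier)
        if state == goal:
            return plan          # cheapest maneuver sequence found
        if state in closed:
            continue
        closed.add(state)
        for maneuver, nxt in successors(state):
            if nxt not in closed:
                g2 = g + cost(maneuver)
                heapq.heappush(frontier, (g2 + heuristic(nxt), next(tie),
                                          g2, nxt, plan + [maneuver]))
    return None                  # goal unreachable
```

In this reading, heuristics prune the frontier and homotopy reasoning would collapse maneuver sequences that pass obstacles on the same side into one representative, keeping the search tractable in dense scenes.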
simulation modeling and programming for autonomous robots | 2016
Marc René Zofka; Florian Kuhnt; Ralf Kohlhaas; J. Marius Zöllner
The development and validation of highly automated driving functions toward autonomous driving requires efficient frameworks to reduce the need for expensive, time-consuming and dangerous real test drives. This applies to the development of autonomous small-scale as well as real-scale vehicles. In the present work we introduce a simulation framework for the development and virtual validation of small-scale vehicles to tackle this issue. Using the concrete challenge of the student competition AUDI Autonomous Driving Cup, we demonstrate how a closed world can be transformed into appropriate simulation models in order to stimulate higher-level automated driving functions. The framework is demonstrated on the example of highly automated driving functions on a small-scale vehicle. Finally, we transfer the important results to the virtual validation of real-scale autonomous vehicles.
simulation modeling and programming for autonomous robots | 2016
Jacques Kaiser; J. Camilo Vasquez Tieck; Christian Hubschneider; Peter Wolf; Michael Weber; Michael Hoff; Alexander Friedrich; Konrad Wojtasik; Arne Roennau; Ralf Kohlhaas; Rüdiger Dillmann; J. Marius Zöllner
Spiking neural networks are in theory more computationally powerful than the rate-based neural networks often used in deep learning architectures. However, unlike for rate-based neural networks, it is not yet clear how to train spiking networks to solve complex problems. There are still no standard training algorithms, which prevents roboticists from using spiking networks and has led to a lack of neurorobotics applications. The contribution of this paper is twofold. First, we present a modular framework to evaluate neural self-driving vehicle applications. It provides a visual encoder from camera images to spikes inspired by the silicon retina (DVS), and a steering wheel decoder based on an agonist-antagonist muscle model. Second, using this framework, we demonstrate a spiking neural network that controls a vehicle end-to-end for lane-following behavior. The network is feed-forward and relies on hand-crafted feature detectors. In future work, this framework could be used to design more complex networks and to use the evaluation metrics for learning.
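The agonist-antagonist readout can be pictured as two motor populations pulling the steering wheel in opposite directions, with the normalized difference of their spike rates giving the wheel angle. A minimal sketch under that interpretation (the linear normalization and the `max_angle` value are assumptions, not the paper's muscle model):

```python
def decode_steering(agonist_rate, antagonist_rate, max_angle=0.5):
    """Decode a steering angle (radians) from the spike rates (spikes/s)
    of two opposing motor populations. Equal activity keeps the wheel
    centered; one-sided activity saturates at +/- max_angle."""
    total = agonist_rate + antagonist_rate
    if total == 0.0:
        return 0.0  # no motor activity: keep the wheel centered
    return max_angle * (agonist_rate - antagonist_rate) / total
```

A rate-difference decoder of this kind degrades gracefully with noisy spike counts, since common-mode activity cancels in the numerator.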
international conference on intelligent transportation systems | 2016
Florian Kuhnt; Micha Pfeiffer; Peter Zimmer; David Zimmerer; Jan-Markus Gomer; Vitali Kaiser; Ralf Kohlhaas; J. Marius Zöllner
One of the biggest challenges towards fully automated driving is achieving robustness. Autonomous vehicles will have to fully recognize their environment even in harsh weather conditions. Additionally, they have to be able to detect sensor and algorithm failures and react properly to keep the vehicle in a safe state.
international conference on informatics in control automation and robotics | 2016
Ming Gao; Ralf Kohlhaas; J. Marius Zöllner
We focus on the problem of learning and recognizing contextual tasks from human demonstrations, aiming to efficiently assist mobile robot teleoperation through shared autonomy. We present in this study a novel unsupervised contextual task learning and recognition approach consisting of two phases. First, we use a Dirichlet Process Gaussian Mixture Model (DPGMM) to cluster the human motion patterns of task executions from unannotated demonstrations, where the number of possible motion components is inferred from the data itself instead of being manually specified a priori or determined through model selection. After clustering, we employ a Sparse Online Gaussian Process (SOGP) to classify the query point against the learned motion patterns, owing to its superior introspective capability and scalability to large datasets. The effectiveness of the proposed approach is confirmed by extensive evaluations on real data.
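The appeal of the DPGMM here is that the number of motion components grows with the data instead of being fixed a priori. That idea can be illustrated with a much simpler distance-based sequential clustering; this rule is an assumption made for illustration only and is not the paper's Dirichlet-process inference:

```python
def sequential_cluster(samples, threshold=2.0):
    """Cluster 1-D samples by opening a new cluster whenever a sample is
    far from every existing cluster mean, so the number of clusters is
    inferred from the data rather than specified in advance."""
    means, counts, labels = [], [], []
    for x in samples:
        dists = [abs(x - m) for m in means]
        if dists and min(dists) < threshold:
            k = dists.index(min(dists))
            counts[k] += 1
            means[k] += (x - means[k]) / counts[k]   # incremental mean update
        else:
            means.append(float(x))                   # open a new cluster
            counts.append(1)
            k = len(means) - 1
        labels.append(k)
    return labels, means
```

A DPGMM replaces the hard distance threshold with a probabilistic prior over partitions, but the nonparametric flavor is the same: components appear only where the data demands them.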