Philipp Lindner
Chemnitz University of Technology
Publications
Featured research published by Philipp Lindner.
international conference on intelligent transportation systems | 2009
Philipp Lindner; Eric Richter; Gerd Wanielik; Kiyokazu Takagi; Akira Isogai
Lane recognition is a function which is needed for a variety of driver assistance systems. For example, Lane Departure Warning and Lane Keeping rely on information provided by a lane estimation algorithm. One important step of the lane estimation procedure is the extraction of measurements or detections which can be used to estimate the shape of the road or lane. These detections are generated by white lane markers or the road border itself. Lane estimation based on grayscale cameras has been under heavy development for many years. However, the performance of passive, camera-based systems can degrade under certain circumstances, e.g. during dynamic changes of ambient brightness. Fusion with an active sensor can significantly increase the robustness of such systems. In this paper, an approach is presented to detect lane marks using an active light detection and ranging device (lidar). It is shown that highly reflective lane marks can be reliably detected. A polar Lane Detector Grid is used to combine the distance and intensity measurement channels of the lidar sensor.
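The abstract does not give the grid details, but the core idea of a polar grid that combines the lidar's distance and intensity channels can be sketched as follows; the grid dimensions, the normalised intensity scale, and the reflectivity threshold are illustrative assumptions, not values from the paper:

```python
import math

def build_polar_grid(points, r_max=30.0, n_r=30, n_phi=36):
    """Accumulate lidar echoes into a polar (range, angle) grid.

    points: iterable of (x, y, intensity) tuples in the sensor frame,
    with intensity assumed normalised to [0, 1].
    Returns one grid of summed intensities and one of hit counts.
    """
    intensity = [[0.0] * n_phi for _ in range(n_r)]
    hits = [[0] * n_phi for _ in range(n_r)]
    cell_r = r_max / n_r
    for x, y, i in points:
        r = math.hypot(x, y)
        if r >= r_max:
            continue  # outside the grid
        ri = int(r / cell_r)
        pi_ = int((math.atan2(y, x) + math.pi) / (2 * math.pi) * n_phi) % n_phi
        intensity[ri][pi_] += i
        hits[ri][pi_] += 1
    return intensity, hits

def detect_lane_marks(intensity, hits, min_reflectivity=0.8):
    """Report cells whose mean echo intensity exceeds the threshold:
    painted lane marks are far more retro-reflective than asphalt."""
    marks = []
    for ri, row in enumerate(intensity):
        for pi_, total in enumerate(row):
            if hits[ri][pi_] and total / hits[ri][pi_] > min_reflectivity:
                marks.append((ri, pi_))
    return marks
```

Averaging the intensity per cell rather than thresholding single echoes makes such a detector less sensitive to isolated high-intensity returns.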
ieee intelligent vehicles symposium | 2009
Hendrik Weigel; Philipp Lindner; Gerd Wanielik
A multi-sensor system for vehicle tracking and lane detection is presented in this contribution. The system utilizes a lidar and a monocular camera sensor. The main focus for the lidar lies in the field of vehicle detection, while the camera is initially used for lane detection. Driver assistance systems already realized with these sensors and introduced onto the market are adaptive cruise control (ACC) and lane departure warning (LDW). More sophisticated ACC functionalities like collision mitigation and collision avoidance systems require a higher reliability and accuracy of the environment recognition. The joint use of both sensors facilitates this without additional hardware expenses. It exploits the advantages of the different sensor concepts to extend the capabilities for interpretation of the vehicle environment. In our case this ensures a more accurate estimation of the dimensions and positions of the vehicles within the lane and on the entire road. Finally, we present an innovative human machine interface (HMI) solution that displays the desired assistance functionality to the driver with high transparency and clarity.
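As an illustration of how such a joint use of the two sensors might look, the sketch below projects lidar object hypotheses into the camera image and compares them with the detected lane boundary columns. The pinhole parameters and the lane boundary columns are placeholder assumptions, not the paper's calibration:

```python
def project_to_image(point_xyz, fx=800.0, fy=800.0, cx=640.0, cy=360.0):
    """Project a 3-D point (camera frame: x right, y down, z forward)
    into pixel coordinates with a simple pinhole model."""
    x, y, z = point_xyz
    if z <= 0:
        return None  # point lies behind the camera
    return (fx * x / z + cx, fy * y / z + cy)

def associate(lidar_objects, lane_left_u, lane_right_u):
    """Label each lidar vehicle hypothesis by comparing its projected
    image column with the camera-detected lane boundary columns."""
    labels = []
    for obj in lidar_objects:
        px = project_to_image(obj)
        if px is None:
            labels.append('behind')
        elif lane_left_u <= px[0] <= lane_right_u:
            labels.append('ego lane')
        else:
            labels.append('other')
    return labels
```

The association step is where the fusion happens: lidar contributes accurate distance and extent, the camera contributes the lane assignment.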
Archive | 2010
Robin Schubert; Eric Richter; Norman Mattern; Philipp Lindner; Gerd Wanielik
The ongoing development of Advanced Driver Assistance Systems (ADAS) requires new prototyping concepts. In this paper the concept vehicle Carai is presented as a generic vehicle equipped with a variety of sensors, computing units, and different HMI components in order to allow the fast implementation and evaluation of different ADAS applications. In addition to the description of the technical components, a software framework is presented which enables fast software prototyping by providing basic data acquisition and processing modules, including sophisticated data fusion algorithms. Finally, the usage of the Carai is demonstrated using the example of different ADAS applications.
international conference on multisensor fusion and integration for intelligent systems | 2010
Philipp Lindner; Stephan Blokzyl; Gerd Wanielik; Ullrich Scheunert
Lane feature extraction is a function which is needed for autonomous driving and driver assistance systems. For example, Lane Departure Warning and Lane Keeping rely on information provided by a lane estimation algorithm. One important step of the lane estimation procedure is the extraction of measurements or detections which can be used to estimate the shape of the road or lane. These detections are generated by white lane markers or the road border itself. Additionally, traffic rules can be derived if a system is able to distinguish, for example, between solid and dashed lane marks as well as between the different types of lane marks themselves (length, thickness). Every state estimation filter needs a properly defined model and reliable measurements to work correctly. This paper presents an approach to extract reliable lane mark measurements using multi level feature extraction [1] and classification. Geometric features are generated for lane mark candidates and used to distinguish between true and false lane mark detections.
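The geometric-feature step can be sketched as below; the feature names and the rule-based thresholds are illustrative stand-ins for the classifier described in the abstract, not its actual decision logic:

```python
def mark_features(segment):
    """segment: list of (x, y) detections belonging to one lane mark
    candidate. Returns simple geometric features: longitudinal length
    and lateral width of the candidate (both in metres)."""
    xs = [p[0] for p in segment]
    ys = [p[1] for p in segment]
    return {'length': max(xs) - min(xs), 'width': max(ys) - min(ys)}

def classify_mark(feat, min_len_solid=5.0, max_width=0.3):
    """Rule-based stand-in for a trained classifier: reject wide blobs
    (guard rails, vehicles), then split the remaining candidates into
    solid and dashed marks by their longitudinal extent."""
    if feat['width'] > max_width:
        return 'reject'
    return 'solid' if feat['length'] >= min_len_solid else 'dashed'
```

The same features (length, thickness) also carry the traffic-rule information mentioned in the abstract, e.g. solid vs. dashed marking.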
2009 IEEE Workshop on Computational Intelligence in Vehicles and Vehicular Systems | 2009
Philipp Lindner; Gerd Wanielik
Automotive safety systems like “Adaptive Cruise Control” (ACC), “Lane Change Assist” and pre-crash systems nowadays deal with the detection of vehicles on the road. All major upper-class vehicle manufacturers like Mercedes, BMW, Chrysler and Lexus as well as leading suppliers (Bosch, Continental, Delphi) are currently developing intelligent vehicle safety systems. For vehicle environment perception, many different sensors are under investigation. Well-known and cheap devices like grayscale cameras are already integrated in many cars today, e.g. for parking assist functions. To meet the requirements for extremely reliable and robust environment recognition, additional sensors and multi-sensor data fusion approaches have to be applied. New sensors like 3-dimensional measuring multilayer laser scanners will be introduced in the near future to deliver environment information for these systems. By fusing multiple sensors like laser and radar systems, the reliability of automotive safety applications will be improved significantly. The paper presents an approach for processing the data of a multilayer laser scanner (lidar) for the detection of vehicles in road environments. The lidar data is processed using a new 3-dimensional occupancy grid. An example is given for an automotive pre-crash safety application.
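A minimal sketch of a 3-dimensional occupancy grid for lidar data, assuming a simple hit-count occupancy model; the cell size and hit threshold are illustrative, not the paper's parameters:

```python
class OccupancyGrid3D:
    """Minimal 3-D occupancy grid: the space around the vehicle is
    split into cubic cells, each lidar echo increments its cell's
    counter, and cells above a hit threshold are reported as occupied.
    A sparse dict keeps memory proportional to the observed space."""

    def __init__(self, cell=0.5, threshold=3):
        self.cell = cell            # edge length of a cubic cell [m]
        self.threshold = threshold  # hits needed to call a cell occupied
        self.counts = {}

    def insert(self, points):
        """points: iterable of (x, y, z) lidar echoes."""
        for x, y, z in points:
            key = (int(x // self.cell), int(y // self.cell), int(z // self.cell))
            self.counts[key] = self.counts.get(key, 0) + 1

    def occupied(self):
        """Set of cell indices currently considered occupied."""
        return {k for k, c in self.counts.items() if c >= self.threshold}
```

Occupied cells would then be clustered into object hypotheses, e.g. vehicles, in a subsequent step.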
international conference on intelligent transportation systems | 2009
Eric Richter; Philipp Lindner; Gerd Wanielik; Kiyokazu Takagi; Akira Isogai
The robust and reliable detection of objects in the surroundings of a vehicle is an important prerequisite for collision avoidance and collision mitigation systems. In this paper, an ego-motion compensated tracking approach is presented which uses extended occupancy grid methods for both detection and tracking of objects observed by lidar. The approach is able to estimate the velocity and direction of moving objects as well as to distinguish between moving and stationary objects. Additionally, an approach for drastically reducing the computational effort is presented.
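The ego-motion compensation idea can be sketched at the cell level: shift the previous scan's occupied cells by the ego displacement, then compare with the current scan. The 2-D cell representation and the moving/stationary rule below are simplifying assumptions, not the paper's full method:

```python
def compensate_ego_motion(cells, dx, dy):
    """Shift occupied cell indices from the previous scan into the
    current vehicle frame, given the ego translation (dx, dy) measured
    in whole cells between the two scans."""
    return {(x - dx, y - dy) for x, y in cells}

def split_moving(prev_cells, curr_cells, dx, dy):
    """Cells occupied in both compensated scans belong to stationary
    structure; cells newly occupied in the current scan indicate moving
    objects (or newly visible area)."""
    prev_comp = compensate_ego_motion(prev_cells, dx, dy)
    stationary = curr_cells & prev_comp
    moving = curr_cells - prev_comp
    return stationary, moving
```

Tracking how a "moving" cell cluster shifts between compensated scans would then yield the velocity and direction estimates mentioned in the abstract.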
international conference on intelligent transportation systems | 2014
Philipp Lindner; Gerd Wanielik
This paper describes a method to analyse driver behaviour before lane change manoeuvres in order to detect the lane change intent before the actual manoeuvre itself is initiated. Recent research shows that drivers' visual behaviour is an essential indicator for estimating the intent to change lane. To create stochastic models for this process, it is necessary to detect which areas are frequently taken into focus by the driver (the front window and side mirrors, for example). In a laboratory setting, these regions can be measured directly using eye tracking methods. When focusing on solutions which can be integrated into cars under realistic conditions, common eye tracking methods fail. More stable solutions are head tracking systems, mostly based on mono or stereo cameras. In this article, an approach is presented for estimating the driver's glance areas from head tracking data.
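A minimal sketch of mapping head pose to glance areas; the region names and angle thresholds are illustrative assumptions for a left-hand-drive car, not values from the article:

```python
def glance_area(yaw_deg, pitch_deg):
    """Map head pose (yaw: positive right, pitch: positive up, both in
    degrees relative to straight-ahead) to a coarse in-vehicle glance
    region. Thresholds are illustrative placeholders."""
    if yaw_deg <= -45:
        return 'left mirror'
    if yaw_deg >= 45:
        return 'right mirror'
    if 15 <= yaw_deg < 45 and pitch_deg >= 10:
        return 'rear-view mirror'
    if pitch_deg <= -20:
        return 'instrument cluster'
    return 'windshield'
```

Counting how often each region is hit over a time window would give the glance frequencies needed as input for the stochastic intent models.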
ieee intelligent vehicles symposium | 2007
Ullrich Scheunert; Philipp Lindner; Eric Richter; Thomas Tatschke; Dominik Schestauber; Erich Fuchs; Gerd Wanielik
The fusion of data from different sensorial sources is today the most promising method to increase the robustness and reliability of environmental perception. The project ProFusion2 advances sensor data fusion for automotive applications in the field of driver assistance systems. ProFusion2 was created to enhance fusion techniques and algorithms beyond the current state of the art. It is a horizontal subproject in the Integrated Project PReVENT (funded by the EC). The paper presents two approaches concerning the detection of vehicles in road environments. An early fusion and a multi level fusion processing strategy are described. The common framework for the representation of the environment model and of perception results is introduced. The key feature of this framework is that all data involved are stored and represented in one perception memory with a common data structure, making them holistically accessible.
international conference on information fusion | 2006
Ullrich Scheunert; Philipp Lindner; Heiko Cramer
The paper presents a methodology for using fuzzy operators for the hierarchical fusion of processing results in a multi sensor data processing system. Tracking and fusion of intermediate results is performed on several levels of processing (signal level, several feature levels, object level). To produce higher level hypotheses on the basis of lower level components, grouping rules based on certain assignment decisions are used. In this paper, this is seen as a classification procedure that step by step tests and assigns components to a higher level feature or object. For these classifications, a suitable combination of a fuzzy operator for fusion and membership functions for classification is proposed to meet the requirements of the hierarchical classification and the necessity to include confidence values. In particular, the dependencies between the n-fold one-dimensional classification and the n-dimensional classification are addressed. We use a straightforward example to demonstrate the concept of the multi level fusion and classification procedure.
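The combination of membership functions with a fuzzy operator can be sketched as follows; the trapezoidal membership shape and the minimum/product t-norms are generic fuzzy-logic building blocks, assumed here for illustration rather than taken from the paper:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function: 0 outside (a, d), 1 on [b, c],
    linear ramps in between. Used to grade how well one feature value
    fits a class (e.g. 'typical lane mark width')."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def fuse(memberships, operator='product'):
    """Combine the per-feature membership values of one hypothesis into
    a single confidence with a t-norm: 'min' is the strictest single
    criterion, 'product' penalises several weak features at once."""
    result = 1.0
    for m in memberships:
        result = min(result, m) if operator == 'min' else result * m
    return result
```

Applying `fuse` level by level, with the output confidences of one level feeding the membership functions of the next, mirrors the hierarchical scheme described in the abstract.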
international conference on intelligent transportation systems | 2006
Ullrich Scheunert; Philipp Lindner; Heiko Cramer; Thomas Tatschke; A. Polychronopoulos; Gerd Wanielik
The fusion of data from different sensorial sources is nowadays an often-used method to increase the robustness and reliability of automatic environmental perception. The project ProFusion2, which is a horizontal subproject in the IP PReVENT (funded by the EC), was created to enhance fusion techniques and algorithms beyond the current state of the art. The enhancement of the algorithms is strongly connected with the creation of a methodology to describe vehicle environments in a manner adequate to meet the requirements of robustness and reliability. In this paper, the definition of such a general environmental description is proposed and a corresponding general data structure is introduced. This data structure is able to handle all kinds of information occurring in a data fusion process. Additionally, an output structure of the perception system is proposed to work as an interface to the applications. The ProFusion2 community suggested a model for sensor data fusion in compliance with the Joint Directors of Laboratories (JDL) model, which is a widely known model for information fusion systems (Hall and Llinas, 2001).
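A hedged sketch of what such a general data structure might look like: hypotheses on all abstraction levels share one store and keep links to the lower-level parts they were built from. All class and field names are illustrative assumptions, not the structure defined by ProFusion2:

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    """One entry of a shared perception memory: a hypothesis at any
    abstraction level (signal, feature, object), with its producing
    source, a confidence value, and links to its lower-level parts."""
    level: str
    source: str
    state: dict
    confidence: float
    parts: list = field(default_factory=list)

class PerceptionMemory:
    """All fusion levels write into one store, so every module can read
    and refine the hypotheses produced by the others, and applications
    can query the object level as an output interface."""

    def __init__(self):
        self.items = []

    def add(self, hyp):
        self.items.append(hyp)
        return hyp

    def query(self, level):
        return [h for h in self.items if h.level == level]
```

Keeping the `parts` links makes every object hypothesis traceable back to the raw detections it was fused from, which is what makes the single-store design "holistically accessible".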