Roland Schweiger
Daimler AG
Publications
Featured research published by Roland Schweiger.
IEEE Intelligent Vehicles Symposium | 2006
P. Smuda; Roland Schweiger; H. Neumann; Werner Ritter
In the following we present a robust, real-time fusion system for detecting the road course up to 100 m ahead. Using a particle filter, we developed a robust and flexible system that fuses information from a digital map with different cues from an image sensor. In addition, we present a new image-based feature for road surface detection. The system works in real time.
IEEE Intelligent Vehicles Symposium | 2005
Roland Schweiger; Heiko Neumann; Werner Ritter
In this contribution we present a sensor-data fusion concept based on particle filters. The investigation aims at a robust and easily extensible approach capable of combining the information of different sensors. We exploit the particle filter's characteristics and introduce weighting functions that are multiplied during the measurement-update stage of the particle filter implementation. The concept is demonstrated in a vehicle detection system that combines symmetry detection, tail-lamp detection and radar measurements in night-vision applications.
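The core idea of the abstract above, multiplying one weighting function per cue into the particle weights during the measurement update, can be sketched as follows. The three cue functions are hypothetical Gaussian stand-ins; the real system derives them from symmetry detection, tail-lamp detection and radar measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-cue weighting functions over a toy 2-D particle state.
def symmetry_weight(particles):
    return np.exp(-0.5 * particles[:, 0] ** 2)

def tail_lamp_weight(particles):
    return np.exp(-0.5 * (particles[:, 0] - 0.2) ** 2)

def radar_weight(particles):
    return np.exp(-0.5 * (particles[:, 1] - 1.0) ** 2)

def measurement_update(particles, weights, cues):
    # Multiply the independent weighting functions of all cues into the
    # particle weights, then renormalize.
    w = weights.copy()
    for cue in cues:
        w = w * cue(particles)
    return w / w.sum()

particles = rng.normal(size=(500, 2))
weights = np.full(500, 1.0 / 500)
weights = measurement_update(particles, weights,
                             [symmetry_weight, tail_lamp_weight, radar_weight])
```

Because each cue enters as a multiplicative factor, adding a new sensor only means appending another weighting function to the list, which is what makes the approach easy to extend.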
IEEE Intelligent Vehicles Symposium | 2006
I. Kallenbach; Roland Schweiger; Günther Palm; Otto Löhlein
Boosted cascades for fast and reliable single-class object detection were introduced by Viola et al. (2001). Using this scheme for multi-class detection requires running multiple cascades in parallel, which increases computation time. We present an extension to the cascade that examines multiple classes jointly in its first stages. AdaBoost selects features common to all considered object classes, which are then computed only once and thus reduce the computation time of the overall system. We also show how to define the search window, as it needs to be adjusted to the specific objects. The multi-class cascade is applied to traffic scenes on rural roads, where pedestrians and reflection posts are detected.
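The structure described above, shared early stages followed by per-class cascade tails, can be sketched with hypothetical stage functions. In the real system each stage is a boosted ensemble of image features selected by AdaBoost; here a stage is any callable whose non-negative score means "pass".

```python
# Sketch of a multi-class cascade with shared first stages. Stage
# functions are hypothetical stand-ins for boosted feature ensembles.
def evaluate_cascade(window, shared_stages, class_cascades):
    # Shared stages: common features are evaluated only once and can
    # reject a window for all classes simultaneously.
    for stage in shared_stages:
        if stage(window) < 0:
            return []
    # Surviving windows continue into the per-class cascade tails.
    return [label for label, stages in class_cascades.items()
            if all(stage(window) >= 0 for stage in stages)]

shared = [lambda w: w["edge"] - 0.5]                     # joint stage
per_class = {
    "pedestrian":      [lambda w: w["ped"] - 0.5],
    "reflection_post": [lambda w: w["post"] - 0.5],
}
hits = evaluate_cascade({"edge": 0.9, "ped": 0.8, "post": 0.1},
                        shared, per_class)
```

The saving comes from the first loop: in a conventional setup, each class's cascade would recompute its own early features for every window, whereas here most windows are rejected before any class-specific work is done.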
IEEE Intelligent Vehicles Symposium | 2008
Matthias Serfling; Roland Schweiger; Werner Ritter
This contribution presents a road course estimation system for night-time driving that covers distances up to 120 m in rural environments. This is essential for upcoming warning systems, which must decide whether a detected object is on the road and thus of immediate importance. To realize a robust road course detection system, we present a fusion system that combines the information provided by a prototypical imaging radar, a digital map and a night-vision camera. The digital map is used to calculate the shape of the road, an ego-motion estimator determines the heading and position of the vehicle, and a particle filter combines features from the camera and the radar sensor to match the road shape with the road visible in the image.
International Symposium on Neural Networks | 2013
Raimar Wagner; Markus Thom; Roland Schweiger; Günther Palm; Albrecht Rothermel
Convolutional Neural Networks (CNNs) are commonly trained by plain supervised gradient descent. With sufficient training data, this leads to very competitive results for visual recognition tasks even when starting from a random initialization. When the amount of labeled data is limited, however, CNNs reveal their strong dependence on large training sets. Recent results have shown that a well-chosen optimization starting point can be beneficial for convergence to a well-generalizing minimum. Such starting points have mostly been found using unsupervised feature learning techniques such as sparse coding, or by transfer learning from related recognition tasks. In this work, we compare these two approaches against a simple patch-based initialization scheme and against a random initialization of the weights. We show that pre-training helps to train CNNs from few samples and that the correct choice of the initialization scheme can push the network's performance by up to 41% compared to random initialization.
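A patch-based initialization of the kind compared above can be sketched as drawing random patches from unlabeled training images and using them, zero-centered and normalized, as first-layer kernels. The function below is only a plausible stand-in for the paper's scheme, not its actual implementation.

```python
import numpy as np

def patch_init(images, num_filters, k, rng):
    # Draw random k x k patches from the training images, zero-center
    # and L2-normalize them, and use them as first-layer conv kernels.
    n, h, w = images.shape
    filters = np.empty((num_filters, k, k))
    for i in range(num_filters):
        img = images[rng.integers(n)]
        y, x = rng.integers(h - k + 1), rng.integers(w - k + 1)
        patch = img[y:y + k, x:x + k].astype(float)
        patch = patch - patch.mean()            # zero mean
        norm = np.linalg.norm(patch)
        filters[i] = patch / norm if norm > 0 else patch
    return filters

rng = np.random.default_rng(0)
images = rng.random((10, 28, 28))               # toy unlabeled images
kernels = patch_init(images, num_filters=16, k=5, rng=rng)
```

Unlike sparse coding or transfer learning, this scheme needs no optimization or source task at all, which is what makes it an interesting baseline when labeled data is scarce.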
IEEE Intelligent Vehicles Symposium | 2009
Matthias Serfling; Otto Loehlein; Roland Schweiger; Klaus Dietmayer
This contribution presents a robust night-time pedestrian detection system that fuses a camera sensor and a scanning radar sensor at feature level. Each sensor defines an over-determined set of features to be selected and parameterized using the supervised training algorithm AdaBoost. This technique assures an optimal selection and weighting of the features from both sensors depending on their discriminative power for the classification task. In the radar plane, a new complex signal filter has been derived that describes a local similarity measure of velocity differences. To achieve real-time capability, multiple classifiers are combined in a cascade.
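The feature-level fusion described above works because AdaBoost is sensor-agnostic: camera and radar features are simply concatenated into one pool, and each boosting round greedily picks whichever column is most discriminative on the reweighted data. A minimal sketch with trivial sign-stump weak learners (a simplification of the parameterized features in the paper):

```python
import numpy as np

def adaboost_select(X, y, rounds):
    # X: (n, d) feature matrix (camera and radar features concatenated),
    # y: labels in {-1, +1}. Each round selects the single feature column
    # with the lowest weighted error and returns (index, vote weight).
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    preds = np.sign(X)                  # sign stumps as weak learners
    preds[preds == 0] = 1
    selected = []
    for _ in range(rounds):
        errs = np.array([w[preds[:, j] != y].sum() for j in range(d)])
        flip = errs > 0.5               # allow flipped stump polarity
        errs = np.where(flip, 1.0 - errs, errs)
        j = int(errs.argmin())
        h = -preds[:, j] if flip[j] else preds[:, j]
        eps = max(errs[j], 1e-12)
        alpha = 0.5 * np.log((1.0 - eps) / eps)
        w *= np.exp(-alpha * y * h)     # reweight toward misclassified samples
        w /= w.sum()
        selected.append((j, alpha))
    return selected

y = np.array([1] * 8 + [-1] * 8)
X = np.column_stack([y.astype(float),                     # informative column
                     np.array([1, 1, -1, -1] * 4, float)])  # uninformative column
chosen = adaboost_select(X, y, rounds=2)
```

Because the weighted error is computed per column regardless of which sensor produced it, the relative discriminative power of camera and radar features is resolved automatically during training.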
IEEE Intelligent Vehicles Symposium | 2013
Florian Schüle; Roland Schweiger; Klaus Dietmayer
Today's night vision driver assistance systems help the driver by displaying an infrared image and by detecting and highlighting other road users such as pedestrians or cyclists. To further increase active safety, future night vision systems could also visualize road course information. Especially the road course at greater distances can help drivers interpret upcoming scenes. However, long-distance road course estimation is a challenging task because on-board sensors have a limited viewing range. This paper proposes a sensor fusion system that employs digital map information in combination with radar and camera sensors to estimate the 3D road course even at longer distances. The positioning task on the digital map is solved by a Bayesian framework that estimates the position probability by means of map registration. By fusing road course data from the digital map with an optical lane recognition module, an accurate 3D road course estimation is obtained.
International Conference on Consumer Electronics - Berlin | 2012
Michael Gabb; Raimar Wagner; Oliver Hartmann; Otto Löhlein; Roland Schweiger; Klaus Dietmayer
For object detection in monocular images, the Boosted Cascade [1] has become the standard approach for driver assistance systems. This paper studies the discriminative power of different features for common automotive object detection tasks: pedestrian and vehicle detection using infrared cameras at night, as well as pedestrian and vehicle detection in daylight conditions. It is shown that the use of intra-stage information propagation with Activation History Features (AHFs) [2] significantly speeds up the detection at the same detection rate. Thus, AHFs offer a speedup at no cost.
Proceedings of SPIE | 2010
Roland Schweiger; Stefan Franz; Otto Löhlein; Werner Ritter; Jan-Erik Källhammer; John Franks; T. Krekels
The next generation of automotive Night Vision Enhancement systems offers automatic pedestrian recognition with a performance beyond current Night Vision systems at a lower cost. This will allow high market penetration, covering the luxury as well as compact car segments. Improved performance can be achieved by fusing a Far Infrared (FIR) sensor with a Near Infrared (NIR) sensor. However, fusing with today's FIR systems would be too costly to achieve high market penetration. The main cost drivers of the FIR system are its resolution and its sensitivity. Sensor cost is largely determined by sensor die size. Fewer and smaller pixels reduce die size, but also resolution and sensitivity. Sensitivity limits are mainly determined by inclement-weather performance. Sensitivity requirements should be matched to the possibilities of low-cost FIR optics, especially the implications of molding highly complex optical surfaces. As an FIR sensor specified for fusion can have lower resolution as well as lower sensitivity, fusing FIR and NIR can solve both performance and cost problems. To compensate for the effect of FIR-sensor degradation on pedestrian detection capabilities, a fusion approach called MultiSensorBoosting is presented that produces a classifier holding highly discriminative sub-pixel features from both sensors at once. The algorithm is applied to data of different resolutions and to data obtained from cameras with varying optics to incorporate various sensor sensitivities. As it is not feasible to record representative data with all different sensor configurations, transformation routines on existing high-resolution data recorded with high-sensitivity cameras are investigated in order to determine the effects of lower resolution and lower sensitivity on the overall detection performance.
This paper also gives an overview of first results, showing that a reduction of FIR sensor resolution can be compensated using fusion techniques, and that a reduction of sensitivity can be compensated as well.
Agent-Directed Simulation | 2004
Roland Schweiger; Pierre Bayerl; Heiko Neumann
In this pilot study, a neural architecture for temporal emotion recognition from image sequences is proposed. The investigation aims at developing key principles in an extensible experimental framework for studying human emotions. Features representing temporal facial variations were extracted within a bounding box around the face, which is divided into regions. Within each region, the optical flow is tracked over time. The dense flow field in each region is subsequently integrated, and its principal components are estimated as a representative velocity of face motion. For each emotion, a Fuzzy ARTMAP neural network was trained by incremental learning to classify the feature vectors resulting from the motion processing stage. Single category nodes corresponding to the expected feature representation encode the respective emotion classes. The architecture was tested on the Cohn-Kanade facial expression database.
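The feature extraction step described above can be sketched as integrating the dense flow field over a grid of face regions. The 4x4 grid layout is an assumption, and the simple per-region average below stands in for the principal-component estimate of the representative velocity used in the paper.

```python
import numpy as np

def region_motion_features(flow, grid=(4, 4)):
    # flow: (H, W, 2) dense optical-flow field inside the face bounding box.
    # Integrate (here: average) the flow within each grid region to obtain
    # one representative 2-D velocity per region, then concatenate.
    h, w, _ = flow.shape
    gy, gx = grid
    feats = []
    for i in range(gy):
        for j in range(gx):
            region = flow[i * h // gy:(i + 1) * h // gy,
                          j * w // gx:(j + 1) * w // gx]
            feats.append(region.reshape(-1, 2).mean(axis=0))
    return np.concatenate(feats)

flow = np.ones((32, 32, 2))          # toy flow field: uniform motion
vec = region_motion_features(flow)   # 4x4 grid -> 32-dimensional vector
```

The resulting fixed-length vector is the kind of input a Fuzzy ARTMAP classifier expects: one feature vector per frame pair, independent of face size.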