
Publications


Featured research published by Christian Wöhler.


IEEE Transactions on Neural Networks | 1999

An adaptable time-delay neural-network algorithm for image sequence analysis

Christian Wöhler; Joachim K. Anlauf

In this letter we present an algorithm based on a time-delay neural network with spatio-temporal receptive fields and adaptable time delays for image sequence analysis. Our main result is that tedious manual adaptation of the temporal size of the receptive fields can be avoided by employing a novel method to adapt the corresponding time delay and related network structure parameters during the training process.
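The core building block described above can be sketched in plain Python. This is an illustrative toy, not the paper's implementation: a single spatio-temporal receptive field taps the image sequence at frames spaced by a discrete time delay, and it is this delay (together with the number of taps) that the paper's training procedure adapts instead of being tuned by hand. All names and the toy data below are made up.

```python
# Minimal sketch of one spatio-temporal receptive field with a discrete
# time delay, in the spirit of a time-delay neural network (TDNN).
import math

def receptive_field_response(sequence, weights, delay):
    """Apply one spatio-temporal receptive field to an image sequence.

    sequence: list of 2D frames (lists of lists of floats)
    weights:  list of 2D kernels, one per tapped (delayed) frame
    delay:    temporal spacing (in frames) between consecutive taps
    """
    t_last = len(sequence) - 1
    s = 0.0
    for k, kernel in enumerate(weights):
        frame = sequence[t_last - k * delay]  # tap k looks k*delay frames back
        for row_w, row_f in zip(kernel, frame):
            for w, x in zip(row_w, row_f):
                s += w * x
    return math.tanh(s)  # sigmoidal activation

# Toy example: 5 frames of 2x2 "images", two taps, delay of 2 frames
frames = [[[float(t), 0.0], [0.0, float(t)]] for t in range(5)]
kernels = [[[0.1, 0.0], [0.0, 0.1]], [[-0.1, 0.0], [0.0, -0.1]]]
y = receptive_field_response(frames, kernels, delay=2)
```

Making `delay` a trainable quantity (rather than a fixed design choice) is what removes the manual tuning of the temporal receptive-field size.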


IEEE Intelligent Vehicles Symposium | 2009

Long-term vehicle motion prediction

Christoph Hermes; Christian Wöhler; Konrad Schenk; Franz Kummert

Future driver assistance systems will have to cope with complex traffic situations, especially in the road crossing scenario. To detect potentially hazardous situations as early as possible, it is therefore desirable to know the position and motion of the ego-vehicle and the vehicles around it several seconds in advance. For this purpose, we propose in this study a long-term prediction approach based on a combined trajectory classification and particle filter framework. As a measure of the similarity between trajectories, we introduce the quaternion-based rotationally invariant longest common subsequence (QRLCS) metric. The trajectories are classified by a radial basis function (RBF) classifier with an architecture that is able to process trajectories of arbitrary non-uniform length. The particle filter framework simultaneously tracks and assesses a large number of motion hypotheses (∼10²), where the class-specific probabilities estimated by the RBF classifier are used as a priori probabilities for the hypotheses of the particle filter. The hypotheses are clustered with a mean-shift technique and are assigned a likelihood value. Motion prediction is performed based on the cluster centre with the highest likelihood. While traditional motion prediction based on curve radius and acceleration is inaccurate especially during turning manoeuvres, we show that our approach achieves reasonable motion prediction for these complex motion patterns even for long prediction intervals of 3 s.
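The subsequence-matching idea behind the QRLCS metric can be illustrated with the plain LCSS trajectory similarity it builds on: two trajectory points "match" when they lie within a distance threshold, and the similarity is the length of the longest common subsequence of matching points. The quaternion-based rotational invariance of the paper's metric is omitted here; the names and thresholds below are illustrative.

```python
# Plain longest-common-subsequence (LCSS) similarity between 2D trajectories.
def lcss(traj_a, traj_b, eps):
    """Length of the longest common subsequence, where two points match
    if their Euclidean distance is at most eps."""
    n, m = len(traj_a), len(traj_b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            (ax, ay), (bx, by) = traj_a[i - 1], traj_b[j - 1]
            if (ax - bx) ** 2 + (ay - by) ** 2 <= eps ** 2:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]

def lcss_similarity(traj_a, traj_b, eps):
    """Normalised similarity in [0, 1], robust to trajectories of
    different (non-uniform) lengths."""
    return lcss(traj_a, traj_b, eps) / min(len(traj_a), len(traj_b))

a = [(0, 0), (1, 0), (2, 0), (3, 0)]
b = [(0, 0.1), (1, 0.1), (2, 5.0)]
sim = lcss_similarity(a, b, eps=0.5)  # first two points of b match a
```

Because LCSS only counts matching points rather than summing all pointwise distances, it tolerates outliers and trajectories of unequal length, which is why a variant of it is a natural fit for comparing observed motion against a trajectory database.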


Image and Vision Computing | 2001

Real-time object recognition on image sequences with the adaptable time delay neural network algorithm — applications for autonomous vehicles

Christian Wöhler; Joachim K. Anlauf

Within the framework of the vision-based “Intelligent Stop&Go” driver assistance system for both the motorway and the inner-city environment, we present a system for segmentation-free detection of overtaking vehicles and estimation of ego-position on motorways, as well as a system for the recognition of pedestrians in the inner-city traffic scenario. Both systems run in real time in the test vehicle UTA of the DaimlerChrysler computer vision lab, relying on the adaptable time delay neural network (ATDNN) algorithm. For object recognition, this neural network processes complete image sequences at a time instead of single images, as is the case in most conventional neural algorithms. The results are promising: using the ATDNN algorithm, we are able to perform the described recognition tasks in a large variety of real-world scenarios in a computationally highly efficient and rather robust and reliable manner.


International Conference on Robotics and Automation | 2010

Recognition of situation classes at road intersections

Eugen Käfer; Christoph Hermes; Christian Wöhler; Helge Ritter; Franz Kummert

The recognition and prediction of situations is an indispensable skill of future driver assistance systems. This study focuses on the recognition of situations involving two vehicles at intersections. For each vehicle, a set of possible future motion trajectories is estimated and rated based on a motion database for a time interval of 2–4 seconds ahead. Realistic situations are generated by a pairwise combination of these individual motion trajectories and classified according to nine categories with a polynomial classifier. In the proposed framework, situations for which the time to collision significantly exceeds the typical human reaction time are penalised. The correspondingly favoured situations are combined in a probabilistic framework, resulting in more reliable situation recognition and collision detection than obtained from independent motion hypotheses. The proposed method is evaluated on a real-world differential GPS data set acquired during a test drive of 10 km, including three road intersections. Our method is typically able to recognise the situation correctly about 1–2 seconds before the distance to the intersection centre becomes minimal.
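The pairwise rating step can be sketched as follows: given a pair of predicted trajectories (one per vehicle, sampled at a fixed rate), compute their time to collision (TTC) and weight the situation hypothesis down once the TTC far exceeds a typical human reaction time. The weighting rule, thresholds, and toy trajectories below are assumptions for illustration, not the paper's actual formulation.

```python
# Sketch: TTC of a trajectory pair, plus a hypothetical penalty weight.
import math

def time_to_collision(traj_a, traj_b, dt, collision_dist):
    """First time (in seconds) at which the two predicted positions come
    closer than collision_dist; math.inf if they never do."""
    for k, ((ax, ay), (bx, by)) in enumerate(zip(traj_a, traj_b)):
        if math.hypot(ax - bx, ay - by) < collision_dist:
            return k * dt
    return math.inf

def situation_weight(ttc, reaction_time=1.0, scale=2.0):
    """Down-weight a situation hypothesis once its TTC significantly
    exceeds the reaction time (hypothetical weighting rule)."""
    if ttc <= reaction_time:
        return 1.0
    return math.exp(-(ttc - reaction_time) / scale)

# Two vehicles approaching an intersection centre at (0, 0), sampled at 10 Hz
a = [(-10.0 + 0.5 * k, 0.0) for k in range(40)]   # eastbound, 5 m/s
b = [(0.0, -12.0 + 0.6 * k) for k in range(40)]   # northbound, 6 m/s
ttc = time_to_collision(a, b, dt=0.1, collision_dist=2.0)
weight = situation_weight(ttc)
```

Evaluating such a weight for every pairwise combination of the two vehicles' candidate trajectories yields the set of favoured situations that the probabilistic framework then combines.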


IEEE Intelligent Vehicles Symposium | 2010

Vehicle tracking and motion prediction in complex urban scenarios

Christoph Hermes; Julian Einhaus; Markus Hahn; Christian Wöhler; Franz Kummert

The recognition of potentially hazardous situations on road intersections is an indispensable skill of future driver assistance systems. In this context, this study focuses on the task of vehicle tracking in combination with a long-term motion prediction (1-2 s into the future) in a dynamic scenario. A motion-attributed stereo point cloud obtained using computationally efficient feature-based methods represents the scene, relying on images of a stereo camera system mounted on a vehicle. A two-stage mean-shift algorithm is used for detection and tracking of the traffic participants. A hierarchical setup depending on the history of the tracked object is applied for prediction. This includes prediction by optical flow, a standard kinematic prediction, and a particle filter based motion pattern method relying on learned object trajectories. The evaluation shows that the proposed system is able to track the road users in a stable manner and predict their positions at least one order of magnitude more accurately than a standard kinematic prediction method.
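The mean-shift step at the heart of the detection stage can be illustrated on plain 2D points: each seed is iteratively shifted to the average of its neighbours until it settles on a local density mode, and points converging to the same mode form one object cluster. The flat kernel, bandwidth, and toy data below are illustrative choices, not the paper's two-stage configuration.

```python
# Minimal mean-shift mode seeking on 2D points with a flat kernel.
def mean_shift_mode(points, start, bandwidth, iters=50):
    """Shift `start` to a local density mode of `points`."""
    x, y = start
    for _ in range(iters):
        # neighbours within the kernel bandwidth
        nbrs = [(px, py) for px, py in points
                if (px - x) ** 2 + (py - y) ** 2 <= bandwidth ** 2]
        if not nbrs:
            break
        nx = sum(p[0] for p in nbrs) / len(nbrs)
        ny = sum(p[1] for p in nbrs) / len(nbrs)
        if (nx - x) ** 2 + (ny - y) ** 2 < 1e-12:
            break  # converged
        x, y = nx, ny
    return x, y

# Two well-separated clumps; a seed near the first converges to its centroid
pts = [(0, 0), (0.2, 0), (0, 0.2), (10, 10), (10.2, 10), (10, 10.2)]
mode = mean_shift_mode(pts, start=(0.5, 0.5), bandwidth=1.0)
```

In the study the points additionally carry motion attributes from the stereo and optical-flow computation, so clustering separates objects by position and velocity rather than position alone.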


IEEE Intelligent Vehicles Symposium | 2009

3D pose estimation of vehicles using a stereo camera

Björn Barrois; Stela Hristova; Christian Wöhler; Franz Kummert; Christoph Hermes

This study introduces an approach to three-dimensional vehicle pose estimation using a stereo camera system. After computation of stereo and optical flow on the investigated scene, a four-dimensional clustering approach separates the static from the moving objects in the scene. The iterative closest point algorithm (ICP) estimates the vehicle pose using a cuboid as a weak vehicle model. In contrast to classical ICP optimisation a polar distance metric is used which especially takes into account the error distribution of the stereo measurement process. The tracking approach is based on tracking-by-detection such that no temporal filtering is used. The method is evaluated on seven different real-world sequences, where different stereo algorithms, baseline distances, distance metrics, and optimisation algorithms are examined. The results show that the proposed polar distance metric yields a higher accuracy for yaw angle estimation of vehicles than the common Euclidean distance metric, especially when using pixel-accurate stereo points.
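The motivation for a polar distance metric can be sketched in a few lines: stereo depth error grows with distance, so a residual between a measured point and a model point is split into an angular (image-direction) component and a radial (depth-direction) component, each normalised by its expected error. The weighting below is a made-up illustration of the principle, not the metric actually used in the paper.

```python
# Sketch: error-normalised polar distance between a measured stereo point
# and a model point, with the camera at the origin (2D for simplicity).
import math

def polar_distance(p, q, sigma_ang=0.001, sigma_rad_per_m2=0.01):
    """Distance between measured point p and model point q, with each
    polar component divided by its assumed measurement uncertainty."""
    rp, ap = math.hypot(*p), math.atan2(p[1], p[0])
    rq, aq = math.hypot(*q), math.atan2(q[1], q[0])
    sigma_rad = sigma_rad_per_m2 * rp ** 2  # stereo depth error grows ~ r^2
    d_ang = (ap - aq) / sigma_ang
    d_rad = (rp - rq) / sigma_rad
    return math.hypot(d_ang, d_rad)

# Same Euclidean offset (0.5 m), very different significance at 20 m range:
near = polar_distance((20.0, 0.0), (20.5, 0.0))  # radial (depth) offset
side = polar_distance((20.0, 0.0), (20.0, 0.5))  # lateral (angular) offset
```

An ICP variant minimising such a metric tolerates the large depth scatter of distant stereo points while still penalising lateral misalignment, which is why it can estimate the yaw angle more accurately than a plain Euclidean metric.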


Optical Metrology in Production Engineering | 2004

In-factory calibration of multiocular camera systems

Lars Krüger; Christian Wöhler; Alexander Würz-Wessel; Fridtjof Stein

A complete framework for the automatic calibration of camera systems with an arbitrary number of image sensors is presented. This approach is superior to other methods in that it obtains both the internal and external parameters of camera systems with arbitrary resolutions, focal lengths, pixel sizes, positions, and orientations from calibration rigs printed on paper. The only requirement on the placement of the cameras is an overlapping field of view. Although the basic algorithms are suitable for a very wide range of camera models (including OmniView and fisheye lenses), we concentrate on the camera model by Bouguet (http://www.vision.caltech.edu/bouguetj/). The most important part of the calibration process is the search for the calibration rig, a checkerboard. Our approach is based on a topological analysis of the corner candidates. It is suitable for a wide range of sensors, including OmniView cameras, which is demonstrated by finding the rig in images of such a camera. The internal calibration of each camera is performed as proposed by Bouguet, although this may be replaced with a different model. The calibration of all cameras into a common coordinate system is an optimisation process on the spatial coordinates of the calibration rig. This approach shows significant advantages compared to the method of Bouguet, especially for cameras with a large field of view. A comparison of our automatic system with the camera calibration toolbox for MATLAB, which contains an implementation of the Bouguet calibration, shows its increased accuracy compared to the manual approach.


Pattern Recognition Letters | 2011

Accurate chequerboard corner localisation for camera calibration

Lars Krüger; Christian Wöhler

In this article we describe a novel approach to obtain the position of a chequerboard corner at sub-pixel accuracy from digital images. Applications of this method include photogrammetric scene reconstruction, pose estimation, self-localisation of (mobile) robots, and camera calibration. Chequerboard patterns are especially suitable for calibrating non-pinhole cameras such as fisheye or catadioptric cameras. We model the grey values of an imaged corner by a simulated imaging process. In order to obtain an efficient implementation on standard hardware, several approximations are presented. The grey value model is used to perform a least-squares fit to the input image using a Levenberg-Marquardt optimisation. The model is described by four geometric parameters (position, rotation, and skew angle of the chequerboard corner), the width of the point spread function, and two photometric parameters (gain and offset). We compare our non-linear algorithm with two linear chequerboard corner localisation algorithms and the classical localisation of photogrammetric circular targets. Ground truth is obtained by mechanically moving a target pattern in front of the camera at sub-pixel accuracy. The corner localisation algorithm is then used to measure the displacement. On average, our algorithm achieves a displacement error (half the difference between the 75% and 25% quantiles) of 0.032 pixels, while it becomes 0.024 pixels under high-contrast and 0.043 pixels under low-contrast conditions. The classical photogrammetric method based on circular targets achieves 0.045 pixels in the average case, 0.017 pixels under high-contrast and 0.132 pixels under low-contrast conditions. The actual positional errors of the corner point positions are lower than the measured displacement errors by a factor of 1/2.
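The model-fitting idea can be sketched compactly: an ideal chequerboard corner (a sign(x)·sign(y) pattern) blurred by a Gaussian PSF is well approximated by a separable product of error functions, parameterised by the corner position, rotation, PSF width, gain, and offset. The simplified model below omits the paper's skew-angle parameter and the actual Levenberg-Marquardt solver; it only shows the grey-value model and the least-squares objective that such a solver would minimise.

```python
# Sketch of a grey-value model for a blurred chequerboard corner and the
# least-squares objective fitted to the image patch.
import math

def corner_model(x, y, cx, cy, theta, sigma, gain, offset):
    """Predicted grey value at pixel (x, y): an ideal corner at (cx, cy),
    rotated by theta, blurred by a Gaussian PSF of width sigma."""
    dx, dy = x - cx, y - cy
    u = math.cos(theta) * dx + math.sin(theta) * dy   # local corner frame
    v = -math.sin(theta) * dx + math.cos(theta) * dy
    s = sigma * math.sqrt(2.0)
    return offset + gain * math.erf(u / s) * math.erf(v / s)

def sse(pixels, params):
    """Sum of squared residuals to be minimised over the parameters."""
    return sum((g - corner_model(x, y, *params)) ** 2 for x, y, g in pixels)

# Synthetic 5x5 patch rendered with known parameters: the true parameter
# vector gives zero residual; a shifted corner position gives a larger one.
true = (2.0, 2.0, 0.0, 0.7, 100.0, 128.0)
patch = [(x, y, corner_model(x, y, *true)) for x in range(5) for y in range(5)]
```

Fitting all parameters jointly is what yields sub-pixel corner positions even under blur and varying contrast, since gain and offset absorb the photometric variation.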


Intelligent Vehicle Technologies: Theory and Applications | 2001

6 – From door to door — principles and applications of computer vision for driver assistant systems

Uwe Franke; Dariu M. Gavrila; Axel Gern; Steffen Görzig; Reinhard Janssen; Frank Paetzold; Christian Wöhler

In this chapter, the achievements in vision-based driver assistance at DaimlerChrysler are described. The chapter presents the systems that have been developed for both highways and urban traffic and describes principles that have proven their robustness and efficiency for image understanding in traffic scenes. The development of computer vision systems for cars is promoted for three main reasons: safety, convenience, and efficiency. At least three guiding principles have emerged for robust vision-based driver assistance systems: (1) vision in cars is vision over time; (2) stereo vision providing 3D information has become a central component of robust vision systems; and (3) object recognition can be considered a classification problem. Besides continuous improvement of the robustness of the image analysis modules, sensor problems have to be overcome. As other information sources, such as radar, digital maps, and communication, become available in modern cars, their utilization will help to raise the performance of vision-based environment perception.


Computer Vision and Pattern Recognition | 2007

3D Pose Estimation Based on Multiple Monocular Cues

Björn Barrois; Christian Wöhler

In this study we propose an integrated approach to the problem of 3D pose estimation. The main difference to the majority of known methods is the usage of complementary image information, including intensity and polarisation state of the light reflected from the object surface, edge information, and absolute depth values obtained based on a depth from defocus approach. Our method is based on the comparison of the input image to synthetic images generated by an OpenGL-based renderer using model information about the object provided by CAD data. This comparison provides an error term which is minimised by an iterative optimisation algorithm. Although all six degrees of freedom are estimated, our method requires only a monocular camera, circumventing disadvantages of multiocular camera systems such as the need for external camera calibration. Our framework is open for the inclusion of independently acquired depth data. We evaluate our method on a toy example as well as in two realistic scenarios in the domain of industrial quality inspection. Our experiments regarding complex real-world objects located at a distance of about 0.5 m to the camera show that the algorithm achieves typical accuracies of better than 1 degree for the rotation angles, 1-2 image pixels for the lateral translations, and several millimetres or about 1 percent for the object distance.
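The render-and-compare loop described above can be sketched with a toy stand-in: synthesise an image for a candidate pose, score it against the observed image, and iteratively refine the pose that minimises the error. Here the "renderer" is a fake 1D template shifter and only one pose parameter is searched with a simple coarse-to-fine scheme, whereas the paper renders with OpenGL from CAD data and optimises all six degrees of freedom; everything below is illustrative.

```python
# Toy render-and-compare pose estimation with a 1D fake renderer.
def render(pose, width=64):
    """Fake renderer: a bright bar whose position depends on `pose`."""
    return [1.0 if abs(i - pose) < 4 else 0.0 for i in range(width)]

def error(observed, pose):
    """Sum of squared differences between observed and rendered image."""
    return sum((o - r) ** 2 for o, r in zip(observed, render(pose)))

def estimate_pose(observed, width=64):
    """Coarse grid scan followed by a local halving search (a simple
    stand-in for the paper's iterative optimiser)."""
    pose = min(range(0, width, 4), key=lambda p: error(observed, p))
    step = 2.0
    while step > 0.25:
        for cand in (pose - step, pose + step):
            if error(observed, cand) < error(observed, pose):
                pose = cand
        step *= 0.5
    return pose

observed = render(37.0)       # "captured" image with ground-truth pose 37
est = estimate_pose(observed)
```

The structure is the same as in the paper: the error term couples the unknown pose to the observed image only through the renderer, so richer cues (polarisation, edges, defocus depth) can be added by extending the renderer and the error term rather than the optimiser.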

Collaboration


Dive into Christian Wöhler's collaborations.

Top Co-Authors

Arne Grumpe — Technical University of Dortmund

A.A. Berezhnoy — Sternberg Astronomical Institute