Matthias Ochs
Goethe University Frankfurt
Publications
Featured research published by Matthias Ochs.
IEEE Intelligent Vehicles Symposium | 2015
Marc Barnada; Christian Conrad; Henry Bradler; Matthias Ochs; Rudolf Mester
The online estimation of yaw, pitch, and roll of a moving vehicle is an important ingredient for systems which estimate egomotion and the 3D structure of the environment from video captured in a moving vehicle. We present an approach to estimate these angular changes from monocular visual data, based on the fact that the motion of far-distant points does not depend on translation, but only on the current rotation of the camera. The presented approach does not require features (corners, edges, ...) to be extracted. It also allows the frame-to-frame illumination changes to be estimated in parallel, and thus largely stabilizes the estimation of image correspondences and motion vectors, which are most often the central entities needed for computing scene structure, distances, etc. The method is significantly less complex and much faster than a full egomotion computation from features, such as PTAM [6], but it can be used to provide motion priors and reduce search spaces for more complex methods which perform a complete analysis of egomotion and the dynamic 3D structure of the scene in which a vehicle moves.
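The geometric core of this idea is that the image motion of a point at (near-)infinite depth is a function of rotation alone. The sketch below is not the paper's direct, featureless method (which additionally estimates illumination changes); it merely illustrates how small angular rates could be recovered by least squares from the flow of far-distant points. Function names, the demo values, and the angular sign conventions are illustrative assumptions.

```python
import numpy as np

def rotational_flow_basis(x, y, f):
    # 2x3 Jacobian of the flow (u, v) at pixel (x, y) w.r.t. small
    # rotations (wx, wy, wz); valid for far-distant points, where
    # translation contributes nothing (instantaneous motion-field
    # model, pinhole camera, principal point at the origin).
    return np.array([
        [x * y / f,     -(f + x * x / f),  y],
        [f + y * y / f, -x * y / f,       -x],
    ])

def estimate_rotation(points, flows, f):
    # Stack one 2x3 block per far point and solve for the three
    # angular rates by linear least squares.
    A = np.vstack([rotational_flow_basis(x, y, f) for x, y in points])
    b = np.concatenate([np.asarray(fl) for fl in flows])
    omega, *_ = np.linalg.lstsq(A, b, rcond=None)
    return omega

# Synthetic check: flows generated by a known rotation are recovered.
f = 700.0
true_omega = np.array([0.002, -0.005, 0.001])            # rad per frame
pts = [(-300.0, 120.0), (250.0, -80.0), (60.0, 200.0), (-150.0, -220.0)]
fl = [rotational_flow_basis(x, y, f) @ true_omega for x, y in pts]
print(estimate_rotation(pts, fl, f))                     # ~ true_omega
```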
IEEE Intelligent Vehicles Symposium | 2016
Daniel H. Biedermann; Matthias Ochs; Rudolf Mester
We present a framework that supports the development and evaluation of vision algorithms in the context of driver assistance applications and traffic surveillance. This framework allows the creation of highly realistic image sequences featuring traffic scenarios. The sequences are created with a realistic, state-of-the-art vehicle physics model, and different kinds of environments are featured, providing a wide range of testing scenarios. Due to the physically based rendering technique and variable camera models employed in the image rendering process, we can simulate different sensor setups and provide appropriate and fully accurate ground truth data.
IEEE Intelligent Vehicles Symposium | 2016
Nolang Fanani; Matthias Ochs; Henry Bradler; Rudolf Mester
One of the major steps in visual environment perception for automotive applications is to track keypoints and to subsequently estimate egomotion and environment structure from the trajectories of these keypoints. This paper presents a propagation-based tracking method to obtain the 2D trajectories of keypoints from a sequence of images in a monocular camera setup. Instead of relying on classical RANSAC to obtain accurate keypoint correspondences, we steer the search for keypoint matches by propagating the estimated 3D position of each keypoint into the next frame and verifying its photometric consistency. In this process, we continuously predict, estimate, and refine the frame-to-frame relative pose, which induces the epipolar relation. Experiments on the KITTI dataset as well as on the synthetic COnGRATS dataset show promising results, with accurate estimated courses and keypoint trajectories.
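A minimal sketch of the propagation step (illustrative names; the actual method refines the relative pose jointly and exploits the epipolar relation rather than a plain window search): the keypoint's estimated 3D position is transformed by the predicted pose, projected into the next frame, and the match is refined by photometric consistency.

```python
import numpy as np

def project(K, X):
    # Pinhole projection of a 3D point given in camera coordinates.
    x = K @ X
    return x[:2] / x[2]

def propagate_and_match(img_next, X_prev, R, t, K, tmpl, radius=4):
    # Predict the keypoint's pixel in the next frame by applying the
    # predicted frame-to-frame pose (R, t) to its estimated 3D
    # position, then refine inside a small window by minimizing the
    # SSD between the stored template patch and candidate patches.
    # Assumes float-valued grayscale images and an odd template size.
    X_next = R @ X_prev + t
    u0, v0 = np.round(project(K, X_next)).astype(int)
    h = tmpl.shape[0] // 2
    best_cost, best_uv = np.inf, (u0, v0)
    for dv in range(-radius, radius + 1):
        for du in range(-radius, radius + 1):
            u, v = u0 + du, v0 + dv
            patch = img_next[v - h:v + h + 1, u - h:u + h + 1]
            if patch.shape != tmpl.shape:
                continue  # window ran off the image border
            cost = float(((patch - tmpl) ** 2).sum())
            if cost < best_cost:
                best_cost, best_uv = cost, (u, v)
    return best_uv, best_cost
```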
Asian Conference on Computer Vision | 2016
Jan van den Brand; Matthias Ochs; Rudolf Mester
The recognition of individual object instances in single monocular images is still an incompletely solved task. In this work, we propose a new approach for detecting and separating vehicles in the context of autonomous driving. Our method uses a fully convolutional network (FCN) for semantic labeling and for estimating the boundary of each vehicle. Even though a contour is in general a one-pixel-wide structure which cannot be directly learned by a CNN, our network addresses this by predicting areas around the contours. Based on these areas, we separate the individual vehicle instances. In our experiments on two challenging datasets (Cityscapes and KITTI), we achieve state-of-the-art performance despite using a subsampling rate of two. Our approach even outperforms all recent works with respect to several evaluation scores.
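One way to realize the separation step the abstract describes (a sketch under assumptions, not the paper's exact post-processing): cut the predicted contour band out of the semantic vehicle mask so adjacent vehicles become disconnected, label the remaining connected components, and grow the labels back over the removed band.

```python
import numpy as np
from scipy.ndimage import label

def separate_instances(vehicle_mask, boundary_mask, band=8):
    # vehicle_mask, boundary_mask: boolean arrays from the FCN heads.
    # Disconnect touching vehicles by removing the predicted contour
    # areas, then label connected components of the interior.
    interior = vehicle_mask & ~boundary_mask
    labels, n = label(interior)
    # Grow each component back over the removed band so every vehicle
    # pixel receives an instance id (simple 4-neighbor dilation;
    # border wrap-around from np.roll is ignored for brevity).
    for _ in range(band):
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            shifted = np.roll(labels, (dy, dx), axis=(0, 1))
            fill = (labels == 0) & (shifted > 0) & vehicle_mask
            labels[fill] = shifted[fill]
    return labels, n
```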
Pacific Rim Symposium on Image and Video Technology | 2015
Matthias Ochs; Henry Bradler; Rudolf Mester
Phase correlation is one of the classic methods for sparse motion or displacement estimation. It is renowned in the literature for high precision and insensitivity to illumination variations. We propose several important enhancements to the phase correlation (PhC) method which render it more robust in situations where a motion measurement is not possible (low structure, too much noise, too different image content in the corresponding measurement windows). This allows the method to perform self-diagnosis in adverse situations. Furthermore, we extend the PhC method by a robust scheme for detecting and classifying the presence of multiple motions and estimating their uncertainties. Experimental results on the Middlebury Stereo Dataset and on the KITTI Optical Flow Dataset show the potential offered by the enhanced method in contrast to the PhC implementation of OpenCV.
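The classic PhC core is compact, which is part of its appeal. The sketch below (plain NumPy, illustrative names) recovers an integer translation from the normalized cross-power spectrum; the peak-sharpness score hints at the kind of self-diagnosis signal the paper builds on, since a flat correlation surface flags low structure or inconsistent window content.

```python
import numpy as np

def phase_correlation(a, b, eps=1e-9):
    # Normalized cross-power spectrum: keeping only the phase makes
    # the measurement largely insensitive to illumination changes.
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    R = np.conj(A) * B
    R /= np.abs(R) + eps
    corr = np.real(np.fft.ifft2(R))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped peak coordinates to signed shifts.
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    # Crude confidence: a sharp, dominant peak vs. a flat surface.
    score = corr.max() / (np.abs(corr).mean() + eps)
    return (dy, dx), score

rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64))
b = np.roll(a, (5, -3), axis=(0, 1))
print(phase_correlation(a, b))   # -> ((5, -3), high score)
```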
Image and Vision Computing New Zealand | 2015
Daniel H. Biedermann; Matthias Ochs; Rudolf Mester
For evaluating or training different kinds of vision algorithms, a large amount of precise and reliable data is needed. In this paper, we present a system to create extended synthetic sequences of traffic environment scenarios, associated with several types of ground truth data. By integrating vehicle dynamics in a configuration tool and by using path tracing in an external rendering engine, we obtain a system that allows the ongoing and flexible creation of highly realistic traffic images. For all images, ground truth data is provided for depth, optical flow, surface normals, and semantic scene labeling. Sequences produced with this system are more varied and closer to natural images than those of previous synthetic datasets.
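As an illustration of how such ground truth can be derived once a renderer exposes depth and camera poses (a standard construction, assumed here rather than taken from the paper's pipeline): unproject every pixel with its depth, transform it by the relative pose, reproject, and take the displacement.

```python
import numpy as np

def flow_from_depth(depth, K, R, t):
    # Ground-truth optical flow for a static scene: unproject each
    # pixel with its rendered depth, move it into the second camera
    # frame with the relative pose (R, t), reproject, and take the
    # pixel displacement. Dynamic objects would additionally need
    # their own per-object motion.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)]).reshape(3, -1).astype(float)
    X = np.linalg.inv(K) @ (pix * depth.reshape(1, -1))   # 3D, camera 1
    X2 = R @ X + t.reshape(3, 1)                          # 3D, camera 2
    p2 = K @ X2
    p2 = p2[:2] / p2[2]
    return (p2 - pix[:2]).reshape(2, h, w)                # (du, dv) per pixel
```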
IEEE Intelligent Vehicles Symposium | 2017
Matthias Ochs; Henry Bradler; Rudolf Mester
Most iterative optimization algorithms for motion estimation, depth estimation, or scene reconstruction, both sparse and dense, rely on a coarse but reliable dense initialization to bootstrap their optimization procedure. This makes techniques important that can obtain a dense but still approximate representation of a desired 2D structure (e.g., depth maps, optical flow, disparity maps) from a very sparse measurement of that structure. The method presented here exploits the complete information given by principal component analysis (PCA): the principal basis and its prior distribution. It is able to determine a dense reconstruction even if only a very sparse measurement is available. When facing such situations, the number of principal components is typically reduced further, which results in a loss of expressiveness of the basis. We overcome this problem by injecting prior knowledge in a maximum a posteriori (MAP) approach. We test our approach on the KITTI and Virtual KITTI datasets and focus on the interpolation of depth maps for driving scenes. The evaluation shows good agreement with the ground truth and is clearly superior to interpolation by the nearest-neighbor method, which disregards statistical information.
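The key step the abstract describes, keeping the full principal basis and regularizing it with its prior instead of truncating it, has a closed-form MAP solution under a Gaussian coefficient prior. The sketch below is our reading of that idea with illustrative names, not the paper's implementation.

```python
import numpy as np

def map_reconstruct(U, mu, lam, idx, y, sigma=1.0):
    # U:   (D, k) principal basis, mu: (D,) mean vector
    # lam: (k,) prior variances of the coefficients (PCA eigenvalues)
    # idx: observed entries of the D-dimensional structure, y: values
    # MAP coefficients under y = U_S a + mu_S + noise, a ~ N(0, diag(lam)):
    #   a* = (U_S^T U_S / s^2 + diag(1/lam))^-1 U_S^T (y - mu_S) / s^2
    Us = U[idx]
    A = Us.T @ Us / sigma**2 + np.diag(1.0 / lam)
    a = np.linalg.solve(A, Us.T @ (y - mu[idx]) / sigma**2)
    return U @ a + mu          # dense reconstruction
```

The prior term keeps the linear system well conditioned even when fewer measurements than basis vectors are available, which is exactly the regime where plain truncation of the basis loses expressiveness.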
Image and Vision Computing | 2017
Nolang Fanani; Alina Stürck; Matthias Ochs; Henry Bradler; Rudolf Mester
Visual odometry using only a monocular camera faces more algorithmic challenges than stereo odometry. We present a robust monocular visual odometry framework for automotive applications. An extended propagation-based tracking framework is proposed which yields highly accurate (unscaled) pose estimates. Scale is supplied by ground-plane pose estimation, employing street-pixel labeling by a convolutional neural network (CNN). The proposed framework has been extensively tested on the KITTI dataset and achieves a higher rank in the KITTI odometry benchmark than currently published state-of-the-art monocular methods. Unlike other VO/SLAM methods, this result is achieved without a loop-closing mechanism, without RANSAC, and without multi-frame bundle adjustment. We thus challenge the common belief that robust systems can only be built using iterative robustification tools like RANSAC.
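The scale-recovery step can be pictured as follows (a sketch assuming a known camera mounting height, 1.65 m here as an illustrative value; names are hypothetical): fit a plane to the triangulated points the CNN labels as street, and compare the camera's unscaled distance to that plane with the known height.

```python
import numpy as np

def metric_scale(street_points, cam_height=1.65):
    # street_points: (N, 3) unscaled 3D points labeled as road,
    # given in camera coordinates (camera at the origin).
    P = np.asarray(street_points, dtype=float)
    c = P.mean(axis=0)
    # Plane normal = right singular vector of the centered points
    # with the smallest singular value (total least squares fit).
    _, _, Vt = np.linalg.svd(P - c)
    n = Vt[-1]
    d = abs(n @ c)             # unscaled camera-to-ground distance
    return cam_height / d      # multiply translations by this factor
```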
Workshop on Applications of Computer Vision | 2017
Henry Bradler; Matthias Ochs; Nolang Fanani; Rudolf Mester
IEEE Intelligent Vehicles Symposium | 2018
Matthias Ochs; Henry Bradler; Rudolf Mester