Publications


Featured research published by Marc Schlipsing.


Neural Networks | 2012

2012 Special Issue: Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition

Johannes Stallkamp; Marc Schlipsing; Jan Salmen; Christian Igel

Traffic signs are characterized by a wide variability in their visual appearance in real-world environments. For example, changes of illumination, varying weather conditions and partial occlusions impact the perception of road signs. In practice, a large number of different sign classes needs to be recognized with very high accuracy. Traffic signs have been designed to be easily readable for humans, who perform very well at this task. For computer systems, however, classifying traffic signs still seems to pose a challenging pattern recognition problem. Both image processing and machine learning algorithms are continuously refined to improve on this task. But little systematic comparison of such systems exists. What is the status quo? Do today's algorithms reach human performance? For assessing the performance of state-of-the-art machine learning algorithms, we present a publicly available traffic sign dataset with more than 50,000 images of German road signs in 43 classes. The data was considered in the second stage of the German Traffic Sign Recognition Benchmark held at IJCNN 2011. The results of this competition are reported and the best-performing algorithms are briefly described. Convolutional neural networks (CNNs) showed particularly high classification accuracies in the competition. We measured the performance of human subjects on the same data, and the CNNs outperformed the human test subjects.


international symposium on neural networks | 2011

The German Traffic Sign Recognition Benchmark: A multi-class classification competition

Johannes Stallkamp; Marc Schlipsing; Jan Salmen; Christian Igel

The “German Traffic Sign Recognition Benchmark” is a multi-category classification competition held at IJCNN 2011. Automatic recognition of traffic signs is required in advanced driver assistance systems and constitutes a challenging real-world computer vision and pattern recognition problem. A comprehensive, lifelike dataset of more than 50,000 traffic sign images has been collected. It reflects the strong variations in visual appearance of signs due to distance, illumination, weather conditions, partial occlusions, and rotations. The images are complemented by several precomputed feature sets to allow for applying machine learning algorithms without background knowledge in image processing. The dataset comprises 43 classes with unbalanced class frequencies. Participants have to classify two test sets of more than 12,500 images each. Here, the results on the first of these sets, which was used in the first evaluation stage of the two-fold challenge, are reported. The methods employed by the participants who achieved the best results are briefly described and compared to human traffic sign recognition performance and baseline results.


international symposium on neural networks | 2013

Detection of traffic signs in real-world images: The German traffic sign detection benchmark

Sebastian Houben; Johannes Stallkamp; Jan Salmen; Marc Schlipsing; Christian Igel

Real-time detection of traffic signs, the task of pinpointing a traffic sign's location in natural images, is a challenging computer vision task of high industrial relevance. Various algorithms have been proposed, and advanced driver assistance systems supporting detection and recognition of traffic signs have reached the market. Despite the many competing approaches, there is no clear consensus on what the state-of-the-art in this field is. This can be attributed to the lack of comprehensive, unbiased comparisons of those methods. We aim at closing this gap by the “German Traffic Sign Detection Benchmark” presented as a competition at IJCNN 2013 (International Joint Conference on Neural Networks). We introduce a real-world benchmark data set for traffic sign detection together with carefully chosen evaluation metrics, baseline results, and a web interface for comparing approaches. In our evaluation, we separate sign detection from classification, but still measure the performance on relevant categories of signs to allow for benchmarking specialized solutions. The considered baseline algorithms represent some of the most popular detection approaches such as the Viola-Jones detector based on Haar features and a linear classifier relying on HOG descriptors. Further, a recently proposed problem-specific algorithm exploiting shape and color in a model-based Hough-like voting scheme is evaluated. Finally, we present the best-performing algorithms of the IJCNN competition.
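As an illustration of the Viola-Jones baseline mentioned above: Haar-like features can be evaluated in constant time via an integral image. The following is a minimal sketch of that idea with toy data, not the benchmark's implementation:

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y-1][0..x-1]."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle [x, x+w) x [y, y+h) in O(1)."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_two_rect_vertical(ii, x, y, w, h):
    """Haar-like feature: top half minus bottom half (h must be even)."""
    half = h // 2
    return rect_sum(ii, x, y, w, half) - rect_sum(ii, x, y + half, w, half)

# A 4x4 image whose top rows are bright and bottom rows dark:
img = [[10, 10, 10, 10],
       [10, 10, 10, 10],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
ii = integral_image(img)
print(haar_two_rect_vertical(ii, 0, 0, 4, 4))  # → 80
```

Because each rectangle sum costs four table lookups, a detector can evaluate thousands of such features per window in real time, which is what makes this family of detectors attractive for the task.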


computer analysis of images and patterns | 2009

Real-Time Stereo Vision: Making More Out of Dynamic Programming

Jan Salmen; Marc Schlipsing; Johann Edelbrunner; Stefan Hegemann; Stefan Lüke

Dynamic Programming (DP) is a popular and efficient method for calculating disparity maps from stereo images. It allows for meeting real-time constraints even on low-cost hardware. Therefore, it is frequently used in real-world applications, although more accurate algorithms exist. We present a refined DP stereo processing algorithm which is based on a standard implementation but is more flexible and shows increased performance. In particular, we introduce the idea of multi-path backtracking to exploit the information gained from DP more effectively. We show how to automatically tune all parameters of our approach offline by an evolutionary algorithm. The performance was assessed on benchmark data. The number of incorrect disparities was reduced by 40 % compared to the DP reference implementation while the overall complexity increased only slightly.
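The scanline optimization that DP stereo builds on can be sketched as follows. This is a simplified single-path illustration with made-up intensities and a single linear smoothness penalty, not the paper's refined multi-path variant:

```python
def dp_scanline_disparity(left, right, max_disp, penalty=1):
    """Disparities for one scanline via dynamic programming.

    Matching cost is the absolute intensity difference; a linear
    penalty discourages disparity jumps between neighbouring pixels.
    """
    n = len(left)
    INF = float("inf")

    def match(x, d):
        return abs(left[x] - right[x - d]) if x - d >= 0 else INF

    # acc[x][d]: best accumulated cost up to pixel x ending at disparity d.
    acc = [[INF] * (max_disp + 1) for _ in range(n)]
    back = [[0] * (max_disp + 1) for _ in range(n)]
    for d in range(max_disp + 1):
        acc[0][d] = match(0, d)
    for x in range(1, n):
        for d in range(max_disp + 1):
            best_prev, best_d = INF, 0
            for dp in range(max_disp + 1):
                c = acc[x - 1][dp] + penalty * abs(d - dp)
                if c < best_prev:
                    best_prev, best_d = c, dp
            acc[x][d] = match(x, d) + best_prev
            back[x][d] = best_d
    # Backtrack one optimal path (the paper's extension tracks several).
    d = min(range(max_disp + 1), key=lambda k: acc[n - 1][k])
    disparities = [d]
    for x in range(n - 1, 0, -1):
        d = back[x][d]
        disparities.append(d)
    return disparities[::-1]

# The right scanline is the left one shifted by 2 pixels:
left = [0, 0, 5, 9, 5, 0, 0, 0]
right = [5, 9, 5, 0, 0, 0, 0, 0]
print(dp_scanline_disparity(left, right, max_disp=3))  # → [0, 1, 2, 2, 2, 2, 2, 2]
```

The single backtracked path is where the paper's multi-path backtracking idea intervenes: several near-optimal paths carry information that a single path discards.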


ieee intelligent vehicles symposium | 2013

Real-time stereo vision: Optimizing Semi-Global Matching

Matthias Michael; Jan Salmen; Johannes Stallkamp; Marc Schlipsing

Semi-Global Matching (SGM) is arguably one of the most popular algorithms for real-time stereo vision. It is already employed in mass-production vehicles today. Thinking of applications in intelligent vehicles (and fully autonomous vehicles in the long term), we aim at further improving SGM regarding its accuracy. In this study, we propose a straightforward extension of the algorithm's parametrization. We consider individual penalties for different path orientations, weighted integration of paths, and penalties depending on intensity gradients. In order to tune all parameters, we applied evolutionary optimization. For a more efficient offline optimization and evaluation, we implemented SGM on graphics hardware. We describe the implementation using CUDA in detail. For our experiments, we consider two publicly available datasets: the popular Middlebury benchmark as well as a synthetic sequence from the .enpeda. project. The proposed extensions significantly improve the performance of SGM. The number of incorrect disparities was reduced by up to 27.5 % compared to the original approach, while the runtime was not increased.
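The per-path cost aggregation that these parametrization extensions generalize can be sketched for a single path direction. The penalties p1 and p2 and the toy cost volume below are illustrative; the paper's variant makes them orientation- and gradient-dependent:

```python
def sgm_aggregate_path(cost, p1, p2):
    """Aggregate matching costs along one 1D path (left to right).

    cost[x][d] is the pixelwise matching cost; p1 penalizes disparity
    changes of 1, p2 penalizes larger jumps (p2 >= p1).
    """
    n, ndisp = len(cost), len(cost[0])
    agg = [row[:] for row in cost]
    for x in range(1, n):
        prev = agg[x - 1]
        min_prev = min(prev)
        for d in range(ndisp):
            candidates = [prev[d]]
            if d > 0:
                candidates.append(prev[d - 1] + p1)
            if d < ndisp - 1:
                candidates.append(prev[d + 1] + p1)
            candidates.append(min_prev + p2)
            # Subtracting min_prev keeps values bounded (standard SGM trick).
            agg[x][d] = cost[x][d] + min(candidates) - min_prev
    return agg

# Tiny example: 3 pixels, 3 disparity levels, best match on the diagonal.
cost = [[0, 5, 5],
        [5, 0, 5],
        [5, 5, 0]]
agg = sgm_aggregate_path(cost, p1=1, p2=2)
winner = [min(range(3), key=lambda d: row[d]) for row in agg]
print(winner)  # → [0, 1, 2]
```

Full SGM sums such aggregated costs over several path directions before picking the winning disparity; giving each direction its own p1, p2 and weight is exactly the kind of extra parametrization the study tunes by evolutionary optimization.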


ieee intelligent vehicles symposium | 2013

Towards autonomous driving in a parking garage: Vehicle localization and tracking using environment-embedded LIDAR sensors

André Ibisch; Stefan Stümper; Harald Altinger; Marcel Neuhausen; Marc Tschentscher; Marc Schlipsing; Jan Salmen; Alois Knoll

In this paper, we propose a new approach for localization and tracking of a vehicle in a parking garage, based on environment-embedded LIDAR sensors. In particular, we present an integration of data from multiple sensors, allowing vehicles to be tracked in a common parking-garage coordinate system. In order to perform detection and tracking in real time, a combination of appropriate methods, namely a grid-based approach, a RANSAC algorithm, and a Kalman filter, is proposed and evaluated. The system achieves highly reliable and accurate vehicle positioning. In the context of a larger framework, our approach was used as a reference system to enable autonomous driving within a parking garage. In our experiments, we showed that the proposed algorithm allows precise vehicle localization and tracking. Our system's results were compared to human-labeled ground-truth data. Based on this comparison, we demonstrate high accuracy with a mean lateral and longitudinal error of 6.3 cm and 8.5 cm, respectively.
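A minimal constant-velocity Kalman filter of the kind used in such tracking pipelines can be sketched in 1D. The process and measurement noise values below are illustrative, not the paper's actual parametrization:

```python
def kalman_track(measurements, dt=1.0, q=0.01, r=0.25):
    """1D constant-velocity Kalman filter; state is (position, velocity)."""
    x = [measurements[0], 0.0]          # initial state estimate
    P = [[1.0, 0.0], [0.0, 1.0]]        # initial state covariance
    estimates = []
    for z in measurements:
        # Predict with F = [[1, dt], [0, 1]] and simple diagonal process noise.
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # Update with the position measurement z (H = [1, 0]).
        s = P[0][0] + r                  # innovation covariance
        k = [P[0][0] / s, P[1][0] / s]   # Kalman gain
        y = z - x[0]                     # innovation
        x = [x[0] + k[0] * y, x[1] + k[1] * y]
        P = [[(1 - k[0]) * P[0][0], (1 - k[0]) * P[0][1]],
             [P[1][0] - k[1] * P[0][0], P[1][1] - k[1] * P[0][1]]]
        estimates.append(x[0])
    return estimates

# Noisy position measurements of a vehicle moving at 1 unit per time step:
zs = [0.1, 0.9, 2.1, 2.9, 4.1, 4.9, 6.1]
est = kalman_track(zs)
```

The filter smooths the measurement noise while following the constant-velocity motion, so the estimates stay close to the true positions; the paper applies the same predict/update cycle to detections produced by the grid-based and RANSAC stages.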


Computing in Civil Engineering | 2013

Comparing Image Features and Machine Learning Algorithms for Real-Time Parking Space Classification

Marc Tschentscher; Marcel Neuhausen; Christian Koch; Markus König; Jan Salmen; Marc Schlipsing

Finding a vacant parking lot in urban areas is often time-consuming and frustrating for potential visitors or customers. Efficient car-park routing systems could support drivers in finding an unoccupied parking lot. Current systems for detecting vacant parking lots are either very expensive due to their hardware requirements or do not provide a detailed occupancy map. In this paper, we propose a video-based system for low-cost parking space classification. A wide-angle lens camera is used in combination with a desktop computer. We evaluate image features and machine learning algorithms to determine the occupancy of parking lots. Each combination of feature set and classifier was trained and tested on our dataset containing approximately 10,000 samples. We assessed the performance of all combinations of feature extraction and classification methods. Our final system, incorporating temporal filtering, reached an accuracy of 99.8 %.
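The pipeline of feature extraction followed by a trained classifier can be sketched with a toy feature and a nearest-centroid classifier. The histogram feature, the classifier choice, and the data below are illustrative only; the paper evaluates several real feature sets and learning algorithms:

```python
def brightness_histogram(patch, bins=4):
    """Toy feature: normalized histogram of pixel intensities (0..255)."""
    hist = [0] * bins
    for v in patch:
        hist[min(v * bins // 256, bins - 1)] += 1
    total = len(patch)
    return [h / total for h in hist]

def nearest_centroid_fit(features, labels):
    """Average the feature vectors of each class."""
    groups = {}
    for f, y in zip(features, labels):
        groups.setdefault(y, []).append(f)
    return {y: [sum(col) / len(fs) for col in zip(*fs)]
            for y, fs in groups.items()}

def nearest_centroid_predict(centroids, f):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda y: dist(centroids[y], f))

# Toy training data: vacant asphalt is dark, a parked car is brighter.
vacant = [[20, 30, 25, 35], [25, 28, 30, 22]]
occupied = [[200, 180, 190, 210], [190, 205, 185, 195]]
feats = [brightness_histogram(p) for p in vacant + occupied]
labels = ["vacant"] * 2 + ["occupied"] * 2
model = nearest_centroid_fit(feats, labels)
print(nearest_centroid_predict(model, brightness_histogram([22, 27, 31, 24])))  # → vacant
```

Swapping the feature function and the classifier while keeping this train/test skeleton fixed is precisely the comparison grid the paper runs over its 10,000-sample dataset.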


international conference on intelligent transportation systems | 2013

On-vehicle video-based parking lot recognition with fisheye optics

Sebastian Houben; Matthias Komar; Andree Hohm; Stefan Lüke; Marcel Neuhausen; Marc Schlipsing

The search for free parking space in a crowded car park is a time-consuming and tedious task. Today's park assistance systems provide the driver with acoustic or visual feedback when approaching an obstacle or semi-autonomously navigate the vehicle into the parking lot. However, finding a free parking lot is usually left to the driver. In this paper, we address this search problem via video sensors only. This can help the driver to quickly pass through a parking deck and, more importantly, can be regarded as a cornerstone for fully autonomously parking vehicles.


intelligent vehicles symposium | 2014

Towards highly automated driving in a parking garage: General object localization and tracking using an environment-embedded camera system

André Ibisch; Sebastian Houben; Marc Schlipsing; Robert Kesten; Paul Reimche; Florian Schuller; Harald Altinger

In this study, we present a new indoor positioning and environment perception system for generic objects based on multiple surveillance cameras. In order to assist highly automated driving, our system detects the vehicle's position and any object along its current path to avoid collisions. A main advantage of the proposed approach is the usage of cameras that are already installed in the majority of parking garages. We generate precise object hypotheses in 3D world coordinates based on a given extrinsic camera calibration. Starting with a background subtraction algorithm for the segmentation of each camera image, we propose a robust view-ray intersection approach that enables the system to match and triangulate segmented hypotheses from all cameras. Comparing against LIDAR-based ground truth, we evaluated the system's mean localization accuracy at 0.37 m for a variety of different sequences.
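The view-ray intersection step can be illustrated with standard midpoint triangulation of two camera rays. The camera positions and ray directions below are invented for the example; the paper matches and triangulates hypotheses from all cameras:

```python
def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint of the shortest segment between rays o1 + t*d1 and o2 + s*d2."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    def sub(a, b):
        return [x - y for x, y in zip(a, b)]
    def along(o, d, t):
        return [x + t * y for x, y in zip(o, d)]
    w = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b               # zero iff the rays are parallel
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    p1, p2 = along(o1, d1, t), along(o2, d2, s)
    return [(x + y) / 2 for x, y in zip(p1, p2)]

# Two cameras at different positions, both viewing the point (1, 1, 2):
cam1, ray1 = [0.0, 0.0, 0.0], [1.0, 1.0, 2.0]
cam2, ray2 = [2.0, 0.0, 0.0], [-1.0, 1.0, 2.0]
print(triangulate_midpoint(cam1, ray1, cam2, ray2))  # → [1.0, 1.0, 2.0]
```

With real, noisy segmentations the two rays do not intersect exactly, which is why taking the midpoint of the shortest connecting segment (rather than demanding an exact intersection) is the usual robust choice.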


ieee intelligent vehicles symposium | 2012

Roll angle estimation for motorcycles: Comparing video and inertial sensor approaches

Marc Schlipsing; Jan Salmen; B. Lattke; Kai Schröter; Hermann Winner

Advanced Rider Assistance Systems (ARAS) for powered two-wheelers improve driving behaviour and safety. Further developments of intelligent vehicles will also include video-based systems, which are successfully deployed in cars. When porting such modules to motorcycles, the camera pose has to be taken into account, as, e.g., large roll angles produce significant variations in the recorded images. Therefore, roll angle estimation is an important task for the development of various kinds of ARAS. This study introduces alternative approaches based on inertial measurement units (IMU) as well as video only. The latter learns orientation distributions of image gradients that code the current roll angle. Until now, only preliminary results on synthetic data have been published. Here, an evaluation on real video data is presented along with three valuable improvements and an extensive parameter optimisation using the Covariance Matrix Adaptation Evolution Strategy. For a comparison of the very dissimilar approaches, a test vehicle was equipped with an IMU, a camera and a highly accurate reference sensor. The results show high performance, with about 2 degrees of error for the improved vision method, and therefore prove the proposed concept on real-world data. The IMU-based Kalman filter estimation performed on par. As naive averaging of both estimates already increased performance, an elaborate fusion of the proposed methods is expected to yield further improvements.
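The idea of reading the roll angle out of image gradient orientations can be reduced, for illustration, to estimating the dominant axis of a set of gradient vectors. The paper instead learns full orientation distributions, and the gradients below are synthetic:

```python
import math

def dominant_orientation(gradients):
    """Dominant axis (in degrees) of 2D gradient vectors.

    Angles are doubled before averaging so that opposite gradients
    (the two sides of an edge) vote for the same axis.
    """
    sx = sum(math.cos(2 * math.atan2(gy, gx)) for gx, gy in gradients)
    sy = sum(math.sin(2 * math.atan2(gy, gx)) for gx, gy in gradients)
    return math.degrees(math.atan2(sy, sx)) / 2

# Synthetic gradients clustered around a 20-degree axis, pointing both ways:
a = math.radians(20)
grads = [(math.cos(a), math.sin(a)),
         (-math.cos(a), -math.sin(a)),
         (math.cos(a + 0.05), math.sin(a + 0.05)),
         (math.cos(a - 0.05), math.sin(a - 0.05))]
est = dominant_orientation(grads)  # ≈ 20.0 (axis defined modulo 180 degrees)
```

When the camera rolls with the motorcycle, the whole gradient orientation distribution of the scene shifts by the roll angle, which is the cue the video-only approach exploits.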

Collaboration


Dive into Marc Schlipsing's collaborations.

Top Co-Authors

Jan Salmen (Ruhr University Bochum)
Christian Igel (University of Copenhagen)
B. Lattke (Technische Universität Darmstadt)
Hermann Winner (Technische Universität Darmstadt)
Kai Schröter (Technische Universität Darmstadt)