Publication


Featured research published by Jan Salmen.


Neural Networks | 2012

2012 Special Issue: Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition

Johannes Stallkamp; Marc Schlipsing; Jan Salmen; Christian Igel

Traffic signs are characterized by a wide variability in their visual appearance in real-world environments. For example, changes of illumination, varying weather conditions and partial occlusions impact the perception of road signs. In practice, a large number of different sign classes needs to be recognized with very high accuracy. Traffic signs have been designed to be easily readable for humans, who perform very well at this task. For computer systems, however, classifying traffic signs still seems to pose a challenging pattern recognition problem. Both image processing and machine learning algorithms are continuously refined to improve on this task. But little systematic comparison of such systems exists. What is the status quo? Do today's algorithms reach human performance? For assessing the performance of state-of-the-art machine learning algorithms, we present a publicly available traffic sign dataset with more than 50,000 images of German road signs in 43 classes. The data was considered in the second stage of the German Traffic Sign Recognition Benchmark held at IJCNN 2011. The results of this competition are reported and the best-performing algorithms are briefly described. Convolutional neural networks (CNNs) showed particularly high classification accuracies in the competition. We measured the performance of human subjects on the same data, and the CNNs outperformed the human test persons.
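
As an illustration only, the sketch below shows a small convolutional classifier for 43 traffic-sign classes in PyTorch. It is a generic architecture, not one of the competition entries, and it assumes the variable-size GTSRB images have already been resized to 32 x 32 RGB crops.

```python
import torch
import torch.nn as nn

# Generic small CNN for 43-class sign classification (illustrative only,
# not a competition architecture); assumes 3 x 32 x 32 input crops.
class SignNet(nn.Module):
    def __init__(self, num_classes: int = 43):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # -> 32 x 16 x 16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # -> 64 x 8 x 8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Usage sketch: logits = SignNet()(torch.randn(1, 3, 32, 32))
```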


International Symposium on Neural Networks | 2011

The German Traffic Sign Recognition Benchmark: A multi-class classification competition

Johannes Stallkamp; Marc Schlipsing; Jan Salmen; Christian Igel

The “German Traffic Sign Recognition Benchmark” is a multi-category classification competition held at IJCNN 2011. Automatic recognition of traffic signs is required in advanced driver assistance systems and constitutes a challenging real-world computer vision and pattern recognition problem. A comprehensive, lifelike dataset of more than 50,000 traffic sign images has been collected. It reflects the strong variations in visual appearance of signs due to distance, illumination, weather conditions, partial occlusions, and rotations. The images are complemented by several precomputed feature sets to allow for applying machine learning algorithms without background knowledge in image processing. The dataset comprises 43 classes with unbalanced class frequencies. Participants have to classify two test sets of more than 12,500 images each. Here, the results on the first of these sets, which was used in the first evaluation stage of the two-fold challenge, are reported. The methods employed by the participants who achieved the best results are briefly described and compared to human traffic sign recognition performance and baseline results.
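
Because the benchmark ships precomputed feature sets, a participant can train a standard classifier without touching the images at all. The snippet below is a minimal sketch of that workflow with scikit-learn; the file names and the choice of LDA as classifier are hypothetical placeholders, not part of the benchmark.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical file names: (n_samples, n_features) precomputed descriptors
# (e.g. HOG) and integer class labels in [0, 42], prepared elsewhere.
X_train = np.load("gtsrb_features_train.npy")
y_train = np.load("gtsrb_labels_train.npy")
X_test = np.load("gtsrb_features_test.npy")

clf = LinearDiscriminantAnalysis().fit(X_train, y_train)
predictions = clf.predict(X_test)
```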


International Symposium on Neural Networks | 2013

Detection of traffic signs in real-world images: The German traffic sign detection benchmark

Sebastian Houben; Johannes Stallkamp; Jan Salmen; Marc Schlipsing; Christian Igel

Real-time detection of traffic signs, the task of pinpointing a traffic sign's location in natural images, is a challenging computer vision task of high industrial relevance. Various algorithms have been proposed, and advanced driver assistance systems supporting detection and recognition of traffic signs have reached the market. Despite the many competing approaches, there is no clear consensus on what the state of the art in this field is. This can be attributed to the lack of comprehensive, unbiased comparisons of those methods. We aim at closing this gap with the “German Traffic Sign Detection Benchmark” presented as a competition at IJCNN 2013 (International Joint Conference on Neural Networks). We introduce a real-world benchmark data set for traffic sign detection together with carefully chosen evaluation metrics, baseline results, and a web interface for comparing approaches. In our evaluation, we separate sign detection from classification, but still measure the performance on relevant categories of signs to allow for benchmarking specialized solutions. The considered baseline algorithms represent some of the most popular detection approaches, such as the Viola-Jones detector based on Haar features and a linear classifier relying on HOG descriptors. Further, a recently proposed problem-specific algorithm exploiting shape and color in a model-based Hough-like voting scheme is evaluated. Finally, we present the best-performing algorithms of the IJCNN competition.
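
For the HOG-based baseline, detection essentially amounts to scoring sliding windows with a linear classifier. The sketch below assumes a weight vector `w` and bias `b` trained elsewhere on HOG descriptors of fixed-size crops; the window size, stride and threshold are illustrative, and non-maximum suppression and image pyramids are omitted.

```python
import numpy as np
from skimage.feature import hog  # HOG descriptor, as used by the linear baseline

def detect_signs(gray, w, b, win=48, stride=8, thresh=0.0):
    """Score sliding windows with a linear classifier (w, b) on HOG features.

    w and b are assumed to be trained elsewhere on win x win crops; this only
    sketches the detection loop itself.
    """
    detections = []
    for y in range(0, gray.shape[0] - win + 1, stride):
        for x in range(0, gray.shape[1] - win + 1, stride):
            f = hog(gray[y:y + win, x:x + win], orientations=9,
                    pixels_per_cell=(8, 8), cells_per_block=(2, 2))
            score = float(np.dot(w, f) + b)
            if score > thresh:
                detections.append((x, y, win, score))
    return detections
```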


Computer Analysis of Images and Patterns | 2009

Real-Time Stereo Vision: Making More Out of Dynamic Programming

Jan Salmen; Marc Schlipsing; Johann Edelbrunner; Stefan Hegemann; Stefan Lüke

Dynamic Programming (DP) is a popular and efficient method for calculating disparity maps from stereo images. It allows for meeting real-time constraints even on low-cost hardware. Therefore, it is frequently used in real-world applications, although more accurate algorithms exist. We present a refined DP stereo processing algorithm which is based on a standard implementation but is more flexible and shows increased performance. In particular, we introduce the idea of multi-path backtracking to exploit the information gained from DP more effectively. We show how to automatically tune all parameters of our approach offline by an evolutionary algorithm. The performance was assessed on benchmark data. The number of incorrect disparities was reduced by 40 % compared to the DP reference implementation while the overall complexity increased only slightly.
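
To make the starting point concrete, here is a plain single-path scanline DP of the kind such refined algorithms build on; the linear smoothness penalty and the single backtracking pass are textbook choices, not the multi-path scheme proposed in the paper.

```python
import numpy as np

def scanline_dp(cost, smooth=0.1):
    """Generic scanline dynamic-programming stereo (illustration only).

    cost: (width, ndisp) matching costs for one image row.
    Returns one disparity per pixel via forward accumulation + backtracking.
    """
    cost = np.asarray(cost, dtype=float)
    w, ndisp = cost.shape
    acc = np.zeros_like(cost)                    # accumulated costs
    back = np.zeros((w, ndisp), dtype=int)       # backtracking pointers
    acc[0] = cost[0]
    # penalty[d, d_prev] grows with the disparity change between neighbours
    penalty = smooth * np.abs(np.arange(ndisp)[:, None] - np.arange(ndisp)[None, :])
    for x in range(1, w):
        total = acc[x - 1][None, :] + penalty    # shape (d, d_prev)
        back[x] = np.argmin(total, axis=1)
        acc[x] = cost[x] + total[np.arange(ndisp), back[x]]
    # backtrack from the cheapest final state
    disp = np.zeros(w, dtype=int)
    disp[-1] = int(np.argmin(acc[-1]))
    for x in range(w - 1, 0, -1):
        disp[x - 1] = back[x, disp[x]]
    return disp
```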


IEEE Intelligent Vehicles Symposium | 2013

Real-time stereo vision: Optimizing Semi-Global Matching

Matthias Michael; Jan Salmen; Johannes Stallkamp; Marc Schlipsing

Semi-Global Matching (SGM) is arguably one of the most popular algorithms for real-time stereo vision. It is already employed in mass-production vehicles today. Thinking of applications in intelligent vehicles (and fully autonomous vehicles in the long term), we aim at further improving SGM regarding its accuracy. In this study, we propose a straightforward extension of the algorithm's parametrization. We consider individual penalties for different path orientations, weighted integration of paths, and penalties depending on intensity gradients. In order to tune all parameters, we applied evolutionary optimization. For a more efficient offline optimization and evaluation, we implemented SGM on graphics hardware. We describe the implementation using CUDA in detail. For our experiments, we consider two publicly available datasets: the popular Middlebury benchmark as well as a synthetic sequence from the .enpeda. project. The proposed extensions significantly improve the performance of SGM. The number of incorrect disparities was reduced by up to 27.5 % compared to the original approach, while the runtime was not increased.
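
The core of SGM is cost aggregation along several 1-D paths with a small penalty P1 for disparity changes of one and a larger penalty P2 for bigger jumps. The sketch below shows a single left-to-right path in NumPy; the paper's extensions (per-path penalties, weighted path integration, gradient-dependent P2) would be layered on top of this recursion, and the penalty values here are arbitrary.

```python
import numpy as np

def sgm_path(cost, p1=10.0, p2=120.0):
    """Aggregate SGM costs along one left-to-right path (sketch only).

    cost: (width, ndisp) pixel-wise matching costs for one row.
    p1 penalizes disparity changes of 1, p2 larger jumps.
    """
    cost = np.asarray(cost, dtype=float)
    w, ndisp = cost.shape
    L = np.empty_like(cost)
    L[0] = cost[0]
    for x in range(1, w):
        prev = L[x - 1]
        min_prev = prev.min()
        # candidate transitions: same d, d +/- 1 with p1, any other d with p2
        same = prev
        up = np.concatenate(([np.inf], prev[:-1])) + p1
        down = np.concatenate((prev[1:], [np.inf])) + p1
        jump = np.full(ndisp, min_prev + p2)
        L[x] = cost[x] + np.minimum.reduce([same, up, down, jump]) - min_prev
    return L
```

A full SGM disparity map sums such aggregated costs over several path orientations and takes the per-pixel argmin over disparities.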


Computing in Civil Engineering | 2013

Comparing Image Features and Machine Learning Algorithms for Real-Time Parking Space Classification

Marc Tschentscher; Marcel Neuhausen; Christian Koch; Markus König; Jan Salmen; Marc Schlipsing

Finding a vacant parking space in urban areas is often time-consuming and frustrating for potential visitors or customers. Efficient car-park routing systems could help drivers find an unoccupied space. Current systems for detecting vacant parking spaces are either very expensive due to their hardware requirements or do not provide a detailed occupancy map. In this paper, we propose a video-based system for low-cost parking space classification. A wide-angle lens camera is used in combination with a desktop computer. We evaluate image features and machine learning algorithms to determine the occupancy of individual parking spaces. Each combination of feature set and classifier was trained and tested on our dataset containing approximately 10,000 samples. We assessed the performance of all combinations of feature extraction and classification methods. Our final system, incorporating temporal filtering, reached an accuracy of 99.8 %.
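
A minimal version of such a pipeline, assuming pre-cropped image patches per parking space, could look like the sketch below. The colour-histogram feature and the RBF SVM are just one plausible feature/classifier combination of the kind the paper compares, and the helper names are hypothetical.

```python
import numpy as np
from collections import deque
from sklearn.svm import SVC

def color_histogram(patch, bins=16):
    """Per-channel intensity histogram of one parking-space image patch."""
    return np.concatenate([
        np.histogram(patch[..., c], bins=bins, range=(0, 255), density=True)[0]
        for c in range(patch.shape[-1])
    ])

def train_classifier(patches, labels):
    """Fit one candidate classifier (here an RBF SVM) on patches of known
    occupancy (1 = occupied, 0 = vacant)."""
    X = np.stack([color_histogram(p) for p in patches])
    return SVC(kernel="rbf").fit(X, labels)

def filtered_decision(clf, recent_patches, window=5):
    """Temporal filtering: majority vote over the last `window` frames."""
    votes = deque(maxlen=window)
    for patch in recent_patches:
        votes.append(int(clf.predict(color_histogram(patch)[None, :])[0]))
    return int(round(float(np.mean(votes))))
```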


International Conference on Evolutionary Multi-Criterion Optimization | 2011

Real-time estimation of optical flow based on optimized Haar wavelet features

Jan Salmen; Lukas Caup; Christian Igel

Estimation of optical flow is required in many computer vision applications. These applications often have to deal with strict time constraints. Therefore, flow algorithms with both high accuracy and computational efficiency are desirable. Accordingly, designing such a flow algorithm involves multi-objective optimization. In this work, we build on a popular algorithm developed for real-time applications. It is originally based on the Census transform and benefits from this encoding for table-based matching and tracking of interest points. We propose to use the more universal Haar wavelet features instead of the Census transform within the same framework. The resulting approach is more flexible; in particular, it allows for sub-pixel accuracy. For comparison with the original method and another baseline algorithm, we considered both popular benchmark datasets as well as a long synthetic video sequence. We employed evolutionary multi-objective optimization to tune the algorithms. This allows us to compare the different approaches in a systematic and unbiased way. Our results show that the overall performance of our method is significantly higher compared to the reference implementation.
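
For reference, the Census transform that the original method builds on can be written in a few lines of NumPy: each pixel is encoded as a bit string recording which of its neighbours are darker, and interest points are then matched by comparing these signatures. The Haar-wavelet variant proposed in the paper replaces the binary signature with real-valued wavelet responses, which is what enables sub-pixel accuracy. This is only a sketch of the encoding, not of the full matching framework.

```python
import numpy as np

def census_transform(gray, win=3):
    """Census transform of a grayscale image (sketch).

    Each pixel becomes a bit string that records, for every neighbour in a
    win x win window, whether that neighbour is darker than the centre.
    Note: np.roll wraps around at the image borders; a real implementation
    would handle the margins explicitly.
    """
    r = win // 2
    out = np.zeros(gray.shape, dtype=np.uint32)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(gray, dy, axis=0), dx, axis=1)
            out = (out << 1) | (shifted < gray).astype(np.uint32)
    return out
```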


IEEE Intelligent Vehicles Symposium | 2012

Roll angle estimation for motorcycles: Comparing video and inertial sensor approaches

Marc Schlipsing; Jan Salmen; B. Lattke; Kai Schröter; Hermann Winner

Advanced Rider Assistance Systems (ARAS) for powered two-wheelers improve driving behaviour and safety. Further developments of intelligent vehicles will also include video-based systems, which are successfully deployed in cars. When porting such modules to motorcycles, the camera pose has to be taken into account, as, for example, large roll angles produce significant variations in the recorded images. Therefore, roll angle estimation is an important task for the development of various kinds of ARAS. This study introduces alternative approaches based on inertial measurement units (IMU) as well as on video only. The latter learns orientation distributions of image gradients that code the current roll angle. Until now, only preliminary results on synthetic data have been published. Here, an evaluation on real video data is presented along with three valuable improvements and an extensive parameter optimisation using the Covariance Matrix Adaptation Evolution Strategy. For a comparison of these very dissimilar approaches, a test vehicle was equipped with an IMU, a camera and a highly accurate reference sensor. The results show high performance, with an error of about 2 degrees for the improved vision method, and therefore prove the proposed concept on real-world data. The IMU-based Kalman filter estimation performed on par. As naive averaging of both estimates already increased performance, an elaborate fusion of the proposed methods is expected to yield further improvements.
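
The video-only method relies on distributions of image-gradient orientations. The sketch below shows only this feature-extraction step, a magnitude-weighted orientation histogram, not the learned mapping from histogram to roll angle; bin count and normalisation are arbitrary choices.

```python
import numpy as np

def gradient_orientation_histogram(gray, bins=90):
    """Magnitude-weighted histogram of image-gradient orientations (sketch).

    The paper's video-based method learns how such distributions vary with
    the roll angle; this is only the feature, not the estimator.
    """
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0   # orientation, not direction
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, 180.0), weights=mag)
    return hist / (hist.sum() + 1e-9)
```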


IEEE Intelligent Vehicles Symposium | 2013

Video-based trailer detection and articulation estimation

Lukas Caup; Jan Salmen; Ibro Muharemovic; Sebastian Houben

Even for experienced drivers, handling a roll trailer with a passenger car is a difficult and often tedious task. Moreover, the driver needs to keep track of the trailer's driving stability on unsteady roads. There are driver assistance systems that can simplify trajectory planning and observe the oscillation amplitude, but they require additional hardware. In this paper, we present a method for trailer detection and articulation angle measurement based on video data from a rear-mounted wide-angle camera. It consists of two stages: deciding whether or not a trailer is coupled to the vehicle, and estimating its articulation angle. These calculations work on single video frames; the vehicle is therefore not required to be in motion. However, we stabilize the single-frame estimates by temporal integration. We perform training and parameter optimization and evaluate the accuracy of our approach by comparing the results to those of an articulation measurement unit attached to a test vehicle's hitch. Results show that it can be determined very reliably whether or not a trailer is coupled to the vehicle. Furthermore, its articulation can be estimated with a mean error of less than two degrees.
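
The temporal integration mentioned above can be as simple as a running circular mean over the most recent per-frame angle estimates. The sketch below is such a generic smoother, not the paper's exact scheme; the window length is arbitrary.

```python
import numpy as np

def smooth_angles(angle_stream, window=10):
    """Running circular mean of per-frame articulation-angle estimates
    (generic temporal-integration sketch); angles are in degrees."""
    buf = []
    for a in angle_stream:
        buf.append(np.deg2rad(a))
        buf = buf[-window:]
        yield np.rad2deg(np.arctan2(np.mean(np.sin(buf)), np.mean(np.cos(buf))))
```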


Pattern Recognition Letters | 2010

Efficient update of the covariance matrix inverse in iterated linear discriminant analysis

Jan Salmen; Marc Schlipsing; Christian Igel

For fast classification under real-time constraints, as required in many image-based pattern recognition applications, linear discriminant functions are a good choice. Linear discriminant analysis (LDA) computes such discriminant functions in a space spanned by real-valued features extracted from the input. The accuracy of the trained classifier crucially depends on these features, its time complexity on their number. As the number of available features is immense in most real-world problems, it becomes essential to use meta-heuristics for feature selection and/or feature optimization. These methods typically involve iterated training of a classifier after substitutions or modifications of features. Therefore, we derive an efficient incremental update formula for LDA discriminant functions for the substitution of features. It scales linearly in the number of altered features and quadratically in the overall number of features, while retraining from scratch scales cubically in the number of features. The update rule allows for efficient feature selection and optimization with any meta-heuristic that is based on iteratively modifying existing solutions. The proposed method was tested on an artificial benchmark problem as well as on a real-world problem. Results show that significant time savings during training are achieved while numerical stability is maintained.
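
Updates of this kind rest on low-rank corrections of the inverse covariance matrix. As a generic illustration (not the paper's exact formula), the Sherman-Morrison identity updates an n x n inverse after a rank-1 change in O(n^2) instead of the O(n^3) needed to recompute it, which matches the quadratic-versus-cubic scaling reported above; substituting a feature changes one row and one column of the covariance matrix and can be expressed through a small number of such updates.

```python
import numpy as np

def sherman_morrison_update(S_inv, u, v):
    """Rank-1 update of an inverse matrix:
        (S + u v^T)^{-1} = S^{-1} - (S^{-1} u)(v^T S^{-1}) / (1 + v^T S^{-1} u)
    Generic identity used for illustration; cost is O(n^2) per update.
    """
    Su = S_inv @ u
    vS = v @ S_inv
    denom = 1.0 + float(v @ Su)
    return S_inv - np.outer(Su, vS) / denom
```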

Collaboration


Dive into Jan Salmen's collaborations.

Top Co-Authors

Christian Igel
University of Copenhagen

B. Lattke
Technische Universität Darmstadt

Hermann Winner
Technische Universität Darmstadt

Kai Schröter
Technische Universität Darmstadt

Lukas Caup
Ruhr University Bochum