Ulrich Kressel
Daimler AG
Publications
Featured research published by Ulrich Kressel.
ieee intelligent vehicles symposium | 2004
Frank Lindner; Ulrich Kressel; Stephan Kaelberer
In this paper a general system for real-time detection and recognition of traffic signals is proposed. The key sensor is a camera installed in a moving vehicle. The software system consists of three main modules: detection, tracking, and sample-based classification. Additional sensor information, such as vehicle data, GPS, enhanced digital maps, or a second camera for stereo vision, is used to enhance the performance and robustness of the system. Since the detection step is the most critical one, different detection schemes are compared, based on color, shape, texture, and complete-object classification. The color system, combined with a high-dynamic-range camera and precise location information for the vehicle and the searched traffic signals, offers valuable and reliable help in directing the driver's attention to traffic signals and can thus reduce red-light-running accidents.
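The detect-track-classify structure of the system can be sketched as below. This is an illustrative outline only, assuming simple placeholder modules (a saliency threshold stands in for the color/shape detectors and the sample-based classifier); it is not the authors' implementation.

```python
# Illustrative sketch of a detect-track-classify pipeline for traffic
# signals. The thresholds and region representation are assumptions.

def detect_candidates(frame):
    """Detection: keep regions whose score exceeds a placeholder
    threshold (stand-in for color/shape/texture detection)."""
    return [region for region in frame if region.get("saliency", 0) > 0.5]

def track(candidates, previous_tracks):
    """Tracking: associate current candidates with earlier tracks
    (naive stand-in for a real tracker)."""
    return previous_tracks + [c for c in candidates if c not in previous_tracks]

def classify(region):
    """Sample-based classification of a tracked region."""
    return "traffic_signal" if region.get("saliency", 0) > 0.8 else "background"

def process_frame(frame, previous_tracks):
    candidates = detect_candidates(frame)
    tracks = track(candidates, previous_tracks)
    return [(r, classify(r)) for r in tracks], tracks
```

In the real system, additional sensor input (GPS, digital maps, stereo) would feed the detection and tracking stages to constrain the search region.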
international conference on document analysis and recognition | 1993
Thomas Bayer; Ulrich Kressel
In optical character recognition (OCR) and document analysis, many reading errors are not caused by inadequate classifier power, but by segmentation errors. In particular, merged characters are a major remaining problem. An efficient and powerful method of determining cut hypotheses for the segmentation of merged characters is presented. The method is based on a classifier that decides, for each column of the character image, whether it represents a cut hypothesis or not. Since the classifier is adapted in the training phase by a sample set consisting of images of merged character patterns, the decision rules are created automatically rather than being man-made heuristics. The results obtained from a large test set show that a high recognition rate can be achieved with a reasonable computational effort.
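The column-wise decision described above can be sketched as follows. The paper trains the cut/no-cut rule from labeled samples; here, as an assumption for illustration, a simple vertical-projection threshold stands in for that learned decision rule.

```python
# Sketch of column-wise cut hypotheses for merged characters. A hand-set
# ink-count threshold replaces the paper's trained classifier.

def cut_hypotheses(image, threshold=1):
    """image: list of columns, each a list of 0/1 pixels.
    Return indices of columns classified as cut positions."""
    return [x for x, col in enumerate(image)
            if sum(col) <= threshold]

# Two merged glyphs touching via a thin bridge around columns 2-3:
merged = [
    [1, 1, 1],  # column 0: body of first character
    [1, 0, 1],
    [0, 1, 0],  # column 2: thin stroke -> cut hypothesis
    [0, 0, 1],  # column 3: thin bridge -> cut hypothesis
    [1, 1, 0],
    [1, 1, 1],  # column 5: body of second character
]
```

A trained classifier would additionally use the local pixel neighborhood of each column, rather than just its ink count, to decide.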
Archive | 1992
Thomas Bayer; Jürgen Franke; Ulrich Kressel; Eberhard Mandler; Matthias Oberländer; Jürgen Schürmann
Document analysis aims at the transformation of data presented on paper and addressed to human comprehension into a computer-revisable form. The pixel representation of a scanned document must be converted into a structured set of symbolic entities, which are appropriate for the intended kind of computerized information processing. It can be argued that the achieved symbolic description level resembles the degree of understanding acquired by a document analysis system. This interpretation of the term 'understanding' shall be explained in more depth, and an attempt shall be made to clarify the important question: "Up to what level can a machine really understand a given document?" Looking at the many problems still unsolved, this is indeed questionable.
international conference on pattern recognition | 1992
T. Bayer; Ulrich Kressel; M. Hammelsbeck
In optical character recognition (OCR) and document analysis, many errors are not caused by inadequate classifier power, but by segmentation errors. Besides broken characters, merged characters constitute the major remaining problem. This paper presents an efficient method for segmenting merged characters. The algorithm combines a statistically adapted cut classifier with a search algorithm, which employs further experts for selecting the proper cut positions from a set of hypotheses. It is designed especially for proportional fonts and even succeeds if the characters are in an italic font style.
Computer Vision and Image Understanding | 1998
Thomas Bayer; Ulrich Kressel; Heike Mogg-Schneider; Ingrid Renz
Text categorization assigns predefined categories to either electronically available texts or those resulting from document image analysis. A generic system for text categorization is presented which is based on statistical analysis of representative text corpora. Significant features are automatically derived from training texts by selecting substrings from actual word forms and applying statistical information and general linguistic knowledge. The dimension of the feature vectors is then reduced by linear transformation, keeping the essential information. The classification is a minimum least-squares approach based on polynomials. The described system can be efficiently adapted to new domains or different languages. In application, the adapted text categorizers are reliable, fast, and completely automatic. Two example categorization tasks achieve recognition scores of approximately 80% and are very robust against recognition or typing errors.
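The substring-feature idea can be sketched as below. Character trigrams are an illustrative choice of substring feature, and nearest-centroid cosine scoring stands in for the paper's polynomial least-squares classifier; both are assumptions made for the sketch. Shared substrings are what make the approach robust against recognition or typing errors.

```python
# Sketch of substring features for text categorization. Trigrams and
# nearest-centroid scoring are illustrative stand-ins for the paper's
# derived features and polynomial least-squares classifier.

from collections import Counter
import math

def trigram_features(text):
    """Count character trigrams, padded so word boundaries count too."""
    text = f"  {text.lower()}  "
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def categorize(text, centroids):
    """Assign the category whose feature centroid is most similar."""
    feats = trigram_features(text)
    return max(centroids, key=lambda c: cosine(feats, centroids[c]))
```

Even a misspelled input still shares most of its trigrams with the correct category's training texts, which is the robustness property the abstract reports.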
international conference on pattern recognition | 2004
Annika Kuhl; Lars Krüger; Christian Wöhler; Ulrich Kressel
This paper describes the training of classifiers based entirely on virtual images, rendered by ray-tracing software. Two classifiers, a support vector machine and a polynomial classifier, are trained solely with virtual samples and used for the classification of real samples. The objects to be distinguished are holes vs. garbage (non-holes) out of a set of hole candidates in images of flanges. We analysed the effect of different classifier parameters and manipulations of the virtual samples. Error rates of 1.6% on real test samples are achieved.
international conference on artificial neural networks | 1997
Ingo Graf; Ulrich Kressel; Jürgen Franke
Polynomial support vector machines have shown competitive performance on the problem of handwritten digit recognition. However, there is a large gap in performance vs. computing resources between the linear and the quadratic approach. By computing the complete quadratic classifier out of the quadratic support vector machine, a pivot point is found for trading off performance against effort. Different selection strategies are presented to reduce the complete quadratic classifier, which lower the required computing and memory resources by a factor of more than ten without affecting the generalization performance.
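The step of computing a complete quadratic classifier from a quadratic-kernel SVM rests on the identity (1 + x·s)² = φ(x)·φ(s) for an explicit quadratic feature map φ. A minimal sketch, with made-up support vectors and coefficients (the selection strategies for pruning the resulting weight vector are not shown):

```python
# Sketch: fold a quadratic-kernel SVM into one explicit weight vector,
# so classification costs one feature map plus one dot product instead
# of one kernel evaluation per support vector.

import math
import itertools

def phi(x):
    """Explicit feature map for the kernel k(x, s) = (1 + x.s)^2."""
    feats = [1.0]
    feats += [math.sqrt(2) * xi for xi in x]
    feats += [xi * xi for xi in x]
    feats += [math.sqrt(2) * x[i] * x[j]
              for i, j in itertools.combinations(range(len(x)), 2)]
    return feats

def kernel_decision(x, svs, coefs, b=0.0):
    """SVM decision value: sum over support vectors of coef * k(x, s)."""
    return sum(a * (1 + sum(xi * si for xi, si in zip(x, s))) ** 2
               for a, s in zip(coefs, svs)) + b

def explicit_weights(svs, coefs):
    """Fold all support vectors into one weight vector, once, offline."""
    w = [0.0] * len(phi(svs[0]))
    for a, s in zip(coefs, svs):
        for k, f in enumerate(phi(s)):
            w[k] += a * f
    return w
```

The two decision functions agree exactly; the paper's selection strategies then prune small components of the folded weight vector to cut memory and computation further.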
international conference on 3d vision | 2013
Matthias Höffken; Tianyi Wang; Jürgen Wiest; Ulrich Kressel; Klaus Dietmayer
Automatic head pose estimation plays an important part in the development of human-machine interfaces. This paper proposes a fast and frugal method for accurate and person-independent head pose estimation based on range images. Head pose estimation is treated as a nonlinear regression problem and addressed with Synchronized Submanifold Embedding (SSE). The offline training step exploits the local linear structure of label and feature space for a cross-wise synchronization of pose samples from different subjects. Based on this, multiclass Linear Discriminant Analysis (M-LDA) identifies a dimensionality-reducing linear projection, which diminishes information not related to head pose. New samples are then projected into this lower-dimensional feature space and classified based on training samples within their local neighborhood. In the case of sequential data, the occurrence of outliers can be reduced using a reasonable preselection of neighborhood candidates based on tracking of pose changes. The experimental results on a publicly available database show that the proposed algorithm can handle a large range of pose changes and outperforms existing methods in accuracy.
Mustererkennung 1997, 19. DAGM-Symposium | 1997
Ulrich Kressel; Ingo Graf
The support vector machine of V. Vapnik [1] adapts classifiers by minimizing the structural risk, which takes into account not only the commonly used empirical risk but also the reliability of the adapted classifier as a function of the given training sample. The support vector machine is realized by the separating hyperplane that divides two given classes with the largest possible margin and as few errors as possible. For the multiclass problem, we propose separating the classes pairwise, instead of the usual 'one class vs. all others' approach. The advantages of this approach are confirmed on the example of handwritten digit classification.
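The pairwise (one-vs-one) multiclass scheme proposed above can be sketched as follows: one binary classifier is trained per class pair, and at prediction time all pairs vote. The trivial distance-based binary decider in the usage example is an assumption standing in for the pairwise SVMs.

```python
# Sketch of one-vs-one multiclass classification by pairwise voting.
# binary_decide stands in for a trained pairwise SVM.

from itertools import combinations
from collections import Counter

def pairwise_classify(x, classes, binary_decide):
    """binary_decide(x, a, b) returns the winner of the (a, b) duel;
    the class collecting the most duel wins is predicted."""
    votes = Counter(binary_decide(x, a, b)
                    for a, b in combinations(classes, 2))
    return votes.most_common(1)[0][0]
```

For k classes this needs k(k-1)/2 binary classifiers, but each is trained on only two classes' samples, which keeps the individual problems small and the pairwise decision boundaries simple.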