Joachim Gloger
Daimler AG
Publication
Featured research published by Joachim Gloger.
international conference on document analysis and recognition | 1993
Alfred Kaltenmeier; Torsten Caesar; Joachim Gloger; Eberhard Mandler
The paper describes an adaptation of hidden Markov models (HMMs) to the automatic recognition of unrestricted handwritten words. Many interesting details of a 50,000-word vocabulary recognition system for US city names are described. The system comprises feature extraction, classification, estimation of model parameters, and word recognition. The feature extraction module transforms a binary image into a sequence of feature vectors. The classification module consists of a transformation based on linear discriminant analysis and Gaussian soft-decision vector quantizers, which transform feature vectors into sets of symbols and associated likelihoods. Symbols and likelihoods form the input to both HMM training and recognition. HMM training, performed in several successive steps, requires only a small amount of gestalt-labeled data at the character level for initialization. HMM recognition, based on the Viterbi algorithm, runs on subsets of the whole vocabulary.
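The recognition step named in the abstract, Viterbi decoding over symbol likelihoods, can be sketched as follows. This is a generic textbook implementation, not the authors' code; the state and probability values in the usage example are purely illustrative:

```python
def viterbi(log_init, log_trans, log_emit):
    """Most likely state sequence given per-frame emission
    log-likelihoods (e.g. from a soft vector quantizer).

    log_init[s]     -- log prior of starting in state s
    log_trans[i][j] -- log probability of moving from state i to j
    log_emit[t][s]  -- log likelihood of frame t under state s
    """
    n = len(log_init)
    delta = [log_init[s] + log_emit[0][s] for s in range(n)]
    back = []                                  # backpointers per frame
    for frame in log_emit[1:]:
        prev, step, delta = delta, [], []
        for j in range(n):
            best = max(range(n), key=lambda i: prev[i] + log_trans[i][j])
            step.append(best)
            delta.append(prev[best] + log_trans[best][j] + frame[j])
        back.append(step)
    # trace the best path backwards from the best final state
    path = [max(range(n), key=lambda s: delta[s])]
    for step in reversed(back):
        path.append(step[path[-1]])
    return path[::-1], max(delta)
```

With uniform priors and transitions, the decoded path simply follows the emission likelihoods, which makes the behaviour easy to check by hand.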
international conference on document analysis and recognition | 1997
Joachim Gloger; Alfred Kaltenmeier; Eberhard Mandler; L. Andrews
Most scientific papers on handwriting recognition systems report recognition performance as a forced-recognition rate: the ratio of correctly recognized samples to all samples. For systems deployed in real applications this rate is not very relevant. Such systems must operate at a very low error rate, which can only be achieved by introducing effective reject criteria. The quantity of real interest is therefore a function describing the recognition rate in relation to a specific error rate, implicitly including a corresponding reject rate. This paper presents two approaches for handling rejects in a hidden Markov model based handwriting recognition system. The features used to decide a reject are values derived from the hidden Markov recognizer. One technique relies on relative frequencies of those values; the other uses standard classification techniques to train a reject decision unit, the reject classifier. Both methods are presented together with some noteworthy results.
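The error-versus-reject trade-off described above can be illustrated with a small sketch. The confidence values here are hypothetical stand-ins; in the paper, the reject features are values derived from the HMM recognizer itself:

```python
def error_reject_curve(confidences, correct, thresholds):
    """For each reject threshold, reject all samples whose confidence
    falls below it, and report (reject_rate, error_rate_on_accepted).

    confidences -- one score per sample (higher = more confident)
    correct     -- True/False per sample (was the top answer right?)
    """
    n = len(confidences)
    points = []
    for th in thresholds:
        accepted = [ok for c, ok in zip(confidences, correct) if c >= th]
        reject_rate = 1.0 - len(accepted) / n
        error_rate = accepted.count(False) / len(accepted) if accepted else 0.0
        points.append((reject_rate, error_rate))
    return points
```

Sweeping the threshold traces exactly the function the abstract calls for: recognition (or error) rate as a function of the reject rate.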
international conference on document analysis and recognition | 1995
Torsten Caesar; Joachim Gloger; Eberhard Mandler
Estimation of the aiding typographical rulers (writing guide lines) is a challenging task, especially for everyday handwriting. The method described here performs very well as long as the model assumption of a single straight line is satisfied. It is based entirely on contour processing; its central algorithm is a sophisticated iterated regression analysis over weighted points. The method can be applied to handwriting as well as machine-printed text without adjusting its parameters.
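The core idea, iterated regression over weighted points, can be sketched as a repeated weighted least-squares line fit that downweights points far from the current line (so ascenders and descenders lose influence). This is a rough analogue under simplifying assumptions, not the authors' algorithm; the residual weighting `1/(1+r^2)` is an illustrative choice:

```python
def iterated_regression_line(points, iterations=5):
    """Fit y = a*x + b by repeated weighted least squares.
    After each fit, points with large residuals get small weights,
    so the line converges toward the dominant straight structure."""
    weights = [1.0] * len(points)
    a = b = 0.0
    for _ in range(iterations):
        sw = sum(weights)
        mx = sum(w * x for w, (x, _) in zip(weights, points)) / sw
        my = sum(w * y for w, (_, y) in zip(weights, points)) / sw
        var = sum(w * (x - mx) ** 2 for w, (x, _) in zip(weights, points))
        cov = sum(w * (x - mx) * (y - my) for w, (x, y) in zip(weights, points))
        a = cov / var if var else 0.0
        b = my - a * mx
        # reweight: large residuals -> small weight (illustrative choice)
        weights = [1.0 / (1.0 + (y - (a * x + b)) ** 2) for x, y in points]
    return a, b
```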
international conference on document analysis and recognition | 1997
Jürgen Franke; Joachim Gloger; Alfred Kaltenmeier; Eberhard Mandler
Handwriting recognition systems based on hidden Markov models commonly use a vector quantizer to obtain the required symbol sequence. To achieve better recognition rates, semi-continuous hidden Markov models have been applied. Those recognizers need a soft vector quantizer which superimposes a statistical distribution on symbol generation; in general, Gaussian distributions are applied. A disadvantage of this technique is the assumption of a specific distribution, and no proof can be given that this presupposition holds in practice. Therefore, a method which employs no distribution model may achieve some improvements. The paper presents the employment of a polynomial classifier as a replacement for the Gaussian classifier in the handwriting recognition system. As the results show, the replacement improves the recognition rate significantly.
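The Gaussian soft-decision quantization that the polynomial classifier replaces can be sketched as follows. Spherical Gaussians with shared unit variance are a simplifying assumption for illustration; the point is that each feature vector yields likelihoods over all codebook symbols rather than a single hard index:

```python
import math

def gaussian_soft_vq(x, means, var=1.0):
    """Map a feature vector x to normalized symbol likelihoods under
    spherical Gaussians centered on the codebook means. This hard-codes
    the distribution assumption that a trained polynomial classifier
    would avoid."""
    scores = [
        math.exp(-sum((xi - mi) ** 2 for xi, mi in zip(x, m)) / (2.0 * var))
        for m in means
    ]
    s = sum(scores)
    return [sc / s for sc in scores]
```

A vector on a codebook center gets most of the probability mass; a vector equidistant between two centers splits it evenly, which is exactly the "soft" behaviour a semi-continuous HMM exploits.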
international conference on document analysis and recognition | 1995
Torsten Caesar; Joachim Gloger; Eberhard Mandler
Handwriting recognition systems usually need the support of lexical knowledge in order to achieve acceptable results. The lexicons of practical applications are often very large, which results in prohibitive run times and degraded recognition performance. There is therefore a need to reduce large lexicons efficiently without losing the correct entry. Often it is possible to recognize some isolated or resegmented characters of a word, but not the whole word; these recognition results may be used as hints for an initial lexicon reduction. Using these hints requires techniques that can handle character alternatives as well as touched and broken characters. The article discusses lexicon techniques with respect to their efficiency and robustness. A hybrid approach is proposed which reduces large lexicons efficiently and shows robust behavior when broken and touched characters are observed.
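The hint-based reduction idea can be sketched as a filter that keeps only lexicon entries consistent with sets of character alternatives at given positions. This toy version is not the paper's hybrid approach (it ignores touched and broken characters entirely); it only shows how character alternatives prune a word list:

```python
def reduce_lexicon(lexicon, hints):
    """Keep words consistent with recognized-character hints.

    hints -- dict mapping a character position to a set of alternative
             characters; a word survives if every hinted position holds
             one of its alternatives (positions beyond the word's length
             are ignored as a crude robustness measure).
    """
    return [
        word for word in lexicon
        if all(pos >= len(word) or word[pos] in alts
               for pos, alts in hints.items())
    ]
```

Even a single confidently recognized character with a few alternatives can discard most of a large city-name lexicon before the expensive word recognizer runs.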
Archive | 1994
Torsten Caesar; Joachim Gloger; Alfred Kaltenmeier; Eberhard Mandler
In January 1992 a project was started which focuses on the recognition of handwritten words constrained by a given lexicon. The target application is the recognition of US city names in address reading systems.
international conference on document analysis and recognition | 1995
Axel Braun; Torsten Caesar; Joachim Gloger; Eberhard Mandler
In a binary image, contours may be seen as the discriminating curves between objects and background. The contour of a connected component is always a Jordan curve, and one symbol (e.g., a character) may consist of more than one such curve. Processing these curves is a one-dimensional task. Almost all common processing steps can be designed to work on contours rather than on the two-dimensional image. Moreover, contour processing gives new insight into well-known problems, enables new processing steps, and yields more information about the relations between connected components or objects of the image. The authors present preprocessing operations which work directly at the level of contours. Compared with the corresponding iconic operations, algorithms working on the contour level are usually more efficient. Based on the contours of the connected components, methods for filtering and slant normalization are described.
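One way slant normalization can operate purely on contour points, in the spirit of the one-dimensional processing described above, is to estimate the dominant slant from near-vertical contour segments and shear the points upright. This is a toy sketch, not the authors' algorithm:

```python
def deslant(contour):
    """Estimate dominant slant as the |dy|-weighted mean of dx/dy over
    near-vertical contour segments, then shear every point by
    x' = x - slant * y. Works on the contour alone; no pixel image."""
    num = den = 0.0
    for (x0, y0), (x1, y1) in zip(contour, contour[1:]):
        dx, dy = x1 - x0, y1 - y0
        if abs(dy) > abs(dx):               # near-vertical segment
            num += dx if dy > 0 else -dx    # (dx/dy) weighted by |dy|
            den += abs(dy)
    slant = num / den if den else 0.0
    return [(x - slant * y, y) for x, y in contour]
```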
international conference on document analysis and recognition | 1993
Torsten Caesar; Joachim Gloger; Eberhard Mandler
A methodology for structuring large, disordered sample sets for classifiers is presented. An object-oriented framework is an essential part of this methodology. Classes can be viewed as sets, and sets in turn can be viewed as objects; for this reason, operations and techniques from both domains (sets and OO technology) can be utilized to set up a system for computer-aided labeling. Since labeling is a time-consuming task, the handling of the system has to support efficient labeling. A second important aspect of the application is easy system handling, so that inexperienced examiners can use the system.
Archive | 2001
Joachim Gloger; Matthias Oberlaender; Bernd Woltermann
international conference on document analysis and recognition | 1993
Torsten Caesar; Joachim Gloger; Eberhard Mandler