Jürgen Franke
Daimler AG
Publications
Featured research published by Jürgen Franke.
IEEE Internet Computing | 2008
Vlado Stankovski; Martin T. Swain; Valentin Kravtsov; Thomas Niessen; Dennis Wegener; Matthias Röhm; Jernej Trnkoczy; Michael May; Jürgen Franke; Assaf Schuster; Werner Dubitzky
As modern data mining applications increase in complexity, so too do their demands for resources. Grid computing is one of several emerging networked computing paradigms that promise to meet the requirements of heterogeneous, large-scale, and distributed data mining applications. Despite this promise, many issues must still be resolved before grid technology can be commonly applied to large-scale data mining tasks. To address some of these issues, the authors developed the DataMiningGrid system. It integrates a diverse set of programs and application scenarios within a single framework, and features scalability, flexible extensibility, and sophisticated support for relevant standards and different types of users.
international conference on document analysis and recognition | 1993
Jürgen Franke; Matthias Oberländer
The authors deal with the recognition of writing style (whether a data field is hand- or machine-printed) in the context of form reading applications. Due to the form reader's hardware restrictions, the approach had to be based solely on knowledge of the surrounding rectangles of the black connected components of the data field. Different statistical classifiers were developed, each adapted to a different feature vector calculated separately for each data field. The outputs of these classifiers were combined, allowing much higher performance than any single classifier. The combination was carried out by another polynomial (statistical) classifier using the estimations, not the decisions, of these classifiers as the new feature vector. The improvement from combination was significant. The approach has since proven its practical viability, running successfully in commercially distributed form readers.
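The combination scheme described above can be sketched in a few lines: the probability estimates (not the hard decisions) of the base classifiers form a new feature vector, on which a second-stage polynomial classifier is trained by least squares. This is a minimal illustration with simulated data, not the authors' implementation; the noise model and all names are hypothetical.

```python
import numpy as np

# Hypothetical setup: two base classifiers each emit a probability
# estimate for the two styles (0 = machine-printed, 1 = handwritten).
rng = np.random.default_rng(0)
n = 200
labels = rng.integers(0, 2, n)
# Simulated, noisy probability estimates from the two base classifiers.
est_a = np.clip(labels + rng.normal(0, 0.4, n), 0, 1)
est_b = np.clip(labels + rng.normal(0, 0.4, n), 0, 1)

# New feature vector: the base estimates plus their second-degree
# polynomial terms (a complete quadratic expansion).
X = np.column_stack([np.ones(n), est_a, est_b,
                     est_a**2, est_b**2, est_a * est_b])

# Train the combining polynomial classifier by least-squares regression
# onto the labels (the classical mean-square-error formulation).
w, *_ = np.linalg.lstsq(X, labels, rcond=None)

pred = (X @ w > 0.5).astype(int)
accuracy = (pred == labels).mean()
```

The second stage sees how confident each base classifier is, so it can down-weight an uncertain estimate instead of being forced to break ties between two hard votes.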
Archive | 1992
Thomas Bayer; Jürgen Franke; Ulrich Kressel; Eberhard Mandler; Matthias Oberländer; Jürgen Schürmann
Document analysis aims at the transformation of data presented on paper and addressed to human comprehension into a computer-revisable form. The pixel representation of a scanned document must be converted into a structured set of symbolic entities appropriate for the intended kind of computerized information processing. It can be argued that the achieved level of symbolic description resembles the degree of understanding acquired by a document analysis system. This interpretation of the term ‘understanding’ is explained in more depth, and an attempt is made to clarify the important question: “Up to what level can a machine really understand a given document?” Looking at the many problems still unsolved, this is indeed questionable.
Mustererkennung 1991, 13. DAGM-Symposium | 1991
U. Kreßel; Jürgen Schürmann; Jürgen Franke
This paper examines the application of connectionist concepts to supervised pattern classification problems. For this purpose, the two most important neural models, the multilayer perceptron and radial basis functions, were selected. These approaches are described mathematically and presented in a unified way, aiming at a listing of the knowledge reported in the literature that is as complete as possible. Using the decision-theoretic approach (the Bayes classifier), the two concepts can be placed within the arsenal of classical pattern recognition methods and their properties assessed. Finally, comparative results for the classification of handwritten digits are briefly reported.
Mustererkennung 1990, 12. DAGM-Symposium | 1990
U. Kreßel; Jürgen Franke; Jürgen Schürmann
This paper highlights several connections between the polynomial classifier [9] and the multilayer perceptron [8]. Starting from the decision-theoretic approach to pattern classification, both methods are discussed as approximations of the posterior probabilities. In addition to the theoretical comparison, first results for a realistic pattern recognition task, namely the classification of handwritten digits [1], are reported.
international conference on document analysis and recognition | 1997
Jürgen Franke; Joachim Gloger; Alfred Kaltenmeier; Eberhard Mandler
Handwriting recognition systems based on hidden Markov models commonly use a vector quantizer to obtain the required symbol sequence. To achieve better recognition rates, semi-continuous hidden Markov models have been applied. These recognizers need a soft vector quantizer, which superimposes a statistical distribution for symbol generation; in general, Gaussian distributions are used. A disadvantage of this technique is the assumption of a specific distribution, and no proof can be given that this presupposition holds in practice. Therefore, a method which employs no distribution model may achieve some improvement. The paper presents the use of a polynomial classifier as a replacement for the Gaussian classifier in the handwriting recognition system. As the results show, the replacement improves the recognition rate significantly.
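A minimal sketch of the core idea: instead of a distribution-based soft quantizer, a polynomial classifier is trained by least squares on one-hot symbol targets, so its outputs approximate the symbol posteriors without any Gaussian assumption. The codebook, data, and `poly_features` expansion below are hypothetical illustrations, not the paper's actual system.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical setup: 3 codebook symbols, 2-D feature vectors per frame.
centers = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 2.0]])
n_per = 100
X = np.vstack([c + rng.normal(0, 0.5, (n_per, 2)) for c in centers])
y = np.repeat(np.arange(3), n_per)

def poly_features(X):
    # Second-degree polynomial expansion: 1, x1, x2, x1^2, x1*x2, x2^2.
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1**2, x1 * x2, x2**2])

# One-hot targets; least-squares regression makes the outputs approximate
# the posteriors P(symbol | feature vector) -- no assumption is made
# about the class-conditional densities.
T = np.eye(3)[y]
Phi = poly_features(X)
W, *_ = np.linalg.lstsq(Phi, T, rcond=None)

# Soft quantization of a new frame: a vector of symbol weights that can
# feed a semi-continuous HMM instead of a single hard symbol decision.
frame = np.array([[1.8, 0.2]])
scores = poly_features(frame) @ W
soft = np.clip(scores, 0, None)
soft /= soft.sum()
```

The clip-and-renormalize step is one simple way to turn the raw regression outputs into usable nonnegative weights; it is an assumption of this sketch, not a detail taken from the paper.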
international conference on artificial neural networks | 1997
Ingo Graf; Ulrich Kressel; Jürgen Franke
Polynomial support vector machines have shown competitive performance on the problem of handwritten digit recognition. However, there is a large gap in performance versus computing resources between the linear and the quadratic approach. By computing the complete quadratic classifier out of the quadratic support vector machine, a pivot point is found for trading performance against effort. Different selection strategies are presented to reduce the complete quadratic classifier, lowering the required computing and memory resources by a factor of more than ten without affecting generalization performance.
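The step of computing the complete quadratic classifier from the quadratic SVM can be illustrated by expanding the degree-2 kernel sum into explicit constant, linear, and quadratic coefficients; those explicit coefficients are then the candidates for pruning. The support vectors and coefficients below are random placeholders, and the kernel form (s·x + 1)² is an assumption of this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical trained quadratic SVM: support vectors sv, combined
# coefficients beta_i = alpha_i * y_i, bias b, kernel k(s, x) = (s.x + 1)^2.
sv = rng.normal(size=(20, 5))
beta = rng.normal(size=20)
b = 0.1

def kernel_decision(x):
    # Standard SVM decision function as a sum over support vectors.
    return beta @ (sv @ x + 1.0) ** 2 + b

# Expand the kernel sum into an explicit (complete) quadratic classifier:
# (s.x + 1)^2 = 1 + 2*(s.x) + (s.x)^2, summed with weights beta.
w0 = beta.sum() + b               # constant term
w1 = 2.0 * (beta @ sv)            # linear coefficients
Q = (sv * beta[:, None]).T @ sv   # quadratic coefficient matrix

def quadratic_decision(x):
    # Same decision value, but cost no longer depends on the number of
    # support vectors; small entries of Q and w1 can now be pruned to
    # trade generalization performance against computing resources.
    return w0 + w1 @ x + x @ Q @ x

x = rng.normal(size=5)
```

Both forms return the same value for any input; the explicit form makes the resource cost a function of the feature dimension rather than of the support-vector count.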
Lecture Notes in Computer Science | 2004
Ulrich Bohnacker; Jürgen Franke; Heike Mogg-Schneider; Ingrid Renz
The paper introduces two procedures which allow information seekers to inspect large document collections. The first method structures document collections into sensible groups. Here, three different approaches are presented: grouping based on the topology of the collection (i.e. linking and directory structure of intranet documents), grouping based on the content of the documents (i.e. similarity relation), and grouping based on the reader’s behavior when using the document collection. After the formation of groups, the second method supports readers by characterizing text through extracting short and relevant information from single documents and groups. Using statistical approaches, representative keywords of each document and also of the document groups are calculated. Later, the most important sentences from single documents and document groups are extracted as summaries. Geared to the different information needs, algorithms for indicative, informative, and thematic summaries are developed. In this process, special care is taken to generate readable and sensible summaries. Finally, we present three applications which utilize these procedures to fulfill various information-seeking needs.
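The statistical computation of representative keywords mentioned above can be sketched with a plain tf-idf score over a toy collection. The documents, tokenization, and scoring details are hypothetical and stand in only for the general idea, not for the paper's actual procedures.

```python
import math
from collections import Counter

# Hypothetical mini-collection of three one-line "documents".
docs = [
    "grid computing enables distributed data mining on the grid",
    "handwriting recognition with statistical classifiers",
    "document grouping and summarization for intranet collections",
]
tokenized = [d.split() for d in docs]
n_docs = len(docs)

# Document frequency: in how many documents each word appears.
df = Counter(w for toks in tokenized for w in set(toks))

def keywords(toks, k=3):
    # Score each word by term frequency times inverse document frequency;
    # the top-k words serve as the document's representative keywords.
    tf = Counter(toks)
    scores = {w: tf[w] / len(toks) * math.log(n_docs / df[w]) for w in tf}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

The same scoring extends naturally from single documents to document groups by pooling the group's tokens before computing term frequencies.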
parallel and distributed processing techniques and applications | 2004
Vlado Stankovski; Michael May; Jürgen Franke; Assaf Schuster; Damian McCourt; Werner Dubitzky
riao conference | 2000
Ulrich Bohnacker; Lars Dehning; Jürgen Franke; Ingrid Renz; René Schneider