Matthias Rychetsky
Technische Universität Darmstadt
Publications
Featured research published by Matthias Rychetsky.
international symposium on neural networks | 1999
Matthias Rychetsky; Stefan Ortmann; Manfred Glesner
We show the application of large margin classifiers to the real-world problem of engine knock detection. Large margin classifiers, like support vector machines (SVMs) or the Adatron, promise good generalization performance. Furthermore, the support vector approach comes with theoretical bounds (e.g. on the generalization error and learning convergence) that give this technique a firmer foundation than neural network learning algorithms. One drawback of the SVM, and especially of the Adatron, is that they tend to produce classification systems that require a large computational effort at recall. This is because, although the support vector representation is sparse, the number of support vectors that must be evaluated for each classification is still high. Therefore, we propose a method that prunes (removes) the less important support vectors. By an adjustment of the training data and the remaining parts of the classifier, a degradation in performance is avoided.
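As a hedged illustration of the general idea only (not the authors' exact procedure, which also readjusts the remaining classifier after pruning), the sketch below trains an RBF SVM with scikit-learn, discards the support vectors with the smallest dual coefficients, and evaluates the reduced decision function. The dataset and the 25% pruning quantile are illustrative assumptions.

```python
# Sketch of support-vector pruning: drop the support vectors whose dual
# coefficients have the smallest magnitude, then classify with the rest.
# NOTE: the paper also adjusts the remaining classifier; that step is omitted.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

X, y = make_classification(n_samples=400, random_state=0)
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

sv = clf.support_vectors_              # support vectors of the fitted model
alpha = clf.dual_coef_.ravel()         # signed dual coefficients y_i * alpha_i
keep = np.abs(alpha) >= np.quantile(np.abs(alpha), 0.25)  # prune smallest 25%

gamma = 1.0 / (X.shape[1] * X.var())   # matches sklearn's gamma="scale"

def decision(X_new):
    """Decision function using only the retained support vectors."""
    K = rbf_kernel(X_new, sv[keep], gamma=gamma)
    return K @ alpha[keep] + clf.intercept_[0]

pred = (decision(X) > 0).astype(int)
print("accuracy after pruning:", (pred == y).mean())
```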
international symposium on neural networks | 1999
Matthias Rychetsky; Stefan Ortmann; Michael Ullmann; Manfred Glesner
This paper introduces two methods to reduce the training time of large-scale support vector machines (SVMs). To optimize an SVM, a quadratic optimization problem has to be solved. For large-scale applications with many training vectors, this can only be done by splitting the data set into smaller pieces called chunks. Chunking algorithms normally start with a random subset. In this paper we propose two methods that find a better starting subset than a random one and therefore accelerate the optimization process. Both estimate which training vectors are likely to be support vectors in the final SVM. In the input space this is difficult to determine, because the decision surface can have a (nearly) arbitrary shape; therefore, the estimation is done in the high-dimensional projected (feature) space of the SVM.
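One plausible way to estimate likely support vectors in the projected space (an assumption for illustration, not necessarily either of the paper's two methods) is to rank each training point by its kernel-induced feature-space distance to the nearest point of the opposite class and seed the first chunk with the closest ones:

```python
# Hedged sketch: seed the first chunk with points that lie closest to the
# other class in the kernel-induced feature space, using the identity
# d^2(x_i, x_j) = K(i,i) + K(j,j) - 2 K(i,j).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel

X, y = make_classification(n_samples=500, random_state=0)
K = rbf_kernel(X, X)                         # Gram matrix in feature space
d = np.diag(K)
D2 = d[:, None] + d[None, :] - 2.0 * K       # squared feature-space distances

opposite = y[:, None] != y[None, :]          # mask of cross-class pairs
D2_cross = np.where(opposite, D2, np.inf)
margin_proxy = D2_cross.min(axis=1)          # distance to nearest other-class point

chunk_size = 50                              # illustrative chunk size
initial_chunk = np.argsort(margin_proxy)[:chunk_size]
print("indices of likely support vectors:", initial_chunk[:10])
```

Points near the class boundary in feature space are the ones most likely to end up with nonzero dual coefficients, which is why such a ranking can outperform a random starting subset.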
international symposium on neural networks | 1999
Radu Dogaru; Marinel Alangiu; Matthias Rychetsky; Manfred Glesner
In this paper we describe a novel type of adaptive system and compare its representation and classification performance with classical solutions. The main feature of our system is that it combines simple perceptrons with a compact, simple-to-implement nonlinear transform defined as a finite recursion of simple nonmonotonic functions. When such a nonlinear recursion replaces the standard output function of a perceptron-like structure, the capability to represent Boolean functions is enhanced beyond that of the standard linear threshold gate, and arbitrary Boolean functions can be learned. While the use of the nonlinear recursion at the output accounts for compact learning and memorization of arbitrary functions, we found that good generalization is obtained when the nonlinear recursion is placed at the inputs. We thus conclude that the proper addition of a simple nonlinear structure to the well-known linear perceptron removes most of its drawbacks; the resulting structure is compact, easy to implement, and functionally equivalent to more sophisticated neural systems.
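A toy example of the underlying principle (our own construction, not the paper's exact recursion): replacing the monotone threshold of a single linear unit with a simple nonmonotonic "tent" function is already enough to compute XOR, which no linear threshold gate can represent.

```python
# A single linear unit with a nonmonotonic output function computes XOR.
import numpy as np

def tent(s):
    """Nonmonotonic output: peaks at s = 1, returns 0 at s = 0 and s = 2."""
    return np.maximum(0.0, 1.0 - np.abs(s - 1.0))

w = np.array([1.0, 1.0])                 # plain perceptron weights, no hidden layer
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    s = np.dot(w, x)                     # weighted sum: 0, 1, 1, 2
    print(x, "->", int(round(tent(s))))  # prints the XOR truth table
```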
international symposium on neural networks | 1999
Stefan Ortmann; Matthias Rychetsky; Manfred Glesner
We present the results of an empirical comparison of neural network learning methods that reduce the estimation variance by combining the outputs of individual networks. Such network topologies are also known as committees or ensembles of networks. As an alternative, we examine constructive networks, which adapt their internal complexity and thereby reduce the overfitting problem automatically. Both classes of networks are compared within the framework of an engine knock detection system, taking into account the generalization performance, the network size, and the computational load of the training procedure.
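A minimal sketch of the committee idea the comparison relies on (an illustrative setup, not the knock-detection system itself): several independently initialized networks are trained on the same data and their outputs are averaged, which reduces the variance component of the estimation error.

```python
# Committee / ensemble of networks: average the predicted probabilities of
# several independently initialized MLPs. Dataset and sizes are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=600, random_state=0)
members = [
    MLPClassifier(hidden_layer_sizes=(10,), max_iter=500, random_state=seed).fit(X, y)
    for seed in range(5)                    # five differently initialized networks
]
proba = np.mean([m.predict_proba(X)[:, 1] for m in members], axis=0)
pred = (proba > 0.5).astype(int)
print("committee accuracy:", (pred == y).mean())
```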
Proceedings. 24th EUROMICRO Conference (Cat. No.98EX204) | 1998
Manfred Glesner; Matthias Rychetsky; Stefan Ortmann
Computational intelligence and its applications have developed dynamically in recent years. Research areas such as fuzzy logic, neural networks, and evolutionary computation have demonstrated their power on a wide range of problems, e.g. in pattern recognition, system control, system diagnosis, and intelligent signal processing. We give a short review of the developments in this area over the last years. We show that a developer of dedicated hardware for computational intelligence should be aware of the problems involved, i.e. such hardware has to compete with mainstream microcomputer implementations of the same techniques. Furthermore, we point out some directions for promising research and possibilities for system improvements, especially in the area of neural network algorithms and systems research.
SAE transactions | 1998
Stefan Ortmann; Matthias Rychetsky; Manfred Glesner; Riccardo Groppo; Paolo Tubetti; Gianluca Morra
Sensors Update | 1998
Marc Theisen; A. Steudel; Matthias Rychetsky; Manfred Glesner
international conference on machine learning | 2000
Matthias Rychetsky; John Shawe-Taylor; Manfred Glesner
Natural Computing | 1998
Stefan Ortmann; Matthias Rychetsky; Manfred Glesner
Natural Computing | 1998
Matthias Rychetsky; Stefan Ortmann; Manfred Glesner