
Publication


Featured research published by Matthias Rychetsky.


International Symposium on Neural Networks | 1999

Support vector approaches for engine knock detection

Matthias Rychetsky; Stefan Ortmann; Manfred Glesner

We show the application of large margin classifiers to the real-world problem of engine knock detection. Large margin classifiers, such as support vector machines (SVMs) or the Adatron, promise good generalization performance. Furthermore, the support vector approach comes with theoretical bounds (e.g. on the generalization error and learning convergence) that give this technique a firmer foundation than neural network learning algorithms. One drawback of SVMs, and especially of the Adatron, is that they tend to produce classification systems that require large computational effort at recall: although the support vector solution is formally sparse, the number of support vectors that must be evaluated for each classification is still high. We therefore propose a method that prunes (removes) the less important support vectors. By readjusting the remaining parts of the classifier on the training data, performance degradation is avoided.
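The pruning idea above can be illustrated with a minimal numpy sketch. This is not the authors' algorithm: the toy support vectors, dual weights, and the pruning threshold are all invented, and the paper's readjustment step is omitted. The sketch only shows the core mechanism — dropping support vectors with small dual coefficients while the decision sign is preserved:

```python
import numpy as np

def decision(x, svs, alphas, ys, b, gamma=1.0):
    """RBF-kernel decision value f(x) = sum_i alpha_i y_i K(sv_i, x) + b."""
    k = np.exp(-gamma * (svs - x) ** 2)
    return float(np.sum(alphas * ys * k) + b)

# Toy 1-D "trained" classifier: 5 support vectors with dual weights.
svs    = np.array([-2.0, -1.0, 0.2, 1.0, 2.0])
ys     = np.array([-1.0, -1.0, 1.0, 1.0, 1.0])
alphas = np.array([ 0.9,  0.05, 0.8, 0.7, 0.02])  # two SVs barely matter
b = 0.1

# Prune: keep only support vectors whose |alpha| exceeds a (made-up) threshold.
keep = np.abs(alphas) > 0.1
svs_p, ys_p, alphas_p = svs[keep], ys[keep], alphas[keep]

for x in (-2.5, 2.5):
    full   = decision(x, svs, alphas, ys, b)
    pruned = decision(x, svs_p, alphas_p, ys_p, b)
    print(x, np.sign(full) == np.sign(pruned))
```

Recall cost drops from 5 kernel evaluations per input to 3, which is the point of the technique.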


International Symposium on Neural Networks | 1999

Accelerated training of support vector machines

Matthias Rychetsky; Stefan Ortmann; Michael Ullmann; Manfred Glesner

This paper introduces two methods to reduce the training time of large-scale support vector machines (SVMs). To train an SVM, a quadratic optimization problem has to be solved. For large-scale applications with many training vectors this is only feasible by splitting the data set into smaller pieces called chunks. Chunking algorithms normally start with a random subset. In this paper we propose two methods that find a better starting subset than a random one and therefore accelerate the optimization process. Both estimate which training vectors are likely to be support vectors in the final SVM. In the input space this is difficult to determine, because the decision surface can have a (nearly) arbitrary shape; it is therefore done in the high-dimensional projected space of the SVM.
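One plausible way to estimate likely support vectors in the projected space — a hedged sketch, not necessarily either of the paper's two methods — is to score each training point by its feature-space distance to the nearest point of the opposite class, exploiting the kernel identity ||phi(a) - phi(b)||² = K(a,a) - 2K(a,b) + K(b,b). Points near the class boundary get small scores and seed the first chunk. The data and chunk size below are invented:

```python
import numpy as np

def rbf(a, b, gamma=0.5):
    """RBF kernel K(a, b) between one point a and an array of points b."""
    return np.exp(-gamma * np.sum((a - b) ** 2, axis=-1))

# Toy two-class data in 2-D; points 4 and 5 sit near the class boundary.
X = np.array([[0.0, 0.0], [0.2, 0.1], [3.0, 3.0],
              [2.9, 3.2], [1.0, 1.1], [2.0, 1.9]])
y = np.array([-1, -1, 1, 1, -1, 1])

def nearest_enemy_dist(i):
    """Feature-space distance to the closest opposite-class point.

    For an RBF kernel K(a,a) = 1, so ||phi(a)-phi(b)||^2 = 2 - 2 K(a,b).
    """
    enemies = X[y != y[i]]
    return np.min(2.0 - 2.0 * rbf(X[i], enemies))

scores = np.array([nearest_enemy_dist(i) for i in range(len(X))])
start = np.argsort(scores)[:2]   # the 2 points closest to the other class
print(sorted(start.tolist()))
```

The two boundary points are selected as the starting chunk, while the well-separated points are deferred to later chunking iterations.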


International Symposium on Neural Networks | 1999

Perceptrons revisited: the addition of a non-monotone recursion greatly enhances their representation and classification properties

Radu Dogaru; Marinel Alangiu; Matthias Rychetsky; Manfred Glesner

In this paper we describe a novel type of adaptive system and compare its representation and classification performance with classical solutions. The main feature of our system is that it combines simple perceptrons with a compact, easy-to-implement nonlinear transform defined as a finite recursion of simple non-monotone functions. When such a nonlinear recursion replaces the standard output function of a perceptron-like structure, the capability to represent Boolean functions is enhanced beyond that of the standard linear threshold gate, and arbitrary Boolean functions can be learned. While the use of the nonlinear recursion at the output accounts for compact learning and memorization of arbitrary functions, we found that good generalization is obtained when the nonlinear recursion is placed at the inputs. We conclude that the proper addition of a simple nonlinear structure to the well-known linear perceptron removes most of its drawbacks; the resulting structure is compact, easy to implement, and functionally equivalent to more sophisticated neural systems.
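The key claim — a non-monotone output function lifts a perceptron past the linear-threshold-gate barrier — can be seen on parity. The sketch below uses a cosine as a stand-in for the paper's non-monotone recursion (the actual recursion is not reproduced here): a plain threshold on the weighted sum cannot compute XOR, but the same weighted sum passed through a non-monotone function can.

```python
import numpy as np

def nonmonotone_gate(x, w, b):
    """Perceptron sum followed by a non-monotone output function.

    cos(pi * s) is a stand-in for the paper's finite non-monotone recursion:
    it rises and falls with s, which a threshold function cannot do.
    """
    s = np.dot(w, x) + b
    return 1 if np.cos(np.pi * s) < 0 else 0

# Plain sum of the inputs: weights (1, 1), bias 0. A single linear
# threshold gate with these (or any) weights cannot realize XOR.
w, b = np.array([1.0, 1.0]), 0.0
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, nonmonotone_gate(np.array(x), w, b))
```

The sum s takes values 0, 1, 2; cos(pi*s) is negative only at s = 1, so the gate outputs exactly the XOR truth table — with no hidden layer.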


International Symposium on Neural Networks | 1999

On the variance reduction of neural networks: experimental results for an automotive application

Stefan Ortmann; Matthias Rychetsky; Manfred Glesner

We present the results of an empirical comparison of neural network learning methods that reduce estimation variance by combining the outputs of individual networks; such topologies are also known as committees or ensembles of networks. As an alternative, we examine constructive networks, which adapt their internal complexity and thereby reduce the overfitting problem automatically. Both classes of networks are compared within the framework of an engine knock detection system, taking into account generalization performance, network size, and the computational load of the training procedure.
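The variance-reduction effect of a committee can be demonstrated with a small Monte Carlo sketch. This is an idealization, not the paper's experiment: each "network" is modeled as the true value plus independent zero-mean noise, under which averaging M members divides the prediction variance by roughly M. (Real ensemble members are correlated, so the gain observed in practice is smaller.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Model: each committee member predicts the true value plus independent
# Gaussian noise (a stand-in for independently trained networks).
true_value = 1.0
n_members, n_trials = 10, 20000
preds = true_value + rng.normal(0.0, 1.0, size=(n_trials, n_members))

single_var   = preds[:, 0].var()          # variance of one network
ensemble_var = preds.mean(axis=1).var()   # variance of the committee average
print(single_var, ensemble_var)
```

With 10 independent members the committee's variance comes out close to one tenth of a single network's, matching the 1/M rule for uncorrelated errors.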


Proceedings. 24th EUROMICRO Conference (Cat. No.98EX204) | 1998

Advanced hardware and software architectures for computational intelligence: application to a real-world problem

Manfred Glesner; Matthias Rychetsky; Stefan Ortmann

Computational intelligence and its applications have developed dynamically in recent years. Research areas such as fuzzy logic, neural networks, and evolutionary computation have demonstrated their power on a large number of problems, e.g. in pattern recognition, system control, system diagnosis, and intelligent signal processing. We give a short review of the developments in this area over the last years. We show that developers of dedicated hardware for computational intelligence should be aware of a key problem: they have to compete with mainstream microcomputer implementations of the same techniques. Furthermore, we point out some promising research directions and possibilities for system improvements, especially in the area of neural network algorithms and systems.


SAE Transactions | 1998

Engine Knock Estimation Using Neural Networks Based on a Real-World Database

Stefan Ortmann; Matthias Rychetsky; Manfred Glesner; Riccardo Groppo; Paolo Tubetti; Gianluca Morra


Sensors Update | 1998

Fuzzy Logic and Neuro-Systems Assisted Intelligent Sensors

Marc Theisen; A. Steudel; Matthias Rychetsky; Manfred Glesner


International Conference on Machine Learning | 2000

Direct Bayes Point Machines

Matthias Rychetsky; John Shawe-Taylor; Manfred Glesner


Natural Computing | 1998

Constructive Learning of a Sub-Feature Detector Network by Means of Prediction Risk Estimation.

Stefan Ortmann; Matthias Rychetsky; Manfred Glesner


Natural Computing | 1998

Pruning and Regularization Techniques for Feed Forward Nets Applied on a Real World Data Base.

Matthias Rychetsky; Stefan Ortmann; Manfred Glesner

Collaboration


Dive into Matthias Rychetsky's collaborations.

Top Co-Authors

Manfred Glesner (Technische Universität Darmstadt)

Stefan Ortmann (Technische Universität Darmstadt)

A. Steudel (Technische Universität Darmstadt)

Marc Theisen (Technische Universität Darmstadt)

Michael Ullmann (Technische Universität Darmstadt)

Radu Dogaru (Politehnica University of Bucharest)