Michael Hüsken
Ruhr University Bochum
Publications
Featured research published by Michael Hüsken.
Neurocomputing | 2003
Christian Igel; Michael Hüsken
The Rprop algorithm proposed by Riedmiller and Braun is one of the best-performing first-order learning methods for neural networks. We discuss modifications of this algorithm that improve its learning speed. The new optimization methods are empirically compared to the existing Rprop variants, the conjugate gradient method, Quickprop, and the BFGS algorithm on a set of neural network benchmark problems. The improved Rprop outperforms the other methods; only BFGS performs better in the later stages of learning on some of the test problems. For the analysis of the local search behavior, we compare the Rprop algorithms on general hyperparabolic error landscapes, where the new variants confirm their improvement.
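One of the variants proposed in this work is iRprop+, which combines Rprop's sign-based, per-weight step-size adaptation with weight-backtracking that reverts a step only if the error increased. A minimal sketch in Python follows; the step-size constants are the commonly cited Rprop defaults, and the quadratic test function at the end is an illustrative stand-in for the hyperparabolic error landscapes mentioned above, not the paper's benchmark setup.

import numpy as np

def irprop_plus(grad_fn, loss_fn, w, steps=100, eta_plus=1.2, eta_minus=0.5,
                delta_min=1e-6, delta_max=50.0, delta0=0.1):
    """iRprop+: sign-based updates with per-weight step-size adaptation
    and weight-backtracking when the error increases."""
    delta = np.full_like(w, delta0)       # per-weight step sizes
    prev_grad = np.zeros_like(w)
    prev_step = np.zeros_like(w)
    prev_loss = np.inf
    for _ in range(steps):
        g, loss = grad_fn(w), loss_fn(w)
        same = prev_grad * g > 0          # gradient kept its sign: speed up
        flip = prev_grad * g < 0          # gradient changed sign: slow down
        delta[same] = np.minimum(delta[same] * eta_plus, delta_max)
        delta[flip] = np.maximum(delta[flip] * eta_minus, delta_min)
        if loss > prev_loss:              # backtracking: undo flipped steps
            w[flip] += prev_step[flip]
        g = np.where(flip, 0.0, g)        # suppress an update right after a flip
        step = np.sign(g) * delta
        w -= step
        prev_grad, prev_step, prev_loss = g, step, loss
    return w

# illustrative hyperparabolic error landscape f(w) = sum(a * w**2)
a = np.array([1.0, 10.0, 100.0])
w = irprop_plus(lambda w: 2 * a * w, lambda w: float(np.sum(a * w ** 2)),
                np.array([1.0, 1.0, 1.0]))
print(w)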
Neurocomputing | 2003
Michael Hüsken; Peter Stagge
Recurrent neural networks (RNN) are a widely used tool for the prediction of time series. In this paper we use the dynamic behaviour of the RNN to categorize input sequences into different specified classes. These two tasks do not seem to have much in common. However, the prediction task strongly supports the development of a suitable internal structure, representing the main features of the input sequence, to solve the classification problem. Therefore, the speed and success of the training as well as the generalization ability of the trained RNN are significantly improved. The trained RNN provides good classification performance and enables the user to efficiently assess the reliability of the classification result.
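The mechanism, an auxiliary prediction task shaping the recurrent network's internal state so that it also supports classification, can be sketched as a shared recurrent core with two output heads trained jointly. The PyTorch model below is an illustrative assumption (the layer sizes, the plain Elman-style nn.RNN, and the equal loss weighting are not taken from the paper):

import torch
import torch.nn as nn

class PredictClassifyRNN(nn.Module):
    """Shared recurrent core with two heads: next-step prediction and
    sequence classification (a sketch, not the paper's exact model)."""
    def __init__(self, n_in=1, n_hidden=16, n_classes=2):
        super().__init__()
        self.rnn = nn.RNN(n_in, n_hidden, batch_first=True)
        self.predict = nn.Linear(n_hidden, n_in)        # next series value
        self.classify = nn.Linear(n_hidden, n_classes)  # class of the sequence
    def forward(self, x):                 # x: (batch, time, n_in)
        h_all, h_last = self.rnn(x)
        return self.predict(h_all), self.classify(h_last[-1])

model = PredictClassifyRNN()
x = torch.randn(8, 50, 1)                 # toy batch of input sequences
y_class = torch.randint(0, 2, (8,))       # toy class labels
pred, logits = model(x)
# the prediction target is simply the input shifted by one time step,
# so the auxiliary task needs no extra labels
loss = nn.functional.mse_loss(pred[:, :-1], x[:, 1:]) \
     + nn.functional.cross_entropy(logits, y_class)
loss.backward()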
Soft Computing | 2005
Michael Hüsken; Yaochu Jin; Bernhard Sendhoff
We study the use of neural networks as approximate models for fitness evaluation in evolutionary design optimization. To improve the quality of the neural network models, structure optimization of these networks is performed with respect to two different criteria: one is the commonly used approximation error with respect to all available data, and the other is the ability of the networks to learn different problems from a common class of problems quickly and with high accuracy. Simulation results from turbine blade optimizations using the structurally optimized neural network models are presented to show that the performance of the models can be improved significantly through structure optimization.
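The two selection criteria can be made concrete with a small scikit-learn sketch: a candidate structure (here just the hidden-layer layout) is scored once by its approximation error on all pooled data and once by how well it learns several related problems under a small training budget. The quadratic problem class and all parameter values are illustrative assumptions:

import numpy as np
from sklearn.neural_network import MLPRegressor

def error_on_all_data(hidden, X, y):
    """Criterion 1: plain approximation error on the pooled data."""
    net = MLPRegressor(hidden_layer_sizes=hidden, max_iter=500, random_state=0).fit(X, y)
    return float(np.mean((net.predict(X) - y) ** 2))

def error_on_problem_class(hidden, problems, budget=50):
    """Criterion 2: average error after learning each problem of the
    class with a small training budget (fast, accurate learning)."""
    errs = []
    for X, y in problems:
        net = MLPRegressor(hidden_layer_sizes=hidden, max_iter=budget, random_state=0).fit(X, y)
        errs.append(np.mean((net.predict(X) - y) ** 2))
    return float(np.mean(errs))

# toy problem class: targets y = c * x**2 with a varying coefficient c
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (100, 1))
problems = [(X, c * X[:, 0] ** 2) for c in (0.5, 1.0, 2.0)]
X_all = np.vstack([X for X, _ in problems])
y_all = np.concatenate([y for _, y in problems])
for hidden in [(4,), (16,), (8, 8)]:      # candidate structures
    print(hidden, error_on_all_data(hidden, X_all, y_all),
          error_on_problem_class(hidden, problems))

An evolutionary loop over such structures would then select on one criterion or the other; with the small budget, scikit-learn warns about non-convergence, which is exactly the regime the second criterion probes.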
Connection Science | 2002
Michael Hüsken; Christian Igel; Marc Toussaint
There exist many ideas and assumptions about the development and meaning of modularity in biological and technical neural systems. We empirically study the evolution of connectionist models in the context of modular problems. For this purpose, we define quantitative measures for the degree of modularity and monitor them during evolutionary processes under different constraints. It turns out that the modularity of the problem is reflected by the architecture of adapted systems, although learning can counterbalance some imperfections of the architecture. The demand for fast-learning systems increases the selective pressure towards modularity.
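The paper's measures are defined for the evolved architectures; one plausible measure of this kind (an assumption for illustration, not necessarily the paper's definition) is the fraction of total connection strength that stays within pre-assigned modules:

import numpy as np

def degree_of_modularity(W, modules):
    """Fraction of the total absolute connection weight that stays inside
    a module; 1.0 means fully modular, lower values mean more cross-talk.
    W[i, j]    weight of the connection from unit j to unit i
    modules[i] module label of unit i
    """
    m = np.asarray(modules)
    intra = np.abs(W)[m[:, None] == m[None, :]].sum()
    return intra / np.abs(W).sum()

# two 2-unit modules coupled by one weak cross-module connection
W = np.array([[0.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 0.1, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
print(degree_of_modularity(W, [0, 0, 1, 1]))  # ≈ 0.976

Monitoring such a number over the generations of an evolutionary run is what makes the claim about selective pressure towards modularity quantitative.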
IEEE Symposium on Combinations of Evolutionary Computation and Neural Networks | 2000
Michael Hüsken; Jens Gayko; Bernhard Sendhoff
The main focus of the optimization of artificial neural networks has been the design of a problem-dependent network structure in order to reduce the model complexity and to minimize the model error. Driven by a concrete application, we identify in this paper another desirable property of neural networks: the ability of a network to efficiently solve related problems, denoted as a class of problems. In a more theoretical framework, the aim is to develop neural networks for adaptability: networks that learn (during evolution) to learn (during operation). Evolutionary algorithms have turned out to be a robust method for the optimization of neural networks. As this process is time-consuming, it is also desirable from the perspective of efficiency to design structures that are applicable to many related problems. In this paper, two different approaches to this problem are studied, called the ensemble method and the generation method. We empirically show that averaged Lamarckian inheritance seems to be the most efficient way to optimize networks for problem classes, both for artificial regression problems and for real-world system state diagnosis problems.
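The "averaged Lamarckian inheritance" found to work best can be sketched as follows: a genotype of weights is trained separately on every problem of the class, its fitness is the mean post-training error, and the weights written back into the genotype are the average of the per-problem learned weights. The (1+1)-style loop, the linear learner, and the toy problem class below are illustrative assumptions standing in for the paper's evolutionary algorithm and networks:

import numpy as np

rng = np.random.default_rng(0)

def train(w, X, y, steps=20, lr=0.1):
    """A few steps of gradient descent on the squared error;
    returns the learned weights and the remaining error."""
    for _ in range(steps):
        w = w - lr * 2 * X.T @ (X @ w - y) / len(y)
    return w, float(np.mean((X @ w - y) ** 2))

# toy problem class: linear targets with related true weight vectors
X = rng.normal(size=(50, 3))
problems = [(X, X @ (np.array([1.0, -1.0, 0.5]) + 0.2 * rng.normal(size=3)))
            for _ in range(4)]

def evaluate(w0):
    """Fitness = mean error after training on every problem; the
    Lamarckian genotype is the average of the learned weights."""
    learned, errs = zip(*(train(w0.copy(), X, y) for X, y in problems))
    return np.mean(learned, axis=0), float(np.mean(errs))

w, f = evaluate(rng.normal(size=3))       # (1+1)-style evolution
for _ in range(30):
    child = w + 0.1 * rng.normal(size=3)  # mutate the genotype
    child_back, f_child = evaluate(child)
    if f_child <= f:                      # select, then write back the
        w, f = child_back, f_child        # averaged learned weights
print('mean post-training error:', f)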
Archive | 2005
Yaochu Jin; Michael Hüsken; Markus Olhofer; Bernhard Sendhoff
Approximate models such as neural networks are very helpful in evolutionary optimization when the original fitness function is computationally very expensive. This chapter presents a general introduction to methods for using approximate models in conjunction with the original fitness function. Individual- and generation-based evolution control is introduced to ensure that evolutionary algorithms using approximate fitness functions will converge to the true optimum. Frameworks for managing approximate models with generation-based or individual-based evolution control are described. To improve the approximation quality of the neural networks, techniques for optimizing the structure of neural networks and for generating neural network ensembles are presented. The frameworks are illustrated on benchmark problems as well as on an example of aerodynamic design optimization.
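A minimal sketch of generation-based evolution control: within every cycle of k generations, a fixed number is evaluated with the expensive true fitness (and those evaluations retrain the model), while the remaining generations are evaluated by the approximate model only. The simple evolutionary loop, the sphere function standing in for an expensive simulation, and the nearest-neighbour surrogate are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)

def true_fitness(x):                      # stands in for an expensive simulation
    return float(np.sum(x ** 2))

class NearestNeighbourModel:
    """Toy approximate model: predict the fitness of the closest archived point."""
    def __init__(self):
        self.X, self.y = [], []
    def update(self, x, y):
        self.X.append(x); self.y.append(y)
    def predict(self, x):
        d = [np.linalg.norm(x - xi) for xi in self.X]
        return self.y[int(np.argmin(d))]

model = NearestNeighbourModel()
parent = rng.normal(size=5)
model.update(parent, true_fitness(parent))
k, controlled = 5, 2                      # 2 of every 5 generations are controlled
for gen in range(50):
    offspring = [parent + 0.3 * rng.normal(size=5) for _ in range(10)]
    if gen % k < controlled:              # controlled generation: true evaluations
        fits = [true_fitness(x) for x in offspring]
        for x, f in zip(offspring, fits):
            model.update(x, f)            # 'retrain' the model on true data
    else:                                 # free generation: model evaluations
        fits = [model.predict(x) for x in offspring]
    parent = offspring[int(np.argmin(fits))]
print('final true fitness:', true_fitness(parent))

Individual-based control works analogously, except that the split between true and approximate evaluations is made within each generation's offspring rather than across generations.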
international symposium on neural networks | 2000
Michael Hüsken; Christian Goerick
The success of learning as well as the learning speed of an artificial neural network (ANN) strongly depends on the initial weights. If problem- or domain-specific knowledge exists, it can be transferred to the ANN by means of a special choice of the initial weights. In this paper, we focus on the choice of a set of initial weights well suited to fast and robust learning of any particular problem from a class of related problems. Our evolutionary approach explicitly takes the learning algorithm into consideration in the design of the initial weights. The superior properties of the initial weights resulting from this algorithm are corroborated using a class defined by solving a differential equation with variable boundary conditions.
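The evolutionary search can be sketched as follows: the fitness of a set of initial weights is the error remaining after a small, fixed number of steps of the actual learning algorithm, averaged over problems freshly sampled from the class. The (1,λ)-style loop, the gradient-descent learner, and the sine-family problem class are illustrative assumptions (the paper's class stems from a differential equation with variable boundary conditions):

import numpy as np

rng = np.random.default_rng(1)
X = np.linspace(-1, 1, 40)[:, None]

def sample_problem():
    """One problem from the class: fit sin(a*x) for a random a."""
    return np.sin(rng.uniform(1.0, 3.0) * X)

def learn(params, y, steps=25, lr=0.5):
    """The actual learning algorithm (gradient descent on a one-hidden-layer
    tanh net); returns the error left after `steps` steps."""
    W1, W2 = params
    for _ in range(steps):
        H = np.tanh(X @ W1)
        err = H @ W2 - y
        gW2 = H.T @ err / len(X)
        gW1 = X.T @ ((err @ W2.T) * (1 - H ** 2)) / len(X)
        W1, W2 = W1 - lr * gW1, W2 - lr * gW2
    return float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))

def fitness(params, n_problems=5):
    """Average error after short training on sampled class problems."""
    return np.mean([learn(params, sample_problem()) for _ in range(n_problems)])

# (1,8)-style evolution of the initial weights themselves
parent = (0.5 * rng.normal(size=(1, 8)), 0.5 * rng.normal(size=(8, 1)))
for _ in range(40):
    children = [(W1 + 0.1 * rng.normal(size=W1.shape),
                 W2 + 0.1 * rng.normal(size=W2.shape))
                for W1, W2 in [parent] * 8]
    parent = min(children, key=fitness)
print('class fitness of evolved initial weights:', fitness(parent))

Unlike Lamarckian schemes, nothing learned during an evaluation is written back; only the initial weights themselves are under selection, which matches the goal of transferring class-level knowledge through the starting point of learning.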
ZWF Zeitschrift für wirtschaftlichen Fabrikbetrieb | 2001
Klaus Weinert; Jörn Mehnen; Oliver Webber; Anja M. Busse; Michael Hüsken; Peter Stagge
The special BTA deep-hole drilling process was developed to produce long bores with high surface quality. Because of the very low stiffness of the slender tools involved, problems such as chatter, deviations of the bore centre line, and a wave pattern known as spiralling arise. These surface and form errors constitute considerable damage to the workpiece. Using sensor technology specially adapted to this hard-to-access process, the still largely unknown system dynamics were captured by measurement and then analysed and modelled with statistical and mathematical methods.
Natural Computing | 2000
Christian Igel; Michael Hüsken
genetic and evolutionary computation conference | 2002
Michael Hüsken; Christian Igel