Michael Herrmann
Max Planck Society
Publications
Featured research published by Michael Herrmann.
Springer Berlin Heidelberg | 1999
Michael Herrmann; Ralf Der
For some reinforcement learning algorithms the optimality of the generated strategies can be proven. In practice, however, restrictions in the number of training examples and in computational resources compromise optimality. The efficiency of the algorithms depends strikingly on the formulation of the task, including the choice of the learning parameters and the representation of the system states. We propose here to improve learning efficiency by an adaptive classification of the system states, which tends to group states together if they are similar and acquire the same action during learning. The approach is illustrated by two simple examples. Two further applications serve as a test of the proposed algorithm.
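The state-aggregation idea in this abstract can be sketched in a few lines. The function below groups states whose feature vectors are close and whose current greedy actions coincide, so that grouped states can share one Q-value row. The similarity threshold, the feature-based test, and all names are illustrative assumptions for this sketch, not the authors' algorithm:

```python
import numpy as np

def aggregate_states(q_table, features, sim_threshold=0.5):
    """Group states that are similar in feature space AND currently
    receive the same greedy action under q_table.

    Toy illustration only: the grouping rule and threshold are
    invented for this example."""
    greedy = np.argmax(q_table, axis=1)   # current greedy action per state
    groups = []                           # each group could share one Q-row
    for s in range(len(q_table)):
        for g in groups:
            rep = g[0]                    # compare against group representative
            if (greedy[rep] == greedy[s]
                    and np.linalg.norm(features[rep] - features[s]) < sim_threshold):
                g.append(s)
                break
        else:
            groups.append([s])
    return groups

q = np.array([[1.0, 0.0],   # state 0: greedy action 0
              [0.9, 0.1],   # state 1: greedy action 0, similar features
              [0.0, 1.0]])  # state 2: greedy action 1
features = np.array([[0.0], [0.1], [5.0]])
groups = aggregate_states(q, features)
```

Here states 0 and 1 merge into one group while state 2 stays separate, shrinking the effective state space the learner has to cover.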
international conference of the ieee engineering in medicine and biology society | 2013
Sebastian Amsüss; Liliana Paredes; Nina Rudigkeit; Bernhard Graimann; Michael Herrmann; Dario Farina
Long-term functioning of a hand prosthesis is crucial for its acceptance by patients with upper limb deficit. In this study the day-to-day reliability of the performance of pattern classification approaches based on surface electromyography (sEMG) signals for the control of upper limb prostheses was investigated. Recordings of sEMG from the forearm muscles were obtained across five consecutive days from five healthy subjects. It was demonstrated that the classification performance decreased monotonically, on average by 4.1% per day. It was also found that the accumulated error was confined to three of the eight movement classes investigated. This contribution gives insight into the long-term behavior of pattern classification, which is crucial for commercial viability.
Springer London | 1995
Michael Herrmann; H.-U. Bauer; Ralf Der
The perceptual magnet effect describes an increased generalization capability in the perception of vowels when the perceived vowels are prototypical. Here we propose an unsupervised, adaptive neural network model which makes it possible to control the relation between stimulus density and generalization capability, and which can account for the perceptual magnet effect. Our model is based on a modification of the self-organizing feature map algorithm and includes local variations of adaptability. Numerical and analytical results for the model are given, together with a brief discussion of other possible domains of application for the model.
international symposium on neural networks | 1995
Michael Herrmann
Self-organizing feature maps with self-determined local neighborhood widths are applied to construct principal manifolds of data distributions. This task exemplifies the problem of learning the learning parameters of neural networks. The proposed algorithm is based upon analytical results on phase transitions in self-organizing feature maps available for idealized situations. Illustrative simulations demonstrate that deviations from the theoretically studied situation are compensated adaptively and that the capability of topology preservation is crucial for avoiding overfitting effects. Further, the relevance of the parameter learning scheme for hierarchical feature maps is stated.
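For readers unfamiliar with the underlying machinery, here is a minimal self-organizing feature map fitted to two-dimensional data, so that the chain of units traces the data's principal curve. It uses a single, globally annealed neighborhood width; the paper's contribution, self-determined local widths, would replace the scalar `sigma` below with one value per unit. All parameter values are illustrative:

```python
import numpy as np

def train_som(data, n_units=10, epochs=50, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal 1-D Kohonen map fitted to 2-D data.

    Simplification for illustration: one global, exponentially
    annealed learning rate and neighborhood width (the paper instead
    determines a local width per unit)."""
    rng = np.random.default_rng(seed)
    w = data[rng.choice(len(data), n_units, replace=False)].copy()  # init from data
    for t in range(epochs):
        lr = lr0 * (0.01 / lr0) ** (t / epochs)          # anneal learning rate
        sigma = sigma0 * (0.5 / sigma0) ** (t / epochs)  # anneal neighborhood width
        for x in data[rng.permutation(len(data))]:
            winner = np.argmin(np.linalg.norm(w - x, axis=1))
            # Gaussian neighborhood over the 1-D lattice of units
            h = np.exp(-0.5 * ((np.arange(n_units) - winner) / sigma) ** 2)
            w += lr * h[:, None] * (x - w)
    return w

# Noisy samples along the diagonal y = x; the trained units settle near it.
t = np.linspace(0, 1, 200)
rng = np.random.default_rng(1)
data = np.stack([t, t], axis=1) + rng.normal(0, 0.02, (200, 2))
units = train_som(data)
```

After training, the unit chain approximates the one-dimensional principal manifold of the cloud; with a fixed global width this only works when the width is annealed on a suitable schedule, which is exactly the parameter-tuning problem the paper addresses.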
international conference on artificial neural networks | 1998
N. Mayer; Michael Herrmann; Hans-Ulrich Bauer; Theo Geisel
Self-organizing maps have been successfully used to model map formation in the visual cortex of mammals. When natural images are applied as stimuli, properties of the maps obtained for low-dimensional input manifolds, such as retinotopy, are not reproduced equally well. The present study points to the virtues of the adaptive subspace self-organizing map (ASSOM) in modeling neural maps. Since the representation of position and orientation and that of stimulus phase are automatically mapped to different hierarchical levels of the ASSOM, topography is established for orientation and position, but not for phase. This agrees with evidence for the absence of smooth phase maps. Further, we show that some biologically implausible conditions of the ASSOM rule can be relaxed.
International Journal of Adaptive Control and Signal Processing | 1997
Michael Funke; Michael Herrmann; Ralf Der
A simple but efficient neural-network-based algorithm for non-linear control of chaotic systems is presented. The scheme relies on the method proposed by Ott et al. (Phys. Rev. Lett., 64, 1196 (1990)) to stabilize unstable periodic orbits by appropriate small changes in a control parameter. In contrast to that method, our approach does not require an analytical description of the system evolution. The dynamics is evaluated by a self-organizing Kohonen network with an altered learning rule, which is able to learn the map of the system and to determine the positions of unstable periodic orbits of a given period. At the end of learning, a set of control neurons is generated which targets the system along a quasi-optimal path towards the orbit. Besides its intrinsic tolerance against weak noise, the main advantage of the algorithm is its ability to take into account system constraints that occur in practical applications. The mean value of the control parameter and the range of allowed changes can be chosen in advance, and if more than one fixed point exists, the algorithm adapts to the most appropriate one in terms of control effort.
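The Ott et al. scheme that the paper builds on is easy to state for a concrete system. The sketch below stabilizes the unstable fixed point of the logistic map by small parameter kicks computed from the analytically known linearization; the paper's point is precisely that a Kohonen network can learn this information from data instead. Parameter values are illustrative:

```python
def controlled_logistic(r0=3.9, n_steps=5000, dr_max=0.03):
    """OGY-style control of the logistic map x -> r*x*(1-x).

    Textbook illustration of the method the paper builds on: when the
    chaotic orbit drifts close to the unstable fixed point, nudge the
    parameter r so the next deviation vanishes to first order."""
    x_star = 1.0 - 1.0 / r0        # unstable fixed point
    lam = 2.0 - r0                 # df/dx at x_star (|lam| > 1: unstable)
    g = x_star * (1.0 - x_star)    # df/dr at x_star
    window = dr_max * g / abs(lam) # region where the kick stays within dr_max
    x, traj = 0.4, []
    for _ in range(n_steps):
        dx = x - x_star
        # Apply a small kick only inside the capture window, else do nothing.
        dr = -lam * dx / g if abs(dx) < window else 0.0
        x = (r0 + dr) * x * (1.0 - x)
        traj.append(x)
    return traj, x_star
```

The orbit wanders chaotically until it enters the narrow capture window, after which tiny kicks (bounded by `dr_max`) pin it to the fixed point. The advance-chosen parameter range mentioned in the abstract corresponds to `dr_max` here.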
Kohonen Maps | 1999
Ralf Der; Michael Herrmann
Kohonen's self-organizing map (SOM) has immense potential as a universal tool of nonlinear data analysis. From the practical point of view, control parameters like the learning rate and the neighborhood width need special attention in order to exploit the possibilities of the approach. In Kohonen's self-organizing feature map, the control parameters are the learning rate ɛ and the neighborhood width σ. The learning rate controls the plasticity of the map, since it defines the attention that a neuron attributes to a single stimulus. By individually tuning the neural attention, the magnification factor of the map can be controlled locally, such that maps minimizing particular error criteria are achieved. In another approach, the plasticity of the map is improved by a heuristic scheme for choosing optimal learning step lengths. One of the most prominent applications of the SOM is dimension reduction of noisy data. Mathematically, this corresponds to the problem of extracting principal curves (PCs), or principal manifolds (PMs) in the general case, from higher-dimensional data distributions.
international symposium on neural networks | 1996
Ralf Der; G. Balzuweit; Michael Herrmann
We study the extraction of principal manifolds (PMs) in high-dimensional spaces with modified self-organizing feature maps. Our algorithm embeds a lower-dimensional lattice into a high-dimensional space without topology violations by tuning the neighborhood widths locally. Topology preservation, however, is not sufficient for determining PMs. It still allows for considerable deviations from the PM and is rather unreliable in the case of sparse data sets. These two problems are solved by the introduction of a new principle exploiting the specific dynamical properties of the first-order phase transition induced by dimensional conflicts.
Archive | 1998
Michael Herrmann; Klaus Pawelzik; Theo Geisel
We present a framework for generating representations of space in an autonomous agent which does not obtain any direct information about its location. Instead, the algorithm relies exclusively on sensory input and internal estimates of actions. The activations within a neural network are propagated in time depending on these action estimates. Sensory input connections are adapted according to a Hebbian learning rule derived from the one-step-ahead prediction error on sensory inputs. During exploration of the environment the respective cells develop location and direction selectivity even when relying on highly ambiguous stimuli.
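The structure of such a prediction-driven learner can be sketched for a toy world. Below, an agent walks on a ring of positions; an internal state vector is advanced by the (internally estimated) action, and sensory weights are trained to predict the next observation from that state. The delta-rule update is a simplified stand-in for the Hebbian rule in the paper, and the one-hot stimuli are also a simplification (the paper stresses that the stimuli may be highly ambiguous):

```python
import numpy as np

def learn_place_code(n=8, n_steps=2000, eta=0.1, seed=0):
    """Toy prediction-driven spatial learning on a ring of n positions.

    Simplifications for illustration: one internal cell per position,
    one-hot observations, and an error-driven delta rule in place of
    the paper's Hebbian rule."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0, 0.1, (n, n))   # sensory prediction weights
    a = np.zeros(n); a[0] = 1.0      # internal state estimate
    pos = 0
    errors = []
    for _ in range(n_steps):
        step = rng.choice([-1, 1])   # random exploratory action
        pos = (pos + step) % n
        a = np.roll(a, step)         # propagate state by the estimated action
        s = np.zeros(n); s[pos] = 1.0            # next sensory observation
        err = s - W @ a                          # one-step prediction error
        W += eta * np.outer(err, a)              # error-driven weight update
        errors.append(float(err @ err))
    return W, errors
```

As the prediction error falls, each internal cell's outgoing weights concentrate on one position, i.e. the cells become location-selective, which is the qualitative outcome the abstract describes.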
international symposium on neural networks | 1996
Michael Herrmann; Ralf Der; G. Balzuweit
Based on earlier work on self-organizing maps with adaptive local neighborhood widths suitable for construction of principal manifolds, we propose an algorithm for hierarchical maps of heterogeneous high-dimensional data onto a structurally similar output space. Instead of a fixed output grid a network structure evolves that is locally orthogonal, but globally shaped by prominent data features. These features form principal manifolds in subspaces being determined by earlier hierarchical levels. The algorithm allows for an efficient separation of the interdependent learning tasks of acquiring optimal maps, learning parameters, and network structure.