Günther Palm
University of Ulm
Publications
Featured research published by Günther Palm.
Biological Cybernetics | 1980
Günther Palm
The information-storing capacity of certain associative and auto-associative memories is calculated. For example, in a 100×100 matrix of 1-bit storage elements, more than 6,500 bits can be stored associatively, and more than 688,000 bits in a 1,000×1,000 matrix. Asymptotically, the storage capacity of an associative memory increases proportionally to the number of storage elements. The usefulness of associative memories, as opposed to conventional listing memories, is discussed — especially in connection with brain modelling.
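As a concrete illustration of the kind of associative storage analyzed here, the following is a minimal Python sketch of a binary (Willshaw-style) associative matrix memory with clipped Hebbian storage. The pattern sizes, sparsity, and threshold rule are illustrative assumptions, not the paper's exact construction.

```python
# Minimal sketch of a binary associative matrix memory, assuming sparse
# random patterns; all sizes and names are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, k, M = 100, 5, 60          # units per layer, active units per pattern, stored pairs

def sparse_pattern():
    v = np.zeros(n, dtype=np.uint8)
    v[rng.choice(n, k, replace=False)] = 1
    return v

xs = [sparse_pattern() for _ in range(M)]   # address patterns
ys = [sparse_pattern() for _ in range(M)]   # content patterns

# Storage: clipped Hebbian learning, W[i, j] = 1 if x_i and y_j were ever co-active.
W = np.zeros((n, n), dtype=np.uint8)
for x, y in zip(xs, ys):
    W |= np.outer(x, y)

# Retrieval: threshold the dendritic sums at the number of active input units.
def retrieve(x):
    return (x @ W >= x.sum()).astype(np.uint8)

errors = sum(np.sum(retrieve(x) != y) for x, y in zip(xs, ys))
print(f"memory load: {W.mean():.2f}, total retrieval errors: {errors}")
```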
Neural Networks | 2001
Friedhelm Schwenker; Hans A. Kestler; Günther Palm
In this paper, learning algorithms for radial basis function (RBF) networks are discussed. Whereas multilayer perceptrons (MLPs) are typically trained with backpropagation algorithms, starting the training procedure from a random initialization of the MLP's parameters, an RBF network may be trained in many different ways. We categorize these RBF training methods into one-, two-, and three-phase learning schemes. Two-phase RBF learning is a very common learning scheme: the two layers of an RBF network are learnt separately; first the RBF layer is trained, including the adaptation of centers and scaling parameters, and then the weights of the output layer are adapted. RBF centers may be trained by clustering, vector quantization, and classification tree algorithms, and the output layer by supervised learning (through gradient descent or a pseudo-inverse solution). Results from numerical experiments with RBF classifiers trained by two-phase learning are presented for three completely different pattern recognition applications: (a) the classification of 3D visual objects; (b) the recognition of hand-written digits (2D objects); and (c) the categorization of high-resolution electrocardiograms given as time series (1D objects) and as sets of features extracted from these time series. In these applications, the performance of RBF classifiers trained with two-phase learning can be improved through a third, backpropagation-like training phase that adapts the whole set of parameters (RBF centers, scaling parameters, and output layer weights) simultaneously; we call this three-phase learning in RBF networks. A practical advantage of two- and three-phase learning in RBF networks is the possibility of using unlabeled training data for the first training phase. Support vector (SV) learning in RBF networks is a different learning approach. In this context, SV learning can be considered a special type of one-phase learning, where only the output layer weights of the RBF network are calculated and the RBF centers are restricted to be a subset of the training data. Numerical experiments with several classifier schemes, including k-nearest-neighbor, learning vector quantization, and RBF classifiers trained through two-phase, three-phase, and support vector learning, are given. The performance of the RBF classifiers trained through SV learning and three-phase learning is superior to that of two-phase learning, but SV learning often leads to complex network structures, since the number of support vectors is not a small fraction of the total number of data points.
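To make the two-phase scheme concrete, here is a minimal Python sketch: phase one places Gaussian RBF centers by (a hand-rolled) k-means clustering, phase two solves the linear output layer by pseudo-inverse. The toy data, the shared scaling parameter, and all names are illustrative assumptions, not the paper's experimental setup.

```python
# A minimal sketch of two-phase RBF training, assuming Gaussian basis functions.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))                        # toy inputs
T = (X[:, 0] * X[:, 1] > 0).astype(float)[:, None]   # toy binary targets

# Phase 1 (unsupervised): place RBF centers by k-means clustering.
def kmeans(X, k, iters=20):
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        lab = d.argmin(1)
        for j in range(k):
            if (lab == j).any():
                C[j] = X[lab == j].mean(0)
    return C

C = kmeans(X, k=10)
sigma = 1.0                                          # shared scaling parameter

# Phase 2 (supervised): solve the linear output layer by pseudo-inverse.
Phi = np.exp(-((X[:, None, :] - C[None, :, :]) ** 2).sum(-1) / (2 * sigma**2))
W = np.linalg.pinv(Phi) @ T

pred = (Phi @ W > 0.5).astype(float)
print("training accuracy:", (pred == T).mean())
```

A third, backpropagation-like phase would then fine-tune C, sigma, and W jointly by gradient descent on the squared error, which is the step the paper calls three-phase learning.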
Biological Cybernetics | 1988
Günther Palm; Ad Aertsen; George L. Gerstein
We consider several measures for the correlation of firing activity among different neurons, based on coincidence counts obtained from simultaneously recorded spike trains. We obtain explicit formulae for the probability distributions of these measures. This allows an exact, quantitative assessment of significance levels, and thus a comparison of data obtained in different experimental paradigms. In particular, it is possible to compare stimulus-locked, and therefore time-dependent, correlations for different stimuli and also for different times relative to stimulus onset. This makes it possible to separate purely stimulus-induced correlation from intrinsic interneuronal correlation, and further to investigate the dynamic characteristics of the interneuronal correlation. For the display of significance levels, or the corresponding probabilities, we propose a logarithmic measure called “surprise”.
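To fix ideas, here is a hedged Python sketch of a coincidence-count significance test on binned binary spike trains. The null hypothesis of independent firing and the convention used for the logarithmic measure (minus the log of the tail probability) are illustrative assumptions, not necessarily the exact definitions used in the paper.

```python
# Coincidence counts and a logarithmic significance ("surprise") measure,
# assuming binned binary spike trains and independence as the null hypothesis.
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(2)
bins = 1000
a = rng.random(bins) < 0.05          # spike train of neuron A (binned)
b = rng.random(bins) < 0.05          # spike train of neuron B (binned)

n_coinc = int(np.sum(a & b))          # empirical coincidence count
p_null = a.mean() * b.mean()          # per-bin coincidence probability if independent

# Exact tail probability of seeing at least n_coinc coincidences by chance.
p_tail = binom.sf(n_coinc - 1, bins, p_null)
surprise = -np.log10(p_tail)          # one common logarithmic convention
print(f"coincidences: {n_coinc}, P(N >= n) = {p_tail:.3g}, surprise = {surprise:.2f}")
```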
Neural Computation | 2010
Andreas Knoblauch; Günther Palm; Friedrich T. Sommer
Neural associative networks with plastic synapses have been proposed as computational models of brain functions and also for applications such as pattern recognition and information retrieval. To guide biological models and optimize technical applications, several definitions of memory capacity have been used to measure the efficiency of associative memory. Here we explain why the currently used performance measures bias the comparison between models and cannot serve as a theoretical benchmark. We introduce fair measures for information-theoretic capacity in associative memory that also provide a theoretical benchmark. In neural networks, two ways of manipulating synapses can be discerned: synaptic plasticity, the change in strength of existing synapses, and structural plasticity, the creation and pruning of synapses. One of the new types of memory capacity we introduce permits quantifying how structural plasticity can increase the network efficiency by compressing the network structure, for example, by pruning unused synapses. Specifically, we analyze operating regimes in the Willshaw model in which structural plasticity can compress the network structure and push performance to the theoretical benchmark. The amount C of information stored in each synapse can then scale with the logarithm of the network size, rather than being constant as in classical Willshaw and Hopfield nets (≤ ln 2 ≈ 0.7). Further, the review contains novel technical material: a capacity analysis of the Willshaw model that rigorously controls for the level of retrieval quality, an analysis for memories with a nonconstant number of active units (where C ≤ 1/(e ln 2) ≈ 0.53), and an analysis of the computational complexity of associative memories with and without network compression.
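For orientation, the flavor of these capacity definitions can be sketched as follows. The notation here (T for the total stored Shannon information, n² for the number of potential synapses, p₁ for the fraction of potentiated synapses after learning) is assumed for the sketch, not quoted from the paper.

```latex
% Classical network capacity: stored information per (potential) synapse.
\[
  C \;=\; \frac{T}{n^{2}} \;\le\; \ln 2 \approx 0.69
  \quad\text{bits per synapse (Willshaw bound).}
\]
% A compression-aware measure instead normalizes by the memory needed to
% represent the (compressible) binary weight matrix:
\[
  C^{I} \;=\; \frac{T}{n^{2}\,H(p_{1})},
  \qquad H(p) \;=\; -p\log_{2}p-(1-p)\log_{2}(1-p),
\]
% so for sparse potentiation (p_1 -> 0) the information per physically
% realized synapse can grow with log n, as discussed above.
```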
Biological Cybernetics | 1978
David Marr; Günther Palm; Tomaso Poggio
Marr and Poggio (1976) recently described a cooperative algorithm that solves the correspondence problem for stereopsis. This article uses a probabilistic technique to analyze the convergence of that algorithm, and derives the conditions governing the stability of the solution state. The actual results of applying the algorithm to random-dot stereograms are compared with the probabilistic analysis. A satisfactory mathematical analysis of the asymptotic behaviour of the algorithm is possible for a suitable choice of the parameter values and loading rules, and again the actual performance of the algorithm under these conditions is compared with the theoretical predictions. Finally, some problems raised by the analysis of this type of “cooperative” algorithm are briefly discussed.
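The cooperative algorithm itself is easy to sketch: cells vote for candidate disparities, with excitation among same-disparity neighbors (the continuity constraint) and inhibition among competing disparities at the same position (the uniqueness constraint). Below is a 1-D Python toy version; the neighborhood, coefficients, and threshold are illustrative assumptions, not the paper's exact parameter values or loading rules.

```python
# A 1-D toy of a Marr-Poggio-style cooperative stereo update on a
# random-dot "stereogram"; parameters are illustrative, not the paper's.
import numpy as np

rng = np.random.default_rng(3)
n, n_disp, true_d = 64, 5, 2
left = (rng.random(n) < 0.5).astype(int)
right = np.roll(left, true_d)                    # right image = left shifted by true_d

# Initial state: 1 wherever left/right pixels match under each candidate disparity.
C0 = np.array([(left == np.roll(right, -d)).astype(float) for d in range(n_disp)])
C = C0.copy()

for _ in range(10):
    # Excitation: support from same-disparity neighbors (continuity).
    excite = sum(np.roll(C, s, axis=1) for s in (-2, -1, 1, 2))
    # Inhibition: competing matches at the same position (uniqueness).
    inhibit = C.sum(axis=0, keepdims=True) - C
    C = ((excite - inhibit + C0) >= 3.0).astype(float)   # threshold update

print("winning disparity (mode over positions):", np.bincount(C.argmax(0)).argmax())
```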
Biological Cybernetics | 1978
Günther Palm
In a discrete-time framework, any nonlinear system can be approximated with arbitrarily small error by a Volterra series, and also by a “sandwich” system.
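For reference, the two representations named here can be written out as follows; the notation is the standard one the abstract presupposes, not quoted from the paper.

```latex
% Discrete-time Volterra series, truncated at second order (standard notation):
\[
  y(n) \;=\; h_0 \;+\; \sum_{k_1} h_1(k_1)\,x(n-k_1)
        \;+\; \sum_{k_1}\sum_{k_2} h_2(k_1,k_2)\,x(n-k_1)\,x(n-k_2) \;+\; \dots
\]
% "Sandwich" structure: a static nonlinearity f between two linear filters:
\[
  y \;=\; L_2\!\left[\, f\!\left( L_1[x] \right) \right].
\]
```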
Siam Journal on Applied Mathematics | 1977
Günther Palm; Tomaso Poggio
Volterra and Wiener series provide a general representation for a wide class of nonlinear systems. In this paper we derive rigorous results concerning (a) the conditions under which a nonlinear functional admits a Volterra-like integral representation, (b) the class of systems that admit a Wiener representation and the meaning of such a representation, (c) some sufficient conditions providing a connection between the Volterra-like and the Wiener representations, (d) the mathematical validity of the method of Lee and Schetzen for identifying a nonlinear system.
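Point (d) invites a small illustration. Here is a hedged Python sketch of the Lee–Schetzen cross-correlation estimate of the first-order Wiener kernel under a white Gaussian input; the test system and all parameters are invented for the example.

```python
# Lee-Schetzen identification of the first-order Wiener kernel by
# cross-correlation, assuming a white Gaussian input; toy system.
import numpy as np

rng = np.random.default_rng(4)
N, L, sigma = 100_000, 8, 1.0
x = rng.normal(0, sigma, N)

# "Unknown" system: a linear filter followed by a mild static nonlinearity.
h_true = np.array([0.5, 1.0, -0.3, 0.2, 0.0, 0.1, 0.0, 0.0])
lin = np.convolve(x, h_true)[:N]
y = lin + 0.1 * lin**2

# First-order kernel estimate: h1(tau) = E[y(n) x(n - tau)] / sigma^2.
h1 = np.array([np.mean(y[L:] * x[L - tau : N - tau]) for tau in range(L)]) / sigma**2
print(np.round(h1, 2))   # should approximate h_true
```

The quadratic term averages out of the first-order estimate because odd moments of the zero-mean Gaussian input vanish, which is exactly why the method requires white Gaussian excitation.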
Neural Networks | 1996
Friedhelm Schwenker; Friedrich T. Sommer; Günther Palm
We investigate the pattern completion performance of neural auto-associative memories composed of binary threshold neurons for sparsely coded binary memory patterns. By focusing on iterative retrieval, we are able to introduce effective threshold control strategies. These are investigated by means of computer simulation experiments and analytical treatment. To evaluate the system's performance, we consider the completion capacity C and the mean retrieval errors. The asymptotic completion capacity values for the recall of sparsely coded binary patterns in one-step retrieval are known to be (ln 2)/4 ≈ 17.3% for binary Hebbian learning and 1/(8 ln 2) ≈ 18% for additive Hebbian learning. These values are achieved with vanishing error probability, and yet are higher than those obtained in other known neural memory models. Recent investigations of binary Hebbian learning have proved that iterative retrieval, as a more refined retrieval method, does not improve the asymptotic completion capacity of one-step retrieval. In a finite-size auto-associative memory, however, we show that iterative retrieval achieves higher capacity and better error correction than one-step retrieval: one-step retrieval produces high retrieval errors at optimal memory load, while iterative retrieval reduces the retrieval errors within a few iteration steps (t ⩽ 5). Experiments comparing additive with binary Hebbian learning show that in the finite model binary Hebbian learning exhibits much better performance; thus the main concern of this paper is binary Hebbian learning. We examine iterative retrieval in experiments with up to n = 20,000 threshold neurons. With this system size, one-step retrieval yields a completion capacity of about 16%, the second retrieval step increases this value to 17.9%, and with iterative retrieval we obtain 19%. The first two retrieval steps in the finite system have also been treated analytically. For one-step retrieval, the asymptotic capacity value is approached from below with growing system size. In the second retrieval step (and, as the experiments suggest, also for iterative retrieval) the finite-size behaviour is different: the capacity exceeds the asymptotic value, reaches an optimum for finite system size, and then decreases to the asymptotic limit.
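A minimal Python sketch of iterative retrieval in a binary auto-associative memory follows, using one simple threshold-control strategy (keep the k units with the largest dendritic sums). This is a plausible strategy for illustration, not necessarily one of the strategies analyzed in the paper, and the sizes are toy values.

```python
# Iterative retrieval in a binary Hebbian auto-associative memory,
# assuming sparse patterns and a "keep the k strongest units" threshold rule.
import numpy as np

rng = np.random.default_rng(5)
n, k, M = 500, 10, 200

patterns = np.zeros((M, n), dtype=np.uint8)
for p in patterns:
    p[rng.choice(n, k, replace=False)] = 1

# Binary (clipped) Hebbian auto-association.
W = np.zeros((n, n), dtype=np.uint8)
for p in patterns:
    W |= np.outer(p, p)

def retrieve(cue, steps=5):
    x = cue.copy()
    for _ in range(steps):
        s = x @ W.astype(np.int32)                 # dendritic sums
        thresh = np.partition(s, -k)[-k]           # k-th largest sum as threshold
        x = (s >= thresh).astype(np.uint8)
    return x

target = patterns[0]
cue = target.copy()
cue[np.flatnonzero(cue)[:k // 2]] = 0              # delete half of the active units
print("errors after iterative retrieval:", int(np.sum(retrieve(cue) != target)))
```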
Archive | 2012
Alessandro E. P. Villa; Włodzisław Duch; Péter Érdi; Francesco Masulli; Günther Palm
A complex-valued multilayer perceptron (MLP) can approximate a periodic or unbounded function, which cannot easily be realized by a real-valued MLP. Its search space is full of crevasse-like forms with huge condition numbers; thus, it is very hard for existing methods to search such a space efficiently. The space also includes the structure of reducibility mapping. The paper proposes a new search method for a complex-valued MLP, which employs both eigenvector descent and reducibility mapping, aiming to stably find excellent solutions in such a space. Our experiments showed that the proposed method worked well.
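To illustrate the opening claim (that periodic maps come naturally to complex-valued units), note that a single complex weight with an exponential activation is already periodic along the input. The snippet below is purely illustrative motivation and has nothing to do with the paper's proposed search method.

```python
# Why complex-valued units realize periodic functions easily: with a purely
# imaginary weight, exp(w x) is periodic in the real input x. Illustrative toy.
import numpy as np

x = np.linspace(0, 4 * np.pi, 9)
w = 1j                                   # purely imaginary weight (assumption)
hidden = np.exp(w * x)                   # exponential activation on complex pre-activation
y = hidden.real                          # reading out the real part gives cos(x)
print(np.round(y, 2))                    # periodic output from a single complex unit
```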
Biological Cybernetics | 1995
L. Martignon; H. von Hasseln; S. Grün; Ad Aertsen; Günther Palm
We propose a formal framework for the description of interactions among groups of neurons. This framework is not restricted to the common case of pair interactions but also incorporates higher-order interactions, which cannot be reduced to lower-order ones. We derive quantitative measures to detect the presence of such interactions in experimental data, by statistical analysis of the frequency distribution of higher-order correlations in multiple-neuron spike train data. Our first step is to represent a frequency distribution as a Markov field on the minimal graph it induces. We then show the invariance of this graph with regard to changes of state. Only linear Markov fields can be adequately represented by graphs; higher-order interdependencies, which are reflected by the energy expansion of the distribution, require more complex graphical schemes, such as constellations or assembly diagrams, which we introduce and discuss. The coefficients of the energy expansion not only point to the interactions among neurons but are also a measure of their strength. We investigate the statistical meaning of detected interactions in an information-theoretic sense and propose minimum relative entropy approximations as null hypotheses for significance tests. We demonstrate the various steps of our method on an empirical frequency distribution over six neurons, extracted from simultaneous multineuron recordings from the frontal cortex of a behaving monkey, and close with a brief outlook on future work.
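The energy expansion mentioned above can be made concrete: for binary patterns, log p(x) expands into a sum of interaction terms, and the coefficients follow by Möbius inversion over subsets of units. The three-neuron toy distribution below is an illustrative assumption (the paper analyzes six recorded neurons).

```python
# Energy expansion of a binary frequency distribution:
# log p(x) = sum over subsets S of theta_S * prod_{i in S} x_i,
# with coefficients recovered by Moebius inversion. Toy 3-neuron example.
import numpy as np
from itertools import chain, combinations

neurons = (0, 1, 2)
rng = np.random.default_rng(6)
p = rng.dirichlet(np.ones(8))             # toy joint distribution over 2^3 patterns

def subsets(s):
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def logp(on):                             # log-probability of the pattern with
    idx = sum(1 << i for i in on)         # exactly the units in `on` active
    return np.log(p[idx])

# theta_S = sum over T subseteq S of (-1)^{|S|-|T|} log p(pattern 1_T)
theta = {S: sum((-1) ** (len(S) - len(T)) * logp(T) for T in subsets(S))
         for S in subsets(neurons)}

for S, t in theta.items():
    print(S, round(t, 3))                 # nonzero pair/triplet terms signal interactions
```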