Klaus Pawelzik
University of Bremen
Publications
Featured research published by Klaus Pawelzik.
Neural Computation | 1998
Misha Tsodyks; Klaus Pawelzik; Henry Markram
Transmission across neocortical synapses depends on the frequency of presynaptic activity (Thomson & Deuchars, 1994). Interpyramidal synapses in layer V exhibit fast depression of synaptic transmission, while other types of synapses exhibit facilitation of transmission. To study the role of dynamic synapses in network computation, we propose a unified phenomenological model that allows computation of the postsynaptic current generated by both types of synapses when driven by an arbitrary pattern of action potential (AP) activity in a presynaptic population. Using this formalism, we analyze different regimes of synaptic transmission and demonstrate that dynamic synapses transmit different aspects of the presynaptic activity depending on the average presynaptic frequency. The model also allows for derivation of mean-field equations, which govern the activity of large, interconnected networks. We show that the dynamics of synaptic transmission results in complex sets of regular and irregular regimes of network activity.
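The core of such a phenomenological model is easy to sketch. Below is a minimal event-driven implementation of a dynamic synapse in the utilization/recovery spirit described above; the parameter values (U, tau_rec, tau_facil) are illustrative placeholders, not values from the paper.

    import numpy as np

    def tm_synapse(spike_times, U=0.5, tau_rec=0.8, tau_facil=0.0):
        """Event-driven sketch of a dynamic (depressing/facilitating) synapse.

        Returns the relative postsynaptic amplitude generated by each spike.
        tau_facil == 0 gives a purely depressing synapse; tau_facil > 0 adds
        facilitation. Times are in seconds; parameter values are illustrative.
        """
        u, x = 0.0, 1.0              # utilization and available resources
        last_t, amps = None, []
        for t in spike_times:
            if last_t is not None:
                dt = t - last_t
                x = 1.0 - (1.0 - x) * np.exp(-dt / tau_rec)   # resource recovery
                if tau_facil > 0:
                    u *= np.exp(-dt / tau_facil)              # facilitation decay
            if tau_facil > 0:
                u += U * (1.0 - u)   # spike-triggered increase in utilization
            else:
                u = U                # purely depressing: fixed utilization
            amps.append(u * x)       # fraction of resources released
            x -= u * x               # depletion
            last_t = t
        return np.array(amps)

    # A regular 20 Hz train: amplitudes depress toward a rate-dependent steady state.
    print(tm_synapse(np.arange(10) / 20.0))

Driving the same synapse at different frequencies shows the rate dependence of transmission the abstract refers to: the steady-state amplitude falls as the presynaptic rate rises.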
IEEE Transactions on Neural Networks | 1992
Hans-Ulrich Bauer; Klaus Pawelzik
It is shown that a topographic product P, first introduced in nonlinear dynamics, is an appropriate measure of the preservation or violation of neighborhood relations. It is sensitive to large-scale violations of the neighborhood ordering, but does not account for neighborhood ordering distortions caused by varying areal magnification factors. A vanishing value of the topographic product indicates a perfect neighborhood preservation; negative (positive) values indicate a too small (too large) output space dimensionality. In a simple example of maps from a 2D input space onto 1D, 2D, and 3D output spaces, it is demonstrated how the topographic product picks the correct output space dimensionality. In a second example, 19D speech data are mapped onto various output spaces and it is found that a 3D output space (instead of 2D) seems to be optimally suited to the data. This is in agreement with a recent speech recognition experiment on the same data set.
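As a sketch of how P can be computed in practice, the following assumes the published definition via ratios of k-th nearest-neighbour distances measured in input space V and output space A; the variable names and the toy example are ours.

    import numpy as np

    def topographic_product(weights, grid):
        """Topographic product P of Bauer & Pawelzik (1992).

        weights : (N, D) codebook vectors in input space V
        grid    : (N, d) node positions in output space A
        P ~ 0 indicates matching dimensionality; P < 0 suggests the output
        space is too low-dimensional, P > 0 too high-dimensional.
        """
        N = len(weights)
        dV = np.linalg.norm(weights[:, None] - weights[None], axis=-1)
        dA = np.linalg.norm(grid[:, None] - grid[None], axis=-1)
        # k-th nearest neighbours of each node (excluding the node itself)
        nnV = np.argsort(dV, axis=1)[:, 1:]
        nnA = np.argsort(dA, axis=1)[:, 1:]
        P = 0.0
        for j in range(N):
            q1 = dV[j, nnA[j]] / dV[j, nnV[j]]   # distortion measured in V
            q2 = dA[j, nnA[j]] / dA[j, nnV[j]]   # distortion measured in A
            logp3 = np.cumsum(np.log(q1 * q2)) / (2.0 * np.arange(1, N))
            P += logp3.sum()
        return P / (N * (N - 1))

    # Perfectly ordered 1D chain mapped onto itself: P should be ~0.
    line = np.linspace(0, 1, 20)[:, None]
    print(topographic_product(line, line))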
EPL | 1991
Wolfgang Liebert; Klaus Pawelzik; H. G. Schuster
Guided by topological considerations, a new method is introduced to obtain optimal delay coordinates for data from chaotic dynamical systems. By determining simultaneously the minimal necessary embedding dimension as well as the proper delay time, we achieve optimal reconstructions of attractors. This can be demonstrated, e.g., by reliable dimension estimates from limited data series.
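The paper's topological criterion for selecting dimension and delay jointly is not reproduced here, but the object it optimizes is easy to sketch: a delay-coordinate reconstruction of a scalar series. The delay below is chosen by a common autocorrelation heuristic, which is our assumption, not the authors' method.

    import numpy as np

    def delay_embedding(x, dim, delay):
        """Embed a scalar time series x into R^dim using delay coordinates."""
        n = len(x) - (dim - 1) * delay
        return np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])

    def first_zero_crossing(x, max_lag=500):
        """A common heuristic delay: first zero crossing of the autocorrelation."""
        x = x - x.mean()
        denom = np.dot(x, x)
        for lag in range(1, max_lag):
            if np.dot(x[:-lag], x[lag:]) / denom < 0:
                return lag
        return max_lag

    # Example: reconstruct a noisy sine (a stand-in for measured chaotic data).
    t = np.linspace(0, 60, 3000)
    x = np.sin(t) + 0.05 * np.random.randn(len(t))
    tau = first_zero_crossing(x)
    Y = delay_embedding(x, dim=3, delay=tau)
    print(tau, Y.shape)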
Neural Computation | 1996
Klaus Pawelzik; Jens Kohlmorgen; Klaus-Robert Müller
We present a method for the unsupervised segmentation of data streams originating from different unknown sources that alternate in time. We use an architecture consisting of competing neural networks. Memory is included to resolve ambiguities of input-output relations. To obtain maximal specialization, the competition is adiabatically increased during training. Our method achieves almost perfect identification and segmentation in the case of switching chaotic dynamics where input manifolds overlap and input-output relations are ambiguous. Only a small dataset is needed for the training procedure. Applications to time series from complex systems demonstrate the potential relevance of our approach for time series analysis and short-term prediction.
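A minimal sketch of such annealed competition, assuming two linear AR(1) "experts" and an illustrative annealing schedule rather than the neural networks used in the paper: each expert predicts the next sample, responsibilities are assigned by a softmax over prediction errors, and the inverse temperature is raised gradually so that the experts specialize.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stream that alternates between two unknown AR(1) sources.
    def make_stream(n_blocks=8, block=200):
        x, src = [0.0], []
        for b in range(n_blocks):
            a = 0.9 if b % 2 == 0 else -0.9
            for _ in range(block):
                x.append(a * x[-1] + 0.1 * rng.standard_normal())
                src.append(b % 2)
        return np.array(x[1:]), np.array(src)

    x, true_src = make_stream()
    inputs, targets = x[:-1], x[1:]

    a = np.array([0.5, -0.5])     # two competing AR(1) experts
    beta = 10.0                   # inverse temperature of the competition

    for epoch in range(20):
        pred = a[:, None] * inputs[None, :]        # each expert's one-step prediction
        err = (targets[None, :] - pred) ** 2
        # soft competition: per-sample responsibilities (stabilized softmax)
        r = np.exp(-beta * (err - err.min(axis=0)))
        r /= r.sum(axis=0, keepdims=True)
        # responsibility-weighted least-squares refit of each expert
        for k in range(len(a)):
            a[k] = np.sum(r[k] * inputs * targets) / np.sum(r[k] * inputs ** 2)
        beta *= 1.3                                # adiabatically sharpen the competition

    labels = np.argmax(r, axis=0)                  # hard segmentation of the stream
    acc = np.mean(labels == true_src[1:])
    print(a, max(acc, 1 - acc))                    # accuracy up to label permutation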
Neural Computation | 2002
Matthias Bethge; David Rotermund; Klaus Pawelzik
Efficient coding has been proposed as a first principle explaining neuronal response properties in the central nervous system. The shape of optimal codes, however, strongly depends on the natural limitations of the particular physical system. Here we investigate how optimal neuronal encoding strategies are influenced by the finite number of neurons N (place constraint), the limited decoding time window length T (time constraint), the maximum neuronal firing rate f_max (power constraint), and the maximal average firing rate ⟨f⟩_max (energy constraint). While Fisher information provides a general lower bound for the mean squared error of unbiased signal reconstruction, its use to characterize the coding precision is limited. Analyzing simple examples, we illustrate some typical pitfalls and thereby show that Fisher information provides a valid measure for the precision of a code only if the dynamic range (f_min T, f_max T) is sufficiently large. In particular, we demonstrate that the optimal width of gaussian tuning curves depends on the available decoding time T. Within the broader class of unimodal tuning functions, it turns out that the shape of a Fisher-optimal coding scheme is not unique. We solve this ambiguity by taking the minimum mean square error into account, which leads to flat tuning curves. The tuning width, however, remains to be determined by energy constraints rather than by the principle of efficient coding.
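The Fisher bound referred to here is straightforward to evaluate for a toy population. The sketch below assumes independent Poisson neurons with gaussian tuning curves and illustrative parameter values; it computes J(theta) = T * sum_i f_i'(theta)^2 / f_i(theta) and the corresponding Cramér-Rao bound, which, as the paper stresses, can be loose when f*T is small.

    import numpy as np

    def fisher_info(theta, centers, sigma, T, f_min=0.5, f_max=100.0):
        """Fisher information of independent Poisson neurons with gaussian
        tuning curves, decoded from spike counts in a window of length T."""
        d = theta - centers
        f = f_min + (f_max - f_min) * np.exp(-d ** 2 / (2 * sigma ** 2))
        df = -(f - f_min) * d / sigma ** 2      # derivative of the tuning curve
        return T * np.sum(df ** 2 / f)

    centers = np.linspace(-np.pi, np.pi, 20)    # N = 20 preferred stimuli
    for sigma in (0.1, 0.3, 1.0):
        J = fisher_info(0.0, centers, sigma, T=0.05)
        print(f"sigma={sigma:4.1f}  J={J:9.1f}  CR bound={1 / np.sqrt(J):.4f}")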
Nature Neuroscience | 2001
Udo Ernst; Klaus Pawelzik; C. Sahar-Pikielny; Misha Tsodyks
Previous experiments indicate that the shape of maps of preferred orientation in the primary visual cortex does not depend on visual experience. We propose a network model that demonstrates that the orientation and direction selectivity of individual units and the structure of the corresponding angle maps could emerge from local recurrent connections. Our model reproduces the structure of preferred orientation and direction maps, and explains the origin of their interrelationship. The model also provides an explanation for the correlation between position shifts of receptive fields and changes of preferred orientations of single neurons across the surface of the cortex.
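A generic ring-network sketch in this spirit (a standard recurrent model of orientation selectivity, not the paper's specific architecture or parameters): weakly tuned feedforward input is amplified and sharpened by local recurrent connections with a Mexican-hat profile.

    import numpy as np

    N = 64
    theta = np.linspace(0, np.pi, N, endpoint=False)   # preferred orientations

    # Mexican-hat-like recurrent connectivity on the orientation ring
    d = theta[:, None] - theta[None, :]
    W = (1.8 * np.cos(2 * d) - 0.5) / N

    stim = np.pi / 4
    h = 1.0 + 0.1 * np.cos(2 * (theta - stim))         # weakly tuned feedforward drive

    r = np.zeros(N)
    dt, tau = 0.1, 1.0
    for _ in range(500):                               # relax to the steady state
        r += dt / tau * (-r + np.maximum(h + W @ r, 0.0))

    mod_in = (h.max() - h.min()) / (h.max() + h.min())
    mod_out = (r.max() - r.min()) / (r.max() + r.min())
    print(f"peak at {theta[np.argmax(r)]:.2f} rad (stimulus {stim:.2f}); "
          f"tuning depth {mod_in:.2f} -> {mod_out:.2f}")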
Biological Cybernetics | 2000
Jens Kohlmorgen; Klaus-Robert Müller; J. Rittweger; Klaus Pawelzik
We present a novel framework for the analysis of time series from dynamical systems that alternate between different operating modes. The method simultaneously segments and identifies the dynamical modes by using predictive models. Extending previous approaches, it also allows the identification of smooth transitions between successive modes. The method can be used for analysis, diagnosis, prediction, and control. In an application to EEG and respiratory data recorded from humans during afternoon naps, the obtained segmentations of the data agree to a large extent with the sleep stage segmentation of a medical expert. However, in contrast to the manual segmentation, our method does not require a priori knowledge about physiology. Moreover, it has a high temporal resolution and reveals previously unclassified details of the transitions. In particular, a parameter is found that is potentially helpful for vigilance monitoring. We expect that the method will be generally useful for the analysis of nonstationary dynamical systems, which are abundant in medicine, chemistry, biology, and engineering.
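One simple way to see what "identifying smooth transitions" can mean: track a mixing coefficient between two already-identified mode models. The sketch below assumes two known AR(1) modes and an exponential smoother; it is an illustration of the idea, not the paper's algorithm.

    import numpy as np

    def mixing_path(x, a1, a2, smooth=0.05):
        """Track a mixing coefficient m(t) between two known AR(1) mode models.

        m = 1 means mode 1 is fully active, m = 0 means mode 2; intermediate
        values indicate a smooth transition. An exponential smoother keeps
        the estimated path slow, mirroring an assumption of gradual drift.
        """
        m, path = 0.5, []
        for t in range(1, len(x)):
            p1, p2 = a1 * x[t - 1], a2 * x[t - 1]
            if abs(p1 - p2) > 1e-9:
                # m that best explains x[t] as a convex combination of p1, p2
                m_hat = min(max((x[t] - p2) / (p1 - p2), 0.0), 1.0)
                m = (1 - smooth) * m + smooth * m_hat
            path.append(m)
        return np.array(path)

    # Data whose AR coefficient drifts smoothly from +0.9 to -0.9.
    rng = np.random.default_rng(1)
    a_true = np.concatenate([np.full(300, 0.9),
                             np.linspace(0.9, -0.9, 400),
                             np.full(300, -0.9)])
    x = [0.0]
    for a in a_true:
        x.append(a * x[-1] + 0.05 * rng.standard_normal())
    m = mixing_path(np.array(x), 0.9, -0.9)
    print(m[:3].round(2), m[-3:].round(2))   # near 1 early, near 0 late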
Physica D: Nonlinear Phenomena | 1993
Hans-Ulrich Bauer; Klaus Pawelzik
In recent neurophysiological experiments, stimulus-related neuronal oscillations were discovered in various species. The oscillations are not persistent during the whole time of stimulation, but instead seem to be restricted to rather short periods, interrupted by stochastic periods. In this contribution we argue that these observations can be explained by a bistability in the ensemble dynamics of coupled integrate-and-fire neurons. This dynamics can be cast in terms of a high-dimensional map for the time evolution of a phase density which represents the ensemble state. A numerical analysis of this map reveals the coexistence of two stationary states in a broad parameter regime when the synaptic transmission is nonlinear. One state corresponds to stochastic firing of individual neurons; the other describes a periodic activation. We demonstrate that under the influence of additional external noise the system can switch between these states, in this way reproducing the experimentally observed activity. We also investigate the connection between the nonlinearity of the synaptic transmission function and the bistability of the dynamics. For this purpose we heuristically reduce the high-dimensional assembly dynamics to a one-dimensional map, which in turn yields a simple explanation for the relation between nonlinearity and bistability in our system.
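As scaffolding for experiments of this kind, here is a minimal noise-driven integrate-and-fire ensemble whose recurrent kick passes the population activity through a nonlinear transmission function. The parameters are illustrative and not tuned to reproduce the paper's bistable regime; whether switching between irregular and near-periodic episodes appears depends on them.

    import numpy as np

    rng = np.random.default_rng(2)

    N, steps, dt = 1000, 5000, 0.1                 # 0.1 ms time step
    tau, v_th, I = 10.0, 1.0, 0.105                # leaky integrate-and-fire drive
    noise = 0.03

    def g(A):                                      # nonlinear synaptic transmission
        return A ** 2 / (A ** 2 + 0.05 ** 2)

    v = rng.random(N)                              # random initial membrane potentials
    activity = np.zeros(steps)
    A = 0.0
    for t in range(steps):
        v += dt * (-v / tau + I) + np.sqrt(dt) * noise * rng.standard_normal(N)
        v += 0.3 * g(A)                            # pulse-coupled recurrent kick
        fired = v >= v_th
        A = fired.mean()                           # fraction of the ensemble firing now
        v[fired] = 0.0                             # reset after a spike
        activity[t] = A

    # Inspect the population activity for near-periodic vs. stochastic episodes.
    print(activity.mean(), activity.max())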
Network: Computation in Neural Systems | 1993
Josef Deppisch; Hans-Ulrich Bauer; Thomas B. Schillen; Peter König; Klaus Pawelzik; Theo Geisel
We focus on a phenomenon observed in cat visual cortex, namely the alternation of oscillatory and irregular neuronal activity. This aspect of the dynamics has been neglected in brain modelling, but it may be essential for the dynamic binding of different neuronal assemblies. We present a simple but physiologically plausible model network which exhibits such behaviour as an emergent network property, in spite of its simplicity (e.g., dendritic dynamics is neglected). It comprises a number of spiking neurons which are interconnected in a mutually excitatory way. Each neuron is stimulated by several stochastic spike trains. The resulting large input variance is shown to be important for the response properties of the network, which we characterize in terms of two parameters of the autocorrelation function: the frequency and the modulation amplitude. We calculate these parameters as functions of the internal coupling strength, the external input strength and several input connectivity schemes and ...
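The two autocorrelation parameters are easy to extract from a binned spike train. The following sketch uses a surrogate modulated-Poisson train (our assumption, in place of recorded or simulated data) and reads off the oscillation period and modulation amplitude from the autocorrelogram.

    import numpy as np

    def autocorrelogram(spikes, max_lag):
        """Normalized autocorrelation of a binned (1 ms) spike train, lags 1..max_lag-1."""
        s = spikes - spikes.mean()
        ac = np.array([np.dot(s[:-k], s[k:]) for k in range(1, max_lag)])
        return ac / np.dot(s, s)

    # Surrogate train: stochastic spiking with a 40 Hz rate modulation.
    rng = np.random.default_rng(3)
    t = np.arange(20000)                          # 20 s at 1 ms resolution
    rate = 0.05 * (1 + 0.9 * np.sin(2 * np.pi * 0.04 * t))
    spikes = (rng.random(len(t)) < rate).astype(float)

    ac = autocorrelogram(spikes, max_lag=100)
    peak = np.argmax(ac[9:]) + 10                 # lag in ms, skipping the short-lag flank
    print(f"oscillation period ~{peak} ms, relative modulation {ac[peak - 1]:.3f}")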
Biological Cybernetics | 1994
Josef Deppisch; Klaus Pawelzik; Theo Geisel
Synchronous network excitation is believed to play an outstanding role in neuronal information processing. Due to the stochastic nature of the contributing neurons, however, such synchronized states are difficult to detect in electrode recordings. We present a framework and a model for the identification of such network states and of their dynamics in a specific experimental situation. Our approach operationalizes the notion of neuronal groups forming assemblies via synchronization, based on experimentally obtained spike trains. The dynamics of such groups is reflected in the sequence of synchronized states, which we describe as a renewal dynamics. We furthermore introduce a rate function, dependent on the internal network phase, that quantifies the activity of the neurons contributing to the observed spike train. This constitutes a hidden state model which is formally equivalent to a hidden Markov model, and all its parameters can be accurately determined from the experimental time series using the Baum-Welch algorithm. We apply our method to recordings from the cat visual cortex which exhibit oscillations and synchronizations. The parameters obtained for the hidden state model uncover characteristic properties of the system, including synchronization, oscillation, switching, background activity and correlations. In applications involving multielectrode recordings, the extracted models quantify the extent of assembly formation and can be used for a temporally precise localization of the system states underlying a specific spike train.
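The HMM skeleton underneath such an approach can be sketched compactly. The version below is a plain two-state hidden Markov model with Bernoulli emissions trained by Baum-Welch on a surrogate spike train; it omits the paper's renewal dynamics and phase-dependent rate function, and all parameter values are illustrative.

    import numpy as np

    def baum_welch(obs, n_iter=50):
        """Baum-Welch for a 2-state HMM with Bernoulli (spike/no-spike) emissions.

        obs is a binary spike train, one bin per symbol. Returns the transition
        matrix A, per-state spike probabilities p, and the posterior state path.
        """
        rng = np.random.default_rng(4)
        A = np.array([[0.95, 0.05], [0.05, 0.95]])
        p = np.array([0.1, 0.5]) + 0.01 * rng.random(2)   # state-dependent spike prob.
        pi = np.array([0.5, 0.5])
        T = len(obs)
        for _ in range(n_iter):
            B = np.where(obs[:, None] == 1, p[None, :], 1 - p[None, :])  # emission lik.
            # forward pass with per-step normalization
            alpha = np.zeros((T, 2)); c = np.zeros(T)
            alpha[0] = pi * B[0]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
            for t in range(1, T):
                alpha[t] = (alpha[t - 1] @ A) * B[t]
                c[t] = alpha[t].sum(); alpha[t] /= c[t]
            # backward pass with the same scaling
            beta = np.ones((T, 2))
            for t in range(T - 2, -1, -1):
                beta[t] = (A @ (B[t + 1] * beta[t + 1])) / c[t + 1]
            gamma = alpha * beta
            gamma /= gamma.sum(axis=1, keepdims=True)
            xi = alpha[:-1, :, None] * A[None] * (B[1:] * beta[1:])[:, None, :] \
                 / c[1:, None, None]
            # M-step: re-estimate transitions, emissions, and the initial state
            A = xi.sum(axis=0); A /= A.sum(axis=1, keepdims=True)
            p = (gamma * obs[:, None]).sum(axis=0) / gamma.sum(axis=0)
            pi = gamma[0]
        return A, p, gamma

    # Surrogate spike train switching between background and synchronized episodes.
    rng = np.random.default_rng(5)
    states = np.repeat(rng.integers(0, 2, 40), 100)    # 40 episodes of 100 bins
    obs = (rng.random(len(states)) < np.where(states == 0, 0.05, 0.4)).astype(int)
    A, p, gamma = baum_welch(obs)
    print(np.round(p, 3))                              # recovered spike probabilities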