Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Thomas Wennekers is active.

Publications


Featured research published by Thomas Wennekers.


European Journal of Neuroscience | 2008

A neuroanatomically grounded Hebbian-learning model of attention–language interactions in the human brain

Max Garagnani; Thomas Wennekers; Friedemann Pulvermüller

Meaningful familiar stimuli and senseless unknown materials lead to different patterns of brain activation. A late major neurophysiological response indexing ‘sense’ is the negative component of event‐related potential peaking at around 400 ms (N400), an event‐related potential that emerges in attention‐demanding tasks and is larger for senseless materials (e.g. meaningless pseudowords) than for matched meaningful stimuli (words). However, the mismatch negativity (latency 100–250 ms), an early automatic brain response elicited under distraction, is larger to words than to pseudowords, thus exhibiting the opposite pattern to that seen for the N400. So far, no theoretical account has been able to reconcile and explain these findings by means of a single, mechanistic neural model. We implemented a neuroanatomically grounded neural network model of the left perisylvian language cortex and simulated: (i) brain processes of early language acquisition and (ii) cortical responses to familiar word and senseless pseudoword stimuli. We found that variation of the area‐specific inhibition (the model correlate of attention) modulated the simulated brain response to words and pseudowords, producing either an N400‐ or a mismatch negativity‐like response depending on the amount of inhibition (i.e. available attentional resources). Our model: (i) provides a unifying explanatory account, at cortical level, of experimental observations that, so far, had not been given a coherent interpretation within a single framework; (ii) demonstrates the viability of purely Hebbian, associative learning in a multilayered neural network architecture; and (iii) makes clear predictions on the effects of attention on latency and magnitude of event‐related potentials to lexical items. Such predictions have been confirmed by recent experimental evidence.


Network: Computation In Neural Systems | 2003

Pattern formation in intracortical neuronal fields

Axel Hutt; Michael Bestehorn; Thomas Wennekers

This paper introduces a neuronal field model for both excitatory and inhibitory connections. A single integro-differential equation with delay is derived and studied at a critical point by stability analysis, which yields conditions for static periodic patterns and wave instabilities. It turns out that waves only occur below a certain threshold of the activity propagation velocity. An additional brief study exhibits increasing phase velocities of waves with decreasing slope subject to increasing activity propagation velocities, which are in accordance with experimental results. Numerical studies near and far from instability onset supplement the work.
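The class of model sketched in this abstract can be summarised by a delayed neural-field equation of Amari type. The notation below is generic (chosen here for illustration, not necessarily the paper's own symbols):

```latex
\tau \frac{\partial V(x,t)}{\partial t} = -V(x,t)
  + \int_{\Omega} K(|x-y|)\, S\!\left( V\!\left(y,\; t - \tfrac{|x-y|}{c}\right) \right) dy
  + E(x,t)
```

where $K$ is a kernel combining excitatory and inhibitory connections (e.g. a difference of Gaussians), $S$ a sigmoidal firing-rate function, $c$ the finite activity-propagation velocity that produces the distance-dependent delay, and $E$ external input. Linearising about a stationary state and inserting plane-wave perturbations $e^{ikx+\lambda t}$ yields the conditions for static periodic patterns and wave instabilities discussed in the abstract.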


Neural Networks | 2001

Associative memory in networks of spiking neurons

Friedrich T. Sommer; Thomas Wennekers

Here, we develop and investigate a computational model of a network of cortical neurons on the basis of biophysically well constrained and tested two-compartmental neurons developed by Pinsky and Rinzel [Pinsky, P. F., & Rinzel, J. (1994). Intrinsic and network rhythmogenesis in a reduced Traub model for CA3 neurons. Journal of Computational Neuroscience, 1, 39-60]. To study associative memory, we connect a pool of cells by a structured connectivity matrix. The connection weights are shaped by simple Hebbian coincidence learning using a set of spatially sparse patterns. We study the neuronal activity processes following an external stimulation of a stored memory. In two series of simulation experiments, we explore the effect of different classes of external input, tonic and flashed stimulation. With tonic stimulation, the addressed memory is an attractor of the network dynamics. The memory is displayed rhythmically, coded by phase-locked bursts or regular spikes. The participating neurons have rhythmic activity in the gamma-frequency range (30-80 Hz). If the input is switched from one memory to another, the network activity can follow this change within one or two gamma cycles. Unlike similar models in the literature, we studied the range of high memory capacity (in the order of 0.1 bit/synapse), comparable to optimally tuned formal associative networks. We explored the robustness of efficient retrieval varying the memory load, the excitation/inhibition parameters, and background activity. A stimulation pulse applied to the identical simulation network can push away ongoing network activity and trigger a phase-locked association event within one gamma period. Unlike under tonic stimulation, the memories are not attractors. After one association process, the network activity moves to other states. Applying pulses addressing different memories in close succession, one can switch through the space of memory patterns. The readout speed can be increased up to the point where a different pattern is displayed in every gamma cycle. With pulsed stimulation, bursts become relevant for coding; their occurrence can be used to discriminate relevant processes from background activity.
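As a toy illustration of Hebbian coincidence storage of spatially sparse patterns, the following sketch uses a binary Willshaw-style abstraction. It is not the paper's two-compartment Pinsky-Rinzel spiking network; all sizes and the retrieval threshold are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 100, 10, 5              # neurons, stored patterns, active units per pattern

# Spatially sparse binary memory patterns
patterns = np.zeros((M, N), dtype=int)
for p in patterns:
    p[rng.choice(N, K, replace=False)] = 1

# Hebbian coincidence learning, clipped to binary weights (Willshaw rule);
# self-connections are kept for simplicity
W = np.clip(patterns.T @ patterns, 0, 1)

def retrieve(cue):
    """One-step retrieval: fire every unit whose dendritic sum reaches
    the maximum attainable input (threshold = number of active cue units)."""
    h = W @ cue
    return (h >= cue.sum()).astype(int)

# Address memory 0 with a partial cue (3 of its 5 active units)
cue = patterns[0].copy()
cue[np.flatnonzero(cue)[:2]] = 0
recalled = retrieve(cue)
```

With this threshold rule, every unit of the addressed pattern receives the full cue input and is recovered; occasional spurious units can appear when patterns overlap, which is what limits capacity in such networks.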


Neural Networks | 2009

2009 Special Issue: Hippocampus, microcircuits and associative memory

Vassilis Cutsuridis; Thomas Wennekers

The hippocampus is one of the most widely studied brain regions. One of its functional roles is the storage and recall of declarative memories. Recent hippocampus research has yielded a wealth of data on network architecture, cell types, the anatomy and membrane properties of pyramidal cells and interneurons, and synaptic plasticity. Understanding the functional roles of different families of hippocampal neurons in information processing, synaptic plasticity and network oscillations poses a great challenge but also promises deep insight into one of the major brain systems. Computational and mathematical models play an instrumental role in exploring such functions. In this paper, we provide an overview of abstract and biophysical models of associative memory with particular emphasis on the operations performed by the diverse (inter)neurons in encoding and retrieval of memories in the hippocampus.


Neural Networks | 2003

Dynamical properties of strongly interacting Markov chains

Nihat Ay; Thomas Wennekers

Spatial interdependences of multiple stochastic units can be suitably quantified by the Kullback-Leibler divergence of the joint probability distribution from the corresponding factorized distribution. In the present paper, a generalized measure for stochastic interaction, which also captures temporal interdependences, is analysed within the setting of Markov chains. The dynamical properties of systems with strongly interacting stochastic units are analytically studied and illustrated by computer simulations. In particular, the emergence of determinism in such systems is demonstrated.
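The spatial measure mentioned in the first sentence (sometimes called multi-information) can be computed directly. This sketch covers only that spatial quantity, not the temporal generalisation over Markov chains that the paper analyses; the function name and the use of bits (log base 2) are my choices:

```python
import numpy as np

def multi_information(p_joint):
    """Kullback-Leibler divergence (in bits) of a joint distribution over
    discrete units from the product of its marginals."""
    p_joint = np.asarray(p_joint, dtype=float)
    marginals = []
    for axis in range(p_joint.ndim):
        other = tuple(a for a in range(p_joint.ndim) if a != axis)
        marginals.append(p_joint.sum(axis=other))
    # Factorized distribution as the outer product of the marginals
    q = marginals[0]
    for m in marginals[1:]:
        q = np.multiply.outer(q, m)
    mask = p_joint > 0
    return float(np.sum(p_joint[mask] * np.log2(p_joint[mask] / q[mask])))

# Two perfectly correlated bits share 1 bit of stochastic interaction
p = np.array([[0.5, 0.0], [0.0, 0.5]])
interaction_bits = multi_information(p)
```

For independent units the joint equals the factorized distribution and the measure vanishes; strong interaction, as studied in the paper, corresponds to large values of this divergence.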


PLOS Computational Biology | 2011

A neurocomputational model of stimulus-specific adaptation to oddball and Markov sequences

Robert Mill; Martin Coath; Thomas Wennekers; Susan L. Denham

Stimulus-specific adaptation (SSA) occurs when the spike rate of a neuron decreases with repetitions of the same stimulus, but recovers when a different stimulus is presented. It has been suggested that SSA in single auditory neurons may provide information to change detection mechanisms evident at other scales (e.g., mismatch negativity in the event related potential), and participate in the control of attention and the formation of auditory streams. This article presents a spiking-neuron model that accounts for SSA in terms of the convergence of depressing synapses that convey feature-specific inputs. The model is anatomically plausible, comprising just a few homogeneously connected populations, and does not require organised feature maps. The model is calibrated to match the SSA measured in the cortex of the awake rat, as reported in one study. The effect of frequency separation, deviant probability, repetition rate and duration upon SSA are investigated. With the same parameter set, the model generates responses consistent with a wide range of published data obtained in other auditory regions using other stimulus configurations, such as block, sequential and random stimuli. A new stimulus paradigm is introduced, which generalises the oddball concept to Markov chains, allowing the experimenter to vary the tone probabilities and the rate of switching independently. The model predicts greater SSA for higher rates of switching. Finally, the issue of whether rarity or novelty elicits SSA is addressed by comparing the responses of the model to deviants in the context of a sequence of a single standard or many standards. The results support the view that synaptic adaptation alone can explain almost all aspects of SSA reported to date, including its purported novelty component, and that non-trivial networks of depressing synapses can intensify this novelty response.
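The Markov generalisation of the oddball paradigm can be sketched as a two-state chain whose stationary deviant probability and switching rate are set independently. The construction below is a generic one I chose for illustration, not necessarily the paper's exact parameterisation:

```python
import numpy as np

def markov_oddball(p_dev, switch_rate, n, seed=0):
    """Two-tone sequence (0 = standard, 1 = deviant) from a Markov chain
    with stationary deviant probability p_dev and an independently chosen
    per-step probability switch_rate of changing tone."""
    p = np.array([1.0 - p_dev, p_dev])       # stationary distribution
    a = switch_rate / (2 * p[0])             # P(0 -> 1)
    b = switch_rate / (2 * p[1])             # P(1 -> 0)
    assert a <= 1 and b <= 1, "incompatible p_dev / switch_rate combination"
    rng = np.random.default_rng(seed)
    seq = np.empty(n, dtype=int)
    seq[0] = rng.random() < p_dev
    for t in range(1, n):
        flip = rng.random() < (a if seq[t - 1] == 0 else b)
        seq[t] = seq[t - 1] ^ flip
    return seq

tones = markov_oddball(p_dev=0.1, switch_rate=0.1, n=20000)
```

Detailed balance (p[0]*a = p[1]*b) fixes the stationary tone probabilities, while the overall switch rate p[0]*a + p[1]*b can be varied on its own; the model discussed above predicts greater SSA for higher switching rates.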


Biological Cybernetics | 2014

Thinking in circuits: toward neurobiological explanation in cognitive neuroscience

Friedemann Pulvermüller; Max Garagnani; Thomas Wennekers

Cognitive theory has decomposed human mental abilities into cognitive (sub) systems, and cognitive neuroscience succeeded in disclosing a host of relationships between cognitive systems and specific structures of the human brain. However, an explanation of why specific functions are located in specific brain loci had still been missing, along with a neurobiological model that makes concrete the neuronal circuits that carry thoughts and meaning. Brain theory, in particular the Hebb-inspired neurocybernetic proposals by Braitenberg, now offers an avenue toward explaining brain–mind relationships and to spell out cognition in terms of neuron circuits in a neuromechanistic sense. Central to this endeavor is the theoretical construct of an elementary functional neuronal unit above the level of individual neurons and below that of whole brain areas and systems: the distributed neuronal assembly (DNA) or thought circuit (TC). It is shown that DNA/TC theory of cognition offers an integrated explanatory perspective on brain mechanisms of perception, action, language, attention, memory, decision and conceptual thought. We argue that DNAs carry all of these functions and that their inner structure (e.g., core and halo subcomponents), and their functional activation dynamics (e.g., ignition and reverberation processes) answer crucial localist questions, such as why memory and decisions draw on prefrontal areas although memory formation is normally driven by information in the senses and in the motor system. We suggest that the ability to build DNAs/TCs spread out over different cortical areas is the key mechanism for a range of specifically human sensorimotor, linguistic and conceptual capacities and that the cell assembly mechanism of overlap reduction is crucial for differentiating a vocabulary of actions, symbols and concepts.


Cognitive Computation | 2009

Recruitment and Consolidation of Cell Assemblies for Words by Way of Hebbian Learning and Competition in a Multi-Layer Neural Network

Max Garagnani; Thomas Wennekers; Friedemann Pulvermüller

Current cognitive theories postulate either localist representations of knowledge or fully overlapping, distributed ones. We use a connectionist model that closely replicates known anatomical properties of the cerebral cortex and neurophysiological principles to show that Hebbian learning in a multi-layer neural network leads to memory traces (cell assemblies) that are both distributed and anatomically distinct. Taking the example of word learning based on action-perception correlation, we document mechanisms underlying the emergence of these assemblies, especially (i) the recruitment of neurons and consolidation of connections defining the kernel of the assembly along with (ii) the pruning of the cell assembly’s halo (consisting of very weakly connected cells). We found that, whereas a learning rule mapping covariance led to significant overlap and merging of assemblies, a neurobiologically grounded synaptic plasticity rule with fixed LTP/LTD thresholds produced minimal overlap and prevented merging, exhibiting competitive learning behaviour. Our results are discussed in light of current theories of language and memory. As simulations with neurobiologically realistic neural networks demonstrate here spontaneous emergence of lexical representations that are both cortically dispersed and anatomically distinct, both localist and distributed cognitive accounts receive partial support.
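The contrast between the two plasticity rules can be caricatured in rate-based form. This is a simplified sketch, not the paper's spiking implementation; the learning rate and the fixed LTP/LTD thresholds are illustrative values I made up:

```python
import numpy as np

ETA = 0.01                          # learning rate (illustrative)
THETA_LTP, THETA_LTD = 0.6, 0.2     # hypothetical fixed LTP/LTD activity thresholds

def update_fixed_thresholds(W, pre, post):
    """LTP where an active presynaptic cell meets strongly active
    postsynaptic activity; homosynaptic LTD where it meets weak activity.
    Bounded weights plus LTD support competition between assemblies."""
    active_pre = (pre >= THETA_LTP).astype(float)
    dW = ETA * np.outer((post >= THETA_LTP).astype(float), active_pre)
    dW -= ETA * np.outer((post < THETA_LTD).astype(float), active_pre)
    return np.clip(W + dW, 0.0, 1.0)

def update_covariance(W, pre, post, pre_mean, post_mean):
    """Covariance rule: strengthens whenever activities co-vary above their
    means; with overlapping inputs this tends to merge memory traces."""
    return W + ETA * np.outer(post - post_mean, pre - pre_mean)

W0 = np.zeros((3, 3))
pre = np.array([1.0, 0.0, 0.5])
post = np.array([1.0, 0.0, 0.1])
W1 = update_fixed_thresholds(W0, pre, post)
```

Under the thresholded rule, only the strongly co-active pair is potentiated while active synapses onto weakly responding cells are depressed, which is the competitive behaviour the abstract attributes to fixed LTP/LTD thresholds.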


Neural Computation | 2001

Orientation Tuning Properties of Simple Cells in Area V1 Derived from an Approximate Analysis of Nonlinear Neural Field Models

Thomas Wennekers

We present a general approximation method for the mathematical analysis of spatially localized steady-state solutions in nonlinear neural field models. These models comprise several layers of excitatory and inhibitory cells. Coupling kernels between and inside layers are assumed to be gaussian shaped. In response to spatially localized (i.e., tuned) inputs, such networks typically reveal stationary localized activity profiles in the different layers. Qualitative properties of these solutions, like response amplitudes and tuning widths, are approximated for a whole class of nonlinear rate functions that obey a power law above some threshold and that are zero below. A special case of these functions is the semilinear function, which is commonly used in neural field models. The method is then applied to models for orientation tuning in cortical simple cells: first, to the one-layer model with difference of gaussians connectivity kernel developed by Carandini and Ringach (1997) as an abstraction of the biologically detailed simulations of Somers, Nelson, and Sur (1995); second, to a two-field model comprising excitatory and inhibitory cells in two separate layers. Under certain conditions, both models have the same steady states. Comparing simulations of the field models and results derived from the approximation method, we find that the approximation well predicts the tuning behavior of the full model. Moreover, explicit formulas for approximate amplitudes and tuning widths in response to changing input strength are given and checked numerically. Comparing the network behavior for different nonlinearities, we find that the only rate function (from the class of functions under study) that leads to constant tuning widths and a linear increase of firing rates in response to increasing input is the semilinear function. 
For other nonlinearities, the qualitative network response depends on whether the model neurons operate in a convex (e.g., x^2) or concave (e.g., sqrt(x)) regime of their rate function. In the first case, tuning gradually changes from input driven at low input strength (broad tuning strongly depending on the input and roughly linear amplitudes in response to input strength) to recurrently driven at moderate input strength (sharp tuning, supra-linear increase of amplitudes in response to input strength). For concave rate functions, the network reveals stable hysteresis between a state at low firing rates and a tuned state at high rates. This means that the network can memorize tuning properties of a previously shown stimulus. Sigmoid rate functions can combine both effects. In contrast to the Carandini-Ringach model, the two-field model further reveals oscillations with typical frequencies in the beta and gamma range, when the excitatory and inhibitory connections are relatively strong. This suggests a rhythmic modulation of tuning properties during cortical oscillations.
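The class of rate functions analysed above can be written, in generic notation of my choosing, as a threshold power law:

```latex
f(u) = \gamma \, [\,u - \theta\,]_{+}^{\alpha}, \qquad
[\,z\,]_{+} = \max(z, 0)
```

Here $\alpha = 1$ recovers the semilinear case, the only member of the class for which tuning widths stay constant and amplitudes grow linearly with input; $\alpha > 1$ (e.g. $u^2$) gives the convex regime and $0 < \alpha < 1$ (e.g. $\sqrt{u}$) the concave, hysteretic regime described in the abstract.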


International Conference on Artificial Neural Networks | 1996

Controlling the Speed of Synfire Chains

Thomas Wennekers; Günther Palm

This paper deals with the propagation velocity of synfire chain activation in locally connected networks of artificial spiking neurons. Analytical expressions for the propagation speed are derived taking into account form and range of local connectivity, explicitly modelled synaptic potentials, transmission delays and axonal conduction velocities. Wave velocities particularly depend on the level of external input to the network indicating that synfire chain propagation in real networks should also be controllable by appropriate inputs. The results are numerically tested for a network consisting of ‘integrate-and-fire’ neurons.
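The dependence of propagation speed on external input can be illustrated with a deterministic single-link caricature: one integrate-and-fire membrane charging toward threshold after a step input, with the background input setting the baseline. This is my simplification (step input instead of the paper's explicitly modelled synaptic potentials; all constants illustrative), intended only to show the qualitative effect:

```python
import numpy as np

TAU = 10.0     # membrane time constant (ms), illustrative
THETA = 1.0    # firing threshold (normalised)
DELAY = 1.0    # fixed axonal/synaptic delay per chain link (ms)

def link_latency(w, i_bg):
    """Latency of one synfire link: a membrane resting at baseline i_bg
    receives a step input w and charges as
        v(t) = i_bg + w * (1 - exp(-t / TAU)),
    crossing THETA after t = TAU * ln(w / (w - (THETA - i_bg)))."""
    gap = THETA - i_bg
    assert 0.0 < gap < w, "baseline above threshold, or drive too weak to fire"
    return DELAY + TAU * np.log(w / (w - gap))

# Raising the background input shrinks the threshold gap and
# speeds up propagation along the chain
slow = link_latency(w=1.5, i_bg=0.0)
fast = link_latency(w=1.5, i_bg=0.5)
```

Dividing the inter-group spacing by the link latency gives a wave velocity that increases with background drive, consistent with the abstract's conclusion that synfire propagation should be controllable by appropriate inputs.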

Collaboration


Dive into Thomas Wennekers's collaborations.

Top Co-Authors

Salvador Dura-Bernal

SUNY Downstate Medical Center
