Christoph Kolodziejski
University of Göttingen
Publications
Featured research published by Christoph Kolodziejski.
Biological Cybernetics | 2012
Christian Tetzlaff; Christoph Kolodziejski; Irene Markelić; Florentin Wörgötter
If we stored every bit of input, the storage capacity of our nervous system would be reached after only about 10 days. The nervous system relies on at least two mechanisms that counteract this capacity limit: compression and forgetting. But the latter mechanism needs to know how long an entity should be stored: some memories are relevant only for the next few minutes; some are important even after the passage of several years. Psychology and physiology have found and described many different memory mechanisms, and these mechanisms indeed operate on different time scales. In this prospect article we review these mechanisms with respect to their time scales and propose relations between mechanisms in learning and memory and their underlying physiological basis.
Frontiers in Computational Neuroscience | 2011
Christian Tetzlaff; Christoph Kolodziejski; Marc Timme; Florentin Wörgötter
Synaptic scaling is a slow process that modifies synapses, keeping the firing rate of neural circuits in specific regimes. Together with other processes, such as conventional synaptic plasticity in the form of long term depression and potentiation, synaptic scaling changes the synaptic patterns in a network, ensuring diverse, functionally relevant, stable, and input-dependent connectivity. How synaptic patterns are generated and stabilized, however, is largely unknown. Here we formally describe and analyze synaptic scaling based on results from experimental studies and demonstrate that the combination of different conventional plasticity mechanisms and synaptic scaling provides a powerful general framework for regulating network connectivity. In addition, we design several simple models that reproduce experimentally observed synaptic distributions as well as the observed synaptic modifications during sustained activity changes. These models predict that the combination of plasticity with scaling generates globally stable, input-controlled synaptic patterns, also in recurrent networks. Thus, in combination with other forms of plasticity, synaptic scaling can robustly yield neuronal circuits with high synaptic diversity, which potentially enables robust dynamic storage of complex activation patterns. This mechanism is even more pronounced when considering networks with a realistic degree of inhibition. Synaptic scaling combined with plasticity could thus be the basis for learning structured behavior even in initially random networks.
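The stabilizing interplay of a Hebbian term with a scaling term that pulls the firing rate toward a target can be illustrated numerically. The following is a minimal sketch with hypothetical parameters, not the authors' exact equations: a single rate neuron whose otherwise divergent Hebbian growth is held at a stable fixed point by the scaling term.

```python
import numpy as np

# Hypothetical parameters: learning rate, scaling rate, target firing rate.
mu, gamma, v_T = 0.01, 0.1, 1.0
u = 1.0    # constant presynaptic rate
w = 0.2    # initial synaptic weight
dt = 0.01
for _ in range(200_000):
    v = w * u                                  # postsynaptic rate
    dw = mu * u * v + gamma * (v_T - v) * w    # Hebbian term + scaling term
    w += dt * dw

# Analytical fixed point of this toy rule: w* = (mu*u^2 + gamma*v_T)/(gamma*u)
w_star = (mu * u**2 + gamma * v_T) / (gamma * u)
print(w, w_star)
```

Without the scaling term the Hebbian part alone grows without bound; with it, the weight settles at the input-dependent value `w_star`, mirroring the globally stable, input-controlled patterns described above.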
Frontiers in Computational Neuroscience | 2012
Christian Tetzlaff; Christoph Kolodziejski; Marc Timme; Florentin Wörgötter
Conventional synaptic plasticity in combination with synaptic scaling is a biologically plausible plasticity rule that guides the development of synapses toward stability. Here we analyze the development of synaptic connections and the resulting activity patterns in different feed-forward and recurrent neural networks, with plasticity and scaling. We show under which constraints an external input given to a feed-forward network forms an input trace similar to a cell assembly (Hebb, 1949) by enhancing synaptic weights to larger stable values as compared to the rest of the network. For instance, a weak input creates a weaker representation in the network than a strong input, which produces a trace along large parts of the network. These processes are strongly influenced by the underlying connectivity. For example, when embedding recurrent structures (excitatory rings, etc.) into a feed-forward network, the input trace is extended into more distant layers, while inhibition shortens it. These findings provide a better understanding of the dynamics of generic network structures in which plasticity is combined with scaling. This also makes it possible to use this rule to construct an artificial network with certain desired storage properties.
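The input-trace idea can be sketched with a toy feed-forward chain in which every synapse follows the same Hebb-plus-scaling rule. This is a minimal illustration with hypothetical parameters, not the networks studied in the paper: a stronger input should leave a stronger activity trace in every layer, with the difference fading with depth.

```python
import numpy as np

def run_chain(u, n_layers=3, mu=0.01, gamma=0.1, v_T=1.0,
              dt=0.01, steps=200_000):
    """Relax a chain of Hebb-plus-scaling synapses driven by constant input u."""
    w = np.full(n_layers, 0.5)
    for _ in range(steps):
        v_pre = u
        for k in range(n_layers):
            v = w[k] * v_pre
            w[k] += dt * (mu * v_pre * v + gamma * (v_T - v) * w[k])
            v_pre = v
    # read out the converged activity trace, layer by layer
    acts, v_pre = [], u
    for k in range(n_layers):
        v_pre = w[k] * v_pre
        acts.append(v_pre)
    return np.array(acts)

weak, strong = run_chain(0.5), run_chain(2.0)
print(weak, strong)   # stronger input -> stronger trace in every layer
```

In this sketch the strong input produces larger activities than the weak one in each layer, and the gap between the two traces shrinks with depth, a toy analogue of the input trace extending through the network.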
Biological Cybernetics | 2008
Christoph Kolodziejski; Bernd Porr; Florentin Wörgötter
A confusingly wide variety of temporally asymmetric learning rules exists, related to reinforcement learning and/or to spike-timing-dependent plasticity, many of which look exceedingly similar while displaying strongly different behavior. These rules often find their use in control tasks, for example in robotics, and for this rigorous convergence and numerical stability are required. The goal of this article is to review these rules and compare them, to provide a better overview of their different properties. Two main classes will be discussed: temporal difference (TD) rules and correlation-based (differential Hebbian) rules, as well as some transition cases. In general we will focus on neuronal implementations with changeable synaptic weights and a time-continuous representation of activity. In a machine-learning (non-neuronal) context, a solid mathematical theory for TD learning has existed for several years. This can partly be transferred to a neuronal framework, too. On the other hand, only now has a more complete theory also emerged for differential Hebbian rules. In general, rules differ by their convergence conditions and their numerical stability, which can lead to very undesirable behavior when applying them. For TD, convergence can be enforced with a certain output condition assuring that the δ-error drops on average to zero (output control). Correlation-based rules, on the other hand, converge when one input drops to zero (input control). Temporally asymmetric learning rules treat situations where incoming stimuli follow each other in time. Thus, it is necessary to remember the first stimulus in order to relate it to the later-occurring second one. To this end, different types of so-called eligibility traces are used by these two types of rules. This aspect again leads to different properties of TD and differential Hebbian learning, as discussed here.
Thus, this paper, while also presenting several novel mathematical results, is mainly meant to provide a road map through the different neuronally emulated temporally asymmetric learning rules and their behavior, to provide some guidance for possible applications.
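To make the TD side of the comparison concrete, here is a generic tabular TD(λ) sketch with an eligibility trace (a standard textbook construction on a hypothetical three-state chain, not an example from the paper). The δ-error drops to zero as the value estimates converge, which is the "output control" condition mentioned above.

```python
import numpy as np

# Hypothetical task: states 0 -> 1 -> 2 -> terminal, reward 1 on the final
# transition. Tabular TD(lambda) with a replacing eligibility trace.
gamma, alpha, lam = 0.9, 0.1, 0.5
V = np.zeros(3)
for episode in range(2000):
    e = np.zeros(3)                       # eligibility trace
    for s in range(3):
        r = 1.0 if s == 2 else 0.0
        v_next = V[s + 1] if s < 2 else 0.0
        delta = r + gamma * v_next - V[s]  # the TD delta-error
        e[s] = 1.0                         # replacing trace
        V += alpha * delta * e             # all traced states are updated
        e *= gamma * lam                   # trace decays into the past
print(V)   # -> approximately [0.81, 0.9, 1.0]
```

At convergence the δ-error is zero in every state, so the values settle at the discounted returns γ², γ, and 1.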
Biological Cybernetics | 2010
Tomas Kulvicius; Christoph Kolodziejski; Minija Tamosiunaite; Bernd Porr; Florentin Wörgötter
Understanding closed-loop behavioral systems is a non-trivial problem, especially when they change during learning. Descriptions of closed-loop systems in terms of information theory date back to the 1950s; however, there have been only a few attempts that take learning into account, and these mostly measure the information of the inputs. In this study we analyze a specific type of closed-loop system by looking at the input as well as the output space. For this, we investigate simulated agents that perform differential Hebbian learning (STDP). In the first part we show that analytical solutions can be found for the temporal development of such systems in relatively simple cases. In the second part of this study we try to answer the following question: how can we predict which system from a given class would be the best for a particular scenario? This question is addressed using energy, input/output-ratio, and entropy measures, and by investigating their development during learning. In this way we can show that, within well-specified scenarios, there are indeed agents that are optimal with respect to their structure and adaptive properties.
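One of the quantities tracked during learning is the entropy of a signal. The following is a minimal sketch on hypothetical signals (not the agents from the paper): a histogram-based entropy estimate, with a stereotyped "learned" output showing lower entropy than an exploratory, spread-out one.

```python
import numpy as np

def entropy_bits(samples, bins=8):
    """Shannon entropy, in bits, of a signal discretized into histogram bins."""
    counts, _ = np.histogram(samples, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum() + 0.0)

exploratory = np.repeat(np.arange(8), 100)   # spreads evenly over all bins
learned = np.zeros(800)                       # always the same output
print(entropy_bits(exploratory), entropy_bits(learned))
```

The evenly spread signal yields the maximum of 3 bits for 8 bins, while the constant signal yields 0 bits; tracking this quantity over learning is one way to quantify how an agent's behavior becomes more deterministic.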
Frontiers in Computational Neuroscience | 2013
Faramarz Faghihi; Christoph Kolodziejski; André Fiala; Florentin Wörgötter; Christian Tetzlaff
Fruit flies (Drosophila melanogaster) rely on their olfactory system to process environmental information. This information has to be transmitted without system-relevant loss by the olfactory system to deeper brain areas for learning. Here we study the role of several parameters of the fly's olfactory system and the environment, and how they influence olfactory information transmission. We have designed an abstract model of the antennal lobe, the mushroom body, and the inhibitory circuitry. Mutual information between the olfactory environment, simulated in terms of different odor concentrations, and a sub-population of intrinsic mushroom body neurons (Kenyon cells) was calculated to quantify the efficiency of information transmission. With this method we study, on the one hand, the effect of different connectivity rates between olfactory projection neurons and Kenyon cells, as well as of the Kenyon cells' firing thresholds. On the other hand, we analyze the influence of inhibition on mutual information between environment and mushroom body. Our simulations show the expected linear relation between the connectivity rate between the antennal lobe and the mushroom body and the firing threshold of the Kenyon cells needed to obtain maximum mutual information, for both low and high odor concentrations. However, contradicting everyday experience, high odor concentrations cause a drastic, and unrealistic, decrease in mutual information for all connectivity rates compared to low concentrations. But when inhibition on the mushroom body is included, mutual information remains at high levels independent of other system parameters. This finding points to a pivotal role of inhibition in fly information processing, without which the system's efficiency would be substantially reduced.
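The central quantity here, mutual information between a discrete stimulus and a discrete response, can be estimated from a joint histogram. This is a generic sketch on hypothetical data (a binary stimulus standing in for odor-concentration levels), not the paper's model:

```python
import numpy as np

def mutual_information(stim, resp):
    """Mutual information, in bits, between two discrete sample sequences."""
    xs, ys = np.unique(stim), np.unique(resp)
    joint = np.zeros((len(xs), len(ys)))
    for x, y in zip(stim, resp):
        joint[np.searchsorted(xs, x), np.searchsorted(ys, y)] += 1
    p = joint / joint.sum()                            # joint distribution
    px = p.sum(axis=1, keepdims=True)                  # marginal of stimulus
    py = p.sum(axis=0, keepdims=True)                  # marginal of response
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

stim = np.array([0, 0, 1, 1] * 100)    # two equiprobable stimulus levels
faithful = stim.copy()                  # response perfectly encodes stimulus
print(mutual_information(stim, faithful))   # -> 1.0 (one full bit)
```

A response that perfectly encodes a two-level equiprobable stimulus transmits exactly 1 bit, while a response that ignores the stimulus transmits 0 bits; losses between these extremes are what the connectivity, threshold, and inhibition parameters modulate.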
Advances in Complex Systems | 2013
Poramate Manoonpong; Christoph Kolodziejski; Florentin Wörgötter; Jun Morimoto
Classical conditioning (conventionally modeled as correlation-based learning) and operant conditioning (conventionally modeled as reinforcement learning or reward-based learning) have been found in biological systems. Evidence shows that these two mechanisms strongly involve learning about associations. Based on these biological findings, we propose a new learning model to achieve successful control policies for artificial systems. This model combines correlation-based learning using input correlation learning (ICO learning) and reward-based learning using continuous actor-critic reinforcement learning (RL), thereby working as a dual-learner system. The model performance is evaluated by simulations of a cart-pole system as a dynamic motion control problem and a mobile robot system as a goal-directed behavior control problem. Results show that the model can strongly improve the pole-balancing control policy, i.e., it allows the controller to learn to stabilize the pole over the largest domain of initial conditions, compared to the results obtained when using a single learning mechanism. The model can also find a successful control policy for goal-directed behavior, i.e., the robot learns to approach a given goal more effectively than with either of its individual components. Thus, the study pursued here sharpens our understanding of how two different learning mechanisms can be combined and complement each other for solving complex tasks.
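The correlation-based half of the dual learner, ICO learning, changes a weight in proportion to a predictive input times the derivative of a late "reflex" signal. Below is a minimal single-trial sketch with hypothetical signals and parameters (only the ICO half; the actor-critic half is omitted): because the predictive trace is strongest while the reflex response rises, the net weight change is positive, so the predictive input is strengthened until it can act before the reflex.

```python
import numpy as np

dt, mu, tau = 0.1, 0.01, 20.0
t = np.arange(0.0, 400.0, dt)

# Predictive trace: onset at t = 80 ms, exponentially decaying.
u1 = np.where(t >= 80.0, np.exp(-(t - 80.0) / tau), 0.0)
# Reflex response: onset 20 ms later, EPSP-shaped (rise then decay).
s = t - 100.0
x0 = np.where(s >= 0, np.exp(-s / 10.0) - np.exp(-s / 2.0), 0.0)

# ICO rule, integrated over one trial: dw = mu * u1 * d(x0)/dt
dw = float(mu * np.sum(u1 * np.gradient(x0, dt)) * dt)
print(dw)   # positive: the predictive input is strengthened
```

The sign follows from integration by parts: since the predictive trace only decays over the support of the reflex, the correlation with the rising phase outweighs the falling phase.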
Siam Journal on Applied Dynamical Systems | 2012
Christian Bick; Marc Timme; Christoph Kolodziejski
Stabilizing unstable periodic orbits in a chaotic invariant set not only reveals information about its structure but also leads to various interesting applications. For the successful application of a chaos control scheme, convergence speed is of crucial importance. Here we present a predictive feedback chaos control method that adapts a control parameter online to yield optimal asymptotic convergence speed. We study the adaptive control map both analytically and numerically and prove that it converges at least linearly to a value determined by the spectral radius of the control map at the periodic orbit to be stabilized. The method is easy to implement algorithmically and may find applications for adaptive online control of biological and engineering systems.
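The core idea of predictive feedback control can be shown on a standard example system (the logistic map, a hypothetical stand-in; the paper's contribution is the online adaptation of the gain, which is simply hand-picked and held fixed here):

```python
# Predictive feedback control of the logistic map f(x) = r*x*(1-x).
# The controlled map  x_{n+1} = (1-k)*f(x_n) + k*x_n  has the same fixed
# points as f but, for a suitable gain k, makes an unstable one stable.
r, k = 3.9, 0.8                  # chaotic regime; hand-picked fixed gain
f = lambda x: r * x * (1.0 - x)
x = 0.3
for _ in range(200):
    x = (1.0 - k) * f(x) + k * x
x_star = 1.0 - 1.0 / r           # fixed point, unstable in the bare map
print(x, x_star)                 # the controlled orbit settles on x*
```

The asymptotic convergence rate is set by the controlled map's derivative at the orbit, here (1-k)·f'(x*) + k ≈ 0.42; adapting k online to minimize this spectral radius is exactly what the scheme in the paper does.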
Neural Computation | 2009
Christoph Kolodziejski; Bernd Porr; Florentin Wörgötter
In this theoretical contribution, we provide mathematical proof that two of the most important classes of network learning, correlation-based differential Hebbian learning and reward-based temporal difference learning, are asymptotically equivalent when timing the learning with a modulatory signal. This opens the opportunity to consistently reformulate most of the abstract reinforcement learning framework from a correlation-based perspective more closely related to the biophysics of neurons.
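Schematically, the structural idea can be written as follows (a simplified sketch of the two rule classes, not the paper's derivation or notation):

```latex
% Reward-based TD learning: the eligibility trace \bar{u}_i of input i
% is gated by the prediction error \delta:
\dot{w}_i(t) \propto \delta(t)\,\bar{u}_i(t),
\qquad \delta(t) \approx r(t) + \dot{V}(t).
% Correlation-based differential Hebbian learning, timed by a
% modulatory signal M(t):
\dot{w}_i(t) \propto M(t)\,u_i(t)\,\dot{v}(t).
```

When the postsynaptic activity v takes the role of the value estimate (plus reward), its gated derivative plays the role of the δ-error, which is the structural reason the two rules can become equivalent asymptotically.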
Neural Networks | 2011
Bernd Porr; Lynsey McCabe; Paolo Di Prodi; Christoph Kolodziejski; Florentin Wörgötter
It has been shown that plasticity is not a fixed property but, in fact, changes depending on the location of the synapse on the neuron and/or on changes of biophysical parameters. Here, we investigate how plasticity is shaped by feedback inhibition in a cortical microcircuit. We use a differential Hebbian learning rule to model spike-timing-dependent plasticity and show analytically that feedback inhibition shortens the time window for LTD during spike-timing-dependent plasticity, but not for LTP. We then use a realistic GENESIS model to test two hypotheses about interneuron hypofunction, and conclude that a reduction in GAD67 is the most likely candidate cause of the hypofrontality observed in schizophrenia.
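The asymmetric spike-timing window produced by a differential Hebbian rule can be sketched numerically. This is a minimal illustration with hypothetical kernels and no inhibition (the paper's analytical point is that feedback inhibition curtails the late, negative part of the postsynaptic response and thereby shortens only the LTD side of this window):

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 300.0, dt)

def weight_change(delta_t, tau_u=5.0, tau_m=10.0, tau_s=2.0):
    """Differential Hebbian weight change dw ~ integral of u(t) * dv/dt,
    for a pre-post spike-time difference delta_t (> 0 means pre first)."""
    t_pre, t_post = 100.0, 100.0 + delta_t
    u = np.where(t >= t_pre, np.exp(-(t - t_pre) / tau_u), 0.0)  # pre trace
    s = t - t_post
    v = np.where(s >= 0, np.exp(-s / tau_m) - np.exp(-s / tau_s), 0.0)  # EPSP
    return float(np.sum(u * np.gradient(v, dt)) * dt)

ltp = weight_change(+10.0)   # pre 10 ms before post
ltd = weight_change(-10.0)   # post 10 ms before pre
print(ltp, ltd)              # ltp > 0, ltd < 0
```

Pre-before-post pairings correlate the presynaptic trace with the rising phase of the EPSP and give potentiation; post-before-pre pairings see only the decaying phase and give depression.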