
Publication


Featured research published by Andreea Lazar.


Frontiers in Computational Neuroscience | 2009

SORN: a self-organizing recurrent neural network.

Andreea Lazar; Gordon Pipa; Jochen Triesch

Understanding the dynamics of recurrent neural networks is crucial for explaining how the brain processes information. In the neocortex, a range of plasticity mechanisms shapes recurrent networks into effective information-processing circuits that learn appropriate representations for time-varying sensory stimuli. However, it has been difficult to mimic these abilities in artificial neural network models. Here we introduce SORN, a self-organizing recurrent network. It combines three distinct forms of local plasticity to learn spatio-temporal patterns in its input while maintaining its dynamics in a healthy regime suitable for learning. The SORN learns to encode information in the form of trajectories through its high-dimensional state space, reminiscent of recent biological findings on cortical coding. All three forms of plasticity are shown to be essential for the network's success.
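The abstract names three interacting local plasticity rules: STDP, synaptic normalization, and intrinsic plasticity. A minimal sketch of one update step of such a network in numpy is given below. This is an illustration of the general mechanism, not the authors' reference implementation; all names (W_ee, eta_stdp, h_ip, ...) and parameter values are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
N_e, N_i = 20, 5                      # excitatory / inhibitory binary units

# sparse excitatory-to-excitatory weights, plus fixed E<->I couplings
W_ee = rng.random((N_e, N_e)) * (rng.random((N_e, N_e)) < 0.1)
np.fill_diagonal(W_ee, 0.0)
W_ei = rng.random((N_e, N_i)) * 0.5   # inhibition onto excitatory units
W_ie = rng.random((N_i, N_e)) * 0.5   # excitation onto inhibitory units
T_e = rng.random(N_e)                 # adaptive excitatory thresholds
T_i = rng.random(N_i) * 0.5
x = (rng.random(N_e) < 0.5).astype(float)   # excitatory state
y = (rng.random(N_i) < 0.5).astype(float)   # inhibitory state

eta_stdp, eta_ip, h_ip = 0.004, 0.01, 0.1   # learning rates, target rate

for t in range(100):
    x_new = (W_ee @ x - W_ei @ y - T_e > 0).astype(float)
    y = (W_ie @ x - T_i > 0).astype(float)
    # STDP: strengthen W_ee[i, j] when j fired at t-1 and i fires at t
    W_ee += eta_stdp * (np.outer(x_new, x) - np.outer(x, x_new))
    W_ee = np.clip(W_ee, 0.0, 1.0)
    # synaptic normalization: incoming E->E weights of each unit sum to 1
    row_sums = W_ee.sum(axis=1, keepdims=True)
    W_ee /= np.where(row_sums > 0, row_sums, 1.0)
    # intrinsic plasticity: nudge each threshold toward a target firing rate
    T_e += eta_ip * (x_new - h_ip)
    x = x_new

print(x.mean())   # fraction of active excitatory units after 100 steps
```

The normalization and threshold adaptation play the homeostatic role described in the abstract: they counteract the runaway potentiation that STDP alone would produce.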


Cerebral Cortex | 2014

Untangling Perceptual Memory: Hysteresis and Adaptation Map into Separate Cortical Networks

Caspar M. Schwiedrzik; Christian C. Ruff; Andreea Lazar; Frauke C. Leitner; Wolf Singer; Lucia Melloni

Perception is an active inferential process in which prior knowledge is combined with sensory input, the result of which determines the contents of awareness. Accordingly, previous experience is known to help the brain “decide” what to perceive. However, a critical aspect that has not been addressed is that previous experience can exert 2 opposing effects on perception: An attractive effect, sensitizing the brain to perceive the same again (hysteresis), or a repulsive effect, making it more likely to perceive something else (adaptation). We used functional magnetic resonance imaging and modeling to elucidate how the brain entertains these 2 opposing processes, and what determines the direction of such experience-dependent perceptual effects. We found that although affecting our perception concurrently, hysteresis and adaptation map into distinct cortical networks: a widespread network of higher-order visual and fronto-parietal areas was involved in perceptual stabilization, while adaptation was confined to early visual areas. This areal and hierarchical segregation may explain how the brain maintains the balance between exploiting redundancies and staying sensitive to new information. We provide a Bayesian model that accounts for the coexistence of hysteresis and adaptation by separating their causes into 2 distinct terms: Hysteresis alters the prior, whereas adaptation changes the sensory evidence (the likelihood function).
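The last sentence separates the two effects into a prior term and a likelihood term. A toy two-alternative version of that separation (the functional form and the numbers are illustrative assumptions, not the paper's fitted model) can be written as:

```python
def posterior(prior_A, like_A, like_B):
    """Posterior probability of percept A, given two alternatives A and B."""
    return prior_A * like_A / (prior_A * like_A + (1 - prior_A) * like_B)

# ambiguous stimulus: equal sensory evidence for both percepts
p_neutral = posterior(0.5, 1.0, 1.0)      # no history
p_hysteresis = posterior(0.7, 1.0, 1.0)   # prior pulled toward previous percept A
p_adaptation = posterior(0.5, 0.6, 1.0)   # evidence for recently seen A attenuated
print(p_neutral, p_hysteresis, p_adaptation)
```

Raising the prior biases the posterior toward repeating the previous percept (hysteresis), while attenuating the likelihood of the previous stimulus biases it away (adaptation), matching the abstract's attractive/repulsive distinction.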


PLOS Computational Biology | 2015

Where’s the Noise? Key Features of Spontaneous Activity and Neural Variability Arise through Learning in a Deterministic Network

Christoph Hartmann; Andreea Lazar; Bernhard Nessler; Jochen Triesch

Even in the absence of sensory stimulation the brain is spontaneously active. This background “noise” seems to be the dominant cause of the notoriously high trial-to-trial variability of neural recordings. Recent experimental observations have extended our knowledge of trial-to-trial variability and spontaneous activity in several directions: 1. Trial-to-trial variability systematically decreases following the onset of a sensory stimulus or the start of a motor act. 2. Spontaneous activity states in sensory cortex outline the region of evoked sensory responses. 3. Across development, spontaneous activity aligns itself with typical evoked activity patterns. 4. The spontaneous brain activity prior to the presentation of an ambiguous stimulus predicts how the stimulus will be interpreted. At present it is unclear how these observations relate to each other and how they arise in cortical circuits. Here we demonstrate that all of these phenomena can be accounted for by a deterministic self-organizing recurrent neural network model (SORN), which learns a predictive model of its sensory environment. The SORN comprises recurrently coupled populations of excitatory and inhibitory threshold units and learns via a combination of spike-timing dependent plasticity (STDP) and homeostatic plasticity mechanisms. Similar to balanced network architectures, units in the network show irregular activity and variable responses to inputs. Additionally, however, the SORN exhibits sequence learning abilities matching recent findings from visual cortex and the network’s spontaneous activity reproduces the experimental findings mentioned above. Intriguingly, the network’s behaviour is reminiscent of sampling-based probabilistic inference, suggesting that correlates of sampling-based inference can develop from the interaction of STDP and homeostasis in deterministic networks. 
We conclude that key observations on spontaneous brain activity and the variability of neural responses can be accounted for by a simple deterministic recurrent neural network which learns a predictive model of its sensory environment via a combination of generic neural plasticity mechanisms.
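The stimulus-onset drop in trial-to-trial variability (observation 1 above) is commonly quantified with the Fano factor, the variance-to-mean ratio of spike counts across trials. A sketch with synthetic data, assumed here purely for illustration: slow rate fluctuations make pre-stimulus counts super-Poisson, while a stimulus-locked response pushes the Fano factor back toward the Poisson limit of 1.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials = 1000

# pre-stimulus: firing rate drifts from trial to trial -> extra variance
rate_pre = rng.gamma(shape=4.0, scale=2.0, size=n_trials)
pre = rng.poisson(rate_pre)

# evoked response: stimulus-locked, roughly constant rate across trials
post = rng.poisson(8.0, size=n_trials)

def fano(counts):
    """Variance-to-mean ratio of spike counts across trials."""
    return counts.var(ddof=1) / counts.mean()

print(round(fano(pre), 2), round(fano(post), 2))
```

For a doubly stochastic (Cox) process like the pre-stimulus data, the Fano factor exceeds 1 by the variance of the rate divided by its mean, which is why quenching rate fluctuations at stimulus onset shows up as a Fano-factor drop.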


International Conference on Artificial Neural Networks | 2011

Emerging Bayesian priors in a self-organizing recurrent network

Andreea Lazar; Gordon Pipa; Jochen Triesch

We explore the role of local plasticity rules in learning statistical priors in a self-organizing recurrent neural network (SORN). The network receives input sequences composed of different symbols and learns the structure embedded in these sequences via a simple spike-timing-dependent plasticity rule, while synaptic normalization and intrinsic plasticity maintain a low level of activity. After learning, the network exhibits spontaneous activity that matches the stimulus-evoked activity during training and thus can be interpreted as samples from the network's prior probability distribution over evoked activity states. Further, we show how learning the frequency and spatio-temporal characteristics of the input sequences influences network performance in several classification tasks. These results suggest a novel connection between low-level learning mechanisms and high-level concepts of statistical inference.


Frontiers in Computational Neuroscience | 2016

Does the Cerebral Cortex Exploit High-Dimensional, Non-linear Dynamics for Information Processing?

Wolf Singer; Andreea Lazar

The discovery of stimulus-induced synchronization in the visual cortex suggested the possibility that the relations among low-level stimulus features are encoded by the temporal relationship between neuronal discharges. In this framework, temporal coherence is considered a signature of perceptual grouping. This insight triggered a large number of experimental studies which sought to investigate the relationship between temporal coordination and cognitive functions. While some core predictions derived from the initial hypothesis were confirmed, these studies also revealed a rich dynamical landscape beyond simple coherence whose role in signal processing is still poorly understood. In this paper, a framework is presented which establishes links between the various manifestations of cortical dynamics by assigning specific coding functions to low-dimensional dynamic features such as synchronized oscillations and phase shifts on the one hand and high-dimensional non-linear, non-stationary dynamics on the other. The data serving as basis for this synthetic approach have been obtained with chronic multisite recordings from the visual cortex of anesthetized cats and from monkeys trained to solve cognitive tasks. It is proposed that the low-dimensional dynamics characterized by synchronized oscillations and large-scale correlations are substates that represent the results of computations performed in the high-dimensional state-space provided by recurrently coupled networks.


The Journal of Neuroscience | 2013

Orienting Towards Ensembles: From Single Cells to Neural Populations

Christopher Murphy Lewis; Andreea Lazar

The tradition of single-cell electrophysiology has taught us a great deal about the response properties of individual cells in primary sensory cortices. However, the study of neurons in isolation provides a limited and possibly distorted picture of how the simultaneous activity of distributed


bioRxiv | 2014

Where's the noise? Key features of neuronal variability and inference emerge from self-organized learning

Christoph Hartmann; Andreea Lazar; Jochen Triesch

Trial-to-trial variability and spontaneous activity of cortical recordings have been suggested to reflect intrinsic noise. This view is currently challenged by mounting evidence for structure in these phenomena: Trial-to-trial variability decreases following stimulus onset and can be predicted by previous spontaneous activity. This spontaneous activity is similar in magnitude and structure to evoked activity and can predict decisions. All of the observed neuronal properties described above can be accounted for, at an abstract computational level, by the sampling hypothesis, according to which response variability reflects stimulus uncertainty. However, a mechanistic explanation at the level of neural circuit dynamics is still missing. In this study, we demonstrate that all of these phenomena can be accounted for by a noise-free self-organizing recurrent neural network model (SORN). It combines spike-timing dependent plasticity (STDP) and homeostatic mechanisms in a deterministic network of excitatory and inhibitory McCulloch-Pitts neurons. The network self-organizes to spatio-temporally varying input sequences. We find that the key properties of neural variability mentioned above develop in this model as the network learns to perform sampling-like inference. Importantly, the model shows high trial-to-trial variability although it is fully deterministic. This suggests that the trial-to-trial variability in neural recordings may not reflect intrinsic noise. Rather, it may reflect a deterministic approximation of sampling-like learning and inference. The simplicity of the model suggests that these correlates of the sampling theory are canonical properties of recurrent networks that learn with a combination of STDP and homeostatic plasticity mechanisms.

Author Summary: Neural recordings seem very noisy. If the exact same stimulus is shown to an animal multiple times, the neural response will vary.
In fact, the activity of a single neuron shows many features of a stochastic process. Furthermore, in the absence of a sensory stimulus, cortical spontaneous activity has a magnitude comparable to the activity observed during stimulus presentation. These findings have led to a widespread belief that neural activity is indeed very noisy. However, recent evidence indicates that individual neurons can operate very reliably and that the spontaneous activity in the brain is highly structured, suggesting that much of the noise may in fact be signal. One hypothesis regarding this putative signal is that it reflects a form of probabilistic inference through sampling. Here we show that the key features of neural variability can be accounted for in a completely deterministic network model through self-organization. As the network learns a model of its sensory inputs, the deterministic dynamics give rise to sampling-like inference. Our findings show that the notorious variability in neural recordings does not need to be seen as evidence for a noisy brain. Instead it may reflect sampling-like inference emerging from a self-organized learning process.


International Conference on Artificial Neural Networks | 2008

Predictive Coding in Cortical Microcircuits

Andreea Lazar; Gordon Pipa; Jochen Triesch

We investigate the influence of spike timing dependent plasticity (STDP) on the prediction properties of recurrent microcircuits. We use sparsely connected networks in which the synaptic modifications introduced by STDP are complemented by two homeostatic plasticity mechanisms: synaptic normalization and intrinsic plasticity. In the presence of structured external input, STDP changes the connectivity matrix of the network such that the recurrent connections capture the particularities of the input stimuli, allowing the network to anticipate future inputs. Network activation patterns reflect different input expectations and can be separated by an unsupervised clustering technique.


bioRxiv | 2017

Stimulus content shapes cortical response statistics

Mihály Bányai; Andreea Lazar; Liane Klein; Johanna Klon-Lipok; Wolf Singer; Gergő Orbán

Spike count correlations (SCCs) are ubiquitous in sensory cortices, are characterized by rich structure and arise from structured internal interactions. Yet, most theories of visual perception focus exclusively on the mean responses of individual neurons. Here, we argue that feedback interactions in primary visual cortex (V1) establish the context in which individual neurons process complex stimuli and that changes in visual context give rise to stimulus-dependent SCCs. Measuring V1 population responses to natural scenes in behaving macaques, we show that the fine structure of SCCs is stimulus-specific and variations in response correlations across stimuli are independent of variations in response means. Moreover, we demonstrate that stimulus-specificity of SCCs in V1 can be directly manipulated by controlling the high-order structure of synthetic stimuli. We propose that stimulus-specificity of SCCs is a natural consequence of hierarchical inference where inferences on the presence of high-level image features modulate inferences on the presence of low-level features.
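Spike count correlations in the above sense are Pearson correlations of trial-by-trial spike counts between pairs of neurons. A toy numpy sketch with synthetic data (a shared gain fluctuation stands in, very loosely, for the structured internal interactions the abstract describes; all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_neurons = 200, 4

# a trial-to-trial gain fluctuation shared by all neurons induces
# correlated spike counts on top of independent Poisson spiking
gain = 1.0 + 0.5 * rng.standard_normal((n_trials, 1))
counts = rng.poisson(np.clip(gain, 0.1, None) * 10.0,
                     size=(n_trials, n_neurons))

# SCC matrix: pairwise Pearson correlations of counts across trials
scc = np.corrcoef(counts, rowvar=False)
print(scc.shape)
```

Repeating this per stimulus, with a different interaction structure for each, would yield the stimulus-specific SCC matrices that the study measures in V1.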


BMC Neuroscience | 2015

Key features of neural variability emerge from self-organized sequence learning in a deterministic neural network

Christoph Hartmann; Andreea Lazar; Jochen Triesch

Cortical responses to identical stimuli show high trial-to-trial variability. This variability is commonly interpreted as resulting from internal noise. However, much of the variability can be explained by the pre-stimulus spontaneous activity [1]. In fact, the contribution of this spontaneous activity to the evoked response is sufficiently strong to bias perceptual decisions [2]. Importantly, spontaneous activity is structurally similar to evoked activity [3] and this similarity may be the result of learning an internal model of the environment during development [4]. Consistent with this idea, spontaneous activity seems to be a superset of possible evoked responses [5] and trial-to-trial variability drops at stimulus onset [6]. At present, it is unclear how these features of neural variability arise in cortical circuits. Here, we show that all of these phenomena emerge in a completely deterministic self-organizing recurrent network (SORN) model [7]. The network consists of recurrently connected excitatory and inhibitory populations of McCulloch-Pitts units. The dynamics are shaped by spike-timing dependent plasticity (STDP) and homeostatic plasticity mechanisms in response to structured input sequences. After a period of self-organization, during which the network learns an internal model of the input sequences, we observe all phenomena mentioned above: evoked responses and perceptual decisions can be predicted from prior spontaneous activity, spontaneous activity outlines the realm of evoked responses, Fano factors drop at stimulus onset, and spontaneous activity closely matches evoked activity patterns. In addition, the network produces the common signs of Poissonian variability in single units. In sum, our model demonstrates that key features of neural variability emerge in a fully deterministic network from self-organized sequence learning via the interaction of STDP and homeostatic plasticity mechanisms. 
These results suggest that the high trial-to-trial variability of neural responses need not be taken as evidence for noisy neural processing elements.

Figure 1. Two example results after self-organization: (a) neural variability drops at stimulus onset; (b) spontaneous and evoked activity become more similar during learning.

Collaboration


Dive into Andreea Lazar's collaborations.

Top Co-Authors

Jochen Triesch

Frankfurt Institute for Advanced Studies

Gordon Pipa

University of Osnabrück

Christoph Hartmann

Frankfurt Institute for Advanced Studies