
Publication


Featured research published by Romain D. Cazé.


PLOS Computational Biology | 2013

Passive Dendrites Enable Single Neurons to Compute Linearly Non-separable Functions

Romain D. Cazé; Mark D. Humphries; Boris Gutkin

Local supra-linear summation of excitatory inputs occurring in pyramidal cell dendrites, the so-called dendritic spikes, results in independent spiking dendritic sub-units, which turn pyramidal neurons into two-layer neural networks capable of computing linearly non-separable functions, such as the exclusive OR. Other neuron classes, such as interneurons, may possess only a few independent dendritic sub-units, or only passive dendrites where input summation is purely sub-linear, and where dendritic sub-units are only saturating. To determine if such neurons can also compute linearly non-separable functions, we enumerate, for a given parameter range, the Boolean functions implementable by a binary neuron model with a linear sub-unit and either a single spiking or a saturating dendritic sub-unit. We then analytically generalize these numerical results to an arbitrary number of non-linear sub-units. First, we show that a single non-linear dendritic sub-unit, in addition to the somatic non-linearity, is sufficient to compute linearly non-separable functions. Second, we analytically prove that, with a sufficient number of saturating dendritic sub-units, a neuron can compute all functions computable with purely excitatory inputs. Third, we show that these linearly non-separable functions can be implemented with at least two strategies: one where a dendritic sub-unit is sufficient to trigger a somatic spike; another where somatic spiking requires the cooperation of multiple dendritic sub-units. We formally prove that implementing the latter architecture is possible with both types of dendritic sub-units whereas the former is only possible with spiking dendrites. Finally, we show how linearly non-separable functions can be computed by a generic two-compartment biophysical model and a realistic neuron model of the cerebellar stellate cell interneuron. Taken together, our results demonstrate that passive dendrites are sufficient to enable neurons to compute linearly non-separable functions.
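
The saturating-dendrite strategy can be made concrete in a few lines of Python. The sketch below is a minimal illustration of the claim rather than the paper's model: all weights are excitatory and equal to one, and the saturation level and somatic threshold are illustrative choices.

```python
# A minimal sketch (not the paper's code) of how a purely sub-linear
# (saturating) dendritic non-linearity suffices for linear non-separability.
# All weights are excitatory and equal to 1; the saturation ceiling and
# somatic threshold are illustrative choices.

from itertools import product

def saturating_subunit(x, ceiling=1.0):
    """Sub-linear dendritic integration: the local sum saturates."""
    return min(x, ceiling)

def neuron(a, b, c, d):
    # Two saturating dendritic sub-units; the soma needs both to be active.
    d1 = saturating_subunit(a + d)  # dendrite 1 receives inputs A and D
    d2 = saturating_subunit(b + c)  # dendrite 2 receives inputs B and C
    return int(d1 + d2 >= 2.0)

# The neuron implements (A or D) and (B or C). This function is linearly
# non-separable: a linear unit would need w_a + w_b >= theta and
# w_c + w_d >= theta (it fires on AB and CD) but also w_a + w_d < theta and
# w_b + w_c < theta (it stays silent on AD and BC); adding each pair of
# constraints gives w_a + w_b + w_c + w_d both >= 2*theta and < 2*theta.
target = lambda a, b, c, d: int((a or d) and (b or c))
assert all(neuron(*x) == target(*x) for x in product((0, 1), repeat=4))
print("saturating dendrites compute a linearly non-separable function")
```

The scattered placement (A with D on one dendrite, B with C on the other) is what does the work here: clustering a non-preferred pair on a single saturating branch caps its influence, which no assignment of weights to a single linear threshold unit can reproduce.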


Frontiers in Cellular Neuroscience | 2015

Contribution of sublinear and supralinear dendritic integration to neuronal computations.

Alexandra Tran-Van-Minh; Romain D. Cazé; Therése Abrahamsson; Laurence Cathala; Boris Gutkin; David A. DiGregorio

Nonlinear dendritic integration is thought to increase the computational ability of neurons. Most studies focus on how supralinear summation of excitatory synaptic responses arising from clustered inputs within single dendrites results in the enhancement of neuronal firing, enabling simple computations such as feature detection. Recent reports have shown that sublinear summation is also a prominent dendritic operation, extending the range of subthreshold input-output (sI/O) transformations conferred by dendrites. Like supralinear operations, sublinear dendritic operations also increase the repertoire of neuronal computations, but feature extraction requires different synaptic connectivity strategies for each of these operations. In this article we review the experimental and theoretical findings describing the biophysical determinants of the three primary classes of dendritic operations: linear, sublinear, and supralinear. We then review a Boolean algebra-based analysis of simplified neuron models, which provides insight into how dendritic operations influence neuronal computations. We highlight how neuronal computations are critically dependent on the interplay of dendritic properties (morphology and voltage-gated channel expression), spiking threshold and distribution of synaptic inputs carrying particular sensory features. Finally, we describe how global (scattered) and local (clustered) integration strategies permit the implementation of similar classes of computations, one example being the object feature binding problem.
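
The three operation classes can be caricatured with simple subunit transfer functions. The sketch below illustrates only the qualitative shapes of the sI/O transformations discussed in the review; the functional forms and constants are assumptions, not fits to any dataset.

```python
# A toy illustration (not from the review) of the three classes of
# subthreshold input-output (sI/O) transformations: linear, sublinear,
# and supralinear. Forms and constants are illustrative assumptions.

import math

def linear(expected_sum):
    return expected_sum

def sublinear(expected_sum, ceiling=4.0):
    # Saturating integration, as with passive dendrites.
    return ceiling * math.tanh(expected_sum / ceiling)

def supralinear(expected_sum, threshold=3.0, boost=2.5):
    # A dendritic-spike-like boost once local input crosses a threshold.
    return expected_sum + (boost if expected_sum >= threshold else 0.0)

# Observed vs expected sums for increasing numbers of coincident unitary
# inputs (1 mV each): sublinear falls below the diagonal, supralinear jumps
# above it once the local threshold is crossed.
for n_inputs in range(1, 8):
    expected = float(n_inputs)
    print(f"{n_inputs} inputs: linear={linear(expected):.1f}  "
          f"sublinear={sublinear(expected):.1f}  "
          f"supralinear={supralinear(expected):.1f}")
```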


Biological Cybernetics | 2013

Adaptive properties of differential learning rates for positive and negative outcomes

Romain D. Cazé; Matthijs A. A. van der Meer

The concept of the reward prediction error—the difference between reward obtained and reward predicted—continues to be a focal point for much theoretical and experimental work in psychology, cognitive science, and neuroscience. Models that rely on reward prediction errors typically assume a single learning rate for positive and negative prediction errors. However, behavioral data indicate that better-than-expected and worse-than-expected outcomes often do not have symmetric impacts on learning and decision-making. Furthermore, distinct circuits within cortico-striatal loops appear to support learning from positive and negative prediction errors, respectively. Such differential learning rates would be expected to lead to biased reward predictions and therefore suboptimal choice performance. Contrary to this intuition, we show that on static “bandit” choice tasks, differential learning rates can be adaptive. This occurs because asymmetric learning enables a better separation of learned reward probabilities. We show analytically how the optimal learning rate asymmetry depends on the reward distribution and implement a biologically plausible algorithm that adapts the balance of positive and negative learning rates from experience. These results suggest specific adaptive advantages for separate, differential learning rates in simple reinforcement learning settings and provide a novel, normative perspective on the interpretation of associated neural data.
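
The core mechanism lends itself to a short simulation. The following sketch is a plain delta-rule bandit learner with separate learning rates for positive and negative prediction errors; it illustrates the separation effect, not the authors' adaptive algorithm, and the learning-rate and reward-probability values are arbitrary.

```python
# A minimal sketch (my illustration, not the paper's implementation) of
# asymmetric learning rates on a static two-armed bandit. Separate rates
# are applied to positive and negative reward prediction errors.

import random

def run_bandit(p_rewards, alpha_pos, alpha_neg, n_trials=20000, seed=0):
    """Return time-averaged value estimates for each arm."""
    rng = random.Random(seed)
    values = [0.5] * len(p_rewards)
    sums = [0.0] * len(p_rewards)
    count = 0
    for t in range(n_trials):
        arm = rng.randrange(len(p_rewards))       # sample every arm (for clarity)
        reward = 1.0 if rng.random() < p_rewards[arm] else 0.0
        delta = reward - values[arm]              # reward prediction error
        alpha = alpha_pos if delta >= 0 else alpha_neg
        values[arm] += alpha * delta
        if t >= n_trials // 2:                    # time-average after burn-in
            for a in range(len(values)):
                sums[a] += values[a]
            count += 1
    return [s / count for s in sums]

# In a low-reward environment, a larger positive learning rate spreads the
# two learned values further apart, aiding discrimination between arms.
low_reward = (0.1, 0.2)
sym = run_bandit(low_reward, alpha_pos=0.1, alpha_neg=0.1)
asym = run_bandit(low_reward, alpha_pos=0.3, alpha_neg=0.1)
print("symmetric separation:  %.3f" % abs(sym[0] - sym[1]))
print("asymmetric separation: %.3f" % abs(asym[0] - asym[1]))
```

The direction of the effect follows from the update rule itself: the stochastic fixed point of an arm with reward probability p sits near p·α+/(p·α+ + (1-p)·α-), so when both p values are small, raising α+ relative to α- pushes the two arms' estimates further apart.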


Scientific Reports | 2016

Performance in a GO/NOGO perceptual task reflects a balance between impulsive and instrumental components of behaviour

Aleksandra Berditchevskaia; Romain D. Cazé; Simon R. Schultz

In recent years, simple GO/NOGO behavioural tasks have become popular due to the relative ease with which they can be combined with technologies such as in vivo multiphoton imaging. To date, it has been assumed that behavioural performance can be captured by the average performance across a session; however, this neglects the effect of motivation on behaviour within individual sessions. We investigated the effect of motivation on mice performing a GO/NOGO visual discrimination task. Performance within a session tended to follow a stereotypical trajectory on a Receiver Operating Characteristic (ROC) chart, beginning with an over-motivated state with many false positives, and transitioning through a more or less optimal regime to end with a low hit rate after satiation. Our observations are reproduced by a new model, the Motivated Actor-Critic, introduced here. Our results suggest that standard measures of discriminability, obtained by averaging across a session, may significantly underestimate behavioural performance.
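
The underestimation effect is easy to reproduce with signal-detection arithmetic. The sketch below uses made-up hit and false-alarm rates for three within-session epochs, shaped like the trajectory described above; the numbers are hypothetical, not data from the paper.

```python
# A toy sketch (synthetic numbers, my own illustration) of why a
# session-averaged discriminability measure can understate performance
# when motivation drifts within a session. d' = z(hit rate) - z(FA rate).

from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical within-session trajectory on a GO/NOGO task:
# over-motivated start (many false alarms), engaged middle, satiated end.
blocks = [
    {"hits": 0.95, "fas": 0.60},  # impulsive: responds to almost everything
    {"hits": 0.90, "fas": 0.10},  # engaged: near-optimal discrimination
    {"hits": 0.30, "fas": 0.05},  # satiated: rarely responds at all
]

per_block = [d_prime(b["hits"], b["fas"]) for b in blocks]
session_avg = d_prime(sum(b["hits"] for b in blocks) / 3,
                      sum(b["fas"] for b in blocks) / 3)

print("per-block d':", [round(d, 2) for d in per_block])
print("session-average d': %.2f" % session_avg)   # well below the peak
print("peak d': %.2f" % max(per_block))
```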


bioRxiv | 2017

On the distribution and function of synaptic clusters

Romain D. Cazé; Amanda J. Foust; Claudia Clopath; Simon R. Schultz

Local non-linearities in dendrites render neuronal output dependent on the spatial distribution of synapses. A neuron will activate differently depending on whether active synapses are spatially clustered or dispersed. While this sensitivity can in principle expand neuronal computational capacity, it has thus far been employed in very few learning paradigms. To make use of this sensitivity, groups of correlated neurons need to make contact with distinct dendrites, and this requires a mechanism to ensure the correct distribution of synapses arriving from distinct ensembles. To address this problem, we introduce the requirement that on a short time scale, a pre-synaptic neuron makes a constant number of synapses with the same strength on a post-synaptic neuron. We find that this property enables clusters to distribute correctly and guarantees their functionality. Furthermore, we demonstrate that a change in the input statistics can reshape the spatial distribution of synapses. Finally, we show under which conditions clusters do not distribute correctly, e.g. when cross-talk between dendrites is too strong. As well as providing insight into potential biological mechanisms of learning, this work paves the way for new learning algorithms for artificial neural networks that exploit the spatial distribution of synapses.

A large body of theoretical work has shown how dendrites can increase the computational capacity of the neuron. This work predicted that synapses active together should be close together in space, a phenomenon called synaptic clustering. Experimental evidence has shown that, in the absence of sensory stimulation, synapses nearby on the same dendrite tend to be active together more than expected by chance. Synaptic clustering, however, does not seem to be ubiquitous: other groups have reported that nearby synapses can respond to different features of a stimulus during sensory evoked activity. In other words, synapses that are active together during sensory evoked activity can be far apart in space, a phenomenon we term synaptic scattering. To unify these apparently inconsistent experimental results, we use a computational framework to study the formation of a synaptic architecture (a set of synaptic weights) displaying both synaptic clustering and scattering. We present three conditions under which a neuron can learn such a synaptic architecture: (i) presynaptic inputs are organized into correlated groups of neurons; (ii) the postsynaptic neuron is compartmentalized in subunits representing dendrites; and (iii) the synaptic plasticity rule is local within a subunit. Importantly, we show that given the same synaptic architecture, synaptic clustering is expressed during spontaneous activity, i.e. in the absence of sensory evoked activity, whereas synaptic scattering is present under evoked activity. Interestingly, reduced dendritic morphology in our model leads to a pathological hyper-excitability, as observed for instance in Alzheimer's disease. This work therefore unifies a seemingly contradictory set of experimental observations: we demonstrate that the same synaptic architecture can lead to synaptic clustering and scattering depending on the input structure.
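
The constant-synapse requirement can be sketched as a redistribution rule: plasticity may move synapses between dendrites, but never changes how many synapses a presynaptic neuron makes or their strength. The toy below is a construction under that constraint, not the paper's model; the group sizes, co-activity measure, and greedy move rule are all assumptions.

```python
# A schematic sketch (my construction, not the paper's model) of learning
# under the constraint that each presynaptic neuron keeps a constant number
# of equal-strength synapses, so plasticity can only *relocate* synapses
# across dendrites.

import random

rng = random.Random(1)
N_PRE, N_DEND, K = 20, 4, 4               # presynaptic cells, dendrites, synapses/cell
group = [i // 10 for i in range(N_PRE)]   # two correlated ensembles

# syn[i][d]: synapses neuron i makes on dendrite d (row sums stay equal to K).
syn = [[0] * N_DEND for _ in range(N_PRE)]
for i in range(N_PRE):
    for _ in range(K):
        syn[i][rng.randrange(N_DEND)] += 1

def local_coactivity(i, d):
    """Synapses on dendrite d from neurons correlated with neuron i."""
    return sum(syn[j][d] for j in range(N_PRE) if group[j] == group[i] and j != i)

for _ in range(2000):
    i = rng.randrange(N_PRE)
    # Local rule: move one synapse from the least to the most co-active
    # dendrite, leaving the total count K unchanged.
    best = max(range(N_DEND), key=lambda d: local_coactivity(i, d))
    donors = [d for d in range(N_DEND) if syn[i][d] > 0 and d != best]
    if donors:
        worst = min(donors, key=lambda d: local_coactivity(i, d))
        syn[i][worst] -= 1
        syn[i][best] += 1

# After learning, each ensemble's synapses have clustered onto few dendrites.
for g in (0, 1):
    totals = [sum(syn[i][d] for i in range(N_PRE) if group[i] == g)
              for d in range(N_DEND)]
    print("group", g, "synapses per dendrite:", totals)
```

Because each row of the synapse matrix keeps a fixed sum, learning here can only relocate contacts, which is exactly the constraint the abstract introduces; whether distinct ensembles end up on distinct dendrites still depends on initial conditions and cross-talk, mirroring the failure modes discussed above.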


Archive | 2014

Dendrites Enhance Both Single Neuron and Network Computation

Romain D. Cazé; Mark D. Humphries; Boris Gutkin

In a single dendritic branch of a neuron, multiple excitatory inputs can locally interact in a nonlinear fashion. The local sum of multiple excitatory post-synaptic potentials (EPSPs) can be inferior or superior to their arithmetic sum; in these cases summation is respectively sublinear or supralinear. While this experimental observation can be well explained by conductance-based models, the computational impact of these local nonlinearities remains to be elucidated. Are there any examples of computation that are only possible with nonlinear dendrites? What is the impact of nonlinear dendrites at the network scale? We show here that both supralinear summation and sublinear summation enhance single neuron computation. We use Boolean functions, whose inputs and outputs consist of zeros and ones, and demonstrate that a few local dendritic nonlinearities allow a single neuron to compute new functions like the well-known exclusive OR (XOR). Furthermore, we show that these new computational capacities help resolve two problems faced by networks composed of linearly integrating units. Certain functions require (1) that at least one unit in the network have an arbitrarily large receptive field and (2) that the range of synaptic weights be large. This chapter demonstrates that both of these limitations can be overcome in a network of nonlinearly integrating units.


Neural Computation | 2017

Dendrites Enable a Robust Mechanism for Neuronal Stimulus Selectivity

Romain D. Cazé; Sarah Jarvis; Amanda J. Foust; Simon R. Schultz

Hearing, vision, touch: underlying all of these senses is stimulus selectivity, a robust information processing operation in which cortical neurons respond more to some stimuli than to others. Previous models assume that these neurons receive the highest weighted input from an ensemble encoding the preferred stimulus, but dendrites enable other possibilities. Nonlinear dendritic processing can produce stimulus selectivity based on the spatial distribution of synapses, even if the total preferred stimulus weight does not exceed that of nonpreferred stimuli. Using a multi-subunit nonlinear model, we demonstrate that stimulus selectivity can arise from the spatial distribution of synapses. We propose this as a general mechanism for information processing by neurons possessing dendritic trees. Moreover, we show that this implementation of stimulus selectivity increases the neuron's robustness to synaptic and dendritic failure. Importantly, our model can maintain stimulus selectivity over a larger range of synapse or dendrite loss than an equivalent linear model. We then use a layer 2/3 biophysical neuron model to show that our implementation is consistent with two recent experimental observations: (1) one can observe a mixture of selectivities in dendrites that can differ from the somatic selectivity, and (2) hyperpolarization can broaden somatic tuning without affecting dendritic tuning. Our model predicts that an initially nonselective neuron can become selective when depolarized. In addition to motivating new experiments, the model's increased robustness to synapse and dendrite loss provides a starting point for fault-resistant neuromorphic chip development.
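
A stripped-down version of this mechanism fits in a few lines. In the sketch below (an illustration with made-up parameters, not the paper's multi-subunit or biophysical models), both stimuli carry identical total synaptic weight, so a linear point neuron could not distinguish them; selectivity comes purely from scattering the preferred ensemble across saturating dendrites while the non-preferred ensemble is clustered on one.

```python
# A minimal sketch (illustrative parameters, not the paper's models) of
# selectivity from synapse placement alone. Both stimuli drive 20
# unit-weight synapses; only their spatial distribution differs.

N_DEND, N_SYN, CEILING, THETA = 10, 20, 4.0, 10.0

def somatic_drive(positions, n_lost):
    """Sum of saturating dendritic sub-unit outputs after synapse deletion."""
    per_dendrite = [0.0] * N_DEND
    for k, dend in enumerate(positions):
        if k >= n_lost:                 # the first n_lost synapses are deleted
            per_dendrite[dend] += 1.0   # unit synaptic weight
    return sum(min(v, CEILING) for v in per_dendrite)

preferred = [d for d in range(N_DEND) for _ in range(2)]  # 2 synapses/dendrite
nonpreferred = [0] * N_SYN                                # all on one dendrite

for frac in (0.0, 0.2, 0.4):
    lost = int(frac * N_SYN)
    pref = somatic_drive(preferred, lost)
    nonpref = somatic_drive(nonpreferred, lost)
    selective = pref >= THETA > nonpref
    print(f"synapse loss {frac:.0%}: preferred drive {pref:.0f}, "
          f"non-preferred drive {nonpref:.0f}, selective: {selective}")
```

Here the clustered non-preferred drive is capped at the dendritic ceiling regardless of how many of its synapses survive, so selectivity degrades gracefully with synapse loss instead of hinging on a small weight difference.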


bioRxiv | 2015

Non-linear dendrites enable robust stimulus selectivity.

Romain D. Cazé; Sarah Jarvis; Simon R. Schultz

Hubel and Wiesel discovered that some neurons in the visual cortex respond selectively to elongated visual stimuli of a particular orientation, proposing an elegant feedforward model to account for this selectivity. Since then, there has been much experimental support for this model; however, several apparently counter-intuitive recent results from in vivo two-photon imaging of the dendrites of layer 2/3 pyramidal neurons in visual and somatosensory cortex cast doubt on the basic form of the model. Firstly, the dendrites may have different stimulus tuning from that of the soma. Secondly, hyperpolarizing a cell can result in it losing its stimulus selectivity, while the dendritic tuning remains unaffected. These results demonstrate the importance of dendrites in generating stimulus selectivity. Here, we implement stimulus selectivity in a biophysical model based on the realistic morphology of a layer 2/3 neuron, which can account for both of these experimental observations within the feedforward framework motivated by Hubel and Wiesel. We show that this new model of stimulus selectivity is robust to the loss of synapses or dendrites, with stimulus selectivity maintained up to losses of 1/2 of the synapses, or 2/7 of the dendrites, demonstrating that in addition to increasing the computational capacity of neurons, dendrites also increase the robustness of neuronal computation. As well as explaining experimental results not predicted by Hubel and Wiesel, our study shows that dendrites enhance the resilience of cortical information processing, and prompts the development of new neuromorphic chips incorporating dendritic processing into their architecture.


BMC Neuroscience | 2015

A robust model of sensory tuning using dendritic non-linearities

Romain D. Cazé; Sarah Jarvis; Simon R. Schultz

Dendrites, like neurons, can preferentially activate for certain stimuli, but recent experimental evidence suggests that dendritic tuning can differ from the neuronal tuning. For instance, in an L2/3 pyramidal neuron in the mouse visual cortex, dendritic calcium signals display a wide range of tuning profiles, some of which differ from the tuning of the neuronal output [1]. This puzzling observation was unanticipated by the standard Hubel and Wiesel model explaining the origin of visual tuning [2]. The standard model can survive this observation, but only with the addition of superfluous synapses. We propose here an alternative model where synapses responsible for neuronal tuning are dispersed over dendrites. This alternative model builds on previously published results [3]. It possesses non-linear dendritic compartments, and in each compartment the result of multiple excitatory inputs can be smaller than their arithmetic sum. These non-linear and independent sites of synaptic integration create neuronal tuning: groups of correlated presynaptic inputs encode the stimulus identity, and only the group that encodes the preferred stimulus targets different dendrites and leads to a response. Groups coding for non-preferred stimuli instead target the same dendrite, which explains the wide range of dendritic tunings observed experimentally. Moreover, we demonstrate that this implementation of neuronal tuning is robust to the loss of dendrites. Thus, our alternative model not only reproduces the experimental observations, but is also robust to the loss of dendrites.


BMC Neuroscience | 2015

Extending the tempotron with hierarchical dendrites allows faster learning

Sarah Jarvis; Romain D. Cazé; Claudia Clopath

Certain functional classes of neurons seem to be able to differentiate between input patterns with high temporal precision. As input patterns to neurons can consist of up to thousands of inputs, the ability to identify target patterns amongst statistically similar background patterns is impressive and has been suggested to occur via modification of synaptic weights. Yet how does learning that occurs locally at the level of synapses converge, without global coordination? While this question is important for understanding how synaptic learning gives rise to dendritic computation, it is impractical to test experimentally. Insight from abstract neuronal models, such as multilayer perceptron networks [1], provides a glimpse of the difficulty of ensuring global convergence during learning when weight changes are local. In this work, we report that by extending the tempotron model [2], we are able to demonstrate that learning can indeed proceed locally while also converging globally. By arranging dendritic units in a hierarchy that feeds into a master dendrite and soma, learning occurs over two timescales: locally on each dendritic branch, using a simple incremental plasticity rule; and at a slower timescale on the main branch, where information is integrated across branches. We observe that the inclusion of dendrites reduces the learning time required by allowing dendrites to subsample the entire input space. In comparison to one single dendrite receiving n inputs, the inclusion of m dendrites means that each dendrite is now subsampling n/m inputs, which not only results in fewer learning epochs before convergence but also improves the overall robustness against noise. Thus, the move from a single tempotron to a set of hierarchically configured tempotrons, representing dendrites, imbues the unit with recognition of pattern fragments (with a pattern capacity > m!), faster convergence during learning and increased noise tolerance (both of which scale with m). The inclusion of dendrites also allows them to signal in sequences with varying relative temporal offsets, granting the neuron the opportunity to differentiate between multiple positive patterns to identify not only which pattern was observed but also when. Furthermore, we have also demonstrated that tempotrons can be extended to work for non-episodic patterns, i.e. ongoing and without reset, and can also perform well when distractor patterns greatly outnumber positive patterns. Conceptually, our model reconciles the tempotron learning rule with work on dendritic computation which argues for dendrites as computational units [3]. It also provides a potential explanation for phenomena observed experimentally, such as neurons in visual cortex whose dendrites had preferred orientations different from those of the soma [4].
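
The subsampling argument can be illustrated without the tempotron machinery. The toy below replaces the tempotron's spike-timing rule with plain perceptron-style units, so it shows only the division of n inputs into m local learners of n/m inputs each; the pattern statistics, local update, and vote-counting readout are all assumptions, not the model described above.

```python
# A toy sketch of the subsampling idea only, using plain perceptron-style
# units rather than the tempotron's spike-timing rule [2]: m dendrites each
# see n/m of the inputs, learn purely locally, and a master unit counts
# dendrite votes. All statistics and parameters are illustrative.

import random

rng = random.Random(42)
n, m = 120, 6                       # n inputs, m dendrites sampling n/m each
chunk = n // m

make_pattern = lambda: [rng.randint(0, 1) for _ in range(n)]
target = make_pattern()             # the pattern the unit must recognise
background = [make_pattern() for _ in range(50)]

w = [[0.0] * chunk for _ in range(m)]   # local weights per dendrite
b = [0.0] * m

def dendrite_fires(d, x):
    seg = x[d * chunk:(d + 1) * chunk]
    return sum(wi * xi for wi, xi in zip(w[d], seg)) + b[d] > 0

for _ in range(2000):
    x, label = (target, 1) if rng.random() < 0.5 else (rng.choice(background), 0)
    for d in range(m):              # purely local learning on each branch
        err = label - dendrite_fires(d, x)
        if err:                     # each dendrite learns its pattern fragment
            seg = x[d * chunk:(d + 1) * chunk]
            for k in range(chunk):
                w[d][k] += 0.1 * err * seg[k]
            b[d] += 0.1 * err

votes = lambda x: sum(dendrite_fires(d, x) for d in range(m))
print("dendrite votes for target:", votes(target), "of", m)
print("highest votes for any background pattern:", max(map(votes, background)))
```

With n = 120 and m = 6, each dendrite learns over only 20 inputs, which is the n/m subsampling the abstract credits for faster convergence; the vote count also exposes pattern fragments, since partial matches recruit a proportional subset of dendrites.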

Collaboration


Dive into Romain D. Cazé's collaborations.

Top Co-Authors

Boris Gutkin

École Normale Supérieure


Sarah Jarvis

Imperial College London


Amanda J. Foust

Washington State University
