Sara E. Morrison
Columbia University
Publications
Featured research published by Sara E. Morrison.
Neuron | 2007
Marina A. Belova; Joseph J. Paton; Sara E. Morrison; C. Daniel Salzman
Animals and humans learn to approach and acquire pleasant stimuli and to avoid or defend against aversive ones. However, both pleasant and aversive stimuli can elicit arousal and attention, and their salience or intensity increases when they occur by surprise. Thus, adaptive behavior may require that neural circuits compute both stimulus valence--or value--and intensity. To explore how these computations may be implemented, we examined neural responses in the primate amygdala to unexpected reinforcement during learning. Many amygdala neurons responded differently to reinforcement depending upon whether or not it was expected. In some neurons, this modulation occurred only for rewards or aversive stimuli, but not both. In other neurons, expectation similarly modulated responses to both rewards and punishments. These different neuronal populations may subserve two sorts of processes mediated by the amygdala: those activated by surprising reinforcements of both valences-such as enhanced arousal and attention-and those that are valence-specific, such as fear or reward-seeking behavior.
Current Opinion in Neurobiology | 2010
Sara E. Morrison; C. Daniel Salzman
Recent advances indicate that the amygdala represents valence: a general appetitive/aversive affective characteristic that bears similarity to the neuroeconomic concept of value. Neurophysiological studies show that individual amygdala neurons respond differentially to a range of stimuli with positive or negative affective significance. Meanwhile, increasingly specific lesion/inactivation studies reveal that the amygdala is necessary for processes--for example, fear extinction and reinforcer devaluation--that involve updating representations of value. Furthermore, recent neuroimaging studies suggest that the human amygdala mediates performance on many reward-based decision-making tasks. The encoding of affective significance by the amygdala might be best described as a representation of state value-a representation that is useful for coordinating physiological, behavioral, and cognitive responses in an affective/emotional context.
Annals of the New York Academy of Sciences | 2007
C. Daniel Salzman; Joseph J. Paton; Marina A. Belova; Sara E. Morrison
The amygdala and orbitofrontal cortex (OFC) are often thought of as components of a neural circuit that assigns affective significance—or value—to sensory stimuli so as to anticipate future events and adjust behavioral and physiological responses. Much recent work has been aimed at understanding the distinct contributions of the amygdala and OFC to these processes, but a detailed understanding of the physiological mechanisms underlying learning about value remains lacking. To gain insight into these processes, we have focused initially on characterizing the neural signals of the primate amygdala, and more recently of the primate OFC, during appetitive and aversive reinforcement learning procedures. We have employed a classical conditioning procedure whereby monkeys form associations between visual stimuli and rewards or aversive stimuli. After learning these initial associations, we reverse the stimulus‐reinforcement contingencies, and monkeys learn these new associations. We have discovered that separate populations of neurons in the amygdala represent the positive and negative value of conditioned visual stimuli. This representation of value updates rapidly upon image value reversal, as fast as monkeys learn, often within a single trial. We suggest that representations of value in the amygdala may change through multiple interrelated mechanisms: some that arise from fairly simple Hebbian processes, and others that may involve gated inputs from other brain areas, such as the OFC.
The Journal of Neuroscience | 2013
W. Zhang; D. M. Schneider; Marina A. Belova; Sara E. Morrison; Joseph J. Paton; C. Daniel Salzman
Recent electrophysiological studies on the primate amygdala have advanced our understanding of how individual neurons encode information relevant to emotional processes, but it remains unclear how these neurons are functionally and anatomically organized. To address this, we analyzed cross-correlograms of amygdala spike trains recorded during a task in which monkeys learned to associate novel images with rewarding and aversive outcomes. Using this task, we have recently described two populations of amygdala neurons: one that responds more strongly to images predicting reward (positive value-coding), and another that responds more strongly to images predicting an aversive stimulus (negative value-coding). Here, we report that these neural populations are organized into distinct, but anatomically intermingled, appetitive and aversive functional circuits, which are dynamically modulated as animals used the images to predict outcomes. Furthermore, we report that responses to sensory stimuli are prevalent in the lateral amygdala, and are also prevalent in the medial amygdala for sensory stimuli that are emotionally significant. The circuits identified here could potentially mediate valence-specific emotional behaviors thought to involve the amygdala.
Annals of the New York Academy of Sciences | 2011
Sara E. Morrison; C. Daniel Salzman
Individuals weigh information about both rewarding and aversive stimuli to make adaptive decisions. Most studies of the orbitofrontal cortex (OFC), an area where appetitive and aversive neural subsystems might interact, have focused only on reward. Using a classical conditioning task where novel stimuli are paired with a reward or an aversive air puff, we discovered that two groups of orbitofrontal neurons respond preferentially to conditioned stimuli associated with rewarding and aversive outcomes; however, information about appetitive and aversive stimuli converges on individual neurons from both populations. Therefore, neurons in the OFC might participate in appetitive and aversive networks that track the motivational significance of stimuli even when they vary in valence and sensory modality. Further, we show that these networks, which also extend to the amygdala, exhibit different rates of change during reversal learning. Thus, although both networks represent appetitive and aversive associations, their distinct temporal dynamics might indicate different roles in learning processes.
Frontiers in Neuroscience | 2015
Sara E. Morrison; Michael A. Bamkole; Saleem M. Nicola
During Pavlovian conditioning, a conditioned stimulus (CS) may act as a predictor of a reward to be delivered in another location. Individuals vary widely in their propensity to engage with the CS (sign tracking) or with the site of eventual reward (goal tracking). It is often assumed that sign tracking involves the association of the CS with the motivational value of the reward, resulting in the CS acquiring incentive value independent of the outcome. However, experimental evidence for this assumption is lacking. In order to test the hypothesis that sign tracking behavior does not rely on a neural representation of the outcome, we employed a reward devaluation procedure. We trained rats on a classic Pavlovian paradigm in which a lever CS was paired with a sucrose reward, then devalued the reward by pairing sucrose with illness in the absence of the CS. We found that sign tracking behavior was enhanced, rather than diminished, following reward devaluation; thus, sign tracking is clearly independent of a representation of the outcome. In contrast, goal tracking behavior was decreased by reward devaluation. Furthermore, when we divided rats into those with high propensity to engage with the lever (sign trackers) and low propensity to engage with the lever (goal trackers), we found that nearly all of the effects of devaluation could be attributed to the goal trackers. These results show that sign tracking and goal tracking behavior may be the output of different associative structures in the brain, providing insight into the mechanisms by which reward-associated stimuli—such as drug cues—come to exert control over behavior in some individuals.
NeuroImage | 2010
Mattia Rigotti; Daniel B. Rubin; Sara E. Morrison; C. Daniel Salzman; Stefano Fusi
Complex tasks often require the memory of recent events, the knowledge about the context in which they occur, and the goals we intend to reach. All this information is stored in our mental states. Given a set of mental states, reinforcement learning (RL) algorithms predict the optimal policy that maximizes future reward. RL algorithms assign a value to each already-known state so that discovering the optimal policy reduces to selecting the action leading to the state with the highest value. But how does the brain create representations of these mental states in the first place? We propose a mechanism for the creation of mental states that contain information about the temporal statistics of the events in a particular context. We suggest that the mental states are represented by stable patterns of reverberating activity, which are attractors of the neural dynamics. These representations are built from neurons that are selective to specific combinations of external events (e.g. sensory stimuli) and pre-existent mental states. Consistent with this notion, we find that neurons in the amygdala and in orbitofrontal cortex (OFC) often exhibit this form of mixed selectivity. We propose that activating different mixed selectivity neurons in a fixed temporal order modifies synaptic connections so that conjunctions of events and mental states merge into a single pattern of reverberating activity. This process corresponds to the birth of a new, different mental state that encodes a different temporal context. The concretion process depends on temporal contiguity, i.e. on the probability that a combination of an event and mental states follows or precedes the events and states that define a certain context. The information contained in the context thereby allows an animal to assign unambiguously a value to the events that initially appeared in different situations with different meanings.
Frontiers in Neuroscience | 2012
Crista L. Barberini; Sara E. Morrison; Alex Saez; Brian Lau; C. Daniel Salzman
Decision-making often involves using sensory cues to predict possible rewarding or punishing reinforcement outcomes before selecting a course of action. Recent work has revealed complexity in how the brain learns to predict rewards and punishments. Analysis of neural signaling during and after learning in the amygdala and orbitofrontal cortex, two brain areas that process appetitive and aversive stimuli, reveals a dynamic relationship between appetitive and aversive circuits. Specifically, the relationship between signaling in appetitive and aversive circuits in these areas shifts as a function of learning. Furthermore, although appetitive and aversive circuits may often drive opposite behaviors – approaching or avoiding reinforcement depending upon its valence – these circuits can also drive similar behaviors, such as enhanced arousal or attention; these processes also may influence choice behavior. These data highlight the formidable challenges ahead in dissecting how appetitive and aversive neural circuits interact to produce a complex and nuanced range of behaviors.
The Journal of Neuroscience | 2014
Sara E. Morrison; Saleem M. Nicola
Both animals and humans often prefer rewarding options that are nearby over those that are distant, but the neural mechanisms underlying this bias are unclear. Here we present evidence that a proximity signal encoded by neurons in the nucleus accumbens drives proximate reward bias by promoting impulsive approach to nearby reward-associated objects. On a novel decision-making task, rats chose the nearer option even when it resulted in greater effort expenditure and delay to reward; therefore, proximate reward bias was unlikely to be caused by effort or delay discounting. The activity of individual neurons in the nucleus accumbens did not consistently encode the reward or effort associated with specific alternatives, suggesting that it does not participate in weighing the values of options. In contrast, proximity encoding was consistent and did not depend on the subsequent choice, implying that accumbens activity drives approach to the nearest rewarding option regardless of its specific associated reward size or effort level.
Nature | 2006
Joseph J. Paton; Marina A. Belova; Sara E. Morrison; C. Daniel Salzman