Publications


Featured research published by G. Elliott Wimmer.


Science | 2012

Preference by association: How memory mechanisms in the hippocampus bias decisions

G. Elliott Wimmer; Daphna Shohamy

The Right Choice? So-called irrational decisions made by humans are popular fodder for “believe it or not” stories. But what's actually happening when we make choices that do not seem to be justifiable on purely economic or logical grounds? Presumably, we are not simply making errors; instead, our choices may reflect an internal bias that we are not aware of. Wimmer and Shohamy (p. 270) show how the hippocampus can instill an unconscious bias in valuations, whereby an object that is not highly valued on its own increases in value when it becomes implicitly associated with a truly high-value object. As a consequence, we then end up preferring the associated object over a neutral object of equal objective value while not really knowing why. Remembered links between objects can result in the unintentional linking of their values and can affect choices.

Every day people make new choices between alternatives that they have never directly experienced. Yet, such decisions are often made rapidly and confidently. Here, we show that the hippocampus, traditionally known for its role in building long-term declarative memories, enables the spread of value across memories, thereby guiding decisions between new choice options. Using functional brain imaging in humans, we discovered that giving people monetary rewards led to activation of a preestablished network of memories, spreading the positive value of reward to nonrewarded items stored in memory. Later, people were biased to choose these nonrewarded items. This decision bias was predicted by activity in the hippocampus, reactivation of associated memories, and connectivity between memory and reward regions in the brain. These findings explain how choices among new alternatives emerge automatically from the associative mechanisms by which the brain builds memories. Further, our findings demonstrate a previously unknown role for the hippocampus in value-based decisions.


Neuroreport | 2008

Nucleus Accumbens Activation Mediates the Influence of Reward Cues on Financial Risk-Taking

Brian Knutson; G. Elliott Wimmer; Camelia M. Kuhnen; Piotr Winkielman

In functional magnetic resonance imaging research, nucleus accumbens (NAcc) activation spontaneously increases before financial risk taking. As anticipation of diverse rewards can increase NAcc activation, even incidental reward cues may influence financial risk taking. Using event-related functional magnetic resonance imaging, we predicted and found that anticipation of viewing rewarding stimuli (erotic pictures for 15 heterosexual men) increased financial risk taking, and that this effect was partially mediated by increases in NAcc activation. These results are consistent with the notion that incidental reward cues influence financial risk taking by altering anticipatory affect, and so identify a neuropsychological mechanism that may underlie effective emotional appeals in financial, marketing, and political domains.
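
As a rough illustration of the mediation logic described above (reward cue -> NAcc activation -> risk taking), the sketch below runs a simple regression-based mediation on simulated data. The variable names, effect sizes, and simulated design are assumptions for illustration, not the authors' analysis pipeline.

```python
# Hypothetical sketch of a regression-based mediation analysis on simulated data;
# effect sizes and variable names are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
n = 200
cue = rng.integers(0, 2, n).astype(float)              # 0 = neutral cue, 1 = reward cue
nacc = 0.5 * cue + rng.normal(0, 1, n)                  # NAcc activation partly driven by cue
risk = 0.3 * cue + 0.4 * nacc + rng.normal(0, 1, n)     # risk taking driven by cue and NAcc

def ols(y, X):
    """Least-squares coefficients for y ~ intercept + X."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # [intercept, slopes...]

c_total = ols(risk, cue)[1]                              # total effect of cue on risk taking
a_path = ols(nacc, cue)[1]                               # cue -> NAcc
_, c_prime, b_path = ols(risk, np.column_stack([cue, nacc]))  # direct effect and NAcc -> risk
indirect = a_path * b_path                               # mediated (indirect) effect
print(f"total={c_total:.2f} direct={c_prime:.2f} indirect={indirect:.2f} "
      f"proportion mediated={indirect / c_total:.2f}")
```

In practice the indirect (a times b) effect would be tested with a bootstrap or a comparable inferential procedure rather than read off point estimates.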


Nature Neuroscience | 2014

Representation of aversive prediction errors in the human periaqueductal gray

Mathieu Roy; Daphna Shohamy; Nathaniel D. Daw; Marieke Jepma; G. Elliott Wimmer; Tor D. Wager

Pain is a primary driver of learning and motivated action. It is also a target of learning, as nociceptive brain responses are shaped by learning processes. We combined an instrumental pain avoidance task with an axiomatic approach to assessing fMRI signals related to prediction errors (PEs), which drive reinforcement-based learning. We found that pain PEs were encoded in the periaqueductal gray (PAG), a structure important for pain control and learning in animal models. Axiomatic tests combined with dynamic causal modeling suggested that ventromedial prefrontal cortex, supported by putamen, provides an expected value–related input to the PAG, which then conveys PE signals to prefrontal regions important for behavioral regulation, including orbitofrontal, anterior mid-cingulate and dorsomedial prefrontal cortices. Thus, pain-related learning involves distinct neural circuitry, with implications for behavior and pain dynamics.
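
For readers unfamiliar with the axiomatic approach mentioned above, the sketch below illustrates the quantity being tested, a prediction error defined as experienced outcome minus expected outcome, and the qualitative properties such tests demand of a candidate neural signal. The numbers and function names are illustrative assumptions, not the study's stimuli or analysis, and for aversive learning the sign convention can differ depending on how the signal is modeled.

```python
# Illustrative sketch (not the authors' analysis): delta = experienced outcome - expected outcome.
# The assertions check, on toy numbers, the qualitative properties axiomatic tests require of a PE.
import numpy as np

def prediction_error(outcome, expectation):
    """Prediction error: difference between what occurred and what was expected."""
    return outcome - expectation

pain_levels = np.array([0.0, 0.5, 1.0])   # hypothetical normalized pain intensities

# 1) Holding expectation fixed, the signal should increase with the outcome received.
pe_fixed_expectation = prediction_error(pain_levels, expectation=0.5)
assert np.all(np.diff(pe_fixed_expectation) > 0)

# 2) Holding the outcome fixed, the signal should decrease as expectation increases.
pe_fixed_outcome = prediction_error(0.5, expectation=pain_levels)
assert np.all(np.diff(pe_fixed_outcome) < 0)

# 3) Fully predicted outcomes (expectation equals outcome) should produce no signal.
assert np.allclose(prediction_error(pain_levels, expectation=pain_levels), 0.0)
```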


European Journal of Neuroscience | 2012

Generalization of value in reinforcement learning by humans

G. Elliott Wimmer; Nathaniel D. Daw; Daphna Shohamy

Research in decision‐making has focused on the role of dopamine and its striatal targets in guiding choices via learned stimulus–reward or stimulus–response associations, behavior that is well described by reinforcement learning theories. However, basic reinforcement learning is relatively limited in scope and does not explain how learning about stimulus regularities or relations may guide decision‐making. A candidate mechanism for this type of learning comes from the domain of memory, which has highlighted a role for the hippocampus in learning of stimulus–stimulus relations, typically dissociated from the role of the striatum in stimulus–response learning. Here, we used functional magnetic resonance imaging and computational model‐based analyses to examine the joint contributions of these mechanisms to reinforcement learning. Humans performed a reinforcement learning task with added relational structure, modeled after tasks used to isolate hippocampal contributions to memory. On each trial participants chose one of four options, but the reward probabilities for pairs of options were correlated across trials. This (uninstructed) relationship between pairs of options potentially enabled an observer to learn about option values based on experience with the other options and to generalize across them. We observed blood oxygen level‐dependent (BOLD) activity related to learning in the striatum and also in the hippocampus. By comparing a basic reinforcement learning model to one augmented to allow feedback to generalize between correlated options, we tested whether choice behavior and BOLD activity were influenced by the opportunity to generalize across correlated options. Although such generalization goes beyond standard computational accounts of reinforcement learning and striatal BOLD, both choices and striatal BOLD activity were better explained by the augmented model. Consistent with the hypothesized role for the hippocampus in this generalization, functional connectivity between the ventral striatum and hippocampus was modulated, across participants, by the ability of the augmented model to capture participants’ choice. Our results thus point toward an interactive model in which striatal reinforcement learning systems may employ relational representations typically associated with the hippocampus.
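
A minimal sketch of the two learning models the abstract compares is given below, assuming a simple delta-rule learner: the basic model updates only the chosen option, while the augmented model also spreads a weighted update to the option whose reward probability is yoked to it. The parameter names and values (learning rate, generalization weight, pairing map) are assumptions for illustration, and the sketch treats the correlation between paired options as positive; a negative correlation would flip the sign of the generalized feedback.

```python
# Sketch of a basic delta-rule learner vs. an augmented learner that generalizes
# feedback to the correlated (paired) option. Parameter values are assumed.
import numpy as np

ALPHA = 0.2                        # learning rate (assumed)
W_GEN = 0.5                        # generalization weight; 0 recovers the basic model
PAIR = {0: 1, 1: 0, 2: 3, 3: 2}    # assumed yoking of the four options into two pairs

def update_basic(q, chosen, reward, alpha=ALPHA):
    """Standard delta-rule update of the chosen option only."""
    q = q.copy()
    q[chosen] += alpha * (reward - q[chosen])
    return q

def update_augmented(q, chosen, reward, alpha=ALPHA, w=W_GEN):
    """Basic update plus a weighted generalization of the feedback to the paired option."""
    q = update_basic(q, chosen, reward, alpha)
    paired = PAIR[chosen]
    q[paired] += w * alpha * (reward - q[paired])
    return q

def softmax_choice(q, beta=3.0, rng=np.random.default_rng(0)):
    """Softmax choice rule over the current option values."""
    z = beta * q
    p = np.exp(z - z.max())
    p /= p.sum()
    return int(rng.choice(len(q), p=p))

q_values = np.zeros(4)
q_values = update_augmented(q_values, chosen=0, reward=1.0)
print(q_values)   # both option 0 and its paired option 1 gain value
```

Fitting both models to choices and comparing their fit (and the corresponding model-based regressors against BOLD) is the kind of comparison the abstract describes.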


The Journal of Neuroscience | 2014

Episodic Memory Encoding Interferes with Reward Learning and Decreases Striatal Prediction Errors

G. Elliott Wimmer; Erin Kendall Braun; Nathaniel D. Daw; Daphna Shohamy

Learning is essential for adaptive decision making. The striatum and its dopaminergic inputs are known to support incremental reward-based learning, while the hippocampus is known to support encoding of single events (episodic memory). Although traditionally studied separately, in even simple experiences, these two types of learning are likely to co-occur and may interact. Here we sought to understand the nature of this interaction by examining how incremental reward learning is related to concurrent episodic memory encoding. During the experiment, human participants made choices between two options (colored squares), each associated with a drifting probability of reward, with the goal of earning as much money as possible. Incidental, trial-unique object pictures, unrelated to the choice, were overlaid on each option. The next day, participants were given a surprise memory test for these pictures. We found that better episodic memory was related to a decreased influence of recent reward experience on choice, both within and across participants. fMRI analyses further revealed that during learning the canonical striatal reward prediction error signal was significantly weaker when episodic memory was stronger. This decrease in reward prediction error signals in the striatum was associated with enhanced functional connectivity between the hippocampus and striatum at the time of choice. Our results suggest a mechanism by which memory encoding may compete for striatal processing and provide insight into how interactions between different forms of learning guide reward-based decision making.
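
To make the task structure and the "canonical striatal reward prediction error" concrete, here is a hypothetical sketch of a two-option bandit with slowly drifting reward probabilities and the trial-by-trial delta that would serve as a parametric fMRI regressor; all parameter values are assumptions rather than the study's settings.

```python
# Hypothetical sketch: two-option bandit with drifting reward probabilities,
# a delta-rule learner, and the canonical reward prediction error (RPE) per trial.
import numpy as np

rng = np.random.default_rng(1)
n_trials, alpha, beta = 150, 0.3, 4.0        # assumed trial count and parameters
p_reward = np.array([0.6, 0.4])              # starting reward probabilities for the two options
q = np.zeros(2)
rpe_trace = []

for t in range(n_trials):
    # Softmax choice between the two options (the colored squares in the task).
    z = beta * q
    p = np.exp(z - z.max())
    p /= p.sum()
    choice = rng.choice(2, p=p)

    reward = float(rng.random() < p_reward[choice])
    delta = reward - q[choice]               # canonical RPE: outcome minus expectation
    q[choice] += alpha * delta
    rpe_trace.append(delta)

    # Reward probabilities drift slowly across trials (bounded random walk).
    p_reward = np.clip(p_reward + rng.normal(0.0, 0.03, size=2), 0.2, 0.8)

print(f"mean |RPE| across the session: {np.mean(np.abs(rpe_trace)):.2f}")
```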


Social Cognitive and Affective Neuroscience | 2009

Available alternative incentives modulate anticipatory nucleus accumbens activation

Jeffrey C. Cooper; Nick G. Hollon; G. Elliott Wimmer; Brian Knutson

A reward or punishment can seem better or worse depending on what else might have happened. Little is known, however, about how neural representations of an anticipated incentive might be influenced by the available alternatives. We used event-related fMRI to investigate activation in the nucleus accumbens (NAcc) while we varied the available alternative incentives in a monetary incentive delay task. Some task blocks included only uncertain gains and losses; others included the same uncertain gains and losses intermixed with certain gains and losses. The availability of certain gains and losses increased NAcc activation for uncertain losses and decreased the difference between uncertain gains and losses. We suggest that this pattern of activation can result from reference point changes across blocks, and that the worst available loss may serve as an important anchor for NAcc activation. These findings imply that NAcc activation represents anticipated incentive value relative to the current context of available alternative gains and losses.


Nature Neuroscience | 2013

Dopamine and the cost of aging

Daphna Shohamy; G. Elliott Wimmer

Cognitive function declines as part of the normal aging process. A study finds that the dopamine-boosting drug L-DOPA changes value representation in the brain and improves reinforcement learning in older individuals.


bioRxiv | 2017

Reinforcement learning over time: spaced versus massed training establishes stronger value associations

G. Elliott Wimmer; Russell A. Poldrack

Over the past few decades, neuroscience research has illuminated the neural mechanisms supporting learning from reward feedback, demonstrating a critical role for the striatum and midbrain dopamine system. Learning paradigms are increasingly being extended to understand learning dysfunctions in mood and psychiatric disorders as well as addiction in the area of computational psychiatry. However, one potentially critical characteristic that this research ignores is the effect of time on learning: human feedback learning paradigms are conducted in a single rapidly paced session, while learning experiences in ecologically relevant circumstances and in animal research are almost always separated by longer periods of time. Event spacing is known to have strong positive effects on item memory across species and in reward learning in animals. Remarkably, the effect of spaced training on human reinforcement learning has not been investigated. In our experiments, we examined reward learning distributed across weeks vs. learning completed in a traditionally-paced or “massed” single session. Participants learned to make the best response for landscape stimuli that were either associated with a positive or negative value. In our first study, as expected, we found that after equal amounts of extensive training, accuracy was high and equivalent between the spaced and massed conditions. However, in a final online test 3 weeks later, we found that participants exhibited significantly greater memory for the value of spaced-trained stimuli. In our second study, our methods allowed for a direct comparison of maintenance of conditioning. We found that spaced training again had a beneficial effect: more than 87% of conditioning was maintained for spaced-trained stimuli, while only 30% was maintained for massed-trained stimuli. In addition, supporting a role for working memory in massed learning, across both studies we found a significant positive correlation between initial learning and working memory capacity. Our results indicate that single-session learning tasks may not lead to the kind of robust and lasting value associations that are characteristic of “habitual” value associations. Overall, these studies begin to address a large gap in our knowledge of fundamental processes of human reinforcement learning, with potentially broad implications for our understanding of learning in mood disorders and addiction.


The Journal of Neuroscience | 2018

Reward learning over weeks versus minutes increases the neural representation of value in the human brain

G. Elliott Wimmer; Jamie K Li; Krzysztof J. Gorgolewski; Russell A. Poldrack

Over the past few decades, neuroscience research has illuminated the neural mechanisms supporting learning from reward feedback. Learning paradigms are increasingly being extended to study mood and psychiatric disorders as well as addiction. However, one potentially critical characteristic that this research ignores is the effect of time on learning: human feedback learning paradigms are usually conducted in a single rapidly paced session, whereas learning experiences in ecologically relevant circumstances and in animal research are almost always separated by longer periods of time. In our experiments, we examined reward learning in short condensed sessions distributed across weeks versus learning completed in a single “massed” session in male and female participants. As expected, we found that after equal amounts of training, accuracy was matched between the spaced and massed conditions. However, in a 3-week follow-up, we found that participants exhibited significantly greater memory for the value of spaced-trained stimuli. Supporting a role for short-term memory in massed learning, we found a significant positive correlation between initial learning and working memory capacity. Neurally, we found that patterns of activity in the medial temporal lobe and prefrontal cortex showed stronger discrimination of spaced- versus massed-trained reward values. Further, patterns in the striatum discriminated between spaced- and massed-trained stimuli overall. Our results indicate that single-session learning tasks engage partially distinct learning mechanisms from distributed training. Our studies begin to address a large gap in our knowledge of human learning from reinforcement, with potential implications for our understanding of mood disorders and addiction. SIGNIFICANCE STATEMENT Humans and animals learn to associate predictive value with stimuli and actions, and these values then guide future behavior. Such reinforcement-based learning often happens over long time periods, in contrast to most studies of reward-based learning in humans. In experiments that tested the effect of spacing on learning, we found that associations learned in a single massed session were correlated with short-term memory and significantly decayed over time, whereas associations learned in short massed sessions over weeks were well maintained. Additionally, patterns of activity in the medial temporal lobe and prefrontal cortex discriminated the values of stimuli learned over weeks but not minutes. These results highlight the importance of studying learning over time, with potential applications to drug addiction and psychiatry.
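
As background on what "pattern discrimination of value" typically involves, the sketch below decodes high- versus low-value labels from simulated multivoxel patterns with cross-validated logistic regression, run separately for a hypothetical spaced and massed condition. It illustrates the general approach only; it is not the authors' analysis, and the signal strengths are arbitrary.

```python
# Illustrative sketch (not the authors' pipeline): cross-validated decoding of value
# from simulated multivoxel patterns, for a toy "spaced" and "massed" condition.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_voxels = 80, 50

def simulate_patterns(signal_strength):
    """Toy voxel patterns in which the value label is embedded with a given strength."""
    labels = rng.integers(0, 2, n_trials)                     # 0 = low value, 1 = high value
    signal = np.outer(labels - 0.5, rng.normal(0, 1, n_voxels))
    patterns = signal_strength * signal + rng.normal(0, 1, (n_trials, n_voxels))
    return patterns, labels

for condition, strength in [("spaced", 0.8), ("massed", 0.2)]:   # strengths are arbitrary
    X, y = simulate_patterns(strength)
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    print(f"{condition}-trained value decoding accuracy: {acc:.2f}")
```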


bioRxiv | 2016

Pain to remember: a single incidental association with pain leads to increased memory for neutral items one year later

G. Elliott Wimmer; Christian Buechel

Negative and positive experiences can exert a strong influence on later memory. Our emotional experiences are composed of many different elements (people, places, things), most of them neutral. Do emotional experiences lead to enhanced long-term memory for these neutral elements as well? Demonstrating a lasting effect of emotion on memory is particularly important if memory for emotional events is to adaptively guide behavior days, weeks, or years later. We thus tested whether aversive experiences modulate very long-term episodic memory in an fMRI experiment. Participants experienced episodes of high or low pain in conjunction with the presentation of incidental, trial-unique neutral object pictures. In a scanned surprise immediate memory test, we found no effect of pain on recognition strength. Critically, in a follow-up memory test one year later, we found that pain significantly enhanced memory. Neurally, we provide a novel demonstration of activity predicting memory one year later, whereby greater insula activity and more distinct distributed patterns of insular activity in the initial session correlated with memory for pain-associated objects. Generally, our results suggest that pairing episodes with arousing negative stimuli may lead to very long-lasting memory enhancements.

Collaboration


Dive into G. Elliott Wimmer's collaborations.

Top Co-Authors

Camelia M. Kuhnen

University of North Carolina at Chapel Hill


Drazen Prelec

Massachusetts Institute of Technology


Scott Rick

University of Michigan


Piotr Winkielman

University of Social Sciences and Humanities
