
Publications

Featured research published by Guido Biele.


The Journal of Neuroscience | 2010

Neural Processing of Risk

Peter N. C. Mohr; Guido Biele; Hauke R. Heekeren

In our everyday life, we often have to make decisions with risky consequences, such as choosing a restaurant for dinner or choosing a form of retirement saving. To date, however, little is known about how the brain processes risk. Recent conceptualizations of risky decision making highlight that it is generally associated with emotions but do not specify how emotions are implicated in risk processing. Moreover, little is known about risk processing in non-choice situations and how potential losses influence risk processing. Here we used quantitative meta-analyses of functional magnetic resonance imaging experiments on risk processing in the brain to investigate (1) how risk processing is influenced by emotions, (2) how it differs between choice and non-choice situations, and (3) how it changes when losses are possible. By showing that, over a range of experiments and paradigms, risk is consistently represented in the anterior insula, a brain region known to process aversive emotions such as anxiety, disappointment, or regret, we provide evidence that risk processing is influenced by emotions. Furthermore, our results show risk-related activity in the dorsolateral prefrontal cortex and the parietal cortex in choice situations but not in situations in which no choice is involved or a choice has already been made. The anterior insula was predominantly active in the presence of potential losses, indicating that potential losses modulate risk processing.


Proceedings of the National Academy of Sciences of the United States of America | 2010

How the brain integrates costs and benefits during decision making

Ulrike Basten; Guido Biele; Hauke R. Heekeren; Christian J. Fiebach

When we make decisions, the benefits of an option often need to be weighed against accompanying costs. Little is known, however, about the neural systems underlying such cost–benefit computations. Using functional magnetic resonance imaging and choice modeling, we show that decision making based on cost–benefit comparison can be explained as a stochastic accumulation of cost–benefit difference. Model-driven functional MRI shows that ventromedial and left dorsolateral prefrontal cortex compare costs and benefits by computing the difference between neural signatures of anticipated benefits and costs from the ventral striatum and amygdala, respectively. Moreover, changes in blood oxygen level dependent (BOLD) signal in the bilateral middle intraparietal sulcus reflect the accumulation of the difference signal from ventromedial prefrontal cortex. In sum, we show that a neurophysiological mechanism previously established for perceptual decision making, that is, the difference-based accumulation of evidence, is fundamental also in value-based decisions. The brain, thus, weighs costs against benefits by combining neural benefit and cost signals into a single, difference-based neural representation of net value, which is accumulated over time until the individual decides to accept or reject an option.
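The abstract above describes decision making as a stochastic accumulation of the cost–benefit difference up to a decision threshold. A minimal sketch of that idea (a toy illustration, not the authors' fitted model; the threshold, noise level, and drift values are arbitrary assumptions):

```python
import random

def accumulate_net_value(benefit, cost, threshold=5.0, noise=1.0,
                         max_steps=10_000, seed=0):
    """Toy difference-based accumulator: at each step the momentary
    evidence is the benefit-cost difference plus Gaussian noise; the
    running sum is compared against symmetric accept/reject bounds."""
    rng = random.Random(seed)
    net = 0.0
    for step in range(1, max_steps + 1):
        net += (benefit - cost) + rng.gauss(0.0, noise)
        if net >= threshold:
            return "accept", step
        if net <= -threshold:
            return "reject", step
    return "no decision", max_steps

# With benefits exceeding costs, the accumulator drifts toward "accept".
choice, t = accumulate_net_value(benefit=2.0, cost=1.0)
```

The same machinery models perceptual decisions; here the "evidence" is simply the net value signal, which is the paper's central point.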


Proceedings of the National Academy of Sciences of the United States of America | 2009

Genetic variation in dopaminergic neuromodulation influences the ability to rapidly and flexibly adapt decisions

Lea K. Krugel; Guido Biele; Peter N. C. Mohr; Shu-Chen Li; Hauke R. Heekeren

The ability to rapidly and flexibly adapt decisions to available rewards is crucial for survival in dynamic environments. Reward-based decisions are guided by reward expectations that are updated based on prediction errors, and processing of these errors involves dopaminergic neuromodulation in the striatum. Given the neuromodulatory role of dopamine in signaling prediction errors, we tested the hypothesis that the COMT gene Val158Met polymorphism leads to interindividual differences in reward-based learning. We show a behavioral advantage for the phylogenetically ancestral Val/Val genotype in an instrumental reversal learning task that requires rapid and flexible adaptation of decisions to changing reward contingencies in a dynamic environment. Implementing a reinforcement learning model with a dynamic learning rate to estimate prediction error and learning rate for each trial, we discovered that a higher and more flexible learning rate underlies the advantage of the Val/Val genotype. Model-based fMRI analysis revealed that greater and more differentiated striatal fMRI responses to prediction errors reflect this advantage on the neurobiological level. Learning rate-dependent changes in effective connectivity between the striatum and prefrontal cortex were greater in the Val/Val than Met/Met genotype, suggesting that the advantage results from a downstream effect of the prefrontal cortex that is presumably mediated by differences in dopamine metabolism. These results show a critical role of dopamine in processing the weight a particular prediction error has on the expectation updating for the next decision, thereby providing important insights into neurobiological mechanisms underlying the ability to rapidly and flexibly adapt decisions to changing reward contingencies.
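The core mechanism here is a delta-rule update whose learning rate itself changes from trial to trial. A hedged sketch of that idea (the exact form of the dynamic learning rate in the paper is not reproduced here; this version, where large absolute prediction errors push the learning rate up, is an illustrative assumption):

```python
def update_expectation(q, reward, lr):
    """One delta-rule update: move the reward expectation toward the
    observed reward by a fraction lr of the prediction error."""
    pe = reward - q
    return q + lr * pe, pe

def dynamic_learning_rate(prev_lr, pe, kappa=0.5):
    """Hypothetical dynamic learning rate: a leaky average of the
    absolute prediction error, so surprising outcomes raise the rate
    and predictable ones let it decay (kappa is arbitrary)."""
    return (1 - kappa) * prev_lr + kappa * min(abs(pe), 1.0)

q, lr = 0.0, 0.3
for reward in [1, 1, 1, 0, 1]:
    q, pe = update_expectation(q, reward, lr)
    lr = dynamic_learning_rate(lr, pe)
```

A higher, more flexible learning rate lets the expectation track reversals quickly, which is the behavioral advantage the study attributes to the Val/Val genotype.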


Proceedings of the National Academy of Sciences of the United States of America | 2010

A mechanistic account of value computation in the human brain

Marios G. Philiastides; Guido Biele; Hauke R. Heekeren

To make decisions based on the value of different options, we often have to combine different sources of probabilistic evidence. For example, when shopping for strawberries on a fruit stand, one uses their color and size to infer—with some uncertainty—which strawberries taste best. Despite much progress in understanding the neural underpinnings of value-based decision making in humans, it remains unclear how the brain represents different sources of probabilistic evidence and how they are used to compute value signals needed to drive the decision. Here, we use a visual probabilistic categorization task to show that regions in ventral temporal cortex encode probabilistic evidence for different decision alternatives, while ventromedial prefrontal cortex integrates information from these regions into a value signal using a difference-based comparator operation.


PLOS Biology | 2011

The Neural Basis of Following Advice

Guido Biele; Jörg Rieskamp; Lea K. Krugel; Hauke R. Heekeren

Learning by following explicit advice is fundamental for human cultural evolution, yet the neurobiology of adaptive social learning is largely unknown. Here, we used simulations to analyze the adaptive value of social learning mechanisms, computational modeling of behavioral data to describe cognitive mechanisms involved in social learning, and model-based functional magnetic resonance imaging (fMRI) to identify the neurobiological basis of following advice. One-time advice received before learning had a sustained influence on people's learning processes. This was best explained by social learning mechanisms implementing a more positive evaluation of the outcomes from recommended options. Computer simulations showed that this “outcome-bonus” accumulates more rewards than an alternative mechanism implementing higher initial reward expectation for recommended options. fMRI results revealed a neural outcome-bonus signal in the septal area and the left caudate. This neural signal coded rewards in the absence of advice, and crucially, it signaled greater positive rewards for positive and negative feedback after recommended rather than after non-recommended choices. Hence, our results indicate that following advice is intrinsically rewarding. A positive correlation between the model's outcome-bonus parameter and amygdala activity after positive feedback directly relates the computational model to brain activity. These results advance the understanding of social learning by providing a neurobiological account for adaptive learning from advice.
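The "outcome-bonus" mechanism described above can be sketched in a few lines: outcomes of recommended options are evaluated more positively by a fixed bonus before the usual delta-rule update (parameter values here are illustrative, not the fitted values from the paper):

```python
def outcome_bonus_update(q, reward, recommended, lr=0.3, bonus=0.2):
    """Delta-rule update with an outcome bonus: if the chosen option
    was recommended, the experienced outcome is inflated by a fixed
    bonus before updating the reward expectation."""
    evaluated = reward + (bonus if recommended else 0.0)
    return q + lr * (evaluated - q)

# Identical reward sequence, with and without advice.
q_rec = q_other = 0.0
for reward in [1, 0, 1, 1, 0]:
    q_rec = outcome_bonus_update(q_rec, reward, recommended=True)
    q_other = outcome_bonus_update(q_other, reward, recommended=False)
```

Because the bonus is applied to every outcome, the recommended option keeps a persistently higher expectation, matching the sustained influence of one-time advice reported in the abstract.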


Pain | 2013

The importance of context: when relative relief renders pain pleasant.

Siri Leknes; Chantal Berna; Michael C. Lee; Gregory D. Snyder; Guido Biele; Irene Tracey

Summary: When moderate pain was presented in a context of intense pain, it induced relief and a “hedonic flip” such that the pain was reported as pleasant.

Abstract: Context can influence the experience of any event. For instance, the thought that “it could be worse” can improve feelings towards a present misfortune. In this study we measured hedonic feelings, skin conductance, and brain activation patterns in 16 healthy volunteers who experienced moderate pain in two different contexts. In the “relative relief context,” moderate pain represented the best outcome, since the alternative outcome was intense pain. However, in the control context, moderate pain represented the worst outcome and elicited negative hedonic feelings. The context manipulation resulted in a “hedonic flip,” such that moderate pain elicited positive hedonics in the relative relief context. Somewhat surprisingly, moderate pain was even rated as pleasant in this context, despite being reported as painful in the control context. This “hedonic flip” was corroborated by physiological and functional neuroimaging data. When moderate pain was perceived as pleasant, skin conductance and activity in the insula and dorsal anterior cingulate were significantly attenuated relative to the control moderate stimulus. “Pleasant pain” also increased activity in reward and valuation circuitry, including the medial orbitofrontal and ventromedial prefrontal cortices. Furthermore, the change in outcome hedonics correlated with activity in the periaqueductal grey (PAG) of the descending pain modulatory system (DPMS). The context manipulation also significantly increased functional connectivity between reward circuitry and the PAG, consistent with a functional change of the DPMS due to the altered motivational state. The findings of this study point to a role for brainstem and reward circuitry in a context-induced “hedonic flip” of pain.


NeuroImage | 2010

Temporal dynamics of prediction error processing during reward-based decision making

Marios G. Philiastides; Guido Biele; Niki Vavatzanidis; Philipp Kazzer; Hauke R. Heekeren

Adaptive decision making depends on the accurate representation of rewards associated with potential choices. These representations can be acquired with reinforcement learning (RL) mechanisms, which use the prediction error (PE, the difference between expected and received rewards) as a learning signal to update reward expectations. While EEG experiments have highlighted the role of feedback-related potentials during performance monitoring, important questions about the temporal sequence of feedback processing and the specific function of feedback-related potentials during reward-based decision making remain. Here, we hypothesized that feedback processing starts with a qualitative evaluation of outcome valence, which is subsequently complemented by a quantitative representation of PE magnitude. Results of a model-based single-trial analysis of EEG data collected during a reversal learning task showed that around 220 ms after feedback, outcomes are initially evaluated categorically with respect to their valence (positive vs. negative). Around 300 ms, and parallel to the maintained valence evaluation, the brain also represents quantitative information about PE magnitude, thus providing the complete information needed to update reward expectations and to guide adaptive decision making. Importantly, our single-trial EEG analysis based on PEs from an RL model showed that the feedback-related potentials do not merely reflect error awareness, but rather quantitative information crucial for learning reward contingencies.


Frontiers in Human Neuroscience | 2010

Differential influence of levodopa on reward-based learning in Parkinson's disease

Susanne Graef; Guido Biele; Lea K. Krugel; Frank Marzinzik; M. Wahl; Johann Wotka; Fabian Klostermann; Hauke R. Heekeren

The mesocorticolimbic dopamine (DA) system linking the dopaminergic midbrain to the prefrontal cortex and subcortical striatum has been shown to be sensitive to reinforcement in animals and humans. Within this system, coexistent segregated striato-frontal circuits have been linked to different functions. In the present study, we tested patients with Parkinson's disease (PD), a neurodegenerative disorder characterized by dopaminergic cell loss, on two reward-based learning tasks assumed to differentially involve dorsal and ventral striato-frontal circuits. Fifteen non-depressed and non-demented PD patients on levodopa monotherapy were tested both on and off medication. Levodopa had beneficial effects on the performance on an instrumental learning task with constant stimulus-reward associations, hypothesized to rely on dorsal striato-frontal circuits. In contrast, performance on a reversal learning task with changing reward contingencies, relying on ventral striato-frontal structures, was better in the unmedicated state. These results are in line with the “overdose hypothesis,” which assumes detrimental effects of dopaminergic medication on functions relying upon less affected regions in PD. This study demonstrates, in a within-subject design, a double dissociation of dopaminergic medication and performance on two reward-based learning tasks differing in regard to whether reward contingencies are constant or dynamic. There was no evidence for a dose effect of levodopa on reward-based behavior, with the patients' actual levodopa dose being uncorrelated with their performance on the reward-based learning tasks.


Journal of Attention Disorders | 2015

A Meta-Analysis of Decision-Making and Attention in Adults With ADHD

Athanasia M. Mowinckel; Mads Lund Pedersen; Espen Eilertsen; Guido Biele

Objective: Deficient reward processing has gained attention as an important aspect of ADHD, but little is known about reward-based decision-making (DM) in adults with ADHD. This article summarizes research on DM in adult ADHD and contextualizes DM deficits by comparing them to attention deficits. Method: Meta-analytic methods were used to calculate average effect sizes for different DM domains and continuous performance task (CPT) measures. Results: None of the 59 included studies (DM: 12 studies; CPT: 43; both: 4) had indications of publication bias. DM and CPT measures showed robust, small to medium effects. Large effect sizes were found for a drift diffusion model analysis of the CPT. Conclusion: The results support the existence of DM deficits in adults with ADHD, which are of similar magnitude to attention deficits. These findings warrant further examination of DM in adults with ADHD to improve the understanding of underlying neurocognitive mechanisms.


NeuroImage | 2010

Neural foundations of risk-return trade-off in investment decisions.

Peter N. C. Mohr; Guido Biele; Lea K. Krugel; Shu-Chen Li; Hauke R. Heekeren

Many decisions people make can be described as decisions under risk. Understanding the mechanisms that drive these decisions is an important goal in decision neuroscience. Two competing classes of risky decision making models have been proposed to describe human behavior, namely utility-based models and risk-return models. Here we used a novel investment decision task that uses streams of (past) returns as stimuli to investigate how consistent the two classes of models are with the neurobiological processes underlying investment decisions (where outcomes usually follow continuous distributions). By showing (a) that risk-return models can explain choices behaviorally and (b) that the components of risk-return models (value, risk, and risk attitude) are represented in the brain during choices, we provide evidence that risk-return models describe the neural processes underlying investment decisions well. Most importantly, the observed correlation between risk and brain activity in the anterior insula during choices supports risk-return models more than utility-based models because risk is an explicit component of risk-return models but not of the utility-based models.
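A risk-return model of the kind contrasted with utility-based models above can be written as value = expected return minus risk weighted by an individual risk attitude. A minimal sketch (risk is measured here as the variance of past returns, one common choice; the paper's exact risk measure and parameter values are not reproduced):

```python
def risk_return_value(returns, risk_aversion=0.5):
    """Risk-return valuation of an option from a stream of past
    returns: value = mean return - risk_aversion * variance."""
    n = len(returns)
    mean = sum(returns) / n
    variance = sum((r - mean) ** 2 for r in returns) / n
    return mean - risk_aversion * variance

# Two hypothetical return streams with equal mean but different spread:
safe = [1.0, 1.1, 0.9, 1.0]
risky = [2.5, -0.5, 2.0, 0.0]
```

For a risk-averse decision maker (risk_aversion > 0) the low-variance stream is valued higher; risk enters the computation explicitly, which is what distinguishes this model class from utility-based accounts.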

Collaboration


Dive into Guido Biele's collaboration.

Top Co-Authors


Heidi Aase

Norwegian Institute of Public Health


Shu-Chen Li

Dresden University of Technology


Pål Zeiner

Oslo University Hospital
