Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Peter Bossaerts is active.

Publications


Featured research published by Peter Bossaerts.


The Journal of Neuroscience | 2008

Human Insula Activation Reflects Risk Prediction Errors As Well As Risk

Kerstin Preuschoff; Steven R. Quartz; Peter Bossaerts

Understanding how organisms deal with probabilistic stimulus-reward associations has been advanced by a convergence between reinforcement learning models and primate physiology, which demonstrated that the brain encodes a reward prediction error signal. However, organisms must also predict the level of risk associated with reward forecasts, monitor the errors in those risk predictions, and update these in light of new information. Risk prediction serves a dual purpose: (1) to guide choice in risk-sensitive organisms and (2) to modulate learning of uncertain rewards. To date, it is not known whether or how the brain accomplishes risk prediction. Using functional imaging during a simple gambling task in which we constantly changed risk, we show that an early-onset activation in the human insula correlates significantly with risk prediction error and that its time course is consistent with a role in rapid updating. Additionally, we show that activation previously associated with general uncertainty emerges with a delay consistent with a role in risk prediction. The activations correlating with risk prediction and risk prediction errors are, for risk, the analogues of the activations correlating with reward prediction and reward prediction errors for expected reward. As such, our findings indicate that our understanding of the neural basis of reward anticipation under uncertainty needs to be expanded to include risk prediction.
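
For concreteness, here is a minimal Python sketch of the trial-by-trial quantities this abstract refers to, under the common formulation in which the risk prediction error is the squared reward prediction error minus the predicted risk (reward variance); the function and parameter names are illustrative, not taken from the paper.

```python
import numpy as np

def prediction_errors(rewards, p_win=0.75, win=1.0, lose=0.0):
    """Trial-by-trial reward and risk prediction errors for a simple gamble.

    Assumes the common formulation: the risk prediction error is the squared
    reward prediction error minus the predicted risk (reward variance).
    """
    expected_reward = p_win * win + (1 - p_win) * lose            # E[r]
    predicted_risk = (p_win * (win - expected_reward) ** 2 +
                      (1 - p_win) * (lose - expected_reward) ** 2)  # Var[r]

    rpe = np.asarray(rewards, dtype=float) - expected_reward  # reward prediction error
    risk_pe = rpe ** 2 - predicted_risk                       # risk prediction error
    return rpe, risk_pe

# Example: win 1 with probability 0.75; E[r] = 0.75, Var[r] = 0.1875
rpe, risk_pe = prediction_errors(rewards=[1, 0, 0, 1])
print(rpe)      # [ 0.25 -0.75 -0.75  0.25]
print(risk_pe)  # [-0.125  0.375  0.375 -0.125]
```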


Neuron | 2006

Neural Differentiation of Expected Reward and Risk in Human Subcortical Structures

Kerstin Preuschoff; Peter Bossaerts; Steven R. Quartz

In decision-making under uncertainty, economic studies emphasize the importance of risk in addition to expected reward. Studies in neuroscience focus on expected reward and learning rather than risk. We combined functional imaging with a simple gambling task to vary expected reward and risk simultaneously and in an uncorrelated manner. Drawing on financial decision theory, we modeled expected reward as mathematical expectation of reward, and risk as reward variance. Activations in dopaminoceptive structures correlated with both mathematical parameters. These activations differentiated spatially and temporally. Temporally, the activation related to expected reward was immediate, while the activation related to risk was delayed. Analyses confirmed that our paradigm minimized confounds from learning, motivation, and salience. These results suggest that the primary task of the dopaminergic system is to convey signals of upcoming stochastic rewards, such as expected reward and risk, beyond its role in learning, motivation, and salience.
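
As an illustration of how expected reward and risk can be varied in an uncorrelated manner, here is a short sketch with assumed, simplified stimuli rather than the paper's exact gambles: for a binary gamble the mean is linear in the winning probability while the variance is an inverted U, so a symmetric sweep of probabilities decorrelates the two.

```python
import numpy as np

# For a gamble paying 1 with probability p and 0 otherwise:
# expected reward = p, risk (variance) = p * (1 - p).
# Sweeping p over a symmetric grid makes the two regressors uncorrelated.
p = np.linspace(0.0, 1.0, 11)
expected_reward = p
risk = p * (1 - p)

print(np.corrcoef(expected_reward, risk)[0, 1])  # ~0: mean and variance decorrelated
```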


The Journal of Neuroscience | 2006

The role of the ventromedial prefrontal cortex in abstract state-based inference during decision making in humans

Alan N. Hampton; Peter Bossaerts; John P. O'Doherty

Many real-life decision-making problems incorporate higher-order structure, involving interdependencies between different stimuli, actions, and subsequent rewards. It is not known whether brain regions implicated in decision making, such as the ventromedial prefrontal cortex (vmPFC), use a stored model of the task structure to guide choice (model-based decision making) or merely learn action or state values without assuming higher-order structure as in standard reinforcement learning. To discriminate between these possibilities, we scanned human subjects with functional magnetic resonance imaging while they performed a simple decision-making task with higher-order structure, probabilistic reversal learning. We found that neural activity in a key decision-making region, the vmPFC, was more consistent with a computational model that exploits higher-order structure than with simple reinforcement learning. These results suggest that brain regions, such as the vmPFC, use an abstract model of task structure to guide behavioral choice, computations that may underlie the human capacity for complex social interactions and abstract strategizing.
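
A minimal sketch of the contrast the study draws, with assumed parameter values and simplified dynamics rather than the paper's fitted model: a model-free learner updates only the chosen action's value, whereas a model-based learner infers which abstract state the task is in (which option is currently "good") and allows for reversals.

```python
def model_free_update(q, action, reward, alpha=0.3):
    """Standard reinforcement learning: update only the chosen action's value."""
    q = list(q)
    q[action] += alpha * (reward - q[action])
    return q

def model_based_update(p_state, action, reward,
                       p_reward_good=0.7, p_reverse=0.1):
    """State inference: p_state is P(option 0 is currently the 'good' option).

    Minimal Bayesian sketch with assumed parameters: update the belief from the
    observed outcome, then allow for a possible reversal between trials.
    """
    # Likelihood of the outcome if the chosen option is good vs. bad
    p_r_if_good = p_reward_good if reward else 1 - p_reward_good
    p_r_if_bad = (1 - p_reward_good) if reward else p_reward_good
    if action == 0:
        like_good0, like_good1 = p_r_if_good, p_r_if_bad
    else:
        like_good0, like_good1 = p_r_if_bad, p_r_if_good
    posterior = (like_good0 * p_state /
                 (like_good0 * p_state + like_good1 * (1 - p_state)))
    # The task structure lets the contingencies reverse between trials
    return (1 - p_reverse) * posterior + p_reverse * (1 - posterior)
```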


Proceedings of the National Academy of Sciences of the United States of America | 2008

Neural correlates of mentalizing-related computations during strategic interactions in humans

Alan N. Hampton; Peter Bossaerts; John P. O'Doherty

Competing successfully against an intelligent adversary requires the ability to mentalize an opponent's state of mind to anticipate his/her future behavior. Although much is known about what brain regions are activated during mentalizing, the question of how this function is implemented has received little attention to date. Here we formulated a computational model describing the capacity to mentalize in games. We scanned human subjects with functional MRI while they participated in a simple two-player strategy game and correlated our model against the functional MRI data. Different model components captured activity in distinct parts of the mentalizing network. While medial prefrontal cortex tracked an individual's expectations given the degree of model-predicted influence, posterior superior temporal sulcus was found to correspond to an influence update signal, capturing the difference between expected and actual influence exerted. These results suggest dissociable contributions of different parts of the mentalizing network to the computations underlying higher-order strategizing in humans.
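
A schematic sketch of the idea of an influence model, with assumed functional form and parameters (not the paper's equations): unlike a plain fictitious-play learner that only tracks the opponent's action frequency, an influence learner also accounts for how one's own play shifts the opponent's next move.

```python
def fictitious_play_update(p_opp, opp_action, eta=0.2):
    """Track the opponent's action frequency (no mentalizing)."""
    return p_opp + eta * (opp_action - p_opp)

def influence_update(p_opp, opp_action, own_prev_action, eta=0.2, kappa=0.1):
    """Schematic 'influence' learner (assumed form, not the paper's equations):
    besides the prediction error on the opponent's action, account for the fact
    that the opponent adapts to one's own previous play."""
    prediction_error = opp_action - p_opp   # as in fictitious play
    influence = own_prev_action - p_opp     # how my own play pushes the opponent
    updated = p_opp + eta * prediction_error - kappa * influence
    return min(1.0, max(0.0, updated))      # keep the belief a valid probability
```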


The Journal of Neuroscience | 2009

Neural Correlates of Value, Risk, and Risk Aversion Contributing to Decision Making under Risk

George I. Christopoulos; Philippe N. Tobler; Peter Bossaerts; R. J. Dolan; Wolfram Schultz

Decision making under risk is central to human behavior. Economic decision theory suggests that value, risk, and risk aversion influence choice behavior. Although previous studies identified neural correlates of decision parameters, the contribution of these correlates to actual choices is unknown. In two different experiments, participants chose between risky and safe options. We identified discrete blood oxygen level-dependent (BOLD) correlates of value and risk in the ventral striatum and anterior cingulate, respectively. Notably, increasing inferior frontal gyrus activity to low risk and safe options correlated with higher risk aversion. Importantly, the combination of these BOLD responses effectively decoded the behavioral choice. Striatal value and cingulate risk responses increased the probability of a risky choice, whereas inferior frontal gyrus responses showed the inverse relationship. These findings suggest that the BOLD correlates of decision factors are appropriate for an ideal observer to detect behavioral choices. More generally, these biological data contribute to the validity of the theoretical decision parameters for actual decisions under risk.
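
To illustrate how such BOLD correlates could decode choice, here is a toy logistic read-out with assumed weights, not fitted to the paper's data: value and risk signals push towards the risky option, while the inferior frontal gyrus signal pushes towards the safe one.

```python
import numpy as np

def p_risky_choice(value_signal, risk_signal, ifg_signal,
                   b_value=1.0, b_risk=0.5, b_ifg=-1.0, intercept=0.0):
    """Illustrative decoder with assumed weights: striatal value and cingulate
    risk responses increase the probability of a risky choice; inferior frontal
    gyrus responses decrease it."""
    z = intercept + b_value * value_signal + b_risk * risk_signal + b_ifg * ifg_signal
    return 1.0 / (1.0 + np.exp(-z))

print(p_risky_choice(value_signal=1.2, risk_signal=0.4, ifg_signal=0.1))
```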


The Journal of Neuroscience | 2009

Encoding of Marginal Utility across Time in the Human Brain

Alex Pine; Ben Seymour; Jonathan P. Roiser; Peter Bossaerts; K. J. Friston; H. Valerie Curran; R. J. Dolan

Marginal utility theory prescribes the relationship between the objective property of the magnitude of rewards and their subjective value. Despite its pervasive influence, however, there is remarkably little direct empirical evidence for such a theory of value, let alone of its neurobiological basis. We show that human preferences in an intertemporal choice task are best described by a model that integrates marginally diminishing utility with temporal discounting. Using functional magnetic resonance imaging, we show that activity in the dorsal striatum encodes both the marginal utility of rewards, over and above that which can be described by their magnitude alone, and the discounting associated with increasing time. In addition, our data show that dorsal striatum may be involved in integrating subjective valuation systems inherent to time and magnitude, thereby providing an overall metric of value used to guide choice behavior. Furthermore, during choice, we show that anterior cingulate activity correlates with the degree of difficulty associated with dissonance between value and time. Our data support an integrative architecture for decision making, revealing the neural representation of distinct subcomponents of value that may contribute to impulsivity and decisiveness.
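
A short sketch of the kind of value function the study describes, combining diminishing marginal utility with hyperbolic temporal discounting; the specific functional forms and parameters below are common choices in this literature, presented here as assumptions rather than the paper's fitted model.

```python
import math

def discounted_utility(magnitude, delay, r=0.05, K=0.02):
    """Subjective value of a delayed reward: concave (diminishing marginal)
    utility of magnitude combined with hyperbolic discounting of delay.
    Functional forms and parameters are illustrative assumptions."""
    utility = (1 - math.exp(-r * magnitude)) / r   # concave in magnitude
    discount = 1.0 / (1.0 + K * delay)             # hyperbolic in delay
    return utility * discount

# Marginal utility diminishes: the second 10 units add less value than the first
print(discounted_utility(10, delay=0) - discounted_utility(0, delay=0))
print(discounted_utility(20, delay=0) - discounted_utility(10, delay=0))
```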


The Journal of Neuroscience | 2007

Neural Antecedents of Financial Decisions

Brian Knutson; Peter Bossaerts

To explain investing decisions, financial theorists invoke two opposing metrics: expected reward and risk. Recent advances in the spatial and temporal resolution of brain imaging techniques enable investigators to visualize changes in neural activation before financial decisions. Research using these methods indicates that although the ventral striatum plays a role in representation of expected reward, the insula may play a more prominent role in the representation of expected risk. Accumulating evidence also suggests that antecedent neural activation in these regions can be used to predict upcoming financial decisions. These findings have implications for predicting choices and for building a physiologically constrained theory of decision-making.


The Review of Economic Studies | 2002

An optimal IPO mechanism

Bruno Biais; Peter Bossaerts; Jean-Charles Rochet

We analyse the optimal Initial Public Offering (IPO) mechanism in a multidimensional adverse selection setting where institutional investors have private information about the market valuation of the shares, the intermediary has private information about the demand, and the institutional investors and intermediary collude. Theorem 1 states that uniform pricing is optimal (all agents pay the same price) and characterizes the IPO price in terms of conditional expectations. Theorem 2 states that the optimal mechanism can be implemented by a non-linear price schedule decreasing in the quantity allocated to retail investors. This is similar to IPO procedures used in the U.K. and France. Relying on French IPO data we perform a GMM structural estimation and test of the model. The price schedule is estimated and the conditions characterizing the optimal mechanism are not rejected.
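
The empirical test relies on GMM structural estimation. As a generic illustration of the method only, here is a minimal sketch that matches model-implied moments to data; the moment conditions and data below are placeholders, not the paper's structural model.

```python
import numpy as np
from scipy.optimize import minimize

def gmm_objective(theta, data, weight):
    """Generic GMM criterion g(theta)' W g(theta); the moment conditions below
    (mean and variance) are placeholders, not the paper's structural moments."""
    mean, var = theta
    g = np.array([np.mean(data) - mean,
                  np.mean((data - mean) ** 2) - var])
    return g @ weight @ g

data = np.random.default_rng(0).normal(loc=1.0, scale=2.0, size=500)
res = minimize(gmm_objective, x0=np.array([0.0, 1.0]),
               args=(data, np.eye(2)), method="Nelder-Mead")
print(res.x)  # estimated (mean, variance)
```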


Philosophical Transactions of the Royal Society B | 2008

Explicit neural signals reflecting reward uncertainty

Wolfram Schultz; Kerstin Preuschoff; Colin F. Camerer; Ming Hsu; Christopher D. Fiorillo; Phillippe N. Tobler; Peter Bossaerts

The acknowledged importance of uncertainty in economic decision making has stimulated the search for neural signals that could influence learning and inform decision mechanisms. Current views distinguish two forms of uncertainty, namely risk and ambiguity, depending on whether the probability distributions of outcomes are known or unknown. Behavioural neurophysiological studies on dopamine neurons revealed a risk signal, which covaried with the standard deviation or variance of the magnitude of juice rewards and occurred separately from reward value coding. Human imaging studies identified similarly distinct risk signals for monetary rewards in the striatum and orbitofrontal cortex (OFC), thus fulfilling a requirement for the mean-variance approach of economic decision theory. The orbitofrontal risk signal covaried with individual risk attitudes, possibly explaining individual differences in risk perception and risky decision making. Ambiguous gambles with incomplete probabilistic information induced stronger brain signals than risky gambles in OFC and amygdala, suggesting that the brain's reward system signals the partial lack of information. The brain can use the uncertainty signals to assess the uncertainty of rewards, influence learning, modulate the value of uncertain rewards and make appropriate behavioural choices between only partly known options.
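
A textbook-style numeric illustration (not a model from the paper) of the risk/ambiguity distinction: under risk the winning probability is known, while under ambiguity only a range is known and an ambiguity-averse (maxmin) evaluation uses the worst case in that range.

```python
def value_under_risk(p_win, win=1.0, lose=0.0):
    """Expected value of a two-outcome gamble with a known winning probability."""
    return p_win * win + (1 - p_win) * lose

def value_under_ambiguity(p_low, p_high, win=1.0, lose=0.0):
    """Maxmin evaluation: only a probability range is known, so an
    ambiguity-averse agent values the gamble at the worst case."""
    return min(value_under_risk(p, win, lose) for p in (p_low, p_high))

print(value_under_risk(0.5))            # 0.5
print(value_under_ambiguity(0.3, 0.7))  # 0.3: ambiguity lowers the value
```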


Annals of the New York Academy of Sciences | 2007

Adding Prediction Risk to the Theory of Reward Learning

Kerstin Preuschoff; Peter Bossaerts

This article analyzes the simple Rescorla–Wagner learning rule from the vantage point of least squares learning theory. In particular, it suggests how measures of risk, such as prediction risk, can be used to adjust the learning constant in reinforcement learning. It argues that prediction risk is most effectively incorporated by scaling the prediction errors. This way, the learning rate needs adjusting only when the covariance between optimal predictions and past (scaled) prediction errors changes. Evidence is discussed that suggests that the dopaminergic system in the (human and nonhuman) primate brain encodes prediction risk, and that prediction errors are indeed scaled with prediction risk (adaptive encoding).
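
A minimal sketch of the idea, with assumed parameters and update rules rather than the article's exact derivation: a Rescorla-Wagner value update whose prediction error is scaled by an estimate of prediction risk that is itself learned from squared prediction errors.

```python
import numpy as np

def rescorla_wagner_risk_scaled(rewards, alpha=0.1, beta=0.1,
                                init_v=0.0, init_risk=1.0):
    """Rescorla-Wagner value learning with prediction errors scaled by
    prediction risk (a sketch of the idea, not the article's derivation)."""
    v, risk = init_v, init_risk
    values = []
    for r in rewards:
        delta = r - v                          # reward prediction error
        scaled_delta = delta / np.sqrt(risk)   # scale by predicted risk
        v += alpha * scaled_delta              # value update with scaled error
        risk += beta * (delta ** 2 - risk)     # learn prediction risk itself
        values.append(v)
    return np.array(values)

print(rescorla_wagner_risk_scaled(rewards=[1, 0, 1, 1, 0, 1]))
```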

Collaboration


Dive into Peter Bossaerts's collaborations.

Top Co-Authors

Charles R. Plott, California Institute of Technology
John P. O'Doherty, California Institute of Technology
Colin F. Camerer, California Institute of Technology
Steven R. Quartz, California Institute of Technology