Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Laurence T. Hunt is active.

Publication


Featured research published by Laurence T. Hunt.


Nature | 2008

Associative learning of social value.

Timothy E. J. Behrens; Laurence T. Hunt; Mark W. Woolrich; Matthew F. S. Rushworth

Our decisions are guided by information learnt from our environment. This information may come via personal experiences of reward, but also from the behaviour of social partners. Social learning is widely held to be distinct from other forms of learning in its mechanism and neural implementation; it is often assumed to compete with simpler mechanisms, such as reward-based associative learning, to drive behaviour. Recently, neural signals have been observed during social exchange reminiscent of signals seen in studies of associative learning. Here we demonstrate that social information may be acquired using the same associative processes assumed to underlie reward-based learning. We find that key computational variables for learning in the social and reward domains are processed in a similar fashion, but in parallel neural processing streams. Two neighbouring divisions of the anterior cingulate cortex were central to learning about social and reward-based information, and for determining the extent to which each source of information guides behaviour. When making a decision, however, the information learnt using these parallel streams was combined within ventromedial prefrontal cortex. These findings suggest that human social valuation can be realized by means of the same associative processes previously established for learning other, simpler, features of the environment.
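The associative account described in this abstract can be sketched as two parallel delta-rule (Rescorla-Wagner) learners — one tracking reward outcomes, one tracking the reliability of social advice — whose estimates are combined at the point of choice. The learning rates, outcome sequences, and combination weight below are illustrative assumptions, not fitted values from the paper:

```python
# Parallel delta-rule (Rescorla-Wagner) learners for reward and social
# information, combined at choice. All parameters are illustrative.

def delta_rule_update(estimate, outcome, alpha):
    """One associative-learning step: move the estimate toward the outcome."""
    prediction_error = outcome - estimate
    return estimate + alpha * prediction_error

# Track the probability that option A pays off (reward stream) and the
# probability that the partner's advice is correct (social stream).
p_reward, p_social = 0.5, 0.5
alpha_reward, alpha_social = 0.2, 0.2

reward_outcomes = [1, 1, 0, 1, 1]   # did option A pay off?
advice_correct  = [1, 0, 0, 0, 0]   # was the partner's advice correct?

for r, s in zip(reward_outcomes, advice_correct):
    p_reward = delta_rule_update(p_reward, r, alpha_reward)
    p_social = delta_rule_update(p_social, s, alpha_social)

# At choice, the two learnt quantities are combined (here a fixed
# weighted sum, a crude stand-in for the vmPFC integration stage).
w = 0.7
combined_value = w * p_reward + (1 - w) * p_social
```

The point of the sketch is that both streams use the same update rule; only the combination stage brings them together, mirroring the paper's parallel-streams-then-integration result.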


Science | 2009

The Computation of Social Behavior

Timothy E. J. Behrens; Laurence T. Hunt; Matthew F. S. Rushworth

The social sciences focus on multibody problems within changing environments, where the intentions and actions of both actor and acted-upon vary over time. In such situations, it can be challenging, to say the least, to identify unambiguously and persuasively which behaviors are causes, which are effects, and which are epiphenomena. Behrens et al. (p. 1160) review the recent application of formal behavioral models in the area of social cognitive neuroscience. Neuroscientists are beginning to advance explanations of social behavior in terms of underlying brain mechanisms. Two distinct networks of brain regions have come to the fore. The first involves brain regions that are concerned with learning about reward and reinforcement. These same reward-related brain areas also mediate preferences that are social in nature even when no direct reward is expected. The second network focuses on regions active when a person must make estimates of another person’s intentions. However, it has been difficult to determine the precise roles of individual brain regions within these networks or how activities in the two networks relate to one another. Some recent studies of reward-guided behavior have described brain activity in terms of formal mathematical models; these models can be extended to describe mechanisms that underlie complex social exchange. Such a mathematical formalism defines explicit mechanistic hypotheses about internal computations underlying regional brain activity, provides a framework in which to relate different types of activity and understand their contributions to behavior, and prescribes strategies for performing experiments under strong control.


Nature Neuroscience | 2012

Mechanisms underlying cortical activity during value-guided choice.

Laurence T. Hunt; Nils Kolling; Alireza Soltani; Mark W. Woolrich; Matthew F. S. Rushworth; Timothy E. J. Behrens

When choosing between two options, correlates of their value are represented in neural activity throughout the brain. Whether these representations reflect activity that is fundamental to the computational process of value comparison, as opposed to other computations covarying with value, is unknown. We investigated activity in a biophysically plausible network model that transforms inputs relating to value into categorical choices. A set of characteristic time-varying signals emerged that reflect value comparison. We tested these model predictions using magnetoencephalography data recorded from human subjects performing value-guided decisions. Parietal and prefrontal signals matched closely with model predictions. These results provide a mechanistic explanation of neural signals recorded during value-guided choice and a means of distinguishing computational roles of different cortical regions whose activity covaries with value.
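The class of model described in this abstract can be caricatured as two mutually inhibiting pools of units, each driven by one option's value, whose competition amplifies a graded value difference into a near-categorical difference in activity. This is a minimal firing-rate sketch with hypothetical parameters, not the biophysically plausible model used in the paper:

```python
import math

# Minimal firing-rate sketch of value comparison by mutual inhibition.
# Two pools each receive input proportional to one option's value and
# inhibit one another; the pool backing the better option wins.
# Parameters (tau, inhibition, gain) are illustrative assumptions.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def simulate_choice(value_a, value_b, steps=500, dt=0.01,
                    tau=0.1, inhibition=3.0, gain=4.0):
    """Euler-integrate two mutually inhibiting rate units."""
    r_a, r_b = 0.0, 0.0
    for _ in range(steps):
        input_a = gain * value_a - inhibition * r_b
        input_b = gain * value_b - inhibition * r_a
        r_a += dt / tau * (-r_a + sigmoid(input_a))
        r_b += dt / tau * (-r_b + sigmoid(input_b))
    return r_a, r_b

# Option A is only slightly better, but competition widens the gap.
r_a, r_b = simulate_choice(value_a=1.0, value_b=0.8)
```

The characteristic time-varying signals the paper tests against MEG data come from richer spiking-network dynamics; this sketch only shows the core comparison-by-inhibition idea.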


Neuron | 2012

An agent independent axis for executed and modeled choice in medial prefrontal cortex.

Antoinette Nicolle; Miriam C. Klein-Flügge; Laurence T. Hunt; Ivo Vlaev; R. J. Dolan; Timothy E. J. Behrens

Adaptive success in social animals depends on an ability to infer the likely actions of others. Little is known about the neural computations that underlie this capacity. Here, we show that the brain models the values and choices of others even when these values are currently irrelevant. These modeled choices use the same computations that underlie our own choices, but are resolved in a distinct neighboring medial prefrontal brain region. Crucially, however, when subjects choose on behalf of a partner instead of themselves, these regions exchange their functional roles. Hence, regions that represented values of the subject’s executed choices now represent the values of choices executed on behalf of the partner, and those that previously modeled the partner now model the subject. These data tie together neural computations underlying self-referential and social inference, and in so doing establish a new functional axis characterizing the medial wall of prefrontal cortex.


Nature Neuroscience | 2012

A mechanism for value-guided choice based on the excitation-inhibition balance in prefrontal cortex.

Gerhard Jocham; Laurence T. Hunt; Jamie Near; Timothy E. J. Behrens

Although the ventromedial prefrontal cortex (vmPFC) has long been implicated in reward-guided decision making, its exact role in this process has remained an unresolved issue. Here we show that, in accordance with models of decision making, vmPFC concentrations of GABA and glutamate in human volunteers predict both behavioral performance and the dynamics of a neural value comparison signal. These data provide evidence for a neural competition mechanism in vmPFC that supports value-guided choice.


Nature Neuroscience | 2014

A neural mechanism underlying failure of optimal choice with multiple alternatives

Bolton K. H. Chau; Nils Kolling; Laurence T. Hunt; Mark E. Walton; Matthew F. S. Rushworth

Despite widespread interest in neural mechanisms of decision-making, most investigations focus on decisions between just two options. Here we adapt a biophysically plausible model of decision-making to predict how a key decision variable, the value difference signal—encoding how much better one choice is than another—changes with the value of a third, but unavailable, alternative. The model predicts a surprising failure of optimal decision-making: greater difficulty choosing between two options in the presence of a third very poor, as opposed to very good, alternative. Both investigation of human decision-making and functional magnetic resonance imaging–based measurements of value difference signals in ventromedial prefrontal cortex (vmPFC) bore out this prediction. The vmPFC signal decreased in the presence of low-value third alternatives, and vmPFC effect sizes predicted individual variation in suboptimal decision-making in the presence of multiple alternatives. The effect contrasts with that of divisive normalization in parietal cortex.
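The divisive normalization mentioned at the end of the abstract — the parietal-style coding the vmPFC effect contrasts with — can be sketched in a few lines. Under normalization, a *high*-value third alternative shrinks the represented difference between the top two options, the opposite direction to the vmPFC finding reported above. The `normalize` function and its `sigma` constant are illustrative assumptions:

```python
# Divisive normalization: each option's represented value is divided by
# a constant plus the summed value of all options in the choice set.
# sigma and the option values below are illustrative.

def normalize(values, sigma=1.0):
    denom = sigma + sum(values)
    return [v / denom for v in values]

# Same two top options (10 vs 8), with a high- or low-value distractor.
high_distractor = normalize([10.0, 8.0, 9.0])
low_distractor  = normalize([10.0, 8.0, 1.0])

# The represented difference between the top two options is larger when
# the third alternative is poor — the signature of divisive normalization.
diff_high = high_distractor[0] - high_distractor[1]
diff_low  = low_distractor[0] - low_distractor[1]
```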


NeuroImage | 2011

MEG beamforming using Bayesian PCA for adaptive data covariance matrix regularization.

Mark W. Woolrich; Laurence T. Hunt; Adrian R. Groves; Gareth R. Barnes

Beamformers are a commonly used method for source localization from magnetoencephalography (MEG) data. A key ingredient in a beamformer is the estimation of the data covariance matrix. When the noise levels are high, or when there is only a small amount of data available, the data covariance matrix is estimated poorly and the signal-to-noise ratio (SNR) of the beamformer output degrades. One solution to this is to use regularization, whereby the diagonal of the covariance matrix is amplified by a pre-specified amount. However, this provides improvements at the expense of a loss in spatial resolution, and the parameter controlling the amount of regularization must be chosen subjectively. In this paper, we introduce a method that provides an adaptive solution to this problem by using Bayesian Principal Component Analysis (PCA). This provides an estimate of the data covariance matrix to give a data-driven, non-arbitrary solution to the trade-off between the spatial resolution and the SNR of the beamformer output. This also provides a method for determining when the quality of the data covariance estimate may be in question. We apply the approach to simulated and real MEG data, and demonstrate the way in which it can automatically adapt the regularization to give good performance over a range of noise and signal levels.
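The regularization problem the abstract describes can be sketched as follows: classic diagonal loading with a hand-picked parameter, versus a data-driven low-rank covariance estimate. The rank-selection rule below (keep eigencomponents above the mean eigenvalue) is a crude hypothetical stand-in for the paper's Bayesian PCA, used only to illustrate the idea of letting the data choose the regularization:

```python
import numpy as np

# Poorly estimated sensor covariance from few samples, then two fixes:
# (1) diagonal loading with a pre-specified mu (the subjective choice
#     the abstract criticizes), and
# (2) a data-driven low-rank estimate keeping only eigencomponents
#     above a noise floor (hypothetical stand-in for Bayesian PCA).

rng = np.random.default_rng(0)
n_sensors, n_samples = 20, 30          # few samples: noisy covariance estimate
sources = rng.standard_normal((3, n_samples))       # 3 underlying sources
mixing = rng.standard_normal((n_sensors, 3))
data = mixing @ sources + 0.1 * rng.standard_normal((n_sensors, n_samples))

C = np.cov(data)

# (1) Diagonal loading: amplify the diagonal by a pre-specified amount.
mu = 0.1 * np.trace(C) / n_sensors
C_loaded = C + mu * np.eye(n_sensors)

# (2) Data-driven alternative: keep large eigencomponents, replace the
#     rest with their mean (an isotropic residual).
evals, evecs = np.linalg.eigh(C)
keep = evals > evals.mean()            # hypothetical selection rule
noise_var = evals[~keep].mean()
evals_reg = np.where(keep, evals, noise_var)
C_lowrank = (evecs * evals_reg) @ evecs.T

# Both regularized estimates are well-conditioned and invertible, as
# beamformer weight computation requires.
Cinv_loaded = np.linalg.inv(C_loaded)
Cinv_lowrank = np.linalg.inv(C_lowrank)
```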


Neuron | 2011

Dissociable reward and timing signals in human midbrain and ventral striatum.

Miriam C. Klein-Flügge; Laurence T. Hunt; Dominik R. Bach; R. J. Dolan; Timothy E. J. Behrens

Reward prediction error (RPE) signals are central to current models of reward-learning. Temporal difference (TD) learning models posit that these signals should be modulated by predictions, not only of magnitude but also timing of reward. Here we show that BOLD activity in the VTA conforms to such TD predictions: responses to unexpected rewards are modulated by a temporal hazard function and activity between a predictive stimulus and reward is depressed in proportion to predicted reward. By contrast, BOLD activity in ventral striatum (VS) does not reflect a TD RPE, but instead encodes a signal on the variable relevant for behavior, here timing but not magnitude of reward. The results have important implications for dopaminergic models of cortico-striatal learning and suggest a modification of the conventional view that VS BOLD necessarily reflects inputs from dopaminergic VTA neurons signaling an RPE.
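The TD prediction about reward timing can be illustrated with a tabular TD(0) learner over within-trial time states: a cue at the start of the trial predicts reward a fixed number of steps later, and once that interval is learnt the prediction error at reward delivery shrinks, because value estimates anticipate both the size and the timing of reward. State layout, learning rate, and trial counts are illustrative:

```python
# TD(0) over a chain of within-trial time states. A cue at t=0 predicts
# reward at t=5; V[t] is the learnt value of each time state.
# Parameters are illustrative.

n_steps, reward_time, reward = 6, 5, 1.0
alpha, gamma = 0.1, 1.0
V = [0.0] * (n_steps + 1)           # index n_steps is the terminal state

last_trial_rpes = []
for trial in range(200):
    rpes = []
    for t in range(n_steps):
        r = reward if t == reward_time else 0.0
        delta = r + gamma * V[t + 1] - V[t]   # TD prediction error
        V[t] += alpha * delta
        rpes.append(delta)
    last_trial_rpes = rpes          # keep the final trial's RPE profile

# After learning, the reward is fully predicted in both magnitude and
# timing, so the prediction error at delivery is near zero.
rpe_at_reward = last_trial_rpes[reward_time]
```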


Nature Neuroscience | 2014

Hierarchical competitions subserving multi-attribute choice

Laurence T. Hunt; R. J. Dolan; Timothy E. J. Behrens

Valuation is a key tenet of decision neuroscience, where it is generally assumed that different attributes of competing options are assimilated into unitary values. Such values are central to current neural models of choice. By contrast, psychological studies emphasize complex interactions between choice and valuation. Principles of neuronal selection also suggest that competitive inhibition may occur in early valuation stages, before option selection. We found that behavior in multi-attribute choice is best explained by a model involving competition at multiple levels of representation. This hierarchical model also explains neural signals in human brain regions previously linked to valuation, including striatum, parietal and prefrontal cortex, where activity represents within-attribute competition, competition between attributes and option selection. This multi-layered inhibition framework challenges the assumption that option values are computed before choice. Instead, our results suggest a canonical competition mechanism throughout all stages of a processing hierarchy, not simply at a final choice stage.


Nature Reviews Neuroscience | 2017

A distributed, hierarchical and recurrent framework for reward-based choice.

Laurence T. Hunt; Benjamin Y. Hayden

Many accounts of reward-based choice argue for distinct component processes that are serial and functionally localized. In this Opinion article, we argue for an alternative viewpoint, in which choices emerge from repeated computations that are distributed across many brain regions. We emphasize how several features of neuroanatomy may support the implementation of choice, including mutual inhibition in recurrent neural networks and the hierarchical organization of timescales for information processing across the cortex. This account also suggests that certain correlates of value are emergent rather than represented explicitly in the brain.

Collaboration


Dive into Laurence T. Hunt's collaborations.

Top Co-Authors

R. J. Dolan

University College London

Sean E. Cavanagh

UCL Institute of Neurology