Publications


Featured research published by Charles Blundell.


Neural Information Processing Systems | 2011

Modelling Genetic Variations using Fragmentation-Coagulation Processes

Yee Whye Teh; Charles Blundell; Lloyd T. Elliott

We propose a novel class of Bayesian nonparametric models for sequential data called fragmentation-coagulation processes (FCPs). FCPs model a set of sequences using a partition-valued Markov process which evolves by splitting and merging clusters. An FCP is exchangeable, projective, stationary and reversible, and its equilibrium distributions are given by the Chinese restaurant process. As opposed to hidden Markov models, FCPs allow for flexible modelling of the number of clusters, and they avoid label switching non-identifiability problems. We develop an efficient Gibbs sampler for FCPs which uses uniformization and the forward-backward algorithm. Our development of FCPs is motivated by applications in population genetics, and we demonstrate the utility of FCPs on problems of genotype imputation with phased and unphased SNP data.
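
The abstract describes FCPs only at a high level; below is a minimal, illustrative Python sketch of the two moves that give the process its name, fragmentation (splitting a cluster) and coagulation (merging two clusters), acting on a partition represented as a list of sets. This is not the paper's model or its Gibbs sampler: the uniform choice of clusters, the move schedule, and all names here are assumptions for illustration only.

import random

# Toy illustration of fragmentation-coagulation moves on a partition.
# NOT the paper's sampler: cluster choices and rates are uniform for simplicity.

def fragment(partition):
    """Split a randomly chosen cluster with more than one element into two parts."""
    splittable = [c for c in partition if len(c) > 1]
    if not splittable:
        return partition
    cluster = random.choice(splittable)
    members = list(cluster)
    random.shuffle(members)
    cut = random.randint(1, len(members) - 1)
    new = [c for c in partition if c is not cluster]
    new.extend([set(members[:cut]), set(members[cut:])])
    return new

def coagulate(partition):
    """Merge two randomly chosen clusters into one."""
    if len(partition) < 2:
        return partition
    a, b = random.sample(range(len(partition)), 2)
    merged = partition[a] | partition[b]
    new = [c for i, c in enumerate(partition) if i not in (a, b)]
    new.append(merged)
    return new

# Alternate the two moves to trace a partition-valued trajectory.
partition = [set(range(8))]
for step in range(6):
    partition = fragment(partition) if step % 2 == 0 else coagulate(partition)
    print(step, [sorted(c) for c in partition])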


Neuron | 2016

Computations Underlying Social Hierarchy Learning: Distinct Neural Mechanisms for Updating and Representing Self-Relevant Information

Dharshan Kumaran; Andrea Banino; Charles Blundell; Demis Hassabis; Peter Dayan

Knowledge about social hierarchies organizes human behavior, yet we understand little about the underlying computations. Here we show that a Bayesian inference scheme, which tracks the power of individuals, better captures behavioral and neural data compared with a reinforcement learning model inspired by rating systems used in games such as chess. We provide evidence that the medial prefrontal cortex (MPFC) selectively mediates the updating of knowledge about one’s own hierarchy, as opposed to that of another individual, a process that underpinned successful performance and involved functional interactions with the amygdala and hippocampus. In contrast, we observed domain-general coding of rank in the amygdala and hippocampus, even when the task did not require it. Our findings reveal the computations underlying a core aspect of social cognition and provide new evidence that self-relevant information may indeed be afforded a unique representational status in the brain.
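
For readers unfamiliar with the two model classes being compared, here is a hedged Python sketch contrasting a chess-style (Elo-like) rating update with a crude Gaussian Bayesian update that also tracks uncertainty about each individual's power. The parameter values, the logistic win probability, and the Gaussian approximation are illustrative assumptions, not the models fit in the paper.

import math

# Two toy ways to learn who is "more powerful" from pairwise contests.

def elo_update(r_winner, r_loser, k=32.0):
    """Chess-style rating update: move ratings by the prediction error."""
    expected_win = 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400.0))
    delta = k * (1.0 - expected_win)
    return r_winner + delta, r_loser - delta

def bayes_update(mu_w, var_w, mu_l, var_l, noise=1.0):
    """Crude Gaussian update: shift means toward the observed outcome,
    weighted by current uncertainty, then shrink the variances."""
    surprise = 1.0 - 1.0 / (1.0 + math.exp(-(mu_w - mu_l)))  # P(loser wins) under current beliefs
    new_mu_w = mu_w + var_w * surprise / (var_w + noise)
    new_mu_l = mu_l - var_l * surprise / (var_l + noise)
    new_var_w = var_w * noise / (var_w + noise)
    new_var_l = var_l * noise / (var_l + noise)
    return new_mu_w, new_var_w, new_mu_l, new_var_l

print(elo_update(1500.0, 1500.0))
print(bayes_update(0.0, 1.0, 0.0, 1.0))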


Nature | 2018

Vector-based navigation using grid-like representations in artificial agents

Andrea Banino; Caswell Barry; Benigno Uria; Charles Blundell; Timothy P. Lillicrap; Piotr Mirowski; Alexander Pritzel; Martin J. Chadwick; Thomas Degris; Joseph Modayil; Greg Wayne; Hubert Soyer; Fabio Viola; Brian Zhang; Ross Goroshin; Neil C. Rabinowitz; Razvan Pascanu; Charlie Beattie; Stig Petersen; Amir Sadik; Stephen Gaffney; Helen King; Koray Kavukcuoglu; Demis Hassabis; Raia Hadsell; Dharshan Kumaran

Deep neural networks have achieved impressive successes in fields ranging from object recognition to complex games such as Go [1,2]. Navigation, however, remains a substantial challenge for artificial agents, with deep neural networks trained by reinforcement learning [3-5] failing to rival the proficiency of mammalian spatial behaviour, which is underpinned by grid cells in the entorhinal cortex [6]. Grid cells are thought to provide a multi-scale periodic representation that functions as a metric for coding space [7,8] and is critical for integrating self-motion (path integration) [6,7,9] and planning direct trajectories to goals (vector-based navigation) [7,10,11]. Here we set out to leverage the computational functions of grid cells to develop a deep reinforcement learning agent with mammal-like navigational abilities. We first trained a recurrent network to perform path integration, leading to the emergence of representations resembling grid cells, as well as other entorhinal cell types [12]. We then showed that this representation provided an effective basis for an agent to locate goals in challenging, unfamiliar, and changeable environments, optimizing the primary objective of navigation through deep reinforcement learning. The performance of agents endowed with grid-like representations surpassed that of an expert human and comparison agents, with the metric quantities necessary for vector-based navigation derived from grid-like units within the network. Furthermore, grid-like representations enabled agents to conduct shortcut behaviours reminiscent of those performed by mammals. Our findings show that emergent grid-like representations furnish agents with a Euclidean spatial metric and associated vector operations, providing a foundation for proficient navigation. As such, our results support neuroscientific theories that see grid cells as critical for vector-based navigation [7,10,11], demonstrating that the latter can be combined with path-based strategies to support navigation in challenging environments. Grid-like representations emerge spontaneously within a neural network trained to self-localize, enabling the agent to take shortcuts to destinations using vector-based navigation.
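
As a rough illustration of the first stage described above (training a recurrent network on path integration), the following Python/NumPy sketch feeds velocity inputs through a vanilla recurrent layer and regresses the hidden state onto Gaussian "place cell" targets. The network sizes, the plain tanh recurrence, and the readout-only training rule are assumptions made for brevity; the paper's agent (an LSTM with place and head-direction cell targets, dropout, and deep reinforcement learning on top) is not reproduced here.

import numpy as np

# Minimal sketch: a recurrent net integrates 2-D velocities and is trained to
# predict the activity of Gaussian place-cell targets. Illustrative only.

rng = np.random.default_rng(0)
T, H, P = 100, 64, 32                       # timesteps, hidden units, place cells
place_centres = rng.uniform(-1, 1, (P, 2))  # fixed place-cell centres

def place_code(pos, sigma=0.3):
    d2 = ((pos - place_centres) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * sigma ** 2))

W_in = rng.normal(0, 0.1, (H, 2))
W_rec = rng.normal(0, 0.1, (H, H))
W_out = rng.normal(0, 0.1, (P, H))
lr = 1e-2

for epoch in range(200):
    vel = rng.normal(0, 0.05, (T, 2))       # random-walk velocities
    pos = np.cumsum(vel, axis=0)            # true trajectory (the path integral)
    h = np.zeros(H)
    loss = 0.0
    for t in range(T):
        h = np.tanh(W_in @ vel[t] + W_rec @ h)
        pred = W_out @ h
        err = pred - place_code(pos[t])
        loss += (err ** 2).mean()
        W_out -= lr * np.outer(err, h) / P  # train the readout only, for brevity
    if epoch % 50 == 0:
        print(f"epoch {epoch}: loss {loss / T:.4f}")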


bioRxiv | 2017

Confidence modulates exploration and exploitation in value-based learning

Annika Boldt; Charles Blundell; Benedetto De Martino

Uncertainty is ubiquitous in cognitive processing, so agents require a way to deal with the noise inherent in their mental operations. Previous research suggests that people possess a remarkable ability to track and report uncertainty, often in the form of confidence judgments. Here, we argue that humans use the uncertainty inherent in their representations of value beliefs to arbitrate between exploration and exploitation. Such uncertainty is reflected in explicit confidence judgments. Using a novel variant of a multi-armed bandit paradigm, we studied how beliefs were formed and how uncertainty in the encoding of these value beliefs (belief confidence) evolved over time. We found that people used uncertainty to arbitrate between exploration and exploitation, reflected in a higher tendency towards exploration when their confidence in their value representations was low. We furthermore found that value uncertainty can be linked to frameworks of metacognition in decision making in two ways. First, belief confidence drives decision confidence, that is, people's evaluation of their own choices. Second, individuals with higher metacognitive insight into their choices were also better at tracing the uncertainty in their environment. Together, these findings argue that such uncertainty representations play a key role in the context of cognitive control.
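
A hedged Python sketch of the core idea, uncertainty-guided arbitration between exploration and exploitation: a Gaussian bandit learner whose probability of exploring grows when the posterior variance of its value beliefs is high (i.e., when belief confidence is low). The arm values, noise level, and the simple variance-driven exploration rule are illustrative assumptions, not the paradigm or the model fits reported in the paper.

import numpy as np

# Illustrative Gaussian multi-armed bandit: each arm's value belief is a
# Gaussian (mean, variance). High belief variance (low confidence) makes
# the agent more likely to explore. Parameters are assumptions.

rng = np.random.default_rng(1)
true_means = np.array([0.2, 0.5, 0.8])
mu = np.zeros(3)          # belief means
var = np.ones(3)          # belief variances (inverse of belief confidence)
obs_noise = 0.25

for trial in range(500):
    # Exploration probability grows with average belief uncertainty.
    p_explore = min(1.0, var.mean())
    if rng.random() < p_explore:
        arm = int(rng.integers(3))          # explore: random arm
    else:
        arm = int(np.argmax(mu))            # exploit: best believed arm
    reward = true_means[arm] + rng.normal(0, np.sqrt(obs_noise))
    # Conjugate Gaussian update of the chosen arm's belief.
    k = var[arm] / (var[arm] + obs_noise)
    mu[arm] += k * (reward - mu[arm])
    var[arm] *= (1 - k)

print("belief means:", np.round(mu, 2), "variances:", np.round(var, 3))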


Neural Information Processing Systems | 2016

Matching networks for one shot learning

Oriol Vinyals; Charles Blundell; Timothy P. Lillicrap; Koray Kavukcuoglu; Daan Wierstra


International Conference on Machine Learning | 2015

Weight Uncertainty in Neural Network

Charles Blundell; Julien Cornebise; Koray Kavukcuoglu; Daan Wierstra


Neural Information Processing Systems | 2016

Deep Exploration via Bootstrapped DQN

Ian Osband; Charles Blundell; Alexander Pritzel; Benjamin Van Roy


Neural Information Processing Systems | 2012

Modelling Reciprocating Relationships with Hawkes Processes

Charles Blundell; Jeffrey M. Beck; Katherine A. Heller


International Conference on Machine Learning | 2014

Deep AutoRegressive Networks

Karol Gregor; Ivo Danihelka; Andriy Mnih; Charles Blundell; Daan Wierstra


Cognitive Science | 2016

Learning to reinforcement learn

Jane X. Wang; Zeb Kurth-Nelson; Dhruva Tirumala; Hubert Soyer; Joel Z. Leibo; Rémi Munos; Charles Blundell; Dharshan Kumaran; Matthew Botvinick

Collaboration


Dive into Charles Blundell's collaborations.

Top Co-Authors

Benigno Uria

University of Edinburgh

Demis Hassabis

University College London
