Publications


Featured research published by Laurence Aitchison.


PLOS Computational Biology | 2015

Doubly Bayesian analysis of confidence in perceptual decision-making

Laurence Aitchison; Dan Bang; Bahador Bahrami; P.E. Latham

Humans stand out from other animals in that they are able to explicitly report on the reliability of their internal operations. This ability, which is known as metacognition, is typically studied by asking people to report their confidence in the correctness of some decision. However, the computations underlying confidence reports remain unclear. In this paper, we present a fully Bayesian method for directly comparing models of confidence. Using a visual two-interval forced-choice task, we tested whether confidence reports reflect heuristic computations (e.g. the magnitude of sensory data) or Bayes optimal ones (i.e. how likely a decision is to be correct given the sensory data). In a standard design in which subjects were first asked to make a decision, and only then gave their confidence, subjects were mostly Bayes optimal. In contrast, in a less-commonly used design in which subjects indicated their confidence and decision simultaneously, they were roughly equally likely to use the Bayes optimal strategy or to use a heuristic but suboptimal strategy. Our results suggest that, while people’s confidence reports can reflect Bayes optimal computations, even a small unusual twist or additional element of complexity can prevent optimality.
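The Bayes-optimal computation contrasted with heuristics in this abstract can be made concrete with a small ideal-observer simulation. This is a minimal sketch under assumed Gaussian noise and illustrative parameters (not the paper's fitted models or exact task design): Bayesian confidence is the posterior probability that the chosen interval contained the signal, and such confidence is calibrated, so high-confidence trials are correct more often than low-confidence ones.

```python
import math
import random

random.seed(1)

MU, SIGMA = 1.0, 1.0   # illustrative signal mean and sensory noise s.d.

def bayes_confidence(x1, x2):
    # Posterior probability that the chosen interval contained the signal,
    # assuming equal priors and Gaussian noise in both intervals.
    p1 = 1.0 / (1.0 + math.exp(-(MU / SIGMA**2) * (x1 - x2)))
    return max(p1, 1.0 - p1)   # confidence in whichever interval is chosen

n = 20_000
hi_trials = lo_trials = correct_hi = correct_lo = 0
for _ in range(n):
    x1 = random.gauss(MU, SIGMA)    # interval containing the signal
    x2 = random.gauss(0.0, SIGMA)   # noise-only interval
    conf = bayes_confidence(x1, x2)
    correct = int(x1 > x2)          # observer picks the larger evidence
    if conf > 0.75:                 # arbitrary illustrative split
        hi_trials += 1
        correct_hi += correct
    else:
        lo_trials += 1
        correct_lo += correct

# Calibration: trials reported with high confidence are right more often.
print(correct_hi / hi_trials, correct_lo / lo_trials)
```

A magnitude heuristic of the kind the paper tests against would instead base confidence on the evidence in the chosen interval alone, ignoring the other interval, and would not be calibrated in this way.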


Nature Neuroscience | 2017

Active dendritic integration as a mechanism for robust and precise grid cell firing

Christoph Schmidt-Hieber; Gabija Toleikyte; Laurence Aitchison; Arnd Roth; Beverley A. Clark; Tiago Branco; Michael Häusser

Understanding how active dendrites are exploited for behaviorally relevant computations is a fundamental challenge in neuroscience. Grid cells in medial entorhinal cortex are an attractive model system for addressing this question, as the computation they perform is clear: they convert synaptic inputs into spatially modulated, periodic firing. Whether active dendrites contribute to the generation of the dual temporal and rate codes characteristic of grid cell output is unknown. We show that dendrites of medial entorhinal cortex neurons are highly excitable and exhibit a supralinear input–output function in vitro, while in vivo recordings reveal membrane potential signatures consistent with recruitment of active dendritic conductances. By incorporating these nonlinear dynamics into grid cell models, we show that they can sharpen the precision of the temporal code and enhance the robustness of the rate code, thereby supporting a stable, accurate representation of space under varying environmental conditions. Our results suggest that active dendrites may therefore constitute a key cellular mechanism for ensuring reliable spatial navigation.
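The supralinear input-output function reported in vitro can be caricatured in a few lines. This is a toy sketch, not the paper's biophysical model: a sigmoidal gain applied to summed synaptic input, with made-up threshold and gain parameters, loosely mimicking NMDA-spike-like amplification.

```python
import math

def dendritic_output(total_input, threshold=4.0, gain=2.0):
    # Supralinear dendritic input-output function: summed synaptic input
    # passed through a sigmoidal gain. Parameters are illustrative only.
    return total_input / (1.0 + math.exp(-gain * (total_input - threshold)))

single = dendritic_output(2.0)   # one input alone
paired = dendritic_output(4.0)   # two such inputs arriving together
print(single, paired)            # paired response far exceeds 2 * single
```

Two coincident inputs drive the toy dendrite far more than twice one input alone, which is the kind of supralinear summation the paper's grid cell models exploit.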


PLOS Computational Biology | 2016

Zipf’s Law Arises Naturally When There Are Underlying, Unobserved Variables

Laurence Aitchison; Nicola Corradi; P.E. Latham

Zipf’s law, which states that the probability of an observation is inversely proportional to its rank, has been observed in many domains. While there are models that explain Zipf’s law in each of them, those explanations are typically domain-specific. Recently, methods from statistical physics were used to show that a fairly broad class of models does provide a general explanation of Zipf’s law. This explanation rests on the observation that real-world data is often generated from underlying causes, known as latent variables. Those latent variables mix together multiple models that do not obey Zipf’s law, giving a model that does. Here we extend that work both theoretically and empirically. Theoretically, we provide a far simpler and more intuitive explanation of Zipf’s law, which at the same time considerably extends the class of models to which this explanation can apply. Furthermore, we also give methods for verifying whether this explanation applies to a particular dataset. Empirically, these advances allowed us to extend this explanation to important classes of data, including word frequencies (the first domain in which Zipf’s law was discovered), data with variable sequence length, and multi-neuron spiking activity.
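The abstract's central claim, that mixing non-Zipfian distributions over an unobserved latent variable yields Zipf's law, can be demonstrated numerically. A minimal sketch under assumed distributions (not the paper's analysis): each latent scale gives a discretized exponential, which alone is not Zipfian, but mixing over a broad log-uniform prior on the scale produces a rank-frequency curve with log-log slope near -1.

```python
import math
import random

random.seed(0)

# Each latent scale s yields a discretized exponential distribution --
# individually NOT Zipfian.  Mixing over a broad (log-uniform) prior on s
# produces an approximately Zipfian rank-frequency curve.
counts = {}
for _ in range(200_000):
    s = math.exp(random.uniform(0.0, 10.0))   # log-uniform latent scale
    k = int(random.expovariate(1.0 / s))      # discretized exponential draw
    counts[k] = counts.get(k, 0) + 1

# Rank-frequency curve: observed frequencies sorted in descending order.
freqs = sorted(counts.values(), reverse=True)

# Least-squares slope of log(frequency) vs log(rank) over mid ranks.
ranks = range(5, 100)
xs = [math.log(r) for r in ranks]
ys = [math.log(freqs[r - 1]) for r in ranks]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
print(slope)   # close to -1, the Zipf exponent
```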


Nature Human Behaviour | 2017

Confidence matching in group decision-making

Dan Bang; Laurence Aitchison; Rani Moran; Santiago Herce Castañón; Banafsheh Rafiee; Ali Mahmoodi; Jennifer Y. F. Lau; P.E. Latham; Bahador Bahrami; Christopher Summerfield

Most important decisions in our society are made by groups, from cabinets and commissions to boards and juries. When disagreement arises, opinions expressed with higher confidence tend to carry more weight [1,2]. Although an individual’s degree of confidence often reflects the probability that their opinion is correct [3,4], it can also vary with task-irrelevant psychological, social, cultural and demographic factors [5–9]. Therefore, to combine their opinions optimally, group members must adapt to each other’s individual biases and express their confidence according to a common metric [10–12]. However, solving this communication problem is computationally difficult. Here we show that pairs of individuals making group decisions meet this challenge by using a heuristic strategy that we call ‘confidence matching’: they match their communicated confidence so that certainty and uncertainty is stated in approximately equal measure by each party. Combining the behavioural data with computational modelling, we show that this strategy is effective when group members have similar levels of expertise, and that it is robust when group members have no insight into their relative levels of expertise. Confidence matching is, however, sub-optimal and can cause miscommunication about who is more likely to be correct. This herding behaviour is one reason why groups can fail to make good decisions [10–12].
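The cost of miscalibrated confidence in a dyad can be illustrated with a crude simulation. This is a deliberately simplified, one-sided caricature of confidence matching (the paper models both members matching their full confidence distributions): two members of unequal reliability each report calibrated confidence, the group follows whoever is more confident, and inflating the weaker member's reported confidence toward the stronger member's lowers group accuracy.

```python
import math
import random

random.seed(2)

def trial(sigma):
    # One binary decision: true state +1, noisy observation, calibrated
    # confidence = posterior probability of the chosen option (toy model).
    x = random.gauss(1.0, sigma)
    p_plus = 1.0 / (1.0 + math.exp(-2.0 * x / sigma**2))
    choice = 1 if p_plus >= 0.5 else -1
    return choice, max(p_plus, 1.0 - p_plus)

def group_accuracy(inflate_weak=1.0, n=20_000):
    # The group follows whichever member reports higher confidence.
    # inflate_weak > 1 crudely mimics the weaker member overstating
    # confidence to match the stronger member's.
    correct = 0
    for _ in range(n):
        c1, f1 = trial(0.8)                    # more reliable member
        c2, f2 = trial(2.0)                    # less reliable member
        f2 = 0.5 + inflate_weak * (f2 - 0.5)   # matched (inflated) report
        choice = c1 if f1 >= f2 else c2
        correct += int(choice == 1)
    return correct / n

calibrated = group_accuracy(1.0)
matched = group_accuracy(1.8)
print(calibrated, matched)   # matching hurts: matched < calibrated
```

The drop in accuracy arises because overstated confidence makes the group defer to the worse-informed member, the miscommunication about who is more likely to be correct that the abstract describes.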


PLOS Computational Biology | 2016

The Hamiltonian Brain: Efficient Probabilistic Inference with Excitatory-Inhibitory Neural Circuit Dynamics

Laurence Aitchison; Máté Lengyel

Probabilistic inference offers a principled framework for understanding both behaviour and cortical computation. However, two basic and ubiquitous properties of cortical responses seem difficult to reconcile with probabilistic inference: neural activity displays prominent oscillations in response to constant input, and large transient changes in response to stimulus onset. Indeed, cortical models of probabilistic inference have typically either concentrated on tuning curve or receptive field properties and remained agnostic as to the underlying circuit dynamics, or had simplistic dynamics that gave neither oscillations nor transients. Here we show that these dynamical behaviours may in fact be understood as hallmarks of the specific representation and algorithm that the cortex employs to perform probabilistic inference. We demonstrate that a particular family of probabilistic inference algorithms, Hamiltonian Monte Carlo (HMC), naturally maps onto the dynamics of excitatory-inhibitory neural networks. Specifically, we constructed a model of an excitatory-inhibitory circuit in primary visual cortex that performed HMC inference, and thus inherently gave rise to oscillations and transients. These oscillations were not mere epiphenomena but served an important functional role: speeding up inference by rapidly spanning a large volume of state space. Inference thus became an order of magnitude more efficient than in a non-oscillatory variant of the model. In addition, the network matched two specific properties of observed neural dynamics that would otherwise be difficult to account for using probabilistic inference. First, the frequency of oscillations as well as the magnitude of transients increased with the contrast of the image stimulus. Second, excitation and inhibition were balanced, and inhibition lagged excitation. These results suggest a new functional role for the separation of cortical populations into excitatory and inhibitory neurons, and for the neural oscillations that emerge in such excitatory-inhibitory networks: enhancing the efficiency of cortical computations.
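The HMC algorithm the paper maps onto E-I circuit dynamics can be sketched in a few lines. This is a textbook leapfrog HMC sampler on a standard Gaussian target with illustrative step sizes (not the paper's circuit model): the auxiliary momentum makes the state oscillate through the target distribution rather than diffuse, which is the source of the efficiency gain over non-oscillatory samplers.

```python
import math
import random

random.seed(3)

def hmc_step(x, step=0.2, n_leap=10):
    # One HMC step for a standard Gaussian target, where grad(-log p(x)) = x.
    # Momentum is resampled, the state is evolved by leapfrog integration
    # (kick-drift-kick), and a Metropolis test corrects discretization error.
    p = random.gauss(0.0, 1.0)
    x_new, p_new = x, p
    for _ in range(n_leap):
        p_new -= 0.5 * step * x_new
        x_new += step * p_new
        p_new -= 0.5 * step * x_new
    h_old = 0.5 * (x * x + p * p)                 # Hamiltonian before
    h_new = 0.5 * (x_new * x_new + p_new * p_new)  # Hamiltonian after
    if random.random() < math.exp(min(0.0, h_old - h_new)):
        return x_new
    return x

samples, x = [], 0.0
for _ in range(5000):
    x = hmc_step(x)
    samples.append(x)

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(mean, var)   # near 0 and 1 for the standard Gaussian target
```

In the paper's mapping, excitatory activity plays the role of the state variable and inhibitory activity the role of the momentum, so the oscillatory trajectories of the sampler become oscillations of the E-I network.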


Current Opinion in Neurobiology | 2017

With or without you: predictive coding and Bayesian inference in the brain

Laurence Aitchison; Máté Lengyel

Two theoretical ideas have emerged recently with the ambition to provide a unifying functional explanation of neural population coding and dynamics: predictive coding and Bayesian inference. Here, we describe the two theories and their combination into a single framework: Bayesian predictive coding. We clarify how the two theories can be distinguished, despite sharing core computational concepts and addressing an overlapping set of empirical phenomena. We argue that predictive coding is an algorithmic/representational motif that can serve several different computational goals of which Bayesian inference is but one. Conversely, while Bayesian inference can utilize predictive coding, it can also be realized by a variety of other representations. We critically evaluate the experimental evidence supporting Bayesian predictive coding and discuss how to test it more directly.
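The combination the review describes, Bayesian inference implemented via predictive coding, can be illustrated with the simplest possible model. This is a minimal sketch of Rao-and-Ballard-style linear predictive coding with made-up parameters (not the review's formalism): prediction-error units drive a latent estimate until prior and sensory errors balance, and for a linear Gaussian model the fixed point is exactly the Bayesian posterior mean.

```python
# Generative model (illustrative parameters):
#   latent cause   v ~ Normal(V_PRIOR, VAR_PRIOR)
#   observation    u = G * v + Normal(0, VAR_OBS)
V_PRIOR, VAR_PRIOR = 1.0, 1.0
G, VAR_OBS = 2.0, 0.5
u = 3.0                    # observed input

v = 0.0                    # latent estimate, relaxed by error dynamics
for _ in range(2000):
    eps_prior = (v - V_PRIOR) / VAR_PRIOR   # prior prediction error
    eps_obs = (u - G * v) / VAR_OBS         # sensory prediction error
    v += 0.01 * (-eps_prior + G * eps_obs)  # gradient ascent on log posterior

# Analytic posterior mean for the same linear Gaussian model:
posterior_mean = (V_PRIOR / VAR_PRIOR + G * u / VAR_OBS) / \
                 (1.0 / VAR_PRIOR + G**2 / VAR_OBS)
print(v, posterior_mean)   # the error dynamics converge to the Bayesian answer
```

This equivalence only holds for linear Gaussian models, which is one reason the two theories can come apart: predictive coding is the representational motif (error units and updates), while Bayesian inference is one computational goal it can serve.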


Neural Information Processing Systems | 2014

Fast Sampling-Based Inference in Balanced Neuronal Networks

Guillaume Hennequin; Laurence Aitchison; Máté Lengyel


arXiv: Neurons and Cognition | 2015

Synaptic sampling: A connection between PSP variability and uncertainty explains neurophysiological observations

Laurence Aitchison; P.E. Latham


arXiv: Neurons and Cognition | 2014

Zipf's law arises naturally in structured, high-dimensional data

Laurence Aitchison; Nicola Corradi; P.E. Latham


Neural Information Processing Systems | 2017

Model-based Bayesian inference of neural activity and connectivity from all-optical interrogation of a neural circuit

Laurence Aitchison; Lloyd E Russell; Adam M. Packer; Jinyao Yan; Philippe Castonguay; Michael Häusser; Srinivas C. Turaga

Collaboration


Dive into Laurence Aitchison's collaborations.

Top Co-Authors


P.E. Latham

University College London


Bahador Bahrami

University College London


Srinivas C. Turaga

Massachusetts Institute of Technology


Arnd Roth

University College London