
Publication


Featured research published by David P. Reichert.


PLOS Computational Biology | 2013

Charles Bonnet Syndrome: Evidence for a Generative Model in the Cortex?

David P. Reichert; Peggy Seriès; Amos J. Storkey

Several theories propose that the cortex implements an internal model to explain, predict, and learn about sensory data, but the nature of this model is unclear. One condition that could be highly informative here is Charles Bonnet syndrome (CBS), where loss of vision leads to complex, vivid visual hallucinations of objects, people, and whole scenes. CBS could be taken as indication that there is a generative model in the brain, specifically one that can synthesise rich, consistent visual representations even in the absence of actual visual input. The processes that lead to CBS are poorly understood. Here, we argue that a model recently introduced in machine learning, the deep Boltzmann machine (DBM), could capture the relevant aspects of (hypothetical) generative processing in the cortex. The DBM carries both the semantics of a probabilistic generative model and of a neural network. The latter allows us to model a concrete neural mechanism that could underlie CBS, namely, homeostatic regulation of neuronal activity. We show that homeostatic plasticity could serve to make the learnt internal model robust against e.g. degradation of sensory input, but overcompensate in the case of CBS, leading to hallucinations. We demonstrate how a wide range of features of CBS can be explained in the model and suggest a potential role for the neuromodulator acetylcholine. This work constitutes the first concrete computational model of CBS and the first application of the DBM as a model in computational neuroscience. Our results lend further credence to the hypothesis of a generative model in the brain.
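The homeostatic mechanism this abstract describes can be illustrated with a minimal numerical sketch. This is a deliberately simplified toy (one layer of sigmoid units with a bias-adaptation rule; all names and constants are illustrative, not the paper's actual DBM implementation): removing the sensory drive lowers activity below a target firing rate, and the homeostatic rule restores it as spontaneous, input-free activity, the model's analogue of hallucination.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

target_rate = 0.2           # firing rate each unit tries to maintain
eta = 0.05                  # speed of homeostatic bias adjustment
biases = np.full(50, -2.0)  # hidden-unit biases, tuned for normal input

# With sensory input removed (the CBS condition), the only remaining
# drive is the bias, so activity collapses below the target rate.
rate_after_blinding = sigmoid(biases).mean()

# Homeostatic plasticity: raise the bias of any unit firing below
# target, lower it for any unit firing above target.
for _ in range(500):
    biases += eta * (target_rate - sigmoid(biases))

# Activity is restored -- but it is now spontaneous activity in the
# absence of input: the overcompensation the paper argues underlies CBS.
spontaneous_rate = sigmoid(biases).mean()
```

The point of the sketch is only the sign of the effect: a rule that stabilizes firing rates against normal input fluctuations necessarily regenerates activity when input disappears entirely.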


International Conference on Artificial Neural Networks | 2011

A hierarchical generative model of recurrent object-based attention in the visual cortex

David P. Reichert; Peggy Seriès; Amos J. Storkey

In line with recent work exploring Deep Boltzmann Machines (DBMs) as models of cortical processing, we demonstrate the potential of DBMs as models of object-based attention, combining generative principles with attentional ones. We show: (1) how inference in DBMs can be related qualitatively to theories of attentional recurrent processing in the visual cortex; (2) that deepness and topographic receptive fields are important for realizing the attentional state; (3) how more explicit attentional suppressive mechanisms can be implemented, depending crucially on sparse representations being formed during learning.


Science | 2018

Neural scene representation and rendering

S. M. Ali Eslami; Danilo Jimenez Rezende; Frederic Besse; Fabio Viola; Ari S. Morcos; Marta Garnelo; Avraham Ruderman; Andrei A. Rusu; Ivo Danihelka; Karol Gregor; David P. Reichert; Lars Buesing; Theophane Weber; Oriol Vinyals; Dan Rosenbaum; Neil C. Rabinowitz; Helen King; Chloe Hillier; Matt Botvinick; Daan Wierstra; Koray Kavukcuoglu; Demis Hassabis

A scene-internalizing computer program

To train a computer to “recognize” elements of a scene supplied by its visual sensors, computer scientists typically use millions of images painstakingly labeled by humans. Eslami et al. developed an artificial vision system, dubbed the Generative Query Network (GQN), that has no need for such labeled data. Instead, the GQN first uses images taken from different viewpoints and creates an abstract description of the scene, learning its essentials. Next, on the basis of this representation, the network predicts what the scene would look like from a new, arbitrary viewpoint. (Science, this issue p. 1204.)

A computer vision system predicts how a 3D scene looks from any viewpoint after just a few 2D views from other viewpoints.

Scene representation—the process of converting visual sensory data into concise descriptions—is a requirement for intelligent behavior. Recent work has shown that neural networks excel at this task when provided with large, labeled datasets. However, removing the reliance on human labeling remains an important open problem. To this end, we introduce the Generative Query Network (GQN), a framework within which machines learn to represent scenes using only their own sensors. The GQN takes as input images of a scene taken from different viewpoints, constructs an internal representation, and uses this representation to predict the appearance of that scene from previously unobserved viewpoints. The GQN demonstrates representation learning without human labels or domain knowledge, paving the way toward machines that autonomously learn to understand the world around them.
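The pipeline the abstract outlines (encode each observed view together with its viewpoint, aggregate the embeddings order-invariantly into a scene code, then decode that code at a query viewpoint) can be sketched with toy linear maps. The real GQN uses convolutional encoders and a recurrent latent-variable generator, so every network, dimension, and name below is an illustrative stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the GQN's two networks (illustrative only).
D_IMG, D_VIEW, D_REPR = 64, 7, 16
W_img = 0.1 * rng.normal(size=(D_IMG, D_REPR))
W_view = 0.1 * rng.normal(size=(D_VIEW, D_REPR))
W_gen = 0.1 * rng.normal(size=(D_REPR + D_VIEW, D_IMG))

def represent(image, viewpoint):
    # Representation network: embed one (image, viewpoint) observation.
    return image @ W_img + viewpoint @ W_view

def aggregate(embeddings):
    # Order-invariant aggregation: summing per-view embeddings means
    # the scene code does not depend on observation order.
    return np.sum(embeddings, axis=0)

def generate(scene_repr, query_viewpoint):
    # Generator: predict the image at an unobserved query viewpoint.
    return np.concatenate([scene_repr, query_viewpoint]) @ W_gen

# Three observed views of one scene, then a query from a new angle.
images = rng.normal(size=(3, D_IMG))
views = rng.normal(size=(3, D_VIEW))
scene = aggregate([represent(i, v) for i, v in zip(images, views)])
prediction = generate(scene, rng.normal(size=D_VIEW))
```

The sum aggregation is the structural point: because the scene code is a permutation-invariant function of the observations, the same representation is built no matter which views arrive first.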


BMC Neuroscience | 2011

Homeostasis causes hallucinations in a hierarchical generative model of the visual cortex: the Charles Bonnet Syndrome

David P. Reichert; Peggy Seriès; Amos J. Storkey

Hierarchical predictive models of the cortex [1,2] posit that the prediction of sensory input is a crucial aspect of cortical processing. Evaluating the internally generated predictions against actual input could be a powerful means of learning about causes in the world. During inference itself, rich high-level representations could then be utilized to resolve low-level ambiguities in sensory inputs via feed-back processing. A natural phenomenon to consider in such frameworks is that of hallucinations. In the Charles Bonnet Syndrome (CBS) [3-5], patients suffering primarily from eye diseases develop complex visual hallucinations containing vivid and life-like images of objects, animals, people, etc. This syndrome is of particular interest as the complex content of the hallucinations rules out explanations based on simple low-level aspects of cortical organization, which are more suited to describing simpler hallucinations such as geometric patterns [6]. Moreover, the primary cause for the syndrome seems to be loss of sensory input in an otherwise healthy brain. Hence, a computational model of CBS needs to be capable of evoking rich internal representations in the absence of external input, and to elucidate the underlying mechanisms. We explore Deep Boltzmann Machines (DBMs) as models of cortical processing. DBMs are hierarchical, probabilistic neural networks that learn to generate the data they are trained on based on simple Hebbian learning rules. To explain CBS, we propose that homeostatic mechanisms that serve to stabilize neuronal firing rates [7] overcompensate for the loss of sensory input. With a model trained on simple toy images that then had its input removed, we demonstrate that homeostatic adaptation is sufficient to cause spontaneous occurrence of internal representations of the toy objects.
We qualitatively analyze various properties of the model in the light of clinical evidence about CBS, such as an initial latent period before hallucination onset, an occasional localization of imagery to damaged regions of the visual field, and the effects of cortical suppression and lesions. To elucidate the potential role of drowsiness in causing hallucinations, we model acetylcholine as mediating the balance between feed-forward and feed-back processing in the hierarchy. An earlier version of this work was presented to a machine learning audience [8]. Here, we extend it with additional simulations to elaborate on our findings. In particular, we utilize more complex data sets, enforce sparsity to establish a clearer link between loss of input and decrease of cortical activity, and further justify the interpretation of the acetylcholine mechanism from a biological point of view.


International Conference on Machine Learning | 2017

The Predictron: End-To-End Learning and Planning

David Silver; Hado van Hasselt; Matteo Hessel; Tom Schaul; Arthur Guez; Tim Harley; Gabriel Dulac-Arnold; David P. Reichert; Neil C. Rabinowitz; André da Motta Salles Barreto; Thomas Degris


Neural Information Processing Systems | 2017

Imagination-Augmented Agents for Deep Reinforcement Learning

Sébastien Racanière; Theophane Weber; David P. Reichert; Lars Buesing; Arthur Guez; Danilo Jimenez Rezende; Adrià Puigdomènech Badia; Oriol Vinyals; Nicolas Heess; Yujia Li; Razvan Pascanu; Peter Battaglia; Demis Hassabis; David Silver; Daan Wierstra


Neural Information Processing Systems | 2010

Hallucinations in Charles Bonnet Syndrome Induced by Homeostasis: a Deep Boltzmann Machine Model

Peggy Seriès; David P. Reichert; Amos J. Storkey


International Conference on Learning Representations | 2014

Neuronal Synchrony in Complex-Valued Deep Networks

David P. Reichert; Thomas Serre


Neural Information Processing Systems | 2011

Neuronal Adaptation for Sampling-Based Probabilistic Inference in Perceptual Bistability

David P. Reichert; Peggy Seriès; Amos J. Storkey


arXiv: Artificial Intelligence | 2017

Learning model-based planning from scratch.

Razvan Pascanu; Yujia Li; Oriol Vinyals; Nicolas Heess; Lars Buesing; Sébastien Racanière; David P. Reichert; Theophane Weber; Daan Wierstra; Peter Battaglia

Collaboration


Dive into David P. Reichert's collaborations.

Top Co-Authors

Oriol Vinyals

University of California


Daan Wierstra

Dalle Molle Institute for Artificial Intelligence Research
