
Publication


Featured research published by Neil C. Rabinowitz.


Proceedings of the National Academy of Sciences of the United States of America | 2017

Overcoming catastrophic forgetting in neural networks

James Kirkpatrick; Razvan Pascanu; Neil C. Rabinowitz; Joel Veness; Guillaume Desjardins; Andrei A. Rusu; Kieran Milan; John Quan; Tiago Ramalho; Agnieszka Grabska-Barwinska; Demis Hassabis; Claudia Clopath; Dharshan Kumaran; Raia Hadsell

Significance: Deep neural networks are currently the most successful machine-learning technique for solving a variety of tasks, including language translation, image classification, and image generation. One weakness of such models is that, unlike humans, they are unable to learn multiple tasks sequentially. In this work we propose a practical solution for training such models sequentially by protecting the weights important for previous tasks. This approach, inspired by synaptic consolidation in neuroscience, enables state-of-the-art results on multiple reinforcement learning problems experienced sequentially.

The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Until now, neural networks have not been capable of this, and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks that they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate that our approach is scalable and effective by solving a set of classification tasks based on a hand-written digit dataset and by learning several Atari 2600 games sequentially.
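
The "selective slowing down" described above amounts to a quadratic penalty on moving away from weights that mattered for earlier tasks. The following is a minimal PyTorch sketch of that idea, not the authors' implementation; the names (ewc_penalty, star_params, fisher_diag) and the default weighting are illustrative assumptions.

```python
import torch

def ewc_penalty(model, star_params, fisher_diag, lam=1000.0):
    """Quadratic penalty that slows learning on weights important for a previous task.

    star_params: {name: tensor} of parameters saved after training the previous task.
    fisher_diag: {name: tensor} of per-parameter importance estimates (e.g. a diagonal
                 Fisher approximation); both dicts are assumed precomputed.
    """
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for name, p in model.named_parameters():
        penalty = penalty + (fisher_diag[name] * (p - star_params[name]) ** 2).sum()
    return 0.5 * lam * penalty

# When training on a new task B, the penalty is simply added to task B's loss:
#   loss = task_b_loss + ewc_penalty(model, star_params_A, fisher_A)
```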


Nature | 2018

Vector-based navigation using grid-like representations in artificial agents

Andrea Banino; Caswell Barry; Benigno Uria; Charles Blundell; Timothy P. Lillicrap; Piotr Mirowski; Alexander Pritzel; Martin J. Chadwick; Thomas Degris; Joseph Modayil; Greg Wayne; Hubert Soyer; Fabio Viola; Brian Zhang; Ross Goroshin; Neil C. Rabinowitz; Razvan Pascanu; Charlie Beattie; Stig Petersen; Amir Sadik; Stephen Gaffney; Helen King; Koray Kavukcuoglu; Demis Hassabis; Raia Hadsell; Dharshan Kumaran

Deep neural networks have achieved impressive successes in fields ranging from object recognition to complex games such as Go1,2. Navigation, however, remains a substantial challenge for artificial agents, with deep neural networks trained by reinforcement learning3–5 failing to rival the proficiency of mammalian spatial behaviour, which is underpinned by grid cells in the entorhinal cortex6. Grid cells are thought to provide a multi-scale periodic representation that functions as a metric for coding space7,8 and is critical for integrating self-motion (path integration)6,7,9 and planning direct trajectories to goals (vector-based navigation)7,10,11. Here we set out to leverage the computational functions of grid cells to develop a deep reinforcement learning agent with mammal-like navigational abilities. We first trained a recurrent network to perform path integration, leading to the emergence of representations resembling grid cells, as well as other entorhinal cell types12. We then showed that this representation provided an effective basis for an agent to locate goals in challenging, unfamiliar, and changeable environments—optimizing the primary objective of navigation through deep reinforcement learning. The performance of agents endowed with grid-like representations surpassed that of an expert human and comparison agents, with the metric quantities necessary for vector-based navigation derived from grid-like units within the network. Furthermore, grid-like representations enabled agents to conduct shortcut behaviours reminiscent of those performed by mammals. Our findings show that emergent grid-like representations furnish agents with a Euclidean spatial metric and associated vector operations, providing a foundation for proficient navigation. As such, our results support neuroscientific theories that see grid cells as critical for vector-based navigation7,10,11, demonstrating that the latter can be combined with path-based strategies to support navigation in challenging environments.

Grid-like representations emerge spontaneously within a neural network trained to self-localize, enabling the agent to take shortcuts to destinations using vector-based navigation.
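
As a rough illustration of the first step described above (a recurrent network trained to perform path integration), the sketch below maps velocity inputs to a place-cell-like target code through a linear bottleneck, where grid-like tuning might emerge. It is a PyTorch sketch under assumptions: the layer sizes, the three-dimensional (speed, sin, cos of heading) input, and the class name are not the published architecture.

```python
import torch
from torch import nn

class PathIntegrator(nn.Module):
    """Recurrent net that integrates velocity inputs into a position code.

    Trained with a supervised path-integration objective, units in the linear
    bottleneck may develop spatially periodic (grid-like) tuning.
    """
    def __init__(self, n_place_cells=256, hidden=128, bottleneck=64):
        super().__init__()
        self.rnn = nn.LSTM(input_size=3, hidden_size=hidden, batch_first=True)  # (speed, sin, cos)
        self.bottleneck = nn.Linear(hidden, bottleneck)
        self.place_readout = nn.Linear(bottleneck, n_place_cells)

    def forward(self, velocities):          # velocities: (batch, time, 3)
        h, _ = self.rnn(velocities)
        g = self.bottleneck(h)              # candidate grid-like code
        return self.place_readout(g), g     # logits over place-cell targets
```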


Science | 2018

Neural scene representation and rendering

S. M. Ali Eslami; Danilo Jimenez Rezende; Frederic Besse; Fabio Viola; Ari S. Morcos; Marta Garnelo; Avraham Ruderman; Andrei A. Rusu; Ivo Danihelka; Karol Gregor; David P. Reichert; Lars Buesing; Theophane Weber; Oriol Vinyals; Dan Rosenbaum; Neil C. Rabinowitz; Helen King; Chloe Hillier; Matt Botvinick; Daan Wierstra; Koray Kavukcuoglu; Demis Hassabis

A scene-internalizing computer program: To train a computer to “recognize” elements of a scene supplied by its visual sensors, computer scientists typically use millions of images painstakingly labeled by humans. Eslami et al. developed an artificial vision system, dubbed the Generative Query Network (GQN), that has no need for such labeled data. Instead, the GQN first uses images taken from different viewpoints and creates an abstract description of the scene, learning its essentials. Next, on the basis of this representation, the network predicts what the scene would look like from a new, arbitrary viewpoint. (Science, this issue, p. 1204.)

A computer vision system predicts how a 3D scene looks from any viewpoint after just a few 2D views from other viewpoints.

Scene representation—the process of converting visual sensory data into concise descriptions—is a requirement for intelligent behavior. Recent work has shown that neural networks excel at this task when provided with large, labeled datasets. However, removing the reliance on human labeling remains an important open problem. To this end, we introduce the Generative Query Network (GQN), a framework within which machines learn to represent scenes using only their own sensors. The GQN takes as input images of a scene taken from different viewpoints, constructs an internal representation, and uses this representation to predict the appearance of that scene from previously unobserved viewpoints. The GQN demonstrates representation learning without human labels or domain knowledge, paving the way toward machines that autonomously learn to understand the world around them.
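
A minimal sketch of the GQN-style interface described above: encode each (image, viewpoint) pair and sum the results into an order-invariant scene representation that a generator could then condition on for a query viewpoint. This is only an illustration in PyTorch under assumed shapes (a 7-dimensional camera pose, small images); the published model's representation and generation networks are substantially more elaborate.

```python
import torch
from torch import nn

class SceneEncoder(nn.Module):
    """Maps each (image, viewpoint) pair to a vector and sums over context views."""
    def __init__(self, repr_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.fc = nn.Linear(64 + 7, repr_dim)   # 7-dim camera pose is an assumption

    def forward(self, images, viewpoints):       # (B, K, 3, H, W), (B, K, 7)
        B, K = images.shape[:2]
        feats = self.conv(images.flatten(0, 1))                            # (B*K, 64)
        r = self.fc(torch.cat([feats, viewpoints.flatten(0, 1)], dim=-1))  # per-view code
        return r.view(B, K, -1).sum(dim=1)       # order-invariant scene representation
```

A separate generator network (omitted here) would take this scene code together with a query viewpoint and predict the corresponding image.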


Proceedings of the National Academy of Sciences of the United States of America | 2018

Reply to Huszár: The elastic weight consolidation penalty is empirically valid

James Kirkpatrick; Razvan Pascanu; Neil C. Rabinowitz; Joel Veness; Guillaume Desjardins; Andrei A. Rusu; Kieran Milan; John Quan; Tiago Ramalho; Agnieszka Grabska-Barwinska; Demis Hassabis; Claudia Clopath; Dharshan Kumaran; Raia Hadsell

In our recent work on elastic weight consolidation (EWC) (1) we show that forgetting in neural networks can be alleviated by using a quadratic penalty whose derivation was inspired by Bayesian evidence accumulation. In his letter (2), Dr. Huszár provides an alternative form for this penalty by following the standard work on expectation propagation using the Laplace approximation (3). He correctly argues that in cases when more than two tasks are undertaken the two forms of the penalty are different. Dr. Huszár also shows that for a toy linear regression problem his expression appears to be better. We would like to thank Dr. Huszár for pointing out …
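
For reference, the penalty form whose empirical validity the reply defends is, as I read the original EWC paper (notation lightly adapted, so this is an approximation of the authors' exact formulation), a sum of separate quadratic terms, one per previously learned task:

```latex
% EWC objective when training on task T, given previously learned tasks t < T.
% L_T is the new task's loss, F_{t,i} a diagonal Fisher estimate of parameter i's
% importance for task t, \theta^*_t the parameters at the end of task t, and
% \lambda_t a per-task weighting.
\mathcal{L}(\theta) \;=\; \mathcal{L}_T(\theta)
  \;+\; \sum_{t < T} \frac{\lambda_t}{2} \sum_i F_{t,i}\,\bigl(\theta_i - \theta^*_{t,i}\bigr)^2
```

Huszár's alternative, as I understand it, instead accumulates the curvature terms into a single penalty anchored at the most recently learned parameters; as the reply notes, the two forms coincide for two tasks but differ once more than two tasks are undertaken.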


arXiv: Learning | 2017

Progressive Neural Networks

Andrei A. Rusu; Neil C. Rabinowitz; Guillaume Desjardins; Hubert Soyer; James Kirkpatrick; Koray Kavukcuoglu; Razvan Pascanu; Raia Hadsell


International Conference on Machine Learning | 2017

The Predictron: End-To-End Learning and Planning

David Silver; Hado van Hasselt; Matteo Hessel; Tom Schaul; Arthur Guez; Tim Harley; Gabriel Dulac-Arnold; David P. Reichert; Neil C. Rabinowitz; André da Motta Salles Barreto; Thomas Degris


International Conference on Learning Representations | 2018

On the importance of single directions for generalization

Ari S. Morcos; David G. T. Barrett; Neil C. Rabinowitz; Matthew Botvinick


Behavioral and Brain Sciences | 2017

Building machines that learn and think for themselves

Matthew Botvinick; David G. T. Barrett; Peter Battaglia; Nando de Freitas; Dharshan Kumaran; Joel Z. Leibo; Timothy P. Lillicrap; Joseph Modayil; Shakir Mohamed; Neil C. Rabinowitz; Danilo Jimenez Rezende; Adam Santoro; Tom Schaul; Christopher Summerfield; Greg Wayne; Theophane Weber; Daan Wierstra; Shane Legg; Demis Hassabis


International Conference on Machine Learning | 2018

Machine Theory of Mind

Neil C. Rabinowitz; Frank Perbet; H. Francis Song; Chiyuan Zhang; S. M. Ali Eslami; Matthew Botvinick


arXiv: Artificial Intelligence | 2017

Building Machines that Learn and Think for Themselves: Commentary on Lake et al., Behavioral and Brain Sciences, 2017

Matthew Botvinick; David G. T. Barrett; Peter Battaglia; Nando de Freitas; Dharshan Kumaran; Joel Z. Leibo; Tim Lillicrap; Joseph Modayil; Shakir Mohamed; Neil C. Rabinowitz; Danilo Jimenez Rezende; Adam Santoro; Tom Schaul; Christopher Summerfield; Greg Wayne; Theophane Weber; Daan Wierstra; Shane Legg; Demis Hassabis
