Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Greg Wayne is active.

Publication


Featured research published by Greg Wayne.


Nature | 2016

Hybrid computing using a neural network with dynamic external memory

Alex Graves; Greg Wayne; Malcolm Reynolds; Tim Harley; Ivo Danihelka; Agnieszka Grabska-Barwinska; Sergio Gómez Colmenarejo; Edward Grefenstette; Tiago Ramalho; John Agapiou; Adrià Puigdomènech Badia; Karl Moritz Hermann; Yori Zwols; Georg Ostrovski; Adam Cain; Helen King; Christopher Summerfield; Phil Blunsom; Koray Kavukcuoglu; Demis Hassabis

Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read–write memory.
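The core mechanism the abstract describes is a controller network that reads from and writes to an external memory matrix through differentiable addressing. Below is a minimal sketch of content-based read addressing, the attention-by-similarity step shared by the DNC and Neural Turing Machines; the names (memory, key, beta) and shapes are illustrative assumptions, not the paper's exact notation or full read/write machinery.

```python
# Minimal sketch of content-based memory reading (illustrative, not the
# paper's full DNC addressing, which also includes temporal links and
# dynamic allocation for writing).
import numpy as np

def content_read(memory: np.ndarray, key: np.ndarray, beta: float = 1.0) -> np.ndarray:
    """Read from a memory matrix by cosine similarity to a key vector.

    memory: (N, W) matrix of N slots, each a W-dimensional word.
    key:    (W,) query emitted by the controller network.
    beta:   sharpness of the resulting attention distribution.
    """
    eps = 1e-8
    # Cosine similarity between the key and every memory slot.
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + eps)
    # Softmax over slots gives differentiable read weights.
    logits = beta * sims
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    # The read vector is a convex combination of memory rows.
    return weights @ memory

memory = np.random.randn(128, 20)              # 128 slots, word size 20
key = memory[7] + 0.1 * np.random.randn(20)    # noisy query for slot 7
print(content_read(memory, key, beta=5.0))     # approximately recovers slot 7
```

Because every step is a smooth function of the key and the memory contents, gradients flow through the read operation, which is what lets the controller learn what to store and retrieve from data.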


Frontiers in Computational Neuroscience | 2016

Toward an Integration of Deep Learning and Neuroscience

Adam Henry Marblestone; Greg Wayne; Konrad P. Körding

Neuroscience has focused on the detailed implementation of computation, studying neural codes, dynamics and circuits. In machine learning, however, artificial neural networks tend to eschew precisely designed codes, dynamics or circuits in favor of brute force optimization of a cost function, often using simple and relatively uniform initial architectures. Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives. First, structured architectures are used, including dedicated systems for attention, recursion and various forms of short- and long-term memory storage. Second, cost functions and training procedures have become more complex and are varied across layers and over time. Here we think about the brain in terms of these ideas. We hypothesize that (1) the brain optimizes cost functions, (2) the cost functions are diverse and differ across brain locations and over development, and (3) optimization operates within a pre-structured architecture matched to the computational problems posed by behavior. In support of these hypotheses, we argue that a range of implementations of credit assignment through multiple layers of neurons are compatible with our current knowledge of neural circuitry, and that the brain's specialized systems can be interpreted as enabling efficient optimization for specific problem classes. Such a heterogeneously optimized system, enabled by a series of interacting cost functions, serves to make learning data-efficient and precisely targeted to the needs of the organism. We suggest directions by which neuroscience could seek to refine and test these hypotheses.
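To make hypothesis (2) concrete, the toy sketch below optimizes two linear "areas" that share a representation but descend different local cost functions: an unsupervised reconstruction cost and a supervised prediction cost. Everything here (the module names, the choice of costs, the data) is an illustrative assumption, not an implementation from the paper.

```python
# Illustrative sketch: heterogeneous, interacting cost functions across modules.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))              # "sensory" input
Y = X @ rng.standard_normal((10, 3))            # behavioural target

W_enc = rng.standard_normal((10, 4)) * 0.1      # shared encoder ("early area")
W_dec = rng.standard_normal((4, 10)) * 0.1      # decoder for the reconstruction cost
W_out = rng.standard_normal((4, 3)) * 0.1       # readout for the prediction cost

lr = 1e-2
for step in range(500):
    H = X @ W_enc
    # Cost 1 (unsupervised): reconstruct the input from the shared code H.
    e_rec = H @ W_dec - X
    # Cost 2 (supervised): predict the behavioural target from the same code.
    e_pred = H @ W_out - Y
    # Each module descends its own local cost; the encoder receives credit
    # from both, illustrating interacting, heterogeneous objectives.
    W_dec -= lr * H.T @ e_rec / len(X)
    W_out -= lr * H.T @ e_pred / len(X)
    W_enc -= lr * X.T @ (e_rec @ W_dec.T + e_pred @ W_out.T) / len(X)

print(np.mean(e_rec**2), np.mean(e_pred**2))    # both costs decrease from their initial values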


Nature | 2018

Vector-based navigation using grid-like representations in artificial agents

Andrea Banino; Caswell Barry; Benigno Uria; Charles Blundell; Timothy P. Lillicrap; Piotr Mirowski; Alexander Pritzel; Martin J. Chadwick; Thomas Degris; Joseph Modayil; Greg Wayne; Hubert Soyer; Fabio Viola; Brian Zhang; Ross Goroshin; Neil C. Rabinowitz; Razvan Pascanu; Charlie Beattie; Stig Petersen; Amir Sadik; Stephen Gaffney; Helen King; Koray Kavukcuoglu; Demis Hassabis; Raia Hadsell; Dharshan Kumaran

Deep neural networks have achieved impressive successes in fields ranging from object recognition to complex games such as Go [1,2]. Navigation, however, remains a substantial challenge for artificial agents, with deep neural networks trained by reinforcement learning [3–5] failing to rival the proficiency of mammalian spatial behaviour, which is underpinned by grid cells in the entorhinal cortex [6]. Grid cells are thought to provide a multi-scale periodic representation that functions as a metric for coding space [7,8] and is critical for integrating self-motion (path integration) [6,7,9] and planning direct trajectories to goals (vector-based navigation) [7,10,11]. Here we set out to leverage the computational functions of grid cells to develop a deep reinforcement learning agent with mammal-like navigational abilities. We first trained a recurrent network to perform path integration, leading to the emergence of representations resembling grid cells, as well as other entorhinal cell types [12]. We then showed that this representation provided an effective basis for an agent to locate goals in challenging, unfamiliar, and changeable environments—optimizing the primary objective of navigation through deep reinforcement learning. The performance of agents endowed with grid-like representations surpassed that of an expert human and comparison agents, with the metric quantities necessary for vector-based navigation derived from grid-like units within the network. Furthermore, grid-like representations enabled agents to conduct shortcut behaviours reminiscent of those performed by mammals. Our findings show that emergent grid-like representations furnish agents with a Euclidean spatial metric and associated vector operations, providing a foundation for proficient navigation. As such, our results support neuroscientific theories that see grid cells as critical for vector-based navigation [7,10,11], demonstrating that the latter can be combined with path-based strategies to support navigation in challenging environments.

Grid-like representations emerge spontaneously within a neural network trained to self-localize, enabling the agent to take shortcuts to destinations using vector-based navigation.
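The two ingredients the abstract names are path integration (accumulating self-motion to track position) and a multi-scale periodic grid code of that position. The toy sketch below hand-codes both just to make the concepts concrete; in the paper the grid-like code emerges inside a trained recurrent network rather than being written down, so the scales, phases, and function names here are purely illustrative assumptions.

```python
# Toy illustration of path integration and a multi-scale periodic "grid code".
# Not the paper's architecture: the grid code is hand-written here, whereas in
# the paper it emerges in a recurrent network trained to self-localize.
import numpy as np

def grid_code(pos: np.ndarray, scales=(0.3, 0.5, 0.8), n_phases: int = 3) -> np.ndarray:
    """Encode a 2-D position as cosines over several spatial scales.

    Each scale contributes n_phases units whose activity is periodic in space,
    mimicking the multi-scale periodicity attributed to entorhinal grid cells.
    """
    angles = [2 * np.pi * k / n_phases for k in range(n_phases)]
    dirs = np.array([[np.cos(a), np.sin(a)] for a in angles])   # projection axes
    code = [np.cos(2 * np.pi * (dirs @ pos) / s) for s in scales]
    return np.concatenate(code)

rng = np.random.default_rng(1)
pos = np.zeros(2)
velocities = 0.05 * rng.standard_normal((200, 2))
for v in velocities:                 # path integration: accumulate self-motion
    pos += v
print("integrated position:", pos)
print("grid code:", np.round(grid_code(pos), 2))
```

A code like this gives positions a consistent metric structure, which is what lets downstream computation read off displacement vectors toward goals.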


arXiv: Neural and Evolutionary Computing | 2014

Neural Turing Machines

Alex Graves; Greg Wayne; Ivo Danihelka


Neural Information Processing Systems | 2015

Learning continuous control policies by stochastic value gradients

Nicolas Heess; Greg Wayne; David Silver; Timothy P. Lillicrap; Yuval Tassa; Tom Erez


arXiv: Artificial Intelligence | 2017

Emergence of Locomotion Behaviours in Rich Environments

Nicolas Heess; Dhruva Tb; Srinivasan Sriram; Jay Lemmon; Josh Merel; Greg Wayne; Yuval Tassa; Tom Erez; Ziyu Wang; S. M. Ali Eslami; Martin A. Riedmiller; David Silver


International Conference on Machine Learning | 2016

Associative long short-term memory

Ivo Danihelka; Greg Wayne; Benigno Uria; Nal Kalchbrenner; Alex Graves


arXiv: Robotics | 2017

Learning human behaviors from motion capture by adversarial imitation

Josh Merel; Yuval Tassa; Dhruva Tb; Sriram Srinivasan; Jay Lemmon; Ziyu Wang; Greg Wayne; Nicolas Heess


arXiv: Learning | 2017

Generative Temporal Models with Memory

Mevlana Gemici; Chia-Chun Hung; Adam Santoro; Greg Wayne; Shakir Mohamed; Danilo Jimenez Rezende; David Amos; Timothy P. Lillicrap


Behavioral and Brain Sciences | 2017

Building machines that learn and think for themselves

Matthew Botvinick; David G. T. Barrett; Peter Battaglia; Nando de Freitas; Dharshan Kumaran; Joel Z. Leibo; Timothy P. Lillicrap; Joseph Modayil; Shakir Mohamed; Neil C. Rabinowitz; Danilo Jimenez Rezende; Adam Santoro; Tom Schaul; Christopher Summerfield; Greg Wayne; Theophane Weber; Daan Wierstra; Shane Legg; Demis Hassabis

Collaboration


Dive into Greg Wayne's collaborations.

Top Co-Authors

Demis Hassabis
University College London

Joel Z. Leibo
Massachusetts Institute of Technology