Publications


Featured research published by Joel Z. Leibo.


Journal of Neurophysiology | 2014

The dynamics of invariant object recognition in the human visual system

Leyla Isik; Ethan Meyers; Joel Z. Leibo; Tomaso Poggio

The human visual system can rapidly recognize objects despite transformations that alter their appearance. Precisely when the brain computes neural representations that are invariant to particular transformations, however, has not been mapped in humans. Here we employ magnetoencephalography (MEG) decoding analysis to measure the dynamics of size- and position-invariant visual information in the ventral visual stream. With this method we can read out the identity of objects beginning as early as 60 ms. Size- and position-invariant visual information appear around 125 ms and 150 ms, respectively, and both develop in stages, with invariance to smaller transformations arising before invariance to larger transformations. Additionally, the MEG sensor activity localizes to neural sources that lie in the most posterior occipital regions at the early decoding times and then shift toward temporal regions as invariant information develops. These results provide previously unknown latencies for key stages of invariant object recognition in humans, as well as new and compelling evidence for a feed-forward hierarchical model of invariant object recognition in which invariance increases at each successive visual area along the ventral stream.
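
As a rough illustration of the time-resolved decoding approach described above, the Python sketch below trains and cross-validates a separate classifier at each timepoint and reports when object identity first becomes readable. The synthetic data, array shapes, injected signal, and 60% threshold are all illustrative assumptions, not the study's actual pipeline, which used real MEG recordings.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials, n_sensors, n_times = 200, 306, 100   # shapes are illustrative
    X = rng.standard_normal((n_trials, n_sensors, n_times))
    y = rng.integers(0, 2, n_trials)               # two object identities

    # Inject a weak identity signal into a few sensors from timepoint 30 on,
    # mimicking information that becomes decodable at some latency.
    X[y == 1, :10, 30:] += 0.4

    # Train and cross-validate a separate classifier at every timepoint.
    accuracy = np.array([
        cross_val_score(LogisticRegression(max_iter=1000),
                        X[:, :, t], y, cv=5).mean()
        for t in range(n_times)
    ])

    onset = int(np.argmax(accuracy > 0.6))  # first reliably decodable timepoint
    print(f"identity first decodable at timepoint {onset}")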


Frontiers in Computational Neuroscience | 2012

Learning and disrupting invariance in visual recognition with a temporal association rule

Leyla Isik; Joel Z. Leibo; Tomaso Poggio

Learning by temporal association rules such as Földiák's trace rule is an attractive hypothesis that explains the development of invariance in visual recognition. Consistent with these rules, several recent experiments have shown that invariance can be broken at both the psychophysical and single-cell levels. We show (1) that temporal association learning provides appropriate invariance in models of object recognition inspired by the visual cortex, (2) that we can replicate the “invariance disruption” experiments using these models with a temporal association learning rule to develop and maintain invariance, and (3) that despite dramatic single-cell effects, a population of cells is very robust to these disruptions. We argue that these models account for the stability of perceptual invariance despite the underlying plasticity of the system, the variability of the visual world, and the expected noise in biological mechanisms.
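
As a minimal sketch of how a trace rule of this kind works, the toy model below sweeps a single "object" across a one-dimensional input array. Because the Hebbian update uses a low-pass-filtered trace of the response rather than the instantaneous response, inputs that occur in close temporal succession become wired to the same unit, which then responds similarly at every position. The input encoding, constants, and normalization step are illustrative assumptions, not the paper's cortical model.

    import numpy as np

    rng = np.random.default_rng(0)
    n_inputs, eta, delta = 50, 0.05, 0.2   # input size, learning rate, trace decay

    w = rng.random(n_inputs) * 0.1         # small positive initial weights
    trace = 0.0

    def pattern(pos):
        """Toy stimulus: the same object at position `pos` on a 1-D 'retina'."""
        x = np.zeros(n_inputs)
        x[pos:pos + 5] = 1.0
        return x

    for _ in range(200):                   # repeated sweeps of the object
        trace = 0.0
        for pos in range(40):              # object translates across the input
            x = pattern(pos)
            y = max(0.0, float(w @ x))     # rectified linear response
            trace = (1 - delta) * trace + delta * y
            w += eta * trace * x           # Hebb with the *trace*, not y itself
            w /= np.linalg.norm(w)         # crude normalization keeps w bounded

    # The unit now responds nearly equally at distant positions: invariance.
    print([round(float(w @ pattern(p)), 3) for p in (0, 15, 30)])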


Machine Learning for Computer Vision | 2013

Throwing Down the Visual Intelligence Gauntlet

Cheston Tan; Joel Z. Leibo; Tomaso Poggio

In recent years, scientific and technological advances have produced artificial systems that match or surpass human capabilities in narrow domains such as face detection and optical character recognition. The problem of producing truly intelligent machines, however, remains far from solved. In this chapter, we first describe some of these recent advances, and then review one approach to moving beyond these limited successes: the neuromorphic approach of studying and reverse-engineering the networks of neurons in the human brain (specifically, the visual system). Finally, we discuss several possible future directions in the quest for visual intelligence.


Nature Neuroscience | 2018

Prefrontal cortex as a meta-reinforcement learning system

Jane X. Wang; Zeb Kurth-Nelson; Dharshan Kumaran; Dhruva Tirumala; Hubert Soyer; Joel Z. Leibo; Demis Hassabis; Matthew M. Botvinick

Over the past 20 years, neuroscience research on reward-based learning has converged on a canonical model, under which the neurotransmitter dopamine ‘stamps in’ associations between situations, actions and rewards by modulating the strength of synaptic connections between neurons. However, a growing number of recent findings have placed this standard model under strain. We now draw on recent advances in artificial intelligence to introduce a new theory of reward-based learning. On this theory, the dopamine system trains another part of the brain, the prefrontal cortex, to operate as its own free-standing learning system. This new perspective accommodates the findings that motivated the standard model, but also deals gracefully with a wider range of observations, providing a fresh foundation for future research.

Humans and other mammals are prodigious learners, partly because they also ‘learn how to learn’. Wang and colleagues present a new theory showing how learning to learn may arise from interactions between prefrontal cortex and the dopamine system.
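
The sketch below illustrates the two-timescale idea on a family of two-armed bandit tasks, assuming a toy PyTorch setup rather than the paper's architecture: a slow REINFORCE outer loop (standing in for dopamine-driven synaptic change) adjusts the weights of a recurrent network (standing in for prefrontal cortex), and once the weights are frozen, any remaining adaptation to a fresh bandit happens purely in the network's activity dynamics.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    HIDDEN = 32

    class PFC(nn.Module):
        """Recurrent 'prefrontal' net fed its own previous action and reward."""
        def __init__(self):
            super().__init__()
            self.rnn = nn.GRUCell(3, HIDDEN)   # in: one-hot action (2) + reward (1)
            self.policy = nn.Linear(HIDDEN, 2)

        def forward(self, x, h):
            h = self.rnn(x, h)
            return torch.distributions.Categorical(logits=self.policy(h)), h

    net = PFC()
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    def episode(p_good, steps=50, learn=True):
        """One bandit: arm 0 pays with prob p_good, arm 1 with 1 - p_good."""
        h, x = torch.zeros(1, HIDDEN), torch.zeros(1, 3)
        logps, rewards = [], []
        for _ in range(steps):
            dist, h = net(x, h)
            a = dist.sample()
            p = p_good if a.item() == 0 else 1 - p_good
            r = float(torch.rand(()).item() < p)
            logps.append(dist.log_prob(a))
            rewards.append(r)
            x = torch.zeros(1, 3)
            x[0, a.item()] = 1.0               # feed back previous action...
            x[0, 2] = r                        # ...and previous reward
        if learn:                              # slow 'dopamine' loop: REINFORCE
            returns = torch.tensor(rewards).flip(0).cumsum(0).flip(0)
            loss = -(torch.stack(logps).squeeze(-1) * returns).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
        return sum(rewards) / steps

    for _ in range(2000):                      # outer loop over sampled tasks
        episode(p_good=torch.rand(()).item())

    # Weights frozen: any remaining adaptation lives in the recurrent state.
    with torch.no_grad():
        print(episode(p_good=0.9, learn=False))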


Archive | 2017

Invariant Recognition Predicts Tuning of Neurons in Sensory Cortex

Jim Mutch; Fabio Anselmi; Andrea Tacchetti; Lorenzo Rosasco; Joel Z. Leibo; Tomaso Poggio

Tuning properties of simple cells in cortical V1 can be described in terms of a “universal shape” characterized quantitatively by parameter values that hold across different species (Jones and Palmer 1987; Ringach 2002; Niell and Stryker 2008). This puzzling set of findings calls for a general explanation grounded in an evolutionarily important computational function of the visual cortex. We show here that these properties are quantitatively predicted by the hypothesis that the goal of the ventral stream is to compute, for each image, a “signature” vector that is invariant to geometric transformations (Anselmi et al. 2013b). The mechanism for continuously learning and maintaining invariance may be the memory storage of a sequence of neural images of a few (arbitrary) objects via Hebbian synapses, while the objects undergo transformations such as translation, scale changes and rotation. For V1 simple cells this hypothesis implies that the tuning of neurons converges to the eigenvectors of the covariance of their input. Starting with a set of dendritic fields spanning a range of sizes, we show with simulations, guided by a direct analysis, that the solution of the associated “cortical equation” effectively provides a set of Gabor-like shapes with parameter values that quantitatively agree with the physiology data. The same theory yields predictions about the tuning of cells in V4 and in the face patch AL (Leibo et al. 2013a) that are in qualitative agreement with physiology data.
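
The eigenvector claim can be seen in miniature with Oja's rule, a Hebbian update with implicit normalization whose fixed point is the leading eigenvector of the input covariance. The sketch below checks this on generic Gaussian inputs; the dimensions and learning rate are illustrative assumptions, and the chapter's actual analysis concerns transformed natural images rather than this toy distribution.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 20

    A = rng.standard_normal((n, n))
    C = A @ A.T / n                    # a fixed input covariance
    L = np.linalg.cholesky(C)          # to draw samples with covariance C

    w = rng.standard_normal(n)
    w /= np.linalg.norm(w)
    eta = 0.01

    for _ in range(50000):
        x = L @ rng.standard_normal(n)  # input sample with cov(x) = C
        y = w @ x                       # linear neuron response
        w += eta * y * (x - y * w)      # Oja's rule: Hebb + normalization

    vals, vecs = np.linalg.eigh(C)      # direct eigendecomposition to compare
    print(abs(w @ vecs[:, -1]))         # ~1.0: w aligned with top eigenvector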


arXiv: Computer Vision and Pattern Recognition | 2013

Unsupervised Learning of Invariant Representations in Hierarchical Architectures

Fabio Anselmi; Joel Z. Leibo; Lorenzo Rosasco; Jim Mutch; Andrea Tacchetti; Tomaso Poggio


Autonomous Agents and Multi-Agent Systems | 2017

Multi-agent Reinforcement Learning in Sequential Social Dilemmas

Joel Z. Leibo; Vinícius Flores Zambaldi; Marc Lanctot; Janusz Marecki; Thore Graepel


arXiv: Machine Learning | 2016

Model-Free Episodic Control

Charles Blundell; Benigno Uria; Alexander Pritzel; Yazhe Li; Avraham Ruderman; Joel Z. Leibo; Jack W. Rae; Daan Wierstra; Demis Hassabis


Theoretical Computer Science | 2016

Unsupervised learning of invariant representations

Fabio Anselmi; Joel Z. Leibo; Lorenzo Rosasco; James Vincent Mutch; Andrea Tacchetti; Tomaso Poggio


National Conference on Artificial Intelligence | 2018

Deep Q-learning from Demonstrations

Todd Hester; Matej Vecerik; Olivier Pietquin; Marc Lanctot; Tom Schaul; Dan Horgan; John Quan; Andrew Sendonaris; Gabriel Dulac-Arnold; Ian Osband; John Agapiou; Joel Z. Leibo; Audrunas Gruslys

Collaboration


Dive into Joel Z. Leibo's collaborations.

Top Co-Authors

Tomaso Poggio, Massachusetts Institute of Technology
Jim Mutch, Massachusetts Institute of Technology
Lorenzo Rosasco, Massachusetts Institute of Technology
Qianli Liao, McGovern Institute for Brain Research
Karl Tuyls, University of Liverpool
Leyla Isik, Massachusetts Institute of Technology
Fabio Anselmi, Istituto Italiano di Tecnologia
Andrea Tacchetti, Massachusetts Institute of Technology