
Publications


Featured research published by S. M. Ali Eslami.


Computer Vision and Pattern Recognition | 2012

The Shape Boltzmann Machine: A strong model of object shape

S. M. Ali Eslami; Nicolas Heess; John Winn

A good model of object shape is essential in applications such as segmentation, object detection, inpainting and graphics. For example, when performing segmentation, local constraints on the shape can help where the object boundary is noisy or unclear, and global constraints can resolve ambiguities where background clutter looks similar to part of the object. In general, the stronger the model of shape, the more performance is improved. In this paper, we use a type of Deep Boltzmann Machine [22] that we call a Shape Boltzmann Machine (ShapeBM) for the task of modeling binary shape images. We show that the ShapeBM characterizes a strong model of shape, in that samples from the model look realistic and it can generalize to generate samples that differ from training examples. We find that the ShapeBM learns distributions that are qualitatively and quantitatively better than existing models for this task.
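The abstract above describes sampling binary shape images from a deep, layered Boltzmann machine. As a rough illustration of that idea, here is a minimal toy sketch of block Gibbs sampling in a two-hidden-layer binary Boltzmann machine. It is not the paper's ShapeBM architecture (which uses local receptive fields and weight sharing, and trained parameters); the dimensions, random weights, and sweep count below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy dimensions: 6x6 binary shape images, two hidden layers.
n_vis, n_h1, n_h2 = 36, 16, 8

# Randomly initialized weights stand in for trained parameters.
W1 = rng.normal(0, 0.1, (n_vis, n_h1))
W2 = rng.normal(0, 0.1, (n_h1, n_h2))

def gibbs_sweep(v):
    """One block-Gibbs sweep over the layers: v -> h1 -> h2 -> h1 -> v.
    In a deep Boltzmann machine the middle layer is conditioned on
    both its neighbours, so h1 is resampled given v AND h2."""
    h1 = (rng.random(n_h1) < sigmoid(v @ W1)).astype(float)
    h2 = (rng.random(n_h2) < sigmoid(h1 @ W2)).astype(float)
    h1 = (rng.random(n_h1) < sigmoid(v @ W1 + h2 @ W2.T)).astype(float)
    v = (rng.random(n_vis) < sigmoid(h1 @ W1.T)).astype(float)
    return v

# Start from a random binary image and run the chain.
v = rng.integers(0, 2, n_vis).astype(float)
for _ in range(50):
    v = gibbs_sweep(v)

sample = v.reshape(6, 6)  # a sampled binary "shape" image
```

With trained weights, repeated sweeps like this produce the realistic, generalizing shape samples the paper evaluates.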


Science | 2018

Neural scene representation and rendering

S. M. Ali Eslami; Danilo Jimenez Rezende; Frederic Besse; Fabio Viola; Ari S. Morcos; Marta Garnelo; Avraham Ruderman; Andrei A. Rusu; Ivo Danihelka; Karol Gregor; David P. Reichert; Lars Buesing; Theophane Weber; Oriol Vinyals; Dan Rosenbaum; Neil C. Rabinowitz; Helen King; Chloe Hillier; Matt Botvinick; Daan Wierstra; Koray Kavukcuoglu; Demis Hassabis

A scene-internalizing computer program

To train a computer to “recognize” elements of a scene supplied by its visual sensors, computer scientists typically use millions of images painstakingly labeled by humans. Eslami et al. developed an artificial vision system, dubbed the Generative Query Network (GQN), that has no need for such labeled data. Instead, the GQN first uses images taken from different viewpoints and creates an abstract description of the scene, learning its essentials. Next, on the basis of this representation, the network predicts what the scene would look like from a new, arbitrary viewpoint. Science, this issue p. 1204

A computer vision system predicts how a 3D scene looks from any viewpoint after just a few 2D views from other viewpoints.

Scene representation—the process of converting visual sensory data into concise descriptions—is a requirement for intelligent behavior. Recent work has shown that neural networks excel at this task when provided with large, labeled datasets. However, removing the reliance on human labeling remains an important open problem. To this end, we introduce the Generative Query Network (GQN), a framework within which machines learn to represent scenes using only their own sensors. The GQN takes as input images of a scene taken from different viewpoints, constructs an internal representation, and uses this representation to predict the appearance of that scene from previously unobserved viewpoints. The GQN demonstrates representation learning without human labels or domain knowledge, paving the way toward machines that autonomously learn to understand the world around them.
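The GQN data flow described in the abstract — encode each (image, viewpoint) pair, aggregate into a scene representation, then render from a query viewpoint — can be sketched as follows. This is only a structural toy: the linear maps below stand in for the paper's convolutional representation network and recurrent generator, and all shapes and parameters are illustrative assumptions. One detail it does capture: GQN aggregates per-view codes by summation, so the scene representation is invariant to the order of observations.

```python
import numpy as np

rng = np.random.default_rng(1)

REPR_DIM = 32  # size of the per-view and scene representation

def represent(image, viewpoint, W):
    """Encode one (image, viewpoint) observation into a vector.
    A linear stand-in for the convolutional representation network."""
    x = np.concatenate([image.ravel(), viewpoint])
    return W @ x

def aggregate(view_codes):
    """Sum the per-view codes into a single scene representation;
    summation makes the result order-invariant."""
    return np.sum(view_codes, axis=0)

def render(scene_code, query_viewpoint, G):
    """Predict the image seen from a new viewpoint, given the scene
    code. A linear stand-in for the recurrent generator."""
    x = np.concatenate([scene_code, query_viewpoint])
    return (G @ x).reshape(4, 4)

# Toy data: three 4x4 grayscale views of a scene, each tagged with a
# 3-D camera coordinate.
W = rng.normal(0, 0.1, (REPR_DIM, 16 + 3))
G = rng.normal(0, 0.1, (16, REPR_DIM + 3))
views = [(rng.random((4, 4)), rng.random(3)) for _ in range(3)]

scene_code = aggregate([represent(img, vp, W) for img, vp in views])
prediction = render(scene_code, np.array([0.5, 0.1, 0.9]), G)
```

In the paper both networks are trained end to end so that `prediction` matches the true image at the query viewpoint, which is what forces `scene_code` to capture the scene's contents.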


Neural Information Processing Systems | 2016

Unsupervised Learning of 3D Structure from Images

Danilo Jimenez Rezende; S. M. Ali Eslami; Shakir Mohamed; Peter Battaglia; Max Jaderberg; Nicolas Heess


arXiv: Artificial Intelligence | 2017

Emergence of Locomotion Behaviours in Rich Environments

Nicolas Heess; Dhruva Tb; Srinivasan Sriram; Jay Lemmon; Josh Merel; Greg Wayne; Yuval Tassa; Tom Erez; Ziyu Wang; S. M. Ali Eslami; Martin A. Riedmiller; David Silver


International Conference on Machine Learning | 2018

Synthesizing Programs for Images using Reinforced Adversarial Learning

Iaroslav Ganin; Tejas D. Kulkarni; Igor Babuschkin; S. M. Ali Eslami; Oriol Vinyals


Neural Information Processing Systems | 2014

Just-In-Time Learning for Fast and Flexible Inference

S. M. Ali Eslami; Daniel Tarlow; Pushmeet Kohli; John Winn


International Conference on Machine Learning | 2018

Conditional Neural Processes

Marta Garnelo; Dan Rosenbaum; Chris J. Maddison; Tiago Ramalho; David Saxton; Murray Shanahan; Yee Whye Teh; Danilo Jimenez Rezende; S. M. Ali Eslami


arXiv: Learning | 2018

Learning and Querying Fast Generative Models for Reinforcement Learning

Lars Buesing; Theophane Weber; Sébastien Racanière; S. M. Ali Eslami; Danilo Jimenez Rezende; David P. Reichert; Fabio Viola; Frederic Besse; Karol Gregor; Demis Hassabis; Daan Wierstra


International Conference on Machine Learning | 2018

Machine Theory of Mind

Neil C. Rabinowitz; Frank Perbet; H. Francis Song; Chiyuan Zhang; S. M. Ali Eslami; Matthew Botvinick


arXiv: Learning | 2018

Neural Processes

Marta Garnelo; Jonathan Schwarz; Dan Rosenbaum; Fabio Viola; Danilo Jimenez Rezende; S. M. Ali Eslami; Yee Whye Teh

Collaboration


Dive into S. M. Ali Eslami's collaboration.

Top Co-Authors

Dan Rosenbaum

Hebrew University of Jerusalem