Publications


Featured research published by Tomer Ullman.


Behavioral and Brain Sciences | 2017

Building Machines That Learn and Think Like People

Brenden M. Lake; Tomer Ullman; Joshua B. Tenenbaum; Samuel J. Gershman

Recent progress in artificial intelligence has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats that of humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn and how they learn it. Specifically, we argue that these machines should (1) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (2) ground learning in intuitive theories of physics and psychology to support and enrich the knowledge that is learned; and (3) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes toward these goals that can combine the strengths of recent neural network advances with more structured cognitive models.


Psychological Review | 2011

Learning a theory of causality

Noah D. Goodman; Tomer Ullman; Joshua B. Tenenbaum

The very early appearance of abstract knowledge is often taken as evidence for innateness. We explore the relative learning speeds of abstract and specific knowledge within a Bayesian framework and the role for innate structure. We focus on knowledge about causality, seen as a domain-general intuitive theory, and ask whether this knowledge can be learned from co-occurrence of events. We begin by phrasing the causal Bayes nets theory of causality and a range of alternatives in a logical language for relational theories. This allows us to explore simultaneous inductive learning of an abstract theory of causality and a causal model for each of several causal systems. We find that the correct theory of causality can be learned relatively quickly, often becoming available before specific causal theories have been learned, an effect we term the blessing of abstraction. We then explore the effect of providing a variety of auxiliary evidence and find that a collection of simple perceptual input analyzers can help to bootstrap abstract knowledge. Together, these results suggest that the most efficient route to causal knowledge may be to build in not an abstract notion of causality but a powerful inductive learning mechanism and a variety of perceptual supports. While these results are purely computational, they have implications for cognitive development, which we explore in the conclusion.
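
The "blessing of abstraction" can be illustrated with a toy hierarchical estimate. This is a hypothetical sketch, not the paper's logical-theory model: `true_mean` plays the role of abstract knowledge shared across causal systems, and every number below is invented for the demo.

```python
import random

# Toy setup: many causal systems share an abstract parameter (true_mean),
# but each system contributes only a little data of its own.
rng = random.Random(1)
true_mean = 0.8            # abstract parameter: typical bias across systems
systems = 50
flips_per_system = 2

data = []
for _ in range(systems):
    # Each system's specific parameter is drawn near the abstract one.
    theta = min(1.0, max(0.0, rng.gauss(true_mean, 0.05)))
    data.append([1 if rng.random() < theta else 0 for _ in range(flips_per_system)])

# The abstract estimate pools all 100 observations...
abstract_est = sum(sum(d) for d in data) / (systems * flips_per_system)
# ...while each system-specific estimate rests on only 2 observations.
specific_ests = [sum(d) / flips_per_system for d in data]

abstract_err = abs(abstract_est - true_mean)
avg_specific_err = sum(abs(e - true_mean) for e in specific_ests) / systems
```

Because the abstract estimate pools evidence across all systems, it converges while each system-specific estimate is still unreliable, which is the qualitative effect the paper terms the blessing of abstraction.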


Science | 2017

Ten-month-old infants infer the value of goals from the costs of actions

Shari Liu; Tomer Ullman; Joshua B. Tenenbaum; Elizabeth S. Spelke

Editor's summary: Ranking valuations on the basis of observed choices. Obliged to make a choice between two goals, we evaluate the benefits of achieving the goals compared with the costs of the actions required before deciding what to do. This seems perfectly straightforward, and it is unsurprising to learn that we can also apply this reasoning to others; that is, someone we see choosing a goal that requires a more costly action must value that goal more highly. What is remarkable, as Liu et al. report, is that preverbal children can reason in this same fashion. (Science, this issue p. 1038)

Abstract: Infants can assess how worthwhile or valuable an object or goal may be from others' behaviors in achieving or acquiring it. Infants understand that people pursue goals, but how do they learn which goals people prefer? We tested whether infants solve this problem by inverting a mental model of action planning, trading off the costs of acting against the rewards actions bring. After seeing an agent attain two goals equally often at varying costs, infants expected the agent to prefer the goal it attained through costlier actions. These expectations held across three experiments that conveyed cost through different physical path features (height, width, and incline angle), suggesting that an abstract variable, such as "force," "work," or "effort," supported infants' inferences. We modeled infants' expectations as Bayesian inferences over utility-theoretic calculations, providing a bridge to recent quantitative accounts of action understanding in older children and adults.
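
The inversion the authors describe (inferring rewards from choices, given costs) can be sketched as a toy Bayesian inference. This is an illustrative reconstruction, not the paper's model: the softmax choice rule, the cost numbers, and the discrete reward grid are all assumptions made for the demo.

```python
import itertools
import math

def choice_prob(rewards, costs, chosen, beta=1.0):
    """P(agent picks `chosen`) under a softmax over utility = reward - cost."""
    utilities = [r - c for r, c in zip(rewards, costs)]
    exps = [math.exp(beta * u) for u in utilities]
    return exps[chosen] / sum(exps)

def posterior(observations, reward_grid):
    """Grid posterior over (reward_A, reward_B) given (costs, choice) pairs."""
    unnorm = {}
    for rA, rB in itertools.product(reward_grid, repeat=2):
        lik = 1.0
        for costs, chosen in observations:
            lik *= choice_prob((rA, rB), costs, chosen)
        unnorm[(rA, rB)] = lik
    z = sum(unnorm.values())
    return {k: v / z for k, v in unnorm.items()}

# The agent attains goal A (costly path) and goal B (cheap path) equally often.
obs = [((2.0, 0.5), 0), ((2.0, 0.5), 1)] * 3
post = posterior(obs, reward_grid=[0.0, 1.0, 2.0, 3.0, 4.0])

# Posterior probability that the costlier goal A is valued more than goal B.
p_A_valued_more = sum(p for (rA, rB), p in post.items() if rA > rB)
```

The posterior puts most of its mass on reward assignments where goal A, chosen despite its higher cost, is valued more, mirroring the inference the experiments attribute to infants.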


International Conference on Development and Learning | 2012

Sticking to the Evidence? A computational and behavioral case study of micro-theory change in the domain of magnetism

Elizabeth Bonawitz; Tomer Ullman; Alison Gopnik; Joshua B. Tenenbaum

An intuitive theory is a system of abstract concepts and laws relating those concepts that together provide a framework for explaining some domain of phenomena. Constructing an intuitive theory based on observing the world, as in building a scientific theory from data, confronts learners with a "chicken-and-egg" problem: the laws can only be expressed in terms of the theory's core concepts, but these concepts are only meaningful in terms of the role they play in the theory's laws; how is a learner to discover appropriate concepts and laws simultaneously, knowing neither to begin with? Even knowing the number of categories in a theory does not resolve this problem: without knowing how individuals should be sorted (which categories each belongs to), the causal relationships between categories cannot be resolved. We explore how children can solve this chicken-and-egg problem in the domain of magnetism, drawing on perspectives from history of science, computational modeling, and behavioral experiments. We present preschoolers with a simplified magnet learning task and show how our empirical results can be explained as rational inferences within a Bayesian computational framework.


Cognitive Psychology | 2018

Learning physical parameters from dynamic scenes

Tomer Ullman; Andreas Stuhlmüller; Noah D. Goodman; Joshua B. Tenenbaum

Humans acquire their most basic physical concepts early in development, and continue to enrich and expand their intuitive physics throughout life as they are exposed to more and varied dynamical environments. We introduce a hierarchical Bayesian framework to explain how people can learn physical parameters at multiple levels. In contrast to previous Bayesian models of theory acquisition (Tenenbaum, Kemp, Griffiths, & Goodman, 2011), we work with more expressive probabilistic program representations suitable for learning the forces and properties that govern how objects interact in dynamic scenes unfolding over time. We compare our model to human learners on a challenging task of estimating multiple physical parameters in novel microworlds given short movies. This task requires people to reason simultaneously about multiple interacting physical laws and properties. People are generally able to learn in this setting and are consistent in their judgments. Yet they also make systematic errors indicative of the approximations people might make in solving this computationally demanding problem with limited computational resources. We propose two approximations that complement the top-down Bayesian approach. One approximation model relies on a more bottom-up feature-based inference scheme. The second approximation combines the strengths of the bottom-up and top-down approaches, by taking the feature-based inference as its point of departure for a search in physical-parameter space.
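
The estimation task can be sketched, in a much-simplified form, as grid-based Bayesian inference of a single physical parameter from a short noisy trajectory. This is a minimal illustration under assumed dynamics (uniform acceleration) and Gaussian observation noise, not the paper's probabilistic-program framework; every constant below is hypothetical.

```python
import math
import random

def simulate(accel, times, noise_sd, rng):
    """Noisy positions of an object starting at rest under constant acceleration."""
    return [0.5 * accel * t * t + rng.gauss(0.0, noise_sd) for t in times]

def log_likelihood(accel, times, observed, noise_sd):
    """Gaussian log-likelihood of the observed positions under one hypothesis."""
    ll = 0.0
    for t, y in zip(times, observed):
        pred = 0.5 * accel * t * t
        ll += -0.5 * ((y - pred) / noise_sd) ** 2 \
              - math.log(noise_sd * math.sqrt(2 * math.pi))
    return ll

rng = random.Random(0)
times = [0.1 * i for i in range(1, 21)]      # a 2-second "movie", 10 frames/s
observed = simulate(9.8, times, noise_sd=0.3, rng=rng)

grid = [5.0 + 0.5 * k for k in range(21)]    # hypotheses: 5.0 .. 15.0
logs = [log_likelihood(a, times, observed, 0.3) for a in grid]
m = max(logs)
weights = [math.exp(l - m) for l in logs]    # subtract max for numerical stability
z = sum(weights)
posterior = [w / z for w in weights]
best = grid[posterior.index(max(posterior))]  # MAP estimate of the acceleration
```

A flat prior over the grid makes the posterior proportional to the likelihood; the MAP estimate lands near the generating value, while the paper's harder setting involves several interacting parameters inferred jointly.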


Behavioral and Brain Sciences | 2017

Ingredients of intelligence: From classic debates to an engineering roadmap

Brenden M. Lake; Tomer Ullman; Joshua B. Tenenbaum; Samuel J. Gershman

We were encouraged by the broad enthusiasm for building machines that learn and think in more human-like ways. Many commentators saw our set of key ingredients as helpful, but there was disagreement regarding the origin and structure of those ingredients. Our response covers three main dimensions of this disagreement: nature versus nurture, coherent theories versus theory fragments, and symbolic versus sub-symbolic representations. These dimensions align with classic debates in artificial intelligence and cognitive science, although, rather than embracing these debates, we emphasize ways of moving beyond them. Several commentators saw our set of key ingredients as incomplete and offered a wide range of additions. We agree that these additional ingredients are important in the long run and discuss prospects for incorporating them. Finally, we consider some of the ethical questions raised regarding the research program as a whole.


Developmental Science | 2013

The mentalistic basis of core social cognition: experiments in preverbal infants and a computational model

J. Kiley Hamlin; Tomer Ullman; Josh Tenenbaum; Noah D. Goodman; Chris I. Baker


Neural Information Processing Systems | 2009

Help or Hinder: Bayesian Models of Social Goal Inference

Tomer Ullman; Chris L. Baker; Owen Macindoe; Owain Evans; Noah D. Goodman; Joshua B. Tenenbaum


International Conference on Learning Representations | 2017

A Compositional Object-Based Approach to Learning Physical Dynamics

Michael Chang; Tomer Ullman; Antonio Torralba; Joshua B. Tenenbaum


Cognitive Development | 2012

Theory learning as stochastic search in the language of thought

Tomer Ullman; Noah D. Goodman; Joshua B. Tenenbaum

Collaboration


Dive into Tomer Ullman's collaborations.

Top Co-Authors

Joshua B. Tenenbaum
Massachusetts Institute of Technology

Josh Tenenbaum
Massachusetts Institute of Technology

Tobias Gerstenberg
Massachusetts Institute of Technology

Andreas Stuhlmüller
Massachusetts Institute of Technology

Peter Battaglia
Massachusetts Institute of Technology