
Publication


Featured research published by Noah D. Goodman.


Science | 2011

How to Grow a Mind: Statistics, Structure, and Abstraction

Joshua B. Tenenbaum; Charles Kemp; Thomas L. Griffiths; Noah D. Goodman

In coming to understand the world—in learning concepts, acquiring language, and grasping causal relations—our minds make inferences that appear to go far beyond the data available. How do we do it? This review describes recent approaches to reverse-engineering human learning and cognitive development and, in parallel, engineering more humanlike machine learning systems. Computational models that perform probabilistic inference over hierarchies of flexibly structured representations can address some of the deepest questions about the nature and origins of human thought: How does abstract knowledge guide learning and reasoning from sparse data? What forms does our knowledge take, across different domains and tasks? And how is that abstract knowledge itself acquired?
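The hierarchical Bayesian inference the review describes can be illustrated with a minimal "overhypothesis" example: a learner who observes a few monochrome bags of marbles, infers the abstract regularity that bags tend to be uniform in color, and then generalizes confidently from a single marble in a new bag. Everything specific below (the two hypotheses, the marble-bag setup, the numbers) is an invented toy illustration, not a model from the paper.

```python
# Toy sketch of abstract knowledge guiding learning from sparse data.
# H_uniform: each bag is monochrome, its color drawn uniformly from n_colors.
# H_mixed: every marble's color is independent and uniform.

def posterior_uniform_bags(n_bags, marbles_per_bag, n_colors=2):
    """Posterior that 'bags are monochrome' after seeing n_bags bags,
    each containing marbles_per_bag same-colored marbles (uniform prior)."""
    like_uniform = (1 / n_colors) ** n_bags                    # one color choice per bag
    like_mixed = (1 / n_colors) ** (n_bags * marbles_per_bag)  # one choice per marble
    return like_uniform / (like_uniform + like_mixed)

def predict_same_color(p_uniform, n_colors=2):
    """After one marble from a brand-new bag: probability the next marble
    from that bag matches its color."""
    return p_uniform * 1.0 + (1 - p_uniform) * (1 / n_colors)

p = posterior_uniform_bags(n_bags=3, marbles_per_bag=3)  # ~0.985
# Having abstracted "bags are monochrome", a single marble from a new bag
# licenses a confident prediction about the whole bag:
predict_same_color(p)                                    # ~0.992
```

The point of the sketch is that the abstract hypothesis becomes near-certain from just nine observations, and it is this abstract knowledge, not the sparse data about the new bag itself, that drives the strong generalization.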


Cognition | 2011

The Double-edged Sword of Pedagogy: Instruction limits spontaneous exploration and discovery

Elizabeth Bonawitz; Patrick Shafto; Hyowon Gweon; Noah D. Goodman; Elizabeth S. Spelke; Laura Schulz

Motivated by computational analyses, we look at how teaching affects exploration and discovery. In Experiment 1, we investigated children's exploratory play after an adult pedagogically demonstrated a function of a toy, after an interrupted pedagogical demonstration, after a naïve adult demonstrated the function, and at baseline. Preschoolers in the pedagogical condition focused almost exclusively on the target function; by contrast, children in the other conditions explored broadly. In Experiment 2, we show that children restrict their exploration both after direct instruction to themselves and after overhearing direct instruction given to another child; they do not show this constraint after observing direct instruction given to an adult or after observing a non-pedagogical intentional action. We discuss these findings as the result of rational inductive biases. In pedagogical contexts, a teacher's failure to provide evidence for additional functions provides evidence for their absence; such contexts generalize from child to child (because children are likely to have comparable states of knowledge) but not from adult to child. Thus, pedagogy promotes efficient learning but at a cost: children are less likely to perform potentially irrelevant actions but also less likely to discover novel information.


Science | 2012

Predicting Pragmatic Reasoning in Language Games

Michael C. Frank; Noah D. Goodman

Who's Who: Different languages rely on distinct sets of terminology to classify relatives, such as maternal grandfather in English, and precision in language usage is a key component of successful communication (see the Perspective by Levinson). Kemp and Regier (p. 1049) propose an organizing framework whereby kinship classification systems can all be seen to optimize, or nearly optimize, both simplicity and precision. The labels applied to kin are constructed from simple units and are precise enough to reduce confusion and ambiguity in communication. Frank and Goodman (p. 998) show that simplicity and precision also explain how listeners correctly infer the meaning of speech in the context of referential communication. A Bayesian inference model predicts how listeners decode communications.

One of the most astonishing features of human language is its capacity to convey information efficiently in context. Many theories provide informal accounts of communicative inference, yet there have been few successes in making precise, quantitative predictions about pragmatic reasoning. We examined judgments about simple referential communication games, modeling behavior in these games by assuming that speakers attempt to be informative and that listeners use Bayesian inference to recover speakers' intended referents. Our model provides a close, parameter-free fit to human judgments, suggesting that the use of information-theoretic tools to predict pragmatic reasoning may lead to more effective formal models of communication.
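The speaker-listener recursion behind this kind of model can be sketched in a few lines. The objects and one-word lexicon below are a toy reconstruction in the spirit of the paper's reference games; the specific dictionaries and the uniform priors are assumptions for illustration, not the paper's stimuli.

```python
# Minimal sketch of a rational-speech-act-style pragmatic listener.
objects = ["blue_square", "blue_circle", "green_square"]

# Literal semantics: which words are true of which objects (assumed toy lexicon).
meaning = {
    "blue":   {"blue_square", "blue_circle"},
    "green":  {"green_square"},
    "square": {"blue_square", "green_square"},
    "circle": {"blue_circle"},
}

def literal_listener(word):
    """P(object | word): uniform over the objects the word is true of."""
    consistent = [o for o in objects if o in meaning[word]]
    return {o: 1.0 / len(consistent) for o in consistent}

def speaker(obj):
    """P(word | object): an informative speaker prefers words that pick
    out the object well for a literal listener."""
    scores = {w: literal_listener(w).get(obj, 0.0) for w in meaning}
    total = sum(scores.values())
    return {w: s / total for w, s in scores.items()}

def pragmatic_listener(word):
    """P(object | word) proportional to P(word | object) * P(object),
    with a uniform prior over objects."""
    scores = {o: speaker(o).get(word, 0.0) for o in objects}
    total = sum(scores.values())
    return {o: s / total for o, s in scores.items()}

# Hearing "blue", a literal listener is at chance between the two blue objects,
# but the pragmatic listener favors blue_square: a speaker who meant
# blue_circle would more likely have said the unambiguous "circle".
```

Under this toy lexicon, pragmatic_listener("blue") assigns probability 0.6 to blue_square and 0.4 to blue_circle, the qualitative asymmetry the Bayesian listener model is built to predict.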


Cognitive Science | 2008

A Rational Analysis of Rule-based Concept Learning

Noah D. Goodman; Joshua B. Tenenbaum; Jacob Feldman; Thomas L. Griffiths

This article proposes a new model of human concept learning that provides a rational analysis of learning feature-based concepts. The model is built upon Bayesian inference over a grammatically structured hypothesis space: a concept language of logical rules. The article compares the model's predictions to human generalization judgments in several well-known category learning experiments, and finds good agreement for both average and individual participant generalizations. It further investigates judgments for a broad set of 7-feature concepts, a more natural setting in several ways, and again finds that the model explains human performance.
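The core computation, a posterior over logical rules that trades rule simplicity against fit to noisy labeled examples, can be sketched with a hand-written hypothesis space. The four rules, the simplicity prior, and the noise level below are illustrative assumptions, not the grammar or parameters from the paper.

```python
# Toy sketch of Bayesian rule-based concept learning over two boolean features.
# Hypotheses: (name, rule, number of primitive feature tests used).
hypotheses = [
    ("f1",        lambda x: x[0],          1),
    ("f2",        lambda x: x[1],          1),
    ("f1 and f2", lambda x: x[0] and x[1], 2),
    ("f1 or f2",  lambda x: x[0] or x[1],  2),
]

def posterior(data, noise=0.1, complexity_penalty=0.5):
    """P(rule | data) proportional to prior(rule) * likelihood(data | rule).
    prior ~ complexity_penalty ** size (shorter rules favored);
    each observed label matches the rule's output with probability 1 - noise."""
    scores = {}
    for name, rule, size in hypotheses:
        prior = complexity_penalty ** size
        like = 1.0
        for x, label in data:
            like *= (1 - noise) if rule(x) == label else noise
        scores[name] = prior * like
    z = sum(scores.values())
    return {name: s / z for name, s in scores.items()}

# Observations consistent with the conjunction "f1 and f2":
data = [((True, True), True), ((True, False), False),
        ((False, True), False), ((False, False), False)]
post = posterior(data)
```

With these four observations the conjunction fits every label and wins despite its complexity penalty; with sparser or noisier data, the prior pulls the posterior toward the simpler single-feature rules, which is the kind of trade-off the rational analysis formalizes.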


Cognitive Science | 2014

One and done? Optimal decisions from very few samples.

Edward Vul; Noah D. Goodman; Thomas L. Griffiths; Joshua B. Tenenbaum

In many learning or inference tasks human behavior approximates that of a Bayesian ideal observer, suggesting that, at some level, cognition can be described as Bayesian inference. However, a number of findings have highlighted an intriguing mismatch between human behavior and standard assumptions about optimality: People often appear to make decisions based on just one or a few samples from the appropriate posterior probability distribution, rather than using the full distribution. Although sampling-based approximations are a common way to implement Bayesian inference, the very limited numbers of samples often used by humans seem insufficient to approximate the required probability distributions very accurately. Here, we consider this discrepancy in the broader framework of statistical decision theory, and ask: If people are making decisions based on samples--and samples are costly--how many samples should people use to optimize their total expected or worst-case reward over a large number of decisions? We find that under reasonable assumptions about the time costs of sampling, making many quick but locally suboptimal decisions based on very few samples may be the globally optimal strategy over long periods. These results help to reconcile a large body of work showing sampling-based or probability matching behavior with the hypothesis that human cognition can be understood in Bayesian terms, and they suggest promising future directions for studies of resource-constrained cognition.
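The trade-off at the heart of this argument is easy to reproduce numerically: more samples raise per-decision accuracy, but each sample costs time, so accuracy per unit time can peak at very few samples. The specific numbers below (sample cost, action cost, posterior probability 0.7) are invented for illustration, not parameters from the paper.

```python
# Minimal sketch of the "how many samples?" trade-off for a binary decision.
from math import comb

def p_correct(k, p):
    """Probability that a majority vote over k posterior samples picks the
    truly better of two actions, when the posterior favors it with
    probability p. Ties (even k) are broken by a fair coin."""
    total = 0.0
    for i in range(k + 1):
        prob = comb(k, i) * p**i * (1 - p)**(k - i)
        if 2 * i > k:
            total += prob
        elif 2 * i == k:
            total += 0.5 * prob
    return total

def reward_rate(k, p, sample_cost=1.0, action_cost=1.0):
    """Expected correct decisions per unit time when every decision is
    based on k samples, each costing sample_cost time units."""
    return p_correct(k, p) / (action_cost + k * sample_cost)

# With a moderately peaked posterior, accuracy keeps rising with k,
# but the reward *rate* is highest with a single sample:
rates = {k: reward_rate(k, 0.7) for k in [1, 5, 25, 100]}
```

Here p_correct grows from 0.7 at k=1 toward 1.0, yet reward_rate is maximized at k=1, the "one and done" regime: many quick, sample-based decisions beat a few careful ones when time is counted.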


Psychological Review | 2011

Learning a theory of causality

Noah D. Goodman; Tomer Ullman; Joshua B. Tenenbaum

The very early appearance of abstract knowledge is often taken as evidence for innateness. We explore the relative learning speeds of abstract and specific knowledge within a Bayesian framework and the role for innate structure. We focus on knowledge about causality, seen as a domain-general intuitive theory, and ask whether this knowledge can be learned from co-occurrence of events. We begin by phrasing the causal Bayes nets theory of causality and a range of alternatives in a logical language for relational theories. This allows us to explore simultaneous inductive learning of an abstract theory of causality and a causal model for each of several causal systems. We find that the correct theory of causality can be learned relatively quickly, often becoming available before specific causal theories have been learned--an effect we term the blessing of abstraction. We then explore the effect of providing a variety of auxiliary evidence and find that a collection of simple perceptual input analyzers can help to bootstrap abstract knowledge. Together, these results suggest that the most efficient route to causal knowledge may be to build in not an abstract notion of causality but a powerful inductive learning mechanism and a variety of perceptual supports. While these results are purely computational, they have implications for cognitive development, which we explore in the conclusion.


Cognition | 2012

Bootstrapping in a language of thought: A formal model of numerical concept learning

Steven T. Piantadosi; Joshua B. Tenenbaum; Noah D. Goodman

In acquiring number words, children exhibit a qualitative leap in which they transition from understanding a few number words, to possessing a rich system of interrelated numerical concepts. We present a computational framework for understanding this inductive leap as the consequence of statistical inference over a sufficiently powerful representational system. We provide an implemented model that is powerful enough to learn number word meanings and other related conceptual systems from naturalistic data. The model shows that bootstrapping can be made computationally and philosophically well-founded as a theory of number learning. Our approach demonstrates how learners may combine core cognitive operations to build sophisticated representations during the course of development, and how this process explains observed developmental patterns in number word learning.


Perspectives on Psychological Science | 2012

Learning From Others: The Consequences of Psychological Reasoning for Human Learning

Patrick Shafto; Noah D. Goodman; Michael C. Frank

From early childhood, human beings learn not only from collections of facts about the world but also from social contexts through observations of other people, communication, and explicit teaching. In these contexts, the data are the result of human actions—actions that come about because of people’s goals and intentions. To interpret the implications of others’ actions correctly, learners must understand the people generating the data. Most models of learning, however, assume that data are randomly collected facts about the world and cannot explain how social contexts influence learning. We provide a Bayesian analysis of learning from knowledgeable others, which formalizes how learners may use a person’s actions and goals to make inferences about the actor’s knowledge about the world. We illustrate this framework using two examples from causal learning and conclude by discussing the implications for cognition, social reasoning, and cognitive development.


international conference on computer graphics and interactive techniques | 2012

Synthesizing open worlds with constraints using locally annealed reversible jump MCMC

Yi-Ting Yeh; Lingfeng Yang; Matthew Watson; Noah D. Goodman; Pat Hanrahan

We present a novel Markov chain Monte Carlo (MCMC) algorithm that generates samples from transdimensional distributions encoding complex constraints. We use factor graphs, a type of graphical model, to encode constraints as factors. Our proposed MCMC method, called locally annealed reversible jump MCMC, exploits knowledge of how dimension changes affect the structure of the factor graph. We employ a sequence of annealed distributions during the sampling process, allowing us to explore the state space across different dimensionalities more freely. This approach is motivated by the application of layout synthesis, where relationships between objects are characterized as constraints. In particular, our method addresses the challenge of synthesizing open world layouts where the number of objects is not fixed and optimal configurations for different numbers of objects may be drastically different. We demonstrate the applicability of our approach on two open world layout synthesis problems: coffee shops and golf courses.


Cognitive Psychology | 2014

A rational account of pedagogical reasoning: teaching by, and learning from, examples.

Patrick Shafto; Noah D. Goodman; Thomas L. Griffiths

Much of learning and reasoning occurs in pedagogical situations--situations in which a person who knows a concept chooses examples for the purpose of helping a learner acquire the concept. We introduce a model of teaching and learning in pedagogical settings that predicts which examples teachers should choose and what learners should infer given a teacher's examples. We present three experiments testing the model predictions for rule-based, prototype, and causally structured concepts. The model shows good quantitative and qualitative fits to the data across all three experiments, predicting novel qualitative phenomena in each case. We conclude by discussing implications for understanding concept learning and implications for theoretical claims about the role of pedagogy in human learning.

Collaboration

Top co-authors of Noah D. Goodman:

Joshua B. Tenenbaum (Massachusetts Institute of Technology)
Andreas Stuhlmüller (Massachusetts Institute of Technology)
Tomer Ullman (Massachusetts Institute of Technology)
Charles Kemp (Carnegie Mellon University)