Publication


Featured research published by Charles Kemp.


Science | 2011

How to Grow a Mind: Statistics, Structure, and Abstraction

Joshua B. Tenenbaum; Charles Kemp; Thomas L. Griffiths; Noah D. Goodman

In coming to understand the world—in learning concepts, acquiring language, and grasping causal relations—our minds make inferences that appear to go far beyond the data available. How do we do it? This review describes recent approaches to reverse-engineering human learning and cognitive development and, in parallel, engineering more humanlike machine learning systems. Computational models that perform probabilistic inference over hierarchies of flexibly structured representations can address some of the deepest questions about the nature and origins of human thought: How does abstract knowledge guide learning and reasoning from sparse data? What forms does our knowledge take, across different domains and tasks? And how is that abstract knowledge itself acquired?
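
The review's central idea can be summarized in a pair of equations. In a hierarchical Bayesian model, abstract knowledge (written T below, for a framework theory) supplies the prior over hypotheses h at the level beneath it, and both levels are updated from data d. The notation is a summary gloss, not the paper's own:

```latex
% Hierarchical Bayesian learning: abstract knowledge T generates the prior
% over hypotheses h, and both are inferred jointly from observed data d.
\[
  P(h, T \mid d) \;\propto\; P(d \mid h)\, P(h \mid T)\, P(T)
\]
% The abstract level is itself acquired by averaging over hypotheses,
% which is how structured priors can be learned rather than built in:
\[
  P(T \mid d) \;\propto\; P(T) \sum_h P(d \mid h)\, P(h \mid T)
\]
```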


Trends in Cognitive Sciences | 2006

Theory-based Bayesian models of inductive learning and reasoning.

Joshua B. Tenenbaum; Thomas L. Griffiths; Charles Kemp

Inductive inference allows humans to make powerful generalizations from sparse data when learning about word meanings, unobserved properties, causal relationships, and many other aspects of the world. Traditional accounts of induction emphasize either the power of statistical learning, or the importance of strong constraints from structured domain knowledge, intuitive theories or schemas. We argue that both components are necessary to explain the nature, use and acquisition of human knowledge, and we introduce a theory-based Bayesian framework for modeling inductive learning and reasoning as statistical inferences over structured knowledge representations.
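
As a concrete illustration of "statistical inference over structured knowledge representations," here is a minimal sketch in which a toy taxonomic theory generates the hypothesis space for a property-induction judgment. The species, hypotheses, and prior weights are invented for illustration and are not the paper's:

```python
# A "theory" here is just a set of candidate extensions for a novel
# property, with prior weight supplied by the theory's structure.
hypotheses = {
    frozenset({"horse"}): 0.2,
    frozenset({"horse", "cow"}): 0.3,             # close taxonomic cluster
    frozenset({"horse", "cow", "dolphin"}): 0.5,  # all mammals
}

observed = {"horse"}  # premise: horses have the novel property

# Likelihood: 1 if the hypothesis covers all observations, else 0.
posterior = {h: p for h, p in hypotheses.items() if observed <= h}
z = sum(posterior.values())
posterior = {h: p / z for h, p in posterior.items()}

# Generalization: probability that "cow" shares the property.
print(sum(p for h, p in posterior.items() if "cow" in h))  # 0.8
```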


Proceedings of the National Academy of Sciences of the United States of America | 2008

The discovery of structural form

Charles Kemp; Joshua B. Tenenbaum

Algorithms for finding structure in data have become increasingly important both as tools for scientific data analysis and as models of human learning, yet they suffer from a critical limitation. Scientists discover qualitatively new forms of structure in observed data: For instance, Linnaeus recognized the hierarchical organization of biological species, and Mendeleev recognized the periodic structure of the chemical elements. Analogous insights play a pivotal role in cognitive development: Children discover that object category labels can be organized into hierarchies, friendship networks are organized into cliques, and comparative relations (e.g., “bigger than” or “better than”) respect a transitive order. Standard algorithms, however, can only learn structures of a single form that must be specified in advance: For instance, algorithms for hierarchical clustering create tree structures, whereas algorithms for dimensionality-reduction create low-dimensional spaces. Here, we present a computational model that learns structures of many different forms and that discovers which form is best for a given dataset. The model makes probabilistic inferences over a space of graph grammars representing trees, linear orders, multidimensional spaces, rings, dominance hierarchies, cliques, and other forms and successfully discovers the underlying structure of a variety of physical, biological, and social domains. Our approach brings structure learning methods closer to human abilities and may lead to a deeper computational understanding of cognitive development.
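
The model's top-level computation can be written compactly. As the paper sets it up, a structure S and its form F are scored jointly against data D, and the learner searches for the best-scoring pair; the schematic below omits the paper's detailed priors and likelihoods:

```latex
% Joint score of a structure S and its form F given data D:
\[
  P(S, F \mid D) \;\propto\; P(D \mid S)\, P(S \mid F)\, P(F)
\]
% P(S | F) favors simple structures consistent with the graph grammar for
% form F (tree, ring, chain, ...); P(D | S) measures how well S explains D.
```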


Archive | 2008

The Cambridge Handbook of Computational Psychology: Bayesian Models of Cognition

Thomas L. Griffiths; Charles Kemp; Joshua B. Tenenbaum

For over 200 years, philosophers and mathematicians have been using probability theory to describe human cognition. While the theory of probabilities was first developed as a means of analyzing games of chance, it quickly took on a larger and deeper significance as a formal account of how rational agents should reason in situations of uncertainty (Gigerenzer et al., 1989; Hacking, 1975). Our goal in this chapter is to illustrate the kinds of computational models of cognition that we can build if we assume that human learning and inference approximately follow the principles of Bayesian probabilistic inference, and to explain some of the mathematical ideas and techniques underlying those models.
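
To make the chapter's starting point concrete, here is a minimal worked example of Bayesian inference, with toy numbers of my own: deciding which of two coins produced a run of flips.

```python
# Which of two coins produced the observed flips?
priors = {"fair": 0.5, "biased": 0.5}       # P(h)
p_heads = {"fair": 0.5, "biased": 0.9}      # each hypothesis's parameter

flips = "HHHH"

def likelihood(h):
    """P(flips | h), assuming independent flips."""
    p = p_heads[h]
    out = 1.0
    for f in flips:
        out *= p if f == "H" else 1 - p
    return out

# Bayes' rule: P(h | flips) is proportional to P(flips | h) * P(h).
unnorm = {h: likelihood(h) * priors[h] for h in priors}
z = sum(unnorm.values())
for h, v in unnorm.items():
    print(h, round(v / z, 3))   # fair: 0.087, biased: 0.913
```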


Trends in Cognitive Sciences | 2010

Probabilistic models of cognition: exploring representations and inductive biases

Thomas L. Griffiths; Nick Chater; Charles Kemp; Amy Perfors; Joshua B. Tenenbaum

Cognitive science aims to reverse-engineer the mind, and many of the engineering challenges the mind faces involve induction. The probabilistic approach to modeling cognition begins by identifying ideal solutions to these inductive problems. Mental processes are then modeled using algorithms for approximating these solutions, and neural processes are viewed as mechanisms for implementing these algorithms, with the result being a top-down analysis of cognition starting with the function of cognitive processes. Typical connectionist models, by contrast, follow a bottom-up approach, beginning with a characterization of neural mechanisms and exploring what macro-level functional phenomena might emerge. We argue that the top-down approach yields greater flexibility for exploring the representations and inductive biases that underlie human cognition.


Psychological Review | 2009

Structured statistical models of inductive reasoning.

Charles Kemp; Joshua B. Tenenbaum

Everyday inductive inferences are often guided by rich background knowledge. Formal models of induction should aim to incorporate this knowledge and should explain how different kinds of knowledge lead to the distinctive patterns of reasoning found in different inductive contexts. This article presents a Bayesian framework that attempts to meet both goals and describes 4 applications of the framework: a taxonomic model, a spatial model, a threshold model, and a causal model. Each model makes probabilistic inferences about the extensions of novel properties, but the priors for the 4 models are defined over different kinds of structures that capture different relationships between the categories in a domain. The framework therefore shows how statistical inference can operate over structured background knowledge, and the authors argue that this interaction between structure and statistics is critical for explaining the power and flexibility of human reasoning.
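
A small sketch may clarify how the same inference engine over different structured priors yields different inductive patterns. The hypothesis spaces below are toy stand-ins for the paper's taxonomic and threshold models:

```python
species = ["mouse", "rat", "squirrel", "dolphin"]

# Taxonomic prior: extensions are leaf-sets of subtrees of a toy tree
# (((mouse, rat), squirrel), dolphin) -- nearby species covary.
taxonomic = [
    {"mouse", "rat"},
    {"mouse", "rat", "squirrel"},
    set(species),
    {"mouse"}, {"rat"}, {"squirrel"}, {"dolphin"},
]

# Threshold prior: species ordered by size; an extension is everything
# at or above some cutoff (e.g., "weighs more than x").
by_size = ["mouse", "rat", "squirrel", "dolphin"]
threshold = [set(by_size[i:]) for i in range(len(by_size))]

def generalize(hyps, observed, query):
    """P(query has property | observed have it), uniform over live hypotheses."""
    live = [h for h in hyps if observed <= h]
    return sum(query in h for h in live) / len(live)

print(generalize(taxonomic, {"rat"}, "mouse"))    # 0.75: taxonomic neighbor
print(generalize(threshold, {"rat"}, "dolphin"))  # 1.0: all larger species
```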


Science | 2012

Kinship categories across languages reflect general communicative principles.

Charles Kemp; Terry Regier

Who's Who: Different languages rely on distinct sets of terminology to classify relatives, such as maternal grandfather in English, and precision in language use is a key component of successful communication (see the Perspective by Levinson). Kemp and Regier (p. 1049) propose an organizing framework whereby kinship classification systems can all be seen to optimize or nearly optimize both simplicity and precision. The labels applied to kin are constructed from simple units and are precise enough to reduce confusion and ambiguity when used in communication. Frank and Goodman (p. 998) show that simplicity and precision also explain how listeners correctly infer the meaning of speech in the context of referential communication. In short, the systems of terms used in different languages to describe kin are optimized for simplicity and informativeness.

Languages vary in their systems of kinship categories, but the scope of possible variation appears to be constrained. Previous accounts of kin classification have often emphasized constraints that are specific to the domain of kinship and are not derived from general principles. Here, we propose an account that is founded on two domain-general principles: good systems of categories are simple, and they enable informative communication. We show computationally that kin classification systems in the world’s languages achieve a near-optimal trade-off between these two competing principles. We also show that our account explains several specific constraints on kin classification proposed previously. Because the principles of simplicity and informativeness are also relevant to other semantic domains, the trade-off between them may provide a domain-general foundation for variation in category systems across languages.
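
The trade-off can be sketched numerically. The toy systems, complexity proxy, and cost measure below are mine, chosen only to show the shape of the computation:

```python
import math

# A kin "system" maps relatives to terms; simplicity = few terms,
# informativeness = low ambiguity about which relative a term picks out.
relatives = ["mother", "father", "mothers_sister", "fathers_sister"]

systems = {
    "english_like": {"mother": "mother", "father": "father",
                     "mothers_sister": "aunt", "fathers_sister": "aunt"},
    "maximal":      {r: r for r in relatives},        # one term per relative
    "degenerate":   {r: "kin" for r in relatives},    # one term for all
}

def complexity(system):
    return len(set(system.values()))    # crude proxy: number of distinct terms

def comm_cost(system):
    """Expected surprisal of the intended relative given the term heard,
    with relatives assumed equally likely to be talked about."""
    cost = 0.0
    for r in relatives:
        n_compatible = sum(system[x] == system[r] for x in relatives)
        cost += (1 / len(relatives)) * math.log2(n_compatible)
    return cost

for name, s in systems.items():
    print(name, complexity(s), round(comm_cost(s), 2))
# "maximal" is informative but complex; "degenerate" is simple but costly;
# attested systems tend to sit near the optimal frontier between the two.
```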


Cognitive Science | 2010

Learning to Learn Causal Models

Charles Kemp; Noah D. Goodman; Joshua B. Tenenbaum

Learning to understand a single causal system can be an achievement, but humans must learn about multiple causal systems over the course of a lifetime. We present a hierarchical Bayesian framework that helps to explain how learning about several causal systems can accelerate learning about systems that are subsequently encountered. Given experience with a set of objects, our framework learns a causal model for each object and a causal schema that captures commonalities among these causal models. The schema organizes the objects into categories and specifies the causal powers and characteristic features of these categories and the characteristic causal interactions between categories. A schema of this kind allows causal models for subsequent objects to be rapidly learned, and we explore this accelerated learning in four experiments. Our results confirm that humans learn rapidly about the causal powers of novel objects, and we show that our framework accounts better for our data than alternative models of causal learning.
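
A toy calculation shows the flavor of the "learning to learn" effect: pooling data across previously encountered objects yields a category-level expectation that lets a new object's causal power be pinned down from a single trial. The beta-binomial pooling here is my stand-in for the paper's richer schema, not its actual model:

```python
# Trials: (activations, attempts) for objects already grouped in one category.
seen = [(9, 10), (8, 10), (10, 10)]   # these objects reliably trigger a detector

# Category-level "schema": a Beta prior fit by crudely pooling the counts.
a = 1 + sum(s for s, _ in seen)       # pooled successes + pseudo-count
b = 1 + sum(n - s for s, n in seen)   # pooled failures + pseudo-count

# A new object of the same category succeeds on its single trial.
new_success, new_trials = 1, 1
posterior_mean = (a + new_success) / (a + b + new_trials)
print(round(posterior_mean, 2))   # 0.88: near-certain after one trial
```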


Archive | 2007

Inductive Reasoning: Theory-Based Bayesian Models of Inductive Reasoning

Joshua B. Tenenbaum; Charles Kemp; Patrick Shafto

Philosophers since Hume have struggled with the logical problem of induction, but children solve an even more difficult task — the practical problem of induction. Children somehow manage to learn concepts, categories, and word meanings, and all on the basis of a set of examples that seems hopelessly inadequate. The practical problem of induction does not disappear with adolescence: adults face it every day whenever they make any attempt to predict an uncertain outcome. Inductive inference is a fundamental part of everyday life, and for cognitive scientists, a fundamental phenomenon of human learning and reasoning in need of computational explanation.

There are at least two important kinds of questions that we can ask about human inductive capacities. First, what is the knowledge on which a given instance of induction is based? Second, how does that knowledge support generalization beyond the specific data observed: how do we judge the strength of an inductive argument from a given set of premises to new cases, or infer which new entities fall under a concept given a set of examples? We provide a computational approach to answering these questions.

Experimental psychologists have studied both the process of induction and the nature of prior knowledge representations in depth, but previous computational models of induction have tended to emphasize process to the exclusion of knowledge representation. The approach we describe here attempts to redress this imbalance, by showing how domain-specific prior knowledge can be formalized as a crucial ingredient in a domain-general framework for rational statistical inference.

The value of prior knowledge has been attested by both psychologists and machine learning theorists, but with somewhat different emphases. Formal analyses in machine learning show that meaningful generalization is not possible unless a learner begins with some sort of inductive bias: some set of constraints on the space of hypotheses that will be considered (Mitchell, 1997). However, the best known statistical machine-learning algorithms adopt relatively weak inductive biases and thus require much more data for successful generalization than humans do: tens or hundreds of positive and negative examples, in contrast to the human ability to generalize from just one or few positive examples. These machine algorithms lack ways to represent and exploit the rich forms of prior knowledge that guide people’s inductive inferences, and that have been the focus of much attention in cognitive and developmental psychology under the name of “intuitive theories” (Murphy and Medin, 1985). Murphy (1993) characterizes an intuitive theory as “a set of causal relations that collectively generate or explain the phenomena in a domain.” We think of a theory more generally as any system of abstract principles that generates hypotheses for inductive inference in a domain, such as hypotheses about the meanings of new concepts, the conditions for new rules, or the extensions of new properties in that domain. Carey (1985), Wellman and Gelman (1992), and Gopnik and Meltzoff (1997) emphasize the central role of intuitive theories in cognitive development, both as sources of constraint on children’s inductive reasoning and as the locus of deep conceptual change. Only recently have psychologists begun to consider seriously the roles that these intuitive theories might play in formal models of inductive inference (Gopnik and Schulz, 2004; Tenenbaum,
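
The "size principle" is the standard way such Bayesian frameworks formalize generalization from just one or a few positive examples: under strong sampling, smaller hypotheses that still cover the data are exponentially favored as examples accumulate. The hypothesis space below is a toy illustration of mine, not the chapter's:

```python
hypotheses = {                      # candidate concept extensions
    "dalmatians": {"dal1", "dal2", "dal3"},
    "dogs":       {"dal1", "dal2", "dal3", "lab1", "pug1"},
    "animals":    {"dal1", "dal2", "dal3", "lab1", "pug1", "cat1", "cow1"},
}

def posterior(examples):
    """Size principle: P(examples | h) = (1 / |h|)^n for covering hypotheses."""
    unnorm = {}
    for name, ext in hypotheses.items():
        if set(examples) <= ext:                       # must cover the data
            unnorm[name] = (1 / len(ext)) ** len(examples)
    z = sum(unnorm.values())
    return {name: round(p / z, 3) for name, p in unnorm.items()}

print(posterior(["dal1"]))                  # graded uncertainty across all three
print(posterior(["dal1", "dal2", "dal3"]))  # "dalmatians" now dominates (~0.77)
```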


Cognition | 2010

A probabilistic model of theory formation

Charles Kemp; Joshua B. Tenenbaum; Sourabh Niyogi; Thomas L. Griffiths

Concept learning is challenging in part because the meanings of many concepts depend on their relationships to other concepts. Learning these concepts in isolation can be difficult, but we present a model that discovers entire systems of related concepts. These systems can be viewed as simple theories that specify the concepts that exist in a domain, and the laws or principles that relate these concepts. We apply our model to several real-world problems, including learning the structure of kinship systems and learning ontologies. We also compare its predictions to data collected in two behavioral experiments. Experiment 1 shows that our model helps to explain how simple theories are acquired and used for inductive inference. Experiment 2 suggests that our model provides a better account of theory discovery than a more traditional alternative that focuses on features rather than relations.
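
The model's core move can be sketched in a few lines: a candidate "theory" is a partition of entities into concepts, scored by how well concept-level laws predict the observed relations. The data and scoring below are my toy simplification of the paper's fully Bayesian treatment:

```python
import math
from itertools import product

entities = ["a", "b", "c", "d"]
# Observed binary relation R(x, y): "x interacts with y" (toy data).
R = {(x, y): 0 for x, y in product(entities, entities)}
for pair in [("a", "b"), ("b", "a"), ("c", "d"), ("d", "c")]:
    R[pair] = 1

def score(partition):
    """Log-likelihood of R when each pair of concepts links at its
    smoothed empirical rate -- higher means a better theory."""
    cluster = {e: i for i, group in enumerate(partition) for e in group}
    counts = {}
    for (x, y), v in R.items():
        key = (cluster[x], cluster[y])
        ones, zeros = counts.get(key, (0, 0))
        counts[key] = (ones + v, zeros + (1 - v))
    ll = 0.0
    for ones, zeros in counts.values():
        p = (ones + 1) / (ones + zeros + 2)   # Laplace-smoothed link rate
        ll += ones * math.log(p) + zeros * math.log(1 - p)
    return ll

print(score([["a", "b"], ["c", "d"]]))   # two concepts: about -7.0
print(score([["a", "b", "c", "d"]]))     # one concept: about -9.0
```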

Collaboration


Dive into Charles Kemp's collaborations.

Top Co-Authors

Joshua B. Tenenbaum, Massachusetts Institute of Technology
Alan Jern, Carnegie Mellon University
Patrick Shafto, University of Louisville
Terry Regier, University of California
Amy Perfors, University of Adelaide
Vikash K. Mansinghka, Massachusetts Institute of Technology
Yang Xu, University of California