Publication


Featured research published by Jay I. Myung.


Psychological Review | 2006

Global model analysis by parameter space partitioning

Mark A. Pitt; Woojae Kim; Daniel J. Navarro; Jay I. Myung

To model behavior, scientists need to know how models behave. This means learning what other behaviors a model can produce besides the one generated by participants in an experiment. This is a difficult problem because of the complexity of psychological models (e.g., their many parameters) and because the behavioral precision of models (e.g., interval-scale performance) often mismatches their testable precision in experiments, where qualitative, ordinal predictions are the norm. Parameter space partitioning is a solution that evaluates model performance at a qualitative level. It partitions the model's parameter space into regions, each corresponding to a different data pattern. Three application examples demonstrate its potential and versatility for studying the global behavior of psychological models.
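
A minimal sketch of the idea behind parameter space partitioning, assuming a hypothetical two-parameter model and using a simple grid scan rather than the MCMC-based search the paper develops; every model form and bound below is invented for illustration:

```python
# Illustrative sketch of parameter space partitioning (PSP) on a toy model.
# The published method uses an MCMC search over the parameter space; a grid
# is used here only to keep the example short. The model is hypothetical.

import itertools
from collections import Counter

import numpy as np


def predict(a, b, conditions=(1.0, 2.0, 3.0)):
    """Hypothetical two-parameter model: predicted mean per condition."""
    return [a * np.exp(-b * c) for c in conditions]


def data_pattern(predictions):
    """Reduce interval-scale predictions to a qualitative (ordinal) pattern."""
    order = np.argsort(predictions)   # ranks of the condition means
    return tuple(order.tolist())      # e.g. (2, 1, 0) = strictly decreasing


# Partition a bounded parameter space into regions by the pattern they produce.
a_grid = np.linspace(0.1, 5.0, 200)
b_grid = np.linspace(-1.0, 1.0, 200)

counts = Counter(
    data_pattern(predict(a, b)) for a, b in itertools.product(a_grid, b_grid)
)

total = sum(counts.values())
for pattern, n in counts.most_common():
    print(f"pattern {pattern}: {100 * n / total:.1f}% of the parameter space")
```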


Neural Computation | 2010

Adaptive design optimization: A mutual information-based approach to model discrimination in cognitive science

Daniel R. Cavagnaro; Jay I. Myung; Mark A. Pitt; Janne V. Kujala

Discriminating among competing statistical models is a pressing issue for many experimentalists in the field of cognitive science. Resolving this issue begins with designing maximally informative experiments. To this end, the problem to be solved in adaptive design optimization is identifying experimental designs under which one can infer the underlying model in the fewest possible steps. When the models under consideration are nonlinear, as is often the case in cognitive science, this problem can be impossible to solve analytically without simplifying assumptions. However, as we show in this letter, a full solution can be found numerically with the help of a Bayesian computational trick derived from the statistics literature, which recasts the problem as a probability density simulation in which the optimal design is the mode of the density. We use a utility function based on mutual information and give three intuitive interpretations of the utility function in terms of Bayesian posterior estimates. As a proof of concept, we offer a simple example application to an experiment on memory retention.
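
A rough numerical sketch of the mutual-information utility under strong simplifying assumptions: two hypothetical retention models (power and exponential), binomial recall data, an assumed gamma prior on the decay rate, and plain Monte Carlo averaging in place of the density-simulation trick described in the letter:

```python
# Sketch of a mutual-information design utility for discriminating two
# hypothetical retention models (power vs. exponential). All priors, trial
# counts, and candidate designs are assumptions made for illustration.

import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(0)
N_TRIALS = 30      # items tested at each retention interval (assumed)
N_PARAM = 2000     # parameter draws used to approximate marginal likelihoods


def p_recall(model, b, t):
    """Probability of recall at retention interval t under each model."""
    return (1.0 + t) ** (-b) if model == "power" else np.exp(-b * t)


def marginal_likelihood(model, y, t, b_draws):
    """p(y | model, design t), averaging the binomial likelihood over the prior."""
    probs = p_recall(model, b_draws, t)
    return binom.pmf(y, N_TRIALS, probs).mean()


def utility(t, n_sim=500):
    """Monte Carlo estimate of the mutual information between model and data at design t."""
    models = ["power", "exponential"]
    prior_draws = {m: rng.gamma(2.0, 0.25, size=N_PARAM) for m in models}  # assumed prior on b
    total = 0.0
    for _ in range(n_sim):
        m = models[rng.integers(2)]                    # sample a model (equal prior)
        b = rng.gamma(2.0, 0.25)                       # sample its parameter
        y = rng.binomial(N_TRIALS, p_recall(m, b, t))  # simulate an outcome
        lik_m = marginal_likelihood(m, y, t, prior_draws[m])
        lik_marg = 0.5 * sum(marginal_likelihood(mm, y, t, prior_draws[mm]) for mm in models)
        total += np.log(lik_m) - np.log(lik_marg)
    return total / n_sim


# Pick the single retention interval with the highest expected information gain.
candidate_intervals = [0.5, 1, 2, 5, 10, 20]
best = max(candidate_intervals, key=utility)
print("most informative retention interval:", best)
```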


Psychological Review | 2009

Optimal Experimental Design for Model Discrimination

Jay I. Myung; Mark A. Pitt

Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it possible to determine these values and thereby identify an optimal experimental design. After the method is described, it is demonstrated in 2 content areas in cognitive psychology in which models are highly competitive: retention (i.e., forgetting) and categorization. The quality of the optimal design is compared with that of designs used in the literature. The findings demonstrate that design optimization has the potential to increase the informativeness of the experimental method.
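
Stated generically, the search described here is for the design that maximizes an expected utility of roughly the following form, where $u$ is a local utility (for example, an information gain), $p(m)$ is the model prior, and $p(\theta_m)$ the parameter prior; the exact choices vary by application:

```latex
d^{*} = \arg\max_{d} U(d), \qquad
U(d) = \sum_{m} p(m) \int\!\!\int u(d, \theta_m, y)\,
       p(y \mid \theta_m, d)\, p(\theta_m)\, dy\, d\theta_m
```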


Methods in Enzymology | 2004

Model comparison methods.

Jay I. Myung; Mark A. Pitt

This chapter describes the various aspects of model comparison methods. The question of how one should choose among competing explanations of observed data is at the core of science. Model comparison is ubiquitous and arises, for example, when a toxicologist must decide between two dose-response models or when a biochemist needs to determine which of a set of enzyme-kinetics models best accounts for the observed data. Given a data sample, the descriptive adequacy of a model is assessed by finding parameter values of the model that best fit the data in some defined sense. Selecting among models using a goodness-of-fit measure would make sense if data were free of noise. Generalizability or predictive accuracy refers to how well a model predicts the statistical properties of future, as yet unseen, samples from a replication of the experiment that generated the current data sample. Generalizability is a mean discrepancy between the true model and the model of interest, averaged across all data that could possibly be observed under the true model. It is found that cross-validation can easily be implemented using any computer programming language as its calculation does not require sophisticated computational techniques, in contrast to minimum description length.
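
As the chapter notes, cross-validation requires no specialized machinery. A bare-bones split-half sketch for comparing two hypothetical saturating models follows; the model forms, starting values, and data are invented purely for illustration:

```python
# Split-half cross-validation sketch for comparing two hypothetical models.
# Each model is fit to a calibration half and scored by prediction error on
# the held-out validation half; the data below are simulated for illustration.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)


def michaelis_menten(x, vmax, km):       # candidate model 1
    return vmax * x / (km + x)


def exponential_rise(x, a, b):           # candidate model 2
    return a * (1.0 - np.exp(-b * x))


# Simulated observations (stand-in for a real data sample).
x = np.linspace(0.1, 10, 40)
y = michaelis_menten(x, 2.0, 1.5) + rng.normal(0, 0.1, x.size)

# Random split into calibration and validation halves.
idx = rng.permutation(x.size)
cal, val = idx[:20], idx[20:]

for name, model, p0 in [("Michaelis-Menten", michaelis_menten, (1.0, 1.0)),
                        ("exponential rise", exponential_rise, (1.0, 1.0))]:
    params, _ = curve_fit(model, x[cal], y[cal], p0=p0, maxfev=10000)
    val_error = np.mean((y[val] - model(x[val], *params)) ** 2)
    print(f"{name}: validation MSE = {val_error:.4f}")
```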


Methods in Enzymology | 2009

Evaluation and Comparison of Computational Models

Jay I. Myung; Yun Tang; Mark A. Pitt

Computational models are powerful tools that can enhance the understanding of scientific phenomena. The enterprise of modeling is most productive when the reasons underlying the adequacy of a model, and possibly its superiority to other models, are understood. This chapter begins with an overview of the main criteria that must be considered in model evaluation and selection, in particular explaining why generalizability is the preferred criterion for model selection. This is followed by a review of measures of generalizability. The final section demonstrates the use of five versatile and easy-to-use selection methods for choosing between two mathematical models of protein folding.
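
Two of the standard generalizability measures reviewed in this literature are the Akaike and Bayesian information criteria; for a model with maximized likelihood $\hat{L}$, $k$ free parameters, and $n$ observations they take the familiar forms below (the chapter itself surveys several further measures):

```latex
\mathrm{AIC} = -2 \ln \hat{L} + 2k, \qquad
\mathrm{BIC} = -2 \ln \hat{L} + k \ln n
```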


Psychonomic Bulletin & Review | 2010

Minimum description length model selection of multinomial processing tree models

Hao Wu; Jay I. Myung; William H. Batchelder

Multinomial processing tree (MPT) modeling has been widely and successfully applied as a statistical methodology for measuring hypothesized latent cognitive processes in selected experimental paradigms. In this article, we address the problem of selecting the best MPT model from a set of scientifically plausible MPT models, given observed data. We introduce a minimum description length (MDL) based model-selection approach that overcomes the limitations of existing methods such as the G²-based likelihood ratio test, the Akaike information criterion, and the Bayesian information criterion. To help ease the computational burden of implementing MDL, we provide a computer program in MATLAB that performs MDL-based model selection for any MPT model, with or without inequality constraints. Finally, we discuss applications of the MDL approach to well-studied MPT models with real data sets collected in two different experimental paradigms: source monitoring and pair clustering. The aforementioned MATLAB program may be downloaded from http://pbr.psychonomic-journals.org/content/supplemental.
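
The MDL criterion in this line of work is usually computed through the Fisher information approximation, whose complexity term depends on a model's functional form rather than only its parameter count. With maximized log-likelihood $\ln f(y \mid \hat{\theta})$, $k$ parameters, $n$ observations, and per-observation Fisher information matrix $I(\theta)$, it reads:

```latex
\mathrm{FIA} = -\ln f(y \mid \hat{\theta})
  + \frac{k}{2} \ln \frac{n}{2\pi}
  + \ln \int_{\Theta} \sqrt{\det I(\theta)}\; d\theta
```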


Neural Computation | 2014

A hierarchical adaptive approach to optimal experimental design

Woojae Kim; Mark A. Pitt; Zhong-Lin Lu; Mark Steyvers; Jay I. Myung

Experimentation is at the core of research in the behavioral and neural sciences, yet observations can be expensive and time-consuming to acquire (e.g., MRI scans, responses from infant participants). A major interest of researchers is designing experiments that lead to maximal accumulation of information about the phenomenon under study with the fewest possible number of observations. In addressing this challenge, statisticians have developed adaptive design optimization methods. This letter introduces a hierarchical Bayes extension of adaptive design optimization that provides a judicious way to exploit two complementary schemes of inference (with past and future data) to achieve even greater accuracy and efficiency in information gain. We demonstrate the method in a simulation experiment in the field of visual perception.


Psychonomic Bulletin & Review | 2011

Model discrimination through adaptive experimentation

Daniel R. Cavagnaro; Mark A. Pitt; Jay I. Myung

An ideal experiment is one in which data collection is efficient and the results are maximally informative. This standard can be difficult to achieve because of uncertainties about the consequences of design decisions. We demonstrate the success of a Bayesian adaptive method (adaptive design optimization, ADO) in optimizing design decisions when comparing models of the time course of forgetting. Across a series of testing stages, ADO intelligently adapts the retention interval in order to maximally discriminate power and exponential models. Compared with two different control (non-adaptive) methods, ADO distinguishes the models decisively, with the results unambiguously favoring the power model. Analyses suggest that ADO’s success is due in part to its flexibility in adjusting to individual differences. This implementation of ADO serves as an important first step in assessing its applicability and usefulness to psychology.
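
A toy illustration of the final comparison step, assuming recall counts observed at a handful of retention intervals and using maximum likelihood with AIC in place of the Bayesian machinery the study actually employs; the intervals and counts below are invented:

```python
# Toy comparison of power and exponential retention models by maximum
# likelihood. The counts are invented; the actual study compares the models
# with Bayesian methods inside an adaptive design loop.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

intervals = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # retention intervals (assumed units)
recalled = np.array([27, 22, 17, 12, 9])            # invented recall counts
n_items = 30                                        # items tested at each interval


def power_model(t, a, b):
    return a * (1.0 + t) ** (-b)


def exponential_model(t, a, b):
    return a * np.exp(-b * t)


def neg_log_lik(params, model):
    a, b = params
    p = np.clip(model(intervals, a, b), 1e-9, 1 - 1e-9)
    return -binom.logpmf(recalled, n_items, p).sum()


for name, model in [("power", power_model), ("exponential", exponential_model)]:
    fit = minimize(neg_log_lik, x0=[0.9, 0.2], args=(model,), method="Nelder-Mead")
    aic = 2 * fit.fun + 2 * 2    # each model has 2 free parameters
    print(f"{name:12s} AIC = {aic:.2f}")
```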


Psychonomic Bulletin & Review | 2007

Does Response Scaling Cause the Generalized Context Model to Mimic a Prototype Model?

Jay I. Myung; Mark A. Pitt; Daniel J. Navarro

Smith and Minda (1998, 2002) argued that the response scaling parameter γ in the exemplar-based generalized context model (GCM) makes the model unnecessarily complex and allows it to mimic the behavior of a prototype model. We evaluated this criticism in two ways. First, we estimated the complexity of the GCM with and without the γ parameter and also compared its complexity to that of a prototype model. Next, we assessed the extent to which the models mimic each other, using two experimental designs (Nosofsky & Zaki, 2002, Experiment 3; Smith & Minda, 1998, Experiment 2), chosen because these designs are thought to differ in the degree to which they can discriminate the models. The results show that γ can increase the complexity of the GCM, but this complexity does not necessarily allow mimicry. Furthermore, if statistical model selection methods such as minimum description length are adopted as the measure of model performance, the models will be highly discriminable, irrespective of design.
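
For reference, a common statement of the response-scaling version of the GCM choice rule (response biases omitted) is given below, where $s(x, x_a)$ is the similarity of stimulus $x$ to stored exemplar $x_a$ of category $A$ and $\gamma$ is the response scaling parameter; larger $\gamma$ makes responding more deterministic:

```latex
P(A \mid x) =
  \frac{\left[\sum_{a \in A} s(x, x_a)\right]^{\gamma}}
       {\sum_{K} \left[\sum_{k \in K} s(x, x_k)\right]^{\gamma}}
```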


Cognitive Science | 2008

Measuring Model Flexibility With Parameter Space Partitioning: An Introduction and Application Example

Mark A. Pitt; Jay I. Myung; Maximiliano Montenegro; James Pooley

A primary criterion on which models of cognition are evaluated is their ability to fit empirical data. To understand the reason why a model yields a good or poor fit, it is necessary to determine the data-fitting potential (i.e., flexibility) of the model. In the first part of this article, methods for comparing models and studying their flexibility are reviewed, with a focus on parameter space partitioning (PSP), a general-purpose method for analyzing and comparing all classes of cognitive models. PSP is then demonstrated in the second part of the article in which two connectionist models of speech perception (TRACE and ARTphone) are compared to learn how design differences affect model flexibility.

Collaboration


Dive into Jay I. Myung's collaborations.


Top Co-Authors

Yun Tang, Ohio State University
Fang Hou, Ohio State University