Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where In Jae Myung is active.

Publication


Featured research published by In Jae Myung.


Journal of Mathematical Psychology | 2003

Tutorial on maximum likelihood estimation

In Jae Myung

In this paper, I provide a tutorial exposition on maximum likelihood estimation (MLE). The intended audience of this tutorial is researchers who practice mathematical modeling of cognition but are unfamiliar with the estimation method. Unlike least-squares estimation, which is primarily a descriptive tool, MLE is a preferred method of parameter estimation in statistics and is an indispensable tool for many statistical modeling techniques, in particular in nonlinear modeling with non-normal data. The purpose of this paper is to provide a good conceptual explanation of the method with illustrative examples so that the reader can grasp some of its basic principles.
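
As a minimal, hypothetical illustration of the method (not taken from the paper; the model, data, and starting values are all invented), the sketch below fits an exponential retention model to binomial data by minimizing the negative log-likelihood with scipy:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical illustration (model, data, and start values all invented):
# fit an exponential retention model, p(correct at lag t) = a * exp(-b*t),
# to binomial data by minimizing the negative log-likelihood.
t = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # retention intervals
k = np.array([18, 15, 11, 7, 4])           # correct responses per interval
n = 20                                     # trials per interval

def neg_log_likelihood(params):
    a, b = params
    p = np.clip(a * np.exp(-b * t), 1e-9, 1 - 1e-9)   # keep p in (0, 1)
    # binomial log-likelihood, dropping the constant binomial coefficient
    return -np.sum(k * np.log(p) + (n - k) * np.log(1 - p))

result = minimize(neg_log_likelihood, x0=[0.9, 0.1], method="Nelder-Mead")
a_hat, b_hat = result.x
print(f"MLE estimates: a = {a_hat:.3f}, b = {b_hat:.3f}")
```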


Psychonomic Bulletin & Review | 1997

Applying Occam’s razor in modeling cognition: A Bayesian approach

In Jae Myung; Mark A. Pitt

In mathematical modeling of cognition, it is important to have well-justified criteria for choosing among differing explanations (i.e., models) of observed data. This paper introduces a Bayesian model selection approach that formalizes Occam’s razor, choosing the simplest model that describes the data well. The choice of a model is carried out by taking into account not only the traditional model selection criteria (i.e., a model’s fit to the data and the number of parameters) but also the extension of the parameter space, and, most importantly, the functional form of the model (i.e., the way in which the parameters are combined in the model’s equation). An advantage of the approach is that it can be applied to the comparison of non-nested models as well as nested ones. Application examples are presented and implications of the results for evaluating models of cognition are discussed.
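
For a rough flavor of complexity-sensitive selection, here is a hypothetical sketch using BIC as a crude large-sample stand-in for the Bayesian marginal likelihood. Note that this is not the paper's computation: BIC only counts parameters, whereas the full Bayesian approach the paper develops is also sensitive to functional form and parameter range. All numbers below are invented.

```python
import numpy as np

# Hypothetical sketch: BIC as a crude stand-in for the Bayesian marginal
# likelihood. BIC penalizes only the parameter count; the paper's full
# Bayesian approach also penalizes functional form and parameter range.
def bic(neg_log_lik, n_params, n_obs):
    return 2 * neg_log_lik + n_params * np.log(n_obs)

# Suppose two models were fit to the same 100 observations by maximum
# likelihood (made-up fits): model A has 2 parameters, model B has 4.
bic_a = bic(neg_log_lik=52.1, n_params=2, n_obs=100)
bic_b = bic(neg_log_lik=50.8, n_params=4, n_obs=100)
print("prefer", "A" if bic_a < bic_b else "B")  # lower BIC wins: A here
```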


Trends in Cognitive Sciences | 2002

When a good fit can be bad

Mark A. Pitt; In Jae Myung

How should we select among computational models of cognition? Although it is commonplace to measure how well each model fits the data, this is insufficient. Good fits can be misleading because they can result from properties of the model that have nothing to do with its being a close approximation to the cognitive process of interest (e.g., overfitting). Selection methods are introduced that factor in these properties when measuring fit. Their success in outperforming standard goodness-of-fit measures stems from a focus on measuring the generalizability of a model's data-fitting abilities, which should be the goal of model selection.
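
One generic way to score generalizability rather than raw fit (a sketch with invented data, not the specific selection methods the paper introduces) is cross-validation. In the example below, the most flexible polynomial fits the training points best yet generalizes worst, because its extra flexibility is spent fitting noise:

```python
import numpy as np

# Hypothetical illustration (invented data): leave-one-out cross-validation
# scores generalizability instead of raw fit. The truth is a straight line,
# so high-degree polynomials overfit and get worse CV error.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 40)
y = 2.0 * x + rng.normal(0.0, 0.3, size=x.size)  # truly linear + noise

for degree in (1, 5, 9):
    errors = []
    for i in range(x.size):                      # hold out one point at a time
        mask = np.arange(x.size) != i
        coeffs = np.polyfit(x[mask], y[mask], degree)
        pred = np.polyval(coeffs, x[i])
        errors.append((pred - y[i]) ** 2)
    print(f"degree {degree}: mean CV error = {np.mean(errors):.4f}")
```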


Journal of Experimental Psychology: General | 1992

An adaptive approach to human decision making: Learning theory, decision theory, and human performance

Jerome R. Busemeyer; In Jae Myung

This article describes a general model of decision rule learning, the rule competition model, composed of two parts: an adaptive network model that describes how individuals learn to predict the payoffs produced by applying each decision rule in any given situation, and a hill-climbing model that describes how individuals learn to fine-tune each rule by adjusting its parameters. The model was tested and compared with other models in three experiments on probabilistic categorization. The first experiment was designed to test the adaptive network model using a probability learning task, the second to test the parameter search process using a criterion learning task, and the third to test both parts of the model simultaneously using a task that required learning both category rules and cutoff criteria.
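
A loose sketch of the two parts, using an invented payoff environment and hypothetical parameter values (none of this is the paper's actual specification): a delta-rule update learns each rule's expected payoff, while a simple hill climber tunes one rule parameter.

```python
import numpy as np

# Hypothetical sketch of the model's two parts under a made-up environment:
# (1) a delta-rule update learns the expected payoff of each decision rule,
# (2) hill climbing tunes one rule parameter (a cutoff): keep stepping in
#     the same direction while payoff improves, otherwise reverse.
rng = np.random.default_rng(1)
n_rules, lr, eps = 3, 0.1, 0.1
payoff_estimate = np.zeros(n_rules)          # learned payoff predictions
cutoff, step, last_payoff = 0.3, 0.05, -np.inf

def payoff(rule, cutoff):                    # invented payoff function
    base = (0.2, 0.6, 0.4)[rule]
    return base - (cutoff - 0.7) ** 2 + rng.normal(0, 0.05)

for _ in range(1000):
    if rng.random() < eps:                   # occasional exploration
        rule = int(rng.integers(n_rules))
    else:
        rule = int(np.argmax(payoff_estimate))
    r = payoff(rule, cutoff)
    payoff_estimate[rule] += lr * (r - payoff_estimate[rule])  # delta rule
    if r > last_payoff:                      # hill climbing on the cutoff
        cutoff += step
    else:
        step = -step
        cutoff += step
    last_payoff = r

print(payoff_estimate.round(2), round(cutoff, 2))  # best rule ≈ index 1
```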


Memory & Cognition | 2000

Toward an explanation of the power law artifact: Insights from response surface analysis

In Jae Myung; Cheongtag Kim; Mark A. Pitt

The power law (y = ax^(-b)) has been shown to provide a good description of data collected in a wide range of fields in psychology. R. B. Anderson and Tweney (1997) suggested that the model's data-fitting success may in part be artifactual, caused by a number of factors, one of which is the use of improper data-averaging methods. The present paper follows up on their work and explains causes of the power law artifact. A method for studying the geometric relations among responses generated by mathematical models is introduced that shows the artifact results from the combined contributions of three factors: arithmetic averaging of the data, the nonlinearity of the underlying model, and individual differences.
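
The artifact is easy to reproduce in simulation. In the hypothetical sketch below (all values invented), every simulated subject forgets exponentially, yet the arithmetic average across subjects with differing rates is captured better by a power function than by any single exponential:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical demonstration of the averaging artifact: each simulated
# subject follows an exponential curve, but individual differences in rate
# make the arithmetic mean across subjects look like a power function.
rng = np.random.default_rng(0)
t = np.arange(1.0, 21.0)
rates = rng.uniform(0.05, 1.0, size=200)               # individual differences
mean_curve = np.exp(-np.outer(rates, t)).mean(axis=0)  # arithmetic averaging

def expo(t, a, b):
    return a * np.exp(-b * t)

def power(t, a, b):
    return a * t ** (-b)

for name, f in (("exponential", expo), ("power", power)):
    params, _ = curve_fit(f, t, mean_curve, p0=(1.0, 0.5), maxfev=10000)
    sse = np.sum((f(t, *params) - mean_curve) ** 2)
    print(f"{name} fit SSE = {sse:.6f}")               # power fits far better
```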


Cognitive Psychology | 2004

Assessing the distinguishability of models and the informativeness of data.

Daniel J. Navarro; Mark A. Pitt; In Jae Myung

A difficulty in the development and testing of psychological models is that they are typically evaluated solely on their ability to fit experimental data, with little consideration given to their ability to fit other possible data patterns. By examining how well model A fits data generated by model B, and vice versa (a technique that we call landscaping), much safer inferences can be made about the meaning of a model's fit to data. We demonstrate the landscaping technique using four models of retention and 77 historical data sets, and show how the method can be used to (1) evaluate the distinguishability of models, (2) evaluate the informativeness of data in distinguishing between models, and (3) suggest new ways to distinguish between models. The generality of the method is demonstrated in two other research areas (information integration and categorization), and its relationship to the important notion of model complexity is discussed.
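
A minimal model-recovery sketch in the spirit of landscaping, with invented retention models and noise levels (not the paper's four models or 77 data sets): generate data from one model, fit both candidates, and count which wins; swapping the roles of the two models maps the landscape in the other direction.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical landscaping-style model recovery: data come from the power
# model; both candidate models are fit to each data set, and we count how
# often the generating model fits its own data best.
rng = np.random.default_rng(0)
t = np.arange(1.0, 11.0)

def expo(t, a, b):
    return a * np.exp(-b * t)

def power(t, a, b):
    return a * t ** (-b)

def fit_sse(model, y):
    params, _ = curve_fit(model, t, y, p0=(1.0, 0.5), maxfev=10000)
    return np.sum((model(t, *params) - y) ** 2)

wins = 0
for _ in range(200):
    y = power(t, 1.0, 0.5) + rng.normal(0, 0.05, t.size)  # data from power model
    if fit_sse(power, y) < fit_sse(expo, y):
        wins += 1
print(f"power model fit its own data best in {wins}/200 runs")
```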


Psychonomic Bulletin & Review | 2003

Flexibility versus generalizability in model selection

Mark A. Pitt; Woojae Kim; In Jae Myung

Which quantitative method should be used to choose among competing mathematical models of cognition? Massaro, Cohen, Campbell, and Rodriguez (2001) favor root mean squared deviation (RMSD), choosing the model that provides the best fit to the data. Their simulation results appear to legitimize its use for comparing two models of information integration because it performed just as well as Bayesian model selection (BMS), which Myung and Pitt (1997) had previously shown to be a superior alternative because it considers a model's complexity in addition to its fit. In the present study, after contrasting the theoretical approaches to model selection espoused by Massaro et al. and by Myung and Pitt, we discuss the cause of the inconsistent results by expanding on the simulations of Massaro et al. The findings demonstrate that results from model recovery simulations can be misleading if they are not interpreted relative to the data on which they were evaluated, and that BMS is a more robust selection method.
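
To make the contrast concrete, here is a hypothetical sketch of the two kinds of score (everything invented; BIC is used only as a stand-in for a complexity-sensitive Bayesian criterion, not the BMS computation in the paper): RMSD rewards raw fit alone, while BIC also charges for parameters.

```python
import numpy as np

# Hypothetical sketch: RMSD scores raw fit, while BIC (a stand-in for a
# complexity-sensitive criterion) also penalizes the parameter count.
def rmsd(observed, predicted):
    return np.sqrt(np.mean((observed - predicted) ** 2))

def bic(observed, predicted, n_params, sigma=0.05):
    # Gaussian log-likelihood with a known error sd (an assumption here)
    neg_ll = np.sum((observed - predicted) ** 2) / (2 * sigma ** 2)
    return 2 * neg_ll + n_params * np.log(observed.size)

obs = np.array([0.61, 0.72, 0.80, 0.88])          # made-up data
pred_simple = np.array([0.60, 0.71, 0.82, 0.87])  # 2-parameter model
pred_complex = obs.copy()                         # 4-parameter model, perfect fit
print(rmsd(obs, pred_simple), rmsd(obs, pred_complex))      # RMSD picks complex
print(bic(obs, pred_simple, 2), bic(obs, pred_complex, 4))  # BIC picks simple
```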


Psychological Science | 1993

Cue Competition Effects: Empirical Tests of Adaptive Network Learning Models

Jerome R. Busemeyer; In Jae Myung; Mark A. McDaniel

The ability to predict future consequences on the basis of previous experience with the current set of environmental cues is one of the most fundamental of all cognitive processes. This study investigated how the validity of one cue influences the effectiveness of another cue for predicting a criterion. The results demonstrate a cue competition effect—increasing the validity of one cue decreased the effectiveness of another cue in a linear prediction task, even though the two cues were statistically independent.


Journal of Experimental Psychology: Learning, Memory and Cognition | 1988

A New Method for Investigating Prototype Learning

Jerome R. Busemeyer; In Jae Myung

It seems quite easy to produce an image of an ideal circle despite the fact that our experience is based on thousands of different imperfect examples. This natural ability to abstract and reproduce a single image from a myriad of examples is often referred to as prototype learning. The purpose of this article is to describe a paradigm for investigating prototype learning. Subjects are shown a sequence of exemplars generated from one or more prototypes. After observing each exemplar, they are asked to reproduce (graphically or numerically) their current estimate of each prototype. Obviously, this procedure is limited to stimuli that can be easily reproduced by the subject.

It may be useful to compare the prototype production task with the categorization task introduced by Posner and Keele (1968, 1970). Exactly the same exemplars can be used during training in both tasks. In the categorization task, subjects are presented with an exemplar and then are asked to produce a category label. In the prototype production task, subjects are presented with a category label and then are asked to produce a prototype estimate. The storage of exemplar information may be quite similar in the two tasks (e.g., a multiple-trace memory system, or a composite distributed-memory system). However, the use of this stored information is quite different: The prototype production task requires some sort of abstraction procedure (e.g., form an average of the traces associated with a category), whereas the categorization task requires some sort of classification procedure (e.g., choose the category associated with a trace that is most similar to the probe).

There are several reasons for investigating the prototype production task. First, it is a naturally occurring task. Prototypic drawings of organs, bones, and cell structures frequently appear in physiological and medical textbooks. A second example is the use of prototypic symptom patterns to describe…


Psychological Science | 1993

Cue Competition Effects: Theoretical Implications for Adaptive Network Learning Models

Jerome R. Busemeyer; In Jae Myung; Mark A. McDaniel

A feature-free method for testing adaptive network learning models is presented. The test is based on a property called the mean matching law, a property shared by many adaptive network models of learning. As an application of this method, we prove that cue competition effects obtained with statistically independent cues cannot be explained by many previous adaptive network learning models, including those based on the delta learning rule. These results point to the need to incorporate competitive learning properties into adaptive network learning models.
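
The delta rule's prediction for independent cues can be checked in a small simulation (a hypothetical sketch with invented validities, not the paper's formal proof): the asymptotic weight on cue 2 stays near cue 2's own validity no matter how valid cue 1 is, so a pure delta-rule learner shows no cue competition in this design.

```python
import numpy as np

# Hypothetical simulation: train the delta (LMS) rule on two statistically
# independent binary cues plus a bias input. With independent cues, the
# weight learned for cue 2 tracks cue 2's own validity and is unaffected
# by cue 1's validity, so the delta rule predicts no cue competition here.
rng = np.random.default_rng(0)

def train_delta(validity1, validity2=0.4, n=50000, lr=0.02):
    w = np.zeros(3)                                    # bias, cue 1, cue 2
    for _ in range(n):
        cues = rng.integers(0, 2, size=2).astype(float)
        x = np.array([1.0, cues[0], cues[1]])          # constant bias input
        p = 0.5 + validity1 * (cues[0] - 0.5) + validity2 * (cues[1] - 0.5)
        y = float(rng.random() < p)                    # binary outcome
        w += lr * (y - w @ x) * x                      # delta rule update
    return w

for v1 in (0.2, 0.6):
    w = train_delta(v1)
    print(f"cue-1 validity {v1}: learned cue-2 weight = {w[2]:.2f}")
# the cue-2 weight hovers near 0.4 in both runs
```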

Collaboration


Dive into In Jae Myung's collaborations.

Top Co-Authors

Jerome R. Busemeyer

Indiana University Bloomington


Mark A. McDaniel

Washington University in St. Louis
