
Publication


Featured research published by William H. Batchelder.


Psychonomic Bulletin & Review | 1999

Theoretical and empirical review of multinomial process tree modeling.

William H. Batchelder; David M. Riefer

We review a current and popular class of cognitive models called multinomial processing tree (MPT) models. MPT models are simple, substantively motivated statistical models that can be applied to categorical data. They are useful as data-analysis tools for measuring underlying or latent cognitive capacities and as simple models for representing and testing competing psychological theories. We formally describe the cognitive structure and parametric properties of the class of MPT models and provide an inferential statistical analysis for the entire class. Following this, we provide a comprehensive review of over 80 applications of MPT models to a variety of substantive areas in cognitive psychology, including various types of human memory, visual and auditory perception, and logical reasoning. We then address a number of theoretical issues relevant to the creation and evaluation of MPT models, including model development, model validity, discrete-state assumptions, statistical issues, and the relation between MPT models and other mathematical models. In the conclusion, we consider the current role of MPT models in psychological research and possible future directions.
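
To make the MPT idea concrete, here is a minimal sketch, not a model from the paper: a toy one-high-threshold memory model whose two illustrative parameters, D (detection) and g (guessing "old"), generate four response-category probabilities that are then fit to hypothetical counts by maximum likelihood.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative one-high-threshold MPT (hypothetical, not a model from the paper):
#   old items:  P(hit)  = D + (1 - D) * g      P(miss) = (1 - D) * (1 - g)
#   new items:  P(false alarm) = g             P(correct rejection) = 1 - g
def category_probs(D, g):
    return {"hit": D + (1 - D) * g, "miss": (1 - D) * (1 - g), "fa": g, "cr": 1 - g}

def neg_log_lik(params, counts):
    p = category_probs(*params)
    # multinomial log-likelihood, up to a constant that does not depend on the parameters
    return -sum(counts[c] * np.log(p[c]) for c in counts)

counts = {"hit": 70, "miss": 30, "fa": 20, "cr": 80}        # hypothetical data
res = minimize(neg_log_lik, x0=[0.5, 0.5], args=(counts,),
               bounds=[(1e-6, 1 - 1e-6)] * 2)
print(dict(zip(["D", "g"], res.x)))                          # maximum-likelihood estimates
```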


Psychometrika | 1994

The statistical analysis of general processing tree models with the EM algorithm

Xiangen Hu; William H. Batchelder

Multinomial processing tree models assume that an observed behavior category can arise from one or more processing sequences represented as branches in a tree. These models form a subclass of parametric, multinomial models, and they provide a substantively motivated alternative to loglinear models. We consider the usual case where branch probabilities are products of nonnegative integer powers in the parameters, 0 ≤ θ_s ≤ 1, and their complements, 1 − θ_s. A version of the EM algorithm is constructed that has very strong properties. First, the E-step and the M-step are both analytic and computationally easy; therefore, a fast PC program can be constructed for obtaining MLEs for large numbers of parameters. Second, a closed form expression for the observed Fisher information matrix is obtained for the entire class. Third, it is proved that the algorithm necessarily converges to a local maximum, and this is a stronger result than for the exponential family as a whole. Fourth, we show how the algorithm can handle quite general hypothesis tests concerning restrictions on the model parameters. Fifth, we extend the algorithm to handle the Read and Cressie power divergence family of goodness-of-fit statistics. The paper includes an example to illustrate some of these results.
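
As a rough illustration of the EM scheme described above, a sketch under simplifying assumptions rather than the paper's program: the same toy high-threshold model can be written with branch probabilities as products of powers of θ_s and 1 − θ_s; the E-step splits each category count across its branches, and the M-step is the closed-form ratio update. The branch table and counts are hypothetical.

```python
import numpy as np

# EM sketch for an MPT of the form described above: branch j leads to category
# branch_cat[j] with probability prod_s theta[s]**A[j, s] * (1 - theta[s])**B[j, s].
# The branch table encodes the illustrative high-threshold model (theta = [D, g]);
# it is not a model taken from the paper.
cats = ["hit", "miss", "fa", "cr"]
branch_cat = ["hit", "hit", "miss", "fa", "cr"]            # category each branch feeds
A = np.array([[1, 0], [0, 1], [0, 0], [0, 1], [0, 0]])     # exponents of theta_s
B = np.array([[0, 0], [1, 0], [1, 1], [0, 0], [0, 1]])     # exponents of (1 - theta_s)
counts = {"hit": 70, "miss": 30, "fa": 20, "cr": 80}        # hypothetical category counts

theta = np.array([0.5, 0.5])
for _ in range(200):
    p_branch = np.prod(theta ** A * (1 - theta) ** B, axis=1)
    # E-step: split each observed category count across its branches
    m = np.zeros(len(p_branch))
    for c in cats:
        idx = [j for j, cj in enumerate(branch_cat) if cj == c]
        m[idx] = counts[c] * p_branch[idx] / p_branch[idx].sum()
    # M-step: closed-form update, a ratio of expected exponent counts
    theta = (m @ A) / (m @ (A + B))
print(dict(zip(["D", "g"], theta)))
```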


Psychometrika | 1988

Test theory without an answer key

William H. Batchelder; A. Kimball Romney

A general model is presented for homogeneous, dichotomous items when the answer key is not known a priori. The model is structurally related to the two-class latent structure model with the roles of respondents and items interchanged. For very small sets of respondents, iterative maximum likelihood estimates of the parameters can be obtained by existing methods. For other situations, new estimation methods are developed and assessed with Monte Carlo data. The answer key can be accurately reconstructed with relatively small sets of respondents. The model is useful when a researcher wants to study objectively the knowledge possessed by members of a culturally coherent group that the researcher is not a member of.
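
A toy iterative sketch of the underlying idea, with hypothetical data and not the paper's estimators: the unknown answer key can be reconstructed by alternating a competence-weighted vote over respondents with re-estimation of each respondent's agreement with the current key.

```python
import numpy as np

# Toy sketch: alternate (1) reconstructing the unknown answer key by a
# competence-weighted vote and (2) re-estimating each respondent's competence
# as agreement with that key. All numbers are hypothetical.
rng = np.random.default_rng(0)
true_key = rng.integers(0, 2, size=40)                   # 40 hypothetical dichotomous items
competence = np.array([0.9, 0.8, 0.7, 0.6, 0.55])        # hypothetical respondent competences
X = np.array([[k if rng.random() < c else 1 - k for k in true_key]
              for c in competence])                      # simulated 0/1 answer matrix

w = np.ones(len(X))                                      # start with equal weights
for _ in range(20):
    key = (w @ X / w.sum() > 0.5).astype(int)            # competence-weighted majority vote
    agree = (X == key).mean(axis=1)                      # agreement with the current key
    w = np.clip(2 * agree - 1, 1e-3, None)               # crude competence estimate
print("fraction of key recovered:", (key == true_key).mean())
```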


Journal of Experimental Psychology: Learning, Memory and Cognition | 1994

Response strategies in source monitoring.

David M. Riefer; Xiangen Hu; William H. Batchelder

This article examines the role that response strategies play in a memory paradigm known as source monitoring. It is argued that several different response biases can interact to confound the interpretation of source-monitoring data. This problem is illustrated with 2 empirical examples, taken from the psychological literature, which examine the role of source monitoring in the generation effect and the picture superiority effect. To resolve this problem, a new multinomial model for source monitoring is presented that is capable of separately measuring memory factors from response-bias factors. The model, when applied to the results of 2 new experiments, results in a clearer picture of which source-monitoring variables are instrumental in the generation effect and the picture superiority effect.


Psychological Assessment | 2002

Cognitive psychometrics: assessing storage and retrieval deficits in special populations with multinomial processing tree models.

David M. Riefer; Bethany R. Knapp; William H. Batchelder; Donald Bamber; Victor Manifold

This article demonstrates how multinomial processing tree models can be used as assessment tools to measure cognitive deficits in clinical populations. This is illustrated with a model developed by W. H. Batchelder and D. M. Riefer (1980) that separately measures storage and retrieval processes in memory. The validity of the model is tested in 2 experiments, which show that presentation rate affects the storage of items (Experiment 1) and part-list cuing hurts item retrieval (Experiment 2). Experiments 3 and 4 examine 2 clinical populations: schizophrenics and alcoholics with organic brain damage. The model reveals that each group exhibits deficits in storage and retrieval, with the retrieval deficits being stronger and occurring more consistently over trials. Also, the alcoholics with organic brain damage show no improvement in retrieval over trials, although their storage improves at the same rate as a control group.


Psychological Assessment | 1998

Multinomial processing tree models and psychological assessment.

William H. Batchelder

Psychological assessment often focuses on individual participants in testing situations. Psychometric models for assessment include parameters for individual and item differences, but they rarely model the cognitive processes involved in responding to test items. Information-processing models of cognition focus on psychological mechanisms; however, they are rarely used in assessment situations. This article discusses a class of information-processing models for categorical data called multinomial processing tree (MPT) models. While MPT models have been developed mostly for experimental situations, there is a largely untapped potential for using them for assessment. Thus, the goal of this article is to discuss how MPT models can be developed into cognitively based psychometric tools.


Archive | 1991

Statistical Inference for Multinomial Processing Tree Models

David M. Riefer; William H. Batchelder

This paper addresses the issue of statistical inference for multinomial processing tree models of cognition. An important question in the statistical analysis of multinomial models concerns the accuracy of asymptotic formulas when they are applied to actual cases involving finite samples. To explore this question, we present the results of an extensive analytic and Monte Carlo investigation of loglikelihood ratio inference procedures for our multinomial model for storage and retrieval. We demonstrate how to estimate bias in the parameters, set confidence intervals for estimators, calculate power for various hypothesis tests, and estimate the sample size needed to justify the use of asymptotic theory in real settings. Also, we study the impact of moderate amounts of individual differences in the parameters. The results of the Monte Carlo simulations reveal that the storage-retrieval model is fairly robust for sample sizes around 150, and they also reveal those conditions under which larger sample sizes will be needed. The paper is structured to show potential users of multinomial models how to carry out these types of simulations for other models, and what findings and recommendations they can expect to find along the way.
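
A minimal Monte Carlo sketch in this spirit, using a simple high-threshold model with hypothetical parameter values rather than the storage-retrieval model of the paper: simulate finite samples at known parameter values, re-estimate, and summarize bias and a percentile interval.

```python
import numpy as np

# Monte Carlo sketch: simulate finite samples from a simple high-threshold model,
# re-estimate its parameters, and summarize bias and a percentile confidence interval.
# Parameter values and sample sizes are hypothetical.
rng = np.random.default_rng(1)
D_true, g_true, n_old, n_new = 0.6, 0.2, 150, 150

def estimate(hits, fas):
    g = fas / n_new                                   # guessing rate from new items
    D = (hits / n_old - g) / (1 - g)                  # detection rate from old items
    return float(np.clip(D, 0.0, 1.0)), g

reps = np.array([estimate(rng.binomial(n_old, D_true + (1 - D_true) * g_true),
                          rng.binomial(n_new, g_true))
                 for _ in range(5000)])
print("bias in D:", reps[:, 0].mean() - D_true)
print("95% interval for D:", np.percentile(reps[:, 0], [2.5, 97.5]))
```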


Psychological Review | 1994

Measuring memory factors in source monitoring: Reply to Kinchla

William H. Batchelder; David M. Riefer; Xiangen Hu

Kinchla criticizes Batchelder and Riefer's multinomial model for source monitoring, primarily its high-threshold assumptions, and he advocates an approach based on statistical decision theory (SDT). In this reply, the authors lay out some of the considerations that led to their model and then raise some specific concerns with Kinchla's critique.


Journal of Mathematical Psychology | 1979

The statistical analysis of a thurstonian model for rating chess players

William H. Batchelder; Neil J. Bershad

This paper formalizes and provides static and dynamic estimators for a scaling model for rating chess players. The model was suggested by the work of Arpad Elo, the inventor of the chess rating system in current use by both the United States and international chess federations. The model can be viewed as a Thurstone Case V model that permits draws (ties). A related model based on a linear approximation is also analyzed. In the chess application, possibly changing ability parameters are estimated sequentially from sparse data structures that often involve many fewer than M(M − 1)/2 observations on the M players to be rated. In contrast, psychological applications of paired-comparison scaling generally use models with no draw provision to estimate static parameters from a systematically obtained data structure such as a replicated "round robin" involving all M entities to be scaled. In the paper, both static and sequential estimators are provided and evaluated for a number of different data structures. Sampling theory for the estimators is developed. The application of rating systems to track temporally changing ability parameters may prove useful in many areas of psychology.
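
A hedged sketch of the sequential (Elo-style) side of this idea, not the paper's estimators: under a Thurstone Case V view, the expected score against an opponent is a normal CDF of the rating difference, and ratings are nudged toward the observed result (1 = win, 0.5 = draw, 0 = loss). The scale constant and step size K below are illustrative.

```python
from math import erf, sqrt

# Sequential Elo-style rating update under a Thurstone Case V view (a sketch with
# illustrative constants, not the paper's estimators).
SCALE, K = 400.0, 16.0                      # hypothetical rating-scale and step-size constants

def expected_score(r_i, r_j):
    z = (r_i - r_j) / SCALE
    return 0.5 * (1 + erf(z / sqrt(2)))     # standard normal CDF of the scaled difference

def update(r_i, r_j, score_i):
    e = expected_score(r_i, r_j)
    return r_i + K * (score_i - e), r_j + K * ((1 - score_i) - (1 - e))

r_a, r_b = 1600.0, 1500.0
r_a, r_b = update(r_a, r_b, 0.5)            # the higher-rated player concedes a draw
print(round(r_a, 1), round(r_b, 1))
```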


Psychometrika | 2003

Markov chain estimation for test theory without an answer key

George Karabatsos; William H. Batchelder

This study develops Markov Chain Monte Carlo (MCMC) estimation theory for the General Condorcet Model (GCM), an item response model for dichotomous response data which does not presume the analyst knows the correct answers to the test a priori (answer key). In addition to the answer key, respondent ability, guessing bias, and difficulty parameters are estimated. With respect to data fit, the study compares the possible GCM formulations, using MCMC-based methods for model assessment and model selection. Real data applications and a simulation study show that the GCM can accurately reconstruct the answer key from a small number of respondents.
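
A stripped-down Metropolis-within-Gibbs sketch of this kind of estimation, assuming unbiased guessing and no item difficulty; it is an illustration, not the paper's GCM sampler. The answer key is sampled from its full conditional given the abilities, and the abilities are updated by random-walk Metropolis. All sizes, values, and priors are simplifying assumptions.

```python
import numpy as np

# Metropolis-within-Gibbs sketch for a simplified answer-key model: Z_k is the
# unknown key, D_i the probability respondent i knows an item (else guesses 50/50).
rng = np.random.default_rng(2)
n_resp, n_item = 8, 30
true_D = rng.uniform(0.4, 0.9, n_resp)
true_Z = rng.integers(0, 2, n_item)
p_match = true_D[:, None] + (1 - true_D[:, None]) * 0.5
X = np.where(rng.random((n_resp, n_item)) < p_match, true_Z, 1 - true_Z)

Z = rng.integers(0, 2, n_item)            # initial key
D = np.full(n_resp, 0.5)                  # initial abilities
keep = []
for it in range(3000):
    p = D[:, None] + (1 - D[:, None]) * 0.5
    # Gibbs step for each key item: full conditional given abilities and data
    ll1 = np.sum(np.where(X == 1, np.log(p), np.log(1 - p)), axis=0)
    ll0 = np.sum(np.where(X == 0, np.log(p), np.log(1 - p)), axis=0)
    Z = (rng.random(n_item) < 1 / (1 + np.exp(ll0 - ll1))).astype(int)
    # random-walk Metropolis step for each respondent's ability (flat prior on (0, 1))
    prop = np.clip(D + rng.normal(0, 0.05, n_resp), 1e-3, 1 - 1e-3)
    def loglik(d):
        q = d[:, None] + (1 - d[:, None]) * 0.5
        return np.sum(np.where(X == Z, np.log(q), np.log(1 - q)), axis=1)
    accept = np.log(rng.random(n_resp)) < loglik(prop) - loglik(D)
    D = np.where(accept, prop, D)
    if it > 1000:
        keep.append(Z.copy())
key_est = (np.mean(keep, axis=0) > 0.5).astype(int)
print("key recovery:", (key_est == true_Z).mean())
```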

Collaboration


Dive into William H. Batchelder's collaborations.

Top Co-Authors

David M. Riefer
California State University

Xiangen Hu
University of California

Zita Oravecz
Pennsylvania State University

R. Anders
University of California

Jared B. Smith
University of California