
Publication


Featured research published by Barbara G. Dodd.


Applied Psychological Measurement | 1995

Computerized adaptive testing with polytomous items

Barbara G. Dodd; R. J. De Ayala; William R. Koch

Polytomous item response theory models and the research that has been conducted to investigate a variety of possible operational procedures for polytomous model-based computerized adaptive testing (CAT) are reviewed. Studies that compared polytomous CAT systems based on competing item response theory models appropriate for the same measurement objective are also reviewed, as are applications of polytomous CAT in marketing and educational psychology. Directions for future research using polytomous model-based CAT are suggested.
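
The adaptive cycle these studies examine can be sketched in a few lines. Everything below is a toy illustration, not any reviewed study's procedure: the dichotomous 1PL stand-in information function, the deterministic response rule, and the crude stepwise trait update stand in for the polytomous models and MLE/EAP scoring that operational CAT systems use.

```python
import math

def item_information(theta, item):
    """Fisher information of a dichotomous 1PL stand-in item; a polytomous
    CAT would use the category information of its own model instead."""
    p = 1.0 / (1.0 + math.exp(-(theta - item["difficulty"])))
    return p * (1.0 - p)

def adaptive_test(pool, true_theta, se_target=0.4):
    """Minimal CAT loop: pick the most informative unused item, score it,
    nudge the trait estimate, and stop once the standard error is small."""
    theta, administered, info_sum = 0.0, [], 0.0
    remaining = list(pool)
    se = float("inf")
    while remaining:
        item = max(remaining, key=lambda it: item_information(theta, it))
        remaining.remove(item)
        administered.append(item)
        # Deterministic stand-in response: "correct" if ability exceeds difficulty.
        score = 1 if true_theta >= item["difficulty"] else 0
        info_sum += item_information(theta, item)
        se = 1.0 / math.sqrt(info_sum)
        # Crude step update (a real CAT would re-estimate theta by MLE or EAP):
        theta += (score - 0.5) * 2 * se  # move up on correct, down on incorrect
        if se <= se_target:
            break
    return theta, len(administered), se
```

The structural point survives the simplifications: selection, scoring, trait update, and stopping-rule check repeat until the precision target is met or the pool is exhausted.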


Applied Psychological Measurement | 1989

Operational Characteristics of Adaptive Testing Procedures Using the Graded Response Model

Barbara G. Dodd; William R. Koch; Ralph J. De Ayala

The purpose of the present research was to develop general guidelines to assist practitioners in setting up operational computerized adaptive testing (CAT) systems based on the graded response model. Simulated data were used to investigate the effects of systematic manipulation of various aspects of the CAT procedures for the model. The effects of three major variables were examined: item pool size, the stepsize used along the trait continuum until maximum likelihood estimation could be calculated, and the stopping rule employed. The findings suggest three guidelines for graded response CAT procedures: (1) item pools with as few as 30 items may be adequate for CAT; (2) the variable-stepsize method is more useful than the fixed-stepsize methods; and (3) the minimum-standard-error stopping rule will yield fewer cases of nonconvergence, administer fewer items, and produce higher correlations of CAT θ estimates with full-scale estimates and the known θs than the minimum-information stopping rule. The implications of these findings for psychological assessment are discussed. Index terms: computerized adaptive testing, graded response model, item response theory, polychotomous scoring.
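
For reference, the graded response model underlying these procedures defines each category's probability as the difference between adjacent cumulative logistic curves. A minimal sketch, with the discrimination and ordered threshold values chosen purely for illustration:

```python
import math

def grm_category_probs(theta, a, thresholds):
    """Graded response model: P(X >= k) is a logistic curve at each
    ordered threshold b_k; category probabilities are the differences
    between adjacent cumulative curves."""
    cum = ([1.0]
           + [1.0 / (1.0 + math.exp(-a * (theta - b))) for b in thresholds]
           + [0.0])
    return [cum[k] - cum[k + 1] for k in range(len(thresholds) + 1)]
```

With ordered thresholds the cumulative curves are nested, so every category probability is nonnegative and the set sums to one, which is what makes the model usable for item selection and scoring in a CAT.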


Applied Psychological Measurement | 1990

The Effect of Item Selection Procedure and Stepsize on Computerized Adaptive Attitude Measurement Using the Rating Scale Model

Barbara G. Dodd

Real and simulated datasets were used to investigate the effects of the systematic variation of two major variables on the operating characteristics of computerized adaptive testing (CAT) applied to instruments consisting of polychotomously scored rating scale items. The two variables studied were the item selection procedure and the stepsize method used until maximum likelihood trait estimates could be calculated. The findings suggested that (1) item pools that consist of as few as 25 items may be adequate for CAT; (2) the variable stepsize method of preliminary trait estimation produced fewer cases of nonconvergence than the use of a fixed stepsize procedure; and (3) the scale value item selection procedure used in conjunction with a minimum standard error stopping rule outperformed the information item selection technique used in conjunction with a minimum information stopping rule in terms of the frequencies of nonconvergent cases, the number of items administered, and the correlations of CAT θ estimates with full scale estimates and known θ values. The implications of these findings for implementing CAT with rating scale items are discussed.


Applied Psychological Measurement | 1987

Effects of Variations in Item Step Values on Item and Test Information in the Partial Credit Model

Barbara G. Dodd; William R. Koch

Simulated data were used to investigate systematically the impact of various orderings of step difficulties on the distribution of item information for the partial credit model. It was found that the distribution of information for an item was a function of (1) the range of the step difficulty values, (2) the number of step difficulties that were out of sequential order, and (3) the distance between the step values that were out of order. Also, by using relative efficiency comparisons, the relationship between the step estimates and the distribution of item information was used to demonstrate the effects of various test revisions (through the addition and/or deletion of items with specific step characteristics) on the resulting test's precision of measurement. The usefulness of item and test information functions for specific measurement applications of the partial credit model is also discussed.
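
The quantity at issue, item information for the partial credit model, equals the variance of the item score under the model, so it can be computed directly from the category probabilities. A sketch, with step values chosen only for illustration:

```python
import math

def pcm_probs(theta, deltas):
    """Partial credit model category probabilities for one item.
    deltas are step difficulties; category k has logit sum_{j<=k}(theta - delta_j)."""
    logits = [0.0]
    for d in deltas:
        logits.append(logits[-1] + (theta - d))
    m = max(logits)                          # guard against overflow
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def pcm_item_information(theta, deltas):
    """For the PCM, item information at theta is the variance of the score."""
    probs = pcm_probs(theta, deltas)
    mean = sum(k * p for k, p in enumerate(probs))
    return sum((k - mean) ** 2 * p for k, p in enumerate(probs))
```

Evaluating the information at a fixed trait level for ordered versus reversed step difficulties makes the manipulation in the study concrete: changing the ordering of the steps reshapes how information is distributed over the trait continuum.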


Educational and Psychological Measurement | 1998

A Comparison of Maximum Likelihood Estimation and Expected A Posteriori Estimation in CAT Using the Partial Credit Model

Ssu-Kuang Chen; Liling Hou; Barbara G. Dodd

A simulation study was conducted to investigate the application of expected a posteriori (EAP) trait estimation in computerized adaptive tests (CAT) based on the partial credit model and compare it with maximum likelihood trait estimation (MLE). The performance of EAP was evaluated under different conditions: the number of quadrature points (10, 20, 40, and 80) and the type of prior distribution (normal and uniform). The relative performance of MLE and the EAP estimation methods was assessed under two distributional forms of the latent trait (normal and negatively skewed). Results showed that, regardless of the latent trait distribution, MLE and EAP with a normal prior or a uniform prior using either 20, 40, or 80 quadrature points provided relatively accurate estimation in CAT based on the partial credit model. Also, increasing the number of quadrature points from 20 to 80 did not increase the accuracy of EAP estimation.
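
The EAP mechanics being compared reduce to numerical integration of the posterior over a fixed grid of quadrature points. A dichotomous 2PL stand-in keeps the sketch short; the quadrature logic is the same for the partial credit model used in the study:

```python
import math

def eap_estimate(responses, items, n_points=40, prior="normal"):
    """Expected a posteriori trait estimate on a fixed quadrature grid.
    `items` are (a, b) pairs of a 2PL stand-in likelihood; the prior is
    evaluated pointwise, so normalization constants cancel."""
    grid = [-4.0 + 8.0 * i / (n_points - 1) for i in range(n_points)]
    if prior == "normal":
        weights = [math.exp(-0.5 * q * q) for q in grid]
    else:  # uniform prior
        weights = [1.0] * n_points
    post = []
    for q, w in zip(grid, weights):
        like = w
        for (a, b), x in zip(items, responses):
            p = 1.0 / (1.0 + math.exp(-a * (q - b)))
            like *= p if x == 1 else (1.0 - p)
        post.append(like)
    total = sum(post)
    # Posterior mean = EAP estimate.
    return sum(q * p for q, p in zip(grid, post)) / total
```

Because the posterior is smooth, doubling the number of quadrature points barely moves the estimate, which is consistent with the study's finding that 20 to 80 points perform comparably.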


Educational and Psychological Measurement | 1993

Computerized Adaptive Testing Using the Partial Credit Model: Effects of Item Pool Characteristics and Different Stopping Rules

Barbara G. Dodd; William R. Koch; Ralph J. De Ayala

Simulated datasets were used to research the effects of the systematic variation of three major variables on the performance of computerized adaptive testing (CAT) procedures for the partial credit model. The three variables studied were the stopping rule for terminating the CATs, item pool size, and the distribution of the difficulty of the items in the pool. Results indicated that the standard error stopping rule performed better across the variety of CAT conditions than the minimum information stopping rule. In addition it was found that item pools that consisted of as few as 30 items were adequate for CAT provided that the item pool was of medium difficulty. The implications of these findings for implementing CAT systems based on the partial credit model are discussed.
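
The two stopping rules compared here differ only in what they monitor, which a pair of predicates makes concrete (the thresholds below are illustrative, not the study's values):

```python
import math

def stop_standard_error(info_sum, se_target=0.35):
    """Standard error rule: stop once SE = 1/sqrt(total information)
    reaches the target, so measurement precision ends the test."""
    return info_sum > 0 and 1.0 / math.sqrt(info_sum) <= se_target

def stop_minimum_information(best_remaining_info, info_floor=0.1):
    """Minimum information rule: stop when the most informative item
    still in the pool contributes less than the floor."""
    return best_remaining_info < info_floor
```

The standard error rule keys on the accumulated precision of the estimate, while the minimum information rule keys on what the pool can still offer, which is why their behavior diverges when pool difficulty and examinee trait levels are mismatched.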


Educational and Psychological Measurement | 2011

A New Stopping Rule for Computerized Adaptive Testing

Seung W. Choi; Matthew W. Grady; Barbara G. Dodd

The goal of the current study was to introduce a new stopping rule for computerized adaptive testing (CAT). The predicted standard error reduction (PSER) stopping rule uses the predictive posterior variance to determine the reduction in standard error that would result from the administration of additional items. The performance of the PSER was compared with that of the minimum standard error stopping rule and a modified version of the minimum information stopping rule in a series of simulated adaptive tests, drawn from a number of item pools. Results indicate that the PSER makes efficient use of CAT item pools, administering fewer items when predictive gains in information are small and increasing measurement precision when information is abundant.
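
A PSER-style quantity can be sketched with a gridded posterior: compute the current posterior standard error, then the response-probability-weighted expected posterior standard error after a candidate item, and take the difference. The 2PL candidate item and grid details are stand-ins; the paper's implementation may differ.

```python
import math

def posterior(grid, prior, likes):
    """Normalized posterior over a theta grid."""
    post = [p * l for p, l in zip(prior, likes)]
    s = sum(post)
    return [x / s for x in post]

def variance(grid, post):
    m = sum(q * p for q, p in zip(grid, post))
    return sum((q - m) ** 2 * p for q, p in zip(grid, post))

def predicted_se_reduction(grid, prior, likes, a, b):
    """Expected drop in posterior SE from administering a 2PL item (a, b),
    averaging over the two possible responses under the current posterior."""
    post = posterior(grid, prior, likes)
    se_now = math.sqrt(variance(grid, post))
    p_correct = [1.0 / (1.0 + math.exp(-a * (q - b))) for q in grid]
    m1 = sum(p * w for p, w in zip(p_correct, post))  # marginal P(correct)
    exp_se = 0.0
    for resp, m in ((1, m1), (0, 1.0 - m1)):
        new_likes = [l * (p if resp == 1 else 1.0 - p)
                     for l, p in zip(likes, p_correct)]
        exp_se += m * math.sqrt(variance(grid, posterior(grid, prior, new_likes)))
    return se_now - exp_se
```

An informative item near the posterior center yields a large predicted reduction, while an off-target item yields almost none, which is the signal the PSER rule uses to decide whether administering another item is worthwhile.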


Applied Psychological Measurement | 2005

Computerized Adaptive Testing with the Partial Credit Model: Estimation Procedures, Population Distributions, and Item Pool Characteristics

Joanna S. Gorin; Barbara G. Dodd; Steven J. Fitzpatrick; Yann Yann Shieh

The primary purpose of this research is to examine the impact of estimation methods, actual latent trait distributions, and item pool characteristics on the performance of a simulated computerized adaptive testing (CAT) system. In this study, three estimation procedures are compared for accuracy of estimation: maximum likelihood estimation (MLE), expected a posteriori (EAP), and Warm's weighted likelihood estimation (WLE). Some research has shown that MLE and EAP perform equally well in polytomous CAT systems under conditions in which the prior matches the actual latent trait distribution. However, little research has compared these methods when prior estimates of trait distributions are extremely poor. In general, it appears that MLE, EAP, and WLE procedures perform equally well when using an optimal item pool. However, the use of EAP procedures may be advantageous under nonoptimal testing conditions when the item pool is not appropriately matched to the examinees.


Applied Psychological Measurement | 2003

Item Exposure Constraints for Testlets in the Verbal Reasoning Section of the MCAT

Laurie Laughlin Davis; Barbara G. Dodd

The current study examined item exposure control procedures for testlet scored reading passages in the Verbal Reasoning section of the Medical College Admission Test with four computerized adaptive testing (CAT) systems using the partial credit model. The first system used a traditional CAT using maximum information item selection. The second used random item selection to provide a baseline for optimal exposure rates. The third used a variation of Lunz and Stahl's randomization procedure. The fourth used Luecht and Nungester's computerized adaptive sequential testing (CAST) system. A series of simulated fixed-length CATs was run to determine the optimal item selection procedure. Results indicated that both the randomization procedure and CAST performed well in terms of exposure control and measurement precision, with the CAST system providing the best overall solution when all variables were taken into consideration.
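
A simple randomization scheme in the spirit of the procedures compared here picks at random among the k most informative items instead of always taking the single best, trading a little measurement precision for lower item exposure. The function and parameter names below are illustrative, not the study's implementation:

```python
import random

def randomesque_select(pool, theta, info_fn, k=5, rng=None):
    """Randomization-based exposure control: rank the pool by information
    at the current trait estimate, then choose at random among the top k,
    spreading exposure across near-optimal items."""
    rng = rng or random.Random()
    ranked = sorted(pool, key=lambda item: info_fn(theta, item), reverse=True)
    return rng.choice(ranked[:k])
```

Pure maximum information selection corresponds to k = 1; raising k flattens the exposure distribution at a small cost in information per administered item.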


Educational and Psychological Measurement | 1997

The Effect of Population Distribution and Method of Theta Estimation on Computerized Adaptive Testing (CAT) Using the Rating Scale Model

Ssu-Kuang Chen; Liling Hou; Steven J. Fitzpatrick; Barbara G. Dodd

A simulation study was conducted to investigate the effect of population distribution on maximum likelihood estimation (MLE) and expected a posteriori estimation (EAP) in computerized adaptive testing (CAT) based on Andrich's rating scale model. Comparisons were made among MLE and EAP with a normal prior distribution and EAP with a uniform prior distribution within two data sets: one generated using a normal trait distribution and the other using a negatively skewed trait distribution. Descriptive statistics, correlations, scattergrams, and accuracy indices were used to compare the different methods of trait estimation. EAP estimation with a normal prior or uniform prior yielded results similar to those obtained with MLE, even though the prior did not match the underlying trait distribution. An additional simulation study based on real data suggested that more work is needed to determine the optimal number of quadrature points for EAP in CAT based on the rating scale model. The choice between MLE and EAP for particular measurement situations is discussed.

Collaboration


Dive into Barbara G. Dodd's collaborations.

Top Co-Authors

William R. Koch (University of Texas at Austin)
Steven J. Fitzpatrick (University of Texas at Austin)
Jiseon Kim (University of Washington)
Ryoungsun Park (University of Texas at Austin)
Tiffany A. Whittaker (University of Texas at Austin)
Hyewon Chung (Chungnam National University)
Daniel L. Murphy (University of Texas at Austin)
Liling Hou (University of Texas at Austin)