Publication


Featured research published by Daniel O. Segall.


Psychometrika | 1996

Multidimensional Adaptive Testing

Daniel O. Segall

Maximum likelihood and Bayesian procedures for item selection and scoring of multidimensional adaptive tests are presented. A demonstration using simulated response data illustrates that multidimensional adaptive testing (MAT) can provide equal or higher reliabilities with about one-third fewer items than are required by one-dimensional adaptive testing (OAT). Furthermore, holding test-length constant across the MAT and OAT approaches, substantial improvements in reliability can be obtained from multidimensional assessment. A number of issues relating to the operational use of multidimensional adaptive testing are discussed.
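To make the multidimensional item-selection idea concrete, here is a minimal sketch of a Bayesian, determinant-based criterion in the spirit of the procedure described above. It is an illustration, not a reproduction of the paper's exact method: the function names and the simplified 2PL-style multidimensional item-information form are assumptions.

```python
import numpy as np

def item_information(a, p):
    """Approximate information contribution of one multidimensional item.

    a : discrimination vector (one entry per latent dimension)
    p : model probability of a correct response at the current ability estimate
    Uses the simplified 2PL-style form p * (1 - p) * a a^T for illustration.
    """
    a = np.asarray(a, dtype=float)
    return p * (1.0 - p) * np.outer(a, a)

def select_next_item(candidates, posterior_info):
    """Return the index of the candidate maximizing det(posterior_info + item info)."""
    best_idx, best_det = None, -np.inf
    for idx, (a, p) in enumerate(candidates):
        det = np.linalg.det(posterior_info + item_information(a, p))
        if det > best_det:
            best_idx, best_det = idx, det
    return best_idx

# Two latent dimensions, an identity prior information matrix (assumed),
# and three unadministered candidate items.
prior_info = np.eye(2)
pool = [([1.2, 0.3], 0.5), ([0.4, 1.1], 0.6), ([0.8, 0.8], 0.5)]
print(select_next_item(pool, prior_info))
```

Under this kind of criterion, each candidate item is scored by how much it would expand the determinant of the accumulated information matrix at the current ability estimate, so items that reduce uncertainty across all dimensions at once are favored.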


Psychometrika | 2001

General ability measurement: An application of multidimensional item response theory

Daniel O. Segall

Two new methods for improving the measurement precision of a general test factor are proposed and evaluated. One new method provides a multidimensional item response theory estimate obtained from conventional administrations of multiple-choice test items that span general and nuisance dimensions. The other method chooses items adaptively to maximize the precision of the general ability score. Both methods display substantial increases in precision over alternative item selection and scoring procedures. Results suggest that the use of these new testing methods may significantly enhance the prediction of learning and performance in instances where standardized tests are currently used.


Encyclopedia of Social Measurement | 2005

Computerized Adaptive Testing

Daniel O. Segall

Computerized adaptive testing is an approach to individual-difference assessment that tailors the administration of test questions to the trait level of the examinee. The computer chooses and displays the questions, and then records and processes the examinee’s answers. Item selection is adaptive: it depends in part on the examinee’s answers to previously administered questions, and in part on the specific statistical qualities of administered and candidate items. Compared to conventional testing, where all examinees receive the same items, computerized adaptive testing (CAT) administers a larger percentage of items with appropriate difficulty levels. The adaptive item selection process of CAT results in higher levels of test-score precision and shorter test lengths.
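For readers unfamiliar with how items of "appropriate difficulty" are chosen, below is a minimal sketch of the standard maximum-information selection rule under the three-parameter logistic (3PL) model. This is a common textbook rule, not necessarily the operational algorithm of any particular testing program, and the item parameters shown are illustrative.

```python
import math

def p3pl(theta, a, b, c):
    """Probability of a correct response under the three-parameter logistic (3PL) model."""
    return c + (1.0 - c) / (1.0 + math.exp(-1.7 * a * (theta - b)))

def item_information(theta, a, b, c):
    """Fisher information of a 3PL item at ability level theta."""
    p = p3pl(theta, a, b, c)
    return (1.7 * a) ** 2 * ((1.0 - p) / p) * ((p - c) / (1.0 - c)) ** 2

def next_item(theta_hat, pool, administered):
    """Pick the unadministered item with the most information at the current estimate."""
    remaining = [i for i in range(len(pool)) if i not in administered]
    return max(remaining, key=lambda i: item_information(theta_hat, *pool[i]))

# Hypothetical pool of (a, b, c) items; examinee currently estimated at theta = 0.4,
# with item 0 already administered.
pool = [(1.0, -1.0, 0.2), (1.3, 0.5, 0.2), (0.8, 1.5, 0.25)]
print(next_item(0.4, pool, administered={0}))
```

After each response the ability estimate is updated and the rule is applied again, which is what makes the selection adaptive.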


Journal of Educational and Behavioral Statistics | 2002

An Item Response Model for Characterizing Test Compromise

Daniel O. Segall

This article presents an item response model for characterizing test compromise that enables the estimation of item-preview and score-gain distributions observed in on-demand high-stakes testing programs. Model parameters and posterior distributions are estimated by Markov chain Monte Carlo (MCMC) procedures. Results of a simulation study suggest that when at least some of the items taken by a small sample of test takers are known to be secure (uncompromised), the procedure can provide useful summaries of test compromise and its impact on test scores. The article includes discussions of operational use of the proposed procedure, possible model violations and extensions, and application to computerized adaptive testing.


Psychometrika | 1994

The Reliability of Linearly Equated Tests.

Daniel O. Segall

An asymptotic expression for the reliability of a linearly equated test is developed using normal theory. The reliability is expressed as the product of two terms, the reliability of the test before equating, and an adjustment term. This adjustment term is a function of the sample sizes used to estimate the linear equating transformation. The results of a simulation study indicate close agreement between the theoretical and simulated reliability values for samples greater than 200. Findings demonstrate that samples as small as 300 can be used in linear equating without an appreciable decrease in reliability.
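Schematically, the result described above has the multiplicative form sketched below, where \(\rho\) is the reliability before equating, \(N_1\) and \(N_2\) are the samples used to estimate the linear equating transformation, and \(g\) is a placeholder symbol (not the paper's notation) for the adjustment term, which plausibly approaches 1 as the equating samples grow:

```latex
\rho_{\text{eq}} \;\approx\; \rho \cdot g(N_1, N_2),
\qquad 0 < g(N_1, N_2) \le 1,
\qquad g(N_1, N_2) \to 1 \ \text{as}\ N_1, N_2 \to \infty
```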


Journal of Educational and Behavioral Statistics | 2004

A Sharing Item Response Theory Model for Computerized Adaptive Testing

Daniel O. Segall

A new sharing item response theory (SIRT) model is presented that explicitly models the effects of sharing item content between informants and test takers. This model is used to construct adaptive item selection and scoring rules that provide increased precision and reduced score gains in instances where sharing occurs. The adaptive item selection rules are expressed as functions of the item’s exposure rate in addition to other commonly used properties (characterized by difficulty, discrimination, and guessing parameters). Based on the results of simulated item responses, the new item selection and scoring algorithms compare favorably to the Sympson–Hetter exposure control method. The new SIRT approach provides higher reliability and lower score gains in instances where sharing occurs.
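One simple way to make a selection criterion depend on the exposure rate, in the spirit of (but not identical to) the SIRT selection rules described above, is to down-weight an item's information by how often it has already been administered. The function below is a hypothetical heuristic for illustration only.

```python
def exposure_adjusted_information(information, exposure_rate, penalty=1.0):
    """Down-weight an item's information by its observed exposure rate.

    An illustrative heuristic in the spirit of exposure-aware selection,
    not the SIRT selection rule itself: heavily exposed items, which are
    more likely to have been shared, contribute less to the criterion.
    """
    return information * (1.0 - exposure_rate) ** penalty

# The same nominal information is worth less for a frequently exposed item.
print(exposure_adjusted_information(2.0, exposure_rate=0.05))  # ~1.90
print(exposure_adjusted_information(2.0, exposure_rate=0.60))  # 0.80
```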


Applied Psychological Measurement | 1994

A comparison of item calibration media in computerized adaptive testing

Rebecca D. Hetter; Daniel O. Segall; Bruce M. Bloxom

A concern in computerized adaptive testing is whether data for calibrating items can be collected from either a paper-and-pencil (P&P) or a computer administration of the items. Fixed blocks of power test items were administered by computer to one group of examinees and by P&P to a second group. These data were used to obtain computer-based and P&P-based three-parameter logistic model parameters of the items. Then each set of parameters was used to estimate item response theory pseudo-adaptive scores for a third group of examinees who had received all of the items by computer. The effect of medium of administration of the calibration items was assessed by comparative analyses of the adaptive scores using structural modeling. The results support the use of item parameters calibrated from either P&P or computer administrations for use in computerized adaptive power tests. The calibration medium did not appear to alter the constructs measured by the adaptive test or the reliability of the adaptive test scores. Index terms: computerized adaptive testing, item calibration, item parameter estimation, item response theory, medium of administration, trait level estimation.


Applied Psychological Measurement | 2014

Using Multidimensional CAT to Administer a Short, Yet Precise, Screening Test

Lihua Yao; Mary Pommerich; Daniel O. Segall

Multidimensional computerized adaptive testing (MCAT) provides a mechanism by which the simultaneous goals of accurate prediction and minimal testing time for a screening test could both be met. This article demonstrates the use of MCAT to administer a screening test for the Computerized Adaptive Testing–Armed Services Vocational Aptitude Battery (CAT-ASVAB) under a variety of manipulated conditions. CAT-ASVAB is a test battery administered via unidimensional CAT (UCAT) that is used to qualify applicants for entry into the U.S. military and assign them to jobs. The primary research question being evaluated is whether the use of MCAT to administer a screening test can lead to significant reductions in testing time from the full-length selection test, without significant losses in score precision. Different stopping rules, item selection methods, content constraints, time constraints, and population distributions for the MCAT administration are evaluated through simulation, and compared with results from a regular full-length UCAT administration.
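As a simple illustration of the kind of stopping rule such a simulation might compare (a hypothetical example, not one of the rules evaluated in the article; the threshold values are arbitrary), a precision-based rule ends the screening test once the score is estimated accurately enough or a maximum length is reached:

```python
def should_stop(score_se, items_given, se_target=0.40, max_items=12):
    """Stop when the score's standard error reaches a target precision,
    or when a maximum screening-test length is reached (illustrative thresholds)."""
    return score_se <= se_target or items_given >= max_items

print(should_stop(0.35, items_given=7))    # True: precision target met
print(should_stop(0.55, items_given=12))   # True: length limit reached
```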


Personnel Psychology | 2006

Unproctored Internet Testing in Employment Settings

Nancy T. Tippins; James Beaty; Fritz Drasgow; Wade M. Gibson; Kenneth Pearlman; Daniel O. Segall; William Shepherd


Archive | 1997

Item pool development and evaluation.

Daniel O. Segall; Kathleen E. Moreno; Rebecca D. Hetter

Collaboration


Dive into Daniel O. Segall's collaborations.

Top Co-Authors

Bruce M. Bloxom (Defense Manpower Data Center)
Mary Pommerich (Defense Manpower Data Center)
Kenneth Pearlman (United States Office of Personnel Management)
Lihua Yao (Defense Manpower Data Center)