Publications


Featured research published by Daniel R. Eignor.


Journal of Educational and Behavioral Statistics | 1988

An Assessment of the Dimensionality of Three SAT-Verbal Test Editions

Linda L. Cook; Neil J. Dorans; Daniel R. Eignor

A strong assumption made by most commonly used item response theory (IRT) models is that the data are unidimensional, that is, statistical dependence among item scores can be explained by a single ability dimension. First-order and second-order factor analyses were conducted on correlation matrices among item parcels of SAT-Verbal items. The item parcels were constructed to yield correlation matrices that were amenable to linear factor analyses. The first-order analyses were employed to assess the effective dimensionality of the item parcel data. Second-order analyses were employed to test meaningful hypotheses about the structure of the data. Parcels were constructed for three SAT-Verbal editions. The dimensionality analyses revealed that one SAT-Verbal test edition was less parallel to the other two editions than these other editions were to each other. Refinements in the dimensionality methodology and a more systematic dimensionality assessment are logical extensions of the present research.
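
The parcel-based dimensionality check described above can be illustrated with a small sketch. The code below is hypothetical (simulated item responses, numpy only) and is not the authors' SAT-Verbal analysis: it sums items into parcels, correlates the parcels, and inspects the eigenvalues of the parcel correlation matrix as a rough first-order indication of effective dimensionality.

    import numpy as np

    # Hypothetical sketch (simulated data, not the authors' SAT-Verbal analysis):
    # assess effective dimensionality by factor-analyzing item parcels.
    rng = np.random.default_rng(0)

    # Simulate 0/1 responses for 1000 examinees on 40 items driven by one ability,
    # so the parcel correlations should look essentially unidimensional.
    theta = rng.normal(size=(1000, 1))
    difficulty = np.linspace(-1.5, 1.5, 40)
    prob = 1.0 / (1.0 + np.exp(-(theta - difficulty)))
    responses = (rng.uniform(size=prob.shape) < prob).astype(float)

    # Form 10 parcels of 4 items each; a parcel score is the sum of its item scores.
    parcels = responses.reshape(1000, 10, 4).sum(axis=2)

    # First-order check: eigenvalues of the parcel correlation matrix. A dominant
    # first eigenvalue is the usual evidence of effective unidimensionality.
    corr = np.corrcoef(parcels, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
    print("largest eigenvalues:", np.round(eigvals[:3], 2))
    print("share of variance on first component:", round(eigvals[0] / eigvals.sum(), 2))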


European Journal of Psychological Assessment | 2001

Standards for the Development and Use of Tests: The Standards for Educational and Psychological Testing

Daniel R. Eignor

Summary: This paper discusses the changes in the Standards for Educational and Psychological Testing between the 1985 and 1999 versions, along with the developments in testing that brought about these changes. In addition, some thoughts are presented about the future of the 1999 Standards.


International Journal of Educational Research | 1989

Using item response theory in test score equating

Linda L. Cook; Daniel R. Eignor

Abstract: In this chapter, the theoretical advantages that have been offered for using item response theory (IRT) in the test equating process are discussed. In addition, IRT equating research is reviewed with regard to certain important equating issues addressed in this research. These issues fall into four general categories: (1) population invariance of equating results and sample selection in equating, (2) properties of linking items used in anchor test equatings, (3) further examinations of the use of IRT in the vertical equating of tests, and (4) robustness of IRT equating to violations of underlying IRT model assumptions. The chapter concludes with a number of practical suggestions to guide researchers interested in IRT for test equating purposes.
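
One concrete procedure in this line of work is IRT true-score equating through the test characteristic curves of the two forms. The sketch below is purely illustrative and not taken from the chapter: it assumes a simple Rasch model and invented item difficulties already placed on a common scale.

    import numpy as np

    # Illustrative sketch (not from the chapter): IRT true-score equating under a
    # Rasch model, with invented item difficulties assumed to be on a common scale.
    diff_x = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
    diff_y = np.array([-0.8, -0.2, 0.3, 0.7, 1.2])   # slightly harder form Y

    def true_score(theta, difficulties):
        """Expected number-correct score at ability theta (test characteristic curve)."""
        return np.sum(1.0 / (1.0 + np.exp(-(theta - difficulties))))

    def equate_x_to_y(x_score, grid=np.linspace(-6, 6, 2001)):
        """Find theta whose form-X true score matches x_score, then read off form Y."""
        tcc_x = np.array([true_score(t, diff_x) for t in grid])
        theta = grid[np.argmin(np.abs(tcc_x - x_score))]
        return true_score(theta, diff_y)

    for x in (1, 2, 3, 4):
        print(f"form-X score {x} maps to form-Y score {equate_x_to_y(x):.2f}")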


Applied Measurement in Education | 2010

Using Factor Analysis to Investigate Accommodations Used by Students with Disabilities on an English-Language Arts Assessment

Linda L. Cook; Daniel R. Eignor; Yasuyo Sawaki; Jonathan Steinberg; Frederick Cline

This study compared the underlying factors measured by a state standards-based grade 4 English-Language Arts (ELA) assessment given to several groups of students. The focus of the research was to gather evidence regarding whether the tests measured the same construct or constructs for students without disabilities who took the test under standard conditions, students with learning disabilities who took the test under standard conditions, students with learning disabilities who took the test with accommodations as specified in their Individualized Educational Program (IEP) or 504 plan, and students with learning disabilities who took the test with a read-aloud accommodation/modification. The ELA assessment contained both reading and writing portions. A total of 75 multiple-choice items were analyzed. A series of nested hypotheses was tested to determine whether the ELA assessment measured the same factors for students with disabilities who took the assessment with and without accommodations and students without disabilities who took the test without accommodations. The results of these analyses, although not conclusive, indicated that the assessment had a similar factor structure for all groups included in the study.
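
The nested-hypothesis comparisons mentioned above typically reduce to a chi-square difference test between a multi-group factor model with parameters constrained equal across groups and a less constrained model. The arithmetic is sketched below with invented fit statistics; these numbers are not results from the study.

    from scipy.stats import chi2

    # Hypothetical fit statistics for two nested multi-group factor models;
    # the numbers are invented for illustration, not results from the study.
    chisq_constrained, df_constrained = 412.8, 310   # loadings equal across groups
    chisq_free, df_free = 398.5, 300                 # loadings free to differ

    # Chi-square difference test: a non-significant difference is consistent with
    # the same factor structure holding across the accommodation groups.
    delta_chisq = chisq_constrained - chisq_free
    delta_df = df_constrained - df_free
    p_value = chi2.sf(delta_chisq, delta_df)
    print(f"delta chi-square = {delta_chisq:.1f} on {delta_df} df, p = {p_value:.3f}")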


Archive | 2009

Equating Test Scores: Toward Best Practices

Neil J. Dorans; Tim Moses; Daniel R. Eignor

Score equating is essential for any testing program that continually produces new editions of a test and for which the expectation is that scores from these editions have the same meaning over time. Different editions may be built to a common blueprint and designed to measure the same constructs, but they almost invariably differ somewhat in their psychometric properties. If one edition is more difficult than another, examinees would be expected to receive lower scores on the harder form. Score equating seeks to eliminate the effects on scores of these unintended differences in test form difficulty. Score equating is necessary to be fair to examinees and to provide score users with scores that mean the same thing across different editions or forms of the test.
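
As a minimal illustration of removing unintended difficulty differences, the sketch below applies simple linear (mean-sigma) equating, which matches raw-score means and standard deviations across forms. The summary statistics are invented, and linear equating is shown only as the simplest case of the general idea, not as the book's recommended practice.

    # Hypothetical linear (mean-sigma) equating sketch; the statistics are invented.
    # A form-X raw score x is placed on the form-Y scale by matching standardized
    # scores: y = mu_y + (sigma_y / sigma_x) * (x - mu_x).
    mu_x, sigma_x = 30.0, 8.0    # raw-score mean and SD on the newer, harder form X
    mu_y, sigma_y = 33.0, 8.5    # raw-score mean and SD on the reference form Y

    def linear_equate(x):
        return mu_y + (sigma_y / sigma_x) * (x - mu_x)

    for x in (20, 30, 40):
        print(f"form-X score {x} -> form-Y equivalent {linear_equate(x):.1f}")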


Applied Psychological Measurement | 1981

Book Review: A Criterion-Referenced Measurement Model with Corrections for Guessing and Carelessness: George Morgan. Hawthorn, Victoria, Australia: The Australian Council for Educational Research, 1979, 76 pp.

Daniel R. Eignor; Linda L. Cook

For some time, measurement theorists have been concerned about models and methods that account for extraneous variables such as guessing, forgetting, and carelessness in the making of decisions using criterion-referenced test data. For theorists espousing a "state model" conceptualization of mastery (Meskauskas, 1976), Macready and Dayton (1977) have presented a useful model that accounts for guessing and forgetting. For those theorists who feel a "continuum model" conceptualization of mastery better describes performance on a criterion-referenced test, a model comparable to Macready and Dayton's does not exist, except perhaps in those instances when an item response theory (IRT) approach using the three-parameter logistic model is warranted (see Lord, in press). More often than not, an IRT approach is not used, either because of model assumptions or practical constraints. The concern about accounting for extraneous variables, particularly guessing, then takes the form of adjusting the cutoff score after it has been set through use of one of a variety of procedures amenable to a continuum model; when guessing is considered, the standard correction for guessing formula is typically used to make this adjustment (Davis & Diamond, 1974; Educational Testing Service, 1976). What has been needed is a continuum model that (1) corrects for extraneous variables in the actual standard-setting process, (2) is easier to implement than IRT, and (3) considers factors in addition to guessing. In A Criterion-Referenced Model with Corrections for Guessing and Carelessness…
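
For reference, the "standard correction for guessing formula" mentioned above is the familiar formula-scoring adjustment R - W/(k - 1). A tiny sketch with invented numbers:

    # Standard correction-for-guessing (formula-scoring) adjustment referenced in
    # the review: corrected score = R - W / (k - 1), where R is the number right,
    # W the number wrong (omits excluded), and k the number of options per item.
    def corrected_score(num_right, num_wrong, num_options):
        return num_right - num_wrong / (num_options - 1)

    # Invented example: 40 right, 12 wrong on 5-option multiple-choice items.
    print(corrected_score(40, 12, 5))   # 37.0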


Language Learning | 1999

Examining the relationship between computer familiarity and performance on computer-based language tasks

Carol Taylor; Irwin Kirsch; Joan Jamieson; Daniel R. Eignor


Journal of Educational Measurement | 1988

A Comparative Study of the Effects of Recency of Instruction on the Stability of IRT and Conventional Item Parameter Estimates

Linda L. Cook; Daniel R. Eignor; Hessy L. Taft


Review of Educational Research | 1978

Developments in Latent Trait Theory: Models, Technical Issues, and Applications

Ronald K. Hambleton; Hariharan Swaminathan; Linda L. Cook; Daniel R. Eignor; Janice A. Gifford


ETS Research Report Series | 1998

The Relationship Between Computer Familiarity and Performance on Computer-Based TOEFL Test Tasks

Carol Taylor; Joan Jamieson; Daniel R. Eignor; Irwin Kirsch

Collaboration


Daniel R. Eignor's top co-authors and their affiliations.

Linda L. Cook

University of Massachusetts Amherst


Ronald K. Hambleton

University of Massachusetts Amherst


Joan Jamieson

Northern Arizona University
