Publication


Featured research published by Mark D. Reckase.


Handbook of Statistics | 2009

Multidimensional Item Response Theory

Mark D. Reckase

Multidimensional Item Response Theory is the first book to give thorough coverage to this emerging area of psychometrics. The book describes the commonly used multidimensional item response theory (MIRT) models and the important methods needed for their practical application. These methods include ways to determine the number of dimensions required to adequately model data, procedures for estimating model parameters, ways to define the space for a MIRT model, and procedures for transforming calibrations from different samples to put them in the same space. A full chapter is devoted to methods for multidimensional computerized adaptive testing. The text is appropriate for an advanced course in psychometric theory or as a reference work for those interested in applying MIRT methodology. A working knowledge of unidimensional item response theory and matrix algebra is assumed. Knowledge of factor analysis is also helpful.
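The compensatory multidimensional two-parameter logistic (M2PL) model at the core of the book gives the probability of a correct response as a logistic function of a weighted sum of abilities. A minimal sketch in Python (the parameter values are illustrative, not taken from the book):

```python
import numpy as np

def m2pl_probability(theta, a, d):
    """Probability of a correct response under the compensatory
    multidimensional two-parameter logistic (M2PL) model.

    theta : ability vector (one value per dimension)
    a     : discrimination vector (one value per dimension)
    d     : scalar intercept (easiness) parameter
    """
    theta = np.asarray(theta, dtype=float)
    a = np.asarray(a, dtype=float)
    return 1.0 / (1.0 + np.exp(-(a @ theta + d)))

# Hypothetical two-dimensional item and examinee:
p = m2pl_probability(theta=[0.5, -0.2], a=[1.2, 0.8], d=-0.3)
```

Because the dimensions combine additively inside the logistic, a deficit on one ability can be compensated by strength on another, which is what "compensatory" means here.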


Applied Psychological Measurement | 1985

The Difficulty of Test Items That Measure More than One Ability.

Mark D. Reckase

Many test items require more than one ability to obtain a correct response. This article proposes a multidimensional index of item difficulty that can be used with items of this type. The proposed index describes multidimensional item difficulty as the direction in the multidimensional space in which the item provides the most information and the distance in that direction to the most informative point. The multidimensional difficulty is derived for a particular item response theory model and an example of its application is given using the ACT Mathematics Usage Test.
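For the M2PL model the index reduces to two closed-form quantities: the signed distance from the origin to the point of steepest slope, and the direction (given by the discrimination vector) in which that point lies. A sketch with hypothetical item parameters:

```python
import numpy as np

def multidimensional_difficulty(a, d):
    """Multidimensional difficulty statistics for an M2PL item.

    Returns the signed distance from the origin to the point of
    steepest slope (MDIFF = -d / ||a||) and the angles, in degrees,
    between each coordinate axis and the direction of that point.
    """
    a = np.asarray(a, dtype=float)
    mdisc = np.linalg.norm(a)                   # norm of the discrimination vector
    mdiff = -d / mdisc                          # distance in the item's best direction
    angles = np.degrees(np.arccos(a / mdisc))   # direction cosines -> angles
    return mdiff, angles

# Hypothetical item that discriminates mostly on the first dimension:
mdiff, angles = multidimensional_difficulty(a=[1.5, 0.5], d=-1.0)
```

A small angle with an axis means the item is most informative for examinees who differ along that dimension.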


Applied Psychological Measurement | 1997

The Past and Future of Multidimensional Item Response Theory

Mark D. Reckase

Multidimensional item response theory (MIRT) is a relatively new methodology for modeling the relationships in a matrix of responses to a set of test items. MIRT has been used to help understand the skills required to successfully respond to test items, the extraneous examinee characteristics that affect the probability of response to items (DIF), and the complexities behind equating test forms, among other applications. This paper provides a short introduction to the historical antecedents of MIRT, the initial development of MIRT procedures, the similarities of MIRT procedures to other analysis techniques, and potential future directions for MIRT.


Applied Psychological Measurement | 1991

The Discriminating Power of Items that Measure More than One Dimension.

Mark D. Reckase; Robert L. McKinley

Determining a correct response to many test items frequently requires more than one ability. This paper describes the characteristics of items of this type by proposing generalizations of the item response theory concepts of discrimination and information. The conceptual framework for these statistics is presented, and the formulas for the statistics are derived for the multidimensional extension of the two-parameter logistic model. Use of the statistics is demonstrated for a form of the ACT Mathematics Usage Test.
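The discrimination generalization described here is the norm of the item's discrimination vector, and the information generalization evaluates how much an item tells us about ability differences along a chosen direction. A sketch, assuming the M2PL form and hypothetical parameter values:

```python
import numpy as np

def directional_information(theta, a, d, u):
    """Information of an M2PL item in the direction of vector u:
    I = P * (1 - P) * (a . u_hat)**2, with u_hat the unit vector."""
    theta, a, u = (np.asarray(x, dtype=float) for x in (theta, a, u))
    u = u / np.linalg.norm(u)                   # normalize the direction
    p = 1.0 / (1.0 + np.exp(-(a @ theta + d)))
    return p * (1.0 - p) * (a @ u) ** 2

# Information at the origin, taken along the item's own best direction,
# where (a . u_hat) equals the multidimensional discrimination ||a||:
a = np.array([1.5, 0.5])
info = directional_information(theta=[0.0, 0.0], a=a, d=0.0, u=a)
```

Taking u along the discrimination vector maximizes the quadratic term, which is why that direction defines the item's multidimensional discrimination.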


New Horizons in Testing: Latent Trait Test Theory and Computerized Adaptive Testing | 1983

A Procedure for Decision Making Using Tailored Testing

Mark D. Reckase

This chapter presents a decision procedure that operates sequentially and can easily be applied to tailored testing without loss of any of the elegance and mathematical sophistication of the examination procedures. In applying the decision procedure, two specific item response theory (IRT) models are used: the one- and three-parameter logistic models. Although any other IRT model could just as easily have been used, these models were selected because of their frequent appearance in the research literature and because of the existence of readily available calibration programs and tailored testing programs. The purposes of this research were (1) to obtain information on how the sequential probability ratio test (SPRT) procedure functioned when items were not randomly sampled from the item pool; (2) to gain experience in selecting the bounds of the indifference region; and (3) to obtain information on the effects of guessing on the accuracy of classification when the one-parameter logistic model was used. To determine the effects of these variables, the computation of the SPRT was programmed into both the one- and three-parameter logistic tailored testing procedures that were operational at the University of Missouri—Columbia.
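The SPRT decision rule itself is simple: after each item, accumulate the log likelihood ratio of the response under the two hypothesized ability levels and stop once it crosses one of Wald's bounds. A minimal sketch (the error rates and item probabilities below are illustrative, not from the chapter):

```python
import math

def sprt_bounds(alpha, beta):
    """Wald's SPRT decision bounds for error rates alpha and beta."""
    upper = math.log((1.0 - beta) / alpha)   # classify "above cut" beyond this
    lower = math.log(beta / (1.0 - alpha))   # classify "below cut" beyond this
    return lower, upper

def sprt_step(log_lr, response, p_high, p_low):
    """Update the running log likelihood ratio after one item.

    p_high, p_low : probabilities of a correct response at the two
    hypothesized ability levels (computed from any IRT model).
    """
    if response == 1:
        log_lr += math.log(p_high / p_low)
    else:
        log_lr += math.log((1.0 - p_high) / (1.0 - p_low))
    return log_lr

# Illustrative run: three correct answers on items favoring the high level.
lower, upper = sprt_bounds(alpha=0.05, beta=0.05)
llr = 0.0
for resp in (1, 1, 1):
    llr = sprt_step(llr, resp, p_high=0.8, p_low=0.4)
decision = "high" if llr >= upper else ("low" if llr <= lower else "continue")
```

The width of the indifference region enters through the choice of the two hypothesized ability levels at which p_high and p_low are evaluated.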


Journal of Educational and Behavioral Statistics | 1996

Comparison of SPRT and Sequential Bayes Procedures for Classifying Examinees Into Two Categories Using a Computerized Test

Judith A. Spray; Mark D. Reckase

Many testing applications focus on classifying examinees into one of two categories (e.g., pass/fail) rather than on obtaining an accurate estimate of level of ability. Examples of such applications include licensure and certification, college selection, and placement into entry-level or developmental college courses. With the increased availability of computers for the administration and scoring of tests, computerized testing procedures have been developed for efficiently making these classification decisions. The purpose of the research reported in this article was to compare two such procedures, one based on the sequential probability ratio test (SPRT) and the other on sequential Bayes methodology, to determine which required fewer items for classification when the procedures were matched on classification error rates. The results showed that under the conditions studied, the SPRT procedure required fewer test items than the sequential Bayes procedure to achieve the same level of classification accuracy.


Education Finance and Policy | 2015

Can Value-Added Measures of Teacher Performance Be Trusted?

Cassandra M. Guarino; Mark D. Reckase; Jeffrey M. Wooldridge

We investigate whether commonly used value-added estimation strategies produce accurate estimates of teacher effects under a variety of scenarios. We estimate teacher effects in simulated student achievement data sets that mimic plausible types of student grouping and teacher assignment scenarios. We find that no one method accurately captures true teacher effects in all scenarios, and the potential for misclassifying teachers as high- or low-performing can be substantial. A dynamic ordinary least squares estimator is more robust across scenarios than other estimators. Misspecifying dynamic relationships can exacerbate estimation problems.
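The dynamic OLS strategy found to be most robust amounts to regressing current achievement on lagged achievement plus teacher indicators. A toy simulation under random assignment (this is a sketch of the estimator's form, not the authors' simulation design):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: 3 teachers, 200 students each, with known true effects.
true_effects = np.array([-0.5, 0.0, 0.5])
n_per = 200
teacher = np.repeat(np.arange(3), n_per)
prior = rng.normal(size=teacher.size)        # lagged (prior-year) achievement
score = (0.6 * prior + true_effects[teacher]
         + rng.normal(scale=0.5, size=teacher.size))

# Dynamic OLS: regress current score on lagged score and teacher dummies.
X = np.column_stack([prior, (teacher[:, None] == np.arange(3)).astype(float)])
coef, *_ = np.linalg.lstsq(X, score, rcond=None)
persistence, teacher_effects = coef[0], coef[1:]
```

Under random assignment the estimated teacher coefficients recover the true effects; the paper's point is that nonrandom grouping and misspecified dynamics can break this correspondence.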


Journal of Ethnicity in Substance Abuse | 2006

The Role of Parenting in Drug Use Among Black, Latino and White Adolescents

Clifford L. Broman; Mark D. Reckase; Carol R. Freedman-Doan

This study investigates the role of parenting in adolescent drug use for black, white and Latino adolescents. Parenting has been consistently identified as a crucial factor in drug use by adolescents. This study uses data from the National Longitudinal Study of Adolescent Health. Results show that parenting has a significant effect on drug use for these adolescents. The relationship between parenting and drug use is more strongly negative for Latino adolescents than for black and white adolescents. This indicates that greater parental warmth and family acceptance exert a stronger impact in reducing drug use for Latino adolescents than is the case for the black and white adolescents.


Research Quarterly for Exercise and Sport | 2006

Athletes' Evaluations of Their Head Coach's Coaching Competency

Nicholas D. Myers; Deborah L. Feltz; Kimberly S. Maier; Edward W. Wolfe; Mark D. Reckase

This study provided initial validity evidence for multidimensional measures of coaching competency derived from the Coaching Competency Scale (CCS). Data were collected from intercollegiate men's (n = 8) and women's (n = 13) soccer and women's ice hockey teams (n = 11). The total number of athletes was 585. Within teams, a multidimensional internal model was retained in which motivation, game strategy, technique, and character building comprised the dimensions of coaching competency. Some redundancy among the dimensions was observed. Internal reliabilities ranged from very good to excellent. Practical recommendations for the CCS are given in the Discussion section.


Journal of Educational and Behavioral Statistics | 2015

An Evaluation of Empirical Bayes’s Estimation of Value-Added Teacher Performance Measures

Cassandra M. Guarino; Michelle Maxfield; Mark D. Reckase; Paul N. Thompson; Jeffrey M. Wooldridge

Empirical Bayes’s (EB) estimation has become a popular procedure used to calculate teacher value added, often as a way to make imprecise estimates more reliable. In this article, we review the theory of EB estimation and use simulated and real student achievement data to study the ability of EB estimators to properly rank teachers. We compare the performance of EB estimators with that of other widely used value-added estimators under different teacher assignment scenarios. We find that, although EB estimators generally perform well under random assignment (RA) of teachers to classrooms, their performance suffers under nonrandom teacher assignment. Under non-RA, estimators that explicitly (if imperfectly) control for the teacher assignment mechanism perform the best out of all the estimators we examine. We also find that shrinking the estimates, as in EB estimation, does not itself substantially boost performance.
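The shrinkage step at the heart of EB estimation pulls each noisy per-teacher estimate toward the grand mean by a reliability weight, so imprecise estimates (small classes) are pulled hardest. A minimal sketch with hypothetical variance components:

```python
import numpy as np

def eb_shrink(raw_estimates, n_students, var_teacher, var_noise):
    """Shrink noisy per-teacher estimates toward the grand mean.

    The shrinkage weight is the reliability ratio
    var_teacher / (var_teacher + var_noise / n), so estimates based
    on fewer students get a smaller weight on their own raw value.
    """
    raw = np.asarray(raw_estimates, dtype=float)
    n = np.asarray(n_students, dtype=float)
    weight = var_teacher / (var_teacher + var_noise / n)
    return weight * raw + (1.0 - weight) * raw.mean()

# Hypothetical raw estimates; the 5-student class is shrunk the most.
shrunk = eb_shrink([0.8, -0.8, 0.1], n_students=[5, 50, 50],
                   var_teacher=0.04, var_noise=0.25)
```

Note that shrinkage rescales estimates toward the mean but, for a common weight, does not reorder them, which is consistent with the finding that shrinking by itself does not substantially improve rankings.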

Collaboration


Dive into Mark D. Reckase's collaboration.

Top Co-Authors

Brian W. Stacy
Michigan State University

Sharon L. Senk
Michigan State University

James A. Clardy
University of Arkansas for Medical Sciences