
Publication


Featured research published by Dimiter M. Dimitrov.


Measurement and Evaluation in Counseling and Development | 2010

Testing for Factorial Invariance in the Context of Construct Validation

Dimiter M. Dimitrov

This article describes the logic and procedures behind testing for factorial invariance across groups in the context of construct validation. The procedures include testing for configural, measurement, and structural invariance in the framework of multiple-group confirmatory factor analysis (CFA). The forward (sequential constraint imposition) approach to testing for factorial invariance is described and illustrated for the cases of first-order and second-order CFA models; computer code in Mplus is provided. Computations of the Satorra-Bentler scaled chi-square difference, used in testing for factorial invariance under lack of multivariate normality, are also described. Some points of caution related to testing and interpreting measurement invariance are provided as well.
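
The Satorra-Bentler scaled difference mentioned above is typically computed from the two nested models' uncorrected chi-square values, degrees of freedom, and scaling correction factors (Satorra & Bentler, 2001). A minimal Python sketch with hypothetical fit values:

```python
def sb_scaled_chisq_diff(T0, df0, c0, T1, df1, c1):
    """Satorra-Bentler scaled chi-square difference test.

    T0, df0, c0: uncorrected (regular ML) chi-square, degrees of freedom,
                 and scaling correction factor of the nested (more
                 constrained) model.
    T1, df1, c1: the same quantities for the comparison model.
    """
    cd = (df0 * c0 - df1 * c1) / (df0 - df1)  # difference-test scaling factor
    return (T0 - T1) / cd, df0 - df1

# Hypothetical values: a metric-invariance model tested against a
# configural model
TRd, df = sb_scaled_chisq_diff(T0=312.4, df0=170, c0=1.20,
                               T1=295.1, df1=164, c1=1.18)
print(f"scaled chi-square difference = {TRd:.2f} on {df} df")
```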


Structural Equation Modeling | 2010

Evaluation of Scale Reliability With Binary Measures Using Latent Variable Modeling

Tenko Raykov; Dimiter M. Dimitrov; Tihomir Asparouhov

A method for interval estimation of scale reliability with discrete data is outlined. The approach is applicable to multi-item instruments consisting of binary measures and is developed within the latent variable modeling methodology. The procedure is useful for evaluating the consistency of single measures and of sum scores from item sets following the 2-parameter logistic model or the 1-parameter logistic model. An extension of the method is described for constructing confidence intervals of change in reliability due to instrument revision. The proposed procedure is illustrated with an example.
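
The interval estimation itself is carried out with latent variable modeling software; as a point of reference, the population sum-score reliability for binary items under the 2-parameter logistic model can be approximated by numerical integration over a normal trait distribution. A minimal numpy sketch with hypothetical item parameters (it illustrates the quantity being estimated, not the authors' interval procedure):

```python
import numpy as np

# Hypothetical 2PL parameters for a five-item binary scale
a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])    # discriminations
b = np.array([-0.5, 0.0, 0.3, 0.8, -1.0])  # difficulties

# Numerical integration over a standard normal trait distribution
theta = np.linspace(-6, 6, 2001)
w = np.exp(-0.5 * theta**2)
w /= w.sum()

P = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))  # grid x items
tau = P.sum(axis=1)                                  # conditional true score
err = (P * (1.0 - P)).sum(axis=1)                    # conditional error variance

mean_tau = w @ tau
var_tau = w @ (tau - mean_tau) ** 2
var_err = w @ err

print(f"marginal sum-score reliability = {var_tau / (var_tau + var_err):.3f}")
```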


Educational and Psychological Measurement | 2002

Reliability: Arguments for Multiple Perspectives and Potential Problems with Generalization across Studies

Dimiter M. Dimitrov

The present article addresses reliability issues in light of recent studies and debates focused on psychometrics versus datametrics terminology and reliability generalization (RG) introduced by Vacha-Haase. The purpose here was not to moderate arguments presented in these debates but to discuss multiple perspectives on score reliability and how they may affect research practice, editorial policies, and RG across studies. Issues of classical error variance and reliability are discussed across models of classical test theory, generalizability theory, and item response theory. Potential problems with RG across studies are discussed in relation to different types of reliability, different test forms, different numbers of items, misspecifications, and confounding independent variables in a single RG analysis.


Applied Psychological Measurement | 2003

Marginal True-Score Measures and Reliability for Binary Items as a Function of Their IRT Parameters

Dimiter M. Dimitrov

This article provides analytic evaluations of population true-score measures for binary items given their item response theory (IRT) calibration. Under the assumption of a normal trait distribution, the expected values of marginalized true scores, error variance, true-score variance, and reliability for norm-referenced and criterion-referenced interpretations are presented as a function of the item parameters. The proposed formulas have methodological and computational value in bridging concepts of IRT and true-score theory. They provide information about the individual contribution of IRT-calibrated items to marginal true-score measures and may have valuable applications in test development and analysis. For example, given a bank of IRT-calibrated items, one can select binary items to develop a test with known true-score characteristics prior to administering the test (without information about raw scores or trait scores). Calculations with the proposed formulas are easy to perform using basic statistical programs, spreadsheet programs, or even handheld calculators.
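
In standard notation, with item response function P_i(theta) and normal trait density phi(theta), the marginal quantities the article evaluates take the generic form below; the article's contribution is expressing them directly in terms of the IRT item parameters, whereas these are the underlying definitions:

```latex
E(\tau) = \sum_i \int P_i(\theta)\,\varphi(\theta)\,d\theta, \qquad
\sigma_E^2 = \sum_i \int P_i(\theta)\bigl(1 - P_i(\theta)\bigr)\,\varphi(\theta)\,d\theta,

\sigma_\tau^2 = \int \Bigl(\sum_i P_i(\theta)\Bigr)^{2}\varphi(\theta)\,d\theta
              - \bigl(E(\tau)\bigr)^2, \qquad
\rho = \frac{\sigma_\tau^2}{\sigma_\tau^2 + \sigma_E^2}.
```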


Applied Psychological Measurement | 2007

Least Squares Distance Method of Cognitive Validation and Analysis for Binary Items Using Their Item Response Theory Parameters

Dimiter M. Dimitrov

The validation of cognitive attributes required for correct answers on binary test items or tasks has been addressed in previous research through the integration of cognitive psychology and psychometric models using parametric or nonparametric item response theory, latent class modeling, and Bayesian modeling. All previous models, each with its advantages and disadvantages, require item score information and do not focus on conditional validation of cognitive attributes across ability levels and individual test items. This study proposes a method of estimating the probability of correct performance on cognitive attributes across fixed ability levels. The proposed method, referred to here as the least squares distance method (LSDM), is based on the minimization of matrix norms using the Euclidean least squares distance. The LSDM does not require raw or trait scores of examinees as long as IRT estimates of the item parameters are available.
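
Under the conjunctive assumption, the probability of a correct response to an item is the product of the probabilities of correctly performing its required attributes, so taking logarithms gives a linear system Q ln A(theta) = ln P(theta) that can be solved by least squares at each fixed ability level. A minimal numpy sketch of that core step, with a hypothetical Q-matrix and item parameters (the published LSDM also includes recovery diagnostics, omitted here):

```python
import numpy as np

# Hypothetical Q-matrix: 6 items x 3 attributes (1 = attribute required)
Q = np.array([[1, 0, 0],
              [0, 1, 0],
              [1, 1, 0],
              [0, 0, 1],
              [1, 0, 1],
              [1, 1, 1]], dtype=float)

# Hypothetical 2PL item parameters from a prior IRT calibration
a = np.array([1.0, 1.2, 0.9, 1.1, 1.3, 0.8])
b = np.array([-1.0, -0.3, 0.2, 0.5, 0.9, 1.4])

# Attribute probabilities across fixed ability levels on the logit scale
for theta in np.linspace(-3, 3, 7):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))       # item probabilities
    log_A, *_ = np.linalg.lstsq(Q, np.log(p), rcond=None)
    A = np.clip(np.exp(log_A), 0.0, 1.0)             # attribute probabilities
    print(f"theta = {theta:+.1f}: {np.round(A, 3)}")
```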


Multivariate Behavioral Research | 2003

Validation of Cognitive Structures: A Structural Equation Modeling Approach.

Dimiter M. Dimitrov; Tenko Raykov

Determining sources of item difficulty and using them for the selection or development of test items is a task that bridges psychometrics and cognitive psychology. A key problem in this task is the validation of hypothesized cognitive operations required for the correct solution of test items. In previous research, the problem has frequently been addressed through the linear logistic test model for the prediction of item difficulties. The validation procedure discussed in this article is instead based on structural equation modeling of cognitive subordination relationships between test items. The method is illustrated using the scores of ninth graders on an algebra test, where the structural equation model fit supports the cognitive model. Results obtained with the linear logistic test model for the algebra test are also used for comparative purposes.
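
The linear logistic test model used here for comparison decomposes Rasch item difficulties into contributions of the hypothesized cognitive operations. A common two-step approximation, shown below with hypothetical values, regresses estimated difficulties on the operation weight matrix (the LLTM proper is estimated jointly rather than in two steps):

```python
import numpy as np

# Hypothetical weight matrix W: 5 items x 3 cognitive operations
W = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1],
              [1, 1, 1],
              [0, 0, 1]], dtype=float)

# Rasch item difficulty estimates from a prior calibration (hypothetical)
beta = np.array([0.4, 0.1, 0.7, 1.2, 0.5])

# LLTM-style decomposition: beta_j = sum_k w_jk * eta_k + intercept
X = np.column_stack([W, np.ones(len(beta))])
eta, *_ = np.linalg.lstsq(X, beta, rcond=None)

print("operation difficulty parameters:", np.round(eta[:-1], 3))
print("predicted item difficulties:   ", np.round(X @ eta, 3))
```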


Journal of Research in Special Educational Needs | 2016

The teacher efficacy for inclusive practices (TEIP) scale: dimensionality and factor structure

Mi-Hwa Park; Dimiter M. Dimitrov; Ajay Das; Margaret Gichuru

The Teacher Efficacy for Inclusive Practices (TEIP) scale is designed to measure teacher self-efficacy to teach in inclusive classrooms. The original study identified three scale factors: efficacy in using inclusive instruction (EII), efficacy in collaboration (EC), and efficacy in managing behavior (EMB) (Sharma et al., 2012). The purpose of our study was to examine the TEIP scale for dimensionality and to cross-validate its factor structure for pre-service teachers in the context of early childhood education. A bifactor model fit to the data revealed that the TEIP scale is essentially unidimensional; that is, there is one dominant latent factor, and the three scale factors found originally (EII, EC, and EMB) represent specific aspects of the general factor of teacher self-efficacy to teach in inclusive classrooms. Along with providing validation evidence, these findings have important implications for scoring the TEIP scale using classical test analysis or unidimensional item response theory models.
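
Essential unidimensionality in a bifactor solution is commonly summarized with indices such as omega hierarchical and explained common variance (ECV), both computed from the standardized loadings. A sketch with hypothetical loadings for a general factor and three group factors (the numbers are illustrative, not the study's estimates):

```python
import numpy as np

# Hypothetical standardized bifactor loadings: 9 items, 3 per group factor
lam_g = np.array([.62, .58, .65, .60, .55, .63, .59, .61, .57])  # general
lam_s = np.zeros((9, 3))                                         # EII, EC, EMB
lam_s[0:3, 0] = [.35, .30, .28]
lam_s[3:6, 1] = [.25, .32, .27]
lam_s[6:9, 2] = [.30, .26, .33]

# Unique variances implied by the standardized solution
uniq = 1.0 - lam_g**2 - (lam_s**2).sum(axis=1)

# Omega hierarchical: share of total score variance due to the general factor
total_var = (lam_g.sum()**2
             + sum(lam_s[:, k].sum()**2 for k in range(3))
             + uniq.sum())
omega_h = lam_g.sum()**2 / total_var

# ECV: share of common variance explained by the general factor
ecv = (lam_g**2).sum() / ((lam_g**2).sum() + (lam_s**2).sum())

print(f"omega hierarchical = {omega_h:.3f}, ECV = {ecv:.3f}")
```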


Educational and Psychological Measurement | 2012

Conjunctive and Disjunctive Extensions of the Least Squares Distance Model of Cognitive Diagnosis

Dimiter M. Dimitrov; Dimitar V. Atanasov

Many models of cognitive diagnosis, including the least squares distance model (LSDM), work under the conjunctive assumption that a correct item response occurs when all latent attributes required by the item are correctly performed. This article proposes a disjunctive version of the LSDM under which a correct item response occurs when at least one required attribute is correctly applied. Also, under both the conjunctive and disjunctive versions of the LSDM, this article demonstrates an approach to estimating the conditional probability that (a) a specific pattern of p attributes, (b) exactly p attributes, or (c) at least p attributes will be correctly performed across locations on the logit scale in item response theory under the one-, two-, or three-parameter logistic model. Such information can be useful for interpretations and decisions based on a person's performance on attributes that govern the correct responses on binary items under unidimensional item response theory calibrations for assessment in education, psychology, and other fields.
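
The conjunctive and disjunctive compositions, along with the probability of correctly performing exactly (or at least) p of the required attributes, can be illustrated directly; the attribute probabilities below are hypothetical:

```python
import numpy as np

# Hypothetical probabilities of correctly performing each required attribute
A = np.array([0.9, 0.7, 0.6])

p_conj = A.prod()                # all attributes needed: prod A_k
p_disj = 1.0 - (1.0 - A).prod()  # one attribute suffices: 1 - prod(1 - A_k)

# Distribution of the number of correctly performed attributes
# (Poisson-binomial, built up by convolving Bernoulli distributions)
dist = np.array([1.0])
for ak in A:
    dist = np.convolve(dist, [1.0 - ak, ak])

print(f"conjunctive P(correct) = {p_conj:.3f}")
print(f"disjunctive P(correct) = {p_disj:.3f}")
for p, pr in enumerate(dist):
    print(f"P(exactly {p} attributes correct) = {pr:.3f}")
print(f"P(at least 2 attributes correct) = {dist[2:].sum():.3f}")
```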


Structural Equation Modeling | 2016

Maximal Reliability and Composite Reliability: Examining Their Difference for Multicomponent Measuring Instruments Using Latent Variable Modeling

Tenko Raykov; Siegfried Gabler; Dimiter M. Dimitrov

A latent variable modeling method for examining the difference between maximal reliability and composite reliability for homogeneous multicomponent measuring instruments is outlined. The procedure allows point and interval estimation of the discrepancy between the reliability coefficients associated with the optimal linear combination and with the popular unit-weighted sum of the scale components. The approach permits a researcher to make an informed choice, when needed, between the maximal reliability and composite reliability coefficients as indexes of measurement quality for an instrument under consideration in an empirical setting. The discussed method is illustrated using numerical data.
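
For a single-factor model with standardized loadings lambda_i and error variances theta_i, the two coefficients being compared have well-known closed forms: composite reliability is (sum lambda)^2 / ((sum lambda)^2 + sum theta), while maximal reliability is h / (1 + h) with h = sum(lambda_i^2 / theta_i). A sketch with hypothetical loadings:

```python
import numpy as np

# Hypothetical standardized loadings for a homogeneous five-component scale
lam = np.array([.75, .70, .65, .60, .55])
theta = 1.0 - lam**2  # error variances in the standardized solution

# Composite reliability of the unit-weighted sum
omega = lam.sum()**2 / (lam.sum()**2 + theta.sum())

# Maximal reliability of the optimally weighted linear combination
h = (lam**2 / theta).sum()
rho_max = h / (1.0 + h)

print(f"composite reliability = {omega:.3f}")
print(f"maximal reliability   = {rho_max:.3f}")
print(f"discrepancy           = {rho_max - omega:.3f}")
```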


Measurement and Evaluation in Counseling and Development | 2017

Examining Differential Item Functioning: IRT-Based Detection in the Framework of Confirmatory Factor Analysis

Dimiter M. Dimitrov

This article offers an approach to examining differential item functioning (DIF) under its item response theory (IRT) treatment in the framework of confirmatory factor analysis (CFA). The approach is based on integrating IRT- and CFA-based testing of DIF and using bias-corrected bootstrap confidence intervals, with syntax code provided in Mplus.
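
One well-known bridge between the CFA and IRT treatments of a binary item is the conversion of a standardized probit loading and threshold into normal-ogive discrimination and difficulty (Takane & de Leeuw, 1987; Kamata & Bauer, 2008); comparing the converted parameters across groups gives a rough sense of DIF. A minimal sketch with hypothetical estimates:

```python
import math

def cfa_to_irt(loading, threshold):
    """Convert a standardized CFA loading and threshold for a binary item
    (probit link, delta parameterization) to normal-ogive IRT parameters."""
    a = loading / math.sqrt(1.0 - loading**2)  # discrimination
    b = threshold / loading                    # difficulty
    return a, b

# Hypothetical estimates for one item in two groups
a_ref, b_ref = cfa_to_irt(0.70, 0.35)
a_foc, b_foc = cfa_to_irt(0.68, 0.62)
print(f"reference group: a = {a_ref:.2f}, b = {b_ref:.2f}")
print(f"focal group:     a = {a_foc:.2f}, b = {b_foc:.2f}")
```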

Collaboration


Dive into Dimiter M. Dimitrov's collaborations.

Top Co-Authors

Tenko Raykov, Michigan State University
Mi-Hwa Park, Murray State University
Steven McGee, Wheeling Jesuit University
Bruce C. Howard, Wheeling Jesuit University
Do-Yong Park, Illinois State University
Michael Harrison, University of North Carolina at Chapel Hill
Tatyana Li, Michigan State University