
Publication


Featured research published by Jimmy de la Torre.


Psychometrika | 2004

Higher-order latent trait models for cognitive diagnosis

Jimmy de la Torre; Jeff Douglas

Higher-order latent traits are proposed for specifying the joint distribution of binary attributes in models for cognitive diagnosis. This approach results in a parsimonious model for the joint distribution of a high-dimensional attribute vector that is natural in many situations when specific cognitive information is sought but a less informative item response model would be a reasonable alternative. This approach stems from viewing the attributes as the specific knowledge required for examination performance, and modeling these attributes as arising from a broadly defined latent trait resembling the θ of item response models. In this way a relatively simple model for the joint distribution of the attributes results, which is based on a plausible model for the relationship between general aptitude and specific knowledge. Markov chain Monte Carlo algorithms for parameter estimation are given for selected response distributions, and simulation results are presented to examine the performance of the algorithm as well as the sensitivity of classification to model misspecification. An analysis of fraction subtraction data is provided as an example.
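The higher-order structure described above can be sketched in a few lines. This is an illustrative sketch, not the paper's code: each binary attribute is linked to a broad ability θ through a logistic, IRT-like function, making the attributes conditionally independent given θ. All parameter values below are hypothetical.

```python
import numpy as np

def attribute_probs(theta, lam0, lam1):
    """P(alpha_k = 1 | theta) under a logistic link, with intercepts lam0
    and slopes lam1 (one entry per attribute)."""
    return 1.0 / (1.0 + np.exp(-(lam0 + lam1 * theta)))

rng = np.random.default_rng(0)
lam0 = np.array([-0.5, 0.0, 0.5])   # hypothetical attribute intercepts
lam1 = np.array([1.0, 1.2, 0.8])    # hypothetical attribute slopes

theta = rng.standard_normal()            # broad latent trait
p = attribute_probs(theta, lam0, lam1)   # per-attribute mastery probabilities
alpha = (rng.random(3) < p).astype(int)  # one draw of the attribute vector
```

Because the joint attribute distribution factors given θ, the number of parameters grows linearly in the number of attributes rather than exponentially, which is the source of the model's parsimony.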


Psychometrika | 2011

The Generalized DINA Model Framework

Jimmy de la Torre

The G-DINA (generalized deterministic inputs, noisy “and” gate) model is a generalization of the DINA model with more relaxed assumptions. In its saturated form, the G-DINA model is equivalent to other general models for cognitive diagnosis based on alternative link functions. When appropriate constraints are applied, several commonly used cognitive diagnosis models (CDMs) can be shown to be special cases of the general models. In addition to model formulation, the G-DINA model as a general CDM framework includes a component for item-by-item model estimation based on design and weight matrices, and a component for item-by-item model comparison based on the Wald test. The paper illustrates the estimation and application of the G-DINA model as a framework using real and simulated data. It concludes by discussing several potential implications of and relevant issues concerning the proposed framework.
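A hedged sketch of the saturated G-DINA item response function under the identity link, for an item measuring two attributes. The delta parameters below are hypothetical; in the framework they are estimated item by item.

```python
def g_dina_prob(a1, a2, d0, d1, d2, d12):
    """P(X = 1 | alpha) = d0 + d1*a1 + d2*a2 + d12*a1*a2 (identity link)."""
    return d0 + d1 * a1 + d2 * a2 + d12 * a1 * a2

# Example: baseline 0.2, main effects 0.2 and 0.3, interaction 0.2,
# so mastering both attributes gives a success probability of 0.9.
p_both = g_dina_prob(1, 1, 0.2, 0.2, 0.3, 0.2)
```

Constraining the main effects to zero (d1 = d2 = 0) leaves only the baseline and the interaction term, which recovers a DINA-type item in which only examinees mastering both attributes receive the probability boost; this is one instance of how reduced CDMs arise as special cases.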


Journal of Educational and Behavioral Statistics | 2009

DINA Model and Parameter Estimation: A Didactic

Jimmy de la Torre

Cognitive and skills diagnosis models are psychometric models that have immense potential to provide rich information relevant for instruction and learning. However, wider applications of these models have been hampered by their novelty and the lack of commercially available software that can be used to analyze data from this psychometric framework. To address this issue, this article focuses on one tractable and interpretable skills diagnosis model—the DINA model—and presents it didactically. The article also discusses expectation-maximization and Markov chain Monte Carlo algorithms in estimating its model parameters. Finally, analyses of simulated and real data are presented.
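The DINA item response function itself is compact enough to state directly. This is a minimal sketch of the standard formulation, not the article's estimation code: the latent response is 1 only when the examinee masters every attribute that the Q-matrix row requires for the item, and g and s are the guessing and slip parameters (values here are illustrative).

```python
def dina_prob(alpha, q_row, g, s):
    """P(X = 1 | alpha): (1 - s) if all required attributes are mastered,
    else g (conjunctive, deterministic-input noisy-AND rule)."""
    eta = all(a >= q for a, q in zip(alpha, q_row))
    return 1.0 - s if eta else g

# An examinee mastering both required attributes answers correctly
# unless they slip; anyone else can only succeed by guessing.
```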


Applied Psychological Measurement | 2009

A Cognitive Diagnosis Model for Cognitively Based Multiple-Choice Options

Jimmy de la Torre

Cognitive or skills diagnosis models are discrete latent variable models developed specifically for the purpose of identifying the presence or absence of multiple fine-grained skills. However, applications of these models typically involve dichotomous or dichotomized data, including data from multiple-choice (MC) assessments that are scored as right or wrong. The dichotomization approach to the analysis of MC data ignores the potential diagnostic information that can be found in the distractors and is therefore deemed diagnostically suboptimal. To maximize the diagnostic value of MC assessments, this article prescribes how MC options should be constructed to make them more cognitively diagnostic and proposes a cognitive diagnosis model for analyzing such data. The article discusses the specification of the proposed model and estimation of its parameters. Moreover, results of a simulation study evaluating the viability of the model and an estimation algorithm are presented. Finally, practical considerations concerning the proposed framework are discussed.


Journal of Educational and Behavioral Statistics | 2005

Making the Most of What We Have: A Practical Application of Multidimensional Item Response Theory in Test Scoring

Jimmy de la Torre; Richard J. Patz

This article proposes a practical method that capitalizes on the availability of information from multiple tests measuring correlated abilities given in a single test administration. By simultaneously estimating different abilities with the use of a hierarchical Bayesian framework, more precise estimates for each ability dimension are obtained. The efficiency of the proposed method is most pronounced when highly correlated abilities are estimated from multiple short tests. Employing Markov chain Monte Carlo techniques allows for straightforward estimation of model parameters.


Applied Psychological Measurement | 2009

Simultaneous Estimation of Overall and Domain Abilities: A Higher-Order IRT Model Approach

Jimmy de la Torre; Hao Song

Assessments consisting of different domains (e.g., content areas, objectives) are typically multidimensional in nature but are commonly assumed to be unidimensional for estimation purposes. The different domains of these assessments are further treated as multi-unidimensional tests for the purpose of obtaining diagnostic information. However, when the domains are disparate, assuming a single underlying ability across the domains is not tenable. Moreover, estimating domain proficiencies based on short tests can result in unreliable scores. This article presents a higher-order item response theory framework where an overall and multiple domain abilities are specified in the same model. Using a Markov chain Monte Carlo method in a hierarchical Bayesian framework, the overall and domain-specific abilities, and their correlations, are estimated simultaneously. The feasibility and effectiveness of the proposed model are investigated under varied conditions in a simulation study and illustrated using actual assessment data. Implications of the model for future test analysis and ability estimation are also discussed. Index terms: higher-order ability estimation, item response theory, multidimensionality, domain scoring, Markov chain Monte Carlo
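The hierarchical relationship between overall and domain abilities can be sketched as follows. This uses an assumed linear parameterization for illustration (not the authors' code): each domain ability is a loading times the overall ability plus a domain-specific residual, which is what induces the correlations among domains. The loadings and the number of domains are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n, loadings = 1000, np.array([0.9, 0.7, 0.8])  # hypothetical domain loadings

theta = rng.standard_normal(n)                   # overall ability
resid = rng.standard_normal((n, loadings.size))  # domain-specific residuals
domain = theta[:, None] * loadings + resid * np.sqrt(1.0 - loadings**2)
# Each column of `domain` has unit variance and correlates with `theta`
# at roughly its loading, so short domain tests borrow strength from the
# overall ability estimate.
```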


Applied Psychological Measurement | 2006

Markov Chain Monte Carlo estimation of item parameters for the generalized graded unfolding model

Jimmy de la Torre; Stephan Stark; Oleksandr S. Chernyshenko

The authors present a Markov Chain Monte Carlo (MCMC) parameter estimation procedure for the generalized graded unfolding model (GGUM) and compare it to the marginal maximum likelihood (MML) approach implemented in the GGUM2000 computer program, using simulated and real personality data. In the simulation study, test length, number of response options, and sample size were manipulated. Results indicate that the two methods are comparable in terms of item parameter estimation accuracy. Although the MML estimates exhibit slightly smaller bias than MCMC estimates, they also show greater variability, which results in larger root mean squared errors. Of the two methods, only MCMC provides reasonable standard error estimates for all items.


Psychometrika | 2016

A General Method of Empirical Q-matrix Validation

Jimmy de la Torre; Chia Yi Chiu

In contrast to unidimensional item response models that postulate a single underlying proficiency, cognitive diagnosis models (CDMs) posit multiple, discrete skills or attributes, thus allowing CDMs to provide a finer-grained assessment of examinees’ test performance. A common component of CDMs for specifying the attributes required for each item is the Q-matrix. Although construction of Q-matrix is typically performed by domain experts, it nonetheless, to a large extent, remains a subjective process, and misspecifications in the Q-matrix, if left unchecked, can have important practical implications. To address this concern, this paper proposes a discrimination index that can be used with a wide class of CDM subsumed by the generalized deterministic input, noisy “and” gate model to empirically validate the Q-matrix specifications by identifying and replacing misspecified entries in the Q-matrix. The rationale for using the index as the basis for a proposed validation method is provided in the form of mathematical proofs to several relevant lemmas and a theorem. The feasibility of the proposed method was examined using simulated data generated under various conditions. The proposed method is illustrated using fraction subtraction data.


Applied Psychological Measurement | 2010

Parameter Estimation with Small Sample Size: A Higher-Order IRT Model Approach

Jimmy de la Torre; Yuan Hong

Sample size ranks as one of the most important factors that affect the item calibration task. However, due to practical concerns (e.g., item exposure) items are typically calibrated with much smaller samples than what is desired. To address the need for a more flexible framework that can be used in small sample item calibration, this article proposes an approach that accounts for the dimensionality of the assessments in the calibration process. This approach is based on the higher-order item response theory (HO-IRT) model. The HO-IRT model is a multi-unidimensional model that uses in-test collateral information and represents it in the correlational structure of the domains through a higher-order latent trait formulation. Using Markov chain Monte Carlo in a hierarchical Bayesian framework, the item parameters, the overall and domain-specific abilities, and their correlations are estimated simultaneously. The feasibility and effectiveness of the proposed approach are investigated under varied conditions in a simulation study and illustrated using actual assessment data.


Applied Psychological Measurement | 2013

A General Cognitive Diagnosis Model for Expert-Defined Polytomous Attributes

Jinsong Chen; Jimmy de la Torre

Polytomous attributes, particularly those defined as part of the test development process, can provide additional diagnostic information. The present research proposes the polytomous generalized deterministic inputs, noisy, “and” gate (pG-DINA) model to accommodate such attributes. The pG-DINA model allows input from substantive experts to specify attribute levels and is a general model that subsumes various reduced models. In addition to model formulation, the authors evaluate the viability of the proposed model by examining how well the model parameters can be estimated under various conditions, and compare its classification accuracy against that of the conventional G-DINA model with a modified classification rule. A real-data example is used to illustrate the application of the model in practice.

Collaboration


Dive into Jimmy de la Torre's collaborations.

Top Co-Authors:

Francisco J. Abad (Autonomous University of Madrid)
Hartono Tjoe (Pennsylvania State University)
Julio Olea (Autonomous University of Madrid)
Miguel A. Sorrel (Autonomous University of Madrid)