Publication


Featured research published by Anne Corinne Huggins-Manley.


Structural Equation Modeling | 2015

The Partial Credit Model and Generalized Partial Credit Model as Constrained Nominal Response Models, With Applications in Mplus

Anne Corinne Huggins-Manley; James Algina

The purpose of this article is to demonstrate how to constrain the nominal response model in the Mplus software in order to calibrate data under the partial credit model (PCM) and generalized partial credit model (GPCM). Many researchers are currently uncertain whether the PCM and GPCM can be estimated within Mplus. Through model constraint commands, we demonstrate that both models can be estimated in recent versions of the software. We present an example of this approach with data from 522 respondents on a subset of items from the Math Self-Efficacy Scale (Betz & Hackett, 1983), demonstrating that the presented model code is a viable way of estimating these models in Mplus.
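
The constraint at the heart of this approach can be sketched compactly. In the nominal response model (NRM), each response category of an item carries its own slope on the latent trait; the GPCM is the special case in which those slopes are linear in the category index, and the PCM further fixes a common discrimination across items. A minimal sketch in LaTeX, with notation assumed here rather than taken from the article:

% NRM: item j, categories k = 0, ..., K_j, latent trait \theta
P(X_j = k \mid \theta) = \frac{\exp(a_{jk}\theta + c_{jk})}{\sum_{m=0}^{K_j} \exp(a_{jm}\theta + c_{jm})}

% GPCM as a constrained NRM: slopes linear in the category index,
% with an item-specific discrimination \alpha_j
a_{jk} = k \, \alpha_j

% PCM as a further constraint: one common discrimination for all items
a_{jk} = k \, \alpha

In Mplus, such restrictions are imposed through MODEL CONSTRAINT commands on the category slopes, which is the approach the article demonstrates.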


Educational and Psychological Measurement | 2017

Psychometric Consequences of Subpopulation Item Parameter Drift

Anne Corinne Huggins-Manley

This study defines subpopulation item parameter drift (SIPD) as a change in item parameters over time that is dependent on subpopulations of examinees, and hypothesizes that the presence of SIPD in anchor items is associated with bias and/or lack of invariance in three psychometric outcomes. Results show that SIPD in anchor items is associated with a lack of invariance in dimensionality structure of an anchor test, a lack of invariance in scaling coefficients across subpopulations, and a lack of invariance in ability estimates. It is demonstrated that these effects go beyond what can be understood from item parameter drift or differential item functioning.
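
One way to formalize the definition (an illustrative sketch; the article's own notation may differ) is to decompose an anchor item's difficulty over time points and subpopulations:

% Difficulty of anchor item j at time t for subpopulation g
b_{jtg} = b_j + \Delta_{jt} + \delta_{jtg}

% Ordinary item parameter drift: \Delta_{jt} \neq 0 while \delta_{jtg} = 0 for all g.
% SIPD: \delta_{jtg} \neq 0 for some g, i.e., the drift itself depends on
% subpopulation membership.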


Archive | 2016

The Specification of Attribute Structures and Its Effects on Classification Accuracy in Diagnostic Test Design

Ren Liu; Anne Corinne Huggins-Manley

Diagnostic testing has gained attention for its potential to produce fine-grained information about examinees. The dependency among attributes (i.e., the attribute structure) is one of the most important factors affecting diagnostic test design. This article introduces four types of attribute structures and examines the effects of attribute number, structure, and level on classification accuracy and reliability. Results from the study help researchers and practitioners understand the factors that affect classification when specifying attributes, and design diagnostic tests that provide accurate information about examinees.
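
To make the idea of an attribute structure concrete, the sketch below enumerates which attribute profiles a given structure permits; the prerequisite relations shown are illustrative, not the four structures from the article.

# Sketch: how an attribute structure (prerequisite relations among
# attributes) restricts the permissible attribute profiles.
from itertools import product

def permissible_profiles(n_attributes, prerequisites):
    """Enumerate attribute profiles consistent with the given structure.

    prerequisites: iterable of (a, b) pairs meaning attribute a must be
    mastered before attribute b can be mastered.
    """
    return [profile for profile in product((0, 1), repeat=n_attributes)
            if all(profile[a] >= profile[b] for a, b in prerequisites)]

# A linear structure A0 -> A1 -> A2 permits only 4 of the 2^3 = 8 profiles:
# (0,0,0), (1,0,0), (1,1,0), (1,1,1)
print(permissible_profiles(3, [(0, 1), (1, 2)]))

# An independent (unstructured) design permits all 8 profiles
print(permissible_profiles(3, []))

Fewer permissible profiles mean fewer classes to separate, which is one route by which the attribute structure affects classification accuracy.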


International Journal of Testing | 2016

Evaluation of Model Fit in Cognitive Diagnosis Models

Jinxiang Hu; M. David Miller; Anne Corinne Huggins-Manley; Yi-Hsin Chen

Cognitive diagnosis models (CDMs) estimate student ability profiles using latent attributes. Model fit to the data needs to be ascertained in order to determine whether inferences from CDMs are valid. This study investigated the usefulness of several popular model fit statistics for detecting CDM misfit, including relative fit indices (AIC, BIC, and CAIC) and absolute fit indices (RMSEA2, abs(fcor), and max(χ²jj′)). These fit indices were assessed under different CDM settings with respect to Q-matrix misspecification and CDM misspecification. Results showed that the relative fit indices selected the correct DINA model most of the time and selected the correct G-DINA model well across most conditions. The absolute fit indices rejected the true DINA model if the Q-matrix was misspecified in any way, and rejected the true G-DINA model whenever the Q-matrix was under-specified. RMSEA2 could be artificially low when the Q-matrix was over-specified.
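
The relative fit indices in the study are simple functions of a model's maximized log-likelihood, its number of free parameters, and the sample size; smaller values indicate better relative fit. A minimal sketch with placeholder values (not the study's actual results):

import numpy as np

def relative_fit(log_lik, n_params, n_examinees):
    """AIC, BIC, and CAIC for a fitted CDM."""
    aic = -2 * log_lik + 2 * n_params
    bic = -2 * log_lik + n_params * np.log(n_examinees)
    caic = -2 * log_lik + n_params * (np.log(n_examinees) + 1)
    return {"AIC": aic, "BIC": bic, "CAIC": caic}

# Hypothetical comparison of a DINA fit and a G-DINA fit to the same data
print(relative_fit(log_lik=-5423.1, n_params=40, n_examinees=1000))
print(relative_fit(log_lik=-5395.8, n_params=96, n_examinees=1000))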


Educational and Psychological Measurement | 2018

Retrofitting Diagnostic Classification Models to Responses from IRT-Based Assessment Forms

Ren Liu; Anne Corinne Huggins-Manley; Okan Bulut

Developing a diagnostic tool within the diagnostic measurement framework is the optimal approach to obtaining multidimensional, classification-based feedback on examinees. However, end users may seek to obtain diagnostic feedback from existing item responses to assessments designed under the classical test theory or item response theory frameworks. Retrofitting diagnostic classification models to existing assessments designed under other psychometric frameworks could be a plausible approach to obtaining more actionable scores or to understanding more about the constructs themselves. This study (a) discusses the possibility and problems of retrofitting, (b) proposes a step-by-step retrofitting framework, and (c) explores the information one can gain from retrofitting through an empirical application example. While retrofitting may not always be an ideal approach to diagnostic measurement, this article aims to invite discussion by presenting the possibility, challenges, process, and product of retrofitting.
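
As a concrete point of reference, the sketch below shows the item response function of the DINA model, one common diagnostic classification model that might be retrofitted to existing responses; the Q-matrix row and slip/guess values are illustrative.

def dina_prob(alpha, q, slip, guess):
    """P(correct) for attribute profile `alpha` on an item with Q-matrix row `q`.

    eta = 1 only if the examinee has mastered every attribute the item
    requires; P(correct) = (1 - slip)**eta * guess**(1 - eta).
    """
    eta = all(a >= qk for a, qk in zip(alpha, q))
    return 1 - slip if eta else guess

# An item requiring attributes 1 and 3 (of 3), with slip = .1 and guess = .2
q_row = (1, 0, 1)
for alpha in [(0, 0, 0), (1, 0, 1), (1, 1, 1)]:
    print(alpha, dina_prob(alpha, q_row, slip=0.1, guess=0.2))

Retrofitting amounts to positing a Q-matrix for items that were never written with attributes in mind, which is precisely where the problems discussed in the article arise.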


Structural Equation Modeling | 2017

Assessing the Sensitivity of Weighted Least Squares Model Fit Indexes to Local Dependence in Item Response Theory Models

Anne Corinne Huggins-Manley; HyunSuk Han

Given the relationships of item response theory (IRT) models to confirmatory factor analysis (CFA) models, IRT model misspecifications might be detectable through model fit indexes commonly used in categorical CFA. The purpose of this study is to investigate the sensitivity of mean- and variance-adjusted weighted least squares (WLSMV)-based root mean square error of approximation, comparative fit index, and Tucker–Lewis index model fit indexes to IRT models that are misspecified due to local dependence (LD). It was found that WLSMV-based fit indexes have some functional relationships to the parameter estimate bias that LD causes in 2-parameter logistic models. Continued exploration of these functional relationships, and development of LD-detection methods based on them, could hold much promise for providing IRT practitioners with global information on violations of local independence.
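
For reference, the three fit indexes under study take the following standard forms for a target model T against a baseline model 0 with sample size N; the WLSMV versions substitute the mean- and variance-adjusted test statistic for \chi^2:

\mathrm{RMSEA} = \sqrt{\frac{\max(\chi^2_T - df_T,\ 0)}{df_T (N - 1)}}

\mathrm{CFI} = 1 - \frac{\max(\chi^2_T - df_T,\ 0)}{\max(\chi^2_0 - df_0,\ \chi^2_T - df_T,\ 0)}

\mathrm{TLI} = \frac{\chi^2_0 / df_0 - \chi^2_T / df_T}{\chi^2_0 / df_0 - 1}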


Structural Equation Modeling | 2018

Models for Semiordered Data to Address Not Applicable Responses in Scale Measurement

Anne Corinne Huggins-Manley; James Algina; Sherry Zhou

The purpose of this study is to develop and evaluate unidimensional models that can handle semiordered data within scale items (i.e., items with multiple ordered response categories and one additional nominal response category). We apply the models to scale data with not applicable (NA) responses and compare model performance to conditions in which NA responses are treated as missing and ignored. We also conduct a small simulation study based on the operational study to evaluate the parameter recovery of the models under the operational conditions. Findings indicate that the proposed models show promise for (a) reducing standard errors of trait estimates for persons who select NA responses, (b) reducing nonresponse bias in trait estimates for persons who select NA responses, and (c) providing substantive information to practitioners about the nature of the relationship between NA selection and the trait being measured.
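
One plausible form for such a semiordered model (a sketch, not necessarily the authors' exact specification) keeps a partial-credit-style kernel for the ordered categories while letting the NA category enter as a free nominal category:

% Ordered categories k = 0, ..., K_j follow a GPCM-style kernel
P(X_j = k \mid \theta) \propto \exp(k \, \alpha_j \theta + c_{jk})

% The NA category receives its own slope and intercept
P(X_j = \mathrm{NA} \mid \theta) \propto \exp(a_{j,\mathrm{NA}} \theta + c_{j,\mathrm{NA}})

Under this kind of specification, the slope on the NA category carries the substantive information about how NA selection relates to the measured trait.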


International Journal of Testing | 2018

Exploring a Source of Uneven Score Equity across the Test Score Range

Anne Corinne Huggins-Manley; Yuxi Qiu; Randall D. Penfield

Score equity assessment (SEA) refers to an examination of the population invariance of equating across two or more subpopulations of test examinees. Previous SEA studies have shown that score equity may hold for examinees scoring in particular test score ranges but not for examinees scoring in other ranges. No studies to date have examined why score equity can be inconsistent across the score range of some tests. The purpose of this study is to explore one source of uneven subpopulation score equity across the score range of a test. It is hypothesized that the difficulty of anchor items displaying differential item functioning (DIF) is directly related to the score location at which issues of score inequity are observed. The simulation study supports the hypothesis that the difficulty of DIF items has a systematic impact on the uneven nature of conditional score equity.
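
The hypothesized mechanism can be written in Rasch-style notation (illustrative, not the article's own): uniform DIF of size \beta on anchor item j shifts that item's response function for the focal group,

P(X_j = 1 \mid \theta, \mathrm{focal}) = \frac{\exp(\theta - b_j - \beta)}{1 + \exp(\theta - b_j - \beta)}

and the between-group difference in expected item scores is largest for examinees with \theta near b_j, which is why inequity would concentrate at the corresponding region of the score scale.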


Educational and Psychological Measurement | 2018

Evaluating the Accuracy of the Empirical Item Characteristic Curve Preequating Method in the Presence of Test Speededness

Yuxi Qiu; Anne Corinne Huggins-Manley

This study aimed to assess the accuracy of the empirical item characteristic curve (EICC) preequating method in the presence of test speededness. The simulation design considered the proportion of speededness, the speededness point, the speededness rate, the proportion of missing responses on speeded items, sample size, and test length. After crossing all of the manipulated factors and normalizing the evaluation criteria (bias and root mean square difference [RMSD]) with regard to test length, the results revealed that (1) when test speededness was present, conversions from the EICC preequating method tended to be positively distorted; (2) no practically meaningful moderation effect of sample size was found on the relationship between test speededness and the accuracy of EICC preequating; and (3) the location of the speededness point was the driving factor in its impact on the accuracy of EICC preequating. Implications and suggestions are discussed.
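
The kind of speededness manipulated in the simulation can be illustrated with a small generator: past a speededness point, a fraction of examinees respond at a depressed success rate on the remaining items. All design values below are placeholders, not the study's actual conditions.

import numpy as np

rng = np.random.default_rng(7)

def speeded_responses(theta, a, b, speed_point=0.8, prop_speeded=0.3,
                      speeded_rate=0.25):
    """Generate 2PL responses with end-of-test speededness."""
    n, J = len(theta), len(b)
    p = 1 / (1 + np.exp(-a * (theta[:, None] - b[None, :])))
    x = rng.binomial(1, p)
    start = int(speed_point * J)             # first speeded item
    speeded = rng.random(n) < prop_speeded   # which examinees are speeded
    x[np.ix_(speeded, np.arange(start, J))] = rng.binomial(
        1, speeded_rate, size=(speeded.sum(), J - start))
    return x

x = speeded_responses(rng.normal(size=500), a=np.ones(40),
                      b=np.linspace(-2, 2, 40))
print(x.shape, x[:, -8:].mean())  # depressed mean on end-of-test items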


Communications in Statistics - Simulation and Computation | 2017

Sensitivity analysis and choosing between alternative polytomous IRT models using Bayesian model comparison criteria

Marcelo A. da Silva; Jorge L. Bazán; Anne Corinne Huggins-Manley

Polytomous item response theory (IRT) models are used by specialists to score assessments and questionnaires whose items have multiple response categories. In this article, we study the performance of five model comparison criteria for comparing the fit of the graded response and generalized partial credit models to the same dataset when the choice between the two is unclear. A simulation study is conducted to analyze sensitivity to priors and to compare the performance of the criteria using the No-U-Turn Sampler algorithm under a Bayesian approach. The results were used to select a model for an application to mental health data.
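
Two criteria widely used in this setting, DIC and WAIC, can be computed directly from a matrix of pointwise posterior log-likelihoods. The sketch below is illustrative; the abstract does not restate which five criteria the article actually compares.

import numpy as np
from scipy.special import logsumexp

def waic(log_lik):
    """WAIC from an (S draws x N observations) matrix of log p(y_i | theta_s)."""
    S = log_lik.shape[0]
    lppd = np.sum(logsumexp(log_lik, axis=0) - np.log(S))
    p_waic = np.sum(log_lik.var(axis=0, ddof=1))
    return -2 * (lppd - p_waic)

def dic(log_lik):
    """DIC with the variance-based effective-parameter estimate
    p_D = var(deviance) / 2 (Gelman et al.)."""
    deviance = -2 * log_lik.sum(axis=1)
    return deviance.mean() + deviance.var(ddof=1) / 2

# With log-likelihood draws from, e.g., No-U-Turn Sampler fits of the graded
# response and generalized partial credit models, the model with the smaller
# criterion value would be preferred.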

Collaboration


Dive into Anne Corinne Huggins-Manley's collaborations.

Top Co-Authors

Ren Liu

University of Florida

Yuxi Qiu

University of Florida

Jinxiang Hu

National Institutes of Health