Learning Disability Quarterly | 2019

The Potential for Automated Text Evaluation to Improve the Technical Adequacy of Written Expression Curriculum-Based Measurement

Abstract

Written-expression curriculum-based measurement (WE-CBM) is used to screen students with or at risk for learning disabilities (LD) for academic supports and to monitor their progress; however, WE-CBM has limitations in technical adequacy, construct representation, and scoring feasibility as grade level increases. The purpose of this study was to examine the structural and external validity of automated text evaluation with Coh-Metrix versus traditional WE-CBM scoring for narrative writing samples (7-min duration) collected in fall and winter from 144 second- through fifth-grade students. Seven algorithms were used to train models that predicted the holistic quality of the writing samples from Coh-Metrix scores and from traditional WE-CBM scores, as evidence of structural validity; external validity was then evaluated via correlations with rated quality on other writing samples. Key findings were that (a) structural validity coefficients were higher for Coh-Metrix than for traditional WE-CBM, whereas external validity coefficients were similar; (b) external validity coefficients were higher than those reported in prior WE-CBM studies using holistic or analytic ratings as a criterion measure; and (c) there were few differences in performance across the predictive algorithms. Overall, the results highlight the potential use of automated text evaluation for WE-CBM scoring. Implications for screening and progress monitoring are discussed.
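
The abstract does not name the seven algorithms or the specific Coh-Metrix indices, so the following Python sketch is purely illustrative: it uses synthetic stand-in data and a hypothetical set of regression algorithms to show the general style of analysis described, in which models trained on text-evaluation features predict rated holistic quality and are compared via cross-validated correlations (analogous to the validity coefficients the study reports).

```python
# Illustrative sketch only: synthetic data and an assumed set of algorithms,
# not the study's actual seven models or Coh-Metrix features.
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_samples, n_features = 144, 10                   # mirrors the 144-student sample
X = rng.normal(size=(n_samples, n_features))      # stand-in for Coh-Metrix scores
y = 0.6 * X[:, 0] + rng.normal(size=n_samples)    # stand-in holistic quality ratings

models = {
    "linear": LinearRegression(),
    "ridge": Ridge(alpha=1.0),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "svr": SVR(kernel="rbf"),
}

for name, model in models.items():
    # Cross-validated predictions guard against overfitting in a small sample;
    # the correlation between predicted and rated quality is the kind of
    # coefficient used to compare scoring approaches and algorithms.
    preds = cross_val_predict(model, X, y, cv=5)
    r = np.corrcoef(preds, y)[0, 1]
    print(f"{name}: r = {r:.2f}")
```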

Volume 42
Pages 117–128
DOI 10.1177/0731948718803296
Language English
Journal Learning Disability Quarterly

Full Text