Assessing Writing | 2019

An investigation of the text features of discrepantly-scored ESL essays: A mixed methods study

 

Abstract


In this explanatory sequential mixed methods study I investigated which textual features are common in L2 essays scored inconsistently by two raters. In the first, quantitative phase, I compiled a set of 109 test essays on one topic that received either two matching scores or two discrepant scores. I used automated tools (i.e., Coh-Metrix and the Authorial Voice Analyzer) to measure the essays' complexity, fluency, and interactional metadiscourse. For accuracy and impressionistic measures, two raters coded the essays' linguistic errors and rated them for apparent length and handwriting quality. With this comprehensive set of textual measures representing different levels of language use, I performed a discriminant function analysis and found that seven textual features (i.e., spelling, syntactic diversity, noun phrase density, negation density, voice strength, conceptual cohesion, and apparent length) predicted score discrepancy with a 94.4% accuracy rate. In the second, qualitative phase, I used introspective and retrospective data to understand which textual features raters found distinctive in the discrepantly scored essays and whether they referred to the seven statistically identified features. Raters attended to spelling errors, apparent length, authorial voice, and syntactic diversity, but did not consider the three other features uncovered in the quantitative phase (i.e., negation density, conceptual cohesion, and noun phrase density).

Volume 39
Pages 1-13
DOI 10.1016/j.asw.2018.10.003
Language English
Journal Assessing Writing