Proceedings of the 50th ACM Technical Symposium on Computer Science Education | 2019

An Item Response Theory Evaluation of a Language-Independent CS1 Knowledge Assessment

Abstract


Tests play an important role in computing education, measuring achievement and differentiating between learners with varying knowledge. But tests may have flaws that confuse learners, or may be too difficult or too easy, making test scores less valid and reliable. We analyzed the Second Computer Science 1 (SCS1) concept inventory, a widely used assessment of introductory computer science (CS1) knowledge, for such flaws. The prior validation study of the SCS1 used Classical Test Theory and was unable to determine whether differences in scores were a result of question properties or learner knowledge. We extended this validation by modeling question difficulty and learner knowledge separately with Item Response Theory (IRT) and performing expert review on problematic questions. We found that three questions measured knowledge unrelated to the rest of the SCS1, and four questions were too difficult for our sample of 489 undergraduates from two universities.
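For context, IRT models a learner's probability of answering an item correctly as a function of the learner's latent ability and the item's parameters, which is what lets item difficulty and learner knowledge be estimated separately, unlike in Classical Test Theory. The sketch below illustrates the common two-parameter logistic (2PL) formulation; the parameter values are hypothetical, and the specific IRT model and fitting procedure used in this paper are described in the full text.

```python
import numpy as np

def p_correct(theta, a, b):
    """Two-parameter logistic (2PL) item characteristic curve.

    theta: learner's latent ability
    a:     item discrimination (how sharply the item separates learners)
    b:     item difficulty (ability level at which P(correct) = 0.5)
    """
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Hypothetical parameters for illustration only (not fitted SCS1 values).
abilities = np.linspace(-3, 3, 7)           # latent ability scale
easy = p_correct(abilities, a=1.5, b=-1.0)  # moderately easy item
hard = p_correct(abilities, a=1.5, b=2.0)   # item too hard for most learners

for theta, pe, ph in zip(abilities, easy, hard):
    print(f"theta={theta:+.1f}  P(easy)={pe:.2f}  P(hard)={ph:.2f}")
```

Under this kind of model, an item with low discrimination (small a) is one whose responses barely depend on ability, which is how an IRT analysis can flag questions measuring something unrelated to the rest of a test, while a large difficulty parameter (b) flags questions too hard for the sampled population.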

DOI 10.1145/3287324.3287370
