Computing Education Practice 2021

Analysis of an automatic grading system within first year Computer Science programming modules

 
 

Abstract


Reliable and pedagogically sound automated feedback and grading systems are highly coveted by educators. Automatic grading systems help ensure equitable grading of student submissions and provide timely feedback on the work. Many such systems evaluate submissions by comparing their outputs against test cases, while others check submissions with unit tests. The approach presented in this paper checks submissions against test cases but also analyses what the students actually wrote in their code. Assignment questions are constructed around the concepts that students are currently learning in lectures, and the patterns searched for in their submissions are based on these concepts. In this paper we show how to implement this approach effectively. We analyse the use of an automatic grading system within first year Computer Science programming modules and show that the system is straightforward to use and well suited to novice programmers, while providing automatic grading and feedback. Feedback received from students, demonstrators and lecturers shows the system is extremely beneficial: such systems give demonstrators more time to assist students during labs, and lecturers can provide instant feedback to students while keeping track of their progress and identifying where the gaps in students' knowledge are.
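The combination described in the abstract, running a submission against test cases while also searching its source for concept patterns, could be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the test cases, the concept names, and the regular expressions are all hypothetical stand-ins for assignment-specific configuration tied to the week's lecture topics.

```python
import re
import subprocess
import sys

# Hypothetical test cases for a "read two ints, print their sum" exercise:
# pairs of (stdin, expected stdout).
TEST_CASES = [("3 4\n", "7\n"), ("10 -2\n", "8\n")]

# Hypothetical concept patterns: here the assignment is assumed to require
# a for loop and a function definition, matching recent lecture material.
CONCEPT_PATTERNS = {
    "for loop": r"\bfor\b.+\bin\b",
    "function definition": r"\bdef\s+\w+\s*\(",
}

def run_test_cases(path, cases):
    """Run the submission once per case and count matching outputs."""
    passed = 0
    for stdin_data, expected in cases:
        result = subprocess.run(
            [sys.executable, path], input=stdin_data,
            capture_output=True, text=True, timeout=5,
        )
        if result.stdout == expected:
            passed += 1
    return passed

def check_concepts(path, patterns):
    """Search the submission's source text for each required concept."""
    with open(path) as f:
        source = f.read()
    return {name: bool(re.search(rx, source)) for name, rx in patterns.items()}

def grade(path):
    """Combine output-based and source-based checks into feedback lines."""
    tests = run_test_cases(path, TEST_CASES)
    concepts = check_concepts(path, CONCEPT_PATTERNS)
    feedback = [f"Passed {tests}/{len(TEST_CASES)} test cases."]
    for name, found in concepts.items():
        feedback.append(f"Uses a {name}." if found else f"Missing a {name}.")
    return feedback
```

A submission that produces the right outputs with, say, a hard-coded lookup table would pass the output checks but still be flagged as missing the required loop, which is the kind of gap in students' knowledge the source-analysis step is meant to surface.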

DOI 10.1145/3437914.3437973
Language English
