Marije Lesterhuis
University of Antwerp
Publications
Featured research published by Marije Lesterhuis.
Assessment in Education: Principles, Policy & Practice | 2016
Tine van Daal; Marije Lesterhuis; Liesje Coertjens; Vincent Donche; Sven De Maeyer
Recently, comparative judgement has been introduced as an alternative method for scoring essays. Although this method is promising in terms of obtaining reliable scores, empirical evidence concerning its validity is lacking. The current study examines the implications of two critical assumptions underpinning the use of comparative judgement: its holistic character, and the idea that the final rank order reflects a shared consensus on what makes a good essay. Judges’ justifications for their decisions are qualitatively analysed to gain insight into the dimensions of academic writing they take into account. The results show that most arguments relate directly to the competence description. However, judges also draw on their own expertise to judge the quality of essays. Additionally, judges differ in how they conceptualise writing quality and in the extent to which they tap into their own expertise. Finally, this study explores the diverging conceptualisations of misfitting judges.
Communications in computer and information science | 2015
Anneleen Mortier; Marije Lesterhuis; Peter Vlerick; Sven De Maeyer
Comparative judgment (CJ) has recently emerged as an alternative method for assessing competences and performances (e.g. Pollitt, 2012). In this method, several assessors independently compare representations produced by different students and decide each time which one demonstrates the better performance of the given competence. This study investigated students’ attitudes (honesty, relevancy and trustworthiness) towards feedback based on this method. Additionally, it studied the importance of specific tips in CJ-based feedback.
Frontiers in Education | 2018
Renske Bouwer; Marije Lesterhuis; Pieterjan Bonne; Sven De Maeyer
In higher education, writing tasks are often accompanied by criteria indicating key aspects of writing quality. Sometimes, these criteria are also illustrated with examples of varying quality. It is, however, not yet clear how students learn from shared criteria and examples. This research aims to investigate the learning effects of two different instructional approaches: applying criteria to examples and comparative judgment. International business students were instructed to write a five-paragraph essay, preceded by a 30-minute peer assessment in which they evaluated the quality of a range of example essays. Half of the students evaluated the quality of the example essays using a list of teacher-designed criteria (criteria condition; n = 20), the other group evaluated by pairwise comparisons (comparative judgment condition; n = 20). Students were also requested to provide peer feedback. Results show that the instructional approach influenced the kind of aspects students commented on when giving feedback. Students in the comparative judgment condition provided relatively more feedback on higher-order aspects such as the content and structure of the text than students in the criteria condition. This was only the case for improvement feedback; for feedback on strengths there were no significant differences. Positive effects of comparative judgment on students’ own writing performance were only moderate and non-significant in this small sample. Although the transfer effects were inconclusive, this study nevertheless shows that comparative judgment can be as powerful as applying criteria to examples. Comparative judgment inherently activates students to engage with exemplars at a higher textual level and enables students to evaluate more example essays by comparison than by criteria.
Further research is needed on the long-term and indirect effects of comparative judgment, as it might influence students’ conceptualization of writing, without directly improving their writing performance.
Frontiers in Education | 2017
Tine van Daal; Marije Lesterhuis; Liesje Coertjens; Marie-Thérèse van de Kamp; Vincent Donche; Sven De Maeyer
Nowadays, comparative judgment is used to assess competences. Judges compare two pieces of student work and judge which of the two is better regarding the competence assessed. Using these pairwise comparison data, students’ work is scaled according to its quality. Since student work is information-dense and heterogeneous in nature, the question arises whether judges can handle this type of complex judgment. However, research into the complexity of comparative judgment and its relation with decision accuracy is currently lacking. Therefore, this study initiates a theoretical framework on the complexity of comparative judgment and relates it to decision accuracy. Based on this framework, two hypotheses are formulated and their plausibility is examined. The first hypothesis states that the distance between two pieces of student work in the rank order (rank-order distance) is negatively related to experienced complexity, irrespective of decision accuracy. In contrast, hypothesis 2 expects decision accuracy to moderate the relation between rank-order distance and experienced complexity. A negative relation is expected for accurate decisions. Meanwhile, inaccurate decisions are assumed to result in higher experienced complexity than accurate decisions, irrespective of rank-order distance. In both hypotheses, judges are expected to vary in mean experienced complexity as well as in the strength of the expected relationship between rank-order distance and experienced complexity. Using an information-theoretic approach, both hypotheses are translated into a statistical model and their relative fit is assessed. All analyses are replicated on three samples. Samples 1 and 2 comprise CJ data on the assessment of writing, while sample 3 contains pairwise comparison data on the assessment of visual arts. Results unambiguously confirm the moderating role of decision accuracy (hypothesis 2).
Inaccurate decisions are experienced as more complex than accurate decisions, irrespective of rank-order distance. Meanwhile, for accurate decisions, rank-order distance is negatively related to experienced complexity. In line with expectations, differences between judges are found in mean experienced complexity and in the strength of the relationship between rank-order distance and experienced complexity. Suggestions for further theory development are formulated based on the results of this study.
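The scaling step these abstracts describe (turning pairwise "which essay is better" decisions into a rank order of quality) is typically done with a Bradley–Terry-style model. The sketch below is a minimal illustration of that general technique, not the authors' actual implementation or software; the item indices and data are hypothetical.

```python
from collections import defaultdict

def bradley_terry(comparisons, n_items, iters=100):
    """Estimate quality scores from pairwise comparisons using the
    MM algorithm for the Bradley-Terry model (Hunter, 2004).

    comparisons: list of (winner, loser) item-index pairs.
    Returns a list of strengths, normalised to sum to n_items;
    a higher strength means higher estimated quality.
    """
    wins = defaultdict(int)   # wins[i] = comparisons item i won
    pairs = defaultdict(int)  # pairs[(i, j)] = times i and j met
    for w, l in comparisons:
        wins[w] += 1
        pairs[(min(w, l), max(w, l))] += 1

    p = [1.0] * n_items  # start with equal strengths
    for _ in range(iters):
        new_p = []
        for i in range(n_items):
            # Sum n_ij / (p_i + p_j) over every pair involving item i.
            denom = sum(
                n / (p[i] + p[j])
                for (a, b), n in pairs.items()
                for j in ((b,) if a == i else (a,) if b == i else ())
            )
            new_p.append(wins[i] / denom if denom else p[i])
        total = sum(new_p)
        p = [x * n_items / total for x in new_p]  # keep the scale fixed
    return p

# Hypothetical data: item 0 beats 1 and 2, item 1 beats 2.
scores = bradley_terry([(0, 1), (0, 2), (1, 2)], n_items=3)
```

With these comparisons the estimated strengths order the items 0 > 1 > 2, reproducing the rank order implied by the wins; with real CJ data each essay would appear in many comparisons by several judges.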
British Journal of Educational Psychology | 2018
Leen Catrysse; David Gijbels; Vincent Donche; Sven De Maeyer; Marije Lesterhuis; Piet Van den Bossche
Innovative Practices for Higher Education Assessment and Measurement (Elena Cano et al., eds.) | 2017
Marije Lesterhuis; San Verhavert; Liesje Coertjens; Vincent Donche; Sven De Maeyer
European Journal of Information Systems | 2018
Tanguy Coenen; Liesje Coertjens; Peter Vlerick; Marije Lesterhuis; Anneleen Mortier; Vincent Donche; Pieter Ballon; Sven De Maeyer
Tijdschrift voor hoger onderwijs | 2015
Marije Lesterhuis; Vincent Donche; Sven De Maeyer; Tine van Daal; Roos Van Gasse; Liesje Coertjens; San Verhavert; Anneleen Mortier; Tanguy Coenen; Peter Vlerick; Jan Vanhoof; Peter Van Petegem
Toetsrevolutie: naar een feedbackcultuur in het hoger onderwijs | 2018
Renske Bouwer; Maarten Goossens; Anneleen Mortier; Marije Lesterhuis; Sven De Maeyer
L1-Educational Studies in Language and Literature | 2018
Marije Lesterhuis; T. van Daal; R. Van Gasse; Liesje Coertjens; Vincent Donche