Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Liesje Coertjens is active.

Publication


Featured research published by Liesje Coertjens.


Journal of Psychoeducational Assessment | 2012

Longitudinal Measurement Invariance of Likert-Type Learning Strategy Scales: Are We Using the Same Ruler at Each Wave?

Liesje Coertjens; Vincent Donche; Sven De Maeyer; Gert Vanthournout; Peter Van Petegem

Whether or not learning strategies change during the course of higher education is an important topic in the Student Approaches to Learning field. However, the literature offers little empirical evaluation of whether the instruments in this research domain measure equivalently over time. Therefore, this study details the procedure of longitudinal measurement invariance testing for self-report Likert-type scales, using the case of learning strategies. The sample consists of 245 University College students who filled out the Inventory of Learning Styles—Short Version three times. Using the WLSMV estimator to take into account the ordinal nature of the data, a series of models with progressively more stringent constraints was estimated using Mplus 6.1. The results indicate that longitudinal measurement invariance holds for all but two learning strategy scales. The implications for longitudinal analysis using scales with varying degrees of measurement invariance are discussed.
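The progressively more stringent models mentioned in the abstract follow the standard invariance-testing sequence; as a sketch in conventional notation (not quoted from the paper; i indexes indicators, k thresholds, t waves):

```latex
y^{*}_{it} = \lambda_{it}\,\eta_{t} + \varepsilon_{it},
\qquad
\begin{aligned}
&\text{configural: } \lambda_{it},\ \tau_{ikt} \text{ freely estimated per wave,}\\
&\text{metric: } \lambda_{i1} = \lambda_{i2} = \lambda_{i3},\\
&\text{scalar: additionally } \tau_{ik1} = \tau_{ik2} = \tau_{ik3},
\end{aligned}
```

where y* is the latent response underlying each ordinal Likert item and the thresholds τ map it onto the observed categories. Each model is nested in the previous one, so the added constraints can be tested by comparing model fit.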


PLOS ONE | 2013

Modeling change in learning strategies throughout higher education: a multi-indicator latent growth perspective.

Liesje Coertjens; Vincent Donche; Sven De Maeyer; Gert Vanthournout; Peter Van Petegem

The change in learning strategies during higher education is an important topic of research in the Student Approaches to Learning field. Although studies on this topic are increasingly longitudinal, analyses have continued to rely primarily on traditional statistical methods. The present research is innovative in its use of multi-indicator latent growth analysis to more accurately estimate the general and differential development in learning strategy scales. Moreover, the predictive strength of the latent growth models is estimated. The sample consists of one cohort of Flemish University College students, 245 of whom participated in the three measurement waves by filling out the processing and regulation strategies scales of the Inventory of Learning Styles – Short Version. Independent-samples t-tests revealed that the longitudinal group is a non-random subset of students starting University College. For each scale, a multi-indicator latent growth model is estimated using Mplus 6.1. Results suggest that, on average, during higher education, students persisting in their studies in a non-delayed manner shift towards high-quality learning and away from undirected and surface-oriented learning. Moreover, students from the longitudinal group are found to vary in their initial levels but, unexpectedly, not in their change over time. Although the growth models fit the data well, significant residual variances in the latent factors remain.


Assessment in Education: Principles, Policy & Practice | 2016

Validity of comparative judgement to assess academic writing: examining implications of its holistic character and building on a shared consensus

Tine van Daal; Marije Lesterhuis; Liesje Coertjens; Vincent Donche; Sven De Maeyer

Recently, comparative judgement has been introduced as an alternative method for scoring essays. Although this method is promising in terms of obtaining reliable scores, empirical evidence concerning its validity is lacking. The current study examines implications resulting from two critical assumptions underpinning the use of comparative judgement: its holistic character and the notion that the final rank order reflects a shared consensus on what makes for a good essay. Judges' justifications for their decisions are qualitatively analysed to gain insight into the dimensions of academic writing they take into account. The results show that most arguments are directly related to the competence description. However, judges also draw on their expertise to judge the quality of essays. Additionally, judges differ in how they conceptualise writing quality and in the extent to which they tap into their own expertise. Finally, this study explores the diverging conceptualisations of misfitting judges.


Applied Psychological Measurement | 2018

Scale Separation Reliability: What does it mean in the context of comparative judgment?

San Verhavert; Sven De Maeyer; Vincent Donche; Liesje Coertjens

Comparative judgment (CJ) is an alternative method for assessing competences based on Thurstone's law of comparative judgment. Assessors are asked to compare pairs of students' work (representations) and judge which one is better on a certain competence. These judgments are analyzed using the Bradley–Terry–Luce model, resulting in logit estimates for the representations. In this context, the Scale Separation Reliability (SSR), which originates in Rasch modeling, is typically used as a reliability measure. However, to the authors' knowledge, it has never been systematically investigated whether the meaning of the SSR can be transferred from Rasch to CJ. As the meaning of this reliability measure is an important question for both assessment theory and practice, the current study addresses it. A meta-analysis is performed on 26 CJ assessments. For every assessment, split-halves are formed based on assessor. The rank orders of the whole assessment and the halves are correlated and compared with SSR values using Bland–Altman plots. Comparing the correlation between the halves of an assessment with the SSR of the whole assessment shows that the SSR is a good measure of split-half reliability. Comparing the SSR of one of the halves with the correlation between the two respective halves shows that the SSR can also be interpreted as an interrater correlation. Regarding the SSR as expressing a correlation with the truth, the results are mixed.
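For reference, the two quantities at the heart of this study can be written out in their standard forms (conventional formulations, not quoted from the paper): the Bradley–Terry–Luce probability that representation i wins a comparison against j given quality logits θ, and the SSR as used in Rasch modeling:

```latex
P(i \succ j) = \frac{1}{1 + e^{-(\theta_i - \theta_j)}},
\qquad
\mathrm{SSR} = \frac{\hat{\sigma}^{2}_{\theta} - \overline{SE^{2}}}{\hat{\sigma}^{2}_{\theta}},
```

where \(\hat{\sigma}^{2}_{\theta}\) is the observed variance of the estimated logits and \(\overline{SE^{2}}\) the mean squared standard error of estimation; the SSR thus estimates the proportion of observed variance attributable to true differences between representations.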


PLOS ONE | 2017

To what degree does the missing-data technique influence the estimated growth in learning strategies over time? A tutorial example of sensitivity analysis for longitudinal data

Liesje Coertjens; Vincent Donche; Sven De Maeyer; Gert Vanthournout; Peter Van Petegem

Longitudinal data are almost always burdened with missingness. However, in educational and psychological research, there is a large discrepancy between methodological recommendations and research practice. The former suggest applying sensitivity analysis in order to assess the robustness of the results under varying assumptions about the mechanism generating the missing data. In research practice, however, participants with missing data are usually discarded through listwise deletion. To help bridge the gap between methodological recommendations and applied research in the educational and psychological domain, this study provides a tutorial example of sensitivity analysis for latent growth analysis. The example data concern students' changes in learning strategies during higher education. One cohort of students at a Belgian university college was asked to complete the Inventory of Learning Styles–Short Version in three measurement waves. A substantial number of students did not participate on each occasion. Change over time in student learning strategies was assessed using eight missing-data techniques, which assume different mechanisms for missingness. The results indicated that, for some learning strategy subscales, growth estimates differed between the models. Guidelines for reporting the results of sensitivity analysis are synthesised and applied to the results from the tutorial example.
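As a toy illustration of why the choice of missing-data technique can move a growth estimate (hypothetical simulated data, not the study's dataset; the variable names `w1`..`w3` and all parameter values are invented), the sketch below compares a complete-case (listwise-deletion) estimate of growth with one based on a simple regression imputation, under dropout that depends on the wave-1 score:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 500

# Simulate three waves of a learning-strategy score; growth depends on baseline,
# so selective dropout of low-baseline students will bias a complete-case estimate.
w1 = rng.normal(3.0, 0.5, n)
w2 = w1 + rng.normal(0.2, 0.3, n)
w3 = w2 + rng.normal(0.2, 0.3, n) + 0.4 * (w1 - 3.0)
df = pd.DataFrame({"w1": w1, "w2": w2, "w3": w3})

# MAR dropout: students with lower wave-1 scores are more likely to skip wave 3.
p_drop = 1 / (1 + np.exp(4 * (df["w1"] - 3.0)))
df.loc[rng.random(n) < p_drop, "w3"] = np.nan

# Technique 1 -- listwise deletion: growth estimated from complete cases only.
complete = df.dropna()
growth_listwise = (complete["w3"] - complete["w1"]).mean()

# Technique 2 -- regression imputation: predict missing w3 from observed w2,
# using a line fitted on the complete cases, then estimate growth on all rows.
slope, intercept = np.polyfit(complete["w2"], complete["w3"], 1)
filled = df["w3"].fillna(intercept + slope * df["w2"])
growth_imputed = (filled - df["w1"]).mean()

print(f"listwise: {growth_listwise:.3f}  imputed: {growth_imputed:.3f}")
```

Reporting both numbers side by side, as the tutorial recommends for its eight techniques, makes the sensitivity of the conclusion to the missingness assumption visible; a full sensitivity analysis would of course use principled methods such as FIML or multiple imputation rather than this single-draw sketch.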


Archive | 2018

Oranges and Apples? Using Comparative Judgement for Reliable Briefing Paper Assessment in Simulation Games

Pierpaolo Settembri; Roos Van Gasse; Liesje Coertjens; Sven De Maeyer

Achieving a fair and rigorous assessment of participants in simulation games represents a major challenge. Not only does the difficulty apply to the actual negotiation part, but it also extends to the written assignments that typically accompany a simulation. For one thing, if different raters are involved, it is important to ensure that differences in severity do not affect the grades. Recently, comparative judgement (CJ) has been introduced as a method allowing for team-based grading. This chapter discusses in particular the potential of comparative judgement for assessing briefing papers from 84 students. Four assessors completed 622 comparisons in the Digital Platform for the Assessment of Competences (D-PAC) tool. Results indicate a reliability level of 0.71 for the final rank order, which required a time investment of around 10.5 hours from the team of assessors. Moreover, there was no evidence of bias towards the most important roles in the simulation game. The study also details how the obtained rank orders were translated into grades, ranging from 11 to 17 out of 20. These elements showcase CJ's advantage in reaching adequate reliability levels for briefing papers in an efficient manner.


Frontiers in Education | 2017

The Complexity of Assessing Student Work Using Comparative Judgment: The Moderating Role of Decision Accuracy

Tine van Daal; Marije Lesterhuis; Liesje Coertjens; Marie-Thérèse van de Kamp; Vincent Donche; Sven De Maeyer

Nowadays, comparative judgment is used to assess competences. Judges compare two pieces of student work and judge which of the two is better regarding the competence assessed. Using these pairwise comparison data, students' work is scaled according to its quality. Since student work is highly information-loaded and heterogeneous in nature, this raises the question of whether judges can handle this type of complex judgment. However, research into the complexity of comparative judgment and its relation to decision accuracy is currently lacking. Therefore, this study initiates a theoretical framework on the complexity of comparative judgment and relates it to decision accuracy. Based on this framework, two hypotheses are formulated and their plausibility is examined. The first hypothesis states that the distance between two pieces of student work in the rank order (rank-order distance) is negatively related to experienced complexity, irrespective of decision accuracy. In contrast, hypothesis 2 expects decision accuracy to moderate the relation between rank-order distance and experienced complexity: a negative relation is expected for accurate decisions, while inaccurate decisions are assumed to result in higher experienced complexity than accurate decisions, irrespective of rank-order distance. In both hypotheses, judges are expected to vary in mean experienced complexity as well as in the strength of the expected relationship between rank-order distance and experienced complexity. Using an information-theoretic approach, both hypotheses are translated into statistical models and their relative fit is assessed. All analyses are replicated on three samples. Samples 1 and 2 comprise CJ data on the assessment of writing, while sample 3 contains pairwise comparison data on the assessment of visual arts. Results unambiguously confirm the moderating role of decision accuracy (hypothesis 2). Inaccurate decisions are experienced as more complex than accurate decisions, irrespective of rank-order distance. Meanwhile, for accurate decisions, rank-order distance is negatively related to experienced complexity. In line with expectations, differences between judges are found in mean experienced complexity and in the strength of the relationship between rank-order distance and experienced complexity. Suggestions for further theory development are formulated based on the results of this study.


Education Research International | 2015

Motives of masters for the teaching profession: development of the MMTP questionnaire

Wil Meeus; Marlies Baeten; Liesje Coertjens

Increasing teacher shortages provide incentives for conducting research into the motives of future teachers aspiring to work in education. The present study builds on previous research into motivation for entering the teaching profession. Given the shortage of studies with direct empirical foundations, multiphase factor analyses, and large respondent groups, the present research focuses on developing the Motives of Masters for the Teaching Profession (MMTP) questionnaire while meeting these methodological criteria. Master's students described their motivations for entering the teaching profession. Confirmatory factor analysis was carried out to confirm the factor structure produced by the exploratory factor analysis. On the basis of content and statistical arguments, a 7-factor solution was obtained and a 35-item questionnaire was produced. Future cross-contextual research on the MMTP should attempt to improve the generalizability of the questionnaire.


Higher Education | 2010

Instructional development for teachers in higher education: impact on teaching approach

Ann Stes; Liesje Coertjens; Peter Van Petegem


International Journal of Science and Mathematics Education | 2010

Do Schools Make a Difference in Their Students' Environmental Attitudes and Awareness? Evidence from PISA 2006

Liesje Coertjens; Jelle Boeve-de Pauw; Sven De Maeyer; Peter Van Petegem

Collaboration


Dive into Liesje Coertjens's collaboration.

Top Co-Authors

Eva Kyndt

Katholieke Universiteit Leuven
