Renske de Kleijn
Utrecht University
Publications
Featured research published by Renske de Kleijn.
Studies in Higher Education | 2012
Renske de Kleijn; M. Tim Mainhard; Paulien C. Meijer; Albert Pilot; Mieke Brekelmans
Master's thesis supervision is a complex task given the two-fold goal of the thesis (learning and assessment). An important aspect of supervision is the supervisor–student relationship. This quantitative study (N = 401) investigates how perceptions of the supervisor–student relationship are related to three dependent variables: final grade, perceived supervisor contribution to learning, and student satisfaction. The supervisor–student relationship was conceptualised by means of two interpersonal dimensions: control and affiliation. The results indicated that a greater degree of affiliation was related to higher outcome measures. Control was positively related to perceived supervisor contribution to learning and satisfaction, but, for satisfaction, a ceiling effect occurred. The relation between control and the final grade was U-shaped, indicating that an average level of perceived control is related to the lowest grades. The results imply that it is important for supervisors to be perceived as highly affiliated and that control should be carefully balanced.
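A U-shaped relation of this kind is typically tested by adding a quadratic term to a regression model. The sketch below illustrates that approach in Python; it is not the authors' analysis code, and the file and column names (supervision_survey.csv, control, affiliation, final_grade) are hypothetical.

```python
# Minimal illustration of testing a U-shaped relation by adding a
# quadratic term; not the authors' code, variable names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("supervision_survey.csv")  # hypothetical data file

# Centre the predictor so the linear and quadratic terms are less collinear.
df["control_c"] = df["control"] - df["control"].mean()

model = smf.ols(
    "final_grade ~ control_c + I(control_c ** 2) + affiliation",
    data=df,
).fit()
print(model.summary())

# A significant positive coefficient on the squared term is consistent with
# a U-shape: predicted grades are lowest around average perceived control.
```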
Assessment & Evaluation in Higher Education | 2013
Renske de Kleijn; M. Tim Mainhard; Paulien C. Meijer; Mieke Brekelmans; Albert Pilot
A growing body of research has investigated student perceptions of written feedback in higher education coursework, but few studies have considered feedback perceptions in one-on-one and face-to-face contexts such as master's thesis projects. In this article, student perceptions of feedback are explored in the context of the supervision of master's thesis projects, drawing on review studies of effective feedback in coursework situations. Online questionnaires were administered to collect data from three cohorts of master's students who were either working on their thesis or had recently finished it (N = 1016). The results of the study indicate that students perceive the focus of feedback in terms of a focus on task and self-regulation; they perceive the goal-relatedness of feedback in terms of feed up (goal-setting) and feed back and feed forward (how am I going and where to next?); and elaboration of feedback is perceived in terms of positive and negative feedback. Furthermore, students who perceive the feedback to be positive, and to provide information on how they are going and what next steps to take, are the most satisfied with their supervision and perceive that they learn the most from their supervisor. The findings are discussed in relation to findings in coursework settings, and are explained using goal orientation theories.
Medical Teacher | 2013
Renske de Kleijn; Rianne A. M. Bouwmeester; Magda Ritzen; S. Ramaekers; Harold V.M. van Rijen
Background: Formative assessments intend to provide feedback on student performance in order to improve and accelerate learning. Several studies have indicated that students using online formative assessments (OFAs) have better results on their exams. Aims: The present study aims to provide insight into students' reasons for using or not using available OFAs. Method: Three OFAs with feedback were available in a second-year undergraduate course in physiology for biomedical sciences students (N = 147). First, students received an open questionnaire about why they did (not) complete the first two OFAs. Based on these data, a closed questionnaire was developed and distributed among students. Exploratory factor analysis (EFA) was applied. Results: The results indicate reasons why students do (not) use the OFAs. The EFA for using the OFAs indicated three factors, which were interpreted as collecting (1) feed up, (2) feed forward, and (3) feed back information. The main reasons for not using the OFAs were lack of time and having completed the questions before. Conclusions: Students' reasons for using OFAs can be described in terms of collecting feed up, feed forward, and feed back information, and students' reasons for not using OFAs can be student-, teacher-, or mode-related.
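An exploratory factor analysis of closed-questionnaire items of this kind can be run in a few lines of Python. The sketch below is not the study's code; the file name, item columns, and the choice of an oblique rotation are assumptions.

```python
# Illustrative EFA of questionnaire items on reasons for using OFAs;
# not the study's code, file and column names are hypothetical.
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("ofa_reasons.csv")  # one column per questionnaire item

# Extract three factors (feed up, feed forward, feed back) with an oblique
# rotation, since the factors are likely to be correlated.
efa = FactorAnalyzer(n_factors=3, rotation="oblimin")
efa.fit(items)

loadings = pd.DataFrame(efa.loadings_, index=items.columns)
print(loadings.round(2))          # which items load on which factor
print(efa.get_factor_variance())  # variance explained per factor
```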
Assessment & Evaluation in Higher Education | 2017
Frans J. Prins; Renske de Kleijn; Jan van Tartwijk
A rubric for research theses was developed, based on the manual of the American Psychological Association, to be used as an assessment tool for teachers and students. The aim was to make students aware of what is expected, familiarise them with the criteria, and help them interpret teacher and peer feedback. In two studies, it was examined whether students use and value these functions. In the first study, a rubric was provided to 105 Educational Sciences students working on their bachelor's thesis. Questionnaire data indicated that students did value the rubric for the intended functions, although rubric use was not related to ability. In a panel interview, teachers stated that the number of proficiency levels should be increased in order to distinguish adequately between good and excellent students, and that a criterion concerning the student's role during supervision should be added. Therefore, in the second study, 11 teachers were interviewed about their motives for giving high grades and about the supervision process. This led to an extra criterion concerning the student's role during supervision and an additional proficiency level to assess excellent performance. It is argued that an adequate course organisation is a precondition for the rubric's effectiveness.
Medical Teacher | 2017
Rianne Poot; Renske de Kleijn; Harold V.M. van Rijen; Jan van Tartwijk
Abstract Background: A reported problem with e-learning is sustaining students' motivation. We propose a framework explaining to what extent an e-learning task is motivating. This framework includes students' perceived Value of the task, Competence in executing the task, Autonomy over how to carry out the task, and Relatedness. Methods: To test this framework, students generated items in an online environment and answered questions developed by their fellow students. Motivation was measured by analyzing engagement with the task, with an open-ended questionnaire about engagement, and with the Motivated Strategies for Learning Questionnaire (MSLQ). Results: Students developed 59 questions and answered them 1776 times. Differences between students who did or did not engage in the task are explained by the degree of self-regulation, time management, and effort regulation students report. There was a significant relationship between student engagement and achievement after controlling for previous academic achievement. Conclusions: This study proposes a way of explaining the motivational value of an e-learning task by looking at students' perceived competence, autonomy, value of the task, and relatedness. Student-generated items are considered to have high task value and help students perceive relatedness with their peers. With the right instruction, students feel competent to engage in the task.
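Relating engagement to achievement while controlling for previous academic achievement is a standard hierarchical-regression step. The sketch below only illustrates that step and is not the study's analysis; the file and column names are hypothetical.

```python
# Illustration of testing whether task engagement predicts achievement
# beyond prior achievement; not the study's code, names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("engagement_study.csv")

baseline = smf.ols("exam_score ~ prior_gpa", data=df).fit()
full = smf.ols("exam_score ~ prior_gpa + engagement", data=df).fit()

# Compare explained variance with and without engagement in the model.
print("R2 baseline:       ", round(baseline.rsquared, 3))
print("R2 with engagement:", round(full.rsquared, 3))
print(full.summary())
```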
Assessment & Evaluation in Higher Education | 2018
Jonne Vulperhorst; Christel Lutz; Renske de Kleijn; Jan van Tartwijk
Abstract To refine selective admission models, we investigate which measure of prior achievement has the best predictive validity for academic success in university. We compare the predictive validity of three core high school subjects to the predictive validity of high school grade point average (GPA) for academic achievement in a liberal arts university programme. Predictive validity is compared between the Dutch pre-university (VWO) and the International Baccalaureate (IB) diploma. Moreover, we study how final GPA is predicted by prior achievement after students complete their first year. Path models were separately run for VWO (n = 314) and IB (n = 113) graduates. For VWO graduates, high school GPA explained more variance than core subject grades in first-year GPA and final GPA. For IB graduates, we found the opposite. Subsequent path models showed that after students’ completion of the first year, final GPA is best predicted by a combination of first-year GPA and high school GPA. Based on our small-scale results, we cautiously challenge the use of high school GPA as the norm for measuring prior achievement. Which measure of prior achievement best predicts academic success in university may depend on the diploma students enter with.
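The comparison of predictive validity can be pictured, in simplified form, as fitting separate models per diploma type and comparing explained variance. The sketch below is a simplification of the paper's path models, not the authors' code; the data file and column names are assumptions.

```python
# Simplified illustration (not the paper's path models): comparing how much
# variance in first-year GPA is explained by high school GPA versus three
# core subject grades, fitted separately per diploma type.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("admissions.csv")  # hypothetical data file

for diploma, group in df.groupby("diploma"):  # e.g. "VWO" vs "IB"
    gpa_model = smf.ols("first_year_gpa ~ hs_gpa", data=group).fit()
    core_model = smf.ols(
        "first_year_gpa ~ grade_math + grade_english + grade_science",
        data=group,
    ).fit()
    print(diploma,
          "R2 high school GPA:", round(gpa_model.rsquared, 3),
          "R2 core subjects:", round(core_model.rsquared, 3))
```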
The International Journal of Qualitative Methods | 2018
Renske de Kleijn; Anouschka van Leeuwen
Arguably, quality assurance is more complicated in qualitative studies than in quantitative studies. Several procedures for quality assurance are available, among which the audit procedure as proposed by Akkerman, Admiraal, Brekelmans, and Oost. In this article, we reflect on this procedure based on our own experiences as well as based on a review of studies in which the audit procedure was employed. More specifically, we discuss (1) the choice for an auditor and the relationship between auditee and auditor and (2) the function of the audit. We propose that future auditees (a) explicitly report on the auditee–auditor relationship, (b) explicitly report on the function of their audit, and (c) have their audit trail documents available for review. With this methodological position paper, we aim to contribute to the current call to make social science studies and their conclusions more transparent and thereby to enhance the quality of qualitative studies.
Computers in Education | 2018
R. Filius; Renske de Kleijn; Sabine G. Uijl; Frans J. Prins; Harold V.M. van Rijen; Diederick E. Grobbee
Abstract This study is focused on how peer feedback in SPOCs (Small Private Online Courses) can effectively lead to deep learning. Promoting deep learning in online courses, such as SPOCs, is often a challenge. We aimed for deep learning by reinforcing 'feedback dialogue' as a scalable intervention. Students provided peer feedback as a dialogue, both individually and as a group. They were instructed to rate each other's feedback, which was aimed at deep learning. Questionnaire data from 41 students of a master's course in epidemiology were used to measure, for each feedback assignment, the extent to which deep learning was perceived. The feedback received by students who scored extremely high or low on the questionnaire was analyzed in order to find out which features of the feedback led to deep learning. In addition, students were interviewed to retrieve information about the underlying mechanisms. The results support the view that peer feedback instruction and peer feedback rating lead to peer feedback dialogues that, in turn, promote deep learning in SPOCs. The value of peer feedback appears to result predominantly from the dialogue it triggers, rather than from the feedback itself. Especially helpful for students is the constant attention to how one provides peer feedback: through instruction, through having to rate feedback, and therefore through repeatedly having to reflect. The dialogue is strengthened because students question feedback from peers, in contrast to feedback from their instructor. As a result, they continue to think longer and deeper, which enables deep learning.
Higher Education Research & Development | 2015
Renske de Kleijn; Paulien C. Meijer; Mieke Brekelmans; Albert Pilot
Medical science educator | 2016
Rianne A. M. Bouwmeester; Renske de Kleijn; Olle ten Cate; Harold V.M. van Rijen; Hendrika E. Westerveld