Jimmie Leppink
Maastricht University
Publication
Featured research published by Jimmie Leppink.
Behavior Research Methods | 2013
Jimmie Leppink; Fred Paas; Cees van der Vleuten; Tamara van Gog; Jeroen J. G. van Merriënboer
According to cognitive load theory, instructions can impose three types of cognitive load on the learner: intrinsic load, extraneous load, and germane load. Proper measurement of the different types of cognitive load can help us understand why the effectiveness and efficiency of learning environments may differ as a function of instructional formats and learner characteristics. In this article, we present a ten-item instrument for the measurement of the three types of cognitive load. Principal component analysis on data from a lecture in statistics for PhD students (n = 56) in psychology and health sciences revealed a three-component solution, consistent with the types of load that the different items were intended to measure. This solution was confirmed by a confirmatory factor analysis of data from three lectures in statistics for different cohorts of bachelor students in the social and health sciences (ns = 171, 136, and 148), and received further support from a randomized experiment with university freshmen in the health sciences (n = 58).
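A minimal sketch (not the authors' code) of the exploratory step described above: a principal component analysis of responses to a ten-item questionnaire, retaining three components and inspecting the loadings. The file name, item names, and component labels are hypothetical placeholders.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical data: one row per respondent, one column per item (item01..item10),
# each rated on the scale used in the questionnaire.
responses = pd.read_csv("cognitive_load_items.csv")
items = [f"item{i:02d}" for i in range(1, 11)]

# Standardise the items so the PCA operates on the correlation matrix.
z = StandardScaler().fit_transform(responses[items])

pca = PCA(n_components=3)
pca.fit(z)

# Component loadings: correlations between items and components.
# The column labels follow the three load types the items were written to target;
# which component matches which label would depend on the actual loadings.
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
print(pd.DataFrame(loadings, index=items,
                   columns=["intrinsic", "extraneous", "germane"]))
print("explained variance ratio:", pca.explained_variance_ratio_)
```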
Perspectives on Medical Education | 2015
Jimmie Leppink; Angelique van den Heuvel
Cognitive Load Theory (CLT) has started to find more applications in medical education research. Unfortunately, misconceptions such as lower cognitive load always being beneficial to learning and the continued use of dated concepts and methods can result in improper applications of CLT principles in medical education design and research. This review outlines how CLT has evolved and presents a synthesis of current-day CLT principles in a holistic model for medical education design. This model distinguishes three dimensions: task fidelity: from literature (lowest) through simulated patients to real patients (highest); task complexity: the number of information elements; and instructional support: from worked examples (highest) through completion tasks to autonomous task performance (lowest). These three dimensions together constitute three steps to proficient learning: (I) start with high support on low-fidelity low-complexity tasks and gradually fade that support as learners become more proficient; (II) repeat I for low-fidelity but higher-complexity tasks; and (III) repeat I and II in that order at subsequent levels of fidelity. The numbers of fidelity levels and complexity levels within fidelity levels needed depend on the aims of the course, curriculum or individual learning trajectory. This paper concludes with suggestions for future research based on this model.
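A schematic sketch (not taken from the paper) of the three-step sequencing logic described above: for each fidelity level, and within it each complexity level, instructional support is gradually faded from worked examples to autonomous task performance. The specific level names are illustrative placeholders.

```python
FIDELITY = ["literature", "simulated patients", "real patients"]    # low -> high
COMPLEXITY = ["few elements", "some elements", "many elements"]     # low -> high
SUPPORT = ["worked example", "completion task", "autonomous task"]  # high -> low


def training_sequence():
    """Yield learning tasks in the order implied by steps I-III."""
    for fidelity in FIDELITY:              # step III: repeat at each fidelity level
        for complexity in COMPLEXITY:      # step II: repeat for higher complexity
            for support in SUPPORT:        # step I: fade support within a level
                yield {"fidelity": fidelity,
                       "complexity": complexity,
                       "support": support}


for task in training_sequence():
    print(task)
```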
Medical Education | 2015
Alexandre Lafleur; Luc Côté; Jimmie Leppink
Some characteristics of assessments exert a strong influence on how students study. Understanding these pre‐assessment learning effects is of key importance to the designing of medical assessments that foster students’ reasoning abilities. Perceptions of the task demands of an assessment significantly influence students’ cognitive processes. However, why and how certain tasks positively ‘drive’ learning remain unknown. Medical tasks can be assessed as coherent meaningful whole tasks (e.g. examining a patient based on his complaint to find the diagnosis) or can be divided into simpler part tasks (e.g. demonstrating the physical examination of a pre‐specified disease). Comparing the benefits of whole‐task and part‐task assessments in a randomised controlled experiment could guide the design of ‘assessments for learning’.
Perspectives on Medical Education | 2015
Jimmie Leppink
A substantial part of medical education research focuses on learning in teams (e.g., departments, problem-based learning groups) or centres (e.g., clinics, institutions) that are followed over time. Individual students or employees sharing the same team or centre tend to be more similar in learning than students or employees from different teams or centres. In other words, when students or employees are nested within teams or centres, there is a within-team or within-centre correlation that should be taken into account in the analysis of data obtained from individuals in these teams or centres. Further, when individuals are measured several times on the same performance (or other) variable, these repeated measurements tend to be correlated, that is: we are dealing with an intra-individual correlation that should be taken into account when analyzing data obtained from these individuals. In such a study context, many researchers resort to methods that cannot account for intra-team and/or intra-individual correlation and this may result in incorrect conclusions with regard to effects and relations of interest. This comparison paper presents the benefits which result from adopting a proper multilevel perspective on the conceptualization and estimation of effects and relations of interest.
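A minimal sketch (not the author's code) of the multilevel perspective argued for above, using statsmodels: a linear mixed model with a random intercept for team and repeated measurements nested within individual students. The variable and file names are hypothetical placeholders.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per student per measurement occasion,
# with columns: score, occasion, student_id, team_id.
data = pd.read_csv("team_performance_long.csv")

# Random intercept for team (groups), plus a variance component for students
# nested within teams, so both the within-team and the intra-individual
# correlation are accounted for.
model = smf.mixedlm(
    "score ~ occasion",
    data,
    groups=data["team_id"],
    re_formula="1",
    vc_formula={"student": "0 + C(student_id)"},
)
result = model.fit()
print(result.summary())
```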
Medical Teacher | 2016
Jimmie Leppink; Robbert Duvivier
During their course, medical students have to become proficient in a variety of competencies. For each of these competencies, educational design can use cognitive load theory to consider three dimensions: task fidelity: from literature (lowest) through simulated patients (medium) to real patients (highest); task complexity: the number of information elements in a learning task; and instructional support: from worked examples (highest) through completion tasks (medium) to autonomous task performance (lowest). One should integrate any competency into a medical curriculum such that training in that competency facilitates the students’ journey that starts from high instructional support on low-complexity low-fidelity learning tasks all the way to high-complexity tasks in high-fidelity environments carried out autonomously. This article presents twelve tips on using cognitive load theory or, more specifically, a set of four tips for each of task fidelity, task complexity, and instructional support, to achieve that aim.
Educational Research and Evaluation | 2012
Jimmie Leppink; Nick J. Broers; Tjaart Imbos; Cees van der Vleuten; Martijn P. F. Berger
This study investigated the effects of different teaching and learning methods for statistics for 2 levels of prior knowledge on cognitive load, propositional knowledge, and conceptual understanding. Teaching methods were whether or not to provide students with propositional information, and learning strategies were self-explaining the learning material and explaining in pairs. The results indicate that prior knowledge facilitates propositional knowledge development and leads to differential effects of teaching and learning methods on conceptual understanding: Only low prior knowledge students profit from additional information in the learning task and/or explaining in pairs. An implication of these findings is that low prior knowledge students should be guided into the subject matter by means of working in pairs on learning tasks that comprise additional information. Once students have developed more knowledge of the subject matter, they should be stimulated to work individually on learning tasks that do not comprise additional information.
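A minimal sketch (not the authors' analysis code) of testing the kind of differential effect described above: a two-way ANOVA with an interaction between prior knowledge level and instructional method on conceptual understanding. The variable and file names are hypothetical placeholders.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data: one row per student, with columns
# understanding (score), prior ("low"/"high"), method (e.g. "pairs+info", "individual").
data = pd.read_csv("statistics_course_outcomes.csv")

model = smf.ols("understanding ~ C(prior) * C(method)", data=data).fit()
# The interaction term captures the differential effect of method by prior knowledge.
print(sm.stats.anova_lm(model, typ=2))
```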
Perspectives on Medical Education | 2016
Jimmie Leppink; Patricia O’Sullivan; Kal Winston
The overall purpose of the ‘Statistical Points and Pitfalls’ series is to help readers and researchers alike increase awareness of how to use statistics and of why and how we fall into inappropriate choices or interpretations. We hope to help readers understand common misconceptions and give clear guidance on how to avoid common pitfalls by offering simple tips to improve the reporting of quantitative research findings. Each entry discusses a commonly encountered inappropriate practice and alternatives from a pragmatic perspective with minimal mathematics involved. We encourage readers to share comments on or suggestions for this section on Twitter, using the hashtag: #mededstats.
BMC Medical Education | 2015
Esther M. Bergman; Anique B. H. de Bruin; Marc A. T. M. Vorstenbosch; J.G.M. Kooloos; Ghita C. W. M. Puts; Jimmie Leppink; Albert Scherpbier; Cees van der Vleuten
Background: It is generally assumed that learning in context increases performance. This study investigates the relationship between the characteristics of a paper-patient context (relevance and familiarity), the mechanisms through which the cognitive dimension of context could improve learning (activation of prior knowledge, elaboration and increasing retrieval cues), and test performance. Methods: A total of 145 medical students completed a pretest of 40 questions, of which half were with a patient vignette. One week later, they studied musculoskeletal anatomy in the dissection room without a paper-patient context (control group) or with (ir)relevant-(un)familiar context (experimental groups), and completed a cognitive load scale. Following a short delay, the students completed a posttest. Results: Surprisingly, our results show that students who studied in context did not perform better than students who studied without context. This finding may be explained by an interaction of the participants’ expertise level, the nature of anatomical knowledge and students’ approaches to learning. A relevant-familiar context only reduced the negative effect of learning the content in context. Our results suggest discouraging the introduction of an uncommon disease to illustrate a basic science concept. Higher self-perceived learning scores predict higher performance. Interestingly, students performed significantly better on the questions with context in both tests, possibly due to a ‘framing effect’. Conclusions: Since studies focusing on the physical and affective dimensions of context have also failed to find a positive influence of learning in a clinically relevant context, further research seems necessary to refine our theories around the role of context in learning.
Legal and Criminological Psychology | 2009
Tom Smeets; Jimmie Leppink; Marko Jelicic; Harald Merckelbach
Purpose. The Gudjonsson Suggestibility Scale (GSS; Gudjonsson, 1984, 1997) is a well-established forensic tool for measuring interrogative suggestibility. However, one restriction of this tool is that it requires an extensive testing procedure. The present study examined whether shorter versions of the GSS yield results similar to those of the original GSS procedure. Methods. One group (N = 20) was given a shortened version of the GSS that consisted of an immediate recall test and the specific questions. GSS scores in this group were compared with those in a group (N = 20) that had the standard procedure, which includes a retention interval and immediate and delayed recall tests. A third group (N = 20) was administered a shortened procedure in which the 20 GSS questions immediately followed the GSS story. In the fourth group (N = 20), participants were given the retention interval, but no recall tests were administered. Results. ANOVA showed no differences in GSS scores amongst the four groups. Post hoc power analyses indicated that these non-significant findings were not the result of a power problem and that larger sample sizes would be expected to yield comparable results. Further analyses showed that neither the retention delay nor the recall tests affected suggestibility scores. Conclusions. These results suggest that shortened procedures for administering the GSS may be employed in situations where time is a key factor.
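A minimal sketch (not the authors' code) of the between-group comparison described above: a one-way ANOVA of suggestibility scores across the four administration procedures. The file, column, and group names are hypothetical placeholders.

```python
import pandas as pd
from scipy.stats import f_oneway

# Hypothetical data: one row per participant, with columns
# group ("short", "standard", "immediate", "no_recall") and total_suggestibility.
data = pd.read_csv("gss_scores.csv")

# Split the scores by administration procedure and compare the four groups.
groups = [g["total_suggestibility"].values
          for _, g in data.groupby("group")]
f_stat, p_value = f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```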
Medical Teacher | 2017
Marie-Laurence Tremblay; Alexandre Lafleur; Jimmie Leppink; Diana Dolmans
Context: Simulated clinical immersion (SCI) is used in undergraduate healthcare programs to expose the learner to real-life situations in authentic simulated clinical environments. For novices, the environment in which the simulation occurs can be distracting and stressful, hence potentially compromising learning. Objectives: This study aims to determine whether SCI (with environment) imposes greater extraneous cognitive load and stress on undergraduate pharmacy students than simulated patients (SP) (without environment). It also aims to explore how features of the simulated environment influence students’ perception of learning. Methods: In this mixed-methods study, 143 undergraduate pharmacy students experienced both SCI and SP in a crossover design. After the simulations, participants rated their cognitive load and emotions. Thirty-five students met in focus groups to explore their perception of learning in simulation. Results: Intrinsic and extraneous cognitive load and stress scores in SCI were significantly but modestly higher compared to SP. Qualitative findings reveal that the physical environment in SCI generated more stress and affected students’ focus. In SP, students concentrated on clinical reasoning. SCI stimulated a focus on data collection but impeded in-depth problem-solving processes. Conclusion: The physical environment in simulation influences what and how students learn. SCI was reported as more cognitively demanding than SP. Our findings emphasize the need for the development of adapted instructional design guidelines in simulation for novices.
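A minimal sketch (not the authors' analysis code) of the within-subject comparison implied by the crossover design above: each student rates extraneous cognitive load after both simulation formats, so the two conditions are compared with a paired test. The file and column names are hypothetical placeholders.

```python
import pandas as pd
from scipy.stats import ttest_rel

# Hypothetical wide-format data: one row per student, with the extraneous load
# rating given after the SCI session and after the SP session.
ratings = pd.read_csv("simulation_load_ratings.csv")

t_stat, p_value = ttest_rel(ratings["extraneous_sci"], ratings["extraneous_sp"])
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
print("mean difference (SCI - SP):",
      (ratings["extraneous_sci"] - ratings["extraneous_sp"]).mean())
```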