Bas A. de Leng
Maastricht University
Publications
Featured research published by Bas A. de Leng.
Medical Education | 2006
Astrid J. S. F. Visschers-Pleijers; Diana Dolmans; Bas A. de Leng; Ineke H. A. P. Wolfhagen; Cees P.M. van der Vleuten
Introduction: Collaborative learning, including problem‐based learning (PBL), is a powerful learning method. Group interaction plays a crucial role in stimulating student learning. However, few studies on learning processes in medical education have examined group interactions. Most studies on collaboration within PBL used self‐reported data rather than observational data. We investigated the following types of interactions in PBL tutorial groups: learning‐oriented interactions (exploratory questioning, cumulative reasoning and handling conflicts about knowledge); procedural interactions, and irrelevant/off‐task interactions.
Medical Teacher | 2009
Sören Huwendiek; Bas A. de Leng; Nabil Zary; Martin R. Fischer; Jorge G. Ruiz; Rachel Ellaway
Introduction: Although on-screen “virtual patients (VPs)” have been around for decades, it is only now that they are entering the mainstream, and as such they are new to most of the medical education community. There is significant variety in the form, function, and efficacy of different VPs and there is, therefore, a growing need to clarify and distinguish between them. This article seeks to clarify VP concepts and approaches using a typology of VP designs. Methods: The authors developed a VP design typology based on the literature, a review of existing VP systems, and their personal experience with VPs. This draft framework was refined using a Delphi study involving experts in the field, and was then validated by applying it in the description of different VP designs. Results: Nineteen factors were synthesized around four categories: general (title, description, language, identifier, provenance, and typical study time); educational (educational level, educational modes, coverage, and objectives); instructional design (path type, user modality, media use, narrative use, interactivity use, and feedback use); technical (originating system, format, integration, and dependence). Conclusion: This empirically derived VP design typology provides a common reference point for all those wishing to report on or study VPs.
Medical Education | 2009
Sören Huwendiek; Friedrich Reichert; Hans-Martin Bosse; Bas A. de Leng; Cees van der Vleuten; Martin Haag; Georg F. Hoffmann; Burkhard Tönshoff
Objectives: This study aimed to examine what students perceive as the ideal features of virtual patient (VP) design in order to foster learning, with a special focus on clinical reasoning.
Medical Education | 2006
Bas A. de Leng; Diana Dolmans; Arno M. M. Muijtjens; Cees van der Vleuten
Objective: To investigate the effects of a virtual learning environment (VLE) on group interaction and consultation of information resources during the preliminary phase, self‐study phase and reporting phase of the problem‐based learning process in an undergraduate medical curriculum.
Medical Teacher | 2013
Sören Huwendiek; Cecilia Duncker; Friedrich Reichert; Bas A. de Leng; Diana Dolmans; Cees van der Vleuten; Martin Haag; Georg F. Hoffmann; Burkhard Tönshoff
Context: E-learning resources, such as virtual patients (VPs), can be more effective when they are integrated in the curriculum. To gain insights that can inform guidelines for the curricular integration of VPs, we explored students’ perceptions of scenarios with integrated and non-integrated VPs aimed at promoting clinical reasoning skills. Methods: During their paediatric clerkship, 116 fifth-year medical students were given at least ten VPs embedded in eight integrated scenarios and as non-integrated add-ons. The scenarios differed in the sequencing and alignment of VPs and related educational activities, tutor involvement, number of VPs, relevance to assessment and involvement of real patients. We sought students’ perceptions on the VP scenarios in focus group interviews with eight groups of 4–7 randomly selected students (n = 39). The interviews were recorded, transcribed and analysed qualitatively. Results: The analysis resulted in six themes reflecting students’ perceptions of important features for effective curricular integration of VPs: (i) continuous and stable online access, (ii) increasing complexity, adapted to students’ knowledge, (iii) VP-related workload offset by elimination of other activities, (iv) optimal sequencing (e.g.: lecture – 1 to 2 VP(s) – tutor-led small group discussion – real patient), (v) optimal alignment of VPs and educational activities, and (vi) inclusion of VP topics in assessment. Conclusions: The themes appear to offer starting points for the development of a framework to guide the curricular integration of VPs. Their impact needs to be confirmed by studies using quantitative controlled designs.
Medical Education | 2010
Sören Huwendiek; Bas A. de Leng
Context and setting: Virtual patients (VPs) are used increasingly in medical education, especially to teach clinical reasoning. Under the electronic virtual patients (eVIP) Project (http://www.virtualpatients.eu), which is co-funded by the European Union, 320 VPs will be developed and repurposed for different cultures, languages and educational settings. Both the design and the curricular integration of VPs (blended learning) seem to be essential for their success. Why the idea was necessary: To date, no standardised instruments to evaluate the design and curricular integration of VPs have been published. Extensive translational research at the practitioner level could add substantially to the limited knowledge we have about the affordances and limitations of VP design and curricular integration with regard to the teaching of clinical reasoning. What was done: In order to enable adequate comparisons between the designs and curricular integration of VPs in different institutions, we developed four instruments with a special focus on clinical reasoning: (i) a checklist enabling reviewers such as teachers and authors to characterise the design of a VP in detail; (ii) a questionnaire assessing students’ experiences with VPs for learning clinical reasoning skills; (iii) a checklist enabling reviewers to characterise the curricular integration of VPs in detail, and (iv) a questionnaire assessing students’ experiences with the curricular integration of VPs in relation to clinical reasoning skills. The following sources informed the development of the VP design instruments: (i) literature on strategies to teach clinical reasoning; (ii) literature on the design of teaching cases, and (iii) the results of a focus group study exploring students’ opinions on the ideal design of VPs for learning clinical reasoning skills.
The resulting student questionnaire consists of 14 questions clustered into the following five main categories: (i) authenticity of patient encounter and consultation; (ii) professional approach in the consultation; (iii) coaching during consultation; (iv) learning effect, and (v) overall judgement. The following sources informed the development of the VP integration instruments: (i) literature on a community of inquiry model; (ii) a blended learning framework; (iii) criteria for categorising didactic scenarios; (iv) results of a focus group study exploring students’ opinions on the ideal curricular integration of VPs for learning clinical reasoning skills, and (v) literature concerning educational strategies to teach clinical diagnostic reasoning. The resulting student VP integration questionnaire consists of 20 questions, clustered into five main categories: (i) teaching presence; (ii) cognitive presence; (iii) social presence; (iv) learning effect, and (v) overall judgement. The comments resulting from the review of the instruments by the eVIP Project partners were used to refine them. The instruments were then tested on the target groups, further refined and again tested and refined. Responses are given on a 5-point Likert scale (1 = strongly disagree, 5 = strongly agree). Each instrument also includes open-ended questions. The multilingual instruments are available via the eVIP website (http://www.virtualpatients.eu/resources/evaluation-tool-kit/). Evaluation of results and impact: The first studies using the above-mentioned instruments indicate that the instruments are suitable for comparing and improving the design and curricular integration of VPs across national borders. We are looking forward to receiving feedback from others who have used the instruments.
Medical Teacher | 2015
Sören Huwendiek; Bas A. de Leng; Andrzej A. Kononowicz; Romy Kunzmann; Arno M. M. Muijtjens; Cees van der Vleuten; Georg F. Hoffmann; Burkhard Tönshoff; Diana Dolmans
Abstract Background: Virtual patients (VPs) are increasingly used to train clinical reasoning. So far, no validated evaluation instruments for VP design are available. Aims: We examined the validity of an instrument for assessing the perception of VP design by learners. Methods: Three sources of validity evidence were examined: (i) Content was examined based on theory of clinical reasoning and an international VP expert team. (ii) The response process was explored in think-aloud pilot studies with medical students and in content analyses of free text questions accompanying each item of the instrument. (iii) Internal structure was assessed by exploratory factor analysis (EFA) and inter-rater reliability by generalizability analysis. Results: Content validity was reasonably supported by the theoretical foundation and the VP expert team. The think-aloud studies and analysis of free text comments supported the validity of the instrument. In the EFA, using 2547 student evaluations of a total of 78 VPs, a three-factor model showed a reasonable fit with the data. At least 200 student responses are needed to obtain a reliable evaluation of a VP on all three factors. Conclusion: The instrument has the potential to provide valid information about VP design, provided that many responses per VP are available.
Simulation in Healthcare: Journal of the Society for Simulation in Healthcare | 2009
Bas A. de Leng; Arno M. M. Muijtjens; Cees van der Vleuten
Introduction: This study investigates the effects of working face to face in small groups on the processes that occur when students elaborate on computer-based simulated cases. Methods: We performed a randomized controlled experiment that was designed to measure the effect of “social context” (triads versus individuals) on students’ perceptions of the elaboration process and on the time spent on the different parts of the computer case. We sought students’ perceptions using a questionnaire that was administered to all participating students (N = 47) and we examined the actions of the students working in triads (N = 12) and individually (N = 11) by analyzing the log files of the computer case. Results: The results demonstrated no significant effect of social context on the degree of elaboration of the computer case. Conclusions: Working with computer-based simulated cases in small groups rather than individually is not in itself enough to increase the scope and depth of students’ elaboration of the cases.
Medical Teacher | 2017
Sören Huwendiek; Friedrich Reichert; Cecilia Duncker; Bas A. de Leng; Cees van der Vleuten; Arno M. M. Muijtjens; Hans-Martin Bosse; Martin Haag; Georg F. Hoffmann; Burkhard Tönshoff; Diana Dolmans
Abstract Background: It remains unclear which item format would best suit the assessment of clinical reasoning: context-rich single best answer questions (crSBAs) or key-feature problems (KFPs). This study compared KFPs and crSBAs with respect to students’ acceptance, their educational impact, and psychometric characteristics when used in a summative end-of-clinical-clerkship pediatric exam. Methods: Fifth-year medical students (n = 377) took a computer-based exam that included 6–9 KFPs and 9–20 crSBAs which assessed their clinical reasoning skills, in addition to an objective structured clinical exam (OSCE) that assessed their clinical skills. Each KFP consisted of a case vignette and three key features using a “long-menu” question format. We explored students’ perceptions of the KFPs and crSBAs in eight focus groups and analyzed statistical data of 11 exams. Results: Compared to crSBAs, KFPs were perceived as more realistic and difficult, providing a greater stimulus for the intense study of clinical reasoning, and were generally well accepted. The statistical analysis revealed no difference in difficulty, but KFPs proved to be more reliable and efficient than crSBAs. The correlation between the two formats was high, while KFPs correlated more closely with the OSCE score. Conclusions: KFPs in a long-menu format seem to bring about a positive educational effect without psychometric drawbacks.
BMC Medical Education | 2012
Willem J. M. Koops; Cees van der Vleuten; Bas A. de Leng; Luc H. E. H. Snoeckx
Background: Medical students in clerkships are continuously confronted with real and relevant patient problems. To support clinical problem-solving skills, students perform a Critical Appraisal of a Topic (CAT) task, often resulting in a paper. Because such a paper may contain errors, students could profit from discussion with peers, leading to paper revision. Previous work shows that active peer discussion in a Computer Supported Collaborative Learning (CSCL) environment is associated with positive student perceptions of subjective knowledge improvement, and that high student activity during discussions in a CSCL environment goes together with more task-focussed discussion, reflecting higher levels of knowledge construction. However, it remains unclear whether high discussion activity influences students’ decisions to revise their CAT papers. The aim of this research is to examine whether students who revise their critical appraisal papers after discussion in a CSCL environment show more task-focussed activity and discuss critical appraisal topics more intensively than students who do not revise their papers. Methods: Forty-seven medical students, stratified into subgroups, participated in a structured asynchronous online discussion of individually written CAT papers on self-selected clinical problems. The discussion was structured around three critical appraisal topics. After the discussion, the students could revise their papers. For analysis purposes, all students’ postings were blinded and analysed by the investigator, who was unaware of student characteristics and of whether or not a paper had been revised. Postings were counted and analysed by an independent rater and assigned to outside activity, non-task-focussed activity or task-focussed activity. Additionally, postings were assigned to one of the three critical appraisal topics. Analysis results were compared between revised and unrevised papers. Results: Twenty-four papers (51.6%) were revised after the online discussion.
The discussions of the revised papers showed significantly higher numbers of postings, more task-focussed activity, and more postings about two of the critical appraisal topics: “appraisal of the selected article(s)” and “relevant conclusion regarding the clinical problem”. Conclusion: A CSCL environment can support medical students in the execution and critical appraisal of authentic tasks in the clinical workplace. Revision of CAT papers appears to be related to discussion activity, more specifically to high task-focussed activity on the critical appraisal topics.