Jan-Maarten Luursema
Radboud University Nijmegen
Publications
Featured research published by Jan-Maarten Luursema.
American Journal of Surgery | 2013
Alexander Alken; Edward C.T.H. Tan; Jan-Maarten Luursema; Cornelia R. M. G. Fluit; Harry van Goor
BACKGROUND: The aim of this study was to examine the quality and quantity of feedback and instruction given by faculty members during an acute trauma surgery team training, using a newly designed observational feedback instrument.
METHODS: During the training, 11 operating teams, each consisting of 1 instructor coaching 2 trainees, were videotaped and audiotaped. Forty-five minutes of identical operating scenarios were reviewed and analyzed. Using the new observational feedback instrument, instances of feedback and instruction were recorded, along with the specificity of the information they contained and whether they addressed technical or nontechnical skills.
RESULTS: Instructors provided instruction more often (25.8 ± 10.6 times) than feedback (4.4 ± 3.5 times). Most feedback and instruction contained nonspecific or only moderately specific information and referred to technical skills. Instructors addressed communication skills more specifically.
CONCLUSIONS: Coaching by faculty members predominantly consisted of nonspecific instruction regarding technical skills. The observational feedback instrument enabled scoring of these coaching activities.
Journal of Surgical Education | 2015
Jan-Maarten Luursema; M.M. Rovers; Alexander Alken; Bas Kengen; Harry van Goor
BACKGROUND: Surgical training is moving away from the operating room toward simulation-based skills training facilities. This shift has led to the development of proficiency-based training courses in which expert performance data are used for feedback and assessment. However, few expert value data sets have been published, and no standard method for generating expert values has been adopted by the field.
METHODS: To investigate the effect of different proficiency value data sets on simulator training courses, we (1) compared 2 published expert performance data sets for the LapSim laparoscopic virtual-reality simulator (by van Dongen et al. and Heinrichs et al.) and (2) assessed the effect of using either set on LapSim training data obtained from 16 local residents in surgery and gynecology.
RESULTS: Across all simulator tasks, the experts consulted by van Dongen et al. performed better on motion efficiency, but not on duration or damage control. Applying both proficiency sets to training data collected during a basic laparoscopic skills simulator course, residents would have graduated on average 1.5 sessions earlier using the Heinrichs expert values than using the van Dongen expert values.
CONCLUSIONS: The selection of proficiency values for proficiency-based simulator training courses affects training length, skills assessment, and training costs. Standardized, well-controlled methods are necessary to create valid and reliable expert values for use in training and research.
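To make concrete how the choice of proficiency set determines when a resident graduates, here is a minimal Python sketch that applies per-metric thresholds to per-session simulator data. All metric names, values, and both threshold sets are hypothetical stand-ins; the paper's actual expert values and analysis code are not reproduced here.

```python
# Illustrative sketch only: metric names and all numbers are hypothetical
# stand-ins for the van Dongen and Heinrichs expert data sets.

def graduation_session(sessions, thresholds):
    """Return the first session (1-indexed) in which a resident meets
    every proficiency threshold, or None if proficiency is never reached."""
    for i, metrics in enumerate(sessions, start=1):
        if (metrics["duration_s"] <= thresholds["duration_s"]
                and metrics["path_length_m"] <= thresholds["path_length_m"]
                and metrics["tissue_damage"] <= thresholds["tissue_damage"]):
            return i
    return None

# One resident's per-session LapSim metrics (hypothetical values).
resident = [
    {"duration_s": 95, "path_length_m": 4.1, "tissue_damage": 3},
    {"duration_s": 70, "path_length_m": 3.2, "tissue_damage": 1},
    {"duration_s": 55, "path_length_m": 2.4, "tissue_damage": 0},
]

# Two hypothetical expert standards: one stricter, one more lenient.
van_dongen_like = {"duration_s": 60, "path_length_m": 2.5, "tissue_damage": 1}
heinrichs_like = {"duration_s": 75, "path_length_m": 3.5, "tissue_damage": 1}

print(graduation_session(resident, van_dongen_like))  # -> 3
print(graduation_session(resident, heinrichs_like))   # -> 2 (one session earlier)
```

With the more lenient thresholds the same training data yields graduation one session earlier, which is the mechanism behind the 1.5-session difference the study reports.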
International journal of continuing engineering education and life-long learning | 2004
Piet Kommers; Jan-Maarten Luursema; Steffan G.J. Rödel; Bob Geelkerken; Eelco Kunst
Virtual reality (VR) is becoming a serious candidate for a learning environment for complex skills such as vascular interventions. The diagnostics, dimensioning, and insertion of the endograft stent have been modelled as a decision-making process, which now faces implementation in a VR learning space. Beyond the topological and morphological aspects, it is orientation and navigation within earlier successful interventions that offer the opportunity for a competence-based learning process before the candidate surgeon enters the clinical stage.
Surgical Endoscopy and Other Interventional Techniques | 2018
W.M. IJgosse; H. van Goor; Jan-Maarten Luursema
BACKGROUND: Residents find it hard to commit to structured laparoscopic skills training. Serious gaming has been proposed as a solution, on the premise that it is effective and more motivating than traditional simulation. We establish construct validity for the laparoscopic serious game Underground by comparing laparoscopic simulator performance between a control group and an Underground training group.
METHODS: A four-session laparoscopic basic skills course is part of the medical master students' surgical internship at the Radboud University Medical Centre. Four cohorts, representing 107 participants, were assigned to either the Underground group or the control group. The control group trained on the FLS video trainer and the LapSim virtual reality simulator for four sessions. The Underground group played Underground for three sessions, followed by a transfer session on the FLS video trainer and the LapSim. To assess the effect of serious gameplay on performance on two validated laparoscopic simulators, initial performance on the FLS video trainer and the LapSim was compared between the control group (first session) and the Underground group (fourth session).
RESULTS: We chose task duration as a proxy for laparoscopic performance. The Underground group outperformed the control group on all three LapSim tasks: camera navigation, F(1) = 12.71, p < .01; instrument navigation, F(1) = 8.04, p < .01; and coordination, F(1) = 6.36, p = .01. There was no significant effect of playing Underground on performance on the FLS video trainer peg transfer task, F(1) = 0.28, p = .60.
CONCLUSIONS: We demonstrated skills transfer between a serious game and validated laparoscopic simulator technology. Serious gaming may become a valuable, cost-effective addition to the skills lab if transfer to the operating room can be established. Additionally, we discuss sources of transferable skills to help explain our findings and previous ones.
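The between-group statistics quoted above (e.g., F(1) = 12.71 for camera navigation) are the kind produced by a one-way ANOVA comparing the task durations of two groups. A minimal sketch of such a comparison, using simulated durations in place of the study's measurements (group sizes and all values are hypothetical):

```python
# Sketch of a one-way ANOVA on task duration for two groups, as reported
# in the abstract above. The data below are simulated placeholders; the
# study's raw measurements are not published here.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical camera-navigation durations in seconds.
control = rng.normal(loc=120, scale=25, size=54)      # first-session controls
underground = rng.normal(loc=100, scale=25, size=53)  # after three game sessions

f_stat, p_value = stats.f_oneway(control, underground)
print(f"F(1, {len(control) + len(underground) - 2}) = {f_stat:.2f}, p = {p_value:.3f}")
```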
American Journal of Surgery | 2018
Wouter M. IJgosse; Bas Kengen; Harry van Goor; Jan-Maarten Luursema
BACKGROUND: Creating and updating expert performance-based standards for simulators is labor intensive and requires the regular availability of expert surgeons. We investigated how peer performance-based standards compare with expert performance-based standards.
METHODS: One hundred medical students took part in a four-session laparoscopic basic skills simulator training course. Performance on the FLS video trainer tasks was compared between students who received feedback based on peer standards, students who received feedback based on expert standards, and students who received no feedback at all (control group).
RESULTS: No difference in performance between the two feedback groups was found. Compared with the control group, the feedback groups were 18-36% faster but made 52% more errors on the FLS video trainer tasks (U range [93.5-957], average p < .01).
CONCLUSIONS: We demonstrated that feedback based on peer standards is as effective as feedback based on expert standards. The observed trade-off between speed and errors is undesirable and warrants further investigation.
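The U statistics reported above correspond to Mann-Whitney U tests, a nonparametric comparison of two independent groups that does not assume normally distributed data. A minimal sketch, with simulated completion times standing in for the study's FLS video trainer data:

```python
# Sketch of a Mann-Whitney U test comparing task completion times between
# a feedback group and a control group. All values are simulated
# placeholders, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical peg-transfer completion times in seconds.
feedback_times = rng.normal(loc=80, scale=15, size=33)
control_times = rng.normal(loc=110, scale=15, size=34)

u_stat, p_value = stats.mannwhitneyu(feedback_times, control_times,
                                     alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3g}")
```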
American Journal of Surgery | 2017
Alexander Alken; Jan-Maarten Luursema; Mariska Weenk; Simon T.K. Yauw; Cornelia R. M. G. Fluit; Harry van Goor
Archive | 2018
Alexander Alken; Cornelia Fluit; Jan-Maarten Luursema; Harry van Goor
American Journal of Surgery | 2014
Alexander Alken; Edward Tan; Jan-Maarten Luursema; Cornelia Fluit; Harry van Goor
Urology | 2013
Willem M. Brinkman; Jan-Maarten Luursema; Bas Kengen; Barbara M. A. Schout; J. Alfred Witjes; Ruud L.M. Bekkers
Psychological Research / Psychologische Forschung | 2003
Jan-Maarten Luursema; Piet Kommers