Anneke W. M. Kramer
Radboud University Nijmegen Medical Centre
Publications
Featured research published by Anneke W. M. Kramer.
Medical Education | 2004
Anneke W. M. Kramer; Herman Düsman; L. H. C. Tan; J. J. M. Jansen; R.P.T.M. Grol; C.P.M. van der Vleuten
Purpose The evidence suggests that longitudinal training of communication skills embedded in a rich clinical context is most effective. In this study we evaluated the acquisition of communication skills under such conditions.
Medical Education | 2001
Roy Remmen; Albert Scherpbier; Cees van der Vleuten; J. Denekens; Anselm Derese; I. Hermann; R.J.I. Hoogenboom; Anneke W. M. Kramer; Herman Van Rossum; Paul Van Royen; Leo Bossaert
Training in physical diagnostic skills is an important part of undergraduate medical education. The objective of this study was to evaluate the outcome of skills training at four medical schools.
Advances in Health Sciences Education | 2011
Elisabeth A. M. Pelgrim; Anneke W. M. Kramer; H.G.A. Mokkink; L. van den Elsen; Richard Grol; C.P.M. van der Vleuten
We reviewed the literature on instruments for work-based assessment in single clinical encounters, such as the mini-clinical evaluation exercise (mini-CEX), and examined differences between these instruments in characteristics and feasibility, reliability, validity and educational effect. A PubMed search of the literature published before 8 January 2009 yielded 39 articles dealing with 18 different assessment instruments. One researcher extracted data on the characteristics of the instruments and two researchers extracted data on feasibility, reliability, validity and educational effect. Instruments are predominantly formative. Feasibility is generally deemed good and assessor training occurs sparsely but is considered crucial for successful implementation. Acceptable reliability can be achieved with 10 encounters. The validity of many instruments is not investigated, but the validity of the mini-CEX and the ‘clinical evaluation exercise’ is supported by strong and significant correlations with other valid assessment instruments. The evidence from the few studies on educational effects is not very convincing. The reports on clinical assessment instruments for single work-based encounters are generally positive, but supporting evidence is sparse. Feasibility of instruments seems to be good and reliability requires a minimum of 10 encounters, but no clear conclusions emerge on other aspects. Studies on assessor and learner training and studies examining effects beyond ‘happiness data’ are badly needed.
Medical Education | 2003
Anneke W. M. Kramer; Arno M. M. Muijtjens; Koos Jansen; Herman Düsman; Lisa Tan; Cees van der Vleuten
Purpose Earlier studies of absolute standard setting procedures for objective structured clinical examinations (OSCEs) show inconsistent results. This study compared a rational and an empirical standard setting procedure. Reliability and credibility were examined first. The impact of a reality check was then established.
Medical Education | 2012
Elisabeth A. M. Pelgrim; Anneke W. M. Kramer; H.G.A. Mokkink; Cees van der Vleuten
Medical Education 2012: 46:604–612
Medical Education | 2002
Anneke W. M. Kramer; J. J. M. Jansen; P. Zuithoff; Herman Düsman; L. H. C. Tan; R.P.T.M. Grol; C.P.M. van der Vleuten
Purpose To examine the validity of a written knowledge test of skills for performance on an OSCE in postgraduate training for general practice.
BMC Family Practice | 2011
Geurt Essers; Sandra van Dulmen; Chris van Weel; Cees van der Vleuten; Anneke W. M. Kramer
Background: Communication is a key competence for health care professionals. Analysis of registrar and GP communication performance in daily practice, however, suggests a suboptimal application of communication skills. The influence of context factors could reveal why communication performance levels, on average, do not appear adequate. The context of daily practice may require different skills or specific ways of handling these skills, whereas communication skills are mostly treated as generic. So far no empirical analysis of the context has been made. Our aim was to identify context factors that could be related to GP communication. Methods: A purposive sample of real-life videotaped GP consultations was analyzed (N = 17). As a frame of reference we chose the MAAS-Global, a widely used assessment instrument for medical communication. By inductive reasoning, we analyzed the GP behaviour in the consultation leading to poor item scores on the MAAS-Global. In these cases we looked for the presence of an intervening context factor and considered how it might explain the actual GP communication behaviour. Results: We reached saturation after having viewed 17 consultations. We identified 19 context factors that could potentially explain the deviation from generic recommendations on communication skills. These context factors can be categorized into doctor-related, patient-related, and consultation-related factors. Conclusions: Several context factors seem to influence doctor-patient communication, requiring the GP to apply communication skills differently from generic recommendations. From this study we conclude that there is a need to explicitly account for context factors in the assessment of GP (and GP registrar) communication performance. The next step is to validate our findings.
Medical Teacher | 2010
Fred Tromp; Myrra Vernooij-Dassen; Anneke W. M. Kramer; Richard Grol; Ben Bottema
Background: The Nijmegen Professionalism Scale, an instrument for assessing professional behaviour of general practitioner (GP) trainees, consists of four domains: professional behaviour towards patients, other professionals, society and oneself. The purpose of the instrument is to provide formative feedback. Aim: The aim of this study was to examine the psychometric properties of the Nijmegen Professionalism Scale. Methods: Both GP trainers and their GP trainees participated. Factor analysis was conducted for each domain. Factor structures of trainee and trainer groups were compared. The measure of congruence used was Tucker's phi. Cronbach's α was used to establish reliability. Results: Factor structures of the instrument used by GP trainers and trainees were similar. Two factors were found for each domain: domain 1, Respecting patients' interests and Professional distance; domain 2, Collaboration skills and Management skills; domain 3, Responsibility and Quality management; and domain 4, Reflection and learning and Dealing with emotions. Congruence measures were substantial (>0.90). Reliability ranged from 0.78 to 0.95. Conclusion: This validation study represents one further step; a much broader range of evidence is required to construct a sound validity argument. Nevertheless, this study shows that the Nijmegen Professionalism Scale is a reliable tool for assessing professional behaviour.
Medical Teacher | 2013
Elisabeth A. M. Pelgrim; Anneke W. M. Kramer; H.G.A. Mokkink; C.P.M. van der Vleuten
Background: Although the literature suggests that reflection has a positive impact on learning, there is a paucity of evidence to support this notion. Aim: We investigated feedback and reflection in relation to the likelihood that feedback will be used to inform action plans. We hypothesised that feedback and reflection form a cumulative sequence (i.e. trainers only pay attention to trainees' reflections when they have provided specific feedback) and we hypothesised a supplementary effect of reflection. Method: We analysed copies of assessment forms containing trainees' reflections and trainers' feedback on observed clinical performance. We determined whether the response patterns revealed cumulative sequences in line with the Guttman scale. We further examined the relationship between reflection, feedback and the mean number of specific comments related to an action plan (ANOVA), and we calculated two effect sizes. Results: Both hypotheses were confirmed. The response pattern showed an almost perfect fit with the Guttman scale (0.99), and reflection appeared to have a supplementary effect on the action plan variable. Conclusions: Reflection only occurs when a trainer has provided specific feedback; trainees who reflect on their performance are more likely to make use of feedback. These results confirm findings and suggestions reported in the literature.
BMC Medical Education | 2012
Elisabeth A. M. Pelgrim; Anneke W. M. Kramer; H.G.A. Mokkink; Cees van der Vleuten
Background: Research has shown that narrative feedback, (self-)reflections and a plan to undertake and evaluate improvements are key factors for effective feedback on clinical performance. We investigated the quantity of narrative comments comprising feedback (by trainers), self-reflections (by trainees) and action plans (by trainer and trainee) entered on a mini-CEX form that was modified for use in general practice training and to encourage trainers and trainees to provide narrative comments. In view of the importance of specificity as an indicator of feedback quality, we additionally examined the specificity of the comments. Method: We collected and analysed modified mini-CEX forms completed by GP trainers and trainees. Since each trainee has the same trainer for the duration of one year, we used trainer-trainee pairs as the unit of analysis. For all forms we determined the frequency of the different types of narrative comments and rated their specificity on a three-point scale: specific, moderately specific, not specific. Specificity was compared between trainee-trainer pairs. Results: We collected 485 completed modified mini-CEX forms from 54 trainees (mean of 8.8 forms per trainee; range 1–23; SD 5.6). Trainer feedback was provided more frequently than trainee self-reflections, and action plans were very rare. The comments were generally specific, but showed large differences between trainee-trainer pairs. Conclusion: The frequency of self-reflections and action plans varied, comments were generally specific, and there were substantial and consistent differences between trainee-trainer pairs in the specificity of comments. We therefore conclude that feedback is determined not so much by the instrument as by its users. Interventions to improve the educational effects of the feedback procedure should therefore focus more on the users than on the instruments.