Marjan J. B. Govaerts
Maastricht University
Publications
Featured research published by Marjan J. B. Govaerts.
Advances in Health Sciences Education | 2011
Marjan J. B. Govaerts; Lambert Schuwirth; C.P.M. van der Vleuten; Arno M. M. Muijtjens
Traditional psychometric approaches to assessment tend to focus exclusively on quantitative properties of assessment outcomes. This may limit more meaningful educational approaches to workplace-based assessment (WBA). Cognition-based models of WBA argue that assessment outcomes are determined by cognitive processes in raters that are very similar to reasoning, judgment and decision making in professional domains such as medicine. The present study explores the cognitive processes that underlie raters' judgments and decisions when they observe performance in the clinical workplace. It specifically focuses on how differences in rating experience influence raters' information processing. Verbal protocol analysis was used to investigate how experienced and non-experienced raters select and use observational data to arrive at judgments and decisions about trainees' performance in the clinical workplace. Differences between experienced and non-experienced raters were assessed with respect to time spent on information analysis and representation of trainee performance; performance scores; and information processing, using qualitative-based quantitative analysis of verbal data. Results showed expert-novice differences in the time needed to form a representation of trainee performance, depending on the complexity of the rating task. Experts paid more attention to situation-specific cues in the assessment context, and they generated significantly more interpretations and fewer literal descriptions of observed behaviors. There were no significant differences in rating scores. Overall, our findings seem consistent with findings from expertise research, supporting theories underlying cognition-based models of assessment in the clinical workplace. Implications for WBA are discussed.
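The "qualitative-based quantitative analysis of verbal data" mentioned above typically means coding raters' verbalizations (for instance, as interpretations versus literal descriptions) and then comparing counts between groups. As a hedged illustration only, not the study's actual data or analysis, the sketch below compares made-up per-rater counts of "interpretation" utterances between the two groups; the counts and the choice of a Mann-Whitney U test are assumptions.

```python
# Illustrative sketch only: compares hypothetical counts of coded utterance
# types between experienced and non-experienced raters. The counts and the
# choice of a Mann-Whitney U test are assumptions for illustration, not the
# study's actual data or analysis.
from scipy.stats import mannwhitneyu

# Hypothetical number of "interpretation" utterances per rater protocol.
experienced = [14, 17, 12, 19, 15, 16]
non_experienced = [7, 9, 6, 11, 8, 10]

# Two-sided test of whether the groups differ in how often they interpret,
# rather than literally describe, the observed behaviour.
stat, p = mannwhitneyu(experienced, non_experienced, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")
```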
Medical Education | 2013
Marjan J. B. Govaerts; Cees van der Vleuten
Although work‐based assessments (WBA) may come closest to assessing habitual performance, their use for summative purposes is not undisputed. Most criticism of WBA stems from approaches to validity consistent with the quantitative psychometric framework. However, there is increasing research evidence that indicates that the assumptions underlying the predictive, deterministic framework of psychometrics may no longer hold. In this discussion paper we argue that meaningfulness and appropriateness of current validity evidence can be called into question and that we need alternative strategies to assessment and validity inquiry that build on current theories of learning and performance in complex and dynamic workplace settings.
Medical Education | 2014
Andrea Gingerich; Jennifer R. Kogan; Peter Yeates; Marjan J. B. Govaerts; Eric S. Holmboe
Performance assessments, such as workplace‐based assessments (WBAs), represent a crucial component of assessment strategy in medical education. Persistent concerns about rater variability in performance assessments have resulted in a new field of study focusing on the cognitive processes used by raters, or more inclusively, by assessors.
Medical Education | 2008
Marjan J. B. Govaerts
Medical Teacher | 2015
C.P.M. van der Vleuten; Lambert Schuwirth; Erik W. Driessen; Marjan J. B. Govaerts; Sylvia Heeneman
Abstract Programmatic assessment is an integral approach to the design of an assessment program with the intent to optimise its learning function, its decision-making function and its curriculum quality-assurance function. Individual methods of assessment, purposefully chosen for their alignment with the curriculum outcomes and their information value for the learner, the teacher and the organisation, are seen as individual data points. The information value of these individual data points is maximised by giving feedback to the learner. There is a decoupling of assessment moment and decision moment. Intermediate and high-stakes decisions are based on multiple data points after a meaningful aggregation of information and supported by rigorous organisational procedures to ensure their dependability. Self-regulation of learning, through analysis of the assessment information and the attainment of the ensuing learning goals, is scaffolded by a mentoring system. Programmatic assessment-for-learning can be applied to any part of the training continuum, provided that the underlying learning conception is constructivist. This paper provides concrete recommendations for implementation of programmatic assessment.
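The core design described above (many low-stakes data points with feedback, decoupled from a later decision moment that aggregates them) can be sketched as a simple data structure. This is a hypothetical illustration, not code from the paper; the class names, fields and the minimum-data-points threshold are all assumptions.

```python
# Hypothetical sketch of programmatic assessment's core idea: individual
# assessments are stored as low-stakes data points (each with feedback for
# the learner), and a separate, later decision step aggregates many points.
# All names and the minimum-data-points threshold are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class DataPoint:
    method: str        # e.g. "mini-CEX", "OSCE station", "case write-up"
    competency: str    # curriculum outcome the method is aligned with
    score: float       # quantitative information for learner and organisation
    feedback: str      # narrative feedback maximises the point's value

@dataclass
class Portfolio:
    learner: str
    points: list[DataPoint] = field(default_factory=list)

    def record(self, point: DataPoint) -> None:
        # Assessment moment: only data collection and feedback, no decision.
        self.points.append(point)

    def decide(self, competency: str, min_points: int = 3) -> str:
        # Decision moment, decoupled from assessment: aggregate multiple
        # data points per competency before any high-stakes judgement.
        relevant = [p for p in self.points if p.competency == competency]
        if len(relevant) < min_points:
            return "defer: not enough data points for a dependable decision"
        mean = sum(p.score for p in relevant) / len(relevant)
        return f"aggregate over {len(relevant)} points: mean score {mean:.2f}"
```

The point of the sketch is the separation of record (assessment moment, feedback only) from decide (decision moment, aggregation with a dependability check), mirroring the decoupling the abstract describes.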
Medical Teacher | 2012
Erik W. Driessen; Jan van Tartwijk; Marjan J. B. Govaerts; Pim W. Teunissen; Cees van der Vleuten
Differences in learning experiences across workplace settings pose challenges for assessment: the assessment programme should be aligned with the general competency framework of the curriculum and also fit the varying learning contexts of the workplace. We used van der Vleuten's programmatic assessment model to develop a workplace-based assessment programme for final-year clerkships. We aimed to design a programme that stimulates learning, supports robust assessment decisions, and is feasible and non-bureaucratic. First experiences with the programme show that students find it has high learning value and consider the assessment sufficiently robust. Many of the commonly reported weaknesses of workplace-based assessment (poor fit with the educational context, too complex, too bureaucratic, too much work) were not mentioned by the students.
European Journal of Training and Development | 2013
Marjan J. B. Govaerts; Margje Van de Wiel; Cees van der Vleuten
Purpose – This study aims to investigate the quality of feedback offered by supervisor-assessors with varying levels of assessor expertise following assessment of performance in residency training in a health care setting. It furthermore investigates if and how different levels of assessor expertise influence feedback characteristics. Design/methodology/approach – Experienced (n=18) and non-experienced (n=16) supervisor-assessors in general practice (GP) watched two videotapes, each presenting a trainee in a "real-life" patient encounter. After watching each videotape, participants documented performance ratings, wrote down narrative feedback comments and verbalized their feedback. Deductive content analysis of the feedback protocols was used to explore feedback quality. Between-group differences were assessed using qualitative-based quantitative analysis of the feedback data. Findings – Overall, the specificity and usefulness of both written and verbal feedback were limited…
Academic Medicine | 2015
Joyce M.W. Moonen–van Loon; Karlijn Overeem; Marjan J. B. Govaerts; B.H. Verhoeven; Cees van der Vleuten; Erik W. Driessen
Purpose Residency programs around the world use multisource feedback (MSF) to evaluate learners’ performance. Studies of the reliability of MSF show mixed results. This study aimed to identify the reliability of MSF as practiced across occasions with varying numbers of assessors from different professional groups (physicians and nonphysicians) and the effect on the reliability of the assessment for different competencies when completed by both groups. Method The authors collected data from 2008 to 2012 from electronically completed MSF questionnaires. In total, 428 residents completed 586 MSF occasions, and 5,020 assessors provided feedback. The authors used generalizability theory to analyze the reliability of MSF for multiple occasions, different competencies, and varying numbers of assessors and assessor groups across multiple occasions. Results A reliability coefficient of 0.800 can be achieved with two MSF occasions completed by at least 10 assessors per group or with three MSF occasions completed by 5 assessors per group. Nonphysicians’ scores for the “Scholar” and “Health advocate” competencies and physicians’ scores for the “Health advocate” competency had a negative effect on the composite reliability. Conclusions A feasible number of assessors per MSF occasion can reliably assess residents’ performance. Scores from a single occasion should be interpreted cautiously. However, every occasion can provide valuable feedback for learning. This research confirms that the (unique) characteristics of different assessor groups should be considered when interpreting MSF results. Reliability seems to be influenced by the included assessor groups and competencies. These findings will enhance the utility of MSF during residency training.
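The trade-off reported above (comparable reliability from two occasions with 10 assessors per group or three occasions with 5) is the kind of result a generalizability-theory decision study produces: estimated variance components are plugged into the generalizability coefficient for different numbers of occasions and assessors. A minimal sketch, assuming a fully crossed person x occasion x assessor design and made-up variance components chosen only to echo the reported pattern (the paper's actual design and estimates are not reproduced here):

```python
# Decision-study ("D-study") sketch in the spirit of generalizability theory,
# for a fully crossed person x occasion x assessor design. All variance
# components below are made-up numbers chosen only to echo the reported
# pattern; the paper's actual (more complex) design and estimates are not
# reproduced here.

def g_coefficient(var_p, var_po, var_pa, var_res, n_occ, n_ass):
    """Generalizability coefficient E(rho^2) for relative decisions:
    person (true-score) variance divided by person variance plus
    error variance averaged over occasions and assessors."""
    rel_error = var_po / n_occ + var_pa / n_ass + var_res / (n_occ * n_ass)
    return var_p / (var_p + rel_error)

# Hypothetical variance components: person, person x occasion,
# person x assessor, and residual.
COMPONENTS = dict(var_p=0.30, var_po=0.09, var_pa=0.075, var_res=0.45)

for n_occ, n_ass in [(1, 5), (2, 10), (3, 5), (3, 10)]:
    g = g_coefficient(**COMPONENTS, n_occ=n_occ, n_ass=n_ass)
    print(f"{n_occ} occasion(s) x {n_ass} assessors: E(rho^2) = {g:.3f}")
```

With these illustrative components, both the two-occasion/10-assessor and the three-occasion/5-assessor scenarios reach 0.800, while a single occasion falls well short, consistent with the advice to interpret single-occasion scores cautiously.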
The Clinical Teacher | 2006
Marjan J. B. Govaerts
Training in the clinical setting is a vital part of medical education, as it guides a trainee's learning towards a standard of professional competence. Performance of authentic tasks in clinical practice typically requires the integration of knowledge, skills, judgement and attitudes – all indispensable for the development of professional competence. Although active participation in patient care can provide very powerful learning experiences, learning in practice does not occur automatically. Ericsson shows that significant improvement in performance is achieved only through ongoing evaluation of performance and feedback. Without feedback, trainees will not become aware of deficits, poor performance will go uncorrected, and good performance will not be reinforced.
BMJ Open | 2018
Carolin Sehlbach; Marjan J. B. Govaerts; Sharon Mitchell; Gernot Rohde; Frank W.J.M. Smeenk; Erik W. Driessen
Objectives With increased cross-border movement, ensuring safe and high-quality healthcare has gained primacy. The purpose of recertification is to ensure quality of care by periodically attesting doctors' professional proficiency in their field. Professional migration and facilitated cross-border recognition of qualifications, however, make us question the fitness of national policies for safeguarding patient care and the international accountability of doctors. Design and setting We performed document analyses and conducted 19 semistructured interviews to identify and describe key characteristics and effective components of 10 different European recertification systems, each representing one case (collective case study). We subsequently compared these systems to explore similarities and differences in the assessment criteria used to determine process quality. Results Great variety existed between countries in the assessment formats used, which targeted cognition, competence and performance (Miller's assessment pyramid). Recertification procedures and requirements also varied significantly, ranging from voluntary participation in professional development modules to the mandatory collection of multiple performance data in a competency-based portfolio. Knowledge assessment was fundamental to recertification in most countries. Another difference concerned the stakeholders involved in the recertification process: while some systems relied exclusively on doctors' self-assessment, others involved multiple stakeholders but rarely included patients in the assessment of doctors' professional competence. Differences between systems partly reflected different goals and primary purposes of recertification. Conclusion Recertification systems differ substantially internationally with regard to the criteria they apply to assess doctors' competence, their aims, requirements, assessment formats and patient involvement. In the light of professional mobility and associated demands for accountability, we recommend that competence assessment include patients' perspectives and that recertification practices be shared internationally to enhance transparency. This can help facilitate cross-border movement while guaranteeing high-quality patient care.