Lambert Schuwirth
Flinders University
Publications
Featured research published by Lambert Schuwirth.
Medical Teacher | 2012
C.P.M. van der Vleuten; Lambert Schuwirth; Erik W Driessen; Joost Dijkstra; Dineke Tigelaar; Liesbeth Baartman; J.M.F.M. van Tartwijk
We propose a model for programmatic assessment in action, which simultaneously optimises assessment for learning and assessment for decision making about learner progress. This model is based on a set of assessment principles that are interpreted from empirical research. It specifies cycles of training, assessment and learner support activities that are complemented by intermediate and final moments of evaluation on aggregated assessment data points. A key principle is that individual data points are maximised for learning and feedback value, whereas high-stakes decisions are based on the aggregation of many data points. Expert judgement plays an important role in the programme. Fundamental is the notion of sampling and bias reduction to deal with the inevitable subjectivity of this type of judgement. Bias reduction is further sought in procedural assessment strategies derived from criteria for qualitative research. We discuss a number of challenges and opportunities around the proposed model. One of its prime virtues is that it enables assessment to move beyond the dominant psychometric discourse, with its focus on individual instruments, towards a systems approach to assessment design underpinned by empirically grounded theory.
Medical Education | 2005
Erik W Driessen; C.P.M. van der Vleuten; Lambert Schuwirth; J. van Tartwijk; Jan D. Vermunt
Aim: Because it deals with qualitative information, portfolio assessment inevitably involves some degree of subjectivity. The use of stricter assessment criteria or more structured and prescribed content would improve interrater reliability, but would obliterate the essence of portfolio assessment in terms of flexibility, personal orientation and authenticity. We resolved this dilemma by using qualitative research criteria, as opposed to reliability, in the evaluation of portfolio assessment.
Medical Education | 2003
Lambert Schuwirth; C.P.M. van der Vleuten
Context: Simulation‐based testing methods have been developed to meet the need for assessment procedures that are both authentic and well‐structured. It is widely acknowledged that, although the authenticity of a procedure may be a contributing factor to its validity, authenticity alone is never sufficient.
Medical Education | 2012
Lucie Walters; Jennene Greenhill; Janet Richards; Helena Ward; Narelle Campbell; Julie Ash; Lambert Schuwirth
Medical Education 2012: 46: 1028–1041
Advances in Health Sciences Education | 2011
Marjan J. B. Govaerts; Lambert Schuwirth; C.P.M. van der Vleuten; Arno M. M. Muijtjens
Traditional psychometric approaches towards assessment tend to focus exclusively on quantitative properties of assessment outcomes. This may limit more meaningful educational approaches towards workplace-based assessment (WBA). Cognition-based models of WBA argue that assessment outcomes are determined by cognitive processes in raters that are very similar to reasoning, judgment and decision making in professional domains such as medicine. The present study explores the cognitive processes that underlie judgment and decision making by raters when observing performance in the clinical workplace. It specifically focuses on how differences in rating experience influence information processing by raters. Verbal protocol analysis was used to investigate how experienced and non-experienced raters select and use observational data to arrive at judgments and decisions about trainees' performance in the clinical workplace. Differences between experienced and non-experienced raters were assessed with respect to time spent on information analysis and representation of trainee performance; performance scores; and information processing, using qualitative-based quantitative analysis of verbal data. Results showed expert-novice differences in the time needed for representation of trainee performance, depending on the complexity of the rating task. Experts paid more attention to situation-specific cues in the assessment context, and they generated significantly more interpretations and fewer literal descriptions of observed behaviors. There were no significant differences in rating scores. Overall, our findings seemed to be consistent with other findings from expertise research, supporting the theories underlying cognition-based models of assessment in the clinical workplace. Implications for WBA are discussed.
Advances in Health Sciences Education | 2010
Joost Dijkstra; C.P.M. van der Vleuten; Lambert Schuwirth
Research on assessment in medical education has strongly focused on individual measurement instruments and their psychometric quality. Without detracting from the value of this research, such an approach is not sufficient to ensure high-quality assessment of competence as a whole. A programmatic approach is advocated, which presupposes criteria for designing comprehensive assessment programmes and for assuring their quality. The paucity of research relevant to programmatic assessment, and especially its development, prompted us to embark on a research project to develop design principles for programmes of assessment. We conducted focus group interviews to explore the experiences and views of nine assessment experts concerning good practices and new ideas about theoretical and practical issues in programmes of assessment. The discussion was analysed, mapping all aspects relevant for design onto a framework, which was iteratively adjusted to fit the data until saturation was reached. The overarching framework for designing programmes of assessment consists of six assessment programme dimensions: Goals, Programme in Action, Support, Documenting, Improving and Accounting. The model described in this paper can help to frame programmes of assessment; it not only provides a common language, but also a comprehensive picture of the dimensions to be covered when formulating design principles. It helps to identify areas of assessment in which ample research and development has been done. But, more importantly, it also helps to detect underserved areas. A guiding principle in the design of assessment programmes is fitness for purpose: high-quality assessment can only be defined in terms of its goals.
Medical Education | 2001
Lambert Schuwirth; M. M. Verheggen; C.P.M. van der Vleuten; H. P. A. Boshuizen; Geert-Jan Dinant
To assess whether case‐based questions elicit different thinking processes than factual knowledge‐based questions do.
Medical Education | 2002
Lambert Schuwirth; Lesley Southgate; Gayle G. Page; Neil Paget; J. M. J. Lescop; S. R. Lew; Winnie Wade; M. Barón‐Maldonado
Introduction: An essential element of practice performance assessment involves combining the results of various procedures in order to see the whole picture. This must be derived from both objective and subjective assessment, as well as from a combination of quantitative and qualitative assessment procedures. Because of the severe consequences an assessment of practice performance may have, it is essential that the procedure is both defensible to the stakeholders and fair, in that it distinguishes well between good performers and underperformers.
Medical Education | 2002
Richard Hays; Helena Davies; Jonathan Beard; L.J.M. Caldon; Elizabeth Farmer; P.M. Finucane; Peter McCrorie; David Newble; Lambert Schuwirth; G.R. Sibbald
Background: While much is now known about how to assess the competence of medical practitioners in a controlled environment, less is known about how to measure the performance in practice of experienced doctors working in their own environments. The performance of doctors depends increasingly on how well they function in teams and how well the health care system around them functions.
Medical Education | 2011
Steven J. Durning; Anthony R. Artino; Louis N. Pangaro; Cees van der Vleuten; Lambert Schuwirth
Medical Education 2011: 45: 927–938