Publication


Featured research published by Stuart Lubarsky.


Medical Education | 2011

Script concordance testing: a review of published validity evidence

Stuart Lubarsky; Bernard Charlin; David A. Cook; Colin Chalk; Cees van der Vleuten

Medical Education 2011; 45: 329–338


Medical Education | 2012

Clinical reasoning processes: unravelling complexity through graphical representation

Bernard Charlin; Stuart Lubarsky; Bernard Millette; Françoise Crevier; Marie-Claude Audétat; Anne Charbonneau; Nathalie Caire Fon; Léa Hoff; Christian Bourdy

Medical Education 2012; 46: 454–463


Medical Teacher | 2013

Script concordance testing: From theory to practice: AMEE Guide No. 75

Stuart Lubarsky; Valérie Dory; Paul Duggan; Robert Gagnon; Bernard Charlin

The script concordance test (SCT) is used in health professions education to assess a specific facet of clinical reasoning competence: the ability to interpret medical information under conditions of uncertainty. Grounded in established theoretical models of knowledge organization and clinical reasoning, the SCT has three key design features: (1) respondents are faced with ill-defined clinical situations and must choose between several realistic options; (2) the response format reflects the way information is processed in challenging problem-solving situations; and (3) scoring takes into account the variability of responses of experts to clinical situations. SCT scores are meant to reflect how closely respondents’ ability to interpret clinical data compares with that of experienced clinicians in a given knowledge domain. A substantial body of research supports the SCT’s construct validity, reliability, and feasibility across a variety of health science disciplines, and across the spectrum of health professions education from pre-clinical training to continuing professional development. In practice, its performance as an assessment tool depends on careful item development and diligent panel selection. This guide, intended as a primer for the uninitiated in SCT, will cover the basic tenets, theoretical underpinnings, and construction principles governing script concordance testing.
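The panel-based scoring described in point (3) is commonly implemented as "aggregate scoring": each answer option earns credit in proportion to the number of panelists who chose it, normalized so the modal (most popular) answer earns full credit. A minimal sketch in Python; the function and data are illustrative, not code from the guide:

```python
from collections import Counter

def aggregate_scores(panel_responses):
    """Compute per-option credit for one SCT item from panel responses.

    Each option's credit is the number of panelists who chose it,
    divided by the count of the modal option, so the modal answer
    earns 1.0 and minority expert answers earn partial credit.
    """
    counts = Counter(panel_responses)
    modal_count = max(counts.values())
    return {option: n / modal_count for option, n in counts.items()}

# Example: a 5-point Likert item (-2 to +2) answered by a 10-member panel.
panel = [-1, 0, 0, 0, 1, 1, 1, 1, 1, 2]
credit = aggregate_scores(panel)
# credit[1] == 1.0 (modal), credit[0] == 0.6, credit[-1] == credit[2] == 0.2
examinee_answer = 0
print(credit.get(examinee_answer, 0.0))  # 0.6
```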


Teaching and Learning in Medicine | 2010

Assessment in the Context of Uncertainty Using the Script Concordance Test: More Meaning for Scores

Bernard Charlin; Robert Gagnon; Stuart Lubarsky; C. Lambert; Sarkis Meterissian; Colin Chalk; Johanne Goudreau; Cees van der Vleuten

Background: The Script Concordance Test (SCT) uses authentic, ill-defined clinical cases to compare medical learners’ judgment skills with those of experienced physicians. SCT scores are meant to measure the degree of concordance between the performance of examinees and that of the reference panel. Raw test scores have meaning only if statistics (mean and standard deviation) describing the panel’s performance are concurrently provided. Purpose: The purpose of this study is to suggest a method for reporting scores that standardizes panel mean and standard deviation, allowing examinees to immediately gauge their performance relative to panel members. Methods: Based on a statistical method of standardization, a new method for computing SCT scores is described. According to this method, raw test scores are converted into a scale in which the panel mean is set as the value of reference, and the standard deviation of the panel serves as a yardstick by which examinee performance is measured. Results: The effect of this transformation on four data sets obtained from SCTs in radio-oncology, surgery, neurology, and nursing is discussed. Conclusion: This transformation method proposes a common metric basis for reporting SCT scores and provides examinees with clear, interpretable insights into their performance relative to that of physicians in the field. We recommend reporting SCT scores with the mean and standard deviation of panel scores set at standard scores of 80 and 5, respectively. Beyond SCT, our transformation method may be generalizable to the scoring of other test formats in which the performance of examinees is compared with that of a reference panel undertaking the same cognitive tasks.
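In concrete terms, the recommended transformation is a linear rescaling of panel-referenced z-scores: a raw score is expressed in panel standard deviations from the panel mean, then mapped onto a scale where the panel mean is 80 and one panel standard deviation is 5 points. A minimal sketch with illustrative data (not the study's):

```python
import statistics

def transform_sct_score(raw_score, panel_raw_scores,
                        target_mean=80.0, target_sd=5.0):
    """Rescale a raw SCT score so the panel mean maps to 80 and
    one panel standard deviation maps to 5 points, as recommended."""
    panel_mean = statistics.mean(panel_raw_scores)
    panel_sd = statistics.stdev(panel_raw_scores)  # sample SD of panel scores
    z = (raw_score - panel_mean) / panel_sd
    return target_mean + target_sd * z

# Hypothetical panel raw scores; an examinee raw score of 70 maps to
# roughly 69.7 on the panel-referenced scale (about 2 panel SDs below 80).
panel = [72.0, 75.5, 78.0, 80.0, 81.5, 83.0, 85.0]
print(round(transform_sct_score(70.0, panel), 1))
```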


Canadian Journal of Neurological Sciences | 2009

The Script Concordance Test: a new tool assessing clinical judgement in neurology.

Stuart Lubarsky; Colin Chalk; Driss Kazitani; Robert Gagnon; Bernard Charlin

BACKGROUND Clinical judgment, the ability to make appropriate decisions in uncertain situations, is central to neurological practice, but objective measures of clinical judgment in neurology trainees are lacking. The Script Concordance Test (SCT), based on script theory from cognitive psychology, uses authentic clinical scenarios to compare a trainee's judgment skills with those of experts. The SCT has been validated in several medical disciplines, but has not been investigated in neurology. METHODS We developed an Internet-based neurology SCT (NSCT) comprising 24 clinical scenarios with three to four questions each. The scenarios were designed to reflect the uncertainty of real-life clinical encounters in adult neurology. The questions explored aspects of the scenario in which several responses might be acceptable; trainees were asked to judge which response they considered to be best. Forty-one PGY1-PGY5 neurology residents and eight medical students from three North American neurology programs (McGill, Calgary, and Mayo Clinic) completed the NSCT. The responses of trainees to each question were compared with the aggregate responses of an expert panel of 16 attending neurologists. RESULTS The NSCT demonstrated good reliability (Cronbach alpha = 0.79). Neurology residents scored higher than medical students and lower than attending neurologists, supporting the test's construct validity. Furthermore, NSCT scores discriminated between senior (PGY3-5) and junior (PGY1-2) residents. CONCLUSIONS Our NSCT is a practical and reliable instrument, and our findings support its construct validity for assessing judgment in neurology trainees. The NSCT has potentially widespread applications as an evaluation tool, both in neurology training and for licensing examinations.
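For reference, the reliability statistic quoted here, Cronbach's alpha, can be computed directly from an examinees-by-items score matrix via the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch with toy data, not the study's:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (examinees x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # per-item sample variance
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of examinees' totals
    return n_items / (n_items - 1) * (1 - item_vars.sum() / total_var)

# Toy matrix: 4 examinees x 3 items, scored with partial credit in [0, 1].
demo = [[0.6, 1.0, 0.2],
        [1.0, 1.0, 0.6],
        [0.2, 0.6, 0.2],
        [0.6, 0.2, 0.0]]
print(round(cronbach_alpha(demo), 2))  # ~0.8
```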


Journal of Continuing Education in The Health Professions | 2015

Considering "Nonlinearity" Across the Continuum in Medical Education Assessment: Supporting Theory, Practice, and Future Research Directions.

Steven J. Durning; Stuart Lubarsky; Dario M. Torre; Valérie Dory; Eric S. Holmboe

The purpose of this article is to propose new approaches to assessment that are grounded in educational theory and the concept of “nonlinearity.” The new approaches take into account related phenomena such as “uncertainty,” “ambiguity,” and “chaos.” To illustrate these approaches, we use the example of assessment of clinical reasoning, although the principles we outline may apply equally well to the assessment of other constructs in medical education. Theoretical perspectives include a discussion of script theory, assimilation theory, self-regulated learning theory, and situated cognition. Assessment examples that parallel these theories, including script concordance testing, concept maps, the self-regulated learning microanalytic technique, and work-based assessment, are also highlighted. We conclude with some practical suggestions for approaching nonlinearity.


Medical Education | 2013

‘What would my classmates say?’ An international study of the prediction-based method of course evaluation

Johanna Schönrock-Adema; Stuart Lubarsky; Colin Chalk; Yvonne Steinert; Janke Cohen-Schotanus

Objectives: Traditional student feedback questionnaires are imperfect course evaluation tools, largely because they generate low response rates and are susceptible to response bias. Preliminary research suggests that prediction-based methods of course evaluation, in which students estimate their peers' opinions rather than providing their own, require significantly fewer respondents to achieve comparable results and are less subject to biasing influences. This international study seeks further support for the validity of these findings by investigating: (i) the performance of the prediction-based method, and (ii) its potential for bias.


Medical Education | 2013

Scoring the Script Concordance Test: not a black and white issue.

Stuart Lubarsky; Robert Gagnon; Bernard Charlin



Journal of the American Geriatrics Society | 2012

Assessment of Undergraduate Clinical Reasoning in Geriatric Medicine: Application of a Script Concordance Test

Ronaldo D. Piovezan; Osvladir Custódio; Maysa Seabra Cendoroglo; Nildo Alves Batista; Stuart Lubarsky; Bernard Charlin

A challenging aspect of geriatric practice is that it often requires decision‐making under conditions of uncertainty. The Script Concordance Test (SCT) is an assessment tool designed to measure clinical data interpretation, an important element of clinical reasoning under uncertainty. The purpose of this study was to develop and analyze the validity of results of an SCT administered to undergraduate students in geriatric medicine. An SCT consisting of 13 cases and 104 items covering a spectrum of common geriatric problems was designed and administered to 41 undergraduate medical students at a medical school in São Paulo, Brazil. A reference panel of 21 practicing geriatricians contributed to the test's score key. The responses were analyzed, and the psychometric properties of the tool were investigated. The test's internal consistency and discriminative capacity to distinguish students from experienced geriatricians supported construct validity. The Cronbach alpha for the test was 0.84, and mean scores for the experts were found to be significantly higher than those of the students (80.0 and 70.7, respectively; P < .001). This study demonstrated robust evidence of reliability and validity of an SCT developed for use in geriatric medicine for assessing clinical reasoning skills under conditions of uncertainty in undergraduate medical students. These findings will be of interest to those involved in assessing clinical competence in geriatrics and will have important potential application in medical school examinations.
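The expert-student comparison reported above (80.0 vs. 70.7, P < .001) is the kind of group-mean difference commonly examined with an independent-samples t-test; the abstract does not specify which test was used, so the following is only an illustrative sketch with hypothetical scores:

```python
from scipy import stats

# Hypothetical standardized SCT scores; not the study's data.
geriatrician_scores = [78.5, 80.2, 81.0, 79.4, 82.1]
student_scores = [69.8, 71.2, 70.5, 68.9, 72.0]

# Two-sample t-test: a significantly higher expert mean is the pattern
# cited as evidence of construct validity.
t, p = stats.ttest_ind(geriatrician_scores, student_scores)
print(f"t = {t:.2f}, p = {p:.4g}")
```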


Academic Medicine | 2017

Knowledge Syntheses in Medical Education: Demystifying Scoping Reviews

Aliki Thomas; Stuart Lubarsky; Steven J. Durning; Meredith Young

An unprecedented rise in health professions education (HPE) research has led to increasing attention and interest in knowledge syntheses. There are many different types of knowledge syntheses in common use, including systematic reviews, meta-ethnography, rapid reviews, narrative reviews, and realist reviews. In this Perspective, the authors examine the nature, purpose, value, and appropriate use of one particular method: scoping reviews. Scoping reviews are iterative and flexible and can serve multiple main purposes: to examine the extent, range, and nature of research activity in a given field; to determine the value and appropriateness of undertaking a full systematic review; to summarize and disseminate research findings; and to identify research gaps in the existing literature. Despite the advantages of this methodology, there are concerns that it is a less rigorous and defensible means to synthesize HPE literature. Drawing from published research and from their collective experience with this methodology, the authors present a brief description of scoping reviews, explore the advantages and disadvantages of scoping reviews in the context of HPE, and offer lessons learned and suggestions for colleagues who are considering conducting scoping reviews. Examples of published scoping reviews are provided to illustrate the steps involved in the methodology.

Collaboration


Dive into Stuart Lubarsky's collaboration network.

Top Co-Authors

Robert Gagnon

Université de Montréal

Steven J. Durning

Uniformed Services University of the Health Sciences

C. Lambert

Université de Montréal
