
Publications


Featured research published by Rachel Yudkowsky.


Teaching and Learning in Medicine | 2006

Procedures for establishing defensible absolute passing scores on performance examinations in health professions education.

Steven M. Downing; Ara Tekian; Rachel Yudkowsky

Background: Establishing credible, defensible, and acceptable passing scores for performance-type examinations in real-world settings is a challenge for health professions educators. Our purpose in this article is to provide step-by-step instructions with worked examples for 5 absolute standard-setting methods that can be used to establish acceptable passing scores for performance examinations such as Objective Structured Clinical Examinations or standardized patient encounters. Summary: All standards reflect the subjective opinions of experts. In this how-to article, we demonstrate procedures for systematically capturing these expert opinions using 5 research-based methods (Angoff, Ebel, Hofstee, Borderline Group, and Contrasting Groups). We discuss issues relating to selection of judges, use of performance data, and decision-making processes. Conclusions: Different standard-setting methods produce different passing scores; there is no gold standard. The key to defensible standards lies in the choice of credible judges and in the use of a systematic approach to collecting their judgments. Ultimately, all standards are policy decisions.
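
As a rough illustration of two of these methods (hypothetical numbers, not the article's worked examples), a minimal Python sketch:

    import numpy as np

    # Hypothetical Angoff ratings: rows are judges, columns are checklist items.
    # Each entry is a judge's estimate of the probability that a *borderline*
    # examinee would succeed on that item.
    angoff = np.array([
        [0.6, 0.7, 0.5, 0.8],
        [0.5, 0.6, 0.6, 0.7],
        [0.7, 0.8, 0.4, 0.6],
    ])
    # Angoff passing score: mean across judges of the summed item probabilities.
    angoff_cut = angoff.sum(axis=1).mean()

    # Contrasting Groups: judges sort examinees into "competent" and
    # "not competent" groups; one simple cut score is the midpoint between
    # the two groups' mean exam scores.
    competent = np.array([78.0, 82.0, 75.0, 90.0, 85.0])
    not_competent = np.array([55.0, 62.0, 58.0, 65.0])
    contrasting_cut = (competent.mean() + not_competent.mean()) / 2.0

    print(f"Angoff cut: {angoff_cut:.1f} of 4 items")
    print(f"Contrasting Groups cut: {contrasting_cut:.1f} points")

In practice the Contrasting Groups cut is often placed at the intersection of the two groups' score distributions rather than at the midpoint of their means, to balance false passes against false failures.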


Annals of Internal Medicine | 2010

Contextual Errors and Failures in Individualizing Patient Care: A Multicenter Study

Saul J. Weiner; Alan Schwartz; Frances M. Weaver; Julie H. Goldberg; Rachel Yudkowsky; Gunjan Sharma; Amy Binns-Calvey; Ben Preyss; Marilyn M. Schapira; Stephen D. Persell; Elizabeth R. Jacobs; Richard I. Abrams

BACKGROUND: A contextual error occurs when a physician overlooks elements of a patient's environment or behavior that are essential to planning appropriate care. In contrast to biomedical errors, which are not patient-specific, contextual errors represent a failure to individualize care. OBJECTIVE: To explore the frequency and circumstances under which physicians probe contextual and biomedical red flags and avoid treatment error by incorporating what they learn from these probes. DESIGN: An incomplete randomized block design in which unannounced, standardized patients visited 111 internal medicine attending physicians between April 2007 and April 2009 and presented variants of 4 scenarios. In all scenarios, patients presented both a contextual and a biomedical red flag. Responses to probing about flags varied in whether they revealed an underlying complicating biomedical or contextual factor (or both) that would lead to errors in management if overlooked. SETTING: 14 practices, including 2 academic clinics, 2 community-based primary care networks with multiple sites, a core safety net provider, and 3 U.S. Department of Veterans Affairs facilities. MEASUREMENTS: Primary outcomes were the proportion of visits in which physicians probed for contextual and biomedical factors in response to hints or red flags and the proportion of visits that resulted in error-free treatment plans. RESULTS: Physicians probed fewer contextual red flags (51%) than biomedical red flags (63%). Probing for contextual or biomedical information in response to red flags was usually necessary but not sufficient for an error-free plan of care. Physicians provided error-free care in 73% of the uncomplicated encounters, 38% of the biomedically complicated encounters, 22% of the contextually complicated encounters, and 9% of the combined biomedically and contextually complicated encounters. LIMITATIONS: Only 4 case scenarios were used. The study assessed physicians' propensity to make errors when every encounter provided an opportunity to do so and did not measure actual error rates that occur in primary care settings because of inattention to context. CONCLUSION: Inattention to contextual information, such as a patient's transportation needs, economic situation, or caretaker responsibilities, can lead to contextual error, which is not currently measured in assessments of physician performance. PRIMARY FUNDING SOURCE: U.S. Department of Veterans Affairs Health Services Research and Development Service.


Academic Medicine | 2006

Developing an institution-based assessment of resident communication and interpersonal skills

Rachel Yudkowsky; Steven M. Downing; Leslie J. Sandlow

Purpose: The authors describe the development and validation of an institution-wide, cross-specialty assessment of residents' communication and interpersonal skills, including related components of patient care and professionalism. Method: Residency program faculty, the department of medical education, and the Clinical Performance Center at the University of Illinois at Chicago College of Medicine collaborated to develop six standardized patient-based clinical simulations. The standardized patients rated the residents' performance. The assessment was piloted in 2003 for internal medicine and family medicine and was subsequently adapted for other specialties, including surgery, pediatrics, obstetrics–gynecology, and neurology. We present validity evidence based on the content, internal structure, relationship to other variables, feasibility, acceptability, and impact of the 2003 assessment. Results: Seventy-nine internal medicine and family medicine residents participated in the initial administration of the assessment. A factor analysis of the 18 communication scale items resulted in two factors interpretable as "communication" and "interpersonal skills." Median internal consistency of the scale (coefficient alpha) was 0.91. Generalizability of the assessment ranged from 0.57 to 0.82 across specialties. Case-specific items provided information about group-level deficiencies. Cost of the assessment was about $250 per resident. Once the initial cases had been developed and piloted, they could be adapted for other specialties with minimal additional effort, at a cost saving of about $1,000 per program. Conclusion: Centrally developed, institution-wide competency assessment uses resources efficiently to relieve individual programs of the need to "reinvent the wheel" and provides program directors and residents with useful information for individual and programmatic review.
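
For reference, the coefficient alpha reported above is computable from an examinees-by-items score matrix; a minimal sketch with made-up ratings:

    import numpy as np

    def cronbach_alpha(scores: np.ndarray) -> float:
        """Coefficient (Cronbach's) alpha for an examinees-by-items matrix."""
        k = scores.shape[1]                              # number of items
        item_variances = scores.var(axis=0, ddof=1).sum()
        total_variance = scores.sum(axis=1).var(ddof=1)  # variance of total scores
        return (k / (k - 1)) * (1 - item_variances / total_variance)

    # Hypothetical data: 5 residents rated on 4 scale items (1-5 agreement scale).
    ratings = np.array([
        [4, 5, 4, 4],
        [3, 3, 4, 3],
        [5, 5, 5, 4],
        [2, 3, 2, 3],
        [4, 4, 5, 5],
    ])
    print(f"alpha = {cronbach_alpha(ratings):.2f}")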


Evaluation & the Health Professions | 2007

Rater errors in a clinical skills assessment of medical students.

Cherdsak Iramaneerat; Rachel Yudkowsky

The authors used a many-faceted Rasch measurement model to analyze rating data from a clinical skills assessment of 173 fourth-year medical students to investigate four types of rater errors: leniency, inconsistency, the halo effect, and restriction of range. Students performed six clinical tasks with 6 standardized patients (SPs) selected from a pool of 17 SPs. SPs rated the performance of each student in six skills: history taking, physical examination, interpersonal skills, communication technique, counseling skills, and physical examination etiquette. SPs showed statistically significant differences in their rating severity, indicating rater leniency error. Four SPs exhibited rating inconsistency. Four SPs restricted their ratings to high categories. Only 1 SP exhibited a halo effect. Administrators of objective structured clinical examinations should be vigilant for various types of rater errors and attempt to reduce or eliminate those errors to improve the validity of inferences based on objective structured clinical examination scores.
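
For context, a standard formulation of the many-faceted Rasch model (the study's exact facet specification may differ) expresses the log-odds of adjacent rating categories as an additive function of the facets:

    \log\frac{P_{nijk}}{P_{nij(k-1)}} = \theta_n - \delta_i - \lambda_j - \tau_k

where \theta_n is the proficiency of student n, \delta_i the difficulty of task i, \lambda_j the severity of rater j, and \tau_k the step threshold for rating category k. Rater leniency or severity appears as significant spread in the \lambda_j estimates, and rating inconsistency as rater misfit.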


Simulation in Healthcare | 2013

Practice on an augmented reality/haptic simulator and library of virtual brains improves residents' ability to perform a ventriculostomy

Rachel Yudkowsky; Cristian Luciano; Pat Banerjee; Alan Schwartz; Ali Alaraj; G. Michael Lemole; Fady T. Charbel; Kelly Smith; Silvio Rizzi; Richard W. Byrne; Bernard R. Bendok; David M. Frim

Introduction: Ventriculostomy is a neurosurgical procedure for providing therapeutic cerebrospinal fluid drainage. Complications may arise during repeated attempts at placing the catheter in the ventricle. We studied the impact of simulation-based practice with a library of virtual brains on neurosurgery residents' performance in simulated and live surgical ventriculostomies. Methods: Using computed tomographic scans of actual patients, we developed a library of 15 virtual brains for the ImmersiveTouch system, a head- and hand-tracked augmented reality and haptic simulator. The virtual brains represent a range of anatomies including normal, shifted, and compressed ventricles. Neurosurgery residents participated in individual simulator practice on the library of brains, including visualizing the 3-dimensional location of the catheter within the brain immediately after each insertion. Performance of participants on novel brains in the simulator and during actual surgery before and after the intervention was analyzed using generalized linear mixed models. Results: Simulator cannulation success rates increased after the intervention, and live procedure outcomes showed improvement in the rate of successful cannulation on the first pass. However, the incidence of deeper, contralateral (simulator) and third-ventricle (live) placements increased after the intervention. Residents reported that the simulations were realistic and helpful in improving procedural skills such as aiming the probe, sensing the pressure change when entering the ventricle, and estimating how far the catheter should be advanced within the ventricle. Conclusions: Simulator practice with a library of virtual brains representing a range of anatomies and difficulty levels may improve performance, potentially decreasing complications due to inexpert technique.
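
The analysis can be sketched loosely as follows (hypothetical column names and data; the paper used generalized linear mixed models with a binary outcome, whereas this sketch substitutes statsmodels' linear mixed model as a rough stand-in):

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format data: one row per cannulation attempt.
    df = pd.DataFrame({
        "resident": list(range(1, 5)) * 4,
        "phase": ["pre"] * 8 + ["post"] * 8,   # before/after simulator practice
        "success": [0, 1, 0, 0, 1, 0, 0, 1,    # first-pass cannulation (0/1)
                    1, 1, 0, 1, 1, 1, 1, 0],
    })

    # A random intercept per resident accounts for repeated attempts by the
    # same resident; the fixed effect of phase estimates the practice effect.
    model = smf.mixedlm("success ~ phase", df, groups=df["resident"])
    print(model.fit().summary())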


Academic Medicine | 2004

Toward meaningful evaluation of clinical competence: the role of direct observation in clerkship ratings.

Memoona Hasnain; Karen J. Connell; Steven M. Downing; Allan J. Olthoff; Rachel Yudkowsky

Problem Statement and Purpose: The lack of direct observation by faculty may affect meaningful judgments of clinical competence. The purpose of this study was to explore the influence of direct observation on reliability and validity evidence for family medicine clerkship ratings of clinical performance. Method: Preceptors rating family medicine clerks (n = 172) on a 16-item evaluation instrument noted the data source for each rating: note review, case discussion, and/or direct observation. Mean data-source scores were computed and categorized as low, medium, or high, with the high-score group including the most direct observation. Analyses examined the influence of data source on interrater agreement and associations between clerkship clinical scores (CCS) and scores from the National Board of Medical Examiners (NBME®) subject examination as well as a fourth-year standardized patient-based clinical competence examination (M4CCE). Results: Interrater reliability increased as a function of data source; for the low, medium, and high groups, intraclass correlation coefficients were .29, .50, and .74, respectively. For the high-score group, there were significant positive correlations between CCS and NBME score (r = .311, p = .054) and between CCS and M4CCE (r = .423, p = .009). Conclusion: Reliability and validity evidence for clinical competence is enhanced when more direct observation is included as a basis for clerkship ratings.
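
As an aside, intraclass correlation coefficients like those above can be computed from a targets-by-raters matrix via one-way ANOVA; a minimal ICC(1,1) sketch with hypothetical scores (the study's exact ICC variant is not stated in the abstract):

    import numpy as np

    def icc_1_1(ratings: np.ndarray) -> float:
        """One-way random-effects ICC(1,1) for a targets-by-raters matrix."""
        n, k = ratings.shape
        row_means = ratings.mean(axis=1)
        ms_between = k * ((row_means - ratings.mean()) ** 2).sum() / (n - 1)
        ms_within = ((ratings - row_means[:, None]) ** 2).sum() / (n * (k - 1))
        return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

    # Hypothetical data: 6 clerks, each rated by 3 preceptors on one item.
    scores = np.array([
        [7, 8, 7],
        [5, 5, 6],
        [9, 9, 8],
        [4, 5, 4],
        [6, 7, 7],
        [8, 8, 9],
    ])
    print(f"ICC(1,1) = {icc_1_1(scores):.2f}")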


Medical Decision Making | 2007

Evaluating physician performance at individualizing care: a pilot study tracking contextual errors in medical decision making.

Saul J. Weiner; Alan Schwartz; Rachel Yudkowsky; Gordon D. Schiff; Frances M. Weaver; Julie H. Goldberg; Kevin B. Weiss

Objectives: Clinical decision making requires 2 distinct cognitive skills: the ability to classify patients' conditions into diagnostic and management categories that permit the application of research evidence, and the ability to individualize or, more specifically, to contextualize care for patients whose circumstances and needs require variation from the standard approach to care. The purpose of this study was to develop and test a methodology for measuring physicians' performance at contextualizing care and compare it to their performance at planning biomedically appropriate care. Methods: First, the authors drafted 3 cases, each with 4 variations, 3 of which were embedded with biomedical and/or contextual information that is essential to planning care. Once the cases were validated as instruments for assessing physician performance, 54 internal medicine residents were presented with opportunities to make these preidentified biomedical or contextual errors, and data were collected on information elicitation and error making. Results: The case validation process was successful in that, in the final iteration, the physicians who received the contextual variant of a case proposed an alternate plan of care from those who received the baseline variant 100% of the time. The subsequent piloting of these validated cases unmasked previously unmeasured differences in physician performance at contextualizing care. The findings, which reflect the performance characteristics of the study population, are presented. Conclusions: This pilot study demonstrates a methodology for measuring physician performance at contextualizing care and illustrates the contribution of such information to an overall assessment of physician practice.


Advances in Health Sciences Education | 2009

Evaluating the effectiveness of rating instruments for a communication skills assessment of medical residents

Cherdsak Iramaneerat; Carol M. Myford; Rachel Yudkowsky; Tali Lowenstein

The investigators used evidence based on response processes to evaluate and improve the validity of scores on the Patient-Centered Communication and Interpersonal Skills (CIS) Scale for the assessment of residents' communication competence. The investigators retrospectively analyzed the communication skills ratings of 68 residents at the University of Illinois at Chicago (UIC). Each resident encountered six standardized patients (SPs) portraying six cases. SPs rated the performance of each resident using the CIS Scale, an 18-item rating instrument asking for level of agreement on a 5-category scale. A many-faceted Rasch measurement model was used to determine how effectively each item and scale on the rating instrument performed. The analyses revealed that the items were too easy for the residents. The SPs underutilized the lowest rating category, making the scale function as a 4-category rating scale. Some SPs were inconsistent when assigning ratings in the middle categories. The investigators modified the rating instrument based on the findings, creating the Revised UIC Communication and Interpersonal Skills (RUCIS) Scale, a 13-item rating instrument that employs a 4-category behaviorally anchored rating scale for each item. The investigators implemented the RUCIS Scale in a subsequent communication skills OSCE for 85 residents. The analyses revealed that the RUCIS Scale functioned more effectively than the CIS Scale in several respects (e.g., a more uniform distribution of ratings across categories and better fit of the items to the measurement model). However, SPs still rarely assigned ratings in the lowest category of each scale.


BMJ Quality & Safety | 2012

Uncharted territory: measuring costs of diagnostic errors outside the medical record

Alan Schwartz; Saul J. Weiner; Frances M. Weaver; Rachel Yudkowsky; Gunjan Sharma; Amy Binns-Calvey; Ben Preyss; Neil Jordan



Academic Medicine | 2002

Microteaching and standardized students support faculty development for clinical teaching.

Mark H. Gelula; Rachel Yudkowsky


Collaboration


Dive into Rachel Yudkowsky's collaborations.

Top Co-Authors

Steven M. Downing (University of Illinois at Chicago)
Alan Schwartz (University of Illinois at Chicago)
Carol M. Myford (University of Illinois at Chicago)
Georges Bordage (University of Illinois at Chicago)
Saul J. Weiner (University of Illinois at Chicago)
Yoon Soo Park (University of Illinois at Chicago)
Abbas Hyderi (University of Illinois at Chicago)
Adnan Alseidi (University of Illinois at Chicago)