Publications


Featured research published by Georges Bordage.


Medical Education | 2006

Residents anticipating, eliciting and interpreting physical findings

Rachel Yudkowsky; Georges Bordage; Tali Lowenstein; Janet Riddle

(PBIL) was created as a tool to elicit residents' reflections on recent patient care events, encountered during their medical school training, that they felt could have been improved upon. Log completion includes the resident's description of an adverse patient care event, his or her perception of the event's severity and preventability, and the contributing personal and systems factors. The PBIL was distributed to all incoming categorical and preliminary internal medicine residents at a large academic medical centre during orientation for the 2005–06 academic year. Evaluation of results and impact: A total of 73 residents (97%) completed the PBIL. Residents classified 25 (34%) of the reported events as moderately severe or severe; 9 (12%) of the described events resulted in patient death. A total of 68 (93%) of the events described were considered preventable. Residents ascribed errors largely to personal factors (41%), team factors (22%) and factors related to the patient's condition (15%), with 8%, 12% and 2% assigned to systems, institutional and regulatory factors, respectively. Each log identified multiple specific personal, team and systems factors that could have prevented or mitigated the described event. Personal factors cited included more experience (71%), better situational awareness (63%), better judgement (56%) and improved knowledge (52%). Team factors included improved communication between doctors (45%), improved communication between doctors and allied health and/or nursing staff (41%), and better transfer of care (27%). Potentially mitigating systems factors included better information technology (30%), more workable policies/procedures (27%), better support or supervision (26%), and improved response to early warning signs (25%). This baseline assessment of resident PBIL reflections indicates that entering internal medicine residents can recognise significant improvement opportunities with both personal and systems-level implications. However, residents tended to attribute errors to personal rather than systems factors, implying a need for greater attention to systems-based educational initiatives, which are currently under way at our institution.


Annals of Surgery | 2008

Reliability and validity of key feature cases for the self-assessment of colon and rectal surgeons.

Judith L. Trudel; Georges Bordage; Steven M. Downing

Objective: The purpose of this study was to determine the reliability and validity of the scores from "key feature" cases in the self-assessment of colon and rectal surgeons. Background: Key feature (KF) cases focus specifically on the unique challenges, critical decisions, and difficult aspects of identifying and managing clinical problems in practice. KF cases have been used to assess medical students and residents but rarely specialists. Methods: Responses from all 256 participants taking the American Society of Colon and Rectal Surgeons (ASCRS) Colon and Rectal Surgery Educational Program (CARSEP) V Self-Assessment Examination (SAE) from 1997 to 2002 were scored and analyzed, including score reliability, item analysis for the factual (50 multiple-choice questions [MCQs]) and applied (9 KF cases) knowledge portions of the SAE, and the effect of examination preparation, examination setting, specialization, Board certification, and clinical experience on scores. Results: The reliability (Cronbach alpha) of the scores for the MCQ and KF components was 0.97 and 0.95, respectively. The applied KF component of the SAE was more difficult than the factual MCQ component (mean item difficulty 0.52 versus 0.80, P < 0.001). Mean item discrimination (upper-lower groups) was 0.59 and 0.66 for the MCQ and KF components, respectively. Scores were lower when the test was taken at the annual meeting rather than at home (0.41 versus 0.81, P < 0.001). Content-related validity evidence for the KF cases was supported by mapping the cases onto the examination blueprint and by judgments from expert colorectal surgeons about the challenging and critical nature of the KFs used. Construct validity of the KF cases was supported by incremental performance related to type of practice (general, anorectal, and colorectal), level and type of Board certification, and years of clinical experience. Conclusions: Self-assessment of surgical specialists, in this case colorectal surgeons, using KF cases is feasible and yielded reliable and valid scores.
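
The two indices central to this abstract, Cronbach's alpha and upper-lower item discrimination, are standard item-analysis statistics. The following is a minimal sketch of how they are commonly computed; the function names and the simulated score matrix are our own illustration, not material from the study.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an examinees-by-items matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_var = scores.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

def upper_lower_discrimination(item_scores, total_scores, fraction=0.27):
    """Mean item score of the top-scoring group minus that of the bottom group."""
    order = np.argsort(total_scores)
    n = max(1, int(len(total_scores) * fraction))
    lower, upper = order[:n], order[-n:]
    return item_scores[upper].mean() - item_scores[lower].mean()

# Illustration with simulated dichotomous responses (256 examinees, 50 items)
rng = np.random.default_rng(0)
ability = rng.normal(size=(256, 1))
items = (ability + rng.normal(size=(256, 50)) > 0).astype(float)
totals = items.sum(axis=1)
print(f"alpha = {cronbach_alpha(items):.2f}")
print(f"discrimination (item 0) = {upper_lower_discrimination(items[:, 0], totals):.2f}")
```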


Archive | 1997

Maintaining and Enhancing Key Decision-Making Skills from Graduation into Practice: An Exploratory Study

Georges Bordage; C. A. Brailovsky; T. Cohen; G. Page

Key feature-based examinations focus on clinicians' ability to make decisions about the critical steps in the identification and management of clinical problems. The purpose of this study was to explore the extent to which these key decisions are maintained or enhanced from graduation into practice. A booklet containing 18 clinical problems testing 44 key features (KFs) was administered to 98 graduating students and 21 volunteer general practitioners who were three to five years into practice. Students and experienced physicians had similar mean scores (t tests, p > .05) for 36 of the 44 KFs (82%). Physicians scored significantly higher (+25%, p < .05) on three KFs, two related to treatment and one to investigation. The physicians scored significantly lower (-28.4%, p < .05) on two history-related KFs, two physical examination KFs, and one investigation KF. The results from this exploratory study suggest that experience did not have a large effect on key decision-making skills: experienced physicians showed slightly enhanced management decisions and somewhat lessened data-acquisition skills. A content analysis of the latter items revealed that the experienced physicians were not failing to meet the scoring criteria by taking shortcuts (e.g., supplying only 2 of 3 keyed responses). Instead, they were considering different leading diagnoses (the "wrong ballpark"). For example, they did not order a lower GI investigation in an elderly person, thus failing to consider GI bleeding, an eminently treatable condition, as a possible etiology. One possible explanation for the shrinking differential diagnosis may be the low incidence of certain diagnoses in practice; out of sight, out of mind.
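
The group comparisons in this study rest on independent-samples t-tests. As a hedged sketch (the scores below are simulated, not the study's data), a single key feature could be compared across the two groups like this:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated key-feature scores (proportion of keyed responses supplied)
students = np.clip(rng.normal(0.70, 0.15, 98), 0, 1)    # 98 graduating students
physicians = np.clip(rng.normal(0.74, 0.15, 21), 0, 1)  # 21 practising physicians

# Welch's t-test (does not assume equal variances); p > .05 suggests similar means
t, p = stats.ttest_ind(students, physicians, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
```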


Academic Medicine | 1997

When to recommend compulsory versus optional CME programs? A study to establish criteria

François Miller; André Jacques; Carlos Brailovsky; André Sindon; Georges Bordage

When should remedial continuing medical education (CME) be compulsory for family physicians? When should it be optional? Should it be structured or not? In 1993-1994, the authors addressed this need for criteria by conducting a study that used reports on 14 physicians who had undergone a structured oral interview (SOI) at the College of Physicians of Quebec. (The SOI is a day-long encounter during which two specially trained physician-interviewers present a physician with standardized clinical cases that focus on ten specific aspects of a family physician's competence.) The 14 SOI reports were reviewed by 12 external physician-judges to see how consistently they could link the ten aspects of competence, as shown in the reports, to five types of recommended remedial CME programs (the strictest being "compulsory program with suspended license" and the most lenient being "simple suggestions for improvement"). There was substantial agreement among the judges when choosing between compulsory and optional programs (kappa = 0.63, p < .05). The main criteria used when recommending an optional program were overall strengths and the quality of clinical reasoning. The same two criteria were also used for recommending a compulsory program, but the judges additionally considered three factors: the physician's ability to recognize his or her limits, and how he or she handled referrals and prescribed medications. Many of the criteria used by the judges were based on unique information that emerged from observations and interactions during the SOIs, such as quality of argumentation, sustaining a train of thought, sense of the case as a whole, and awareness of one's limits. Finally, the external judges corroborated the decisions previously made by the College of Physicians of Quebec concerning the appropriate CME programs for the 14 physicians.
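
The inter-judge agreement reported above is quantified with Cohen's kappa, which corrects observed agreement for agreement expected by chance. A minimal sketch for two raters follows; the ratings are invented for illustration, and agreement among all 12 judges would call for a multi-rater extension such as Fleiss' kappa.

```python
import numpy as np

def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two raters' categorical ratings."""
    r1, r2 = np.asarray(rater1), np.asarray(rater2)
    categories = np.union1d(r1, r2)
    p_obs = (r1 == r2).mean()                                  # observed agreement
    p_exp = sum((r1 == c).mean() * (r2 == c).mean() for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical compulsory (1) vs. optional (0) recommendations for 14 reports
judge_a = [1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
judge_b = [1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1]
print(f"kappa = {cohens_kappa(judge_a, judge_b):.2f}")
```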


Academic Medicine | 2014

SNAPPS-Plus: An educational prescription for students to facilitate formulating and answering clinical questions

James Nixon; Terry Wolpaw; Alan Schwartz; Briar L. Duffy; Jeremiah Menk; Georges Bordage

Purpose: To analyze the content and quality of PICO-formatted questions (Patient-Intervention-Comparison-Outcome), and subsequent answers, from students' educational prescriptions added to the final SNAPPS Select step (SNAPPS-Plus). Method: Internal medicine clerkship students at the University of Minnesota Medical Center were instructed to use educational prescriptions to complement their bedside SNAPPS case presentations from 2006 to 2010. Educational prescriptions were collected from all eligible students and coded for topic of uncertainty, PICO conformity score, presence of an answer, and quality of the answer. The Spearman rank-order correlation coefficient was used to compare ordinal variables, the Kruskal-Wallis test to compare the distribution of PICO scores between groups, and the McNemar exact test to test for an association between higher PICO scores and the presence of an answer. Results: A total of 191 educational prescriptions were coded from 191 eligible students, of which 190 (99%) included a question and 176 (93%, 176/190) an answer. Therapy questions constituted 59% (112/190) of the student-generated questions; 19% (37/190) were related to making a diagnosis. Three fifths of the questions (61%, 116/190) scored 7 or 8 on the 8-point PICO conformity scale. The quality of the answers varied, with 37% (71/190) meeting all criteria for high quality. There was a positive correlation between PICO conformity score and answer quality (Spearman rank-order correlation coefficient = 0.726; P < .001). Conclusions: The SNAPPS-Plus technique was easily integrated into the inpatient clerkship structure and ensured that virtually every case presentation following this model had a well-formulated question and answer.
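
The key analysis here is a Spearman rank-order correlation between PICO conformity and answer quality. A minimal sketch with simulated data follows; the scores below are illustrative assumptions, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated PICO conformity scores (0-8) and answer-quality ratings (0-3)
# for 190 educational prescriptions
pico = rng.integers(0, 9, 190)
quality = np.clip(np.round(pico / 3 + rng.normal(0, 0.8, 190)), 0, 3)

rho, p = stats.spearmanr(pico, quality)
print(f"Spearman rho = {rho:.3f}, p = {p:.3g}")
```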


Academic Medicine | 2015

Characteristics and Implications of Diagnostic Justification Scores Based on the New Patient Note Format of the USMLE Step 2 CS Exam.

Rachel Yudkowsky; Yoon Soo Park; Abbas Hyderi; Georges Bordage

Background: To determine the psychometric characteristics of diagnostic justification scores based on the patient note format of the United States Medical Licensing Examination Step 2 Clinical Skills exam, which requires students to document history and physical findings, differential diagnoses, diagnostic justification, and a plan for the immediate workup. Method: End-of-third-year medical students at one institution wrote notes for five standardized patient cases in May 2013 (n = 180) and May 2014 (n = 177). Each case was scored using a four-point rubric to rate each of the four note components. Descriptive statistics and item analyses were computed, and a generalizability study was done. Results: Across cases, 10% to 48% of students provided no diagnostic justification or had several missing or incorrect links between history and physical findings and diagnoses. The average intercase correlation for justification scores ranged from 0.06 to 0.16; the internal consistency reliability of justification scores (coefficient alpha across cases) was 0.38. Overall, justification scores had the highest mean item discrimination across cases. The generalizability study showed that the person-case interaction (12%) and task-case interaction (13%) had the largest variance components, indicating substantial case specificity. Conclusions: The diagnostic justification task provides unique information about student achievement and curricular gaps. Students struggled to justify their diagnoses correctly, and performance was highly case specific. Diagnostic justification was the most discriminating element of the patient note and showed the greatest variability in student performance across cases. The curriculum should provide a wide range of clinical cases and emphasize the recognition and interpretation of clinically discriminating findings to promote the development of clinical reasoning skills.
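
The generalizability study above partitions score variance into components for persons, cases, tasks, and their interactions. As a hedged illustration of the idea, the sketch below estimates components for the simpler fully crossed person x case design from ANOVA mean squares; the function name and simulated data are our own, and the study's full design also included tasks.

```python
import numpy as np

def pxc_variance_components(scores):
    """Variance components for a fully crossed person x case G-study design."""
    scores = np.asarray(scores, dtype=float)
    n_p, n_c = scores.shape
    grand = scores.mean()
    ss_p = n_c * ((scores.mean(axis=1) - grand) ** 2).sum()   # persons
    ss_c = n_p * ((scores.mean(axis=0) - grand) ** 2).sum()   # cases
    ss_pc = ((scores - grand) ** 2).sum() - ss_p - ss_c       # interaction/error
    ms_p = ss_p / (n_p - 1)
    ms_c = ss_c / (n_c - 1)
    ms_pc = ss_pc / ((n_p - 1) * (n_c - 1))
    var_pc = ms_pc                              # p x c interaction confounded with error
    var_p = max(0.0, (ms_p - ms_pc) / n_c)      # universe-score (person) variance
    var_c = max(0.0, (ms_c - ms_pc) / n_p)      # case difficulty variance
    return var_p, var_c, var_pc

# Illustration: 180 students x 5 cases with strong case specificity
rng = np.random.default_rng(2)
scores = (rng.normal(0, 0.3, (180, 1))          # person effect
          + rng.normal(0, 0.4, (1, 5))          # case effect
          + rng.normal(0, 0.8, (180, 5)))       # person x case interaction/error
print(pxc_variance_components(scores))
```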


Medical Education | 2014

Script concordance tests: strong inferences about examinees require stronger evidence.

Matthew Lineberry; Clarence D. Kreiter; Georges Bordage

On our first point, that disagreements among scoring panellists on SCT items are often illogical and have no place in a test's scoring key, Lubarsky et al. note that 'response variability among the members of an SCT panel is... a key determinant of the test's discriminatory power', where discrimination is generally between residents and panellists. As Cook et al. have noted, medical education research has placed too much emphasis on, and uncritical faith in, such known-groups validity evidence. A host of facets differentiate early- from late-career professionals, including psychosocial development with age, shifts in demographic characteristics, and differences in exposure to trends in medical practice. If panellists outscore residents, how do we know that this reflects their 'clinical reasoning skill' rather than any of a dozen other constructs?


Academic Medicine | 2009

Publishing ethics in medical education journals

Julie Brice; John Bligh; Georges Bordage; Jerry A. Colliver; David A. Cook; Kevin W. Eva; Ronald M. Harden; Steven L. Kanter; Geoffrey R. Norman

One issue that has repeatedly surfaced in nearly all journals is unethical or questionable practices regarding submission. Although flagrant breaches such as plagiarism are uncommon (or undetected), some practices, such as copying text from one manuscript to another, submitting nearly identical analyses to different journals ("salami slicing"), inadequate attribution of prior work, and inappropriate authorship reporting (honorary authors), appear all too frequently. We believe that most of these breaches are committed in good faith rather than out of malevolence, and simply represent a lack of awareness among authors of accepted standards, and perhaps some legitimate differences in interpreting ethical standards and guidelines.


Teaching and Learning in Medicine | 2006

APPLIED RESEARCH: Standardized Versus Real Hospitalized Patients to Teach History-Taking and Physical Examination Skills

William R. Gilliland; Louis N. Pangaro; Steven M. Downing; Richard E. Hawkins; Deborah M. Omori; Eric S. Marks; Graceanne Adamo; Georges Bordage

Background: Despite the nearly universal practice of using standardized patients in introduction to clinical medicine (ICM) courses, no studies have compared the performance of students trained with standardized patients to that of students trained with hospitalized patients with regard to short- and long-term educational outcomes. Purpose: To examine the differential effect of using standardized patients in a simulation center versus hospitalized patients in affiliated teaching hospitals for teaching history-taking and physical examination skills. Methods: This was a nonrandomized cohort study, based on self-selection, involving students from 2 academic years enrolled in an ICM course who received the final block of their ICM instruction with either standardized patients in the simulation center or hospitalized patients in affiliated hospitals. The primary, end-of-ICM (preclerkship) outcome variables (k = 10) were data from a final observed history and physical examination, an objective structured clinical examination, a National Board of Medical Examiners subject examination for clinical medicine, and ICM preceptor evaluations. Secondary, postclerkship outcome variables (k = 5) included internal medicine clerkship scores, teacher ratings from the clerkship, and examination scores from end-of-clerkship tests. Statistical significance was set at p < .05. Results: No statistically significant differences were found between the means of the 2 groups on any of the primary or secondary outcome variables, despite adequate statistical power. Conclusions: The use of standardized patients in a simulated setting, compared with the use of hospitalized patients in affiliated teaching hospitals, is not a disadvantage in the education of students in ICM courses.
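
The conclusion that a null result is meaningful hinges on statistical power. As a rough, hedged check (the group sizes and effect size below are assumptions, not figures from the study), power for a two-group comparison can be estimated as follows:

```python
from statsmodels.stats.power import TTestIndPower

# Assumed: ~80 students per cohort, medium effect (Cohen's d = 0.5), alpha = .05
power = TTestIndPower().power(effect_size=0.5, nobs1=80, ratio=1.0, alpha=0.05)
print(f"power = {power:.2f}")  # about 0.88; a null finding is informative
```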


Medical Teacher | 2016

Twelve tips on writing abstracts and titles: How to get people to use and cite your work

David A. Cook; Georges Bordage

The authors share 12 practical tips on creating effective titles and abstracts for a journal publication or conference presentation. When crafting a title authors should: (1) start thinking of the title from the start; (2) brainstorm many key words, create permutations, and ask others for input; (3) strive for an informative and indicative title; (4) start the title with the most important words; and (5) wait to finalize the title until the very end. When writing the abstract, authors should: (6) wait until the end to write the abstract; (7) copy and paste from the main text as a starting point; (8) start with a detailed structured format; (9) describe what they did; (10) describe what they found; (11) highlight what readers can do with this information; and (12) ensure that the abstract aligns with the full text and conforms to submission guidelines.
