Publications


Featured research published by Susan Humphrey-Murto.


Academic Medicine | 2002

Standard setting: a comparison of case-author and modified borderline-group methods in a small-scale OSCE.

Susan Humphrey-Murto; John Macfadyen

Purpose To compare cut scores resulting from the case-author method and the modified borderline-group method (MBG) of standard setting in an undergraduate objective structured clinical examination (OSCE), and to review the feasibility of using the MBG method of standard setting in a small-scale OSCE. Method Sixty-one fourth-year medical students underwent a ten-station OSCE examination. For the eight stations used in this study, cut scores were established using the case-author and MBG methods. Cut scores and pass rates were compared for individual stations and the entire exam. Results The case-author and MBG methods of standard setting produced different cut scores for the entire examination (5.77 and 5.31, respectively) and for each station individually. The percentage of students failing the examination based on the case-author cut score was 42.2%, and based on the MBG cut score it was 15.25%. Conclusions The case-author and MBG methods of standard setting produced different cut scores in an undergraduate OSCE. Overall, the MBG method was the more credible and defensible method of standard setting, and appeared well suited to a small-scale OSCE.
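In the modified borderline-group method, examiners assign a global rating (pass, borderline, fail) alongside each station's checklist score, and the station's cut score is taken as the mean checklist score of the candidates rated borderline. The sketch below is a minimal, hypothetical illustration of that calculation; the data, field names, and ten-point scale are assumptions, not taken from the study.

```python
# Minimal, hypothetical sketch of a modified borderline-group (MBG) cut score.
# Assumptions: each record holds a station checklist score (0-10 scale) and an
# examiner global rating; the cut score is the mean score of "borderline" candidates.

from statistics import mean

# Illustrative data only; not the study's data.
station_results = [
    {"score": 7.2, "rating": "pass"},
    {"score": 5.1, "rating": "borderline"},
    {"score": 5.6, "rating": "borderline"},
    {"score": 4.0, "rating": "fail"},
    {"score": 6.8, "rating": "pass"},
]

def mbg_cut_score(results):
    """Cut score = mean checklist score of candidates rated 'borderline'."""
    borderline = [r["score"] for r in results if r["rating"] == "borderline"]
    return mean(borderline)

def fail_rate(results, cut_score):
    """Proportion of candidates scoring below the cut score."""
    failures = sum(1 for r in results if r["score"] < cut_score)
    return failures / len(results)

cut = mbg_cut_score(station_results)
print(f"MBG cut score: {cut:.2f}")                          # 5.35 for the toy data
print(f"Fail rate at that cut: {fail_rate(station_results, cut):.0%}")
```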


Medical Education | 2011

Comparison of student examiner to faculty examiner scoring and feedback in an OSCE

Geneviève Moineau; Barbara Power; Anne‐Marie J Pion; Timothy J. Wood; Susan Humphrey-Murto

Medical Education 2011: 45: 183–191


Academic Medicine | 2005

A Comparison of Physician Examiners and Trained Assessors in a High-Stakes OSCE Setting

Susan Humphrey-Murto; Sydney Smee; Claire Touchie; Timothy J. Wood; David Blackmore

Background The Medical Council of Canada (MCC) administers an objective structured clinical examination (OSCE) for licensure. Traditionally, physician examiners (PEs) have evaluated these examinees, but recruitment of physicians is becoming more difficult, so determining whether alternate scorers can be used is of increasing importance. Method In 2003, the MCC ran a study using trained assessors (TAs) simultaneously with PEs. Four examination centers and three history-taking stations were selected. Health care workers were recruited as the TAs. Results A 3 × 2 × 4 mixed analysis of variance indicated no significant difference between scorers (F(1,462) = 0.01, p = .94). There were significant interaction effects, localized to site 1/station 3, site 3/station 2, and site 4/station 1. Pass/fail decisions would have misclassified 14.4–25.01% of examinees. Conclusion Trained assessors may be a valid alternative to PEs for completing checklists in history-taking stations, but their role in completing global ratings is not supported by this study.
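The scorer comparison rests on a mixed factorial design: scorer type is crossed with station and site, and both scorer types rate the same examinees. The sketch below shows one way such a comparison might be approximated with a mixed-effects model in Python; it is not the paper's analysis, and the simulated data and column names are invented for illustration.

```python
# Hypothetical sketch: comparing physician examiner (PE) and trained assessor (TA)
# scores on the same examinees with a mixed-effects model (an approximation of the
# paper's 3 x 2 x 4 mixed ANOVA, not a reproduction of it).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated long-format data: each examinee is scored by both scorer types.
rows = []
for e in range(120):
    site = e % 4                      # 4 examination centres
    station = e % 3                   # 3 history-taking stations
    ability = rng.normal(70, 8)       # examinee's underlying ability
    for scorer in ("PE", "TA"):
        rows.append({
            "examinee": e,
            "site": f"site{site}",
            "station": f"station{station}",
            "scorer": scorer,
            "score": ability + rng.normal(0, 4),
        })
df = pd.DataFrame(rows)

# Random intercept per examinee; fixed effects for scorer, station, site.
model = smf.mixedlm("score ~ scorer * station * site", df, groups=df["examinee"])
result = model.fit()
print(result.summary())
```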


Medical Education | 2014

Progress testing: is there a role for the OSCE?

Debra Pugh; Claire Touchie; Timothy J. Wood; Susan Humphrey-Murto

The shift from a time‐based to a competency‐based framework in medical education has created a need for frequent formative assessments. Many educational programmes use some form of written progress test to identify areas of strength and weakness and to promote continuous improvement in their learners. However, the role of performance‐based assessments, such as objective structured clinical examinations (OSCEs), in progress testing remains unclear.


Academic Medicine | 2011

Does an emotional intelligence test correlate with traditional measures used to determine medical school admission?

John J. Leddy; Geneviève Moineau; Derek Puddester; Timothy J. Wood; Susan Humphrey-Murto

Background As medical school admission committees are giving increased consideration to noncognitive measures, this study sought to determine how emotional intelligence (EI) scores relate to other traditional measures used in the admissions process. Method EI was measured using an ability-based test (Mayer-Salovey-Caruso Emotional Intelligence Test, or MSCEIT) in two consecutive cohorts of medical school applicants (2006 and 2007) qualifying for the admission interview. Pearson correlations between EI scores and traditional measures (i.e., weighted grade point average [wGPA], autobiographical sketch scores, and interview scores) were calculated. Results Of 659 applicants, 68% participated. MSCEIT scores did not correlate with traditional measures (r = −0.06 to 0.09, P > .05), with the exception of a small correlation with wGPA in the 2007 cohort (r = −0.13, P < .05). Conclusions The lack of substantial relationships between EI scores and traditional medical school admission measures suggests that EI evaluates a construct fundamentally different from traits captured in our admission process.
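The analysis here reduces to a set of Pearson correlations between MSCEIT scores and each traditional admission measure. A minimal sketch, using invented data and assumed variable names rather than anything from the study:

```python
# Illustrative sketch of correlating EI test scores with traditional admission
# measures (made-up data; variable names are assumptions, not the study's).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 450  # roughly the 68% of 659 applicants who participated

# Simulated, essentially independent measures (the study found little correlation).
msceit_total = rng.normal(100, 15, n)            # MSCEIT total score
traditional = {
    "weighted GPA":            rng.normal(3.7, 0.2, n),
    "autobiographical sketch": rng.normal(20, 4, n),
    "interview score":         rng.normal(70, 10, n),
}

for name, values in traditional.items():
    r, p = pearsonr(msceit_total, values)
    print(f"MSCEIT vs {name:<24} r = {r:+.2f}, p = {p:.2f}")
```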


Teaching and Learning in Medicine | 2004

Teaching the musculoskeletal examination: are patient educators as effective as rheumatology faculty?

Susan Humphrey-Murto; C. Douglas Smith; Claire Touchie; Timothy J. Wood

Background: Effective education of clinical skills is essential if doctors are to meet the needs of patients with rheumatic disease, but shrinking faculty numbers have made clinical teaching difficult. One solution to this problem is to use patient educators. Purpose: This study evaluated the teaching effectiveness of patient educators compared with rheumatology faculty in teaching the musculoskeletal (MSK) examination. Method: Sixty-two 2nd-year medical students were randomized to receive instruction from patient educators or faculty. Tutorial groups received instruction during three 3-hour sessions. Clinical skills were evaluated by a nine-station objective structured clinical examination (OSCE). Students completed a tutor evaluation form to assess their level of satisfaction with the process. Results: Faculty-taught students received a higher overall mark (66.5% vs. 62.1%) and fewer failed than patient educator-taught students (5 vs. 0, p = 0.02). Students rated faculty educators higher than patient educators (4.13 vs. 3.58 on a 5-point Likert scale). Conclusion: Rheumatology faculty appear to be more effective teachers of the MSK physical examination than patient educators.
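A difference in failure counts between two teaching arms is commonly tested on a 2 x 2 table, for example with Fisher's exact test. The sketch below reuses the 5 vs. 0 failure counts from the abstract, but the per-arm group sizes and the choice of test are assumptions for illustration, not the study's reported analysis.

```python
# Hypothetical sketch: comparing failure counts between two teaching arms.
# The 5 vs. 0 failures come from the abstract; the per-arm group sizes
# (31 each) and the use of Fisher's exact test are assumptions.
from scipy.stats import fisher_exact

failures = (5, 0)        # failures in each teaching arm (from the abstract)
group_sizes = (31, 31)   # assumed even split of the 62 students

# 2 x 2 table: rows are arms, columns are (failed, passed).
table = [[f, n - f] for f, n in zip(failures, group_sizes)]

_, p_value = fisher_exact(table)
print(f"Fisher's exact test: p = {p_value:.3f}")
```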


Medical Teacher | 2017

Using consensus group methods such as Delphi and Nominal Group in medical education research

Susan Humphrey-Murto; Lara Varpio; Carol Gonsalves; Timothy J. Wood

Abstract Consensus group methods are widely used in research to identify and measure areas where incomplete evidence exists for decision-making. Despite their popularity, these methods are often applied and reported inconsistently. Using examples from the three most commonly used methods (the Delphi, the Nominal Group Technique, and RAND/UCLA), this paper and the associated Guide aim to describe these methods and to highlight common weaknesses in methodology and reporting. The paper outlines a series of recommendations to assist researchers using consensus group methods in providing a comprehensive description and justification of the steps taken in their study.
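Mechanically, a Delphi round usually asks panellists to rate candidate items on a Likert scale and retains items whose agreement crosses an a priori threshold. The sketch below shows that tallying step in generic form; the 70% threshold, the 1-5 scale, and the example items are illustrative assumptions, not recommendations from the paper or its Guide.

```python
# Generic sketch of tallying one Delphi round: items are retained when the
# proportion of panellists rating them 4 or 5 (on a 1-5 scale) reaches an
# a priori threshold. The threshold and data below are illustrative assumptions.

ratings = {
    # item -> one rating per panellist (1 = strongly disagree ... 5 = strongly agree)
    "Include OSCE stations in progress testing": [5, 4, 4, 5, 3, 4, 5, 4],
    "Weight checklist scores over global ratings": [2, 3, 4, 2, 3, 3, 2, 4],
}
CONSENSUS_THRESHOLD = 0.70  # assumed a priori cut-off

for item, scores in ratings.items():
    agreement = sum(s >= 4 for s in scores) / len(scores)
    verdict = "retained" if agreement >= CONSENSUS_THRESHOLD else "revise or drop in next round"
    print(f"{item}: {agreement:.0%} agreement -> {verdict}")
```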


Medical Teacher | 2016

The OSCE progress test – Measuring clinical skill development over residency training

Debra Pugh; Claire Touchie; Susan Humphrey-Murto; Timothy J. Wood

Abstract Purpose: The purpose of this study was to explore the use of an objective structured clinical examination for Internal Medicine residents (IM-OSCE) as a progress test for clinical skills. Methods: Data from eight administrations of an IM-OSCE were analyzed retrospectively. Data were scaled to a mean of 500 and standard deviation (SD) of 100. A time-based comparison, treating post-graduate year (PGY) as a repeated-measures factor, was used to determine how residents’ performance progressed over time. Results: Residents’ total IM-OSCE scores (n = 244) increased over training from a mean of 445 (SD = 84) in PGY-1 to 534 (SD = 71) in PGY-3 (p < 0.001). In an analysis of sub-scores, including only those who participated in the IM OSCE for all three years of training (n = 46), mean structured oral scores increased from 464 (SD = 92) to 533 (SD = 83) (p < 0.001), physical examination scores increased from 464 (SD = 82) to 520 (SD = 75) (p < 0.001), and procedural skills increased from 495 (SD = 99) to 555 (SD = 67) (p = 0.033). There was no significant change in communication scores (p = 0.97). Conclusions: The IM-OSCE can be used to demonstrate progression of clinical skills throughout residency training. Although most of the clinical skills assessed improved as residents progressed through their training, communication skills did not appear to change.
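Rescaling scores to a mean of 500 and a standard deviation of 100, as described in the Methods, is a linear z-score transformation. A minimal sketch with invented raw scores:

```python
# Minimal sketch of rescaling raw OSCE scores to mean 500, SD 100
# (a rescaled z-score; the raw scores below are invented, not the study's data).
import numpy as np

raw_scores = np.array([62.0, 71.5, 55.0, 80.0, 66.5, 74.0])

z = (raw_scores - raw_scores.mean()) / raw_scores.std(ddof=1)
scaled = 500 + 100 * z

print(np.round(scaled))                    # rescaled individual scores
print(scaled.mean(), scaled.std(ddof=1))   # ~500 and ~100 by construction
```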


Medical Education | 2016

Do OSCE progress test scores predict performance in a national high-stakes examination?

Debra Pugh; Farhan Bhanji; Gary Cole; Jonathan Dupre; Rose Hatala; Susan Humphrey-Murto; Claire Touchie; Timothy J. Wood

Progress tests, in which learners are repeatedly assessed on equivalent content at different times in their training and provided with feedback, would seem to lend themselves well to a competency‐based framework, which requires more frequent formative assessments. The objective structured clinical examination (OSCE) progress test is a relatively new form of assessment that is used to assess the progression of clinical skills. The purpose of this study was to establish further evidence for the use of an OSCE progress test by demonstrating an association between scores from this assessment method and those from a national high‐stakes examination.


Academic Medicine | 2014

Does emotional intelligence at medical school admission predict future academic performance?

Susan Humphrey-Murto; John J. Leddy; Timothy J. Wood; Derek Puddester; Geneviève Moineau

Purpose Medical school admissions committees are increasingly considering noncognitive measures like emotional intelligence (EI) in evaluating potential applicants. This study explored whether scores on an EI abilities test at admissions predicted future academic performance in medical school to determine whether EI could be used in making admissions decisions. Method The authors invited all University of Ottawa medical school applicants offered an interview in 2006 and 2007 to complete the Mayer–Salovey–Caruso EI Test (MSCEIT) at the time of their interview (105 and 101, respectively), then again at matriculation (120 and 106, respectively). To determine predictive validity, they correlated MSCEIT scores to scores on written examinations and objective structured clinical examinations (OSCEs) administered during the four-year program. They also correlated MSCEIT scores to the number of nominations for excellence in clinical performance and failures recorded over the four years. Results The authors found no significant correlations between MSCEIT scores and written examination scores or number of failures. The correlations between MSCEIT scores and total OSCE scores ranged from 0.01 to 0.35; only MSCEIT scores at matriculation and OSCE year 4 scores for the 2007 cohort were significantly correlated. Correlations between MSCEIT scores and clinical nominations were low (range 0.12–0.28); only the correlation between MSCEIT scores at matriculation and number of clinical nominations for the 2007 cohort was statistically significant. Conclusions EI, as measured by an abilities test at admissions, does not appear to reliably predict future academic performance. Future studies should define the role of EI in admissions decisions.

Collaboration


Dive into Susan Humphrey-Murto's collaborations.

Top Co-Authors

Lara Varpio

Uniformed Services University of the Health Sciences

Sydney Smee

Medical Council of Canada
