Linette P. Ross
National Board of Medical Examiners
Publications
Featured research published by Linette P. Ross.
Academic Medicine | 1996
Linette P. Ross; Brian E. Clauser; Melissa J. Margolis; Orr Na; Daniel J. Klass
No abstract available.
Academic Medicine | 2010
Carol Morrison; Linette P. Ross; Thomas Fogle; Aggie Butler; Judith G. Miller; Gerard F. Dillon
Background This study examined the relationship between performance on the National Board of Medical Examiners (NBME) Comprehensive Basic Science Self-Assessment (CBSSA) and performance on the United States Medical Licensing Examination (USMLE) Step 1. Method The study included 12,224 U.S. and Canadian medical school students who took the CBSSA prior to their first Step 1 attempt. Linear and logistic regression analyses investigated the relationship between CBSSA performance and Step 1 performance, and how that relationship varied with the interval between the two exams. Results CBSSA scores explained 67% of the variation in first Step 1 scores as the sole predictor variable, and 69% of the variation when the time between the CBSSA attempt and the first Step 1 attempt was also included as a predictor. Logistic regression results showed that examinees with low scores on the CBSSA were at a higher risk of failing their first Step 1 attempt. Conclusions Results suggest that the CBSSA can provide students with a realistic self-assessment of their readiness to take Step 1.
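As a concrete illustration of the two analyses described above, the sketch below fits a linear regression for score prediction and a logistic regression for failure risk on synthetic data. The variable names, score scales, and effect sizes are invented for illustration and are not taken from the study.

    # Minimal sketch of the study's analytic approach on synthetic data.
    # All numbers here are made up; only the modeling steps mirror the abstract.
    import numpy as np
    from sklearn.linear_model import LinearRegression, LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    cbssa = rng.normal(500.0, 100.0, n)        # hypothetical self-assessment scores
    weeks = rng.uniform(1.0, 26.0, n)          # hypothetical interval before Step 1
    step1 = 0.6 * cbssa + 2.0 * weeks + rng.normal(0.0, 40.0, n)
    passed = (step1 >= np.percentile(step1, 5)).astype(int)   # bottom 5% "fail"

    X = np.column_stack([cbssa, weeks])

    # Linear regression: variance in Step 1 scores explained by the predictors.
    lin = LinearRegression().fit(X, step1)
    print(f"R^2 (CBSSA + interval): {lin.score(X, step1):.2f}")

    # Logistic regression: failure risk as a function of the same predictors.
    logit = LogisticRegression(max_iter=1000).fit(X, passed)
    low_scorer = np.array([[cbssa.min(), 8.0]])
    print(f"P(pass) for a low CBSSA scorer: {logit.predict_proba(low_scorer)[0, 1]:.2f}")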
Academic Medicine | 1997
Brian E. Clauser; Linette P. Ross; Ronald J. Nungester; Stephen G. Clyman
No abstract available.
Academic Medicine | 1996
Brian E. Clauser; Stephen G. Clyman; Melissa J. Margolis; Linette P. Ross
No abstract available.
Teaching and Learning in Medicine | 1992
Alton I. Sutnick; Linette P. Ross; Marjorie P. Wilson
A test to assess clinical competence requires the inclusion of a domain consisting of multiple clinical competencies. Although some can be tested in simulated clinical settings with standardized patients, others should not be tested in such integrated clinical encounters because of the limited amount of time that can be allotted in a case-specific context. This is particularly true of diagnosis/management competencies, which include problem identification and differential diagnosis, interpretation of diagnostic and laboratory procedures, and patient management. In this study, responses to all 139 multiple-choice questions (MCQs) addressing diagnosis/management competencies in the July 1989 Day 2 component of the Foreign Medical Graduate Examination in the Medical Sciences (FMGEMS) were compared with total Day 2 scores and with the other categories of MCQs in that component. The results show that FMGEMS Day 2 scores are reliable in measuring the ability of examinees to address diagnosis/management competencies.
Teaching and Learning in Medicine | 2014
Carol Morrison; Linette P. Ross; Laurel Sample; Aggie Butler
Background: The Comprehensive Clinical Science Self-Assessment (CCSSA) is a web-administered multiple-choice examination that includes content typically covered during the core clinical clerkships in medical school. Because the content of CCSSA items resembles the content of the items on Step 2 Clinical Knowledge (CK), the CCSSA is intended to be a tool to help students assess whether they are prepared for Step 2 CK and to become familiar with its content, format, and pacing. Purposes: This study examined the relationship between performance on the National Board of Medical Examiners® CCSSA and performance on the United States Medical Licensing Examination® Step 2 CK for U.S./Canadian medical school students/graduates (USMGs) and international medical school students/graduates (IMGs). Methods: The study included 9,789 participants who took the CCSSA prior to their first Step 2 CK attempt. Linear and logistic regression analyses investigated the relationship between CCSSA performance and performance on Step 2 CK for both USMGs and IMGs. Results: CCSSA scores explained 58% of the variation in first Step 2 CK scores for USMGs and 60% of the variation for IMGs; the relationship was somewhat different for the two groups, as indicated by statistically different intercepts and slopes for the regression lines based on each group. Logistic regression results showed that examinees in both groups with low scores on the CCSSA were at a higher risk of failing their first Step 2 CK attempt. Conclusions: Results suggest that the CCSSA can provide students with a valuable practice tool and a realistic self-assessment of their readiness to take Step 2 CK.
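The "statistically different intercepts and slopes" finding corresponds to fitting a single model with a group indicator and a group-by-score interaction. Below is a minimal sketch of that comparison, again on invented data; the column names and coefficients are assumptions, not values from the paper.

    # Sketch of the USMG-vs-IMG regression-line comparison on synthetic data.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 800
    df = pd.DataFrame({
        "ccssa": rng.normal(500.0, 100.0, n),   # hypothetical CCSSA scores
        "img": rng.integers(0, 2, n),           # 1 = international graduate
    })
    # Built-in group differences so the interaction model has something to find.
    df["step2ck"] = (200.0 + 0.05 * df["ccssa"] - 6.0 * df["img"]
                     + 0.02 * df["img"] * df["ccssa"] + rng.normal(0.0, 8.0, n))

    # "ccssa * img" expands to ccssa + img + ccssa:img; significant img and
    # ccssa:img terms indicate different intercepts and slopes by group.
    model = smf.ols("step2ck ~ ccssa * img", data=df).fit()
    print(model.summary().tables[1])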
Academic Medicine | 2015
Linda N. Peterson; Shayna A. Rusticus; Linette P. Ross
Purpose Accreditation standards require medical schools to use comparable assessment methods to ensure students in rotation-based clerkships and longitudinal integrated clerkships (LICs) achieve the same learning objectives. The National Board of Medical Examiners (NBME) Clinical Science Subject Examinations (subject exams) are commonly used, but an integrated examination like the NBME Comprehensive Clinical Science Examination (CCSE) may be better suited for LICs. This study examined the comparability of the CCSE and five commonly required subject exams. Method In 2009–2010, third-year medical students in rotation-based clerkships at the University of British Columbia Faculty of Medicine completed subject exams in medicine, obstetrics–gynecology, pediatrics, psychiatry, and surgery for summative purposes following each rotation and a year-end CCSE for formative purposes. Data for 205 students were analyzed to determine the relationship between scores on the CCSE (and its five discipline subscales) and the five subject exams and the impact of clerkship rotation order. Results The correlation between the CCSE score and the average score on the five subject exams was high (0.80–0.93). Four subject exam scores were significant predictors of the CCSE score, and scores on the subject exams explained 65%–87% of CCSE score variance. Scores on each subject exam—but not rotation order—were statistically significant in predicting corresponding CCSE discipline subscale scores. Conclusions The results provide evidence that these five subject exams and the CCSE measure similar constructs. This suggests that assessment of clerkship-year students’ knowledge using the CCSE is comparable to assessment using this set of subject exams.
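A compact sketch of the core comparability check, on synthetic data generated from a single shared construct (the exam structure and correlations here are assumptions, not the study's data): correlate the CCSE score with the subject-exam average, then regress the CCSE score on the five subject exams to see how much variance they jointly explain.

    # Sketch of the CCSE / subject-exam comparability analysis on synthetic data.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(3)
    n, n_exams = 205, 5                        # n matches the study; the scores do not
    ability = rng.normal(0.0, 1.0, n)          # shared construct drives all scores
    subject = ability[:, None] + rng.normal(0.0, 0.6, (n, n_exams))
    ccse = ability + rng.normal(0.0, 0.5, n)

    avg = subject.mean(axis=1)
    print(f"r(CCSE, subject-exam average) = {np.corrcoef(ccse, avg)[0, 1]:.2f}")

    reg = LinearRegression().fit(subject, ccse)
    print(f"R^2, five subject exams -> CCSE: {reg.score(subject, ccse):.2f}")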
Archive | 1997
Brian E. Clauser; Melissa J. Margolis; Linette P. Ross; Ronald J. Nungester; Daniel J. Klass
The use of checklists for scoring standardized patient evaluations reduces the rating task to recording whether a defined behaviour was or was not displayed. Although this may enhance objectivity, checklist-based scoring has the potential limitation that it may fail to account for the complexity of the judgment process used by experts. Minimally, it is clear that experts may consider the behaviours represented by some checklist items to be more important than others. The research described in this paper examines the potential to increase the correspondence between checklist scores and clinician ratings by weighting checklist items using regression-derived item weights. Results show that the expected increase in correspondence between checklist scores and clinician ratings occurred for all cases in which the correlation between the unweighted checklist scores and the ratings was less than .92. Cross-validation of the results with an independent set of ratings is provided, as are generalizability analyses of the ratings and of the weighted and unweighted scores.
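A small sketch of the weighting idea, under the assumption (stated here, not spelled out at this level of detail in the abstract) that expert ratings are regressed on the 0/1 checklist items and the fitted coefficients are reused as item weights; the data below are synthetic.

    # Regression-derived checklist weights on synthetic data.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(2)
    n_examinees, n_items = 300, 15
    items = rng.integers(0, 2, (n_examinees, n_items))   # behaviour shown / not shown
    expert_value = rng.uniform(0.2, 2.0, n_items)        # experts weight items unequally
    ratings = items @ expert_value + rng.normal(0.0, 1.0, n_examinees)

    unweighted = items.sum(axis=1)                       # classic checklist score
    weights = LinearRegression().fit(items, ratings).coef_
    weighted = items @ weights                           # regression-weighted score

    # Weighting should help most when the unweighted score-rating correlation
    # is below its ceiling (the paper reports gains whenever r < .92).
    print(f"r(unweighted, ratings) = {np.corrcoef(unweighted, ratings)[0, 1]:.3f}")
    print(f"r(weighted,   ratings) = {np.corrcoef(weighted, ratings)[0, 1]:.3f}")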
Archive | 1997
Brian E. Clauser; Linette P. Ross; R. M. Luecht; Ronald J. Nungester; Stephen G. Clyman
In circumstances where performance assessments of physicians' clinical skills are used to make important promotional or curricular decisions, it may be necessary to produce multiple equivalent forms of the assessment. Relatively little has been reported regarding appropriate methods for establishing equivalence across such forms. The purpose of this paper is to examine the potential usefulness of the Rasch model for equating forms of a computer-based simulation of physicians' clinical skills. In addition to assessing the fit of the model to the test data, the paper provides a comparison of the Rasch model with other approaches that have been used to equate clinical skills assessments (e.g., standardized-patient-based examinations). The potential advantages of the Rasch model are discussed, as are considerations for its application.
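For reference, the Rasch model named above is the one-parameter logistic IRT model; in its standard textbook form (not quoted from the paper), the probability that examinee i answers item j correctly is

    P(X_{ij} = 1 \mid \theta_i, b_j) = \frac{e^{\theta_i - b_j}}{1 + e^{\theta_i - b_j}}

where \theta_i is examinee ability and b_j is item difficulty, both expressed on a common logit scale. That shared scale is what makes the model attractive for equating: items administered on multiple forms can anchor one form's difficulty estimates to another's.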
Journal of Educational Measurement | 1997
Brian E. Clauser; Melissa J. Margolis; Stephen G. Clyman; Linette P. Ross