Publication


Featured research published by Kimberly A. Swygert.


Academic Medicine | 2010

The impact of repeat information on examinee performance for a large-scale standardized-patient examination.

Kimberly A. Swygert; Kevin P. Balog; Ann C. Jobe

Purpose The United States Medical Licensing Examination series Step 2 Clinical Skills (CS) examination is a high-stakes performance assessment that uses standardized patients (SPs) to assess the clinical skills of physicians. Each Step 2 CS examination form involves 12 SPs, each of whom portrays a different clinical scenario or case. Examinees who fail and repeat the examination may encounter repeat information—the same SP, the same case, or the same SP portraying the same case. The goal of this study was twofold: to investigate score gains for all repeat examinees, regardless of whether they experienced repeat information, and to perform additional analyses for only those examinees who did encounter repeat information. Method The dataset consisted of 3,045 Step 2 CS repeat examinees who initially tested between April 2005 and December 2007. The authors used paired t tests and analysis of variance models to assess mean score gains (first attempt versus second attempt) and to determine standardized mean differences between encounters with repeat information and those without. The authors ran each set of analyses by test score component and by examinee subgroup. Results The authors observed significant mean score increases on second attempt examinations for the entire group of repeat examinees. However, they observed no significant score increases for the subgroup of examinees who encountered repeat information. Conclusions Examinees taking Step 2 CS for the second time improve on average, and those with prior exposure to exam information do not appear to benefit unfairly from this exposure.
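For readers who want to see the shape of this analysis, the Python sketch below runs a paired t test and a standardized mean difference on fabricated first- and second-attempt scores; the sample size, variables, and effect size are invented for illustration and are not the study's.

```python
# Paired t test and standardized mean difference (Cohen's d for paired data),
# in the spirit of the repeat-examinee analysis above. Scores are fabricated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
first_attempt = rng.normal(loc=65, scale=8, size=200)        # hypothetical first-attempt scores
second_attempt = first_attempt + rng.normal(3, 5, size=200)  # hypothetical second-attempt scores

t_stat, p_value = stats.ttest_rel(second_attempt, first_attempt)

diff = second_attempt - first_attempt
cohens_d = diff.mean() / diff.std(ddof=1)

print(f"mean gain = {diff.mean():.2f}, t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
```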


Academic Medicine | 2011

A multilevel analysis of examinee gender, standardized patient gender, and United States Medical Licensing Examination Step 2 Clinical Skills communication and interpersonal skills scores.

Monica M. Cuddy; Kimberly A. Swygert; David B. Swanson; Ann C. Jobe

Background Women typically demonstrate stronger communication skills on performance-based assessments using human raters in medical education settings. This study examines the effects of examinee and rater gender on communication and interpersonal skills (CIS) scores from the performance-based component of the United States Medical Licensing Examination, the Step 2 Clinical Skills (CS) examination. Method Data included demographic and performance information for examinees who took Step 2 CS for the first time in 2009. The sample contained 27,910 examinees, 625 standardized patient/case combinations, and 278,776 scored patient encounters. Hierarchical linear modeling techniques were employed with CIS scores as the outcome measure. Results Females tend to slightly outperform males on CIS when other variables related to performance are taken into account. No evidence of an examinee and rater gender interaction effect was found. Conclusions Results provide validity evidence supporting the interpretation and use of Step 2 CS CIS scores.
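A minimal sketch of a hierarchical (mixed-effects) model in this spirit, assuming simulated data and the statsmodels mixed-model API; all column names and values here are hypothetical, not the study's.

```python
# Two-level model of CIS scores: encounters nested within standardized-patient/case
# combinations, with fixed effects for examinee gender, SP gender, and their interaction.
# Data are simulated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_cases, n_per_case = 50, 40
case_id = np.repeat(np.arange(n_cases), n_per_case)
case_effect = rng.normal(0, 2, n_cases)[case_id]
examinee_female = rng.integers(0, 2, case_id.size)
sp_female = rng.integers(0, 2, case_id.size)
cis = 70 + 1.5 * examinee_female + case_effect + rng.normal(0, 5, case_id.size)

df = pd.DataFrame({"cis": cis, "examinee_female": examinee_female,
                   "sp_female": sp_female, "case_id": case_id})

# Random intercept per SP/case combination; the interaction term tests whether
# the examinee-gender effect depends on rater (SP) gender.
model = smf.mixedlm("cis ~ examinee_female * sp_female", df, groups=df["case_id"])
print(model.fit().summary())
```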


Academic Medicine | 2006

Assessing the underlying structure of the United States Medical Licensing Examination Step 2 test of clinical skills using confirmatory factor analysis.

Andre F. De Champlain; Kimberly A. Swygert; David B. Swanson; John R. Boulet

Background The purpose of the present study was to assess the fit of three factor analytic (FA) models with a representative set of United States Medical Licensing Examination (USMLE) Step 2 Clinical Skills (CS) cases and examinees based on substantive considerations. Method Checklist, patient note, communication and interpersonal skills, as well as spoken English proficiency data were collected from 387 examinees on a set of four USMLE Step 2 CS cases. The fit of skills-based, case-based, and hybrid models was assessed. Results Findings show that a skills-based model best accounted for performance on the set of four CS cases. Conclusion Results of this study provide evidence to support the structural aspect of validity. The proficiency set used by examinees when performing on the Step 2 CS cases is consistent with the scoring rubric employed and the blueprint used in form assembly. These findings are discussed in light of past research in this area.


Journal of General Internal Medicine | 2012

The Impact of Postgraduate Training on USMLE® Step 3® and Its Computer-Based Case Simulation Component

Richard A. Feinberg; Kimberly A. Swygert; Steven A. Haist; Gerard F. Dillon; Constance T. Murray

Background The United States Medical Licensing Examination® (USMLE®) Step 3® examination is a computer-based examination composed of multiple-choice questions (MCQs) and computer-based case simulations (CCS). The CCS portion of Step 3 is unique in that examinees are exposed to interactive patient-care simulations. Objective The purpose of this study is to investigate whether the type and length of examinees’ postgraduate training impact performance on the CCS component of Step 3, consistent with previous research on overall Step 3 performance. Design Retrospective cohort study. Participants Medical school graduates from U.S. and Canadian institutions completing Step 3 for the first time between March 2007 and December 2009 (n = 40,588). Methods Postgraduate training was classified as either broadly focused for general areas of medicine (e.g., pediatrics) or narrowly focused for specific areas of medicine (e.g., radiology). A three-way between-subjects MANOVA was used to test for main and interaction effects on Step 3 and CCS scores between the demographic characteristics of the sample and type of residency. Additionally, to examine the impact of postgraduate training, CCS scores were regressed on Step 1 and Step 2 Clinical Knowledge (CK) scores, and residuals from the resulting regressions were plotted. Results There was a significant difference in CCS scores between broadly focused (μ = 216, σ = 17) and narrowly focused (μ = 211, σ = 16) residencies (p < 0.001). Examinees in broadly focused residencies performed better overall, and the advantage grew as length of training increased, compared to examinees in narrowly focused residencies. Step 1 and Step 2 CK scores explained 55% of overall Step 3 score variability but only 9% of CCS score variability. Conclusions Factors influencing performance on the CCS component may be similar to those affecting Step 3 overall. The findings support the validity of the Step 3 program and may be useful to program directors and residents in considering readiness to take this examination.
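The regression step mentioned in the abstract can be sketched as follows (Python/statsmodels, simulated scores; the 55% and 9% figures above are R² values from regressions of this general form, not something this toy data reproduces).

```python
# Regress CCS scores on Step 1 and Step 2 CK scores, then compare residuals by
# residency focus. Scores and group labels are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 5000
step1 = rng.normal(220, 20, n)
step2ck = rng.normal(230, 20, n)
broad = rng.integers(0, 2, n)  # 1 = broadly focused residency (hypothetical label)
ccs = 0.1 * step1 + 0.1 * step2ck + 3 * broad + rng.normal(0, 15, n)

df = pd.DataFrame({"ccs": ccs, "step1": step1, "step2ck": step2ck, "broad": broad})

fit = smf.ols("ccs ~ step1 + step2ck", data=df).fit()
print(f"R^2 from prior Step scores: {fit.rsquared:.2f}")

# Positive mean residuals indicate scoring above what prior MCQ performance predicts.
df["resid"] = fit.resid
print(df.groupby("broad")["resid"].mean())
```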


Academic Medicine | 2008

The Generalizability of Documentation Scores from the USMLE Step 2 Clinical Skills Examination

Brian E. Clauser; Polina Harik; Melissa J. Margolis; Janet Mee; Kimberly A. Swygert; Thomas Rebbecchi

Background This research examined various sources of measurement error in the documentation score component of the United States Medical Licensing Examination (USMLE) Step 2 Clinical Skills examination. Method A generalizability theory framework was employed to examine the documentation ratings for 847 examinees who completed the USMLE Step 2 Clinical Skills examination during an eight-day period in 2006. Each patient note was scored by two different raters, allowing for a persons-crossed-with-raters-nested-in-cases design. Results The results suggest that inconsistent performance on the part of raters makes a substantially greater contribution to measurement error than case specificity. Double scoring the notes significantly increases precision. Conclusions The results provide guidance for improving operational scoring of the patient notes. Double scoring of the notes may produce an increase in the precision of measurement equivalent to that achieved by lengthening the test by more than 50%. The study also cautions researchers that when examining sources of measurement error, inappropriate data-collection designs may result in inaccurate inferences.
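As a sketch of why double scoring helps, the snippet below computes the relative error variance for a persons-crossed-with-(raters-nested-in-cases) design from assumed variance components; the component values are invented for illustration and are not the study's estimates.

```python
# Relative error variance for a p x (r:c) design (persons crossed with raters
# nested in cases). Doubling raters per note halves the rater/residual term.
def relative_error_variance(var_pc, var_pr_c, n_cases, n_raters_per_note):
    """sigma^2(delta) = sigma^2(pc)/n_c + sigma^2(pr:c,e)/(n_c * n_r)."""
    return var_pc / n_cases + var_pr_c / (n_cases * n_raters_per_note)

# Hypothetical components: the rater/residual variance dominates the person-by-case
# variance, mirroring the finding that rater inconsistency outweighs case specificity.
var_pc, var_pr_c = 0.05, 0.30
for n_raters in (1, 2):
    err = relative_error_variance(var_pc, var_pr_c, n_cases=12, n_raters_per_note=n_raters)
    print(f"{n_raters} rater(s) per note: relative error variance = {err:.4f}")
```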


Advances in Health Sciences Education | 2012

Gender differences in examinee performance on the Step 2 Clinical Skills® data gathering (DG) and patient note (PN) components

Kimberly A. Swygert; Monica M. Cuddy; Marta van Zanten; Steven A. Haist; Ann C. Jobe

Multiple studies examining the relationship between physician gender and performance on examinations have found consistent significant gender differences, but relatively little information is available related to any gender effect on interviewing and written communication skills. The United States Medical Licensing Examination (USMLE®) Step 2 Clinical Skills® (CS®) examination is a multi-station examination where examinees (physicians in training) interact with, and are rated by, standardized patients (SPs) portraying cases in an ambulatory setting. Data from a recent complete year (2009) were analyzed via a series of hierarchical linear models to examine the impact of examinee gender on performance on the data gathering (DG) and patient note (PN) components of this examination. Results from both components show that not only do women have higher scores on average, but women continue to perform significantly better than men when other examinee and case variables are taken into account. Generally, the effect sizes are moderate, reflecting an approximately 2% score advantage by encounter. The advantage for female examinees increased for encounters that did not require a physical examination (for the DG component only) and for encounters that involved a Women’s Health issue (for both components). The gender of the SP did not have an impact on the examinee gender effect for DG, indicating a desirable lack of interaction between examinee and SP gender. The implications of the findings, especially with respect to the validity of the use of the examination outcomes, are discussed.


Academic Medicine | 2011

Evaluating Construct Equivalence and Criterion-related Validity for Repeat Examinees on a Standardized Patient Examination

Mark R. Raymond; Nilufer Kahraman; Kimberly A. Swygert; Kevin P. Balog

Purpose Prior studies report large score gains for examinees who fail and later repeat standardized patient (SP) assessments. Although research indicates that score gains on SP exams cannot be attributed to memorizing previous cases, no studies have investigated the empirical validity of scores for repeat examinees. This report compares single-take and repeat examinees in terms of both internal (construct) validity and external (criterion-related) validity. Method Data consisted of test scores for examinees who took the United States Medical Licensing Examination Step 2 Clinical Skills (CS) exam between July 16, 2007, and September 12, 2009. The sample included 12,090 examinees who completed Step 2 CS on one occasion and another 4,030 examinees who completed the exam on two occasions. The internal measures included four separately scored performance domains of the Step 2 CS examination, whereas the external measures consisted of scores on three written assessments of medical knowledge (Step 1, Step 2 clinical knowledge, and Step 3). The authors subjected the four Step 2 CS domains to confirmatory factor analysis and evaluated correlations between Step 2 CS scores and the three written assessments for single-take and repeat examinees. Results The factor structure for repeat examinees on their first attempt was markedly different from the factor structure for single-take examinees, but it became more similar to that for single-take examinees by their second attempt. Scores on the second attempt correlated more highly with all three external measures. Conclusions The findings support the validity of scores for repeat examinees on their second attempt.


Military Medicine | 2015

Does the MCAT Predict Medical School and PGY-1 Performance?

Aaron Saguil; Ting Dong; Robert J. Gingerich; Kimberly A. Swygert; Jeffrey S. LaRochelle; Anthony R. Artino; David F. Cruess; Steven J. Durning

BACKGROUND The Medical College Admission Test (MCAT) is a high-stakes test required for entry to most U.S. medical schools; admissions committees use this test to predict future accomplishment. Although there is evidence that the MCAT predicts success on multiple-choice-based assessments, there is little information on whether the MCAT predicts clinical-based assessments of undergraduate and graduate medical education performance. This study looked at associations between the MCAT and medical school grade point average (GPA), United States Medical Licensing Examination (USMLE) scores, observed patient care encounters, and residency performance assessments. METHODS This study used data collected as part of the Long-Term Career Outcome Study to determine associations between MCAT scores; USMLE Step 1, Step 2 Clinical Knowledge, Step 2 Clinical Skills, and Step 3 scores; Objective Structured Clinical Examination performance; medical school GPA; and PGY-1 program director (PD) assessment of physician performance for students graduating in 2010 and 2011. RESULTS MCAT data were available for all students, and the PGY-1 PD evaluation response rate was 86.2% (N = 340). All permutations of MCAT scores (first, last, highest, average) were weakly associated with GPA, Step 2 Clinical Knowledge scores, and Step 3 scores. MCAT scores were weakly to moderately associated with Step 1 scores. MCAT scores were not significantly associated with Step 2 Clinical Skills Integrated Clinical Encounter and Communication and Interpersonal Skills subscores, Objective Structured Clinical Examination performance, or PGY-1 PD evaluations. DISCUSSION MCAT scores were weakly to moderately associated with assessments that rely on multiple-choice testing. The association is somewhat stronger for assessments occurring earlier in medical school, such as USMLE Step 1. The MCAT was not able to predict assessments relying on direct clinical observation, nor was it able to predict PD assessment of PGY-1 performance.
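A small sketch of the association analysis (Python, simulated score vectors; the use of Pearson correlation here is an assumption, and the sample size and strengths of association are illustrative only).

```python
# Correlations between MCAT scores and later outcomes, using simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 340
mcat = rng.normal(30, 3, n)
step1 = 0.4 * stats.zscore(mcat) + rng.normal(0, 1, n)  # built to be moderately related
pd_rating = rng.normal(0, 1, n)                         # built to be essentially unrelated

for name, outcome in [("Step 1 (simulated)", step1), ("PGY-1 PD rating (simulated)", pd_rating)]:
    r, p = stats.pearsonr(mcat, outcome)
    print(f"MCAT vs {name}: r = {r:.2f}, p = {p:.3f}")
```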


Academic Medicine | 2014

Is poor performance on NBME clinical subject examinations associated with a failing score on the USMLE Step 3 examination?

Ting Dong; Kimberly A. Swygert; Steven J. Durning; Aaron Saguil; Christopher M. Zahn; Kent J. DeZee; William R. Gilliland; David F. Cruess; Erin K. Balog; Jessica Servey; David R. Welling; Matthew Ritter; Matthew Goldenberg; Laura B. Ramsay; Anthony R. Artino

Purpose To investigate the association between poor performance on National Board of Medical Examiners clinical subject examinations across six core clerkships and performance on the United States Medical Licensing Examination Step 3 examination. Method In 2012, the authors studied matriculants from the Uniformed Services University of the Health Sciences with available Step 3 scores and subject exam scores on all six clerkships (Classes of 2007–2011, N = 654). Poor performance on subject exams was defined as scoring one standard deviation (SD) or more below the mean using the national norms of the corresponding test year. The association between poor performance on the subject exams and the probability of passing or failing Step 3 was tested using contingency table analyses and logistic regression modeling. Results Students performing poorly on one subject exam were significantly more likely to fail Step 3 (OR 14.23 [95% CI 1.7–119.3]) compared with students with no subject exam scores that were 1 SD below the mean. Poor performance on more than one subject exam further increased the chances of failing (OR 33.41 [95% CI 4.4–254.2]). This latter group represented 27% of the entire cohort, yet contained 70% of the students who failed Step 3. Conclusions These findings suggest that individual schools could benefit from a review of subject exam performance to develop and validate their own criteria for identifying students at risk for failing Step 3.
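A minimal sketch of the logistic-regression step described above, assuming simulated data and statsmodels; the odds ratios printed here come from the fabricated data, not the cohort.

```python
# Model the odds of failing Step 3 from poor subject-exam performance
# (one vs. more than one score >= 1 SD below the mean). Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 654
poor_exams = rng.integers(0, 4, n)                       # count of low subject-exam scores
p_fail = 1 / (1 + np.exp(-(-3.5 + 1.2 * poor_exams)))    # assumed true relationship
fail = rng.binomial(1, p_fail)

df = pd.DataFrame({"fail": fail,
                   "poor_one": (poor_exams == 1).astype(int),
                   "poor_multi": (poor_exams > 1).astype(int)})

fit = smf.logit("fail ~ poor_one + poor_multi", data=df).fit(disp=False)
odds_ratios = np.exp(fit.params).rename("OR")
ci = np.exp(fit.conf_int()).rename(columns={0: "CI 2.5%", 1: "CI 97.5%"})
print(pd.concat([odds_ratios, ci], axis=1))
```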


Advances in Health Sciences Education | 2012

Measurement Precision for Repeat Examinees on a Standardized Patient Examination.

Mark R. Raymond; Kimberly A. Swygert; Nilufer Kahraman

Examinees who initially fail and later repeat an SP-based clinical skills exam typically exhibit large score gains on their second attempt, suggesting the possibility that examinees were not well measured on one of those attempts. This study evaluates score precision for examinees who repeated an SP-based clinical skills test administered as part of the US Medical Licensing Examination sequence. Generalizability theory was used as the basis for computing conditional standard errors of measurement (SEM) for individual examinees. Conditional SEMs were computed for approximately 60,000 single-take examinees and 5,000 repeat examinees who completed the Step 2 Clinical Skills Examination® between 2007 and 2009. The study focused exclusively on ratings of communication and interpersonal skills. Conditional SEMs for single-take and repeat examinees were nearly indistinguishable across most of the score scale. US graduates and international medical graduates (IMGs) were measured with equal levels of precision at all score levels, as were examinees with differing levels of spoken English proficiency. There was no evidence that examinees with the largest score changes were measured poorly on either their first or second attempt. The large score increases for repeat examinees on this SP-based exam probably cannot be attributed to unexpectedly large errors of measurement.
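A minimal sketch of one way to compute a conditional SEM for an individual examinee, assuming the simple standard-error-of-the-mean formulation over encounters rather than the study's full generalizability-theory derivation; the ratings below are invented.

```python
# Conditional SEM for a single examinee's mean communication rating across
# encounters, computed from within-person variability. Ratings are hypothetical.
import numpy as np

def conditional_sem(encounter_scores):
    """Standard error of the examinee's mean score across encounters."""
    scores = np.asarray(encounter_scores, dtype=float)
    return scores.std(ddof=1) / np.sqrt(scores.size)

stable_profile = [6.1, 5.8, 6.4, 5.9, 6.2, 6.0, 5.7, 6.3, 6.1, 5.9, 6.0, 6.2]
erratic_profile = [4.9, 6.2, 5.1, 6.4, 4.8, 6.0, 5.3, 6.1, 5.0, 6.3, 5.2, 5.9]

print(f"stable profile SEM:  {conditional_sem(stable_profile):.3f}")
print(f"erratic profile SEM: {conditional_sem(erratic_profile):.3f}")
```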

Collaboration


Dive into Kimberly A. Swygert's collaborations.

Top Co-Authors

David B. Swanson (National Board of Medical Examiners)
Mark R. Raymond (National Board of Medical Examiners)
Steven J. Durning (Uniformed Services University of the Health Sciences)
Ting Dong (Uniformed Services University of the Health Sciences)
Aaron Saguil (Uniformed Services University of the Health Sciences)
Anthony R. Artino (Uniformed Services University of the Health Sciences)
Brian E. Clauser (National Board of Medical Examiners)
Nilufer Kahraman (National Board of Medical Examiners)
Steven A. Haist (National Board of Medical Examiners)
Christopher M. Zahn (Uniformed Services University of the Health Sciences)