Gerard F. Dillon
National Board of Medical Examiners
Publication
Featured research published by Gerard F. Dillon.
Simulation in Healthcare: Journal of the Society for Simulation in Healthcare | 2009
John R. Boulet; Sydney Smee; Gerard F. Dillon; John R. Gimpel
Although standardized patients have been employed for formative assessment for over 40 years, their use in high-stakes medical licensure examinations has been a relatively recent phenomenon. As part of the medical licensure process in the United States and Canada, the clinical skills of medical students, medical school graduates, and residents are evaluated in a simulated clinical environment. All of the evaluations attempt to provide the public with some assurance that the person who achieves a passing score has the knowledge and/or requisite skills to provide safe and effective medical services. Although the various standardized patient-based licensure examinations differ somewhat in terms of purpose, content, and scope, they share many commonalities. More important, given the extensive research that was conducted to support these testing initiatives, combined with their success in promoting educational activities and in identifying individuals with clinical skills deficiencies, they provide a framework for validating new simulation modalities and extending simulation-based assessment into other areas.
Academic Medicine | 2002
Gerard F. Dillon; Stephen G. Clyman; Brian E. Clauser; Melissa J. Margolis
In the early to mid-1990s, the National Board of Medical Examiners (NBME) examinations were replaced by the United States Medical Licensing Examination (USMLE). The USMLE, which was designed to have three components or Steps, was administered as a paper-and-pencil test until the late 1990s, when it moved to a computer-based testing (CBT) format. The CBT format provided the opportunity to realize the results of simulation research and development that had occurred during the prior two decades. A milestone in this effort occurred in November 1999 when, with the implementation of the computer-delivered USMLE Step 3 examination, the Primum Computer-based Case Simulations (CCSs) were introduced. In the year preceding this introduction and in the more than two years of operational use since, numerous challenges have been addressed. Preliminary results of this initial experience have been promising. This paper introduces the relevant issues, describes some pertinent research findings, and identifies next steps for research.
Academic Medicine | 2006
Polina Harik; Brian E. Clauser; Irina Grabovsky; Melissa J. Margolis; Gerard F. Dillon; John R. Boulet
Background This research examined relationships between and among scores from the United States Medical Licensing Examination (USMLE) Step 1, Step 2 Clinical Knowledge (CK), and subcomponents of the Step 2 Clinical Skills (CS) examination. Method Correlations and failure rates were produced for first-time takers who tested during the first year of Step 2 CS Examination administration (June 2004 to July 2005). Results True-score correlations were high between patient note (PN) and data gathering (DG), moderate between communication and interpersonal skills and DG, and low between the remaining score pairs. There was little overlap between examinees failing Step 2 CK and the different components of Step 2 CS. Conclusion Results suggest that combining DG and PN scores into a single composite score is reasonable and that relatively little redundancy exists between Step 2 CK and CS scores.
Academic Medicine | 2006
Monica M. Cuddy; David B. Swanson; Gerard F. Dillon; Matthew C. Holtman; Brian E. Clauser
Background This study examines: (1) the relationships between examinee characteristics and United States Medical Licensing Examination Step 2 Clinical Knowledge (CK) performance; (2) the effect of gender and examination timing (time per item) on the relationship between Steps 1 and 2 CK; and (3) the effect of school characteristics on the relationships between examinee characteristics and Step 2 CK performance. Method A series of hierarchical linear models (examinees-nested-in-schools) predicting Step 2 CK scores was fit to the data set. The sample included 54,487 examinees from 114 U.S. Liaison Committee on Medical Education–accredited medical schools. Results Consistent with past examinee-level research, women generally outperformed men on Step 2 CK, and examinees who received more time per item generally outperformed examinees who received less time per item. Step 1 score was generally more strongly associated with Step 2 CK performance for men and for examinees who received less time per item. School-level characteristics (size, average Step 1 performance) influenced the relationship between Steps 1 and 2 CK. Conclusion Both examinee-level and school-level characteristics are important for understanding Step 2 CK performance.
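To make the examinees-nested-in-schools analysis concrete, here is a minimal sketch of a two-level random-intercept model in Python with statsmodels. The data file and column names (step2ck, step1, female, time_per_item, school_id) are hypothetical placeholders for illustration, not the study's actual variables.

```python
# Minimal sketch, assuming an examinee-level file with the columns named below;
# these names are illustrative and not taken from the study.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("step2ck_examinees.csv")  # hypothetical data file

# Random intercept for school captures between-school variation in Step 2 CK means,
# while the fixed effects mirror the examinee-level predictors discussed above.
model = smf.mixedlm(
    "step2ck ~ step1 + female + time_per_item",
    data=df,
    groups=df["school_id"],
)
result = model.fit()
print(result.summary())
```

An interaction term such as step1:female would correspond to the moderation effects reported above, for example Step 1 being more strongly associated with Step 2 CK performance for men.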
JAMA | 2013
Steven A. Haist; Peter J. Katsufrakis; Gerard F. Dillon
The United States Medical Licensing Examination (USMLE) is the only route to medical licensure for graduates of Liaison Committee on Medical Education (LCME)–accredited medical schools in the United States and for all graduates of international medical schools. It currently consists of 4 examinations: Step 1, assessing application of foundational science; Step 2 Clinical Skills, assessing communication, physical examination, and data interpretation skills; Step 2 Clinical Knowledge, assessing knowledge of clinical medicine; and Step 3, assessing application of clinical knowledge and patient management. In the early 1990s, the USMLE replaced the National Board of Medical Examiners (NBME) certification examinations and the Federation Licensing Examination (FLEX) program. Since its inception, the USMLE has undergone gradual evolution in design and format, with some major changes that include computerized examination delivery and the use of computerized patient simulations in 1999, and standardized patients introduced in 2004 to assess clinical skills.

In 2004, USMLE undertook an in-depth review of the program's purpose, design, and format. The review resulted in 5 major recommendations adopted in 2009: (1) to focus on assessments that support state licensing authorities' decisions about a physician's readiness to provide patient care at entry into supervised practice and entry into unsupervised practice; (2) to adopt a general competencies schema consistent with national standards for the overall design, development, and scoring of USMLE; (3) to emphasize the scientific foundations of medicine in all components of USMLE; (4) to continue and enhance the assessment of clinical skills important to medical practice; and (5) to introduce assessment of an examinee's ability to obtain, interpret, and apply scientific and clinical information.

Why change the USMLE? Medicine and clinical practice evolve, mandating revision in education and assessment. These changes have included increasing use of clinical cases to facilitate problem-based learning early in the educational process,1 widespread use of standardized patients to teach and assess trainees,2 revisiting the basic sciences during the fourth year of medical school,3 and evolving technology resulting in increased adoption of high-fidelity simulations for teaching and assessment.4,5 The mandate to USMLE was that remaining relevant to evolving practice requires an evolution in the focus and design of assessment that parallels educational change.

The USMLE review also identified unintended consequences of the current examination program. As examples, many medical students prepared for Step 1 with a "binge and purge" mentality; because students failed to recognize the value of the basic sciences in medical practice, many memorized information for short-term retention. Planned changes to emphasize basic sciences throughout USMLE (the third recommendation) strive to change this mentality by reinforcing a physician's ability to apply foundational science in patient care throughout the USMLE. In the Step 2 Clinical Skills examination, scoring of the standardized patient cases via a history checklist caused many examinees to ask as many questions as possible in the shortest amount of time. This "shotgun" approach to history-taking does not resemble how medical educators and communication experts teach patient communication and does not reflect a behavior that is in the best interest of patients.
Consistent with the first recommendation for development of licensure and certification examinations that reflect practice readiness,6 the NBME undertook a series of analyses to inform changes to the USMLE. Practice analyses draw on expert judgment along with the subsequent activities of successful examinees, compiled via questionnaires, analyses of health care records, and direct observation. Five national databases were analyzed, and surveys were conducted of beginning interns7 and newly licensed physicians. Only 15% of the interns' experiences were ambulatory-based; interns were required to perform a variety of procedures, often with general attending supervision. Also common were complex communication tasks; information retrieval, evaluation, and integration; and ordering and interpreting as well as performing a variety of procedures. Moonlighting activities among residents were prevalent. The practice analyses suggest modifying the practice setting reflected in the examinations, assessing knowledge about a variety of procedures, and expanding the represented competencies to include complex communication tasks and evidence-based medicine skills.

In response to the second recommendation, the USMLE adopted the Accreditation Council for Graduate Medical Education general medical competency-based schema. Under this schema, the 6 competencies (medical knowledge, patient care, communication and interpersonal skills, practice-based learning and improvement, professionalism, and systems-based practice) and associated subcompetencies are used to guide test design, content development, and score reporting, and to organize content within examinations. The competencies will provide a framework for feedback to examinees and schools and will shape the USMLE research agenda. Recent and future changes to USMLE are outlined in the Table.

In response to the fourth recommendation, the Step 2 Clinical Skills examination continues to evolve.
Academic Medicine | 2010
Carol Morrison; Linette P. Ross; Thomas Fogle; Aggie Butler; Judith G. Miller; Gerard F. Dillon
Background This study examined the relationship between performance on the National Board of Medical Examiners Comprehensive Basic Science Self-Assessment (CBSSA) and performance on United States Medical Licensing Examination Step 1. Method The study included 12,224 U.S. and Canadian medical school students who took CBSSA prior to their first Step 1 attempt. Linear and logistic regression analyses investigated the relationship between CBSSA performance and performance on Step 1, and how that relationship was related to interval between exams. Results CBSSA scores explained 67% of the variation in first Step 1 scores as the sole predictor variable and 69% of the variation when time between CBSSA attempt and first Step 1 attempt was also included as a predictor. Logistic regression results showed that examinees with low scores on CBSSA were at higher risk of failing their first Step 1 attempt. Conclusions Results suggest that CBSSA can provide students with a realistic self-assessment of their readiness to take Step 1.
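The two analyses reported above, a linear regression for score prediction and a logistic regression for failure risk, can be sketched as follows; the file name and variable names are assumptions for illustration only.

```python
# Hedged sketch of the reported analyses; column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cbssa_step1.csv")  # one row per examinee (assumed layout)

# Linear regression: variance in first Step 1 scores explained by CBSSA,
# with the interval between the two tests as an additional predictor.
linear = smf.ols("step1_score ~ cbssa_score + weeks_between", data=df).fit()
print(f"R^2: {linear.rsquared:.2f}")  # the study reports roughly 0.69

# Logistic regression: risk of failing the first Step 1 attempt.
logit = smf.logit("step1_fail ~ cbssa_score", data=df).fit()
print(logit.summary())
```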
Academic Medicine | 2008
Katherine Berg; Marci Winward; Brian E. Clauser; Judith A. Veloski; Dale Berg; Gerard F. Dillon; J. Jon Veloski
Background Little is known about the relationship between performance on clinical assessments during medical school and performance on similar licensing tests. Method Correlation coefficients were computed and corrected for measurement error using data for 217 students who completed a school's clinical assessment and took the Step 2 Clinical Skills (CS) examination. Results Observed (and corrected) correlations between the two tests were 0.18 (0.32) for Data Gathering, 0.35 (0.75) for Documentation, and 0.32 (0.56) for Communication/Interpersonal Skills. The highest correlation within each test was between Documentation and Data Gathering. The lowest was between Documentation and Communication/Interpersonal Skills. Conclusions The pattern of correlations supports each test's construct validity. The low correlations suggest that the tests are not redundant and do not support using the scores on the school's assessment to predict performance on Step 2 CS. Future studies of these relationships need to address the time between the two assessments and the effect of intervening remedial programs.
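The "corrected" values in parentheses above follow Spearman's correction for attenuation, which divides the observed correlation by the square root of the product of the two scores' reliabilities. A small sketch is below; the reliability values are placeholders chosen only to reproduce the reported 0.18 to 0.32 pattern, since the abstract does not give the actual estimates.

```python
# Spearman's correction for attenuation: r_true = r_obs / sqrt(rel_x * rel_y).
# The reliabilities below are illustrative placeholders, not study values.
import math

def disattenuate(r_obs: float, rel_x: float, rel_y: float) -> float:
    """Estimate the true-score correlation from an observed correlation."""
    return r_obs / math.sqrt(rel_x * rel_y)

print(round(disattenuate(0.18, 0.55, 0.58), 2))  # ~0.32, as for Data Gathering
```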
Academic Medicine | 2014
Kathleen Z. Holtzman; David B. Swanson; Wenli Ouyang; Gerard F. Dillon; John R. Boulet
Purpose To investigate country-to-country variation in performance across clinical science disciplines and tasks for examinees taking the Step 2 Clinical Knowledge (CK) component of the United States Medical Licensing Examination. Method In 2012 the authors analyzed demographic characteristics, total scores, and percent-correct clinical science discipline and task scores for more than 88,500 examinees taking Step 2 CK for the first time during the 2008–2010 academic years. For each examinee and score, differences between the score and the mean performance of examinees at U.S. MD-granting medical schools were calculated, and mean differences by country of medical school were tabulated for analysis of country-to-country variation in performance by clinical discipline and task. Results Controlling for overall performance relative to U.S. examinees, results showed that international medical graduates (IMGs) performed best in Surgery and worst in Psychiatry for clinical discipline scores; for clinical tasks, IMGs performed best in Understanding Mechanisms of Disease and worst in Promoting Preventive Medicine and Health Maintenance. The pattern of results was strongest for IMGs attending schools in the Middle East and Australasia, present to a lesser degree for IMGs attending schools in Europe, and absent for IMGs attending Caribbean medical schools. Conclusions Country-to-country differences in relative performance were present for both clinical discipline and task scores. Possible explanations include differences in learning outcomes, curriculum emphasis and clinical experience, standards of care, and culture, as well as the effects of English as a second language and relative emphasis on preparing students to take the Step 2 CK exam.
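The country-level tabulation described in the Method section can be sketched roughly as follows; the data layout, file name, and score columns are assumptions made for illustration.

```python
# Rough sketch: per-discipline deviation from the U.S. MD-school mean, then
# averaged by country of medical school. All names below are hypothetical.
import pandas as pd

df = pd.read_csv("step2ck_discipline_scores.csv")  # assumed examinee-level file

score_cols = ["surgery", "psychiatry", "mechanisms_of_disease", "preventive_medicine"]

# Reference point: mean percent-correct for examinees at U.S. MD-granting schools.
us_means = df.loc[df["school_country"] == "USA", score_cols].mean()

# Each examinee's deviation from the U.S. mean, then the country-level pattern.
deviations = df[score_cols] - us_means
print(deviations.groupby(df["school_country"]).mean())
```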
Academic Medicine | 2003
Amy Sawhill; Gerard F. Dillon; Douglas R. Ripkey; Richard E. Hawkins; David B. Swanson
Purpose. This study examined the extent to which differences in clinical experience, gained in postgraduate training programs, affect performance on Step 3 of the United States Medical Licensing Examination (USMLE). Method. Subjects in the study were 36,805 U.S. and Canadian medical school graduates who took USMLE Step 3 for the first time between November 1999 and December 2002. Regression analyses examined the relation between length and type of postgraduate training and Step 3 score after controlling for prior performance on previous USMLE examinations. Results. Results indicate that postgraduate training in programs that provide exposure to a broad range of patient problems, and continued training in such areas, improves performance on Step 3. Conclusions. Study data reaffirm the validity of the USMLE Step 3 examination, and the information found in the pattern of results across specialties may be useful to residents and program directors.
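A covariate-adjusted regression of the kind described above might look like the sketch below; the variable names and training-type coding are hypothetical, not the study's actual specification.

```python
# Sketch of Step 3 scores regressed on training variables while controlling
# for prior USMLE performance; all column names are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("step3_examinees.csv")  # assumed examinee-level data

model = smf.ols(
    "step3 ~ step1 + step2 + C(training_type) + months_of_training",
    data=df,
).fit()

# Training-type and training-length coefficients, net of prior performance.
print(model.summary())
```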
Journal of General Internal Medicine | 2012
Richard A. Feinberg; Kimberly A. Swygert; Steven A. Haist; Gerard F. Dillon; Constance T. Murray
BACKGROUND The United States Medical Licensing Examination® (USMLE®) Step 3® examination is a computer-based examination composed of multiple-choice questions (MCQ) and computer-based case simulations (CCS). The CCS portion of Step 3 is unique in that examinees are exposed to interactive patient-care simulations. OBJECTIVE The purpose of this study is to investigate whether the type and length of examinees' postgraduate training impacts performance on the CCS component of Step 3, consistent with previous research on overall Step 3 performance. DESIGN Retrospective cohort study. PARTICIPANTS Medical school graduates from U.S. and Canadian institutions completing Step 3 for the first time between March 2007 and December 2009 (n = 40,588). METHODS Postgraduate training was classified as either broadly focused for general areas of medicine (e.g., pediatrics) or narrowly focused for specific areas of medicine (e.g., radiology). A three-way between-subjects MANOVA was used to test for main and interaction effects on Step 3 and CCS scores between the demographic characteristics of the sample and type of residency. Additionally, to examine the impact of postgraduate training, CCS scores were regressed on Step 1 and Step 2 Clinical Knowledge (CK) scores, and residuals from the resulting regressions were plotted. RESULTS There was a significant difference in CCS scores between broadly focused (μ = 216, σ = 17) and narrowly focused (μ = 211, σ = 16) residencies (p < 0.001). Examinees in broadly focused residencies performed better overall and as length of training increased, compared to examinees in narrowly focused residencies. Step 1 and Step 2 CK scores explained 55% of overall Step 3 variability and 9% of CCS score variability. CONCLUSIONS Factors influencing performance on the CCS component may be similar to those affecting Step 3 overall. Findings are supportive of the validity of the Step 3 program and may be useful to program directors and residents in considering readiness to take this examination.
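The residual analysis mentioned in the Methods can be sketched as follows; the file and column names are assumed for illustration, and the grouping variable stands in for the broadly versus narrowly focused classification.

```python
# Sketch: regress CCS scores on prior USMLE scores, then compare the residuals
# by residency focus. Names below are hypothetical, not the study's variables.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("step3_ccs.csv")  # assumed examinee-level file

fit = smf.ols("ccs_score ~ step1 + step2ck", data=df).fit()
df["ccs_residual"] = fit.resid  # CCS performance not explained by prior scores

# A positive mean residual for broadly focused programs would mirror the
# advantage reported above for those examinees.
print(df.groupby("residency_focus")["ccs_residual"].mean())
print(f"R^2 for CCS on prior scores: {fit.rsquared:.2f}")  # study reports ~0.09
```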