John C. Burkhardt
University of Michigan
Publications
Featured research published by John C. Burkhardt.
Journal of Emergency Medicine | 2014
Laura R. Hopson; John C. Burkhardt; R. Brent Stansfield; Taher Vohra; Danielle Turner-Lawrence; Eve Losman
BACKGROUND The Multiple Mini-Interview (MMI) uses multiple short, structured contacts to evaluate communication and professionalism. It predicts medical school success better than the traditional interview and application. Its acceptability and utility in emergency medicine (EM) residency selection are unknown. OBJECTIVE We theorized that participants would judge the MMI equal to a traditional unstructured interview and that it would provide new information for candidate assessment. METHODS Seventy-one interns from 3 programs in the first month of training completed an eight-station MMI focused on EM topics. Pre- and post-surveys assessed reactions. MMI scores were compared with application data. RESULTS EM grades correlated with MMI performance (F[1, 66] = 4.18; p < 0.05), with honors students having higher scores. Higher third-year clerkship grades were associated with higher MMI performance, although this was not statistically significant. MMI performance did not correlate with match desirability and did not predict most other components of an application. There was a correlation between lower MMI scores and lower global ranking on the Standardized Letter of Recommendation. Participants preferred a traditional interview (mean difference = 1.36; p < 0.01). A mixed format (traditional interview and MMI) was preferred over an MMI alone (mean difference = 1.1; p < 0.01). MMI performance did not significantly correlate with preference for the MMI. CONCLUSIONS Although the MMI alone was viewed less favorably than a traditional interview, participants were receptive to a mixed-methods interview. The MMI does correlate with performance on the EM clerkship and therefore can measure important abilities for EM success. Future work will determine whether MMI performance predicts residency performance.
Medical Education | 2016
Larry D. Gruppen; John C. Burkhardt; James T. Fitzgerald; Martha M. Funnell; Hilary M. Haftel; Monica L. Lypson; Patricia B. Mullan; Sally A. Santen; Kent J. Sheets; Caren M. Stalburg; John A. Vasquez
Competency‐based education (CBE) has been widely cited as an educational framework for medical students and residents, and provides a framework for designing educational programmes that reflect four critical features: a focus on outcomes, an emphasis on abilities, a reduction of emphasis on time‐based training, and promotion of learner centredness. Each of these features has implications and potential challenges for implementing CBE.
Medical Teacher | 2015
James T. Fitzgerald; John C. Burkhardt; Steven J. Kasten; Patricia B. Mullan; Sally A. Santen; Kent J. Sheets; Antonius Tsai; John A. Vasquez; Larry D. Gruppen
Abstract There is a growing demand for health sciences faculty with formal training in education. Addressing this need, the University of Michigan Medical School created a Master of Health Professions Education (UM-MHPE). The UM-MHPE is a competency-based education (CBE) program targeting professionals. The program is individualized and adaptive to the learner’s situation using personal mentoring. Critical to CBE is an assessment process that accurately and reliably determines a learner’s competence in educational domains. The program’s assessment method has two principal components: an independent assessment committee and a learner repository. Learners submit evidence of competence that is evaluated by three independent assessors. The assessments are presented to an Assessment Committee, which determines whether the submission provides evidence of competence. The learner receives feedback on the submission and, if needed, the actions needed to reach competency. During the program’s first year, six learners presented 10 submissions for review. Assessing learners in a competency-based program has created challenges; setting standards that are not readily quantifiable is difficult. However, we argue it is a more genuine form of assessment and that this process could be adapted for use within most competency-based formats. While our approach is demanding, we document practical learning outcomes that assess competence.
Western Journal of Emergency Medicine | 2015
Mark Silverberg; Moshe Weizberg; Tiffany Murano; Jessica L. Smith; John C. Burkhardt; Sally A. Santen
Introduction The primary objective of this study was to determine the prevalence of remediation, the competency domains requiring remediation, and the length and success rates of remediation in emergency medicine (EM). Methods We developed the survey in Surveymonkey™ with attention to content and response process validity. EM program directors reported how many residents had been placed on remediation in the last three years. Details regarding the remediation were collected, including indication, length, and success. We reported descriptive data and estimated a multinomial logistic regression model. Results We obtained 126/158 responses (79.7%). Ninety percent of programs had at least one resident on remediation in the last three years. The prevalence of remediation was 4.4%. Indications for remediation ranged from difficulties with one core competency to all six competencies (mean 1.9). The most common were medical knowledge (MK) (63.1% of residents), patient care (46.6%), and professionalism (31.5%). Mean length of remediation was eight months (range 1–36 months). Remediation was successful for 59.9% of remediated residents; remediation was ongoing for 31.3%, and in 8.7% it was deemed “unsuccessful.” Training year at time of identification for remediation (post-graduate year [PGY] 1), longer time spent in remediation, and concerns with practice-based learning (PBLI) and professionalism were found to have statistically significant associations with unsuccessful remediation. Conclusion Remediation in EM residencies is common, with the most common areas being MK and patient care. The majority of residents are successfully remediated. PGY level, length of time spent in remediation, and remediation of the PBLI and professionalism competencies were associated with unsuccessful remediation.
Western Journal of Emergency Medicine | 2016
William J. Peterson; Laura R. Hopson; Sorabh Khandelwal; Melissa White; Fiona E. Gallahue; John C. Burkhardt; Aimee M. Rolston; Sally A. Santen
Introduction This study investigates the impact of the Doximity rankings on the rank list choices made by residency applicants in emergency medicine (EM). Methods We sent an 11-item survey by email to all students who applied to EM residency programs at four institutions representing diverse geographical regions. Students were asked about their perception of the Doximity rankings and how the rankings may have impacted their rank list decisions. Results The response rate was 58% of 1,372 opened electronic surveys. This study found that a majority of medical students applying to residency in EM were aware of the Doximity rankings prior to submitting rank lists (67%). One-quarter (26%) of these applicants changed the number of programs on their rank list, or the order of those programs, based on the Doximity rankings. Though the absolute number of programs changed on the rank lists was small, the results demonstrate that the EM Doximity rankings impact applicant decision-making in ranking residency programs. Conclusion While applicants do not find the Doximity rankings to be important compared with other factors in the application process, the rankings result in a small change in residency applicant ranking behavior. This unvalidated ranking, based principally on reputational data rather than objective outcome criteria, thus has the potential to be detrimental to students, programs, and the public. We feel it is important for specialties to develop consensus around measurable training outcomes and provide freely accessible metrics for candidate education.
Western Journal of Emergency Medicine | 2017
Korie L. Zink; Marcia Perry; Kory S. London; Olivia Floto; Benjamin Bassin; John C. Burkhardt; Sally A. Santen
Introduction As patients become increasingly involved in their medical care, physician-patient communication gains importance. A previous study showed that physician self-disclosure (SD) of personal information by primary care providers decreased patient ratings of the providers’ communication skills. Objective The objective of this study was to explore the incidence and impact of emergency department (ED) provider self-disclosure on patients’ ratings of provider communication skills. Methods A survey was administered to 520 adult patients or parents of pediatric patients in a large tertiary care ED during the summer of 2014. The instrument asked patients whether the provider self-disclosed and subsequently asked patients to rate the providers’ communication skills. We compared patients’ ratings of communication measures between encounters in which self-disclosure occurred and those in which it did not. Results Patients reported provider SD in 18.9% of interactions. Provider SD was associated with more positive patient perception of provider communication skills (p<0.05), more positive ratings of provider rapport (p<0.05), and higher satisfaction with provider communication (p<0.05). Patients who noted SD scored their providers’ communication skills as “excellent” more often (63.4%) than patients without self-disclosure (47.1%). Patients reported that they would like to hear about their providers’ experiences with a similar chief complaint (64.4% of patients), their providers’ education (49%), family (33%), personal life (21%), or an injury/ailment unlike their own (18%). Patients responded that providers self-disclose to make patients comfortable/at ease and to build rapport. Conclusion Provider self-disclosure in the ED is common and is associated with higher ratings of provider communication, rapport, and patient satisfaction.
Academic Medicine | 2016
John C. Burkhardt; Stephen L. DesJardins; Carol A. Teener; Sally A. Santen
Purpose In higher education, enrollment management has been developed to accurately predict the likelihood of enrollment of admitted students. This allows evidence to dictate the number of interviews scheduled, offers of admission, and financial aid package distribution. The applicability of enrollment management techniques to medical education was tested through creation of a predictive enrollment model at the University of Michigan Medical School (U-M). Method U-M and American Medical College Application Service data (2006–2014) were combined to create a database including applicant demographics, academic application scores, institutional financial aid offers, and choice of school attended. Binomial and multinomial logistic regression models were estimated to study factors related to enrollment at the local institution versus elsewhere, and to groupings of competing peer institutions. A predictive analytic “dashboard” was created for practical use. Results Both models were significant at P < .001 and had similar predictive performance. In the binomial model, female sex, underrepresented minority status, grade point average, Medical College Admission Test score, admissions committee desirability score, and most individual financial aid offers were significant (P < .05). The significant covariates were similar in the multinomial model (excluding female sex), which provided separate likelihoods of students enrolling at different institutional types. Conclusions An enrollment-management-based approach would allow medical schools to better manage the number of students they admit and to target recruitment efforts to improve their likelihood of success. It also performs a key institutional research function for understanding failed recruitment of highly desirable candidates.
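The binomial enrollment model described in this abstract can be sketched in code. The snippet below is a minimal illustration, not the study's actual model: the covariates, synthetic data, and effect sizes are hypothetical stand-ins, and it fits a logistic regression to predict enrollment probability for a new admit, as a "dashboard" might.

```python
# Illustrative sketch of a binomial enrollment-prediction model.
# All data here are simulated; covariate names mirror the abstract's
# predictors but the coefficients are invented for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Synthetic applicant covariates (stand-ins for the study's predictors)
gpa = rng.normal(3.7, 0.2, n)           # undergraduate GPA
mcat = rng.normal(515, 5, n)            # MCAT score
aid_offer = rng.integers(0, 2, n)       # 1 = financial aid offered
desirability = rng.normal(0, 1, n)      # committee desirability score

# Simulate enrollment: aid raises the odds; higher scores lower them
# (stronger applicants tend to hold more competing offers)
logit = 0.8 * aid_offer - 0.3 * (mcat - 515) / 5 - 0.5 * desirability
p = 1 / (1 + np.exp(-logit))
enrolled = rng.binomial(1, p)

X = np.column_stack([gpa, mcat, aid_offer, desirability])
model = LogisticRegression().fit(X, enrolled)

# "Dashboard" use: predicted enrollment probability for one new admit
new_admit = np.array([[3.8, 518, 1, 0.5]])
prob = model.predict_proba(new_admit)[0, 1]
print(f"Predicted enrollment probability: {prob:.2f}")
```

The multinomial extension described in the abstract would swap the binary outcome for a categorical one (local institution vs. groupings of peer institutions), which the same estimator handles when the target has more than two classes.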
PLOS ONE | 2018
John C. Ray; Laura R. Hopson; William J. Peterson; Sally A. Santen; Sorabh Khandelwal; Fiona E. Gallahue; Melissa White; John C. Burkhardt
Background Relatively little is understood about which factors influence students’ choice of specialty and when learners ultimately make this decision. Objective The objective is to understand how the experiences of medical students relate to the timing of selection of Emergency Medicine (EM) as a specialty. Of specific interest was whether earlier and more positive specialty exposure affects the decision-making process of medical students. Methods A cross-sectional survey study of EM-bound fourth-year US medical students (MD and DO) was performed, exploring when and why students choose EM as their specialty. An electronic survey was distributed in March 2015 to all medical students who applied to an EM residency at 4 programs representing different geographical regions. Descriptive analyses and multinomial logistic regressions were performed. Results 793/1,372 (58%) responded. Over half had EM experience prior to medical school. When students selected EM varied: 13.9% prior to, 50.4% during, and 35.7% after their M3 year. Early exposure, presence of an EM residency program, previous employment in the ED, experience as a pre-hospital provider, and completion of an M3 EM clerkship were associated with earlier selection. Delayed exposure to EM was associated with later selection of EM. Conclusions Early exposure and prior life experiences were associated with choosing EM earlier in medical school. The third year was identified as the most common time for definitively choosing the specialty.
Academic Emergency Medicine | 2011
John C. Burkhardt; Terry Kowalenko; William J. Meurer
Journal of Leadership Studies | 2002
John C. Burkhardt